Max Lungarella (2007), Scholarpedia, 2(8):3104. doi:10.4249/scholarpedia.3104, revision #91198.
- Instead of trying to produce a programme to simulate the adult mind, why not rather try to produce one which simulates the child's? If this were then subjected to an appropriate course of education, one would obtain the adult brain [...] Our hope is that there is so little mechanism in the child brain that something like it can be easily programmed. The amount of work in the education we can assume, as a first approximation, to be much the same as for the human child. (Turing, 1950, p. 456)
Developmental robotics (also known as epigenetic robotics or ontogenetic robotics) is a highly interdisciplinary subfield of robotics in which ideas from artificial intelligence, developmental psychology, neuroscience, and dynamical systems theory play a pivotal role in motivating the research. The main goal of developmental robotics is to model the development of increasingly complex cognitive processes in natural and artificial systems and to understand how such processes emerge through physical and social interaction. Robots are typically employed as testing platforms for theoretical models of the emergence and development of action and cognition – the rationale being that if a model is instantiated in a system embedded in the real world, a great deal can be learned about its strengths and potential flaws. Unlike evolutionary robotics, which operates on phylogenetic time scales and populations of many individuals, developmental robotics capitalizes on “short” (ontogenetic) time scales and single individuals (or small groups of individuals).
Human intelligence is acquired through a prolonged period of maturation and growth during which a single fertilized egg first turns into an embryo, then grows into a newborn baby, and eventually becomes an adult individual that typically reproduces before growing old and dying. The processes underlying developmental change are inherently robust and flexible, as demonstrated by the amazing ability of biological organisms to devise adaptive strategies and solutions that cope with environmental change and guarantee their survival. Because evolution has selected development as the process through which to realize some of the highest known forms of intelligence, it is reasonable to assume that development is mechanistically crucial to emulating such intelligence in machines and other human-made artifacts.
History and Early Theorizations
The idea that development might be a good avenue to understand and construct cognition is not new. Already Alan Turing suggested that in order to build “intelligent machines” one might want to start by “simulating the child’s mind” (see epigraph). In the context of robotics, many of the tenets underlying developmental robotics can be traced back to at least three conceptual breakthroughs in research on intelligent systems: (a) embodied artificial intelligence (embodied AI), that is, the notion that intelligence (e.g. common sense) can only be the result of learned experience of a body living in the real world (e.g. Brooks et al., 1998; Pfeifer and Scheier, 1999; Pfeifer and Bongard, 2007); (b) synthetic neural modeling, i.e. a technique in which large-scale computer simulations are employed to analyze the interactions among the nervous system, the phenotype, and the environment of a designed organism as behavior develops (Edelman et al., 1992; Reeke et al., 1990); and (c) the notion of enaction according to which cognitive structures emerge from recurrent sensorimotor patterns that enable action to be perceptually guided (Varela et al., 1991). These three breakthroughs share the assumption that intelligence and intelligence-like processes might be best understood by studying the dynamical and reciprocal interaction across multiple time scales between brain and body of an agent, and its environment. Not surprisingly, many of the early theorizations of developmental robotics discussed the emergence and development of sensorimotor intelligence in the context of embodied systems (e.g. Ferrell and Kemp, 1996; Rutkowska, 1994; Sandini et al., 1997).
Aspects and Areas of Interest
Developmental robotics differs from traditional robotics and artificial intelligence in at least two crucial aspects. First, there is a strong emphasis on body structure and environment as causal elements in the emergence of organized behavior and cognition, which requires their explicit inclusion in models of the emergence and development of cognition (Asada et al., 2001; Blank et al., 2005; Lungarella et al., 2003; Weng et al., 2001; Zlatev and Balkenius, 2001). Although some researchers use simulated environments and computational models (Kuniyoshi and Sangawa, 2006; Mareschal et al., 2007; Westermann et al., 2006), more often developmental robots are embedded in the real world as physical analogues of real organisms (e.g. Arbib et al., 2007; Kozima and Nakagawa, 2007; Metta and Fitzpatrick, 2003; Pfeifer et al., 2007; Sporns, 2007; for examples, see Figs. 1 and 2). Second, the idea is to realize artificial cognitive systems not by simply programming them (e.g. to solve a specific task), but rather by initiating and maintaining a developmental process during which the systems interact with their physical environments (i.e. through their bodies, tools, or other artifacts), as well as with their social environments (i.e. with people, other robots, or simulated agents) – cognition, after all, is the result of a process of self-organization (the spontaneous emergence of order) and co-development between a developing organism and its surrounding environment. Andy Clark uses the term “cognitive incrementalism” to denote this bootstrapping of intelligence, the rationale being that throughout life “you do indeed get full-blown, human cognition by gradually adding bells and whistles to basic strategies of relating to the present at hand” (Clark, 2001). In other words, incrementalism designates the process of starting with a minimal set of functions and building increasingly more functionality in a step-by-step manner on top of structures that are already present in the system.
The spectrum of developmental robotics research can be roughly segmented into four primary areas of interest. The borders of these categories are not as clearly defined as this classification may suggest and instances may exist that fall into two or more of these categories. We do hope, however, that the suggested grouping provides at least some order in the large spectrum of issues addressed by developmental roboticists.
- Socially oriented interaction: This category comprises research on robots that communicate or learn particular skills via social interaction with humans or with other robots. Examples include research on imitation learning, communication and language acquisition, attention sharing, turn-taking behavior, and social regulation (e.g. Breazeal and Scassellati, 2002; Dautenhahn, 2007; Fong et al., 2003; Steels, 2006).
- Non-social interaction: These studies are characterized by a direct and strong coupling between sensor and motor processes and the local environment (e.g. inanimate objects), but do not involve any interaction with other robots or humans. Examples are visually-guided grasping and manipulation, tool-use, perceptual categorization, and navigation (e.g. Fitzpatrick et al., 2006; Metta and Fitzpatrick, 2003; Nabeshima et al., 2006).
- Agent-centered sensorimotor control: In these studies, the goal is to investigate the exploration of bodily capabilities, changes of morphology (e.g. perceptual acuity, or strength of the effectors) and their effects on motor skill acquisition, self-supervised learning schemes not specifically linked to any functional goal, and models of emotion. Examples include self-exploration, categorization of motor patterns, motor babbling, and learning to walk or crawl (e.g. Demiris and Meltzoff, 2007; Kuniyoshi and Sangawa, 2006; Lungarella and Berthouze, 2002).
- Mechanisms and principles: This category embraces research on mechanisms or processes thought to increase the adaptivity of a behaving system. Many examples exist: developmental and neural plasticity, mirror neurons, motivation, freezing and freeing of degrees of freedom, and synergies; research into the characterization of complexity and emergence, as well as the effects of adaptation and growth; practical work on body construction or development (e.g. Arbib et al., 2007; Blank et al., 2005; Lungarella and Sporns, 2006; Oudeyer et al., 2007; Pfeifer et al., 2007). Further work in this area of interest relates to design principles for developmental systems.
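The sensorimotor exploration strategies mentioned above (e.g., motor babbling followed by exploitation of the self-generated experience) can be illustrated with a minimal sketch. The two-link planar arm, the sample count, and the nearest-neighbour inverse model below are illustrative assumptions for exposition only, not a model drawn from any of the cited studies:

```python
# Minimal sketch of "motor babbling": a simulated 2-joint planar arm issues
# random joint commands, records the resulting hand positions, and later
# reuses this self-generated experience to reach a target via a
# nearest-neighbour inverse model. All names and parameters are illustrative.
import math
import random

L1, L2 = 1.0, 0.8  # link lengths (arbitrary units)

def forward(theta1, theta2):
    """Hand position of a 2-joint planar arm (simple forward kinematics)."""
    x = L1 * math.cos(theta1) + L2 * math.cos(theta1 + theta2)
    y = L1 * math.sin(theta1) + L2 * math.sin(theta1 + theta2)
    return x, y

# --- Babbling phase: explore random motor commands, store the outcomes ---
random.seed(0)
experience = []  # list of ((theta1, theta2), (x, y)) pairs
for _ in range(5000):
    cmd = (random.uniform(0, math.pi), random.uniform(-math.pi, math.pi))
    experience.append((cmd, forward(*cmd)))

# --- Exploitation phase: reach a target using the self-acquired mapping ---
def reach(target):
    """Return the babbled command whose recorded outcome is closest to target."""
    return min(experience,
               key=lambda e: (e[1][0] - target[0])**2
                           + (e[1][1] - target[1])**2)[0]

target = (1.2, 0.5)          # a point inside the arm's workspace
cmd = reach(target)
x, y = forward(*cmd)
error = math.hypot(x - target[0], y - target[1])
print(f"command={cmd}, reached=({x:.2f}, {y:.2f}), residual error={error:.3f}")
```

The point of the sketch is that no inverse kinematics is programmed in: the mapping from goals to commands is recovered entirely from the robot's own exploratory activity, which is the sense in which babbling bootstraps later goal-directed skill.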
In contrast to traditional subjects such as physics or mathematics, which rest on basic axioms, the fundamental (“universal”) principles governing the dynamics of developmental systems are unknown. Could there be laws governing developmental systems? Could there be a theory? Although various attempts have been initiated (Asada et al., 2001; Brooks et al., 1998; Metta, 2000; Weng et al., 2001), it is fair to say that to date no such theory has emerged (Lungarella et al., 2003). One attractive step towards such a theory is to identify a set of candidate design principles. Such principles can be abstracted from biological systems (e.g. they can be revealed by observations of human and animal development), and this abstraction can take place at several levels, ranging from a “faithful” replication of biological mechanisms to a rather generic implementation of biological principles that leaves room for dynamics which are intrinsic to artifacts but not found in natural systems. It is generally believed that such a principled approach is preferable for constructing intelligent autonomous systems with desired properties because it captures design ideas and heuristics in a concise and pertinent way, avoiding blind trial-and-error (for additional information and principles refer to: Lungarella, 2004; Pfeifer and Bongard, 2007; Pfeifer et al., 2007; Prince et al., 2005; Smith and Breazeal, 2007; Smith and Gasser, 2005; Sporns, 2007; Stoytchev, 2006).
The further success of developmental robotics will depend on the extent to which theorists and experimentalists will be able to identify universal principles spanning the multiple levels at which developmental systems operate. In what follows, we briefly indicate some of the “hot” issues that will need to be tackled in the future.
- Semiotics: It is necessary to address the issue of how developmental robots (and embodied agents in general) can give meaning to symbols and construct semiotic systems. A promising approach – explored under the label of “semiotic dynamics” – is that such semiotic systems and the associated information structure are not static, but are continuously invented and negotiated by groups of people or agents which use them for communication and information organization (Steels, 2006).
- Core knowledge: An organism cannot develop without some built-in ability. If all abilities are built in, however, the organism does not develop either. It will therefore be important to understand with what sort of core knowledge and explorative behaviors a developmental system has to be endowed so that it can begin developing novel skills on its own. One of the greatest challenges will be to identify what those core abilities are and how they interact during development in building basic skills (e.g. RobotCub Roadmap, 2007; Spelke, 2000).
- Core motives: It is necessary to conduct research on general capacities such as creativity, curiosity, motivation, action selection, and prediction (i.e. the ability to foresee the consequences of actions). Ideally, no tasks should be pre-specified to the robot, which should only be provided with an internal abstract reward function, some core knowledge, and a set of basic motivational (or emotional) "drives" that could push it to continuously master new know-how and skills (Breazeal, 2003; Oudeyer et al., 2007; Lewis, 2000; RobotCub Roadmap, 2007; Velasquez, 2007).
- Self-exploration: Another important challenge is the one of continuous self-programming and self-modeling (e.g. Bongard et al., 2006). Control theory assumes that target values and statuses are initially provided by the system’s designer, whereas in biology, such targets are created and revised continuously by the system itself. Such spontaneous “self-determined evolution” or “autonomous development” is beyond the scope of current control theory and needs to be tackled in future research.
- Active learning: In a natural setting, no teacher can possibly provide a detailed learning signal and sufficient training data. Mechanisms will have to be created for the developing agent to collect relevant learning material on its own and for learning to take place in an “ecological context” (i.e. with respect to the environment). One significant future avenue will be to endow systems with the possibility to recognize progressively longer chains of cause and effect (Chater et al., 2006).
- Growth: As mentioned in the introduction, intelligence is acquired through a process of self-assembly, growth, and maturation. It will be important to study how physical growth, change of shape and body composition, as well as material properties of sensors and actuators affect and guide the emergence and development of cognition and action. This will allow connecting developmental robotics to computational developmental biology (Gomez and Eggenberger, 2007; Kumar and Bentley, 2003).
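Several of the open issues above (core motives, active learning) revolve around intrinsically generated reward. One concrete candidate, in the spirit of the intrinsic-motivation systems cited above (Oudeyer et al., 2007), is to reward learning progress: the recent decrease in prediction error. The sketch below is a hedged illustration of that idea only; the class name, window size, and toy "activities" are assumptions introduced here for exposition:

```python
# Hedged sketch of a "learning progress" intrinsic reward: the reward is the
# drop in mean prediction error between two sliding windows, so it is high
# while a skill is being acquired and vanishes both for mastered and for
# unlearnable activities. All names and parameters are illustrative.

class LearningProgressReward:
    """Reward = decrease in mean prediction error between two recent windows."""
    def __init__(self, window=10):
        self.window = window
        self.errors = []

    def update(self, error):
        self.errors.append(error)
        w = self.window
        if len(self.errors) < 2 * w:
            return 0.0  # not enough history yet
        older = sum(self.errors[-2 * w:-w]) / w
        newer = sum(self.errors[-w:]) / w
        return older - newer

# Learnable activity: a simple predictor converges on a target value,
# so its error shrinks and the progress reward is positive while learning.
lp = LearningProgressReward()
predictor, target, lr = 0.0, 1.0, 0.1
learnable_rewards = []
for _ in range(200):
    error = abs(target - predictor)
    learnable_rewards.append(lp.update(error))
    predictor += lr * (target - predictor)  # gradient-like update

# Unlearnable activity: the error never improves, so progress stays at zero
# and a progress-driven agent would abandon it rather than fixate on it.
lp2 = LearningProgressReward()
unlearnable_rewards = [lp2.update(1.0) for _ in range(200)]

print(max(learnable_rewards) > 0)        # learning yields positive reward
print(max(unlearnable_rewards) == 0.0)   # no improvement, no reward
print(abs(learnable_rewards[-1]) < 1e-4) # reward fades once mastered
```

A signal of this kind addresses the "no pre-specified task" desideratum above: the agent is not told what to learn, yet is pushed towards activities at the frontier of its competence and away from both trivial and hopeless ones.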
The study of intelligent systems raises many fundamental, but also very difficult questions. Can machines think or feel? Can they autonomously acquire novel skills? Can the interaction of body, brain, and environment be exploited to discover novel and creative solutions to problems? Developmental robotics may be an approach to address such long-standing issues. The field is currently bubbling with activity. Its popularity is in part due to recent technological advances in robotics, which have allowed the design of humanoid robots whose “kinematic complexity” is close to that of humans (Figs. 1 and 2). The success of the infant field of developmental robotics will ultimately depend on whether it will be possible to crystallize its central assumptions into a theory. While much additional work is surely needed to arrive at or even approach a general theory of development, the beginnings of a new synthesis are on the horizon. Perhaps, finally, we will come closer to understanding and building (growing) human-like intelligence. Exciting times are ahead of us.
- Arbib, M., Metta, G. and van der Smagt, P. (2007). Neurorobotics: from vision to action. In: B.Siciliano and O.Khatib (eds.) Springer Handbook of Robotics (Chapter 64).
- Asada, M., MacDorman, K.F., Ishiguro, H. and Kuniyoshi, Y. (2001). Cognitive developmental robotics as a new paradigm for the design of humanoid robots. Robotics and Autonomous Systems, 37:185–193.
- Bjorklund, D.F. and Green, B.L. (1992). The adaptive nature of cognitive immaturity. American Psychologist, 47:46–54.
- Blank, D., Kumar, D., Meeden, L. and Marshall, J. (2005). Bringing up robot: fundamental mechanisms for creating a self-motivated, self-organizing architecture. Cybernetics and Systems, 36:125–150.
- Bongard, J.C., Zykov, V. and Lipson, H. (2006). Resilient machines through continuous self-modeling. Science 314:1118-1121.
- Breazeal, C. and Scassellati, B. (2002). Robots that imitate humans. Trends in Cognitive Sciences, 6:481–487.
- Breazeal, C. (2003). Emotion and sociable humanoid robots. Int. J. of Human-Computer Studies, 59:119–155.
- Brooks, R.A., Breazeal, C., Irie, R., Kemp, C.C., Marjanovic, M., Scassellati, B. and Williamson, M.M. (1998). Alternative essences of intelligence. In: Proc. of 15th Nat. Conf. on Artificial Intelligence, 961–978.
- Chater, N., Tenenbaum, J.B and Yuille, A. (2006). Probabilistic models of cognition. Trends in Cognitive Sciences (Special Issue), 10(7):287–344.
- Clark, A. (2001). Mindware: An Introduction to the Philosophy of Cognitive Science. Oxford University Press: Oxford.
- Dautenhahn, K. (2007). Socially intelligent robots: dimensions of human-robot interaction. Phil. Trans. Roy. Soc. B: Biol. Sci., 362(1480):679–704.
- Demiris, Y. and Meltzoff, A. (2007). The robot in the crib: a developmental analysis of imitation skills in infants and robots. Infant and Child Development.
- Edelman, G.M., Reeke Jr, G.N., Gall, W.E., Tononi, G., Williams, D. and Sporns, O. (1992). Synthetic neural modeling applied to a real-world artifact. Proc. Natl. Acad. Sci. USA, 89:7267–7271.
- Elman, J. L. (1993). Learning and development in neural networks: The importance of starting small. Cognition, 48:71–99.
- Ferrell, C.B. and Kemp, C. (1996). An ontogenetic perspective on scaling sensorimotor intelligence. In: Embodied Cognition and Action: Papers from the 1996 AAAI Fall Symposium.
- Fitzpatrick, P., Needham, A., Natale, L. and Metta, G. (2006). Shared challenges in object perception for robots and infants. Infant and Child Development.
- Fong, T., Nourbakhsh, I. and Dautenhahn, K. (2003). A survey of socially interactive robots. Robotics and Autonomous Systems, 42(3-4):143–166.
- Gómez, G., Lungarella, M., Eggenberger Hotz, P., Matsushita, K., and Pfeifer, R. (2004). Simulating development in a real robot: on the concurrent increase of sensory, motor, and neural complexity. Proc. of 4th Int. Workshop on Epigenetic Robotics, pp. 119–122.
- Gómez, G. and Eggenberger, P. (2007). Evolutionary synthesis of grasping through self-exploratory movements of a robotic hand. Proc. IEEE Congress on Evolutionary Computation (accepted for publication).
- Kozima, H. and Nakagawa, C. (2007). Interactive robots as facilitators of children’s social development. In: A. Lazinica (eds.) Mobile Robots: Towards New Applications, Vienna: Advanced Robotic Systems, pp. 269-286.
- Kumar, S. and Bentley, P. (2003). On Growth, Form and Computers. Elsevier Academic Press: San Diego, CA.
- Kuniyoshi, Y. and Sangawa, S. (2006). Early motor development from partially ordered neural-body dynamics: experiments with a cortico-spinal musculo-skeletal model. Biol. Cybernetics, 95:589-605.
- Hendriks-Jansen, H. (1996). Catching Ourselves in the Act. MIT Press: Cambridge, MA. A Bradford Book (Chapter 15).
- Lewis, M.D. and Granic, I. (eds.) (2000). Emotion, Development, and Self-Organization – Dynamic Systems Approaches to Emotional Development. Cambridge University Press: New York.
- Llinas, R. (2001). I of the Vortex: From Neurons to Self. MIT Press: Cambridge, MA.
- Lungarella, M. and Berthouze, L. (2002). On the interplay between morphological, neural, and environmental dynamics: a robotic case-study. Adaptive Behavior, 10:223-241.
- Lungarella, M. (2004). Exploring Principles Towards a Developmental Theory of Embodied Artificial Intelligence. Unpublished PhD Thesis. University of Zurich, Switzerland.
- Lungarella, M., Metta, G., Pfeifer, R. and Sandini, G. (2003). Developmental robotics: a survey. Connection Science, 15:151–190.
- Lungarella, M. and Sporns, O. (2006). Mapping information flows in sensorimotor networks. PLoS Computational Biology, 2(10):e144.
- Mareschal, D., Johnson, M.H., Sirois, S., Spratling, M.W., Thomas, M.S.C., and Westermann, G. (2007). Neuroconstructivism: How the Brain Constructs Cognition, Vol.1. Oxford University Press: Oxford, UK.
- Metta, G. (2000). Babybot: A Study on Sensorimotor Development. Unpublished PhD Thesis, University of Genova, Genova, Italy.
- Metta, G. and Fitzpatrick, P. (2003). Early integration of vision and manipulation. Adaptive Behavior, 11(2):109–128.
- Nabeshima, C., Lungarella, M. and Kuniyoshi, Y. (2006). Adaptive body schema for robotic tool-use. Advanced Robotics, 20(10):1105–1126.
- Oudeyer, P.-Y., Kaplan, F. and Hafner, V.V. (2007). Intrinsic motivation systems for autonomous mental development. IEEE Trans. Evolutionary Computation, 11(1): 265-286.
- Pfeifer, R. and Scheier, C. (1999). Understanding Intelligence. MIT Press: Cambridge, MA.
- Pfeifer, R. and Bongard, J.C. (2007). How the Body Shapes the Way we Think. MIT Press: Cambridge, MA.
- Pfeifer, R., Lungarella, M. and Iida, F. (2007). Self-organization, embodiment, and biologically inspired robotics. Science, 318:1088-1093.
- Prince, C.G., Helder, N.A. and Hollich, G.J. (2005). Ongoing emergence: a core concept in epigenetic robotics. Proc. 5th Int. Workshop on Epigenetic Robotics: Modeling Cognitive Development in Robotic Systems.
- Reeke Jr, G.N., Sporns, O. and Edelman, G.M. (1990). Synthetic neural modeling: The “Darwin” series of recognition automata. Proc. IEEE, 78:1498–1530.
- RobotCub Roadmap (2007). (last accessed July 29, 2007).
- Rutkowska, J.C. (1994). Scaling up sensorimotor systems: constraints from human infancy. Adaptive Behavior, 2(4):349-373.
- Sandini, G., Metta, G., Konczak, J. (1997). Human sensorimotor development and artificial systems. Proc. of Int. Symp. on Artificial Intelligence, Robotics, and Intellectual Human Activity Support for Nuclear Applications, 303-314.
- Smith, L.B. and Gasser, M. (2005). The development of embodied cognition: Six lessons from babies. Artiﬁcial Life, 11:13–30.
- Smith, L.B. and Breazeal, C. (2007). The dynamic lift of developmental process. Developmental Science, 10(1):61–68.
- Spelke, E. (2000). Core knowledge. American Psychologist, 55:1233–1243.
- Sporns, O. (2007). What neuro-robotic models can teach us about neural and cognitive development. In: D.Mareschal, S.Sirois, G.Westermann and M.H.Johnson (eds.) Neuroconstructivism: Perspectives and Prospects, Vol.2, p.179–204.
- Steels, L. (1991). Towards a theory of emergent functionality. Proc. of 1st Int. Conf. on Simulation of Adaptive Behavior, pp. 451–461.
- Steels, L. (2003). Evolving grounded communication for robots. Trends in Cognitive Sciences, 7(7):308–312.
- Steels, L. (2006). Semiotic dynamics for embodied agents. IEEE Intelligent Systems, 21(3):32–38.
- Stoytchev, A. (2006). Five basic principles of developmental robotics. NIPS Workshop on Grounding, Perception, Knowledge, and Cognition.
- Thelen, E. and Smith, L. (1994). A Dynamic Systems Approach to Cognition and Action. MIT Press: Cambridge, MA.
- Turkewitz, G. and Kenny, P. (1982). Limitation on input as a basis for neural organization and perceptual development: A preliminary theoretical statement. Developmental Psychobiology, 15:357–368.
- Turing, A.M. (1950). Computing machinery and intelligence. Mind, LIX(236):433–460.
- Varela, F.J., Thompson, E. and Rosch, E. (1991). The Embodied Mind. MIT Press: Cambridge, MA.
- Velasquez, J.D. (2007). When Robots Weep: A Computational Approach to Affective Learning. PhD Thesis, Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, Boston, USA.
- Weng, J.J., McClelland, J., Pentland, A., Sporns, O., Stockman, I., Sur, M. and Thelen, E. (2001). Autonomous mental development by robots and animals. Science, 291:599–600.
- Westermann, G., Sirois, S., Shultz, T.R. and Mareschal, D. (2006). Modeling developmental cognitive neuroscience. Trends in Cognitive Sciences, 10(5):227–232.
- Zlatev, J. and Balkenius, C. (2001). Introduction: why epigenetic robotics? In: C. Balkenius, J. Zlatev, H. Kozima, K. Dautenhahn and C. Breazeal (eds.) Proc. of 1st Int. Workshop on Epigenetic Robotics, pp.1–4.
- Valentino Braitenberg (2007) Brain. Scholarpedia, 2(11):2918.
- Olaf Sporns (2007) Complexity. Scholarpedia, 2(10):1623.
- James Meiss (2007) Dynamical systems. Scholarpedia, 2(2):1629.
- Mark Aronoff (2007) Language. Scholarpedia, 2(5):3175.
- Jean-Marc Fellous (2007) Models of emotion. Scholarpedia, 2(11):1453.
- Wolfram Schultz (2007) Reward. Scholarpedia, 2(3):1652.
- Stevan Harnad (2007) Symbol grounding problem. Scholarpedia, 2(7):2373.
Evolutionary robotics, Social robotics, Machine learning, Adaptive Behavior, Biologically inspired robotics, Computational intelligence, Embodied intelligence, Symbol grounding problem, Artificial intelligence