Fernando Silva et al. (2016), Scholarpedia, 11(7):33333. doi:10.4249/scholarpedia.33333, revision #168217
Evolutionary robotics is a field of research that employs evolutionary computation to generate robots that adapt to their environment through a process analogous to natural evolution. The generation and optimisation of robots are based on the evolutionary principles of blind variation and survival of the fittest, as embodied in the neo-Darwinian synthesis (Gould, 2002).
Evolutionary robotics is typically applied to create control systems for robots. Although less frequent, evolutionary robotics can also be applied to generate robot body plans, and to coevolve control systems and body plans simultaneously (Lipson and Pollack, 2000). In this respect, evolutionary robotics differs from the Artificial Life domain in its use of physical robots. In particular, evolutionary robotics puts a strong emphasis on embodiment and situatedness, and on the close interaction of brain, body, and environment, which is crucial for the emergence of intelligent, adaptive behaviour and cognitive processes (e.g. Clark, 1997; Chiel & Beer, 1997; Nolfi & Floreano, 2002).
Evolutionary robotics is organised along two axes of research: one concerned with cognitive science (Harvey et al., 2005) and biology (Floreano and Keller, 2010); the other focused on using evolutionary robotics techniques for engineering purposes (Silva et al., 2016), with the long-term goal of obtaining a process capable of automatically designing and maintaining an efficient robotic system. Evolutionary robotics is a highly general approach, as it enables the synthesis of control or body plans given only a specification of the task, and is not tied to specific evolutionary algorithms, control systems, or types of robots (Bongard et al., 2006; Cully et al., 2015).
Similarly to more traditional evolutionary computation approaches, evolutionary robotics techniques operate with a population of candidate solutions or genomes. Each genome in the population encodes a number of parameters of one or more robots’ body plan or control system, the phenotype (see Fig. 1). If the genome describes a body plan, genome parameters can determine whether body parts should be added to the body plan, or configure parts of the body plan, such as the angle and range for specific joints (Bongard, 2011). If the genome encodes an artificial neural network-based controller, the synaptic weights can be represented as a real-valued vector at the genome level.
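As a concrete illustration, a direct genome-to-phenotype mapping for a neural network-based controller can be as simple as a flat vector of synaptic weights subjected to Gaussian mutation. The sketch below is minimal and illustrative; the network size and mutation parameters are arbitrary choices for the example, not values from the literature.

```python
import random

def make_genome(n_weights, init_range=1.0):
    """Genome: a real-valued vector directly encoding synaptic weights."""
    return [random.uniform(-init_range, init_range) for _ in range(n_weights)]

def mutate(genome, sigma=0.1, rate=0.2):
    """Blind variation: perturb each weight with probability `rate`."""
    return [w + random.gauss(0.0, sigma) if random.random() < rate else w
            for w in genome]

parent = make_genome(n_weights=12)   # e.g. a small feed-forward controller
child = mutate(parent)               # offspring for the next generation
```

In a full evolutionary robotics run, each such genome would be decoded into a controller, evaluated on the robot or in simulation, and selected according to fitness.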
Evolutionary robotics techniques can be applied either offline, in simulation, or online, that is, on the physical robots while they operate in the task environment. In offline evolution, controllers are evolved in simulation for a certain number of generations or until a certain performance criterion is met, and then deployed on real robots. In online evolution, on the other hand, an evolutionary algorithm is executed on the robots themselves as they perform their tasks. Evolution is most commonly applied offline for a number of practical and strategic reasons. Firstly, offline evolution is typically less time consuming than online evolution, although suitable simulation environments have to be developed beforehand. Secondly, offline evolution allows the researcher to concentrate on developing the body plan or control method without having to address issues inherently associated with physical robots, such as wear and tear, potential damage, calibration drift, and so on.
Guiding the search in evolutionary robotics
The experimenter applying evolutionary robotics techniques often relies on a self-organisation process in which evaluation and optimisation are holistic, thereby eliminating the need for manual and detailed specification of the desired body plan or control system.
Similarly to other evolutionary methods, a traditional evolutionary robotics process requires only sparse feedback, given by a measure of overall performance, that is, a fitness score. The fitness function is therefore at the heart of most evolutionary robotics processes, and rewards improvement towards a task-dependent objective, a metaphor for the pressure to adapt in nature. According to the three-dimensional fitness space framework proposed by Floreano and Urzelai (2000), a fitness function can be classified along the following three axes:
- Functional or behavioural, depending on whether the fitness function measures the components involved in the generation of the behaviour, such as the rotational speed of a robot’s wheel in a navigation task, or the effects of the behaviour, such as the distance covered by a robot in a given amount of time.
- External or internal, depending on whether the fitness computation is based on the measurement of variables that are only available to an external observer with access to precise information, such as the number of clusters formed by the robots in an aggregation task, or on information directly available to the robot through onboard sensor readings.
- Explicit or implicit, depending on the number and nature of constraints explicitly imposed on the working principles of solutions. The more components the fitness function includes, the more explicit it is and the more constrained the resulting behaviour.
As discussed in Nolfi and Floreano (2000), a behavioural-implicit-internal approach may be more adequate for designing adaptive robots, because it imposes fewer restrictions on the evolutionary process, which in turn can lead to more efficient and effective behaviour adaptation.
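The functional/behavioural axis can be made concrete with two toy fitness functions for a wheeled-robot navigation trial. These are illustrative sketches; the trial data structures (lists of wheel-speed pairs and of (x, y) positions) are assumptions made here for the example.

```python
def functional_fitness(wheel_speeds):
    """Functional: measures components that generate the behaviour,
    here the mean rotational speed of the two wheels over the trial."""
    return sum(l + r for l, r in wheel_speeds) / (2 * len(wheel_speeds))

def behavioural_fitness(positions):
    """Behavioural: measures the effect of the behaviour,
    here the total distance covered along the trajectory."""
    return sum(((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
               for (x0, y0), (x1, y1) in zip(positions, positions[1:]))
```

Note how the behavioural variant says nothing about how the robot moves, only about what its movement achieves, which leaves evolution freer to discover its own strategies.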
A class of methods has recently emerged in evolutionary robotics in which evolution is driven by novelty or behavioural diversity instead of a typical fitness function, see Fig. 2. That is, candidate solutions are rewarded for displaying behaviours that differ from previously evolved behaviours, rather than according to a predefined performance objective. This open-ended search process was initially formalised in the novelty search algorithm (Lehman and Stanley, 2011), and triggered a significant body of work in evolutionary robotics, including new perspectives on how to potentially avoid premature convergence-related issues (Lehman and Stanley, 2011; Lehman et al., 2013; Mouret and Doncieux, 2012).
As an example, consider the evolution of a control system for a maze-navigating robot. The fitness function can be defined based on how close the robot gets to the goal, which intuitively describes the task to solve. However, mazes with obstacles that prevent a direct route may cause the fitness function to deceive evolution. If the candidate solution is instead characterised in behavioural terms, such as the robot’s final position in the maze (see Fig. 3), searching for novel behaviours has the potential to avoid the deception associated with fitness functions in a number of tasks.
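For the maze example, the novelty of a candidate can be scored as its mean distance to the k nearest behaviours already seen, as in novelty search. The sketch below uses final (x, y) positions as the behaviour characterisation and simplifies archive management (in practice, only sufficiently novel behaviours are added to the archive).

```python
def novelty(behaviour, archive, k=3):
    """Mean Euclidean distance from `behaviour` (a final (x, y) position)
    to its k nearest neighbours in the archive of past behaviours."""
    if not archive:
        return float("inf")   # nothing seen yet: maximally novel
    dists = sorted(((behaviour[0] - bx) ** 2 + (behaviour[1] - by) ** 2) ** 0.5
                   for bx, by in archive)
    k = min(k, len(dists))
    return sum(dists[:k]) / k

archive = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]  # positions reached so far
score = novelty((5.0, 5.0), archive)            # far from everything seen
```

Candidates ending far from previously reached positions score highly, regardless of their distance to the goal, which is what lets the search escape deceptive dead ends.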
Combining novelty and fitness
In recent years, numerous approaches have been introduced to combine the respective advantages of fitness-based evolution and of novelty-based evolution, to obtain more effective optimisation procedures (Cuccu and Gomez, 2011; Lehman and Stanley, 2010; Mouret and Doncieux, 2009; Mouret and Doncieux, 2012).
Interestingly, recent contributions have introduced a new class of approaches called Quality Diversity algorithms (Lehman and Stanley, 2011b; Cully and Mouret, 2013; Pugh et al., 2015) or Illumination algorithms (Mouret and Clune, 2015); for a comprehensive review of Quality Diversity algorithms, see Pugh et al. (2016). The goal of Quality Diversity algorithms is to discover a wide range of diverse candidate solutions, where each candidate solution is also optimised for performance, that is, with respect to a measure of quality. Quality Diversity algorithms originated with the novelty search with local competition algorithm (NSLC; Lehman and Stanley, 2011b), a multiobjective formulation of novelty and fitness in which the fitness objective is changed from a global measure to one relative to a local neighbourhood of behaviourally similar individuals. The working principles of NSLC have inspired different algorithms, one of the most widely adopted being the MAP-Elites algorithm (Mouret and Clune, 2015). Given a behaviour characterisation with N dimensions, MAP-Elites first discretises the behaviour space into bins according to a user-defined granularity level, and then tries to find the highest-performing individual for each cell in the discretised space in order to construct a behaviour-performance map.
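A minimal MAP-Elites loop over a one-dimensional behaviour space can be sketched as follows. This is a toy stand-in: the genome is a single real number and the evaluation functions are placeholders, where a real application would plug in a robot simulation and a multi-dimensional behaviour characterisation.

```python
import random

def map_elites(evaluate, behaviour, random_genome, mutate,
               bins=10, iterations=1000):
    """Keep, for each discretised behaviour bin, the best genome found.
    Behaviour descriptors are assumed to lie in [0, 1]."""
    archive = {}   # bin index -> (fitness, genome)
    for i in range(iterations):
        if archive and i >= iterations // 10:
            # select a random elite from the map and mutate it
            genome = mutate(random.choice(list(archive.values()))[1])
        else:
            genome = random_genome()   # bootstrap with random genomes
        fit, desc = evaluate(genome), behaviour(genome)
        cell = min(int(desc * bins), bins - 1)
        if cell not in archive or fit > archive[cell][0]:
            archive[cell] = (fit, genome)
    return archive
```

The returned archive is the behaviour-performance map itself: one elite per cell, rather than a single best individual, which is the defining output of Quality Diversity algorithms.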
Early pioneering contributions
Evolutionary algorithms have been the subject of significant progress since Turing (1950) introduced the concept of evolutionary search in his early discussions on the potential of machine intelligence. The possibility of evolving robots was later raised by the neurophysiologist Valentino Braitenberg (Braitenberg, 1984) in his thought experiment on the creation of new robot designs. Almost a decade later, the field of evolutionary robotics began to develop. Pioneering groups at the University of Southern California, US (Lewis et al., 1992), at the Swiss Federal Institute of Technology in Lausanne, Switzerland (Floreano and Mondada, 1994, 1996), at Sussex University in the UK (Harvey et al., 1994), and at the Italian National Research Council (Nolfi et al., 1994) laid the foundation for a number of important studies that followed.
Lewis et al. (1992) evolved neural network controllers for a real six-legged robot by synthesising the controllers on a workstation and downloading each controller to the real robot for performance evaluation. Each evaluation required a human observer to monitor and score the performance of the real robot. Complementarily, Floreano and Mondada (1994, 1996) experimented with evolution of controllers directly in real robotic hardware, including navigation and homing behaviours for a Khepera robot. Given the low computational power of the Khepera robot, the actual evolutionary computation was performed on a workstation. The synthesis of successful controllers required up to ten days of continuous evolution. Harvey et al. (1994) evolved controllers for a real Gantry robot, and demonstrated principled approaches to the evolution of visually guided robot behaviour (e.g. navigation and shape discrimination tasks), namely the concurrent evolution of sensorimotor features and control systems. Finally, Nolfi et al. (1994) proposed evolving controllers in simulation and continuing evolution for a few generations on real hardware if a decrease in performance was observed when the controllers were transferred.
The pioneering contributions facilitated progress and cross-fertilisation of ideas between different robotics domains. For example, Brooks (1992), a pioneer in behaviour-based robotics, acknowledged the potential of evolutionary robotics techniques for control synthesis, and argued for different research avenues that are still being pursued, such as making evolution aware of regularities in morphological structure (e.g. symmetric sensor placement) and enabling evolution to mirror them in the control structure (Silva et al., 2016). In summary, early pioneering contributions highlighted the potential of evolutionary robotics and gave rise to a number of different research directions, which we summarise below.
Main research directions
From the early years of evolutionary robotics, wheeled robots have been widely used in the field. Evolutionary robotics has nevertheless enabled synthesis of control for robots with varying morphologies, such as legged robots (Gong et al., 2010). Legged robots have significant potential because they can access types of terrain unsuitable for wheeled robots. In this respect, evolved gaits have outperformed engineered gaits in different situations (Yosinski et al., 2011), and have even been included in commercial products such as the first version of Sony’s AIBO robot (Hornby et al., 2005).
One of the main research topics in legged robots is the ability to recover from damage to one or more legs. Bongard et al. (2006) and Bongard (2009) introduced a damage recovery approach based on three evolutionary algorithms, implemented as an onboard combination of simulation and evolution. The first algorithm optimises a population of physical simulators in order to more accurately model the real environment. The second algorithm then creates exploratory behaviours for the real robot to execute so as to collect new training data for the first algorithm. Finally, the third algorithm uses the best simulator to evolve locomotion behaviours for a real quadruped robot. Besides increasing the number of successful controllers, the combination of the three evolutionary algorithms yields an important advantage: it enables the robot to recover from unanticipated situations such as physical damage to one of its legs (Bongard et al., 2006). The working principles of Bongard’s approach have fostered the development of novel approaches such as the intelligent trial-and-error algorithm (Cully et al., 2015), in which a detailed map of high-performing behaviours is constructed in simulation via the Quality Diversity algorithm MAP-Elites (Mouret and Clune, 2015) and then deployed on the real robot. During task execution, if performance drops below a user-defined threshold due to, for instance, physical damage to the robot’s body or changes in the environmental conditions, the robot can iteratively select a promising behaviour from the map, test it, and measure its performance until a suitable behaviour is found.
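The select-test-measure cycle at the core of such map-based recovery can be sketched as a greedy loop. This is a deliberately simplified stand-in: the actual intelligent trial-and-error algorithm uses Bayesian optimisation to update the predicted performance of untested behaviours after each trial, whereas the sketch simply discards tested behaviours; the identifiers below are hypothetical.

```python
def recover(predicted_map, test_on_robot, threshold):
    """Try behaviours from a simulation-built map in order of predicted
    performance until one performs well enough on the damaged robot."""
    remaining = dict(predicted_map)   # behaviour id -> predicted performance
    trials = []
    while remaining:
        candidate = max(remaining, key=remaining.get)
        real = test_on_robot(candidate)   # one trial on the real robot
        trials.append((candidate, real))
        if real >= threshold:
            return candidate, trials      # good enough: stop searching
        del remaining[candidate]          # move to the next promising cell
    return None, trials                   # map exhausted without success
```

The key property conveyed here is that recovery requires only a handful of real-robot trials, because the expensive exploration was done in simulation when the map was built.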
Control for multirobot systems
Control for robot collectives is typically challenging to design by hand because there is no general approach to derive the behaviour of individual robots from a desired global behaviour or task description. In this respect, evolutionary robotics techniques have also been applied to evolve decentralised control for robot collectives. Quinn et al. (2003) were among the first to demonstrate the potential of evolutionary robotics in this domain, as they successfully evolved coordinated, cooperative behaviours for multirobot systems. A group of three robots, equipped only with minimal infrared sensors, was evolved to perform a formation-movement task without losing contact with each other. After an initial coordination phase, different roles emerged depending on the relative position of the robots and their history of interactions. Shortly after, Nelson et al. (2004) presented another notable example of collective behaviour evolution by having teams of real mobile robots play a robotic version of the game Capture the Flag. Each team defended its own goal while trying to 'attack' the opposing team’s goal. Robot controllers relied entirely on processed video data for sensing the environment. More recently, an approach called multiagent HyperNEAT (D’Ambrosio et al., 2010; D’Ambrosio et al., 2011; D’Ambrosio and Stanley, 2013) has made it possible to represent controllers for groups of robots: (i) as a function of the control policy geometry, that is, the relationship between the role of the robots and their position in the group, which allows the group size to be changed dynamically without further evolution, and (ii) as a function of the situational policy geometry, which enables each robot to have multiple control policies and switch between them depending on the robot’s state. However, multiagent HyperNEAT requires the number of policies to be specified by the experimenter, and assumes that there is a geometric relationship between different policies (e.g. advancing and retreating are geometric opposites). A recent variation of HyperNEAT called multibrain HyperNEAT (Schrum et al., 2016) has been introduced as a potential solution for evolving multiple control policies without assuming geometric relationships between them.
Throughout the years, evolutionary robotics techniques have been applied to a number of different tasks, such as hole avoidance (Trianni et al., 2006), collective transport of objects (Groß and Dorigo, 2009), self-assembly (Ampatzis et al., 2009), coordinated motion (Sperati et al., 2008), and chain formation (Sperati et al., 2011). However, such studies were typically carried out either in simulation or in highly controlled environments such as small, enclosed arenas in laboratories. In a demonstration of evolutionary robots operating in a real and uncontrolled environment, Duarte et al. (2016) evolved control for a swarm of aquatic surface robots to execute common collective tasks, namely homing, clustering, dispersion, and area monitoring, and then composed the controllers for each task to carry out a complete environmental monitoring task.
Body plan evolution
The first study on coevolution of body plans and control systems was carried out by Lipson and Pollack (2000) who, inspired by the experiments of Sims (1994) in the artificial life domain, developed an approach in which both body plans and control for robots were fully optimised in simulation (see Fig. 1). The structure of the fittest robot was produced using additive manufacturing techniques. Stepper motors and microcontrollers were then manually attached to the physical structure, and the performance of the robot was assessed. With the advent of new materials and fabrication techniques, less conventional approaches to designing robots are emerging. A recent example is soft robotics, in which robots are composed of soft and hard materials. Among the first contributors, Hiller and Lipson (2012) showcased the evolutionary design and fabrication of freeform soft robots capable of forward locomotion using soft, volumetrically expanding actuator materials. Actuation was provided by the materials periodically varying in volume.
The role of evolutionary robotics in other fields of research
In addition to the use of evolutionary robotics techniques for engineering purposes, evolved robots can also provide insights as to how and why specific traits evolved in natural systems (Floreano and Keller, 2010). Examples include studies on the evolution of communication and signalling (Floreano et al., 2007; Mitri et al., 2010), deception and information suppression in foraging robots with conflicting interests (Mitri et al., 2009), evolvability (Lehman and Stanley, 2013; Wilder and Stanley, 2015), polymorphic mating strategies (Elfwing and Doya, 2014), and evolution of complexity, that is, whether, when, how, and why increased complexity evolved in biological populations (Auerbach and Bongard, 2014).
Open issues in evolutionary robot engineering
Arguably, the main axis of research in evolutionary robotics is the engineering of control systems. In this respect, researchers have consistently faced a number of issues (Silva et al., 2016), namely:
- The reality gap (Jakobi, 1997), which manifests itself when controllers evolved in simulation prove ineffective on the physical robots. Potential solutions range from less formalised approaches, such as using samples from the real robots’ sensors in simulation (Miglino et al., 1995), to more formalised approaches, such as the transferability approach (Koos et al., 2013), in which the goal is to learn the discrepancies between simulation and reality in order to limit the evolution of behaviours that do not cross the reality gap.
- The prohibitively long time necessary to evolve controllers directly on real robots (Matarić and Cliff, 1996). One way to eliminate the reality gap is to rely exclusively on real robots for controller evolution, which is extremely time-consuming at the current state of development. Potential solutions include embodied evolution (Watson et al., 2002), in which the evolutionary algorithm is distributed across a group of robots that evolve in parallel and exchange genetic information; seeding the evolutionary process with pre-evolved or pre-programmed partial or approximate solutions (Silva et al., 2014a); or the onboard combination of simulation-based evolution and online evolution (Bongard and Lipson, 2004, 2005; De Nardi and Holland, 2008; Bongard et al., 2006; Bongard, 2009; O’Dowd et al., 2011), in which each robot maintains models of the environment and of other robots, and the models are adapted based on differences observed in controller performance between the onboard simulation and reality. However, the performance benefits of such approaches depend on encounters between robots, which may be infrequent in large or open environments, on the size of the collective, and on the communication capabilities of the robots.
- The bootstrap problem (Nelson et al., 2009) and deception (Whitley, 1991) are issues inherent to the evolutionary approach that drive evolution towards local optima. One solution is to directly assist the evolutionary process, which includes (i) incremental evolution (Mouret and Doncieux, 2008; Christensen and Dorigo, 2006), in which a task is decomposed into different components in a top-down fashion, (ii) behavioural decomposition, in which the robot controller is divided into sub-controllers, each generated separately to solve a different sub-task, and then composed via a second evolutionary process (Moioli et al., 2008; Duarte et al., 2015), and (iii) semi-interactive human-in-the-loop approaches (Celis et al., 2013; Woolley and Stanley, 2014). However, such approaches require a large amount of human knowledge. A potential solution is to direct the evolutionary process towards increasing exploration or exploitation of the search space by importing general techniques from evolutionary computation, such as multiobjective algorithms. A different alternative is to exploit design-for-emergence techniques, in which behaviour is considered a multi-layer system with different levels of organisation unfolding over different time scales (Nolfi, 2005, 2011; Yamashita and Tani, 2008). In such a system, short-term interactions between a robot and the environment give rise to low-level behaviours, the interaction between lower-level behaviours gives rise to higher-level behaviours, and higher-level behaviours cause changes to the lower-level behaviours and/or the interaction between the constituent elements (control system, body, and environment).
- The design of genomic encodings and of genotype-phenotype mappings that enable the evolution of complex structures (Meyer et al., 1998). The vast majority of evolutionary robotics studies employ direct encodings (Nelson et al., 2009), in which genotypes directly specify a phenotype: each parameter is encoded and optimised separately, which leads to scalability issues. Indirect encodings, on the other hand, allow solutions to be represented as patterns of parameters, rather than requiring each parameter to be represented individually (Bentley and Kumar, 1999; Bongard, 2002; Risi, 2012; Seys and Beer, 2007; Stanley and Miikkulainen, 2003; Stanley, 2007; Stanley et al., 2009; D’Ambrosio et al., 2014; Clune et al., 2011). However, indirect encodings are usually biased towards regular structures (e.g. symmetry), which makes it difficult for them to properly account for irregularities such as faults in the joints of four-legged robots (Clune et al., 2011). One solution is to combine indirect encodings with a refining process such as direct encodings by, for instance: (i) evolving with an indirect encoding and then switching to a direct encoding after a fixed, predefined number of generations (Clune et al., 2011), or (ii) evolving genomes composed of an indirect encoding part and a direct encoding part, and allowing evolution to automatically explore multiple encoding combinations (Silva et al., 2015).
- The absence of standard research practices in the field. For example, whereas there is an almost unanimous use of computer simulations in evolutionary robotics, there is not a prevalent simulation platform, which makes it difficult to reproduce results and to carry out comparative studies. Evolutionary robotics also suffers from the lack of benchmarks and testbeds. Even though there are multiple “common” tasks, there is no standard implementation of these tasks, meaning that it is currently not possible for researchers to assess an algorithm on a set of task instances. Such instances would be valuable for proofs of concept showing that a given algorithm has enough potential to be further explored, and for studies that analyse the strengths and limitations of a technique on a large number of different tasks.
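As an illustration of the noise-based defences against the reality gap mentioned in the first issue above, a controller can be scored by its worst performance across several noisy simulated trials, in the spirit of Jakobi's envelope-of-noise hypothesis. This is a minimal sketch; the function names and parameters are assumptions made for the example, not an implementation from the literature.

```python
import random

def noisy(reading, sigma=0.05):
    """Perturb a simulated sensor reading so that controllers cannot
    exploit regularities that exist only in the simulator."""
    return reading + random.gauss(0.0, sigma)

def robust_fitness(evaluate, controller, trials=5):
    """Conservative score: the worst performance over repeated noisy
    trials, penalising brittle, simulator-specific solutions."""
    return min(evaluate(controller) for _ in range(trials))
```

Scoring by the worst trial rather than the mean biases evolution towards behaviours that remain effective under perturbations, and hence are more likely to transfer to the physical robot.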
The authors thank Kenneth Stanley and Stefano Nolfi for their constructive feedback and valuable comments.
Ampatzis, C., Tuci, E., Trianni, V., Christensen, A. L., and Dorigo, M. (2009). Evolving self-assembly in autonomous homogeneous robots: Experiments with two physical robots. Artificial Life, 15(4):465-484.
Auerbach, J. and Bongard, J. (2014). Environmental influence on the evolution of morphological complexity in machines. PLoS Computational Biology, 10(1):e1003399.
Bentley, P. and Kumar, S. (1999). Three ways to grow designs: A comparison of evolved embryogenies for a design problem. In Proceedings of the 1st Genetic and Evolutionary Computation Conference, pages 35-43. ACM Press, New York, NY.
Bongard, J. (2002). Evolving modular genetic regulatory networks. In Proceedings of the IEEE Congress on Evolutionary Computation, pages 1872-1877. IEEE Press, Piscataway, NJ.
Bongard, J. (2009). Accelerating self-modeling in cooperative robot teams. IEEE Transactions on Evolutionary Computation, 13(2):321-332.
Bongard, J. (2011). Morphological change in machines accelerates the evolution of robust behavior. Proceedings of the National Academy of Sciences, 108(4):1234-1239.
Bongard, J. and Lipson, H. (2004). Automated robot function recovery after unanticipated failure or environmental change using a minimum of hardware trials. In Proceedings of the NASA/DoD Conference on Evolvable Hardware, pages 169-176. IEEE Press, Piscataway, NJ.
Bongard, J. and Lipson, H. (2005). Nonlinear system identification using coevolution of models and tests. IEEE Transactions on Evolutionary Computation, 9(4):361-384.
Bongard, J., Zykov, V., and Lipson, H. (2006). Resilient machines through continuous self-modeling. Science, 314(5802):1118-1121.
Braitenberg, V. (1984). Vehicles: Experiments in synthetic psychology. MIT Press, Cambridge, MA.
Brooks, R. A. (1992). Artificial life and real robots. In Proceedings of the 1st European Conference on Artificial Life, pages 3-10. MIT Press, Cambridge, MA.
Clark, A. (1997). Being There. MIT Press, Cambridge, MA.
Celis, S., Hornby, G. S., and Bongard, J. (2013). Avoiding local optima with user demonstrations and low-level control. In Proceedings of the IEEE Congress on Evolutionary Computation, pages 3403-3410. IEEE Press, Piscataway, NJ.
Chiel, H.J. and Beer, R.D. (1997). The brain has a body: Adaptive behavior emerges from interactions of nervous system, body and environment. Trends in Neurosciences 20:553-557.
Christensen, A. L. and Dorigo, M. (2006). Incremental evolution of robot controllers for a highly integrated task. In Proceedings of the 9th International Conference on Simulation of Adaptive Behavior, pages 473-484. Springer, Berlin, Germany.
Clune, J., Beckmann, B. E., Ofria, C., and Pennock, R. T. (2009). Evolving coordinated quadruped gaits with the HyperNEAT generative encoding. In IEEE Congress on Evolutionary Computation, pages 2764-2771. IEEE Press, Piscataway, NJ.
Clune, J., Stanley, K., Pennock, R., and Ofria, C. (2011). On the performance of indirect encoding across the continuum of regularity. IEEE Transactions on Evolutionary Computation, 15(3):346-367.
Cuccu, G. and Gomez, F. (2011). When novelty is not enough. In Applications of Evolutionary Computation, pages 234-243. Springer, Berlin, Germany.
Cully, A., Clune, J., Tarapore, D., and Mouret, J.-B. (2015). Robots that can adapt like animals. Nature, 521(7553):503-507.
Cully, A. and Mouret, J.-B. (2013). Behavioral repertoire learning in robotics. In Proceedings of 15th Genetic and Evolutionary Computation Conference, pages 175-182. ACM Press, New York, NY.
D’Ambrosio, D., Lehman, J., Risi, S., and Stanley, K. O. (2010). Evolving policy geometry for scalable multiagent learning. In Proceedings of the 9th International Conference on Autonomous Agents and Multiagent Systems, pages 731-738. IFAAMAS, Richland, SC.
D’Ambrosio, D., Lehman, J., Risi, S., and Stanley, K. O. (2011). Task switching in multirobot learning through indirect encoding. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 2802-2809. IEEE Press, Piscataway, NJ.
D’Ambrosio, D. and Stanley, K. O. (2013). Scalable multiagent learning through indirect encoding of policy geometry. Evolutionary Intelligence, 6(1):1-26.
D’Ambrosio, D., Gauci, J., and Stanley, K. O. (2014). HyperNEAT: The first five years. In Growing Adaptive Machines, volume 557 of Studies in Computational Intelligence, chapter 5, pages 159-185. Springer, Berlin, Germany.
De Nardi, R. and Holland, O. E. (2008). Coevolutionary modelling of a miniature rotorcraft. In Proceedings of the 10th International Conference on Intelligent Autonomous Systems, pages 364-373. IOS Press, Amsterdam, The Netherlands.
Duarte, M., Costa, V., Gomes, J., Rodrigues, T., Silva, F., Oliveira, S. M., and Christensen, A. L. (2016). Evolution of Collective Behaviors for a Real Swarm of Aquatic Surface Robots. PLoS ONE, 11(3):e0151834.
Duarte, M., Oliveira, S. M., and Christensen, A. L. (2015). Evolution of hybrid robotic controllers for complex tasks. Journal of Intelligent & Robotic Systems, 78(3-4):463-484.
Elfwing, S. and Doya, K. (2014). Emergence of polymorphic mating strategies in robot colonies. PLoS One, 9(4):e93622.
Floreano, D. and Keller, L. (2010). Evolution of adaptive behaviour by means of Darwinian selection. PLoS Biology, 8(1):e1000292.
Floreano, D. and Urzelai, J. (2000). Evolutionary robots with on-line self-organization and behavioral fitness. Neural Networks, 13(4-5):431-443.
Floreano, D. and Mondada, F. (1994). Automatic creation of an autonomous agent: Genetic evolution of a neural-network driven robot. In Proceedings of the 3rd International Conference on Simulation of Adaptive Behavior, pages 421-430. MIT Press, Cambridge, MA.
Floreano, D. and Mondada, F. (1996). Evolution of homing navigation in a real mobile robot. IEEE Transactions on Systems, Man, and Cybernetics, 26(3):396-407.
Floreano, D., Mitri, S., Magnenat, S., and Keller, L. (2007). Evolution conditions for the emergence of communication in robots. Current Biology, 17(6):514-519.
Gong, D., Yan, J., and Zuo, G. (2010). A review of gait optimization based on evolutionary computation. Applied Computational Intelligence and Soft Computing, 2010:1-12.
Gould, S. (2002). The Structure of Evolutionary Theory. Belknap Press, Cambridge, MA.
Groß, R. and Dorigo, M. (2009). Towards group transport by swarms of robots. International Journal of Bio-Inspired Computation, 1(1-2):1-13.
Harvey, I., Husbands, P., and Cliff, D. (1994). Seeing the light: Artificial evolution, real vision. In Proceedings of the 3rd International Conference on Simulation of Adaptive Behavior, pages 392-401. MIT Press, Cambridge, MA.
Harvey, I., Di Paolo, E., Wood, R., Quinn, M., and Tuci, E. (2005). Evolutionary robotics: A new scientific tool for studying cognition. Artificial Life, 11(1-2):79-98.
Hiller, J., and Lipson, H. (2012). Automatic design and manufacture of soft robots. IEEE Transaction on Robotics, 28(2):457-466.
Hornby, G., Takamura, S., Yamamoto, T., and Fujita, M. (2005). Autonomous evolution of dynamic gaits with two quadruped robots. IEEE Transactions on Robotics, 21(3):402-410.
Jakobi, N. (1997). Evolutionary robotics and the radical envelope-of-noise hypothesis. Adaptive Behavior, 6(2):325-368.
Koos, S., Mouret, J.-B., and Doncieux, S. (2013). The transferability approach: Crossing the reality gap in evolutionary robotics. IEEE Transactions on Evolutionary Computation, 17(1):122-143.
Lehman, J. and Stanley, K. O. (2010). Revising the evolutionary computation abstraction: minimal criteria novelty search. In Proceedings of the 12th Genetic and Evolutionary Computation Conference, pages 103-110. ACM Press, New York, NY.
Lehman, J. and Stanley, K. O. (2011). Abandoning objectives: Evolution through the search for novelty alone. Evolutionary Computation, 19(2):189-223.
Lehman, J. and Stanley, K. O. (2011b). Evolving a diversity of virtual creatures through novelty search and local competition. In Proceedings of the 13th Genetic and Evolutionary Computation Conference, pages 211-218. ACM Press, New York, NY.
Lehman, J. and Stanley, K. O. (2013). Evolvability is inevitable: Increasing evolvability without the pressure to adapt. PLoS ONE, 8(4):e62186.
Lehman, J., Stanley, K. O., and Miikkulainen, R. (2013). Effective diversity maintenance in deceptive domains. In Proceedings of the 15th Genetic and Evolutionary Computation Conference, pages 215-222. ACM Press, New York, NY.
Lewis, M. A., Fagg, A. H., and Solidum, A. (1992). Genetic programming approach to the construction of a neural network for control of a walking robot. In Proceedings of the IEEE International Conference on Robotics and Automation, pages 2618-2623. IEEE Press, Piscataway, NJ.
Lipson, H. and Pollack, J. (2000). Automatic design and manufacture of robotic lifeforms. Nature, 406:974-978.
Matarić, M. and Cliff, D. (1996). Challenges in evolving controllers for physical robots. Robotics and Autonomous Systems, 19(1):67-83.
Meyer, J.-A., Husbands, P., and Harvey, I. (1998). Evolutionary robotics: A survey of applications and problems. In Proceedings of the 1st European Workshop on Evolutionary Robotics, pages 1-21. Springer, Berlin, Germany.
Miglino, O., Lund, H., and Nolfi, S. (1995). Evolving mobile robots in simulated and real environments. Artificial Life, 2(4):417-434.
Mitri, S., Floreano, D., and Keller, L. (2009). The evolution of information suppression in communicating robots with conflicting interests. Proceedings of the National Academy of Sciences, 106(37):15786-15790.
Mitri, S., Floreano, D., and Keller, L. (2010). Relatedness influences signal reliability in evolving robots. Proceedings of the Royal Society of London B: Biological Sciences, rspb20101407.
Moioli, R., Vargas, P., Von Zuben, F., and Husbands, P. (2008). Towards the evolution of an artificial homeostatic system. In Proceedings of the IEEE Congress on Evolutionary Computation, pages 4023-4030. IEEE Press, Piscataway, NJ.
Mouret, J.-B. and Doncieux, S. (2008). Incremental evolution of animats' behaviors as a multi-objective optimization. In Proceedings of the 10th International Conference on Simulation of Adaptive Behavior, pages 210-219. Springer, Berlin, Germany.
Mouret, J.-B. and Doncieux, S. (2009). Overcoming the bootstrap problem in evolutionary robotics using behavioral diversity. In Proceedings of the 11th IEEE Congress on Evolutionary Computation, pages 1161-1168. IEEE Press, Piscataway, NJ.
Mouret, J.-B. and Doncieux, S. (2012). Encouraging behavioral diversity in evolutionary robotics: An empirical study. Evolutionary Computation, 20(1):91-133.
Nelson, A. L., Grant, E., and Henderson, T. C. (2004). Evolution of neural controllers for competitive game playing with teams of mobile robots. Robotics and Autonomous Systems, 46(3):135-150.
Nelson, A., Barlow, G., and Doitsidis, L. (2009). Fitness functions in evolutionary robotics: A survey and analysis. Robotics and Autonomous Systems, 57(4):345-370.
Nolfi, S. (2005). Behaviour as a complex adaptive system: On the role of self-organization in the development of individual and collective behaviour. Complexus, 2(3-4):195-203.
Nolfi, S. (2011). Behavior and cognition as a complex adaptive system: Insights from robotic experiments. In Handbook of the Philosophy of Science. Volume 10: Philosophy of Complex Systems, pages 443-463.
Nolfi, S. and Floreano, D. (2002). Synthesis of autonomous robots through artificial evolution. Trends in Cognitive Sciences, 6:31-37.
Nolfi, S., Floreano, D., Miglino, O., and Mondada, F. (1994). How to evolve autonomous robots: Different approaches in evolutionary robotics. In Proceedings of the 4th International Workshop on Synthesis and Simulation of Living Systems, pages 190-197. MIT Press, Cambridge, MA.
O’Dowd, P. J., Winfield, A. F. T., and Studley, M. (2011). The distributed co-evolution of an embodied simulator and controller for swarm robot behaviours. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 4995-5000. IEEE Press, Piscataway, NJ.
Pugh, J. K., Soros, L., Szerlip, P. A., and Stanley, K. O. (2015). Confronting the challenge of quality diversity. In Proceedings of the 17th Genetic and Evolutionary Computation Conference, pages 967-974. ACM Press, New York, NY.
Pugh, J. K., Soros, L., and Stanley, K. O. (2016). Quality diversity: A new frontier for evolutionary computation. Frontiers in Robotics and AI, 3(40).
Quinn, M., Smith, L., Mayley, G., and Husbands, P. (2003). Evolving controllers for a homogeneous system of physical robots: Structured cooperation with minimal sensors. Philosophical Transactions of the Royal Society of London A: Mathematical, Physical and Engineering Sciences, 361(1811):2321-2343.
Risi, S. (2012). Towards Evolving More Brain-Like Artificial Neural Networks. PhD thesis, University of Central Florida, Orlando, FL.
Schrum, J., Lehman, J., and Risi, S. (2016). Using Indirect Encoding of Multiple Brains to Produce Multimodal Behavior. arXiv preprint arXiv:1604.07806.
Seys, C. W. and Beer, R. D. (2007). Genotype reuse more important than genotype size in evolvability of embodied neural networks. In Proceedings of the 9th European Conference on Artificial Life, pages 915-924. Springer, Berlin, Germany.
Silva, F., Correia, L., and Christensen, A. L. (2014). Speeding up online evolution of robotic controllers with macro-neurons. In Proceedings of the 17th European Conference on the Applications of Evolutionary Computation, pages 765-776. Springer, Berlin, Germany.
Silva, F., Correia, L., and Christensen, A. L. (2015). R-HybrID: Evolution of agent controllers with a hybridisation of indirect and direct encodings. In Proceedings of the International Conference on Autonomous Agents and Multiagent Systems, pages 735-744. IFAAMAS, Richland, SC.
Sims, K. (1994). Evolving 3D morphology and behavior by competition. In Proceedings of the 4th International Workshop on Synthesis and Simulation of Living Systems, pages 28-39. MIT Press, Cambridge, MA.
Sperati, V., Trianni, V., and Nolfi, S. (2008). Evolving coordinated group behaviours through maximisation of mean mutual information. Swarm Intelligence, 2(2-4):73-95.
Sperati, V., Trianni, V., and Nolfi, S. (2011). Self-organised path formation in a swarm of robots. Swarm Intelligence, 5(2):97-119.
Stanley, K. and Miikkulainen, R. (2003). A taxonomy for artificial embryogeny. Artificial Life, 9(2):93-130.
Stanley, K. O. (2007). Compositional pattern producing networks: A novel abstraction of development. Genetic Programming and Evolvable Machines, 8(2):131-162.
Stanley, K. O., D’Ambrosio, D., and Gauci, J. (2009). A hypercube-based encoding for evolving large-scale neural networks. Artificial Life, 15(2):185-212.
Trianni, V., Nolfi, S., and Dorigo, M. (2006). Cooperative hole avoidance in a swarm-bot. Robotics and Autonomous Systems, 54(2):97-103.
Turing, A. M. (1950). Computing machinery and intelligence. Mind, 59(236):433-460.
Watson, R., Ficici, S., and Pollack, J. (2002). Embodied evolution: Distributing an evolutionary algorithm in a population of robots. Robotics and Autonomous Systems, 39(1):1–18.
Whitley, L. (1991). Fundamental principles of deception in genetic search. In Proceedings of the 1st Workshop on Foundations of Genetic Algorithms, pages 221-241. Morgan Kaufmann, San Mateo, CA.
Wilder, B. and Stanley, K. (2015). Reconciling explanations for the evolution of evolvability. Adaptive Behavior, 23(3):171-179.
Woolley, B. G. and Stanley, K. O. (2014). A novel human-computer collaboration: Combining novelty search with interactive evolution. In Proceedings of the 16th Genetic and Evolutionary Computation Conference, pages 233-240. ACM Press, New York, NY.
Yamashita, Y. and Tani, J. (2008). Emergence of functional hierarchy in a multiple timescale neural network model: A humanoid robot experiment. PLoS Computational Biology, 4(11):e1000220.
Yosinski, J., Clune, J., Hidalgo, D., Nguyen, S., Zagal, J., and Lipson, H. (2011). Evolving robot gaits in hardware: The HyperNEAT generative encoding vs. parameter optimization. In Proceedings of the 20th European Conference on Artificial Life, pages 890-897. MIT Press, Cambridge, MA.
Bongard, J. (2013). Evolutionary robotics. Communications of the ACM, 56(8):74-85.
Nolfi, S., Bongard, J., Husbands, P., and Floreano, D. (2016). Evolutionary robotics. In Handbook of Robotics, 2nd Edition, pages 2035-2067. Springer, Berlin, Germany.
Nolfi, S., and Floreano, D. (2000). Evolutionary robotics: The biology, intelligence, and technology of self-organizing machines. MIT Press, Cambridge, MA.
Pfeifer, R., and Bongard, J. (2006). How the body shapes the way we think: a new view of intelligence. MIT Press, Cambridge, MA.
Silva, F., Duarte, M., Correia, L., Oliveira, S. M., and Christensen, A.L. (2016). Open issues in evolutionary robotics. Evolutionary Computation, 24(2):205-236.
See also: Biologically inspired robotics, Neuroevolution, Animat, Developmental robotics, Swarm robotics