Neuroevolution

From Scholarpedia
Joel Lehman and Risto Miikkulainen (2013), Scholarpedia, 8(6):30977. doi:10.4249/scholarpedia.30977 revision #137053

Curator: Risto Miikkulainen

Neuroevolution is a machine learning technique that applies evolutionary algorithms to construct artificial neural networks, taking inspiration from the evolution of biological nervous systems in nature. Compared to other neural network learning methods, neuroevolution is highly general; it allows learning without explicit targets, with only sparse feedback, and with arbitrary neural models and network structures. Neuroevolution is an effective approach to solving reinforcement learning problems, and is most commonly applied in evolutionary robotics and artificial life.

Overview

The neuroevolution approach to artificial intelligence is motivated by the evolution of biological nervous systems. Correspondingly, neuroevolution applies abstractions of natural evolution (i.e. evolutionary algorithms) to construct abstractions of biological neural networks (i.e. artificial neural networks). An overall ambitious objective is to evolve complex artificial neural networks capable of intelligent behavior. As a result, neuroevolution can be seen both as a means to investigate how intelligence evolved in nature, as well as a practical method for engineering artificial neural networks to perform desired tasks.

Similar to natural selection in nature, which is driven only by feedback from reproductive success, neuroevolution is guided by some measure of overall performance. Whereas the most common artificial neural network learning algorithms operate through supervised learning and therefore depend upon a labeled corpus of input-output pairs, a main advantage of neuroevolution is that it allows learning even when such corpora are not available, based only on sparse feedback. For example, in game playing, vehicle control, and robotics, the optimal actions at each point in time are not always known; it is only possible to observe how well a sequence of actions worked, e.g. resulting in a win or loss in the game. Neuroevolution makes it possible to find a neural network that optimizes behavior given only such sparse feedback, without direct information about what exactly it should be doing.

Furthermore, neuroevolution generalizes to a wide range of network architectures and neural models; applying neuroevolution requires only that the performance of networks can be evaluated over time, and that the behavior of the networks can be modified through evolution. While most neural learning methods focus on modifying only the strengths of neural connections (i.e. their connection weights), neuroevolution can additionally optimize other parameters, such as the structure of the network (e.g. adding neurons or connections), the type of computation performed by individual neurons, and even learning rules that modify the network during evaluation. Interestingly, integrating such lifetime learning rules (e.g. Hebbian or neuromodulated plasticity) allows evolved neural networks to learn from experience. In this way, neuroevolution can facilitate exploring biological adaptation on multiple time scales.

The most common applications of neuroevolution are in reinforcement learning, evolutionary robotics, and artificial life. Sample applications include evolving behaviors for board games and video games, controlling mobile robots, and investigating the evolution of biologically-relevant behaviors.

Basic Algorithms

Figure 1: Evolving neural networks.

Typically in neuroevolution, a population of genetic encodings of neural networks is evolved in order to find a network that solves the given task. Most neuroevolution methods follow the usual generate-and-test loop of evolutionary algorithms (Figure 1). Each encoding in the population (a genotype) is chosen in turn and decoded into the corresponding neural network (a phenotype). This network is then employed in the task, its performance over time is measured, and a fitness value is assigned to the corresponding genotype. After all members of the population have been evaluated in this manner, genetic operators are used to create the next generation of the population. Those encodings with the highest fitness are mutated and crossed over with each other, and the resulting offspring replace the genotypes with the lowest fitness in the population. The process therefore constitutes an intelligent parallel search towards better genotypes, and continues until a network with a sufficiently high fitness is found.
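
To make this loop concrete, the following is a minimal sketch (not any particular published system): genotypes are flat weight vectors for a fixed two-layer network, fitness is the negative error on a small stand-in task (XOR, used purely as an illustrative placeholder for sparse task feedback), and each generation applies truncation selection, uniform crossover, and Gaussian mutation.

```python
# Minimal neuroevolution loop (illustrative sketch, not a published system).
# Genotype: flat weight vector for a fixed 2-4-1 feedforward network.
# Fitness: negative mean squared error on XOR, a stand-in for task performance.
import numpy as np

rng = np.random.default_rng(0)
N_IN, N_HID, N_OUT = 2, 4, 1
GENOME_LEN = N_IN * N_HID + N_HID * N_OUT

def decode(genotype):
    """Map a flat genotype to the weight matrices of the phenotype network."""
    w1 = genotype[:N_IN * N_HID].reshape(N_IN, N_HID)
    w2 = genotype[N_IN * N_HID:].reshape(N_HID, N_OUT)
    return w1, w2

def evaluate(genotype):
    """Run the phenotype on the task and return a scalar fitness (higher is better)."""
    w1, w2 = decode(genotype)
    x = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)
    out = np.tanh(np.tanh(x @ w1) @ w2)
    return -np.mean((out - y) ** 2)

pop = rng.normal(0.0, 1.0, size=(50, GENOME_LEN))        # initial population
for generation in range(200):
    fitness = np.array([evaluate(g) for g in pop])
    order = np.argsort(fitness)[::-1]                     # best genotypes first
    parents = pop[order[:10]]                             # truncation selection
    children = []
    for _ in range(len(pop) - len(parents)):
        a, b = parents[rng.integers(10)], parents[rng.integers(10)]
        mask = rng.random(GENOME_LEN) < 0.5               # uniform crossover
        children.append(np.where(mask, a, b) + rng.normal(0.0, 0.1, GENOME_LEN))
    pop = np.vstack([parents, children])                  # offspring replace the worst
print("fitness of best genotype:", evaluate(pop[0]))
```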

Several methods exist for evolving neural networks depending on how the networks are encoded. The most straightforward encoding, sometimes called conventional neuroevolution (CNE), is formed by concatenating the numerical values for the network weights (Schaffer et al. 1992, Yao 1999, Floreano et al. 2008). This encoding allows evolution to optimize the weights of a fixed neural network architecture, an approach that is easy to implement and is practical in many domains. However, the CNE approach (1) can converge to deceptive local optima; (2) requires experimenters to choose an appropriately sized and structured network topology; and (3) scales poorly to more difficult problems because the number of parameters to be evolved grows linearly or quadratically in the size of the network.

More sophisticated encodings have been devised to alleviate these problems. One approach is to consider evolution at the level of solution components instead of full solutions (Moriarty et al. 1999, Potter and De Jong 2000, Gomez et al. 2008). That is, instead of a population of complete neural networks, a population of network fragments, neurons, or connection weights is evolved. Each individual is evaluated as one part of a full network, and its fitness reflects how well it cooperates with other individuals in the particular solutions it helps compose. In this manner, the complex problem of finding a solution network is broken into several smaller subproblems.
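
The sketch below illustrates this idea in the spirit of neuron-level cooperative coevolution such as ESP (Gomez et al. 2008), under simplifying assumptions: one subpopulation per hidden-unit position, networks assembled from randomly sampled neurons, and each neuron credited with the average fitness of the networks in which it participated. The toy task and parameter choices are illustrative only.

```python
# Cooperative coevolution of single neurons (simplified, ESP-style sketch).
# Each subpopulation evolves the weights of one hidden unit; a neuron's fitness
# is the average fitness of the complete networks it helped to form.
import numpy as np

rng = np.random.default_rng(1)
N_IN, N_HID, N_OUT, SUBPOP = 2, 4, 1, 20

def network_fitness(neurons):
    """Assemble one hidden unit per position into a network and score it on a toy task."""
    w_in = np.stack([n[:N_IN] for n in neurons], axis=1)    # (N_IN, N_HID)
    w_out = np.stack([n[N_IN:] for n in neurons], axis=0)   # (N_HID, N_OUT)
    x = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)
    out = np.tanh(np.tanh(x @ w_in) @ w_out)
    return -np.mean((out - y) ** 2)

# One subpopulation of candidate weight vectors per hidden-unit position.
subpops = [rng.normal(0.0, 1.0, (SUBPOP, N_IN + N_OUT)) for _ in range(N_HID)]
for generation in range(100):
    scores = [np.zeros(SUBPOP) for _ in range(N_HID)]
    trials = [np.zeros(SUBPOP) for _ in range(N_HID)]
    for _ in range(10 * SUBPOP):                   # evaluate random teams of neurons
        idx = [rng.integers(SUBPOP) for _ in range(N_HID)]
        f = network_fitness([subpops[h][idx[h]] for h in range(N_HID)])
        for h in range(N_HID):
            scores[h][idx[h]] += f
            trials[h][idx[h]] += 1
    for h in range(N_HID):                         # evolve each subpopulation separately
        avg = np.where(trials[h] > 0, scores[h] / np.maximum(trials[h], 1), -np.inf)
        elite = subpops[h][np.argsort(avg)[::-1][: SUBPOP // 4]]
        children = elite[rng.integers(len(elite), size=SUBPOP - len(elite))]
        subpops[h] = np.vstack([elite, children + rng.normal(0.0, 0.1, children.shape)])
```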

Another approach is to evolve the network topology in addition to the weights. The idea is that choosing an appropriate topology is difficult for experimenters yet can have a large effect on the experimental outcome (Angeline et al. 1994, Yao 1999, Floreano et al. 2008). Additionally, evolving topologies can achieve better performance than evolving weights only (Harvey et al. 1997, Stanley and Miikkulainen 2004). One promising approach is to start evolution with simple solutions and gradually make them more complex. Such a process takes place in biology and is a powerful approach in machine learning in general.
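
A loose sketch of such complexification appears below. It is inspired by, but far simpler than, NEAT (Stanley and Miikkulainen 2004): a genome is a list of connection genes, and structural mutations either add a new connection between existing nodes or split an existing connection with a new hidden node. The data structures and identifiers are illustrative assumptions, not the published algorithm.

```python
# Complexifying structural mutations (a loose, NEAT-inspired sketch).
# A genome is a list of connection genes; mutations add connections or nodes.
import random

def add_connection(genome, nodes):
    """Mutation: connect two previously unconnected nodes with a small random weight."""
    existing = {(g["src"], g["dst"]) for g in genome}
    candidates = [(a, b) for a in nodes for b in nodes
                  if a != b and (a, b) not in existing]
    if candidates:
        src, dst = random.choice(candidates)
        genome.append({"src": src, "dst": dst,
                       "weight": random.uniform(-1, 1), "enabled": True})

def add_node(genome, nodes):
    """Mutation: split an existing connection, inserting a new hidden node in its place."""
    conn = random.choice([g for g in genome if g["enabled"]])
    new_node = max(nodes) + 1
    nodes.append(new_node)
    conn["enabled"] = False                      # the split connection is disabled
    genome.append({"src": conn["src"], "dst": new_node, "weight": 1.0, "enabled": True})
    genome.append({"src": new_node, "dst": conn["dst"],
                   "weight": conn["weight"], "enabled": True})

# Start from a minimal genome (two inputs, one output, fully connected) and complexify.
nodes = [0, 1, 2]
genome = [{"src": 0, "dst": 2, "weight": 0.5, "enabled": True},
          {"src": 1, "dst": 2, "weight": -0.3, "enabled": True}]
add_node(genome, nodes)
add_connection(genome, nodes)
print(genome)
```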

All the above methods map the genetic encoding directly to the corresponding neural network: Each part of the encoding corresponds to one part of the network, and vice versa. Indirect encoding, in contrast, specifies a process through which the network is constructed, such as cell division, spatial embedding, or generation through expanding grammatical rules (Kitano 1990, Gruau and Whitley 1993, Stanley and Miikkulainen 2003, Floreano et al. 2008, Stanley et al. 2009). Such an encoding can be highly compact (i.e. a large number of connections can be encoded with few parameters), and also take advantage of modular solutions. Modularity implies that the same structures can be repeated with minor modifications, as often occurs in biology. Successfully scaling neuroevolution to levels of biological complexity (e.g. to millions or billions of neurons) likely depends upon developing more sophisticated indirect encodings, making such encodings an important direction for future research.
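
The following toy sketch conveys the principle of indirect encoding, loosely in the spirit of HyperNEAT (Stanley et al. 2009): a handful of evolved parameters define a function over the spatial coordinates of source and target neurons, and that single function generates an arbitrarily large, regularly patterned weight matrix. The particular functional form here is an arbitrary assumption chosen for illustration.

```python
# Indirect encoding sketch: a few evolved parameters generate many weights.
# The functional form below is an arbitrary illustrative choice.
import numpy as np

def indirect_weights(params, n_src, n_dst):
    """Generate an (n_src, n_dst) weight matrix from a tiny genotype."""
    a, b, c = params                                  # the entire genotype
    xs = np.linspace(-1, 1, n_src)[:, None]           # source neuron positions
    ys = np.linspace(-1, 1, n_dst)[None, :]           # target neuron positions
    return a * np.sin(b * (xs - ys)) + c * xs * ys    # smooth, repeating pattern

# Three parameters specify a 1000 x 1000 connectivity pattern (one million weights).
W = indirect_weights(np.array([1.0, 3.0, 0.5]), n_src=1000, n_dst=1000)
print(W.shape)
```

Because the same function is sampled at every coordinate pair, the resulting connectivity repeats the same structure across the network with smooth variations, which is one way indirect encodings exploit modularity.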

Extensions

The basic mechanisms of neuroevolution can be augmented in several ways, making the process more efficient and facilitating solving more difficult problems. One of the most basic extensions is incremental evolution, or shaping: Evolution starts on a simple task, and once that is mastered, the solutions are evolved further on a more challenging task, and through a series of such transfer steps, eventually on the actual goal task itself (Harvey et al. 1994, Gomez and Miikkulainen 1997, Bongard 2011). Shaping can work through changing the environment, such as gradually increasing the difficulty of the task, or by changing the fitness function, e.g. by rewarding gradually more complex behaviors. It is often possible to solve challenging tasks by approaching them incrementally even when they cannot be solved directly.
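
A minimal sketch of shaping is given below. The task generator, fitness function, and inner evolutionary loop are illustrative placeholders rather than any published setup; the point is only that the population evolved at one difficulty level becomes the starting population for the next.

```python
# Incremental evolution (shaping) sketch: the population evolved on an easier
# task variant seeds evolution on the next, harder variant.
# make_task and evolve are illustrative placeholders, not a published setup.
import numpy as np

rng = np.random.default_rng(2)

def make_task(difficulty):
    """Stand-in task: match a target vector whose magnitude grows with difficulty."""
    target = difficulty * np.ones(5)
    return lambda genome: -np.sum((genome - target) ** 2)

def evolve(population, fitness_fn, generations=50):
    """A tiny truncation-selection loop used for each shaping stage."""
    for _ in range(generations):
        fitness = np.array([fitness_fn(g) for g in population])
        parents = population[np.argsort(fitness)[::-1][:10]]
        children = parents[rng.integers(10, size=len(population) - 10)]
        population = np.vstack([parents, children + rng.normal(0.0, 0.1, children.shape)])
    return population

population = rng.normal(0.0, 1.0, (40, 5))
for difficulty in [0.5, 1.0, 2.0, 4.0]:          # curriculum of increasingly hard tasks
    population = evolve(population, make_task(difficulty))
```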

Many general extensions to evolutionary computation methods apply particularly well to neuroevolution. For instance, intelligent mutation techniques such as those employed in evolution strategies are effective because the weights often have suitable correlations (Heidrich-Meisner and Igel 2009). Similarly, multiobjective evolutionary algorithms that balance achievement over competing objectives can often benefit neuroevolution (García-Pedrajas et al. 2005, Van Hoorn et al. 2009, Mouret and Doncieux 2012). Neural networks can also be evolved through coevolution, where individuals in the population compete or cooperate with each other (Fogel 2001, Stanley and Miikkulainen 2004, García-Pedrajas et al. 2005). In particular, a coevolutionary arms race can be facilitated by topology-evolving neuroevolution: As the network becomes gradually more complex, evolution is likely to elaborate on existing behaviors instead of replacing them. Finally, various general evolutionary computation methods for increasing exploration or diversity within the population are often effective and necessary when applying neuroevolution to difficult problems. For example, reducing competition between networks with different topologies, ages, or functionalities often improves performance (Stanley and Miikkulainen 2004, Bongard 2011, Lehman and Stanley 2011, Mouret and Doncieux 2012).
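
As a concrete example of one such diversity-increasing method, the sketch below computes novelty scores in the style of novelty search (Lehman and Stanley 2011): each network is characterized by a behavior descriptor, and its score is the average distance to its k nearest neighbors among the current population and an archive of past behaviors. The descriptors and parameters here are assumptions for illustration.

```python
# Novelty score sketch (in the style of Lehman and Stanley 2011): reward being
# behaviorally different from the current population and an archive of past behaviors.
import numpy as np

def novelty_scores(behaviors, archive, k=5):
    """behaviors: (n, d) behavior descriptors; archive: (m, d) past descriptors."""
    reference = np.vstack([behaviors, archive]) if len(archive) else behaviors
    scores = []
    for b in behaviors:
        dists = np.sort(np.linalg.norm(reference - b, axis=1))
        scores.append(dists[1:k + 1].mean())     # skip the zero distance to itself
    return np.array(scores)

# Usage: selection favors behaviorally novel networks instead of (or alongside)
# high-fitness ones; descriptors might be, e.g., a robot's final position.
behaviors = np.random.default_rng(3).random((20, 2))
print(novelty_scores(behaviors, archive=np.empty((0, 2))))
```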

On the other hand, several extensions utilize the special properties of the neural network phenotype. For instance, neuron activation functions, initial states, and learning rules can be evolved to fit a given task (Schaffer et al. 1992, Yao 1999, Floreano et al. 2008). Importantly, evolution can be combined with other neural network learning methods (Ackley and Littman 1992, Gruau and Whitley 1993, Floreano et al. 2008). In such approaches, evolution usually provides the initial network, which then adapts further during its evaluation in the task. The adaptation can take place through Hebbian or neuromodulated plasticity, allowing the network to adapt based on environmental feedback. Alternatively, supervised learning such as back-propagation can be used, provided targets are available. Interestingly, even if the optimal behaviors are not known, such training can be useful: Networks can be trained to imitate the most successful individuals in the population, or part of the network can be trained to predict the next inputs. The weight changes may be encoded back into the genotype, implementing Lamarckian evolution; alternatively, they may affect selection through the Baldwin effect, i.e. networks that learn well will be selected for reproduction even if the weight changes themselves are not inherited.
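
The sketch below illustrates one such combination under simple assumptions: the genotype encodes both initial connection weights and per-connection Hebbian learning rates, and the weights are updated from pre- and post-synaptic activity while the network is being evaluated.

```python
# Evolving plastic networks (sketch): the genotype encodes initial weights plus
# per-connection Hebbian learning rates, and the weights adapt during evaluation.
import numpy as np

N_IN, N_OUT = 3, 2

def evaluate_plastic(genotype, inputs):
    """Decode (initial weights, learning rates) and run an episode with Hebbian updates."""
    w = genotype[: N_IN * N_OUT].reshape(N_IN, N_OUT).copy()   # initial weights
    eta = genotype[N_IN * N_OUT:].reshape(N_IN, N_OUT)         # evolved learning rates
    outputs = []
    for x in inputs:
        y = np.tanh(x @ w)
        w += eta * np.outer(x, y)        # Hebbian update: pre * post, per connection
        outputs.append(y)
    return np.array(outputs), w          # final weights differ from the inherited ones

rng = np.random.default_rng(4)
genotype = rng.normal(0.0, 0.5, 2 * N_IN * N_OUT)
outputs, adapted_w = evaluate_plastic(genotype, inputs=rng.random((10, N_IN)))
```

Writing the adapted weights back into the genotype would correspond to the Lamarckian option mentioned above, whereas discarding them and letting only the resulting performance influence selection corresponds to the Baldwin effect.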

There are also several ways to bias and direct the learning system using human knowledge. For instance, human-coded rules can be encoded in partial network structures, and incorporated into the evolving networks as structural mutations. Such knowledge can be used to implement initial behaviors in the population, or it can serve as advice during evolution (Stanley et al. 2005, Bryant and Miikkulainen 2007, Van Hoorn et al. 2009). In cases where rule-based knowledge is not available, it may still be possible to obtain examples of human behavior. Such examples can then be incorporated into evolution, either as components of fitness, or by explicitly training the evolved solutions towards human behavior, e.g. through backpropagation. Similarly, as discussed above, knowledge about the task and its components can be utilized in designing effective shaping strategies. In this manner, human expertise can be used to bootstrap and guide evolution in difficult tasks, as well as direct it towards the desired kinds of solutions.

Applications

Neuroevolution methods are especially powerful in continuous reinforcement learning domains and in those with partially observable states. These domains include many real-world applications of reinforcement learning; the most obvious application is adaptive, nonlinear control of physical devices. For instance, neural network controllers have been evolved to drive mobile robots, automobiles, and even rockets (Beer and Gallagher 1992, Harvey et al. 1997, Lipson and Pollack 2000, Nolfi and Floreano 2000, Hornby and Pollack 2002, Gomez and Miikkulainen 2003, Togelius et al. 2007, Valsalam et al. 2012). The control approach has been extended to optimize systems such as chemical processes, manufacturing systems, and computer systems (Gomez et al. 2001, Conradie et al. 2002, Greer et al. 2002, Whiteson and Stone 2006). However, a crucial limitation of current approaches is that the controllers usually need to be developed in simulation and then transferred to the real system. Evolution is typically strongest as an off-line learning method, where it is free to explore potential solutions in parallel.

Second, neuroevolution has proved useful in designing players for board games such as checkers, chess, and Othello (Moriarty and Miikkulainen 1995, Fogel 2001, Fogel et al. 2004). Interestingly, the same approach works in constructing characters in artificial environments, such as games and virtual reality. Non-player characters in current video games are usually scripted and limited; neuroevolution can be used to evolve complex behaviors for them, and even adapt them in real time (Lucas 2005, Stanley et al. 2005, Togelius et al. 2011). Neuroevolution can thus facilitate new kinds of video games, such as games where players train a team of AI agents. Similarly, artifacts such as weapons can be constructed by evolved neural networks, thus enabling games where players collaboratively breed new in-game content that otherwise would have to be explicitly designed by human experts (Togelius et al. 2011, Risi et al. 2012).

Third, evolution of neural networks is a natural tool for problems in artificial life, and is increasingly being applied to explore issues that are difficult to probe through more traditional techniques in evolutionary biology. While the particular selection pressures that led to key evolutionary transitions in nature are transitory and leave little direct evidence, neuroevolution can be applied in controlled experiments to investigate what conditions are necessary for certain behaviors to evolve. Thus it is possible to design neuroevolution experiments on how behaviors such as foraging, pursuit and evasion, hunting and herding, collaboration, and even communication may emerge in response to environmental pressure (Werner and Dyer 1990, Beer and Gallagher 1992, Cangelosi and Parisi 1997, Nolfi and Floreano 2000, Rawal et al. 2010). Neuroevolution can also be applied to investigate more abstract evolutionary tendencies, like the evolution of modularity or how biological development interacts with evolution (Kashtan and Alon 2005, Bongard 2011, Clune et al. 2013). In addition, analyzing evolved neural circuits, and understanding how they map to function, can lead to insights into biological networks (Aharonov et al. 2001, Keinan et al. 2006).

Conclusion

Neuroevolution is a biologically-inspired method for creating artificial intelligence. Open research challenges include creating effective and scalable indirect encodings, understanding and synthesizing the evolutionary pressures leading to high-level intelligence, and scaling neuroevolution to evolve cognitive behaviors such as multimodal behavior, communication, and lifetime learning. Meeting such challenges may yield practical approaches to evolving intelligent robot controllers and video game controllers as well as insight into biological neural networks and the evolution of intelligence itself.

References

  • Ackley, David H. and Littman, Michael L. (1992). Interactions between learning and evolution. In: Artificial Life II. Addison-Wesley.
  • Aharonov-Barki, Ranit; Beker, Tuvik and Ruppin, Eytan (2001). Emergence of memory-driven command neurons in evolved artificial agents. Neural Computation 13(3): 691-716. doi:10.1162/089976601300014529.
  • Bongard, Josh (2011). Morphological change in machines accelerates the evolution of robust behavior. Proceedings of the National Academy of Sciences 108(4): 1234-1239. doi:10.1073/pnas.1015390108.
  • Bryant, Bobby D. and Miikkulainen, Risto (2007). Acquiring visibly intelligent behavior with example-guided neuroevolution. In: Proceedings of AAAI 2007. AAAI Press.
  • Clune, Jeff; Mouret, Jean-Baptiste and Lipson, Hod (2013). The evolutionary origins of modularity. Proceedings of the Royal Society B 280(1755).
  • Conradie, Alex E.; Miikkulainen, Risto and Aldrich, Christiaan (2002). Adaptive control using neural swarming. In: Proceedings of the Genetic and Evolutionary Computation Conference.
  • Floreano, Dario; Dürr, Peter and Mattiussi, Claudio (2008). Neuroevolution: from architectures to learning. Evolutionary Intelligence 1: 47-62. doi:10.1007/s12065-007-0002-4.
  • Fogel, David B. (2001). Blondie24: Playing at the Edge of AI. Morgan Kaufmann, San Francisco, CA.
  • Fogel, David B.; Hays, Timothy J.; Hahn, Sarah L. and Quon, James (2004). A self-learning evolutionary chess program. Proceedings of the IEEE 92(12): 1947-1954. doi:10.1109/jproc.2004.837633.
  • García-Pedrajas, Nicolás; Hervás-Martínez, César and Ortiz-Boyer, Domingo (2005). Cooperative coevolution of artificial neural network ensembles for pattern classification. IEEE Transactions on Evolutionary Computation 9(3): 271-302. doi:10.1109/tevc.2005.844158.
  • Gomez, Faustino; Schmidhuber, Jürgen and Miikkulainen, Risto (2008). Accelerated neural evolution through cooperatively coevolved synapses. Journal of Machine Learning Research 9: 937-965.
  • Gomez, Faustino; Burger, Doug and Miikkulainen, Risto (2001). A neuroevolution method for dynamic resource allocation on a chip multiprocessor. In: 2001 International Joint Conference on Neural Networks, pp. 2355-2361. IEEE.
  • Gomez, Faustino and Miikkulainen, Risto (2003). Active guidance for a finless rocket using neuroevolution. In: Genetic and Evolutionary Computation Conference. Springer.
  • Greer, Brian; Hakonen, Henri; Lahdelma, Risto and Miikkulainen, Risto (2002). Numerical optimization with neuroevolution. In: 2002 Congress on Evolutionary Computation. IEEE.
  • Gruau, Frédéric and Whitley, Darrell (1993). Adding learning to the cellular development of neural networks: Evolution and the Baldwin effect. Evolutionary Computation 1(3): 213-233. doi:10.1162/evco.1993.1.3.213.
  • Harvey, Inman; Husbands, Philip and Cliff, Dave (1994). Seeing the light: Artificial evolution, real vision. In: International Conference on Simulation of Adaptive Behavior. MIT Press/Bradford Books.
  • Harvey, Inman; Husbands, Philip; Cliff, Dave; Thompson, Adrian and Jakobi, Nick (1997). Evolutionary robotics: The Sussex approach. Robotics and Autonomous Systems 20(2): 205-224. doi:10.1016/s0921-8890(96)00067-x.
  • Heidrich-Meisner, Verena and Igel, Christian (2009). Neuroevolution strategies for episodic reinforcement learning. Journal of Algorithms 64(4): 152-168.
  • Hornby, Gregory S. and Pollack, Jordan B. (2002). Creating high-level components with a generative representation for body-brain evolution. Artificial Life 8(3): 223-246. doi:10.1162/106454602320991837.
  • Kashtan, Nadav and Alon, Uri (2005). Spontaneous evolution of modularity and network motifs. Proceedings of the National Academy of Sciences 102(39): 13773-13778. doi:10.1073/pnas.0503610102.
  • Keinan, Alon; Sandbank, Ben; Hilgetag, Claus C.; Meilijson, Isaac and Ruppin, Eytan (2006). Axiomatic scalable neurocontroller analysis via the Shapley value. Artificial Life 12(3): 333-352. doi:10.1162/artl.2006.12.3.333.
  • Lehman, Joel and Stanley, Kenneth O. (2011). Abandoning objectives: Evolution through the search for novelty alone. Evolutionary Computation 19(2): 189-223. doi:10.1162/evco_a_00025.
  • Lipson, Hod and Pollack, Jordan B. (2000). Automatic design and manufacture of robotic lifeforms. Nature 406: 974-978.
  • Lucas, Simon M. (2005). Evolving a neural network location evaluator to play Ms. Pac-Man. In: IEEE Symposium on Computational Intelligence and Games, pp. 203-210.
  • Moriarty, David E. and Miikkulainen, Risto (1995). Discovering complex Othello strategies through evolutionary neural networks. Connection Science 7(3): 195-210. doi:10.1080/09540099550039228.
  • Mouret, Jean-Baptiste and Doncieux, Stéphane (2012). Encouraging behavioral diversity in evolutionary robotics: An empirical study. Evolutionary Computation 20(1): 91-133. doi:10.1162/evco_a_00048.
  • Nolfi, Stefano and Floreano, Dario (2000). Evolutionary Robotics. MIT Press, Cambridge, MA.
  • Potter, Mitchell A. and De Jong, Kenneth A. (2000). Cooperative coevolution: An architecture for evolving coadapted subcomponents. Evolutionary Computation 8(1): 1-29. doi:10.1162/106365600568086.
  • Rawal, Aditya; Rajagopalan, Padmini and Miikkulainen, Risto (2010). Constructing competitive and cooperative agent behavior using coevolution. In: IEEE Conference on Computational Intelligence in Games.
  • Risi, Sebastian; Lehman, Joel; D'Ambrosio, David; Hall, Ryan and Stanley, Kenneth O. (2012). Combining search-based procedural content generation and social gaming in the Petalz video game. In: Artificial Intelligence and Interactive Digital Entertainment Conference.
  • Schaffer, J. David; Whitley, Darrell and Eshelman, Larry J. (1992). Combinations of genetic algorithms and neural networks. In: COGANN-92, pp. 1-37.
  • Stanley, Kenneth O. and Miikkulainen, Risto (2004). Competitive coevolution through evolutionary complexification. Journal of Artificial Intelligence Research 21: 63-100.
  • Stanley, Kenneth O.; Bryant, Bobby D. and Miikkulainen, Risto (2005). Real-time neuroevolution in the NERO video game. IEEE Transactions on Evolutionary Computation 9(6): 653-668. doi:10.1109/tevc.2005.856210.
  • Stanley, Kenneth O.; D'Ambrosio, David B. and Gauci, Jason (2009). A hypercube-based encoding for evolving large-scale neural networks. Artificial Life 15(2): 185-212. doi:10.1162/artl.2009.15.2.15202.
  • Togelius, Julian; Lucas, Simon and Nardi, Renzo (2007). Computational intelligence in racing games. In: Advanced Intelligent Paradigms in Computer Games, pp. 39-69. Springer.
  • Togelius, Julian; Yannakakis, Georgios N.; Stanley, Kenneth O. and Browne, Cameron (2011). Search-based procedural content generation: A taxonomy and survey. IEEE Transactions on Computational Intelligence and AI in Games 3(3): 172-186. doi:10.1109/tciaig.2011.2148116.
  • Van Hoorn, Niels; Togelius, Julian; Wierstra, Daan and Schmidhuber, Jürgen (2009). Robust player imitation using multiobjective evolution. In: IEEE Congress on Evolutionary Computation, pp. 652-659.
  • Valsalam, Vinod K.; Hiller, Jonathan; MacCurdy, Robert; Lipson, Hod and Miikkulainen, Risto (2012). Constructing controllers for physical multilegged robots using the ENSO neuroevolution approach. Evolutionary Intelligence 5(1): 1-12. doi:10.1007/s12065-011-0067-y.
  • Werner, Gregory M. and Dyer, Michael G. (1990). Evolution of communication in artificial organisms. In: Artificial Life II, pp. 549-587.
  • Whiteson, Shimon and Stone, Peter (2006). Evolutionary function approximation for reinforcement learning. Journal of Machine Learning Research 7: 877-917.
  • Yao, Xin (1999). Evolving artificial neural networks. Proceedings of the IEEE 87(9): 1423-1447. doi:10.1109/5.784219.

See Also

Algorithm, Brain, Evolution, Evolutionary Computation, Neural networks, Optimization, Recurrent neural networks, Reinforcement learning
