Jeff Krichmar (2008), Scholarpedia, 3(3):1365. doi:10.4249/scholarpedia.1365
Figure 1: A brain-based device (BBD) with a simulated cerebellum for predictive motor control. The device is built on the Segway Robotic Mobility Platform. Its task was to traverse, without collisions, a curved course outlined by orange traffic cones spaced a few inches apart. Initially, collisions or near collisions with the cones generated a reflexive movement away from the obstacle and a reflexive braking response. These reflex commands also served as error signals to the cerebellar model via simulated climbing fiber inputs. Success in this task required the BBD’s cerebellum to associate predictive visual motion cues, which came from optic flow generated by self-movement, with the correct movements to avoid collisions with the cone boundaries. Adapted from (McKinstry et al., 2006).

Neurorobots (Seth et al., 2005) are robotic devices whose control systems are based on principles of the nervous system. These models operate on the premise that the “brain is embodied and the body is embedded in the environment”; neurorobots are therefore grounded and situated in a real environment. The real environment is required for two reasons. First, simulating an environment can introduce unwanted and unintentional biases into the model. For example, a computer-generated object presented to a vision model has its shape and segmentation defined by the modeler and presented directly to the model, whereas a device that views an object hanging on a wall has to discern the shape, and segment figure from ground, through its own active vision. Second, real environments are rich, multimodal, and noisy; an artificial design of such an environment would be computationally intensive and difficult to simulate. All these interesting features of the environment come for “free” when a neurorobot is placed in the real world.

The field of neurorobotics started in the late 1980s. Kawato and colleagues built a series of robotic devices to test how the cerebellum adapts movements (Kawato and Gomi, 1992; Gomi and Kawato, 1992; Miyamoto et al., 1988). Gerald Edelman's group tested the Theory of Neuronal Group Selection (Edelman, 1987) by introducing the Darwin series of automata (Reeke et al., 1993). Since then, the number of neuroroboticists has expanded into a full community of researchers studying a wide range of neuroscience topics.

A neurorobot has the following properties:

  1. It engages in a behavioral task.
  2. It is situated in a real-world environment.
  3. It has a means to sense environmental cues and act upon its environment.
  4. Its behavior is controlled by a simulated nervous system having a design that reflects, at some level, the brain’s architecture and dynamics.

As a result of these properties, neurorobotic models provide heuristics for developing and testing theories of brain function in the context of phenotypic and environmental interactions. Also, neurorobotic models may provide a foundation for the development of more effective robots, based on an improved understanding of the biological bases of adaptive behavior.


Classes of neurorobotic models

There are too many examples of neurobiologically inspired robotic devices to exhaustively list in this brief review. However, the approach has been applied to several distinct areas of neuroscience research:

  1. Motor control and locomotion
  2. Learning and memory systems
  3. Value systems and action selection

The remainder of this article will briefly touch on a few representative examples; the interested reader should refer to the cited references for more detail.

Motor control and locomotion

Neurorobots have proved useful for investigating animal locomotion and motor control, and for designing robot controllers. Neural models of central pattern generators, pools of motor neurons that drive a repetitive behavior, have been used to control locomotion in robots (Ijspeert et al., 2007; Kimura et al., 2007; Lewis et al., 2005). Kimura and colleagues have shown how neurorobotics can provide a bridge between neuroscience and biomechanics by demonstrating emergent four-legged locomotion based on central pattern generator mechanisms modulated by reflexes. Their group developed a model of a learnable pattern generator and demonstrated its viability using a series of synthetic and humanoid robotic examples. Ijspeert and colleagues constructed an amphibious salamander-like robot that is capable of both swimming and walking, and that therefore represents a key stage in the evolution of vertebrate legged locomotion. A neurorobotic implementation was found necessary for (1) testing whether the models could produce locomotion both in water and on the ground and (2) investigating how sensory feedback affects dynamic pattern generation.
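The oscillator chains underlying such controllers can be sketched compactly. The following is a minimal illustration, not a reconstruction of any cited controller: a chain of coupled phase oscillators whose nearest-neighbor coupling locks adjacent segments at a fixed phase lag, producing the travelling wave characteristic of swimming gaits (all function names, parameters, and values are hypothetical):

```python
import math

def simulate_cpg_chain(n=8, steps=2000, dt=0.01,
                       freq=1.0, coupling=4.0,
                       lag=2 * math.pi / 8):
    """Chain of coupled phase oscillators, a common CPG abstraction.

    Each oscillator's phase advances at a base frequency while
    nearest-neighbor coupling pulls adjacent oscillators toward a
    fixed phase lag, yielding a travelling wave along the chain.
    """
    theta = [0.0] * n
    for _ in range(steps):
        new = list(theta)
        for i in range(n):
            d = 2 * math.pi * freq              # intrinsic rhythm
            if i > 0:                           # pull toward (previous - lag)
                d += coupling * math.sin(theta[i - 1] - theta[i] - lag)
            if i < n - 1:                       # pull toward (next + lag)
                d += coupling * math.sin(theta[i + 1] - theta[i] + lag)
            new[i] = theta[i] + d * dt
        theta = new
    return theta

# A robot segment's joint angle would then be read out as
# amplitude * sin(theta[i]) for each oscillator i.
```

With the default parameters the chain settles into a steady wave in which each oscillator lags its predecessor by `lag` radians; sensory feedback, as in the salamander robot, would enter as additional terms in the phase equations.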

An intriguing neural inspiration for the design of robot controllers is the mirror neuron system found in primates. Mirror neurons in the premotor cortex are active both when a monkey grasps or manipulates objects and when it watches another animal performing similar actions (Rizzolatti and Arbib, 1998). Neuroroboticists, drawing on this notion of mirror neurons, have suggested that complex movements such as reaching and locomotion may be achieved through imitation (Billard and Mataric, 2001; Schaal, 1999; Schaal et al., 2003; Schaal and Schweighofer, 2005; Tani et al., 2004).

Figure 2: Darwin XI, a brain-based device with a simulated hippocampus and its surrounding regions. Darwin XI is pictured at the choice point of its plus-maze environment. Darwin XI began each trial alternately at the East or West start arm, and used its artificial whiskers to follow the maze arm until it reached the choice point. As it followed the maze wall, its whiskers sensed patterns of pegs, its camera sensed color cue cards on the perimeter, its compass provided heading, and its laser provided range information. At the beginning of training, Darwin XI was given a rewarding stimulus when it chose the South goal arm. After it successfully learned that task, the rewarding stimulus was switched to the North goal arm. Adapted from (Fleischer et al., 2007).

Another strategy for motor control in neurally inspired robots is to use a predictive controller to convert awkward, error-prone movements into smooth, accurate movements. Recent theories of motor control suggest that the cerebellum learns to replace primitive reflexes with predictive motor signals. The idea is that the outcomes of reflexive motor commands provide error signals for a predictive controller, which then learns to produce a correct motor control signal before the less adaptive reflex response is triggered. Neurally inspired models have used these ideas in the design of robots that learn to avoid obstacles (McKinstry et al., 2006; Porr and Worgotter, 2003), produce accurate eye movements (Dean et al., 1991), and generate adaptive arm movements (Dean et al., 1991; Eskiizmirliler et al., 2002; Hofstotter et al., 2002). Figure 1 shows a brain-based device, containing a model of the cerebellum and cortical area MT, which learned to predict collisions based on visual motion cues and adapted its movements accordingly.
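The reflex-as-teacher idea can be illustrated with a toy model. The sketch below is not the cited cerebellar model; it assumes a one-dimensional agent, a looming cue proportional to 1/distance, and a single predictive weight trained by the reflex output (a stand-in for the climbing-fiber error signal):

```python
def run_avoidance_trials(n_trials=20, lr=0.1):
    """Toy feedback-error learning: the reflex output doubles as the
    teaching signal for a predictive controller.

    The agent approaches an obstacle from x = 5; the looming cue grows
    as 1/x; a hard-wired reflex turns it away when x <= 0.5. The reflex
    trains a single weight w so that the cue alone eventually triggers
    the turn earlier (all quantities hypothetical).
    """
    w = 0.0
    reflexes = []
    for _ in range(n_trials):
        x, fired = 5.0, 0
        while x > 0.2:
            cue = 1.0 / x              # visual looming signal
            if w * cue > 1.0:          # learned predictive turn: avoid early
                break
            x -= 0.2                   # keep approaching the obstacle
            if x <= 0.5:               # reflex zone: near collision
                fired = 1
                w += lr * cue          # reflex acts as the error signal
                break
        reflexes.append(fired)
    return reflexes, w
```

In early trials the reflex fires on every approach; after a few trials the learned predictive command takes over and the reflex is no longer triggered, mirroring the shift from reflexive to predictive control described above.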

Learning and memory systems

A major theme in neurorobotics is neurally inspired models of learning and memory. One area of particular interest is navigation systems based on the rodent hippocampus. Rats have exquisite navigation capabilities both in the light and in the dark. Moreover, the finding of place cells in the rodent hippocampus, which fire specifically at particular spatial locations, has been of theoretical interest for models of memory and route planning (O'Keefe and Nadel, 1978). Robots with models of hippocampal place cells have been shown to be viable for navigation in mazes and environments similar to those used in rat spatial memory studies (Arleo and Gerstner, 2000; Burgess et al., 1997; Mataric, 1991; Milford et al., 2004). Recently, large-scale systems-level models of the hippocampus and its surrounding regions have been embedded in robots to investigate the role of these regions in the acquisition and recall of episodic memory (Banquet et al., 2005; Fleischer et al., 2007; Krichmar et al., 2005). Figure 2 shows a brain-based device in a plus maze that developed episodic-like responses in its simulated hippocampus.
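A minimal sketch of the place-cell idea, assuming Gaussian spatial tuning curves and a simple population-vector readout (the grid layout and parameters are illustrative, not taken from any cited model):

```python
import math

def place_cell_rates(pos, centers, sigma=0.3):
    """Firing rates of simulated place cells: each cell fires maximally
    when the agent is at its preferred location (Gaussian tuning)."""
    return [math.exp(-((pos[0] - cx) ** 2 + (pos[1] - cy) ** 2)
                     / (2 * sigma ** 2))
            for cx, cy in centers]

def decode_position(rates, centers):
    """Population-vector readout: rate-weighted average of the cells'
    preferred locations."""
    total = sum(rates)
    x = sum(r * cx for r, (cx, _) in zip(rates, centers)) / total
    y = sum(r * cy for r, (_, cy) in zip(rates, centers)) / total
    return x, y

# A 5 x 5 grid of place fields tiling a unit arena (hypothetical layout)
centers = [(i / 4, j / 4) for i in range(5) for j in range(5)]
rates = place_cell_rates((0.5, 0.5), centers)
print(decode_position(rates, centers))  # close to (0.5, 0.5)
```

Robotic navigation systems built on place cells typically go further, driving such a population from vision and odometry and coupling it to route planning, but the readout principle is the same.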

Figure 3: Darwin VIII views objects on two of the walls of an arena. When Darwin VIII breaks the beam from an IR emitter to an IR sensor, a tone is emitted from a speaker on the side of the red diamond. The tone triggers Darwin VIII’s value system and causes it to associate value with the red diamond. Adapted from (Seth et al., 2004b).
Figure 4: Snapshot of Darwin VIII’s neuronal unit activity during a behavioral experiment. Each pixel in the neural areas represents a neuronal unit; the activity is normalized from no activity (black) to maximum activity (bright colors), and the phase (i.e. timing of activity) is indicated by the color of the pixel. The neuronal units responding to the attributes of the red diamond share a common phase (red-orange color), whereas the neuronal units responding to the green diamond share a different phase (blue-green color). Adapted from (Seth et al., 2004b).

Another learning and memory property that is important to the development of neurorobotics is the ability to organize the unlabeled signals that robots receive from the environment into categories. This organization of signals, which in general depends on a combination of sensory modalities (e.g. vision, sound, taste, or touch), is called perceptual categorization. Several neurorobots have been constructed that build up such categories, without instruction, by combining auditory, tactile, taste, and visual cues from the environment (Krichmar and Edelman, 2002; Seth et al., 2004a; Seth et al., 2004b). Figures 3 and 4 show a brain-based device that developed categories for the objects it observed and solved the visual binding problem through synchronous activity in its simulated ventral visual stream. These categories emerged from the device’s experience exploring its environment.
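One simple way to illustrate instruction-free category formation is online prototype clustering of combined sensory features. The sketch below illustrates the general idea, not the mechanism of the cited devices; the feature values, threshold, and learning rate are hypothetical:

```python
def categorize(samples, threshold=1.0, lr=0.2):
    """Online, unsupervised category formation: each multimodal sample
    is assigned to the nearest prototype, or founds a new category when
    no prototype is close enough. Winning prototypes drift toward
    their members."""
    prototypes, labels = [], []
    for s in samples:
        k = None
        if prototypes:
            dists = [sum((a - p) ** 2 for a, p in zip(s, proto)) ** 0.5
                     for proto in prototypes]
            k = min(range(len(dists)), key=dists.__getitem__)
        if k is None or dists[k] > threshold:
            prototypes.append(list(s))        # no close prototype: new category
            labels.append(len(prototypes) - 1)
        else:
            prototypes[k] = [p + lr * (a - p)  # drift winner toward sample
                             for p, a in zip(prototypes[k], s)]
            labels.append(k)
    return labels, prototypes
```

Fed vectors that concatenate, say, a visual feature with an auditory one, the procedure separates consistently co-occurring cue combinations into distinct categories without any labels, loosely analogous to the experience-driven categories described above.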

Value systems and action selection

Figure 5: Darwin VII, a brain-based device that consists of a mobile base, a CCD camera, two microphones on either side of the camera, and sensors embedded in a gripper that measure the surface conductivity of the metal blocks it manipulates. These sensory signals provide input to the neuronal simulation. In this experiment, the striped blocks have “good” taste (highly conductive) and the spotted blocks have “bad” taste (weakly conductive). The blocks also emitted tones as auditory cues. Adapted from (Krichmar and Edelman, 2002).

Biological organisms adapt their behavior through value systems, which provide nonspecific, modulatory signals to the rest of the brain that bias the outcome of local changes in synaptic efficacy in the direction needed to satisfy global needs. Examples of value systems in the brain include the dopaminergic, cholinergic, and noradrenergic systems (Aston-Jones and Bloom, 1981; Hasselmo et al., 2002; Schultz et al., 1997). Behavior that evokes positive responses in value systems biases synaptic change to make production of the same behavior more likely when the situation in the environment (and thus the local synaptic inputs) is similar; behavior that evokes negative value biases synaptic change in the opposite direction. The dopamine system and its role in shaping decision making has been explored in neurorobots and brain-based devices (Arleo et al., 2004; Krichmar and Edelman, 2002; Sporns and Alexander, 2002). Figure 5 shows a brain-based device that learned to associate a neutral stimulus (i.e. a visual category) with an innate value (i.e. the conductivity of metal blocks). Doya’s group has been investigating the effect of multiple neuromodulators in the “Cyber-rodents”, two-wheeled robots that move autonomously in an environment (Doya and Uchibe, 2005). These robots have drives for self-preservation and self-reproduction, exemplified by searching for and recharging from battery packs on the floor and then communicating this information to nearby robots through their infrared communication ports. In addition to examining how neuromodulators such as dopamine can influence decision making, neuroroboticists have been investigating the basal ganglia as a structure that mediates action selection (Prescott et al., 2006). Prescott and colleagues embedded a model of the basal ganglia in a robot that had to select among several actions depending on the environmental context.
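The value-gated plasticity described above can be sketched in a few lines. The following toy model is loosely inspired by the Darwin VII conditioning task but is not its implementation; all signals and constants are hypothetical:

```python
def conditioned_weight(trials=10, lr=0.5, paired=True):
    """Value-gated Hebbian learning sketch.

    A 'taste' input (block conductivity) drives an innate value signal.
    When a visual cue is active on the same trials, the value signal
    gates a Hebbian update on the cue's weight, so the cue alone comes
    to drive the approach response.
    """
    w = 0.0
    for _ in range(trials):
        cue = 1.0                            # visual cue present every trial
        taste = 1.0 if paired else 0.0       # conductivity sensed on contact
        value = taste                        # innate value-system response
        w += lr * value * cue * (1.0 - w)    # value-gated, bounded Hebbian update
    return w
```

After paired training the cue alone evokes a strong response, whereas without value-system activation no learning occurs, capturing the nonspecific, modulatory role value systems play in biasing synaptic change.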


Conclusion

Higher brain functions depend on the cooperative activity of an entire nervous system, reflecting its morphology, its dynamics, and its interaction with the environment. Neurorobots are designed to incorporate these attributes so that they can test theories of brain function. The behavior of neurorobots and the activity of their simulated nervous systems allow comparisons with experimental data acquired from animals at the behavioral, systems, and neuronal levels. These comparisons serve two purposes. First, neurorobots can generate hypotheses and test theories of brain function: constructing a complete behaving model forces the designer to specify theoretical and implementation details that are easy to overlook in an ungrounded or disembodied theoretical model, and forces those details to be consistent with one another. Second, by using the animal nervous system as a metric, neurorobot designers can bring their simulated nervous systems and the resulting behavior ever closer to those of the model animal. This, in turn, may allow the eventual creation of practical devices that approach the sophistication of living organisms.


References

  • Arleo, A., and Gerstner, W. (2000). Modeling rodent head-direction cells and place cells for spatial learning in bio-mimetic robotics. Paper presented at: From Animals to Animats 6: Proceedings of the Sixth International Conference on Simulation of Adaptive Behavior (Paris, France: MIT Press).
  • Arleo, A., Smeraldi, F., and Gerstner, W. (2004). Cognitive navigation based on nonuniform Gabor space sampling, unsupervised growing networks, and reinforcement learning. IEEE Trans Neural Netw 15, 639-652.
  • Aston-Jones, G., and Bloom, F. E. (1981). Norepinephrine-containing locus coeruleus neurons in behaving rats exhibit pronounced responses to non-noxious environmental stimuli. J Neurosci 1, 887-900.
  • Banquet, J. P., Gaussier, P., Quoy, M., Revel, A., and Burnod, Y. (2005). A hierarchy of associations in hippocampo-cortical systems: cognitive maps and navigation strategies. Neural Comput 17, 1339-1384.
  • Billard, A., and Mataric, M. J. (2001). Learning human arm movements by imitation: Evaluation of a biologically inspired connectionist architecture. Robotics and Autonomous Systems 37, 145-160.
  • Burgess, N., Donnett, J. G., Jeffery, K. J., and O'Keefe, J. (1997). Robotic and Neural Simulation of the Hippocampus and Rat Navigation. Biological Science 352, 1535-1543.
  • Dean, P., Mayhew, J. E., Thacker, N., and Langdon, P. M. (1991). Saccade control in a simulated robot camera-head system: neural net architectures for efficient learning of inverse kinematics. Biol Cybern 66, 27-36.
  • Doya, K., and Uchibe, E. (2005). The Cyber Rodent Project: Exploration of Adaptive Mechanisms for Self-Preservation and Self-Reproduction. Adaptive Behavior 13, 149 - 160.
  • Edelman, G.M. (1987) Neural Darwinism: The Theory of Neuronal Group Selection. New York: Basic Books.
  • Edelman G.M., Reeke G.N., Gall W.E., Tononi G., Williams D., Sporns O. (1992) Synthetic neural modeling applied to a real-world artifact. Proc Natl Acad Sci USA 89(15):7267-71.
  • Eskiizmirliler, S., Forestier, N., Tondu, B., and Darlot, C. (2002). A model of the cerebellar pathways applied to the control of a single-joint robot arm actuated by McKibben artificial muscles. Biol Cybern 86, 379-394.
  • Fleischer, J. G., Gally, J. A., Edelman, G. M., and Krichmar, J. L. (2007). Retrospective and prospective responses arising in a modeled hippocampus during maze navigation by a brain-based device. Proc Natl Acad Sci U S A 104, 3556-3561.
  • Gomi H, Kawato M. (1992) A computational model of four regions of the cerebellum based on feedback-error learning. Biological Cybernetics 68(2):105-114.
  • Hasselmo, M. E., Hay, J., Ilyn, M., and Gorchetchnikov, A. (2002). Neuromodulation, theta rhythm and rat spatial navigation. Neural Netw 15, 689-707.
  • Hofstotter, C., Mintz, M., and Verschure, P. F. (2002). The cerebellum in action: a simulation and robotics study. Eur J Neurosci 16, 1361-1376.
  • Ijspeert, A. J., Crespi, A., Ryczko, D., and Cabelguen, J. M. (2007). From swimming to walking with a salamander robot driven by a spinal cord model. Science 315, 1416-1420.
  • Kawato M, Gomi H. (1992) A computational model of four regions of the cerebellum based on feedback-error learning. Biological Cybernetics 68(2):95-103.
  • Kimura, H., Fukuoka, Y., and Cohen, A. H. (2007). Biologically inspired adaptive walking of a quadruped robot. Philos Transact A Math Phys Eng Sci 365, 153-170.
  • Krichmar, J. L., and Edelman, G. M. (2002). Machine Psychology: Autonomous Behavior, Perceptual Categorization, and Conditioning in a Brain-Based Device. Cerebral Cortex 12, 818-830.
  • Krichmar, J. L., Seth, A. K., Nitz, D. A., Fleischer, J. G., and Edelman, G. M. (2005). Spatial navigation and causal analysis in a brain-based device modeling cortical-hippocampal interactions. Neuroinformatics 3, 197-221.
  • Lewis, M., Tenore, F., and Etienne-Cummings, R. (2005). CPG Design using Inhibitory Networks, Paper presented at: IEEE Conference on Robotics and Automation (Barcelona).
  • Mataric, M. J. (1991). Navigating with a rat brain: A neurobiologically-inspired model for robot spatial representation. In From animals to animats, J. Arcady Meyer, and S. W. Wilson, eds. (Cambridge, MA: MIT Press), pp. 169-175.
  • McKinstry, J. L., Edelman, G. M., and Krichmar, J. L. (2006). A cerebellar model for predictive motor control tested in a brain-based device. Proc Natl Acad Sci U S A 103, 3387-3392.
  • Milford, M. J., Wyeth, G. F., and Prasser, D. (2004). RatSLAM: A Hippocampal Model for Simultaneous Localization and Mapping, Paper presented at: Proceedings of the 2004 IEEE International Conference on Robotics & Automation (New Orleans, LA).
  • Miyamoto, H., Kawato, M., Setoyama, T., and Suzuki, R. (1988). Feedback-error-learning neural network for trajectory control of a robotic manipulator. Neural Networks 1, 251-265.
  • O'Keefe, J., and Nadel, L. (1978). The hippocampus as a cognitive map (Oxford: Clarendon Press).
  • Porr, B., and Worgotter, F. (2003). Isotropic sequence order learning. Neural Comput 15, 831-864.
  • Prescott, T. J., Montes Gonzalez, F. M., Gurney, K., Humphries, M. D., and Redgrave, P. (2006). A robot model of the basal ganglia: behavior and intrinsic processing. Neural Netw 19, 31-61.
  • Reeke, G.N., Jr., O. Sporns, W.E. Gall, G. Tononi, and G.M. Edelman (1993) A biologically based synthetic nervous system for a real-world device. In Artificial Neural Networks for Speech and Vision, R.J. Mammone, ed., pp. 457-473, Chapman & Hall, London.
  • Rizzolatti, G., and Arbib, M. A. (1998). Language within our grasp. Trends Neurosci 21, 188-194.
  • Schaal, S. (1999). Is imitation learning the route to humanoid robots? Trends Cogn Sci 3, 233-242.
  • Schaal, S., Ijspeert, A., and Billard, A. (2003). Computational approaches to motor learning by imitation. Philos Trans R Soc Lond B Biol Sci 358, 537-547.
  • Schaal, S., and Schweighofer, N. (2005). Computational motor control in humans and robots. Curr Opin Neurobiol 15, 675-682.
  • Schultz, W., Dayan, P., and Montague, P. R. (1997). A neural substrate of prediction and reward. Science 275, 1593-1599.
  • Seth, A. K., McKinstry, J. L., Edelman, G. M., and Krichmar, J. L. (2004a). Spatiotemporal processing of whisker input supports texture discrimination by a brain-based device, In Animals to Animats 8: Proceedings of the Eighth International Conference on the Simulation of Adaptive Behavior, S. Schaal, A. Ijspeert, A. Billard, S. Vijayakumar, J. Hallam, and J. A. Meyer, eds. (Cambridge, MA: The MIT Press), pp. 130-139.
  • Seth, A. K., McKinstry, J. L., Edelman, G. M., and Krichmar, J. L. (2004b). Visual Binding Through Reentrant Connectivity and Dynamic Synchronization in a Brain-based Device. Cereb Cortex 14, 1185-1199.
  • Seth, A. K., Sporns, O., and Krichmar, J. L. (2005). Neurorobotic models in neuroscience and neuroinformatics. Neuroinformatics 3, 167-170.
  • Sporns, O., and Alexander, W. H. (2002). Neuromodulation and plasticity in an autonomous robot. Neural Netw 15, 761-774.
  • Tani, J., Ito, M., and Sugita, Y. (2004). Self-organization of distributedly represented multiple behavior schemata in a mirror system: reviews of robot experiments using RNNPB. Neural Netw 17, 1273-1289.

See also

Brain, Hippocampus, Reinforcement Learning, Robotics
