Models of mirror system

From Scholarpedia
Erhan Oztop (2007), Scholarpedia, 2(10):3276. doi:10.4249/scholarpedia.3276 revision #87439

Curator: Erhan Oztop

The models addressed here are computational models that go beyond mere descriptions or sketches of the mirror neuron system and/or mirror neurons. A computational model allows the systematic study of the complex system that it models, which yields nontrivial predictions about the system. Often such models are experimented with computer simulations where the model parameters can be changed conveniently.

Experimental data on mirror neurons is available only for monkeys, since electrophysiology, to a large extent, cannot be used to investigate the human cerebral cortex. Consequently, human data address only 'mirror regions': the brain sites (collectively called the mirror system) that become active in imaging studies during both action observation and action execution. Although brain imaging results suggest that the mirror system may be associated with imitation and language (Iacoboni et al. 1999; Fadiga et al. 2002; Carr et al. 2003; Skipper et al. 2005), in monkeys the mirror neuron response is limited to transitive actions and thus can support only a rudimentary form of imitation, sometimes referred to as emulation or stimulus facilitation. Therefore, models that assume a direct link between imitation and mirror neurons should be treated cautiously, because to a large extent they do not account for the additional brain circuitry that is needed, or has evolved, on top of the monkey-like mirror system (Oztop et al. 2006).

Auto-associative Memory Hypothesis of Mirror Neurons

Neural network implementations inspired by Hebbian synaptic plasticity lead to connectionist architectures referred to as auto-associative or content-addressable memories (e.g. the Hopfield network). The crucial feature of an auto-associative memory is that a partial representation of a stored pattern can be used to reconstruct the whole. Figure 1 shows a possible association that can be established when a biological or an artificial agent acts. The association can take place among the motor code and the somatosensory, vestibular, auditory and visual stimuli sensed when the movement is executed. It can be hypothesized that mirror neurons are part of a similar mechanism: when the organism generates motor commands, the representation of each command and the sensed (somatosensory, visual and auditory) effects of the command are associated within the mirror neuron system. Then, at a later time, when the system is presented with a stimulus that partially matches one of the stored patterns (i.e. the vision or audition of an action alone), the associated motor command representation is retrieved automatically. This representation can be used (with additional circuitry) to mimic the observed movement.
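The pattern-completion property described above can be illustrated with a minimal Hopfield-style sketch. The pattern sizes, the single stored memory and the modality split below are illustrative assumptions, not part of any published model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical multimodal memory item: motor, visual and auditory codes
# concatenated into one +/-1 pattern (sizes are illustrative).
motor = rng.choice([-1, 1], size=20)
visual = rng.choice([-1, 1], size=20)
audio = rng.choice([-1, 1], size=20)
pattern = np.concatenate([motor, visual, audio]).astype(float)
n = pattern.size

# Hebbian (outer-product) storage with zero self-connections.
W = np.outer(pattern, pattern)
np.fill_diagonal(W, 0.0)

# Partial cue: only the visual segment is presented (e.g. action observation).
cue = np.zeros(n)
cue[20:40] = visual

# Recall dynamics complete the pattern from the partial cue.
state = cue.copy()
for _ in range(10):
    state = np.sign(W @ state)
    state[state == 0] = 1.0

recovered_motor = state[:20]
print(np.array_equal(recovered_motor, motor))  # -> True: motor code retrieved
```

With only the visual segment as a cue, the recall dynamics reconstruct the full multimodal pattern, including the motor code that would produce the observed percept.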

Experimental evidence supports such an associative learning mechanism. For example, the human mirror system can be made to respond oppositely to the observation of an action by training subjects to perform one action while observing another (unlike the case where one observes one's own actions) (Catmur et al. 2007). Normally, the event-related muscle-specific responses to transcranial magnetic stimulation of the motor cortex during observation of little and index finger movements are attributed to mirror system function. After training, however, this normal mirror effect is reversed (Catmur et al. 2007), suggesting that the mirror system is adapted through the co-occurrence of the motor code representation and the visual representation of the observed action in the cerebral cortex. Complementary to this, monkey neurophysiology points to a neurally plausible model of the mirror neuron system involving STS (the superior temporal sulcus, which contains neurons lacking motor response but otherwise similar to mirror neurons), PF (a part of the inferior parietal cortex that contains mirror-like neurons) and F5 (the ventral premotor area where mirror neurons were originally found), which operates in a Hebbian learning framework (Keysers and Perrett 2004).

Figure 1: A generic auto-associative memory for an agent (biological or artificial). When the agent moves, the motor code yielding the movement and the sensed stimuli are stored together in the auto-associative memory. At a later time, a partial representation of the associated stimuli (e.g. vision) retrieves the whole memory item, including the motor code, which when executed would yield the presented percept.


This line of thought has been explored through robotic implementations of imitation using a range of associative memory architectures. Elshaw et al. (2004) implemented an associator network based on the Helmholtz machine (Dayan et al. 1995) in which motor action codes were associated with vision and language representations. The learned association enabled the hidden layer of the network to behave like mirror neurons; the hidden units could become active with any one of the motor, vision or language inputs. Kuniyoshi et al. (2003) used a spatiotemporal associative memory called the 'non-monotone neural net' (Morita 1996) to associate the self-generated arm movements of a robot with the local visual flow they generated. Billard and Mataric (2001) used the DRAMA architecture (Billard and Hayes 1999), a time-delay recurrent neural network with Hebbian update dynamics. Oztop et al. (2005a) used an extension of the Hopfield network utilizing product terms to implement a hand posture imitation system on a robotic hand. In spite of the differences in implementation, the common property of the aforementioned associative memory models is the multimodal activation of the associative memory/network units. Thus, when these models are considered as models of mirror neurons (note that not all of them claim to be), the explanation of the existence of mirror neurons becomes phenomenological rather than functional.

Modular Motor Learning and Imitation

The Modular Selection and Identification for Control (MOSAIC) model (Wolpert and Kawato 1998; Haruno et al. 2001) is a learning controller based on decentralized, automatic module selection aimed at achieving the best control for the task at hand. The key ingredients of MOSAIC are modularity and the distributed cooperation and competition of the internal models. The basic functional units of the model are multiple predictor-controller (forward-inverse model) pairs, where each pair competes to contribute to the overall control. The controllers with better predicting forward models (i.e., with higher responsibility signals) become more influential in the overall control. At a given instant, the responsibility signal for a given forward-inverse model pair represents the likelihood of the inverse model being effective in control at that instant. The likelihood is estimated using the forward prediction error.
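The competition between modules can be sketched in a few lines. The toy dynamics (two modules whose forward models assume different object masses) and the Gaussian likelihood width are illustrative assumptions; MOSAIC itself uses learned forward and inverse models:

```python
import numpy as np

def responsibilities(pred_errors, sigma=0.05):
    # Likelihood of each module given its forward-model prediction error,
    # normalized over modules (soft-max competition, as in MOSAIC).
    likelihood = np.exp(-np.square(pred_errors) / (2 * sigma ** 2))
    return likelihood / likelihood.sum()

# Two hypothetical modules whose forward models assume different masses.
masses = np.array([1.0, 3.0])
true_mass = 1.0                       # the object actually being manipulated
force, dt, velocity = 2.0, 0.1, 0.0

# Each forward model predicts the next velocity; compare with the observation.
predicted = velocity + (force / masses) * dt
observed = velocity + (force / true_mass) * dt
r = responsibilities(predicted - observed)

print(r)  # the first module predicts best and receives most responsibility
```

The module whose forward model matches the observed dynamics obtains the highest responsibility and would dominate the blended motor command.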

The MOSAIC model can be utilized for imitation and action recognition. This dual use of the model establishes some parallels between the model and the mirror neuron system. The realization of imitation (and action recognition) with MOSAIC requires three stages: (1) the visual information of the actor's movement is converted into variables akin to the state (e.g. joint angles), which are fed to the imitator's MOSAIC as the desired state; (2) each controller generates the motor command required to achieve the observed trajectory. In this "observation mode", the outputs of the controllers are not used for movement generation but serve as input to the predictors paired with the controllers, so that the likely next states (of the observer) become available as the outputs of the forward predictions; (3) these predictions are compared with the demonstrator's actual next state to provide prediction errors that indicate, via responsibility signals, which of the observer's controller modules must be active in order to replicate the observed movement (Wolpert et al. 2003). Therefore, the outputs of the predictors, or the responsibility signals, might be considered analogous to mirror neuron activity.

There are similar modular approaches to imitation and control. Demiris and colleagues (Demiris and Hayes 2002; Demiris and Johnson 2003) proposed a similar architecture in which the forward and inverse modules considered were at a higher level; an inverse model, for example, could encompass a full behavior (e.g. 'picking up the mug'). The architecture can be related to mirror neurons because the behavior modules are active during both movement generation and observation. The confidence values of the modules (akin to the responsibility signals of the MOSAIC model) can be envisioned as analogous to mirror neuron responses. These studies arrived at several predictions about mirror neurons, albeit treating 'imitation ability' and 'mirror neuron activity' as interchangeable.

An Evolutionary Approach

Evolutionary algorithms incorporate aspects of natural selection to solve an optimization problem. An evolutionary algorithm maintains a population of individuals that evolves according to rules of selection, recombination, mutation and survival. The optimization problem is defined by a shared environment, which in turn determines the fitness of each individual. The individuals are simulated to form generations; fitter individuals and their variants survive, while the least fit individuals are eliminated. After many generations, the set of high-performing individuals is taken as representing close-to-optimal solutions to the original problem. Within this evolutionary framework, Borenstein and Ruppin (2005) defined individuals as simple neuro-controllers that could sense the state of the world and the action of a teaching agent (inputs) and generate actions (outputs). Each individual generated its output with a simple 1-hidden-layer feedforward neural network. A fixed random mapping from world states to actions defined the optimal behavior. Consequently, individuals that proved more adept at learning increased the chance that their traits would survive. The genetic code of the individuals determined the properties of the network connections: the type of learning, the initial strength, whether a connection was inhibitory or excitatory, and the rate of plasticity.
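The selection-mutation loop can be sketched as follows; the linear agents, population sizes and mutation rate are illustrative stand-ins for the neuro-controllers and genetic code of the actual study:

```python
import numpy as np

rng = np.random.default_rng(1)

# A fixed random mapping from world states to correct actions defines
# fitness; simple linear agents evolve toward it (all sizes illustrative).
n_states, n_pop, n_gen = 4, 50, 300
states = np.eye(n_states)                     # one-hot world states
optimal = rng.standard_normal((n_states, 1))  # fixed random target actions

def fitness(genome):
    # Higher fitness = smaller error in reproducing the optimal behavior.
    return -np.mean((states @ genome - optimal) ** 2)

population = [rng.standard_normal((n_states, 1)) for _ in range(n_pop)]
for _ in range(n_gen):
    scores = np.array([fitness(g) for g in population])
    elite = [population[i] for i in np.argsort(scores)[-n_pop // 5:]]
    # Survivors reproduce with Gaussian mutation (selection + variation).
    population = [e + 0.1 * rng.standard_normal(e.shape)
                  for e in elite for _ in range(5)]

best = max(population, key=fitness)
print(fitness(best))  # close to zero: evolution recovers the target mapping
```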

The simulations, with populations of 200 individuals evolving for 2000 generations, showed interesting results. The best individuals developed neural controllers that could learn to imitate the teacher. The analysis of the units in the hidden layer of the neuro-controllers revealed units that were active both when observing the teacher and when executing the correct action. The conclusion drawn was that there is an "essential link between the ability to imitate and a mirror system". Although the model showed that imitation and mirror neurons may come to coexist through life-like evolution, the assumption of a perfect individual acting correctly among the evolving individuals weakens this argument. Ideally, one would like to have individuals of homogeneous performance that evolve to learn from each other.


A Developmental View

It has been reported that some monkey mirror neurons respond to the observation (and execution) of the action of tearing a sheet of paper (Kohler et al., 2002). This action is not in the ecological behavior repertoire of monkeys in the wild, so it is unlikely that such mirror neuron responses are innate. In line with this observation, the MNS model (Oztop and Arbib 2002) took a developmental point of view and tried to explain how mirror neurons could develop during infancy. The MNS model was specified as a systems-level model of the (monkey) mirror neuron system for grasping. The main hypothesis of the model was that the temporal profile of the features an infant experiences during self-executed grasps (e.g. the distance to the target object) provides the training stimuli for the mirror neuron system to develop. The computational focus of the model was the development of mirror neurons by self-observation. The motor production component of the system was assumed to be in place and was not modeled with neural networks. The schemas making up the model were implemented with different levels of granularity. Conceptually, these schemas correspond to brain regions as follows. The inferior premotor cortex plays a crucial role in reach and grasp, with area F4 involved in the control of the reach component (Gentilucci et al. 1988) and area F5 involved in distal control (Rizzolatti et al. 1988). According to the model, the anterior intraparietal area (AIP) extracts the affordances the object offers for grasping and relays this information to area F5 (canonical neurons), which projects to primary motor cortex. The reach component of grasping is mediated through a parallel pathway: several parietal areas (MIP/LIP/VIP) represent and relay the location of the target object to area F4, which in turn projects to primary motor cortex. The remaining modules of the model constitute the sensory processing (STS and area 7a) and the core mirror circuit (F5 mirror neurons and area 7b).
The focus of the simulations was the 7b-F5 complex (the core mirror circuit). Simulations consisted of training and testing: during training, the simulated infant (modeled as a kinematic arm-hand system) produced grasping actions according to the motor code represented by the active F5 canonical neurons. This code was used as the training signal for the core mirror circuit, so that mirror neurons learned which hand-object visual feature trajectories corresponded to the canonically encoded grasps. After training (i.e. in the testing phase), the network could recognize the grasp type from the visual features extracted during observation of a grasp action, with correct classification often being achieved before the hand reached the object. The inputs and outputs of the core mirror circuit were computed using various schemas, providing a context in which to analyze the circuit. The circuit itself was implemented as a feedforward neural network (a 1-hidden-layer back-propagation network with sigmoidal activation units; hidden layer: area 7b; output layer: F5 mirror).
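The core idea, a 1-hidden-layer back-propagation network classifying grasp type from visual feature trajectories, can be sketched with synthetic data. The hand-aperture feature, network sizes and training constants below are illustrative, not those of the published model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative stand-in for the model's visual input: hand aperture sampled
# at 10 time steps, opening wider for power grasps than precision pinches.
def trajectory(grasp):  # 0 = precision pinch, 1 = power grasp
    peak = 0.3 if grasp == 0 else 0.9
    t = np.linspace(0.0, 1.0, 10)
    return peak * np.sin(np.pi * t) + 0.05 * rng.standard_normal(10)

X = np.array([trajectory(g) for g in range(2) for _ in range(50)])
y = np.array([g for g in range(2) for _ in range(50)], float).reshape(-1, 1)

# 1-hidden-layer network trained by back-propagation with sigmoid units,
# loosely mirroring the 7b (hidden) -> F5-mirror (output) mapping.
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
W1 = 0.1 * rng.standard_normal((10, 6))
W2 = 0.1 * rng.standard_normal((6, 1))
for _ in range(2000):
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)
    d_out = (out - y) * out * (1 - out)   # output-layer error signal
    d_h = (d_out @ W2.T) * h * (1 - h)    # back-propagated hidden error
    W2 -= 0.5 * h.T @ d_out / len(X)
    W1 -= 0.5 * X.T @ d_h / len(X)

acc = float(((out > 0.5) == (y == 1)).mean())
print(acc)  # classification accuracy on the training trajectories
```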

What allowed self-observation to yield a system that could correctly respond to others' hand actions is the fact that the visual features extracted during grasp execution were defined with respect to the goal, and were thus invariant with respect to the executor of the action (i.e. self or other). Despite the use of a non-physiological neural network, simulations with the model generated a range of predictions about mirror neurons that suggest new neurophysiological experiments. For example, one prediction was that a precision pinch applied to a wide object should activate multiple neurons (both power-grasp and precision-grasp responsive neurons) during the early portion of the movement observation; only later should the activity of the precision pinch neurons dominate while the activity of the power grasp neurons diminishes. Recently, Bonaiuto et al. (2005) developed the MNS2 model, a new version of the MNS model of action recognition learning by mirror neurons of the macaque brain, using a recurrent architecture that is biologically more plausible than that of the original model. Moreover, MNS2 extends the capacity of the model to address data on audio-visual mirror neurons.

A Motor Control Role

The Mental State Inference (MSI) model attempts to give an account of how mental state inference can be realized with one's own motor system, once a forward prediction capability is available for motor control (Oztop et al. 2005b). In the model, a generic visual feedback circuit involving the parietal and motor cortices is assumed, and it is proposed that the mirror neurons in the ventral premotor area could be involved in forward prediction. The postulated functioning of the model for visual feedback control proceeds as follows. The parietal cortex extracts the visual features relevant for the control of a particular goal-directed action (X, the control variable) and relays this information to the premotor cortex. The premotor cortex computes the motor signals needed to match the parietal cortex output (X) to the desired neural code (Xdes) relayed by prefrontal cortex. The "desired change" generated by the premotor cortex is relayed to dynamics-related motor centers for execution. The forward prediction circuit (forward model) estimates the sensory consequences of the ventral premotor output, eliminating the sensory delays involved in the visual feedback circuit. As in the MNS model, area F5 (canonical) is involved in converting the parietal output (areas 7b and AIP) into motor signals, which are used by primary motor cortex and the spinal cord for actual muscle activation. In other words, area F5 non-mirror neurons implement a control policy (assumed to be learned earlier) to reduce the error represented by the parietal output. During observation mode, F5 mirror neurons are used to create motor imagery, a mental simulation of the movement, for mental state inference. The ability to predict future visual features based on the kinematics of goal-directed actions enables the basic feedback circuit to be extended into a system for inferring the intentions of others.
An observer "guesses" the mental state of a demonstrator and simulates the action that is appropriate for that mental state. A match between the simulated sensations and the sensation of the observed movement signals the correctness of the guess. The simulated mental sensations and the actual perception of the movement are compared in a mental state search mechanism. According to the MSI model, the dual activation of mirror neurons (as a forward model) is explained by two processes: (1) the automatic engagement of mental state inference during action observation, and (2) the forward prediction task undertaken by the mirror neurons for motor control during action execution. The simulations with this model showed that the mental state inference ability could be bootstrapped upon a motor-to-visual predictive mechanism once the control is specified with respect to the target object (i.e. when an object-centered reference frame is used).
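The search-by-simulation idea can be sketched as follows; the one-dimensional reaching dynamics, the candidate goal set and the feedback gain are illustrative assumptions:

```python
import numpy as np

def simulate(goal, steps=20, gain=0.3):
    # Forward-simulated hand position under a simple feedback policy that
    # reduces the error between the current state and the hypothesized goal.
    x, traj = 0.0, []
    for _ in range(steps):
        x += gain * (goal - x)
        traj.append(x)
    return np.array(traj)

candidate_goals = [0.5, 1.0, 2.0]   # hypothetical target positions
observed = simulate(1.0)            # the demonstrator actually reaches for 1.0

# Mental state search: keep the goal whose simulated sensation matches
# the observed movement best (smallest prediction error).
errors = [np.mean((simulate(g) - observed) ** 2) for g in candidate_goals]
inferred = candidate_goals[int(np.argmin(errors))]
print(inferred)  # -> 1.0: the matching simulation reveals the intention
```

The observer's own controller, run in simulation over candidate mental states, acts as the search mechanism; the guess whose predicted sensations match the observation is taken as the demonstrator's intention.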

A Dynamical System Approach

Jun Tani and his coworkers (Tani et al. 2004) proposed a model addressing learning, imitation and autonomous behavior generation in artificial agents. Certain ingredients of their model are advocated as models of mirror neurons. The proposed network is a generative learning architecture called the Recurrent Neural Network with Parametric Biases (RNNPB). In this architecture, spatiotemporal patterns are associated with so-called parametric bias (PB) vectors. RNNPB self-organizes the mapping between the PB vectors and the spatiotemporal patterns (behaviors) during the learning phase.

The learning is performed off-line by providing the sensory-motor training stimuli (e.g. the position of a moving hand and the joint angles of the arm) for each behavior in the training set. The goal of the training is twofold: (1) to adapt the neural network weights so that the network becomes a time-series predictor for the sensory-motor stimuli, and (2) to create a PB vector for each behavior. Both adaptations are based on the prediction error. After learning, the model represents a set of behaviors as dynamical systems tagged by the PB vectors created during the learning phase. The behaviors are generated via the associated PB vectors; given a fixed PB vector, the network autonomously produces the sensory-motor stream corresponding to the behavior associated with that vector. When the network is set to 'observe' mode, the sensory data (from the observed movement) are used to compute the PB vector whose associated behavior matches the observed one as closely as possible.
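The PB-based recognition mode can be illustrated with a toy generative mapping standing in for the trained recurrent network; only the PB value is adapted against the prediction error, as in RNNPB's 'observe' mode (the sinusoidal generator and the constants are illustrative):

```python
import numpy as np

# A fixed generative mapping stands in for the trained recurrent network:
# each behavior is a movement pattern tagged by a scalar parametric bias (pb).
t = np.linspace(0.0, 1.0, 50)
generate = lambda pb: np.sin(2 * np.pi * pb * t)

observed = generate(2.0)   # demonstrated behavior, tagged by pb = 2.0
pb = 1.5                   # initial guess in 'observe' mode
for _ in range(500):
    pred = generate(pb)
    # Gradient of the mean squared prediction error with respect to pb;
    # only pb is adapted, the generative mapping stays fixed.
    dpred_dpb = 2 * np.pi * t * np.cos(2 * np.pi * pb * t)
    pb -= 0.01 * np.mean(2 * (pred - observed) * dpred_dpb)

print(round(pb, 2))  # pb converges to the tag of the observed behavior
```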

This model has been shown to allow a humanoid robot to imitate and learn actions via demonstration. During execution, a fixed PB vector selects one of the stored motor patterns. For recognition, the PB unit outputs iteratively converge to those corresponding to the observed action. Although mirror neurons do not determine the action to be executed in monkeys (Fogassi et al. 2001), the firing patterns of mirror neurons are correlated with the action being executed. Thus, the PB vector units may be considered analogous to mirror neurons.

References

Billard A, Hayes G (1999) DRAMA, a Connectionist Architecture for Control and Learning in Autonomous Robots. Adaptive Behavior 7: 35-63

Billard A, Mataric MJ (2001) Learning human arm movements by imitation: Evaluation of a biologically inspired connectionist architecture. Robotics and Autonomous Systems 37: 145-160

Bonaiuto J, Rosta E, Arbib MA (2005) Recognizing Invisible Actions. In: Workshop on Modeling Natural Action Selection, Edinburgh

Borenstein E, Ruppin E (2005) The evolution of imitation and mirror neurons in adaptive agents. Cognitive Systems Research 6

Carr L, Iacoboni M, Dubeau M-C, Mazziotta JC, Lenzi GL (2003) Neural mechanisms of empathy in humans: A relay from neural systems for imitation to limbic areas. PNAS 100: 5497-5502

Catmur C, Walsh V, Heyes C (2007) Sensorimotor learning configures the human mirror system. Current Biology 17: 1527-1531

Dayan P, Hinton GE, Neal RM, Zemel RS (1995) The Helmholtz Machine. Neural Computation 7: 889-904

Demiris Y, Hayes G (2002) Imitation as a dual-route process featuring predictive and learning components: a biologically-plausible computational model. In: Dautenhahn K, Nehaniv C (eds) Imitation in Animals and Artifacts. MIT Press

Demiris Y, Johnson M (2003) Distributed, predictive perception of actions: a biologically inspired robotics architecture for imitation and learning. Connection Science 15

Elshaw M, Weber C, Zochios A, Wermter S (2004) An Associator Network Approach to Robot Learning by Imitation through Vision, Motor Control and Language. In: International Joint Conference on Neural Networks, Budapest, Hungary, pp 591-596

Fadiga L, Craighero L, Buccino G, Rizzolatti G (2002) Speech listening specifically modulates the excitability of tongue muscles: a TMS study. Eur J Neurosci 15: 399-402

Fogassi L, Gallese V, Buccino G, Craighero L, Fadiga L, Rizzolatti G (2001) Cortical mechanism for the visual guidance of hand grasping movements in the monkey - A reversible inactivation study. Brain 124: 571-586

Haruno M, Wolpert DM, Kawato M (2001) MOSAIC model for sensorimotor learning and control. Neural Computation 13: 2201-2220

Iacoboni M, Woods RP, Brass M, Bekkering H, Mazziotta JC, Rizzolatti G (1999) Cortical mechanisms of human imitation. Science 286: 2526-2528

Keysers C, Perrett DI (2004) Demystifying social cognition: a Hebbian perspective. Trends Cogn Sci 8: 501-507

Kohler E, Keysers C, Umilta MA, Fogassi L, Gallese V, Rizzolatti G (2002) Hearing sounds, understanding actions: action representation in mirror neurons. Science 297: 846-848

Kuniyoshi Y, Yorozu Y, Inaba M, Inoue H (2003) From Visuo-Motor Self Learning to Early Imitation - A Neural Architecture for Humanoid Learning. In: International Conference on Robotics & Automation. IEEE, Taipei, Taiwan

Morita M (1996) Memory and Learning of Sequential Patterns by Nonmonotone Neural Networks. Neural Netw 9: 1477-1489

Oztop E, Arbib MA (2002) Schema Design and Implementation of the Grasp-Related Mirror Neuron System. Biological Cybernetics 87: 116-140

Oztop E, Chaminade T, Cheng G, Kawato M (2005a) Imitation Bootstrapping: Experiments on a Robotic Hand. In: IEEE-RAS International Conference on Humanoid Robots, Tsukuba, Japan

Oztop E, Wolpert D, Kawato M (2005b) Mental state inference using visual control parameters. Brain Res Cogn Brain Res 22: 129-151

Oztop E, Kawato M, Arbib M (2006) Mirror neurons and imitation: a computationally guided review. Neural Netw 19: 254-271

Skipper JI, Nusbaum HC, Small SL (2005) Listening to talking faces: motor cortical activation during speech perception. NeuroImage 25: 76-89

Tani J, Ito M, Sugita Y (2004) Self-organization of distributedly represented multiple behavior schemata in a mirror system: reviews of robot experiments using RNNPB. Neural Netw 17: 1273-1289

Wolpert DM, Doya K, Kawato M (2003) A unifying computational framework for motor control and social interaction. Philos Trans R Soc Lond B Biol Sci 358: 593-602

Wolpert DM, Kawato M (1998) Multiple paired forward and inverse models for motor control. Neural Networks 11: 1317-1329

See Also

Mirror Neurons, Reach and Grasp, Premotor Cortex, Parietal Cortex
