
Chaotic itinerancy

From Scholarpedia
Ichiro Tsuda (2013), Scholarpedia, 8(1):4459. doi:10.4249/scholarpedia.4459 revision #197905 [link to/cite this article]

Curator: Ichiro Tsuda

Chaotic itinerancy is a closed-loop trajectory through the high-dimensional state space of neural activity that directs the cortex through a sequence of quasi-attractors. A quasi-attractor is a local region of convergent flows (attracting, absorbing) that give ordered, periodic activity, separated from other such regions by divergent flows (repelling, dispersing) that give disordered, chaotic activity between them.

Quasi-attractors are associated with perceptions, thoughts and memories, the chaos between them with searches, and itinerancy with sequences in thinking, speaking and writing. Chaotic itinerancy differs from itineraries in symbolic dynamics where sequencing is arbitrary, from saddle points with measure-zero attracting flows, from Kelso/Bressler’s metastable neurodynamics, and from Freeman/Kozma’s cinematic neurodynamics.


On the Finding of Chaotic Itinerancy

The first finding of chaotic itinerancy

Figure 1: Chaotic itinerancy generated by coupling of one-dimensional Milnor attractors [Tsuda and Umemura, 2003]. Colored trajectories indicate attractor ruins (see text) including quasi-attractors, and the black ones indicate chaotic trajectories that move over a higher dimension of phase space. A nearly stationary motion in each neighborhood of an attractor can be seen.
Figure 2: Trajectories starting from a given initial condition are superimposed. Attractor ruins are shown by colored dots, and chaotic trajectories by black ones.

At least up to the late 1980s, complex and dynamic behaviors in high-dimensional dynamical systems attracted attention in various fields, such as nonlinear physics, hydrodynamics, condensed matter physics, physical chemistry, and even neuroscience.

Ikeda studied optical turbulence and found complex phenomena in his model of a delayed-feedback optical system, one of which showed transitory dynamics among several optical modes [Ikeda et al., 1989]. Similar phenomena were observed by many others in optical turbulence (see, for example, [Anderson, 1987; Arecchi, 1990; Davis, 1990]). Independently, Tsuda found a similar transitory phenomenon in a neural network model of associative memory [Tsuda et al., 1987], in which not only a single association of a certain memory but also a successive association of memories was realized. Kaneko proposed coupled map lattices (CML) and globally coupled maps (GCM) to describe the turbulence-like complex, dynamic, and bifurcation phenomena appearing in various fields. He also found transitory behaviors in intermediate regions, between a fully chaotic region and an ordered region, of the two-dimensional parameter space spanned by the coupling strength among the individual maps and the degree of nonlinearity inherent in each map [Kaneko, 1990]. In GCM, such a transition occurs when multiple attractors are weakly destabilized, and the underlying states in the transitions were considered to be attractors in Milnor’s sense. The Milnor attractor concerned here is a minimal set among attractors whose basins of attraction have positive Lebesgue measure. Thus, there can be trajectories leaving the attractor.
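The GCM dynamics behind these observations can be sketched in a few lines. The map below iterates Kaneko's mean-field coupling of logistic-type elements; the parameter values (nonlinearity `a` and coupling `eps`) are illustrative choices, not the specific values surveyed in [Kaneko, 1990]:

```python
import numpy as np

def gcm_step(x, a=1.8, eps=0.1):
    # Kaneko's globally coupled map: each element follows f(x) = 1 - a*x^2
    # and is coupled to all others through the mean field of f.
    fx = 1.0 - a * x**2
    return (1.0 - eps) * fx + eps * fx.mean()

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=10)
for _ in range(2000):
    x = gcm_step(x)

# Elements whose states nearly coincide form a cluster; the cluster count
# distinguishes ordered, partially ordered, and turbulent phases.
clusters = len({round(v, 6) for v in x})
print(clusters, "clusters among", x.size, "elements")
```

Scanning `a` and `eps` over the two-dimensional parameter space reproduces the phase diagram described above; the transitory behaviors are reported in the intermediate, partially ordered regions.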


Kaneko, Ikeda, Davis, and Tsuda discussed the universality of this kind of transitory behavior. The common features were as follows: the successive transitions among quasi-attractors [Tsuda, 2001; Haken, 2006] are chaotic; the stability of the states changes via the transitions; there are distribution functions for the transitions, such as transition probabilities and distributions of the residence time of trajectories in a neighborhood of quasi-attractors; and the local dimensionality changes drastically during the transitions. Here, local dimensionality can be calculated from instantaneous Lyapunov exponents, defined by the eigenvalues of the Jacobian matrix at each position of a trajectory. These authors called the phenomenon characterized by these factors “chaotic itinerancy” [Ikeda et al., 1989; Kaneko, 1990; Davis, 1990; Tsuda, 1991a], in which the neighboring regions of quasi-attractors were named “attractor ruins”, as these regions of phase space are not occupied by conventional attractors, but include trajectories that can be nearly stationary, as if a conventional attractor remained there. An attractor in Milnor’s sense, but not in the geometric and conventional sense, provides a good concept for describing a quasi-attractor, because even under infinitesimal perturbations all the neighboring trajectories escape from the attractor sooner or later, as it can include a region or a set with neutral stability. This destabilized state of the attractor yields an attractor ruin. Thus, in chaotic itinerancy, complete convergence to an attractor fails, and the continuing presence of a collapsing gradient lets the effect of real nonlinearity appear even in a neighborhood of the attractor.
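As a minimal illustration of such local quantities, the sketch below records the log-magnitudes of the Jacobian eigenvalues along a trajectory of the Hénon map; the Hénon map serves here only as a convenient low-dimensional stand-in, not as a model from the cited works:

```python
import numpy as np

A, B = 1.4, 0.3  # standard Hénon parameters

def henon(p):
    x, y = p
    return np.array([1.0 - A * x**2 + y, B * x])

def jacobian(p):
    x, _ = p
    return np.array([[-2.0 * A * x, 1.0],
                     [B, 0.0]])

p = np.array([0.1, 0.1])
local_exponents = []
for _ in range(1000):
    # Instantaneous exponents: log-magnitudes of the Jacobian's eigenvalues
    # at the current point of the trajectory.
    eigvals = np.linalg.eigvals(jacobian(p))
    local_exponents.append(np.log(np.abs(eigvals)))
    p = henon(p)

local_exponents = np.array(local_exponents)
print("mean local exponents:", local_exponents.mean(axis=0))
```

Because the Jacobian's determinant is constant (\(-B\)), the two local exponents always sum to \(\log B\); it is their fluctuation along the trajectory that reveals changes in local dimensionality. Note that the long-time Lyapunov exponents proper are obtained from accumulated products of Jacobians, not from averaged local eigenvalues.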

Can Transitory Phenomena in the Brain be Interpreted as Chaotic Itinerancy?

Chaotic Transitions in the Brain

The complex transitions among distinct states of brain activity are not merely random; rather, they are transitory dynamics with specific features such as nonstationary, repetitive, and chaotic transitions. The typical phenomena observed experimentally appear to be chaotic transitions among “quasi-attractors” in the rat and rabbit olfactory system, particularly in the olfactory bulb [Freeman, 1987; Skarda and Freeman, 1987; Freeman, 1995; Freeman, 1999], irregular transitions between synchronization and desynchronization of subthreshold dynamics in the cat visual cortex [Gray et al., 1992], irregular reentry of synchronization of phase differences in the human electrocorticogram (ECoG) [Freeman, 2003], and the task-related propagation of wave packets consisting of \(\gamma\)-waves (around 30–90 Hz) and \(\beta\)-waves (around 10–30 Hz) in the rat olfactory and widely connected areas [Kay et al., 1995; Kay et al., 1996].

In particular, Freeman and colleagues observed chaotic transitions during active behaviors in animals. Rats learn odors, each of which is represented by a limit-cycle attractor in the olfactory bulb. After several odor inputs have been learned, the activity of the olfactory bulb becomes a chaotic wandering among the learned states if an input is new; however, if an input has been learned previously, the activity converges to one of the learned states, i.e., it is represented by a limit-cycle attractor [Freeman, 1987; Skarda and Freeman, 1987]. This shows the typical dynamics of the attending animal. Furthermore, the state transitions occur spontaneously, which can yield a ready state for appropriate responses to external stimuli as well as perceived states. Freeman asserted that these changes occur through a change of the attractor landscape. Kay also proved the existence of state transitions in the field potentials of the olfactory bulb, hippocampus, and entorhinal cortex of rats. The transitions occur during successive periods of anticipation of odor inputs, perception of odor, judgment for action, and actual action [Kay et al., 1995; Kay et al., 1996]. The transitions may reflect the representation of the animal’s experience, i.e., episodes. Orderly sequences of spatial patterns relating to intentional behavior were found in the ECoG of olfactory, visual, auditory, somatic, and entorhinal cortices, and modeled as a cinematographic process. This process shares with chaotic itinerancy the concept of sequential frames, but it differs in requiring convergence to one attractor prior to dissolution of the entire attractor landscape at each descent into chaos [Freeman, 2006; Freeman and Vitiello, 2006; Freeman and Quian Quiroga, 2012].

Transitions without external inputs, i.e., spontaneous or ongoing cortical activity, have also been measured using quantities that reflect field potentials, such as the local field potential (LFP), calcium imaging, the electroencephalogram (EEG), and the ECoG. The brain changes its activity in the absence of stimuli such that the spontaneously activated pattern, or ongoing activity, is similar to what would be observed if the stimuli were actually presented [Kenet et al., 2003; Goldberg et al., 2004; Mason et al., 2007; Freeman, 2003].

In particular, Arieli et al. [1996] and Kenet et al. [2003] showed that ongoing activity contains a set of dynamically switching cortical states, and these dynamic states were suggested to reflect expectations about the sensory inputs. This finding indicates that the brain is always in an active idling state [Freeman, 1987], with possible responsive patterns being evoked to enable quick responses to any stimulus. Spontaneous cortical activity has also been suggested to appear in accordance with wandering mental processes because of the activation of default networks [Mason et al., 2007].

Other types of spontaneous activity have also been observed. One of them stems from the study performed by Freeman and Zhai [2009], who observed spontaneous activity in animal and human brains and analyzed the data in terms of a random process moderated by refractory periods. These authors found that the spontaneous activity could be characterized by black noise, the power spectral density of which follows \(1/f^x\), where \(x \ge 2\). The appearance of black noise activity usually means that extremely rare events predominate. Here, however, ‘black noise’ means that the refractory periods do not operate as a sharp high-cutoff filter. Instead, the refractory periods reduce the power over the entire spectrum in proportion to frequency on a logarithmic scale in both variables, giving slopes between -2 and -4, as evaluated by the impulse response of the cortex. Another type has been observed in cultures of the hippocampal CA3 [Sasaki et al., 2007]. In the presence of a high concentration of carbachol, an agonist of muscarinic acetylcholine receptors, transitions occurred among five kinds of states: random firing states, up–down states, steady firing states, \(\theta\) rhythm activity, and partially synchronized states. The transition was not regular; rather, it was chaotic, such as that shown by a noisy tent map. In contrast to the effect of carbachol, the input of atropine, an antagonist of muscarinic acetylcholine receptors, prohibited the transition and had a strong tendency to force the CA3 network to one of the five states described above, depending on the initial conditions. The finding of this spontaneous transition in CA3 is important because the hippocampal CA3 can be considered to play a role in the internal reconstruction of episodes.
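The exponent \(x\) in a \(1/f^x\) spectrum is read off as the slope of the periodogram on log-log axes. The sketch below does this for synthetic Brownian noise, whose slope of about \(-2\) sits at the boundary of the black-noise regime; it is an illustration of the measurement, not an analysis of ECoG data:

```python
import numpy as np

rng = np.random.default_rng(1)
# Brownian noise (integrated white noise) has a 1/f^2 power spectrum.
signal = np.cumsum(rng.standard_normal(2**16))

psd = np.abs(np.fft.rfft(signal))**2
freqs = np.fft.rfftfreq(signal.size)

# Fit the log-log slope over a mid-frequency band, avoiding the DC bin.
band = (freqs > 1e-3) & (freqs < 1e-1)
slope, _ = np.polyfit(np.log(freqs[band]), np.log(psd[band]), 1)
print(f"estimated spectral slope: {slope:.2f}")
```

Slopes between -2 and -4 on such a fit would correspond to the range reported by Freeman and Zhai [2009].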

The significance of Tsuda’s model in computational neuroscience

Figure 3: Successive association of memory patterns. Four spatial patterns (the face of a woman, a scene with a pine tree, the face of an animal, and a set of fruits) and their negative patterns are embedded into the synaptic connections of a network of Aihara’s chaotic neurons using a Hebbian learning algorithm. The total number of neurons is \(1,572,864=24\times 256^2\), where the total number of pixels is \(256^2\) and 24 neurons were set in each pixel. M. Oku and K. Aihara [Oku and Aihara, 2011] at the University of Tokyo designed and produced the model. The movie is shown with a reduced number of pixels.

Before Tsuda’s work on dynamic associative memory, mathematical models of associative memory succeeded in describing a single association in a neural network [Amari, 1977; Kohonen, 1978; Hopfield, 1982]. In particular, for the Hopfield type of network with a Hebbian learning algorithm, it was proved that attractor dynamics subserves the realization of associative memory: a trajectory starting from an initial condition placed in a basin of attraction converges to the attractor, which is presumed to represent a certain memory. Thus, this type of model realizes a single association of memory as pattern completion. In the case of successive association, on the other hand, the study of neural networks was restricted to the condition that a rule for the order of association was given [Amari, 1977]. Thus, emergent properties could not be treated by these theories. Tsuda’s model describes a dynamically successive association of memories with a self-organized rule that emerges in a recurrent neural network with inhibitory feedback connections under a Hebbian learning algorithm. This kind of successive association of memories can be described by dynamic transitions between dynamical memory states. The dynamic process and its transition rule were studied, and a circle map with criticality was found as a transition rule. Criticality appeared in two ways: the circle map was at a critical stage between a chaotic map and a stable one; and the fixed points representing memories were indifferent fixed points, similar to those introduced by Milnor as a typical example of an attractor defined to include even an invariant set with neutral stability [Milnor, 1985].
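The single-association mechanism referred to above can be sketched as pattern completion in a Hopfield-type network with Hebbian weights; the 16-unit network and the two stored patterns below are a minimal illustrative construction, not taken from the cited models:

```python
import numpy as np

# Minimal sketch of single association: Hebbian weights store two patterns,
# and one synchronous update completes a corrupted input.
p1 = np.ones(16)                      # stored pattern 1 (all +1)
p2 = np.array([1, -1] * 8, float)     # stored pattern 2, orthogonal to p1
W = (np.outer(p1, p1) + np.outer(p2, p2)) / 16.0
np.fill_diagonal(W, 0.0)              # no self-connections

s = p1.copy()
s[[0, 5]] *= -1                       # corrupt two bits of pattern 1
s = np.sign(W @ s)                    # one synchronous update

print("recovered pattern 1:", np.array_equal(s, p1))
```

One update restores the corrupted pattern because the input still lies in the basin of attraction of the stored memory; Tsuda's model replaces exactly this static convergence with transitions governed by a critical circle map.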
Based on the work regarding successive association of memories, Körner and his colleagues [Körner et al., 1991] observed a similar transitory behavior in a neural network model of parallel-in-sequence processing of information, which was proposed as a dynamic model of internal attention in vision [Treisman, 1982; Crick, 1984]. Aihara and his colleagues constructed a model of dynamic associative memory [Adachi and Aihara, 1997], using the network of chaotic neurons proposed by Aihara [Aihara, 1990], and observed a similar chaotic transition between memories. A similar, but not transient, behavior was also observed in a neural network model for the association of memories in which each memory was represented by a limit-cycle attractor [Nara and Davis, 1992]. In the Nara–Davis model, a decrease in the number of synaptic connections leads to a transition from attractor dynamics subserving a single association of memory to chaotic dynamics subserving a dynamic and successive association of memories. Horn and Opher proposed a similar model in which the successive associations of memories were noise-induced [Horn and Opher, 1996]. In noisy dynamical systems, trajectories can be kicked out of quasi-attractors by noise, and attractor ruins can arise if the noisy trajectories are regulated to behave in a nearly stationary fashion in each neighborhood of the quasi-attractors. In such a case, chaotic itinerancy can also appear in noisy dynamical systems.

Attractor Ruins

As these transitory phenomena observed in the brain include chaotic transitions, whether they can be interpreted in terms of chaotic itinerancy depends mainly on the presence of nearly stationary motion in a neighborhood of an ordered state, which is guaranteed by the appearance of attractor ruins. Freeman proposed an attractor landscape that yields mesoscopic states through state transitions [Freeman, 1987; Skarda and Freeman, 1987]. In his proposal, the trajectory representing a dynamic change in brain activity converges asymptotically to a selected attractor; however, escape occurs by collapse of the entire landscape. While collapse of the entire landscape should be realized by a sudden change in system parameter(s), the appearance of attractor ruins can occur via a continual change in the parameter(s). However, because the attractor landscape is activated by limbic command, the nearly stationary motion in a neighborhood of the selected attractor depends on the activity of the limbic system. If the limbic command changes, not suddenly but more slowly than the converging dynamics, bringing about a slower change in the attractor landscape compared with the convergence of the states, the nearly stationary motion can be generated by the appearance of the attractor ruin. If collapse of the entire landscape occurs, it is unlikely that attractor ruins are present; rather, a sudden change from one attractor to another must be observed, so that the nearly stationary motion can appear only on the attractor.
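The nearly stationary motion near a weakly destabilized state can be caricatured by a one-dimensional map with an indifferent (neutrally stable) fixed point. The Pomeau–Manneville-type sketch below is only an illustration of marginal stability producing long laminar episodes, not a model of cortical dynamics:

```python
import numpy as np

def pm_map(x, z=1.5):
    # Map on [0,1) with an indifferent fixed point at x=0 (derivative 1 there):
    # orbits linger near 0 for long laminar stretches, then burst away --
    # a caricature of nearly stationary motion near an attractor ruin.
    return (x + x**z) % 1.0

x = 0.2
orbit = np.empty(50000)
for n in range(orbit.size):
    x = pm_map(x)
    orbit[n] = x

near_zero = np.mean(orbit < 0.05)
print(f"fraction of time within 0.05 of the marginal fixed point: {near_zero:.2f}")
```

Orbits creep away from \(x=0\) algebraically slowly, are reinjected by the mod-1 wrap, and so alternate between long, nearly stationary episodes and fast excursions, much as trajectories linger in attractor ruins between chaotic transitions.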

Conversely, the “I don’t know” state that Freeman observed in animal perception seems to be identifiable with chaotic itinerancy because of the appearance of attractor ruins. An input of an unknown odor gives rise to chaotic transitions among learned odor states, and nearly stationary motion appears in a neighborhood of a limit-cycle attractor that represents a learned odor state. Thus, the learned state is weakly destabilized and expressed by an attractor ruin. Only after learning does the trajectory converge to a new limit-cycle attractor. The strongly chaotic activity of the ‘I don’t know’ basin of attraction is required for trial-and-error Hebbian learning, because Hebbian synapses require pre- and postsynaptic activity, but the activity must be novel to avoid reinforcing some existing attractor. Indeed, Tsuda’s model guarantees this trial-and-error Hebbian learning even during the recall of memories by avoiding the reinforcement of existing attractors through the continual transient dynamics.


Bressler and Kelso [2001] proposed a concept similar to chaotic itinerancy, i.e., the dynamic functional binding of local and global information processing based on metastability, to represent both the information processing specific to each local cortical area and the global integration of that local processing. If each specific process works as a metastable state in some potential function with a gradient, global integration will be realized by the formation of such a function, which may generate a global attractor. However, the manner in which such a function is formed is unknown. An attractor ruin in chaotic itinerancy is not metastable; rather, it is the ordered state as a precursor of an attractor ruin that is metastable. If the system is trapped in a certain metastable state, energy is required to go beyond a local maximum barrier, even if the transitions are caused by a change of the attractor landscape; only a sufficient energy supply allows the transition. Conversely, a basis of transitions in chaotic itinerancy is provided by an attractor ruin that possesses neutral stability; thus, the transitions demand only a low-level energy supply, i.e., even arbitrarily small perturbations can trigger the transitions. Usually, in dynamical systems, a neutrally stable state appears at a critical stage of bifurcations, so that this state is structurally unstable, i.e., it becomes unstable via a small change in a bifurcation parameter. However, a neutrally stable state in chaotic itinerancy in neural systems is structurally stable via the common mechanism of masking effects by the presence of inhibitory interneurons [Körner et al., 1991].

Unstable Attractor

Another type of state transition among attractors via arbitrarily small perturbations has been observed in pulse-coupled oscillators. Timme et al. [2002] analyzed the structure of the basins of attraction of periodic attractors and found a specific feature of the basin structure in which each periodic attractor is surrounded by the basins of other attractors, but is remote from its own basin. Because in this type of situation arbitrarily small perturbations bring about transitions among periodic attractors, such attractors are unstable. In the sense that this unstable attractor is also consistent with the definition of attractor proposed by Milnor [1985], it is similar to, but not the same as, the attractor ruins in chaotic itinerancy. The nearly stationary motion in this type of coupled oscillator system can appear only on the attractors. Although pulse-coupled oscillator systems can provide mathematical models for both vertebrate and invertebrate locomotion, in which pattern generators can be coupled by pulses, it is unlikely that they also provide models of cortical neural networks for perception, memory, and cognition, as more complex neural activity is coupled through continuous variables such as electric potentials and chemical concentrations.

Heteroclinic Cycle

Rabinovich and colleagues studied another dynamical system that may account for a certain type of cortical transition observed in the olfactory system of insects [Rabinovich et al., 2001; Afraimovich et al., 2004]. As a representation of the transition, these authors proposed a heteroclinic cycle of saddles. This transition mechanism is based on a generic property of saddle connections, in which the transition is not always chaotic. Usually, saddle connections are not structurally stable, because the coincidence of an unstable manifold of one saddle with a stable manifold of the other saddle is a measure-zero event in phase space. However, a saddle connection may become structurally stable in the presence of symmetry [Guckenheimer, 1988]. This holds under the condition that the sum of the dimensions of the unstable manifold of one saddle and the stable manifold of the other saddle exceeds the dimension of phase space. One can confirm this condition in each invariant subspace of symmetric dynamical systems. By this theory, transitions via saddle connections may occur in some areas of the brain. However, the presence of such symmetry in the brain must be investigated in more detail.
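A minimal sketch of such a heteroclinic cycle is the three-group generalized Lotka–Volterra (May–Leonard) system with asymmetric, winnerless competition, in which each single-group saddle passes the trajectory on to the next; the coefficients below are illustrative, not Rabinovich's fitted values:

```python
import numpy as np

# Generalized Lotka-Volterra with asymmetric (winnerless) competition:
#   da_i/dt = a_i * (1 - (rho @ a)_i)
# With alpha + beta > 2, trajectories approach a heteroclinic cycle
# among the three single-group saddles, lingering ever longer at each.
alpha, beta = 2.0, 0.5
rho = np.array([[1.0, alpha, beta],
                [beta, 1.0, alpha],
                [alpha, beta, 1.0]])

a = np.array([0.30, 0.29, 0.27])      # start near the interior fixed point
dt, steps = 0.01, 20000
dominant = np.empty(steps, dtype=int)
for n in range(steps):
    a = a + dt * a * (1.0 - rho @ a)  # forward Euler step
    dominant[n] = np.argmax(a)

# Each group takes its turn as the momentarily dominant one.
print("groups that dominate at some time:", sorted(set(dominant.tolist())))
```

The residence time near each saddle grows with every pass around the cycle, one signature distinguishing heteroclinic-cycle transitions from the chaotic transitions of itinerant dynamics.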

One may still discuss the possibility of the appearance of chaotic itinerancy in heteroclinic cycles [Tsuda, 2009]. It is interesting to consider the memory capacity of networks of competing neuron groups. Rabinovich et al. estimated this capacity at approximately \(e(N-1)!\), where \(N\) is the number of neurons and \(e\) is Napier’s constant, \(2.71828\cdots\), by counting the possible number of heteroclinic cycles [Rabinovich et al., 2001]. On the other hand, to calculate the critical dimensionality for the appearance of chaotic itinerancy, Kaneko [2002] estimated two factors that supposedly determine the dimensionality of the chaotic transition. Let \(N'\) be the system’s dimension, and assume that the number of states in each dimension is two, taking into account the presence of two stable states separated by a saddle. The number of admissible orbits cyclically connecting the subspaces, for instance via heteroclinic cycles, increases in proportion to \((N'-1)!\), whereas the number of states increases in proportion to \(2^{N'}\). If the former number exceeds the latter, then not every orbit can be assigned to its own state, thus causing transitions. In this situation, we expect itinerant motions between states. In chaotic itinerancy, this critical number is six [Kaneko, 2002; Kaneko and Tsuda, 2003]. We identify \(N'\) with \(N\). In such a case, one may conclude that the transition via heteroclinic cycles occurs when the memory capacity is smaller than the number of states, whereas chaotic itinerancy occurs in the opposite condition.
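The crossover at six can be checked directly by comparing Kaneko's two counts, \((N'-1)!\) admissible cyclic orbits against \(2^{N'}\) states:

```python
from math import factorial

# Smallest dimension at which the number of cyclically connecting orbits,
# growing like (N-1)!, first exceeds the number of states, growing like 2^N.
N = 2
while factorial(N - 1) <= 2**N:
    N += 1
print("critical dimensionality:", N)  # -> 6, as quoted in the text
```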
This implies that motion-related neural activity, in which the number of motor commands is small relative to the number of modalities of motion, can appear as transition behavior via heteroclinic cycles, whereas perception- and cognition-related neural activity can appear as transition behavior via chaotic itinerancy, because the memory capacity for perception and cognition must be much larger than the number of perceived and cognitive states.

The necessity of chaotic itinerancy as a new dynamical concept

In dynamical systems such as those described by differential equations, i.e., vector fields, dimensionality is crucial to produce a variety of solutions. Here, we discuss dissipative dynamical systems, in which an attractor describes the asymptotic state of a dynamical trajectory with a given initial condition. Dynamical systems with fewer than three dimensions possess two types of attractors: fixed points and limit cycles. Three-dimensional dynamical systems can further produce tori and strange attractors, namely chaotic attractors. The presence of chaos is essentially new in three-dimensional dynamical systems. Are there any essentially new dynamics in higher-dimensional dynamical systems?

Rössler [1979] proposed the idea of hyperchaos as a new attractor in dynamical systems with more than three dimensions. Hyperchaos is defined by the presence of multiple positive Lyapunov exponents, namely the presence of multiple independently expansive directions. Thus, hyperchaos is considered to describe a general class of higher-dimensional chaos. The following question then arises: what is a specific behavior among higher-dimensional chaotic behaviors? In other words, how can hyperchaos be classified, and could specificity exist in dynamical systems with more than three degrees of freedom? Chaotic itinerancy can provide specific transitory dynamics among “quasi-attractors”.
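A hyperchaos diagnosis amounts to computing the full Lyapunov spectrum and counting the positive exponents. The sketch below applies the standard QR method to two weakly coupled, fully chaotic logistic maps; the system and parameters are illustrative choices, not examples from the cited works:

```python
import numpy as np

EPS = 0.05  # weak symmetric coupling

def step(v):
    # Two coupled logistic maps, f(x) = 4x(1-x), each fully chaotic alone.
    f = 4.0 * v * (1.0 - v)
    return (1.0 - EPS) * f + EPS * f[::-1]

def jac(v):
    df = 4.0 - 8.0 * v  # f'(x)
    return np.array([[(1.0 - EPS) * df[0], EPS * df[1]],
                     [EPS * df[0], (1.0 - EPS) * df[1]]])

v = np.array([0.3, 0.6])
Q = np.eye(2)
log_growth = np.zeros(2)
n_steps = 20000
for _ in range(n_steps):
    # QR re-orthonormalization accumulates the tangent-space growth rates.
    Q, R = np.linalg.qr(jac(v) @ Q)
    log_growth += np.log(np.abs(np.diag(R)))
    v = step(v)

lyap = np.sort(log_growth / n_steps)[::-1]
print("Lyapunov spectrum:", lyap)
```

Two positive exponents indicate hyperchaos; identifying chaotic itinerancy within that class requires the additional features discussed in this article, such as nearly stationary motion in attractor ruins.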

One can describe various phenomenological states in terms of the concept of attractors in dynamical systems. The steady state is described by a fixed point, the periodic state by a limit cycle, the quasi-periodic state by a torus, and the irregular state by a strange attractor. To describe the transitory phenomena, one needs another dynamical concept, such as chaotic itinerancy.

A saddle connection provides an alternative description of transition between states, as mentioned in the previous section. Generically, in low-dimensional systems, a saddle connection is structurally unstable, so that it cannot be an appropriate model for transitory phenomena. However, if the system has some kind of symmetry, the saddle connection becomes structurally stable [Guckenheimer, 1988]. In such a case, successive transitions between states represented by saddles can occur, but are not always chaotic. Furthermore, the transitory phenomena discussed here are characterized by the presence of nearly stationary motion in each neighborhood of an attractor. Chaotic itinerancy is a generic structure that allows chaotically successive transitions and nearly stationary motion.

Furthermore, transitory dynamics demands a new concept of attractor, because the transition should be associated with the instability of such an attractor itself. The concept of chaotic itinerancy expresses the chaotic transitions between “quasi-attractors”. The trajectories behave as if the attractors still exist, in the sense that a positive measure of orbits is attracted to a neighborhood of an original attractor. However, such an attracted area is not asymptotically stable.

It should be noted that this idea of a quasi-attractor is similar to the attractor concept proposed by Milnor [1985]. However, if a Milnor attractor exists, the trajectories converge to it unless it has a riddled basin, and they then cannot leave a neighborhood of the attractor without additional perturbations. Thus, we used the term “attractor ruin”, which includes the region of a quasi-attractor and indicates the states that appear soon after the destabilization of attractors.

Chaotic itinerancy has been numerically observed in many systems [Kaneko and Tsuda, 2003]. Typical systems include GCM [Kaneko, 1990], CML [Tsuda and Umemura, 2003], networks of neuron maps [Adachi and Aihara, 1997], coupled differential equations [Fujii and Tsuda, 2004a; Fujii and Tsuda, 2004b; Tsuda et al., 2004], delay-differential equations [Ikeda et al., 1989], and skew product transformations [Tsuda et al., 1987; Tsuda, 1992].

The characteristics of chaotic itinerancy have also been clarified (see, for example, [Kaneko and Tsuda, 2001]). The distribution of the residence time in attractor ruins follows a power law [Tsuda, 1992] or an exponential law [Tsuda and Umemura, 2003], depending on the model. The chaotic transition usually occurs in high-dimensional phase space; however, in the case that the chaotic trajectories are confined to a “narrow tube”-like structure in phase space, the transition can be described by low-dimensional chaos [Tsuda et al., 1987; Tsuda, 1991b]. Concerning the Lyapunov spectrum, the following three specific characteristics have been described: (1) many of the Lyapunov exponents accumulate in a neighborhood of zero [Kaneko, 1990; Tsuda, 1992]; (2) the zero exponents besides the one along the direction of the orbit (in the case of flows) show large fluctuations and never converge [Sauer, 2003]; and (3) even the largest exponent fluctuates and exhibits extremely slow convergence [Tsuda and Umemura, 2003].

Is there a mathematical concept that can represent an attractor ruin? Possible scenarios of the appearance of chaotic itinerancy have been discussed from the mathematical viewpoint. Because a mathematical description is not the purpose of this article, the readers are recommended to refer to [Tsuda, 2009], in which five scenarios have been proposed. We expect that the question of whether other possibilities of scenarios exist will be investigated.


The author would like to thank Hiraku Kuroda, Makito Oku and Hiroshi Watanabe for composing the movies (Figs. 1 and 3). This work was partially supported by the Human Frontier Science Program (HFSP) (RGP0039/2010) and by a Grant-in-Aid for Scientific Research on Innovative Areas, “The study on the neural dynamics for understanding communication in terms of complex hetero systems” (No. 4103, 21120002), of the Ministry of Education, Culture, Sports, Science, and Technology, Japan.


Adachi M, Aihara K (1997) Associative dynamics in a chaotic neural network. Neural Networks 10: 83-98.

Afraimovich VS, Zhigulin VP, Rabinovich MI (2004) On the origin of reproducible sequential activity in neural circuits. Chaos 14: 1123-1129.

Aihara K, Takabe T, Toyoda M (1990) Chaotic neural networks. Phys Lett A 144: 333-340.

Amari S (1977) Neural theory of association and concept-formation. Biol Cybern 26: 175-185.

Anderson DZ, Erie MC (1987) Resonator memories and optical novelty filters. Opt Eng 26: 434-444.

Arecchi FT, Giacomelli G, Ramazza PL, Residori S (1990) Experimental evidence of chaotic itinerancy and spatiotemporal chaos in optics. Phys Rev Lett 65: 2531-2534.

Arieli A, Sterkin A, Grinvald A, Aertsen A (1996) Dynamics of ongoing activity: explanation of large variability in evoked cortical responses. Science 273: 1868-1871.

Bressler SL, Kelso JAS (2001) Cortical coordination dynamics and cognition. Trends in Cogn Sci 5: 26-36.

Crick F (1984) Function of the thalamic reticular complex: the searchlight hypothesis. Proc Natl Acad Sci USA 81: 4586-4590.

Davis P (1990) Chaos and neural networks. Proc. First Symposium on Nonlinear Theory and Its Applications: 97-102.

Freeman WJ (1987) Simulation of chaotic EEG patterns with a dynamic model of the olfactory system. Biol Cybern 56: 139-150.

Freeman WJ (1995) Societies of brains – a study in the neuroscience of love and hate. Lawrence Erlbaum Associates Inc. Hillsdale.

Freeman WJ (1999) How Brains Make up Their Minds. Weidenfeld & Nicholson London.

Freeman WJ (2003) Evidence from human scalp EEG of global chaotic itinerancy. Chaos 13: 1067-1077.

Freeman WJ (2006) A cinematographic hypothesis of cortical dynamics in perception. In: Karakas S, Basar E (eds.) Intern. J. Psychophysiology 60(2): 149-161.

Freeman WJ, Vitiello G (2006) Nonlinear brain dynamics as macroscopic manifestation of underlying many-body field dynamics. Phys Life Rev 3: 93-118.

Freeman WJ, Zhai J (2009) Simulated power spectral density (PSD) of background electrocorticogram (ECoG). Cogn Neurodyn 3: 97-103.

Freeman WJ, Quian Quiroga R. (2012) Imaging Brain Function with EEG. Advanced Temporal and Spatial Analysis of Electroencephalographic and Electrocorticographic Signals. New York: Springer.  

Fujii H, Tsuda I (2004a) Itinerant dynamics of class I* neurons coupled by gap junctions. Lecture Notes in Computer Science 3146. Springer-Verlag Berlin Heidelberg New York, pp140-160.

Fujii H, Tsuda I (2004b) Neocortical gap junction-coupled interneuron systems may induce chaotic behavior itinerant among quasi-attractors exhibiting transient synchrony. Neurocomputing 58-60: 151–157.

Goldberg JA, Rokni U, Sompolinsky H (2004) Patterns of ongoing activity and the functional architecture of the primary visual cortex. Neuron 42: 489-500.

Gray C, Engel AK, Koenig P, Singer W (1992) Synchronization of oscillatory neuronal responses in cat striate cortex: temporal properties. Visual Neuroscience 8: 337-347.

Guckenheimer J, Holmes P (1988) Structurally stable heteroclinic cycles. Math Proc Camb Phil Soc 103: 189-192.

Haken H (2006) Beyond attractor neural networks for pattern recognition. Nonlinear Phenomena in Complex Systems 9: 163-172.

Hopfield JJ (1982) Neural networks and physical systems with emergent collective computational abilities. Proc Natl Acad Sci USA 79: 2554-2558.

Horn D, Opher I (1996) The importance of noise for segmentation and binding in dynamical neural systems. Int J Neural Syst 7: 529-535.

Ikeda K, Otsuka K, Matsumoto K (1989) Maxwell-Bloch turbulence. Prog Theor Phys Suppl 99: 295-324.

Kaneko K (1990) Clustering, coding, switching, hierarchical ordering, and control in a network of chaotic elements. Physica D 41: 137-172.

Kaneko K, Tsuda I (2001) Complex systems: chaos and beyond. Springer-Verlag Berlin Heidelberg New York.

Kaneko K (2002) Dominance of Milnor attractors in globally coupled dynamical systems with more than 7±2 degrees of freedom. Phys. Rev. E 66: 055201(R).

Kaneko K, Tsuda I eds (2003) Focus issue on chaotic itinerancy. Chaos 13: 926-1164.

Kay L, Shimoide K, Freeman WJ (1995) Comparison of EEG time series from rat olfactory system with model composed of nonlinear coupled oscillators. Int J Bifurcat and Chaos 5: 849-858.

Kay L, Lancaster LR, Freeman WJ (1996) Reafference and attractors in the olfactory system during odor recognition. Int J Neural Syst 7: 489-495.

Kenet T, Bibitchkov D, Tsodyks M, Grinvald A, Arieli A (2003) Spontaneously emerging cortical representations of visual attributes. Nature 425: 954-956.

Kohonen T (1978) Associative memory—a system theoretical approach. Springer-Verlag Berlin Heidelberg New York.

Korner E, Schickoff K, Tsuda I (1991) Dynamic inhibitory masking by means of compensation learning in neural networks. In: Holden AV, Kryukov VI (eds) Neurocomputers and attention I. Manchester University Press, Manchester, pp 309-317.

Mason MF, Norton MI, Van Horn JD, Wegner DM, Grafton ST, Macrae CN (2007) Wandering minds: the default network and stimulus-independent thought. Science 315: 393-395.

Milnor J (1985) On the concept of attractor. Comm Math Phys 99: 177-195.

Nara S, Davis P (1992) Chaotic wandering and search in a cycle-memory neural network. Prog Theor Phys 88: 845-855.

Oku M, Aihara K (2011) Associative dynamics of color images in a large-scale chaotic neural network. NOLTA, IEICE 2(4): 508-521.

Rabinovich M, Volkovskii A, Lecanda P, Huerta R, Abarbanel HDI, Laurent G (2001) Dynamical encoding by networks of competing neuron groups: winnerless competition. Phys Rev Lett 87: 068102.

Rossler OE (1979) An equation for hyperchaos. Phys Lett A 71: 155-157.

Sasaki T, Matsuki N, Ikegaya Y (2007) Metastability of active CA3 networks. J Neurosci 27: 517-528.

Sauer T (2003) Chaotic itinerancy based on attractors of one-dimensional maps. Chaos 13: 947-952.

Skarda CA, Freeman WJ (1987) How brains make chaos in order to make sense of the world. Behav Brain Sci 10: 161-195.

Timme M, Wolf F, Geisel T (2002) Prevalence of unstable attractors in networks of pulse-coupled oscillators. Phys Rev Lett 89: 154105.

Treisman A (1982) Perceptual grouping and attention in visual search for features and for objects. J Exp Psychol Hum Percept Perform. 8:194-214.

Tsuda I, Korner E, Shimizu H (1987) Memory dynamics in asynchronous neural networks. Prog Theor Phys 78: 51-71.

Tsuda I (1991a) Chaotic itinerancy as a dynamical basis of hermeneutics of brain and mind. World Futures 32: 167-185.

Tsuda I (1991b) Chaotic neural networks and thesaurus. In: Holden AV, Kryukov VI (eds) Neurocomputers and attention I. Manchester University Press, Manchester, pp 405-424.

Tsuda I (1992) Dynamic link of memories–chaotic memory map in nonequilibrium neural networks. Neural Networks 5: 313–326.

Tsuda I (2001) Toward an interpretation of dynamic neural activity in terms of chaotic dynamical systems. Behav Brain Sci 24: 793-847.

Tsuda I, Umemura T (2003) Chaotic itinerancy generated by coupling of Milnor attractors. Chaos 13: 926-936.

Tsuda I, Fujii H, Tadokoro S, Yasuoka T, Yamaguti Y (2004) Chaotic itinerancy as a mechanism of irregular changes between synchronization and desynchronization in a neural network. J. of Integr Neurosci 3: 159-182.

Tsuda I (2009) Hypotheses on the functional roles of chaotic transitory dynamics. Chaos 19: 015113.

Internal references

John W. Milnor (2006) Attractor. Scholarpedia, 1(11):1815.

Walter J. Freeman and Harry Erwin (2008) Freeman K-set. Scholarpedia, 3(2):3238.

Walter J. Freeman and Robert Kozma (2010) Freeman's mass action. Scholarpedia, 5(1):8040.

Hans Liljenström (2012) Mesoscopic brain dynamics. Scholarpedia, 7(9):4601.

Anil Seth (2007) Models of consciousness. Scholarpedia, 2(1):1328.

Christophe Letellier and Otto E. Rossler (2007) Hyperchaos. Scholarpedia, 2(8):1936.

See also Bifurcations, Chaos, Chaotic neuron, Complexity, Dynamical systems, Memory, Metastability in the brain, Neural correlates of consciousness, Synergetics
