# Scale-free neocortical dynamics

Walter J. Freeman

Figure 1: Distributions of the numbers of connections at varying distances are schematized in log-log coordinates. The straight line indicates power-law 1/f.

Scale-free dynamics of neocortex is characterized by hierarchical self-similarities of patterns of synaptic connectivity and spatiotemporal neural activity, seen in power-law distributions of structural and functional parameters and in rapid state transitions between levels of the hierarchy.

## Introduction

According to von Neumann (1958), brains do in "very few short steps" what computers do with "exquisite numerical precision over many logical steps". The challenge is to characterize those few steps as neural operations by which spatiotemporal patterns emerge from interactions of cortical neurons over broadly distributed synaptic connections. The patterns are observed with multichannel recordings that serve to evaluate state variables as trajectories in brain state space and patterns as points along the trajectories. A hierarchy of patterns stems from recordings of microscopic axonal spike trains, mesoscopic dendritic field potentials, and macroscopic images of scalp EEG/MEG and fMRI. Each pattern manifests a part of a brain state that forms by a state transition. The "very few short steps" in the action-perception cycle of intention can be identified with state transitions across levels, upwardly through the hierarchy in perception and cognition and downwardly in planning and executing goal-directed actions.

Topographic mapping by parallel arrays of axons supports the fine structure that enables brains to discriminate changes in input as fine as single sensory receptors, and through focused control to restrict changes in output even to single motor units. In contrast, the steps between microscopic sensory inputs and microscopic motor outputs require wide divergence and convergence of axonal and dendritic connections to encompass very many neurons. Then the first of the "very few" central steps requires divergence from a high density of sensory information carried by few neurons to a low density of perceptual information carried by populations of neurons. The penultimate central step of responding requires convergence of motor information from a broad, low-density distribution of firing to a few selected neurons firing rapidly and with precision, that is, a localized high-density distribution of firing.

A proposed way to model the intervening central steps is to conceive the synaptic connections of cortex as forming a continuous, self-similar hierarchy of scale-free connectivity. For heuristic purposes this hierarchy must be divided into levels, because measurement presupposes the imposition of scales. Each level has the statistical properties of divergence and convergence though with differing emphases. The time and space scales of the technologies used to observe and measure brain structures and activities give access to three levels of connectivity among cortical neurons: microscopic among neurons (synapses), mesoscopic among populations of neurons (hypercolumns), and macroscopic among modules within each cerebral hemisphere (modular networks). Note that the scales are not intrinsic to the networks; they are imposed by the observers.

Necessary evidence to support the hypothesis of scale-free dynamics includes:

• power-law distributions of anatomical connectivity;
• power-law distributions of functional parameters of neural activity;
• widespread, near-simultaneous state transitions from each spatiotemporal pattern to the next, with each pattern retained long enough to measure and statistically verify its size, location, duration, spectral composition, and spatial texture (Freeman, 2006).
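As a concrete illustration of the first two criteria, the power-law character of a measured distribution can be checked by plotting it in log-log coordinates and fitting a straight line, whose slope estimates the exponent. The sketch below uses synthetic Pareto-distributed samples; all numbers are illustrative, not anatomical data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Draw synthetic samples from a power-law (Pareto) distribution with
# exponent alpha = 2.0, using inverse-CDF sampling with x_min = 1.
alpha = 2.0
samples = (1.0 - rng.random(100_000)) ** (-1.0 / (alpha - 1.0))

# Bin logarithmically and fit a straight line in log-log coordinates:
# a power-law density appears as a line of slope -alpha.
edges = np.logspace(0, 3, 30)
counts, _ = np.histogram(samples, bins=edges, density=True)
centers = np.sqrt(edges[:-1] * edges[1:])   # geometric bin centers
mask = counts > 0
slope, _ = np.polyfit(np.log10(centers[mask]), np.log10(counts[mask]), 1)
print(f"fitted log-log slope: {slope:.2f}")  # close to -alpha
```

Logarithmic binning, as used here, avoids the noisy tail that plagues linear binning of heavy-tailed data.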

## Random graph theory, brain structural connectivity: neurons, hypercolumns, modules

Concepts from theories of evolving random graphs and networks provide the framework for synthesizing descriptions of connectivity. The term "random" refers to indeterminacy and the assignment of a probability value to a link between each pair of nodes, not to lack of structure. The convention adopted here is that the random graph is a topological description in which distance is measured by the number of links and nodes between any pair. The random network has a metric by which distance is also calculated by the Euclidean length of a link in the 3-D brain. At the microscopic level the nodes represent neurons, and the links represent synapses between pairs of neurons. At the mesoscopic level the nodes represent local populations, and the links represent densities of types of synapses in local domains such as hypercolumns connected internally and externally by axonal and dendritic bundles. At the macroscopic level the nodes represent modules, lobes and Brodmann areas with distinctive cytoarchitectures that are linked by long tracts. A graph/network at one level is a node at a higher level by integration and condensation and vice versa by differentiation and expansion. Then a global, scale-free model can be adapted to include the three levels that provide the biological data required for parameterizing and testing the model.

Like a brain, a random graph is not static. It evolves its state subject to sudden state transitions based in its changing connectivity. Construction begins with a few nodes (vertices) having links (edges) among them (Bollobás, 2001). Each node is given the probability of an output connection to every other node (mostly zero in sparse networks), and the probability of an input connection from every other node. The properties of the graph are determined by functions giving the constraints on assignments of probability (see Fig.<ref>f1</ref>).

A graph evolves by addition at each time step of a new node with its input and output connections. The diameter of a graph is the maximum distance between any pair of nodes. The average path length is the average distance taken over all pairs of nodes in the graph. The density of connection ranges from very sparse to fully connected. A network also requires that the Euclidean lengths of the connections in 2-space or 3-space be specified.
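These properties can be computed directly on small examples. The sketch below measures diameter and average path length by breadth-first search on an arbitrary sparse directed graph; the sizes n and k are illustrative, not biological:

```python
from collections import deque
import random

def bfs_dists(adj, src):
    """Hop distances from src to every reachable node."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def graph_stats(adj):
    """Diameter (max hop distance) and average path length over reachable pairs."""
    total = pairs = diam = 0
    for src in adj:
        for dst, d in bfs_dists(adj, src).items():
            if dst != src:
                total += d
                pairs += 1
                diam = max(diam, d)
    return diam, total / pairs

# A sparse random directed graph stands in for cortex here:
# n nodes, each with k randomly chosen out-links.
random.seed(1)
n, k = 200, 6
adj = {i: random.sample([j for j in range(n) if j != i], k) for i in range(n)}
diam, apl = graph_stats(adj)
print(f"diameter={diam}, average path length={apl:.2f}")
```

As the text notes, this exhaustive all-pairs computation is feasible only for toy graphs; for cortex one must work with statistical distributions instead.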

Evaluation of these properties can be adapted to the estimation of structure in cortex but only by working with statistical distributions; owing to the immense numbers involved, the path lengths among all pairs are not computable. At the microscopic level the nodes are neurons that initially replicate in large numbers, migrate to the surface of the brain, and grow axons and dendrites by which they form synaptic connections. Neurons continue to branch, extend, and form new connections, long after replication and loss of excess neurons by apoptosis (programmed cell death) have ceased. In topological graphs connection distances are measured by the number of synapses (mono-, di-, poly-synaptic) between neurons. In networks the distances are measured by the radial lengths of axons/dendrites from cell bodies. The large surface area of the dendritic tree and the high packing density of neurons ($$10^5/mm^3$$) and synapses ($$10^9/mm^3$$) accommodate the $$10^4$$ synapses on each neuron sent by $$10^4$$ other neurons (Braitenberg and Schüz, 1998). Connection density is sparse, each neuron connecting with less than 1% of the neurons within its dendritic arbor; the likelihood of reciprocal connections between pairs is less than $$10^{-6}$$. Given that each neuron transmits to $$\sim 10^4$$ others, the number of its targets in 3 steps would approach $$10^{12}$$, which well exceeds the $$10^{10}$$ neurons in cortex, hence the depth of a cortical network can be estimated as 3.

Bollobás and Riordan (2002) derived a measure of the diameter of the scale-free network of $$n$$ nodes, which is $$\log n/\log\log n$$. The neocortical diameter can be calculated for each hemisphere from the number of neurons ($$0.5\times 10^{10}$$) and the number of synapses per neuron ($$10^4$$), giving $$n = 5\times 10^{13}$$ and a diameter of 12. The numbers far exceed those addressed by random graph theorists, justifying minimally a 3-level hierarchy of nodes as neurons (treating multiple possible synapses between any neuron pair as one edge), hypercolumns and modules. The reduction of $$10^{10}$$ neurons and $$10^{14}$$ synapses to a depth of 3 and a diameter of 12 underscores the simplification offered by random network theory for describing cortical dynamics, for which the first step is the description of its global topology.
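The arithmetic can be checked in a few lines. The asymptotic formula leaves the logarithm base unspecified, and at finite $$n$$ the base matters; the value of 12 quoted above is reproduced with base-10 logarithms, while natural logarithms give about 9:

```python
import math

def sf_diameter(n, log=math.log10):
    """Bollobas-Riordan asymptotic diameter estimate: log n / log(log n)."""
    return log(n) / log(log(n))

n = 0.5e10 * 1e4  # ~0.5x10^10 neurons x 10^4 synapses each = 5x10^13 nodes
d10 = round(sf_diameter(n))           # base-10 logs reproduce the article's 12
dn = round(sf_diameter(n, math.log))  # natural logs give about 9
print(d10, dn)
```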

## A neural example of power-law distribution

### in structural connectivity

Examples of differing constraints are schematized in Fig.<ref>f1</ref>. In the original formulation of the random graph by Erdős and Rényi (1960) the probability of connections is uniform with distance (Fig.<ref>f1</ref>, lower line). In random cellular networks the connections are restricted to nearest or next-nearest neighbor nodes (Chua, 1998). In small-world graphs a low percentage of the local connections is replaced with uniformly distributed long connections (e.g., Watts and Strogatz, 1998; Bollobás, 2001). This dramatically reduces the depth by bridging across many nodes. Small-world graphs are related to cortical connectivity by axons that project long distances but are few in number compared with local axons and collaterals in clusters. In current usage, however, the small-world concept applies to graphs such as the internet that lack a spatial metric, so it has limited utility in describing cortical networks.
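The depth-reducing effect of a few long links can be demonstrated with a Watts-Strogatz-style rewiring sketch; the sizes and rewiring probability below are illustrative:

```python
from collections import deque
import random

def ring_with_rewiring(n, k, p, rng):
    """Adjacency sets for a ring lattice (k neighbours per side), with each
    local link rewired to a uniformly random target with probability p."""
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(1, k + 1):
            b = (i + j) % n
            if rng.random() < p:  # rewire: replace the local link
                b = rng.randrange(n)
                while b == i or b in adj[i]:
                    b = rng.randrange(n)
            adj[i].add(b)
            adj[b].add(i)
    return adj

def diameter(adj):
    """Largest hop distance over all reachable pairs, by breadth-first search."""
    best = 0
    for src in adj:
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        best = max(best, max(dist.values()))
    return best

rng = random.Random(2)
d_lattice = diameter(ring_with_rewiring(200, 2, 0.0, rng))   # purely local links
d_rewired = diameter(ring_with_rewiring(200, 2, 0.05, rng))  # a few long bridges
print(d_lattice, d_rewired)  # the long links collapse the diameter
```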

Figure 2: Distributions of measurements of lengths of axons that were made in histological sections of Golgi preparations. The data were re-plotted from semi-log into log-log coordinates.

The distributions of axonal lengths are commonly thought to be exponential (Fig.<ref>f1</ref>, dashed curve). They may actually be power-law (the slanted line), because of experimental limitations in determinations of distributions of axonal lengths. For short distances, observations using light microscopy omit unmyelinated axons, which in electron micrographs substantially outnumber the myelinated axons. For long distances, the observations of axon lengths in Golgi preparations are made in tissue sections of limited thickness (e.g., 300 microns, Fig.<ref>f2</ref>) in small mammals (e.g., mouse) with deficits in long connections. Considering the well-documented self-similarity of many axonal and dendritic trees and cell bodies across scales (Bok, 1959), and pending further investigation of control mechanisms in large-scale embryological growth, it is reasonable to propose that the distributions of cortical structural connection lengths are power-law, and to recommend to neuroanatomists that they test this hypothesis.
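The diagnostic at issue is that in log-log coordinates a power law is a straight line of constant slope, whereas an exponential bends ever more steeply downward. A minimal numerical check, with exponent and length constant chosen arbitrarily:

```python
import numpy as np

r = np.logspace(0, 3, 50)   # distance, arbitrary units
power = r ** -2.0           # power-law fall-off, exponent 2 (illustrative)
expo = np.exp(-r / 100.0)   # exponential fall-off, length constant 100

# Local slope in log-log coordinates: constant for the power law,
# increasingly steep for the exponential.
slope_pl = np.gradient(np.log10(power), np.log10(r))
slope_ex = np.gradient(np.log10(expo), np.log10(r))
print(f"power law:   {slope_pl.min():.2f} .. {slope_pl.max():.2f}")
print(f"exponential: {slope_ex.min():.1f} .. {slope_ex.max():.2f}")
```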

In preferential random graphs/networks (Barabási, 2002), the probability of a node receiving new connections is proportional to the number of connections it already has. This principle is applicable both to single neurons attracting synaptic links and to parallel fiber growth into axonal and dendritic bundles. The formation of input and output pathways of cortical sensory and motor areas is dominated by topographic mapping, which arises by virtue of the manner in which pioneer neurons form initial connections that are followed by successive new arrivals. Here the nodes are not single neurons but local populations of neurons, manifested in dendritic bundles, barrels, bubbles, patches, hypercolumns, etc. Thus preferentiality is prominent in the development of various corticocortical connections, all within the context of randomness. Bollobás (2001) has shown that current models of preferentiality have an intrinsic inconsistency in getting started. This problem is addressed by modelers of neocortex, who have identified the pre-plate, a transitory layer early in cortical development, which provides the guidance needed by the "pioneer" neurons that construct the basis for subsequent emergence of topographic maps – and hubs. Traffic between theory and experiment is bidirectional.
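Preferential attachment is easy to sketch: each new node chooses targets with probability proportional to current degree, which a list holding one entry per link endpoint implements directly. The small seed set below plays the same bootstrapping role that the pre-plate scaffold plays in the biological account; all parameters are illustrative:

```python
import random
from collections import Counter

def preferential_attachment(n, m, seed=0):
    """Grow a graph node by node; each new node makes up to m links to
    existing nodes chosen with probability proportional to their degree."""
    rng = random.Random(seed)
    edges = []
    endpoints = list(range(m))  # seed nodes resolve the start-up problem
    for new in range(m, n):
        targets = {rng.choice(endpoints) for _ in range(m)}  # degree-biased draw
        for t in targets:
            edges.append((new, t))
            endpoints.extend((new, t))  # both ends gain attractiveness
    return edges

edges = preferential_attachment(2000, 3)
degree = Counter(node for edge in edges for node in edge)
mean_deg = 2 * len(edges) / len(degree)
print(f"max degree {max(degree.values())} vs mean {mean_deg:.1f}")  # hubs emerge
```

The heavily skewed degree distribution this growth rule produces is the origin of hubs, discussed below.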

Neural connection by synapses (not gap junctions) is unidirectional; reciprocal interconnections are by different axons and synapses. Topologically, directedness (Barabási, 2002) divides a large network into a giant component, one or more input components, one or more output components, and miscellaneous islands having insufficient densities of linkages to participate in the other three groups. The utility and importance of the input and output components for describing sensorimotor cortices are obvious. The newly developing concept of the “macroscopic giant component” is one of the most important and exciting ideas to emerge from graph theory into brain theory, as the possible basis for the unity of consciousness. The isolated "islands" offer a new theoretical approach to creativity by sudden insight, as described by Hadamard (1954) and Koestler (1990) and demonstrated physiologically by Ohl, Scheich and Freeman (2003).

### in a functional parameter

Figure 3: PSD of EEG from a human subject asleep (Freeman et al., 2006).

Special significance is attached to the graph/network with power-law distributions of connection distances (Wang and Chen, 2003; Breakspear, 2004; Chen and Shi, 2004), owing to functional properties introduced by self-similarity at different scales of observation and measurement. In the dynamics of random networks in the brain, as described by neuropercolation theory, the state of a structural scale-free network depends on the density of negative feedback connections between excitatory and inhibitory nodes, the ratio of short to long connections (the exponent of the power law), and the magnitude of the background noise. In a critical range determined by the network, a state transition occurs from subcritical oscillations with a flat power spectral density (PSD) (white noise (Schroeder, 1991)) to $$1/f^2$$ oscillations at criticality, and then to supracritical oscillations with a PSD peak in the gamma range (Kozma et al., 2006). The EEG of human and animal subjects at rest gives a $$1/f^\alpha$$ PSD with $$\alpha$$ near 2 (brown noise); in deep slow-wave sleep $$\alpha$$ is near 3 (black noise, Fig.<ref>f3</ref>). Subjects engaged in tasks give PSD with peaks of power in excess of the regression line, typically in the theta (3-7 Hz) and beta-gamma bands (20-80 Hz).
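The exponent $$\alpha$$ can be estimated from a recording as the negative slope of the PSD in log-log coordinates. The sketch below applies an averaged-periodogram estimate to synthetic brown noise ($$1/f^2$$), not to EEG data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Brown noise has a 1/f^2 spectrum; it is the running sum of white noise.
n, seg = 2 ** 16, 2 ** 12
brown = np.cumsum(rng.standard_normal(n))

# Averaged periodogram: split into segments, demean, window, average.
segments = brown.reshape(-1, seg)
segments = segments - segments.mean(axis=1, keepdims=True)
psd = (np.abs(np.fft.rfft(segments * np.hanning(seg), axis=1)) ** 2).mean(axis=0)
freqs = np.fft.rfftfreq(seg)

# alpha in 1/f^alpha is minus the slope of log PSD vs log frequency.
band = (freqs > 0.002) & (freqs < 0.1)
slope, _ = np.polyfit(np.log10(freqs[band]), np.log10(psd[band]), 1)
alpha = -slope
print(f"estimated alpha: {alpha:.2f}")  # near 2 for brown noise
```

The same regression applied to real resting EEG would, per the text, yield $$\alpha$$ near 2, with task-related peaks standing above the regression line.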

### in a parameter of state transition

Input to cortex can change its activity in either of two ways. Perturbation by input may drive it but not change its state. The ratio of the output function to the perturbing input function defines the operation that the cortex performs on the input to give the output. Examples are impulse inputs from electrical shocks, light flashes, auditory clicks, etc., giving evoked potentials that document cortical relaxation to its initial condition without change in state. The return to baseline allows repetition and averaging over trials to give event-related potentials. Contrastingly, an input to cortex may induce a state transition. The difference is crucial for understanding cortical dynamics. Both processes are initiated by input. Both can result in spatially coherent oscillations at multiple frequencies. In both, the oscillations are spatially modulated in the amplitude and phase of shared carrier waves.

Figure 4: Increasing the window duration increased the mean duration of phase cones (from Freeman et al., 2006).

In the relaxation process the amplitude modulation (AM) and phase modulation (PM) are imposed by the stimulus input. In the state transition the AM pattern emerges from the cortical connectivity that has previously been modified by learning (Freeman, 2006). The PM pattern has the form of a cone: a radially symmetric set of circular isophase contours centered at a point on the cortical surface. The phase cone shows that the state transition by which the AM pattern forms does not occur simultaneously everywhere but begins at a site of nucleation (the apex of the cone) and spreads radially at the conduction velocity of intracortical axons (not the velocity of the afferent axons). The radial spread is limited by a soft boundary condition at the half-power magnitude that results from progressive phase lag from the apex, giving the diameter of the AM pattern. The duration of the state is measured by how long the apex holds its sign and location on the cortical surface. The distributions of durations are power-law (Fig.<ref>f4</ref>).
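Given a set of measured durations, a power-law exponent is commonly estimated by continuous maximum likelihood rather than by histogram fitting, via the standard estimator $$\hat\alpha = 1 + n/\sum_i \ln(x_i/x_{min})$$. The sketch below applies it to synthetic durations; the value of x_min and all other numbers are illustrative, not measurements:

```python
import numpy as np

def powerlaw_mle(x, x_min):
    """Continuous maximum-likelihood estimate of the exponent alpha for
    samples x >= x_min assumed to follow p(x) ~ x^(-alpha)."""
    x = np.asarray(x, dtype=float)
    x = x[x >= x_min]
    return 1.0 + len(x) / np.sum(np.log(x / x_min))

rng = np.random.default_rng(3)
alpha_true = 2.5
x_min = 0.01  # e.g. shortest resolvable duration, in seconds (hypothetical)

# Inverse-CDF sampling of power-law distributed durations.
u = rng.random(50_000)
durations = x_min * (1 - u) ** (-1 / (alpha_true - 1))

est = powerlaw_mle(durations, x_min)
print(f"alpha estimate: {est:.2f}")  # near 2.5
```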

## Implications of scale-free neocortical dynamics

### Hubs

Preferentiality in a scale-free network, depending on the probabilities of input and output connections, supports the emergence of hubs, which are nodes of exceptionally high density of connectivity including long-distance links (Bollobás, 2001). Commonly studied networks with hubs include the route maps of major airlines and the patterns of connectivity on the Internet (Barabási, 2002). If neocortical connectivity and dynamics are scale-free, then for every cognitive function that can be adequately controlled, one or more hubs should be observable at which connection and activity densities are maximal. Then the hot spots of high metabolic activity revealed by macroscopic imaging (fMRI, etc.) provide evidence not for localized functions but for hubs in non-local macroscopic states that organize mesoscopic and microscopic functions.

### State transitions

Above a certain threshold of connection density, a scale-free network can undergo an abrupt state transition and resynchronize globally and virtually instantaneously, no matter how large its diameter. Scale-free dynamics can explain how mammalian brains operate on the same time scales despite differences in size ranging up to $$10^4$$-fold (mouse to whale).

### Focal lesions

Random lesions of scale-free networks have negligible effects; lesions of hubs are catastrophic. Examples in humans are coma and Parkinson’s disease from small brain stem lesions.
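The contrast between random and hub lesions can be simulated by growing a small preferential-attachment graph, so that hubs exist, and comparing the surviving largest component after deleting 20 random nodes versus the 20 best-connected nodes; all sizes are illustrative:

```python
import random
from collections import deque

def largest_component(adj, removed):
    """Size of the largest connected component once `removed` nodes are cut."""
    seen = set(removed)
    best = 0
    for start in adj:
        if start in seen:
            continue
        seen.add(start)
        size, queue = 0, deque([start])
        while queue:
            u = queue.popleft()
            size += 1
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        best = max(best, size)
    return best

# Grow a preferential-attachment graph: degree-biased target selection.
rng = random.Random(4)
n, m = 1000, 2
adj = {i: set() for i in range(n)}
endpoints = list(range(m))
for new in range(m, n):
    for t in {rng.choice(endpoints) for _ in range(m)}:
        adj[new].add(t)
        adj[t].add(new)
        endpoints.extend((new, t))

hubs = sorted(adj, key=lambda i: len(adj[i]), reverse=True)[:20]
randoms = rng.sample(sorted(adj), 20)

g_random = largest_component(adj, randoms)
g_hub = largest_component(adj, hubs)
print(f"after random lesion: {g_random}; after hub lesion: {g_hub}")
```

Random deletions leave the giant component nearly intact, while deleting the same number of hubs fragments it disproportionately, mirroring the clinical contrast noted above.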

## References

• Barabási A.-L. (2002) Linked. The New Science of Networks. Cambridge MA: Perseus.
• Bollobás B. (2001) Random Graphs. Cambridge Studies in Advanced Mathematics 2nd Ed. Cambridge UK: Cambridge UP.
• Bollobás B., Riordan O. (2002) The diameter of a scale-free random graph.
• Bok ST (1959) Histonomy of the Cerebral Cortex. Amsterdam: Elsevier.
• Braitenberg V., Schüz A. (1998) Cortex: Statistics and Geometry of Neuronal Connectivity, 2nd edition, Berlin: Springer-Verlag.
• Breakspear M. (2004) Dynamic connectivity in neural systems: Theoretical and empirical considerations. Neuroinformatics 2(2): 205-225.
• Chen Q., Shi D. (2004) The modeling of scale-free networks. Physica A 333: 240-248.
• Chua L.O. (1998) CNN. A Paradigm for Complexity. Singapore: World Scientific.
• Erdős P., Rényi A. (1960) On the evolution of random graphs. Publ Math Inst Hung Acad Sci 5: 17-61.
• Freeman W.J. (2006) Origin, structure, and role of background EEG activity. Part 4. Neural frame simulation. Clin Neurophysiol 117: 572-589.
• Freeman W.J., Holmes M.D., West G.A., Vanhatalo S. (2006) Fine spatiotemporal structure of phase in human intracranial EEG. Clin Neurophysiol 117: 1228-1243.
• Hadamard J (1954) The Psychology of Invention in the Mathematical Field. New York: Dover.
• Koestler A (1990) The Act of Creation. London: Penguin.
• Kozma R., Puljic M., Balister P., Bollobás B., Freeman W.J. (2006) Phase transitions in the neuropercolation model of neural populations with mixed local and non-local interactions. Biol Cybern 92: 367-379.
• Ohl FW, Deliano M, Scheich H, Freeman WJ (2003) Early and late patterns of stimulus-related activity in auditory cortex of trained animals. Biol. Cybernetics online: DOI 10.1007/s00422-002-0389-z
• Schroeder M. (1991) Fractals, Chaos, Power Laws: Minutes from an Infinite Paradise. San Francisco: WH Freeman.
• von Neumann J. (1958) The Computer and the Brain. New Haven CT: Yale UP.
• Wang X.F. and Chen G.R. (2003) Complex networks: small-world, scale-free and beyond. IEEE Trans Circuits Syst, 31: 6-20.
• Watts D.J. and Strogatz S.H. (1998) Collective dynamics of "small-world" networks. Nature 393: 440-442.