# Neuropercolation

Curator: Robert Kozma

Neuropercolation is a family of stochastic models based on the mathematical theory of probabilistic cellular automata on lattices and random graphs and motivated by structural and dynamical properties of neural populations. The existence of phase transitions was demonstrated both in continuous and discrete state space models, e.g. in specific probabilistic cellular automata and percolation models. Neuropercolation extends the concept of phase transitions to large interactive populations of nerve cells.

## Probabilistic Cellular Automata: Definitions and Basic Properties

### Cellular Automata

Figure 1: Illustration of percolation on the 2-dimensional torus, with local update rule given by $$\ell=2\ ,$$ i.e., a site becomes active if at least 2 of its neighbors are active. The first 4 iteration steps are shown. At the 8th step all sites become active, i.e., the initial configuration percolates over the torus (Bollobas, 2001).

In a basic two-state cellular automaton, the state of any lattice point $$x \in \mathbb{Z}^d$$ is either active or inactive. The lattice is initialized with some (deterministic or random) configuration. The states of the lattice points are updated (usually synchronously) based on some (deterministic or probabilistic) rule that depends on the activations of their neighborhood. For related general concepts, see cellular automata such as Conway's Game of Life, Chua's cellular neural network, as well as thermodynamic models like the Ising model and Hopfield nets (Berlekamp et al, 1982; Kauffman, 1990; Hopfield, 1982; Brown and Chua, 1999; Wolfram, 2002).

### Bootstrap Percolation

In the original bootstrap percolation model, sites are active in the initial configuration independently with probability $$p\ .$$ The update rule, however, is deterministic: an active site always remains active, and an inactive site becomes active if at least $$\ell$$ of its neighbors are active at the given time (Aizenman and Lebowitz, 1988). If the iterations ultimately lead to a configuration in which all sites are active, it is said that there is percolation in the lattice. A main question in bootstrap percolation concerns the presence of percolation as a function of lattice dimension $$d\ ,$$ initial probability $$p\ ,$$ and neighborhood parameter $$\ell\ .$$ It can be shown that on the infinite lattice $$\mathbb{Z}^d\ ,$$ there exists a critical probability $$p_c=f(d,\ell)$$ such that there is percolation for $$p>p_c\ ,$$ and no percolation for $$p<p_c\ ,$$ with probability one. The critical probability defines a phase transition between conditions leading to percolation and conditions which do not percolate (Balister et al., 1993, 2000; Bollobas and Stacey, 1997). For a finite lattice, such as the $$d$$-dimensional torus $$\mathbb{Z}^d_N\ ,$$ the probability of percolation is a continuous function of $$p\ ,$$ and hence there is no precise threshold value for $$p\ .$$ However, the probability of percolation rises rapidly from a value close to zero to a value close to one near some threshold function $$p_c=f(N,d,\ell)\ .$$
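The bootstrap rule above is easy to simulate. The following sketch (an illustration in Python; the function names are ours, not from the literature) iterates the $$\ell$$-neighbor update on a 2-dimensional torus with 4 nearest neighbors and tests whether the initial configuration percolates:

```python
def bootstrap_step(active, N, ell):
    """One synchronous update on the N x N torus: active sites stay active;
    an inactive site becomes active if at least `ell` of its 4 nearest
    neighbors are active."""
    new = set(active)
    for i in range(N):
        for j in range(N):
            if (i, j) in active:
                continue
            nbrs = [((i - 1) % N, j), ((i + 1) % N, j),
                    (i, (j - 1) % N), (i, (j + 1) % N)]
            if sum(n in active for n in nbrs) >= ell:
                new.add((i, j))
    return new

def percolates(active, N, ell=2, max_steps=10000):
    """Iterate until the configuration stabilizes; percolation means
    every site ends up active."""
    for _ in range(max_steps):
        new = bootstrap_step(active, N, ell)
        if new == active:
            break
        active = new
    return len(active) == N * N

# A full diagonal percolates under the l = 2 rule (cf. Fig. 1):
print(percolates({(i, i) for i in range(8)}, 8))  # True
# A single active site cannot grow, hence no percolation:
print(percolates({(0, 0)}, 8))                    # False
```

Under the $$\ell=2$$ rule, the diagonal activates its neighboring diagonals step by step until the torus fills, mirroring the behavior in Fig. 1.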

• Example 1 (Percolation threshold on infinite lattices): In the case of the 3-dimensional (infinite) lattice ($$d=3$$), a simple example of local neighborhood consists of the 6 direct neighbors of the site, and itself. Selecting $$\ell=3$$ means that an inactive site becomes active if at least 3 of its neighbors are active. It is shown that for $$d=3\ ,$$ $$\ell=3$$ the critical probability $$p_c=0$$ (Schonmann, 1992).
• Example 2 (Percolation on the finite torus): It is of practical interest to study bootstrap percolation on finite lattices. E.g., $$\mathbb{Z}^d_N$$ denotes the $$d$$-dimensional torus of size $$N^d\ .$$ For $$d=3\ ,$$ $$\ell=3\ ,$$ Cerf and Cirillo (1999) proved the conjecture of Adler, van Enter, and Duarte (1990), Adler (1991), extending the above result of Schonmann (1992), that the threshold probability is of the order $$1/\log\log N\ ,$$ for a sequence of bootstrap percolation models as $$N\to\infty\ .$$ An example of percolation on the 2-dimensional torus, $$d=2\ ,$$ and $$\ell=2$$ is given in Fig.1.

### Random Bootstrap Percolation

Standard bootstrap percolation has the strict limitation that an active site always remains active. This condition is relaxed in random bootstrap percolation, which can model, for example, percolation in a polluted environment (Gravner and McDonald, 1997). Accordingly, at every iteration step, an active site is removed with dilution probability $$q\ .$$ In the case of the 2-dimensional lattice with the 2-neighbor rule $$\ell=2\ ,$$ the process percolates with probability one if $$q/p^2$$ is small enough, and there is no percolation in the opposite case. Generalizations of the original bootstrap percolation models are abundant. A systematic overview of the state of the art of percolation is given in Bollobas and Riordan (2006). Neuropercolation describes further generalizations of random bootstrap percolation motivated by principles of neural dynamics.
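A minimal sketch of the diluted update, assuming the same 4-neighbor torus setting as in ordinary bootstrap percolation (illustrative code, our own naming):

```python
import random

def random_bootstrap_step(active, N, ell, q, rng):
    """One step of random bootstrap percolation on the N x N torus:
    grow by the ell-neighbor rule, then remove each active site
    independently with dilution probability q."""
    grown = set()
    for i in range(N):
        for j in range(N):
            nbrs = [((i - 1) % N, j), ((i + 1) % N, j),
                    (i, (j - 1) % N), (i, (j + 1) % N)]
            if (i, j) in active or sum(n in active for n in nbrs) >= ell:
                grown.add((i, j))
    return {site for site in grown if rng.random() >= q}

# With q = 0 the process reduces to ordinary bootstrap percolation:
state = {(i, i) for i in range(8)}
rng = random.Random(0)
for _ in range(20):
    state = random_bootstrap_step(state, 8, 2, 0.0, rng)
print(len(state))  # 64: the whole torus is active
```

For $$q>0$$ the growth competes with dilution; as cited above, percolation on the 2-dimensional lattice survives when $$q/p^2$$ is small enough.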

## Basic Principles of Dynamics of Neural Masses

The continuum approach to the brain leads to the concept of neural mass, whose spatiotemporal activity can be interpreted in terms of dynamical systems theory (Babloyantz and Destexhe, 1986; Schiff et al, 1994; Hoppensteadt and Izhikevich, 1998; Freeman, 2001; Stam et al., 2005; Steyn-Ross et al, 2005). Some models utilize encoding in complex cycles and chaotic attractors (Aihara et al, 1990; Andreyev et al., 1996; Ishii et al, 1996; Borisyuk and Borisyuk, 1997; Kaneko and Tsuda, 2001). A hierarchical approach to neural dynamics was formulated by Freeman (1975, 2001) and is summarized as the 10 Building Blocks of the dynamics of neural populations. Here we list the first 5 principles relevant to neuropercolation at present:

• State transition of an excitatory population from a point attractor with zero activity to a non-zero point attractor with steady-state activity by positive feedback.
• Emergence of oscillations through negative feedback between excitatory and inhibitory neural populations.
• State transitions from a point attractor to a limit cycle attractor that regulates steady-state oscillation of a mixed excitatory-inhibitory cortical population.
• Genesis of chaos as background activity by combined negative and positive feedback among three or more mixed excitatory-inhibitory populations.
• Distributed wave of chaotic activity that carries a spatial pattern of amplitude modulation made by the local heights of the wave.

Various components of these and related neurodynamic principles have been implemented in computational models. For example, the Katchalsky K-models use a set of ordinary differential equations with distributed parameters to describe the hierarchy of neural populations from micro-columns to the hemispheres (Freeman et al, 2001; Kozma et al, 2003). Neuropercolation, on the other hand, uses tools of percolation theory and random graphs to model principles of neurodynamics based on a discrete approach. Extensive work has been conducted on the formation and dynamics of structural and functional clusters in the cortex (Bressler, 2006; Sporns, 2006; Jirsa and McIntosh, 2007). Neuropercolation describes these effects in discrete models, and future studies aim at establishing the connection between the discrete and continuous approaches.

## Generalizations of Percolation Theory for Neural Masses

### Properties of Neuropercolation Models

Basic bootstrap percolation has the following properties: (i) it is a deterministic process following random initialization; (ii) the model always progresses in one direction, i.e., from inactive states to active ones and never backwards. Under such conditions, these mathematical models exhibit phase transitions with respect to the initialization probability $$p\ .$$ Neuropercolation models develop neurobiologically motivated generalizations of bootstrap percolation. Neuropercolation incorporates the following major conditions, inferred from the features of the neuropil, the filamentous neural tissue in the cortex.

• Interaction with noise: The dynamics of interacting neural populations is inherently non-deterministic due to dendritic noise and other random effects in the nervous tissue, as well as external noise acting on the population. This is expressed by Szentagothai (1978, 1990): "Whenever he is looking at any piece of neural tissue, the investigator becomes immediately confronted with the choice between two conflicting issues: the question of how intricate wiring of the neuropil is strictly predetermined by some genetically prescribed blueprint, and how much freedom is left to chance within some framework of statistical probabilities or some secondary mechanism of trial and error, or selecting connections according to necessities or the individual history of the animal." A possible resolution of the determinism-randomness dilemma was based on the principle described as "randomness in the small and structure in the large" (Anninos et al. 1970, Harth et al. 1970). Neuropercolation includes randomness in the evolution rules, as described in random cellular automata and other models. Randomness plays a crucial role in neuropercolation models. The situation resembles the case of stochastic resonance (Moss and Pei, 1995; Bulsara and Gammaitoni, 1996). An important difference from stochastic resonance is the more intimate relationship between noise and the system dynamics, due to the excitable nature of the neuropil (Kozma et al., 2001; Kozma, 2003).
• Long axon effects: Neural populations stem ontogenetically in embryos from aggregates of neurons that grow axons and dendrites and form synaptic connections of steadily increasing density. At some threshold the density allows neurons to transmit more pulses than they receive, so that an aggregate undergoes a state transition from a zero point attractor to a non-zero point attractor, thereby becoming a population. Relevant behaviors have been described in random graphs, and conditions for phase transitions have been given (Erdos and Renyi, 1960; Bollobas, 1985). In neural populations, most of the connections are short, but there are relatively few long-range connections mediated by long axons (Das and Gilbert, 1995). The effect of long-range axons is similar to small-world phenomena (Watts and Strogatz, 1998; Strogatz, 2001), and it is part of the neuropercolation model.
• Inhibition: Another important property of neural tissue is that it contains two basic types of interactions: excitatory and inhibitory. Highly active excitatory populations positively influence (excite) their neighbors, while highly active inhibitory neurons negatively influence (inhibit) the neurons they interact with. Inhibition contributes to the emergence of sustained narrow-band oscillatory behavior in the neural tissue (Aradi et al., 1995; Arbib et al., 1997). Inhibition is key in various brain structures; e.g., hippocampal interneurons are almost exclusively inhibitory (Freund and Buzsaki, 1996). Inhibition is inherent in cortical tissues and it controls the stability and metastability observed in brain behaviors (Kelso, 1995; Xu and Principe, 2004; Ilin and Kozma, 2006; Kelso and Engstrom, 2006; Kelso and Tognoli, 2007). Inhibitory effects are part of neuropercolation models.

Neural populations may exhibit scale-free behavior in their structure, dynamics, and function (Aldana and Larralde, 2004; Sporns, 2006; Scale-Free Neocortical Dynamics). Neuronal avalanches have been identified as processes leading to scale-free dynamics in cortical tissue (Beggs et al, 2003). Scale-free behavior in random graphs has been rigorously analyzed by percolation methods (Bollobas, 2001; Bollobas and Riordan, 2003, 2006). Physical and computational modeling of scale-free phenomena, including preferential attachment, produced some spectacular results (Albert and Barabasi, 2002; Barabasi, 2002; Newman et al., 2002). See also Scale-free Neocortical Dynamics entry in this Encyclopedia.

### Probabilistic Cellular Automata

A broad family of probabilistic cellular automata is defined over the $$d$$-dimensional discrete torus $$\mathbb{Z}^d_N$$ (Balister et al., 2006). Let $$A$$ be the set of possible states. In the simplest case there are just 2 states, active (+) and inactive (-); this case is considered here. The (closed) neighborhood of node $$x$$ is denoted by $$\Gamma_x \subset \mathbb{Z}^d_N\ .$$ At a given time instant $$t\ ,$$ $$x$$ becomes active with a probability that is a function of the states of the sites in $$\Gamma_x\ .$$ Since $$\Gamma_x$$ is a closed neighborhood, this probability may depend on the state of $$x$$ itself. Accordingly, $$p$$ is a function $$p\colon A^{\Gamma_x}\times A\to [0, 1]$$ that assigns to each configuration $$\phi\colon\Gamma_x\to A$$ and each $$a\in A$$ a probability $$p_{\phi,a}$$ with $$\sum_{a\in A}p_{\phi,a}=1$$ for all $$\phi\ .$$ We define a sequence of configurations $$\Phi_t\colon\mathbb{Z}^d_N\to A$$ by setting $$\Phi_{t+1}(x)=a$$ independently for each $$x\in \mathbb{Z}^d_N$$ with probability $$p_{\phi,a}\ ,$$ where $$\phi$$ is the restriction of $$\Phi_t$$ to $$\Gamma_x\ .$$ We start the process with some specified initial distribution $$\Phi_0$$ over the torus. The process $$\Phi_t$$ is called a probabilistic cellular automaton (PCA). These models have also been referred to as contact processes and have been studied in some cases on infinite graphs (Holley and Liggett, 1995). Probabilistic cellular automata generalize deterministic cellular automata such as Conway's Game of Life. Probabilistic automata display very complex behavior, including fixed points, stable limit cycles, and chaotic behaviors, which pose extremely difficult mathematical problems and are beyond the reach of thorough analysis in general. Several rigorous results have been achieved in specific instances.

### Isotropic Cellular Automata

It is often assumed that $$p_{\phi,a}$$ depends only on the cardinality of the set of the neighbors which are in active state, and on the state of the given site. These models are called isotropic. Then the notation $$p^{-}_r$$ is used instead of $$p_{\phi,+}\ ,$$ where $$r$$ is the number of active sites in $$\Gamma_x$$ and $$\Phi(x) = -\ .$$ Similarly, $$p^{+}_r$$ is used for the given $$r$$ and with the condition $$\Phi(x) = +\ .$$ Isotropic models are substantially more restrictive than the general case, but they still have complex behavior, sometimes including spontaneous symmetry breaking (Balister et al, 2006). We call the model fully isotropic if $$p^{+}_r=p^{-}_r=p_r$$ for all $$r\ .$$ In this case, the site itself is treated on the same basis as its neighbors. If the behavior of the isotropic model is unchanged while interchanging + and -, it is called symmetric.

• Example 3 (Probabilistic Cellular Automata with Majority Voting Rule): Consider the two-dimensional torus of lattice size $$N \times N\ .$$ Let $$\Gamma_x$$ be the local neighborhood consisting of 5 nodes, i.e., the 4 nearest neighbors and the node itself. For a fixed probability $$0 < p < 1\ ,$$ probabilistic majority voting is expressed as follows. The probability of being active at the next time step is $$(1-p)$$ if the majority of the nodes in the neighborhood are active, and $$p$$ if only a minority of the nodes are active; $$p$$ thus acts as a noise parameter. These models are also called $$p$$-majority percolation. The majority voting rule defines an isotropic and symmetric model with transition probabilities $$p^{-}_{0} = p^{-}_{1} = p^{-}_{2}$$ = $$p = p^{+}_{0} = p^{+}_{1} = p^{+}_{2}\ ,$$ and $$p^{-}_{3} = p^{-}_{4} = p^{-}_{5}$$ = $$(1-p) = p^{+}_{3} = p^{+}_{4} = p^{+}_{5}\ .$$
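As an illustrative sketch (our own code, not part of the cited analysis), the $$p$$-majority update of Example 3 can be simulated directly, treating $$p$$ as the probability of deviating from the local majority:

```python
import random

def majority_pca_step(state, N, p, rng):
    """One synchronous p-majority update on the N x N torus.  The
    neighborhood of a site is the site itself plus its 4 nearest
    neighbors; the site follows the majority state of this neighborhood
    with probability 1 - p and the minority state with probability p."""
    new = [[0] * N for _ in range(N)]
    for i in range(N):
        for j in range(N):
            active = (state[i][j]
                      + state[(i - 1) % N][j] + state[(i + 1) % N][j]
                      + state[i][(j - 1) % N] + state[i][(j + 1) % N])
            majority = 1 if active >= 3 else 0
            new[i][j] = majority if rng.random() >= p else 1 - majority
    return new

# Low noise (p well below the critical value): the high-density state persists.
rng = random.Random(0)
state = [[1] * 16 for _ in range(16)]
for _ in range(50):
    state = majority_pca_step(state, 16, 0.05, rng)
print(sum(map(sum, state)) / 16**2)  # density stays close to 1
```

Near the critical noise level the density instead wanders between the two metastable phases.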

### Mean Field Models

Mean field models are related to probabilistic cellular automata as follows. In the mean field model, instead of considering the number of active nodes in the specified neighborhood $$\Gamma\ ,$$ the activations of $$|\Gamma|$$ randomly selected grid nodes are used in the update rule (with replacement). Since there is no ordering of the neighbors, the transition probabilities depend only on the number of active states in the selected $$|\Gamma|$$-tuples. Clearly, the mean field model does not depend on the topology of the grid. Considering a 2D torus of size $$N\times N\ ,$$ the density of active points $$\rho_t\in[0,1]$$ is defined as $$\rho_t = N_{A,t}/N^2\ ,$$ where $$N_{A,t}$$ is the number of active nodes on the torus at time $$t\ .$$ The density $$\rho_t$$ acts as an order parameter and can exhibit various dynamic behaviors depending on the details of the probabilistic rules. Mean field models are mathematically more tractable at present, and they provide initial insight into the dynamics of more general neuropercolation models.
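Since sampling $$|\Gamma|$$ node states with replacement is equivalent to drawing each of them as active with probability $$\rho_t\ ,$$ the mean field dynamics can be simulated without any grid topology. The sketch below (our illustration, using the 5-site majority rule with noise $$p$$) tracks the number of active nodes:

```python
import random

def mean_field_step(n_active, n_total, p, gamma, rng):
    """One mean field update: every node samples `gamma` node states
    with replacement (each is active with probability rho_t) and applies
    the majority rule, following the majority with probability 1 - p."""
    rho = n_active / n_total
    new_active = 0
    for _ in range(n_total):
        active = sum(rng.random() < rho for _ in range(gamma))
        majority = active > gamma // 2
        follow = rng.random() >= p
        # new state is active iff (majority and follow) or (minority and flip)
        new_active += int(majority == follow)
    return new_active

# Below the critical noise level, the density settles near one of the
# two stable fixed points:
rng = random.Random(42)
n = 900
for _ in range(100):
    n = mean_field_step(n, 1000, 0.1, 5, rng)
print(n / 1000)  # remains in the high-density phase (around 0.89)
```

Starting instead from a low density, the same dynamics settles near the mirror-image low-density fixed point.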

## Mathematical Results on Phase Transitions in Neuropercolation

### Phase Transitions in Random Majority Percolation Models

In local models, a rigorous proof has been found of the fact that for extremely small values of $$p$$ (depending on the size of the lattice $$N$$) the model spends a long time in either low- or high-density configurations before the very rapid transition to the other state (Balister et al., 2005). Fairly good bounds have been found on the (very long) time the model spends in the two essentially stable states and on the (comparatively very short) time it takes to cross from one essentially stable state to another. The proof is only given for the case of a very weak random component. The behavior of the lattice models differs from that in the mean field model in the manner of these transitions. For the mean field model, transitions typically occur when random density fluctuations result in about one half of the states being active. When this occurs, the model passes through a configuration which is essentially symmetric between the low- and high-density configurations, and is equally likely then to progress to either one. In the lattice models, certain configurations with very low density can have a large probability of leading to the high-density configuration, and transitions from low to high density typically occur via one of these (non-symmetric) configurations.

Figure 2: Low density configuration of active sites on an $$N \times N$$ torus that nevertheless will with high probability lead to a high-density configuration in time $$O(N/p)\ .$$ Each band is of width at least 2 and wraps around the torus.

It is also known that there is a constant $$p_0<0.5$$ such that for $$p_0<p\le 0.5$$ the model spends most of its time with a density about 0.5, but for $$p<p_c$$ and $$N$$ sufficiently large, the model spends most of its time in either a low-density or a high-density state.

• Example 4 (Phase Transition in 2D Majority Percolation): Consider a 2-dimensional torus of size $$N \times N$$ with $$p$$-majority transition rules, when the neighborhood contains 5 sites. Then one only needs two thin intersecting bands of active sites to ensure a high probability of reaching the high-density state in a short time; an example of the required 2-band configuration is shown on Figure 2. The transition is proven for probability $$p \propto 1/N^{2}\ ,$$ and it is conjectured to be valid for a broader range of probabilities (Balister et al., 2005).

### Large-scale Deviations in Mean-field Models of Probabilistic Cellular Automata

In the mean field models described previously, a given number of randomly selected grid nodes are used in the update rule (with replacement). The number of selected sites is chosen as the cardinality of the neighborhood set $$|\Gamma|\ .$$ Mean field models have at least one stable fixed point and can have several stable and unstable fixed points, limit cycles, and chaotic oscillations. For large lattice size $$N\ ,$$ the density of active sites $$\rho_{t+1}$$ is approximately normally distributed with mean $$f_{m}(\rho_t)\ ,$$ where (for a fully isotropic model):

$f_m(x) = \sum_r{{|\Gamma|} \choose {r}}p_rx^r(1-x)^{|\Gamma|-r}.$

Iterations of the map $$\rho_{t+1} = f_{m}(\rho_t)$$ can result in stable fixed points, limit cycles, or chaotic behavior depending on the initial value $$\rho_0\ .$$ Various conditions have been derived for stable fixed point solutions, and phase transitions between stable fixed points have been analyzed in various mean field models (Balister et al., 2006).

Figure 3: Stable and unstable fixed points of the mean field models as a function of the system noise $$p\ .$$ Solid lines: stable fixed points; dashed lines: unstable fixed points.
• Example 5 (Phase Transitions in 2D Mean Field Models): Consider the symmetric fully isotropic mean field model on the 2-dimensional lattice. Transition probabilities reduce to the ones given in Example 3. A fixed point is determined by the condition $$x_{t} = f_{m}(x_{t})\ ;$$ this fixed point is denoted by $$\rho\ .$$ Using the majority update rule, one readily arrives at a transcendental equation for the fixed points. It can be shown that there is a single stable fixed point for $$p_c < p \leq 0.5\ ,$$ while there are two stable and one unstable fixed point for $$p < p_c\ .$$ Here $$p_c$$ is the critical probability, and the exact value $$p_c = 7/30$$ is derived for neighborhood size $$|\Gamma| = 5$$ and the majority update rule. Near the critical point, the density versus $$p$$ relationship approximates the following power law behavior with very good accuracy: $$|\rho - 0.5| \propto (p_{c} - p)^{\beta},$$ where $$\beta =0.5\ .$$ Figure 3 illustrates the stable density values as solid lines. Density level 0.5 is the unique stable fixed point of the process above the critical point $$p\ge p_c\ ,$$ while it becomes unstable below $$p_c\ .$$
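The fixed point structure of Example 5 can be verified numerically. In the sketch below (our own check, not from the cited derivation), $$f_m$$ is the mean field map with $$p_r = p$$ for $$r \le 2$$ and $$p_r = 1-p$$ for $$r \ge 3\ ;$$ one can check that its derivative at $$x=0.5$$ equals $$(1-2p)\cdot 15/8\ ,$$ which crosses 1 exactly at $$p_c = 7/30\ :$$

```python
from math import comb

GAMMA = 5  # neighborhood size

def f_m(x, p):
    """Mean field map for the fully isotropic 5-site majority rule:
    p_r = p when a minority (r <= 2) is active, p_r = 1 - p otherwise."""
    return sum(comb(GAMMA, r) * (p if r <= 2 else 1 - p)
               * x**r * (1 - x)**(GAMMA - r) for r in range(GAMMA + 1))

def fixed_point(p, x0, iters=500):
    """Iterate rho_{t+1} = f_m(rho_t) starting from x0."""
    x = x0
    for _ in range(iters):
        x = f_m(x, p)
    return x

p_c = 7 / 30
# Above p_c, iterates converge to the unique stable fixed point 0.5;
# below p_c, a symmetric pair of stable fixed points brackets 0.5.
print(fixed_point(0.30, 0.9))  # -> 0.5 (to machine precision)
print(fixed_point(0.10, 0.9))  # -> high-density fixed point (about 0.89)
```

By the +/- symmetry of the rule, the two stable fixed points below $$p_c$$ are mirror images about 0.5.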

### Transition Time in Majority Percolation and Mean Field Models

The average time between transitions is governed by the average time it takes for one of these special configurations to occur (see Fig. 2), and transitions do not typically go through symmetric configurations. A snapshot of the model transitioning halfway from a high-density to a low-density configuration will look very different from a snapshot of the transition from a low- to a high-density configuration. On an $$N \times N$$ torus in the case of local majority percolation, the average time between transitions is $$\exp(O(N \log p))$$ (Balister et al, 2005). For the mean field model, the average waiting time between transitions is $$\exp(O(N^2 \log p))$$ (Balister et al., 2006). The transition itself is, however, fast, requiring only time $$O(N/p)\ .$$ The rapid transitions between persistent states can be interpreted in the context of metastability, as introduced in the HKB model by Kelso and colleagues (Kelso, 1995; Kelso and Tognoli, 2007). The theoretical results justify the terminology 'neuropercolation': an exponentially long waiting period is followed by a quick transition from one metastable state to another, and the quick transition can be described effectively as a percolation phenomenon.

### Open Mathematical Problems

Probabilistic cellular automata, random majority percolation, and various neuropercolation models are relatively new and little-known mathematical objects. They pose a number of challenging mathematical problems, including the following: What is the behavior of the $$p$$-majority cellular automata in the general case? What are the conditions for stable states? Is there a phase transition depending on $$p\ ?$$ How does additional randomness, e.g., rewiring with long-range connections, influence the dynamics? How can one estimate the time the system stays in a stable state before it flips into another stable state? Answering these and many related questions with mathematical rigor is beyond reach at present. Computational simulations can provide guidance for working hypotheses toward further mathematical analysis, as described in the next section.

## Computational Models of Neuropercolation and Critical Behavior

### Critical Behavior in Local Probabilistic Cellular Automata

Figure 4: Snapshots of 3 PCA systems with noise levels $$p$$ = 0.11, 0.134, and 0.20, respectively. The second diagram illustrates critical behavior, while the other two figures show subcritical (ferromagnetic) and supercritical (paramagnetic) regimes.

As opposed to mean field models, an analytical solution is not available for the local models, and computer simulations are used to study these systems. First, the nearest neighbor configuration is considered with $$p$$-majority percolation on the 2-dimensional torus. Figure 4 illustrates the system behavior for $$p$$ values 0.11, 0.134, and 0.20, respectively. The first panel of Fig. 4 is for $$p=0.11\ ,$$ and one can see the dominance of active sites (white). This is an illustration of clear nonzero magnetization, as in ferromagnetic states. On the third panel of Fig. 4, $$p=0.20$$ and the active and inactive sites are equally likely. The magnetization is close to zero (paramagnetic regime). The middle panel of Fig. 4 shows a behavior where very large clusters of active and inactive sites are formed. This case has been calculated for $$p=0.134\ .$$ Finite size scaling theory from statistical physics is applied to characterize the observed behavior.

The behavior of the local PCA is qualitatively similar to the mean field models shown in Fig. 3. Namely, there is a critical probability $$p_c\ ,$$ and for $$p>p_c$$ the stationary density distribution of $$\rho_t$$ is unimodal, while it becomes bimodal for $$p<p_c\ .$$ There are two phases, one with high density and one with low density, similarly to mean field models. Calculations show that the critical probability in the local model is significantly below the one obtained for the mean field: $$p_c\approx 0.134$$ compared to $$p_c\approx 0.233\ ,$$ respectively. The exponent of the power law scaling of $$m$$ near the critical point is different as well: compare $$\beta =0.5$$ for the mean field model with $$\beta \approx 0.130$$ for the local model. Methods of finite size scaling from statistical physics are used to interpret these findings, see the next section.

### Critical Exponents and Finite Size Scaling

The methodology previously developed for Ising spin glass systems (Binder, 1981) is applied here to characterize processes in PCA. If the number of active and inactive sites is equal at a given time, the activation density $$\rho_t$$ equals 0.5. This corresponds to a basal state in magnetic materials with no magnetization. Deviations from the 0.5 level, $$|\rho_t - 0.5|\ ,$$ signify magnetization. The expected value of the magnetization $$m$$ is estimated over a series of $$T$$ iterations as follows:

$<m> = <|\rho_t - 0.5|> \approx 1/T \sum_{t=1}^T{|\rho_t-0.5|}\ .$

The susceptibility is defined using the magnetization $$m$$ as

$\chi = <m^2> - <m>^2 \ .$

For the definition of the correlation length $$\xi\ ,$$ see (Makowiec, 1999).
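Given a recorded density time series $$\rho_t\ ,$$ these estimators are straightforward to compute; the following is an illustrative sketch (our own code):

```python
def magnetization_stats(densities):
    """Estimate the magnetization <m> = <|rho_t - 0.5|> and the
    susceptibility chi = <m^2> - <m>^2 from a density time series."""
    m = [abs(rho - 0.5) for rho in densities]
    T = len(m)
    mean_m = sum(m) / T
    mean_m2 = sum(v * v for v in m) / T
    return mean_m, mean_m2 - mean_m ** 2

# Toy series from a process pinned in the high-density phase:
mag, chi = magnetization_stats([0.9, 0.92, 0.88, 0.91, 0.89])
print(mag, chi)  # mean magnetization 0.4, small susceptibility
```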

For Ising systems, magnetization, susceptibility, and correlation length satisfy a power law scaling behavior near criticality. In order to determine whether the terminology critical behavior is justified in the case of neuropercolation models, various statistical properties of the computed processes have been evaluated.

Recall, that in mean field models, the scaling law for magnetization is given by Ex. 5, near the critical probability $$p_c\ .$$ The scaling laws for $$\chi$$ and $$\xi$$ are defined similarly:

$\chi \sim |p - p_c|^{-\gamma}, \qquad \xi \sim |p - p_c|^{-\nu}.$

The fourth order cumulant is defined as $$U(N, p) = <m^4>/<m^2>^2\ ,$$ where $$N$$ is the lattice size and $$p$$ is the noise parameter. Finite size scaling theory predicts that the fourth order cumulant curves for different lattice sizes intersect each other at a unique point which is independent of lattice size. The probability corresponding to this unique point is the critical probability, see Fig. 5.
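The cumulant is estimated from the same density series as the magnetization (illustrative code, our own naming):

```python
def binder_cumulant(densities):
    """Fourth order cumulant U = <m^4>/<m^2>^2 of the magnetization
    m_t = |rho_t - 0.5|.  Plotting U against p for several lattice
    sizes N, the curves intersect near the critical probability p_c."""
    m2 = [(rho - 0.5) ** 2 for rho in densities]
    mean_m2 = sum(m2) / len(m2)
    mean_m4 = sum(v * v for v in m2) / len(m2)
    return mean_m4 / mean_m2 ** 2

# A sharply peaked (ordered-phase) magnetization distribution gives U -> 1:
print(binder_cumulant([0.9] * 100))  # 1 up to rounding
```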

Figure 5: Critical probability estimation using the fourth order cumulants $$U(N, p)\ ;$$ the curves correspond to lattice sizes 45, 64, 91 and 128. The obtained value is $$p_c$$ = 0.13423 $$\pm$$ 0.00002 (Puljic et al., 2005).

In order to test the consistency of the critical behavior in neuropercolation models, the identity relationship $$2\beta + \gamma = 2\nu$$ has been evaluated. Recall that this identity holds for the critical exponents in Ising systems (Binder, 1980). The identity serves as a measure of the quality of the estimated critical exponents in a given system.

Table 1: Critical exponents and the error of the identity $$2\beta + \gamma = 2\nu$$ for various models.

|            | $$\beta$$ | $$\gamma$$ | $$\nu$$ | Error |
|------------|-----------|------------|---------|-------|
| PCA        | 0.1308    | 1.8055     | 1.0429  | 0.02  |
| TCA        | 0.12      | 1.59       | 0.85    | 0.13  |
| Ising (2D) | 0.125     | 1.75       | 1       | 0     |
| CML        | 0.115     | 1.55       | 0.89    | 0.00  |

The results of PCA calculations are summarized in Table 1, along with the parameters of the Ising system, the Toom cellular automaton (TCA), and coupled map lattice (CML) models. The 'Error' column indicates the error of the identity relationship between the critical exponents. As Table 1 shows, the identity is satisfied with high accuracy in the studied neuropercolation models. This indicates that the local PCA exhibits behavior close to an Ising model, i.e., it belongs to the weak Ising class (Kozma et al., 2005). This result also lends support to the terminology of generalized phase transitions in the context of the studied neuropercolation models. These concepts are generalized further in even more complex neuropercolation models with small-world effects and inhibition.

### Long-range Axonal and Inhibition Effects in Neuropercolation

Figure 6: Activation density as a function of the noise level in systems with no random long-range neighbors and with various ratios of remote neighbors (Kozma et al., 2005).

Long axon effects are modelled when a certain proportion ($$0 \leq q \leq 1$$) of regular lattice connections is replaced (rewired) by randomly selected links (Kozma et al., 2004; Puljic and Kozma, 2005). The case of $$q = 0$$ describes completely regular lattice connections, while $$q = 1$$ means that all connections are selected at random, as in mean field models. An intermediate value of $$q$$ characterizes a system with some rewiring, just as in small-world models (Strogatz, 2001).
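The rewiring construction can be sketched as follows (our illustration; the subsequent use of the neighbor lists in the update rule is omitted): each of the 4 local links of every site is replaced, with probability $$q\ ,$$ by a link to a uniformly random node on the torus:

```python
import random

def rewire_neighborhoods(N, q, rng):
    """Neighbor lists for the N x N torus: each of the 4 local links is
    kept with probability 1 - q and rewired to a uniformly random node
    with probability q (q = 0: regular lattice; q = 1: mean-field-like)."""
    nbrs = {}
    for i in range(N):
        for j in range(N):
            local = [((i - 1) % N, j), ((i + 1) % N, j),
                     (i, (j - 1) % N), (i, (j + 1) % N)]
            nbrs[(i, j)] = [
                (rng.randrange(N), rng.randrange(N)) if rng.random() < q
                else link
                for link in local]
    return nbrs

# q = 0 leaves the regular lattice intact:
rng = random.Random(7)
regular = rewire_neighborhoods(8, 0.0, rng)
print(regular[(0, 0)])  # [(7, 0), (1, 0), (0, 7), (0, 1)]
```

Every site keeps exactly 4 neighbors for any $$q\ ;$$ only the identity of the neighbors changes, matching the interpolation between the local and mean field curves in Fig. 6.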

Figure 6 contains results that generalize the mean field case, c.f., Fig. 3. Different curves correspond to different rewiring ratios (Kozma et al., 2005). The rightmost curve corresponds to the mean field case (all connections are rewired), while the leftmost curve describes the regular lattice with local connections only (no rewiring). Intermediate situations are shown by the curves between the local and mean field models.

Table 2: Critical exponents for neuropercolation models with various ratios of rewired (small-world, SW) connections; notations as in Table 1.

|            | $$\beta$$ | $$\gamma$$ | $$\nu$$ | Error |
|------------|-----------|------------|---------|-------|
| PCA: local | 0.1308    | 1.8055     | 1.0429  | 0.02  |
| SW: 6.25%  | 0.3071    | 1.1920     | 0.9504  | 0.09  |
| SW: 12.5%  | 0.4217    | 0.9873     | 0.9246  | 0.02  |
| SW: 100%   | 0.4434    | 0.9371     | 0.9026  | 0.02  |

The critical exponents obtained for models with various degrees of small-world effects are given in Table 2; notations are the same as in Table 1. In the case of the SW (6.25$$\%$$) model, 6.25$$\%$$ of the local lattice connections are rewired to randomly selected nodes. Table 2 shows that the non-local systems may belong to a weak-Ising class, where the hyperscaling relationship is approximately satisfied (Puljic and Kozma, 2005).

Figure 7: Phase lag values evolving in time for a two-layer lattice system with 6.25$$\%$$ nonlocal (axonal) connections for a system with 256 channels; (a) Noise level 13% (subcritical): high synchrony is seen across the array. (b) Noise level 15% (critical noise): there is spontaneous, intermittent desynchronization across the array. (c) Noise level 16% (super-critical noise): the synchrony between channels is diminished (Puljic, Kozma, 2006).

The behavior of the neuropercolation model with excitatory and inhibitory nodes is illustrated in Fig. 7. Due to the negative feedback, these models may generate sustained limit cycle and non-periodic oscillations, similar to the behavior previously observed in models based on coupled differential equations. The spatial distribution of synchronization shows that the subcritical regime is characterized by rather uniform synchronization patterns. The supercritical regime, on the other hand, shows high-amplitude, unstructured oscillations. Near critical parameters, intermittent oscillations emerge, i.e., relatively quiet periods of weak oscillations are followed by periods of intensive oscillations in the synchronization (Puljic and Kozma, 2006). The sparseness of connectivity to and from inhibitory populations acts as a control parameter, in addition to the system noise level $$p$$ and the rewiring ratio $$q\ .$$ The system shown in Figs. 7a-c has a few percent connectivity between excitatory and inhibitory units.

### Example of Ontogenetic Development and Criticality in the Neuropil

Figure 8: Illustration of self-organization of critical behavior in the percolation model of the neuropil. By way of structural evolution, the neuropil evolves toward regions of criticality or edge-of-criticality. Once the critical regions are established, the connectivity structure remains essentially unchanged. However, by adjusting the noise and gain levels, the system can be steered towards or away from critical regions (Kozma et al., 2005).

The following hypothesis is proposed regarding the emergence of critical behavior in the neuropil. At the embryonic stage, neural connectivity in the neuropil is sparse. Following birth, the connectivity increases and ultimately reaches a critical level at which the neural activity becomes self-sustaining. The brain tissue as a collective system is then at the edge of criticality. Through the combination of structural properties and dynamical factors, such as noise level and input gain, the system may transit between subcritical, critical, and supercritical regimes. This mechanism is illustrated in Fig. 8. By way of structural evolution, the neuropil evolves toward regions of criticality or edge-of-criticality. Once critical regions are established, the connectivity structure remains essentially unchanged. However, by adjusting the noise and/or gain levels, the system can be steered towards or away from critical regions. Clearly, the outlined mechanism is incomplete, and in realistic neural systems a host of additional factors play a crucial role. Nevertheless, the mechanism is very robust and may provide the required dynamical behavior under a wide range of real-life conditions.
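The connectivity threshold for self-sustaining activity can be illustrated with a toy simulation. This is a sketch only: it uses an Erdos-Renyi random graph rather than a realistic neuropil model, and the parameters $$n\ ,$$ mean degree $$k\ ,$$ and activation threshold $$\ell$$ are illustrative. Below a critical mean degree, activity started everywhere dies out; above it, a large active core sustains itself.

```python
import random

def sustained_fraction(n, k, ell, steps, rng):
    """Start with every node active on a random graph with mean degree k;
    at each step a node stays active only if at least `ell` of its
    neighbors are active.  Returns the surviving active fraction
    (effectively, the ell-core of the graph)."""
    p_edge = k / (n - 1)
    adj = [[] for _ in range(n)]
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p_edge:
                adj[u].append(v)
                adj[v].append(u)
    active = [True] * n
    for _ in range(steps):
        active = [sum(active[v] for v in adj[u]) >= ell for u in range(n)]
    return sum(active) / n

rng = random.Random(0)
# Sparse (embryonic-like) connectivity: activity collapses.
sparse = sustained_fraction(n=400, k=2.0, ell=3, steps=25, rng=rng)
# Denser (mature) connectivity: a self-sustaining core survives.
dense = sustained_fraction(n=400, k=8.0, ell=3, steps=25, rng=rng)
print(sparse, dense)
```

In this caricature, structural growth (increasing $$k$$) carries the system across the percolation threshold, after which dynamical parameters such as noise and gain can steer it near criticality without further structural change.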

## References

• Adler, J. (1991) Bootstrap percolation, Physica A, 171, 453-470.
• Adler, J., van Enter and Duarte, J.A. (1990) Finite-size effects for some bootstrap percolation models, J. Statist. Phys., 60, 322-332.
• Aihara, K., Takabe, T., Toyoda, M. (1990). Chaotic neural networks, Phys. Lett. A, 144(6-7), 333-340.
• Aizenman, M. and Lebowitz, J.L. (1988) Metastability effects in bootstrap percolation, J. Phys. A, 21, 3801-3813.
• Albert, R., Barabási, A.L., Statistical mechanics of complex networks, Rev. Mod. Phys. 74, 47 (2002).
• Aldana, M., Larralde, H. (2004) Phase transitions in scale-free neural networks: Departure from the standard mean-field universality class, Phys. Rev. E, 70, 066130.
• Andreyev, Y.V., Dimitriev, A.S., Kuminov D.A. (1996) 1-D maps, chaos and neural networks for information processing, Int. J. Bifurcation and Chaos, 6(4), 627-646.
• Anninos PA, Beek B., Csermely T., Harth E., Pertile G. (1970) Dynamics of neural structures, J.Theor. Biol., 26: 121-148.
• Aradi, I., Barna G., Erdi P. (1995), Chaos and Learning in the olfactory bulb, Int. J. Intel. Syst., 10(1), 89-117.
• Arbib, M.A., Erdi, P., Szentagothai, J. (1997) Neural Organization: Structure, Function, Dynamics, MIT Press, Cambridge, MA.
• Babloyantz, A., and Destexhe, A. (1986), Low-dimensional chaos in an instance of epilepsy, Proc. Natl. Acad. Sci. USA, 83, 3513-3517.
• Balister, P., Bollobas, B., and Stacey, A. (1993) Upper bounds for the critical probability of oriented percolation in two dimensions, Proc. Royal Soc. London Ser. A, 400, no. 1908, 202-220.
• Balister, P.N., Bollobas, B., and A. M. Stacey (2000) Dependent percolation in two dimensions, Probability Theory and Related Fields, 117, No.4, 495-513.
• Balister, P., Bollobas, B., Johnson, R., Walters, M. (2005) Majority Percolation (submitted, revised).
• Balister, P., B. Bollobas, R. Kozma (2006) Large-Scale Deviations in Probabilistic Cellular Automata, Random Structures and Algorithms, 29, 399-415.
• Barabasi A-L (2002) Linked. The New Science of Networks. Cambridge MA: Perseus Press.
• Beggs, J. M. and D. Plenz (2003). Neuronal avalanches in neocortical circuits. J Neurosci 23(35): 11167-77.
• Berlekamp, E.R., JH Conway, and RK Guy, (1982) Winning Ways for your mathematical plays, Vol. 1: Games in General, Academic Press, New York, NY.
• Binder, K. Finite scale scaling analysis of Ising model block distribution function, Z. Phys. B. 43, 119-140, 1981.
• Bollobas, B., and Stacey, A. (1997) Approximate upper bounds for the critical probability of oriented percolation in two dimensions based on rapidly mixing Markov chains, J. Appl. Probability, 34, no. 4, 859-867.
• Bollobas B (2001) Random Graphs. Cambridge Studies in Advanced Mathematics 2nd Ed. Cambridge University Press, Cambridge, UK.
• Bollobas, B., Riordan, O. (2003) Results on scale-free random graphs. Handbook of graphs and networks, 1-34, Wiley-VCH, Weinheim.
• Bollobas, B., Riordan, O. (2006) Percolation. Cambridge University Press, Cambridge, UK.
• Borisyuk, R.M., Borisyuk, G.N., (1997), Information coding on the basis of synchronization neuronal activity, Biosystems, 40(1-2), 3-10.
• Bressler, S.L., Tognoli, E. (2006) Operational principles of neurocognitive networks, Int J Psychophysiol., 60(2), 139-48.
• Brown, R., Chua, L. (1999) Clarifying chaos 3. Chaotic and stochastic processes, chaotic resonance and number theory, Int. J. Bifurcation and Chaos, 9, 785-803.
• Bulsara, A., Gammaitoni, L. (1996) Tuning in to noise. Physics Today, March, 1996, 39-45.
• Cerf, R. and Cirillo, E.N., (1999) Finite size scaling in three-dimensional bootstrap percolation, Ann. Probab., 27, no. 4., 1837-1850.
• Das, A., Gilbert, C.D. (1995) Long-range horizontal connections and their role in cortical reorganization revealed by optical recording of cat primary visual cortex. Nature, 375, 780-784.
• Erdos, P. and Renyi A. (1960). On the evolution of random graphs, Publ. Math. Inst. Hung. Acad. Sci. 5: 17-61.
• Freeman, W.J. (1975) Mass Action in the Nervous System. Academic Press, New York.
• Freeman, W.J. How Brains Make up Their Minds, Columbia University Press, 2001.
• Freeman, W.J., Kozma, R., and Werbos, P. J., (2001). Biocomplexity - Adaptive Behavior in Complex Stochastic Dynamical Systems, BioSystems, 59, 109-123.
• Freund T.F., Buzsaki G. (1996) Interneurons of the hippocampus. Hippocampus 6:347-470.
• Gravner, J. and McDonald, E., (1997) Bootstrap percolation in a polluted environment, J. Stat. Phys., 87 (3-4), 915-927.
• Grimmett, G. (1999) Percolation, Fundamental Principles of Mathematical Sciences, Springer-Verlag, Berlin.
• Grossberg, S. (1988), Nonlinear Neural Networks: Principles, Mechanisms, and Architectures, Neural Networks, 1, 17-61.
• Harth, E.M., Csermely, T., Beek, B., Lindsay, R.P. (1970) Brain functions and neural dynamics, J.Theor.Biol. 26: 93-100.
• Holley, R., T.M. Liggett (1995) Ann. Probability 5, 613–636.
• Hopfield, J.J., (1982) Neural networks and physical systems with emergent collective computational abilities, Proc. National Academy of Sciences USA, 79, 2554-2558.
• Hoppensteadt, F.C., Izhikevich, E.M. (1998) Thalamo-cortical interactions modeled by weakly connected oscillators: could the brain use FM radio principles? BioSystems, 48: 85-94.
• Ilin, R., Kozma, R. (2006) Stability of coupled excitatory–inhibitory neural populations and application to control of multi-stable systems, Phys. Lett. A 360, 66–83.
• Ishii, S., Fukumizu K., Watanabe S., (1996), A network of chaotic elements for information processing, Neur. Netw. 9(1), 25-40.
• Jirsa, V. K.; McIntosh, A.R. (Eds.) (2007) Handbook of Brain Connectivity, Understanding Complex Systems, Springer Verlag, Heidelberg. ISBN: 978-3-540-71462-0
• Kaneko K, Tsuda I. Complex Systems: Chaos and Beyond. A Constructive Approach with Applications in Life Sciences, 2001.
• Kauffman, S. A. (1990), Requirements for evolvability in complex systems: orderly dynamics and frozen components, Phys. D, 42, 135-152.
• Kelso, J. A. S. (1995) Dynamic Patterns: The Self-Organization of Brain and Behavior. MIT Press, Cambridge, MA.
• Kelso, J.A.S., Engstrom, D.(2006) The Complementary Nature. MIT Press, Cambridge, MA.
• Kelso, J.A.S, Tognoli, E., (2007) Toward a Complementary Neuroscience: Metastable Coordination Dynamics of the Brain, in: “Neurodynamics of Cognition and Consciousness,” Perlovsky, L. and Kozma, R. (eds), Understanding Complex Systems, Springer Verlag, Heidelberg.
• Kozma, R. and Freeman, W.J. (2001), Chaotic Resonance - Methods and applications for robust classification of noisy and variable patterns, Int. J. Bifurcation and Chaos, 11(6), 2307-2322.
• Kozma R, Freeman WJ, Erdi P. (2003) The KIV model – nonlinear spatio-temporal dynamics of the primordial vertebrate forebrain. Neurocomputing, 52: 819-826.
• Kozma, R., (2003) On the Constructive Role of Noise in Stabilizing Itinerant Trajectories, Chaos, 13(3), 1078-1090.
• Kozma, R., and Freeman, W.J., (2003) Basic Principles of the KIV Model and its application to the Navigation Problem, Int. J. Integrat. Neurosci., 2, 125-139.
• Kozma, R., Puljic, M., Balister, P., Bollobas, B., and Freeman, W. J. (2004). Neuropercolation: A random cellular automata approach to spatio-temporal neurodynamics. Lecture Notes in Computer Science, 3305, 435-443. http://repositories.cdlib.org/postprints/1013/
• Kozma, R., Puljic, M., Balister, P., Bollobas, B., and Freeman, W. J. (2005). Phase transitions in the neuropercolation model of neural populations with mixed local and non-local interactions. Biological Cybernetics, 92(6), 367-379. http://repositories.cdlib.org/postprints/999/
• Makowiec, D. (1999) Stationary states for Toom cellular automata in simulations, Phys. Rev. E 60, 3787-3796.
• Marcq, P., Chate, H., Manneville, P. (1997) Universality in Ising-like phase transitions of lattices of coupled chaotic maps, Phys. Rev. E, 55(3), 2606-2627.
• Moss, F. and Pei, X., (1995) Stochastic resonance - Neurons in parallel, Nature, 376, 211-212
• Newman, M.E.J., Jensen, I., Ziff, R.M. (2002) Percolation and epidemics in a two-dimensional small world, Phys. Rev. E, 65, 021904, 1-7.
• Puljic, M. and Kozma, R. (2005). Activation clustering in neural and social networks. Complexity, 10(4), 42-50.
• Puljic, M., Kozma, R. (2006) Noise-mediated intermittent synchronization of behaviors in the random cellular automaton model of neural populations, Proc. ALIFE X, MIT Press.
• Schiff, S.J. et al, (1994). Controlling chaos in the brain, Nature, 370, 615-620.
• Schonmann, R. (1992) On the behavior of some cellular automata related to bootstrap percolation, Ann. Probability, 20(1), 174-193
• Stam, C.J., et al. (2005) Nonlinear dynamical analysis of EEG and MEG: Review of an emerging field, Clinical Neurophysiology 116 (2005) 2266-2301.
• Sporns, O. (2006) Small-world connectivity, motif composition, and complexity of fractal neuronal connections, BioSystems, 85, 55-64.
• Steyn-Ross, D.A., Steyn-Ross, M.L., Sleigh, J.W., Wilson, M.T., Gillies, I.P., Wright, J.J. The Sleep Cycle Modeled as a Cortical Phase Transition, Journal of Biological Physics 31: 547-569, 2005.
• Strogatz, S.H. (2001) Exploring complex networks, Nature, 410(6825), 268-276.
• Szentagothai, J. (1978) Specificity versus (quasi-) randomness in cortical connectivity; in: Architectonics of the Cerebral Cortex Connectivity, Brazier, M.A.B., and Petsche, H. (Eds.), New York, Raven Press, pp.77-97.
• Szentagothai, J. (1990) "Specificity versus (quasi-) randomness" revisited, Acta Morphologica Hungarica, 38:159-167.
• Watts DJ, Strogatz SH. Collective dynamics of “small-world” networks. Nature 1998, 393: 440-442.
• Wolfram, S. (2002) A New Kind of Science, Wolfram Media Inc., Champaign, IL.
• Xu, D., J.C. Principe, IEEE Trans. Neural Networks 15 (2004) 1053.

Internal references