# Neuronal Synchrony Measures

David Golomb (2007), Scholarpedia, 2(1):1347. doi:10.4249/scholarpedia.1347

A **neuronal synchrony measure** is a number that quantifies the level
of synchrony of
a large population of neurons within a network. It is usually normalized to
lie between 0 and 1: it equals 0 when the neurons in the population fire
asynchronously, it equals 1 when all the neurons
fire in full synchrony, at exactly the same times, and
it takes intermediate values in partially
synchronized states, i.e., states in which the firing times of the neurons are related (synchronized) but not identical (fully synchronized).


## Networks with spatially-independent coupling

### Asynchronous and synchronous states

Synchrony is a property of the activity of large neuronal networks.
It is defined here for long-term dynamics under steady-state
conditions, i.e., after the state of the system has converged to
an attractor; this requires that the inputs the cells in the network
receive from cells outside the network are constant, or at least stationary, in time.
A simple case is a network of \(N\) neurons where the
coupling between cells does not depend on the distance between them.
Such networks can settle into two generic states
termed **asynchronous** and **synchronous**. The states differ in
the way the temporal fluctuations of a global variable, such as the
population-average voltage or population-average synaptic conductance,
change with \(N\ .\) **Asynchronous** states are defined as
states where these global variables approach a time-independent limit
as \(N \rightarrow \infty\ .\) This reflects the
fact that the action potentials of individual neurons are at most very weakly
correlated. Summing \(N\) temporally uncorrelated
(or weakly correlated)
contributions results in a population-average variable whose
fluctuations have an amplitude of the order of \(1/\sqrt{N}\ .\)
Such a state can be self-consistent because the weak temporal variation in
the inputs to the different neurons may be insufficient to synchronize them.

One way to characterize the level of synchrony in a network
is by evaluating a global variable, e.g.,
the population-averaged voltage, intracellular \(\mathrm{Ca}^{2+}\) concentration, or instantaneous activity. In
the asynchronous state, the variance of such a quantity vanishes
as \(N\) increases, typically as \(1/N\ .\)
In contrast, in synchronous states, there
are temporal fluctuations on a global scale.
The variance of the global activity, as well as the variance of
the total synaptic conductance of a neuron, remains of order
unity even for large \(N\ .\)

### Measure of synchrony in large neuronal networks

Since synchrony is related to the fluctuations of global variables, it can be defined by averaging these fluctuations over a long time (Golomb and Rinzel 1993, 1994, Hansel and Sompolinsky 1992, Ginzburg and Sompolinsky 1994). To normalize the synchrony measure, this average is divided by the average fluctuations in the variables of single neurons. It is common to do this averaging over the membrane potential \(V\ ,\) as presented here, although other variables such as the instantaneous activity of neurons or the synaptic variables can also be used. One evaluates at a given time, \(t\ ,\) the population-average membrane potential \(V(t)\) (Figure 1): \[\tag{1} V(t) = \frac{1}{N} \sum_{i=1}^{N} V_i(t) ~. \]

The variance of the time fluctuations of \(V(t)\) is \[\tag{2} \sigma_V^2 = \left\langle \left[ V(t) \right]^2 \right\rangle_t - \left[ \left\langle V(t) \right\rangle_t \right]^2 \]

where \(\left\langle \ldots \right\rangle_t = (1 / T_m) \int_0^{T_m} dt \, \ldots\) denotes time-averaging over a long time, \(T_m\ .\) After normalizing \(\sigma_V\) by the population average of the single-cell voltage variances, \[\tag{3} \sigma_{V_i}^2 = \left\langle\left[ V_i(t) \right]^2 \right\rangle_t - \left[ \left\langle V_i(t) \right\rangle_t \right]^2 ~, \]

one defines a synchrony measure, \(\chi (N)\ ,\) for the activity of a system of \(N\) neurons by: \[\tag{4} \chi^2 \left( N \right) = \frac{\sigma_V^2}{ \frac{1}{N} \sum_{i=1}^N \sigma_{V_i}^2} ~. \]
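As a concrete illustration, the computation in equations (1)-(4) can be sketched in a few lines of NumPy. The function name and the shape of the voltage array are illustrative assumptions, not part of the original formulation.

```python
import numpy as np

def chi(V):
    """Synchrony measure chi(N) of equation (4).

    V: array of shape (N, T) holding the membrane potentials V_i(t)
       of N neurons sampled at T time points.
    """
    Vbar = V.mean(axis=0)                 # population average V(t), eq. (1)
    sigma_V2 = Vbar.var()                 # variance of its fluctuations, eq. (2)
    sigma_Vi2 = V.var(axis=1).mean()      # mean single-cell variance, eq. (3)
    return np.sqrt(sigma_V2 / sigma_Vi2)  # eq. (4)

# Fully synchronized population: all neurons share one trace, so chi = 1.
t = np.linspace(0.0, 10.0, 2000)
V_sync = np.tile(np.sin(t), (50, 1))

# Asynchronous surrogate: independent traces, so chi is of order 1/sqrt(N).
rng = np.random.default_rng(0)
V_async = rng.normal(size=(1000, 5000))
```

Here the time average \(\left\langle \ldots \right\rangle_t\) is replaced by a discrete average over samples; in a real simulation one would discard an initial transient before averaging.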

This **synchrony measure**, \(\chi(N)\ ,\) is between 0 and 1.
The central limit theorem implies that
in the limit \(N \rightarrow \infty\) it behaves as:
\[\tag{5}
\chi \left( N \right) = \chi \left( \infty \right) +
\frac{a}{\sqrt{N}} + O(\frac{1}{N})
\]

where \(a > 0\) is a constant. In particular, \(\chi (N) = 1\) if the system is fully synchronized (i.e., \(V_i(t)=V(t)\) for all \(i\)). In synchronous, but not fully synchronous, states, \(\chi(\infty) >0\) (Figure 2A,B). In the asynchronous state, \(\chi \left( N \right) = O(1/\sqrt{N})\ ,\) namely \(\chi(\infty)=0\) (Figure 2C). To determine whether a neuronal network exhibits an asynchronous state, the synchrony measure \(\chi\) should be computed for several values of \(N\ .\) If \(\chi\) varies from one realization of the same system to another, one has to simulate a large-enough number of realizations for each value of \(N\) before examining the dependence of the realization-averaged value of \(\chi\) on \(N\) (Figure 2B).
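A quick numerical check of the \(1/\sqrt{N}\) behavior in equation (5), using independent surrogate traces in place of a simulated asynchronous network (an illustrative shortcut, not a network model):

```python
import numpy as np

rng = np.random.default_rng(1)

def chi_of(V):
    # Equation (4): sqrt of (variance of population average / mean single-cell variance).
    return np.sqrt(V.mean(axis=0).var() / V.var(axis=1).mean())

# For uncorrelated traces chi(N) ~ 1/sqrt(N), so chi(infinity) = 0;
# each fourfold increase in N roughly halves chi.
chis = {N: chi_of(rng.normal(size=(N, 4000))) for N in (100, 400, 1600)}
```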

In networks with more than one population of neurons, one can define a synchrony measure \(\chi\) for each population separately.

### Types of firing patterns

In homogeneous fully connected networks,
an important class of partially synchronous states,
\(0 < \chi(\infty) < 1\ ,\) is the **cluster state**
(Golomb et al. 1992, 1994, Golomb and Rinzel 1993, Hansel et al. 1993, 1995)
in which the system segregates into several clusters of neurons.
The firing patterns of all the neurons within a cluster are identical,
but neurons that belong to different clusters fire differently and
often alternately.
In noisy networks, sparse networks or networks with heterogeneity
in intrinsic neuronal
properties, the disorder generally smears
the clustering. For instance, in a **smeared 1-cluster state**,
the population voltage, \(V(t)\)
(equation (1)), oscillates with time and has one peak as a function
of time in each time period of the population. More generally, in a
**smeared \(n\)-cluster state**,
\(V(t)\) has \(n\) peaks as a function of time in each
time period (Figure 3) (Golomb and Hansel 2000, Golomb et al. 2001). Of course, this definition makes sense only when the network activity is sufficiently periodic that a natural time period can be defined.
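Counting the peaks of \(V(t)\) within one population period is a simple way to identify a smeared \(n\)-cluster state numerically. The helper below is an illustrative sketch (the function name and discretization are assumptions); noisy data would additionally require smoothing before peak counting.

```python
import numpy as np

def peaks_per_period(Vbar, period, dt):
    """Count strict local maxima of the population voltage V(t) in one period.

    Vbar: samples of V(t); period: population period; dt: sampling step.
    In a smeared n-cluster state this count is n.
    """
    n_samp = int(round(period / dt))
    seg = Vbar[:n_samp]
    interior = seg[1:-1]
    # A sample is a peak if it exceeds both of its neighbors.
    is_peak = (interior > seg[:-2]) & (interior > seg[2:])
    return int(is_peak.sum())

dt = 0.001
t = np.arange(0.0, 1.0, dt)
one_cluster = np.sin(2 * np.pi * t)   # one peak per period: smeared 1-cluster
two_cluster = np.sin(4 * np.pi * t)   # two peaks per period: smeared 2-cluster
```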

### Cross-correlations and the level of synchrony

The level of synchrony affects the cross-correlation (CC) functions between the activities of pairs of neurons, defined by comparing the activity profiles of the two neurons across different time delays (Abeles 1991, Ginzburg and Sompolinsky 1994). Let us denote by \(x_i(t)\) a local observable, e.g., the instantaneous rate of the \(i\)-th neuron. The CC is defined as \[\tag{6} C_{ij} ( \tau ) = \frac{1}{T_m}\int_0^{T_m} dt \, x_i (t) \, x_j (t + \tau) ~. \]
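Equation (6) can be estimated from sampled data as follows; note that for a finite record each lag averages over \(T - |\tau|\) samples rather than the full \(T\ ,\) an edge effect the integral form ignores.

```python
import numpy as np

def cross_corr(x_i, x_j, max_lag):
    """Estimate C_ij(tau) of equation (6) for integer lags -max_lag..max_lag."""
    T = len(x_i)
    lags = np.arange(-max_lag, max_lag + 1)
    cc = np.array([
        np.mean(x_i[max(0, -tau):T - max(0, tau)] *
                x_j[max(0, tau):T - max(0, -tau)])
        for tau in lags
    ])
    return lags, cc

# Example: x_j is x_i delayed by 5 samples, so the CC peaks at tau = 5.
t = np.arange(2000)
x_i = np.sin(2 * np.pi * t / 20)
x_j = np.roll(x_i, 5)
lags, cc = cross_corr(x_i, x_j, 8)
```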

When the system is asynchronous (\(\chi(\infty) = 0\)), the magnitude of the typical CCs is small and vanishes for large \(N\) and \(T_m\ .\) The CC of the activity of a pair of neurons then depends strongly on their direct interaction. When the system is synchronous (\(\chi(\infty) > 0\)), the degree of synchrony between the inputs that two cells in the network receive from the rest of the network is itself of order unity. The CC of a pair of neurons is dominated by this common input, which may come partially or fully from the network. Therefore, the magnitude of the typical CCs is of order unity.

### Detecting the asynchronous state in experiments

The criterion for synchrony based on computing the synchrony measure \(\chi\) for several values of \(N\) is difficult to check directly in experimental systems, since this requires reliable measurements from networks of variable size. An alternative criterion is based on the behavior of population averages (Ginzburg and Sompolinsky 1994, Golomb et al. 2001). Let us suppose that we can measure the means of the local observables \(x_i(t)\) over a subpopulation of size \(K\ ,\) where \(K \ll N\ ,\) yielding \[\tag{7} X_K (t) \equiv \frac{1}{K} \sum_{i=1}^{K} x_i (t) ~. \]

Asynchronous states can be distinguished from synchronous states according to the \(K\) dependence of the variance of \(X_K\ ,\) \[\tag{8} \Delta(K) \equiv \langle ( X_K (t) - \langle X_K \rangle_t )^2 \rangle_t ~. \]

In an asynchronous state the local variables are weakly correlated, hence \[\tag{9} \Delta(K) \propto \frac{1}{K}~, \qquad 1 \ll K \ll N ~. \]

On the other hand, in synchronous states \[\tag{10} \Delta(K) = O(1) \]

even for large \(K\ .\) The advantage of this criterion is that it does not rely on the absolute scale of \(\Delta\ ,\) but on its dependence on \(K\ ,\) which, unlike \(N\ ,\) can be varied experimentally. The limitation of this criterion is that the sampling of the \(x_i\)'s and the value of \(K\) should be such that the sums are not dominated by unusually strongly correlated variables.
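The criterion in equations (7)-(10) can be checked numerically as follows; the surrogate data below (independent noise versus a shared sinusoidal drive) is an illustrative assumption standing in for recorded observables.

```python
import numpy as np

def delta_K(x, K):
    """Delta(K) of equations (7)-(8): variance over time of the mean
    of the first K local observables x_i(t)."""
    X_K = x[:K].mean(axis=0)   # eq. (7)
    return X_K.var()           # eq. (8)

rng = np.random.default_rng(2)
T = 4000
t = np.arange(T)

# Asynchronous-like data: independent observables, Delta(K) ~ 1/K (eq. 9).
x_async = rng.normal(size=(2000, T))

# Synchronous-like data: a common O(1) drive plus private noise,
# so Delta(K) stays of order unity for all K (eq. 10).
common = np.sin(2 * np.pi * t / 200)
x_sync = common + 0.5 * rng.normal(size=(2000, T))
```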

## Networks with spatially-decaying connectivity

The synchrony measure \(\chi\) (equation (4)) can be defined for an arbitrary network architecture. In general, however, it describes the local level of synchrony only if the length of the system is smaller than or comparable to the characteristic coupling decay length ("footprint length") \(\sigma\) of the network architecture. Suppose that the coupling between neurons decays with distance, the network is long, and the dynamics includes noise, heterogeneity or sparseness. In such cases, the firing patterns of neighboring neurons, but not of neurons located far from each other, are expected to be coordinated. This behavior especially characterizes networks with one-dimensional geometry (see Neuronal Fields).

To quantify the degree of local synchrony, one can define a local synchrony measure \(\chi\) that may depend on the position \(x\ .\) A proper way to do this is to consider a network with \(N\) neurons at every point \(x\) and to define the measure \(\chi (x, N)\) according to equation (4). Since one usually wants to simulate long chains of neurons, this method is very time consuming. Another option is to define \(\chi (x, N)\) over all the neurons within a relatively short distance of the point \(x\ ;\) this distance can be, for example, the coupling decay length \(\sigma\ .\) If the synchrony between neurons does not decay too strongly with distance, the measure \(\chi\) computed over the neuronal population within a distance \(\sigma\) of a neuron located at \(x\) is a good approximation of the exact local synchrony measure (Golomb et al. 2006).
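One way to implement the second option is to evaluate equation (4) over the neurons within a window of width \(\sigma\) around each position. The function below is a sketch under that assumption, with illustrative names and a toy chain whose left half is synchronized and whose right half is not.

```python
import numpy as np

def local_chi(V, pos, x0, sigma):
    """Local synchrony measure chi(x0): equation (4) restricted to
    the neurons whose position lies within sigma of x0."""
    V_loc = V[np.abs(pos - x0) <= sigma]
    Vbar = V_loc.mean(axis=0)
    return np.sqrt(Vbar.var() / V_loc.var(axis=1).mean())

# Toy chain: neurons 0..199 share one oscillation, neurons 200..399
# fire independently (surrogate noise traces).
rng = np.random.default_rng(3)
t = np.linspace(0.0, 10.0, 2000)
pos = np.arange(400)
V = np.vstack([np.tile(np.sin(t), (200, 1)),
               rng.normal(size=(200, 2000))])
```

With this toy chain, `local_chi` is close to 1 in the synchronized half (e.g., around position 100 with \(\sigma = 50\)) and small in the independent half (around position 300).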

## Measure of synchrony for phase models

Under certain conditions, a model of a network of weakly coupled oscillating neurons can be reduced to a phase model, where each neuron is represented by a phase coordinate \(\phi\) (see also Pulse Coupled Oscillators, Kuramoto Model). The state of the network (the level and form of synchrony) is characterized by the global variables (order parameters) \(Z_n\) (Kuramoto 1984, Golomb and Hansel 2000, Golomb et al. 2001): \[\tag{11} Z_n = \frac{1}{T_m} \int_0^{T_m} \left| z_n (t) \right| \, dt \]

where \[\tag{12} z_n = \frac{1}{N}\sum_{j=1}^{N} e^{i \, n \, \phi_j } ~. \]

The first-order parameter, \(Z_1\ ,\) measures the tendency of the oscillator population to evolve in full synchrony: \(Z_1 = 1\) if and only if \(\phi_j (t) = \phi (t)\) for all \(j\) and \(t\ .\) The higher-order parameters, \(Z_n\ ,\) measure the tendency of the population to segregate into \(n\) clusters, composed of equal numbers of oscillators that take turns becoming active (e.g., firing or bursting). For example, if a large population segregates into two equally populated clusters that oscillate locked in anti-phase, \(Z_2 = 1\) while \(Z_1 = 0\ .\) In the limit \(N \rightarrow \infty\ ,\) \(Z_n = \delta_{n,0}\) if and only if the system is in an asynchronous state.
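Equations (11)-(12) translate directly into code. Below is a minimal sketch with an assumed \((N, T)\) phase array, checked on the two-cluster anti-phase example just described.

```python
import numpy as np

def order_parameter(phi, n):
    """Z_n of equations (11)-(12) for a phase array phi of shape (N, T)."""
    z_n = np.exp(1j * n * phi).mean(axis=0)   # eq. (12): one value per time point
    return np.abs(z_n).mean()                 # eq. (11): time average of |z_n(t)|

# Two equally populated clusters locked in anti-phase: Z_1 = 0, Z_2 = 1.
t = np.linspace(0.0, 10.0, 1000)
omega = 2.0 * np.pi
phi = np.vstack([np.tile(omega * t, (50, 1)),
                 np.tile(omega * t + np.pi, (50, 1))])
```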

## Measures of synchrony for two neurons

This article is concerned with measures of synchrony in large neuronal networks. Synchrony in small networks can also be defined, but it has a different meaning than the one described above. A variety of other measures for synchrony between two neurons have been developed. These include cross-correlations, variance-based measures, measures based on mutual information, phase relationships, and the timing of particular events, such as local maxima within the time course of a particular variable (Kreuz et al. 2007).

## References

- Abeles M. (1991) Corticonics: Neural Circuits of the Cerebral Cortex, Cambridge University Press, Cambridge. ISBN: 0521376173

- Ginzburg I. and Sompolinsky H. (1994) Theory of correlations in stochastic neuronal networks. Phys. Rev. E 50:3171-3191.

- Golomb D. and Hansel D. (2000) The number of synaptic inputs and the synchrony of large sparse neuronal networks. Neural Comp. 12:1095-1139.

- Golomb D., Hansel D. and Mato G. (2001) Mechanisms of synchrony of neural activity in large networks. In: Moss F. and Gielen S. (editors) Handbook of Biological Physics, Volume 4: Neuro-Informatics and Neural Modelling, Elsevier Science, Amsterdam, p. 887-968.

- Golomb D., Hansel D., Shraiman B. and Sompolinsky H. (1992) Clustering in globally coupled phase oscillators. Phys. Rev. A 45:3516-3530.

- Golomb D. and Rinzel J. (1993) Dynamics of globally coupled inhibitory neurons with heterogeneity. Phys. Rev. E 48:4810-4814.

- Golomb D. and Rinzel J. (1994) Clustering in globally coupled inhibitory neurons. Physica D 72:259-282.

- Golomb D., Shedmi A., Curtu R. and Ermentrout G.B. (2006) Persistent synchronized bursting activity in cortical tissues with low magnesium concentration: a modeling study. J. Neurophysiol. 95:1049-1067.

- Golomb D., Wang X.-J. and Rinzel J. (1994) Synchronization properties of spindle oscillations in a thalamic reticular nucleus model. J. Neurophysiol. 72:1109-1126.

- Hansel D., Mato G. and Meunier C. (1993) Clustering and slow switching in globally coupled phase oscillators. Phys. Rev. E 48:3470-3477.

- Hansel D., Mato G. and Meunier C. (1995) Synchrony in excitatory neural networks. Neural Comput. 7:307-337.

- Hansel D. and Sompolinsky H. (1992) Synchrony and computation in a chaotic neural network. Phys. Rev. Lett. 68:718-721.

- Kreuz T., Mormann F., Andrzejak R., Kraskov A., Lehnertz K. and Grassberger P. (2007) Measuring synchronization in coupled model systems: A comparison of different approaches. Physica D 225:29-42.

- Kuramoto Y. (1984) Chemical Oscillations, Waves and Turbulence, Springer, New York.

**Internal references**

- John W. Milnor (2006) Attractor. Scholarpedia, 1(11):1815.
- Jan A. Sanders (2006) Averaging. Scholarpedia, 1(11):1760.
- Eugene M. Izhikevich (2006) Bursting. Scholarpedia, 1(3):1300.
- Jonathan E. Rubin (2007) Burst synchronization. Scholarpedia, 2(10):1666.
- Frances K. Skinner (2006) Conductance-based models. Scholarpedia, 1(11):1408.
- James Meiss (2007) Dynamical systems. Scholarpedia, 2(2):1629.
- Jeff Moehlis, Kresimir Josic, Eric T. Shea-Brown (2006) Periodic orbit. Scholarpedia, 1(7):1358.
- Carmen C. Canavier and Srisairam Achuthan (2007) Pulse coupled oscillators. Scholarpedia, 2(4):1331.
- Arkady Pikovsky and Michael Rosenblum (2007) Synchronization. Scholarpedia, 2(12):1459.
- Hermann Haken (2007) Synergetics. Scholarpedia, 2(1):1400.
- Thomas Kreuz (2011) Measures of neuronal signal synchrony. Scholarpedia, 6(12):11922.
- Thomas Kreuz (2011) Measures of spike train synchrony. Scholarpedia, 6(10):11934.

## Recommended reading

Kuramoto Y. (1984) Chemical Oscillations, Waves and Turbulence, Springer, New York.

Moss F. and Gielen S. (2001) Handbook of Biological Physics, Volume 4: Neuro-Informatics and Neural Modelling. Elsevier Science, Amsterdam.

## See also

Burst synchronization, Bursting, Conductance-Based Models, Integrate-and-Fire Neuron, Kuramoto Model, Phase Model, Pulse Coupled Oscillators, Synchronization.