Coherent activity in excitatory pulse-coupled networks

From Scholarpedia
Simona Olmi and Alessandro Torcini (2013), Scholarpedia, 8(10):30928. doi:10.4249/scholarpedia.30928 revision #143359 [link to/cite this article]

Curator: Simona Olmi


An excitatory pulse-coupled neural network is a network composed of neurons coupled via excitatory synapses, where the coupling among the neurons is mediated by the transmission of Excitatory Post-Synaptic Potentials (EPSPs). The coherent activity of a neuronal population usually indicates that some form of correlation is present in the firing of the considered neurons. This article focuses on the influence of dilution on the collective dynamics of these networks: a diluted network is a network where connections have been randomly pruned. Two kinds of dilution are examined: massively connected versus sparse networks. A massively connected (sparse) network is characterized by an average connectivity which grows proportionally to (does not depend on) the system size.

Neural collective oscillations have been observed in many contexts in brain circuits, ranging from ubiquitous $\gamma$-oscillations to the $\theta$-rhythm in the hippocampus. The origin of these oscillations is commonly associated with the balance between excitation and inhibition in the network, while purely excitatory circuits are believed to lead to “unstructured population bursts” (Buzsàki, 2006). However, coherent activity patterns have also been observed in in vivo measurements of the developing rodent neocortex and hippocampus for a short period after birth, despite the fact that at this early stage the nature of the involved synapses is essentially excitatory, while inhibitory synapses develop only later (Allene et al., 2008). Of particular interest are the so-called Giant Depolarizing Potentials (GDPs), recurrent oscillations which repeatedly synchronize a relatively small assembly of neurons and whose degree of synchrony is orchestrated by hub neurons (Bonifazi et al., 2009). These experimental results suggest that the macroscopic dynamics of excitatory networks can reveal unexpected behaviors.

On the other hand, numerical and analytical studies of collective motion in networks made of simple spiking neurons have been mainly devoted to balanced excitatory-inhibitory configurations (Brunel, 2000), while few studies have focused on the emergence of coherent activity in purely excitatory networks. Pioneering studies of two pulse-coupled neurons revealed that excitatory coupling can have a desynchronizing effect, while in general synchronization can be achieved only for sufficiently fast synapses (van Vreeswijk et al., 1994; Hansel et al., 1995). Van Vreeswijk (1996) extended this analysis to globally (or fully) coupled excitatory networks of Leaky Integrate-and-Fire (LIF) neurons, where each neuron is connected to all the others. This analysis confirmed that for slow synapses the collective dynamics is asynchronous ( Splay States ), while for sufficiently fast synaptic responses a quite peculiar coherent regime emerges, characterized by partial synchronization at the population level, while single neurons perform quasi-periodic motions (van Vreeswijk, 1996).



In recent years, following the seminal study by van Vreeswijk, the robustness of the partially synchronized regime has been examined by considering the influence of external noise and the level of dilution in networks of different topologies. Partial synchronization survives the introduction of a moderate level of noise (Mohanty and Politi, 2006) and it appears to be quite robust also to dilution.

In particular, for neurons connected as in a directed Erdös-Renyi graph (Albert and Barabàsi, 2002) it has been shown that coherent activity always emerges for (sufficiently) high connectivities. However, while for massively connected networks, composed of a large number of neurons, the dynamics of the collective state (apart from some trivial rescaling) essentially coincides with that observed in the fully coupled system (Olmi et al., 2010; Tattini et al., 2012), for sparse networks this is not the case (Luccioli et al., 2012). This is due to the fact that, for sufficiently large networks, the synaptic currents driving the dynamics of the single neurons become essentially identical for massively connected networks, while the differences among them do not vanish for sparse networks.

Sparse and massively connected networks reveal even more striking differences at the microscopic level, associated with the dynamics of the membrane potentials. As a matter of fact, for finite networks chaotic evolution has been observed in both cases. However, this chaos is weak in massively connected networks, vanishing for sufficiently large system sizes, while sparse networks remain chaotic for arbitrarily large numbers of neurons and the chaotic dynamics is extensive.

Model and Indicators

In a fully coupled network of $N$ neurons, the membrane potential \(u_i(t)\) of the \(i\)-th neuron evolves according to the following ordinary differential equation

\[ \dot{u}_i(t) = a - u_i(t) + g E(t) \qquad i = 1, \ldots, N \]

where all variables and parameters are expressed in adimensional rescaled units. According to the above equation, the membrane potential \(u_i \) relaxes towards the value \(a + gE(t) \), but as soon as it reaches the threshold value \( u_i = 1\), it is reset to \( u_i = 0 \) and a spike is simultaneously sent to all neurons. This resetting procedure is an approximate way to describe the discharge mechanism operating in real neurons. The parameter \( a > 1 \) is the supra-threshold input DC current and \(g > 0 \) gauges the synaptic coupling strength of the excitatory interaction with the neural field \( E(t)\). This field represents the synaptic current injected in each neuron and is given by the superposition of all the pulses emitted by the network in the past. Following (Abbott and van Vreeswijk, 1993), it is assumed that the shape of a pulse emitted at time \(t=0\) is given by an $\alpha$-function \(s(t)= \frac{\alpha^2 t}{N} {\rm e}^{-\alpha t} \), where \( 1/\alpha \) is the pulse-width. For this choice of the pulse shape it is easy to show that the field evolution is ruled by the following second order differential equation \[ \ddot E(t) +2\alpha\dot E(t)+\alpha^2 E(t)= \frac{\alpha^2}{N}\sum_{n|t_n<t} \delta(t-t_n) \ . \] In other words, each neuron reaching the threshold value at time \(t_n\) contributes an EPSP of shape \(s(t-t_n)\) to the field. The solution \(E(t)\) for a generic time \(t_n<t<t_{n+1}\) between two spike emissions is the linear combination of such EPSPs and represents a macroscopic variable reproducing the network activity.
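The model can be integrated numerically by reducing the second-order field equation to two first-order ones, \(\dot E = P - \alpha E\) and \(\dot P = -\alpha P\), where \(P\) receives a kick of amplitude \(\alpha^2/N\) at every spike. A minimal Euler-scheme sketch in Python follows (the step size, the random initial conditions and the helper name `simulate` are illustrative assumptions; an event-driven integration would be more accurate):

```python
import numpy as np

def simulate(N=200, a=1.3, g=0.4, alpha=9.0, T=50.0, dt=1e-4, seed=0):
    """Euler integration of the fully coupled LIF network (illustrative sketch).

    The field obeys dE/dt = P - alpha*E and dP/dt = -alpha*P, with P kicked
    by alpha^2/N at each spike; this reproduces the second-order field equation.
    """
    rng = np.random.default_rng(seed)
    u = rng.uniform(0.0, 1.0, N)   # membrane potentials, random initial conditions
    E, P = 0.0, 0.0                # field and auxiliary variable P = alpha*E + dE/dt
    spikes = []                    # recorded (time, neuron index) pairs
    t = 0.0
    while t < T:
        u += dt * (a - u + g * E)  # du_i/dt = a - u_i + g E
        E += dt * (P - alpha * E)
        P -= dt * alpha * P
        fired = np.where(u >= 1.0)[0]
        for j in fired:
            spikes.append((t, j))  # spike emission at threshold crossing
        u[fired] = 0.0             # reset to zero
        P += alpha**2 * len(fired) / N  # delta kicks from the emitted pulses
        t += dt
    return u, E, spikes
```

With the parameters of Fig. 1, the raster of `spikes` can be used to distinguish the splay state from partial synchronization.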

At variance with the fully-coupled network, where all neurons depend on the same "mean field" \(E(t)\), in a random diluted network neurons have different connectivities. As a result, it is necessary to introduce an explicit dependence of the neural field on the index \(i\). The field \(E_i(t)\) represents the linear superposition of the pulses \(s(t)\) received by neuron \(i\) at previous spike times \(t_n < t\) (the integer index \(n\) orders the sequence of the pulses emitted in the network), namely \[ E_i(t)= \frac{1}{k_i} \sum_{n|t_n<t} C_{j(n),i} \theta(t-t_n) s(t-t_n), \] where \(\theta(x)\) is the Heaviside function, $k_i$ is the number of afferent synapses (in-degree connectivity) of neuron $i$ and the pulse shape is still an $\alpha$-function. Furthermore, each pulse \(s(t)\) is weighted according to the strength of the connection \(C_{j,i}\) between the emitting (\(j(n)\)) and the receiving (\(i\)) neuron. The matrix entries are chosen randomly with a constant probability: namely, \(C_{j,i}=1\) (resp. \(C_{j,i}=0\)) with a probability \(p\) (resp. \((1-p)\)). In general, the connectivity matrix \(C\) is non-symmetric. The random network associated to such connectivity matrix is termed directed Erdös-Renyi network and it is characterized by an average (in-degree) connectivity $<k> = p \times N$. An undirected network has a symmetric connectivity matrix.
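The construction of the random connectivity matrix described above can be sketched as follows (the function name, the seed and the exclusion of self-connections are illustrative assumptions):

```python
import numpy as np

def er_directed(N, p, seed=0):
    """Directed Erdos-Renyi connectivity: C[j, i] = 1 with probability p (sketch)."""
    rng = np.random.default_rng(seed)
    C = (rng.random((N, N)) < p).astype(int)
    np.fill_diagonal(C, 0)    # no self-connections (an assumption)
    k = C.sum(axis=0)         # in-degree k_i of neuron i (afferent synapses)
    return C, k
```

The average in-degree of the resulting network is close to \(p \times N\), as stated above.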

In order to characterize the evolution of the random neural network at a macroscopic level, it is convenient to introduce the following averaged fields \[ \bar E(t) = \frac{1}{N} \sum_{i=1}^N E_i(t) \qquad ; \qquad \bar P(t) = \frac{1}{N} \sum_{i=1}^N P_i(t) \] where \(P_i=\alpha E_i + \dot{E}_i\). Notice that in the fully coupled case $\bar E(t) \equiv E_i(t)$ and $\bar P(t) \equiv P_i(t)$ for any index $i$.

The level of homogeneity in the network can be measured at a "macroscopic level" in terms of the instantaneous standard deviation \(\sigma(t)\) among the local fields \(E_i\), namely \[ \sigma(t) = \left( \frac{\sum_{i=1}^{N}E_{i}^{2}({t})}{N}-\bar{E}^{2}({t})\right)^{1/2} \]
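Given a snapshot of the local fields \(E_i\), the averaged field and the instantaneous standard deviation \(\sigma(t)\) follow directly from the definitions above (a minimal sketch; the clipping of tiny negative values guards against floating-point round-off):

```python
import numpy as np

def field_statistics(E_i):
    """Mean field and instantaneous spread sigma(t) over the local fields E_i."""
    Ebar = E_i.mean()
    var = (E_i**2).mean() - Ebar**2
    return Ebar, np.sqrt(max(var, 0.0))  # clip round-off before the square root
```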

Finally, in order to quantify the degree of synchronization among the neurons, the modulus $R$ of the following order parameter (Kuramoto, 1984) is employed: \[ r(t) = \frac{1}{N} \sum_{j=1}^N {\rm e}^{i \theta_j(t)}=R(t)e^{i\psi(t)}, \qquad \theta_j(t) = 2\pi \frac{(t-t_{j,n})}{t_{j,n+1}-t_{j,n}} \qquad j=1,\ldots,N. \] Here \( \theta_j\) is the phase of the \(j\)-th neuron at time $t \in [t_{j,n}:t_{j,n+1}]$, where \(t_{j,n}\) (\(t_{j,n+1}\)) refers to the \(n\)-th (\(n+1\)-th) spiking time of neuron \(j\). For asynchronous dynamics $R$ is vanishingly small, $R \sim 1/\sqrt{N}$, while for a fully synchronized case \(R=1\).
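The order parameter can be evaluated from recorded spike trains by linearly interpolating each neuron's phase between consecutive spikes, as in the definition above (a sketch; skipping neurons without a pair of spikes bracketing \(t\) is an implementation choice, not part of the original definition):

```python
import numpy as np

def order_parameter(spike_times, t):
    """Modulus R(t) of the Kuramoto order parameter from per-neuron spike trains.

    spike_times: one sorted array of spike times per neuron (illustrative sketch).
    """
    phasors = []
    for ts in spike_times:
        n = np.searchsorted(ts, t, side='right') - 1
        if n < 0 or n + 1 >= len(ts):
            continue  # no bracketing spike pair for this neuron at time t
        theta = 2 * np.pi * (t - ts[n]) / (ts[n + 1] - ts[n])  # interpolated phase
        phasors.append(np.exp(1j * theta))
    return np.abs(np.mean(phasors))
```

For a splay-state-like spike pattern the returned value is zero up to round-off, while identical spike trains give \(R=1\).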

Globally Coupled Networks

In excitatory pulse-coupled LIF networks two distinct collective states can be identified: the splay state and partial synchronization. Both states can be characterized at two levels: the microscopic one, corresponding to the membrane potential dynamics, and the macroscopic one, associated with the behavior of the field \( E\).

Splay states have been found in many different contexts such as Josephson devices (Hadley and Beasley, 1987), multi-mode lasers (Wiesenfeld et al., 1990) and electronic circuits (Ashwin et al., 1990). In computational neuroscience, splay states have been mainly investigated for LIF neurons (Abbott and van Vreeswijk, 1993; van Vreeswijk, 1996; Bressloff, 1999; Chow and Kopell, 2000; Zillmer et al., 2007; Olmi et al., 2012), but some studies have also been devoted to \( \theta\)-neurons (Dipoppa et al., 2012) and to more realistic neuronal models (Brunel and Hansel, 2006). On the other hand, partial synchronization (PS) was discovered in pulse-coupled LIF networks (van Vreeswijk, 1996) and more recently observed also for phase oscillators (Rosenblum and Pikovsky, 2007) and electronic devices (Temirbayev et al., 2012) with global nonlinear coupling.

Splay State

Figure 1: Raster plots of a network with \({N}=200\) neurons and \(a=1.3\), \(g=0.4\) for (a) \(\alpha=3\) and (b) \(\alpha=9\).
Figure 2: (a) Minima and maxima of the mean field \({\bar E}(t)\) as a function of \(\alpha\) for \(g = 0.4\) and \(a = 1.3\). (b) Critical curve \(\alpha_{c}\) in the parameter space \(({g},\alpha)\) for \({a}=1.3\) in the \(N\rightarrow\infty\) limit.

The splay state is a collective mode emerging in fully coupled oscillator networks. In this state the evolution of all oscillators is periodic of period $T$, and it can be described by the same functional form, as follows \[ {x}_{j}({t})={X}(t+\frac{j T}{N}) \qquad{j}=1,...,{N} \] where each oscillator $j$ can be characterized by a different phase $\frac{2\pi j}{N}$. The peculiar characteristic of the splay state is that the phases are equally distributed in the interval \( [0,2\pi]\).

As shown in (Jin, 2002), in fully coupled neural networks neurons reach the threshold in an ordered manner and this order never changes in time. Therefore, to visualize the neuron dynamics it is convenient to order the neurons according to their potential values and then plot the index of the firing neuron as a function of the spike emission time (see Fig. 1(a)). This raster plot clearly shows that in the splay state the interspike interval between two consecutive spikes in the network is constant and equal to $T/N$.

At a macroscopic level, the field \(E(t)\) remains constant in time, thus indicating a constant average network activity. In addition, the network dynamics is asynchronous, since the modulus of the order parameter \(R\) is exactly zero, as can be demonstrated by noticing that the phases of the neurons are given by the following expression \(\{\theta_j(t)\}=\{2\pi [1-(j-1)/N]\}\).
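That \(R\) vanishes for these phases can be checked numerically: the phasors \(e^{i\theta_j}\) are the \(N\)-th roots of unity, whose sum is exactly zero (a small verification sketch):

```python
import numpy as np

N = 200
# Splay-state phases theta_j = 2*pi*[1 - (j-1)/N], j = 1, ..., N
theta = 2 * np.pi * (1 - (np.arange(1, N + 1) - 1) / N)
R = np.abs(np.exp(1j * theta).mean())  # order parameter modulus; zero up to round-off
```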

Therefore splay states are important in that they provide the simplest instance of asynchronous behavior and can thereby be used as a testing ground for the stability of a more general class of dynamical regimes. In addition, it has been shown in (Zillmer et al., 2007) that, for an excitatory neural network, there exists a critical line \(\alpha_c(a,g)\) in the parameter space \( (\alpha,g)\) which defines the region where the splay state is stable (as shown in Fig. 2b).

Partial Synchronization

Figure 3: (a) Averaged modulus of the order parameter $R$ as a function of \( \alpha\) for \(g = 0.4\) and \( a = 1.3\). (b) Macroscopic attractors as a function of \(\alpha\).

Above the critical line \( \alpha_c\) a new stable collective state (Partial Synchronization) emerges via a super-critical Hopf bifurcation. The transition can be well appreciated by reporting the maximal and minimal values of \(E\) versus the pulse width, as shown in Fig. 2a: the field $E$ is constant for splay states and oscillates periodically in the PS regime. This corresponds in the $(\bar E, \bar P)$-plane to point-like attractors for the splay state and closed curves for partially synchronized regimes, see Fig. 3b.

In the partially synchronized regime the dynamics of the neurons' membrane potentials is quasi-periodic. This can be seen in the raster plot displayed in Fig. 1(b): a group of neurons reaches the threshold almost simultaneously; however, the neurons participating in this almost synchronized group change in time. The recombination of the individual quasi-periodic microscopic motions into a macroscopic periodic oscillation is far from trivial and is still a matter of study (Mohanty and Politi, 2006; Rosenblum and Pikovsky, 2007; Popovych and Tass, 2011). The period of the collective oscillations arising in this state does not coincide with the average interspike interval of the single neurons (it is longer), and the two quantities are irrationally related. This phenomenon is also called self-organized quasiperiodicity.

Furthermore, PS can be characterized in terms of the modulus of the order parameter \(R\), which in this case is finite and oscillates periodically in time with the same period as the macroscopic field \(\bar E\). As shown in Fig. 3a, the average value of $R$ grows with $\alpha$ and tends towards the fully synchronized state, which is reached only in the limit $\alpha \to \infty$. Indeed it is known that for infinitely rapid synaptic responses, such as those associated with exponential- or $\delta$-pulses, the stable state for excitatory synapses is the fully synchronized one (van Vreeswijk et al., 1994; van Vreeswijk, 1996; Tsodyks et al., 1993).

Massively Connected Networks

Figure 4: Characterization of the partially synchronized state (PS) in terms of macroscopic fields in a massively connected network with $z=1$ and for different sizes. Panel a: macroscopic attractors in the \( (\bar E, \bar P)\) plane. The black curve corresponds to the attractor of a fully coupled network with properly rescaled coupling constant. Panel b: enlargement of panel (a). The curve for size \(N=100000\) (not reported for clarity) almost coincides with the fully coupled one. The parameters of the model are $g=0.4$, $a=1.3$ and $\alpha=9$.

The influence of the network properties on the macroscopic neural dynamics has been recently examined in this context in (Tattini et al., 2012). In particular, the authors considered random Erdös-Renyi networks with an average connectivity growing (sub)-linearly with the network size \(N\). Namely, the average connectivity scales as $$ <k> \propto N^z \qquad 0 < z \le 1 \quad; $$ thus exhibiting the same system-size dependence as for a truncated power-law distribution of the connectivities, namely $P(k) \propto 1/k^{2-z}$. The authors limited the analysis to $z \in ]0,1]$, since a recent study of developing hippocampal networks has shown that the functional connectivity is characterized by a truncated power-law distribution with exponent $z \sim 0.7-0.9$ (Bonifazi et al., 2009). In the limit \(z \to 1\) the massively connected network, with connectivity proportional to \(N\), is recovered, while for \(z \to 0\) a sparse network, where the average probability to have a link between two neurons vanishes in the thermodynamic limit, is retrieved (Golomb et al., 2001). The topology of Erdös-Renyi networks is modified by varying the parameter \(z\) in the interval \(]0,1]\): for any \(z > 0\) trees and cycles of any order are present in the network, while for \(z \to 1\) complete subgraphs of increasing order appear in the system (Albert and Barabàsi, 2002).

Figure 5: Phase diagram for the macroscopic activity of the network in the \((N,\langle k \rangle)\) plane. The (black) asterisks connected by the solid (black) line correspond to the transition values \(\langle k \rangle_c\) from the asynchronous (AS) to the partially synchronized (PS) regime estimated for Erdös-Renyi networks with constant probability. The other symbols refer to Erdös-Renyi networks with \(z > 0\): solid (resp. empty) symbols denote asynchronous (resp. partially synchronized) states. Parameters as in the previous figure. (Modified from Tattini et al., 2012)

Similarly to what is observed for fully coupled networks, two distinct dynamical phases are still present: an asynchronous state (AS) corresponding to desynchronized dynamics of the neurons (which in fully coupled networks corresponds to the splay state) and a regime of partial synchronization (PS) associated with a coherent periodic activity of the network. A peculiar point to stress is that in the limit $N \to \infty$ the macroscopic dynamics of the fully coupled network is recovered for random networks for any exponent $z > 0$, as clearly shown in Fig. 4 for the case $z=1$. Thus a random network is completely equivalent to a fully coupled one for sufficiently large system sizes whenever the connectivity grows with the system size; the situation is different for sparse networks, where the connectivity stays constant (Olmi et al., 2010; Tattini et al., 2012).

Once the model parameters are fixed, namely the pulse width $\alpha$, the coupling $g$ and the DC current $a$, the transition from AS to PS is driven by the average connectivity. In particular, by considering parameter values for which PS is present in the fully coupled limit, one observes that at low connectivity the system is in an asynchronous state, while PS emerges only above a certain critical average connectivity \(\langle k \rangle_c\). Furthermore, for sufficiently large networks, \(\langle k \rangle_c\) saturates to a constant value (see Fig. 5), suggesting that a minimal average connectivity is sufficient to observe coherent activity in systems of any size, irrespective of the kind of network considered: sparse or massively connected.

Figure 6: Maximal Lyapunov exponents as a function of the system size \(N\) for various \(z\)-values. Parameters as in the previous figure. (Modified from Tattini et al., 2012)

The average in-degree \(\langle k \rangle\) also controls the fluctuations of the input synaptic current (or, analogously, of the different fields $E_i$). These can be measured by the standard deviation $\sigma(t)$ (defined in Sect. "Model and Indicators"), which due to the central limit theorem scales as $$ {\bar \sigma} \propto \frac{1}{\sqrt{<k>}} \propto N^{-z/2} \qquad ; $$ where the bar indicates a time average. Therefore, for Erdös-Renyi networks with average in-degree proportional to any positive power of \(N\), the fluctuations vanish in the limit \(N \to \infty\), leading to a homogeneous collective behavior analogous to that of fully coupled networks (see Fig. 4). However, the introduction of disorder in the network leads to chaotic dynamics at the microscopic level of the single neurons. The chaotic motion can be characterized in terms of the maximal Lyapunov exponent \(\lambda_1\): regular orbits have non-positive exponents, while chaotic dynamics is associated with \(\lambda_1 > 0\). For finite-size networks, the dynamics of the considered model is always chaotic; however, \(\lambda_1\) tends to zero for increasing network size whenever \(z > 0\), as shown in Fig. 6. This kind of deterministic irregular behavior, vanishing in the large-system-size limit, has been identified as weak chaos for coupled phase oscillators (Popovych et al., 2005).
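The central-limit scaling of the fluctuations can be illustrated with a toy computation: the spread across "neurons" of the average of \(k\) independent inputs shrinks as \(1/\sqrt{k}\), so increasing \(k\) by a factor of 100 reduces the spread roughly tenfold (purely illustrative random inputs, not the actual synaptic dynamics):

```python
import numpy as np

rng = np.random.default_rng(1)

def field_spread(k, n_neurons=2000):
    """Std across 'neurons' of the mean of k independent inputs (toy CLT check)."""
    return rng.random((n_neurons, k)).mean(axis=1).std()

s10, s1000 = field_spread(10), field_spread(1000)
ratio = s10 / s1000   # expected to be close to sqrt(1000 / 10) = 10
```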

Sparse Networks

Sparse networks represent a peculiar exception, since they remain intrinsically inhomogeneous and chaotic for any system size. In order to examine the influence of this kind of topology it is sufficient to consider a random network with constant connectivity $K$, independent of the network size $N$. At a macroscopic level, a transition from AS to PS can be observed also in this case. In particular, the collective dynamics can be characterized in terms of the standard deviation of the average field $\bar E$, namely \( \sigma_E= \sqrt{<\bar{E}^2>- <\bar{E}>^2} \). For an AS the standard deviation vanishes as $\sigma_E \propto 1/\sqrt{N}$, while in the presence of collective motion it stays finite, as shown in Fig. 7a. Similarly to what is observed for massively connected networks, above a finite critical connectivity \(K_c\) a coherent collective dynamics emerges even in sparse networks, as shown in Fig. 7a.

Figure 7: (a) Standard deviation of the mean field, \(\sigma_E\), versus \(K\) for \(N=1,000\) (black) circles, \(N=5,000\) (red) squares, \(N=10,000\) (green) triangles. The inset shows the macroscopic attractors for \(N=5,000\) and \(K=3\) and \(K=200\). (b) Lyapunov exponent spectra (in the lower inset a zoom of the largest values) for \(K=20\) and \(N=240-480-960\). (c) Maximum Lyapunov exponent, \(\lambda_{1}\), versus \(N\); the (red) line represents the nonlinear fit \(\lambda_{1}=0.0894-2.3562/N\) and the (green) dashed line marks the asymptotic value. The parameters of the model are $g=0.2$, $a=1.3$ and $\alpha=9$. (Modified from Luccioli et al., 2012)

The most striking difference with respect to massively connected networks concerns the microscopic dynamics: as shown in Fig. 7c, the maximal Lyapunov exponent converges to an asymptotic limit for increasing system sizes, so these networks remain chaotic irrespective of the network size. Furthermore, the dynamics is characterized by extensive high-dimensional chaos (Ruelle, 1982; Grassberger, 1989), i.e. the number of active degrees of freedom, measured by the fractal dimension, increases proportionally to the system size. Extensive chaos has usually been observed in diffusively coupled systems (Livi et al., 1986; Grassberger, 1989; Paul et al., 2007), where the system can be easily decomposed into weakly interacting sub-systems. Whenever the chaos is extensive, the associated spectra of the Lyapunov exponents \(\{\lambda_i\}\) collapse onto one another when plotted versus the rescaled index \(i/N\), as shown in Fig. 7b (Livi et al., 1986). Fully extensive behavior in sparse neural networks has been observed for the Theta neuron model in (Monteforte and Wolf, 2010) and for the LIF model in (Luccioli et al., 2012). The previous results are obtained by assuming that all nodes have the same connectivity \(K\), but the same scenario holds for a Poisson degree distribution with average connectivity \(K\), as in Erdös-Renyi graphs.

The extensivity property is highly non-trivial in sparse networks, since in this case the dynamics is not additive, contrary to what happens in spatially extended systems with diffusive coupling, where the dynamical evolution of the whole system can be approximated by the juxtaposition of almost independent sub-structures (Grassberger, 1989; Paul et al., 2007). Extensive chaos has not been observed in globally coupled networks, which exhibit a non-extensive component in the Lyapunov spectrum (Takeuchi et al., 2011).


References

  • Abbott L. F. and van Vreeswijk C. (1993). Asynchronous states in networks of pulse-coupled oscillators. Phys. Rev. E 48: 1483.
  • Albert R. and Barabàsi A. L. (2002). Statistical mechanics of complex networks. Rev. Mod. Phys. 74: 47–97.
  • Allene C.; Cattani A.; Ackman J. B.; Bonifazi P.; Aniksztejn L.; Ben-Ari Y. and Cossart R. (2008). Sequential generation of two distinct synapse-driven network patterns in developing neocortex. The Journal of Neuroscience 28: 12851-12863.
  • Ashwin P.; King G.P and Swift J.W. (1990). Three identical oscillators with symmetric coupling. Nonlinearity 3: 585.
  • Bonifazi P.; Goldin M.; Picardo M. A.; Jorquera I.; Cattani A.; Bianconi G.; Represa A.; Ben-Ari Y. and Cossart R. (2009). GABAergic Hub Neurons Orchestrate Synchrony in Developing Hippocampal Networks. Science 326: 1419-1424.
  • Brunel N. (2000). Dynamics of sparsely connected networks of excitatory and inhibitory spiking neurons. J. Comput. Neurosci. 8: 183.
  • Brunel N. and Hansel D. (2006). How noise affects the synchronization properties of recurrent networks of inhibitory neurons. Neural Comput 18: 1066-1110.
  • Bressloff P. C. (1999). Mean-field theory of globally coupled integrate-and-fire neural oscillators with dynamic synapses. Phys Rev E 60: 2160-2170.
  • Buzsàki G. (2006). Rhythms of the Brain. Oxford University Press, New York.
  • Chow C. C. and Kopell N. (2000). Dynamics of spiking neurons with electrical coupling. Neural Comput 12: 1643-1678.
  • Dipoppa M.; Krupa M.; Torcini A. and Gutkin B. S. (2012). Splay states in finite pulse-coupled networks of excitable neurons. SIAM J Appl Dyn Syst 11: 864-894.
  • Golomb D.; Hansel D. and Mato G. (2001). Mechanisms of Synchrony of neural Activity in large Networks, in Handbook of biological physics. 887-967, Eds. Gielen S and Moss F, Elsevier, Amsterdam.
  • Grassberger P. (1989). Information content and predictability of lumped and distributed dynamical systems. Physica Scripta 40: 346.
  • Hadley P. and Beasley M. R. (1987). Dynamical states and stability of linear arrays of Josephson junctions. Appl. Phys. Lett. 50: 621.
  • Hansel D.; Mato G. and Meunier C. (1995). Synchrony in Excitatory Neural Networks. Neural Computation 7: 307.
  • Kuramoto Y. (1984). Chemical oscillations, waves, and turbulence. Springer-Verlag, Berlin.
  • Jin D. Z. (2002). Fast convergence of spike sequences to periodic patterns in recurrent networks. Phys. Rev. Lett. 89: 208102.
  • Livi R.; Politi A. and Ruffo S. (1986). Distribution of characteristic exponents in the thermodynamic limit. J. Phys. A: Math. Gen. 19: 2033.
  • Luccioli S.; Olmi S.; Politi A. and Torcini A. (2012). Collective dynamics in sparse networks. Phys. Rev. Lett. 109: 138103.
  • Mohanty P. K. and Politi A. (2006). A new approach to partial synchronization in globally coupled rotators. J. Phys. A 39: L415.
  • Monteforte M. and Wolf F. (2010). Dynamical Entropy Production in Spiking Neuron Networks in the Balanced State. Phys. Rev. Lett. 105: 268104.
  • Olmi S.; Livi R.; Politi A. and Torcini A. (2010). Collective oscillations in disordered neural networks. Phys. Rev. E 81: 046119.
  • Olmi S.; Politi A. and Torcini A. (2012). Stability of the splay state in networks of pulse-coupled neurons. The Journal of Mathematical Neuroscience 2: 12.
  • Paul M. R.; Einarsson M. I.; Fischer P. F. and Cross M. C. (2007). Extensive chaos in Rayleigh-Bénard convection. Phys. Rev. E 75: 045203(R).
  • Popovych O. V.; Maistrenko Y. L. and Tass P. A. (2005). Phase chaos in coupled oscillators. Phys. Rev. E 71: 065201(R).
  • Popovych, O. V. and Tass, P. A. (2011) Macroscopic entrainment of periodically forced oscillatory ensembles, Prog. Biophys. Molec. Biol., 105: 98.
  • Rosenblum M. and Pikovsky A. (2007). Self-Organized Quasiperiodicity in Oscillator Ensembles with Global Nonlinear Coupling. Phys. Rev. Lett. 98: 064101.
  • Ruelle D. (1982). Large volume limit of the distribution of characteristic exponents in turbulence. Commun. Math. Phys. 87: 287-302.
  • Takeuchi K. A.; Chatè H.; Ginelli F.; Politi A. and Torcini A. (2011). Extensive and Subextensive Chaos in globally Coupled Dynamical Systems. Phys. Rev. Lett. 107: 124101.
  • Tattini L.; Olmi S. and Torcini A. (2012). Coherent periodic activity in excitatory Erdös-Renyi neural networks: the role of network connectivity. Chaos 22: 023133.
  • Temirbayev A. A.; Zhanabaev Z. Z.; Tarasov S. B.; Ponomarenko V. I. and Rosenblum M. (2012). Experiments on oscillator ensembles with global nonlinear coupling. Phys. Rev. E, 85: 015204.
  • Tsodyks M.; Mitkov I. and Sompolinsky H. (1993). Pattern of synchrony in inhomogeneous networks of oscillators with pulse interactions. Phys. Rev. Lett. 71: 1280.
  • van Vreeswijk C.; Abbott L. F. and Ermentrout G. B. (1994). When inhibition not excitation synchronizes neural firing. Journal of Computational Neuroscience 1: 313.
  • van Vreeswijk C. (1996). Partial synchronization in populations of pulse-coupled oscillators. Phys. Rev. E 54: 5522.
  • Wiesenfeld K.; Bracikowski C.; James G. and Roy R. (1990). Observation of antiphase states in a multimode laser. Phys. Rev. Lett. 65: 1749.
  • Zillmer R.; Livi R.; Politi A. and Torcini A. (2007). Stability of the splay state in pulse-coupled networks. Phys. Rev. E 76: 046102.

Internal references

  • Arkady Pikovsky and Michael Rosenblum (2007) Synchronization Scholarpedia, 2(12):1459.
