# TheVirtualBrain


Curator: Paula Sanz Leon

Figure 1: TheVirtualBrain's logo.

TheVirtualBrain (TVB) is a neuroinformatics platform developed for simulating large-scale or "whole-brain" network models. In contrast to other existing simulation engines (the Brian simulator, NEST (NEural Simulation Tool), GENESIS, PCSIM, MOOSE, NEURON), the dynamics at the network nodes are described not by models of single neurons but by neural masses or neural fields (Deco et al., 2008).

## Purpose & Philosophy

The choice to develop a research tool focused on mesoscopic models of neural dynamics, implemented at the scale of the whole primate brain, follows from practical considerations of computational feasibility as well as from the recognition that the most common experimental neuroimaging modalities in humans (EEG, MEG, fMRI) record at the large scale. EEG and MEG by their nature record a summation of neural activity from a large portion of the brain. Similarly fMRI, although it records localised changes, measures processes that operate on a much larger scale than single neurons and is typically used to record the activity of most, if not all, of a subject's or patient's brain. It is then natural to produce commensurate models of neural activity, ideally incorporating spatial structure, such as the folded cortical surface and long-range myelinated fibre connections, which can be extracted from experimental data and provides an explicit constraint on source geometries, and thus a first experimental constraint on the modelling work. Modelling at the level of single neurons is currently not practical at this scale; with dynamics described at the mesoscopic level of neural masses or neural fields, however, it becomes feasible to carry out these simulations on a modern desktop workstation.

The purpose, then, is to provide a common framework for collaboration and communication: between researchers developing these types of models, so that they may more easily and directly compare and contrast them; and between computational neuroscientists and experimentalists or clinicians, so that these researchers can make use of the models as well as provide much-needed experimental constraints and validation of the underlying modelling work. The design and focus of TVB have been developed with this in mind. The components of these large-scale models are separated as cleanly as possible, and a specific structure is defined for the individual components. This structure is intended to restrict the form of models enough to make direct comparison straightforward, while still permitting a sufficiently large class of models to be expressed. Furthermore, in order to enable access to this modelling tool as broadly as possible, a graphical interface has been developed, intended to provide intuitive and, hopefully, educational access to these models, making it possible for researchers who are not themselves computational neuroscientists to use them. As part of the graphical interface development, an extensive supporting infrastructure has been put in place. This includes: organisational features to help individual researchers keep track of the large number of simulations that the platform enables them to run; collaborative features to ease the interaction of multiple researchers working on a single project; and the ability to perform some standard experimental-style analysis of simulated data, as well as the ability to export data so that it may be analysed with the same methods and existing packages a researcher would apply to their experimental data.
And naturally, in order to facilitate complete access, use and future extension, the entire project is open source (GNU GPLv2), itself building on an existing set of open source tools.

## Modeling Approach

TVB has been designed following a computational modelling paradigm that permits model-based inference of the network mechanisms, on different brain scales, underlying the generation of macroscopic neuroimaging signals such as fMRI, EEG and MEG. TVB incorporates a biologically realistic, large-scale connectivity of brain regions in the primate brain, that is, the white-matter fibre tracts as identified by tractography-based methods (Hagmann et al., 2008; Honey et al., 2009; Bastiani et al., 2012), obtained from the CoCoMac neuroinformatics database (Kötter, 2004; Kötter and Wanke, 2005) or from the structural connectivity datasets recently made available by the Human Connectome Project (Essen and Ugurbil, 2012). A realistic cortex model, extracted from MRI data, is also included. This approach allows a detailed spatial sampling, in particular of the cortical surface, providing suitable spatial support for an approximation of the neural activity as in neural field modelling (or a network of neural masses), where each vertex carries a neural population model.

The cortex model and the connectivity matrix (the so-called connectome) represent two types of structural connectivity: long-range and short-range. The connectivity matrix defines the connection strengths and, via finite signal transmission speed, the time delays between two regions of the brain, while the surface meshes allow for the application of arbitrary local connectivity kernels representing short-range intracortical connections. In TVB, networks can be defined at two distinct spatial scales, yielding two types of simulations (or brain network models): surface-based and region-based. In the former case, cortical and sub-cortical areas are shaped more realistically; each vertex of the surface is considered a node and is modelled by a neural population model, several nodes belong to a given brain region, and the edges of the network have lengths on the order of a few millimetres. In the latter case, with one node per region, the connectome itself is used as a coarser representation of the brain network model: the network comprises discrete nodes, each modelling the neural population activity of one brain region, and the edges represent the interregional fibres, with lengths on the order of a few centimetres. Consequently, in surface-based simulations both types of connectivity, short- and long-range, coexist, whereas in region-based simulations one level of geometry is lost: the short-range connectivity.
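The short-range kernels mentioned above can be pictured as distance-dependent weights between nearby surface vertices. The following is a minimal sketch; the Gaussian form and the spread value are illustrative assumptions, not TVB defaults:

```python
import numpy as np

# Hypothetical geodesic distances (mm) from one cortical vertex to its neighbours.
distances = np.array([0.0, 1.0, 2.0, 5.0, 10.0])

# A Gaussian local-connectivity kernel: nearby vertices couple strongly,
# with the influence decaying over a millimetre scale.
sigma = 2.0  # spatial spread in mm (illustrative value)
kernel = np.exp(-distances**2 / (2.0 * sigma**2))

# Coupling is maximal at zero distance and negligible beyond a few sigma.
```

Any kernel of this kind can in principle be applied over the surface mesh, since the vertex-to-vertex distances provide the required spatial support.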

At the level of the large-scale network, each network node is governed by its own intrinsic dynamics in interaction with the dynamics of all other network nodes. When the space $\Omega$ is sampled with a finite number of points $l$, the equation representing the brain network model reads:

$$\mathbf{\hat{P}}(\text{d}/\text{d}t)\mathbf{\hat{\Psi}}(t) = \mathbf{\hat{\Xi}}(t) - \mathbf{\hat{\Lambda}}(\mathbf{\hat{\Psi}}(t)) + \sum_{v=0}^{2} \mathbf{U}_{v}S(\mathbf{V_{loc}} \mathbf{\hat{\Psi}}(t - \mathbf{K}/c_{v})) \tag{1}$$

The equation describes the stochastic differential equation of a network of connected neural populations. The number of elements in the vectors $\mathbf{\hat{P}} = [\mathbf{P}_1, \mathbf{P}_2, ..., \mathbf{P}_l]$, $\mathbf{\hat{\Psi}} = [\mathbf{\Psi}_1, \mathbf{\Psi}_2, ..., \mathbf{\Psi}_l]$, $\mathbf{\hat{\Xi}} = [\mathbf{\Xi}_1, \mathbf{\Xi}_2, ..., \mathbf{\Xi}_l]$, $\mathbf{\hat{\Lambda}} = [\mathbf{\Lambda}_1, \mathbf{\Lambda}_2, ..., \mathbf{\Lambda}_{l}]$ and $\mathbf{K}$, as well as the order of the square matrices $\mathbf{U}_{v}$, is $l\sum_{i=1}^{m}n_{i}$.

$\mathbf{P}_{j}$ is the temporal differential operator accounting for the $m$ neural masses of the $j$-th node.

$\mathbf{\Psi}_{j}$ is the vector representing the neural activity of the neural masses of the $j$-th node.

$\mathbf{\Xi}_{j}$ is the vector of external interventions (noise and/or stimulation).

$\mathbf{\Lambda}_{j}$ is a state operator; it links the state variables within a neural mass.

$m$ is the number of neural masses within a node.

$n$ is the number of state variables of a neural mass; $n$ can differ between neural masses.

$\mathbf{V_{loc}}$ is a square matrix of order $\sum_{i=1}^{m}n_{i}$ describing the coupling scheme of the local connectivity.

$\mathbf{K}$ is the matrix of interregional distances.

The hierarchy of anatomical connectivity that determines the spatial scale represented by the structural linkages between nodes (Freeman, 1975) is represented by the square matrices $\mathbf{U}_{v}$ of order $l\sum_{i=1}^{m}n_{i}$ at level $v$:

$\mathbf{U}_{v=0}$ represents local connectivity,

$\mathbf{U}_{v=1}$ is the homogeneous (spatially invariant) connectivity implemented as a local connectivity kernel,

$\mathbf{U}_{v=2}$ is the heterogeneous connectivity given by the connectome.

Finally, $c_{v}$ is the propagation speed through the path of level $v$. The space-time structure of the connectivity over the space $\Omega$ in $\mathbf{\hat{P}}$ is coded by $\mathbf{U}_{v=2}$ and $\mathbf{K}$. The current implementation considers $c_{v} = \infty$ for local ($v=0$) and short-range ($v=1$) connectivities.
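Concretely, the time delays induced by $\mathbf{K}$ and $c_{v=2}$ amount to dividing tract lengths by the conduction speed. A toy sketch follows; the tract lengths are made up, and 4 mm/ms is simply the speed used later in the scripting example:

```python
import numpy as np

# Hypothetical tract lengths in mm between 4 regions (entries of K).
tract_lengths = np.array([[  0.0,  40.0,  85.0, 120.0],
                          [ 40.0,   0.0,  60.0,  95.0],
                          [ 85.0,  60.0,   0.0,  30.0],
                          [120.0,  95.0,  30.0,   0.0]])

speed = 4.0                     # finite conduction speed c in mm/ms
delays = tract_lengths / speed  # interregional time delays in ms

# A 120 mm tract traversed at 4 mm/ms delays the signal by 30 ms.
```

In a simulation these delays determine how far back in the stored history of each node's activity the coupling term must reach.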

Figure 2: This diagram depicts the two main distinguishable spatial scales at which a brain network and its connectivity are defined. The human cortex can be represented as a two dimensional surface embedded in three dimensional space discretely described by an ordered set of vertices, spanning triangles connected into a surface mesh (left). Local intracortical connections can be defined at this scale using a kernel function. Time-delays are introduced via the large-scale connectivity where each region is represented by one single node (whose position may correspond to the centre of mass of the vertices belonging to the same anatomical region). The long-range connections are defined by weights in the connectivity matrix which can be used as a coarser representation of the brain network (right). At the surface level both types of connectivity coexist.

### The generic two dimensional oscillator model

In the following sections this mesoscopic model will be used as an example of a generic population model. The choice is motivated by various reasons, foremost the fact that a wide range of neuron models can be described by a two-dimensional (2D) dynamical system (FitzHugh, 1961; Nagumo, 1962), and more recently Stefanescu and Jirsa (2008) have shown that population models built from such neuron models preserve the mathematical form of the single-neuron equations. This abstract 2D oscillator model generates a wide range of phenomena observed in neuronal population dynamics, such as multistability, the coexistence of oscillatory and non-oscillatory behaviours, and behaviours displaying multiple time scales, to name just a few. The model is a dynamical system describing one neural mass ($m=1$) with two state variables ($n=2$). It consists of two ordinary differential equations whose phase plane contains two nullclines (see Fig. 3). The first nullcline is a cubic function, as found in most neuron and population models; the second nullcline is arbitrarily configurable as a polynomial function of up to second order. Manipulating the parameters of the latter nullcline makes it possible to generate a wide range of different behaviours.

The equations for this model, as implemented in TVB, are:

dV = d * tau * (alpha * W - f * V**3 + e * V**2 + g * V + I)
dW = d * (a + b * V + c * V**2 - beta * W) / tau


$a$ : vertex of the quadratic nullcline or intersection

$b$ : slope of the linear nullcline when $c=0$ and declivity of the quadratic nullcline

$c$ : speed of change of the quadratic nullcline

$d$ : temporal scaling relative to the simulation time scale

$e$ : speed of change of the quadratic term of the cubic nullcline

$f$ : coefficient of the cubic term of the cubic nullcline

$g$ : coefficient of the linear term of the cubic nullcline

$\alpha$ : constant parameter to scale the feedback from the slow variable to the fast variable if $\tau \neq 0$

$I$: baseline excitability of the system and entry point for long-range and local coupling and stimulation patterns

$\tau$: temporal hierarchy parameter to introduce a time-scale separation between the two state variables
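The behaviour of a single, uncoupled node can be sketched by integrating these equations directly. The following is a minimal forward-Euler sketch; the parameter values are illustrative choices, not necessarily TVB's defaults:

```python
import numpy as np

# Illustrative parameter values (not necessarily TVB's defaults).
a, b, c, d = -2.0, -10.0, 0.0, 0.02
e, f, g = 3.0, 1.0, 0.0
alpha, beta, tau, I = 1.0, 1.0, 1.0, 0.0

def derivatives(V, W):
    """Right-hand sides of the generic 2D oscillator equations."""
    dV = d * tau * (alpha * W - f * V**3 + e * V**2 + g * V + I)
    dW = d * (a + b * V + c * V**2 - beta * W) / tau
    return dV, dW

# Forward-Euler integration of one node from an arbitrary initial state.
dt, n_steps = 0.0625, 4096
V, W = 0.5, 0.0
for _ in range(n_steps):
    dV, dW = derivatives(V, W)
    V, W = V + dt * dV, W + dt * dW
# The trajectory remains bounded: the cubic term confines the fast variable.
```

With these values the node relaxes towards a stable equilibrium; changing the parameters of the second nullcline moves the system between excitable and oscillatory regimes, as described above.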

To allow eventual comparison of the implementation of different models within the general framework of Eq. (1), a formulation of this model under the conventions described above is derived. A single neural mass model is described by the following equation:

$$\mathbf{D}(\text{d}/\text{d}t)\mathbf{\Phi}(t) = \mathbf{E}(t) - \mathbf{A}(\mathbf{\Phi}(t)) \tag{2}$$ where $\mathbf{\Phi}$ is the set of $n$ variables; then, each state variable $\left(\frac{\text{d}\varphi_{i}}{\text{d}t}\right)$ is a function of these $n$ variables and the way they are interconnected is given by the state matrix $\mathbf{A}$.

In this case $A(\mathbf{\Phi}(t)) = A^{0} [1\,1]^T + A^{1} [\varphi_1\,\varphi_2]^T + A^{2} [\varphi_1^{2}\,\varphi_2^{2}]^T + A^{3} [\varphi_1^{3}\,\varphi_2^{3}]^T$

$A^{0}= \begin{bmatrix} a_{11}^{0} = 0 & a_{12}^{0} = 0\\ a_{21}^{0} = \frac{d}{2\tau}a & a_{22}^{0} = \frac{d}{2\tau}a\\ \end{bmatrix}$, $A^{1}= \begin{bmatrix} a_{11}^{1} = d\tau\,g & a_{12}^{1} = d\tau\,\alpha\\ a_{21}^{1} = \frac{d}{\tau}b & a_{22}^{1} = -\frac{d}{\tau}\beta\\ \end{bmatrix}$, $A^{2}= \begin{bmatrix} a_{11}^{2} = d\tau\,e & a_{12}^{2} = 0\\ a_{21}^{2} = \frac{d}{\tau}c & a_{22}^{2} = 0\\ \end{bmatrix}$, $A^{3}= \begin{bmatrix} a_{11}^{3} = -d\tau\,f & a_{12}^{3} = 0\\ a_{21}^{3} = 0 & a_{22}^{3} = 0\\ \end{bmatrix}$

the equations for the 'Generic2dOscillator' model under this convention read: $$\dfrac{\text{d}\varphi_{1}}{\text{d}t}= a_{11}^{1}\varphi_1 + a_{12}^{1} \varphi_2 + a_{11}^{2}\varphi_1^{2} + a_{11}^{3}\varphi_1^{3} + \epsilon_1$$ $$\dfrac{\text{d}\varphi_{2}}{\text{d}t}= a_{21}^{0} + a_{22}^{0} + a_{21}^{1}\varphi_1 + a_{22}^{1}\varphi_2 + a_{21}^{2} \varphi_1^{2}$$

where $\epsilon_1$ replaces the term $I$ that represents the entry point of extrinsic inputs to the neural mass.
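The equivalence of the two formulations can be checked numerically by evaluating both at an arbitrary state. The parameter values below are hypothetical, chosen only to exercise every coefficient:

```python
import numpy as np

# Hypothetical parameter values, chosen to exercise every coefficient.
a, b, c, d = 1.5, -10.0, 0.5, 0.02
e, f, g = 3.0, 1.0, 0.2
alpha, beta, tau, I = 1.0, 1.0, 1.25, 0.1

phi1, phi2 = 0.3, -0.7   # an arbitrary state (V, W)

# Direct form of the model equations.
dphi1_direct = d * tau * (alpha * phi2 - f * phi1**3 + e * phi1**2 + g * phi1 + I)
dphi2_direct = d * (a + b * phi1 + c * phi1**2 - beta * phi2) / tau

# Matrix form, with coefficients a^k_ij; note a^1_12 couples the fast
# variable to the slow one through alpha.
A0 = np.array([[0.0, 0.0],
               [d / (2 * tau) * a, d / (2 * tau) * a]])
A1 = np.array([[d * tau * g, d * tau * alpha],
               [d / tau * b, -d / tau * beta]])
A2 = np.array([[d * tau * e, 0.0],
               [d / tau * c, 0.0]])
A3 = np.array([[-d * tau * f, 0.0],
               [0.0, 0.0]])

phi = np.array([phi1, phi2])
eps = np.array([d * tau * I, 0.0])   # extrinsic input enters the fast variable

dphi_matrix = A0 @ np.ones(2) + A1 @ phi + A2 @ phi**2 + A3 @ phi**3 + eps
# dphi_matrix agrees with (dphi1_direct, dphi2_direct) to machine precision.
```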

## Scientific Modules

The simulation core of TVB brings together a mesoscopic model of neural dynamics with structural data. The scientific modules support the building of a full brain network model by linking a number of functional blocks. The simulator then recreates the emergent brain dynamics by numerically integrating the set of equations describing the large-scale network model (Eq. 1). The elementary components required to build such a network, including cortical population models, structural connectivity data, coupling functions and neuroimaging modality, have their representation as TVB Classes and are bound together in an instance of the Simulator class. This class has several methods to set up the spatiotemporal dimensions of the input and output arrays, based on configurable or intrinsic attributes of the individual components, as well as a cascade of specific configuration methods to initialize all the individual blocks required to build a minimal representation of a brain network model and launch a simulation. For instance:

• structural spatial support (e.g. Connectivity and Cortex), defines the number of nodes of the network;
• transmission speed (e.g. Connectivity.speed) contributes to time delays in the network;
• integration time step (e.g. HeunDeterministic.dt), together with the simulation length determines the number of time steps to be integrated;
• state space (e.g. Generic2dOscillator), sets the dimension of each node in the network given by the number of neural masses, state variables and modes (only for multimodal models);
• output space (e.g. EEG), sets the spatial domain dimension associated with this recording technique, i.e., number of sensors;

Note that in most neural mass models there is a state variable representing some type of neural activity (firing rate, average membrane potential, etc.), which serves as a basis for the biophysical monitors. The state variables used as the source of neural activity depend both on the Model and on the biophysical space it will be projected onto (MEG, EEG, BOLD). How a given Monitor operates on a subset of state variables of interest, however, is an intrinsic property of that monitor. Users with programming experience can, of course, define new monitors according to their needs. Currently there is no mechanism providing automatic support for general operations over state variables before they are passed to a monitor. As such, when the neural activity entering the monitors is anything other than a summation or average over state variables, it is advised to redefine the Model so that one of its state variables directly describes the neural activity of interest.
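The temporal downsampling performed by an averaging monitor can be sketched in plain NumPy. This is a toy stand-in for a temporal-average monitor; the array shapes, random data and sampling period are illustrative:

```python
import numpy as np

dt = 0.0625                       # integration step in ms
period = 0.25                     # monitor sampling period in ms
istep = int(round(period / dt))   # integration steps per recorded sample

rng = np.random.default_rng(0)
raw = rng.standard_normal((1000, 4))   # hypothetical (time, node) activity

# Average consecutive blocks of istep steps, mimicking a monitor that
# records one temporally averaged sample every `period` milliseconds.
n_samples = raw.shape[0] // istep
tavg = raw[:n_samples * istep].reshape(n_samples, istep, -1).mean(axis=1)

# The output keeps the node dimension but is downsampled in time.
```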

### Scripting Interface and Examples

Users with programming experience may interact with the scientific core through the Python interactive scripting interface, enabling easy modelling and development. The simplest way to describe the scripting interface is with an example; the code below shows the steps required to build a region-based network model and to apply a stimulus:

First, import NumPy and some of TVB's scientific modules

import numpy
from tvb.simulator.lab import connectivity, models, coupling, integrators, monitors, equations, patterns, simulator


Initialise a Connectivity, a local Model (Fig. 3) and a long-range Coupling

white_matter = connectivity.Connectivity()
white_matter.speed = numpy.array([4.0])
oscillator = models.Generic2dOscillator()
long_range_coupling = coupling.Linear(a=0.0126)

Figure 3: Phase portrait and sample trajectory of the generic mesoscopic model with its default parameters.


Select an Integrator

heunint = integrators.HeunDeterministic(dt=2**-4)


Choose the Monitors to record the output activity

mon_raw = monitors.Raw()
mon_tav = monitors.TemporalAverage(period=2**-2)
screen = (mon_raw, mon_tav)


A stimulus at the region level is generated with the code below; the resulting pattern is displayed in Fig. 4.

Figure 4: Top-left: the spatial component of the stimulus, indicating the strength with which the stimulus enters each region; bottom-left: the temporal profile of the stimulus; right: the combination of both the spatial and temporal components.

#enable access to connectivity attributes
white_matter.configure()

#specify to which regions and with what strength the stimulus will be applied
regions = [0, 7, 13, 33, 42]
weights = numpy.zeros((white_matter.number_of_regions,))
weights[regions] = numpy.array([2.0**-2, 2.0**-3, 2.0**-4, 2.0**-5, 2.0**-6])

#define a temporal profile
eqn_t = equations.Gaussian()
eqn_t.parameters["midpoint"] = 16.0 #ms

#create an instance of a Stimulus datatype
stimulus = patterns.StimulusRegion(temporal=eqn_t,
                                   connectivity=white_matter,
                                   weight=weights)


Create an instance of the Simulator class

sim = simulator.Simulator(model=oscillator,
                          connectivity=white_matter,
                          coupling=long_range_coupling,
                          integrator=heunint,
                          monitors=screen,
                          stimulus=stimulus)
sim.configure()


Run the simulation

raw_data, raw_time, tavg_data, tavg_time = [], [], [], []
for raw, tavg in sim(simulation_length=64):
    if raw is not None:
        raw_time.append(raw[0])
        raw_data.append(raw[1])

    if tavg is not None:
        tavg_time.append(tavg[0])
        tavg_data.append(tavg[1])


The resulting time series are shown in Fig. 5, where the perturbation by the stimulus can be seen in the transient dynamics from the initial state.
Figure 5: Left: Raw neural activity described by the first state variable of the mesoscopic model. Each trace represents the activity of one node. Red traces correspond to the nodes to which the stimulus was applied. Right: trajectories on the phase plane. Blue lines are the stimulated nodes trajectories.

#### Parameter Space Exploration

When studying large-scale brain network models, it is often of interest to explore the role of certain parameters and how they shape the collective dynamics of the brain. For instance, Ghosh et al. (2008) and Deco et al. (2009) demonstrated the important role of three large-scale parameters in the emergence of different cluster synchronization regimes: the global coupling strength factor, time delays (introduced via the long-range connectivity fibre tract lengths and a single finite transmission speed) and noise variance. Using TVB's scripting interface, it is easy to replicate this scheme and perform a parameter space exploration along multiple dimensions. The following movie shows a one-dimensional parameter sweep along the coupling strength (the slope of a linear coupling function). The connectivity matrix upon which the large-scale network is built was the demonstration dataset. The local dynamics were represented by the Generic2dOscillator model, configured such that the dynamics of a single node are characterized by a stable spiral (excitable regime). Each frame represents a 256 ms simulation for a given value of the coupling strength. As this value increases, the incoming activity to each node increasingly affects the local dynamics, finally leading the system into an oscillatory regime.
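The structure of such a sweep can be sketched outside TVB with a toy two-node network of generic oscillators under linear coupling. All parameter values, the network and the Euler scheme below are illustrative, not the configuration used in the movie:

```python
import numpy as np

def simulate(coupling_a, n_steps=2000, dt=0.0625):
    """Toy 2-node network of generic 2D oscillators under linear coupling."""
    # Illustrative node parameters (not TVB defaults): a cubic fast
    # variable and a slow recovery variable, as in the Generic2dOscillator.
    d, tau, e, f, a, b, beta = 0.02, 1.0, 3.0, 1.0, -2.0, -10.0, 1.0
    weights = np.array([[0.0, 1.0],
                        [1.0, 0.0]])   # symmetric 2-node connectome
    V = np.array([0.5, -0.5])
    W = np.zeros(2)
    for _ in range(n_steps):
        # The coupling term enters where the baseline input I would.
        coupled = coupling_a * weights @ V
        dV = d * tau * (W - f * V**3 + e * V**2 + coupled)
        dW = d * (a + b * V - beta * W) / tau
        V, W = V + dt * dV, W + dt * dW
    return V

# One-dimensional sweep along the slope of the linear coupling function.
results = {a: simulate(a) for a in (0.0, 0.01, 0.1)}
```

In TVB the same pattern applies: the simulation is re-run for each value along the chosen parameter dimension, and a summary of the resulting dynamics is stored per run.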

It is worth noting that parameter sweeps can also be launched from TVB's web interface, which provides a parallel simulation environment to non-computational researchers.

In TVB, stimulation patterns can be used to directly stimulate cortical regions. This provides a way of accessing and altering neural dynamics in networks that are widely distributed anatomically: the induced activation propagates through long-range connections to other brain areas. By identifying regions which have been activated during stimulation, it may be possible to infer structural and functional connectivity patterns in the healthy and in the altered brain. Likewise, TVB enables researchers to test the influence of stimulation patterns on different cortical areas and compare the resulting brain states.

## Datatypes

In the architecture of TVB, a common language permits the handling and flow of data between the scientific kernel and the supporting framework (see section below). This language is represented by TVB-Datatypes which are annotated data structures containing one or more data attributes and associated descriptive information, as well as methods for operating on the data they enclose. The definition of a Datatype is achieved using TVB's traiting system, which was inspired by the traiting system developed by Enthought (EnthoughtTraits). The traiting system of TVB, among other things, provides a mechanism for annotating data, that is, associating additional information with the data which is itself usually a single number or an array of numbers.
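The idea behind such a traiting system can be sketched with a small descriptor class. `Attr`, `Connectivity` and all field names below are illustrative inventions, not TVB's actual traiting API:

```python
# A minimal sketch of an annotated "trait": a numeric attribute bundled
# with descriptive metadata (hypothetical classes, not TVB's API).

class Attr:
    def __init__(self, default, label="", doc="", unit=None):
        self.default = default
        self.label = label    # human-readable name shown in a GUI
        self.doc = doc        # longer description / documentation string
        self.unit = unit      # physical unit of the quantity, if any

class Connectivity:
    speed = Attr(default=3.0,
                 label="Conduction speed",
                 doc="Signal propagation speed along white-matter tracts.",
                 unit="mm/ms")
```

Because the metadata travels with the value, interfaces and storage layers can render labels, units and help text automatically, which is precisely what the web GUI and the database backend rely on.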

## Supporting Framework

The supporting framework provides a database backend, workflow management and a number of features to support collaborative work. The graphical user interface (GUI) is web based, making use of HTML5, WebGL, CSS3 and JavaScript (Bostock et al., 2011) tools to provide an intuitive and responsive interface that can be accessed locally or remotely.

### Graphical Interface

TVB's web interface consists of 6 main working areas:

1. User
2. Project
3. Simulator
4. Analysis
5. Stimulus
6. Connectivity

In 'User', users manage their accounts and TVB settings according to their available hardware resources. In 'Project', individual projects are managed and can be shared with other users; navigation tools are provided to explore each project's structure as well as the data associated with it. In 'Simulator' the large-scale network model is configured and simulations are launched; additional viewers for structural and functional data are offered in 2D and 3D, along with other displays to visualize the results of a simulation, and a history of simulations is also available in this area. In 'Analysis', time-series and network analysis methods are provided and the results can be displayed with built-in visualizers. In 'Stimulus', users can interactively create stimulation patterns to be used in simulations, enabling the modelling of stimulation protocols. Finally, in 'Connectivity', users are given a responsive interface to edit the connectivity matrices, assisted by interactive visualization tools. Each of the aforementioned working areas has a second-level menu providing sub-environments to perform specific tasks or to access detailed information about a certain project and its associated data. Online help is also provided, helping users navigate the interface and quickly giving them an overview of the modelling approach used in TVB.

### Data Compatibility

For exporting data generated by and used in TVB, an HDF5-based format (The HDF Group, Hierarchical Data Format version 5) was chosen. It can store large volumes of data compactly; it allows grouping of data in a tree structure; it allows metadata assignment at every level; and it is a widely used format, accessible from several programming languages and applications. Additionally, each object in TVB has a globally unique identifier (GUID), which makes it easy to identify an object across systems and avoids naming conflicts among files containing objects of the same type. For importing neuroimaging data generated independently by other applications, a set of uploading mechanisms is provided. The following formats are supported: NIFTI-1 (volumetric time series), GIFTI (surfaces) and CFF (connectome file).
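A round trip through such a file can be sketched with h5py. The group names, attribute names and values below are illustrative, not TVB's actual on-disk layout:

```python
import os
import tempfile
import uuid

import h5py
import numpy as np

# Hypothetical export of a simulated time series: data grouped in a tree,
# metadata attached as attributes, and a GUID identifying the object.
data = np.arange(12.0).reshape(4, 3)            # toy (time, node) activity
path = os.path.join(tempfile.mkdtemp(), "timeseries.h5")

with h5py.File(path, "w") as f:
    group = f.create_group("TimeSeriesRegion")
    group.attrs["gid"] = str(uuid.uuid4())      # globally unique identifier
    group.attrs["sample_period"] = 0.25         # ms (illustrative)
    group.create_dataset("data", data=data, compression="gzip")

# Read it back: the tree structure and metadata survive the round trip.
with h5py.File(path, "r") as f:
    stored = f["TimeSeriesRegion/data"][:]
    gid = f["TimeSeriesRegion"].attrs["gid"]
```

Because the GUID is stored with the object rather than encoded in the filename, two files holding objects of the same type can never be confused, whatever they are called.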

## System Requirements & Availability

TVB can be installed in three main configurations, according to the available hardware resources: 1) stand-alone; 2) client-server; or 3) cluster. In the first, a local workstation needs to have certain display, computing power and storage resources. In the second, TVB runs on a server connected through a network to client units, and is thus accessible to a certain number of users; in this configuration, simulations benefit from the server's computing power while visualization tasks use the resources of the client machine. The third is similar to the client-server configuration, but with the additional advantage of parallelization support in the back-end. When using TVB's web interface, users are recommended to have a high-definition monitor (at least 1600 x 1000 pixels), a WebGL- and WebSockets-compatible browser (recent versions of Mozilla Firefox, Apple Safari or Google Chrome), and a WebGL-compatible graphics card supporting OpenGL version 2.0 or higher.

Packages built for Linux, Mac and Windows, as well as the source code repositories, are available from the TVB project website.

## References

• Amari, S (1977). Dynamics of pattern formation in lateral-inhibition type neural fields. Biological Cybernetics 22: 77-87.
• Bastiani, M; Shah, N; Goebel, R and Roebroeck, A (2012). Human cortical connectome reconstruction from diffusion weighted MRI: the effect of tractography algorithm. Neuroimage 62: 1732-1749.
• Bostock, M; Ogievetsky, V and Heer, J (2011). D3 Data-Driven Documents. IEEE Transactions on Visualization and Computer Graphics 17: 2301-2309.
• Deco, G; Jirsa, V; Robinson, P; Breakspear, M and Friston, K (2008). The Dynamic Brain: From Spiking Neurons to Neural Masses and Cortical Fields. PLoS Computational Biology 4(8): e1000092.
• Deco, G; Jirsa, V; Robinson, P; McIntosh, A and Sporns, O (2009). Key role of coupling, delay, and noise in resting brain fluctuations. PNAS 106: 10302-10307.
• Essen, D and Ugurbil, K (2012). The future of the human connectome. Neuroimage 62: 1299-1310.
• FitzHugh, R (1961). Impulses and physiological states in theoretical models of nerve membrane. Biophysics Journal 1: 445-466.
• Freeman, W (1975). Mass Action in the Nervous System. Academic Press, New York.
• Ghosh, A; Rho, Y; McIntosh, A; Kötter, R and Jirsa, V (2008). Noise during Rest Enables the Exploration of the Brain's Dynamic Repertoire. PLoS Computational Biology 4: e1000196.
• Hagmann, P; Cammoun, L; Gigandet, X; Meuli, R and Honey, C (2008). Mapping the structural core of human cerebral cortex. PLoS Computational Biology 6: e159.
• Honey, C; Sporns, O; Cammoun, L; Gigandet, X and Thiran, J (2009). Predicting human resting-state functional connectivity from structural connectivity. PNAS 106: 2035-2040.
• Kötter, R (2004). Online retrieval, processing, and visualization of primate connectivity data from the CoCoMac database. Neuroinformatics 2: 127-144.
• Kötter, R and Wanke, E (2005). Mapping brains without coordinates. Philosophical Transactions of The Royal Society B: Biological Sciences 360: 751-766.
• Nagumo, J (1962). An Active Pulse Transmission Line Simulating Nerve Axon. Proc. IRE 50: 2061-2070.
• Stefanescu, R and Jirsa, V (2008). A Low Dimensional Description of Globally Coupled Heterogeneous Neural Networks of Excitatory and Inhibitory Neurons. PLoS Computational Biology 4(11): e1000219.
• Wilson, H and Cowan, J (1972). Excitatory and inhibitory interactions in localized populations of model neurons. Biophysical Journal 12: 1-24.