Neuronal parameter optimization

From Scholarpedia
Astrid A. Prinz (2007), Scholarpedia, 2(1):1903. doi:10.4249/scholarpedia.1903 revision #54915

Curator: Astrid A. Prinz

Neuronal parameter optimization is the process of identifying sets of parameters that lead to a desired electrical activity pattern in a neuron or neuronal network model that is not fully constrained by experimental data.


The need for optimization

Single neurons and neuronal networks intended to reproduce an experimentally observed electrical behavior are modeled with systems of differential equations that contain parameters such as (but not limited to):

  • for single neuron models: membrane capacitance, maximal conductances, half activation and inactivation voltages and time constants of individual ionic currents, axial resistance, and morphological parameters such as cell size and axon or dendrite branch structure, length(s) and diameter(s).
  • for network models: all of the above for each neuron in the network, plus information about the number and connectivity of the neurons in the network and the properties of each synapse.

For details on the parameters that govern specific types of model neurons and networks, see for example conductance-based model neurons.

In the biological neurons and networks that inspire these models, it is practically never possible to measure all parameters needed to fully constrain the model in a single experimental preparation. Furthermore, the properties of neurons and networks vary even between animals of the same species or within the same animal (Marder and Goaillard 2006), and strategies such as

  • combining a subset of parameters measured in animal A with another subset measured in animal B or
  • obtaining model parameter values by averaging over measurements of the same parameter in different animals

usually fail to produce the desired model behavior (Golowasch et al. 2002).

Starting with a set of differential equations that constitutes a neuron or network model, it is therefore often necessary to find sets of model parameters that approximate the desired behavior through methods other than experimental measurement.

What is "optimal"?

Regardless of the optimization method used, model parameter optimization requires a measure of the "goodness" of model neuron or network activity, i.e. of how well the model produces the desired electrical activity pattern. As an alternative to maximizing the goodness of a model, parameter optimization methods can also minimize the difference between the model's activity and the biological target activity, as measured by a "distance" or "error" function. Because both strategies are in use, this article does not distinguish between maximizing a goodness measure and minimizing an error measure, and treats the two strategies as interchangeable.

The choice of goodness or error measure depends on the purpose of the model neuron or network and can have significant influence on the results and success of model parameter optimization. Examples of goodness or error measures are:

  • Root-mean-square difference between the voltage trajectories - spontaneous or in response to stimuli - generated by the model and the biological neuron or network it is supposed to model (Bhalla and Bower 1993).
  • Overlap between model and target voltage trajectories in the dV/dt versus V phase plane (LeMasson and Maex 2001, Achard and De Schutter 2006), a goodness measure that has the advantage of being insensitive to time shifts between voltage traces, but the disadvantage of losing all timing information.
  • Similarity between features extracted from the model and target voltage traces, such as inter-spike intervals or spike amplitudes (Bhalla and Bower 1993).
  • All-or-none measures of goodness, such as whether a model's behavior is of a certain type, like bursting or tonically spiking, or falls within the experimentally observed range for characteristics such as burst period and duration (Prinz et al. 2003, 2004). Such digital goodness measures preclude the use of gradient descent algorithms.
  • Visual similarity between the voltage traces generated by a model and those of the experimental data it is supposed to mimic, as judged by the modeler (Guckenheimer et al. 1993). Such unquantified and subjective goodness measures preclude automated parameter optimization.
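The first of these measures is straightforward to compute. The sketch below shows a root-mean-square comparison of two sampled voltage traces; the function name and the sine-wave "target" trace are invented for illustration and are not taken from the cited studies.

```python
import numpy as np

def rms_error(v_model, v_target):
    """Root-mean-square difference between two sampled voltage traces.

    Both traces are assumed to be sampled at the same time points;
    lower values indicate a closer match (0 means identical traces).
    """
    v_model = np.asarray(v_model, dtype=float)
    v_target = np.asarray(v_target, dtype=float)
    if v_model.shape != v_target.shape:
        raise ValueError("traces must be sampled at the same time points")
    return float(np.sqrt(np.mean((v_model - v_target) ** 2)))

# A trace compared to itself has zero error; a uniformly offset
# trace has an error equal to the offset.
t = np.linspace(0.0, 1.0, 1000)
target = -60.0 + 40.0 * np.sin(2.0 * np.pi * 5.0 * t)  # toy "recording" in mV
print(rms_error(target, target))        # 0.0
print(rms_error(target + 2.0, target))  # 2.0
```

Note that this measure is sensitive to small time shifts between otherwise identical spike trains, which is precisely the weakness the phase-plane measure above was designed to avoid.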

Parameter sets with a goodness above a satisfactory threshold are called "solutions" for the optimization problem in question. If there are multiple solutions for an optimization problem - as is often the case - the entirety of those solutions is referred to as the "solution space" of the problem.

Neuronal parameter optimization methods

Methods that are being used to identify model parameter sets that generate a desired behavior include:

  • hand-tuning
  • parameter space exploration
  • gradient descent
  • evolutionary algorithms
  • bifurcation analysis
  • hybrid methods that combine several of the above

Each of these methods will be briefly described below, including a discussion of their respective advantages and disadvantages. A range of methods for neuronal parameter optimization is also described in (Achard et al. in press).


Hand-tuning

Perhaps the most widely used method to obtain a model parameter set that produces good model behavior is to manually change one or a few model parameters at a time, guided by trial and error and the modeler's experience and prior knowledge of neuronal or network dynamics, until the model's behavior is satisfactorily close to the experimentally observed target behavior - or until the modeler loses patience.


Advantages

  • Does not require the design and programming of an optimization algorithm or goodness function.
  • Not computationally intensive.
  • Incorporates prior knowledge about neuron or network behavior.


Disadvantages

  • Difficult even for experienced modelers.
  • Highly subjective.
  • Time-consuming.
  • If a good parameter set is found, it is never certain whether a better one remains undiscovered.
  • If no good parameter set is found, it is not clear whether none exists or whether existing good parameter sets were simply not discovered.


Examples

  • (Nadim et al. 1995) used hand-tuning to arrive at a functional model of the leech heartbeat elemental oscillator.
  • (Soto-Trevino et al. 2005) hand-tuned a multi-compartment model of a pacemaker network to reproduce a variety of experimentally observed behaviors.

Parameter space exploration

Parameter space exploration methods use computational brute force to simulate model behavior for a large number of parameter sets and to select those parameter sets that best reproduce the target neuron or network activity. The parameter space of the model can be explored by covering it with a regular grid of parameter sets or with random combinations of parameters. Simulation and analysis results from each simulated parameter set are often stored in a model database that can later be mined for parameter sets that generate activity patterns other than the original target behavior.
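A grid-based exploration of this kind can be sketched as follows. The simulate_and_score function is a hypothetical stand-in: in a real study each call would integrate the model's differential equations and apply one of the goodness measures discussed above, and every sampled point would be stored in a database for later mining.

```python
import itertools

def simulate_and_score(g_na, g_k):
    """Hypothetical stand-in for simulating a model neuron with the
    given maximal conductances and scoring its activity.  The toy
    goodness below simply peaks at an illustrative target point in
    conductance space; it is not a real neuron model."""
    return -((g_na - 120.0) ** 2 + (g_k - 36.0) ** 2)

# Regular grid over a two-dimensional maximal-conductance space.
g_na_values = [40.0 * i for i in range(1, 6)]   # 40 .. 200 (illustrative units)
g_k_values = [12.0 * i for i in range(1, 6)]    # 12 .. 60
threshold = -500.0  # parameter sets scoring above this count as solutions

database = []  # store every sampled parameter set for later mining
for g_na, g_k in itertools.product(g_na_values, g_k_values):
    goodness = simulate_and_score(g_na, g_k)
    database.append({"g_na": g_na, "g_k": g_k, "goodness": goodness})

solutions = [entry for entry in database if entry["goodness"] > threshold]
best = max(database, key=lambda entry: entry["goodness"])
print(best["g_na"], best["g_k"])
```

The exponential cost mentioned below is visible here: a grid with five values per parameter needs 5^n simulations for n parameters, so even eight parameters would already require several hundred thousand runs.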

Figure 1: Schematic illustration of parameter exploration with random parameter sets (left) or parameter sets on a regular grid (right). The boxes delineate the sampled parameter space, the shaded areas show the solution space for a particular parameter optimization problem, the white dots are solutions identified by the exploration method, and the black dots are sampled parameter sets that are not solutions.


Advantages

  • Provides information about model behavior throughout parameter space.
  • Does not require prior knowledge of model dynamics.
  • Locates entire solution space rather than a single solution.


Disadvantages

  • Computationally intensive.
  • The number of simulations necessary to cover parameter space increases exponentially with the number of parameters.
  • Sparse sampling of parameter space may locate good parameter sets but miss the best ones.


Examples

  • (Bhalla and Bower 1993) used parameter exploration of different cell types to localize regions of interest in parameter space.
  • (Foster et al. 1993) used a stochastic search method to study the role of conductances in Hodgkin-Huxley type model neurons.
  • (Goldman et al. 2001) explored the maximal conductance space of a model neuron to identify regions in parameter space that generate silent, tonically spiking, or bursting behavior.
  • (Prinz et al. 2003) used the example of a stomatogastric model neuron to introduce model database construction and analysis as model analysis tools.
  • (Prinz et al. 2004) explored the parameter space of a rhythmic model network and showed that similar and functional network behavior can arise from different network parameter sets.

Gradient descent

Gradient descent methods (gradient ascent when a goodness measure is maximized, gradient descent when an error function is minimized) start at a point in parameter space, locally explore how goodness changes if one or several parameters are changed by small amounts, and then choose a new best parameter set by moving in the direction in parameter space that most improves the goodness of the model. These steps are repeated until no further improvement is found, i.e. until a (possibly local) optimum has been reached.
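The procedure can be sketched as a finite-difference gradient ascent on a toy goodness surface. The function names and the quadratic surface are illustrative assumptions, not the algorithm of any of the cited studies; in practice each goodness evaluation would be a full model simulation.

```python
def gradient_ascent(goodness, params, step=0.1, eps=1e-4, iters=200):
    """Finite-difference gradient ascent on a goodness function.

    At each iteration the local gradient is estimated by perturbing
    each parameter by eps, then the parameter set moves a small step
    in the direction that most increases goodness.  This assumes the
    goodness function is smooth and converges only to a local optimum.
    """
    params = list(params)
    for _ in range(iters):
        grad = []
        for i in range(len(params)):
            bumped = list(params)
            bumped[i] += eps
            grad.append((goodness(bumped) - goodness(params)) / eps)
        params = [p + step * g for p, g in zip(params, grad)]
    return params

# Toy smooth goodness surface with a single maximum at (2, -1).
goodness = lambda p: -((p[0] - 2.0) ** 2 + (p[1] + 1.0) ** 2)
best = gradient_ascent(goodness, [0.0, 0.0])
print(best)
```

Starting the same routine from many different initial points is a common way to mitigate the local-maximum problem noted below.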


Advantages

  • Can be computationally efficient.


Disadvantages

  • Assumes that the goodness function is smooth.
  • Danger of getting stuck in local goodness maxima.


Examples

  • (Bhalla and Bower 1993) used gradient descent to identify good parameter sets for multi-compartment models of mitral and granule cells of the olfactory bulb.

Evolutionary algorithms

Evolutionary algorithms (which include genetic algorithms) for model parameter optimization use principles such as mutation, mating, and selection - derived from Darwinian evolution - to improve the goodness of a population of model parameter sets. Evolutionary algorithms typically start with a random population of parameter sets, evaluate the goodness of each parameter set, select the best sets as parents of the next generation, and generate that next generation of parameter sets by mixing the parent parameter sets and randomly mutating a subset of parameters. These steps are then repeated with each new generation until parameter sets with sufficient goodness have been identified.
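The generational loop described above can be sketched as follows. The population size, mutation rate, and toy goodness surface are illustrative choices, not values from the cited studies, and each goodness evaluation would in practice be a full simulation of the model.

```python
import random

def evolve(goodness, n_params, pop_size=40, n_parents=10,
           mutation_rate=0.2, mutation_scale=0.5, generations=60,
           lo=-10.0, hi=10.0, seed=0):
    """Minimal evolutionary algorithm over real-valued parameter sets.

    Starts from a random population, keeps the best sets as parents,
    and builds each new generation by mixing parent parameter sets
    (uniform crossover) and randomly mutating a subset of parameters.
    """
    rng = random.Random(seed)
    population = [[rng.uniform(lo, hi) for _ in range(n_params)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=goodness, reverse=True)
        parents = population[:n_parents]          # selection
        children = []
        while len(children) < pop_size:
            mother, father = rng.sample(parents, 2)
            child = [rng.choice(pair)             # uniform crossover
                     for pair in zip(mother, father)]
            for i in range(n_params):
                if rng.random() < mutation_rate:  # random mutation
                    child[i] += rng.gauss(0.0, mutation_scale)
            children.append(child)
        population = children
    return max(population, key=goodness)

# Toy goodness surface with a single maximum at (3, -4).
goodness = lambda p: -((p[0] - 3.0) ** 2 + (p[1] + 4.0) ** 2)
best = evolve(goodness, n_params=2)
print(goodness(best))
```

Because selection only ranks parameter sets, this loop also works with the non-smooth or all-or-none goodness measures that rule out gradient methods.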


Advantages

  • Can handle high-dimensional and non-smooth parameter spaces.
  • Have been shown to be computationally efficient (Moles et al. 2003).


Disadvantages

  • Outcome can be highly sensitive to choice of goodness function and algorithmic parameters such as generation size, mutation rate, breeding strategy, etc.


Examples

  • (Taylor and Enoka 2004) used an evolutionary algorithm to optimize motor neuron synchronization.
  • (Keren et al. 2005) constrained compartmental models using multiple voltage recordings and genetic algorithms.
  • (Achard and De Schutter 2006) found solutions for a complex model neuron with an evolutionary algorithm as the first stage of a hybrid optimization strategy.

Bifurcation analysis

Model parameter optimization based on bifurcation analysis uses computational tools based on the theory of nonlinear dynamical systems to generate maps of parameter space that indicate the location of bifurcations at which one type of model behavior transitions into another type. Such maps allow the selection of model parameter sets that generate the desired model behavior.


Advantages

  • Provides information about model behavior throughout parameter space.


Disadvantages

  • Comprehensive map construction becomes computationally costly as the number of parameters increases.
  • Maps contain information about behavior type (e.g., bursting with six spikes per burst), but not about quantitative characteristics such as burst period or duration.


Examples

  • (Guckenheimer et al. 1993) used bifurcation analysis to map the dynamics of a bursting neuron.
  • (Beer 2006) computed and analyzed the bifurcation manifolds for small recurrent neural networks.

Hybrid methods

Hybrid model parameter optimization methods combine several of the methods described above by applying them to the same model parameter optimization problem either sequentially or in parallel (Achard et al. in press).

Figure 2: Illustration of the hybrid optimization method used by (Achard and De Schutter 2006). The boxes delineate the sampled parameter space, the shaded areas show the solution space for a particular parameter optimization problem, the white dots with black edges are solutions identified with an evolutionary algorithm, the white dots are solutions subsequently found by exploring hyperplanes between known solutions, and the black dots are parameter sets in those hyperplanes that are not solutions.


Advantages

  • Can combine the advantages of the underlying methods.


Disadvantages

  • Complex implementation.
  • Requires expertise in multiple parameter optimization methods.


Examples

  • (Bhalla and Bower 1993) combined gradient descent methods (to localize good parameter sets) with brute-force parameter exploration (to explore how sensitively model behavior depends on the parameters in the vicinity of these solutions).
  • (Achard and De Schutter 2006) used evolutionary algorithms to identify multiple good parameter sets for a complex Purkinje cell model that would have been too high-dimensional for exhaustive parameter space exploration, but then systematically explored the space between these parameter sets.


References

  • Achard P, De Schutter E (2006) Complex parameter landscape for a complex neuron model. PLoS Comput Biol 2(7): e94.
  • Achard P, Van Geit W, LeMasson G (in press) Parameter searching. In: Computational modeling methods for neuroscientists, De Schutter E, ed. Cambridge: MIT Press.
  • Beer RD (2006) Parameter space structure of continuous-time recurrent neural networks. Neural Comput 18(12): 3009-3051.
  • Bhalla US, Bower JM (1993) Exploring parameter space in detailed single neuron models: Simulations of the mitral and granule cells of the olfactory bulb. J Neurophysiol 69(6): 1948-1965.
  • Calin-Jagemann RJ, Katz PS (2006) A distributed computing tool for generating neural simulation databases. Neural Comput 18(12): 2923-2927.
  • Foster WR, Ungar LH, Schwaber JS (1993) Significance of conductances in Hodgkin-Huxley models. J Neurophysiol 70(6): 2502-2518.
  • Goldman MS, Golowasch J, Marder E, Abbott LF (2001) Global structure, robustness, and modulation of neuronal models. J Neurosci 21(14): 5229-5238.
  • Golowasch J, Goldman MS, Abbott LF, Marder E (2002) Failure of averaging in the construction of a conductance-based neuron model. J Neurophysiol 87(2): 1129-1131.
  • Guckenheimer J, Gueron S, Harris-Warrick RM (1993) Mapping the dynamics of a bursting neuron. Phil Trans R Soc Lond B 341: 345-359.
  • Hines ML and Carnevale NT (2001) NEURON: a tool for neuroscientists. Neuroscientist 7: 123-135.
  • Keren N, Peled N, Korngreen A (2005) Constraining compartmental models using multiple voltage recordings and genetic algorithms. J Neurophysiol 94: 3730-3742.
  • LeMasson G, Maex R (2001) Introduction to equation solving and parameter fitting. In: Computational neuroscience: Realistic modeling for experimentalists, De Schutter E, ed. London: CRC Press.
  • Marder E, Goaillard JM (2006) Variability, compensation and homeostasis in neuron and network function. Nat Rev Neurosci 7: 563-574.
  • Moles CG, Mendes P, Banga JR (2003) Parameter estimation in biochemical pathways: A comparison of global optimization methods. Genome Res 13: 2467-2474.
  • Nadim F, Olsen OH, De Schutter E, Calabrese RL (1995) Modeling the leech heartbeat elemental oscillator. I. Interactions of intrinsic and synaptic currents. J Comput Neurosci 2: 215-235.
  • Prinz AA, Billimoria CP, Marder E (2003) Alternative to hand-tuning conductance-based models: construction and analysis of databases of model neurons. J Neurophysiol 90: 3998-4015.
  • Prinz AA, Bucher D, Marder E (2004) Similar network activity from disparate circuit parameters. Nat Neurosci 7(12): 1345-1352.
  • Soto-Trevino C, Rabbah P, Marder E, Nadim F (2005) Computational model of electrically coupled, intrinsically distinct pacemaker neurons. J Neurophysiol 94: 590-604.
  • Taylor AM, Enoka RM (2004) Optimization of input patterns and neuronal properties to evoke motor neuron synchronization. J Comput Neurosci 16: 139-157.
  • Taylor AL, Hickey TJ, Prinz AA, Marder E (2006) Structure and visualization of high-dimensional conductance spaces. J Neurophysiol 96: 891-905.


External links

  • A model neuron database constructed by parameter space exploration (Prinz et al. 2003) is available online.
  • NeuronPM provides a free client/server application that explores parameter spaces for neural simulations written in NEURON (Hines and Carnevale 2001). The application is also described in (Calin-Jagemann and Katz 2006).
  • NeuroVis is a software tool developed for the visualization and analysis of model neuron and network databases resulting from systematic parameter space exploration. NeuroVis is based on dimensional stacking as described in (Taylor et al. 2006).
  • Gradient descent on Wikipedia.
  • Evolutionary algorithm on Wikipedia.
  • Bifurcation theory on Wikipedia
  • XPP is a free software package that facilitates bifurcation and phase plane analysis.
  • Neurofitter is a parameter tuning package for model neurons that uses the dV/dt versus V phase plane error measure (LeMasson and Maex 2001, Achard and De Schutter 2006) and several of the optimization methods presented here.