Stochastic models of ion channel gating

Tilman Kispersky and John A. White (2008), Scholarpedia, 3(1):1327. doi:10.4249/scholarpedia.1327 revision #152773 [link to/cite this article]

Curator: John A. White

Stochastic models of ion channel gating date back to the pioneering work of Hodgkin and Huxley (1952), whose gating variables are often interpreted as probabilities. Sakmann and Neher directed the efforts that led to the first single-channel recordings, for which they won the Nobel Prize in Physiology or Medicine in 1991. Single-channel recordings (Sakmann and Neher, 1995) demonstrate that individual ion channels typically have only two conductance states: closed and open. Fluctuations between the closed and open states appear to occur randomly (Figure 1, top trace), presumably driven by thermal noise. Under voltage clamp, the coefficient of variation (ratio of standard deviation to mean) of the current passing through a set of independent and identically distributed stochastic channels is proportional to \(1/\sqrt{N}\ ,\) where \(N\) is the number of channels.
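
This scaling can be seen from elementary statistics, under the assumption that each channel is an independent two-state element with open probability \(p\) at equilibrium: the number of open channels is then binomially distributed, so

\[ \mu = Np\ , \qquad \sigma = \sqrt{Np(1-p)}\ , \qquad \frac{\sigma}{\mu} = \sqrt{\frac{1-p}{Np}} \propto \frac{1}{\sqrt{N}}\ . \]

The same proportionality carries over to the macroscopic current under voltage clamp, since each open channel contributes the same single-channel current.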

Figure 1: Simulated voltage-clamp responses of voltage-gated potassium channels. Simulations were performed using the Gillespie method (see MATLAB source code). At the time indicated by the dotted line, the open probability transitions from a low to a high value. From top to bottom, traces indicate the number of open channels for N = 1, 10, 100, and 1000, followed by the deterministic case (black). The y-scale is set in each case to go from zero to N. For N = 1000, the response resembles the deterministic case (cf. bottom two traces).

Markov models

Simple models that appear to replicate the behavior of single ion channels and of channel populations can be constructed under simple assumptions. For example, for a simple two-state model, one can reasonably assume that the free energies associated with the closed and open states depend on membrane potential, and that transitions are driven by thermal fluctuations (Weiss, 1996). A crucial assumption common to this class of models is that state transitions in ion channels behave like Markov processes. This assumption implies that ion channels are memoryless, in that the probability per unit time of changing states depends only on the current state, not on past events. In particular, the probability per unit time of changing states does not depend on the time the process has already dwelt in that state.
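
As a concrete illustration of the memoryless property, consider a channel sitting in the closed state with constant opening rate \(\alpha\) (as in the two-state scheme of Eq. (1) below). Its dwell time \(T\) in that state is exponentially distributed, and the exponential is the only continuous distribution with no memory:

\[ P(T > t) = e^{-\alpha t}\ , \qquad P(T > t + s \mid T > t) = \frac{e^{-\alpha (t+s)}}{e^{-\alpha t}} = e^{-\alpha s} = P(T > s)\ . \]

That is, a channel that has already waited a time \(t\) without opening is statistically indistinguishable from one that has just entered the closed state.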

Exact methods for simulating memoryless ion channels

Brute-force methods

Perhaps the conceptually simplest way to model a population of ion channels is to step forward in time by a small time step dt and to calculate the probability that each modeled ion channel flips from its current state to another state during that interval. For example, consider a population of 1000 channels, each of which can be described by the simple rate scheme:

\[\tag{1} Closed \ \underset{\beta}{\overset{\alpha}{\rightleftharpoons}} \ Open \]


In this formula, \(\alpha\) and \(\beta\) are the forward and backward rate constants, respectively, with units of 1/ms.

Assuming an initial distribution of \(N_o\) open channels and \(N_c = 1000 - N_o\) closed channels, the brute-force method entails drawing 1000 independent pseudo-random numbers uniformly distributed in the range [0,1). For each of the \(N_o\) open channels, the algorithm switches the state to closed if the random number \(r < \beta dt\ ,\) where dt is the time step. For each of the \(N_c\) closed channels, the state is flipped if \(r < \alpha dt\ .\) For this problem, the value of dt must be kept small enough that the transition probabilities for each channel are \(\ll 1\ ;\) only in the limit as \(dt \rightarrow 0\) is this first-order approximation of the transition probability accurate.

The brute-force method can be tied straightforwardly to membrane equations of Hodgkin-Huxley form by solving the HH equations and updating the stochastic conductances, membrane potential, and voltage-dependent transition rate constants at each time step \(dt\ .\) The method generalizes easily to more complex rate schemes. However, it has the severe disadvantage that it runs extremely slowly when the number of simulated ion channels is large (Mino et al. 2002).
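
As an illustration, the following MATLAB sketch implements the brute-force method for the two-state scheme of Eq. (1) at a fixed membrane potential (i.e., with constant rates). The values of \(\alpha\ ,\) \(\beta\ ,\) dt, and the channel count are illustrative choices, not those used for the article's figures.

  % Brute-force simulation of N two-state channels (Eq. 1) at constant rates.
  % Illustrative parameter values; see the article's MATLAB code for the full model.
  alpha = 0.5;                % opening rate (1/ms)
  beta  = 0.1;                % closing rate (1/ms)
  N     = 1000;               % number of channels
  dt    = 0.01;               % time step (ms); alpha*dt and beta*dt must be << 1
  nsteps = 10000;             % 100 ms of simulated time
  state  = false(N,1);        % false = closed, true = open
  Nopen  = zeros(nsteps,1);   % number of open channels at each step
  for j = 1:nsteps
      r = rand(N,1);                        % one uniform deviate per channel
      opening = ~state & (r < alpha*dt);    % closed channels that open
      closing =  state & (r < beta*dt);     % open channels that close
      state(opening) = true;
      state(closing) = false;
      Nopen(j) = sum(state);
  end
  plot((1:nsteps)*dt, Nopen);
  xlabel('time (ms)'); ylabel('number of open channels');

Note that the loop draws \(N\) pseudo-random numbers per time step, which is the cost that motivates the Gillespie method described next.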

The Gillespie method

The brute-force method is wasteful, in that it requires the evaluation of one pseudo-random number per time step for each channel. A far better algorithm, developed by Gillespie (1977) and applied to neural channel noise by Skaugen and Walløe (1979) and Chow and White (1996), takes advantage of the fact that populations of ion channels can, to a first approximation, be considered memoryless, independent, and identically distributed. Under these assumptions, the probability density function describing the time of the next transition (opening of a closed channel or closing of an open one) is simple:

\[\tag{2} f(t) = \lambda e^{-\lambda t}\]


where \(\lambda = N_c \alpha + N_o \beta\) is the effective rate constant of the next transition. The first step in the Gillespie algorithm is to choose the time of the next state transition from the distribution \(f(t)\ .\) To do this, one first picks a pseudo-random number \(r_1\) from the uniform distribution on [0,1), then determines the time of the next state transition from the equation:

\[\tag{3} dt = \frac{-\ln(r_{1})}{\lambda}\]


Once the time step \(dt\) has been determined, one can determine which type of transition occurred (an opening or a closing in this case) by choosing an additional pseudo-random number \(r_2\) from [0,1). If \(\lambda r_{2} < N_{c}\alpha\ ,\) choose an opening; otherwise choose a closing. The rationale behind this rule is that \(N_{c} \alpha / \lambda\) represents the probability of a channel opening, whereas \(N_{o} \beta / \lambda\) represents the probability of a channel closing, conditioned on the fact that an opening or closing is known to have occurred.
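
The following MATLAB sketch implements these two steps (Eqs. (2)-(3) and the choice of transition) for the two-state scheme at constant rates; the parameter values are illustrative, not those behind Figure 1.

  % Gillespie simulation of the two-state scheme (Eq. 1) at constant rates.
  % Illustrative parameter values, not those used for the article's figures.
  alpha = 0.5; beta = 0.1;    % opening and closing rates (1/ms)
  N = 1000;                   % total number of channels
  Nopen = 0;                  % all channels start closed
  t = 0; tmax = 100;          % current and final time (ms)
  T = t; Y = Nopen;           % recorded trajectory
  while t < tmax
      lambda = (N - Nopen)*alpha + Nopen*beta;  % effective rate, Eq. (2)
      t = t - log(rand)/lambda;                 % time of next transition, Eq. (3)
      if lambda*rand < (N - Nopen)*alpha        % decide which transition occurred
          Nopen = Nopen + 1;                    % a closed channel opened
      else
          Nopen = Nopen - 1;                    % an open channel closed
      end
      T(end+1) = t;                             % append to the record
      Y(end+1) = Nopen;
  end
  stairs(T, Y); xlabel('time (ms)'); ylabel('number of open channels');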

Like the brute-force method, the Gillespie method is easily coupled to Hodgkin-Huxley-style differential equations. In Figure 2 and the accompanying MATLAB code, we solve a stochastic version of the Morris-Lecar (1981) model:

\[\tag{4} \frac{dv}{dt} = \frac{1}{C} \left( I_{app}(t) - g_{Ca}\, m_{\infty}(v)\, (v - v_{Ca}) - g_{K_{stoch}}\, (v - v_{K}) - g_{leak}\, (v - v_{leak}) \right) \]


Figure 2: Simulated current-clamp responses of a stochastic Morris-Lecar model. Simulations were performed using the Gillespie method (see MATLAB source code). The model was a modification of the deterministic Morris-Lecar model, in which the recovery variable w is interpreted as a time-dependent probability of opening of potassium channels. From top to bottom, traces indicate current-clamp responses for suprathreshold current (turned on at t = 100 ms) with N = 1, 10, 100, and 1000 K+ channels, followed by the deterministic case (black). The size of the membrane simulated was scaled to give the same density of K+ channels for each simulation. Even with 1000 K+ channels, the response of the stochastic model differs considerably from that of the deterministic model.

In this model, \(v\) is membrane potential, \(C\) is capacitance per unit area, \(g_{Ca}\) is the calcium conductance per unit area, \(m_{\infty}\) is the activation function of the calcium conductance, \(v_{Ca}\) and \(v_K\) are the Nernst potentials of calcium and potassium, \(g_{leak}\) is the leak conductance, and \(v_{leak}\) is the reversal potential of the leak. All of these factors are the same as in the deterministic model. The important difference is that the potassium conductance, \(g_{K_{stoch}}\ ,\) is determined from a stochastic simulation rather than from an ordinary differential equation. See our MATLAB code for details.
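
In outline, and independently of the article's own code, the coupling can be sketched as follows. The Morris-Lecar parameter values below are a standard illustrative set (not necessarily those behind Figure 2), and the rates are obtained from the deterministic model's \(w_{\infty}(v)\) and \(\tau_w(v)\) as \(\alpha(v) = w_{\infty}(v)/\tau_w(v)\) and \(\beta(v) = (1 - w_{\infty}(v))/\tau_w(v)\ ,\) one natural reading of "w as an opening probability". A cap on the step size keeps the voltage, and hence the rates, from changing much between transitions.

  % Sketch of a stochastic Morris-Lecar model with Gillespie K+ gating (Eq. 4).
  % Parameters are a standard illustrative Morris-Lecar set, not the article's.
  C = 20; gCa = 4.4; gK = 8; gL = 2;            % uF/cm^2 and mS/cm^2
  vCa = 120; vK = -84; vL = -60; Iapp = 90;     % mV and uA/cm^2
  v1 = -1.2; v2 = 18; v3 = 2; v4 = 30; phi = 0.04;
  NK = 100;                                     % number of K+ channels
  dtmax = 0.1;                                  % cap on the step size (ms)
  minf = @(v) 0.5*(1 + tanh((v - v1)/v2));
  winf = @(v) 0.5*(1 + tanh((v - v3)/v4));
  tauw = @(v) 1./(phi*cosh((v - v3)/(2*v4)));
  v = -60; Nopen = 0; t = 0; tmax = 500;
  T = t; V = v;
  while t < tmax
      a = winf(v)/tauw(v);                      % opening rate alpha(v)
      b = (1 - winf(v))/tauw(v);                % closing rate beta(v)
      lambda = (NK - Nopen)*a + Nopen*b;
      dt = -log(rand)/lambda;                   % Gillespie step, Eq. (3)
      makeTransition = (dt <= dtmax);
      dt = min(dt, dtmax);                      % force a voltage update anyway
      gKs = gK*Nopen/NK;                        % stochastic K+ conductance
      dv = (Iapp - gCa*minf(v)*(v - vCa) - gKs*(v - vK) - gL*(v - vL))/C;
      v = v + dt*dv;                            % Euler step for Eq. (4)
      if makeTransition
          if lambda*rand < (NK - Nopen)*a
              Nopen = Nopen + 1;                % a closed K+ channel opened
          else
              Nopen = Nopen - 1;                % an open K+ channel closed
          end
      end
      t = t + dt;
      T(end+1) = t; V(end+1) = v;
  end
  plot(T, V); xlabel('time (ms)'); ylabel('membrane potential (mV)');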

The computational advantages of the Gillespie method over brute force are enormous, because only two pseudo-random numbers are required per transition, rather than \(N\) per time step. The Gillespie algorithm automatically chooses the appropriate time step, freeing the researcher from this task. The accuracy of the Gillespie algorithm is excellent. Essentially the only assumption underlying the algorithm is that membrane potential (and thus the transition rates) does not change much during the time interval \(dt\ .\) For large numbers of channels, this condition is met automatically. For a very small number of channels, the drawn intervals can become long, so one can impose a maximum step size (equivalently, a minimum update rate for the voltage) to ensure accuracy (Skaugen and Walløe, 1979). We include this feature in the attached example.

Approximate methods for simulating memoryless ion channels

The Gillespie algorithm works very well under most circumstances, but for large numbers of channels \(N\ ,\) the average time step is proportional to \(1/N\ .\) Thus, for \(N \gg 1\ ,\) the time step can be much smaller than the effective membrane time constant, leading to inefficient and time-consuming simulations.
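
This scaling follows directly from Eq. (2): the mean of the exponential distribution of waiting times is

\[ \langle dt \rangle = \frac{1}{\lambda} = \frac{1}{N_c \alpha + N_o \beta}\ , \]

which shrinks in inverse proportion to the total number of channels \(N = N_c + N_o\ .\)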

To increase the efficiency of stochastic simulation of ion channels, some researchers have approximated the effects of channel noise by adding a voltage-dependent noise term to the HH-style gating equation associated with the modeled channels to create a Langevin equation. For the simple two-state channel model we consider here, the modified gating equation would be:

\[\tag{5} \frac{dm}{dt} = \frac{1}{\tau_{m}}(m_{\infty} - m + \zeta)\]


where \(\tau_{m}\) and \(m_{\infty}\) are as usual for HH-style equations and the term \(\zeta\) represents Gaussian noise with a mean value of zero and variance appropriate for equilibrium conditions at the current value of membrane potential. For a population of N channels and a time step dt, \(\zeta\) can be implemented by adding the term

\[\tag{6} \sqrt{\frac{2\, dt\, (\alpha (1 - m) + \beta m) (-\ln(r_{1}))} {N}} \cos(2 \pi r_{2})\]


to the gating equation at each time step (Fox 1997). In this equation, \(r_1\) and \(r_2\) are again pseudo-random numbers drawn from a uniform distribution over [0,1).
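
The following MATLAB sketch applies this prescription at a fixed membrane potential (constant \(\alpha\) and \(\beta\)): a forward-Euler step of the deterministic part of the gating equation, plus the noise term of Eq. (6) added at each time step. The parameter values are illustrative, and the final clipping of m to [0,1] is a common practical safeguard rather than part of Eq. (6).

  % Langevin (Fox 1997) update of the gating variable m for N two-state channels,
  % at a fixed membrane potential. Illustrative parameter values.
  alpha = 0.5; beta = 0.1;        % rates (1/ms) at the clamped potential
  N = 1000; dt = 0.01;            % channel count and time step (ms)
  nsteps = 10000;                 % 100 ms of simulated time
  minf = alpha/(alpha + beta); taum = 1/(alpha + beta);
  m = minf;                       % start at the equilibrium open fraction
  M = zeros(nsteps,1);
  for j = 1:nsteps
      m = m + dt*(minf - m)/taum;               % deterministic Euler step
      r1 = rand; r2 = rand;                     % two uniform deviates
      m = m + sqrt(2*dt*(alpha*(1-m) + beta*m)*(-log(r1))/N)*cos(2*pi*r2);  % Eq. (6)
      m = min(max(m, 0), 1);                    % keep the open fraction in [0,1]
      M(j) = m;
  end
  plot((1:nsteps)*dt, M); xlabel('time (ms)'); ylabel('open fraction m');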

The Langevin method has the advantage that one can take larger time steps than are possible with the Gillespie algorithm. However, the Langevin method is strictly valid only when the noise source is additive (i.e., not dependent on the value of the dependent variable) or when fluctuations around an equilibrium point are small (Van Kampen 1992). For many neuronal simulations of channel noise, neither of these conditions holds. Empirically, we have found that Langevin methods lead to inaccurate estimates of spike jitter, spike latency, and relative spread (a measure of variability in threshold; Mino et al. 2002; White et al. 2007).

Are ion channels memoryless?

The Gillespie and Langevin algorithms discussed above gain enormously in efficiency by modeling the population rather than each individual channel. This simplification is made possible by the assumption that ion channels are memoryless, in that transition probabilities are independent of the amount of time that each channel has been in a given state.

The question of whether ion channels “remember” how long they have been in a given state has been debated for the last 20 years (see Jones 2006 for a recent editorial review). In so-called fractal models, the most common class of model with memory, transition rates depend on the amount of time that the channel has spent in a given state in such a way as to yield a power-law distribution of dwell times (Lowen et al. 1999). Much of the debate over the years has focused on the question of whether fractal or Markov models fit the data better (McManus et al. 1988; Liebovitch et al. 2001). Although McManus et al. (1988) showed that Markov models fit their data better than fractal models do, results from both sodium (Toib et al. 1998) and T-type calcium (Uebachs et al. 2006) channels suggest that the rate of recovery from inactivation depends on the duration of prior depolarizations in a manner consistent with power-law behavior. Complete resolution of this argument will be very difficult, because any data set of practical length and temporal resolution can in principle be fit by either class of model, simply by adding states (in the Markov case) or by designing more elaborate rules of time-dependent transition rates (in the fractal case). It can at least be agreed that the correlations in dwell times of many voltage-gated channels are substantially more complex than predicted by the two-state model in Eq. (1).

Methods for simulating ion channels with memory

Pseudo-exact methods

Conceptually, the best method for dealing with the dependence of rate constants on history (e.g., dwell time) is to modify the brute-force method to account for the appropriate historical variables (Lowen et al. 1999). The downside of this approach is that such algorithms can be exceedingly slow to run, requiring even more operations per time step than the traditional brute-force method.

Approximate methods

Dwell-time dependence of rate constants can be approximated without resorting to brute-force methods. For example, Lowen et al. (1999) constructed a Markov-process description that approximates fractal distributions of dwell times:

\[\tag{7} C_0 \ \underset{\beta / k^{n}}{\overset{\alpha / k^{n}}{\rightleftharpoons}} \ C_1 \ \underset{\beta / k^{n-1}}{\overset{\alpha / k^{n-1}}{\rightleftharpoons}} \ \cdots \ \underset{\beta / k}{\overset{\alpha / k}{\rightleftharpoons}} \ C_n \ \underset{\beta}{\overset{\alpha}{\rightleftharpoons}} \ Open \]


For \(k > 1\ ,\) as the state of the channel moves by chance to the left (toward the state \(C_0\)), the rate constants become progressively slower. Thus, the average dwell time in states \(C_0\) or \(C_1\) is much longer than the dwell time in state \(C_n\) or \(C_{n-1}\ .\) In the limit as \(n \rightarrow \infty\ ,\) this model converges to power-law distributions of dwell times. In practice, we found that 10 closed states yielded fractal behavior over 6 decades of time scales (Lowen et al. 1999). Because this scheme obeys the Markov assumption of memorylessness, we were able to simulate the behavior of such channels using the Gillespie method, tracking only the number of channels in each of the 11 possible states.
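
A MATLAB sketch of this approach is given below: it builds the rate constants of scheme (7) and applies the Gillespie method to the vector of state occupancies. The values of \(\alpha\ ,\) \(\beta\ ,\) \(k\ ,\) and the channel count are illustrative and are not those used by Lowen et al. (1999).

  % Gillespie simulation of the multistate scheme of Eq. (7), which approximates
  % fractal (power-law) dwell-time distributions with a chain of Markov states.
  % Illustrative values of alpha, beta, k, and N, not those of Lowen et al. (1999).
  alpha = 0.5; beta = 0.1; k = 4;       % base rates (1/ms) and scaling factor
  n = 9;                                % closed states C_0..C_n plus Open: 11 states
  N = 50; tmax = 500;                   % channel count and simulated time (ms)
  fwd = [alpha./k.^(n:-1:1), alpha];    % rates for C_0->C_1, ..., C_n->Open
  bwd = [beta./k.^(n:-1:1), beta];      % rates for the corresponding reverse steps
  x = zeros(1, n+2); x(1) = N;          % state occupancies; all channels start in C_0
  t = 0; T = t; Popen = x(end)/N;
  while t < tmax
      af = fwd.*x(1:end-1);             % propensities of forward transitions
      ab = bwd.*x(2:end);               % propensities of backward transitions
      a = [af, ab]; atot = sum(a);
      t = t - log(rand)/atot;           % time of the next transition
      j = find(cumsum(a) > atot*rand, 1);   % which transition occurred
      if j <= n+1                       % forward: state j -> state j+1
          x(j) = x(j) - 1; x(j+1) = x(j+1) + 1;
      else                              % backward: state i+1 -> state i
          i = j - (n+1);
          x(i+1) = x(i+1) - 1; x(i) = x(i) + 1;
      end
      T(end+1) = t; Popen(end+1) = x(end)/N;
  end
  stairs(T, Popen); xlabel('time (ms)'); ylabel('open fraction');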

Is channel noise important?

As we have shown, the relative magnitude of channel noise is inversely proportional to the square root of the total number of channels in the electrical region of interest. For the original Morris-Lecar model (Morris and Lecar 1981), which described the giant muscle fiber of the barnacle, comparison of experimental and modeling results leads to an estimate of over 100,000 potassium channels. In this case, channel noise is unlikely to play an important role. In other in vitro preparations, the effects of channel noise can be significant. For example, using pharmacological and electrophysiological techniques in brain slices of entorhinal cortex, we have estimated the number of persistent sodium channels to be less than 5,000. Furthermore, using dynamic clamp techniques, we have shown that stochastic flicker of these persistent sodium channels is crucial for subthreshold oscillations and phase locking to weak periodic stimuli in entorhinal spiny stellate cells (Dorval and White 2005).

The situation in vivo could be different. In this circumstance, intracellular recordings indicate that neurons are flooded by synaptic inputs that drive them into a so-called high-conductance state, which can exhibit profoundly different electrophysiological properties (Destexhe et al. 2003). The effects of channel noise in the high-conductance state have not been examined thoroughly. One might imagine that synaptic inputs would "shunt, thus blunt" the effects of noise from stochastic open-closed transitions of voltage-gated channels. On the other hand, many of the crucial events in neuronal processing occur in restricted domains like dendritic spines, or in areas like the axon hillock with limited synaptic input. Channel noise seems likely to be a major factor in fine axonal collaterals, and may in fact be a major factor limiting the degree to which these collaterals can be miniaturized without losing fidelity (Faisal et al. 2005). The effects of channel noise under increasingly realistic conditions remain an area of great interest in computational neuroscience.

References

  • Dorval AD, Jr., White JA (2005) Channel noise is essential for perithreshold oscillations in entorhinal stellate neurons. J Neurosci 25:10025-10028. doi:10.1523/JNEUROSCI.3557-05.2005.
  • Faisal AA, White JA, Laughlin SB (2005) Ion-channel noise places limits on the miniaturization of the brain's wiring. Curr Biol 15:1143-1149. doi:10.1016/j.cub.2005.05.056.
  • Gillespie DT (1977) Exact stochastic simulation of coupled chemical reactions. J Phys Chem 81:2340-2361. doi:10.1021/j100540a008.
  • Hodgkin AL, Huxley AF (1952) A quantitative description of membrane current and its application to conduction and excitation in nerve. J Physiol 117:500-544.
  • Liebovitch LS, Scheurle D, Rusek M, Zochowski M (2001) Fractal methods to analyze ion channel kinetics. Methods 24:359-375. doi:10.1006/meth.2001.1206.
  • Lowen SB, Liebovitch LS, White JA (1999) Fractal ion-channel behavior generates fractal firing patterns in neuronal models. Phys Rev E Stat Phys Plasmas Fluids Relat Interdiscip Topics 59:5970-5980. doi:10.1103/PhysRevE.59.5970.
  • McManus OB, Weiss DS, Spivak CE, Blatz AL, Magleby KL (1988) Fractal models are inadequate for the kinetics of four different ion channels. Biophys J 54:859-870. doi:10.1016/S0006-3495(88)83022-4.
  • Mino H, Rubinstein JT, White JA (2002) Comparison of algorithms for the simulation of action potentials with stochastic sodium channels. Ann Biomed Eng 30:578-587. doi:10.1114/1.1475343.
  • Skaugen E, Walløe L (1979) Firing behavior in a stochastic nerve membrane model based upon the Hodgkin-Huxley equations. Acta Physiologica Scandinavica 107:343-363.
  • Toib A, Lyakhov V, Marom S (1998) Interaction between duration of activity and time course of recovery from slow inactivation in mammalian brain Na+ channels. J Neurosci 18:1893-1903.
  • Uebachs M, Schaub C, Perez-Reyes E, Beck H (2006) T-type Ca2+ channels encode prior neuronal activity as modulated recovery rates. J Physiol 571:519-536. doi:10.1113/jphysiol.2005.103614.
  • Van Kampen NG (1992) Stochastic processes in physics and chemistry. Amsterdam: Elsevier.
  • Weiss TF (1996) Cellular biophysics. Cambridge, Mass.: MIT Press.
  • White JA, Rubinstein JT, Mino H (2007) Response: Implementation issues in approximate methods for stochastic Hodgkin-Huxley models. Ann Biomed Eng 35:319. doi:10.1007/s10439-006-9214-5.

Recommended Reading

  • Hille B (2001) Ion channels of excitable membranes, 3rd edition. Sunderland, MA: Sinauer Associates, Inc.
  • Koch C (1999) Biophysics of computation. New York: Oxford University Press.
  • Van Kampen NG (1992) Stochastic processes in physics and chemistry. Amsterdam: Elsevier.
  • Weiss TF (1996) Cellular biophysics. Cambridge, Mass.: MIT Press.

See also

Dynamic Clamp, Hodgkin-Huxley Model, Morris-Lecar Model, Neuronal Noise, Patch Clamp, Stochastic Dynamical Systems
