Neuronal Cable Theory

From Scholarpedia
Ernst Niebur (2008), Scholarpedia, 3(5):2674. doi:10.4249/scholarpedia.2674

Curator: Ernst Niebur

Neuronal cable theory is a set of assumptions and results relating to the propagation and interaction of electrical signals in spatially extended nerve cells.

Motivation

Many neurons have a complex geometry, a large spatial extent, or both. An example of the former is the dendritic tree of cerebellar Purkinje cells, whose complex arborizations accommodate hundreds of thousands of synapses. An example of the latter are motor neurons that convey control signals from the central nervous system to distal muscles. The linear dimensions of their axons can reach a considerable fraction of the size of the whole animal, which frequently makes them the largest cells of the organism, certainly when measured by linear extent. For instance, spinal motor neurons of the giraffe reach a length of several meters!

The spatial extent of neurons provides both opportunity and difficulty. As an example of the former, a complex dendritic tree allows a neuron to receive a large number of synaptic inputs (hundreds of thousands in the case of the Purkinje cell!). Furthermore, the inputs can interact in the dendritic tree in highly nonlinear ways that go well beyond simple summation, thus allowing dendritic computations.

On the other hand, the fact that synaptic inputs are collected far away from the soma leads to the inherent difficulty that, when they arrive at the soma, they will be filtered and attenuated. That is, the current that is seen at the soma will be very different (usually much smaller and of a different shape) from the current that is injected at the site of the synapse. This is seen in Figure 1, which shows the voltage as a function of time in response to a very fast (in fact, instantaneous) current injection. Very close to the current injection (red trace; distance x=0.3) the voltage rises strongly and rapidly; further away the rise is smaller and delayed (blue trace; x=1), and even more so at larger distances (green trace; x=2).

Figure 1: Voltage at three different distances x from the location of current injection: red, x=0.3; blue, x=1; green, x=2. Time and distance are in units of the characteristic time \(\tau\) and characteristic length \(\lambda\) of the neurite, respectively (see text).
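The curves in Figure 1 follow from the impulse response of the passive cable equation derived below: for an infinite cable, with distance measured in units of \(\lambda\) and time in units of \(\tau\ ,\) the voltage after an instantaneous charge injection at \(x=0\) is proportional to \(t^{-1/2}\exp(-x^2/(4t)-t)\ .\) The Python snippet below is a minimal sketch that reproduces the qualitative behaviour of the figure; the three distances are taken from the caption, while the infinite-cable assumption and the arbitrary voltage scale are simplifications made here.

```python
import numpy as np
import matplotlib.pyplot as plt

def cable_impulse_response(x, t):
    """Voltage (arbitrary units) at distance x (in units of lambda) and time t
    (in units of tau) after an instantaneous current injection at x = 0, t = 0,
    assuming an infinite passive cable."""
    return np.exp(-x**2 / (4.0 * t) - t) / np.sqrt(4.0 * np.pi * t)

t = np.linspace(1e-3, 5.0, 500)      # avoid t = 0, where the response is singular
for x, color in [(0.3, "red"), (1.0, "blue"), (2.0, "green")]:
    plt.plot(t, cable_impulse_response(x, t), color=color, label=f"x = {x}")
plt.xlabel("time (units of tau)")
plt.ylabel("voltage (arbitrary units)")
plt.legend()
plt.show()
```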

The most important implication of cable theory is that because of a neuron's cable properties, distal synapses and dendrites are out of reach of traditional electrophysiological studies using electrodes at the soma. One reason why we know so little about how neurons integrate their distal synaptic input is that it is extremely difficult to measure, or control, distal parts of the cell. Some progress along these lines has been made with modern optical methods (e.g. 2-photon microscopy), but even a complete descriptive characterization of a cell's behavior will not provide the thorough insights that result from a quantitative theory like cable theory.

The main purpose of cable theory is to understand this process, i.e. how electrical signals from different synapses are combined in the system of branching tubes of different diameters and membrane properties that forms the dendritic tree of a cell. We will assume in this entry that a voltage difference exists between the interior of the cell and its surround. A simple introduction to how such differences arise and are maintained in a biological cell can be found in the article Electrical properties of cell membranes.

Neurites as core conductors

We will disregard the complexities of the whole living cell and consider it as a body of cytoplasm with a membrane around it, or, in other words, as a little bag filled with salt water and perforated by some pores. One of the crucial assumptions of cable theory is to go one step further and to also remove a good deal of the morphological complexity of a neuron (there will be enough left, though!). Essentially, the idea is the following: if we have a long, thin neurite (also called a "neural process," i.e. a part of a dendrite or axon), the voltage will vary much more along the long axis of the neural process than perpendicular to it. So, we might just as well neglect this small perpendicular variation and only consider the variation along the long axis. This has the important advantage that we can simplify our model from three dimensions to one! This is the single most important assumption of cable theory.

Let us consider the currents and voltages in a neural process. By our simplification, the only spatial dimension that currents and voltages depend on is the long axis of the neurite, to which we assign the spatial coordinate \(x\ .\) Let us subdivide the process into little pieces of length \(\Delta x\ ,\) small enough so that the voltage is approximately constant everywhere within each such piece. The cell membrane is an insulator and both the inside and the outside of the cell are reasonably good conductors, therefore the membrane can be considered a capacitor (this is discussed in more detail in Electrical properties of cell membranes). Let the capacitance per unit membrane area be \(\hat{C}\ .\) The membrane is, however, not a perfect insulator, therefore currents can flow across it or, in other words, it has a finite conductance. Let the conductance per unit membrane area be \(\hat{g_L}\) and let \(V_L\) be the leakage (or resting) potential of the cell. A piece of neurite of length \(\Delta x\) and radius \(r\) has a membrane surface area of \(2\pi r \Delta x\ ,\) so the capacitive and leakage currents across its membrane are, respectively,

\[\tag{1} I_C= 2 \pi r \hat{C} \frac{dV}{dt} \Delta x \\ I_L= 2 \pi r \hat{g_L} (V-V_{L}) \Delta x \]

A note on units: if the radius of the neurite is \(r\ ,\) its circumference is \(2\pi r\) and the surface of a piece of neurite of length \(\Delta x\) is \(2\pi r \Delta x\ .\) As a consequence, the capacitance of this piece is \(2\pi r \Delta x \hat{C}\ .\) Since the specific capacitance \(\hat{C}\) has units of \(F/m^2\ ,\) the units of \(2\pi r \Delta x \hat{C}\) are \( F\ ,\) as expected. An analogous remark applies to the transmembrane conductance per unit area, \(\hat{g_L}\ ,\) whose units are \( \Omega^{-1}m^{-2}\ ,\) so that the leakage conductance of the piece, \(2\pi r \Delta x \hat{g_L}\ ,\) has units of \( \Omega^{-1}\ .\)
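As a quick numerical check of this bookkeeping, the snippet below computes the membrane area, capacitance and leak conductance of one such piece. The parameter values are typical textbook numbers chosen here for illustration, not values given in this article.

```python
import math

# Illustrative textbook values (not from this article)
C_hat  = 1e-2      # specific membrane capacitance, F/m^2   (= 1 uF/cm^2)
gL_hat = 0.5       # specific leak conductance, S/m^2       (R_m = 20000 Ohm cm^2)
r      = 2e-6      # neurite radius, m
dx     = 10e-6     # length of the piece of neurite, m

area        = 2 * math.pi * r * dx   # membrane surface of the piece, m^2
capacitance = C_hat * area           # (F/m^2) * m^2 = F
conductance = gL_hat * area          # (S/m^2) * m^2 = S  (i.e. Ohm^-1)

print(f"membrane area    = {area:.3e} m^2")
print(f"capacitance      = {capacitance:.3e} F")
print(f"leak conductance = {conductance:.3e} S")
```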

We will now figure out how activity propagates along the neural process. Let us assume for simplicity that the specific resistance (resistivity) of the intracellular medium for currents flowing along the neurite (not across the membrane) is constant along the neurite and has a value \(\hat{\rho_i}\ .\) The resistance of a piece of length \(\Delta x\) is proportional to its length and inversely proportional to its cross section \( \pi r^2\), so the resistance is \(\hat{\rho_i} \Delta x\ /(\pi r^2).\) Note that if we look at two points along the conductor, the resistance between them increases proportionally to their distance. In contrast, the transmembrane resistance of the membrane between the two points decreases with their distance (its conductance, and also its capacitance, increases). In other words, the core conductance is in series while the transmembrane conductance is in parallel.

The current between the location \((x-\Delta x)\) and the location \(x\) is proportional to the voltage difference and the inverse resistance of the conductor, thus

\[\tag{2} I(x)=\frac{V(x-\Delta x) - V(x)}{\hat{\rho_i} \pi^{-1} r^{-2} \Delta x} \]


and the current between \(x\) and \((x+\Delta x)\) is

\[\tag{3} I(x+\Delta x)=\frac{V(x) - V(x+\Delta x)}{\hat{\rho_i} \pi^{-1} r^{-2} \Delta x} \]
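In a discretized neurite these axial currents are easy to evaluate directly from eqs. (2) and (3); a minimal sketch (resistivity, radius, grid spacing and voltages are illustrative assumptions):

```python
import numpy as np

rho_i = 1.0      # axial resistivity, Ohm m (illustrative; about 100 Ohm cm)
r     = 2e-6     # neurite radius, m
dx    = 10e-6    # distance between grid points, m

def axial_current(V_left, V_right):
    """Current flowing from the left grid point to the right one, eqs. (2)/(3):
    I = (V_left - V_right) / (rho_i * dx / (pi * r^2))."""
    R_axial = rho_i * dx / (np.pi * r**2)   # axial resistance of one segment, Ohm
    return (V_left - V_right) / R_axial     # Ampere

V = np.array([-0.060, -0.062, -0.065])      # voltages at three neighbouring points, V
print(axial_current(V[0], V[1]))            # current flowing from point 0 into point 1
print(axial_current(V[1], V[2]))            # current flowing from point 1 into point 2
```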

The conductance across a cylindrical piece of membrane with specific conductance \( \hat{g_L}\ ,\) radius \(r\) and length \( \Delta x\) is proportional to its surface area, i.e. the product of circumference and length,

\[ \hat{g_L} 2\pi r \Delta x \]

Likewise, the capacitive current is proportional to the temporal derivative of the voltage, the area of the capacitor and the specific capacitance, i.e.


\[ \frac{\partial V}{\partial t} 2\pi r \Delta x \hat{C} \]


Kirchhoff's current law applies: \[ I_C+I_L-I(x)+I(x+\Delta x)=0 \] and together with equations (1), (2) and (3), we obtain \[\tag{4} 2 \pi r \Delta x \hat{C} \frac{dV}{dt}+ 2 \pi r \Delta x \hat{g_L} (V-V_{L}) -\frac{V(x-\Delta x) - V(x)}{\hat{\rho_i}\pi^{-1} r^{-2} \Delta x}+\frac{V(x) - V(x+\Delta x)}{\hat{\rho_i} \pi^{-1} r^{-2} \Delta x} =0 \]


After dividing by \(2 \pi r \Delta x\) and taking the limit \(\Delta x \rightarrow 0\ ,\) the last two terms on the left-hand side combine into the second spatial derivative of \(V\ ,\) multiplied by \(-r/(2\hat{\rho_i})\ :\)

\[\tag{5} \hat{C} \frac{\partial V}{\partial t}+ \hat{g_L} (V-V_{L}) - \frac{r}{2\hat{\rho_i}} \frac{\partial^2 V(x)}{ {\partial x^2} }=0 \]


Conventionally, we divide by the leakage conductance \(\hat{g_L}\) and the coefficient of the first temporal derivative becomes \(\tau:=\hat{C}/\hat{g_L}\ .\) Solutions of eq. (5) decay over times on the order of \(\tau\ .\) This can be seen easily from eq. (5) by considering a spatially homogeneous solution, in which case the spatial derivative vanishes and \(V(t)\) decays exponentially towards the equilibrium value \(V_L\ ,\) with a characteristic time \(\tau\ .\) Likewise, the coefficient of the second spatial derivative after division by \(\hat{g_L}\) becomes \( \lambda^2:=r/(2\hat{g_L}\hat{\rho_i}) \) and it has the units of \(m^2\ .\) The length \(\lambda\) is the characteristic length over which solutions of eq. (5) decay in space.
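For typical membrane parameters this gives time constants of a few tens of milliseconds and length constants ranging from a few hundred micrometers to more than a millimeter. A quick numerical check, using illustrative textbook values rather than numbers from this article:

```python
import math

# Illustrative textbook values (not from this article)
C_hat  = 1e-2    # specific membrane capacitance, F/m^2 (1 uF/cm^2)
gL_hat = 0.5     # specific leak conductance, S/m^2     (R_m = 20000 Ohm cm^2)
rho_i  = 1.0     # axial resistivity, Ohm m             (100 Ohm cm)
r      = 2e-6    # neurite radius, m

tau = C_hat / gL_hat                          # membrane time constant, s
lam = math.sqrt(r / (2.0 * gL_hat * rho_i))   # space constant, m

print(f"tau    = {tau * 1e3:.1f} ms")    # 20.0 ms
print(f"lambda = {lam * 1e6:.0f} um")    # about 1400 um
```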

We can now write eq.(5) in the convenient form

\[\tag{6} \tau \frac{\partial V}{\partial t} - \lambda^2 \frac{\partial^2 V(x)}{ {\partial x^2} }=V_{L}-V \]


This is a partial differential equation closely related to the Telegrapher's Equation, which was first studied by Lord Kelvin and others in the 19th century in the context of telegraph cables. More precisely, eq. (6) is the special case of the Telegrapher's Equation with vanishing inductance. In an electrical cable or transmission line, the inductance has to be taken into account as well; in nerve cells, inductive currents can safely be neglected.

Dendritic and axonal trees; branch points; boundaries

A branch point is treated exactly as before, except that now more than two axial currents flow into the node (one from each attached branch, rather than just from \((x-\Delta x)\) and \((x+\Delta x)\)), in addition to the transmembrane leakage and capacitive currents. The initial segment of each branch (and the equation representing it) is thus coupled to the immediately adjacent segments (and their equations), as sketched below.
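A minimal sketch of the current balance at such a node, here with one parent and two daughter branches of different radii (all values are illustrative assumptions):

```python
import numpy as np

def axial_conductance(rho_i, r, dx):
    """Inverse of the axial resistance rho_i * dx / (pi * r^2) of one segment."""
    return np.pi * r**2 / (rho_i * dx)

rho_i, dx = 1.0, 10e-6                                # Ohm m, m (illustrative)
g_parent    = axial_conductance(rho_i, 2.0e-6, dx)    # parent branch, radius 2.0 um
g_daughter1 = axial_conductance(rho_i, 1.2e-6, dx)    # daughter branch, radius 1.2 um
g_daughter2 = axial_conductance(rho_i, 1.0e-6, dx)    # daughter branch, radius 1.0 um

V0, Vp, Vd1, Vd2 = -0.060, -0.058, -0.061, -0.062     # node and neighbour voltages, V

# Net axial current flowing into the branch point; it enters the same balance
# as the capacitive and leakage currents of that node (cf. eq. (4)).
I_axial = (g_parent * (Vp - V0)
           + g_daughter1 * (Vd1 - V0)
           + g_daughter2 * (Vd2 - V0))
print(I_axial)   # Ampere
```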

We also have to decide what to do at the boundaries of the neurite. Usually, we assume that no current is flowing into or out of the neurite at its ends, which means that for the coordinates \(x_{end}\) at the ends, \(\frac{\partial V(x_{end})}{ {\partial x} }=0\ .\)
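In a discretized cable this sealed-end condition is commonly implemented by mirroring the voltage of the end compartment into a fictitious "ghost" compartment, so that no axial current leaves the cable. A minimal sketch, assuming a uniform grid:

```python
import numpy as np

def discrete_laplacian(V, dx):
    """Second spatial difference of the voltage profile V with sealed
    (zero-flux) ends, implemented by mirroring the end points."""
    V_padded = np.concatenate(([V[1]], V, [V[-2]]))        # ghost points
    return (V_padded[:-2] - 2.0 * V + V_padded[2:]) / dx**2

V = np.linspace(-0.070, -0.050, 5)    # an arbitrary voltage profile, V
print(discrete_laplacian(V, dx=10e-6))
```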

For a detailed discussion of computations in dendritic trees see dendritic processing.

Electrical Synapses

Electrical synapses are usually gap junctions; we briefly review their properties here to the extent necessary for cable theory. These synapses get their name from their morphology, a close apposition of two neurons (or neural processes) with a very narrow gap in between. It has been shown that there are actually cytoplasmic bridges between the neural processes with direct exchange of ions. This immediately suggests how to model gap junctions: simply as ohmic resistors! Let us assume we have two simple neurons, labeled \(i\) and \(j\ ,\) and the voltages in them are described by equations of the type (6), where \(V\) in this equation is replaced by \(V_i\) for neuron \(i\) and by \(V_j\) for neuron \(j\ .\) If we have a gap junction between these two neurons that has a conductance \(G_{ij}\) (measured in Siemens), the current between the two neurons is \(\pm G_{ij}(V_j-V_i)\ .\) Since eq. (6) was obtained by dividing by the leakage conductance, we have to do the same for this current: the voltage difference is multiplied by the dimensionless quantity \(g_{ij} = G_{ij}/g_{L}\ ,\) where \(g_L\) is the leakage conductance of the compartment containing the junction. For instance, if the gap junction is at position \(x_g\ ,\) the equation for neuron \(i\) becomes \[\tag{7} \tau \frac{\partial V_i}{\partial t} - \lambda^2 \frac{\partial^2 V_i}{ {\partial x^2} }=V_{L}-V_i + \delta(x-x_g) g_{ij}(V_j-V_i) \]

where \(\delta(x)\) is the Dirac delta function (this function "picks out" the compartment at location \(x_g\) and makes sure that nothing is added in any of the other compartments). Likewise, the last term in eq. (7) is added (with the opposite sign) to the equation of neuron \(j\ .\) Note that if the leakage conductances \(g_L\) of the two neurons differ, the value corresponding to the correct neuron has to be used in \(g_{ij} = G_{ij}/g_{L}\ .\)
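In a compartmental implementation of eq. (7) the delta function simply selects the compartment containing \(x_g\ .\) Reducing each neuron to a single compartment (a drastic simplification made only for illustration; the coupling strength and time step are likewise assumptions) gives the following sketch of the gap-junction term:

```python
# Two single-compartment ("point") neurons coupled by a gap junction,
# integrated with a simple Euler scheme; all values are illustrative.
tau  = 20e-3      # membrane time constant, s
V_L  = -0.065     # leak (resting) potential, V
g_ij = 0.1        # gap-junction conductance in units of the leak conductance
dt   = 1e-5       # time step, s

V_i, V_j = -0.050, -0.065       # neuron i starts depolarized, neuron j at rest
for _ in range(int(0.1 / dt)):  # simulate 100 ms
    dV_i = (V_L - V_i + g_ij * (V_j - V_i)) / tau
    dV_j = (V_L - V_j + g_ij * (V_i - V_j)) / tau
    V_i += dt * dV_i
    V_j += dt * dV_j
print(V_i, V_j)   # both end up close to V_L, pulled towards each other on the way
```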

Chemical Synapses

As in the case of electrical synapses, we will only review basic properties of chemical synapses as required for understanding their role in cable theory; for a more detailed discussion of their properties see Synapses. Of two neurons interacting by a chemical synapse, one of them (called the presynaptic neuron) controls the voltage in the other one (called the postsynaptic neuron) by manipulating the opening of ion channels in the membrane of the postsynaptic neuron. It does this by ejecting small amounts of a chemical (called a neurotransmitter) close to the membrane of the postsynaptic neuron. Opening channels in the postsynaptic membrane corresponds to adding currents across the membrane. Which current flows depends on the ions that can pass through the channels opened by the neurotransmitter. If a positive current into the cell results from the opening of the synaptic channels, the voltage of the cell will be higher than it was without synaptic input; such a synapse is called excitatory. If a negative current flows into the cell, its voltage will be lower (more negative) than it was without synaptic input; such a synapse is called inhibitory.

Electrically, we can model this by adding the appropriate currents on the RHS of eq. (6). As always, the current is the product of a conductance and a voltage difference. The conductance is that of the ion channels opened by the presynaptic neuron and is therefore time-dependent (or, more precisely, dependent on the state of the presynaptic neuron). The voltage difference is the difference between the present voltage and the reversal potential of the ion species that can pass through the channel. For an excitatory synapse, the reversal potential is higher than the resting potential; for an inhibitory synapse, it is lower.

An example with one excitatory synapse at location \(x_e\) and one inhibitory synapse at location \(x_i\) results in the following cable equation \[\tag{8} \tau \frac{\partial V}{\partial t} - \lambda^2 \frac{\partial^2 V(x)}{ {\partial x^2} }=V_{L}-V + \delta(x-x_i) g_i(t)[V_i-V] + \delta(x-x_e) g_e(t)[V_e-V] \]


Obviously, \( g_e(t)\) and \( g_i(t)\) are the excitatory and inhibitory conductances, respectively (which depend on the state of the presynaptic neurons), and \(V_e\) and \(V_i\) are the corresponding reversal potentials. As for electrical synapses, \( g_e(t)\) and \( g_i(t)\) are in units of \( g_{L}\) and therefore dimensionless. We note that

\[ V_i\le V_L<V_e \]

In the case of equality (\( V_i=V_L\)), activation of the inhibitory synapse does not change the potential of a neuron at rest; this is called shunting inhibition.
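This is easy to verify in a single compartment receiving the two synaptic conductances of eq. (8): with \( V_i=V_L\ ,\) inhibition alone leaves the membrane at rest, yet it still reduces the depolarization produced by a simultaneous excitatory input. A minimal sketch (point neuron, illustrative parameter values):

```python
# Single compartment with the two synaptic terms of eq. (8); conductances are
# in units of the leak conductance. All parameter values are illustrative.
tau, V_L, V_e, V_i = 20e-3, -0.065, 0.0, -0.065   # V_i = V_L: shunting inhibition
dt = 1e-5

def steady_voltage(g_e, g_i, t_end=0.2):
    """Integrate the compartment with constant synaptic conductances and
    return the voltage reached after t_end seconds."""
    V = V_L
    for _ in range(int(t_end / dt)):
        dV = (V_L - V + g_e * (V_e - V) + g_i * (V_i - V)) / tau
        V += dt * dV
    return V

print(steady_voltage(g_e=0.0, g_i=1.0))   # inhibition alone: V stays at V_L
print(steady_voltage(g_e=0.5, g_i=0.0))   # excitation alone: V is depolarized
print(steady_voltage(g_e=0.5, g_i=1.0))   # shunting reduces the depolarization
```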

Compartmental Modeling

Cable theory consists of solving the partial differential equation (6) in different situations and using different methods (see Jack et al., 1983 for a monograph devoted essentially to this equation). In some situations analytical solutions can be found, but in general one resorts to numerical solutions; this is in particular the case when nonlinear transmembrane currents are present.

One particularly important method is to discretize space, thus replacing the partial differential equation by a coupled system of ordinary differential equations (ODEs); for a numerical solution, time is discretized as well. The spatially discretized system is nothing but eq. (4), and its numerical integration also requires discretizing the temporal derivative. (We note in passing that simply replacing \(dt\) by \(\Delta t\) does not yield an efficient solution and in many cases will not lead to any solution at all, see Stiff Systems.) This so-called compartmental modeling (the name being based on the compartmentalization of the neural process) was systematically applied to neural processes by Wilfred Rall in the early 1960s (see Rall Model) and it is the basis of essentially all simulators that take neural morphology explicitly into account, like Neuron or GENESIS.
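The sketch below is one minimal way to carry this out for the passive cable eq. (6): space is discretized into compartments and the resulting system is advanced with backward Euler, an implicit scheme that avoids the stability problems of the naive explicit update mentioned above. The numerical parameters, the cable length and the injected current are all illustrative assumptions.

```python
import numpy as np

# Passive cable, eq. (6), in dimensionless units (x in lambda, t in tau),
# sealed ends, steady current injected into the middle compartment.
N, L = 200, 10.0                 # number of compartments, cable length in lambda
dx   = L / N
dt   = 0.01                      # time step in units of tau
V_L  = 0.0                       # voltage measured relative to rest
V    = np.full(N, V_L)

# Discrete Laplacian with zero-flux (sealed-end) boundaries.
lap = np.zeros((N, N))
for k in range(N):
    lap[k, k] = -2.0
    if k > 0:
        lap[k, k - 1] = 1.0
    if k < N - 1:
        lap[k, k + 1] = 1.0
lap[0, 0] = lap[-1, -1] = -1.0   # sealed ends
lap /= dx**2

# Backward Euler: (I - dt * (lap - I)) V_new = V_old + dt * (V_L + inj)
A = np.eye(N) - dt * (lap - np.eye(N))

inj = np.zeros(N)
inj[N // 2] = 1.0 / dx           # point injection; 1/dx plays the role of the delta function

for _ in range(int(5.0 / dt)):   # integrate for 5 membrane time constants
    V = np.linalg.solve(A, V + dt * (V_L + inj))

print(V[N // 2])                      # voltage at the injection site
print(V[N // 2 + int(1.0 / dx)])      # one length constant away: noticeably smaller
```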

History

The foundations for the quantitative understanding of long-distance communication in nerve fibers were laid within a few years in the middle of the 19th century. In 1850, Hermann von Helmholtz showed experimentally that the signal velocity in nerve fibers is not infinite (as was frequently assumed at the time) and measured it as 27 m/s in the sciatic nerve of the frog. Although not seen as related at the time, the necessary theory was developed only a few years later in a publication (Kelvin 1855) by William Thomson (Lord Kelvin) that described the propagation of electrical signals in long cables, a problem that had become of interest because of the rapid development of long-distance communication by telegraph (first telegraphic transmission between Washington and Baltimore in 1844, first transatlantic cable in 1858). The biophysics underlying neuronal excitability was elucidated over the following century, culminating in a spectacular series of five papers published in 1952 in the Journal of Physiology by Hodgkin and Huxley (one with Bernhard Katz). After spatial voltage and current variations had first been suppressed using the "space clamp" (inserting a high-conductance wire lengthwise into the squid giant axon) in order to understand the biophysics of the excitable membrane, the local currents could then be integrated into the core conductor model, and traveling excitations, like the electrotonic decay of potentials in dendrites or the lossless propagation of action potentials in axons, were explained in terms of cable theory.


References

  • Helmholtz, H. 1850. Messungen über den zeitlichen Verlauf der Zuckung animalischer Muskeln und die Fortpflanzungsgeschwindigkeit der Reizung in den Nerven. Archiv für Anatomie, Physiologie und wissenschaftliche Medicin: 276.
  • Hodgkin, A. L., Huxley, A. F. and Katz, B. 1952. Measurement of current-voltage relations in the membrane of the giant axon of Loligo. J. Physiol. 116: 424.
  • Hodgkin, A. L. and Huxley, A. F. 1952a. Currents carried by sodium and potassium ions through the membrane of the giant axon of Loligo. J. Physiol. 116: 449.
  • Hodgkin, A. L. and Huxley, A. F. 1952b. The components of membrane conductance in the giant axon of Loligo. J. Physiol. 116: 473.
  • Hodgkin, A. L. and Huxley, A. F. 1952c. The dual effect of membrane potential on sodium conductance in the giant axon of Loligo. J. Physiol. 116: 497.
  • Hodgkin, A. L. and Huxley, A. F. 1952d. A quantitative description of membrane current and its application to conduction and excitation in nerve. J. Physiol. 117: 500.
  • Kelvin, Lord. 1855. On the theory of the electric telegraph. Proc. Roy. Soc. (London) 7: 382.
  • Scott, A. C. 1975. The electrophysics of a nerve fiber. Reviews of Modern Physics 47: 487.

Recommended reading

  • Hobbie, R. K. 1978. Intermediate physics for medicine and biology. John Wiley, New York.
  • Jack, J.J.B., Noble, D. and Tsien, R.W. 1983. Electric Current Flow in Excitable Cells. Oxford University Press, Oxford UK
  • Koch, C. 1999. Biophysics of Computation. Oxford University Press, New York.

See also

Rall model

External Links

Author's home page
