Short-term synaptic plasticity
Misha Tsodyks and Si Wu (2013), Scholarpedia, 8(10):3153. doi:10.4249/scholarpedia.3153
Short-term plasticity (STP) (Stevens 95, Markram 96, Abbott 97, Zucker 02, Abbott 04), also called dynamical synapses, refers to a phenomenon in which synaptic efficacy changes over time in a way that reflects the history of presynaptic activity. Two types of STP, with opposite effects on synaptic efficacy, have been observed in experiments. They are known as Short-Term Depression (STD) and Short-Term Facilitation (STF). STD is caused by depletion of neurotransmitters consumed during the synaptic signaling process at the axon terminal of a pre-synaptic neuron, whereas STF is caused by influx of calcium into the axon terminal after spike generation, which increases the release probability of neurotransmitters. STP has been found in various cortical regions and exhibits great diversity in properties (Markram 98, Dittman 00, Wang 06). Synapses in different cortical areas can have varied forms of plasticity, being either STD-dominated, STF-dominated, or showing a mixture of both forms.
Compared with long-term plasticity (Bi 01), which is hypothesized as the neural substrate for experience-dependent modification of neural circuit, STP has a shorter time scale, typically on the order of hundreds to thousands of milliseconds. The modification it induces to synaptic efficacy is temporary. Without continued presynaptic activity, the synaptic efficacy will quickly return to its baseline level.
Although STP appears to be an unavoidable consequence of synaptic physiology, theoretical studies suggest that its role in brain functions can be profound (see, e.g., publications in (Research Topic) and the references therein). From a computational point of view, the time scale of STP lies between fast neural signaling (on the order of milliseconds) and experience-induced learning (on the order of minutes or more). This is the time scale of many processes that occur in daily life, for example motor control, speech recognition and working memory. It is therefore plausible that STP serves as a neural substrate for processing temporal information on these time scales. STP implies that the response of a post-synaptic neuron depends on the history of presynaptic activity, creating information that in principle can be extracted and used. In a large network, STP can greatly enrich the network's dynamical behaviors, endowing the neural system with information processing capacities that would be difficult to implement with static connections. These possibilities have led to significant interest in the computational functions of STP within the field of Computational Neuroscience.
Phenomenological model
The biophysical processes underlying STP are complex. Studies of the computational roles of STP have relied on the creation of simplified phenomenological models (Abbott 97, Markram 98, Tsodyks 98).
In the model proposed by Tsodyks and Markram (Tsodyks 98), the STD effect is modeled by a normalized variable \(x\) (\(0\leq x \leq1\)), denoting the fraction of resources that remain available after neurotransmitter depletion. The STF effect is modeled by a utilization parameter \(u\), representing the fraction of available resources ready for use (release probability). Following a spike, (i) \(u\) increases due to spike-induced calcium influx to the presynaptic terminal, after which (ii) a fraction \(u\) of available resources is consumed to produce the post-synaptic current. Between spikes, \(u\) decays back to zero with time constant \(\tau_f\) and \(x\) recovers to 1 with time constant \(\tau_d \). In summary, the dynamics of STP is given by
\[\begin{aligned} \frac{du}{dt} & = & -\frac{u}{\tau_f}+U(1-u^-)\delta(t-t_{sp}),\nonumber \\ \frac{dx}{dt} & = & \frac{1-x}{\tau_d}-u^+x^-\delta(t-t_{sp}), \\ \frac{dI}{dt} & = & -\frac{I}{\tau_s} + Au^+x^-\delta(t-t_{sp}), \nonumber \tag{1}\end{aligned}\]
where \(t_{sp}\) denotes the spike time and \(U\) is the increment of \(u\) produced by a spike. We denote as \(u^-, x^-\) the corresponding variables just before the arrival of the spike, and \(u^+\) refers to the moment just after the spike. From the first equation, \(u^+ = u^- + U(1-u^-)\). The synaptic current generated at the synapse by the spike arriving at \(t_{sp}\) is then given by
\[\Delta I(t_{sp}) = Au^+x^-, \tag{2}\]
where \(A\) denotes the response amplitude that would be produced by total release of all the neurotransmitter (\(u=x=1\)), called the absolute synaptic efficacy of the connection (see Fig. 1A).
The interplay between the dynamics of \(u\) and \(x\) determines whether the joint effect of \(ux\) is dominated by depression or facilitation. In the parameter regime of \(\tau_d\gg \tau_f\) and large \(U\), an initial spike incurs a large drop in \(x\) that takes a long time to recover; therefore the synapse is STD-dominated (Fig.1B). In the regime of \(\tau_f \gg \tau_d\) and small \(U\), the synaptic efficacy is increased gradually by spikes, and consequently the synapse is STF-dominated (Fig.1C). This phenomenological model successfully reproduces the kinetic dynamics of depressed and facilitated synapses observed in many cortical areas.
Figure 1. (A) The phenomenological model for STP given by Eqs.(1) and (2). (B) The post-synaptic current generated by an STD-dominated synapse. The neuronal firing rate \(R=15\)Hz. The parameters \(A=1\), \(U=0.45\), \(\tau_s=20\)ms, \(\tau_d=750\)ms, and \(\tau_f=50\)ms. (C) The dynamics of an STF-dominated synapse. The parameters \(U=0.15\), \(\tau_f=750\)ms, and \(\tau_d=50\)ms.
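To make the model concrete, Eqs. (1) and (2) can be integrated numerically. The sketch below (Python with simple Euler integration; the function name and the regular 15 Hz spike train are our illustrative choices, not taken from the original figure code) reproduces the qualitative behavior of Fig. 1: with the STD-dominated parameters of Fig. 1B the successive PSC increments \(Au^+x^-\) shrink from spike to spike, whereas with the STF-dominated parameters of Fig. 1C they grow.

```python
import numpy as np

def simulate_tm_synapse(spike_times, U, tau_f, tau_d, tau_s, A,
                        dt=0.1, T=1000.0):
    """Euler integration of the Tsodyks-Markram model, Eqs. (1)-(2).
    All times in ms. Returns traces of u, x, I and the PSC increment
    Delta I = A * u+ * x- recorded at each presynaptic spike."""
    n = int(T / dt)
    t = np.arange(n) * dt
    u = np.zeros(n); x = np.ones(n); I = np.zeros(n)
    spike_steps = {int(round(s / dt)) for s in spike_times}
    dI_at_spikes = []
    for k in range(1, n):
        # exponential relaxation between spikes
        u[k] = u[k-1] - dt * u[k-1] / tau_f
        x[k] = x[k-1] + dt * (1.0 - x[k-1]) / tau_d
        I[k] = I[k-1] - dt * I[k-1] / tau_s
        if k in spike_steps:
            u_plus = u[k] + U * (1.0 - u[k])   # facilitation jump of u
            dI = A * u_plus * x[k]             # Eq. (2): uses x just before release
            x[k] -= u_plus * x[k]              # depletion of resources
            u[k] = u_plus
            I[k] += dI
            dI_at_spikes.append(dI)
    return t, u, x, I, dI_at_spikes

# STD-dominated regime (parameters of Fig. 1B), regular 15 Hz spike train
spikes = np.arange(50.0, 1000.0, 1000.0 / 15)
_, u, x, I, dI = simulate_tm_synapse(spikes, U=0.45, tau_f=50.0,
                                     tau_d=750.0, tau_s=20.0, A=1.0)
print(dI[0] > dI[-1])   # successive PSCs shrink: depression dominates
```

Swapping in the Fig. 1C parameters (\(U=0.15\), \(\tau_f=750\) ms, \(\tau_d=50\) ms) makes the recorded increments grow instead, since \(u\) accumulates across spikes while \(x\) barely depletes.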
Effects on information transmission
Because STP modifies synaptic efficacy based on the history of presynaptic activity, it can alter neural information transmission (Abbott 97, Tsodyks 97, Fuhrmann 02, Rotman 11, Rosenbaum 12). In general, an STD-dominated synapse favors information transfer for low firing rates, since high-frequency spikes rapidly deactivate the synapse. An STF-dominated synapse, however, tends to optimize information transfer for high-frequency bursts, which increase the synaptic strength.
Firing-rate-dependent transmission via dynamic synapses can be analyzed by examining the transmission of uncorrelated Poisson spike trains from a large neuronal population with global firing rate \(R(t)\). The time evolution of the postsynaptic current \(I(t)\) can be obtained by averaging Eq. (1) over different realizations of the Poisson process corresponding to different spike trains (Tsodyks 98):
\[\begin{aligned} \frac{du}{dt} & = & -\frac{u}{\tau_f} + U(1-u^-)R(t),\nonumber \\ \frac{dx}{dt} & = & \frac{1-x}{\tau_d}-u^+xR(t), \\ I(t) &= & \tau_s Au^+xR(t), \nonumber \tag{3}\end{aligned} \]
where again \(u^+ = u^- + U(1-u^-)\) and we neglect time scales on the order of the synaptic time constant. For the stationary rate, \(R(t) \equiv R_0\), we obtain
\[\begin{aligned} u^+=u_0 & \equiv & U\frac{1+\tau_fR_0}{1+U\tau_fR_0}, \nonumber \\ x=x_0 & \equiv & \frac{1}{1+u_0\tau_d R_0},\\ I=I_0 & \equiv & \tau_s Au_0x_0 R_0, \nonumber \tag{4} \end{aligned}\]
which is shown in Fig. 2A,B. In particular, for depression-dominated synapses (\(u^+ \approx U\)), the average synaptic efficacy \(E=Au^+x\) decays inversely with the rate, and the stationary synaptic current saturates at the limiting frequency \(\lambda \sim \frac{1}{U\tau_d}\), above which dynamic synapses cannot transmit information about the stationary firing rate (Fig. 2A). On the other hand, facilitating synapses can be tuned for a particular presynaptic rate that depends on STP parameters (Fig. 2B).
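The stationary relations in Eq. (4) are straightforward to evaluate. A minimal sketch (Python; the function name and the SI-unit convention are our choices) illustrates both regimes: the depression-dominated current saturates once the rate exceeds \(\lambda \sim 1/(U\tau_d)\), while the facilitation-dominated efficacy \(E = Au_0x_0\) peaks at an intermediate rate.

```python
def stp_steady_state(R0, U, tau_f, tau_d, tau_s=0.020, A=1.0):
    """Stationary u+, x and current I of Eq. (4) for a Poisson input
    of rate R0 (Hz). Time constants in seconds."""
    u0 = U * (1.0 + tau_f * R0) / (1.0 + U * tau_f * R0)
    x0 = 1.0 / (1.0 + u0 * tau_d * R0)
    I0 = tau_s * A * u0 * x0 * R0
    return u0, x0, I0

# STD-dominated synapse (Fig. 1B parameters): I0 saturates for rates
# well above 1/(U*tau_d) ~ 3 Hz
for R in (10.0, 100.0, 1000.0):
    print(R, stp_steady_state(R, U=0.45, tau_f=0.05, tau_d=0.75)[2])

# STF-dominated synapse (Fig. 1C parameters): the efficacy u0*x0 is
# tuned to an intermediate presynaptic rate
for R in (1.0, 20.0, 200.0):
    u0, x0, _ = stp_steady_state(R, U=0.15, tau_f=0.75, tau_d=0.05)
    print(R, u0 * x0)
```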
Temporal filtering
The above analysis describes only population firing at stationary rates. Eq. (3) can also be used to derive the filtering properties of dynamic synapses when the presynaptic population firing rate changes arbitrarily with time. In Appendix A we present the corresponding calculation for depression-dominated synapses (\(u^+ \approx U\)). By considering small perturbations $R(t):=R_0 + R_1 \rho (t)$ with $R_1\ll R_0$ around the constant rate $R_0>0$, the Fourier transform of the synaptic current $I$ is approximated by
\( \begin{eqnarray} \widehat{I}(\omega) \approx I_0 \delta(\omega) + \frac{I_0 R_1}{R_0} \widehat{\chi}(\omega) \widehat{\rho}(\omega) \tag{5} \end{eqnarray} \) where \( \begin{eqnarray} \widehat{\chi}(\omega) := 1- \frac{1/x_0 -1}{1/x_0 + j\omega \tau_{d}} = \frac{1+(\tau_{d}\omega)^2x_0+j\omega\tau_{d}(1-x_0)}{1/x_0+(\tau_{d}\omega)^2 x_0} \tag{6} \end{eqnarray} \)
and $I_0$ and $x_0$ are the stationary values of $I$ and $x$, respectively [see Eq. (4) with $u_0 = U$]. The amplitude of the filter \(|\widehat{\chi}(\omega)|\) is shown in Fig. 2C, illustrating the high-pass filter properties of depressing synapses. In other words, fast changes in presynaptic firing rates are faithfully transmitted to the postsynaptic targets, while slow changes are attenuated by depression.
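The high-pass character of Eq. (6) can be checked directly: \(|\widehat{\chi}(\omega)|\) rises monotonically from \(x_0\) at \(\omega=0\) toward 1 at high frequencies. A short sketch (Python; the parameter values follow Fig. 1B with \(u^+\approx U\)):

```python
import numpy as np

def chi_hat(omega, x0, tau_d):
    """Linear-response filter of a depressing synapse, Eq. (6)."""
    return 1.0 - (1.0 / x0 - 1.0) / (1.0 / x0 + 1j * omega * tau_d)

# Fig. 1B parameters (seconds), with u+ ~ U
U, tau_d, R0 = 0.45, 0.75, 15.0
x0 = 1.0 / (1.0 + U * tau_d * R0)        # stationary resources, Eq. (4)

w = np.array([0.1, 1.0, 10.0, 100.0]) / tau_d
amp = np.abs(chi_hat(w, x0, tau_d))
print(np.all(np.diff(amp) > 0))          # amplitude grows with frequency
```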
STP can also regulate information transmission in other ways. For instance, STD may contribute to remove auto-correlation in temporal inputs, since temporally proximal spikes tend to magnify the depression effect and hence reduce the output correlation of the post-synaptic potential (Goldman 02). On the other hand, STF, whose effect is enlarged by temporally proximal spikes, improves the sensitivity of a post-synaptic neuron to temporally correlated inputs (Mejías 08, Bourjaily 12).
By combining STD and STF, neural information transmission could be further improved. For example, by combining STF-dominated excitatory and STD-dominated inhibitory synapses, the detection of high-frequency epochs by a postsynaptic neuron can be enhanced (Klyachko 06). In a postsynaptic neuron receiving both STD-dominated and STF-dominated inputs, the neural response can show both low- and high-pass filtering properties (Fortune 01).
Gain control
Since STD suppresses synaptic efficacy in a frequency-dependent manner, it has been suggested that STD provides an automatic mechanism for gain control, namely, by assigning high gain to slowly firing afferents and low gain to rapidly firing afferents (Abbott 97, Abbott 04, Cook 03). If a steady presynaptic firing rate \(R\) changes abruptly by an amount \(\Delta R\), the first spike at the new rate will be transmitted with the efficacy \(E(R)\) before the synapse is further depressed. Thus, the transient increase in synaptic input will be proportional to \(\Delta R\, E(R)\), which for large rates is approximately proportional to \(\Delta R/R\) (see above). This is reminiscent of Weber's law: the transient synaptic response is roughly proportional to the relative (percentage) change of the input firing rate rather than to its absolute change. Fig. 2D shows that for a fixed-size rate change \(\Delta R\), the response decreases as a function of the steady input value; without STD, the response would instead be constant for a fixed-size rate change.
Figure 2. (A) The steady-state values of the efficacy of an STD-dominated synapse and the postsynaptic current it generates, measured by \(ux\) and \(uxR\), respectively. The parameters are the same as in Fig.1B. (B) Same as (A) for an STF-dominated synapse. The parameters are the same as in Fig. 1C. (C) The filtering properties of an STD-dominated synapse, measured by \(|\widehat{\chi}(\omega)|\) [Eq. (6)]. (D) The neural response to an abrupt input change \(\Delta R\) vs. the steady rate value for an STD-dominated synapse. \(\Delta R=5\)Hz. The parameters are the same as in Fig.1B.
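The Weber-like scaling can be read off from Eq. (4) with \(u^+ \approx U\): the pre-change efficacy is \(E(R) = AU/(1+U\tau_d R)\), so the transient response \(\Delta R\, E(R)\) approaches \(A\,\Delta R/(\tau_d R)\) at large rates. A minimal numerical check (Python; the rate values are illustrative):

```python
import numpy as np

# Fig. 1B parameters (seconds), depression-dominated regime (u+ ~ U)
A, U, tau_d = 1.0, 0.45, 0.75
dR = 5.0                                     # abrupt rate step (Hz)
rates = np.array([10.0, 20.0, 40.0, 80.0])   # steady rates before the step

E = A * U / (1.0 + U * tau_d * rates)        # steady efficacy E(R)
transient = dR * E                           # first-spike transient response

# the response shrinks with the steady rate, and transient * R approaches
# the constant A * dR / tau_d, i.e. the response tracks dR / R
print(transient)
print(transient * rates)
```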
Effects on network dynamics
In addition to feedforward and feedback transmission, neural circuits generate recurrent interactions between neurons. With STP included in the recurrent interactions, the network dynamics exhibits many new interesting behaviors that do not arise with purely static synapses. These new dynamical properties could therefore implement STP-mediated network computation.
Prolongation of neural responses to transient inputs
Since STP has a much longer time scale than single-neuron dynamics (the latter is typically on the order of \(10-20\) milliseconds), a new feature that STP can bring to network dynamics is the prolongation of neural responses to a transient input. This stimulus-induced residual activity holds a memory trace of the input, lasting up to several hundred milliseconds in a large network, and can serve as a buffer for information processing. For example, it has been shown that STD-mediated residual activity can enable a neural system to discriminate between rhythmic inputs of different periods (Karmarkar 07). STP also plays an important role in a general computational framework called a reservoir network. In this framework, STP, together with the other dynamical elements of a large network, effectively maps the input features from a low-dimensional space to the high-dimensional state space of the network, which includes both active (neural) and hidden (synaptic) components, so that the input information can be read out more easily (Buonomano 09). In a recent development it was proposed that STF-enhanced synapses can themselves hold the memory trace of an input without recruiting persistent neuronal firing, potentially providing the most economical and robust way to implement working memory (Mongillo 08).
Modulation of network responses to external input
Since STP modifies synaptic efficacy on the fly, it can modulate the network response to sustained external inputs. An example is bursty synchronous firing in an STD-dominated network, occurring either spontaneously or in response to external inputs. The resulting bursts of activity are called population spikes (Loebel 02). To understand this effect, consider a network with strong recurrent interactions between neurons. When a sufficiently large group of neurons fires together, e.g., triggered by an external stimulus, it can recruit other neurons via an avalanche-like process. After a large synchronous burst of activity, however, the synapses are weakened by STD, the recurrent currents drop rapidly, and the network activity returns to baseline. The network cannot be activated again until the synapses have sufficiently recovered from depression. Therefore, the rate of population spikes is determined by the time constant of STD (Fig.3A,B). STF can also modulate the network response to external inputs, but in a very different manner (Barak 07). The varied response properties mediated by STP may provide different ways of representing and conveying stimulus information in a network.
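This mechanism can be sketched with a deliberately simplified rate model (Python; a single threshold-linear excitatory population with STD and hypothetical parameters, not the spiking network of Loebel 02): a brief input pulse ignites a self-amplifying burst, STD terminates it, and the network then stays quiescent until \(x\) recovers enough for the next pulse to ignite another burst.

```python
import numpy as np

def population_spikes(T=2.0, dt=5e-4):
    """Threshold-linear excitatory population with STD (hypothetical
    parameters, chosen for illustration). Returns time (s), population
    rate r (Hz) and available synaptic resources x."""
    tau, tau_d, U, J, theta = 0.01, 0.5, 0.5, 2.0, 1.0
    n = int(T / dt)
    t = np.arange(n) * dt
    r = np.zeros(n)
    x = np.ones(n)
    pulses = ((0.10, 0.12), (1.10, 1.12))   # two brief excitatory pulses
    for k in range(1, n):
        I_ext = 5.0 if any(a <= t[k] < b for a, b in pulses) else 0.0
        # recurrent drive J*x*r is supra-threshold self-amplifying while x ~ 1
        drive = max(J * x[k-1] * r[k-1] + I_ext - theta, 0.0)
        r[k] = r[k-1] + dt * (-r[k-1] + drive) / tau
        # STD: resources deplete with activity, recover with tau_d
        x[k] = x[k-1] + dt * ((1.0 - x[k-1]) / tau_d - U * x[k-1] * r[k-1])
    return t, r, x

t, r, x = population_spikes()
# each pulse ignites a burst that STD terminates; in between, activity is ~0
burst1 = r[(t > 0.10) & (t < 0.40)].max()
quiet = r[(t > 0.60) & (t < 1.05)].max()
burst2 = r[(t > 1.10) & (t < 1.50)].max()
```

Presenting the pulses faster than \(x\) can recover makes some pulses fail to ignite a burst, which is the fraction-of-inputs effect illustrated in Fig. 3B.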
Induction of instability or mobility of network state
Persistent firing, referring to situations in which a group of neurons continue firing without external drive, is widely regarded as a neural substrate for information representation (Fuster 71). To maintain persistent activity in a network, strong excitatory recurrent interactions between neurons are needed to establish a positive-feedback loop sustaining neuronal responses. Mathematically, persistent activity is often modeled as an active stationary state (attractor) of the network. Since STD weakens synaptic efficacy depending on the level of neuronal activity, it can suppress an attractor state. This property, however, can be used to carry out valuable computations.
Consider a network that holds multiple attractor states competing with each other: by destabilizing the current state, STD can cause the network to switch to another attractor state (Torres 07, Katori 11, Igarashi 12). This property has been linked to the spontaneous transitions between up and down states of cortical neurons (Holcman 06), to the binocular rivalry phenomenon (Kilpatrick 10), and to enhanced discrimination capacity for superimposed ambiguous inputs (Fung 13). STF can also induce state switching, but it does so indirectly, by facilitating the excitatory synapses onto interneurons, which in turn suppress the excitatory neurons (Melamed 08).
The joint effect of STD and STF on the memory capacity of the classical Hopfield model has been investigated (Mejías 09). It was found that STD degrades the memory capacity of the network but induces a computationally desirable property: the network can hop among memory states, which could be useful for memory search. Interestingly, STF can compensate for the memory capacity lost to STD.
Enrichment of attractor dynamics
Continuous Attractor Neural Networks (CANNs), also called neural field models or ring models (Amari 77), have been widely used to describe the encoding of continuous stimuli in the neural system, such as for head-direction, orientation, movement direction, and spatial location of objects. A CANN, due to its translation-invariant recurrent interactions between neurons, holds a continuous family of localized stationary states, called bumps. These stationary states form a subspace on which the network is neutrally stable, enabling the network to track time-varying stimuli smoothly.
With STP included, a CANN displays interesting new dynamical behaviors. One of them is a spontaneous traveling wave phenomenon (York 09, Fung 12, Bressloff 12) (Fig.3C). Consider a network that is initially in a localized bump state. Because of STD, the neural interactions in the bump region are weakened. As a result of competition from neighboring attractor states, a small displacement will push the bump away, and it will keep moving in that direction due to the STD effect. If the network is driven by a continuously moving input, then in a proper parameter regime the bump movement can even lead the external drive by a constant time irrespective of the input's speed, achieving an anticipative behavior reminiscent of the predictive responses of head-direction neurons in rodents (Fig.3D; Fung 12).
Figure 3. (A,B) Population spikes generated by an STD-dominated network in response to external excitatory pulses. When the presentation rate of the pulses is low (A), the network responds to each of them. At higher presentation rates (B), the network responds to only a fraction of the inputs. Adapted from (Loebel 02). (C) The traveling wave generated by STD in a CANN. (D) The anticipative tracking behavior of a CANN with STD.
Appendix A: Derivation of the temporal filter for short-term depression
We consider the rate-based dynamics in Eq. (3) for depression-dominated synapses (\(u^+ \approx U\)) and for synaptic responses that are much faster than the depression dynamics ($\tau_s \ll \tau_d$):\[ \begin{eqnarray} {\frac{{\rm d} x}{{\rm d}t}}&=&\frac{1-x}{\tau_{d}}-Ux R(t) \tag{7}\\ I(t) &= & \tau_{s} AU x R(t) \tag{8} \,. \end{eqnarray} \]
The aim is to derive a filter $\chi$ that relates the output synaptic current $I$ to the input rate $R$. Note that, because the input rate $R$ enters the equations multiplicatively, the input-output transfer function is nonlinear. Yet a linear filter can be derived by considering small perturbations $R_1 \rho(t)$ of the firing rate $R(t)$ around a constant rate $R_0$, that is, \( R(t):=R_0 + R_1 \rho (t)\, \quad\text{with}\quad R_0,R_1>0 \quad\text{and}\quad R_1\ll R_0 \, . \tag{9} \)
We assume that such small perturbations in $R$ produce small perturbations in the variable $x$ around its steady state value $x_0>0$ \[ x(t) = x_0 + x_1(t)\quad\text{with}\quad x_0 = \frac{1}{1+UR_0\tau_{d}} \quad\text{and}\quad |x_1(t)| \ll x_0 \, . \tag{10} \]
We can now linearize the dynamics of $x(t)$ around the steady-state value $x_0$ by approximating the product
\( \begin{eqnarray} xR &=& (x_0+x_1)(R_0+R_1\rho)\\ &=& x_0 R_0 + x_0 R_1 \rho + x_1 R_0+ x_1 R_1\rho\\ &\approx& x_0 R_0 + x_0 R_1 \rho + x_1 R_0\\ &\approx& R_0 x+ x_0R -x_0 R_0 \tag{11} \end{eqnarray} \)
where in Eq. (11) we dropped the second-order term $x_1 R_1\rho$ because we assumed $R_1\ll R_0$ and $|x_1|\ll x_0$. Plugging Eq. (11) into Eq. (7) yields
\( \begin{eqnarray} {\frac{{\rm d} x}{{\rm d}t}} = \frac{1-x}{\tau_{d}} - U R_0 x - U x_0 R + U x_0 R_0\,.\tag{12} \end{eqnarray} \)
We now take the Fourier transform at both sides of Eq. (12)
\(
\begin{eqnarray}
j\omega \tau_{d} \widehat{x} = -\widehat{x} - U R_0 \tau_{d} \widehat{x} - U x_0 \tau_{d}\widehat{R} + (1+ U R_0 \tau_{d} x_0) \delta(\omega)
\tag{13}
\end{eqnarray}
\)
where we defined the Fourier transform pair
\(
\begin{eqnarray}
\widehat{x}(\omega) := \int \!{\rm d}{t}\, x(t) \exp(-j\omega t ) \quad ; \quad x(t) = \frac{1}{2\pi}\int \!{\rm d}\omega\, \widehat{x}(\omega) \exp(j\omega t)
\tag{14}
\end{eqnarray}
\)
and $j=\sqrt{-1}$ is the imaginary unit. Solving Eq. (13) for the variable $\widehat{x}$, we find
\(
\begin{eqnarray}
\widehat{x} = -\frac{U\tau_{d}x_0}{1/x_0 + j \omega \tau_{d}} \widehat{R} + x_0 (2-x_0) \delta(\omega) \tag{15}
\end{eqnarray}
\)
where from Eq. (10) we used $U R_0 \tau_{d}=1/x_0 - 1$.
Next, we plug Eq. (11) into Eq. (8) to linearize the dynamics of the synaptic current
\( \begin{eqnarray} I &=& \tau_{s}AU (R_0x+x_0R-x_0R_0)\\ &=& I_0 \left( \frac{x}{x_0}+ \frac{R}{R_0}-1\right) \tag{16} \end{eqnarray} \) around the steady-state value $I_0 = \tau_{s}AU x_0 R_0$.
By taking the Fourier transform at both sides of Eq. (16), using Eq. (15), we obtain \( \begin{eqnarray} \widehat{I} &=& I_0 \frac{\widehat{x}}{x_0} + I_0 \frac{\widehat{R}}{R_0} - I_0 \delta(\omega) \\ &=& \frac{I_0}{R_0} \widehat{\chi} \widehat{R} + I_0(1-x_0) \delta(\omega) \tag{17} \end{eqnarray} \) where we defined the filter \( \begin{eqnarray} \widehat{\chi}(\omega) := 1- \frac{1/x_0 -1}{1/x_0 + j\omega \tau_{d}} = \frac{1+(\tau_{d}\omega)^2x_0+j\omega\tau_{d}(1-x_0)}{1/x_0+(\tau_{d}\omega)^2 x_0}\,. \tag{18} \end{eqnarray} \)
To interpret the result, we plug into Eq. (17) the Fourier transform $\widehat{R}=R_0\delta(\omega)+R_1 \widehat{\rho}$, which yields
\( \begin{eqnarray} \widehat{I}(\omega) = I_0 \delta(\omega) + \frac{I_0 R_1}{R_0} \widehat{\chi}(\omega) \widehat{\rho}(\omega)\,. \tag{19} \end{eqnarray} \)
Finally, the inverse Fourier transform of Eq. (19) reads \( \begin{eqnarray} I(t) = I_0 + \frac{I_0 R_1}{R_0} \int {\rm d}\tau \, \chi(\tau) \rho(t-\tau) \tag{20} \end{eqnarray} \) with \( \begin{eqnarray} \chi(t)=\delta(t) - \frac{1/x_0-1}{\tau_{d}} \begin{cases} \displaystyle {\exp\left(-\frac{t}{x_0\tau_{d}}\right)} & \text{for}\quad t\ge0 \\ 0 & \text{for}\quad t<0 \end{cases}\,. \tag{21} \end{eqnarray} \)
Therefore the output current $I$ is the sum of the steady-state current $I_0$ and the filtered perturbation $\frac{I_0 R_1}{R_0} \int {\rm d}\tau \, \chi(\tau) \rho(t-\tau)$ where $\chi$ is the filter we are interested in.
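The derivation can be verified numerically by driving Eqs. (7)-(8) with a weakly modulated rate and comparing the amplitude of the resulting current oscillation with the prediction of Eq. (19). A sketch (Python, Euler integration; the 2 Hz sinusoidal perturbation and the parameter values, taken from Fig. 1B, are illustrative choices):

```python
import numpy as np

# Rate-based STD dynamics, Eqs. (7)-(8), driven by a weakly modulated rate,
# compared with the linear-filter prediction of Eq. (19) (amplitudes only).
U, tau_d, tau_s, A = 0.45, 0.75, 0.020, 1.0   # seconds
R0, R1, f = 15.0, 0.5, 2.0                    # Hz; R1 << R0
omega = 2 * np.pi * f

dt, T = 1e-4, 10.0
t = np.arange(0.0, T, dt)
R = R0 + R1 * np.sin(omega * t)
x = np.empty_like(t); x[0] = 1.0
for k in range(1, len(t)):                    # Euler step of Eq. (7)
    x[k] = x[k-1] + dt * ((1.0 - x[k-1]) / tau_d - U * x[k-1] * R[k-1])
I = tau_s * A * U * x * R                     # Eq. (8)

# theory: oscillation amplitude (I0 R1 / R0) |chi_hat(omega)|, Eqs. (18)-(19)
x0 = 1.0 / (1.0 + U * tau_d * R0)
I0 = tau_s * A * U * x0 * R0
chi = 1.0 - (1.0 / x0 - 1.0) / (1.0 / x0 + 1j * omega * tau_d)
pred = I0 * R1 / R0 * abs(chi)

tail = t > T / 2                              # discard the initial transient
meas = (I[tail].max() - I[tail].min()) / 2
print(abs(meas - pred) / pred < 0.1)          # simulation matches the filter
```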
References
- Research Topic: Neural Information Processing with Dynamical Synapses. S. Wu, K. Y. Michael Wong and M. Tsodyks. Frontiers in Computational Neuroscience, 2013.
- Abbott, L. F. et al. (1997). Synaptic Depression and Cortical Gain Control. Science. 275(5297): 221-224. doi:10.1126/science.275.5297.221
- Abbott, L. F. and Regehr, Wade G. (2004). Synaptic computation. Nature. 431(7010): 796-803. doi:10.1038/nature03010
- Amari, Shun-ichi (1977). Dynamics of pattern formation in lateral-inhibition type neural fields. Biological Cybernetics. 27(2): 77-87. doi:10.1007/BF00337259
- Barak, Omri and Tsodyks, Misha (2007). Persistent Activity in Neural Networks with Dynamic Synapses. PLoS Computational Biology. 3(2): e35. doi:10.1371/journal.pcbi.0030035
- Bi, G. and Poo, M. (2001). Synaptic modification by correlated activity: Hebb's postulate revisited. Annual Review of Neuroscience. 24: 139-166.
- Bourjaily, M. A. and Miller, P. (2012). Dynamic afferent synapses to decision-making networks improve performance in tasks requiring stimulus associations and discriminations. Journal of Neurophysiology. 108(2): 513-527. doi:10.1152/jn.00806.2011
- Bressloff, P. C. (2012). Spatiotemporal Dynamics of Continuum Neural Fields. Journal of Physics A. 45: 033001.
- Buonomano, Dean V. and Maass, Wolfgang (2009). State-dependent computations: spatiotemporal processing in cortical networks. Nature Reviews Neuroscience. 10(2): 113-125. doi:10.1038/nrn2558
- Cook, Daniel L.; Schwindt, Peter C.; Grande, Lucinda A. and Spain, William J. (2003). Synaptic depression in the localization of sound. Nature. 421(6918): 66-70. doi:10.1038/nature01248
- Dittman, J. S.; Kreitzer, A. C. and Regehr, W. G. (2000). Interplay between facilitation, depression, and residual calcium at three presynaptic terminals. Journal of Neuroscience. 20: 1374-1385.
- Fortune, Eric S. and Rose, Gary J. (2001). Short-term synaptic plasticity as a temporal filter. Trends in Neurosciences. 24(7): 381-385. doi:10.1016/S0166-2236(00)01835-X
- Fuhrmann, G. et al. (2002). Coding of Temporal Information by Activity-Dependent Synapses. Journal of Neurophysiology. 87: 140-148.
- Fung, C. C. Alan; Wong, K. Y. Michael; Wang, He and Wu, Si (2012). Dynamical Synapses Enhance Neural Information Processing: Gracefulness, Accuracy, and Mobility. Neural Computation. 24(5): 1147-1185. doi:10.1162/NECO_a_00269
- Fung, C. C.; Wong, K. Y. Michael and Wu, S. (2012). Delay Compensation with Dynamical Synapses. Advances in Neural Information Processing Systems 16.
- Fung, C. C. A.; Wang, H.; Lam, K.; Wong, K. Y. M. and Wu, S. (2013). Resolution enhancement in neural networks with dynamical synapses. Frontiers in Computational Neuroscience. 7: 73. doi:10.3389/fncom.2013.00073
- Fuster, J. M. and Alexander, G. E. (1971). Neuron Activity Related to Short-Term Memory. Science. 173(3997): 652-654. doi:10.1126/science.173.3997.652
- Goldman, Mark S.; Maldonado, Pedro and Abbott, L. F. (2002). Redundancy Reduction and Sustained Firing with Stochastic Depressing Synapses. The Journal of Neuroscience. 22(2): 584-591.
- Holcman, David and Tsodyks, Misha (2006). The Emergence of Up and Down States in Cortical Networks. PLoS Computational Biology. 2(3): e23. doi:10.1371/journal.pcbi.0020023
- Igarashi, Y.; Oizumi, M. and Okada, M. (2012). Theory of correlation in a network with synaptic depression. Physical Review E. 85: 016108.
- Karmarkar, Uma R. and Buonomano, Dean V. (2007). Timing in the Absence of Clocks: Encoding Time in Neural Network States. Neuron. 53(3): 427-438. doi:10.1016/j.neuron.2007.01.006
- Katori, Yuichi et al. (2011). Representational Switching by Dynamical Reorganization of Attractor Structure in a Network Model of the Prefrontal Cortex. PLoS Computational Biology. 7(11): e1002266. doi:10.1371/journal.pcbi.1002266
- Kilpatrick, Zachary P. and Bressloff, Paul C. (2010). Binocular Rivalry in a Competitive Neural Network with Synaptic Depression. SIAM Journal on Applied Dynamical Systems. 9(4): 1303-1347. doi:10.1137/100788872
- Klyachko, Vitaly A. and Stevens, Charles F. (2006). Excitatory and Feed-Forward Inhibitory Hippocampal Synapses Work Synergistically as an Adaptive Filter of Natural Spike Trains. PLoS Biology. 4(7): e207. doi:10.1371/journal.pbio.0040207
- Loebel, A. and Tsodyks, M. (2002). Computation by ensemble synchronization in recurrent networks with synaptic depression. Journal of Computational Neuroscience. 13: 111-124.
- Markram, H.; Wang, Y. and Tsodyks, M. (1998). Differential signaling via the same axon of neocortical pyramidal neurons. Proceedings of the National Academy of Sciences. 95(9): 5323-5328. doi:10.1073/pnas.95.9.5323
- Markram, Henry and Tsodyks, Misha (1996). Redistribution of synaptic efficacy between neocortical pyramidal neurons. Nature. 382(6594): 807-810. doi:10.1038/382807a0
- Mejías, Jorge F. and Torres, Joaquín J. (2008). The role of synaptic facilitation in spike coincidence detection. Journal of Computational Neuroscience. 24(2): 222-234. doi:10.1007/s10827-007-0052-8
- Mejías, Jorge F. and Torres, Joaquín J. (2009). Maximum Memory Capacity on Neural Networks with Short-Term Synaptic Depression and Facilitation. Neural Computation. 21(3): 851-871. doi:10.1162/neco.2008.02-08-719
- Melamed, Ofer; Barak, Omri; Silberberg, Gilad; Markram, Henry and Tsodyks, Misha (2008). Slow oscillations in neural networks with facilitating synapses. Journal of Computational Neuroscience. 25(2): 308-316. doi:10.1007/s10827-008-0080-z
- Mongillo, G.; Barak, O. and Tsodyks, M. (2008). Synaptic Theory of Working Memory. Science. 319(5869): 1543-1546. doi:10.1126/science.1150769
- Rosenbaum, Robert; Rubin, Jonathan and Doiron, Brent (2012). Short Term Synaptic Depression Imposes a Frequency Dependent Filter on Synaptic Information Transfer. PLoS Computational Biology. 8(6): e1002557. doi:10.1371/journal.pcbi.1002557
- Rotman, Z.; Deng, P.-Y. and Klyachko, V. A. (2011). Short-Term Plasticity Optimizes Synaptic Information Transmission. Journal of Neuroscience. 31(41): 14800-14809. doi:10.1523/JNEUROSCI.3231-11.2011
- Stevens, Charles F. and Wang, Yanyan (1995). Facilitation and depression at single central synapses. Neuron. 14(4): 795-802. doi:10.1016/0896-6273(95)90223-6
- Torres, J. J.; Cortes, J. M.; Marro, J. and Kappen, H. J. (2007). Competition Between Synaptic Depression and Facilitation in Attractor Neural Networks. Neural Computation. 19(10): 2739-2755. doi:10.1162/neco.2007.19.10.2739
- Tsodyks, Misha and Markram, Henry (1997). The neural code between neocortical pyramidal neurons depends on neurotransmitter release probability. Proceedings of the National Academy of Sciences. 94(2): 719-723. doi:10.1073/pnas.94.2.719
- Tsodyks, Misha; Pawelzik, Klaus and Markram, Henry (1998). Neural Networks with Dynamic Synapses. Neural Computation. 10(4): 821-835. doi:10.1162/089976698300017502
- Wang, Yun et al. (2006). Heterogeneity in the pyramidal network of the medial prefrontal cortex. Nature Neuroscience. 9(4): 534-542. doi:10.1038/nn1670
- York, Lawrence Christopher and van Rossum, Mark C. W. (2009). Recurrent networks with short term synaptic depression. Journal of Computational Neuroscience. 27(3): 607-620. doi:10.1007/s10827-009-0172-4
- Zucker, Robert S. and Regehr, Wade G. (2002). Short-Term Synaptic Plasticity. Annual Review of Physiology. 64(1): 355-405. doi:10.1146/annurev.physiol.64.092501.114547