Measures of neuronal signal synchrony

From Scholarpedia
Thomas Kreuz (2011), Scholarpedia, 6(12):11922. doi:10.4249/scholarpedia.11922

Curator: Thomas Kreuz

Measures of neuronal signal synchrony are estimators of the synchrony between two or sometimes more continuous time series of brain activity which yield low values for independent time series and high values for correlated time series. A complementary class of approaches comprises measures of spike train synchrony which quantify the degree of synchrony between discrete signals.

Synchronization of continuous time series can manifest itself in many different ways. The simplest case of complete synchronization (Fujisaka and Yamada, 1983) can be attained if identical systems are coupled sufficiently strongly so that their states coincide after transients have died out. The concept of generalized synchronization (Afraimovich et al., 1986), introduced for uni-directionally coupled systems, describes the presence of some functional relation between the states of the two systems. Finally, phase synchronization, first described for chaotic oscillators (Rosenblum et al., 1996), is defined as the global entrainment of the phases while the amplitudes may remain uncorrelated.

Following this variety of concepts, many different approaches to quantify the degree of synchronization between two continuous signals have been proposed. These approaches comprise linear ones like the cross correlation or the spectral coherence function, as well as nonlinear measures like mutual information (Gray, 1990), transfer entropy (Schreiber, 2000), Granger causality (Granger, 1969), or the nonlinear interdependence (Arnhold et al., 1999; Quian Quiroga et al., 2002; Andrzejak et al., 2003). Furthermore, different indices of phase synchronization such as the mean phase coherence (Kuramoto, 1984; Mormann et al., 2000) have been introduced.

Linear measures

Cross correlation

The simplest and most widely used measure of synchronization is the linear cross correlation. It is defined in the time domain as a function of the time lag \(\tau = -(N-1),...,0,...,N-1\) and is derived from normalized signals \(x_n\) and \(y_n\) of length \(N\) and with zero mean and unit variance as

\[\tag{1} C_{XY} (\tau) = \begin{cases} \frac{1}{N-\tau} \sum_{n=1}^{N-\tau} x_{n+\tau} y_n & \tau \geq 0 \\ C_{YX} (-\tau) & \tau<0. \end{cases}\]


Its absolute value is symmetric in \(X\) and \(Y\) and attains a maximum value of \(1\) for complete or lag synchronization, a minimum value of \(-1\) for completely anti-correlated signals, and values close to \(0\) for linearly independent signals. For the statistical significance of any deviation from zero see Bartlett (1946).
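
For illustration, the following minimal numpy sketch evaluates Eq. (1) directly; the function name, the maximum lag, and the test signals are arbitrary choices.

```python
import numpy as np

def cross_correlation(x, y, max_lag):
    """Normalized cross correlation C_XY(tau) of Eq. (1) for tau = -max_lag,...,max_lag."""
    x = (x - x.mean()) / x.std()                  # zero mean, unit variance
    y = (y - y.mean()) / y.std()
    N = len(x)

    def c_pos(a, b, tau):                         # first case of Eq. (1), tau >= 0
        return np.dot(a[tau:], b[:N - tau]) / (N - tau)

    taus = np.arange(-max_lag, max_lag + 1)
    c = np.array([c_pos(x, y, t) if t >= 0 else c_pos(y, x, -t) for t in taus])
    return taus, c

# y is a noisy copy of x delayed by 5 samples, so |C_XY| peaks at tau = -5
rng = np.random.default_rng(0)
x = rng.standard_normal(1000)
y = np.roll(x, 5) + 0.1 * rng.standard_normal(1000)
taus, c = cross_correlation(x, y, max_lag=20)
print(taus[np.argmax(np.abs(c))])
```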

Coherence

Linear correlations can also be quantified in the frequency domain by means of the cross spectrum

\[\tag{2} C_{XY} (\omega) = E [F_X (\omega) F_Y^* (\omega)] \]


where \(E[\cdot]\) denotes the expectation value (in practice estimated by averaging), \(F_X (\omega)\) is the Fourier transform of \(x\), \(\omega\) are the discrete frequencies, and the asterisk denotes complex conjugation. The cross spectrum is a complex function; its squared modulus normalized by the power spectra of the two systems is called the coherence function

\[\tag{3} \Gamma_{XY} (\omega) = \frac{|C_{XY} (\omega)|^2}{C_{XX} (\omega) \, C_{YY} (\omega)}\ .\]


In practice, to reduce finite-size effects, spectra are usually calculated by averaging over the estimated periodograms for subintervals of equal length (Welch's method; Welch, 1967). Due to its frequency dependence, \(\Gamma_{XY} (\omega)\) is a very useful measure if one is interested in the synchronization in certain frequency ranges only, e.g., in the classical EEG frequency bands \(\delta, \theta, \alpha, \beta, \) and \(\gamma \) (Niedermeyer and Lopes da Silva, 2005).
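
In Python, a Welch estimate of Eq. (3) is available, e.g., as scipy.signal.coherence; a minimal sketch, in which the sampling rate and the segment length are arbitrary choices:

```python
import numpy as np
from scipy.signal import coherence

fs = 250.0                              # sampling rate in Hz (assumed)
t = np.arange(0, 20, 1 / fs)
rng = np.random.default_rng(1)
common = np.sin(2 * np.pi * 10 * t)     # shared 10 Hz component
x = common + rng.standard_normal(t.size)
y = common + rng.standard_normal(t.size)

# Magnitude-squared coherence, Eq. (3), via Welch's method:
# periodograms averaged over overlapping segments of about 2 s.
f, Gxy = coherence(x, y, fs=fs, nperseg=512)
print(f[np.argmax(Gxy)])                # close to 10 Hz
```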

The coherence function may also be computed in a time-resolved manner by applying time-frequency analysis, such as wavelet analysis (Torrence and Compo, 1998). The appropriate estimation method then depends on the nature of the data under study: for two single time series the estimate may be obtained by a smoothing operation (Grinsted et al., 2004; Torrence and Webster, 1999), whereas multiple sets of time series, i.e., multiple trials, may be analyzed by a different estimation method (Zhan et al., 2006).


Nonlinear measures

Mutual information

Main article: Mutual information

In contrast to the cross correlation, mutual information quantifies not only linear but also nonlinear dependencies between the systems \(X\) and \(Y\). It originates from information theory (Cover and Thomas, 2006) and is based on the Shannon entropy, which quantifies the uncertainty of a probability distribution. It is defined as

\[\tag{4} I (X,Y) = H (X) + H (Y) - H (X,Y),\]

which uses the Shannon entropies of the marginal distributions, e.g.,

\[\tag{5} H (X) = - \sum_{i=1}^{M_x} p_x (i) \log p_x (i)\]

as well as the Shannon entropy of the joint distribution

\[\tag{6} H (X,Y) = - \sum_{i=1}^{M_x} \sum_{j=1}^{M_y} p_{xy} (i,j) \log p_{xy} (i,j).\]

Here \(p_x (i), i=1,...,M_x [p_y (j), j=1,...,M_y]\) represent the normalized probabilities of the \(i\)-th [\(j\)-th] state in \(X\)-space [\(Y\)-space]. Their joint probability is denoted by \(p_{xy} (i,j)\), while \(M_x\) and \(M_y\) denote the respective numbers of states.

Mutual information is symmetric in \(X\) and \(Y\) and quantifies the amount of information about \(X\) obtained by knowing \(Y\) and vice versa. If the logarithms are defined with base \(2\ ,\) it is measured in bits. Mutual information is zero if and only if the two time series are independent.

The mutual information \(I (X,Y)\) can also be regarded as a Kullback-Leibler entropy measuring the gain in information when replacing the distribution \(p_x (i) p_y (j)\ ,\) obtained under the assumption of independence between \(X\) and \(Y\ ,\) by the actual joint probability distribution \(p_{xy} (i,j)\ ,\) i.e.,

\[\tag{7} I (X,Y) = \sum_{i=1}^{M_x} \sum_{j=1}^{M_y} p_{xy} (i,j) \log \frac{p_{xy} (i,j)}{p_x (i) p_y (j)}\ .\]


The classical approach for estimating mutual information from two time series \(x\) and \(y\) consists in partitioning their supports into bins of finite size and counting the numbers of points falling into the various bins. A more sophisticated estimator adapts the resolution by using bins whose size is adjusted according to the local data density in the joint space and then kept equal in the marginal subspaces (Kraskov et al., 2004). For general strategies regarding the estimation of entropies and mutual information see Paninski (2003).
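
A minimal plug-in estimator along the lines of the classical binning approach might look as follows; the number of bins is an arbitrary choice, and note the positive bias of this estimator for finite data:

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Plug-in estimate of I(X,Y) in bits, Eqs. (4)-(7), from uniform binning."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()                    # joint distribution p_xy(i,j)
    px = pxy.sum(axis=1)                # marginal p_x(i)
    py = pxy.sum(axis=0)                # marginal p_y(j)
    nz = pxy > 0                        # avoid log(0)
    return np.sum(pxy[nz] * np.log2(pxy[nz] / np.outer(px, py)[nz]))

rng = np.random.default_rng(2)
x = rng.standard_normal(10000)
print(mutual_information(x, x + 0.5 * rng.standard_normal(10000)))  # clearly positive
print(mutual_information(x, rng.standard_normal(10000)))            # near zero (small bias)
```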

Transfer entropy

Transfer entropy (Schreiber, 2000) extends the concept of mutual information to conditional probabilities and estimates the influence of the state of \(Y\) on the transition probabilities in \(X\)

\[\tag{8} T (X,Y) = \sum_{i_{n+1},\, i_n^{(k)},\, j_n^{(l)}} p_{xy} (i_{n+1},i_n^{(k)},j_n^{(l)}) \log \frac{p_{xy} (i_{n+1}|i_n^{(k)},j_n^{(l)})}{p_x (i_{n+1}|i_n^{(k)})}\ ,\]


and analogously for \(T (Y,X)\ .\)

Here \(p_x (i_{n+1}|i_n^{(k)})\) represents the conditional probability of observing the state \(i_{n+1}\) in \(X\) after the word \(i_n^{(k)}\) of length \(k\ ,\) while \(p_{xy} (i_{n+1}|i_n^{(k)},j_n^{(l)})\) denotes the conditional probability of observing the state \(i_{n+1}\) in \(X\) after the word \(i_n^{(k)}\) of length \(k\) in \(X\) and the word \(j_n^{(l)}\) of length \(l\) in \(Y\ .\) The joint probability of observing the state \(i_{n+1}\) in \(X\) after the word \(i_n^{(k)}\) in \(X\) and the word \(j_n^{(l)}\) in \(Y\) is denoted by \(p_{xy} (i_{n+1},i_n^{(k)},j_n^{(l)}).\)

In contrast to mutual information, transfer entropy is an asymmetric measure designed to detect the direction of information exchange between the two systems. For a general review on causality detection based on information-theoretic approaches see Hlaváčková-Schindler et al. (2007).
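
A minimal plug-in sketch for word lengths \(k = l = 1\) and binary binning is given below; the function name and the coarse discretization are illustrative choices, and for serious applications longer words and bias correction should be considered:

```python
import numpy as np
from collections import Counter

def transfer_entropy(x, y, bins=2):
    """Plug-in estimate (in bits) of the transfer entropy from y to x,
    Eq. (8) with word lengths k = l = 1, after binning both signals."""
    edges = np.linspace(0, 1, bins + 1)[1:-1]
    xd = np.digitize(x, np.quantile(x, edges))
    yd = np.digitize(y, np.quantile(y, edges))
    triples = Counter(zip(xd[1:], xd[:-1], yd[:-1]))   # (x_{n+1}, x_n, y_n)
    pairs_xx = Counter(zip(xd[1:], xd[:-1]))           # (x_{n+1}, x_n)
    pairs_xy = Counter(zip(xd[:-1], yd[:-1]))          # (x_n, y_n)
    singles = Counter(xd[:-1])                         # x_n
    n = len(xd) - 1
    te = 0.0
    for (x1, x0, y0), c in triples.items():
        p_cond_xy = c / pairs_xy[(x0, y0)]             # p(x_{n+1} | x_n, y_n)
        p_cond_x = pairs_xx[(x1, x0)] / singles[x0]    # p(x_{n+1} | x_n)
        te += (c / n) * np.log2(p_cond_xy / p_cond_x)
    return te

rng = np.random.default_rng(3)
y = rng.standard_normal(20000)
x = np.roll(y, 1) + 0.5 * rng.standard_normal(20000)   # y drives x with lag 1
print(transfer_entropy(x, y))   # clearly positive
print(transfer_entropy(y, x))   # near zero
```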

Granger causality

Main article: Granger causality

A long-standing and still widely used directional approach based on the same principle as the transfer entropy is the Granger causality (Granger, 1969). It tests whether the prediction of a signal which relies only on information from its own past (the univariate model) can be improved by incorporating past information from the other signal (the bivariate model). The classical approach compares the univariate model

\[\tag{9} x_n = \sum_{k=1}^{K} a_k^x x_{n-k} + u_n^x\]


\[y_n = \sum_{k=1}^{K} a_k^y y_{n-k} + u_n^y\]

with the bivariate model

\[\tag{10} x_n = \sum_{k=1}^{K} a_k^{xy} x_{n-k} + \sum_{k=1}^{K} b_k^{xy} y_{n-k} + u_n^{xy}\]


\[y_n = \sum_{k=1}^{K} a_k^{yx} y_{n-k} + \sum_{k=1}^{K} b_k^{yx} x_{n-k} + u_n^{yx}\]

where \(K\) is the model order, \(a_k^x, a_k^y, a_k^{xy}, a_k^{yx}, b_k^{xy}, b_k^{yx}\) are the model parameters, which are fitted to the signals using linear regression, and \(u_n^x, u_n^y, u_n^{xy}, u_n^{yx}\) are the prediction errors associated with the respective models. The performance of the two models is typically evaluated by comparing the variances of their prediction errors. In addition to this linear approach, a variety of nonlinear extensions have been proposed (e.g., Marinazzo et al., 2008).
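
A least-squares sketch of this comparison is shown below; the model order, the log-ratio used as an index, and all names are illustrative choices (dedicated toolboxes additionally provide significance tests):

```python
import numpy as np

def ar_residual_var(target, regressors, K):
    """Least-squares fit of target_n on lags 1..K of each regressor;
    returns the variance of the prediction error."""
    N = len(target)
    cols = [reg[K - k:N - k] for reg in regressors for k in range(1, K + 1)]
    A = np.column_stack(cols)
    b = target[K:]
    coef, *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.var(b - A @ coef)

def granger_index(x, y, K=5):
    """Log-ratio of the error variances of the univariate model (Eq. 9)
    and the bivariate model (Eq. 10) for predicting x; positive values
    mean that the past of y improves the prediction of x."""
    return np.log(ar_residual_var(x, [x], K) / ar_residual_var(x, [x, y], K))

rng = np.random.default_rng(4)
y = rng.standard_normal(5000)
x = 0.8 * np.roll(y, 1) + 0.5 * rng.standard_normal(5000)  # y drives x
print(granger_index(x, y))   # clearly positive
print(granger_index(y, x))   # close to zero
```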

Nonlinear interdependence

Figure 1: Nonlinear interdependence. State space reconstruction of two nonlinear systems (Rössler and Lorenz, see Quian Quiroga et al., 2000) for the A. uncoupled case, and B. strongly coupled case. The size of the neighborhood in the responder is compared with the size of the mapping in the driver. In the uncoupled case neighbors in the responder are mapped on dispersed points in the driver, whereas in the coupled case neighbors are mapped onto neighbors.

This asymmetric measure was derived in Andrzejak et al. (2003) and is based on original proposals in Arnhold et al. (1999) and Quian Quiroga et al. (2002). It is related to the method of mutual false nearest neighbors (Rulkov et al., 1995), but unlike this method it does not assume a strict functional relationship between the dynamics of the underlying systems \(X\) and \(Y\). The nonlinear interdependence \(M\) relies on state space reconstruction. According to Takens' time delay embedding theorem (Takens, 1981), the state spaces of the two systems can be reconstructed from the recorded signals by temporal sequences of delay vectors \(\overrightarrow{x}_n = (x_n,...,x_{n-(m-1)d})\) and \(\overrightarrow{y}_n = (y_n,...,y_{n-(m-1)d})\), with \(m\) and \(d\) representing the embedding dimension and the time lag, respectively. Subsequently one can test whether closeness in the state space of \(Y\) implies closeness in the state space of \(X\) for equal time partners (Figure 1). Denoting the time indices of the \(k\) nearest neighbors of \(\overrightarrow{x}_n\) by \(r_{n,j}, j = 1,...,k\), the mean squared Euclidean distance of \(\overrightarrow{x}_n\) to its \(k\) nearest neighbors (after applying a Theiler correction (Theiler, 1986) to exclude temporally correlated neighbors) is defined as

\[\tag{11} R_n^{(k)} (X) = \frac{1}{k} \sum_{j=1}^{k} (\overrightarrow{x}_n - \overrightarrow{x}_{r_{n,j}})^2\ .\]


Then, by replacing the nearest neighbors by the equal time partners of the \(k\) nearest neighbors of \(\overrightarrow{y}_n\), denoted by \(s_{n,j}\), the \(y\)-conditioned mean squared Euclidean distance is given as

\[\tag{12} R_n^{(k)} (X|Y) = \frac{1}{k} \sum_{j=1}^{k} (\overrightarrow{x}_n - \overrightarrow{x}_{s_{n,j}})^2\ .\]


Using the mean squared Euclidean distance to all \(N-(m-1)d-1\) remaining vectors in \(\{ \overrightarrow{x}_n \}\),

\[\tag{13} R_n (X) = \frac{1}{N-(m-1)d-1} \sum_{j=1,j \neq n}^{N-(m-1)d} (\overrightarrow{x}_n - \overrightarrow{x}_j)^2\ ,\]


a normalized measure of directed nonlinear interdependence is obtained:

\[\tag{14} M (X|Y) = \frac{1}{N} \sum_{n=1}^N \frac{R_n (X) - R_n^{(k)} (X|Y)}{R_n (X) - R_n^{(k)} (X)}\ .\]


If closeness in \(Y\) implies closeness in \(X\ ,\) then \(R_n^{(k)} (X) \approx R_n^{(k)} (X|Y) \ll R_n (X)\ ,\) which leads to \(M (X|Y) \approx 1\) (for identical synchronization \(M (X|Y) = 1\)). In contrast, for independent systems one obtains \(R_n^{(k)} (X) \ll R_n^{(k)} (X|Y) \approx R_n (X)\) and accordingly \(M (X|Y) \approx 0\ .\)

Exchanging systems \(X\) and \(Y\) yields the opposite interdependence \(M (Y|X)\ .\)
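
The following self-contained sketch implements Eqs. (11)-(14) by brute force; all parameter values (embedding, neighborhood size, Theiler window) are illustrative and must be adapted to the data, and for simplicity the Theiler-corrected set is also used in Eq. (13):

```python
import numpy as np

def embed(x, m, d):
    """Delay-vector reconstruction: row n is (x_n, ..., x_{n-(m-1)d})."""
    n_vec = len(x) - (m - 1) * d
    return np.column_stack([x[(m - 1 - j) * d:(m - 1 - j) * d + n_vec]
                            for j in range(m)])

def nonlinear_interdependence(x, y, m=6, d=2, k=5, theiler=10):
    """Brute-force estimate of M(X|Y), Eq. (14)."""
    X, Y = embed(x, m, d), embed(y, m, d)
    n_vec = len(X)
    DX = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=2)  # squared distances
    DY = np.sum((Y[:, None, :] - Y[None, :, :]) ** 2, axis=2)
    M = 0.0
    for n in range(n_vec):
        idx = np.where(np.abs(np.arange(n_vec) - n) > theiler)[0]  # Theiler correction
        r = idx[np.argsort(DX[n, idx])[:k]]   # k nearest neighbors of x_n
        s = idx[np.argsort(DY[n, idx])[:k]]   # equal time partners from Y
        R_k = DX[n, r].mean()                 # Eq. (11)
        R_cond = DX[n, s].mean()              # Eq. (12)
        R_all = DX[n, idx].mean()             # Eq. (13)
        M += (R_all - R_cond) / (R_all - R_k)
    return M / n_vec

rng = np.random.default_rng(5)
x = np.sin(0.3 * np.arange(500)) + 0.05 * rng.standard_normal(500)
print(nonlinear_interdependence(x, x))                         # identical signals: 1
print(nonlinear_interdependence(x, rng.standard_normal(500)))  # independent: ~0
```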

Phase synchronization

The two most important features of a measure for phase synchronization are that it is time-resolved (with a much better time resolution than coherence, see Quian Quiroga et al., 2002) and that it is only sensitive to the phases, irrespective of the amplitudes of the two signals. Estimates of phase synchronization have found widespread use in neurophysiology since the analysis can be restricted to certain frequency bands reflecting specific brain rhythms, which allows relating the results to cognitive processes, states of vigilance, etc.

As the name implies, the first step in quantifying phase synchronization between two time series \(x\) and \(y\) is to determine their instantaneous phases \(\phi_x (t)\) and \(\phi_y (t)\ .\) The most common technique is based on the analytic signal approach (Gabor, 1946; Panter, 1965). From the continuous time series \(x (t)\ ,\) the analytic signal is defined as

\[\tag{15} Z_x (t) = x (t) + i \tilde{x} (t) = A_x^H (t) e^{i \phi_x^H (t)}\ ,\]


where \(\tilde{x} (t)\) is the Hilbert transform of \(x (t)\ :\)

\[\tag{16} \tilde{x} (t) = \frac{1}{\pi} \mathrm{p.v.} \int_{- \infty}^\infty \frac{x(t')}{t-t'} dt'\ .\]


(here p.v. denotes the Cauchy principal value). From \(Z_x\) we can obtain the Hilbert phase:

\[\tag{17} \phi_x^H (t) = \arctan \frac{\tilde{x} (t)}{x (t)}\ .\]


Analogously, \(\phi_y^H (t)\) is defined from \(y (t)\ .\)
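
In practice the analytic signal can be obtained, e.g., with scipy.signal.hilbert (which returns \(Z_x\), not \(\tilde{x}\)); np.angle evaluates the arctangent of Eq. (17) respecting the quadrant. A minimal sketch with an assumed sampling rate:

```python
import numpy as np
from scipy.signal import hilbert

fs = 250.0                            # sampling rate in Hz (assumed)
t = np.arange(0, 4, 1 / fs)
x = np.sin(2 * np.pi * 10 * t + 0.3)  # narrowband test signal

Z = hilbert(x)                        # analytic signal Z_x(t), Eq. (15)
phi = np.angle(Z)                     # Hilbert phase phi_x^H(t), Eq. (17)
amp = np.abs(Z)                       # instantaneous amplitude A_x^H(t)
```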

Another widely used method to extract the phases is based on the wavelet transform (Tallon-Baudry et al., 1996; Lachaux et al., 1999). In this approach the phase is determined by the convolution of the respective signal with a mother wavelet such as the modified complex Morlet wavelet

\[\tag{18} \Psi (t) = (e^{i \omega_c t} - e^{- \omega_c^2 \sigma^2 /2}) e^{-t^2/2 \sigma^2} \ ,\]


where \(\omega_c\) is the center frequency of the wavelet and \(\sigma\) denotes its temporal width (decay time), which is proportional to the number of cycles \(n_c\) and related to the frequency range by the uncertainty principle.

The convolution of \(x (t)\) with \(\Psi (t)\) yields a complex time series of the wavelet coefficient for \(\omega_c\)

\[\tag{19} W_x (t) = (\Psi * x) (t) = \int \Psi (t')\, x (t-t')\, dt' = A_x^W (t) e^{i \phi_x^W (t)} \ ,\]


from which the phases can be defined as

\[\tag{20} \phi_x^W (t) = \arctan \frac{\mathrm{Im} W_x (t)}{\mathrm{Re} W_x (t)}\ .\]


In the same way \(W_y (t)\) and \(\phi_y^W (t)\) are derived from \(y (t)\ .\)
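
A minimal sketch of the wavelet phase at a single center frequency is given below; the number of cycles and the truncation of the wavelet support are common but arbitrary choices:

```python
import numpy as np

def morlet_phase(x, fs, freq, n_cycles=6):
    """Wavelet phase phi_x^W(t) at one center frequency, Eqs. (18)-(20)."""
    sigma = n_cycles / (2 * np.pi * freq)            # decay in seconds
    tw = np.arange(-4 * sigma, 4 * sigma, 1 / fs)    # truncated wavelet support
    wc = 2 * np.pi * freq
    psi = (np.exp(1j * wc * tw) - np.exp(-(wc * sigma) ** 2 / 2)) \
          * np.exp(-tw ** 2 / (2 * sigma ** 2))      # modified Morlet wavelet, Eq. (18)
    W = np.convolve(x, psi, mode='same')             # wavelet coefficients, Eq. (19)
    return np.angle(W)                               # Eq. (20), quadrant-correct

fs = 250.0
t = np.arange(0, 4, 1 / fs)
x = np.sin(2 * np.pi * 10 * t)
phi_w = morlet_phase(x, fs, freq=10.0)  # closely tracks the Hilbert phase here
```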

In Quian Quiroga et al. (2002) it was shown that the phase extracted from the wavelet transform is closely related to the one obtained with the Hilbert transform. The wavelet phase already includes an implicit band-pass filtering (defined by the frequency content of the mother wavelet), whereas the Hilbert phase requires prefiltering if it is to be restricted to a specific frequency band. On the other hand, a broadband phase variable such as the unfiltered Hilbert phase generally reflects the dominant frequency in the spectral composition of a signal. Depending on the problem under investigation, broadband phase definitions may be preferable to narrowband definitions (e.g., if the dominant frequency changes with time), or vice versa (Frei et al., 2010).

Figure 2: Index of phase synchronization based on circular variance. Distribution of phase differences and mean phase difference (in red) for A. uncorrelated, B. weakly correlated, and C. strongly correlated time series.

The most prominent index of phase synchronization is the mean phase coherence, which is based on the circular variance of an angular distribution (Figure 2). It is obtained by projecting the phase differences, wrapped to the interval \([0, 2\pi)\), onto the unit circle in the complex plane and taking the absolute value of their circular mean (Mardia, 1972; Kuramoto, 1984; Mormann et al., 2000):

\[\tag{21} R = \left | \frac{1}{N} \sum_{j=1}^N e^{i[\phi_x (t_j)- \phi_y (t_j)]} \right | \ .\]

Two other indices, the index based on conditional probability and the index based on Shannon entropy, have been proposed in Tass et al. (1998). All of these indices are confined to the interval \([0, 1]\ .\) Values close to zero are attained for uncorrelated phase differences (no phase synchronization) while the maximum value corresponds to Dirac-like distributions (perfect phase synchronization). For a renormalization accounting for the non-uniformity of the individual phase distributions, refer to Kreuz et al. (2007a).
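
A minimal sketch of Eq. (21) from Hilbert phases, assuming the signals have already been restricted to the band of interest:

```python
import numpy as np
from scipy.signal import hilbert

def mean_phase_coherence(x, y):
    """Mean phase coherence R, Eq. (21)."""
    dphi = np.angle(hilbert(x)) - np.angle(hilbert(y))
    return np.abs(np.mean(np.exp(1j * dphi)))

fs = 250.0
t = np.arange(0, 8, 1 / fs)
x = np.sin(2 * np.pi * 10 * t)
print(mean_phase_coherence(x, np.sin(2 * np.pi * 10 * t + 1.0)))  # phase-locked: ~1
print(mean_phase_coherence(x, np.sin(2 * np.pi * 11.3 * t)))      # drifting: ~0
```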

Comment on directional measures

While most approaches aim at quantifying the overall level of synchronization in a symmetric way, some measures are specifically designed to detect directional couplings (so-called driver-response relationships) between time series. These include some of the measures introduced above, e.g., transfer entropy, Granger causality, and the nonlinear interdependence. The concept of phase synchronization has also been extended to a measure of directionality (Rosenblum and Pikovsky, 2001; Kralemann et al., 2007; Wagner et al., 2010).

For any asymmetric measure \(A\) with its two directional variants \(A(X|Y)\) and \(A(Y|X)\ ,\) there exists a symmetrized variant

\[\tag{22} A_S = \frac{A(X|Y)+A(Y|X)}{2}\]


which measures the overall degree of synchronization, while the normalized difference between the two directional quantities

\[\tag{23} A_A = \frac{A(X|Y)-A(Y|X)}{A(X|Y)+A(Y|X)}\]


can indicate driver-response relationships.
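
In code, Eqs. (22) and (23) are one-liners that can be wrapped around any pair of directional values, e.g., the two Granger indices or the two nonlinear interdependences from the sketches above:

```python
def symmetrized(a_xy, a_yx):
    """Overall synchronization level A_S, Eq. (22)."""
    return (a_xy + a_yx) / 2

def asymmetry(a_xy, a_yx):
    """Directionality index A_A, Eq. (23); assumes non-negative inputs."""
    return (a_xy - a_yx) / (a_xy + a_yx)
```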

An important caveat regarding all directional analyses is that an \(A_A\)-value significantly different from zero does not necessarily imply direct causality. For example, the interaction could have been mediated by another system or caused by a common driver. To distinguish these cases, various approaches based on partialization analysis (e.g., Blalock, 1961) have been proposed. Furthermore, it is also possible that putative driver-response relationships merely reflect asymmetric properties of the two individual time series (Quian Quiroga et al., 2000).

References

  • Afraimovich VS, Verichev NN, Rabinovich MI (1986). Stochastic synchronization of oscillation in dissipative systems. Radiophys. Quantum Electron 29:795–803.
  • Andrzejak RG, Kraskov A, Stögbauer H, Mormann F, Kreuz T (2003). Bivariate surrogate techniques: Necessity, strengths, and caveats. Phys Rev E 68:066202.
  • Arnhold J, Lehnertz K, Grassberger P, Elger CE (1999). A robust method for detecting interdependences: application to intracranially recorded EEG. Physica D 134:419–430.
  • Bartlett MS (1946). On the theoretical specification and sampling properties of autocorrelated time series. J Roy Stat Soc B 8:27–41.
  • Blalock H (1961). Causal Inferences in Nonexperimental Research (University of North Carolina, Chapel Hill, NC).
  • Cover TM, Thomas JA (2006). Elements of information theory (2nd edn). New York: Wiley.
  • Frei MG, Zaveri HP, Arthurs S, Bergey GK, Jouny CC, Lehnertz K, Gotman J, Osorio I, Netoff TI, Freeman WJ, Jefferys J, Worrell G, Le Van Quyen M, Schiff S, Mormann F (2010). Controversies in epilepsy: Debates held during the Fourth International Workshop on Seizure Prediction. Epilepsy & Behavior 19:4–16.
  • Fujisaka H, Yamada T (1983). Stability Theory of Synchronized Motion in Coupled-Oscillator Systems. Progr Theoret Phys 69:32–47.
  • Gabor D (1946). Theory of communication. Proc IEE London 93:429–457.
  • Granger CWJ (1969). Investigating causal relations by econometric models and cross-spectral methods. Econometrica 37:424–438.
  • Gray R (1990). Entropy and Information Theory. Springer Verlag, New York.
  • Grinsted A, Moore JC and Jevrejeva S (2004). Application of the cross wavelet transform and wavelet coherence to geophysical time series. Nonlin Proc. Geophys 11: 561–566.
  • Hlaváčková-Schindler K, Palus M, Vejmelka M and Bhattacharya J (2007). Causality detection based on information-theoretic approaches in time series analysis. Phys. Rep. 441:1–46.
  • Kralemann B, Cimponeriu L, Rosenblum M, Pikovsky A, Mrowka R (2007). Uncovering interaction of coupled oscillators from data. Phys Rev E 76:055201.
  • Kraskov A, Stögbauer H, Grassberger P (2004). Estimating mutual information. Phys Rev E 69:066138.
  • Kreuz T, Kraskov A, Andrzejak RG, Mormann F, Lehnertz K, Grassberger P (2007a). Measuring synchronization in coupled model systems: a comparison of different approaches. Phys D 225:29–42.
  • Kuramoto Y (1984). Chemical Oscillations, Waves, and Turbulence. Berlin:Springer.
  • Lachaux JP, Rodriguez E, Martinerie J, Varela FJ (1999). Measuring phase synchrony in brain signals. Hum Brain Mapp 8:194–208.
  • Mardia KV (1972). Probability and Mathematical Statistics: Statistics of Directional Data, Academic Press, London.
  • Marinazzo D, Pellicoro M, Stramaglia S (2008). Kernel Method for Nonlinear Granger Causality. Phys Rev Lett 100:144103.
  • Mormann F, Lehnertz K, David P, Elger CE (2000). Mean phase coherence as a measure for phase synchronization and its application to the EEG of epilepsy patients. Physica D 144:358-369.
  • Niedermeyer E, Lopes da Silva FH (2005). Electroencephalography: Basic Principles, Clinical Applications and Related Fields (5th edn). Philadelphia: Lippincott Williams & Wilkins.
  • Paninski L (2003). Estimation of Entropy and Mutual Information. Neural Comput 15:1191–1253.
  • Panter P (1965). Modulation, Noise, and Spectral Analysis. New York: McGraw-Hill.
  • Quian Quiroga R, Arnhold J, Grassberger P (2000). Learning driver-response relationships from synchronization patterns. Phys Rev E 61: 5142–5148.
  • Quian Quiroga R, Kraskov A, Kreuz T, Grassberger P (2002). Performance of different synchronization measures in real data: A case study on electroencephalographic signals. Phys Rev E 65:041903.
  • Rosenblum MG, Pikovsky AS, Kurths J (1996). Phase Synchronization of Chaotic Oscillators. Phys Rev Lett 76:1804-1807.
  • Rosenblum MG, Pikovsky AS (2001). Detecting direction of coupling in interacting oscillators. Phys Rev E 64:045202.
  • Rulkov NF, Sushchik MM, Tsimring LS, Abarbanel HDI (1995). Generalized synchronization of chaos in directionally coupled chaotic systems. Phys Rev E 51:980–994.
  • Schreiber T (2000). Measuring Information Transfer. Phys Rev Lett 85:461–464.
  • Tallon-Baudry C, Bertrand O, Delpuech C, Pernier J (1996). Stimulus specificity of phase-locked and non-phase-locked 40 Hz visual responses in human. J Neurosci 16:4240–4249.
  • Takens F (1981). Detecting strange attractors in turbulence. In: Proc. Warwick Symp., Rand D and Young LS, eds., Lecture Notes in Math. 898. Berlin:Springer.
  • Tass P, Rosenblum MG, Weule J, Kurths J, Pikovsky A, Volkmann J, Schnitzler A, Freund HJ (1998). Detection of n:m Phase Locking from Noisy Data: Application to Magnetoencephalography. Phys Rev Lett 81:3291–3294.
  • Theiler J (1986). Spurious dimension from correlation algorithms applied to limited time-series data. Phys. Rev. A 34:2427–2432.
  • Torrence C and Compo GP (1998). A practical guide to wavelet analysis. Bull Am Meteorol Soc 79:61–78.
  • Torrence C and Webster P (1999). Interdecadal changes in the ENSO-Monsoon system. J Clim 12:2679–2690.
  • Wagner T, Fell J, Lehnertz K (2010). The detection of transient directional couplings based on phase synchronization. New J Phys 12:053031.
  • Welch PD (1967). The Use of Fast Fourier Transform for the Estimation of Power Spectra: A Method Based on Time Averaging Over Short, Modified Periodograms. IEEE Transactions on Audio and Electroacoustics 15:70–73.
  • Zhan Y, Halliday D, Jiang P, Liu X and Feng J (2006). Detecting time-dependent coherence between non-stationary electrophysiological signals: a combined statistical and time-frequency approach. J Neurosci Methods 156:322–332.


Further reading

  • Ansari-Asl K, Senhadji L, Bellanger JJ, Wendling F (2006). Quantitative evaluation of linear and nonlinear methods characterizing interdependencies between brain signals. Phys Rev E 74:031916.
  • Lungarella M, Ishiguro K, Kuniyoshi Y, Otsu N (2007). Methods for Quantifying the Causal Structure of Bivariate Time Series. Int J Bif Chaos 17:903–921.
  • Pereda E, Quian Quiroga R, Bhattacharya J (2005). Nonlinear multivariate analysis of neurophysiological signals. Progress in Neurobiology 77:1–37.
  • Pikovsky A, Rosenblum M, Kurths J (2001). Synchronization. A Universal Concept in Nonlinear Sciences. Cambridge University Press, Cambridge, England.
  • Wendling F, Ansari-Asl K, Bartolomei F, Senhadji L (2009). From EEG signals to brain connectivity: a model-based evaluation of interdependence measures. J Neurosci Methods 183:9-18.

Internal references

  • Arkady Pikovsky and Michael Rosenblum (2007) Synchronization. Scholarpedia, 2(12):1459.
  • Florian Mormann (2008) Seizure prediction. Scholarpedia, 3(10):5770.
  • Nebojsa Bozanic, Mario Mulansky, Thomas Kreuz (2014) SPIKY. Scholarpedia, 9(12):32344.

