Grassberger-Procaccia algorithm

Peter Grassberger (2007), Scholarpedia, 2(5):3043. doi:10.4249/scholarpedia.3043

Curator: Peter Grassberger

Basic Definitions

The Grassberger-Procaccia algorithm is used for estimating the correlation dimension of some fractal measure \(\mu\) from a given set of points randomly distributed according to \(\mu\ .\) Let the \(N\) points be denoted by \({\mathbf x}_1,\ldots {\mathbf x}_N\ ,\) in some metric space with distances \(|{\mathbf x}_i-{\mathbf x}_j|\) between any pair of points. For any positive number \(r\ ,\) the correlation sum \({\hat C}(r)\) is then defined as the fraction of pairs whose distance is smaller than \(r\ ,\)

\[\tag{1} {\hat C}(r) = {2\over N(N-1)}\sum_{i<j} \theta(r-|{\mathbf x}_i-{\mathbf x}_j|), \]


where \(\theta(x)\) is the Heaviside step function. It is an unbiased estimator of the correlation integral

\[\tag{2} C(r) = \int d\mu({\mathbf x}) \int d\mu({\mathbf y}) \theta(r-|{\mathbf x}-{\mathbf y}|). \]


Both \({\hat C}(r)\) and \(C(r)\) decrease monotonically to zero as \(r\to 0\ .\) If \(C(r)\) decreases like a power law, \(C(r) \sim r^D\ ,\) then \(D\) is called the correlation dimension of \(\mu\ .\) Formally, the dimension is defined by \(D=\lim_{r\to 0} {{\log C(r)}\over {\log r}}\ .\) The term "GP algorithm" is used generically for any algorithm which attempts to estimate \(D\) (and more generally \(C(r)\)) from the small-\(r\) behavior of \({\hat C}(r)\ ,\) in particular when the input data are in the form of a time series. Because this involves an extrapolation to a limit where the statistics are severely undersampled for any finite \(N\ ,\) it is an inherently ill-posed problem. The simplest and most naive way to estimate \(D\) is to plot \({\hat C}(r)\) against \(r\) on a log-log plot and to fit a straight line to the small-\(r\) tail of the curve; \(D\) is then the slope of this line (see Fig. 1). More sophisticated methods involve e.g. fitting local slopes \(D_{\rm eff}(r)\) and extrapolating them to \(r\to 0\ ,\) or the methods proposed in (Takens 1985, Theiler 1988).

Figure 1: Log-log plot of the correlation sum versus \(r\ ;\) the slope of the small-\(r\) tail estimates \(D\ .\)
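
The naive recipe can be made concrete with a short illustrative sketch in Python/NumPy: compute \({\hat C}(r)\) of Eq. (1) by brute force, then fit the small-\(r\) tail on a log-log plot. The Hénon map serves only as a convenient test signal (its attractor has \(D \approx 1.2\)), and the fitting cutoff is an arbitrary choice:

    import numpy as np

    def correlation_sum(points, rs):
        """Brute-force estimate of Eq. (1): fraction of pairs closer than r."""
        N = len(points)
        diffs = points[:, None, :] - points[None, :, :]
        d = np.sqrt((diffs ** 2).sum(axis=-1))       # Euclidean distances
        pair_d = d[np.triu_indices(N, k=1)]          # each pair i < j once
        return np.array([(pair_d < r).mean() for r in rs])

    def henon(n, a=1.4, b=0.3):
        """Henon map, used here only as a test signal."""
        x, y = 0.1, 0.1
        out = np.empty((n + 100, 2))
        for i in range(n + 100):
            x, y = 1.0 - a * x * x + b * y, b * x
            out[i] = x, y
        return out[100:]                             # discard the transient

    pts = henon(2000)
    rs = np.logspace(-3, 0, 30)
    C = correlation_sum(pts, rs)
    mask = (C > 0) & (rs < 0.1)                      # ad hoc small-r fitting range
    D, _ = np.polyfit(np.log(rs[mask]), np.log(C[mask]), 1)
    print("estimated correlation dimension:", D)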

Main Application: Chaotic Dynamical Systems

Although the GP algorithm can be used for any measure (the basic idea had been used before to estimate dimensions of fractal clusters created by diffusion-limited aggregation (Witten and Sander 1981)), it is mostly used to measure the fractal dimensions of a strange attractor from a univariate (i.e. scalar) time series denoted \(x_1,\ldots x_N\ .\) Here, \(x_i\) represents a measurement of the quantity \(x\) at time \(t_i = t_0 + i\Delta t\ .\) We assume stationarity, i.e. the statistics of the set \(\{x_i\}\) is invariant under time translations. Unless the measurements are i.i.d., there will be correlations between successive measurements, but these will be weak and short-ranged if the data are produced by a chaotic system, i.e. if they are sampled from a trajectory on a strange attractor (or strange repeller). In that case, and if \(N\) is sufficiently large, one can assume that the data are effectively independent and randomly sampled from the invariant natural measure on the attractor, and one can directly carry over Eqs. (1) and (2). Furthermore, using Takens' time delay embedding theorem (Takens 1981, Packard et al. 1980) and its improvements (Sauer et al. 1991), one can replace a series of \(N+m-1\) univariate measurements by a time series of \(N\) delay vectors

\[\tag{3} {\mathbf x}_i = (x_{i-m+1},x_{i-m+2},\ldots x_i) \in R^m \]


where \(m\) is the embedding dimension. Estimating the dimension of attractors by using Eq. (1) with delay vectors, and using Euclidean distances in delay vector space, was first proposed in (Grassberger and Procaccia 1983a). An equivalent algorithm with the maximum norm instead of the Euclidean norm had been proposed independently in (Takens 1982).
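
A minimal sketch of Eq. (3) and of the correlation sum in delay-vector space, here with the maximum norm; the function names are illustrative only:

    import numpy as np

    def delay_vectors(x, m):
        """Eq. (3): turn N+m-1 scalar measurements into N delay vectors
        (rows are (x_i, x_{i+1}, ..., x_{i+m-1}), an equivalent indexing)."""
        x = np.asarray(x, dtype=float)
        N = len(x) - m + 1
        return np.column_stack([x[k:k + N] for k in range(m)])

    def correlation_sum_delay(x, m, rs):
        """hat C(r, m) for delay vectors, using the maximum (Chebyshev) norm."""
        v = delay_vectors(x, m)
        d = np.abs(v[:, None, :] - v[None, :, :]).max(axis=-1)
        pair_d = d[np.triu_indices(len(v), k=1)]
        return np.array([(pair_d < r).mean() for r in rs])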

One of the main applications of the GP algorithm is to distinguish (in principle) between stochastic and deterministically chaotic time sequences. Let \(C(r,m)\) denote the correlation integral obtained with embedding dimension \(m\ .\) For a stochastic signal, \(C(r,m) \sim r^m\) for all \(m\ .\) In contrast, \(C(r,m) \sim r^D\) for all \(m\) larger than the attractor dimension, if the signal is generated by a deterministic system. Notice that in both cases the Fourier spectrum is continuous, and thus cannot be used to make this distinction. In practice, the distinction based on \(C(r,m)\) is often not possible either, due to experimental noise, finiteness of \(N\ ,\) non-stationarity and intermittency effects, and the uncertainties involved in the extrapolation \(r\to 0\ .\) Many of these issues are discussed in the review by Theiler (1990). It seems fair to say that a large fraction of the relevant literature is questionable, because authors, encouraged by the simplicity of implementing the GP algorithm, have underestimated these problems.

Relations to Other Dynamical Invariants and Multifractality

For a chaotic system with attractor dimension \(D<m\ ,\) the typical behavior of the correlation integral \(C(r,m)\) for \(r\to 0,\; m\to \infty\) is \(C(r,m) \sim r^D \exp{(-mK_2\Delta t)}\ ,\) where \(K_2\) is the order-2 Renyi entropy, a lower bound on (and common proxy for) the Kolmogorov-Sinai entropy. Thus the GP algorithm can also be used to estimate dynamical entropies (Grassberger and Procaccia 1983b) (see Fig. 1).
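
Given \({\hat C}(r,m)\) for two consecutive embedding dimensions (e.g. from the sketch above), \(K_2\) can be read off from their ratio inside the scaling region. A minimal sketch, assuming C_m and C_m1 hold \({\hat C}(r,m)\) and \({\hat C}(r,m+1)\) evaluated at the same \(r\) values:

    import numpy as np

    def k2_estimate(C_m, C_m1, dt=1.0):
        """C(r,m) ~ r^D exp(-m K2 dt) implies
        K2 = ln[C(r,m) / C(r,m+1)] / dt at fixed r in the scaling region;
        here the estimate is averaged over the supplied r values."""
        with np.errstate(divide="ignore", invalid="ignore"):
            k2 = np.log(np.asarray(C_m) / np.asarray(C_m1)) / dt
        return k2[np.isfinite(k2)].mean()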

The basic idea of the GP algorithm, namely to estimate a dimension from the statistics of near neighbors, can also be implemented in other ways. For instance, one can define pointwise dimensions \(D(i)\) by counting for each \(i\) the fraction \(n_i(r)/(N-1)\) of points which are \(r\)-close neighbors of \({\mathbf x}_i\) and fitting it to a power law. Alternatively, one obtains the information dimension \(D_1\) by fitting a power law to a geometric average, \(C_1(r,m) = \exp[N^{-1} \sum_i \ln (n_i(r)/(N-1))]\ .\) Or, more generally, one can define non-linear averages by \(C_q(r,m) = [N^{-1} \sum_i [n_i(r)/(N-1)]^{q-1}]^{1/(q-1)}\ .\) Notice that \(C_2(r,m) \equiv C(r,m)\ .\) If \(C_q(r,m) \sim r^{D_q} \exp{(-m{K_q}\Delta t)}\ ,\) then \(D_q\) and \(K_q\) are called order-\(q\) Renyi dimensions and order-\(q\) dynamical entropies. Thus \(D\) is also called \(D_2\ ,\) the order-2 Renyi dimension.
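
A sketch of these generalized averages from the neighbor counts \(n_i(r)\ ;\) for \(q=2\) it reproduces Eq. (1), and \(q=1\) is treated separately as the geometric average:

    import numpy as np

    def generalized_correlation_sum(points, r, q):
        """C_q(r) from neighbor counts n_i(r) (Euclidean distances)."""
        N = len(points)
        d = np.sqrt(((points[:, None, :] - points[None, :, :]) ** 2).sum(-1))
        n = (d < r).sum(axis=1) - 1      # exclude the point itself
        p = n / (N - 1)                  # n_i(r) / (N - 1)
        if q == 1:                       # information dimension: geometric mean
            p = p[p > 0]
            return np.exp(np.log(p).mean())
        return np.mean(p ** (q - 1)) ** (1.0 / (q - 1))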

Measures for which \(D_q\) is independent of \(q\) are called monofractal, those with non-trivial \(q\)-dependence are called multifractal (Hentschel and Procaccia 1983, Grassberger 1983, 1985, Halsey et al. 1986). All \(D_q\) and \(K_q\) are (metric) invariants, i.e. their values do not change when the metric \(|x-y|\) is replaced by some other metric, or when \(x_i \to f(x_i)\) with smooth and invertible \(f(x)\ .\) While the most interesting invariants for certain theoretical analyses are those with \(q=1\ ,\) invariants with \(q=2\) are easiest to measure: for finite \(N\ ,\) \({\hat C}_2(r)\) resolves values down to a single pair, i.e. \(O(N^{-2})\ ,\) while the neighbor counts entering the other \(C_q(r)\) resolve values only down to \(O(N^{-1})\ .\) On a logarithmic scale this gives \(C_2(r)\) twice the dynamic range, so the small-\(r\) limit is probed more effectively.

Computational Complexity Aspects

Typically, one wants to obtain \({\hat C}(r,m)\) for \(N_r\) different values of \(r\) (equally spaced on a logarithmic scale) and for \(M\) different values of \(m\ .\) Naive evaluation of Eq. (1) then requires \(O(N^2N_rM^2)\) operations. With e.g. \(N=10^4, N_r = 10^2, M=10\) (rather modest requirements), this is already a non-trivial task on a fast PC. The most obvious improvement is obtained by binning \(r\) logarithmically and storing \[ {\hat C}(r_k,m)- {\hat C}(r_{k-1},m) = {2\over N(N-1)}\#\{(i,j):\; i<j,\; r_{k-1} < |{\mathbf x}_i-{\mathbf x}_j| \le r_k\} \] in separate entries of a histogram. This reduces the complexity to \(O(N^2M^2)\ .\) Next, one can treat all values of \(m\) in a single run, which reduces the \(M^2\) dependence to \(M\ .\) This can be further reduced to a weaker than linear increase with \(M\) (at least for intermediate values of \(M\)), if one replaces the double sum over \(i\) and \(j\) in Eq. (1) by a sum over \(i\) and \(i-j\ .\) For a fast implementation using also some other shortcuts, see (Widman et al. 1998).
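
The following sketch combines the first two tricks: logarithmically binned distance histograms, filled for all \(m = 1,\ldots,M\) in a single pass over pairs, with the maximum norm so that the distance for \(m+1\) is a running maximum. For simplicity it pairs only vectors that are valid at \(m=M\ ,\) and the pure-Python loops stand in for what would be compiled code in practice:

    import numpy as np

    def correlation_histograms(x, M, n_bins=100, r_min=1e-5, r_max=None):
        """Pair-distance histograms, log-binned in r, for all embedding
        dimensions m = 1..M in one pass (maximum norm)."""
        x = np.asarray(x, dtype=float)
        if r_max is None:
            r_max = x.max() - x.min()
        edges = np.logspace(np.log10(r_min), np.log10(r_max), n_bins + 1)
        hist = np.zeros((M, n_bins), dtype=np.int64)
        N = len(x) - M + 1                 # vectors usable for m = M
        for i in range(N):
            for j in range(i + 1, N):
                d = 0.0
                for m in range(M):         # running max over components
                    d = max(d, abs(x[i + M - 1 - m] - x[j + M - 1 - m]))
                    k = np.searchsorted(edges, d, side="right") - 1
                    if 0 <= k < n_bins:
                        hist[m, k] += 1
        # cumulative counts give hat C at the upper bin edges, for each m
        return edges, hist.cumsum(axis=1) / (N * (N - 1) / 2)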

For very large \(N\) one can reduce the CPU time further by noticing that it is mainly the small-\(r\) tail of \({\hat C}(r,m)\) that is of interest. By preprocessing the data (e.g. using grids and taking pairs of points only from the same or neighboring boxes) one can avoid counting pairs with large \(r\ ,\) obtaining substantial improvements (Schreiber 1995).
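
A sketch of such a box-assisted search (far more refined versions are described in Schreiber 1995): in the maximum norm, only pairs from the same or adjacent grid boxes of side r_max can be closer than r_max, so all other pairs are skipped without ever computing a distance.

    import numpy as np
    from collections import defaultdict
    from itertools import product

    def small_r_pair_distances(points, r_max):
        """Max-norm distances of all pairs closer than r_max; points is
        an (N, dim) array. Only same/adjacent boxes are searched."""
        boxes = defaultdict(list)
        for idx, p in enumerate(points):
            boxes[tuple((p // r_max).astype(int))].append(idx)
        dists = []
        for cell, members in boxes.items():
            for offset in product((-1, 0, 1), repeat=len(cell)):
                other = tuple(c + o for c, o in zip(cell, offset))
                for i in members:
                    for j in boxes.get(other, ()):
                        if j > i:          # count each pair exactly once
                            d = np.abs(points[i] - points[j]).max()
                            if d < r_max:
                                dists.append(d)
        return np.array(dists)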

"Optimal" Choices for Delay and Embedding Dimension

There exists a large literature which attempts to determine optimal choices for the delay \(\Delta t\) and for \(m\ .\) The delay is often chosen such that some measure of dependence (e.g. the mutual information) between successive coordinates \(x_i\) and \(x_{i+1}\) of delay vectors has a local minimum. More precisely, what one actually wants to avoid is that all \(m\) components together are too dependent, and these two requirements are in general mutually exclusive (Grassberger et al. 1991). Also, there are in general no optimal values of \(m\) and \(\Delta t\) separately, but only for the product \((m-1)\Delta t\ ,\) the time window covered by one delay vector. The reason is simply that adding more measured values cannot be detrimental (at least if the data are not too noisy, if the maximum norm is used, and if one has enough computing power). The only general advice one can give for \(\Delta t\) and for \(m\) is to avoid values for which \(D\) has a local minimum, because such a choice cannot resolve all effective degrees of freedom, as they would be seen with other, nearby choices (Grassberger et al. 1991).
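
For completeness, a sketch of the popular (but, as argued above, not fool-proof) mutual-information recipe; the histogram estimator and the bin count are arbitrary choices:

    import numpy as np

    def mutual_information(x, tau, bins=32):
        """Histogram estimate (in nats) of the mutual information
        between x(t) and x(t + tau)."""
        pxy, _, _ = np.histogram2d(x[:-tau], x[tau:], bins=bins)
        pxy /= pxy.sum()
        px, py = pxy.sum(axis=1), pxy.sum(axis=0)
        nz = pxy > 0
        return (pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz])).sum()

    def first_minimum_delay(x, max_tau=100):
        """First local minimum of the mutual information over tau."""
        mi = [mutual_information(x, t) for t in range(1, max_tau + 1)]
        for t in range(1, len(mi) - 1):
            if mi[t] < mi[t - 1] and mi[t] < mi[t + 1]:
                return t + 1               # mi[t] corresponds to tau = t + 1
        return None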

Non-Stationary Signals and Theiler Correction

When applying the GP method to time sequences, one should remember that its justification hinges on the assumption that all points \({\mathbf x}_i\) are independent apart from being distributed according to the same invariant measure. In particular, there should be no significant time correlations.

This is manifestly and grossly violated if the system is not stationary. In that case a main reason for two points \({\mathbf x}_i\) and \({\mathbf x}_j\) to be close neighbors in space might be that they are also close in time, as is most clearly demonstrated by (ordinary or fractal) diffusion (Osborne and Provenzale 1989), and by data with a strong linear trend. Neglecting this has been one of the most common reasons for erroneous claims of small attractor dimensions. Fortunately, there is an easy way to test against this danger: plot all pairs \((i,j)\) with \(|{\mathbf x}_i-{\mathbf x}_j|<r\) against \(|i-j|\ ,\) and check that they do not cluster at small \(|i-j|\) (more precisely, the density of these points should be \(\sim N-|i-j|\)). More common tests for stationarity are less useful, as they are sensitive to the bulk of the data and not only to the tiny fraction of small-distance pairs.
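
A sketch of this check: collect the time lags \(|i-j|\) of all \(r\)-close pairs and compare their histogram with the \(\sim N-|i-j|\) profile expected for temporally uncorrelated data (the threshold r is a placeholder value):

    import numpy as np

    def close_pair_time_lags(points, r):
        """Time lags |i - j| of all pairs with max-norm distance < r.
        Clustering at small lags signals temporal correlations."""
        N = len(points)
        d = np.abs(points[:, None, :] - points[None, :, :]).max(axis=-1)
        i, j = np.triu_indices(N, k=1)
        return (j - i)[d[i, j] < r]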

Even for stationary systems, pairs with very small \(|i-j|\) will not be independent and should thus be excluded from the analysis. As suggested by Theiler (Theiler 1990), this is done by defining a generous upper limit \(\tau_c\) to the correlation time, and replacing Eq.(1) by

\[\tag{4} {\hat C}(r) = {2\over (N-\tau_c)(N-\tau_c-1)}\sum_{i+\tau_c<j} \theta(r-|{\mathbf x}_i-{\mathbf x}_j|). \]
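
A sketch of Eq. (4); relative to Eq. (1), the only changes are the excluded band \(|i-j| \le \tau_c\) and the adjusted normalization:

    import numpy as np

    def correlation_sum_theiler(points, rs, tau_c):
        """Eq. (4): correlation sum over pairs with j - i > tau_c only."""
        N = len(points)
        d = np.abs(points[:, None, :] - points[None, :, :]).max(axis=-1)
        i, j = np.triu_indices(N, k=tau_c + 1)   # enforces j - i > tau_c
        n_pairs = (N - tau_c) * (N - tau_c - 1) / 2.0
        return np.array([(d[i, j] < r).sum() / n_pairs for r in rs])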


Intermittent, Noisy, and Stochastic Time Sequences

Experimental data are usually noisy and often intermittent. Strong intermittency poses a practical problem, in that it implies a large time scale over which the signal does not look stationary. It also often leads to very inhomogeneous invariant measures, so that any scaling law is likely to show very large corrections. Finally, it usually implies a strong dependence on the order \(q\ ,\) so that \(D\) is a bad proxy for the more interesting information dimension \(D_1\ .\)

Low-amplitude, high-frequency noise (the most common case) leads to deviations from scaling behavior at small \(r\ .\) In the ideal case, the noise fills the available phase space, and thus \({\hat C}(r,m) \sim r^m\) below the noise level, with \({\hat C}(r,m) \sim r^D\) above. In this case the estimation of \(D\) is more difficult but still possible.
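
Local slopes make this crossover visible; a minimal sketch, taking finite differences of \(\log {\hat C}\) versus \(\log r\ :\)

    import numpy as np

    def local_slopes(rs, C):
        """Effective dimension D_eff(r) = d log C / d log r. With additive
        noise one expects a plateau near m at small r (noise-dominated)
        and a plateau near D at larger r (attractor-dominated)."""
        log_r, log_C = np.log(rs), np.log(C)
        return (log_C[1:] - log_C[:-1]) / (log_r[1:] - log_r[:-1])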

The worst case is when a separation into noise and deterministic signal is no longer possible. In this case, looking for scaling behavior is no longer adequate. But studying \({\hat C}(r,m)\) can still be useful for rejecting null models, such as the AR and ARMA models popular e.g. in economics (Brock et al. 1996). Another application of \({\hat C}(r,m)\) is to EEG analysis. There, even if estimates of "dimensions" are usually misleading, the shape of \({\hat C}(r,m)\) can depend systematically on mental states which might not be easily distinguished otherwise. For instance, values of \({\hat C}(r,m)\) at small \(r\) are increased (i.e. effective dimensions are reduced) during sleep, under the influence of narcotic drugs, and during epileptic seizures. An interesting suggestion which has stimulated much controversy is that there is also a "preictal" phase preceding epileptic seizures, during which \(D_{\rm eff}\) is reduced and which could be used to predict seizures (Elger and Lehnertz 2004).

For further reading, see (Kantz and Schreiber 2003). For public domain software, see e.g. (Hegger et al. 2007).


References

  • W.A. Brock, W.D. Dechert, J.A. Scheinkman, and B. LeBaron, Econometric Reviews 15, 197 (1996).
  • C.E. Elger and K. Lehnertz, "Prediction of seizure occurrence by chaos analysis: Technique and therapeutic implications", in: F. Rosenow et al., eds., Handbook of Clinical Neurophysiology Vol. 3, pp. 491-500 (2004).
  • P. Grassberger, T. Schreiber, and C. Schaffrath, Int. J. Bifurcation and Chaos 1, 521 (1991).
  • P. Grassberger and I. Procaccia, Physica D 9, 189 (1983); Phys. Rev. Lett. 50, 346 (1983).
  • P. Grassberger and I. Procaccia, Phys. Rev. A 28, 2591 (1983).
  • P. Grassberger, Phys. Lett. A 97, 227 (1983).
  • P. Grassberger, Phys. Lett. A 107, 101 (1985).
  • T.C. Halsey, M.H. Jensen, L.P. Kadanoff, I. Procaccia, and B.I. Shraiman, Phys. Rev. A 33, 1141 (1986).
  • R. Hegger, H. Kantz, and T. Schreiber, TISEAN software package; URL http://www.mpipks-dresden.mpg.de/~tisean (2007).
  • H.G.E. Hentschel and I. Procaccia, Physica D 8, 435 (1983).
  • H. Kantz and T. Schreiber, "Nonlinear time series analysis", 2nd edition (Cambridge University Press, Cambridge 2003).
  • A.R. Osborne and A. Provenzale, Physica D 35, 357 (1989).
  • N.H. Packard, J.P. Crutchfield, J.D. Farmer, and R.S. Shaw, Phys. Rev. Lett. 45, 712 (1980).
  • T. Sauer, J.A. Yorke, and M. Casdagli, J. Stat. Phys. 65, 579 (1991).
  • T. Schreiber, Int. J. Bifurcation and Chaos 5, 349 (1995).
  • F. Takens, in: Proc. Warwick Symp. 1980, D. Rand and L.S. Young, eds., Lecture Notes in Math. 898 (Springer, Berlin, 1981).
  • F. Takens, "Invariants related to dimension and entropy", Atas do 13º Colóquio Brasileiro de Matemática (1982).
  • F. Takens, in: B.L.J. Braaksma et al., eds., "Dynamical Systems and Bifurcations", Lecture Notes in Math. Vol. 1125 (Springer, Heidelberg, 1985).
  • J. Theiler, Phys. Lett. A 135, 195 (1988).
  • J. Theiler, J. Opt. Soc. Amer. A 7, 1055 (1990).
  • G. Widman et al., Physica D 121, 65 (1998).
  • T.A. Witten and L.M. Sander, Phys. Rev. Lett. 47, 1400 (1981).
