Surrogate data testing

Surrogate data testing[1] (or the method of surrogate data) is a statistical proof-by-contradiction technique, similar to permutation tests[2] and parametric bootstrapping, used to detect non-linearity in a time series.[3] The technique involves specifying a null hypothesis describing a linear process and then generating several surrogate data sets consistent with that hypothesis using Monte Carlo methods. A discriminating statistic is then calculated for the original time series and for each surrogate set. If the value of the statistic is significantly different for the original series than for the surrogate sets, the null hypothesis is rejected and non-linearity is assumed.[3]
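
The following is a minimal illustrative sketch (in Python with NumPy) of the testing procedure just described; it is not taken from the cited references. The choice of time-reversal asymmetry as the discriminating statistic, the rank-based p-value, and the function names are assumptions made for illustration, and the surrogate generator used here is the simple random shuffle (Algorithm 0 in the Methods section below).

    import numpy as np

    def time_reversal_asymmetry(x, lag=1):
        # A commonly used nonlinear discriminating statistic: mean of (x[t] - x[t-lag])^3.
        d = x[lag:] - x[:-lag]
        return np.mean(d ** 3)

    def shuffle_surrogate(x, rng):
        # Algorithm 0 (random shuffle): a random permutation of the original samples.
        return rng.permutation(x)

    def surrogate_test(x, make_surrogate, statistic, n_surrogates=99, seed=None):
        # Rank-based surrogate test: compare the statistic of the original series
        # with its distribution over the Monte Carlo surrogates.
        rng = np.random.default_rng(seed)
        s0 = statistic(x)
        s_surr = np.array([statistic(make_surrogate(x, rng)) for _ in range(n_surrogates)])
        # Two-sided rank p-value: fraction of surrogates at least as far from the
        # surrogate mean as the original statistic is.
        deviations = np.abs(s_surr - s_surr.mean())
        p_value = (np.sum(deviations >= abs(s0 - s_surr.mean())) + 1) / (n_surrogates + 1)
        return s0, s_surr, p_value

With 99 surrogates, the original statistic lying outside the range of all surrogate values corresponds to a significance level of about 1% in the one-sided version of this rank test.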

The particular surrogate data testing method to be used is directly related to the null hypothesis. Usually this is similar to the following: the data are a realization of a stationary linear system whose output has possibly been measured by a monotonically increasing, possibly nonlinear (but static) function.[1] Here linear means that each value depends linearly on its own past values, or on present and past values of some independent, identically distributed (i.i.d.) process, which is usually also assumed to be Gaussian. This is equivalent to saying that the process is of ARMA type. In the case of flows (continuous-time systems), linearity means that the system can be expressed by a linear differential equation. In this hypothesis, the static measurement function is one which depends only on the present value of its argument, not on past ones.
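
As an illustration of this null hypothesis (not part of the cited references), the following sketch generates a realization of a stationary linear AR(2) process driven by Gaussian i.i.d. noise and observed through a static, monotonically increasing measurement function; the AR coefficients and the choice of tanh as the measurement function are arbitrary assumptions.

    import numpy as np

    def null_hypothesis_realization(n=1000, seed=None):
        rng = np.random.default_rng(seed)
        e = rng.standard_normal(n)            # i.i.d. Gaussian innovations
        x = np.zeros(n)
        for t in range(2, n):                 # linear (AR(2)) dynamics
            x[t] = 0.7 * x[t - 1] - 0.2 * x[t - 2] + e[t]
        return np.tanh(x)                     # static, monotone measurement function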

Methods

Many algorithms to generate surrogate data have been proposed. They are usually classified in two groups:[4]

  • Typical realizations: data series are generated as outputs of a model well fitted to the original data.
  • Constrained realizations: data series are created directly from the original data, generally by some suitable transformation of it.

The latter surrogate data methods do not depend on a particular model, nor on any parameters; they are therefore non-parametric methods. These surrogate data methods are usually based on preserving the linear structure of the original series (for instance, by preserving the autocorrelation function, or equivalently the periodogram, an estimate of the sample spectrum).[5] Among constrained-realization methods, the most widely used (which could therefore be called the classical methods) are:

  1. Algorithm 0, or RS (for Random Shuffle):[1][6] New data are created simply by random permutations of the original series. This concept is also used in permutation tests. The permutations guarantee the same amplitude distribution as the original series, but destroy any temporal correlation that may have been present in the original data. This method is associated with the null hypothesis of the data being uncorrelated i.i.d. noise (possibly Gaussian and measured by a static nonlinear function).
  2. Algorithm 1, or RP (for Random Phases; also known as FT, for Fourier Transform):[1][7] In order to preserve the linear correlation (the periodogram) of the series, surrogate data are created by taking the inverse Fourier transform of the moduli of the Fourier transform of the original data combined with new (uniformly random) phases. For the surrogates to be real-valued, the Fourier phases must be antisymmetric with respect to the central value of the data.
  3. Algorithm 2, or AAFT (for Amplitude Adjusted Fourier Transform):[1][4] This method combines, approximately, the advantages of the two previous ones: it tries to preserve both the linear structure and the amplitude distribution. It consists of these steps:
    • Rescaling the data to a Gaussian distribution (Gaussianization).
    • Performing an RP transformation of the rescaled data.
    • Finally, applying the inverse of the first transformation (de-Gaussianization).
    The drawback of this method is precisely that the last step somewhat alters the linear structure.
  4. Iterative algorithm 2, or IAAFT (for Iterative Amplitude Adjusted Fourier Transform):[8] This algorithm is an iterative version of AAFT. The steps are repeated until the autocorrelation function is sufficiently similar to that of the original series, or until the amplitudes no longer change. An illustrative sketch of these Fourier-based algorithms is given after this list.
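
The following is a minimal sketch of Algorithms 1 and 2 (RP, AAFT and IAAFT), again in Python with NumPy and not taken from the cited references; the function names, the random-shuffle initialization of IAAFT and the iteration cap are illustrative assumptions. The real-valued FFT (rfft/irfft) is used so that the antisymmetric-phase (conjugate-symmetry) constraint needed for real surrogates is handled automatically.

    import numpy as np

    def rp_surrogate(x, rng):
        # Algorithm 1 (RP/FT): keep the Fourier amplitudes, draw new uniform phases.
        n = len(x)
        X = np.fft.rfft(x)
        phases = rng.uniform(0.0, 2.0 * np.pi, size=X.shape)
        Xs = np.abs(X) * np.exp(1j * phases)
        Xs[0] = X[0]                      # keep the zero-frequency (mean) term unchanged
        if n % 2 == 0:
            Xs[-1] = X[-1]                # keep the real-valued Nyquist term unchanged
        return np.fft.irfft(Xs, n=n)

    def aaft_surrogate(x, rng):
        # Algorithm 2 (AAFT): Gaussianize, apply RP, then de-Gaussianize.
        n = len(x)
        ranks = np.argsort(np.argsort(x))
        gaussian = np.sort(rng.standard_normal(n))[ranks]    # rank-matched Gaussian series
        y = rp_surrogate(gaussian, rng)                      # phase randomization
        return np.sort(x)[np.argsort(np.argsort(y))]         # restore original amplitudes

    def iaaft_surrogate(x, rng, max_iter=100):
        # IAAFT: alternately impose the original power spectrum and the original
        # amplitude distribution until the rank ordering no longer changes.
        n = len(x)
        sorted_x = np.sort(x)
        target_amplitudes = np.abs(np.fft.rfft(x))
        y = rng.permutation(x)                               # start from a random shuffle
        prev_ranks = None
        for _ in range(max_iter):
            Y = np.fft.rfft(y)
            # Spectrum step: impose the original amplitudes, keep the current phases.
            y = np.fft.irfft(target_amplitudes * np.exp(1j * np.angle(Y)), n=n)
            # Amplitude step: impose the original amplitude distribution by rank ordering.
            ranks = np.argsort(np.argsort(y))
            y = sorted_x[ranks]
            if prev_ranks is not None and np.array_equal(ranks, prev_ranks):
                break                                        # amplitudes unchanged: converged
            prev_ranks = ranks
        return y

Any of these generators can be passed as make_surrogate to the surrogate_test sketch given earlier in this article.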

Many other surrogate data methods have been proposed, some based on optimizations to achieve an autocorrelation close to the original one,[9][10][11] some based on the wavelet transform[12][13][14] and some capable of dealing with certain types of non-stationary data.[15][16][17]

The above-mentioned techniques are called linear surrogate methods, because they are based on a linear process and address a linear null hypothesis.[9] Broadly speaking, these methods are useful for data showing irregular fluctuations (short-term variability), and such data abound in the real world. However, data with obvious periodicity are also often observed, for example annual sunspot numbers or the electrocardiogram (ECG). Time series exhibiting strong periodicities are clearly not consistent with the linear null hypotheses, and several algorithms and null hypotheses have been proposed to tackle this case.[18][19][20]

References

  1. J. Theiler; S. Eubank; A. Longtin; B. Galdrikian; J. Doyne Farmer (1992). "Testing for nonlinearity in time series: the method of surrogate data" (PDF). Physica D. 58 (1–4): 77–94. Bibcode:1992PhyD...58...77T. doi:10.1016/0167-2789(92)90102-S.
  2. J.H. Moore (1999). "Bootstrapping, permutation testing and the method of surrogate data". Physics in Medicine & Biology. 44 (6): L11.
  3. Andreas Galka (2000). Topics in Nonlinear Time Series Analysis: with Implications for EEG Analysis. River Edge, N.J.: World Scientific. pp. 222–223. ISBN 9789810241483.
  4. J. Theiler; D. Prichard (1996). "Constrained-realization Monte-Carlo method for hypothesis testing". Physica D. 94 (4): 221–235. arXiv:comp-gas/9603001. Bibcode:1996PhyD...94..221T. doi:10.1016/0167-2789(96)00050-4. S2CID 12568769.
  5. A. Galka; T. Ozaki (2001). "Testing for nonlinearity in high-dimensional time series from continuous dynamics". Physica D. 158 (1–4): 32–44. Bibcode:2001PhyD..158...32G. CiteSeerX 10.1.1.379.7641. doi:10.1016/s0167-2789(01)00318-9.
  6. J.A. Scheinkman; B. LeBaron (1989). "Nonlinear Dynamics and Stock Returns". The Journal of Business. 62 (3): 311. doi:10.1086/296465.
  7. A.R. Osborne; A.D. Kirwan Jr.; A. Provenzale; L. Bergamasco (1986). "A search for chaotic behavior in large and mesoscale motions in the Pacific Ocean". Physica D. 23 (1–3): 75–83. Bibcode:1986PhyD...23...75O. doi:10.1016/0167-2789(86)90113-2.
  8. T. Schreiber; A. Schmitz (1996). "Improved Surrogate Data for Nonlinearity Tests". Phys. Rev. Lett. 77 (4): 635–638. arXiv:chao-dyn/9909041. Bibcode:1996PhRvL..77..635S. doi:10.1103/PhysRevLett.77.635. PMID 10062864. S2CID 13193081.
  9. T. Schreiber; A. Schmitz (2000). "Surrogate time series". Physica D. 142 (3–4): 346–382. arXiv:chao-dyn/9909037. Bibcode:2000PhyD..142..346S. doi:10.1016/S0167-2789(00)00043-9. S2CID 13889229.
  10. T. Schreiber (1998). "Constrained Randomization of Time Series Data". Phys. Rev. Lett. 80 (4): 2105–2108. arXiv:chao-dyn/9909042. Bibcode:1998PhRvL..80.2105S. doi:10.1103/PhysRevLett.80.2105. S2CID 42976448.
  11. R. Engbert (2002). "Testing for nonlinearity: the role of surrogate data". Chaos, Solitons & Fractals. 13 (1): 79–84. Bibcode:2002CSF....13...79E. doi:10.1016/S0960-0779(00)00236-8.
  12. M. Breakspear; M. Brammer; P.A. Robinson (2003). "Construction of multivariate surrogate sets from nonlinear data using the wavelet transform". Physica D. 182 (1): 1–22. Bibcode:2003PhyD..182....1B. doi:10.1016/S0167-2789(03)00136-2.
  13. C.J. Keylock (2006). "Constrained surrogate time series with preservation of the mean and variance structure". Phys. Rev. E. 73 (3): 036707. Bibcode:2006PhRvE..73c6707K. doi:10.1103/PhysRevE.73.036707. PMID 16605698.
  14. C.J. Keylock (2007). "A wavelet-based method for surrogate data generation". Physica D. 225 (2): 219–228. Bibcode:2007PhyD..225..219K. doi:10.1016/j.physd.2006.10.012.
  15. T. Nakamura; M. Small (2005). "Small-shuffle surrogate data: Testing for dynamics in fluctuating data with trends". Phys. Rev. E. 72 (5): 056216. Bibcode:2005PhRvE..72e6216N. doi:10.1103/PhysRevE.72.056216. hdl:10397/4826. PMID 16383736.
  16. T. Nakamura; M. Small; Y. Hirata (2006). "Testing for nonlinearity in irregular fluctuations with long-term trends". Phys. Rev. E. 74 (2): 026205. Bibcode:2006PhRvE..74b6205N. doi:10.1103/PhysRevE.74.026205. hdl:10397/7633. PMID 17025523.
  17. J.H. Lucio; R. Valdés; L.R. Rodríguez (2012). "Improvements to surrogate data methods for nonstationary time series". Phys. Rev. E. 85 (5): 056202. Bibcode:2012PhRvE..85e6202L. doi:10.1103/PhysRevE.85.056202. PMID 23004838.
  18. J. Theiler (1995). "On the evidence for low-dimensional chaos in an epileptic electroencephalogram". Physics Letters A. 196 (5–6): 335–341. Bibcode:1995PhLA..196..335T. doi:10.1016/0375-9601(94)00856-K.
  19. M. Small; D. Yu; R. G. Harrison (2001). "Surrogate test for pseudoperiodic time series data". Phys. Rev. Lett. 87 (18): 188101. Bibcode:2001PhRvL..87r8101S. doi:10.1103/PhysRevLett.87.188101. hdl:10397/4856.
  20. X. Luo; T. Nakamura; M. Small (2005). "Surrogate test to distinguish between chaotic and pseudoperiodic time series". Phys. Rev. E. 71 (2): 026230. arXiv:nlin/0404054. Bibcode:2005PhRvE..71b6230L. doi:10.1103/PhysRevE.71.026230. hdl:10397/4828. PMID 15783410. S2CID 35512941.