Stationary phase approximation

In mathematics, the stationary phase approximation is a basic principle of asymptotic analysis, applying to functions given by integration against a rapidly varying complex exponential.

This method originates from the 19th century, and is due to George Gabriel Stokes and Lord Kelvin.[1] It is closely related to Laplace's method and the method of steepest descent, but Laplace's contribution precedes the others.

Basics

The main idea of stationary phase methods relies on the cancellation of sinusoids with rapidly varying phase. If many sinusoids have the same phase and they are added together, they will add constructively. If, however, these same sinusoids have phases which change rapidly as the frequency changes, they will add incoherently, varying between constructive and destructive addition at different times.
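This cancellation can be made concrete with a small numerical experiment (a sketch; the quadratic phase x², the two windows and the values of k below are arbitrary illustrative choices, not taken from the text): the contribution of a window containing a stationary point of the phase decays like k^(−1/2), while a window of the same length on which the phase varies monotonically decays faster, roughly like k^(−1).

    import numpy as np
    from scipy.integrate import quad

    def oscillatory_integral(a, b, k):
        """Numerically evaluate the complex integral of exp(i*k*x**2) over [a, b]."""
        re, _ = quad(lambda x: np.cos(k * x**2), a, b, limit=4000)
        im, _ = quad(lambda x: np.sin(k * x**2), a, b, limit=4000)
        return re + 1j * im

    for k in (10, 100, 1000):
        near = oscillatory_integral(-1.0, 1.0, k)  # window containing the stationary point x = 0
        away = oscillatory_integral(1.0, 3.0, k)   # window of equal length, no stationary point
        print(f"k={k:5d}  |near|={abs(near):.4f} (~sqrt(pi/k)={np.sqrt(np.pi/k):.4f})  "
              f"|away|={abs(away):.5f}")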

Formula

Consider an integral of the form

$$I(k) = \int_{\mathbb{R}^n} g(x)\, e^{i k f(x)}\, dx,$$

where f is a smooth real-valued phase function and g is a smooth amplitude. Letting Σ denote the set of critical points of f (i.e. the points where ∇f = 0), under the assumption that g is either compactly supported or has exponential decay, and that all critical points are nondegenerate (i.e. det Hess(f(x0)) ≠ 0 for x0 ∈ Σ), we have the following asymptotic formula as k → ∞:

$$\int_{\mathbb{R}^n} g(x)\, e^{ikf(x)}\, dx \;=\; \sum_{x_0\in\Sigma} e^{ikf(x_0)}\, \bigl|\det\operatorname{Hess}(f(x_0))\bigr|^{-1/2}\, e^{\frac{i\pi}{4}\operatorname{sgn}\left(\operatorname{Hess}(f(x_0))\right)} \left(\frac{2\pi}{k}\right)^{\!n/2} g(x_0) \;+\; o\!\left(k^{-n/2}\right).$$

Here Hess(f) denotes the Hessian matrix of f, and sgn(Hess(f)) denotes the signature of the Hessian, i.e. the number of positive eigenvalues minus the number of negative eigenvalues.
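For instance (an illustrative check, not part of the original text), take n = 2, f(x, y) = (x² − y²)/2 and any rapidly decaying amplitude g. The only critical point is the origin, the Hessian there is diag(1, −1), so |det Hess| = 1 and the signature is 0, and the formula gives

$$\int_{\mathbb{R}^2} g(x,y)\, e^{\frac{ik}{2}(x^2 - y^2)}\, dx\, dy \;=\; \frac{2\pi}{k}\, g(0,0) + o\!\left(k^{-1}\right),$$

consistent with multiplying the two one-dimensional Fresnel factors √(2π/k) e^{iπ/4} and √(2π/k) e^{−iπ/4}.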

For n = 1, this reduces to:

$$\int_{\mathbb{R}} g(x)\, e^{ikf(x)}\, dx \;=\; \sum_{x_0\in\Sigma} g(x_0)\, e^{ikf(x_0) + \operatorname{sign}(f''(x_0))\,\frac{i\pi}{4}} \left(\frac{2\pi}{k\,|f''(x_0)|}\right)^{\!1/2} +\; o\!\left(k^{-1/2}\right).$$

In this case the assumptions on f reduce to all the critical points being non-degenerate.

This is just the Wick-rotated version of the formula for the method of steepest descent.
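The one-dimensional formula is easy to check numerically. The following sketch uses the arbitrary illustrative choices g(x) = exp(−x²) and f(x) = (x − 1)², which has a single nondegenerate critical point at x0 = 1 with f''(x0) = 2 (neither choice comes from the text), and compares a direct evaluation of the integral with the leading stationary phase term:

    import numpy as np
    from scipy.integrate import quad

    g = lambda x: np.exp(-x**2)          # amplitude with exponential decay
    f = lambda x: (x - 1.0)**2           # phase with one critical point at x0 = 1
    x0, f0, fpp = 1.0, 0.0, 2.0          # critical point, f(x0), f''(x0)

    def direct(k):
        """Numerically evaluate the oscillatory integral of g(x) exp(i k f(x))."""
        re, _ = quad(lambda x: g(x) * np.cos(k * f(x)), -6, 8, limit=4000)
        im, _ = quad(lambda x: g(x) * np.sin(k * f(x)), -6, 8, limit=4000)
        return re + 1j * im

    def leading_term(k):
        """Leading-order stationary phase prediction at x0."""
        return g(x0) * np.exp(1j * (k * f0 + np.pi / 4)) * np.sqrt(2 * np.pi / (k * abs(fpp)))

    for k in (10, 50, 200):
        d, s = direct(k), leading_term(k)
        print(f"k={k:4d}  direct={d:.5f}  stationary phase={s:.5f}  "
              f"rel. err={abs(d - s) / abs(d):.1e}")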

An example

Consider a function

$$f(x,t) = \frac{1}{2\pi} \int_{\mathbb{R}} F(\omega)\, e^{i\left[k(\omega)x - \omega t\right]}\, d\omega.$$

The phase term in this function, φ = k(ω)x − ωt, is stationary when

$$\frac{d}{d\omega}\bigl(k(\omega)\,x - \omega t\bigr) = 0,$$

or equivalently,

$$\left.\frac{dk(\omega)}{d\omega}\right|_{\omega = \omega_0} = \frac{t}{x}.$$

Solutions to this equation yield dominant frequencies ω0 for some x and t. If we expand the phase φ as a Taylor series about ω0 and neglect terms of order higher than (ω − ω0)², we have

$$\phi = \bigl[k(\omega_0)\,x - \omega_0 t\bigr] + \tfrac{1}{2}\, x\, k''(\omega_0)\,(\omega - \omega_0)^2 + \cdots,$$

where k'' denotes the second derivative of k. When x is relatively large, even a small difference (ω − ω0) will generate rapid oscillations within the integral, leading to cancellation. Therefore we can extend the limits of integration beyond the range of validity of the Taylor expansion. If we use the formula

$$\int_{\mathbb{R}} e^{\frac{1}{2} i c x^2}\, dx \;=\; \sqrt{\frac{2\pi}{|c|}}\; e^{\pm i\pi/4} \;=\; \sqrt{\frac{2\pi i}{c}}$$

(the sign in the exponent being that of c), then

$$f(x,t) \;\approx\; \frac{1}{2\pi}\, e^{i\left[k(\omega_0)x - \omega_0 t\right]} \left|F(\omega_0)\right| \int_{\mathbb{R}} e^{\frac{1}{2} i x k''(\omega_0)\,(\omega - \omega_0)^2}\, d\omega.$$

This integrates to

$$f(x,t) \;\approx\; \frac{\left|F(\omega_0)\right|}{2\pi}\, \sqrt{\frac{2\pi}{x\left|k''(\omega_0)\right|}}\; \cos\!\left[k(\omega_0)x - \omega_0 t \pm \frac{\pi}{4}\right].$$
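The leading-order expression can be checked numerically. The sketch below uses assumed illustrative choices — the dispersion relation k(ω) = ω²/2 (so k'' = 1), the spectrum F(ω) = exp(−(ω − 3)²), and x = 50, t = 150, giving ω0 = t/x = 3 — and compares the direct integral with the complex-valued leading term displayed just before the final cosine form:

    import numpy as np
    from scipy.integrate import quad

    F = lambda w: np.exp(-(w - 3.0)**2)   # assumed smooth spectrum, peaked at w = 3
    kdisp = lambda w: 0.5 * w**2          # assumed dispersion relation, k''(w) = 1
    x, t = 50.0, 150.0
    w0 = t / x                            # stationary frequency, from dk/dw = t/x
    phase = lambda w: kdisp(w) * x - w * t

    # Direct evaluation of f(x,t) = (1/(2*pi)) * integral of F(w) exp(i*phase(w)) dw
    re, _ = quad(lambda w: F(w) * np.cos(phase(w)), -3, 9, limit=2000)
    im, _ = quad(lambda w: F(w) * np.sin(phase(w)), -3, 9, limit=2000)
    direct = (re + 1j * im) / (2 * np.pi)

    # Stationary phase leading term (complex form, before taking the cosine)
    kpp = 1.0
    approx = (F(w0) / (2 * np.pi)) * np.exp(1j * phase(w0)) \
             * np.sqrt(2 * np.pi / (x * abs(kpp))) * np.exp(1j * np.pi / 4)

    print(f"direct           = {direct:.6f}")
    print(f"stationary phase = {approx:.6f}")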

Reduction steps

The first major general statement of the principle involved is that the asymptotic behaviour of I(k) depends only on the critical points of f. If by choice of g the integral is localised to a region of space where f has no critical point, the resulting integral tends to 0 as the frequency of oscillations is taken to infinity. See for example Riemann–Lebesgue lemma.
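To see why, in one dimension one can integrate by parts (a standard sketch, assuming g is smooth and compactly supported and f' does not vanish on the support of g):

$$\int g(x)\, e^{ikf(x)}\, dx \;=\; \frac{1}{ik}\int \frac{g(x)}{f'(x)}\,\frac{d}{dx}\!\left(e^{ikf(x)}\right) dx \;=\; -\frac{1}{ik}\int \frac{d}{dx}\!\left(\frac{g(x)}{f'(x)}\right) e^{ikf(x)}\, dx \;=\; O\!\left(k^{-1}\right),$$

and iterating the argument shows that such a localised integral in fact decays faster than any power of k.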

The second statement is that when f is a Morse function, so that the singular points of f are non-degenerate and isolated, then the question can be reduced to the case n = 1. In fact, then, a choice of g can be made to split the integral into cases with just one critical point P in each. At that point, because the Hessian determinant at P is by assumption not 0, the Morse lemma applies. By a change of co-ordinates f may be replaced by

$$\bigl(x_1^2 + x_2^2 + \cdots + x_j^2\bigr) - \bigl(x_{j+1}^2 + \cdots + x_n^2\bigr).$$

The value of j is determined by the signature of the Hessian matrix of f at P (j is the number of positive eigenvalues, so the signature equals 2j − n). As for g, the essential case is that g is a product of bump functions of the coordinates x_i. Assuming now without loss of generality that P is the origin, take a smooth bump function h with value 1 on the interval [−1, 1] and quickly tending to 0 outside it. Take

$$g(x) = h(x_1)\, h(x_2)\cdots h(x_n),$$

then Fubini's theorem reduces I(k) to a product of integrals over the real line like

$$\int_{\mathbb{R}} h(x)\, e^{ikf(x)}\, dx$$

with f(x) = ±x². The case with the minus sign is the complex conjugate of the case with the plus sign, so there is essentially one required asymptotic estimate.

In this way asymptotics can be found for oscillatory integrals for Morse functions. The degenerate case requires further techniques (see for example Airy function).

One-dimensional case

The essential statement is this one:

$$\int_{-1}^{1} e^{ikx^2}\, dx \;=\; \sqrt{\frac{\pi}{k}}\, e^{i\pi/4} + O\!\left(\frac{1}{k}\right).$$

In fact, by contour integration it can be shown that the main term on the right-hand side is the value of the integral on the left-hand side extended over the range (−∞, ∞) (for a proof see Fresnel integral). Therefore it is a question of estimating away the integral over, say, [1, ∞).[2]

This is the model for all one-dimensional integrals I(k) with f having a single non-degenerate critical point at which the second derivative is positive. In fact the model case has second derivative 2 at 0. In order to scale using k, observe that replacing k by ck, where c is constant, is the same as scaling x by √c. It follows that for general values of f''(0) > 0, the factor √(π/k) becomes

$$\sqrt{\frac{2\pi}{k\, f''(0)}}.$$

For f''(0) < 0 one uses the complex conjugate formula, as mentioned before.
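Multiplying n such one-dimensional factors recovers the n-dimensional formula stated earlier (an illustrative consistency check, not part of the original argument). For the Morse normal form with j plus signs and n − j minus signs, each factor contributes √(π/k) e^{±iπ/4} to leading order, so

$$\prod_{i=1}^{j} \int h(x_i)\, e^{ik x_i^2}\, dx_i \;\prod_{i=j+1}^{n} \int h(x_i)\, e^{-ik x_i^2}\, dx_i \;\sim\; \left(\frac{\pi}{k}\right)^{\!n/2} e^{\frac{i\pi}{4}(2j - n)},$$

which matches the general formula, since the Hessian of the normal form is diag(±2), so that |det Hess|^(−1/2) (2π/k)^(n/2) = (π/k)^(n/2), and the signature is 2j − n.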

Lower-order terms

As can be seen from the formula, the stationary phase approximation is a first-order approximation of the asymptotic behavior of the integral. The lower-order terms can be understood as a sum over Feynman diagrams with various weighting factors, for well-behaved f.

Notes

  1. Courant, Richard; Hilbert, David (1953), Methods of mathematical physics, vol. 1 (2nd revised ed.), New York: Interscience Publishers, p. 474, OCLC 505700
  2. See for example Jean Dieudonné, Infinitesimal Calculus, p. 119, or Jean Dieudonné, Calcul Infinitésimal, p. 135.

References

  • Bleistein, N. and Handelsman, R. (1975), Asymptotic Expansions of Integrals, Dover, New York.
  • Victor Guillemin and Shlomo Sternberg (1990), Geometric Asymptotics, (see Chapter 1).
  • Hörmander, L. (1976), Linear Partial Differential Operators, Volume 1, Springer-Verlag, ISBN 978-3-540-00662-6.
  • Aki, Keiiti and Richards, Paul G. (2002), Quantitative Seismology (2nd ed.), pp. 255–256, University Science Books, ISBN 0-935702-96-2.
  • Wong, R. (2001), Asymptotic Approximations of Integrals, Classics in Applied Mathematics, Vol. 34. Corrected reprint of the 1989 original. Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA. xviii+543 pages, ISBN 0-89871-497-4.
  • Dieudonné, J. (1980), Calcul Infinitésimal, Hermann, Paris