Lindeberg's condition
In probability theory, Lindeberg's condition is a sufficient condition (and under certain conditions also a necessary condition) for the central limit theorem (CLT) to hold for a sequence of independent random variables.[1][2][3] Unlike the classical CLT, which requires that the random variables in question have finite variance and be both independent and identically distributed, Lindeberg's CLT only requires that they have finite variance, satisfy Lindeberg's condition, and be independent. It is named after the Finnish mathematician Jarl Waldemar Lindeberg.[4]
Statement
Let $(\Omega, \mathcal{F}, \mathbb{P})$ be a probability space, and $X_k : \Omega \to \mathbb{R}$, $k \in \mathbb{N}$, be independent random variables defined on that space. Assume the expected values $\mathbb{E}[X_k] = \mu_k$ and variances $\operatorname{Var}[X_k] = \sigma_k^2$ exist and are finite. Also let

$$s_n^2 := \sum_{k=1}^{n} \sigma_k^2.$$
If this sequence of independent random variables satisfies Lindeberg's condition:

$$\lim_{n \to \infty} \frac{1}{s_n^2} \sum_{k=1}^{n} \mathbb{E}\left[(X_k - \mu_k)^2 \, \mathbf{1}_{\{|X_k - \mu_k| > \varepsilon s_n\}}\right] = 0$$

for all $\varepsilon > 0$, where $\mathbf{1}_{\{\cdots\}}$ is the indicator function, then the central limit theorem holds, i.e. the random variables

$$Z_n := \frac{\sum_{k=1}^{n} (X_k - \mu_k)}{s_n}$$

converge in distribution to a standard normal random variable as $n \to \infty$.
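For instance, the classical CLT for independent and identically distributed variables is recovered as a special case; the short verification below is a standard argument, included here for illustration and not taken verbatim from the cited sources. If the $X_k$ are i.i.d. with mean $\mu$ and variance $\sigma^2 > 0$, then $s_n^2 = n\sigma^2$ and the Lindeberg sum reduces to

$$\frac{1}{s_n^2} \sum_{k=1}^{n} \mathbb{E}\left[(X_k - \mu)^2 \, \mathbf{1}_{\{|X_k - \mu| > \varepsilon s_n\}}\right] = \frac{1}{\sigma^2} \, \mathbb{E}\left[(X_1 - \mu)^2 \, \mathbf{1}_{\{|X_1 - \mu| > \varepsilon \sigma \sqrt{n}\}}\right] \longrightarrow 0 \quad \text{as } n \to \infty,$$

by dominated convergence, since $(X_1 - \mu)^2$ is integrable and the indicator tends to $0$ pointwise. Hence Lindeberg's condition holds and the classical i.i.d. CLT follows.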
Lindeberg's condition is sufficient, but not in general necessary (i.e. the converse implication does not hold in general). However, if the sequence of independent random variables in question satisfies

$$\max_{k=1,\ldots,n} \frac{\sigma_k^2}{s_n^2} \to 0 \quad \text{as } n \to \infty,$$

then Lindeberg's condition is both sufficient and necessary, i.e. it holds if and only if the result of the central limit theorem holds.
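The condition can also be examined numerically. The following is a minimal Python sketch (illustrative only, not from the cited sources): it takes the concrete, hypothetical choice $X_k \sim \mathrm{Uniform}(-k, k)$, evaluates the Lindeberg sum exactly for that family, and simulates $Z_n$ to compare its first two moments with those of a standard normal distribution.

```python
# Minimal numerical sketch (illustrative assumption): X_k ~ Uniform(-k, k),
# so mu_k = 0 and sigma_k^2 = k^2 / 3.
import numpy as np

rng = np.random.default_rng(0)

def lindeberg_sum(n, eps):
    """Exact value of (1/s_n^2) * sum_k E[X_k^2 * 1{|X_k| > eps*s_n}]
    for X_k ~ Uniform(-k, k)."""
    k = np.arange(1, n + 1)
    s2 = np.sum(k**2 / 3.0)          # s_n^2
    a = eps * np.sqrt(s2)            # truncation level eps * s_n
    # For Uniform(-k, k): E[X^2 * 1{|X| > a}] = (k^3 - a^3) / (3k) when a < k, else 0.
    tail = np.where(a < k, (k**3 - a**3) / (3.0 * k), 0.0)
    return tail.sum() / s2

def simulate_Z_n(n, n_samples=50_000):
    """Monte Carlo samples of Z_n = (X_1 + ... + X_n) / s_n."""
    k = np.arange(1, n + 1)
    s_n = np.sqrt(np.sum(k**2 / 3.0))
    X = rng.uniform(-k, k, size=(n_samples, n))   # each row holds one draw of X_1, ..., X_n
    return X.sum(axis=1) / s_n

for n in (10, 100, 1000):
    print(f"n = {n:4d}   Lindeberg sum (eps = 0.1): {lindeberg_sum(n, 0.1):.4f}")

Z = simulate_Z_n(200)
print("sample mean of Z_n:", round(Z.mean(), 3), "  sample variance of Z_n:", round(Z.var(), 3))
# For this family the Lindeberg sum is exactly 0 once eps*s_n exceeds n, and the sample
# mean and variance of Z_n are close to 0 and 1, consistent with convergence in
# distribution to a standard normal random variable.
```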
Remarks
Feller's theorem
Feller's theorem can be used as an alternative method to prove that Lindeberg's condition holds.[5] Letting $S_n := \sum_{k=1}^{n} X_k$ and for simplicity $\mathbb{E}[X_k] = 0$, the theorem states
- if $\forall \varepsilon > 0$, $\lim_{n \to \infty} \max_{1 \le k \le n} \mathbb{P}(|X_k| > \varepsilon s_n) = 0$ and $\frac{S_n}{s_n}$ converges weakly to a standard normal distribution as $n \to \infty$, then $\{X_k\}$ satisfies Lindeberg's condition.
This theorem can be used to disprove that the central limit theorem holds for $\frac{S_n}{s_n}$ by proof by contradiction: the procedure involves proving that Lindeberg's condition fails for $\{X_k\}$ while the first hypothesis of the theorem holds.
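As a concrete illustration of this strategy, consider the following standard textbook-style example (not taken from the cited sources). Let the $X_k$ be independent with

$$\mathbb{P}(X_k = k) = \mathbb{P}(X_k = -k) = \frac{1}{2k^2}, \qquad \mathbb{P}(X_k = 0) = 1 - \frac{1}{k^2},$$

so that $\mathbb{E}[X_k] = 0$, $\sigma_k^2 = 1$ and $s_n^2 = n$. For every $\varepsilon > 0$ the first hypothesis of Feller's theorem holds, since $\mathbb{P}(|X_k| > \varepsilon s_n) \le \frac{1}{\varepsilon^2 n}$ for every $k \le n$ by Chebyshev's inequality, but the Lindeberg sum does not vanish:

$$\frac{1}{n} \sum_{k=1}^{n} \mathbb{E}\left[X_k^2 \, \mathbf{1}_{\{|X_k| > \varepsilon \sqrt{n}\}}\right] = \frac{\#\{k \le n : k > \varepsilon \sqrt{n}\}}{n} \longrightarrow 1 \neq 0.$$

By Feller's theorem, $\frac{S_n}{\sqrt{n}}$ therefore cannot converge weakly to a standard normal distribution; indeed, $\sum_k \mathbb{P}(X_k \neq 0) < \infty$, so by the Borel–Cantelli lemma only finitely many of the $X_k$ are nonzero and $\frac{S_n}{\sqrt{n}} \to 0$ almost surely.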
Interpretation
Because the Lindeberg condition implies $\max_{k=1,\ldots,n} \frac{\sigma_k^2}{s_n^2} \to 0$ as $n \to \infty$, it guarantees that the contribution of any individual random variable $X_k$ ($1 \le k \le n$) to the variance $s_n^2$ is arbitrarily small, for sufficiently large values of $n$.
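This implication can be verified directly (a short standard argument, included here for illustration): for any $\varepsilon > 0$ and any $k \le n$,

$$\sigma_k^2 \le \varepsilon^2 s_n^2 + \mathbb{E}\left[(X_k - \mu_k)^2 \, \mathbf{1}_{\{|X_k - \mu_k| > \varepsilon s_n\}}\right] \le \varepsilon^2 s_n^2 + \sum_{j=1}^{n} \mathbb{E}\left[(X_j - \mu_j)^2 \, \mathbf{1}_{\{|X_j - \mu_j| > \varepsilon s_n\}}\right],$$

so that

$$\max_{k=1,\ldots,n} \frac{\sigma_k^2}{s_n^2} \le \varepsilon^2 + \frac{1}{s_n^2} \sum_{j=1}^{n} \mathbb{E}\left[(X_j - \mu_j)^2 \, \mathbf{1}_{\{|X_j - \mu_j| > \varepsilon s_n\}}\right].$$

Under Lindeberg's condition the second term tends to $0$ as $n \to \infty$, and since $\varepsilon > 0$ is arbitrary, the maximum on the left-hand side tends to $0$ as well.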
References
- Billingsley, P. (1986). Probability and Measure (2nd ed.). Wiley. p. 369. ISBN 0-471-80478-9.
- Ash, R. B. (2000). Probability and Measure Theory (2nd ed.). p. 307. ISBN 0-12-065202-1.
- Resnick, S. I. (1999). A Probability Path. p. 314.
- Lindeberg, J. W. (1922). "Eine neue Herleitung des Exponentialgesetzes in der Wahrscheinlichkeitsrechnung". Mathematische Zeitschrift. 15 (1): 211–225. doi:10.1007/BF01494395. S2CID 119730242.
- Athreya, K. B.; Lahiri, S. N. (2006). Measure Theory and Probability Theory. Springer. p. 348. ISBN 0-387-32903-X.