Many practical situations involve random variables that are defined as the sum or average of other random variables. For instance, estimating a population mean from a sample mean relies heavily on the properties of these sums. Limit theorems provide the mathematical foundation for such estimations.
Limit theorems are fundamental results in probability theory that describe the behavior of sums of random variables as the number of terms increases. We will focus on two key theorems:
- The Law of Large Numbers (LLN): this theorem explains why averages stabilize. It states that as you repeat an experiment many times, the average of the results (the experimental mean) converges to the expected value (the theoretical mean).
- The Central Limit Theorem (CLT): this theorem explains the shape of the distribution. It states that if you sum a large number of independent random variables, the distribution of their sum (or mean) tends toward a normal distribution, regardless of the original distribution of the variables (under mild technical conditions). Both behaviours are illustrated in the simulation sketch after this list.
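As an illustration, here is a minimal simulation sketch in Python, assuming NumPy is available; the exponential distribution, sample sizes, and random seed are arbitrary choices made for this example. The first part shows the running average stabilizing around the theoretical mean (LLN); the second shows that the standardized sample mean behaves approximately like a standard normal variable (CLT), even though the underlying distribution is skewed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Law of Large Numbers: the running average of i.i.d. draws approaches
# the theoretical mean as the number of draws grows. Example distribution:
# exponential with scale 1, whose theoretical mean is 1.
draws = rng.exponential(scale=1.0, size=100_000)
running_mean = np.cumsum(draws) / np.arange(1, draws.size + 1)
for n in (10, 100, 1_000, 10_000, 100_000):
    print(f"n={n:>7}: running mean = {running_mean[n - 1]:.4f} (theoretical mean = 1.0)")

# Central Limit Theorem: for a sample of size n, the standardized sample mean
# (mean - mu) / (sigma / sqrt(n)) is approximately standard normal for large n.
n = 500
num_samples = 10_000
sample_means = rng.exponential(scale=1.0, size=(num_samples, n)).mean(axis=1)
z = (sample_means - 1.0) / (1.0 / np.sqrt(n))
# For a standard normal variable, P(|Z| > 1.96) is about 0.05.
print("empirical P(|Z| > 1.96) ≈", np.mean(np.abs(z) > 1.96))
```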
These theorems are crucial for statistical inference, allowing us to draw conclusions about entire populations based on limited samples. They help answer two fundamental questions:
- Does the experimental probability (or observed frequency) approach the theoretical probability as the number of trials increases? Yes, the Law of Large Numbers guarantees this.
- Can we quantify the error between our sample estimate and the true value? Yes, the Central Limit Theorem lets us build confidence intervals and error margins using the normal distribution, as shown in the sketch below.
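To make the second answer concrete, here is a small sketch, assuming a hypothetical sample of 200 measurements (generated here only so the example runs end to end); it builds a 95% confidence interval for the population mean using the CLT normal approximation with z ≈ 1.96.

```python
import numpy as np

# Hypothetical sample: 200 measurements simulated for illustration only;
# in practice these would be the observed data.
rng = np.random.default_rng(1)
sample = rng.normal(loc=5.0, scale=2.0, size=200)

n = sample.size
mean = sample.mean()
std_err = sample.std(ddof=1) / np.sqrt(n)  # estimated standard error of the mean

# 95% confidence interval via the CLT normal approximation (z ≈ 1.96).
z = 1.96
lower, upper = mean - z * std_err, mean + z * std_err
print(f"sample mean = {mean:.3f}")
print(f"95% CI for the population mean: [{lower:.3f}, {upper:.3f}]")
print(f"margin of error = {z * std_err:.3f}")
```

With real data, the simulated sample would be replaced by the observed values; the interval width shrinks like 1/sqrt(n), which is precisely the error quantification the CLT provides.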