Imagine you're trying to predict the chance of rain today. You might start with a basic probability based on the weather forecast. But then you notice dark clouds rolling in—suddenly, the odds of rain feel higher because you have new information. This is where conditional probability comes in: it’s about updating probabilities when you know something extra has happened.
Think of it like a game with a bag of colored balls—say, 5 red, 3 blue, and 2 green (10 total). The chance of picking a red ball is 5 out of 10, or \(\frac{5}{10} = \frac{1}{2}\). Now, suppose someone tells you they’ve already removed all the blue balls. The bag now has 5 red and 2 green (7 total), so the chance of picking a red ball jumps to \(\frac{5}{7}\). That’s conditional probability: the probability of an event (picking red) given that another event (blue balls removed) has occurred.
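The ball computation above can be sketched in a few lines of Python (a minimal illustration; the bag contents and names are just the numbers from the example, using exact fractions so the results match \(\frac{1}{2}\) and \(\frac{5}{7}\)):

```python
from fractions import Fraction

# Bag contents from the example: 5 red, 3 blue, 2 green.
bag = {"red": 5, "blue": 3, "green": 2}
total = sum(bag.values())  # 10 balls in all

# Unconditional probability of drawing red: 5 out of 10.
p_red = Fraction(bag["red"], total)

# Conditioning on "the blue balls are gone" shrinks the
# sample space to the 7 non-blue balls.
non_blue = total - bag["blue"]
p_red_given_no_blue = Fraction(bag["red"], non_blue)

print(p_red)               # 1/2
print(p_red_given_no_blue) # 5/7
```

Using `Fraction` rather than floating point keeps the probabilities exact, which makes the jump from \(\frac{1}{2}\) to \(\frac{5}{7}\) easy to verify.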
Formally, conditional probability is the likelihood of one event, say \(F\), happening given that another event, \(E\), has already taken place. We write it as \(\PCond{F}{E}\), pronounced “the probability of \(F\) given \(E\).” It’s a way to refine our predictions with new context, and it’s used everywhere—from weather forecasts to medical tests.
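The definition can be written out as a formula (here \(P\) denotes ordinary probability; the text's macro covers only the conditional form, so this notation is an assumption):

\[
\PCond{F}{E} = \frac{P(E \cap F)}{P(E)}, \qquad P(E) > 0.
\]

In words: restrict attention to the outcomes where \(E\) occurs, and ask what fraction of those also satisfy \(F\). The ball example checks out against this formula: with \(F\) = "red" and \(E\) = "not blue", \(P(E \cap F) = \frac{5}{10}\) and \(P(E) = \frac{7}{10}\), so \(\PCond{F}{E} = \frac{5/10}{7/10} = \frac{5}{7}\), exactly as before.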