Laplace and Kolmogorov Definitions of Probability
From Laplace's classical probability to Kolmogorov's modern axiomatic definition, explained step by step for beginners.
When we hear the word “probability,” we often think of phrases like “the probability is 50%.” But what does probability actually mean? Historically, two major approaches have shaped our understanding: Laplace’s classical probability and Kolmogorov’s modern probability.
In this article, we’ll gently walk through both definitions and set the stage for future discussions about probability distributions and probability density functions.
Laplace’s Probability (Classical Definition)
In the 18th century, Pierre-Simon Laplace defined probability as:
Probability = Number of favorable outcomes ÷ Total number of possible outcomes
This works well for experiments like dice or coin tosses, where all outcomes are finite and symmetric.
Example: Rolling a die
If we roll a six-sided die once, the probability of getting an odd number (1, 3, 5) is:

P(odd) = 3 ÷ 6 = 1/2
This relies on the assumption of symmetry: every outcome is equally likely.
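The counting in this example is simple enough to sketch directly. Here is a minimal Python illustration of Laplace's rule, dividing the number of favorable outcomes by the total number of outcomes (the variable names are my own, not from the article):

```python
from fractions import Fraction

# Sample space for one roll of a fair six-sided die
outcomes = {1, 2, 3, 4, 5, 6}

# The event "roll an odd number"
odd = {o for o in outcomes if o % 2 == 1}

# Laplace's definition: favorable outcomes / total outcomes
p_odd = Fraction(len(odd), len(outcomes))
print(p_odd)  # 1/2
```

Using `Fraction` keeps the result exact, which matches how classical probabilities are usually written.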
Limitations
Laplace’s definition has important limitations:
- It does not apply when there are infinitely many outcomes (e.g., choosing a real number)
- It struggles when outcomes are biased (e.g., a weighted coin)
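The weighted-coin limitation is easy to see in a quick simulation. In the sketch below, the coin lands heads 70% of the time (0.7 is an arbitrary bias chosen for illustration): Laplace's rule would count two outcomes and report 1/2, but the long-run frequency tells a different story.

```python
import random

random.seed(0)

# A weighted coin that lands heads 70% of the time.
# Laplace's counting sees {heads, tails} and would report 1/2,
# but these two outcomes are not equally likely.
def weighted_flip(p_heads=0.7):
    return "heads" if random.random() < p_heads else "tails"

n = 100_000
heads = sum(weighted_flip() == "heads" for _ in range(n))
print(heads / n)  # close to 0.7, not the 0.5 that naive counting suggests
```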
To overcome these issues, we turn to Kolmogorov’s probability.
Kolmogorov’s Probability (Modern Definition)
In 1933, the Soviet mathematician Andrey Kolmogorov introduced an axiomatic system for probability.
Probability Space
Kolmogorov formalized probability with three components:
- Ω: the set of all possible outcomes (sample space)
- 𝓕: a collection of subsets of Ω called events (a σ-algebra)
- P: a probability function that assigns a value in [0, 1] to each event in 𝓕
Note: A σ-algebra is, roughly speaking, a collection of subsets that is closed under complement and under countable unions and intersections. It ensures we can assign probabilities consistently without contradictions.
Axioms
The probability function must satisfy:
- Non-negativity: P(A) ≥ 0 for every event A
- Normalization: P(Ω) = 1
- Additivity: for disjoint events A and B, P(A ∪ B) = P(A) + P(B) (and, more generally, countable additivity for a sequence of pairwise disjoint events)
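For a finite sample space, we can verify all three axioms mechanically. The sketch below builds a tiny probability space for a fair die, with the full power set as the σ-algebra and uniform counting as P (the uniform weights are an illustrative choice, not the only possibility):

```python
from fractions import Fraction
from itertools import chain, combinations

# Sample space Omega for a fair six-sided die
omega = frozenset({1, 2, 3, 4, 5, 6})

def powerset(s):
    """All subsets of s -- here, the sigma-algebra of events."""
    s = list(s)
    return [frozenset(c)
            for c in chain.from_iterable(combinations(s, r)
                                         for r in range(len(s) + 1))]

events = powerset(omega)

def P(event):
    """Uniform probability: count favorable outcomes over total."""
    return Fraction(len(event), len(omega))

# Check the three axioms on this finite space:
assert all(P(A) >= 0 for A in events)          # non-negativity
assert P(omega) == 1                           # normalization
for A in events:
    for B in events:
        if not (A & B):                        # A and B disjoint
            assert P(A | B) == P(A) + P(B)     # additivity
print("all axioms hold")
```

Notice that on a finite, symmetric space like this, Kolmogorov's axioms and Laplace's counting rule agree: the axiomatic definition generalizes the classical one rather than replacing it.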
This definition no longer depends on “counting outcomes” and works for continuous spaces.
Example: Dropping a needle on [0,1]
Imagine dropping a needle randomly somewhere between 0 and 1 cm on a line. The probability that it lands in an interval [a, b] ⊆ [0, 1] is:

P([a, b]) = b − a
Here, probability is measured by length (or area, volume, etc.), making continuous outcomes manageable.
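A quick Monte Carlo check makes the "probability as length" idea concrete. In this sketch the needle is modeled as a uniformly random point on [0, 1], and the interval [0.2, 0.5] is an arbitrary choice for illustration:

```python
import random

random.seed(42)

# Drop a "needle" (a uniformly random point) on [0, 1] and estimate
# the probability that it lands in the sub-interval [a, b].
a, b = 0.2, 0.5  # arbitrary interval chosen for illustration

n = 100_000
hits = sum(a <= random.random() <= b for _ in range(n))

print(hits / n)  # empirical estimate
print(b - a)     # Kolmogorov's answer: the length of the interval
```

The empirical frequency converges to b − a = 0.3, exactly the length that the measure-based definition assigns.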
Laplace vs Kolmogorov: A Comparison
| Feature | Laplace | Kolmogorov |
|---|---|---|
| Suitable for | Finite, symmetric cases | Infinite, continuous cases |
| Definition | Favorable outcomes ÷ Total outcomes | Axiomatic definition |
| Limitations | Biased or continuous cases | Abstract, but highly general |
Let’s explore both types of probability with an interactive visual demo!
Laplace vs Kolmogorov: Interactive Probability Demo
Understand the fundamental difference between counting-based and measure-based probability
Laplace's Classical Probability
Formula: P(Event) = Favorable Outcomes ÷ Total Outcomes
Example: What's the probability of rolling an odd number with a fair six-sided die?
Key Points:
- All outcomes must be equally likely (symmetry assumption)
- Works well for finite discrete cases
- Simply count favorable vs. total outcomes