Probability Axioms & Basics

Published on May 8, 2025 by Aman K Sahu

Probability theory is the mathematical framework for analyzing random phenomena, and it rests on a small set of axioms. In this article, we will explore these fundamental axioms of probability, along with key concepts such as sample spaces, events, and probability measures. Understanding these ideas is crucial for solving problems that involve uncertainty, randomness, and statistical analysis.

1. The Three Axioms of Probability

The foundation of probability theory is built upon three simple but powerful rules known as Kolmogorov's Axioms. These rules define how we assign and manipulate probabilities for different events in a consistent and logical way.

  • 1. Non-Negativity:
    The probability of any event is always a non-negative real number. In other words, a probability can never be negative.
    P(E) ≥ 0 for every event E
    For example, it makes sense to say the probability of rain tomorrow is 0.3 (30%), but a probability of -0.3 would have no meaning.
  • 2. Total Probability (Normalization Axiom):
    The probability of the entire sample space is 1. The sample space contains all possible outcomes of an experiment, and since some outcome is guaranteed to occur, the probability of the whole space must be 1.
    P(S) = 1
    For example, when rolling a standard die, the total probability of getting any number from 1 to 6 is 1.
  • 3. Additivity (for Mutually Exclusive Events):
    If two events cannot happen at the same time (they are mutually exclusive), then the probability of either event happening is the sum of their individual probabilities.
    If A ∩ B = ∅, then P(A ∪ B) = P(A) + P(B)
    Example: When tossing a fair coin, the events "Heads" and "Tails" are mutually exclusive. So,
    P(Heads or Tails) = P(Heads) + P(Tails) = 0.5 + 0.5 = 1

These axioms help ensure that probability behaves in a predictable and logical way. They are used in everything from simple games of chance to complex fields like machine learning, artificial intelligence, and risk analysis.
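
To see the axioms in action, here is a minimal Python sketch. The fair-die distribution and the helper function prob are illustrative choices for this article, not part of any standard library.

```python
# Minimal sketch: checking Kolmogorov's axioms for a finite probability
# assignment -- a fair six-sided die stored as a dict of outcome -> probability.

die = {face: 1 / 6 for face in range(1, 7)}   # sample space S = {1, ..., 6}

# Axiom 1 (non-negativity): every probability is >= 0.
assert all(p >= 0 for p in die.values())

# Axiom 2 (normalization): the probabilities over the whole sample space sum to 1.
assert abs(sum(die.values()) - 1.0) < 1e-9

def prob(event):
    """Probability of an event, i.e. a subset of the sample space."""
    return sum(die[outcome] for outcome in event)

# Axiom 3 (additivity): for mutually exclusive events A and B,
# P(A ∪ B) = P(A) + P(B).
A = {1, 2}   # "roll a 1 or a 2"
B = {5, 6}   # "roll a 5 or a 6"
assert A & B == set()                                 # A and B share no outcomes
assert abs(prob(A | B) - (prob(A) + prob(B))) < 1e-9  # additivity holds

print(prob(A), prob(B), prob(A | B))   # ≈ 0.333 0.333 0.667
```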

2. Sample Space and Events

In probability theory, the sample space and events are the basic building blocks that help us describe and analyze random experiments. Let’s explore what they mean with easy-to-understand examples:

  • Sample Space (S):
    The sample space is the set of all possible outcomes of a random experiment. It is usually denoted by the symbol S.
    👉 For example:
    • When tossing a coin once, the sample space is: S = {Heads, Tails}
    • When rolling a standard six-sided die, the sample space is: S = {1, 2, 3, 4, 5, 6}
    • When drawing a card from a deck, the sample space includes all 52 cards.
  • Event (E):
    An event is a subset of the sample space. It represents one or more outcomes that we are interested in.
    👉 For example:
    • When rolling a die, the event of getting an even number can be written as: E = {2, 4, 6}
    • When flipping a coin, the event of getting a head is: E = {Heads}
    • An event can also contain more than one outcome, or even all of them (this is called a "sure event").

Understanding sample spaces and events is crucial because the probability of an event is calculated from the outcomes of the sample space that belong to it. When all outcomes are equally likely, the more outcomes an event contains, the higher its probability.
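
These ideas translate directly into code. Below is a small sketch, assuming a fair die so that every outcome is equally likely; the names sample_space, even, and prob are just for illustration.

```python
# Sketch: a sample space and events modelled as Python sets, assuming
# every outcome is equally likely (a fair six-sided die).

sample_space = {1, 2, 3, 4, 5, 6}   # S for one roll of the die
even = {2, 4, 6}                    # event E: "roll an even number"
sure_event = sample_space           # an event containing every outcome

def prob(event, space):
    """P(E) = |E| / |S| when all outcomes in S are equally likely."""
    assert event <= space           # an event must be a subset of the sample space
    return len(event) / len(space)

print(prob(even, sample_space))        # 0.5
print(prob(sure_event, sample_space))  # 1.0 -- the sure event
```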

3. Probability of Events

Probability helps us measure how likely it is for a particular event to happen. The probability of any event is a number between 0 and 1:

  • 0 means the event is impossible (it will never happen).
  • 1 means the event is certain (it will definitely happen).
  • A probability of 0.5 means the event is equally likely to happen or not.

There are different ways to calculate probability based on the kind of data or scenario. Let’s explore three common types:

  • 1. Classical Probability:
    This method is used when all outcomes are equally likely. It's based on logical reasoning and works well for games of chance like dice, coins, or cards.
    👉 Formula: P(E) = Number of favorable outcomes / Total number of possible outcomes
    🧠 Example: Rolling a fair six-sided die. What's the probability of getting a 3?
    👉 There is only one "3" and six total outcomes, so: P(3) = 1 / 6
  • 2. Empirical (or Experimental) Probability:
    This method is based on real-life observations or experiments. It uses actual data collected from repeated trials.
    👉 Formula: P(E) = Number of times event occurs / Total number of trials
    🧠 Example: If it rained on 20 out of the last 100 days, the probability of rain tomorrow is estimated as:
    P(Rain) = 20 / 100 = 0.2
  • 3. Subjective Probability:
    This type is based on a person’s own experience, opinion, or intuition. It’s not calculated using data or formulas.
    🧠 Example: A doctor may say there's a 70% chance of recovery based on experience with past patients — even if no formal data is available.
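
To contrast the first two approaches, here is a short sketch that answers the same question, "what is the chance of rolling a 3?", both classically and empirically. The simulation via random.randint and the trial count of 100,000 are arbitrary illustrative choices.

```python
# Sketch: classical vs. empirical probability of rolling a 3 with a fair die.

import random

# Classical: favorable outcomes / total outcomes, with all outcomes equally likely.
classical = 1 / 6

# Empirical: count how often the event occurs across many repeated trials.
trials = 100_000
hits = sum(1 for _ in range(trials) if random.randint(1, 6) == 3)
empirical = hits / trials

print(f"classical P(3) = {classical:.4f}")   # 0.1667
print(f"empirical P(3) ≈ {empirical:.4f}")   # approaches 0.1667 as trials grow
```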

Understanding these types of probability allows you to apply the right approach depending on the context — whether it's solving math problems, analyzing data, or making decisions under uncertainty.

4. Conditional Probability

Conditional probability is the probability of one event happening given that another event has already occurred. It helps us analyze situations where the outcome of one event affects the likelihood of another.

The notation for conditional probability is: P(A | B)
This is read as: “The probability of event A occurring given that event B has already happened.”

👉 The formula for calculating conditional probability is:
P(A | B) = P(A ∩ B) / P(B)
Where:

  • P(A ∩ B): Probability that both A and B occur (intersection).
  • P(B): Probability that event B occurs.

💡 Example:
Suppose we have a deck of 52 playing cards. What is the probability that a card is a King, given that it's a face card (Jack, Queen, or King)?

  • Total number of face cards = 12 (4 Jacks, 4 Queens, 4 Kings)
  • Number of Kings among face cards = 4
  • So, P(King | Face Card) = 4 / 12 = 1 / 3
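
To double-check this result, here is a small sketch that builds the 52-card deck explicitly and applies the definition P(A | B) = P(A ∩ B) / P(B); the deck representation and the prob helper are illustrative.

```python
# Sketch: P(King | Face Card) computed from the definition
# P(A | B) = P(A ∩ B) / P(B), using a 52-card deck of (rank, suit) pairs.

from fractions import Fraction

ranks = ['A', '2', '3', '4', '5', '6', '7', '8', '9', '10', 'J', 'Q', 'K']
suits = ['♠', '♥', '♦', '♣']
deck = [(rank, suit) for rank in ranks for suit in suits]   # 52 equally likely cards

def prob(event):
    """Probability of an event (a set of cards) under equally likely draws."""
    return Fraction(len(event), len(deck))

kings = {card for card in deck if card[0] == 'K'}
face_cards = {card for card in deck if card[0] in {'J', 'Q', 'K'}}

# P(King | Face Card) = P(King ∩ Face Card) / P(Face Card)
p_conditional = prob(kings & face_cards) / prob(face_cards)
print(p_conditional)   # 1/3
```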

Conditional probability is very useful in real-life scenarios like diagnosing diseases, predicting weather, or even solving puzzles where certain outcomes are already known.
