Bayes' Theorem Calculator

Calculate posterior probability using Bayes' Theorem

Input Probabilities

Enter probabilities as percentages (0-100)

Initial probability that event A occurs

Probability of B occurring given that A is true

Probability of B occurring given that A is false

Bayes' Theorem Formula

P(A|B) = [P(B|A) × P(A)] / P(B)
P(A|B) = Posterior probability (what we want)
P(B|A) = Likelihood (evidence given hypothesis)
P(A) = Prior probability (initial belief)
P(B) = Marginal probability (total evidence)
Law of Total Probability:
P(B) = P(B|A)×P(A) + P(B|¬A)×P(¬A)
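The formula and the Law of Total Probability above can be sketched as a small Python function for the two-hypothesis case (A and ¬A); the function name is illustrative:

```python
def bayes_posterior(prior_a: float, p_b_given_a: float, p_b_given_not_a: float) -> float:
    """Return P(A|B) for a binary hypothesis A / not-A."""
    p_not_a = 1.0 - prior_a
    # Law of Total Probability: P(B) = P(B|A)*P(A) + P(B|not A)*P(not A)
    p_b = p_b_given_a * prior_a + p_b_given_not_a * p_not_a
    if p_b == 0.0:
        raise ValueError("P(B) is zero; the posterior is undefined")
    # Bayes' Theorem: P(A|B) = P(B|A)*P(A) / P(B)
    return (p_b_given_a * prior_a) / p_b
```

With the example inputs used on this page, `bayes_posterior(0.30, 0.80, 0.10)` returns about 0.7742, matching the 77.42% result shown.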

Posterior Probability

P(A|B) - Probability of A given B

77.42%
P(A|B)
Updated probability after observing evidence B

All Probabilities

P(A) - Prior: 30.00%
P(¬A) - Not A: 70.00%
P(B|A) - Likelihood: 80.00%
P(B|¬A): 10.00%
P(B) - Total Probability: 31.00%
P(A|B) - Posterior: 77.42%
P(¬A|B): 22.58%

Calculation Steps

Step 1: Calculate P(¬A)
P(¬A) = 1 - P(A) = 1 - 0.3000 = 0.7000
Step 2: Calculate P(B)
P(B) = P(B|A)×P(A) + P(B|¬A)×P(¬A)
= 0.8000×0.3000 + 0.1000×0.7000
= 0.3100
Step 3: Calculate P(A|B)
P(A|B) = [P(B|A) × P(A)] / P(B)
= [0.8000 × 0.3000] / 0.3100
= 0.7742 (77.42%)
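The three steps above translate directly into Python (the variable names are illustrative; the numbers are the example inputs shown on this page):

```python
prior = 0.30       # P(A)
like = 0.80        # P(B|A)
like_not = 0.10    # P(B|not A)

p_not_a = 1 - prior                        # Step 1: P(not A) = 0.7000
p_b = like * prior + like_not * p_not_a    # Step 2: P(B)     = 0.3100
posterior = like * prior / p_b             # Step 3: P(A|B)   = 0.7742

print(f"P(not A) = {p_not_a:.4f}")
print(f"P(B)     = {p_b:.4f}")
print(f"P(A|B)   = {posterior:.4f}")
```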

Interpretation

Prior → Posterior:
The probability of A changed from 30.00% to 77.42% after observing evidence B.
Evidence supports A:
Observing B increases our belief in A by 47.42 percentage points.

About Bayes' Theorem Calculator

What is Bayes' Theorem?

Bayes' Theorem is a fundamental formula in probability theory that describes how to update the probability of a hypothesis based on new evidence. It allows us to revise our beliefs (prior probabilities) when we receive new information (likelihood), resulting in updated beliefs (posterior probabilities).

The Formula

P(A|B) = [P(B|A) × P(A)] / P(B)

  • P(A|B) - Posterior: Probability of A given evidence B
  • P(B|A) - Likelihood: Probability of observing B if A is true
  • P(A) - Prior: Initial probability of A before seeing evidence
  • P(B) - Marginal: Total probability of observing B

Real-World Applications

  • Medical Diagnosis: Calculating disease probability from test results
  • Spam Filtering: Determining if an email is spam based on word patterns
  • Machine Learning: Naive Bayes classifiers for text and data classification
  • Weather Forecasting: Updating predictions based on new observations
  • Legal Reasoning: Evaluating evidence in court cases
  • Finance: Risk assessment and portfolio optimization
  • Search Engines: Ranking and relevance algorithms
  • Robotics: Sensor fusion and localization

Medical Test Example

A classic example: a disease affects 1% of the population. A test detects it in 99% of those who have it (true positive rate) but also flags 5% of healthy people (false positive rate). If you test positive, what is the actual probability you have the disease? Surprisingly, only about 16.7%: (0.99 × 0.01) / (0.99 × 0.01 + 0.05 × 0.99) ≈ 0.167. This counterintuitive result shows why Bayes' Theorem is crucial for interpreting medical tests.
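The medical test example can be verified in a few lines of Python (variable names are illustrative):

```python
prevalence = 0.01      # P(disease): 1% of the population
sensitivity = 0.99     # P(positive | disease): true positive rate
false_positive = 0.05  # P(positive | healthy): false positive rate

# Total probability of a positive test, over both the sick and the healthy
p_positive = sensitivity * prevalence + false_positive * (1 - prevalence)
p_disease_given_positive = sensitivity * prevalence / p_positive

print(f"{p_disease_given_positive:.1%}")  # prints "16.7%"
```

The posterior works out to exactly 1/6: most positive tests come from the large healthy group, not the small sick group.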

Key Insights

  • The posterior probability depends on both the prior and the likelihood
  • Strong evidence can overcome weak priors, and vice versa
  • Bayes' Theorem is the foundation of Bayesian statistics
  • It provides a mathematical framework for rational belief updating
  • The theorem is reversible: given P(A) and P(B), you can calculate P(B|A) from P(A|B)

Common Misconceptions

  • P(A|B) ≠ P(B|A) - These are different conditional probabilities
  • A high-accuracy test doesn't guarantee a high probability of disease given a positive result
  • Base rates (priors) matter significantly in real-world applications
  • Ignoring P(B) leads to the "base rate fallacy"
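To see how much base rates matter, one can sweep the prior while holding the test characteristics fixed (the 99%/5% figures from the medical example above; the helper function is illustrative):

```python
def posterior(prior: float, sens: float = 0.99, fp: float = 0.05) -> float:
    """P(disease | positive) for a test with given sensitivity and false positive rate."""
    p_positive = sens * prior + fp * (1 - prior)
    return sens * prior / p_positive

# The same test gives wildly different posteriors at different base rates
for prior in (0.001, 0.01, 0.10, 0.50):
    print(f"prior {prior:>5.1%} -> posterior {posterior(prior):.1%}")
```

With a 0.1% base rate the posterior is only about 2%, while at a 50% base rate it exceeds 95%; treating the 99% test accuracy as the answer in every case is exactly the base rate fallacy.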

History

Named after Reverend Thomas Bayes (1701-1761), the theorem was published posthumously in 1763. It has become one of the most important tools in statistics, artificial intelligence, and scientific reasoning.