Statistics Calculators

Free statistics tools with formula explanations, step-by-step examples, and real-world context. From mean and median to ANOVA and effect size — every calculator includes the math behind the result.

Maintained by CalcMulti Editorial Team · Last updated: February 2026

What Is Statistics?

Statistics is the branch of mathematics that deals with collecting, organising, analysing, interpreting, and presenting data. It underpins virtually every scientific field — from clinical trials and economic forecasting to machine learning and quality control. Statistics gives us the tools to extract meaningful conclusions from data that would otherwise be noise.

Descriptive Statistics

Summarises and describes the data you already have. It does not generalise beyond the sample.

  • Mean, median, mode — central tendency
  • Range, variance, standard deviation — spread
  • Percentiles, quartiles — position
  • Skewness, kurtosis — distribution shape

Inferential Statistics

Uses a sample to draw probability-based conclusions about a larger population.

  • Confidence intervals — range for a parameter
  • Hypothesis testing — p-values, t-tests, ANOVA
  • Regression — relationships between variables
  • Effect size — practical magnitude of a result

A critical distinction: when you have data for an entire population, you use population formulas (divide by n). When you have a sample, you use sample formulas (divide by n − 1). This adjustment — known as Bessel's correction — makes the sample variance an unbiased estimator of the population variance.
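A minimal sketch of the two conventions using Python's standard `statistics` module (the dataset is made up for illustration):

```python
import statistics

data = [4, 8, 6, 5, 3, 7]

# Population variance: divide by n (use when the data IS the whole population)
pop_var = statistics.pvariance(data)

# Sample variance: divide by n - 1 (Bessel's correction)
samp_var = statistics.variance(data)

# Verify both against the raw sum-of-squares formula
n = len(data)
mean = sum(data) / n
ss = sum((x - mean) ** 2 for x in data)
assert abs(pop_var - ss / n) < 1e-9        # 17.5 / 6
assert abs(samp_var - ss / (n - 1)) < 1e-9  # 17.5 / 5 = 3.5
```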

Measures of Central Tendency

Mean · Median · Mode · Weighted Mean

Central tendency describes the centre of a dataset — a single representative value that summarises where most values cluster. There are three main measures, each suited to different data types and distributions. When observations carry different importance (different sample sizes, weights, or credits), the weighted mean is the correct measure.

Mean

x̄ or μ

Σx / n

Best for
Uses all data points; best for symmetric distributions
Limitation
Sensitive to outliers
Example use
Average exam score in a class
Open Mean Calculator →

Median

M

Middle value when sorted

Best for
Robust to outliers; best for skewed data
Limitation
Ignores magnitude of extreme values
Example use
Median household income
Open Median Calculator →

Mode

Mo

Most frequent value

Best for
Works for categorical data
Limitation
May not be unique; not useful for continuous data
Example use
Most popular shoe size
Open Mode Calculator →

When Mean Fails: Use the Weighted Mean

The arithmetic mean gives every observation equal weight. When data points represent different quantities — course credits for GPA, asset sizes for portfolio return, respondent counts for combined survey data — the weighted mean Σ(wᵢ × xᵢ) / Σwᵢ is the correct measure.

Example: a student earns 85% in a 3-credit course and 70% in a 1-credit course. Simple average = 77.5%. Weighted average = (85×3 + 70×1) / 4 = 81.25% — the correct GPA calculation.

Open Weighted Average Calculator →
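The same GPA calculation as a short Python function (the generic `weighted_mean` helper is ours, not a library function):

```python
def weighted_mean(values, weights):
    """Weighted mean: sum(w_i * x_i) / sum(w_i)."""
    return sum(w * x for x, w in zip(values, weights)) / sum(weights)

grades = [85, 70]   # percent scores
credits = [3, 1]    # course credits act as weights
print(weighted_mean(grades, credits))  # (85*3 + 70*1) / 4 = 81.25
```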

When to Use Mean vs Median vs Mode

Decision guide — pick the right measure for your data

Choosing the wrong measure of central tendency produces misleading summaries. The decision depends on three factors: the level of measurement (nominal, ordinal, interval/ratio), the shape of the distribution (symmetric, skewed, bimodal), and whether outliers are present.

| Situation | Mean | Median | Mode |
|---|---|---|---|
| Symmetric distribution, no outliers | ✅ Best | ✅ OK | — |
| Skewed distribution (e.g. incomes) | ⚠️ Misleading | ✅ Best | — |
| Data has extreme outliers | ⚠️ Pulled by outliers | ✅ Best | — |
| Categorical data (colours, sizes) | ❌ Not valid | ❌ Not valid | ✅ Best |
| Finding most popular item | — | — | ✅ Best |
| Normal (bell curve) distribution | ✅ Best | ✅ Same | ✅ Same |
| Bimodal distribution (two peaks) | ⚠️ Misleading | ⚠️ Misleading | ✅ Both modes |
| Small dataset (< 10 values) | ✅ OK | ✅ OK | ⚠️ Unstable |
| Reporting to non-technical audience | ✅ Familiar | ✅ "Typical value" | ✅ "Most common" |

Quick Decision Rule

Is data categorical? Use mode.
Is data numerical with outliers or a skewed distribution? Use median. Report mean as supplemental information.
Is data numerical, roughly symmetric, no extreme outliers? Use mean. It is the most mathematically useful measure.
Do observations have different weights (credits, asset sizes)? Use weighted mean — arithmetic mean will give wrong results.
Not sure? Report both mean and median. If they differ significantly, the distribution is likely skewed.

Measures of Spread

Central tendency tells you where the data is centred. Spread (also called variability or dispersion) tells you how far values typically deviate from that centre. Two datasets can have identical means but completely different spreads — and that difference matters enormously in practice.

| Measure | Formula | Units | Use when | Calculator |
|---|---|---|---|---|
| Range | Max − Min | Same as data | Quick, rough estimate of spread | Calculate → |
| IQR | Q3 − Q1 | Same as data | Robust to outliers; used in box plots | Calculate → |
| Variance (σ²) | Σ(x − μ)² / n | Squared units | Mathematical derivations, ANOVA | Calculate → |
| Standard Deviation (σ) | √Variance | Same as data | Most practical reporting | Calculate → |
| Coefficient of Variation | (σ / μ) × 100% | % | Comparing spread across different scales | Calculate → |

Standard deviation is by far the most commonly reported measure of spread because it is in the same units as the data. Variance is useful internally but rarely reported to non-technical audiences. When comparing datasets measured in different units (e.g., height in cm vs weight in kg), use the coefficient of variation — it expresses spread as a percentage of the mean, making comparison valid. The IQR is the preferred spread measure for skewed data and is used in box plots and outlier detection.
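A quick sketch of the coefficient of variation in Python; the height and weight samples are invented purely to show a cross-unit comparison:

```python
import statistics

heights_cm = [160, 165, 170, 175, 180]
weights_kg = [55, 60, 70, 85, 95]

def cv_percent(data):
    # Coefficient of variation: sample SD as a percentage of the mean
    return statistics.stdev(data) / statistics.mean(data) * 100

# SDs in cm and kg are not comparable, but CV percentages are:
print(round(cv_percent(heights_cm), 1))  # heights vary far less, relatively
print(round(cv_percent(weights_kg), 1))
```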

Position Measures — Where Does a Value Rank?

Position measures describe where a specific value sits within a distribution — relative to all other values. Unlike central tendency (where the data clusters) or spread (how wide the data is), position answers: how does this particular value compare to the rest?

Z-Score

z = (x − μ) / σ

Expresses how many standard deviations a value is from the mean. A z-score of +1.5 means the value is 1.5 standard deviations above average. Negative z-scores fall below the mean.

Best for:
Comparing values across different datasets (different units, different scales)
Example:
Comparing a student's performance in Math vs English on different scoring systems
Open Z-Score Calculator →
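The z-score formula is simple enough to sketch directly; the exam means and standard deviations below are hypothetical:

```python
def z_score(x, mean, sd):
    """Standardise x: number of standard deviations from the mean."""
    return (x - mean) / sd

# Hypothetical scores on two different scales
math_z = z_score(82, mean=70, sd=8)      # 1.5 SDs above average
english_z = z_score(75, mean=65, sd=10)  # 1.0 SD above average

# The math result is relatively stronger, even though 75 < 82 says
# nothing by itself about which scale is harder
```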

Percentile Rank

P = (B / n) × 100, where B is the number of values at or below x

Tells you what percentage of the dataset falls at or below a given value. The 75th percentile means 75% of values are at or below that point. Percentiles are used in standardised tests, growth charts, and salary benchmarks.

Best for:
Ranking a value within a real dataset (no assumption of normal distribution required)
Example:
Determining which percentile of test-takers a score falls in
Open Percentile Calculator →
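A minimal Python sketch of the percentile-rank formula above, with an invented score list:

```python
def percentile_rank(data, value):
    """P = (B / n) * 100, where B counts values at or below `value`."""
    b = sum(1 for x in data if x <= value)
    return 100 * b / len(data)

scores = [55, 60, 65, 70, 75, 80, 85, 90, 95, 100]
print(percentile_rank(scores, 80))  # 6 of 10 values are <= 80 -> 60.0
```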

Z-Score vs Percentile — Which to Use?

| Condition | Z-Score | Percentile Rank |
|---|---|---|
| Data is approximately normally distributed | ✅ Preferred | ✅ OK |
| Data is skewed or non-normal | ⚠️ Use with caution | ✅ Preferred |
| Comparing across two different datasets | ✅ Best (unit-free) | ⚠️ Only if same reference group |
| Communicating to a non-technical audience | ⚠️ Less intuitive | ✅ "You scored higher than X%" |
| Population σ is known | ✅ Use z = (x−μ)/σ | — |
| Working from raw data only | ⚠️ Need mean + σ first | ✅ Calculate directly from data |

Probability Distributions

Normal · Binomial · Poisson · Geometric

A probability distribution describes the probability of each possible outcome of a random variable. Choosing the correct distribution for your data is a foundational skill in statistics — the wrong distribution leads to incorrect p-values, wrong predictions, and flawed models.

| Distribution | Type | Key parameter(s) | Use when | Calculator |
|---|---|---|---|---|
| Normal | Continuous | μ, σ | Symmetric, bell-shaped data; z-tests; natural measurements | Calculate → |
| Binomial | Discrete | n trials, p success | Counting successes in a fixed number of independent binary trials | Calculate → |
| Poisson | Discrete | λ (rate) | Counting events in a fixed time or space interval (calls/hour, defects/batch) | Calculate → |
| Geometric | Discrete | p (success prob) | Number of trials until the first success (sales calls, retries) | Calculate → |
| T-distribution | Continuous | degrees of freedom | Small-sample inference when σ is unknown; t-tests | Calculate → |

Normal Distribution — Key Facts

  • 68% of data falls within ±1σ of the mean
  • 95% falls within ±2σ (empirical rule)
  • 99.7% falls within ±3σ
  • Mean = Median = Mode (perfectly symmetric)
  • Foundation of z-scores and most parametric tests
Normal Distribution Calculator →

Binomial → Normal Approximation

When n is large and p is not close to 0 or 1, the binomial distribution can be approximated by the normal distribution. Rule of thumb: use the approximation when np ≥ 5 and n(1−p) ≥ 5.

Mean = np, Standard deviation = √(np(1−p)). Apply the continuity correction (+0.5 or −0.5) for improved accuracy.

Binomial vs Normal Comparison →
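The rule of thumb and continuity correction can be checked numerically with Python's standard library (`math.comb` for the exact binomial, `statistics.NormalDist` for the approximation); the n and p values here are arbitrary:

```python
import math
from statistics import NormalDist

n, p = 100, 0.4          # np = 40, n(1-p) = 60: approximation is valid
mu = n * p
sigma = math.sqrt(n * p * (1 - p))

# P(X <= 45) with continuity correction: evaluate the normal CDF at 45.5
approx = NormalDist(mu, sigma).cdf(45.5)

# Exact binomial P(X <= 45) for comparison
exact = sum(math.comb(n, k) * p**k * (1 - p)**(n - k) for k in range(46))

# The two values agree to within about 1%
```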

Data Shape — Skewness, Kurtosis & Outliers

Beyond centre and spread — the shape of your distribution matters

After computing mean and standard deviation, the next step is understanding the shape of your distribution. Shape affects which statistical tests are valid, whether parametric or non-parametric methods are appropriate, and how you should communicate your results. Three key shape measures are skewness, kurtosis, and the five-number summary.

Skewness

Measures asymmetry. Right-skewed (positive) distributions have a long tail to the right — mean > median. Left-skewed (negative) distributions have a long tail to the left — mean < median. Values near 0 indicate symmetry.

• |skewness| < 0.5 → approximately symmetric

• 0.5–1.0 → moderately skewed

• > 1.0 → highly skewed (consider log transform)

Skewness & Kurtosis Calculator →

Kurtosis

Measures tail heaviness relative to a normal distribution. Excess kurtosis = 0 means normal (mesokurtic). Positive excess kurtosis means heavy tails and a sharp peak (leptokurtic) — extreme values occur more often than expected.

• Excess kurtosis = 0 → Normal (mesokurtic)

• > 0 → Heavy tails (leptokurtic, e.g. finance)

• < 0 → Light tails (platykurtic, e.g. uniform)

Skewness & Kurtosis Calculator →

Five-Number Summary

Min, Q1, Median, Q3, Max form the five-number summary — the foundation of box plots. It describes shape without assuming a distribution: the spread of the middle 50% (IQR), and whether extreme values (outliers) are present.

• IQR = Q3 − Q1

• Mild outlier: < Q1 − 1.5×IQR or > Q3 + 1.5×IQR

• Extreme outlier: < Q1 − 3×IQR or > Q3 + 3×IQR

Five-Number Summary Calculator →
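A sketch of the five-number summary and Tukey fences using `statistics.quantiles`. Quartile conventions vary between tools; `method="inclusive"` here matches the common linear-interpolation definition. The dataset is invented and contains one planted outlier:

```python
import statistics

data = [12, 15, 14, 10, 8, 12, 16, 40, 11, 13]

q1, q2, q3 = statistics.quantiles(data, n=4, method="inclusive")
iqr = q3 - q1

# Tukey fences: mild outliers fall beyond 1.5 * IQR from the quartiles
lower_fence = q1 - 1.5 * iqr
upper_fence = q3 + 1.5 * iqr
outliers = [x for x in data if x < lower_fence or x > upper_fence]

print(min(data), q1, q2, q3, max(data))  # the five-number summary
print(outliers)                          # the planted value 40 is flagged
```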

Outlier Detection — Two Methods Compared

| Property | IQR Method (Tukey Fences) | Z-Score Method (\|z\| > 3) |
|---|---|---|
| Requires normality? | No — robust to any distribution | Best for approximately normal data |
| Affected by outliers? | No — Q1/Q3 are resistant | Yes — outliers inflate mean and SD |
| Best for small n? | ✅ Reliable for n < 30 | ⚠️ Unreliable — use IQR instead |
| Detects mild outliers? | ✅ 1.5×IQR fence | ⚠️ Only extreme \|z\| > 3 cases |
Open Outlier Calculator (both methods) →

Probability & Inferential Statistics

Inferential statistics bridges the gap between a sample and the larger population it represents. The foundation is probability theory — which quantifies uncertainty mathematically.

Z-Score & Normal Distribution

The z-score converts any value to the number of standard deviations from its distribution's mean. This allows comparison across different datasets. Under a normal distribution, approximately 68% of values fall within ±1σ, 95% within ±2σ, and 99.7% within ±3σ (the empirical rule).

Z-Score Calculator →

Confidence Intervals

A 95% confidence interval means: if you repeated the sampling process 100 times, approximately 95 of the resulting intervals would contain the true population parameter. It quantifies the precision of an estimate — a wide interval means high uncertainty; a narrow interval means the sample provides strong evidence.

Confidence Interval Calculator →
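A sketch of a normal-based 95% interval for a mean, using an invented sample. For a small sample like this one a t critical value would widen the interval slightly; the z value is used here for simplicity:

```python
import math
from statistics import NormalDist

sample = [12.1, 11.8, 12.5, 12.0, 11.9, 12.3, 12.2, 11.7]
n = len(sample)
mean = sum(sample) / n
sd = math.sqrt(sum((x - mean) ** 2 for x in sample) / (n - 1))  # sample SD
se = sd / math.sqrt(n)                                          # standard error

z = NormalDist().inv_cdf(0.975)   # ~1.96 for a two-sided 95% interval
ci = (mean - z * se, mean + z * se)
print(ci)  # a narrow interval: this sample pins the mean down tightly
```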

P-Values & Hypothesis Testing

A p-value is the probability of observing your result (or more extreme) if the null hypothesis were true. A small p-value (< 0.05 by convention) is evidence against the null hypothesis. Important: statistical significance ≠ practical importance. Always pair p-values with effect sizes.

P-Value Calculator →

Conditional Probability & Bayes

Conditional probability asks: given that event A occurred, what is the probability of B? P(B|A) = P(A ∩ B) / P(A). Bayes' theorem reverses this: it lets you update a prior belief with new evidence. It is the foundation of Bayesian statistics and is used in spam filters, medical diagnosis, and machine learning.

Probability Calculator →
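Bayes' theorem in a few lines, using made-up medical-test numbers to show why a positive test for a rare condition is still most likely a false positive:

```python
# Hypothetical numbers, chosen only to illustrate Bayes' theorem
p_disease = 0.01          # prior P(D): 1% of people have the condition
p_pos_given_d = 0.95      # sensitivity P(+|D)
p_pos_given_not_d = 0.05  # false-positive rate P(+|not D)

# Total probability of a positive test (law of total probability)
p_pos = p_pos_given_d * p_disease + p_pos_given_not_d * (1 - p_disease)

# Posterior: P(D|+) = P(+|D) * P(D) / P(+)
p_d_given_pos = p_pos_given_d * p_disease / p_pos
print(round(p_d_given_pos, 3))  # 0.161: most positives are false positives
```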

Advanced Analysis — ANOVA, Mann-Whitney & Effect Size

Comparing multiple groups and measuring practical significance

When you move beyond two groups or need to quantify how meaningful a result is in practice, three tools become essential: ANOVA for comparing three or more group means, the Mann-Whitney U test for non-parametric group comparisons, and effect size measures for reporting practical significance alongside p-values.

One-Way ANOVA

Compares means across 3+ groups simultaneously using the F-statistic: ratio of between-group variance to within-group variance. A significant F (p < 0.05) means at least one group mean is different — but not which one.

F = MS_between / MS_within

Post-hoc: Tukey HSD, Bonferroni correction to identify which groups differ.

ANOVA Calculator →
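The F-statistic can be computed from scratch in a few lines of Python; the three groups below are toy data with deliberately well-separated means:

```python
def one_way_anova_f(*groups):
    """F = MS_between / MS_within for k independent groups."""
    k = len(groups)
    n_total = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n_total

    # Between-group sum of squares: how far group means sit from the grand mean
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares: scatter of each value around its own group mean
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)

    ms_between = ss_between / (k - 1)          # df_between = k - 1
    ms_within = ss_within / (n_total - k)      # df_within = N - k
    return ms_between / ms_within

f = one_way_anova_f([5, 6, 7], [8, 9, 10], [11, 12, 13])
print(f)  # a large F: between-group variance dwarfs within-group variance
```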

Mann-Whitney U Test

Non-parametric alternative to the independent t-test. Use when data is ordinal, clearly non-normal, or sample sizes are small (n < 30). Compares distributions via rank sums — no normality assumption required.

Effect size: rank-biserial correlation r = 1 − 2U/(n₁n₂). Interpret as: 0.1 small, 0.3 medium, 0.5 large.

Mann-Whitney U Calculator →

Effect Size

Measures the practical magnitude of a result, independent of sample size. A p-value alone tells you if a result is likely real — effect size tells you if it matters.

• Cohen's d: 0.2 small, 0.5 medium, 0.8 large

• r: 0.1 small, 0.3 medium, 0.5 large

• η² (ANOVA): 0.01 small, 0.06 medium, 0.14 large

Effect Size Calculator →
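Cohen's d with a pooled standard deviation, sketched in plain Python on invented scores:

```python
import math

def cohens_d(a, b):
    """Cohen's d for two independent groups, using the pooled SD."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    ssa = sum((x - ma) ** 2 for x in a)
    ssb = sum((x - mb) ** 2 for x in b)
    pooled_sd = math.sqrt((ssa + ssb) / (na + nb - 2))
    return (ma - mb) / pooled_sd

d = cohens_d([85, 88, 90, 86, 91], [80, 82, 84, 81, 83])
# d well above 0.8 on Cohen's benchmarks: a large effect
```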

Which Test to Use — Decision Guide

| Situation | Recommended Test |
|---|---|
| Comparing 2 groups, continuous data, approximately normal | Independent t-test |
| Comparing 2 groups, ordinal data or non-normal | Mann-Whitney U test |
| Comparing 3+ groups, continuous data, approximately normal | One-way ANOVA |
| Comparing 3+ groups, ordinal or non-normal | Kruskal-Wallis test |
| One sample vs known value, σ known | Z-test |
| One sample vs known value, σ unknown | One-sample t-test |
| Categorical outcomes (frequencies) | Chi-square test |
| Relationship between two continuous variables | Pearson correlation / linear regression |

All Statistics Calculators

  • Mean Calculator (Central Tendency): Arithmetic, weighted, geometric, and harmonic mean.
  • Median Calculator (Central Tendency): Middle value of any dataset — robust to outliers.
  • Mode Calculator (Central Tendency): Most frequent value; bimodal and multimodal support.
  • Weighted Average Calculator (Central Tendency): Weighted mean for GPA, portfolio returns, and survey data.
  • Standard Deviation Calculator (Spread): Population and sample standard deviation with variance.
  • Variance Calculator (Spread): Population and sample variance from raw data.
  • Range Calculator (Spread): Max − min — the simplest measure of spread.
  • Coefficient of Variation (Spread): Relative variability as a % of the mean.
  • IQR Calculator (Spread): Q3 − Q1 for box plots and outlier detection.
  • Five-Number Summary (Descriptive): Min, Q1, Median, Q3, Max — complete box plot data.
  • Frequency Distribution (Descriptive): Absolute, relative, and cumulative frequency tables.
  • Skewness & Kurtosis (Descriptive): Distribution shape — symmetric, skewed, heavy-tailed.
  • Outlier Calculator (Descriptive): IQR method (Tukey fences) + z-score outlier detection.
  • Z-Score Calculator (Position): Standardise any value in standard deviation units.
  • Percentile Calculator (Position): Percentile rank of a value within a dataset.
  • T-Score Calculator (Position): T-score from raw data with p-value for small samples.
  • Normal Distribution Calculator (Distribution): P(X < x), P(X > x), P(a < X < b) for any mean and σ.
  • Binomial Calculator (Distribution): P(X = k) and P(X ≤ k) for fixed-trial binary outcomes.
  • Poisson Calculator (Distribution): Probability of k events in a fixed interval (rate λ).
  • Geometric Distribution (Distribution): Trials until first success — P(X = k), mean, variance.
  • Sample Size Calculator (Inference): Minimum sample size for surveys and experiments.
  • T-Test Calculator (Inference): One-sample and two-sample t-test with p-value.
  • ANOVA Calculator (Inference): One-way ANOVA F-test for 3+ group comparisons.
  • Mann-Whitney U Test (Inference): Non-parametric alternative to the independent t-test.
  • Chi-Square Calculator (Inference): Goodness-of-fit χ² statistic with p-value.
  • Standard Error Calculator (Inference): SE of the mean with 95% and 99% confidence intervals.
  • Confidence Interval Calculator (Inference): 95% and 99% CIs for means and proportions.
  • P-Value Calculator (Inference): One-tail and two-tail p-values from z and t scores.
  • Correlation Calculator (Relationships): Pearson r and R² for two paired variables.
  • Linear Regression Calculator (Relationships): Slope, intercept, R² and predictions for y = mx + b.
  • Effect Size Calculator (Advanced): Cohen's d, r from t-test, and eta-squared (η²).
  • Probability Calculator (Probability): Basic, conditional, and complement probability.

Formula Guides

Deep-dive explanations — where each formula comes from, how to apply it, and common mistakes to avoid.

Comparisons

Side-by-side analysis — when to choose one method over another, with worked examples.

Common Statistical Errors

Confusing correlation with causation

Two variables moving together (correlation) does not mean one causes the other. Ice cream sales and drowning rates both rise in summer — but ice cream does not cause drowning; both are driven by hot weather. Always look for confounding variables before inferring causation.

Using mean for skewed data

The arithmetic mean is pulled toward outliers. When a dataset is right-skewed — such as income distributions, housing prices, or response times — report the median. A small number of extremely high values inflate the mean, making it unrepresentative of the typical case.

Misinterpreting p < 0.05 as proof

Statistical significance means the result is unlikely under the null hypothesis — it does not confirm the alternative hypothesis is true. A p-value of 0.04 means there is a 4% chance of this result if the null hypothesis were true. Multiple comparisons compound this problem: run 20 tests and expect one to be "significant" by chance at p < 0.05.

Reporting p-values without effect sizes

A p-value only tells you whether a result is statistically detectable — not whether it matters. With a large enough sample, even a trivial 0.1-point difference in means will be statistically significant. Always report effect size (Cohen's d, r, η²) alongside p-values to communicate practical importance.

Dividing by n instead of n−1 for sample variance

When computing variance from a sample (not the full population), the denominator must be n−1, not n. This is Bessel's correction — it produces an unbiased estimate of the population variance. Most calculators and software default to the correct formula, but be aware of which convention a tool uses.

Averaging percentages directly

You cannot take the arithmetic mean of percentages unless the sample sizes are equal. If store A has 20% return rate on 1,000 sales and store B has 80% return rate on 10 sales, the combined rate is not 50% — it is (200 + 8) / 1,010 ≈ 20.6%. Use weighted mean with sample sizes as weights.
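The store example, computed directly:

```python
# Returns: store A has a 20% return rate on 1,000 sales,
# store B an 80% return rate on only 10 sales
returns = 0.20 * 1000 + 0.80 * 10   # 208 returned items in total
total_sales = 1000 + 10

combined_rate = returns / total_sales * 100
print(round(combined_rate, 1))  # 20.6 -- nowhere near the naive (20+80)/2 = 50
```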

Educational use only. All calculators on this page use standard mathematical formulas from academic and public domain sources. Content is reviewed for accuracy by the CalcMulti Editorial Team. For research, clinical, or professional decisions, verify results with qualified software and subject-matter expertise. Last updated: February 2026.