Z-Score Formula Guide

By CalcMulti Editorial Team · 10 min read

A z-score answers one question: how unusual is this value, relative to its distribution? By converting any raw value to a standard scale measured in standard deviation units, z-scores allow you to compare values from completely different datasets — test scores and heights, stock returns and lab measurements — on the same footing.

This guide covers the formula derivation, step-by-step calculation, z-table reading, percentile conversion, the empirical rule, and the critical decision of when to use z versus t.

Formula

z = (x − μ) / σ

Anatomy of the Z-Score Formula

The formula z = (x − μ) / σ has three components. The numerator (x − μ) is the deviation: how far the raw value x sits from the mean μ. If x is above the mean, the deviation is positive; if below, negative; if equal, zero.

The denominator σ is the standard deviation of the distribution. Dividing by σ scales the deviation into "standard deviation units" — making the result unit-free. A z-score of +1 always means "one standard deviation above the mean," regardless of whether the original units were centimetres, dollars, or milliseconds.

The result z tells you: (a) the direction — positive z means above average, negative means below average; (b) the magnitude — z = 2.0 means the value is farther from average than z = 0.5; (c) under a normal distribution — the exact percentile position the value occupies.

Step-by-step worked example: A student scores 82 on a maths exam. The class mean is μ = 70 and standard deviation is σ = 10. Step 1 — Deviation: x − μ = 82 − 70 = 12. Step 2 — Standardise: z = 12 / 10 = 1.2. Interpretation: the student scored 1.2 standard deviations above the class mean. Under a normal distribution, this corresponds to approximately the 88th percentile — scoring higher than about 88% of the class.
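The worked example above can be sketched in a few lines of Python. `NormalDist` from the standard library provides the standard normal CDF used for the percentile step:

```python
from statistics import NormalDist

def z_score(x, mu, sigma):
    """Standardise a raw value: deviation from the mean in sigma units."""
    return (x - mu) / sigma

# Worked example: exam score 82, class mean 70, standard deviation 10
z = z_score(82, mu=70, sigma=10)
percentile = NormalDist().cdf(z)  # standard normal CDF, Phi(z)

print(z)                      # 1.2
print(round(percentile, 4))   # 0.8849 -> roughly the 88th percentile
```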

Reading a Z-Table — Converting Z to Probability

A z-table (also called a standard normal table) gives the cumulative probability P(Z ≤ z) — the proportion of the normal distribution that falls at or below a given z-score. This is the area under the normal curve to the left of z.

Most z-tables give values for z between −3.49 and +3.49. To use one: look up the row for the first two digits of z, then the column for the second decimal place. For z = 1.2: row 1.2, column 0.00 → P(Z ≤ 1.20) = 0.8849, meaning 88.49% of values fall below z = 1.2.

From one table lookup you can derive three probabilities: (1) P(X ≤ x) = Φ(z) — the value from the table directly. (2) P(X ≥ x) = 1 − Φ(z) — the right tail. (3) P(−|z| ≤ Z ≤ |z|) = 2Φ(|z|) − 1 — the two-tailed probability, used in hypothesis testing.

For the student example (z = 1.2): P(X ≤ 82) = 0.8849 (88.49th percentile). P(X ≥ 82) = 1 − 0.8849 = 0.1151 (11.51% scored higher). P(60 ≤ X ≤ 80) at z = −1.0 and z = 1.0: Φ(1.0) − Φ(−1.0) = 0.8413 − 0.1587 = 0.6827 ≈ 68.3% (the empirical rule for ±1σ).
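All three derived probabilities can be checked without a printed table, using the standard normal CDF Φ from Python's statistics module:

```python
from statistics import NormalDist

phi = NormalDist().cdf  # standard normal CDF, Phi

z = 1.2
left = phi(z)                     # P(Z <= z), the direct table value
right = 1 - phi(z)                # P(Z >= z), the right tail
two_tailed = 2 * phi(abs(z)) - 1  # P(-|z| <= Z <= |z|)

print(round(left, 4), round(right, 4), round(two_tailed, 4))
# 0.8849 0.1151 0.7699
```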

The Empirical Rule (68–95–99.7 Rule)

For any approximately normal distribution, the empirical rule gives three key probability bounds based on z-scores:

|z| ≤ 1 (within one standard deviation of mean): 68.27% of values. Roughly two-thirds of any normal dataset falls within ±1σ of the mean.

|z| ≤ 2 (within two standard deviations): 95.45% of values. Only about 1 in 22 observations falls outside this range.

|z| ≤ 3 (within three standard deviations): 99.73% of values. Values with |z| > 3 are genuinely rare, occurring fewer than 3 times per 1,000 observations. In quality control, "six sigma" demands that the nearest specification limit sit six standard deviations from the process mean; the oft-quoted defect rate of 3.4 per million comes from allowing a 1.5σ drift in that mean.

The empirical rule is useful as a quick sanity check: if you calculate a z-score of 4.5 for a value that is supposed to come from a normal distribution, either the data point is an extraordinary outlier or your mean and standard deviation estimates are wrong.
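The empirical rule and the z = 4.5 sanity check can both be verified directly from the standard normal CDF:

```python
from statistics import NormalDist

phi = NormalDist().cdf

def tail_probability(z):
    """Two-sided tail probability P(|Z| > z) under a standard normal."""
    return 2 * (1 - phi(abs(z)))

# Empirical-rule checks: probability inside +/- 1, 2, 3 sigma
print(round(1 - tail_probability(1), 4))  # 0.6827
print(round(1 - tail_probability(2), 4))  # 0.9545
print(round(1 - tail_probability(3), 4))  # 0.9973

# The z = 4.5 sanity check: fewer than 1 in 100,000 normal observations
print(tail_probability(4.5))
```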

Z-Score Range   Probability Inside   Probability Outside   Frequency analogy
|z| ≤ 1.0       68.27%               31.73%                About 1 in 3 observations outside
|z| ≤ 1.645     90.00%               10.00%                1 in 10
|z| ≤ 1.960     95.00%               5.00%                 1 in 20
|z| ≤ 2.0       95.45%               4.55%                 About 1 in 22
|z| ≤ 2.576     99.00%               1.00%                 1 in 100
|z| ≤ 3.0       99.73%               0.27%                 About 1 in 370
|z| ≤ 4.0       99.9937%             0.0063%               About 1 in 15,787
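Every row of the table follows from the same CDF identity, P(|Z| ≤ z) = 2Φ(z) − 1. A short sketch that regenerates the rows:

```python
from statistics import NormalDist

phi = NormalDist().cdf

for z in (1.0, 1.645, 1.960, 2.0, 2.576, 3.0, 4.0):
    inside = 2 * phi(z) - 1   # P(|Z| <= z)
    outside = 1 - inside      # P(|Z| > z)
    print(f"|z| <= {z}: {inside:.4%} inside, about 1 in {1 / outside:,.0f} outside")
```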

Z-Score vs T-Score — When to Use Which

Both z and t standardise a value relative to a distribution. The critical difference is what you know about the population standard deviation σ.

Use a z-score when: (a) the population standard deviation σ is known (from historical data, specification sheets, or prior research); or (b) the sample is large (n ≥ 30), in which case the sample standard deviation s is a reliable enough estimate of σ that the distinction is negligible.

Use a t-score when: σ is unknown and must be estimated from the sample (the usual situation in research), AND the sample size is small (n < 30). The t-distribution has heavier tails than the normal distribution, accounting for the additional uncertainty introduced by estimating σ from limited data. As n increases, the t-distribution converges to the standard normal; for n = 30 (29 degrees of freedom), the two-tailed 95% critical values are 2.045 for t versus 1.960 for z, a gap of less than 0.1.

Practical rule: in real data analysis, σ is almost never known. Use t unless you have a specific reason to know σ (e.g., you are a manufacturer with a decades-long process record). Calculators and software can handle either; the choice matters most for small samples.
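A quick comparison of two-tailed 95% critical values illustrates the convergence. The z critical value comes from the inverse normal CDF; the t values below are standard-table figures hardcoded here because Python's standard library has no t-distribution (scipy.stats.t would compute them directly):

```python
from statistics import NormalDist

# Two-tailed 95% z critical value: the 97.5th percentile of the standard normal
z_crit = NormalDist().inv_cdf(0.975)  # about 1.960

# Two-tailed 95% t critical values from a standard t-table, keyed by df = n - 1
t_crit = {5: 2.571, 10: 2.228, 30: 2.042, 100: 1.984, 1000: 1.962}

for df, t in t_crit.items():
    print(f"df={df:>4}: t = {t:.3f}, z = {z_crit:.3f}, gap = {t - z_crit:.3f}")
```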

Real-World Applications of Z-Scores

Standardised testing (SAT, GRE, IQ): All are reported with reference to a normalised scale. An IQ of 130 corresponds to z = (130−100)/15 = +2.0 — the 97.7th percentile. A GRE Verbal score is converted to a percentile by computing z = (score − population mean) / σ and looking up Φ(z).

Quality control (Six Sigma): Manufacturing processes are evaluated by their z-score — how many standard deviations the process mean is from the nearest specification limit. A "six sigma" process has z = 6; with the conventional allowance for a 1.5σ shift in the process mean, this corresponds to 3.4 defects per million opportunities. Z-scores quantify process capability.

Finance — identifying unusual returns: If a stock fund has a mean monthly return of 1.2% and standard deviation of 3.5%, a month with −6% return has z = (−6 − 1.2)/3.5 = −2.06 — an unusually bad month at the 2nd percentile. Z-scores help distinguish genuine shocks from routine volatility.

Medical reference ranges: Lab test reference ranges are usually set at the 2.5th to 97.5th percentiles (±1.96 standard deviations). A result outside this range has |z| > 1.96 and is flagged as abnormal — not because it cannot occur naturally, but because it is rare enough to warrant investigation.
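The flagging logic described above reduces to a one-line threshold test. A minimal sketch, using a made-up analyte with hypothetical reference statistics:

```python
from statistics import NormalDist

def flag_abnormal(x, mu, sigma, z_cutoff=1.96):
    """Flag a result whose z-score falls outside the +/- 1.96 sigma reference range."""
    z = (x - mu) / sigma
    return z, abs(z) > z_cutoff

# Hypothetical analyte: reference population mean 140, standard deviation 4
z, abnormal = flag_abnormal(150, mu=140, sigma=4)
print(round(z, 2), abnormal)  # 2.5 True
```

The 1.96 cutoff corresponds exactly to the 2.5th–97.5th percentile reference range described above; tightening or loosening it trades sensitivity against false alarms.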


Educational use only. Content is based on publicly documented mathematical formulas and reviewed for accuracy by the CalcMulti Editorial Team. Last updated: February 2026.