One-Way ANOVA Calculator
Reviewed by CalcMulti Editorial Team
One-way ANOVA (Analysis of Variance) tests whether the means of three or more independent groups are significantly different. It extends the two-sample t-test to multiple groups without inflating the Type I error rate — comparing all groups simultaneously with a single F-test.
Enter your group data (one group per row, values comma-separated) to compute the full ANOVA table: sum of squares (SS), degrees of freedom (df), mean squares (MS), F-statistic, p-value, and eta-squared effect size.
How ANOVA works: it partitions total variability in the data into two sources. Between-group variance (MS_between) measures how much the group means differ from the grand mean — this is the "signal." Within-group variance (MS_within), also called error or residual variance, measures how much individuals vary within each group — this is the "noise." The F-statistic = MS_between / MS_within. A large F means the between-group differences are large relative to random variation within groups.
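The partition described above can be computed directly. Here is a minimal pure-Python sketch of the full ANOVA table up to the F-statistic (the group data is invented purely for illustration):

```python
from math import fsum

# Hypothetical example: three groups, one list per group,
# mirroring the calculator's row-based input.
groups = [
    [72.0, 75.0, 70.0, 68.0, 74.0],
    [81.0, 79.0, 84.0, 80.0, 77.0],
    [78.0, 76.0, 80.0, 75.0, 79.0],
]

k = len(groups)                         # number of groups
n_total = sum(len(g) for g in groups)   # total observations N
grand_mean = fsum(x for g in groups for x in g) / n_total

# Between-group SS: how far each group mean sits from the grand mean ("signal")
ss_between = fsum(
    len(g) * (fsum(g) / len(g) - grand_mean) ** 2 for g in groups
)
# Within-group SS: spread of individuals around their own group mean ("noise")
ss_within = fsum(
    (x - fsum(g) / len(g)) ** 2 for g in groups for x in g
)

df_between = k - 1
df_within = n_total - k

ms_between = ss_between / df_between
ms_within = ss_within / df_within
f_stat = ms_between / ms_within  # large F => group means differ beyond noise
```

Note the identity SS_between + SS_within = SS_total (the total squared deviation from the grand mean), which is the "partition" the paragraph above refers to.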
ANOVA assumptions: (1) Independence — observations within and across groups are independent. (2) Normality — the data within each group are approximately normally distributed (less critical with larger sample sizes due to the Central Limit Theorem). (3) Homogeneity of variance — group variances are approximately equal (Levene's test checks this). When normality or equal variance assumptions are violated, use the Kruskal-Wallis test (the non-parametric alternative to one-way ANOVA).
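When the normality or equal-variance assumptions fail, the Kruskal-Wallis test replaces raw values with ranks. A minimal sketch of the H statistic (mid-ranks for ties, no tie correction applied; compare H against a chi-squared distribution with k − 1 degrees of freedom):

```python
def kruskal_wallis_h(groups):
    """Kruskal-Wallis H statistic, the rank-based alternative to one-way
    ANOVA. Minimal sketch: tied values get their average (mid) rank."""
    data = sorted((x, gi) for gi, g in enumerate(groups) for x in g)
    n = len(data)
    # Assign 1-based mid-ranks, averaging over runs of tied values.
    ranks = [0.0] * n
    i = 0
    while i < n:
        j = i
        while j + 1 < n and data[j + 1][0] == data[i][0]:
            j += 1
        mid = (i + j) / 2 + 1
        for t in range(i, j + 1):
            ranks[t] = mid
        i = j + 1
    # Sum the ranks falling into each group.
    rank_sums = [0.0] * len(groups)
    counts = [len(g) for g in groups]
    for (_, gi), r in zip(data, ranks):
        rank_sums[gi] += r
    return 12.0 / (n * (n + 1)) * sum(
        rs * rs / c for rs, c in zip(rank_sums, counts)
    ) - 3 * (n + 1)
```

If every observation is identical, all ranks tie and H is exactly 0, matching the intuition that there is no group separation to detect.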
What a significant ANOVA result means: a significant p-value (typically p < 0.05) tells you that at least one group mean is different from the others — but it does not tell you which groups differ. Post-hoc tests (Tukey HSD, Bonferroni, Scheffé) are needed to identify pairwise differences while controlling the family-wise error rate. Effect size eta-squared (η²) = SS_between / SS_total indicates practical significance: η² ≈ 0.01 is small, 0.06 is medium, 0.14 is large.
Formula
F = MS_between / MS_within | MS_between = SS_between / df_between | MS_within = SS_within / df_within | df_between = k − 1 | df_within = N − k (k groups, N total observations)
Enter each group on a separate row (comma or space separated). Minimum 3 groups, minimum 2 values per group.
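A hypothetical parser for that input format, enforcing the stated minimums (the function name and error messages are illustrative, not the calculator's actual code):

```python
def parse_groups(text):
    """Parse one group per line; values may be separated by commas,
    spaces, or both. Illustrative helper, not the calculator's code."""
    groups = []
    for line in text.strip().splitlines():
        values = [float(tok) for tok in line.replace(",", " ").split()]
        if values:
            groups.append(values)
    if len(groups) < 3:
        raise ValueError("need at least 3 groups")
    if any(len(g) < 2 for g in groups):
        raise ValueError("each group needs at least 2 values")
    return groups
```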
After a Significant ANOVA — Post-Hoc Tests
| Post-hoc test | When to use | Controls |
|---|---|---|
| Tukey's HSD | All pairwise comparisons, equal n | Familywise error rate |
| Bonferroni | Any comparisons, conservative | Familywise error rate (α/k) |
| Dunnett's | All groups vs one control group | Familywise error rate |
| Scheffé's | Complex contrasts (not just pairwise) | Familywise error rate |
| Games-Howell | Unequal variances or unequal n | FWE without equal variance assumption |
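The Bonferroni row is simple enough to sketch: with all pairwise comparisons among n groups there are k = C(n, 2) tests, and each is run at significance level α / k. A minimal helper (the name is illustrative):

```python
from math import comb

def bonferroni_alpha(alpha, n_groups):
    """Bonferroni-adjusted per-comparison alpha for all pairwise tests.
    k = C(n_groups, 2) comparisons, each tested at alpha / k."""
    k = comb(n_groups, 2)
    return alpha / k, k
```

For three groups this gives 3 comparisons, each tested at 0.05 / 3 ≈ 0.0167, which is why Bonferroni is described as conservative: the per-test threshold shrinks quickly as groups are added.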
Case Study: Teaching Method Comparison
An education researcher compared test scores across three teaching methods: Traditional (n=25, mean=72), Flipped Classroom (n=25, mean=81), Problem-Based Learning (n=25, mean=78). Total N=75.
ANOVA result: F(2,72)=8.34, p=0.0005, η²=0.19 (large effect). Conclusion: at least one teaching method significantly outperforms the others. The large η² means 19% of variance in test scores is attributable to teaching method.
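The reported η² can be checked from the F statistic alone, since η² = SS_between / SS_total = (F · df_between) / (F · df_between + df_within). A quick consistency check of the case-study numbers:

```python
def eta_squared_from_f(f_stat, df_between, df_within):
    """Recover eta-squared from a reported F and its degrees of freedom:
    eta^2 = SS_b / (SS_b + SS_w) = (F * df_b) / (F * df_b + df_w),
    which follows from F = (SS_b / df_b) / (SS_w / df_w)."""
    fd = f_stat * df_between
    return fd / (fd + df_within)
```

Plugging in F(2, 72) = 8.34 gives η² ≈ 0.188, matching the reported 0.19.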
Post-hoc Tukey HSD revealed: Flipped vs Traditional was significant (p=0.001); PBL vs Traditional was significant (p=0.03); Flipped vs PBL was not significant (p=0.21). The school adopted the Flipped Classroom method because it had the highest mean score and the strongest significant advantage over the Traditional method.
Disclaimer
This calculator is for educational purposes only and does not constitute professional advice. Results are based on standard mathematical formulas. Always verify critical calculations with a qualified professional before making important decisions.