Effect Size: What It Is, Why It Matters, and How to Calculate It

April 1, 2026 · 3 min read · By Angel Reyes

If you've ever had a committee member say "but is that result meaningful?" after you proudly reported a significant p-value, they were asking about effect size. And they were right to ask.

Statistical significance tells you whether an effect exists. Effect size tells you whether that effect is big enough to matter. For your dissertation, you need both.

What Is Effect Size?

Effect size is a standardized measure of how large a difference or relationship is. Unlike p-values, effect sizes don't depend on sample size. A tiny, trivial difference can be "statistically significant" if you have thousands of participants. Effect size cuts through that noise and tells you the magnitude of what you found.

Common Effect Size Measures

Cohen's d (Comparing Two Group Means)

Cohen's d measures the difference between two group means in standard deviation units. For example, if your tutoring group scored 0.8 standard deviations higher than the control group, Cohen's d = 0.8.

Benchmarks:

  • Small: d = 0.2
  • Medium: d = 0.5
  • Large: d = 0.8

These benchmarks come from Jacob Cohen's original 1988 guidelines. Use them as rough guides, not absolute rules — a "small" effect in one field might be considered large in another. See our guide on Hedges' g vs. Cohen's d for when to use each.

Eta-Squared (ANOVA)

Eta-squared (η²) tells you what proportion of the total variance in your dependent variable is explained by your independent variable. If η² = 0.06, your grouping variable explains 6% of the variance in scores.

Benchmarks:

  • Small: η² = 0.01
  • Medium: η² = 0.06
  • Large: η² = 0.14
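
To make the definition concrete, η² can be computed directly as the between-groups sum of squares divided by the total sum of squares. Here is a minimal Python sketch; the three groups and their values are invented for illustration:

```python
from statistics import mean

def eta_squared(groups):
    """Eta-squared for a one-way ANOVA layout: SS_between / SS_total."""
    all_scores = [x for g in groups for x in g]
    grand_mean = mean(all_scores)
    # Between-groups SS: how far each group mean sits from the grand mean
    ss_between = sum(len(g) * (mean(g) - grand_mean) ** 2 for g in groups)
    # Total SS: how far every score sits from the grand mean
    ss_total = sum((x - grand_mean) ** 2 for x in all_scores)
    return ss_between / ss_total

groups = [[2, 3, 4], [4, 5, 6], [6, 7, 8]]  # three hypothetical conditions
print(eta_squared(groups))  # 0.8: grouping explains 80% of the variance
```

With real data the value will rarely be this large; the point is only that the ratio of the two sums of squares is the whole calculation.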

R-Squared (Regression)

R² tells you the proportion of variance in the outcome explained by your predictor(s). If R² = 0.35, your model explains 35% of the variance. You'll encounter this in every regression analysis.
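
The definition can be sketched by hand for a one-predictor model: fit the line, then compute one minus the residual sum of squares over the total sum of squares. The x and y values below are made up purely for illustration:

```python
from statistics import mean

def r_squared(x, y):
    """R² for simple linear regression: 1 - SS_residual / SS_total."""
    x_bar, y_bar = mean(x), mean(y)
    # Least-squares slope and intercept
    slope = (sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y))
             / sum((xi - x_bar) ** 2 for xi in x))
    intercept = y_bar - slope * x_bar
    predicted = [intercept + slope * xi for xi in x]
    ss_res = sum((yi - pi) ** 2 for yi, pi in zip(y, predicted))
    ss_tot = sum((yi - y_bar) ** 2 for yi in y)
    return 1 - ss_res / ss_tot

x = [1, 2, 3, 4, 5]              # hypothetical predictor
y = [2, 4, 5, 4, 5]              # hypothetical outcome
print(round(r_squared(x, y), 2))  # 0.6: the model explains 60% of the variance
```

In practice your software reports R² for you; the sketch just shows what the number means.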

Pearson's r (Correlation)

The correlation coefficient itself is an effect size measure. Benchmarks: small = 0.10, medium = 0.30, large = 0.50.

How to Calculate Effect Size

Most statistical software reports effect sizes automatically — or can with a quick setting change. In SPSS, check the "Estimate effect sizes" box in your analysis dialog. In R, packages like effectsize or rstatix calculate them for you.

You can also calculate Cohen's d by hand:

d = (Mean₁ - Mean₂) / Pooled Standard Deviation

where the pooled standard deviation is the square root of the weighted average of the two sample variances: √[((n₁ - 1)s₁² + (n₂ - 1)s₂²) / (n₁ + n₂ - 2)].
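
A quick Python sketch of this hand calculation, using scores invented for illustration:

```python
from math import sqrt
from statistics import mean, variance

def cohens_d(group1, group2):
    """Cohen's d: mean difference in pooled-standard-deviation units."""
    n1, n2 = len(group1), len(group2)
    s1, s2 = variance(group1), variance(group2)  # sample variances (n - 1)
    pooled_sd = sqrt(((n1 - 1) * s1 + (n2 - 1) * s2) / (n1 + n2 - 2))
    return (mean(group1) - mean(group2)) / pooled_sd

tutoring = [85, 90, 88, 92, 95]  # hypothetical treatment-group scores
control = [80, 82, 78, 85, 81]   # hypothetical control-group scores
print(round(cohens_d(tutoring, control), 2))  # 2.7
```

An effect this large is rare in real behavioral data; with your own numbers, expect something closer to the benchmark range above.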

Why Your Committee Cares

The American Psychological Association (APA) has recommended reporting effect sizes since the 6th edition of the Publication Manual. Most dissertation committees now require them. Without effect sizes, your results chapter is incomplete.

More importantly, effect sizes help you answer the real question your research is asking: did the intervention, program, or variable make a meaningful difference — not just a detectable one?

What to Do Next

  1. Identify which effect size measure matches your analysis (Cohen's d for t-tests, η² for ANOVA, R² for regression).
  2. Report the effect size alongside every inferential test in your results.
  3. Interpret the size using benchmarks and your field's context.
  4. Run a power analysis using your expected effect size before you collect data.
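
Step 4 is usually done in dedicated software (G*Power, or statsmodels' TTestIndPower in Python), but the textbook normal-approximation formula gives a quick sanity check. This sketch assumes a two-sided, two-sample t-test; real power software adds a small correction for the t distribution, so its answer will be slightly larger:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.80):
    """Approximate per-group n for a two-sided two-sample t-test,
    via the normal approximation: n = 2 * (z_alpha/2 + z_beta)^2 / d^2."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # 1.96 for alpha = .05
    z_beta = NormalDist().inv_cdf(power)           # 0.84 for 80% power
    return ceil(2 * (z_alpha + z_beta) ** 2 / d ** 2)

print(n_per_group(0.5))  # 63 per group (power software reports ~64)
```

Notice how sensitive the answer is to the expected effect size: halving d from 0.5 to 0.25 roughly quadruples the required sample, which is why an honest effect-size estimate matters before data collection.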