Stats for Scholars

Sample Size Determination

Intermediate · Effect Size & Power

Purpose
Calculates the number of participants needed for a study to have sufficient statistical power to detect a meaningful effect.
When to Use
During study planning, before data collection begins. Required for grant applications, IRB proposals, and dissertation prospectuses.
Data Type
Requires estimates of effect size, desired power, and alpha level — applicable across all quantitative research designs.
Key Assumptions
Assumes you have a reasonable estimate of the expected effect size. Results depend on the specific statistical test planned and the study design.
Tools
Sample Size & Power Analysis Calculator on Subthesis →

What Is Sample Size Determination?

Sample size determination is the process of calculating how many participants you need to recruit for your study before you begin collecting data. It ensures your study is large enough to detect a meaningful effect (if one exists) while avoiding unnecessary cost and effort from over-recruiting.

Getting the sample size right is one of the most important decisions in research design. Too small, and your study lacks the statistical power to find real effects. Too large, and you waste time, money, and participant goodwill.

"How many participants do I need?" is the single most common statistics question researchers ask. The answer is never a guess — it's a calculation.

When to Use It

Sample size determination should happen before you collect any data, during the study design phase. You need it when:

  • Writing a grant proposal (funders require justification for sample size)
  • Submitting to an IRB/ethics board (to ensure you're not exposing unnecessary participants to risk)
  • Planning a dissertation or thesis study
  • Designing a clinical trial (regulatory agencies require formal power calculations)
  • Planning any study where you want to make a credible statistical inference

The Inputs You Need

To calculate sample size, you need four pieces of information. Three are chosen by the researcher; the fourth is what you solve for:

Input                        Typical Value    Your Choice
Significance level (α)       .05              How much Type I error risk you accept
Power (1 − β)                .80 or .90       How much Type II error risk you accept
Effect size (d, f, r, etc.)  Varies           The smallest effect worth detecting
Sample size (n)              Solve for this   What you need to calculate
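The two z-values implied by your α and power choices can be looked up rather than memorized. A minimal sketch using only the Python standard library (`statistics.NormalDist`):

```python
from statistics import NormalDist

z = NormalDist()  # standard normal distribution

# Critical z for a two-tailed significance level of .05
print(round(z.inv_cdf(1 - 0.05 / 2), 2))  # -> 1.96

# Critical z for the desired power
print(round(z.inv_cdf(0.80), 2))  # 80% power -> 0.84
print(round(z.inv_cdf(0.90), 2))  # 90% power -> 1.28
```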

Formula

For an Independent Samples t-Test (Two Groups, Equal Size)

n = \frac{2\left(z_{1-\alpha/2} + z_{1-\beta}\right)^2}{d^2}

Where:

  • n = required sample size per group
  • z_{1−α/2} = critical z-value for your significance level (1.96 for α = .05, two-tailed)
  • z_{1−β} = critical z-value for your desired power (0.84 for 80% power; 1.28 for 90% power)
  • d = expected Cohen's d effect size
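The formula above translates directly into code. A sketch in Python (standard library only; the function name is ours, not from any statistics package):

```python
from math import ceil
from statistics import NormalDist

def n_per_group_ttest(d: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """z-approximation of per-group n for a two-tailed independent t-test."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # 1.96 for alpha = .05, two-tailed
    z_beta = z.inv_cdf(power)           # 0.84 for 80% power
    return ceil(2 * (z_alpha + z_beta) ** 2 / d ** 2)

print(n_per_group_ttest(0.5))  # medium effect -> 63 per group
```

Because this uses the normal rather than the noncentral t distribution, it can land a participant or two below exact software (G*Power reports 64 per group for d = 0.50); treat it as a planning estimate.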

For a Paired Samples t-Test

n = \frac{\left(z_{1-\alpha/2} + z_{1-\beta}\right)^2}{d_z^2}

Where d_z is the effect size for the difference scores (the mean difference divided by the standard deviation of the differences). Note that you need only one group, so n is the total sample size.

For One-Way ANOVA (k Groups)

N = \frac{k\left(z_{1-\alpha/2} + z_{1-\beta}\right)^2}{f^2}

Where f is Cohen's f effect size and N is the total sample size across all k groups.

For Chi-Square Test of Independence

N = \frac{\left(z_{1-\alpha/2} + z_{1-\beta}\right)^2}{w^2}

Where w is Cohen's w effect size for chi-square tests. This z-approximation corresponds to 1 degree of freedom (a 2×2 table); tests with more degrees of freedom require a noncentral chi-square calculation, as implemented in software such as G*Power.
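The ANOVA and chi-square formulas differ from the t-test case only in the numerator multiplier and the effect-size metric, so they can share most of their code. A Python sketch of both (function names are ours):

```python
from math import ceil
from statistics import NormalDist

def _z_sum_sq(alpha: float = 0.05, power: float = 0.80) -> float:
    """(z_{1-alpha/2} + z_{1-beta})^2, the numerator term shared by all formulas."""
    z = NormalDist()
    return (z.inv_cdf(1 - alpha / 2) + z.inv_cdf(power)) ** 2

def total_n_anova(k: int, f: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """z-approximation of total N across k groups for one-way ANOVA."""
    return ceil(k * _z_sum_sq(alpha, power) / f ** 2)

def total_n_chisq(w: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """z-approximation of total N for a chi-square test with 1 df."""
    return ceil(_z_sum_sq(alpha, power) / w ** 2)

print(total_n_anova(3, 0.25))  # medium f -> 377
print(total_n_chisq(0.30))     # medium w -> 88
```

These are rough planning figures; exact software uses the noncentral F and chi-square distributions instead of the normal approximation.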

Quick Reference Table

For α = .05 (two-tailed) and power = .80, the per-group sample size for an independent samples t-test (values from exact noncentral-t calculations, as in G*Power) is:

Cohen's d       n per group
0.20 (small)    394
0.50 (medium)   64
0.80 (large)    26

Assumptions

  1. You have a reasonable effect size estimate. This is the hardest part. Options include:
    • Results from a pilot study
    • Meta-analyses in your field
    • The smallest effect size of interest (SESOI) — the smallest effect that would be practically meaningful
  2. You know which statistical test you will use. Different tests have different formulas.
  3. Your design matches the formula. Equal vs. unequal group sizes, number of groups, number of predictors, etc.
  4. You plan for attrition. The formula gives you the minimum needed for analysis; recruit extra to compensate for dropouts.

Worked Example

Scenario: An educational researcher wants to compare reading comprehension scores across three teaching methods (phonics-based, whole-language, and blended). A recent meta-analysis suggests a medium effect (f = 0.25). She wants 80% power at α = .05.

Step 1: Identify parameters.

  • k = 3 groups
  • f = 0.25 (medium effect, Cohen's f)
  • α = .05, so z_{0.975} = 1.96
  • Power = .80, so z_{0.80} = 0.84

Step 2: Apply the formula.

N=k(z1−α/2+z1−β)2f2=3(1.96+0.84)20.252N = \frac{k(z_{1-\alpha/2} + z_{1-\beta})^2}{f^2} = \frac{3(1.96 + 0.84)^2}{0.25^2} N=f2k(z1−α/2​+z1−β​)2​=0.2523(1.96+0.84)2​

N = \frac{3(2.80)^2}{0.0625} = \frac{3 \times 7.84}{0.0625} = \frac{23.52}{0.0625} = 376.32

Step 3: Round up and distribute across groups.

N = 378 \text{ total} \quad (126 \text{ per group})

Step 4: Adjust for attrition. Expecting 10% dropout:

N_{\text{adjusted}} = \frac{378}{1 - 0.10} = \frac{378}{0.90} = 420 \text{ total} \quad (140 \text{ per group})

Interpretation: By this approximation, the researcher should plan for approximately 140 participants per group (420 total) to have 80% power for detecting a medium effect across three groups, accounting for 10% attrition.

Note: This approximation is useful for back-of-the-envelope planning, but it is rough for ANOVA. For precise calculations, use software such as G*Power, which uses the exact noncentral F distribution. For this scenario, G*Power yields a total sample of N = 159 (53 per group); the z-approximation substantially overestimates the requirement for ANOVA designs.
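The arithmetic in Steps 2 through 4 can be checked in a few lines of Python, using the same rounded z-values as above:

```python
from math import ceil

k, f = 3, 0.25                # three groups, medium Cohen's f
z_alpha, z_beta = 1.96, 0.84  # alpha = .05 (two-tailed), power = .80

# Step 2: z-approximation of total N
N = k * (z_alpha + z_beta) ** 2 / f ** 2
print(round(N, 2))  # -> 376.32

# Step 3: round up so every group gets a whole number of participants
per_group = ceil(N / k)
total = per_group * k
print(total, per_group)  # -> 378 126

# Step 4: inflate for 10% expected attrition
adjusted_total = ceil(total / (1 - 0.10))
print(adjusted_total, adjusted_total // k)  # -> 420 140
```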

Interpretation

A sample size calculation tells you the minimum number of participants needed under your stated assumptions. Key points for interpretation:

  • The result is only as good as your effect size estimate. If the true effect is smaller than assumed, your study will be underpowered.
  • Always round up to the next whole number — you cannot have 62.7 participants.
  • For designs with groups, ensure each group meets the minimum, not just the total.
  • Consider running a sensitivity analysis: calculate the required n for a range of plausible effect sizes (e.g., d = 0.30, 0.50, 0.70) to see how sensitive your design is.
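The sensitivity analysis suggested above takes only a loop. A Python sketch using the independent t-test z-approximation from the formula section (function name is ours):

```python
from math import ceil
from statistics import NormalDist

def n_per_group(d: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """z-approximation of per-group n for a two-tailed independent t-test."""
    z = NormalDist()
    return ceil(2 * (z.inv_cdf(1 - alpha / 2) + z.inv_cdf(power)) ** 2 / d ** 2)

for d in (0.30, 0.50, 0.70):
    print(f"d = {d:.2f}: n = {n_per_group(d)} per group")
```

If the required n balloons at the low end of plausible effects (175 per group at d = 0.30 versus 63 at d = 0.50), that shows how much your design depends on the effect-size estimate.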

Common Mistakes

  1. Using "medium" as a default without justification. Reviewers will ask why you expect a medium effect. Cite prior research or explain your reasoning.
  2. Calculating sample size after data collection. This is backwards. Post hoc sample size calculations are meaningless — the data are already collected.
  3. Forgetting the design details. A repeated-measures design needs fewer participants than a between-subjects design. A 2x3 factorial ANOVA needs a different calculation than a one-way ANOVA.
  4. Ignoring attrition, missing data, and exclusions. Always inflate your target n to account for real-world data loss.
  5. Confusing per-group and total sample size. If the formula gives n = 64 per group and you have two groups, you need 128 total, not 64 total.
  6. Not specifying directionality. One-tailed tests require smaller samples than two-tailed tests, but they are harder to justify. Be clear about which you are using.
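Mistake 6 is easy to quantify: a one-tailed test changes only the alpha quantile in the formula. A small sketch (independent t-test, z-approximation; the keyword argument is our own naming):

```python
from math import ceil
from statistics import NormalDist

def n_per_group(d: float, alpha: float = 0.05, power: float = 0.80,
                two_tailed: bool = True) -> int:
    """z-approximation of per-group n for an independent t-test."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2) if two_tailed else z.inv_cdf(1 - alpha)
    return ceil(2 * (z_alpha + z.inv_cdf(power)) ** 2 / d ** 2)

print(n_per_group(0.5, two_tailed=True))   # -> 63
print(n_per_group(0.5, two_tailed=False))  # -> 50
```

The one-tailed version needs fewer participants, but as noted above, it is harder to justify to reviewers.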

How to Report in APA Format

In the Method section, under Participants or Data Analysis Plan:

A priori sample size estimation was performed using G*Power 3.1 (Faul et al., 2009). For a one-way ANOVA with three groups, a medium effect size (f = 0.25), α = .05, and power = .80, the minimum required sample was 159 participants (53 per group). Anticipating approximately 10% attrition, we targeted 177 participants (59 per group).

For a two-group design:

Based on prior research indicating a medium effect (d = 0.50; Smith & Jones, 2024), a power analysis for an independent samples t-test (two-tailed, α = .05, power = .80) indicated a minimum of 64 participants per group. We recruited 75 per group to account for potential attrition.

Ready to calculate?

Now that you understand the concept, use the free Sample Size & Power Analysis Calculator on Subthesis to run your own analysis.

Calculate Your Sample Size on Subthesis

Related Concepts

Statistical Power & Power Analysis

Learn what statistical power is, why 80% is the standard threshold, and how to conduct a power analysis to determine if your study can detect real effects.

Effect Size

Learn what effect size is, why it matters more than p-values alone, and how to calculate and interpret Cohen's d, Hedges' g, and eta-squared for your research.

Independent Samples t-Test

Learn how to conduct and interpret an independent samples t-test, including assumptions, formulas, worked examples, and APA reporting guidelines.

One-Way ANOVA

Learn how to conduct a one-way ANOVA to compare three or more group means, including F-ratio formulas, post-hoc tests, and effect size with eta-squared.

© 2026 Angel Reyes / Subthesis. All rights reserved.