How Many Participants Do You Really Need?
"How many participants do I need?" is one of the first questions every dissertation student asks — and one of the most important. Too few participants and you won't detect real effects. Too many and you've wasted time and resources collecting data you didn't need.
The answer depends on your research design, but there are clear methods for figuring it out.
The Short Answer: It Depends on Your Analysis
Different statistical tests require different minimum sample sizes. Here are some practical guidelines:
| Analysis | Minimum Rule of Thumb |
|---|---|
| Independent t-test | 30 per group (60 total) |
| Paired t-test | 30 pairs |
| One-way ANOVA (3 groups) | 20 per group (60 total) |
| Chi-square | 5 expected per cell |
| Pearson correlation | 30+ |
| Simple regression | 50+ |
| Multiple regression | 50 + 8 × number of predictors |
| Factor analysis | 10 per item, or 300+ overall |
These are starting points, not guarantees. The real answer comes from a power analysis.
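The formula-style rows of the table can be turned into a quick calculator. A minimal Python sketch that encodes just the two rules above; the function names are my own, and reading the factor-analysis "or" as "whichever is larger" is an assumption:

```python
def min_n_multiple_regression(num_predictors: int) -> int:
    """Rule of thumb from the table: 50 + 8 x number of predictors."""
    return 50 + 8 * num_predictors

def min_n_factor_analysis(num_items: int) -> int:
    """Rule of thumb: 10 participants per item, with a floor of 300."""
    return max(10 * num_items, 300)

# A regression with 6 predictors: 50 + 8 * 6 = 98 participants.
print(min_n_multiple_regression(6))   # 98
# A 25-item scale: max(250, 300) = 300 participants.
print(min_n_factor_analysis(25))      # 300
```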
Why Power Analysis Is the Right Answer
A power analysis mathematically determines your required sample size based on three inputs:
- Expected effect size — how large you expect the difference or relationship to be
- Significance level (alpha) — typically 0.05
- Desired statistical power — typically 0.80
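For a two-group comparison, these three inputs combine in a closed form: n per group ≈ 2 × ((z for alpha + z for power) / d)². A minimal Python sketch using the normal approximation; dedicated tools like G*Power or statsmodels use the noncentral t distribution and give slightly larger answers, so treat this as a planning estimate, not a formal justification:

```python
import math
from statistics import NormalDist

def n_per_group(effect_size: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate per-group n for an independent-samples t-test.

    Normal approximation: n = 2 * ((z_{1-alpha/2} + z_{power}) / d) ** 2.
    """
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # 1.96 for alpha = .05
    z_power = z.inv_cdf(power)           # 0.84 for power = .80
    return math.ceil(2 * ((z_alpha + z_power) / effect_size) ** 2)

# A medium effect (d = 0.5) at alpha = .05 and power = .80:
print(n_per_group(0.5))  # 63 per group (G*Power's exact t-based answer is 64)
```

Notice how sensitive the answer is to effect size: halving d roughly quadruples the required n, which is why grounding your expected effect size in the literature matters so much.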
This is the method your committee wants to see in your proposal. Rules of thumb are fine for planning, but your formal justification should come from a power analysis.
What Happens When Your Sample Is Too Small
With an underpowered study, you risk:
- False negatives (Type II errors) — concluding there's no effect when there actually is one
- Unstable estimates — your sample statistics won't reliably reflect the population
- Inflated effect sizes — small studies that do find significance tend to overestimate the true effect
- Committee pushback — reviewers will question whether your non-significant results are meaningful
What Happens When Your Sample Is Too Large
Yes, this is a real problem. With thousands of participants, trivially small effects become statistically significant. A half-point difference on a 100-point scale might produce p < .001, but that doesn't make it practically significant. You'll spend more time collecting and managing data for diminishing returns.
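To see this numerically, suppose that half-point difference sits on a scale with a standard deviation of 10 points, i.e. d = 0.05. A sketch using a two-sample z-test (the sample sizes are hypothetical numbers chosen to illustrate the point):

```python
from math import sqrt
from statistics import NormalDist

def two_sample_p(effect_size: float, n_per_group: int) -> float:
    """Two-sided p-value for a two-sample z-test when the true effect is d."""
    z = effect_size * sqrt(n_per_group / 2)
    return 2 * (1 - NormalDist().cdf(z))

# The same tiny effect (d = 0.05) at two very different sample sizes:
print(two_sample_p(0.05, 30))      # ~0.85 -- nowhere near significant
print(two_sample_p(0.05, 20_000))  # < .001 -- "significant", but still trivial
```

The effect did not change between the two lines; only the sample size did. That is exactly why effect sizes, not p-values, should carry the practical interpretation.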
Practical Strategies for Getting Enough Participants
Most graduate students struggle with too few participants, not too many. Some strategies:
- Cast a wider net. Can you recruit from multiple sites, schools, or organizations?
- Extend your recruitment window. Ask your IRB for a longer data collection period.
- Use online surveys. Tools like Qualtrics or SurveyMonkey can dramatically increase your reach.
- Offer incentives. Gift cards, extra credit, or raffle entries boost response rates.
- Reduce attrition. Send reminders, keep surveys short, and make participation easy.
What If You Simply Can't Get Enough?
Sometimes access is limited — maybe you're studying a rare population or a single school. If you can't meet the ideal sample size:
- Be transparent. Acknowledge the limitation in your methods and discussion chapters.
- Report effect sizes. Even with a small sample, effect sizes provide useful information.
- Consider your design. Repeated measures designs (same participants measured multiple times) require fewer participants than between-groups designs.
- Discuss post-hoc power. After analysis, you can report observed power, though be aware that many statisticians argue post-hoc power computed from the observed effect size adds nothing beyond the p-value itself, so check your committee's expectations first.
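The design point above can be made concrete. In a paired design, the effective effect size is dz = d / sqrt(2(1 − r)), where r is the correlation between the two measurements, so a positive correlation shrinks the required n. A rough normal-approximation sketch; the r = 0.5 value below is an assumption for illustration, not a property of any particular study:

```python
import math
from statistics import NormalDist

def pairs_needed(d: float, r: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate number of pairs for a paired t-test.

    d is the between-condition effect size; r is the correlation
    between the repeated measurements. Normal approximation.
    """
    dz = d / math.sqrt(2 * (1 - r))  # convert d to the paired effect size
    z = NormalDist()
    z_sum = z.inv_cdf(1 - alpha / 2) + z.inv_cdf(power)
    return math.ceil((z_sum / dz) ** 2)

# d = 0.5 with r = 0.5 between measurements:
print(pairs_needed(0.5, 0.5))  # 32 pairs, versus roughly 126 total between groups
```

The stronger the correlation between measurements, the bigger the saving, which is why repeated-measures designs are so attractive when recruitment is hard.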
The Bottom Line
Don't pick a number out of thin air, and don't just copy the sample size from a study you liked. Run a power analysis, ground your expected effect size in the literature, and plan for attrition. Your proposal will be stronger, your committee will be satisfied, and your results will be trustworthy.
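Planning for attrition is simple arithmetic: divide the analyzable n from your power analysis by the expected completion rate, and recruit that many. A minimal sketch; the 63-per-group target and 20% dropout figure are assumed values for illustration:

```python
import math

def recruit_n(analyzable_n: int, expected_attrition: float) -> int:
    """Inflate a target sample size to cover expected dropout."""
    return math.ceil(analyzable_n / (1 - expected_attrition))

# Power analysis says 63 analyzable participants per group; expect 20% dropout:
print(recruit_n(63, 0.20))  # recruit 79 per group
```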