The 5 Most Common Statistical Mistakes in Dissertation Research
After reviewing hundreds of dissertation drafts, certain statistical mistakes come up again and again. The good news is they're all preventable. Here are the five most common errors — and how to avoid each one.
Mistake #1: Not Checking Assumptions
Every parametric statistical test rests on assumptions; depending on the test, these include normality, homogeneity of variance, independence, and linearity. When these assumptions are violated, your results may be unreliable. Yet many students skip assumption checking entirely and go straight to running their analysis.
How to fix it: Before every inferential test, run the appropriate assumption checks. Test normality with Shapiro-Wilk and visual inspection (histograms, Q-Q plots). Check homogeneity of variance with Levene's test. Examine scatterplots for linearity. Document all of this in your results chapter. See our complete guide on what to do when assumptions are violated.
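If your analysis lives in Python, these checks take only a few lines with scipy and matplotlib. Here's a minimal sketch; the group_a and group_b arrays are simulated stand-ins for your own data:

```python
# A minimal sketch of pre-test assumption checks with scipy and matplotlib.
# group_a and group_b are simulated placeholders; substitute your own data.
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

rng = np.random.default_rng(42)
group_a = rng.normal(100, 15, 40)   # replace with your first group's scores
group_b = rng.normal(108, 15, 40)   # replace with your second group's scores

# Normality: Shapiro-Wilk for each group (p < .05 suggests non-normality)
for name, g in [("Group A", group_a), ("Group B", group_b)]:
    w, p = stats.shapiro(g)
    print(f"{name}: Shapiro-Wilk W = {w:.3f}, p = {p:.3f}")

# Visual inspection: histogram and Q-Q plot for one group
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))
ax1.hist(group_a, bins=10)
ax1.set_title("Histogram: Group A")
stats.probplot(group_a, dist="norm", plot=ax2)
ax2.set_title("Q-Q plot: Group A")
plt.tight_layout()
plt.show()

# Homogeneity of variance: Levene's test across groups
w, p = stats.levene(group_a, group_b)
print(f"Levene's test: W = {w:.3f}, p = {p:.3f}")
```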
Mistake #2: Not Reporting Effect Sizes
Reporting a p-value without an effect size is like telling someone you found a fire but not how big it is. Statistical significance tells you whether an effect is detectable given your data; effect size tells you whether it matters in practice.
How to fix it: Report an effect size for every inferential test. Cohen's d for t-tests, eta-squared for ANOVA, R-squared for regression, odds ratios for logistic regression. Interpret the effect size using both standard benchmarks and your field's context. APA style has required this since the 6th edition.
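As one concrete illustration, Cohen's d for an independent-samples t-test is just the mean difference divided by the pooled standard deviation, which is easy to compute alongside the test itself. A minimal sketch with simulated placeholder data:

```python
# A sketch of computing Cohen's d by hand alongside the t-test it accompanies.
# Formula: d = (M1 - M2) / s_pooled, with the pooled SD weighted by df.
import numpy as np
from scipy import stats

def cohens_d(x, y):
    """Cohen's d for two independent samples (pooled-SD version)."""
    nx, ny = len(x), len(y)
    pooled_var = ((nx - 1) * np.var(x, ddof=1) + (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2)
    return (np.mean(x) - np.mean(y)) / np.sqrt(pooled_var)

rng = np.random.default_rng(1)
treatment = rng.normal(105, 15, 50)  # placeholder data
control = rng.normal(100, 15, 50)

t, p = stats.ttest_ind(treatment, control)
print(f"t = {t:.2f}, p = {p:.3f}, d = {cohens_d(treatment, control):.2f}")
```

The pooled-SD version shown here assumes roughly equal group variances; if you used Welch's t-test because variances differed, a variant such as Glass's delta may be a better fit.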
Mistake #3: Misinterpreting Non-Significant Results
"The results were not significant, so the intervention had no effect." This sentence appears in countless dissertations, and it's wrong. A non-significant result means you didn't find sufficient evidence to reject the null hypothesis. It doesn't prove the null hypothesis is true.
How to fix it: Use precise language. Say "no statistically significant difference was found" rather than "there was no difference." Discuss possible reasons for non-significance, including insufficient sample size and low statistical power. Report effect sizes even for non-significant results — they're still informative.
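One way to ground that discussion is a power analysis: given your actual sample size, what effects could you realistically have detected? A sketch using statsmodels' power module; the n of 30 per group and d = 0.3 below are hypothetical stand-ins for your own numbers:

```python
# A sketch of a sensitivity check for a non-significant result:
# how much power did the study have, and how large an effect could it detect?
# The sample size and effect size below are hypothetical placeholders.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Power achieved for a small-to-medium effect (d = 0.3) with n = 30 per group
power = analysis.solve_power(effect_size=0.3, nobs1=30, alpha=0.05, ratio=1.0)
print(f"Power to detect d = 0.3 with n = 30/group: {power:.2f}")

# Smallest effect detectable with 80% power at this sample size
d_min = analysis.solve_power(power=0.8, nobs1=30, alpha=0.05, ratio=1.0)
print(f"Smallest d detectable at 80% power: {d_min:.2f}")
```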
Mistake #4: Using the Wrong Test
Choosing an inappropriate statistical test is more common than you'd think. Frequent examples include:
- Running an independent t-test when participants are matched or measured twice (should be paired)
- Using multiple t-tests instead of ANOVA when comparing three or more groups
- Running Pearson correlation on ordinal data (should be Spearman)
- Using parametric tests on severely non-normal data without acknowledging the violation
How to fix it: Use a systematic approach to choosing the right test. Ask three questions: What am I trying to do? What type of data do I have? How is my study designed? The answers will point you to the correct test.
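To make that concrete, the sketch below shows the corrected version of three mistakes from the list above, using scipy on simulated placeholder data:

```python
# A sketch pairing three of the mistakes above with their corrections,
# using scipy on simulated placeholder data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
pre = rng.normal(50, 10, 30)
post = pre + rng.normal(5, 5, 30)           # same participants measured twice
g1, g2, g3 = (rng.normal(m, 10, 30) for m in (48, 52, 55))
ordinal_x = rng.integers(1, 6, 30)          # e.g., 1-5 Likert ratings
ordinal_y = ordinal_x + rng.integers(-1, 2, 30)

# Matched or repeated measures: paired t-test, not independent
t, p_paired = stats.ttest_rel(pre, post)

# Three or more groups: one-way ANOVA, not multiple t-tests
f, p_anova = stats.f_oneway(g1, g2, g3)

# Ordinal data: Spearman correlation, not Pearson
rho, p_rho = stats.spearmanr(ordinal_x, ordinal_y)

print(f"Paired t:  t = {t:.2f}, p = {p_paired:.3f}")
print(f"ANOVA:     F = {f:.2f}, p = {p_anova:.3f}")
print(f"Spearman:  rho = {rho:.2f}, p = {p_rho:.3f}")
```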
Mistake #5: P-Hacking and Selective Reporting
P-hacking means running multiple analyses and only reporting the ones that produce significant results. Sometimes it's intentional, but more often it's accidental — you try different groupings, remove outliers, add covariates, and test alternative variables until something "works."
How to fix it: Pre-register your analysis plan if possible. Report all analyses you ran, not just the significant ones. If you conducted exploratory analyses beyond your original plan, label them as exploratory. Your committee will respect your honesty far more than suspiciously clean results.
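The fix here is mostly about transparency rather than code, but one concrete safeguard when you do report a family of related tests is adjusting for multiple comparisons. A brief sketch using statsmodels' Holm correction; the p-values below are hypothetical:

```python
# Not a substitute for pre-registration, but a useful companion when you
# report a family of tests: adjust the p-values for multiple comparisons.
# The raw p-values here are hypothetical placeholders.
from statsmodels.stats.multitest import multipletests

raw_p = [0.012, 0.034, 0.048, 0.21, 0.67]   # all tests you ran, not just the "winners"
reject, p_adj, _, _ = multipletests(raw_p, alpha=0.05, method="holm")

for p, pa, r in zip(raw_p, p_adj, reject):
    print(f"raw p = {p:.3f} -> adjusted p = {pa:.3f} ({'significant' if r else 'n.s.'})")
```

Holm's method controls the familywise error rate while being somewhat less conservative than a plain Bonferroni correction.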
The Underlying Theme
All five mistakes share a common thread: they happen when students treat statistics as a set of buttons to push rather than a reasoning framework. Statistics is about making honest inferences from data. Check your assumptions, report your effects, choose the right tools, and be transparent about what you found — even when the answer isn't what you hoped for.
Your dissertation doesn't need perfect results. It needs rigorous methods and honest interpretation. That's what committees are really looking for.