Interpreting Regression Output: A Plain-Language Guide
You ran a multiple regression in SPSS, and now you're staring at several tables of numbers with no idea what most of them mean. You're not alone — regression output is dense, but you only need to focus on a few key pieces.
The Big Picture: Model Summary
The first thing to check is the Model Summary table. The number you care about most here is R² (R-squared).
R² tells you the proportion of variance in your dependent variable that's explained by your predictor variables combined. If R² = .35, your predictors collectively explain 35% of the variance in the outcome.
- R² = .02 — your model explains almost nothing
- R² = .13 — moderate explanatory power
- R² = .26+ — substantial explanatory power
These benchmarks (from Cohen) are rough guides. In some fields, R² = .10 is impressive; in others, .40 is expected.
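To make the definition concrete, R² can be computed directly from observed and model-predicted values as 1 minus the ratio of unexplained to total variation. The tiny dataset below is invented purely for illustration:

```python
# Minimal sketch: R-squared from observed vs. predicted values.
# The data here are made up purely for demonstration.
def r_squared(observed, predicted):
    mean_y = sum(observed) / len(observed)
    ss_total = sum((y - mean_y) ** 2 for y in observed)        # total variation
    ss_residual = sum((y - yhat) ** 2
                      for y, yhat in zip(observed, predicted))  # unexplained variation
    return 1 - ss_residual / ss_total

observed = [10, 12, 9, 15, 14]
predicted = [11, 12, 10, 14, 13]
print(round(r_squared(observed, predicted), 3))  # 0.846
```

Here the model's predictions account for about 85% of the variation in the outcome.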
Adjusted R² is a more conservative version that penalizes you for adding predictors that don't improve the model. Report adjusted R² when you have multiple predictors.
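The adjustment is a simple function of R², the sample size, and the number of predictors. A minimal sketch (the plugged-in values are hypothetical):

```python
def adjusted_r_squared(r2, n, k):
    """Adjusted R-squared: n = sample size, k = number of predictors."""
    return 1 - (1 - r2) * (n - 1) / (n - k - 1)

# Hypothetical example: R-squared of .35 from 150 cases and 3 predictors.
print(round(adjusted_r_squared(0.35, 150, 3), 3))  # 0.337
```

Note that the adjusted value is always a bit lower than raw R², and the gap widens as you add predictors relative to your sample size.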
The ANOVA Table: Is Your Model Significant?
The ANOVA table tests whether your overall model is statistically significant — meaning whether your set of predictors, taken together, does a better job of predicting the outcome than simply using the mean.
Look at the F statistic and its p-value (labeled "Sig." in SPSS). If p < .05, your model is significant. If not, your predictors collectively don't explain the outcome better than chance.
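The F statistic that SPSS prints can be reconstructed from R² alone, which makes the link between the Model Summary and ANOVA tables concrete. A sketch using hypothetical values:

```python
def f_statistic(r2, n, k):
    """Overall model F test computed from R-squared.
    n = sample size, k = number of predictors."""
    return (r2 / k) / ((1 - r2) / (n - k - 1))

# Hypothetical model: R-squared of .35, 150 cases, 3 predictors.
print(round(f_statistic(0.35, 150, 3), 2))  # 26.21
```

A larger F means more explained variance per predictor relative to the unexplained variance, and SPSS converts that F (with its two degrees of freedom) into the "Sig." p-value.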
The Coefficients Table: The Heart of Your Results
This is where the action is. Each row represents one predictor variable. The key columns are:
B (Unstandardized Coefficient)
This tells you how much the dependent variable changes for every one-unit increase in the predictor, holding all other predictors constant. If B = 2.45 for "years of experience," then each additional year of experience is associated with a 2.45-point increase in the outcome.
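The "one-unit increase" interpretation is easy to verify with a toy prediction equation. The intercept and coefficient below are invented for illustration:

```python
# Hypothetical regression equation: outcome = intercept + B * years_of_experience.
intercept = 50.0
b_experience = 2.45

def predicted_outcome(years):
    return intercept + b_experience * years

# Moving from 5 to 6 years of experience changes the prediction by exactly B.
print(round(predicted_outcome(6) - predicted_outcome(5), 2))  # 2.45
```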
SE (Standard Error)
The standard error of B. Smaller values mean more precise estimates.
Beta (β) — Standardized Coefficient
This is what you use to compare the relative importance of different predictors. Beta is expressed in standard deviation units, so you can compare across variables measured on different scales. The predictor with the largest absolute beta value has the strongest unique relationship with the outcome.
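If you only had B, you could recover beta by rescaling with the standard deviations of the predictor and the outcome. The SD values below are made up for illustration:

```python
def standardized_beta(b, sd_x, sd_y):
    """Convert an unstandardized B to a standardized beta."""
    return b * (sd_x / sd_y)

# Hypothetical: B = 2.45, predictor SD = 4.0, outcome SD = 25.0.
print(round(standardized_beta(2.45, 4.0, 25.0), 3))  # 0.392
```

This is why beta lets you compare predictors on different scales: the rescaling puts every coefficient in standard-deviation units.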
t and Sig. (p-value)
These test whether each individual predictor is statistically significant after accounting for all other predictors in the model. A significant p-value means the predictor makes a unique contribution above and beyond the other variables.
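The t value is simply B divided by its standard error, which you can check against any coefficients table. Using the work-life balance row from the example table below:

```python
def t_value(b, se):
    """Each predictor's t statistic is its coefficient divided by its standard error."""
    return b / se

print(round(t_value(4.21, 0.89), 2))  # 4.73, matching the work-life balance row
```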
How to Interpret the Results
Let's say you're predicting job satisfaction from three variables: salary, work-life balance, and supervisor support.
| Predictor | B | SE | β | t | p |
|---|---|---|---|---|---|
| Salary | 0.003 | 0.001 | .18 | 2.41 | .017 |
| Work-life balance | 4.21 | 0.89 | .38 | 4.73 | < .001 |
| Supervisor support | 2.87 | 0.76 | .29 | 3.78 | < .001 |
You would report: "All three predictors significantly predicted job satisfaction. Work-life balance was the strongest predictor (β = .38), followed by supervisor support (β = .29) and salary (β = .18)."
How to Report Regression in APA Style
A complete APA write-up includes:
"A multiple linear regression was conducted to predict job satisfaction from salary, work-life balance, and supervisor support. The overall model was significant, F(3, 146) = 28.41, p < .001, R² = .37, indicating that the predictors explained 37% of the variance in job satisfaction. Work-life balance was the strongest predictor (β = .38, p < .001), followed by supervisor support (β = .29, p < .001) and salary (β = .18, p = .017)."
Include a results table with B, SE, β, t, and p for each predictor.
Common Mistakes
- Interpreting B without considering the scale. A B of 0.003 for salary isn't "tiny" — salary is measured in dollars, so each $1,000 increase corresponds to a 3-point increase.
- Ignoring multicollinearity. If predictors are highly correlated with each other, your coefficients become unstable. Check VIF values — a VIF above 10 is a common red flag, though some researchers use a stricter cutoff of 5.
- Confusing correlation with causation. Regression shows associations, not causal effects (unless your design supports it).
- Not checking assumptions. Regression assumes linearity, normality of residuals, homoscedasticity, and independence. Check residual plots before interpreting your results.
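On the multicollinearity point above: the VIF for a predictor is 1 / (1 − R²ⱼ), where R²ⱼ comes from regressing that predictor on all the other predictors. A minimal sketch:

```python
def vif(r2_j):
    """Variance inflation factor for predictor j.
    r2_j is the R-squared from regressing predictor j on the other predictors."""
    return 1 / (1 - r2_j)

# If 90% of a predictor's variance is shared with the other predictors,
# its VIF hits 10 — the common trouble threshold.
print(round(vif(0.90), 1))  # 10.0
```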