Understanding an interaction effect in a linear regression model is usually difficult when using just the basic output tables and looking at the coefficients. The jtools
package provides two functions that can help analysts probe more deeply.
Simple slopes analysis gives researchers a way to express the interaction effect in terms that are easy to understand for those who know how to interpret direct effects in regression models. This method is designed for interactions between two continuous variables.
In simple slopes analysis, researchers are interested in the conditional slope of the focal predictor; that is, what is the slope of the predictor when the moderator is held at some particular value? The regression output we get when including the interaction term tells us the slope when the moderator is held at zero, which is often not a practically or theoretically meaningful value. To better understand the nature of the interaction, simple slopes analysis lets the researcher specify meaningful values at which to hold the moderator.
While the computation behind doing so isn’t exactly rocket science, it is inconvenient and prone to error. The sim_slopes()
function from jtools
accepts a regression model (with an interaction term) as an input and automates the simple slopes procedure. By default, the function mean-centers all non-focal variables, calculates the slope of the focal predictor at the mean of the moderator as well as one standard deviation above and below the mean, and computes the Johnson-Neyman interval (described below).
In its most basic use case, sim_slopes
needs three arguments: a linear model (with support for svyglm
models), the name of the focal predictor as the argument for pred =
, and the name of the moderator as the argument for modx =
. Let’s go through an example.
First, we use example data from state.x77
that is built into R. Let’s look at the interaction model output with j_summ()
as a starting point.
fiti <- lm(Income ~ Illiteracy*Murder, data = as.data.frame(state.x77))
j_summ(fiti)
## MODEL INFO:
## Observations: 50
## Dependent Variable: Income
##
## MODEL FIT:
## F(3,46) = 7.461, p = 0
## R-squared = 0.327
## Adj. R-squared = 0.283
##
## Standard errors: OLS
## Est. S.E. t val. p
## (Intercept) 3822.607 405.332 9.431 0 ***
## Illiteracy 617.341 434.851 1.42 0.162
## Murder 146.818 50.326 2.917 0.005 **
## Illiteracy:Murder -117.096 40.131 -2.918 0.005 **
So we see a significant main effect of Murder on Income in the presence of a significant interaction between Murder and Illiteracy. The positive estimate for Illiteracy does not differ significantly from zero. With that said, you shouldn’t focus too much on the lower-order coefficients of terms included in an interaction: each is the slope of that variable only when the other equals zero, which is outside the observed range of both variables here.
Note that if you would like to see the output with the input variables mean-centered and/or standardized, j_summ()
can do that for you.
j_summ(fiti, standardize = TRUE)
## MODEL INFO:
## Observations: 50
## Dependent Variable: Income
##
## MODEL FIT:
## F(3,46) = 7.461, p = 0
## R-squared = 0.327
## Adj. R-squared = 0.283
##
## Standard errors: OLS
## Est. S.E. t val. p
## (Intercept) 4617.315 96.338 47.928 0 ***
## Illiteracy -150.306 122.065 -1.231 0.224
## Murder 36.234 106.322 0.341 0.735
## Illiteracy:Murder -263.479 90.3 -2.918 0.005 **
##
## All continuous variables are mean-centered and scaled by 1 s.d.
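Concretely, the outcome here is left in its original units (note that the intercept is still on the scale of Income), so the standardized interaction coefficient is just the original one rescaled by the two predictors’ standard deviations. A quick check (states is just a convenience object):

states <- as.data.frame(state.x77)
# Multiply the original interaction coefficient by the predictors' SDs
coef(fiti)["Illiteracy:Murder"] * sd(states$Illiteracy) * sd(states$Murder)
## approx. -263.5, matching the standardized estimate above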
Don’t sweat the change in significance of the lower-order focal variables; after centering, they represent conditional slopes at the mean of the other variable rather than at zero, so they aren’t directly comparable to the original estimates anyway.
Now let’s do the most basic simple slopes analysis:
sim_slopes(fiti, pred = Illiteracy, modx = Murder, johnson_neyman = FALSE)
## SIMPLE SLOPES ANALYSIS
##
## Slope of Illiteracy when Murder = 11.07 (+1 SD):
## Est. S.E. p
## -678.856 177.114 0.000
##
## Slope of Illiteracy when Murder = 7.378 (Mean):
## Est. S.E. p
## -246.592 200.260 0.224
##
## Slope of Illiteracy when Murder = 3.686 (-1 SD):
## Est. S.E. p
## 185.672 304.522 0.545
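The computation sim_slopes() automates is straightforward: for a model y = b0 + b1*x + b2*m + b3*x*m, the conditional slope of x at a given m is b1 + b3*m, and its variance is Var(b1) + m^2 * Var(b3) + 2m * Cov(b1, b3). Here is a minimal by-hand check of the "Mean" row above:

b <- coef(fiti)
V <- vcov(fiti)
m <- mean(as.data.frame(state.x77)$Murder)
slope <- b["Illiteracy"] + b["Illiteracy:Murder"] * m
se <- sqrt(V["Illiteracy", "Illiteracy"] +
           m^2 * V["Illiteracy:Murder", "Illiteracy:Murder"] +
           2 * m * V["Illiteracy", "Illiteracy:Murder"])
unname(c(slope, se))
## approx. -246.592 and 200.260, matching the output above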
So what we see in this example is that when the value of Murder is high, the slope of Illiteracy is negative and significantly different from zero. The slope of Illiteracy when Murder is high is in the opposite direction from its coefficient estimate in the original lm() output, but this makes sense given the negative interaction coefficient: as Murder increases, the slope of Illiteracy decreases. We now know that the effect of Illiteracy is only statistically distinguishable from zero when Murder is high.
You may also choose the values of the moderator yourself with the modxvals =
argument.
sim_slopes(fiti, pred = Illiteracy, modx = Murder, modxvals = c(0, 5, 10),
johnson_neyman = FALSE)
## SIMPLE SLOPES ANALYSIS
##
## Slope of Illiteracy when Murder = 0:
## Est. S.E. p
## 617.341 434.851 0.162
##
## Slope of Illiteracy when Murder = 5:
## Est. S.E. p
## 31.862 262.633 0.904
##
## Slope of Illiteracy when Murder = 10:
## Est. S.E. p
## -553.617 171.416 0.002
Did you notice how I was adding the argument johnson_neyman = FALSE
above? That’s because by default, sim_slopes
will also calculate what is called the Johnson-Neyman interval. This tells you all the values of the moderator for which the slope of the predictor will be statistically significant. Depending on the specific analysis, it may be that all values of the moderator outside of the interval will have a significant slope for the predictor. Other times, it will only be values inside the interval—you will have to look at the output to see.
It can take a moment to interpret this correctly if you aren’t familiar with the Johnson-Neyman technique. But if you read the output carefully and take it literally, you’ll get the hang of it.
sim_slopes(fiti, pred = Illiteracy, modx = Murder, modxvals = c(0, 5, 10),
johnson_neyman = TRUE)
## JOHNSON-NEYMAN INTERVAL
##
## The slope of Illiteracy is p < .05 when Murder is OUTSIDE this interval:
## [-5.5021, 8.3402]
## Note: The range of observed values of Murder is [1.4, 15.1]
##
## SIMPLE SLOPES ANALYSIS
##
## Slope of Illiteracy when Murder = 0:
## Est. S.E. p
## 617.341 434.851 0.162
##
## Slope of Illiteracy when Murder = 5:
## Est. S.E. p
## 31.862 262.633 0.904
##
## Slope of Illiteracy when Murder = 10:
## Est. S.E. p
## -553.617 171.416 0.002
So in the example above, we can see that the Johnson-Neyman interval and the simple slopes analysis agree, as they always will. The benefit of the J-N interval is that it tells you exactly where the predictor’s slope crosses into or out of statistical significance. You can also call the johnson_neyman function directly if you want to do something like tweak the alpha level. When called directly, johnson_neyman also creates a plot by default; with sim_slopes, you can request the same plot by setting jnplot = TRUE.
johnson_neyman(fiti, pred = Illiteracy, modx = Murder, alpha = 0.01)
## JOHNSON-NEYMAN INTERVAL
##
## The slope of Illiteracy is p < .01 when Murder is OUTSIDE this interval:
## [-31.6714, 9.1154]
## Note: The range of observed values of Murder is [1.4, 15.1]
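If you’re curious how such an interval can be found, the boundaries are the moderator values at which the conditional slope’s t-statistic equals the critical value. Below is a rough numerical sketch of that idea; the jn_bounds() helper is hypothetical, written for illustration, and is not necessarily how the package computes the interval (which can also be solved analytically as a quadratic):

# Find moderator values where |conditional slope / SE| equals the critical t
jn_bounds <- function(model, pred, modx, alpha = 0.05) {
  int <- paste0(pred, ":", modx)  # name of the interaction term
  b <- coef(model); V <- vcov(model)
  tcrit <- qt(1 - alpha / 2, df = df.residual(model))
  f <- function(m) {
    slope <- b[pred] + b[int] * m
    se <- sqrt(V[pred, pred] + m^2 * V[int, int] + 2 * m * V[pred, int])
    abs(slope / se) - tcrit
  }
  # scan a wide grid for sign changes, then refine each root
  grid <- seq(-50, 50, length.out = 2000)
  signs <- sign(vapply(grid, f, numeric(1)))
  idx <- which(diff(signs) != 0)
  vapply(idx, function(i) uniroot(f, c(grid[i], grid[i + 1]))$root, numeric(1))
}
jn_bounds(fiti, "Illiteracy", "Murder", alpha = 0.01)
## approx. -31.67 and 9.12, matching the interval above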
One note on Johnson-Neyman plots: once again, it is easy to misinterpret the meaning. Notice that the y-axis is the conditional slope of the predictor. The plot shows you where that conditional slope differs significantly from zero. In the plot above, we see that from the point Murder (the moderator) = 9.12 and greater, the slope of Illiteracy (the predictor) is significantly different from zero and, in this case, negative. The lower bound of the interval (about -32) is so far outside the observed data that it is not plotted; if Murder rates of -32 were possible, that would be the other threshold, below which the slope of Illiteracy would be significantly positive.
The purpose of reminding you both within the plot and the printed output of the range of observed data is to help you put the results in context; in this case, the only justifiable interpretation is that Illiteracy has no effect on the outcome variable except when Murder is higher than 9.12. You wouldn’t interpret the lower boundary because your dataset doesn’t contain any values near it.
Sometimes it is informative to know the conditional intercepts in addition to the slopes. It might be interesting to you that individuals low on the moderator have a positive slope and individuals high on it don’t, but that doesn’t mean that individuals low on the moderator will have higher values of the dependent variable. You would only know that if you know the conditional intercept.
You can print the conditional intercepts with the cond.int = TRUE
argument.
sim_slopes(fiti, pred = Illiteracy, modx = Murder, cond.int = TRUE)
## JOHNSON-NEYMAN INTERVAL
##
## The slope of Illiteracy is p < .05 when Murder is OUTSIDE this interval:
## [-5.5021, 8.3402]
## Note: The range of observed values of Murder is [1.4, 15.1]
##
## SIMPLE SLOPES ANALYSIS
##
## Slope of Illiteracy when Murder = 11.07 (+1 SD):
## Est. S.E. p
## -678.856 177.114 0.000
## Conditional intercept when Murder = 11.07 (+1 SD):
## Est. S.E. p
## 5447.810 308.169 0.000
##
## Slope of Illiteracy when Murder = 7.378 (Mean):
## Est. S.E. p
## -246.592 200.260 0.224
## Conditional intercept when Murder = 7.378 (Mean):
## Est. S.E. p
## 4905.827 221.597 0.000
##
## Slope of Illiteracy when Murder = 3.686 (-1 SD):
## Est. S.E. p
## 185.672 304.522 0.545
## Conditional intercept when Murder = 3.686 (-1 SD):
## Est. S.E. p
## 4363.845 268.834 0.000
This example shows you that while the slope of Illiteracy is negative when Murder is high, the conditional intercept is also higher when Murder is high. Taken together, that means that as Illiteracy increases among high-Murder observations, their predicted Income converges toward that of observations with lower values of Murder.
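The conditional intercepts are just as easy to check by hand. Assuming the parameterization the output implies here (the model’s prediction when Illiteracy is zero and Murder is held at the given value), the "Mean" row is:

b <- coef(fiti)
m <- mean(as.data.frame(state.x77)$Murder)
unname(b["(Intercept)"] + b["Murder"] * m)
## approx. 4905.8, matching the conditional intercept above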
Certain models call for heteroskedasticity-robust standard errors. To be consistent with the robust standard errors offered by j_summ(), sim_slopes() supports the robust = TRUE option so you can report standard errors consistently across models.
sim_slopes(fiti, pred = Illiteracy, modx = Murder, robust = TRUE)
## JOHNSON-NEYMAN INTERVAL
##
## The slope of Illiteracy is p < .05 when Murder is OUTSIDE this interval:
## [-2.8269, 8.0808]
## Note: The range of observed values of Murder is [1.4, 15.1]
##
## SIMPLE SLOPES ANALYSIS
##
## Slope of Illiteracy when Murder = 11.07 (+1 SD):
## Est. S.E. p
## -678.856 149.126 0.000
##
## Slope of Illiteracy when Murder = 7.378 (Mean):
## Est. S.E. p
## -246.592 180.844 0.179
##
## Slope of Illiteracy when Murder = 3.686 (-1 SD):
## Est. S.E. p
## 185.672 276.263 0.505
These data are a relatively rare case in which the robust standard errors are smaller than the conventional OLS standard errors. Note that you must have the sandwich and lmtest packages installed to use this feature.
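If you want a sense of what robust = TRUE corresponds to, a similar estimate can be computed directly with those packages. HC3 is shown here as an illustration only; check the documentation for the exact estimator sim_slopes() uses:

library(sandwich)
library(lmtest)
# Heteroskedasticity-consistent SEs for the full model's coefficients
coeftest(fiti, vcov. = vcovHC(fiti, type = "HC3"))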
By default, all non-focal variables are mean-centered. You can additionally have the focal predictors centered with centered = "all", or request that no variables be centered with centered = "none". You may also name specific variables to center by providing a vector of quoted variable names; in that case, no others are centered. Note that the moderator is centered around the specified values, and factor variables are ignored in the centering process.
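For instance, to leave every variable on its original scale:

sim_slopes(fiti, pred = Illiteracy, modx = Murder, centered = "none",
           johnson_neyman = FALSE)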
You can standardize the variables as well using the standardize = TRUE
argument.
sim_slopes(fiti, pred = Illiteracy, modx = Murder, standardize = TRUE,
centered = "all")
## JOHNSON-NEYMAN INTERVAL
##
## The slope of Illiteracy is p < .05 when Murder is OUTSIDE this interval:
## [-3.4891, 0.2606]
## Note: The range of observed values of Murder is [-1.6194, 2.0918]
##
## SIMPLE SLOPES ANALYSIS
##
## Slope of Illiteracy when Murder = 1 (+1 SD):
## Est. S.E. p
## -413.785 107.957 0.000
##
## Slope of Illiteracy when Murder = 0 (Mean):
## Est. S.E. p
## -150.306 122.065 0.224
##
## Slope of Illiteracy when Murder = -1 (-1 SD):
## Est. S.E. p
## 113.173 185.616 0.545
Standardization is only meaningful when applied to the focal predictors. If you use standardize = TRUE while leaving centered = NULL (the default, which centers only the non-focal variables), the simple slopes output won’t change, since rescaling the non-focal predictors has no effect on the focal terms.
An even more versatile and sometimes more interpretable method for understanding interaction effects is via plotting. jtools
provides interact_plot
as a relatively pain-free method to get good-looking plots of interactions using ggplot2
on the backend.
interact_plot(fiti, pred = "Illiteracy", modx = "Murder")
By default, with a continuous moderator you get three lines—1 standard deviation above and below the mean and the mean itself. If you specify modxvals = "plus-minus"
, the mean of the moderator is not plotted, just the two +/- SD lines.
interact_plot(fiti, pred = "Illiteracy", modx = "Murder", modxvals = "plus-minus")
However, if your moderator is a factor, each level will be plotted and you should leave modxvals = NULL
, the default.
fitiris <- lm(Petal.Length ~ Petal.Width*Species, data = iris)
interact_plot(fitiris, pred = "Petal.Width", modx = "Species")
If you want to see the individual data points plotted to better understand how the fitted lines relate to the observed data, you can use the plot.points = TRUE
argument.
interact_plot(fiti, pred = "Illiteracy", modx = "Murder", plot.points = TRUE)
This isn’t especially informative for a continuous moderator. It can be very enlightening, though, for categorical moderators.
interact_plot(fitiris, pred = "Petal.Width", modx = "Species", plot.points = TRUE)
Another way to get a sense of the precision of the estimates is by plotting confidence bands. To get started, just set interval = TRUE. To set how wide the interval should be, give the desired coverage as a proportion; e.g., int.width = 0.8 corresponds to an 80% interval.
interact_plot(fiti, pred = "Illiteracy", modx = "Murder", interval = TRUE, int.width = 0.8)
There are a number of other options not mentioned, many relating to the appearance.
For instance, you can manually specify the axis labels, add a main title, and so on.
interact_plot(fiti, pred = "Illiteracy", modx = "Murder", x.label = "Custom X Label",
y.label = "Custom Y Label", main.title = "Sample Plot",
legend.main = "Custom Legend Title")
Because the function uses ggplot2
, it can be modified and extended like any other ggplot2
object. For example, using the theme_apa()
function from jtools
:
interact_plot(fitiris, pred = "Petal.Width", modx = "Species") + theme_apa()
Before long, you may find yourself using both of these functions for every model you want to explore. To streamline the process, this package offers probe_interaction() as a convenience function that calls both sim_slopes() and interact_plot(), taking advantage of their overlapping syntax.
library(survey)
data(api)
dstrat <- svydesign(id=~1,strata=~stype, weights=~pw, data=apistrat, fpc=~fpc)
regmodel <- svyglm(api00 ~ avg.ed*growth, design = dstrat)
probe_interaction(regmodel, pred = growth, modx = avg.ed, cond.int = TRUE, interval = TRUE,
jnplot = TRUE)
## JOHNSON-NEYMAN INTERVAL
##
## The slope of growth is p < .05 when avg.ed is OUTSIDE this interval:
## [3.1075, 5.7432]
## Note: The range of observed values of avg.ed is [1.38, 4.44]
## SIMPLE SLOPES ANALYSIS
##
## Slope of growth when avg.ed = 3.49 (+1 SD):
## Est. S.E. p
## 0.106 0.247 0.668
## Conditional intercept when avg.ed = 3.49 (+1 SD):
## Est. S.E. p
## 758.336 6.853 0.000
##
## Slope of growth when avg.ed = 2.787 (Mean):
## Est. S.E. p
## 0.578 0.152 0.000
## Conditional intercept when avg.ed = 2.787 (Mean):
## Est. S.E. p
## 639.782 6.800 0.000
##
## Slope of growth when avg.ed = 2.085 (-1 SD):
## Est. S.E. p
## 1.049 0.185 0.000
## Conditional intercept when avg.ed = 2.085 (-1 SD):
## Est. S.E. p
## 521.228 11.199 0.000
Note in the above example that you can provide arguments that apply to only one of the functions and they will be passed along appropriately. On the other hand, you cannot set the arguments the two functions share to different values; that is, you can’t give one standardize = TRUE and the other standardize = FALSE. If you want that level of control, just call each function separately.
Also, the above example comes from the survey package as a means to show that, yes, these tools can be used with svyglm
objects, though they should only be applied to linear models.
probe_interaction() returns an object containing both functions’ return values:
out <- probe_interaction(regmodel, pred = growth, modx = avg.ed, cond.int = TRUE,
interval = TRUE, jnplot = TRUE)
names(out)
## [1] "simslopes" "interactplot"
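Each element can then be used on its own. The stored plot is a ggplot2 object, so you can modify it further; the title below is just for illustration:

out$simslopes   # re-print the simple slopes analysis
out$interactplot + ggplot2::ggtitle("Growth by average parental education")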
If 2-way interactions can be hard to grasp by looking at regular regression output, then 3-way interactions are outright inscrutable. The aforementioned functions also support 3-way interactions, however. Plotting these effects is particularly helpful.
Note that Johnson-Neyman intervals are still provided, but only in the sense that you get an interval for each chosen level of the second moderator. This cuts against part of the point of the J-N technique, which for 2-way interactions lets you avoid choosing particular values of the moderator at which to test the predictor’s slope.
fita3 <- lm(rating ~ privileges*critical*learning, data = attitude)
probe_interaction(fita3, pred = critical, modx = learning, mod2 = privileges)
## #######################################################
## While privileges (2nd moderator) = 40.898 (Mean of privileges -1 SD)
## #######################################################
##
## JOHNSON-NEYMAN INTERVAL
##
## The slope of critical is p < .05 when learning is INSIDE this interval:
## [52.8623, 73.5881]
## Note: The range of observed values of learning is [34, 75]
##
## SIMPLE SLOPES ANALYSIS
##
## Slope of critical when learning = 68.104 (+1 SD):
## Est. S.E. p
## 1.109 0.553 0.057
##
## Slope of critical when learning = 56.367 (Mean):
## Est. S.E. p
## 0.675 0.329 0.052
##
## Slope of critical when learning = 44.63 (-1 SD):
## Est. S.E. p
## 0.242 0.237 0.318
##
## #######################################################
## While privileges (2nd moderator) = 53.133 (Mean of privileges)
## #######################################################
##
## The Johnson-Neyman interval could not be found. Is your interaction term significant?
##
## SIMPLE SLOPES ANALYSIS
##
## Slope of critical when learning = 68.104 (+1 SD):
## Est. S.E. p
## 0.023 0.330 0.946
##
## Slope of critical when learning = 56.367 (Mean):
## Est. S.E. p
## 0.058 0.239 0.811
##
## Slope of critical when learning = 44.63 (-1 SD):
## Est. S.E. p
## 0.093 0.340 0.788
##
## #######################################################
## While privileges (2nd moderator) = 65.369 (Mean of privileges +1 SD)
## #######################################################
##
## The Johnson-Neyman interval could not be found. Is your interaction term significant?
##
## SIMPLE SLOPES ANALYSIS
##
## Slope of critical when learning = 68.104 (+1 SD):
## Est. S.E. p
## -1.063 0.658 0.120
##
## Slope of critical when learning = 56.367 (Mean):
## Est. S.E. p
## -0.560 0.502 0.276
##
## Slope of critical when learning = 44.63 (-1 SD):
## Est. S.E. p
## -0.057 0.614 0.927
The only downside is that ggplot2 does not offer an easy way to label which variable each panel represents, so you’ll have to keep track of your 2nd moderator. In the figure above, each panel represents a different level of the privileges variable from the built-in attitude dataset.
interact_plot()
has a bit more flexibility here than sim_slopes()
, allowing for factor moderators with more than two levels. And don’t forget that you can use theme_apa()
to format plots for publication or just to make more economical use of space.
mtcars$cyl <- factor(mtcars$cyl, labels = c("4 cylinder", "6 cylinder", "8 cylinder"))
fitc3 <- lm(mpg ~ hp*wt*cyl, data=mtcars)
interact_plot(fitc3, pred = hp, modx = wt, mod2 = cyl) +
theme_apa(legend.pos = "bottomright")
You can get Johnson-Neyman plots for 3-way interactions as well, but keep in mind what I mentioned earlier in this section about the J-N technique for 3-way interactions. You will also need the cowplot
package, which is used on the backend to mush together the separate J-N plots.
regmodel3 <- svyglm(api00 ~ avg.ed*growth*enroll, design = dstrat)
probe_interaction(regmodel3, pred = growth, modx = avg.ed, mod2 = enroll,
johnson_neyman = TRUE, jnplot = TRUE, interval = T)
## #######################################################
## While enroll (2nd moderator) = 153.052 (Mean of enroll -1 SD)
## #######################################################
##
## JOHNSON-NEYMAN INTERVAL
##
## The slope of growth is p < .05 when avg.ed is OUTSIDE this interval:
## [2.7515, 3.8116]
## Note: The range of observed values of avg.ed is [1.38, 4.44]
##
## SIMPLE SLOPES ANALYSIS
##
## Slope of growth when avg.ed = 3.49 (+1 SD):
## Est. S.E. p
## -0.482 0.351 0.171
##
## Slope of growth when avg.ed = 2.787 (Mean):
## Est. S.E. p
## 0.385 0.220 0.081
##
## Slope of growth when avg.ed = 2.085 (-1 SD):
## Est. S.E. p
## 1.252 0.324 0.000
##
## #######################################################
## While enroll (2nd moderator) = 595.282 (Mean of enroll)
## #######################################################
##
## JOHNSON-NEYMAN INTERVAL
##
## The slope of growth is p < .05 when avg.ed is OUTSIDE this interval:
## [2.8415, 7.6506]
## Note: The range of observed values of avg.ed is [1.38, 4.44]
##
## SIMPLE SLOPES ANALYSIS
##
## Slope of growth when avg.ed = 3.49 (+1 SD):
## Est. S.E. p
## -0.038 0.236 0.871
##
## Slope of growth when avg.ed = 2.787 (Mean):
## Est. S.E. p
## 0.339 0.157 0.032
##
## Slope of growth when avg.ed = 2.085 (-1 SD):
## Est. S.E. p
## 0.716 0.218 0.001
##
## #######################################################
## While enroll (2nd moderator) = 1037.512 (Mean of enroll +1 SD)
## #######################################################
##
## The Johnson-Neyman interval could not be found. Is your interaction term significant?
##
## SIMPLE SLOPES ANALYSIS
##
## Slope of growth when avg.ed = 3.49 (+1 SD):
## Est. S.E. p
## 0.405 0.271 0.137
##
## Slope of growth when avg.ed = 2.787 (Mean):
## Est. S.E. p
## 0.293 0.197 0.138
##
## Slope of growth when avg.ed = 2.085 (-1 SD):
## Est. S.E. p
## 0.181 0.311 0.561
Notice that at one of the three values of the second moderator, no Johnson-Neyman interval could be found, so no J-N plot was produced for that panel. The more levels of the second moderator you plot, the more likely the resulting figure will be unwieldy and hard to read. You can resize your window to help, though.