j_summ — An alternative to summary for regression models

Jacob Long

2017-08-08

When sharing analyses with colleagues unfamiliar with R, I found that the standard output generally was not clear to them. It was even worse when I wanted to give them information that is not included in that output, like VIFs, robust standard errors, or standardized coefficients. After creating output tables “by hand” on multiple occasions, I thought it best to pack things into a reusable function.

With no user-specified arguments except a fitted model, the output of j_summ() looks like this:

# Fit model
fit <- lm(Income ~ Frost + Illiteracy + Murder, data = as.data.frame(state.x77))
j_summ(fit)
## MODEL INFO:
## Observations: 50
## Dependent Variable: Income
## 
## MODEL FIT: 
## F(3,46) = 4.049, p = 0.012
## R-squared = 0.209
## Adj. R-squared = 0.157
## 
## Standard errors: OLS 
##             Est.     S.E.    t val. p        
## (Intercept) 5111.097 416.576 12.269 0     ***
## Frost       -1.254   2.11    -0.594 0.555    
## Illiteracy  -610.715 213.138 -2.865 0.006 ** 
## Murder      23.074   30.94   0.746  0.46

Like any output, this one is somewhat opinionated: some information is shown that perhaps not everyone would be interested in, and some may be missing. That, of course, was the motivation behind the creation of the function; I was no fan of summary() and its lack of configurability.

Adding and removing written output

Much of the output from j_summ() can be removed, and there are several other pieces of information under the hood that users can ask for.

To remove the written output at the beginning, set model.info = FALSE and/or model.fit = FALSE.

j_summ(fit, model.info = FALSE, model.fit = FALSE)
## Standard errors: OLS 
##             Est.     S.E.    t val. p        
## (Intercept) 5111.097 416.576 12.269 0     ***
## Frost       -1.254   2.11    -0.594 0.555    
## Illiteracy  -610.715 213.138 -2.865 0.006 ** 
## Murder      23.074   30.94   0.746  0.46

Another, related piece of information that can be shown before the coefficient table concerns model assumptions (for OLS linear regression). When model.check = TRUE, j_summ() will report (with the help of the car package) two quantities related to linear regression assumptions: the result of a Breusch-Pagan test for heteroskedasticity and the number of high-leverage observations.

In both cases, you shouldn’t treat the results as proof of meaningful problems (or of their absence), but rather as a heuristic that suggests where further probing with graphical analyses may be worthwhile.

j_summ(fit, model.check = TRUE)
## MODEL INFO:
## Observations: 50
## Dependent Variable: Income
## 
## MODEL FIT: 
## F(3,46) = 4.049, p = 0.012
## R-squared = 0.209
## Adj. R-squared = 0.157
## 
## MODEL CHECKING:
## Homoskedasticity (Breusch-Pagan) = Assumption not violated (p = 0.131)
## Number of high-leverage observations = 2
## 
## Standard errors: OLS 
##             Est.     S.E.    t val. p        
## (Intercept) 5111.097 416.576 12.269 0     ***
## Frost       -1.254   2.11    -0.594 0.555    
## Illiteracy  -610.715 213.138 -2.865 0.006 ** 
## Murder      23.074   30.94   0.746  0.46

Report robust standard errors

One of the problems that originally motivated the creation of this function was the desire to efficiently report robust standard errors. While it is easy enough for an experienced R user to calculate robust standard errors, there are not many simple ways to include the results in a regression table, as is routine in the likes of Stata, SPSS, and so on.
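
For comparison, here is a sketch of that do-it-yourself route using the sandwich and lmtest packages (done outside of j_summ(); this is the standard manual approach, not necessarily the function’s internal code):

# install.packages(c("lmtest", "sandwich"))  # if not already installed
library(lmtest)
library(sandwich)
# Heteroskedasticity-consistent (HC3) covariance matrix, then a coefficient
# table computed with those robust standard errors
coeftest(fit, vcov. = vcovHC(fit, type = "HC3"))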

Robust standard errors require the user to have both the lmtest and sandwich packages installed; they do not need to be loaded.

There are multiple types of robust standard errors that you may use, ranging from “HC0” to “HC5”. Per the recommendation of the authors of the sandwich package, the default is “HC3”. Stata’s default is “HC1”, so you may want to use that if your goal is to replicate Stata analyses.
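
If replicating Stata results is the goal, you would just pass that type instead:

j_summ(fit, robust = TRUE, robust.type = "HC1")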

j_summ(fit, robust = TRUE, robust.type = "HC3")
## MODEL INFO:
## Observations: 50
## Dependent Variable: Income
## 
## MODEL FIT: 
## F(3,46) = 4.049, p = 0.012
## R-squared = 0.209
## Adj. R-squared = 0.157
## 
## Standard errors: Robust, type = HC3
##             Est.     S.E.    t val. p        
## (Intercept) 5111.097 537.808 9.504  0     ***
## Frost       -1.254   2.867   -0.437 0.664    
## Illiteracy  -610.715 196.879 -3.102 0.003 ** 
## Murder      23.074   36.846  0.626  0.534

Robust standard errors will not be calculated for non-linear models (from glm) or for svyglm models. In the case of svyglm, the standard errors that package calculates are already robust to heteroskedasticity, so the robust = TRUE argument is simply ignored.

Other options

Choose how many digits past the decimal to round to

With the digits = argument, you can decide how precise you want the reported numbers to be. It is often inappropriate or distracting to report quantities with many digits past the decimal, whether because they cannot be measured that precisely or because that much precision is hard to interpret in applied settings. In other cases, more digits may be needed because of the way a measure is calculated.

The default is digits = 3.

j_summ(fit, digits = 5)
## MODEL INFO:
## Observations: 50
## Dependent Variable: Income
## 
## MODEL FIT: 
## F(3,46) = 4.04857, p = 0.01232
## R-squared = 0.20888
## Adj. R-squared = 0.15729
## 
## Standard errors: OLS 
##             Est.       S.E.      t val.   p          
## (Intercept) 5111.09665 416.57608 12.2693  0       ***
## Frost       -1.25407   2.11012   -0.59432 0.55521    
## Illiteracy  -610.71471 213.13769 -2.86535 0.00626 ** 
## Murder      23.07403   30.94034  0.74576  0.45961
j_summ(fit, digits = 1)
## MODEL INFO:
## Observations: 50
## Dependent Variable: Income
## 
## MODEL FIT: 
## F(3,46) = 4, p = 0
## R-squared = 0.2
## Adj. R-squared = 0.2
## 
## Standard errors: OLS 
##             Est.   S.E.  t val. p      
## (Intercept) 5111.1 416.6 12.3   0   ***
## Frost       -1.3   2.1   -0.6   0.6    
## Illiteracy  -610.7 213.1 -2.9   0   ***
## Murder      23.1   30.9  0.7    0.5

Note that the return object has non-rounded values if you wish to use them later.

j <- j_summ(fit, digits = 3)

j$coeftable
##                    Est.       S.E.     t val.            p
## (Intercept) 5111.096650 416.576083 12.2692993 4.146240e-16
## Frost         -1.254074   2.110117 -0.5943151 5.552133e-01
## Illiteracy  -610.714712 213.137691 -2.8653529 6.259724e-03
## Murder        23.074026  30.940339  0.7457587 4.596073e-01

Calculate and report variance inflation factors (VIF)

When multicollinearity is a concern, it can be useful to have VIFs reported alongside each variable. This can be particularly helpful for model comparison and for checking the impact of newly added variables. To get VIFs reported in the output table, just set vifs = TRUE.

Note that the car package is needed to calculate VIFs.
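
If you would rather see the raw values outside the summary table, the same quantities can be computed directly from car:

car::vif(fit)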

j_summ(fit, vifs = TRUE)
## MODEL INFO:
## Observations: 50
## Dependent Variable: Income
## 
## MODEL FIT: 
## F(3,46) = 4.049, p = 0.012
## R-squared = 0.209
## Adj. R-squared = 0.157
## 
## Standard errors: OLS 
##             Est.     S.E.    t val. p         VIF  
## (Intercept) 5111.097 416.576 12.269 0     ***      
## Frost       -1.254   2.11    -0.594 0.555     1.853
## Illiteracy  -610.715 213.138 -2.865 0.006 **  2.599
## Murder      23.074   30.94   0.746  0.46      2.009

There are many standards researchers apply for deciding whether a VIF is too large. In some domains, a VIF over 2 is worthy of suspicion. Others set the bar higher, at 5 or 10. Ultimately, the main thing to consider is that small effects are more likely to be “drowned out” by higher VIFs.

Standardized beta coefficients

Some prefer to use standardized coefficients in order to avoid dismissing an effect as “small” when it is just the units of measure that are small. Standardized betas are reported instead when standardize = TRUE. To be clear, since the meaning of “standardized beta” can vary depending on who you talk to, note that this option mean-centers the variables as well.

j_summ(fit, standardize = TRUE)
## MODEL INFO:
## Observations: 50
## Dependent Variable: Income
## 
## MODEL FIT: 
## F(3,46) = 4.049, p = 0.012
## R-squared = 0.209
## Adj. R-squared = 0.157
## 
## Standard errors: OLS 
##             Est.     S.E.    t val. p        
## (Intercept) 4435.8   79.773  55.605 0     ***
## Frost       -65.188  109.686 -0.594 0.555    
## Illiteracy  -372.251 129.914 -2.865 0.006 ** 
## Murder      85.179   114.217 0.746  0.46     
## 
## All continuous variables are mean-centered and scaled by 1 s.d.

You can also choose a different number of standard deviations to divide by for standardization. Andrew Gelman has been a proponent of dividing by 2 standard deviations; if you want to do things that way, give the argument n.sd = 2.

j_summ(fit, standardize = TRUE, n.sd = 2)
## MODEL INFO:
## Observations: 50
## Dependent Variable: Income
## 
## MODEL FIT: 
## F(3,46) = 4.049, p = 0.012
## R-squared = 0.209
## Adj. R-squared = 0.157
## 
## Standard errors: OLS 
##             Est.     S.E.    t val. p        
## (Intercept) 4435.8   79.773  55.605 0     ***
## Frost       -130.376 219.371 -0.594 0.555    
## Illiteracy  -744.502 259.829 -2.865 0.006 ** 
## Murder      170.357  228.435 0.746  0.46     
## 
## All continuous variables are mean-centered and scaled by 2 s.d.

Note that this is achieved by refitting the model. If the model took a long time to fit initially, expect a similarly long time to refit it.
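
As a rough illustration of what that refit involves (a sketch of the idea with throwaway object names, not the function’s actual internals), the 2-SD version is approximately equivalent to rescaling the predictors yourself and refitting:

# Center each predictor and divide by 2 standard deviations, then refit.
# The outcome is left on its original scale, matching the output above.
states <- as.data.frame(state.x77)
preds <- c("Frost", "Illiteracy", "Murder")
states[preds] <- lapply(states[preds], function(x) (x - mean(x)) / (2 * sd(x)))
fit_2sd <- lm(Income ~ Frost + Illiteracy + Murder, data = states)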

Mean-centered variables

In the same vein as the standardization feature, you can keep the original scale while still mean-centering the predictors with the center = TRUE argument.

j_summ(fit, center = TRUE)
## MODEL INFO:
## Observations: 50
## Dependent Variable: Income
## 
## MODEL FIT: 
## F(3,46) = 4.049, p = 0.012
## R-squared = 0.209
## Adj. R-squared = 0.157
## 
## Standard errors: OLS 
##             Est.     S.E.    t val. p        
## (Intercept) 4435.8   79.773  55.605 0     ***
## Frost       -1.254   2.11    -0.594 0.555    
## Illiteracy  -610.715 213.138 -2.865 0.006 ** 
## Murder      23.074   30.94   0.746  0.46     
## 
## All continuous variables are mean-centered.