This vignette covers the intricacies of transformations and link functions in emmeans.
Consider the same example with the pigs dataset that is used in many of these vignettes:
pigs.lm <- lm(log(conc) ~ source + factor(percent), data = pigs)
This model has two factors, source and percent (coerced to a factor), as predictors, and log-transformed conc as the response. Here we obtain the EMMs for source, examine its structure, and finally produce a summary, including a test against a null value of log(35):
pigs.emm.s <- emmeans(pigs.lm, "source")
str(pigs.emm.s)
## 'emmGrid' object with variables:
## source = fish, soy, skim
## Transformation: "log"
summary(pigs.emm.s, infer = TRUE, null = log(35))
## source emmean SE df lower.CL upper.CL null t.ratio p.value
## fish 3.394492 0.03668122 23 3.318612 3.470373 3.555348 -4.385 0.0002
## soy 3.667260 0.03744798 23 3.589793 3.744727 3.555348 2.988 0.0066
## skim 3.796770 0.03938283 23 3.715300 3.878240 3.555348 6.130 <.0001
##
## Results are averaged over the levels of: percent
## Results are given on the log (not the response) scale.
## Confidence level used: 0.95
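As a quick sanity check (a sketch using the values printed above, not part of the original analysis), each t ratio is simply (EMM - null) / SE, computed on the log scale:
(3.394492 - log(35)) / 0.03668122   # about -4.385, matching the t.ratio shown for fish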
Now suppose that we want the EMMs expressed on the same scale as conc. This can be done by adding type = "response" to the summary() call:
summary(pigs.emm.s, infer = TRUE, null = log(35), type = "response")
## source response SE df lower.CL upper.CL null t.ratio p.value
## fish 29.79952 1.093083 23 27.62197 32.14874 35 -4.385 0.0002
## soy 39.14451 1.465883 23 36.22658 42.29747 35 2.988 0.0066
## skim 44.55704 1.754782 23 41.07093 48.33905 35 6.130 <.0001
##
## Results are averaged over the levels of: percent
## Confidence level used: 0.95
## Intervals are back-transformed from the log scale
## Tests are performed on the log scale
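These response-scale results are simply the exponentials of the log-scale results shown earlier. A minimal sketch, assuming pigs.emm.s from above:
exp(summary(pigs.emm.s)$emmean)   # about 29.80, 39.14, 44.56 -- the response column
exp(c(3.318612, 3.470373))        # about 27.62 and 32.15 -- the fish confidence limits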
Dealing with transformations in emmeans is somewhat complex, due to the large number of possibilities. But the key is understanding what happens, when. These results come from a sequence of steps. Here is what happens (and doesn’t happen) at each step:

1. The reference grid is constructed for the log(conc) model. The fact that a log transformation is used is recorded, but nothing else is done with that information.
2. The predictions are averaged over the percent levels, for each source, to obtain the EMMs for source – still on the log(conc) scale.
3. The standard errors, tests, and confidence intervals for these EMMs are computed on the log(conc) scale.
4. Only in the summary step, because type = "response" was specified, are the EMMs and confidence limits back-transformed to the conc scale.
5. The displayed SEs are converted to the conc scale using the delta method (a quick numerical check appears below). These SEs were not used in constructing the tests and confidence intervals.

This choice of timing is based on the idea that the model is right. In particular, the fact that the response is transformed suggests that the transformed scale is the best scale to be working with. In addition, the model specifies that the effects of source and percent are linear on the transformed scale; inasmuch as marginal averaging to obtain EMMs is a linear operation, that averaging is best done on the transformed scale. For those two good reasons, back-transforming to the response scale is delayed until the very end by default.
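Here is the quick check promised in step 5. By the delta method, the SE of exp(X) is approximately exp(x) times the SE of X, so (a sketch, assuming pigs.emm.s from above):
s <- summary(pigs.emm.s)
exp(s$emmean) * s$SE   # about 1.093, 1.466, 1.755 -- the SE column of the response-scale summary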
As well-advised as it is, some users may not want the default timing of things. The tool for changing when back-transformation is performed is the regrid() function – which, with default settings of its arguments, back-transforms an emmGrid object and adjusts everything in it appropriately. For example:
str(regrid(pigs.emm.s))
## 'emmGrid' object with variables:
## source = fish, soy, skim
summary(regrid(pigs.emm.s), infer = TRUE, null = 35)
## source response SE df lower.CL upper.CL null t.ratio p.value
## fish 29.79952 1.093083 23 27.53831 32.06074 35 -4.758 0.0001
## soy 39.14451 1.465883 23 36.11210 42.17692 35 2.827 0.0096
## skim 44.55704 1.754782 23 40.92699 48.18708 35 5.446 <.0001
##
## Results are averaged over the levels of: percent
## Confidence level used: 0.95
Notice that the structure no longer includes the transformation. That’s because it is no longer relevant; the reference grid is on the conc scale, and how we got there is now forgotten. Compare this summary() result with the preceding one, and note the following:

* The response estimates and SEs are identical in the two summaries.
* The confidence intervals, t ratios, and P values differ, because here they are computed directly on the conc scale rather than computed on the log scale and then back-transformed.
* The null value is specified directly as 35, rather than as log(35).

Understood, right? But think carefully about how these EMMs were obtained. They are back-transformed from pigs.emm.s, in which the marginal averaging was done on the log scale. If we want to back-transform before doing the averaging, we need to call regrid() after the reference grid is constructed but before the averaging takes place:
pigs.rg <- ref_grid(pigs.lm)
pigs.remm.s <- emmeans(regrid(pigs.rg), "source")
summary(pigs.remm.s, infer = TRUE, null = 35)
## source response SE df lower.CL upper.CL null t.ratio p.value
## fish 29.97478 1.096051 23 27.70743 32.24214 35 -4.585 0.0001
## soy 39.37473 1.494655 23 36.28280 42.46666 35 2.927 0.0076
## skim 44.81909 1.789918 23 41.11636 48.52182 35 5.486 <.0001
##
## Results are averaged over the levels of: percent
## Confidence level used: 0.95
These results all differ from either of the previous two summaries – again, because the averaging is done on the conc scale rather than the log(conc) scale.
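To see concretely why the order of operations matters, here is a small sketch (assuming pigs.rg from above) that reproduces both fish estimates by hand:
rg <- summary(pigs.rg)                       # grid predictions, still on the log(conc) scale
fish <- rg$prediction[rg$source == "fish"]   # the predictions for source fish
exp(mean(fish))   # about 29.80: average on the log scale, then back-transform (as in pigs.emm.s)
mean(exp(fish))   # about 29.97: back-transform each prediction, then average (as in pigs.remm.s)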
Note: For those who want to routinely back-transform before averaging, the transform argument in ref_grid() simplifies this. The first two steps above could have been done more easily as follows:
pigs.remm.s <- emmeans(pigs.lm, "source", transform = "response")
But don’t get transform and type confused. The transform argument is passed to regrid() after the reference grid is constructed, whereas the type argument is simply remembered and used by summary(). So a similar-looking call:
emmeans(pigs.lm, "source", type = "response")
will compute the results we have seen for pigs.emm.s – back-transformed after averaging on the log scale.
Remember again: When it comes to transformations, timing is everything.
Exactly the same ideas we have presented for response transformations apply to generalized linear models having non-identity link functions. As far as emmeans is concerned, there is no difference at all.
To illustrate, consider the neuralgia dataset provided in the package. These data come from an experiment, reported in a SAS technical report, in which different treatments for neuralgia are compared. The patient’s sex is an additional factor, and their age is a covariate. The response is Pain, a binary variable indicating whether or not the patient reports neuralgia pain after treatment. The model suggested in the SAS report is equivalent to the following. We use it to obtain estimated probabilities of experiencing pain:
neuralgia.glm <- glm(Pain ~ Treatment * Sex + Age, family = binomial(), data = neuralgia)
neuralgia.emm <- emmeans(neuralgia.glm, "Treatment", type = "response")
## NOTE: Results may be misleading due to involvement in interactions
neuralgia.emm
## Treatment prob SE df asymp.LCL asymp.UCL
## A 0.2109015 0.11090460 Inf 0.0675069 0.4966578
## B 0.1206362 0.08352179 Inf 0.0284832 0.3909567
## P 0.8662555 0.08827173 Inf 0.5926536 0.9664811
##
## Results are averaged over the levels of: Sex
## Confidence level used: 0.95
## Intervals are back-transformed from the logit scale
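It may help to view the same marginal means on the logit scale, where the averaging actually happens. A minimal sketch (type = "link" overrides the "response" setting stored in the object):
summary(neuralgia.emm, type = "link")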
(The note about the interaction is discussed shortly.) Note that the averaging over Sex is done on the logit scale, before the results are back-transformed for the summary. We may use pairs() to compare these estimates; note that logits are logs of odds, so this is another instance where log differences are back-transformed – in this case to odds ratios:
pairs(neuralgia.emm, reverse = TRUE)
## contrast odds.ratio SE df z.ratio p.value
## B / A 0.51329 0.51461 Inf -0.665 0.7837
## P / A 24.23381 25.14231 Inf 3.073 0.0060
## P / B 47.21292 57.24225 Inf 3.179 0.0042
##
## Results are averaged over the levels of: Sex
## P value adjustment: tukey method for comparing a family of 3 estimates
## Tests are performed on the log odds ratio scale
So there is evidence of considerably more pain being reported with placebo (treatment P) than with either of the other two treatments. The estimated odds of pain with B are about half those with A, but this finding is not statistically significant. (The odds that this is a made-up dataset seem quite high, but that finding is strictly this author’s impression.)
Observe that there is a note in the output for neuralgia.emm that the results may be misleading. It is important to take it seriously, because if two factors interact, it may be the case that marginal averages of predictions don’t reflect what is happening at any level of the factors being averaged over. To find out, look at an interaction plot of the fitted model:
emmip(neuralgia.glm, Sex ~ Treatment)
There is no practical difference between females and males in the patterns of response to Treatment, so I think most people would be quite comfortable with the marginal results reported earlier.
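If numbers are preferred over a picture, emmip() can return the values it would have plotted rather than drawing them. A sketch, using the plotit argument:
emmip(neuralgia.glm, Sex ~ Treatment, type = "response", plotit = FALSE)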
It is possible to have a generalized linear model with a non-identity link and a response transformation. Here is an example, using the built-in warpbreaks dataset:
warp.glm <- glm(sqrt(breaks) ~ wool*tension, family = Gamma, data = warpbreaks)
ref_grid(warp.glm)
## 'emmGrid' object with variables:
## wool = A, B
## tension = L, M, H
## Transformation: "inverse"
## Additional response transformation: "sqrt"
The canonical link for a gamma model is the reciprocal (inverse) link, and on top of that there is the square-root response transformation. If we choose type = "response" in summarizing, we undo both transformations:
emmeans(warp.glm, ~ tension | wool, type = "response")
## wool = A:
## tension response SE df asymp.LCL asymp.UCL
## L 42.87080 5.237067 Inf 33.22074 53.74966
## M 23.28978 2.845066 Inf 18.04733 29.19978
## H 23.58442 2.881055 Inf 18.27566 29.56919
##
## wool = B:
## tension response SE df asymp.LCL asymp.UCL
## L 27.43817 3.351832 Inf 21.26193 34.40087
## M 28.07575 3.429719 Inf 21.75599 35.20024
## H 18.50894 2.261044 Inf 14.34264 23.20577
##
## Confidence level used: 0.95
## Intervals are back-transformed from the sqrt scale
What happened here is that the linear predictor was first back-transformed from the (inverse) link scale, and the results were then squared to back-transform the rest of the way. It is also possible to undo the link, but not the response transformation:
emmeans(warp.glm, ~ tension | wool, type = "unlink")
## wool = A:
## tension response SE df asymp.LCL asymp.UCL
## L 6.547580 0.3999238 Inf 5.847547 7.438013
## M 4.825948 0.2947676 Inf 4.309983 5.482251
## H 4.856380 0.2966258 Inf 4.337161 5.516819
##
## wool = B:
## tension response SE df asymp.LCL asymp.UCL
## L 5.238146 0.3199445 Inf 4.678110 5.950505
## M 5.298655 0.3236405 Inf 4.732150 6.019244
## H 4.302202 0.2627775 Inf 3.842232 4.887278
##
## Confidence level used: 0.95
## Intervals are back-transformed from the inverse scale
It is not possible to undo the response transformation and leave the link in place, because the response was transformed first and the link model was applied afterward; they must be undone in reverse order to make sense.
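That ordering can be verified by hand. A small sketch (assuming warp.glm from above) that reproduces the type = "response" results from the link-scale estimates:
lp <- summary(emmeans(warp.glm, ~ tension | wool))   # estimates on the inverse-link scale
(1 / lp$emmean)^2   # undo the inverse link, then square -- matches the response-scale values above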
One may also use "unlink" as a transform argument in regrid() or through ref_grid().
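For instance, here is a sketch of one equivalent way to obtain the unlink-only results shown above, using that transform option explicitly:
emmeans(regrid(ref_grid(warp.glm), transform = "unlink"), ~ tension | wool)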
The make.tran() function provides several special transformations and sets things up so they can be handled in emmeans with relative ease. (See help("make.tran", "emmeans") for descriptions of what is available.)
make.tran() works much like stats::make.link() in that it returns a list of functions linkfun(), linkinv(), etc. that serve in managing results on a transformed scale. The difference is that most transformations with make.tran() require additional arguments.
To use this capability in emmeans(), it is convenient to first obtain the make.tran() result, and then to use it as the enclosing environment for fitting the model, with linkfun as the transformation. For example, suppose we want to use the response transformation \(\log(y + \frac12)\). Then proceed like this:
tran <- make.tran("genlog", 1/2)
my.model <- with(tran,
    lmer(linkfun(yield) ~ treatment + (1|Block), data = mydata))
Subsequent calls to ref_grid(), emmeans(), regrid(), etc. will then be able to access the transformation information correctly.
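For example, with the hypothetical tran, my.model, and mydata objects above, a follow-up call like this would back-transform the \(\log(y + \frac12)\) response automatically (a sketch):
emmeans(my.model, "treatment", type = "response")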
The help page for make.tran() has an example like this using a Box-Cox transformation.
It is not at all uncommon to fit a model using statements like the following:
mydata <- transform(mydata, logy.5 = log(yield + .5))
my.model <- lmer(logy.5 ~ treatment + (1|Block), data = mydata)
In this case, there is no way for ref_grid() to figure out that a response transformation was used. What can be done is to update the reference grid with the required information:
my.rg <- update(ref_grid(my.model), tran = make.tran("genlog", .5))
Subsequently, use my.rg in place of my.model in any emmeans() analyses, and the transformation information will be there.
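A sketch of that usage, with the same hypothetical objects:
emmeans(my.rg, "treatment", type = "response")   # the genlog transformation is now known and is back-transformed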
For standard transformations (those in stats::make.link()), just give the name of the transformation; e.g.,
model.rg <- update(ref_grid(model), tran = "sqrt")
The regrid() function makes it possible to fake a log transformation of the response. Why would you want to do this? So that you can make comparisons using ratios instead of differences.
Consider the pigs example once again, but suppose we had fitted a model with a square-root transformation instead of a log:
pigroot.lm <- lm(sqrt(conc) ~ source + factor(percent), data = pigs)
piglog.emm.s <- regrid(emmeans(pigroot.lm, "source"), transform = "log")
confint(piglog.emm.s, type = "response")
## source response SE df lower.CL upper.CL
## fish 29.84382 1.316416 23 27.24115 32.69514
## soy 39.24300 1.541103 23 36.18104 42.56408
## skim 44.99895 1.735523 23 41.54824 48.73626
##
## Results are averaged over the levels of: percent
## Confidence level used: 0.95
## Intervals are back-transformed from the log scale
pairs(piglog.emm.s, type = "response")
## contrast ratio SE df t.ratio p.value
## fish / soy 0.7604877 0.04535114 23 -4.591 0.0004
## fish / skim 0.6632114 0.03910379 23 -6.965 <.0001
## soy / skim 0.8720869 0.04685060 23 -2.548 0.0457
##
## Results are averaged over the levels of: percent
## P value adjustment: tukey method for comparing a family of 3 estimates
## Tests are performed on the log scale
These results are not identical to, but are very similar to, the back-transformed confidence intervals for the EMMs shown above and the pairwise ratios in the “comparisons” vignette, where the fitted model actually used a log response.
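For a side-by-side check, the analogous ratios from the model that actually used a log response can be obtained directly (pigs.emm.s was computed near the beginning of this vignette). A quick sketch:
pairs(pigs.emm.s, type = "response")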