The {logitr} package requires that data be structured in a `data.frame` and arranged in “long” or “tidy” format [@Wickham2014]. Each row should be an alternative from a choice observation. The choice observations do not have to be symmetric, meaning they can have a “ragged” structure where different choice observations have different numbers of alternatives. The data must also include variables for each of the following:

- An outcome dummy variable identifying which alternative was chosen (`1` if the alternative is chosen, `0` if it is not chosen). Only one alternative should be chosen per choice observation.
- An observation ID: a sequence of repeated numbers identifying each unique choice observation. For example, if the first three choice observations each had two alternatives, the first six rows of the `obsID` variable would be `1, 1, 2, 2, 3, 3`.
- Any variables to be used as model covariates.

The “Data Formatting and Encoding” vignette has more details about the required data format.
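As an illustration (with hypothetical values, not the actual `yogurt` data), a dataset in the required long format might look like the following, with a “ragged” structure where the first observation has two alternatives and the second has three:

```r
# Hypothetical long-format choice data: one row per alternative.
choice_data <- data.frame(
  obsID  = c(1, 1, 2, 2, 2),   # repeated IDs group rows into choice observations
  choice = c(0, 1, 1, 0, 0),   # exactly one chosen (1) alternative per obsID
  price  = c(2.5, 1.8, 2.5, 1.8, 3.1),
  brand  = c("a", "b", "a", "b", "c")
)
```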
Models are specified and estimated using the `logitr()` function. The `data` argument should be set to the data frame containing the choice data, and the `choice` and `obsID` arguments should be set to the names of the columns in the data frame that correspond to the choice dummy variable and the observation ID, respectively. All variables to be used as model covariates should be provided as a vector of column names to the `pars` argument. Each variable in the vector is additively included as a covariate in the utility model, with the interpretation that the covariates represent utilities in preference space models and WTPs in a WTP space model.
For example, consider a preference space model where the utility for yogurt is given by the following utility model:
\[\begin{equation} u_{j} = \alpha p_{j} + \beta_1 x_{j1} + \beta_2 x_{j2} + \beta_3 x_{j3} + \beta_4 x_{j4} + \varepsilon_{j}, \label{eq:yogurtUtilityPref} \end{equation}\]
where \(p_{j}\) is `price`, \(x_{j1}\) is `feat`, and \(x_{j2-4}\) are dummy-coded variables for each `brand` (with the fourth brand representing the reference level). This model can be estimated using the `logitr()` function as follows:
mnl_pref <- logitr(
  data = yogurt,
  choice = "choice",
  obsID = "obsID",
  pars = c("price", "feat", "brand")
)
The equivalent model in the WTP space is given by the following utility model:
\[\begin{equation} u_{j} = \lambda \left( \omega_1 x_{j1} + \omega_2 x_{j2} + \omega_3 x_{j3} + \omega_4 x_{j4} - p_{j} \right) + \varepsilon_{j}. \label{eq:yogurtUtilityWtp} \end{equation}\]
To specify this model, the user should move `"price"` from the `pars` argument to the `price` argument and set the `modelSpace` argument to `"wtp"` (`"pref"` is the default value):
mnl_wtp <- logitr(
  data = yogurt,
  choice = "choice",
  obsID = "obsID",
  pars = c("feat", "brand"),
  price = "price",
  modelSpace = "wtp"
)
In the above model, the variables in `pars` are marginal WTPs, whereas in the preference space model they are marginal utilities. Note that in the WTP space model price is specified separately via the `price` argument because it acts as a scaling term and does not reflect a marginal WTP.
Interactions between covariates can be entered in the `pars` vector, separated by the `*` symbol. For example, an interaction between `price` and `feat` in the above preference space model could be included by specifying `pars = c("price", "feat", "brand", "price*feat")`, or even more concisely just `pars = c("price*feat", "brand")`, since the interaction between `price` and `feat` produces individual parameters for `price` and `feat` in addition to the interaction parameter.
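As a sketch, the more concise interaction specification could be written as follows (same data and arguments as the earlier preference space example; the object name is arbitrary):

```r
# Sketch: preference space model with a price-feat interaction.
# The "price*feat" term expands to price, feat, and their interaction.
mnl_pref_int <- logitr(
  data = yogurt,
  choice = "choice",
  obsID = "obsID",
  pars = c("price*feat", "brand")
)
```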
Both of these examples are multinomial logit models with fixed parameters. See the “Estimating Multinomial Logit Models” vignette for more details.
Since WTP space models have non-convex log-likelihood functions, it is recommended to use a multi-start search that runs the optimization loop multiple times to search for different local minima. This is implemented with the `numMultiStarts` argument, e.g.:
mnl_wtp <- logitr(
  data = yogurt,
  choice = "choice",
  obsID = "obsID",
  pars = c("feat", "brand"),
  price = "price",
  modelSpace = "wtp",
  numMultiStarts = 10
)
Use the `summary()` function to print a summary of the results from an estimated model, e.g.:
summary(mnl_pref)
#> =================================================
#> Call:
#> logitr(data = yogurt, choice = "choice", obsID = "obsID", pars = c("price",
#> "feat", "brand"))
#>
#> Frequencies of alternatives:
#> 1 2 3 4
#> 0.402156 0.029436 0.229270 0.339138
#>
#> Exit Status: 3, Optimization stopped because ftol_rel or ftol_abs was reached.
#>
#> Model Type: Multinomial Logit
#> Model Space: Preference
#> Model Run: 1 of 1
#> Iterations: 21
#> Elapsed Time: 0h:0m:0.04s
#> Algorithm: NLOPT_LD_LBFGS
#> Weights Used?: FALSE
#> Robust? FALSE
#>
#> Model Coefficients:
#> Estimate Std. Error z-value Pr(>|z|)
#> price -0.366555 0.024365 -15.0441 < 2.2e-16 ***
#> feat 0.491439 0.120062 4.0932 4.254e-05 ***
#> brandhiland -3.715477 0.145417 -25.5506 < 2.2e-16 ***
#> brandweight -0.641138 0.054498 -11.7645 < 2.2e-16 ***
#> brandyoplait 0.734519 0.080642 9.1084 < 2.2e-16 ***
#> ---
#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#>
#> Log-Likelihood: -2656.8878790
#> Null Log-Likelihood: -3343.7419990
#> AIC: 5323.7757580
#> BIC: 5352.7168000
#> McFadden R2: 0.2054148
#> Adj McFadden R2: 0.2039195
#> Number of Observations: 2412.0000000
Use `statusCodes()` to print a description of each status code from the `nloptr` optimization routine.
You can also extract other values of interest at the solution, such as:
The estimated coefficients
coef(mnl_pref)
#> price feat brandhiland brandweight brandyoplait
#> -0.3665546 0.4914392 -3.7154773 -0.6411384 0.7345195
The log-likelihood
logLik(mnl_pref)
#> 'log Lik.' -2656.888 (df=5)
The variance-covariance matrix
vcov(mnl_pref)
#> price feat brandhiland brandweight
#> price 0.0005936657 5.729619e-04 0.001851795 1.249988e-04
#> feat 0.0005729619 1.441482e-02 0.000855011 5.092010e-06
#> brandhiland 0.0018517954 8.550110e-04 0.021146019 1.490080e-03
#> brandweight 0.0001249988 5.092009e-06 0.001490080 2.970026e-03
#> brandyoplait -0.0015377720 -1.821331e-03 -0.003681036 7.779428e-04
#> brandyoplait
#> price -0.0015377720
#> feat -0.0018213311
#> brandhiland -0.0036810363
#> brandweight 0.0007779428
#> brandyoplait 0.0065031782
The coefficient standard errors
sqrt(diag(vcov(mnl_pref)))
#> price feat brandhiland brandweight brandyoplait
#> 0.02436526 0.12006175 0.14541671 0.05449794 0.08064229
For models in the preference space, a summary table of the WTP computed from the estimated preference space parameters can be obtained with the `wtp()` function. For example, the computed WTP from the previously estimated fixed parameter model can be obtained with the following command:
wtp(mnl_pref, price = "price")
#> Estimate Std. Error z-value Pr(>|z|)
#> lambda 0.366555 0.024396 15.0253 < 2.2e-16 ***
#> feat 1.340699 0.359136 3.7331 0.0001891 ***
#> brandhiland -10.136219 0.583646 -17.3671 < 2.2e-16 ***
#> brandweight -1.749094 0.181400 -9.6422 < 2.2e-16 ***
#> brandyoplait 2.003848 0.143323 13.9814 < 2.2e-16 ***
#> ---
#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
The `wtp()` function divides the non-price parameters by the negative of the price parameter. Standard errors are estimated using the Krinsky and Robb parametric bootstrapping method [@Krinsky1986]. Similarly, the `wtpCompare()` function can be used to compare the WTP from a WTP space model with that computed from an equivalent preference space model:
wtpCompare(mnl_pref, mnl_wtp, price = "price")
#> pref wtp difference
#> lambda 0.3665546 0.3665832 0.00002867
#> feat 1.3406987 1.3405926 -0.00010605
#> brandhiland -10.1362190 -10.1357635 0.00045548
#> brandweight -1.7490940 -1.7490826 0.00001133
#> brandyoplait 2.0038476 2.0038208 -0.00002686
#> logLik -2656.8878790 -2656.8878779 0.00000106
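The division that `wtp()` performs can be sketched by hand from the preference space coefficients. This is a rough consistency check of the point estimates only; it does not reproduce the bootstrapped standard errors:

```r
# Hand computation of the WTP point estimates:
# divide each non-price coefficient by the negative of the price coefficient.
beta   <- coef(mnl_pref)
lambda <- -beta["price"]    # the scale term reported as "lambda"
wtp_by_hand <- beta[names(beta) != "price"] / lambda
wtp_by_hand                 # point estimates matching the wtp() summary table
```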
To estimate a mixed logit model, use the `randPars` argument in the `logitr()` function to denote which parameters will be modeled with a distribution. The current package version supports normal (`"n"`) and log-normal (`"ln"`) distributions.
For example, assume the observed utility for yogurts was \(v_{j} = \alpha p_{j} + \beta_1 x_{j1} + \beta_2 x_{j2} + \beta_3 x_{j3} + \beta_4 x_{j4}\), where \(p_{j}\) is `price`, \(x_{j1}\) is `feat`, and \(x_{j2-4}\) are dummy-coded variables for `brand`. To model `feat` as well as each of the brands as normally distributed, set `randPars = c(feat = "n", brand = "n")`:
mxl_pref <- logitr(
  data = yogurt,
  choice = 'choice',
  obsID = 'obsID',
  pars = c('price', 'feat', 'brand'),
  randPars = c(feat = 'n', brand = 'n'),
  numMultiStarts = 10
)
Since mixed logit models also have non-convex log-likelihood functions, it is again recommended to use a multi-start search to run the optimization loop multiple times to search for different local minima.
See the “Estimating Mixed Logit Models” vignette for more details.
Estimated models can be used to predict expected choices and choice probabilities for one or more sets of alternatives. As an example, consider predicting choice probabilities for two of the choice observations from the `yogurt` dataset:
alts <- subset(
  yogurt, obsID %in% c(42, 13),
  select = c('obsID', 'alt', 'choice', 'price', 'feat', 'brand')
)

alts
#> obsID alt choice price feat brand
#> 49 13 1 0 8.1 0 dannon
#> 50 13 2 0 5.0 0 hiland
#> 51 13 3 1 8.6 0 weight
#> 52 13 4 0 10.8 0 yoplait
#> 165 42 1 1 6.3 0 dannon
#> 166 42 2 0 6.1 1 hiland
#> 167 42 3 0 7.9 0 weight
#> 168 42 4 0 11.5 0 yoplait
In the example below, the expected choice probabilities for these two sets of alternatives are computed using the fixed parameter `mnl_pref` model:
probs <- predictProbs(
  model = mnl_pref,
  alts = alts,
  altID = "alt",
  obsID = "obsID"
)

probs
#> obsID alt prob_mean prob_low prob_high
#> 49 13 1 0.43685145 0.41510696 0.45779942
#> 50 13 2 0.03312986 0.02628916 0.04170730
#> 51 13 3 0.19155548 0.17617411 0.20820785
#> 52 13 4 0.33846321 0.31868868 0.35880207
#> 165 42 1 0.60764778 0.57287197 0.63979259
#> 166 42 2 0.02602007 0.01838555 0.03665216
#> 167 42 3 0.17803313 0.16206314 0.19501828
#> 168 42 4 0.18829902 0.16827987 0.20961170
The resulting `probs` data frame contains the expected choice probabilities for each alternative. The low and high values define a 95% confidence interval, estimated using the Krinsky and Robb parametric bootstrapping method [@Krinsky1986]. You can change the CI level with the optional `ci` argument (e.g., a 90% CI is obtained with `ci = 0.90`).
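For example, a sketch of the same prediction with a 90% interval, reusing the `mnl_pref` model and the `alts` data frame from above:

```r
# Sketch: the same prediction with a 90% confidence interval
# requested via the optional ci argument.
probs_90 <- predictProbs(
  model = mnl_pref,
  alts = alts,
  altID = "alt",
  obsID = "obsID",
  ci = 0.90
)
```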
Choice probabilities can also be predicted using WTP space models. For example, the predicted choice probabilities from the `mnl_wtp` model are nearly identical to those from the `mnl_pref` model:
probs <- predictProbs(
  model = mnl_wtp,
  alts = alts,
  altID = "alt",
  obsID = "obsID"
)

probs
#> obsID alt prob_mean prob_low prob_high
#> 49 13 1 0.43686141 0.41501695 0.45781008
#> 50 13 2 0.03312947 0.02654480 0.04229588
#> 51 13 3 0.19154829 0.17641913 0.20760238
#> 52 13 4 0.33846083 0.31857110 0.35884452
#> 165 42 1 0.60767120 0.57285938 0.63931635
#> 166 42 2 0.02601800 0.01856952 0.03663630
#> 167 42 3 0.17802363 0.16222403 0.19470915
#> 168 42 4 0.18828717 0.16735147 0.20898596
See the “Predicting Choice Probabilities from Estimated Models” vignette for more details.
Similar to the `predictProbs()` function, the `predictChoices()` function can be used to predict choices using the results of an estimated model. In the example below, choices are predicted for the entire `yogurt` dataset, which was used to estimate the `mnl_pref` model:
choices <- predictChoices(
  model = mnl_pref,
  alts = yogurt,
  altID = "alt",
  obsID = "obsID"
)
# Preview actual and predicted choices
head(choices[c('obsID', 'choice', 'choice_predict')])
#> obsID choice choice_predict
#> 1.1 1 0 1
#> 1.2 1 0 0
#> 1.3 1 1 0
#> 1.4 1 0 0
#> 2.5 2 1 1
#> 2.6 2 0 0
The resulting `choices` data frame contains the same data frame provided to the `alts` argument with an additional column, `choice_predict`, which contains the predicted choices. You can quickly compute the prediction accuracy by dividing the number of correctly predicted choices by the total number of choices:
chosen <- subset(choices, choice == 1)
chosen$correct <- chosen$choice == chosen$choice_predict
sum(chosen$correct) / nrow(chosen)
#> [1] 0.3884743
See the “Predicting Choices from Estimated Models” vignette for more details.