This is a package dedicated to performing least squares constrained optimization on a linear objective function. The functions minimize the same objective function as lm, applying a constraint on the beta parameters:
\[S(\beta) = \sum_{i=1}^m \vert y_i - \sum_{j=1}^nX_{ij}\beta_j \vert^2 = \Vert y - X\beta\Vert^2\]
and
\[\hat{\beta} = \arg\min_{\beta} \, S(\beta)\]
under the constraints:
\[lower \le \hat{\beta} \le upper\]
The idea behind the package is to give users a way to perform a constrained “linear regression” in an easy and intuitive way. The functions require a formula in the same syntax and format as lm, a style most R users are familiar with.
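Because the objective is the same as lm's, a fit without any bounds should closely reproduce lm's coefficients. A minimal sketch illustrating this (not part of the package documentation; it assumes the default starting values):
#with no lower/upper bounds the port algorithm should converge
#to essentially the ordinary least squares solution
library(colf)
coef(colf_nls(mpg ~ hp, mtcars))
coef(lm(mpg ~ hp, mtcars))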
So far the package includes two functions to perform the constrained optimization:

- colf_nls - uses the port algorithm, which comes from the stats::nls function.
- colf_nlxb - uses Nash's variant of the Marquardt nonlinear least squares solution, which comes from the nlsr::nlxb function.

You can find more details about the two algorithms by having a look at ?nls and ?nlxb respectively.
Now we will see how we can easily use the port algorithm to perform a constrained optimization. As you will see, we use colf_nls in the same way we would use lm, with the addition of upper and lower bounds for our parameter estimates.
We will use the mtcars data set for a demonstration. Let's load the package and use mtcars to run a constrained least squares optimization model.
In the model below we use 4 variables to model mpg, which means we will have 5 parameter estimates (don't forget the Intercept). Parameters are prefixed with param_ in the model's output. We set the lower bounds of those 5 parameter estimates to -2 and the upper bounds to 2 (obviously they do not need to be the same). Ideally, starting values should be provided. If omitted, a cheap guess is made, which is basically setting all starting values to 1. If the starting values do not fall within the boundaries defined by lower and upper, an error will be returned and you will need to change the starting values manually via the start argument.
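Concretely, the optimization solved below is:
\[\hat{\beta} = \arg\min_{\beta} \sum_{i=1}^{32}\left(mpg_i - \beta_0 - \beta_1\,cyl_i - \beta_2\,disp_i - \beta_3\,hp_i - \beta_4\,qsec_i\right)^2 \quad \text{subject to} \quad -2 \le \beta_j \le 2\]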
library(colf)
mymod <- colf_nls(mpg ~ cyl + disp + hp + qsec, mtcars, lower = rep(-2, 5), upper = rep(2, 5))
mymod
## Nonlinear regression model
## model: mpg ~ param_X.Intercept. * X.Intercept. + param_cyl * cyl + param_disp * disp + param_hp * hp + param_qsec * qsec
## data: model_ingredients$model_data
## param_X.Intercept. param_cyl param_disp
## 2.0000 0.2394 -0.0387
## param_hp param_qsec
## 0.0103 1.3391
## residual sum-of-squares: 418
##
## Algorithm "port", convergence message: both X-convergence and relative convergence (5)
As you can see, all 5 parameter estimates fall within the defined boundaries. The formula provided above includes the intercept. In the output, X.Intercept. is a column of ones and param_X.Intercept. is the estimated intercept.
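A quick programmatic sanity check of the same fact (an illustrative one-liner, not from the original text):
#confirm that every estimate respects the lower and upper bounds
all(coef(mymod) >= -2 & coef(mymod) <= 2)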
If starting values do not fall within the boundaries, an error will be returned. As mentioned previously, if they are not provided they will be set to 1.
colf_nls(mpg ~ cyl + disp + hp + qsec, mtcars, lower = rep(-2, 5), upper = rep(0.5, 5))
## Error in nls(model_ingredients$model_formula, data = model_ingredients$model_data, : Convergence failure: initial par violates constraints
So they need to be set by the user:
colf_nls(mpg ~ cyl + disp + hp + qsec, mtcars, lower = rep(-2, 5), upper = rep(0.5, 5),
start = rep(0, 5))
## Nonlinear regression model
## model: mpg ~ param_X.Intercept. * X.Intercept. + param_cyl * cyl + param_disp * disp + param_hp * hp + param_qsec * qsec
## data: model_ingredients$model_data
## param_X.Intercept. param_cyl param_disp
## 0.5000 0.5000 -0.0254
## param_hp param_qsec
## 0.0697 0.5000
## residual sum-of-squares: 2238
##
## Algorithm "port", convergence message: both X-convergence and relative convergence (5)
As with lm, colf_nls accepts the same kind of formula syntax:
#no intercept
colf_nls(mpg ~ 0 + hp + cyl, mtcars)
## Nonlinear regression model
## model: mpg ~ param_hp * hp + param_cyl * cyl
## data: model_ingredients$model_data
## param_hp param_cyl
## -0.107 5.404
## residual sum-of-squares: 3150
##
## Algorithm "port", convergence message: relative convergence (4)
colf_nls(mpg ~ ., mtcars)
## Nonlinear regression model
## model: mpg ~ param_X.Intercept. * X.Intercept. + param_cyl * cyl + param_disp * disp + param_hp * hp + param_drat * drat + param_wt * wt + param_qsec * qsec + param_vs * vs + param_am * am + param_gear * gear + param_carb * carb
## data: model_ingredients$model_data
## param_X.Intercept. param_cyl param_disp
## 12.3033 -0.1114 0.0133
## param_hp param_drat param_wt
## -0.0215 0.7871 -3.7153
## param_qsec param_vs param_am
## 0.8210 0.3178 2.5202
## param_gear param_carb
## 0.6554 -0.1994
## residual sum-of-squares: 147
##
## Algorithm "port", convergence message: relative convergence (4)
colf_nls(mpg ~ I(hp + cyl), mtcars)
## Nonlinear regression model
## model: mpg ~ param_X.Intercept. * X.Intercept. + param_I.hp...cyl. * I.hp...cyl.
## data: model_ingredients$model_data
## param_X.Intercept. param_I.hp...cyl.
## 30.3667 -0.0672
## residual sum-of-squares: 439
##
## Algorithm "port", convergence message: both X-convergence and relative convergence (5)
colf_nls(mpg ~ (hp + cyl + disp)^3, mtcars)
## Nonlinear regression model
## model: mpg ~ param_X.Intercept. * X.Intercept. + param_hp * hp + param_cyl * cyl + param_disp * disp + param_hp.cyl * hp.cyl + param_hp.disp * hp.disp + param_cyl.disp * cyl.disp + param_hp.cyl.disp * hp.cyl.disp
## data: model_ingredients$model_data
## param_X.Intercept. param_hp param_cyl
## 9.29e+01 -4.70e-01 -1.06e+01
## param_disp param_hp.cyl param_hp.disp
## -3.86e-01 6.73e-02 2.81e-03
## param_cyl.disp param_hp.cyl.disp
## 5.27e-02 -3.84e-04
## residual sum-of-squares: 154
##
## Algorithm "port", convergence message: both X-convergence and relative convergence (5)
colf_nls(mpg ~ hp:cyl, mtcars)
## Nonlinear regression model
## model: mpg ~ param_X.Intercept. * X.Intercept. + param_hp.cyl * hp.cyl
## data: model_ingredients$model_data
## param_X.Intercept. param_hp.cyl
## 27.26649 -0.00713
## residual sum-of-squares: 407
##
## Algorithm "port", convergence message: both X-convergence and relative convergence (5)
colf_nls(mpg ~ hp * cyl, mtcars)
## Nonlinear regression model
## model: mpg ~ param_X.Intercept. * X.Intercept. + param_hp * hp + param_cyl * cyl + param_hp.cyl * hp.cyl
## data: model_ingredients$model_data
## param_X.Intercept. param_hp param_cyl
## 50.7512 -0.1707 -4.1191
## param_hp.cyl
## 0.0197
## residual sum-of-squares: 248
##
## Algorithm "port", convergence message: relative convergence (4)
Notice that when the above variants are used, the parameter names are created with make.names in order to be syntactically valid (otherwise the optimizers fail). This is why you see an 'X.' in front of the intercept and extra dots in the names.
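You can see what make.names does to typical formula terms by calling it directly (an illustrative base R call, not from the original text):
#special characters become dots, and an "X" is prepended where a
#name would not otherwise start with a letter
make.names(c("(Intercept)", "I(hp + cyl)", "hp:cyl"))
## [1] "X.Intercept." "I.hp...cyl."  "hp.cyl"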
colf provides a number of methods for colf objects:

- predict - uses the parameter estimates to predict on a new data set
- coef - retrieves the coefficients
- resid - retrieves the residuals
- print - prints the model
- summary - views a summary of the model
- fitted - retrieves the fitted values

In order to use the parameter estimates to make predictions on a new data set you need to remember two really important checks:

- the column names of the new data set need to be the same as in the original data
- the column classes of the new data set need to be the same as in the original data

If either of the two does not hold, predict will fail.
set.seed(10)
newdata <- data.frame(hp = mtcars$hp, cyl = mtcars$cyl, disp = mtcars$disp, qsec = mtcars$qsec)
predict(mymod, newdata)
## 1 2 3 4 5 6 7 8 9 10
## 20.426 21.176 24.663 20.627 14.591 22.896 13.734 24.707 29.160 22.731
## 11 12 13 14 15 16 17 18 19 20
## 23.534 18.408 18.676 19.212 11.855 12.208 12.601 26.669 25.368 27.528
## 21 22 23 24 25 26 27 28 29 30
## 26.111 15.757 16.874 13.545 13.084 25.894 21.608 23.078 12.484 20.392
## 31 32
## 15.285 24.312
But if we change any of the names or classes, predict will fail:
#change column name
newdata2 <- newdata
names(newdata2)[1] <- 'col1'
predict(mymod, newdata2)
## Error in eval(expr, envir, enclos): object 'hp' not found
#change column class
newdata2 <- newdata
newdata2$cyl <- as.character(newdata2$cyl)
predict(mymod, newdata2)
## Error in value[[3L]](cond): newdata column classes need to be the same as original data
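Restoring the original class makes predict work again (an illustrative fix, not shown in the original output):
#convert cyl back to numeric; predict now returns the same predictions as before
newdata2$cyl <- as.numeric(newdata2$cyl)
predict(mymod, newdata2)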
The rest of the colf_nls methods are demonstrated below.
You need to be careful when using summary because it returns p-values. By default nls and nlxb both return p-values for the coefficients, and these are naturally passed on to colf. In an unconstrained regression the p-values test the null hypothesis that a coefficient is zero. In constrained regression this test may not even make sense: a common restriction is to force the coefficients to be positive, in which case the null value of zero lies on the boundary of the allowed region. Other assumptions made in unconstrained regression (such as the coefficients' distribution) do not hold either, so the use and interpretation of the p-values can be problematic when we set lower and/or upper.
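As a hypothetical illustration of the positivity case (output omitted; the exact fit depends on the algorithm's convergence):
#force all 5 coefficients to be non-negative; p-values for estimates pushed
#against the zero bound should not be interpreted in the usual way
pos_mod <- colf_nls(mpg ~ cyl + disp + hp + qsec, mtcars, lower = rep(0, 5))
summary(pos_mod)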
summary(mymod)
##
## Formula: mpg ~ param_X.Intercept. * X.Intercept. + param_cyl * cyl + param_disp *
## disp + param_hp * hp + param_qsec * qsec
##
## Parameters:
## Estimate Std. Error t value Pr(>|t|)
## param_X.Intercept. 2.0000 14.0397 0.14 0.888
## param_cyl 0.2394 1.0843 0.22 0.827
## param_disp -0.0387 0.0148 -2.61 0.014 *
## param_hp 0.0103 0.0228 0.45 0.654
## param_qsec 1.3391 0.6187 2.16 0.039 *
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 3.93 on 27 degrees of freedom
##
## Algorithm "port", convergence message: both X-convergence and relative convergence (5)
coef(mymod)
## param_X.Intercept. param_cyl param_disp
## 2.000000 0.239363 -0.038676
## param_hp param_qsec
## 0.010329 1.339146
print(mymod)
## Nonlinear regression model
## model: mpg ~ param_X.Intercept. * X.Intercept. + param_cyl * cyl + param_disp * disp + param_hp * hp + param_qsec * qsec
## data: model_ingredients$model_data
## param_X.Intercept. param_cyl param_disp
## 2.0000 0.2394 -0.0387
## param_hp param_qsec
## 0.0103 1.3391
## residual sum-of-squares: 418
##
## Algorithm "port", convergence message: both X-convergence and relative convergence (5)
resid(mymod)
## [1] 0.57350 -0.17642 -1.86251 0.77311 4.10872 -4.79609 0.56591
## [8] -0.30696 -6.35952 -3.53086 -5.73435 -2.00833 -1.37616 -4.01182
## [15] -1.45499 -1.80813 2.09908 5.73149 5.03225 6.37205 -4.61065
## [22] -0.25659 -1.67389 -0.24502 6.11559 1.40641 4.39164 7.32194
## [29] 3.31603 -0.69243 -0.28503 -2.91159
## attr(,"label")
## [1] "Residuals"
fitted(mymod)
## [1] 20.426 21.176 24.663 20.627 14.591 22.896 13.734 24.707 29.160 22.731
## [11] 23.534 18.408 18.676 19.212 11.855 12.208 12.601 26.669 25.368 27.528
## [21] 26.111 15.757 16.874 13.545 13.084 25.894 21.608 23.078 12.484 20.392
## [31] 15.285 24.312
## attr(,"label")
## [1] "Fitted values"
colf_nlxb can be used in exactly the same way as colf_nls. All aspects and features discussed for colf_nls hold for colf_nlxb as well; only the underlying algorithm changes.
mymod <- colf_nlxb(mpg ~ cyl + disp + hp + qsec, mtcars, lower = rep(-2, 5), upper = rep(2, 5))
mymod
## nlsr class object: x
## residual sumsquares = 1505 on 32 observations
## after 7 Jacobian and 7 function evaluations
## name coeff SEs tstat pval gradient JSingval
## 1 param_X.Intercept. 2.000000 NA NA NA 3.1092e-02 1.7236e+03
## 2 param_cyl -2.000000 NA NA NA 0.0000e+00 2.2588e+02
## 3 param_disp 0.038586 NA NA NA 2.2571e+04 4.8424e+01
## 4 param_hp 0.002320 NA NA NA 8.8068e+03 3.2482e-01
## 5 param_qsec 1.189077 NA NA NA -2.5675e+01 1.9079e-14
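Notice that the two algorithms reach different solutions here (residual sums of squares of 418 for colf_nls versus 1505 for colf_nlxb, with param_cyl pushed to its lower bound). Comparing the two objective values directly is straightforward (an illustrative comparison, not from the original text):
#refit with the port algorithm and compare residual sums of squares
nls_fit <- colf_nls(mpg ~ cyl + disp + hp + qsec, mtcars, lower = rep(-2, 5), upper = rep(2, 5))
sum(resid(nls_fit)^2)
sum(resid(mymod)^2)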
Setting lower, upper and starting values:
#start values are outside boundaries
colf_nlxb(mpg ~ cyl + disp + hp + qsec, mtcars, lower = rep(-2, 5), upper = rep(0.5, 5))
## Error in nlxb(model_ingredients$model_formula, data = model_ingredients$model_data, : Infeasible start
#so they need to be provided
colf_nlxb(mpg ~ cyl + disp + hp + qsec, mtcars, lower = rep(-5, 5), upper = rep(.5, 5),
start = rep(0, 5))
## nlsr class object: x
## residual sumsquares = 2237.9 on 32 observations
## after 5 Jacobian and 6 function evaluations
## name coeff SEs tstat pval gradient JSingval
## 1 param_X.Intercept. 0.500000 NA NA NA 0.0000e+00 1.7213e+03
## 2 param_cyl 0.500000 NA NA NA 0.0000e+00 2.2559e+02
## 3 param_disp -0.025392 NA NA NA 2.0037e-05 3.9200e-14
## 4 param_hp 0.069710 NA NA NA -1.2805e-05 0.0000e+00
## 5 param_qsec 0.500000 NA NA NA 0.0000e+00 0.0000e+00
As with lm, colf_nlxb accepts the same kind of formula syntax:
#no intercept
colf_nlxb(mpg ~ 0 + hp + cyl, mtcars)
## nlsr class object: x
## residual sumsquares = 3149.5 on 32 observations
## after 3 Jacobian and 4 function evaluations
## name coeff SEs tstat pval gradient JSingval
## 1 param_hp -0.10747 0.045383 -2.3680 2.4529e-02 3.5416e-05 914.0685
## 2 param_cyl 5.40364 1.139220 4.7433 4.8055e-05 -1.4114e-06 8.9873
colf_nlxb(mpg ~ ., mtcars)
## nlsr class object: x
## residual sumsquares = 147.5 on 32 observations
## after 2 Jacobian and 3 function evaluations
## name coeff SEs tstat pval gradient
## 1 param_X.Intercept. 12.080869 18.717972 0.64542 0.525648 -2.6874e-03
## 2 param_cyl -0.106182 1.045028 -0.10161 0.920033 -6.2879e-03
## 3 param_disp 0.013435 0.017858 0.75235 0.460194 5.5551e-01
## 4 param_hp -0.021483 0.021769 -0.98688 0.334938 -7.2165e-03
## 5 param_drat 0.791165 1.635381 0.48378 0.633548 2.0801e-04
## 6 param_wt -3.729778 1.894423 -1.96882 0.062312 -8.2814e-03
## 7 param_qsec 0.830959 0.730848 1.13698 0.268355 6.4511e-02
## 8 param_vs 0.310536 2.104518 0.14756 0.884100 -1.1932e-05
## 9 param_am 2.526646 2.056660 1.22852 0.232843 3.1270e-05
## 10 param_gear 0.659013 1.493267 0.44132 0.663490 -4.4844e-04
## 11 param_carb -0.196141 0.828756 -0.23667 0.815207 1.9448e-03
## JSingval
## 1 1724.34152
## 2 226.12682
## 3 50.89759
## 4 5.65107
## 5 4.27912
## 6 4.05009
## 7 1.62429
## 8 1.40497
## 9 1.23732
## 10 1.11377
## 11 0.14119
colf_nlxb(mpg ~ I(hp + cyl), mtcars)
## nlsr class object: x
## residual sumsquares = 438.6 on 32 observations
## after 3 Jacobian and 4 function evaluations
## name coeff SEs tstat pval gradient
## 1 param_X.Intercept. 30.366699 1.6439622 18.4717 0.000e+00 -2.8609e-08
## 2 param_I.hp...cyl. -0.067219 0.0098026 -6.8572 1.307e-07 4.6600e-06
## JSingval
## 1 948.7026
## 2 2.3258
colf_nlxb(mpg ~ (hp + cyl + disp)^3, mtcars)
## nlsr class object: x
## residual sumsquares = 6207826 on 32 observations
## after 2 Jacobian and 3 function evaluations
## name coeff SEs tstat pval gradient
## 1 param_X.Intercept. -2.5360e+04 5.4329e+03 -4.6679 9.6610e-05 -2.2429e+00
## 2 param_hp 2.4485e+02 5.1945e+01 4.7136 8.6027e-05 8.2240e+02
## 3 param_cyl 4.2408e+03 9.9135e+02 4.2778 2.6062e-04 3.5875e+01
## 4 param_disp 1.8386e+02 3.8699e+01 4.7510 7.8223e-05 1.0508e+03
## 5 param_hp.cyl -3.5587e+01 7.7299e+00 -4.6038 1.1372e-04 -1.0075e+04
## 6 param_hp.disp -1.9810e+00 4.0748e-01 -4.8617 5.9040e-05 -1.0917e+06
## 7 param_cyl.disp -2.6274e+01 5.5456e+00 -4.7378 8.0890e-05 -1.5164e+04
## 8 param_hp.cyl.disp 2.6299e-01 5.3782e-02 4.8900 5.4952e-05 1.0170e+07
## JSingval
## 1 2.3660e+06
## 2 2.1312e+04
## 3 2.2941e+03
## 4 1.2322e+03
## 5 5.9652e+01
## 6 3.5301e+01
## 7 1.9948e+00
## 8 9.2185e-02
colf_nlxb(mpg ~ hp:cyl, mtcars)
## nlsr class object: x
## residual sumsquares = 407.2 on 32 observations
## after 2 Jacobian and 3 function evaluations
## name coeff SEs tstat pval gradient
## 1 param_X.Intercept. 27.2664037 1.1817113 23.0737 0.0000e+00 -0.00045181
## 2 param_hp.cyl -0.0071303 0.0009798 -7.2774 4.2035e-08 0.62834774
## JSingval
## 1 6822.6123
## 2 3.1177
colf_nlxb(mpg ~ hp * cyl, mtcars)
## nlsr class object: x
## residual sumsquares = 247.6 on 32 observations
## after 3 Jacobian and 4 function evaluations
## name coeff SEs tstat pval gradient
## 1 param_X.Intercept. 50.765907 6.5116862 7.7961 1.7145e-08 0.00044412
## 2 param_hp -0.170838 0.0691016 -2.4723 1.9767e-02 -0.12074368
## 3 param_cyl -4.121103 0.9882292 -4.1702 2.6582e-04 -0.00238224
## 4 param_hp.cyl 0.019758 0.0088109 2.2424 3.3034e-02 0.87458854
## JSingval
## 1 6881.82249
## 2 155.24157
## 3 8.31320
## 4 0.45214
set.seed(10)
newdata <- data.frame(hp = mtcars$hp, cyl = mtcars$cyl, disp = mtcars$disp, qsec = mtcars$qsec)
predict(mymod, newdata)
## 1 2 3 4 5 6 7 8 9 10
## 16.001 16.667 20.512 23.326 20.535 22.969 19.295 23.586 26.883 18.513
## 11 12 13 14 15 16 17 18 19 20
## 19.226 17.750 17.988 18.463 26.068 25.438 24.225 20.341 19.063 20.557
## 21 22 23 24 25 26 27 28 29 30
## 22.653 18.678 18.649 18.397 22.114 19.675 18.711 18.027 17.398 14.432
## 31 32
## 15.752 21.039
As with colf_nls, keeping the names and classes of the new data the same is vital for colf_nlxb:
#change column name
newdata2 <- newdata
names(newdata2)[1] <- 'col1'
predict(mymod, newdata2)
## Error in eval(expr, envir, enclos): object 'hp' not found
#change column class
newdata2 <- newdata
newdata2$cyl <- as.character(newdata2$cyl)
predict(mymod, newdata2)
## Error in value[[3L]](cond): newdata column classes need to be the same as original data
The rest of the methods are demonstrated below. Please make sure you read the section about the interpretation of the p-values under colf_nls when running a constrained regression; the same principles described there hold for colf_nlxb.
summary(mymod)
## $resname
## [1] "mymod"
##
## $ssquares
## [1] 1505
##
## $nobs
## [1] 32
##
## $coeff
## param_X.Intercept. param_cyl param_disp
## 2.000000 -2.000000 0.038586
## param_hp param_qsec
## 0.002320 1.189077
##
## $ct
## [1] "U" "L" " " " " " "
##
## $mt
## [1] " " " " " " " " " "
##
## $SEs
## [1] NA NA NA NA NA
##
## $tstat
## [1] NA NA NA NA NA
##
## $pval
## [1] NA NA NA NA NA
##
## $Sd
## [1] 1.7236e+03 2.2588e+02 4.8424e+01 3.2482e-01 1.9079e-14
##
## $gr
## [,1]
## param_X.Intercept. 3.1092e-02
## param_cyl 0.0000e+00
## param_disp 2.2571e+04
## param_hp 8.8068e+03
## param_qsec -2.5675e+01
##
## $jeval
## [1] 7
##
## $feval
## [1] 7
##
## $data_frame_to_print
## name coeff SEs tstat pval gradient JSingval
## 1 param_X.Intercept. 2.000000 NA NA NA 3.1092e-02 1.7236e+03
## 2 param_cyl -2.000000 NA NA NA 0.0000e+00 2.2588e+02
## 3 param_disp 0.038586 NA NA NA 2.2571e+04 4.8424e+01
## 4 param_hp 0.002320 NA NA NA 8.8068e+03 3.2482e-01
## 5 param_qsec 1.189077 NA NA NA -2.5675e+01 1.9079e-14
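Note the $ct element above: param_X.Intercept. and param_cyl ended up exactly on their bounds, and the "U" and "L" flags appear to mark parameters sitting at the upper and lower bound respectively (an inference from this output, not from the package documentation). The flags can be extracted directly:
#inspect which parameters sit on a bound ("U" = upper, "L" = lower)
summary(mymod)$ct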
coef(mymod)
## param_X.Intercept. param_cyl param_disp
## 2.000000 -2.000000 0.038586
## param_hp param_qsec
## 0.002320 1.189077
print(mymod)
## nlsr class object: x
## residual sumsquares = 1505 on 32 observations
## after 7 Jacobian and 7 function evaluations
## name coeff SEs tstat pval gradient JSingval
## 1 param_X.Intercept. 2.000000 NA NA NA 3.1092e-02 1.7236e+03
## 2 param_cyl -2.000000 NA NA NA 0.0000e+00 2.2588e+02
## 3 param_disp 0.038586 NA NA NA 2.2571e+04 4.8424e+01
## 4 param_hp 0.002320 NA NA NA 8.8068e+03 3.2482e-01
## 5 param_qsec 1.189077 NA NA NA -2.5675e+01 1.9079e-14
resid(mymod)
## Mazda RX4 Mazda RX4 Wag Datsun 710
## -4.99876 -4.33287 -2.28818
## Hornet 4 Drive Hornet Sportabout Valiant
## 1.92617 1.83522 4.86870
## Duster 360 Merc 240D Merc 230
## 4.99451 -0.81398 4.08324
## Merc 280 Merc 280C Merc 450SE
## -0.68744 1.42601 1.34969
## Merc 450SL Merc 450SLC Cadillac Fleetwood
## 0.68750 3.26314 15.66802
## Lincoln Continental Chrysler Imperial Fiat 128
## 15.03793 9.52537 -12.05880
## Honda Civic Toyota Corolla Toyota Corona
## -11.33666 -13.34307 1.15271
## Dodge Challenger AMC Javelin Camaro Z28
## 3.17823 3.44932 5.09734
## Pontiac Firebird Fiat X1-9 Porsche 914-2
## 2.91435 -7.62499 -7.28934
## Lotus Europa Ford Pantera L Ferrari Dino
## -12.37287 1.59795 -5.26827
## Maserati Bora Volvo 142E
## 0.75225 -0.36133
## attr(,"gradient")
## param_X.Intercept. param_cyl param_disp param_hp param_qsec
## [1,] 1 6 160.0 110 16.46
## [2,] 1 6 160.0 110 17.02
## [3,] 1 4 108.0 93 18.61
## [4,] 1 6 258.0 110 19.44
## [5,] 1 8 360.0 175 17.02
## [6,] 1 6 225.0 105 20.22
## [7,] 1 8 360.0 245 15.84
## [8,] 1 4 146.7 62 20.00
## [9,] 1 4 140.8 95 22.90
## [10,] 1 6 167.6 123 18.30
## [11,] 1 6 167.6 123 18.90
## [12,] 1 8 275.8 180 17.40
## [13,] 1 8 275.8 180 17.60
## [14,] 1 8 275.8 180 18.00
## [15,] 1 8 472.0 205 17.98
## [16,] 1 8 460.0 215 17.82
## [17,] 1 8 440.0 230 17.42
## [18,] 1 4 78.7 66 19.47
## [19,] 1 4 75.7 52 18.52
## [20,] 1 4 71.1 65 19.90
## [21,] 1 4 120.1 97 20.01
## [22,] 1 8 318.0 150 16.87
## [23,] 1 8 304.0 150 17.30
## [24,] 1 8 350.0 245 15.41
## [25,] 1 8 400.0 175 17.05
## [26,] 1 4 79.0 66 18.90
## [27,] 1 4 120.3 91 16.70
## [28,] 1 4 95.1 113 16.90
## [29,] 1 8 351.0 264 14.50
## [30,] 1 6 145.0 175 15.50
## [31,] 1 8 301.0 335 14.60
## [32,] 1 4 121.0 109 18.60
fitted(mymod)
## Mazda RX4 Mazda RX4 Wag Datsun 710
## 16.001 16.667 20.512
## Hornet 4 Drive Hornet Sportabout Valiant
## 23.326 20.535 22.969
## Duster 360 Merc 240D Merc 230
## 19.295 23.586 26.883
## Merc 280 Merc 280C Merc 450SE
## 18.513 19.226 17.750
## Merc 450SL Merc 450SLC Cadillac Fleetwood
## 17.988 18.463 26.068
## Lincoln Continental Chrysler Imperial Fiat 128
## 25.438 24.225 20.341
## Honda Civic Toyota Corolla Toyota Corona
## 19.063 20.557 22.653
## Dodge Challenger AMC Javelin Camaro Z28
## 18.678 18.649 18.397
## Pontiac Firebird Fiat X1-9 Porsche 914-2
## 22.114 19.675 18.711
## Lotus Europa Ford Pantera L Ferrari Dino
## 18.027 17.398 14.432
## Maserati Bora Volvo 142E
## 15.752 21.039