The SuperML R package is designed to unify the model training process in R, much like Python's scikit-learn. People often spend a lot of time searching for packages and figuring out the syntax for training machine learning models in R; this behaviour is especially apparent in users who frequently switch between R and Python. This package provides a scikit-learn-like interface (fit, predict) to train models faster.
In addition to building machine learning models, the package provides handy functionality for feature engineering, such as label encoding and target encoding (both demonstrated later in this tutorial).
This ambitious package is my ongoing effort to help the R community build ML models easily and quickly.
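To give a feel for the interface, here is a minimal sketch of the fit/predict pattern, using RFTrainer (one of the trainers covered below) on R's built-in iris data purely for illustration:
library(superml)
# $new() configures the model, $fit() takes the data and the name of the
# target column, and $predict() scores new data
rf <- RFTrainer$new(n_estimators = 100, classification = 1)
rf$fit(X = iris, y = "Species")
preds <- rf$predict(df = iris)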
You can install the latest CRAN version using (recommended):
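install.packages("superml")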
You can install the development version directly from GitHub using:
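# assuming the repository path saraswatmili/superml (the package author's
# GitHub); requires the devtools package
devtools::install_github("saraswatmili/superml")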
This package builds on existing R packages for its machine learning models. In this tutorial, we'll use the data.table package for all data manipulation tasks.
We'll quickly prepare the data set so it's ready for model training.
load("../data/reg_train.rda")
# if the above doesn't work, you can try: load("reg_train.rda")
library(data.table)
library(caret)
#> Loading required package: lattice
#> Loading required package: ggplot2
library(superml)
library(Metrics)
#>
#> Attaching package: 'Metrics'
#> The following objects are masked from 'package:caret':
#>
#> precision, recall
head(reg_train)
#> Id MSSubClass MSZoning LotFrontage LotArea Street Alley LotShape LandContour
#> 1: 1 60 RL 65 8450 Pave <NA> Reg Lvl
#> 2: 2 20 RL 80 9600 Pave <NA> Reg Lvl
#> 3: 3 60 RL 68 11250 Pave <NA> IR1 Lvl
#> 4: 4 70 RL 60 9550 Pave <NA> IR1 Lvl
#> 5: 5 60 RL 84 14260 Pave <NA> IR1 Lvl
#> 6: 6 50 RL 85 14115 Pave <NA> IR1 Lvl
#> Utilities LotConfig LandSlope Neighborhood Condition1 Condition2 BldgType
#> 1: AllPub Inside Gtl CollgCr Norm Norm 1Fam
#> 2: AllPub FR2 Gtl Veenker Feedr Norm 1Fam
#> 3: AllPub Inside Gtl CollgCr Norm Norm 1Fam
#> 4: AllPub Corner Gtl Crawfor Norm Norm 1Fam
#> 5: AllPub FR2 Gtl NoRidge Norm Norm 1Fam
#> 6: AllPub Inside Gtl Mitchel Norm Norm 1Fam
#> HouseStyle OverallQual OverallCond YearBuilt YearRemodAdd RoofStyle RoofMatl
#> 1: 2Story 7 5 2003 2003 Gable CompShg
#> 2: 1Story 6 8 1976 1976 Gable CompShg
#> 3: 2Story 7 5 2001 2002 Gable CompShg
#> 4: 2Story 7 5 1915 1970 Gable CompShg
#> 5: 2Story 8 5 2000 2000 Gable CompShg
#> 6: 1.5Fin 5 5 1993 1995 Gable CompShg
#> Exterior1st Exterior2nd MasVnrType MasVnrArea ExterQual ExterCond Foundation
#> 1: VinylSd VinylSd BrkFace 196 Gd TA PConc
#> 2: MetalSd MetalSd None 0 TA TA CBlock
#> 3: VinylSd VinylSd BrkFace 162 Gd TA PConc
#> 4: Wd Sdng Wd Shng None 0 TA TA BrkTil
#> 5: VinylSd VinylSd BrkFace 350 Gd TA PConc
#> 6: VinylSd VinylSd None 0 TA TA Wood
#> BsmtQual BsmtCond BsmtExposure BsmtFinType1 BsmtFinSF1 BsmtFinType2
#> 1: Gd TA No GLQ 706 Unf
#> 2: Gd TA Gd ALQ 978 Unf
#> 3: Gd TA Mn GLQ 486 Unf
#> 4: TA Gd No ALQ 216 Unf
#> 5: Gd TA Av GLQ 655 Unf
#> 6: Gd TA No GLQ 732 Unf
#> BsmtFinSF2 BsmtUnfSF TotalBsmtSF Heating HeatingQC CentralAir Electrical
#> 1: 0 150 856 GasA Ex Y SBrkr
#> 2: 0 284 1262 GasA Ex Y SBrkr
#> 3: 0 434 920 GasA Ex Y SBrkr
#> 4: 0 540 756 GasA Gd Y SBrkr
#> 5: 0 490 1145 GasA Ex Y SBrkr
#> 6: 0 64 796 GasA Ex Y SBrkr
#> 1stFlrSF 2ndFlrSF LowQualFinSF GrLivArea BsmtFullBath BsmtHalfBath FullBath
#> 1: 856 854 0 1710 1 0 2
#> 2: 1262 0 0 1262 0 1 2
#> 3: 920 866 0 1786 1 0 2
#> 4: 961 756 0 1717 1 0 1
#> 5: 1145 1053 0 2198 1 0 2
#> 6: 796 566 0 1362 1 0 1
#> HalfBath BedroomAbvGr KitchenAbvGr KitchenQual TotRmsAbvGrd Functional
#> 1: 1 3 1 Gd 8 Typ
#> 2: 0 3 1 TA 6 Typ
#> 3: 1 3 1 Gd 6 Typ
#> 4: 0 3 1 Gd 7 Typ
#> 5: 1 4 1 Gd 9 Typ
#> 6: 1 1 1 TA 5 Typ
#> Fireplaces FireplaceQu GarageType GarageYrBlt GarageFinish GarageCars
#> 1: 0 <NA> Attchd 2003 RFn 2
#> 2: 1 TA Attchd 1976 RFn 2
#> 3: 1 TA Attchd 2001 RFn 2
#> 4: 1 Gd Detchd 1998 Unf 3
#> 5: 1 TA Attchd 2000 RFn 3
#> 6: 0 <NA> Attchd 1993 Unf 2
#> GarageArea GarageQual GarageCond PavedDrive WoodDeckSF OpenPorchSF
#> 1: 548 TA TA Y 0 61
#> 2: 460 TA TA Y 298 0
#> 3: 608 TA TA Y 0 42
#> 4: 642 TA TA Y 0 35
#> 5: 836 TA TA Y 192 84
#> 6: 480 TA TA Y 40 30
#> EnclosedPorch 3SsnPorch ScreenPorch PoolArea PoolQC Fence MiscFeature
#> 1: 0 0 0 0 <NA> <NA> <NA>
#> 2: 0 0 0 0 <NA> <NA> <NA>
#> 3: 0 0 0 0 <NA> <NA> <NA>
#> 4: 272 0 0 0 <NA> <NA> <NA>
#> 5: 0 0 0 0 <NA> <NA> <NA>
#> 6: 0 320 0 0 <NA> MnPrv Shed
#> MiscVal MoSold YrSold SaleType SaleCondition SalePrice
#> 1: 0 2 2008 WD Normal 208500
#> 2: 0 5 2007 WD Normal 181500
#> 3: 0 9 2008 WD Normal 223500
#> 4: 0 2 2006 WD Abnorml 140000
#> 5: 0 12 2008 WD Normal 250000
#> 6: 700 10 2009 WD Normal 143000
split <- createDataPartition(y = reg_train$SalePrice, p = 0.7)
xtrain <- reg_train[split$Resample1]
xtest <- reg_train[!split$Resample1]
# remove features with 90% or more missing values
# we will also remove the Id column because it doesn't contain
# any useful information
na_cols <- colSums(is.na(xtrain)) / nrow(xtrain)
na_cols <- names(na_cols[which(na_cols > 0.9)])
xtrain[, c(na_cols, "Id") := NULL]
xtest[, c(na_cols, "Id") := NULL]
# encode categorical variables
cat_cols <- names(xtrain)[sapply(xtrain, is.character)]
for(c in cat_cols){
lbl <- LabelEncoder$new()
lbl$fit(c(xtrain[[c]], xtest[[c]]))
xtrain[[c]] <- lbl$transform(xtrain[[c]])
xtest[[c]] <- lbl$transform(xtest[[c]])
}
#> The data contains NA values. Imputing NA with 'NA'
#> [message repeated once per encoded column; repeats omitted]
# remove noise columns
noise <- c('GrLivArea','TotalBsmtSF')
xtrain[, c(noise) := NULL]
xtest[, c(noise) := NULL]
# fill missing values with -1
xtrain[is.na(xtrain)] <- -1
xtest[is.na(xtest)] <- -1
KNN Regression
knn <- KNNTrainer$new(k = 2, prob = TRUE, type = 'reg')
knn$fit(train = xtrain, test = xtest, y = 'SalePrice')
probs <- knn$predict(type = 'prob')
labels <- knn$predict(type = 'raw')
rmse(actual = xtest$SalePrice, predicted = labels)
#> [1] 53143.97
SVM Regression
svm <- SVMTrainer$new()
svm$fit(xtrain, 'SalePrice')
pred <- svm$predict(xtest)
rmse(actual = xtest$SalePrice, predicted = pred)
Simple Regression
lf <- LMTrainer$new(family="gaussian")
lf$fit(X = xtrain, y = "SalePrice")
summary(lf$model)
#>
#> Call:
#> stats::glm(formula = f, family = self$family, data = X, weights = self$weights)
#>
#> Deviance Residuals:
#> Min 1Q Median 3Q Max
#> -354562 -13472 -874 12287 190700
#>
#> Coefficients: (1 not defined because of singularities)
#> Estimate Std. Error t value Pr(>|t|)
#> (Intercept) -1.935e+06 1.438e+06 -1.346 0.178578
#> MSSubClass -9.667e+01 4.882e+01 -1.980 0.047971 *
#> MSZoning 1.837e+02 1.361e+03 0.135 0.892653
#> LotFrontage 6.564e+01 3.054e+01 2.149 0.031879 *
#> LotArea 4.872e-01 1.926e-01 2.530 0.011572 *
#> Street -5.296e+04 1.834e+04 -2.887 0.003973 **
#> LotShape 2.155e+03 1.928e+03 1.117 0.264121
#> LandContour 6.719e+02 1.749e+03 0.384 0.700982
#> Utilities NA NA NA NA
#> LotConfig 2.369e+03 9.739e+02 2.432 0.015191 *
#> LandSlope 3.287e+03 4.706e+03 0.699 0.484968
#> Neighborhood 4.204e+01 1.758e+02 0.239 0.811017
#> Condition1 -2.973e+03 8.498e+02 -3.498 0.000490 ***
#> Condition2 -1.538e+04 2.802e+03 -5.491 5.12e-08 ***
#> BldgType -7.397e+01 1.886e+03 -0.039 0.968727
#> HouseStyle -7.203e+02 8.898e+02 -0.809 0.418462
#> OverallQual 1.310e+04 1.268e+03 10.330 < 2e-16 ***
#> OverallCond 6.093e+03 1.137e+03 5.357 1.06e-07 ***
#> YearBuilt 3.409e+02 7.719e+01 4.417 1.12e-05 ***
#> YearRemodAdd 2.015e+02 7.285e+01 2.766 0.005791 **
#> RoofStyle 4.065e+02 1.815e+03 0.224 0.822836
#> RoofMatl -3.428e+03 2.131e+03 -1.608 0.108070
#> Exterior1st -1.477e+03 6.251e+02 -2.363 0.018317 *
#> Exterior2nd 1.290e+03 6.181e+02 2.086 0.037230 *
#> MasVnrType 8.893e+02 1.478e+03 0.602 0.547514
#> MasVnrArea 2.326e+01 6.349e+00 3.663 0.000263 ***
#> ExterQual 2.123e+03 2.177e+03 0.975 0.329711
#> ExterCond -3.978e+02 2.485e+03 -0.160 0.872857
#> Foundation -2.031e+03 1.044e+03 -1.945 0.052077 .
#> BsmtQual 3.233e+03 1.358e+03 2.382 0.017437 *
#> BsmtCond -1.023e+03 1.292e+03 -0.792 0.428814
#> BsmtExposure 4.900e+03 8.919e+02 5.494 5.05e-08 ***
#> BsmtFinType1 -4.813e+02 6.955e+02 -0.692 0.489104
#> BsmtFinSF1 3.302e+01 5.341e+00 6.183 9.33e-10 ***
#> BsmtFinType2 1.075e+02 1.165e+03 0.092 0.926488
#> BsmtFinSF2 1.811e+01 9.381e+00 1.930 0.053866 .
#> BsmtUnfSF 1.738e+01 4.849e+00 3.585 0.000354 ***
#> Heating 3.425e+03 3.991e+03 0.858 0.391008
#> HeatingQC -2.826e+03 1.294e+03 -2.184 0.029211 *
#> CentralAir 4.866e+03 4.889e+03 0.995 0.319871
#> Electrical 3.521e+03 1.838e+03 1.915 0.055739 .
#> `1stFlrSF` 5.851e+01 6.226e+00 9.396 < 2e-16 ***
#> `2ndFlrSF` 6.336e+01 5.502e+00 11.516 < 2e-16 ***
#> LowQualFinSF 4.414e+01 2.057e+01 2.145 0.032170 *
#> BsmtFullBath 3.560e+03 2.689e+03 1.324 0.185855
#> BsmtHalfBath 2.396e+03 4.279e+03 0.560 0.575586
#> FullBath 1.410e+03 2.888e+03 0.488 0.625472
#> HalfBath -7.868e+00 2.713e+03 -0.003 0.997687
#> BedroomAbvGr -6.089e+03 1.778e+03 -3.424 0.000643 ***
#> KitchenAbvGr -1.751e+04 5.435e+03 -3.221 0.001321 **
#> KitchenQual 9.385e+03 1.749e+03 5.366 1.01e-07 ***
#> TotRmsAbvGrd 8.345e+02 1.305e+03 0.639 0.522815
#> Functional -5.129e+03 1.306e+03 -3.926 9.25e-05 ***
#> Fireplaces 1.098e+02 2.421e+03 0.045 0.963823
#> FireplaceQu 2.322e+03 1.244e+03 1.867 0.062258 .
#> GarageType -3.064e+02 1.167e+03 -0.263 0.792886
#> GarageYrBlt -3.086e+00 5.264e+00 -0.586 0.557833
#> GarageFinish 1.889e+03 1.339e+03 1.411 0.158533
#> GarageCars 3.273e+03 3.189e+03 1.026 0.304937
#> GarageArea 3.726e+01 1.061e+01 3.513 0.000465 ***
#> GarageQual 5.999e+03 3.036e+03 1.976 0.048431 *
#> GarageCond -4.376e+03 2.573e+03 -1.701 0.089269 .
#> PavedDrive -1.727e+03 3.044e+03 -0.567 0.570737
#> WoodDeckSF 3.719e+01 8.405e+00 4.425 1.08e-05 ***
#> OpenPorchSF -9.317e+00 1.587e+01 -0.587 0.557299
#> EnclosedPorch 4.971e+00 1.811e+01 0.274 0.783796
#> `3SsnPorch` -3.399e+00 3.332e+01 -0.102 0.918762
#> ScreenPorch 3.106e+01 1.683e+01 1.846 0.065205 .
#> PoolArea 9.679e+01 2.608e+01 3.711 0.000218 ***
#> Fence -2.197e+03 1.273e+03 -1.726 0.084639 .
#> MiscVal 5.755e+00 3.158e+00 1.822 0.068729 .
#> MoSold -1.029e+02 3.393e+02 -0.303 0.761717
#> YrSold 3.992e+02 7.152e+02 0.558 0.576890
#> SaleType 2.224e+03 1.231e+03 1.806 0.071284 .
#> SaleCondition 2.192e+03 1.293e+03 1.696 0.090296 .
#> ---
#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#>
#> (Dispersion parameter for gaussian family taken to be 805496320)
#>
#> Null deviance: 6.4857e+12 on 1023 degrees of freedom
#> Residual deviance: 7.6522e+11 on 950 degrees of freedom
#> AIC: 23978
#>
#> Number of Fisher Scoring iterations: 2
predictions <- lf$predict(df = xtest)
#> Warning in predict.lm(object, newdata, se.fit, scale = 1, type = if (type == :
#> prediction from a rank-deficient fit may be misleading
rmse(actual = xtest$SalePrice, predicted = predictions)
#> [1] 44584.09
Lasso Regression
lf <- LMTrainer$new(family = "gaussian", alpha = 1, lambda = 1000)
lf$fit(X = xtrain, y = "SalePrice")
predictions <- lf$predict(df = xtest)
rmse(actual = xtest$SalePrice, predicted = predictions)
#> [1] 51639.12
Ridge Regression
lf <- LMTrainer$new(family = "gaussian", alpha=0)
lf$fit(X = xtrain, y = "SalePrice")
predictions <- lf$predict(df = xtest)
rmse(actual = xtest$SalePrice, predicted = predictions)
#> [1] 52373.86
Linear Regression with CV
lf <- LMTrainer$new(family = "gaussian")
lf$cv_model(X = xtrain, y = 'SalePrice', nfolds = 5, parallel = FALSE)
predictions <- lf$cv_predict(df = xtest)
coefs <- lf$get_importance()
rmse(actual = xtest$SalePrice, predicted = predictions)
Random Forest
rf <- RFTrainer$new(n_estimators = 500, classification = 0)
rf$fit(X = xtrain, y = "SalePrice")
pred <- rf$predict(df = xtest)
rf$get_importance()
#> tmp.order.tmp..decreasing...TRUE..
#> OverallQual 810231235087
#> GarageCars 537342887640
#> GarageArea 491958250674
#> 1stFlrSF 446155483803
#> YearBuilt 336625310209
#> BsmtQual 271136072647
#> GarageYrBlt 266706372980
#> 2ndFlrSF 258170435102
#> FullBath 246899429963
#> BsmtFinSF1 243594697505
#> LotArea 185726656403
#> TotRmsAbvGrd 179720086536
#> ExterQual 165163106347
#> FireplaceQu 148660258067
#> Fireplaces 132454421424
#> MasVnrArea 128724376104
#> KitchenQual 127570354814
#> YearRemodAdd 119087987626
#> Foundation 100187214188
#> LotFrontage 88619207838
#> OpenPorchSF 80937713722
#> WoodDeckSF 77970773441
#> BsmtFinType1 76095028289
#> BsmtUnfSF 70579504740
#> Neighborhood 58167426730
#> GarageType 53301968918
#> BedroomAbvGr 50631451190
#> MSSubClass 41811295830
#> HeatingQC 39688353626
#> MoSold 35519448321
#> Exterior2nd 33910799185
#> HalfBath 33747160661
#> HouseStyle 32679585756
#> OverallCond 31547944333
#> BsmtExposure 31535512421
#> RoofStyle 28907573665
#> GarageFinish 25752090562
#> Exterior1st 23074367274
#> BsmtFullBath 23014343988
#> YrSold 22180057627
#> MSZoning 20553433985
#> LotConfig 18480346886
#> SaleCondition 16548706200
#> LotShape 15476615621
#> SaleType 15446249949
#> CentralAir 15296695767
#> LandContour 14582384128
#> RoofMatl 14569386341
#> MasVnrType 13956987995
#> BldgType 12399339073
#> GarageQual 11921584423
#> PoolArea 11613280865
#> ScreenPorch 11050008251
#> Fence 9958095776
#> LandSlope 9493005788
#> GarageCond 9388397946
#> Condition1 8567642951
#> Functional 7822597808
#> ExterCond 7106718446
#> BsmtCond 7063926290
#> BsmtHalfBath 6615777382
#> BsmtFinSF2 6259681719
#> PavedDrive 5846144947
#> EnclosedPorch 5491470621
#> KitchenAbvGr 4596275610
#> Electrical 3993337096
#> LowQualFinSF 3540525609
#> BsmtFinType2 3302170008
#> Condition2 2820740722
#> MiscVal 2184069619
#> 3SsnPorch 2140176000
#> Heating 1800154649
#> Street 217917314
#> Utilities 0
rmse(actual = xtest$SalePrice, predicted = pred)
#> [1] 32821.09
Xgboost
xgb <- XGBTrainer$new(objective = "reg:linear"
, n_estimators = 500
, eval_metric = "rmse"
, maximize = F
, learning_rate = 0.1
,max_depth = 6)
xgb$fit(X = xtrain, y = "SalePrice", valid = xtest)
#> converting the data into xgboost format..
#> starting with training...
#> [1] train-rmse:179297.703125 val-rmse:178058.515625
#> Multiple eval metrics are present. Will use val_rmse for early stopping.
#> Will train until val_rmse hasn't improved in 50 rounds.
#>
#> [51] train-rmse:8138.736816 val-rmse:31891.251953
#> [101] train-rmse:4431.447266 val-rmse:31297.320312
#> [151] train-rmse:2883.945557 val-rmse:31136.525391
#> [201] train-rmse:1713.599365 val-rmse:31095.343750
#> Stopping. Best iteration:
#> [169] train-rmse:2332.846436 val-rmse:31083.169922
pred <- xgb$predict(xtest)
rmse(actual = xtest$SalePrice, predicted = pred)
#> [1] 31083.17
Grid Search
xgb <- XGBTrainer$new(objective = "reg:linear")
# note: accuracy and auc are classification metrics, so for this regression
# target they come out as 0/NaN in the results below
gst <- GridSearchCV$new(trainer = xgb,
                        parameters = list(n_estimators = c(10, 50),
                                          max_depth = c(5, 2)),
                        n_folds = 3,
                        scoring = c('accuracy', 'auc'))
gst$fit(xtrain, "SalePrice")
#> [1] "entering grid search"
#> [1] "In total, 4 models will be trained"
#> converting the data into xgboost format..
#> starting with training...
#> [1] train-rmse:141109.718750
#> Will train until train_rmse hasn't improved in 50 rounds.
#>
#> [10] train-rmse:15461.125977
#> converting the data into xgboost format..
#> starting with training...
#> [1] train-rmse:143618.796875
#> Will train until train_rmse hasn't improved in 50 rounds.
#>
#> [10] train-rmse:15838.497070
#> converting the data into xgboost format..
#> starting with training...
#> [1] train-rmse:142487.609375
#> Will train until train_rmse hasn't improved in 50 rounds.
#>
#> [10] train-rmse:17033.996094
#> converting the data into xgboost format..
#> starting with training...
#> [1] train-rmse:141109.718750
#> Will train until train_rmse hasn't improved in 50 rounds.
#>
#> [50] train-rmse:4006.114258
#> converting the data into xgboost format..
#> starting with training...
#> [1] train-rmse:143618.796875
#> Will train until train_rmse hasn't improved in 50 rounds.
#>
#> [50] train-rmse:3872.664795
#> converting the data into xgboost format..
#> starting with training...
#> [1] train-rmse:142487.609375
#> Will train until train_rmse hasn't improved in 50 rounds.
#>
#> [50] train-rmse:3682.372559
#> converting the data into xgboost format..
#> starting with training...
#> [1] train-rmse:142054.625000
#> Will train until train_rmse hasn't improved in 50 rounds.
#>
#> [10] train-rmse:26201.429688
#> converting the data into xgboost format..
#> starting with training...
#> [1] train-rmse:144608.906250
#> Will train until train_rmse hasn't improved in 50 rounds.
#>
#> [10] train-rmse:28777.785156
#> converting the data into xgboost format..
#> starting with training...
#> [1] train-rmse:143331.187500
#> Will train until train_rmse hasn't improved in 50 rounds.
#>
#> [10] train-rmse:30825.937500
#> converting the data into xgboost format..
#> starting with training...
#> [1] train-rmse:142054.625000
#> Will train until train_rmse hasn't improved in 50 rounds.
#>
#> [50] train-rmse:14621.611328
#> converting the data into xgboost format..
#> starting with training...
#> [1] train-rmse:144608.906250
#> Will train until train_rmse hasn't improved in 50 rounds.
#>
#> [50] train-rmse:16298.432617
#> converting the data into xgboost format..
#> starting with training...
#> [1] train-rmse:143331.187500
#> Will train until train_rmse hasn't improved in 50 rounds.
#>
#> [50] train-rmse:18349.689453
gst$best_iteration()
#> $n_estimators
#> [1] 10
#>
#> $max_depth
#> [1] 5
#>
#> $accuracy_avg
#> [1] 0
#>
#> $accuracy_sd
#> [1] 0
#>
#> $auc_avg
#> [1] NaN
#>
#> $auc_sd
#> [1] NA
Random Search
rf <- RFTrainer$new()
# as above, accuracy/auc are classification metrics and the values below
# are not meaningful for this regression target
rst <- RandomSearchCV$new(trainer = rf,
                          parameters = list(n_estimators = c(5, 10),
                                            max_depth = c(5, 2)),
                          n_folds = 3,
                          scoring = c('accuracy', 'auc'),
                          n_iter = 3)
rst$fit(xtrain, "SalePrice")
#> [1] "In total, 3 models will be trained"
rst$best_iteration()
#> $n_estimators
#> [1] 5
#>
#> $max_depth
#> [1] 2
#>
#> $accuracy_avg
#> [1] 0.007820187
#>
#> $accuracy_sd
#> [1] 0.006108698
#>
#> $auc_avg
#> [1] NaN
#>
#> $auc_sd
#> [1] NA
Here, we will solve a simple binary classification problem: predicting which passengers survived the sinking of the Titanic. The idea is to demonstrate how to use this package to solve classification problems.
Data Preparation
# load classification data
load('../data/cla_train.rda')
# if the above doesn't work, you can try: load("cla_train.rda")
head(cla_train)
#> PassengerId Survived Pclass
#> 1: 1 0 3
#> 2: 2 1 1
#> 3: 3 1 3
#> 4: 4 1 1
#> 5: 5 0 3
#> 6: 6 0 3
#> Name Sex Age SibSp Parch
#> 1: Braund, Mr. Owen Harris male 22 1 0
#> 2: Cumings, Mrs. John Bradley (Florence Briggs Thayer) female 38 1 0
#> 3: Heikkinen, Miss. Laina female 26 0 0
#> 4: Futrelle, Mrs. Jacques Heath (Lily May Peel) female 35 1 0
#> 5: Allen, Mr. William Henry male 35 0 0
#> 6: Moran, Mr. James male NA 0 0
#> Ticket Fare Cabin Embarked
#> 1: A/5 21171 7.2500 S
#> 2: PC 17599 71.2833 C85 C
#> 3: STON/O2. 3101282 7.9250 S
#> 4: 113803 53.1000 C123 S
#> 5: 373450 8.0500 S
#> 6: 330877 8.4583 Q
# split the data
split <- createDataPartition(y = cla_train$Survived, p = 0.7)
xtrain <- cla_train[split$Resample1]
xtest <- cla_train[!split$Resample1]
# encode categorical variables - shorter way
for(c in c('Embarked','Sex','Cabin')){
lbl <- LabelEncoder$new()
lbl$fit(c(xtrain[[c]], xtest[[c]]))
xtrain[[c]] <- lbl$transform(xtrain[[c]])
xtest[[c]] <- lbl$transform(xtest[[c]])
}
#> The data contains blank values. Imputing them with 'NA'
#> [message repeated once per encoded column; repeats omitted]
# impute missing values
xtrain[, Age := replace(Age, is.na(Age), median(Age, na.rm = T))]
xtest[, Age := replace(Age, is.na(Age), median(Age, na.rm = T))]
# drop these features
to_drop <- c('PassengerId','Ticket','Name')
xtrain <- xtrain[,-c(to_drop), with=F]
xtest <- xtest[,-c(to_drop), with=F]
Now our data is ready for model training. Let's do it.
KNN Classification
knn <- KNNTrainer$new(k = 2, prob = TRUE, type = 'class')
knn$fit(train = xtrain, test = xtest, y = 'Survived')
probs <- knn$predict(type = 'prob')
labels <- knn$predict(type = 'raw')
auc(actual = xtest$Survived, predicted = labels)
#> [1] 0.6385027
Naive Bayes Classification
nb <- NBTrainer$new()
nb$fit(xtrain, 'Survived')
pred <- nb$predict(xtest)
#> Warning: predict.naive_bayes(): More features in the newdata are provided as
#> there are probability tables in the object. Calculation is performed based on
#> features to be found in the tables.
auc(actual = xtest$Survived, predicted=pred)
#> [1] 0.7771836
SVM Classification
# predicts labels
svm <- SVMTrainer$new()
svm$fit(xtrain, 'Survived')
pred <- svm$predict(xtest)
auc(actual = xtest$Survived, predicted=pred)
Logistic Regression
lf <- LMTrainer$new(family="binomial")
lf$fit(X = xtrain, y = "Survived")
summary(lf$model)
#>
#> Call:
#> stats::glm(formula = f, family = self$family, data = X, weights = self$weights)
#>
#> Deviance Residuals:
#> Min 1Q Median 3Q Max
#> -2.6102 -0.6018 -0.4367 0.7038 2.4493
#>
#> Coefficients:
#> Estimate Std. Error z value Pr(>|z|)
#> (Intercept) 1.830070 0.616894 2.967 0.00301 **
#> Pclass -0.980785 0.192493 -5.095 3.48e-07 ***
#> Sex 2.508241 0.230374 10.888 < 2e-16 ***
#> Age -0.041034 0.009309 -4.408 1.04e-05 ***
#> SibSp -0.235520 0.117715 -2.001 0.04542 *
#> Parch -0.098742 0.137791 -0.717 0.47361
#> Fare 0.001281 0.002842 0.451 0.65230
#> Cabin 0.008408 0.004786 1.757 0.07899 .
#> Embarked 0.248088 0.166616 1.489 0.13649
#> ---
#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#>
#> (Dispersion parameter for binomial family taken to be 1)
#>
#> Null deviance: 831.52 on 623 degrees of freedom
#> Residual deviance: 564.76 on 615 degrees of freedom
#> AIC: 582.76
#>
#> Number of Fisher Scoring iterations: 5
predictions <- lf$predict(df = xtest)
auc(actual = xtest$Survived, predicted = predictions)
#> [1] 0.8832145
Lasso Logistic Regression
lf <- LMTrainer$new(family="binomial", alpha=1)
lf$cv_model(X = xtrain, y = "Survived", nfolds = 5, parallel = FALSE)
pred <- lf$cv_predict(df = xtest)
auc(actual = xtest$Survived, predicted = pred)
Ridge Logistic Regression
lf <- LMTrainer$new(family="binomial", alpha=0)
lf$cv_model(X = xtrain, y = "Survived", nfolds = 5, parallel = FALSE)
pred <- lf$cv_predict(df = xtest)
auc(actual = xtest$Survived, predicted = pred)
Random Forest
rf <- RFTrainer$new(n_estimators = 500, classification = 1, max_features = 3)
rf$fit(X = xtrain, y = "Survived")
pred <- rf$predict(df = xtest)
rf$get_importance()
#> tmp.order.tmp..decreasing...TRUE..
#> Sex 67.80128
#> Fare 57.97193
#> Age 48.37045
#> Pclass 24.64915
#> Cabin 21.45972
#> SibSp 13.51637
#> Parch 10.45743
#> Embarked 10.23844
auc(actual = xtest$Survived, predicted = pred)
#> [1] 0.7976827
Xgboost
xgb <- XGBTrainer$new(objective = "binary:logistic"
, n_estimators = 500
, eval_metric = "auc"
, maximize = T
, learning_rate = 0.1
,max_depth = 6)
xgb$fit(X = xtrain, y = "Survived", valid = xtest)
#> converting the data into xgboost format..
#> starting with training...
#> [1] train-auc:0.886258 val-auc:0.879085
#> Multiple eval metrics are present. Will use val_auc for early stopping.
#> Will train until val_auc hasn't improved in 50 rounds.
#>
#> [51] train-auc:0.972938 val-auc:0.866370
#> Stopping. Best iteration:
#> [1] train-auc:0.886258 val-auc:0.879085
pred <- xgb$predict(xtest)
auc(actual = xtest$Survived, predicted = pred)
#> [1] 0.879085
Grid Search
xgb <- XGBTrainer$new(objective="binary:logistic")
gst <-GridSearchCV$new(trainer = xgb,
parameters = list(n_estimators = c(10,50),
max_depth = c(5,2)),
n_folds = 3,
scoring = c('accuracy','auc'))
gst$fit(xtrain, "Survived")
#> [1] "entering grid search"
#> [1] "In total, 4 models will be trained"
#> converting the data into xgboost format..
#> starting with training...
#> [1] train-error:0.144231
#> Will train until train_error hasn't improved in 50 rounds.
#>
#> [10] train-error:0.108173
#> converting the data into xgboost format..
#> starting with training...
#> [1] train-error:0.134615
#> Will train until train_error hasn't improved in 50 rounds.
#>
#> [10] train-error:0.112981
#> converting the data into xgboost format..
#> starting with training...
#> [1] train-error:0.115385
#> Will train until train_error hasn't improved in 50 rounds.
#>
#> [10] train-error:0.084135
#> converting the data into xgboost format..
#> starting with training...
#> [1] train-error:0.144231
#> Will train until train_error hasn't improved in 50 rounds.
#>
#> [50] train-error:0.045673
#> converting the data into xgboost format..
#> starting with training...
#> [1] train-error:0.134615
#> Will train until train_error hasn't improved in 50 rounds.
#>
#> [50] train-error:0.045673
#> converting the data into xgboost format..
#> starting with training...
#> [1] train-error:0.115385
#> Will train until train_error hasn't improved in 50 rounds.
#>
#> [50] train-error:0.038462
#> converting the data into xgboost format..
#> starting with training...
#> [1] train-error:0.211538
#> Will train until train_error hasn't improved in 50 rounds.
#>
#> [10] train-error:0.158654
#> converting the data into xgboost format..
#> starting with training...
#> [1] train-error:0.201923
#> Will train until train_error hasn't improved in 50 rounds.
#>
#> [10] train-error:0.168269
#> converting the data into xgboost format..
#> starting with training...
#> [1] train-error:0.206731
#> Will train until train_error hasn't improved in 50 rounds.
#>
#> [10] train-error:0.141827
#> converting the data into xgboost format..
#> starting with training...
#> [1] train-error:0.211538
#> Will train until train_error hasn't improved in 50 rounds.
#>
#> [50] train-error:0.127404
#> converting the data into xgboost format..
#> starting with training...
#> [1] train-error:0.201923
#> Will train until train_error hasn't improved in 50 rounds.
#>
#> [50] train-error:0.132212
#> converting the data into xgboost format..
#> starting with training...
#> [1] train-error:0.206731
#> Will train until train_error hasn't improved in 50 rounds.
#>
#> [50] train-error:0.108173
gst$best_iteration()
#> $n_estimators
#> [1] 10
#>
#> $max_depth
#> [1] 5
#>
#> $accuracy_avg
#> [1] 0
#>
#> $accuracy_sd
#> [1] 0
#>
#> $auc_avg
#> [1] 0.8619512
#>
#> $auc_sd
#> [1] 0.02280628
Random Search
rf <- RFTrainer$new()
rst <- RandomSearchCV$new(trainer = rf,
                          parameters = list(n_estimators = c(10, 50),
                                            max_depth = c(5, 2)),
                          n_folds = 3,
                          scoring = c('accuracy', 'auc'),
                          n_iter = 3)
rst$fit(xtrain, "Survived")
#> [1] "In total, 3 models will be trained"
rst$best_iteration()
#> $n_estimators
#> [1] 50
#>
#> $max_depth
#> [1] 5
#>
#> $accuracy_avg
#> [1] 0.7964744
#>
#> $accuracy_sd
#> [1] 0.03090914
#>
#> $auc_avg
#> [1] 0.7729436
#>
#> $auc_sd
#> [1] 0.04283084
Let's create a new feature from the target variable using target encoding, and test a model with it.
# add target encoding features: smoothMean() returns a list with $train and
# $test tables; the second column of each holds the encoded values
xtrain[, feat_01 := smoothMean(train_df = xtrain,
test_df = xtest,
colname = "Embarked",
target = "Survived")$train[[2]]]
xtest[, feat_01 := smoothMean(train_df = xtrain,
test_df = xtest,
colname = "Embarked",
target = "Survived")$test[[2]]]
# train a random forest on the data with the new feature
rf <- RFTrainer$new(n_estimators = 500, classification = 1, max_features = 4)
rf$fit(X = xtrain, y = "Survived")
pred <- rf$predict(df = xtest)
rf$get_importance()
#> tmp.order.tmp..decreasing...TRUE..
#> Sex 69.787235
#> Fare 60.832089
#> Age 52.982604
#> Pclass 24.419818
#> Cabin 21.419274
#> SibSp 13.112177
#> Parch 10.175269
#> feat_01 6.675399
#> Embarked 6.450819
auc(actual = xtest$Survived, predicted = pred)
#> [1] 0.8018717