CRAN Package Check Results for Maintainer ‘Lorenz A. Kapsner <lorenz.kapsner at gmail.com>’

Last updated on 2026-01-20 23:50:44 CET.

Package           ERROR  OK
autonewsmd               13
BiasCorrector            13
DQAgui                   13
DQAstats                 13
kdry                     13
mlexperiments         1  12
mllrnrs               2  11
mlsurvlrnrs           2  11
rBiasCorrection          13
sjtable2df               13

Package autonewsmd

Current CRAN status: OK: 13

Package BiasCorrector

Current CRAN status: OK: 13

Package DQAgui

Current CRAN status: OK: 13

Package DQAstats

Current CRAN status: OK: 13

Package kdry

Current CRAN status: OK: 13

Package mlexperiments

Current CRAN status: ERROR: 1, OK: 12

Version: 0.0.8
Check: Rd cross-references
Result: NOTE
    Unknown package ‘ParBayesianOptimization’ in Rd xrefs
Flavor: r-patched-linux-x86_64

Version: 0.0.8
Check: tests
Result: ERROR
    Running ‘testthat.R’ [120s/146s]
    Running the tests in ‘tests/testthat.R’ failed.
    Output (condensed):
      > Sys.setenv("OMP_THREAD_LIMIT" = 2)
      > Sys.setenv("Ncpu" = 2)
      > library(testthat)
      > library(mlexperiments)
      > test_check("mlexperiments")
      …
      [ FAIL 7 | WARN 0 | SKIP 1 | PASS 58 ]

      ══ Skipped tests (1) ══
      • On CRAN (1): 'test-lints.R:10:5'

      ══ Failed tests (7) ══
      All seven fail with the same error:
        Error: Package "ParBayesianOptimization" must be installed to use 'strategy = "bayesian"'.
      • 'test-knn.R:115:5': test bayesian tuner, initGrid - knn
      • 'test-knn.R:182:5': test bayesian tuner, initPoints - LearnerKnn
      • 'test-knn.R:257:5': test nested cv, bayesian - knn
      • 'test-rpart_classification.R:125:5': test bayesian tuner, initGrid, classification - rpart
      • 'test-rpart_classification.R:205:5': test nested cv, bayesian, classification - rpart
      • 'test-rpart_regression.R:125:5': test bayesian tuner, initGrid, regression - rpart
      • 'test-rpart_regression.R:203:5': test nested cv, bayesian, regression - rpart

      Representative backtrace ('test-knn.R:115:5'):
        1. └─knn_optimization$execute(k = 3) at test-knn.R:115:5
        2.   └─private$select_optimizer(self, private)
        3.     └─BayesianOptimizer$new(...)
        4.       └─mlexperiments (local) initialize(...)

      Error: ! Test failures.
      Execution halted
Flavor: r-patched-linux-x86_64
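All of the failures above share one cause: tests exercise 'strategy = "bayesian"', which errors on check machines where the suggested package ParBayesianOptimization is not installed. The common testthat idiom for a Suggests-only dependency is to skip rather than fail; the sketch below illustrates that idiom only, with a hypothetical test name and placeholder body, and is not taken from the mlexperiments sources.

```r
# Sketch of the standard testthat guard for a package listed only in
# Suggests. Test name and body are illustrative, not from mlexperiments.
if (requireNamespace("testthat", quietly = TRUE)) {
  testthat::test_that("bayesian tuner runs when ParBayesianOptimization is available", {
    # Skip cleanly (instead of erroring) on machines without the suggested package:
    testthat::skip_if_not_installed("ParBayesianOptimization")
    testthat::succeed()  # placeholder for the actual tuning assertions
  })
}
```

With this guard, the test reports as skipped on flavors lacking the optional package instead of producing the check ERROR shown above.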

Package mllrnrs

Current CRAN status: ERROR: 2, OK: 11

Version: 0.0.7
Check: tests
Result: ERROR
    Running ‘testthat.R’ [61s/68s]
    Running the tests in ‘tests/testthat.R’ failed.
    Output (condensed):
      > library(testthat)
      > library(mllrnrs)
      > test_check("mllrnrs")
      …
      [ FAIL 3 | WARN 0 | SKIP 3 | PASS 25 ]

      ══ Skipped tests (3) ══
      • On CRAN (3): 'test-binary.R:57:5', 'test-lints.R:10:5', 'test-multiclass.R:57:5'

      ══ Failed tests (3) ══
      All three fail with the same error:
        Error: Package "ParBayesianOptimization" must be installed to use 'strategy = "bayesian"'.
      • 'test-binary.R:225:5': test nested cv, bayesian, binary - lightgbm
      • 'test-regression.R:107:5': test nested cv, bayesian, regression - glmnet
      • 'test-regression.R:309:5': test nested cv, bayesian, reg:squarederror - xgboost

      Error: ! Test failures.
      Execution halted
Flavor: r-patched-linux-x86_64

Version: 0.0.7
Check: tests
Result: ERROR
    Running ‘testthat.R’ [90s/100s]
    Running the tests in ‘tests/testthat.R’ failed.
    Output (condensed):
      > library(testthat)
      > library(mllrnrs)
      > test_check("mllrnrs")
      …
      [ FAIL 3 | WARN 0 | SKIP 3 | PASS 25 ]

      ══ Skipped tests (3) ══
      • On CRAN (3): 'test-binary.R:57:5', 'test-lints.R:10:5', 'test-multiclass.R:57:5'

      ══ Failed tests (3) ══
      All three fail with the same error (note: here the missing package is rBayesianOptimization):
        Error: Package "rBayesianOptimization" must be installed to use 'strategy = "bayesian"'.
      • 'test-binary.R:225:5': test nested cv, bayesian, binary - lightgbm
      • 'test-regression.R:107:5': test nested cv, bayesian, regression - glmnet
      • 'test-regression.R:309:5': test nested cv, bayesian, reg:squarederror - xgboost

      Error: ! Test failures.
      Execution halted
Flavor: r-release-linux-x86_64

Package mlsurvlrnrs

Current CRAN status: ERROR: 2, OK: 11

Version: 0.0.7
Check: tests
Result: ERROR Running ‘testthat.R’ [10s/14s] Running the tests in ‘tests/testthat.R’ failed. Complete output: > # This file is part of the standard setup for testthat. > # It is recommended that you do not modify it. > # > # Where should you do additional test configuration? > # Learn more about the roles of various files in: > # * https://r-pkgs.org/tests.html > # * https://testthat.r-lib.org/reference/test_package.html#special-files > > Sys.setenv("OMP_THREAD_LIMIT" = 2) > Sys.setenv("Ncpu" = 2) > > library(testthat) > library(mlsurvlrnrs) > > test_check("mlsurvlrnrs") CV fold: Fold1 Parameter 'ncores' is ignored for learner 'LearnerSurvCoxPHCox'. CV fold: Fold2 Parameter 'ncores' is ignored for learner 'LearnerSurvCoxPHCox'. CV fold: Fold3 Parameter 'ncores' is ignored for learner 'LearnerSurvCoxPHCox'. CV fold: Fold1 Saving _problems/test-surv_glmnet_cox-99.R CV fold: Fold1 Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'... ... reducing initialization grid to 10 rows. Saving _problems/test-surv_ranger_cox-110.R CV fold: Fold1 Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'... ... reducing initialization grid to 10 rows. Saving _problems/test-surv_rpart_cox-108.R CV fold: Fold1 Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'... ... reducing initialization grid to 10 rows. Saving _problems/test-surv_xgboost_aft-121.R CV fold: Fold1 CV fold: Fold2 CV fold: Fold3 CV fold: Fold1 Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'... ... reducing initialization grid to 10 rows. 
Saving _problems/test-surv_xgboost_cox-118.R
CV fold: Fold1
CV fold: Fold2
CV fold: Fold3
[ FAIL 5 | WARN 0 | SKIP 1 | PASS 9 ]

══ Skipped tests (1) ═══════════════════════════════════════════════════════════
• On CRAN (1): 'test-lints.R:10:5'

══ Failed tests ════════════════════════════════════════════════════════════════
── Error ('test-surv_glmnet_cox.R:99:5'): test nested cv, grid - surv_glmnet_cox ──
Error: Package "ParBayesianOptimization" must be installed to use 'strategy = "bayesian"'.
Backtrace:
    ▆
 1. └─surv_glmnet_cox_optimizer$execute() at test-surv_glmnet_cox.R:99:5
 2. └─mlexperiments:::.run_cv(self = self, private = private)
 3. └─mlexperiments:::.fold_looper(self, private)
 4. ├─base::do.call(private$cv_run_model, run_args)
 5. └─mlexperiments (local) `<fn>`(train_index = `<int>`, fold_train = `<list>`, fold_test = `<list>`)
 6. ├─base::do.call(.cv_run_nested_model, args)
 7. └─mlexperiments (local) `<fn>`(...)
 8. └─hparam_tuner$execute(k = self$k_tuning)
 9. └─private$select_optimizer(self, private)
10. └─BayesianOptimizer$new(...)
11. └─mlexperiments (local) initialize(...)
── Error ('test-surv_ranger_cox.R:110:5'): test nested cv, bayesian - surv_ranger_cox ──
Error: Package "ParBayesianOptimization" must be installed to use 'strategy = "bayesian"'.
Backtrace:
    ▆
 1. └─surv_ranger_cox_optimizer$execute() at test-surv_ranger_cox.R:110:5
 2. └─mlexperiments:::.run_cv(self = self, private = private)
 3. └─mlexperiments:::.fold_looper(self, private)
 4. ├─base::do.call(private$cv_run_model, run_args)
 5. └─mlexperiments (local) `<fn>`(train_index = `<int>`, fold_train = `<list>`, fold_test = `<list>`)
 6. ├─base::do.call(.cv_run_nested_model, args)
 7. └─mlexperiments (local) `<fn>`(...)
 8. └─hparam_tuner$execute(k = self$k_tuning)
 9. └─private$select_optimizer(self, private)
10. └─BayesianOptimizer$new(...)
11. └─mlexperiments (local) initialize(...)
── Error ('test-surv_rpart_cox.R:108:5'): test nested cv, bayesian - surv_rpart_cox ──
Error: Package "ParBayesianOptimization" must be installed to use 'strategy = "bayesian"'.
Backtrace:
    ▆
 1. └─surv_rpart_cox_optimizer$execute() at test-surv_rpart_cox.R:108:5
 2. └─mlexperiments:::.run_cv(self = self, private = private)
 3. └─mlexperiments:::.fold_looper(self, private)
 4. ├─base::do.call(private$cv_run_model, run_args)
 5. └─mlexperiments (local) `<fn>`(train_index = `<int>`, fold_train = `<list>`, fold_test = `<list>`)
 6. ├─base::do.call(.cv_run_nested_model, args)
 7. └─mlexperiments (local) `<fn>`(...)
 8. └─hparam_tuner$execute(k = self$k_tuning)
 9. └─private$select_optimizer(self, private)
10. └─BayesianOptimizer$new(...)
11. └─mlexperiments (local) initialize(...)
── Error ('test-surv_xgboost_aft.R:121:3'): test nested cv, bayesian - surv_xgboost_aft ──
Error: Package "ParBayesianOptimization" must be installed to use 'strategy = "bayesian"'.
Backtrace:
    ▆
 1. └─surv_xgboost_aft_optimizer$execute() at test-surv_xgboost_aft.R:121:3
 2. └─mlexperiments:::.run_cv(self = self, private = private)
 3. └─mlexperiments:::.fold_looper(self, private)
 4. ├─base::do.call(private$cv_run_model, run_args)
 5. └─mlexperiments (local) `<fn>`(train_index = `<int>`, fold_train = `<list>`, fold_test = `<list>`)
 6. ├─base::do.call(.cv_run_nested_model, args)
 7. └─mlexperiments (local) `<fn>`(...)
 8. └─hparam_tuner$execute(k = self$k_tuning)
 9. └─private$select_optimizer(self, private)
10. └─BayesianOptimizer$new(...)
11. └─mlexperiments (local) initialize(...)
── Error ('test-surv_xgboost_cox.R:118:3'): test nested cv, bayesian - surv_xgboost_cox ──
Error: Package "ParBayesianOptimization" must be installed to use 'strategy = "bayesian"'.
Backtrace:
    ▆
 1. └─surv_xgboost_cox_optimizer$execute() at test-surv_xgboost_cox.R:118:3
 2. └─mlexperiments:::.run_cv(self = self, private = private)
 3. └─mlexperiments:::.fold_looper(self, private)
 4. ├─base::do.call(private$cv_run_model, run_args)
 5. └─mlexperiments (local) `<fn>`(train_index = `<int>`, fold_train = `<list>`, fold_test = `<list>`)
 6. ├─base::do.call(.cv_run_nested_model, args)
 7. └─mlexperiments (local) `<fn>`(...)
 8. └─hparam_tuner$execute(k = self$k_tuning)
 9. └─private$select_optimizer(self, private)
10. └─BayesianOptimizer$new(...)
11. └─mlexperiments (local) initialize(...)

[ FAIL 5 | WARN 0 | SKIP 1 | PASS 9 ]
Error: ! Test failures.
Execution halted
Flavor: r-patched-linux-x86_64
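All five failures share one root cause: the tests exercise 'strategy = "bayesian"' without first checking that the optional ParBayesianOptimization backend is available on the check machine. A minimal testthat guard (a sketch of the usual pattern, not the package's actual test code) skips such a test instead of erroring:

```r
library(testthat)

test_that("nested cv, bayesian - surv_glmnet_cox", {
  # Skip (rather than fail) when the optional tuning backend is absent.
  # Hypothetical placement; the real test body is not shown in the log.
  skip_if_not_installed("ParBayesianOptimization")
  # ... set up the optimizer and run $execute() as before ...
})
```

With the guard in place, CRAN flavors lacking the backend would report these tests under "Skipped tests" rather than as ERRORs.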

Package mlsurvlrnrs

Current CRAN status: ERROR: 2, OK: 11

Version: 0.0.7
Check: tests
Result: ERROR
Running ‘testthat.R’ [12s/14s]
Running the tests in ‘tests/testthat.R’ failed.
Complete output:
> # This file is part of the standard setup for testthat.
> # It is recommended that you do not modify it.
> #
> # Where should you do additional test configuration?
> # Learn more about the roles of various files in:
> # * https://r-pkgs.org/tests.html
> # * https://testthat.r-lib.org/reference/test_package.html#special-files
>
> Sys.setenv("OMP_THREAD_LIMIT" = 2)
> Sys.setenv("Ncpu" = 2)
>
> library(testthat)
> library(mlsurvlrnrs)
>
> test_check("mlsurvlrnrs")
CV fold: Fold1
Parameter 'ncores' is ignored for learner 'LearnerSurvCoxPHCox'.
CV fold: Fold2
Parameter 'ncores' is ignored for learner 'LearnerSurvCoxPHCox'.
CV fold: Fold3
Parameter 'ncores' is ignored for learner 'LearnerSurvCoxPHCox'.
CV fold: Fold1
Saving _problems/test-surv_glmnet_cox-99.R
CV fold: Fold1
Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
... reducing initialization grid to 10 rows.
Saving _problems/test-surv_ranger_cox-110.R
CV fold: Fold1
Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
... reducing initialization grid to 10 rows.
Saving _problems/test-surv_rpart_cox-108.R
CV fold: Fold1
Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
... reducing initialization grid to 10 rows.
Saving _problems/test-surv_xgboost_aft-121.R
CV fold: Fold1
Parameter settings [=============================>---------------] 2/3 ( 67%)
Parameter settings [=============================================] 3/3 (100%)
CV fold: Fold2
CV progress [==================================>-----------------] 2/3 ( 67%)
Parameter settings [=============================>---------------] 2/3 ( 67%)
Parameter settings [=============================================] 3/3 (100%)
CV fold: Fold3
CV progress [====================================================] 3/3 (100%)
Parameter settings [=============================>---------------] 2/3 ( 67%)
Parameter settings [=============================================] 3/3 (100%)
CV fold: Fold1
Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
... reducing initialization grid to 10 rows.
Saving _problems/test-surv_xgboost_cox-118.R
CV fold: Fold1
Parameter settings [=============================================] 3/3 (100%)
CV fold: Fold2
CV progress [==================================>-----------------] 2/3 ( 67%)
Parameter settings [=============================================] 3/3 (100%)
CV fold: Fold3
CV progress [====================================================] 3/3 (100%)
Parameter settings [=============================================] 3/3 (100%)
[ FAIL 5 | WARN 0 | SKIP 1 | PASS 9 ]

══ Skipped tests (1) ═══════════════════════════════════════════════════════════
• On CRAN (1): 'test-lints.R:10:5'

══ Failed tests ════════════════════════════════════════════════════════════════
── Error ('test-surv_glmnet_cox.R:99:5'): test nested cv, grid - surv_glmnet_cox ──
Error: Package "rBayesianOptimization" must be installed to use 'strategy = "bayesian"'.
Backtrace:
    ▆
 1. └─surv_glmnet_cox_optimizer$execute() at test-surv_glmnet_cox.R:99:5
 2. └─mlexperiments:::.run_cv(self = self, private = private)
 3. └─mlexperiments:::.fold_looper(self, private)
 4. ├─base::do.call(private$cv_run_model, run_args)
 5. └─mlexperiments (local) `<fn>`(train_index = `<int>`, fold_train = `<list>`, fold_test = `<list>`)
 6. ├─base::do.call(.cv_run_nested_model, args)
 7. └─mlexperiments (local) `<fn>`(...)
 8. └─hparam_tuner$execute(k = self$k_tuning)
 9. └─private$select_optimizer(self, private)
10. └─BayesianOptimizer$new(...)
11. └─mlexperiments (local) initialize(...)
── Error ('test-surv_ranger_cox.R:110:5'): test nested cv, bayesian - surv_ranger_cox ──
Error: Package "rBayesianOptimization" must be installed to use 'strategy = "bayesian"'.
Backtrace:
    ▆
 1. └─surv_ranger_cox_optimizer$execute() at test-surv_ranger_cox.R:110:5
 2. └─mlexperiments:::.run_cv(self = self, private = private)
 3. └─mlexperiments:::.fold_looper(self, private)
 4. ├─base::do.call(private$cv_run_model, run_args)
 5. └─mlexperiments (local) `<fn>`(train_index = `<int>`, fold_train = `<list>`, fold_test = `<list>`)
 6. ├─base::do.call(.cv_run_nested_model, args)
 7. └─mlexperiments (local) `<fn>`(...)
 8. └─hparam_tuner$execute(k = self$k_tuning)
 9. └─private$select_optimizer(self, private)
10. └─BayesianOptimizer$new(...)
11. └─mlexperiments (local) initialize(...)
── Error ('test-surv_rpart_cox.R:108:5'): test nested cv, bayesian - surv_rpart_cox ──
Error: Package "rBayesianOptimization" must be installed to use 'strategy = "bayesian"'.
Backtrace:
    ▆
 1. └─surv_rpart_cox_optimizer$execute() at test-surv_rpart_cox.R:108:5
 2. └─mlexperiments:::.run_cv(self = self, private = private)
 3. └─mlexperiments:::.fold_looper(self, private)
 4. ├─base::do.call(private$cv_run_model, run_args)
 5. └─mlexperiments (local) `<fn>`(train_index = `<int>`, fold_train = `<list>`, fold_test = `<list>`)
 6. ├─base::do.call(.cv_run_nested_model, args)
 7. └─mlexperiments (local) `<fn>`(...)
 8. └─hparam_tuner$execute(k = self$k_tuning)
 9. └─private$select_optimizer(self, private)
10. └─BayesianOptimizer$new(...)
11. └─mlexperiments (local) initialize(...)
── Error ('test-surv_xgboost_aft.R:121:3'): test nested cv, bayesian - surv_xgboost_aft ──
Error: Package "rBayesianOptimization" must be installed to use 'strategy = "bayesian"'.
Backtrace:
    ▆
 1. └─surv_xgboost_aft_optimizer$execute() at test-surv_xgboost_aft.R:121:3
 2. └─mlexperiments:::.run_cv(self = self, private = private)
 3. └─mlexperiments:::.fold_looper(self, private)
 4. ├─base::do.call(private$cv_run_model, run_args)
 5. └─mlexperiments (local) `<fn>`(train_index = `<int>`, fold_train = `<list>`, fold_test = `<list>`)
 6. ├─base::do.call(.cv_run_nested_model, args)
 7. └─mlexperiments (local) `<fn>`(...)
 8. └─hparam_tuner$execute(k = self$k_tuning)
 9. └─private$select_optimizer(self, private)
10. └─BayesianOptimizer$new(...)
11. └─mlexperiments (local) initialize(...)
── Error ('test-surv_xgboost_cox.R:118:3'): test nested cv, bayesian - surv_xgboost_cox ──
Error: Package "rBayesianOptimization" must be installed to use 'strategy = "bayesian"'.
Backtrace:
    ▆
 1. └─surv_xgboost_cox_optimizer$execute() at test-surv_xgboost_cox.R:118:3
 2. └─mlexperiments:::.run_cv(self = self, private = private)
 3. └─mlexperiments:::.fold_looper(self, private)
 4. ├─base::do.call(private$cv_run_model, run_args)
 5. └─mlexperiments (local) `<fn>`(train_index = `<int>`, fold_train = `<list>`, fold_test = `<list>`)
 6. ├─base::do.call(.cv_run_nested_model, args)
 7. └─mlexperiments (local) `<fn>`(...)
 8. └─hparam_tuner$execute(k = self$k_tuning)
 9. └─private$select_optimizer(self, private)
10. └─BayesianOptimizer$new(...)
11. └─mlexperiments (local) initialize(...)

[ FAIL 5 | WARN 0 | SKIP 1 | PASS 9 ]
Error: ! Test failures.
Execution halted
Flavor: r-release-linux-x86_64
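This log fails the same way as the Version 0.0.8 log above, only with the older rBayesianOptimization backend. Whichever backend a release targets, CRAN check machines only install packages that the DESCRIPTION file declares; optional tuning backends therefore belong under Suggests, with their use guarded at runtime. A hypothetical DESCRIPTION fragment (a sketch, not the package's actual file) would look like:

```
Suggests:
    ParBayesianOptimization,
    rBayesianOptimization,
    testthat (>= 3.0.0)
```

Declaring both backends in Suggests, plus `skip_if_not_installed()` guards in the affected tests, would keep the checks green even on flavors where a backend is unavailable (e.g. because it has been archived).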

Package rBiasCorrection

Current CRAN status: OK: 13

Package sjtable2df

Current CRAN status: OK: 13
