Last updated on 2026-01-20 13:49:55 CET.
| Flavor | Version | T_install (s) | T_check (s) | T_total (s) | Status | Flags |
|---|---|---|---|---|---|---|
| r-devel-linux-x86_64-debian-clang | 0.0.8 | 5.81 | 336.95 | 342.76 | OK | |
| r-devel-linux-x86_64-debian-gcc | 0.0.8 | 3.65 | 232.01 | 235.66 | OK | |
| r-devel-linux-x86_64-fedora-clang | 0.0.8 | 9.00 | 523.96 | 532.96 | OK | |
| r-devel-linux-x86_64-fedora-gcc | 0.0.8 | 9.00 | 549.12 | 558.12 | OK | |
| r-devel-windows-x86_64 | 0.0.8 | 7.00 | 324.00 | 331.00 | OK | |
| r-patched-linux-x86_64 | 0.0.7 | 5.51 | 171.72 | 177.23 | ERROR | |
| r-release-linux-x86_64 | 0.0.7 | 4.90 | 203.31 | 208.21 | ERROR | |
| r-release-macos-arm64 | 0.0.8 | 1.00 | 91.00 | 92.00 | OK | |
| r-release-macos-x86_64 | 0.0.8 | 4.00 | 283.00 | 287.00 | OK | |
| r-release-windows-x86_64 | 0.0.7 | 7.00 | 278.00 | 285.00 | OK | |
| r-oldrel-macos-arm64 | 0.0.8 | 1.00 | 91.00 | 92.00 | OK | |
| r-oldrel-macos-x86_64 | 0.0.8 | 4.00 | 393.00 | 397.00 | OK | |
| r-oldrel-windows-x86_64 | 0.0.8 | 6.00 | 394.00 | 400.00 | OK | |
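
Both ERROR rows come from version 0.0.7 on Linux flavors whose check libraries lack the optional Bayesian-optimization backend exercised by the test suite (full logs below). A quick way to confirm whether those Suggests-level backends are present in a given library, using only base R — a small sketch, not part of the check output:

```r
# Check whether the optional optimizer backends named in the test
# failures below are installed; both are real CRAN packages.
backends <- c("ParBayesianOptimization", "rBayesianOptimization")
installed <- vapply(backends, requireNamespace, logical(1), quietly = TRUE)
print(installed)  # FALSE entries correspond to the ERROR flavors' logs
```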
Version: 0.0.7
Check: tests
Result: ERROR
Running ‘testthat.R’ [61s/68s]
Running the tests in ‘tests/testthat.R’ failed.
Complete output:
> # This file is part of the standard setup for testthat.
> # It is recommended that you do not modify it.
> #
> # Where should you do additional test configuration?
> # Learn more about the roles of various files in:
> # * https://r-pkgs.org/tests.html
> # * https://testthat.r-lib.org/reference/test_package.html#special-files
> # https://github.com/Rdatatable/data.table/issues/5658
> Sys.setenv("OMP_THREAD_LIMIT" = 2)
> Sys.setenv("Ncpu" = 2)
>
> library(testthat)
> library(mllrnrs)
>
> test_check("mllrnrs")
CV fold: Fold1
CV fold: Fold1
Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
... reducing initialization grid to 10 rows.
Saving _problems/test-binary-225.R
CV fold: Fold1
Classification: using 'mean classification error' as optimization metric.
Classification: using 'mean classification error' as optimization metric.
Classification: using 'mean classification error' as optimization metric.
CV fold: Fold2
Classification: using 'mean classification error' as optimization metric.
Classification: using 'mean classification error' as optimization metric.
Classification: using 'mean classification error' as optimization metric.
CV fold: Fold3
Classification: using 'mean classification error' as optimization metric.
Classification: using 'mean classification error' as optimization metric.
Classification: using 'mean classification error' as optimization metric.
CV fold: Fold1
CV fold: Fold2
CV fold: Fold3
CV fold: Fold1
CV fold: Fold2
CV fold: Fold3
CV fold: Fold1
Classification: using 'mean classification error' as optimization metric.
Classification: using 'mean classification error' as optimization metric.
Classification: using 'mean classification error' as optimization metric.
CV fold: Fold2
Classification: using 'mean classification error' as optimization metric.
Classification: using 'mean classification error' as optimization metric.
Classification: using 'mean classification error' as optimization metric.
CV fold: Fold3
Classification: using 'mean classification error' as optimization metric.
Classification: using 'mean classification error' as optimization metric.
Classification: using 'mean classification error' as optimization metric.
CV fold: Fold1
CV fold: Fold2
CV fold: Fold3
CV fold: Fold1
Saving _problems/test-regression-107.R
CV fold: Fold1
CV fold: Fold2
CV fold: Fold3
CV fold: Fold1
Regression: using 'mean squared error' as optimization metric.
Regression: using 'mean squared error' as optimization metric.
Regression: using 'mean squared error' as optimization metric.
CV fold: Fold2
Regression: using 'mean squared error' as optimization metric.
Regression: using 'mean squared error' as optimization metric.
Regression: using 'mean squared error' as optimization metric.
CV fold: Fold3
Regression: using 'mean squared error' as optimization metric.
Regression: using 'mean squared error' as optimization metric.
Regression: using 'mean squared error' as optimization metric.
CV fold: Fold1
Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
... reducing initialization grid to 10 rows.
Saving _problems/test-regression-309.R
CV fold: Fold1
CV fold: Fold2
CV fold: Fold3
[ FAIL 3 | WARN 0 | SKIP 3 | PASS 25 ]
══ Skipped tests (3) ═══════════════════════════════════════════════════════════
• On CRAN (3): 'test-binary.R:57:5', 'test-lints.R:10:5',
'test-multiclass.R:57:5'
══ Failed tests ════════════════════════════════════════════════════════════════
── Error ('test-binary.R:225:5'): test nested cv, bayesian, binary - lightgbm ──
Error: Package "ParBayesianOptimization" must be installed to use 'strategy = "bayesian"'.
Backtrace:
▆
1. └─lightgbm_optimizer$execute() at test-binary.R:225:5
2. └─mlexperiments:::.run_cv(self = self, private = private)
3. └─mlexperiments:::.fold_looper(self, private)
4. ├─base::do.call(private$cv_run_model, run_args)
5. └─mlexperiments (local) `<fn>`(train_index = `<int>`, fold_train = `<list>`, fold_test = `<list>`)
6. ├─base::do.call(.cv_run_nested_model, args)
7. └─mlexperiments (local) `<fn>`(...)
8. └─hparam_tuner$execute(k = self$k_tuning)
9. └─private$select_optimizer(self, private)
10. └─BayesianOptimizer$new(...)
11. └─mlexperiments (local) initialize(...)
── Error ('test-regression.R:107:5'): test nested cv, bayesian, regression - glmnet ──
Error: Package "ParBayesianOptimization" must be installed to use 'strategy = "bayesian"'.
Backtrace:
▆
1. └─glmnet_optimizer$execute() at test-regression.R:107:5
2. └─mlexperiments:::.run_cv(self = self, private = private)
3. └─mlexperiments:::.fold_looper(self, private)
4. ├─base::do.call(private$cv_run_model, run_args)
5. └─mlexperiments (local) `<fn>`(train_index = `<int>`, fold_train = `<list>`, fold_test = `<list>`)
6. ├─base::do.call(.cv_run_nested_model, args)
7. └─mlexperiments (local) `<fn>`(...)
8. └─hparam_tuner$execute(k = self$k_tuning)
9. └─private$select_optimizer(self, private)
10. └─BayesianOptimizer$new(...)
11. └─mlexperiments (local) initialize(...)
── Error ('test-regression.R:309:5'): test nested cv, bayesian, reg:squarederror - xgboost ──
Error: Package "ParBayesianOptimization" must be installed to use 'strategy = "bayesian"'.
Backtrace:
▆
1. └─xgboost_optimizer$execute() at test-regression.R:309:5
2. └─mlexperiments:::.run_cv(self = self, private = private)
3. └─mlexperiments:::.fold_looper(self, private)
4. ├─base::do.call(private$cv_run_model, run_args)
5. └─mlexperiments (local) `<fn>`(train_index = `<int>`, fold_train = `<list>`, fold_test = `<list>`)
6. ├─base::do.call(.cv_run_nested_model, args)
7. └─mlexperiments (local) `<fn>`(...)
8. └─hparam_tuner$execute(k = self$k_tuning)
9. └─private$select_optimizer(self, private)
10. └─BayesianOptimizer$new(...)
11. └─mlexperiments (local) initialize(...)
[ FAIL 3 | WARN 0 | SKIP 3 | PASS 25 ]
Error:
! Test failures.
Execution halted
Flavor: r-patched-linux-x86_64
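
All three failures on this flavor share one cause: the suggested package ParBayesianOptimization is absent from the check library, so every test requesting `strategy = "bayesian"` errors instead of skipping. A minimal sketch of the standard testthat guard (the test name is taken from the report above; the guard is testthat's documented API, and the body shown is a placeholder, not the package's actual test code):

```r
# tests/testthat/test-binary.R (sketch)
test_that("test nested cv, bayesian, binary - lightgbm", {
  # Skip cleanly on machines without the optional backend, such as
  # r-patched-linux-x86_64 above, instead of erroring at runtime.
  skip_if_not_installed("ParBayesianOptimization")
  # ... existing test body calling lightgbm_optimizer$execute() ...
  expect_true(TRUE)  # placeholder assertion
})
```

The same guard at test-regression.R:107 and test-regression.R:309 would turn these ERRORs into skips on machines lacking the backend.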
Version: 0.0.7
Check: tests
Result: ERROR
Running ‘testthat.R’ [90s/100s]
Running the tests in ‘tests/testthat.R’ failed.
Complete output:
> # This file is part of the standard setup for testthat.
> # It is recommended that you do not modify it.
> #
> # Where should you do additional test configuration?
> # Learn more about the roles of various files in:
> # * https://r-pkgs.org/tests.html
> # * https://testthat.r-lib.org/reference/test_package.html#special-files
> # https://github.com/Rdatatable/data.table/issues/5658
> Sys.setenv("OMP_THREAD_LIMIT" = 2)
> Sys.setenv("Ncpu" = 2)
>
> library(testthat)
> library(mllrnrs)
>
> test_check("mllrnrs")
CV fold: Fold1
CV fold: Fold1
Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
... reducing initialization grid to 10 rows.
Saving _problems/test-binary-225.R
CV fold: Fold1
Classification: using 'mean classification error' as optimization metric.
Parameter settings [=============================>---------------] 2/3 ( 67%)
Classification: using 'mean classification error' as optimization metric.
Parameter settings [=============================================] 3/3 (100%)
Classification: using 'mean classification error' as optimization metric.
CV fold: Fold2
CV progress [==================================>-----------------] 2/3 ( 67%)
Classification: using 'mean classification error' as optimization metric.
Parameter settings [=============================>---------------] 2/3 ( 67%)
Classification: using 'mean classification error' as optimization metric.
Parameter settings [=============================================] 3/3 (100%)
Classification: using 'mean classification error' as optimization metric.
CV fold: Fold3
CV progress [====================================================] 3/3 (100%)
Classification: using 'mean classification error' as optimization metric.
Parameter settings [=============================>---------------] 2/3 ( 67%)
Classification: using 'mean classification error' as optimization metric.
Parameter settings [=============================================] 3/3 (100%)
Classification: using 'mean classification error' as optimization metric.
CV fold: Fold1
Parameter settings [=============================>---------------] 2/3 ( 67%)
Parameter settings [=============================================] 3/3 (100%)
CV fold: Fold2
CV progress [==================================>-----------------] 2/3 ( 67%)
Parameter settings [=============================>---------------] 2/3 ( 67%)
Parameter settings [=============================================] 3/3 (100%)
CV fold: Fold3
CV progress [====================================================] 3/3 (100%)
Parameter settings [=============================================] 3/3 (100%)
CV fold: Fold1
Parameter settings [=============================>---------------] 2/3 ( 67%)
Parameter settings [=============================================] 3/3 (100%)
CV fold: Fold2
CV progress [==================================>-----------------] 2/3 ( 67%)
Parameter settings [=============================>---------------] 2/3 ( 67%)
Parameter settings [=============================================] 3/3 (100%)
CV fold: Fold3
CV progress [====================================================] 3/3 (100%)
Parameter settings [=============================>---------------] 2/3 ( 67%)
Parameter settings [=============================================] 3/3 (100%)
CV fold: Fold1
Classification: using 'mean classification error' as optimization metric.
Parameter settings [=============================>---------------] 2/3 ( 67%)
Classification: using 'mean classification error' as optimization metric.
Parameter settings [=============================================] 3/3 (100%)
Classification: using 'mean classification error' as optimization metric.
CV fold: Fold2
CV progress [==================================>-----------------] 2/3 ( 67%)
Classification: using 'mean classification error' as optimization metric.
Parameter settings [=============================>---------------] 2/3 ( 67%)
Classification: using 'mean classification error' as optimization metric.
Parameter settings [=============================================] 3/3 (100%)
Classification: using 'mean classification error' as optimization metric.
CV fold: Fold3
CV progress [====================================================] 3/3 (100%)
Classification: using 'mean classification error' as optimization metric.
Parameter settings [=============================>---------------] 2/3 ( 67%)
Classification: using 'mean classification error' as optimization metric.
Parameter settings [=============================================] 3/3 (100%)
Classification: using 'mean classification error' as optimization metric.
CV fold: Fold1
Parameter settings [=============================>---------------] 2/3 ( 67%)
Parameter settings [=============================================] 3/3 (100%)
CV fold: Fold2
CV progress [==================================>-----------------] 2/3 ( 67%)
Parameter settings [=============================>---------------] 2/3 ( 67%)
Parameter settings [=============================================] 3/3 (100%)
CV fold: Fold3
CV progress [====================================================] 3/3 (100%)
Parameter settings [=============================>---------------] 2/3 ( 67%)
Parameter settings [=============================================] 3/3 (100%)
CV fold: Fold1
Saving _problems/test-regression-107.R
CV fold: Fold1
Parameter settings [=============================>---------------] 2/3 ( 67%)
Parameter settings [=============================================] 3/3 (100%)
CV fold: Fold2
CV progress [==================================>-----------------] 2/3 ( 67%)
Parameter settings [=============================>---------------] 2/3 ( 67%)
Parameter settings [=============================================] 3/3 (100%)
CV fold: Fold3
CV progress [====================================================] 3/3 (100%)
Parameter settings [=============================>---------------] 2/3 ( 67%)
Parameter settings [=============================================] 3/3 (100%)
CV fold: Fold1
Regression: using 'mean squared error' as optimization metric.
Parameter settings [=============================>---------------] 2/3 ( 67%)
Regression: using 'mean squared error' as optimization metric.
Parameter settings [=============================================] 3/3 (100%)
Regression: using 'mean squared error' as optimization metric.
CV fold: Fold2
CV progress [==================================>-----------------] 2/3 ( 67%)
Regression: using 'mean squared error' as optimization metric.
Parameter settings [=============================>---------------] 2/3 ( 67%)
Regression: using 'mean squared error' as optimization metric.
Parameter settings [=============================================] 3/3 (100%)
Regression: using 'mean squared error' as optimization metric.
CV fold: Fold3
CV progress [====================================================] 3/3 (100%)
Regression: using 'mean squared error' as optimization metric.
Parameter settings [=============================>---------------] 2/3 ( 67%)
Regression: using 'mean squared error' as optimization metric.
Parameter settings [=============================================] 3/3 (100%)
Regression: using 'mean squared error' as optimization metric.
CV fold: Fold1
Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
... reducing initialization grid to 10 rows.
Saving _problems/test-regression-309.R
CV fold: Fold1
Parameter settings [=============================>---------------] 2/3 ( 67%)
Parameter settings [=============================================] 3/3 (100%)
CV fold: Fold2
CV progress [==================================>-----------------] 2/3 ( 67%)
Parameter settings [=============================>---------------] 2/3 ( 67%)
Parameter settings [=============================================] 3/3 (100%)
CV fold: Fold3
CV progress [====================================================] 3/3 (100%)
Parameter settings [=============================>---------------] 2/3 ( 67%)
Parameter settings [=============================================] 3/3 (100%)
[ FAIL 3 | WARN 0 | SKIP 3 | PASS 25 ]
══ Skipped tests (3) ═══════════════════════════════════════════════════════════
• On CRAN (3): 'test-binary.R:57:5', 'test-lints.R:10:5',
'test-multiclass.R:57:5'
══ Failed tests ════════════════════════════════════════════════════════════════
── Error ('test-binary.R:225:5'): test nested cv, bayesian, binary - lightgbm ──
Error: Package "rBayesianOptimization" must be installed to use 'strategy = "bayesian"'.
Backtrace:
▆
1. └─lightgbm_optimizer$execute() at test-binary.R:225:5
2. └─mlexperiments:::.run_cv(self = self, private = private)
3. └─mlexperiments:::.fold_looper(self, private)
4. ├─base::do.call(private$cv_run_model, run_args)
5. └─mlexperiments (local) `<fn>`(train_index = `<int>`, fold_train = `<list>`, fold_test = `<list>`)
6. ├─base::do.call(.cv_run_nested_model, args)
7. └─mlexperiments (local) `<fn>`(...)
8. └─hparam_tuner$execute(k = self$k_tuning)
9. └─private$select_optimizer(self, private)
10. └─BayesianOptimizer$new(...)
11. └─mlexperiments (local) initialize(...)
── Error ('test-regression.R:107:5'): test nested cv, bayesian, regression - glmnet ──
Error: Package "rBayesianOptimization" must be installed to use 'strategy = "bayesian"'.
Backtrace:
▆
1. └─glmnet_optimizer$execute() at test-regression.R:107:5
2. └─mlexperiments:::.run_cv(self = self, private = private)
3. └─mlexperiments:::.fold_looper(self, private)
4. ├─base::do.call(private$cv_run_model, run_args)
5. └─mlexperiments (local) `<fn>`(train_index = `<int>`, fold_train = `<list>`, fold_test = `<list>`)
6. ├─base::do.call(.cv_run_nested_model, args)
7. └─mlexperiments (local) `<fn>`(...)
8. └─hparam_tuner$execute(k = self$k_tuning)
9. └─private$select_optimizer(self, private)
10. └─BayesianOptimizer$new(...)
11. └─mlexperiments (local) initialize(...)
── Error ('test-regression.R:309:5'): test nested cv, bayesian, reg:squarederror - xgboost ──
Error: Package "rBayesianOptimization" must be installed to use 'strategy = "bayesian"'.
Backtrace:
▆
1. └─xgboost_optimizer$execute() at test-regression.R:309:5
2. └─mlexperiments:::.run_cv(self = self, private = private)
3. └─mlexperiments:::.fold_looper(self, private)
4. ├─base::do.call(private$cv_run_model, run_args)
5. └─mlexperiments (local) `<fn>`(train_index = `<int>`, fold_train = `<list>`, fold_test = `<list>`)
6. ├─base::do.call(.cv_run_nested_model, args)
7. └─mlexperiments (local) `<fn>`(...)
8. └─hparam_tuner$execute(k = self$k_tuning)
9. └─private$select_optimizer(self, private)
10. └─BayesianOptimizer$new(...)
11. └─mlexperiments (local) initialize(...)
[ FAIL 3 | WARN 0 | SKIP 3 | PASS 25 ]
Error:
! Test failures.
Execution halted
Flavor: r-release-linux-x86_64
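
On this flavor the missing backend is rBayesianOptimization rather than ParBayesianOptimization, but the mechanism is the same: a `requireNamespace()` check guards the Bayesian strategy and stops with the message seen in the backtraces. An illustrative reconstruction of such a guard (the function name and signature here are hypothetical, not mlexperiments' actual internals; only the error text is taken from the log):

```r
# Hypothetical guard illustrating how the error above is raised;
# requireNamespace() is the standard base-R test for a Suggests package.
check_bayesian_backend <- function(backend = "rBayesianOptimization") {
  if (!requireNamespace(backend, quietly = TRUE)) {
    stop(
      sprintf(
        "Package \"%s\" must be installed to use 'strategy = \"bayesian\"'.",
        backend
      ),
      call. = FALSE
    )
  }
  invisible(TRUE)
}
```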