Last updated on 2025-12-04 09:50:27 CET.
| Package | ERROR | OK |
|---|---|---|
| autonewsmd | | 13 |
| BiasCorrector | | 13 |
| DQAgui | | 13 |
| DQAstats | | 13 |
| kdry | | 13 |
| mlexperiments | | 13 |
| mllrnrs | 2 | 11 |
| mlsurvlrnrs | 2 | 11 |
| rBiasCorrection | | 13 |
| sjtable2df | | 13 |
Current CRAN status of autonewsmd: OK: 13
Current CRAN status of BiasCorrector: OK: 13
Current CRAN status of DQAgui: OK: 13
Current CRAN status of DQAstats: OK: 13
Current CRAN status of kdry: OK: 13
Current CRAN status of mlexperiments: OK: 13
Current CRAN status of mllrnrs: ERROR: 2, OK: 11
Version: 0.0.6
Check: tests
Result: ERROR
Running ‘testthat.R’ [2m/14m]
Running the tests in ‘tests/testthat.R’ failed.
Complete output:
> # This file is part of the standard setup for testthat.
> # It is recommended that you do not modify it.
> #
> # Where should you do additional test configuration?
> # Learn more about the roles of various files in:
> # * https://r-pkgs.org/tests.html
> # * https://testthat.r-lib.org/reference/test_package.html#special-files
> # https://github.com/Rdatatable/data.table/issues/5658
> Sys.setenv("OMP_THREAD_LIMIT" = 2)
> Sys.setenv("Ncpu" = 2)
>
> library(testthat)
> library(mllrnrs)
>
> test_check("mllrnrs")
CV fold: Fold1
CV fold: Fold1
Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
... reducing initialization grid to 10 rows.
Registering parallel backend using 2 cores.
Running initial scoring function 10 times in 2 thread(s)... 37.042 seconds
Starting Epoch 1
1) Fitting Gaussian Process...
2) Running local optimum search... 61.245 seconds
3) Running FUN 2 times in 2 thread(s)... 0.988 seconds
OMP: Warning #96: Cannot form a team with 24 threads, using 2 instead.
OMP: Hint Consider unsetting KMP_DEVICE_THREAD_LIMIT (KMP_ALL_THREADS), KMP_TEAMS_THREAD_LIMIT, and OMP_THREAD_LIMIT (if any are set).
CV fold: Fold2
Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
... reducing initialization grid to 10 rows.
Registering parallel backend using 2 cores.
Running initial scoring function 10 times in 2 thread(s)... 30.162 seconds
Starting Epoch 1
1) Fitting Gaussian Process...
2) Running local optimum search... 55.36 seconds
3) Running FUN 2 times in 2 thread(s)... 2.693 seconds
CV fold: Fold3
Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
... reducing initialization grid to 10 rows.
Registering parallel backend using 2 cores.
Running initial scoring function 10 times in 2 thread(s)... 32.292 seconds
Starting Epoch 1
1) Fitting Gaussian Process...
2) Running local optimum search... 107.661 seconds
3) Running FUN 2 times in 2 thread(s)... 2.562 seconds
CV fold: Fold1
Classification: using 'mean classification error' as optimization metric.
Classification: using 'mean classification error' as optimization metric.
Classification: using 'mean classification error' as optimization metric.
CV fold: Fold2
Classification: using 'mean classification error' as optimization metric.
Classification: using 'mean classification error' as optimization metric.
Classification: using 'mean classification error' as optimization metric.
CV fold: Fold3
Classification: using 'mean classification error' as optimization metric.
Classification: using 'mean classification error' as optimization metric.
Classification: using 'mean classification error' as optimization metric.
CV fold: Fold1
Saving _problems/test-binary-356.R
CV fold: Fold1
CV fold: Fold2
CV fold: Fold3
CV fold: Fold1
Classification: using 'mean classification error' as optimization metric.
Classification: using 'mean classification error' as optimization metric.
Classification: using 'mean classification error' as optimization metric.
CV fold: Fold2
Classification: using 'mean classification error' as optimization metric.
Classification: using 'mean classification error' as optimization metric.
Classification: using 'mean classification error' as optimization metric.
CV fold: Fold3
Classification: using 'mean classification error' as optimization metric.
Classification: using 'mean classification error' as optimization metric.
Classification: using 'mean classification error' as optimization metric.
CV fold: Fold1
Saving _problems/test-multiclass-294.R
CV fold: Fold1
Registering parallel backend using 2 cores.
Running initial scoring function 5 times in 2 thread(s)... 31.15 seconds
Starting Epoch 1
1) Fitting Gaussian Process...
2) Running local optimum search... 9.486 seconds
3) Running FUN 2 times in 2 thread(s)... 3.691 seconds
CV fold: Fold2
Registering parallel backend using 2 cores.
Running initial scoring function 5 times in 2 thread(s)... 37.293 seconds
Starting Epoch 1
1) Fitting Gaussian Process...
2) Running local optimum search... 17.279 seconds
3) Running FUN 2 times in 2 thread(s)... 2.458 seconds
CV fold: Fold3
Registering parallel backend using 2 cores.
Running initial scoring function 5 times in 2 thread(s)... 30.833 seconds
Starting Epoch 1
1) Fitting Gaussian Process...
2) Running local optimum search... 10.54 seconds
3) Running FUN 2 times in 2 thread(s)... 2.465 seconds
CV fold: Fold1
CV fold: Fold2
CV fold: Fold3
CV fold: Fold1
Regression: using 'mean squared error' as optimization metric.
Regression: using 'mean squared error' as optimization metric.
Regression: using 'mean squared error' as optimization metric.
CV fold: Fold2
Regression: using 'mean squared error' as optimization metric.
Regression: using 'mean squared error' as optimization metric.
Regression: using 'mean squared error' as optimization metric.
CV fold: Fold3
Regression: using 'mean squared error' as optimization metric.
Regression: using 'mean squared error' as optimization metric.
Regression: using 'mean squared error' as optimization metric.
CV fold: Fold1
Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
... reducing initialization grid to 10 rows.
Registering parallel backend using 2 cores.
Running initial scoring function 10 times in 2 thread(s)... 52.604 seconds
subsample colsample_bytree min_child_weight learning_rate max_depth
<num> <num> <num> <num> <num>
1: 1.0 0.8 1 0.1 5
2: 0.8 1.0 1 0.2 5
3: 1.0 1.0 5 0.2 5
4: 0.6 0.8 1 0.1 5
5: 0.6 0.8 5 0.2 5
6: 0.8 0.8 5 0.2 5
7: 0.8 0.8 1 0.1 1
8: 0.6 0.6 1 0.2 5
9: 0.6 1.0 1 0.1 1
10: 0.6 0.8 1 0.2 5
errorMessage
<char>
1: FUN returned these elements with length > 1: Score,metric_optim_mean
2: FUN returned these elements with length > 1: Score,metric_optim_mean
3: FUN returned these elements with length > 1: Score,metric_optim_mean
4: FUN returned these elements with length > 1: Score,metric_optim_mean
5: FUN returned these elements with length > 1: Score,metric_optim_mean
6: FUN returned these elements with length > 1: Score,metric_optim_mean
7: FUN returned these elements with length > 1: Score,metric_optim_mean
8: FUN returned these elements with length > 1: Score,metric_optim_mean
9: FUN returned these elements with length > 1: Score,metric_optim_mean
10: FUN returned these elements with length > 1: Score,metric_optim_mean
Saving _problems/test-regression-309.R
CV fold: Fold1
Saving _problems/test-regression-352.R
[ FAIL 4 | WARN 3 | SKIP 3 | PASS 22 ]
══ Skipped tests (3) ═══════════════════════════════════════════════════════════
• On CRAN (3): 'test-binary.R:57:5', 'test-lints.R:10:5',
'test-multiclass.R:57:5'
══ Failed tests ════════════════════════════════════════════════════════════════
── Error ('test-binary.R:356:5'): test nested cv, grid, binary:logistic - xgboost ──
Error in `.get_best_setting(results = outlist$summary, opt_metric = "metric_optim_mean", param_names = param_names, higher_better = metric_higher_better)`: nrow(best_row) == 1 is not TRUE
Backtrace:
▆
1. └─xgboost_optimizer$execute() at test-binary.R:356:5
2. └─mlexperiments:::.run_cv(self = self, private = private)
3. └─mlexperiments:::.fold_looper(self, private)
4. ├─base::do.call(private$cv_run_model, run_args)
5. └─mlexperiments (local) `<fn>`(train_index = `<int>`, fold_train = `<named list>`, fold_test = `<named list>`)
6. ├─base::do.call(.cv_run_nested_model, args)
7. └─mlexperiments (local) `<fn>`(...)
8. └─hparam_tuner$execute(k = self$k_tuning)
9. └─mlexperiments:::.run_tuning(self = self, private = private, optimizer = optimizer)
10. └─mlexperiments:::.run_optimizer(...)
11. └─mlexperiments:::.optimize_postprocessing(...)
12. └─mlexperiments:::.get_best_setting(...)
13. └─base::stopifnot(nrow(best_row) == 1)
── Error ('test-multiclass.R:294:5'): test nested cv, grid, multi:softprob - xgboost, with weights ──
Error in `.get_best_setting(results = outlist$summary, opt_metric = "metric_optim_mean", param_names = param_names, higher_better = metric_higher_better)`: nrow(best_row) == 1 is not TRUE
Backtrace:
▆
1. └─xgboost_optimizer$execute() at test-multiclass.R:294:5
2. └─mlexperiments:::.run_cv(self = self, private = private)
3. └─mlexperiments:::.fold_looper(self, private)
4. ├─base::do.call(private$cv_run_model, run_args)
5. └─mlexperiments (local) `<fn>`(train_index = `<int>`, fold_train = `<named list>`, fold_test = `<named list>`)
6. ├─base::do.call(.cv_run_nested_model, args)
7. └─mlexperiments (local) `<fn>`(...)
8. └─hparam_tuner$execute(k = self$k_tuning)
9. └─mlexperiments:::.run_tuning(self = self, private = private, optimizer = optimizer)
10. └─mlexperiments:::.run_optimizer(...)
11. └─mlexperiments:::.optimize_postprocessing(...)
12. └─mlexperiments:::.get_best_setting(...)
13. └─base::stopifnot(nrow(best_row) == 1)
── Error ('test-regression.R:309:5'): test nested cv, bayesian, reg:squarederror - xgboost ──
Error in `(function (FUN, bounds, saveFile = NULL, initGrid, initPoints = 4, iters.n = 3, iters.k = 1, otherHalting = list(timeLimit = Inf, minUtility = 0), acq = "ucb", kappa = 2.576, eps = 0, parallel = FALSE, gsPoints = pmax(100, length(bounds)^3), convThresh = 1e+08, acqThresh = 1, errorHandling = "stop", plotProgress = FALSE, verbose = 1, ...) { startT <- Sys.time() optObj <- list() class(optObj) <- "bayesOpt" optObj$FUN <- FUN optObj$bounds <- bounds optObj$iters <- 0 optObj$initPars <- list() optObj$optPars <- list() optObj$GauProList <- list() optObj <- changeSaveFile(optObj, saveFile) checkParameters(bounds, iters.n, iters.k, otherHalting, acq, acqThresh, errorHandling, plotProgress, parallel, verbose) boundsDT <- boundsToDT(bounds) otherHalting <- formatOtherHalting(otherHalting) if (missing(initGrid) + missing(initPoints) != 1) stop("Please provide 1 of initGrid or initPoints, but not both.") if (!missing(initGrid)) { setDT(initGrid) inBounds <- checkBounds(initGrid, bounds) inBounds <- as.logical(apply(inBounds, 1, prod)) if (any(!inBounds)) stop("initGrid not within bounds.") optObj$initPars$initialSample <- "User Provided Grid" initPoints <- nrow(initGrid) } else { initGrid <- randParams(boundsDT, initPoints) optObj$initPars$initialSample <- "Latin Hypercube Sampling" } optObj$initPars$initGrid <- initGrid if (nrow(initGrid) <= 2) stop("Cannot initialize with less than 3 samples.") optObj$initPars$initPoints <- nrow(initGrid) if (initPoints <= length(bounds)) stop("initPoints must be greater than the number of FUN inputs.") sinkFile <- file() on.exit({ while (sink.number() > 0) sink() close(sinkFile) }) `%op%` <- ParMethod(parallel) if (parallel) Workers <- getDoParWorkers() else Workers <- 1 if (verbose > 0) cat("\nRunning initial scoring function", nrow(initGrid), "times in", Workers, "thread(s)...") sink(file = sinkFile) tm <- system.time(scoreSummary <- foreach(iter = 1:nrow(initGrid), .options.multicore = list(preschedule = FALSE), .combine = list, .multicombine = TRUE, .inorder = FALSE, .errorhandling = "pass", .verbose = FALSE) %op% { Params <- initGrid[get("iter"), ] Elapsed <- system.time(Result <- tryCatch({ do.call(what = FUN, args = as.list(Params)) }, error = function(e) e)) if (any(class(Result) %in% c("simpleError", "error", "condition"))) return(Result) if (!inherits(x = Result, what = "list")) stop("Object returned from FUN was not a list.") resLengths <- lengths(Result) if (!any(names(Result) == "Score")) stop("FUN must return list with element 'Score' at a minimum.") if (!is.numeric(Result$Score)) stop("Score returned from FUN was not numeric.") if (any(resLengths != 1)) { badReturns <- names(Result)[which(resLengths != 1)] stop("FUN returned these elements with length > 1: ", paste(badReturns, collapse = ",")) } data.table(Params, Elapsed = Elapsed[[3]], as.data.table(Result)) })[[3]] while (sink.number() > 0) sink() if (verbose > 0) cat(" ", tm, "seconds\n") se <- which(sapply(scoreSummary, function(cl) any(class(cl) %in% c("simpleError", "error", "condition")))) if (length(se) > 0) { print(data.table(initGrid[se, ], errorMessage = sapply(scoreSummary[se], function(x) x$message))) stop("Errors encountered in initialization are listed above.") } else { scoreSummary <- rbindlist(scoreSummary) } scoreSummary[, `:=`(("gpUtility"), rep(as.numeric(NA), nrow(scoreSummary)))] scoreSummary[, `:=`(("acqOptimum"), rep(FALSE, nrow(scoreSummary)))] scoreSummary[, `:=`(("Epoch"), rep(0, nrow(scoreSummary)))] scoreSummary[, `:=`(("Iteration"), 1:nrow(scoreSummary))] 
scoreSummary[, `:=`(("inBounds"), rep(TRUE, nrow(scoreSummary)))] scoreSummary[, `:=`(("errorMessage"), rep(NA, nrow(scoreSummary)))] extraRet <- setdiff(names(scoreSummary), c("Epoch", "Iteration", boundsDT$N, "inBounds", "Elapsed", "Score", "gpUtility", "acqOptimum")) setcolorder(scoreSummary, c("Epoch", "Iteration", boundsDT$N, "gpUtility", "acqOptimum", "inBounds", "Elapsed", "Score", extraRet)) if (any(scoreSummary$Elapsed < 1) & acq == "eips") { cat("\n FUN elapsed time is too low to be precise. Switching acq to 'ei'.\n") acq <- "ei" } optObj$optPars$acq <- acq optObj$optPars$kappa <- kappa optObj$optPars$eps <- eps optObj$optPars$parallel <- parallel optObj$optPars$gsPoints <- gsPoints optObj$optPars$convThresh <- convThresh optObj$optPars$acqThresh <- acqThresh optObj$scoreSummary <- scoreSummary optObj$GauProList$gpUpToDate <- FALSE optObj$iters <- nrow(scoreSummary) optObj$stopStatus <- "OK" optObj$elapsedTime <- as.numeric(difftime(Sys.time(), startT, units = "secs")) saveSoFar(optObj, 0) optObj <- addIterations(optObj, otherHalting = otherHalting, iters.n = iters.n, iters.k = iters.k, parallel = parallel, plotProgress = plotProgress, errorHandling = errorHandling, saveFile = saveFile, verbose = verbose, ...) return(optObj) })(FUN = function (...) { kwargs <- list(...) args <- .method_params_refactor(kwargs, method_helper) set.seed(self$seed) res <- do.call(private$fun_bayesian_scoring_function, args) if (isFALSE(self$metric_optimization_higher_better)) { res$Score <- as.numeric(I(res$Score * -1L)) } return(res) }, bounds = list(subsample = c(0.2, 1), colsample_bytree = c(0.2, 1), min_child_weight = c(1L, 10L), learning_rate = c(0.1, 0.2), max_depth = c(1L, 10L)), initGrid = structure(list(subsample = c(1, 0.8, 1, 0.6, 0.6, 0.8, 0.8, 0.6, 0.6, 0.6), colsample_bytree = c(0.8, 1, 1, 0.8, 0.8, 0.8, 0.8, 0.6, 1, 0.8), min_child_weight = c(1, 1, 5, 1, 5, 5, 1, 1, 1, 1), learning_rate = c(0.1, 0.2, 0.2, 0.1, 0.2, 0.2, 0.1, 0.2, 0.1, 0.2), max_depth = c(5, 5, 5, 5, 5, 5, 1, 5, 1, 5)), out.attrs = list(dim = c(subsample = 3L, colsample_bytree = 3L, min_child_weight = 2L, learning_rate = 2L, max_depth = 2L), dimnames = list(subsample = c("subsample=0.6", "subsample=0.8", "subsample=1.0"), colsample_bytree = c("colsample_bytree=0.6", "colsample_bytree=0.8", "colsample_bytree=1.0"), min_child_weight = c("min_child_weight=1", "min_child_weight=5"), learning_rate = c("learning_rate=0.1", "learning_rate=0.2"), max_depth = c("max_depth=1", "max_depth=5"))), row.names = c(NA, -10L), class = c("data.table", "data.frame"), .internal.selfref = <pointer: 0x555d20bccd10>), iters.n = 2L, iters.k = 2L, otherHalting = list(timeLimit = Inf, minUtility = 0), acq = "ucb", kappa = 3.5, eps = 0, parallel = TRUE, gsPoints = 125, convThresh = 1e+08, acqThresh = 1, errorHandling = "stop", plotProgress = FALSE, verbose = 1)`: Errors encountered in initialization are listed above.
Backtrace:
▆
1. └─xgboost_optimizer$execute() at test-regression.R:309:5
2. └─mlexperiments:::.run_cv(self = self, private = private)
3. └─mlexperiments:::.fold_looper(self, private)
4. ├─base::do.call(private$cv_run_model, run_args)
5. └─mlexperiments (local) `<fn>`(train_index = `<int>`, fold_train = `<named list>`, fold_test = `<named list>`)
6. ├─base::do.call(.cv_run_nested_model, args)
7. └─mlexperiments (local) `<fn>`(...)
8. └─hparam_tuner$execute(k = self$k_tuning)
9. └─mlexperiments:::.run_tuning(self = self, private = private, optimizer = optimizer)
10. └─mlexperiments:::.run_optimizer(...)
11. └─optimizer$execute(x = private$x, y = private$y, method_helper = private$method_helper)
12. ├─base::do.call(...)
13. └─mlexperiments (local) `<fn>`(...)
14. ├─base::do.call(ParBayesianOptimization::bayesOpt, args)
15. └─ParBayesianOptimization (local) `<fn>`(...)
── Error ('test-regression.R:352:5'): test nested cv, grid - xgboost ───────────
Error in `.get_best_setting(results = outlist$summary, opt_metric = "metric_optim_mean", param_names = param_names, higher_better = metric_higher_better)`: nrow(best_row) == 1 is not TRUE
Backtrace:
▆
1. └─xgboost_optimizer$execute() at test-regression.R:352:5
2. └─mlexperiments:::.run_cv(self = self, private = private)
3. └─mlexperiments:::.fold_looper(self, private)
4. ├─base::do.call(private$cv_run_model, run_args)
5. └─mlexperiments (local) `<fn>`(train_index = `<int>`, fold_train = `<named list>`, fold_test = `<named list>`)
6. ├─base::do.call(.cv_run_nested_model, args)
7. └─mlexperiments (local) `<fn>`(...)
8. └─hparam_tuner$execute(k = self$k_tuning)
9. └─mlexperiments:::.run_tuning(self = self, private = private, optimizer = optimizer)
10. └─mlexperiments:::.run_optimizer(...)
11. └─mlexperiments:::.optimize_postprocessing(...)
12. └─mlexperiments:::.get_best_setting(...)
13. └─base::stopifnot(nrow(best_row) == 1)
[ FAIL 4 | WARN 3 | SKIP 3 | PASS 22 ]
Error:
! Test failures.
Execution halted
Flavor: r-devel-linux-x86_64-fedora-clang
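The Bayesian regression failure ('test-regression.R:309:5') stems from a contract check in `ParBayesianOptimization::bayesOpt()`: the scoring function `FUN` must return a list in which every element has length 1, and the initialization table above shows all 10 grid evaluations rejected with "FUN returned these elements with length > 1: Score,metric_optim_mean". Below is a minimal sketch of a conforming scoring function; the toy objective and the single `max_depth` bound are illustrative assumptions, not the mlexperiments internals.

```r
library(ParBayesianOptimization)

# Minimal sketch: every element of the returned list must have length 1,
# otherwise bayesOpt() rejects the evaluation during initialization.
scoring_function <- function(max_depth) {
  cv_errors <- (max_depth - 4)^2 / 10 + runif(3)  # stand-in for 3 CV folds
  list(
    Score = -mean(cv_errors),            # length-1 numeric; bayesOpt maximizes Score
    metric_optim_mean = mean(cv_errors)  # extra elements must also be length 1
  )
}

opt <- bayesOpt(
  FUN = scoring_function,
  bounds = list(max_depth = c(1L, 10L)),
  initPoints = 6,  # must exceed the number of FUN inputs
  iters.n = 2
)
```

Returning, e.g., `Score = cv_errors` (a length-3 vector) instead of its mean reproduces the error message printed for every row of the initialization grid.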
Version: 0.0.6
Check: tests
Result: ERROR
Running ‘testthat.R’ [3m/14m]
Running the tests in ‘tests/testthat.R’ failed.
Complete output:
> # This file is part of the standard setup for testthat.
> # It is recommended that you do not modify it.
> #
> # Where should you do additional test configuration?
> # Learn more about the roles of various files in:
> # * https://r-pkgs.org/tests.html
> # * https://testthat.r-lib.org/reference/test_package.html#special-files
> # https://github.com/Rdatatable/data.table/issues/5658
> Sys.setenv("OMP_THREAD_LIMIT" = 2)
> Sys.setenv("Ncpu" = 2)
>
> library(testthat)
> library(mllrnrs)
>
> test_check("mllrnrs")
CV fold: Fold1
CV fold: Fold1
Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
... reducing initialization grid to 10 rows.
Registering parallel backend using 2 cores.
Running initial scoring function 10 times in 2 thread(s)... 23.138 seconds
Starting Epoch 1
1) Fitting Gaussian Process...
2) Running local optimum search... 59.287 seconds
3) Running FUN 2 times in 2 thread(s)... 2.352 seconds
CV fold: Fold2
Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
... reducing initialization grid to 10 rows.
Registering parallel backend using 2 cores.
Running initial scoring function 10 times in 2 thread(s)... 32.088 seconds
Starting Epoch 1
1) Fitting Gaussian Process...
2) Running local optimum search... 53.526 seconds
3) Running FUN 2 times in 2 thread(s)... 2.494 seconds
CV fold: Fold3
Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
... reducing initialization grid to 10 rows.
Registering parallel backend using 2 cores.
Running initial scoring function 10 times in 2 thread(s)... 32.707 seconds
Starting Epoch 1
1) Fitting Gaussian Process...
2) Running local optimum search... 81.392 seconds
3) Running FUN 2 times in 2 thread(s)... 2.429 seconds
CV fold: Fold1
Classification: using 'mean classification error' as optimization metric.
Classification: using 'mean classification error' as optimization metric.
Classification: using 'mean classification error' as optimization metric.
CV fold: Fold2
Classification: using 'mean classification error' as optimization metric.
Classification: using 'mean classification error' as optimization metric.
Classification: using 'mean classification error' as optimization metric.
CV fold: Fold3
Classification: using 'mean classification error' as optimization metric.
Classification: using 'mean classification error' as optimization metric.
Classification: using 'mean classification error' as optimization metric.
CV fold: Fold1
Saving _problems/test-binary-356.R
CV fold: Fold1
CV fold: Fold2
CV fold: Fold3
CV fold: Fold1
Classification: using 'mean classification error' as optimization metric.
Classification: using 'mean classification error' as optimization metric.
Classification: using 'mean classification error' as optimization metric.
CV fold: Fold2
Classification: using 'mean classification error' as optimization metric.
Classification: using 'mean classification error' as optimization metric.
Classification: using 'mean classification error' as optimization metric.
CV fold: Fold3
Classification: using 'mean classification error' as optimization metric.
Classification: using 'mean classification error' as optimization metric.
Classification: using 'mean classification error' as optimization metric.
CV fold: Fold1
Saving _problems/test-multiclass-294.R
CV fold: Fold1
Registering parallel backend using 2 cores.
Running initial scoring function 5 times in 2 thread(s)... 27.978 seconds
Starting Epoch 1
1) Fitting Gaussian Process...
2) Running local optimum search... 7.647 seconds
3) Running FUN 2 times in 2 thread(s)... 2.898 seconds
CV fold: Fold2
Registering parallel backend using 2 cores.
Running initial scoring function 5 times in 2 thread(s)... 31.776 seconds
Starting Epoch 1
1) Fitting Gaussian Process...
2) Running local optimum search... 10.418 seconds
3) Running FUN 2 times in 2 thread(s)... 1.84 seconds
CV fold: Fold3
Registering parallel backend using 2 cores.
Running initial scoring function 5 times in 2 thread(s)... 30.625 seconds
Starting Epoch 1
1) Fitting Gaussian Process...
2) Running local optimum search... 9.991 seconds
3) Running FUN 2 times in 2 thread(s)... 2.61 seconds
CV fold: Fold1
CV fold: Fold2
CV fold: Fold3
CV fold: Fold1
Regression: using 'mean squared error' as optimization metric.
Regression: using 'mean squared error' as optimization metric.
Regression: using 'mean squared error' as optimization metric.
CV fold: Fold2
Regression: using 'mean squared error' as optimization metric.
Regression: using 'mean squared error' as optimization metric.
Regression: using 'mean squared error' as optimization metric.
CV fold: Fold3
Regression: using 'mean squared error' as optimization metric.
Regression: using 'mean squared error' as optimization metric.
Regression: using 'mean squared error' as optimization metric.
CV fold: Fold1
Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
... reducing initialization grid to 10 rows.
Registering parallel backend using 2 cores.
Running initial scoring function 10 times in 2 thread(s)... 33.651 seconds
subsample colsample_bytree min_child_weight learning_rate max_depth
<num> <num> <num> <num> <num>
1: 1.0 0.8 1 0.1 5
2: 0.8 1.0 1 0.2 5
3: 1.0 1.0 5 0.2 5
4: 0.6 0.8 1 0.1 5
5: 0.6 0.8 5 0.2 5
6: 0.8 0.8 5 0.2 5
7: 0.8 0.8 1 0.1 1
8: 0.6 0.6 1 0.2 5
9: 0.6 1.0 1 0.1 1
10: 0.6 0.8 1 0.2 5
errorMessage
<char>
1: FUN returned these elements with length > 1: Score,metric_optim_mean
2: FUN returned these elements with length > 1: Score,metric_optim_mean
3: FUN returned these elements with length > 1: Score,metric_optim_mean
4: FUN returned these elements with length > 1: Score,metric_optim_mean
5: FUN returned these elements with length > 1: Score,metric_optim_mean
6: FUN returned these elements with length > 1: Score,metric_optim_mean
7: FUN returned these elements with length > 1: Score,metric_optim_mean
8: FUN returned these elements with length > 1: Score,metric_optim_mean
9: FUN returned these elements with length > 1: Score,metric_optim_mean
10: FUN returned these elements with length > 1: Score,metric_optim_mean
Saving _problems/test-regression-309.R
CV fold: Fold1
Saving _problems/test-regression-352.R
[ FAIL 4 | WARN 3 | SKIP 3 | PASS 22 ]
══ Skipped tests (3) ═══════════════════════════════════════════════════════════
• On CRAN (3): 'test-binary.R:57:5', 'test-lints.R:10:5',
'test-multiclass.R:57:5'
══ Failed tests ════════════════════════════════════════════════════════════════
── Error ('test-binary.R:356:5'): test nested cv, grid, binary:logistic - xgboost ──
Error in `.get_best_setting(results = outlist$summary, opt_metric = "metric_optim_mean", param_names = param_names, higher_better = metric_higher_better)`: nrow(best_row) == 1 is not TRUE
Backtrace:
▆
1. └─xgboost_optimizer$execute() at test-binary.R:356:5
2. └─mlexperiments:::.run_cv(self = self, private = private)
3. └─mlexperiments:::.fold_looper(self, private)
4. ├─base::do.call(private$cv_run_model, run_args)
5. └─mlexperiments (local) `<fn>`(train_index = `<int>`, fold_train = `<named list>`, fold_test = `<named list>`)
6. ├─base::do.call(.cv_run_nested_model, args)
7. └─mlexperiments (local) `<fn>`(...)
8. └─hparam_tuner$execute(k = self$k_tuning)
9. └─mlexperiments:::.run_tuning(self = self, private = private, optimizer = optimizer)
10. └─mlexperiments:::.run_optimizer(...)
11. └─mlexperiments:::.optimize_postprocessing(...)
12. └─mlexperiments:::.get_best_setting(...)
13. └─base::stopifnot(nrow(best_row) == 1)
── Error ('test-multiclass.R:294:5'): test nested cv, grid, multi:softprob - xgboost, with weights ──
Error in `.get_best_setting(results = outlist$summary, opt_metric = "metric_optim_mean", param_names = param_names, higher_better = metric_higher_better)`: nrow(best_row) == 1 is not TRUE
Backtrace:
▆
1. └─xgboost_optimizer$execute() at test-multiclass.R:294:5
2. └─mlexperiments:::.run_cv(self = self, private = private)
3. └─mlexperiments:::.fold_looper(self, private)
4. ├─base::do.call(private$cv_run_model, run_args)
5. └─mlexperiments (local) `<fn>`(train_index = `<int>`, fold_train = `<named list>`, fold_test = `<named list>`)
6. ├─base::do.call(.cv_run_nested_model, args)
7. └─mlexperiments (local) `<fn>`(...)
8. └─hparam_tuner$execute(k = self$k_tuning)
9. └─mlexperiments:::.run_tuning(self = self, private = private, optimizer = optimizer)
10. └─mlexperiments:::.run_optimizer(...)
11. └─mlexperiments:::.optimize_postprocessing(...)
12. └─mlexperiments:::.get_best_setting(...)
13. └─base::stopifnot(nrow(best_row) == 1)
── Error ('test-regression.R:309:5'): test nested cv, bayesian, reg:squarederror - xgboost ──
Error in `(function (FUN, bounds, saveFile = NULL, initGrid, initPoints = 4, iters.n = 3, iters.k = 1, otherHalting = list(timeLimit = Inf, minUtility = 0), acq = "ucb", kappa = 2.576, eps = 0, parallel = FALSE, gsPoints = pmax(100, length(bounds)^3), convThresh = 1e+08, acqThresh = 1, errorHandling = "stop", plotProgress = FALSE, verbose = 1, ...) { startT <- Sys.time() optObj <- list() class(optObj) <- "bayesOpt" optObj$FUN <- FUN optObj$bounds <- bounds optObj$iters <- 0 optObj$initPars <- list() optObj$optPars <- list() optObj$GauProList <- list() optObj <- changeSaveFile(optObj, saveFile) checkParameters(bounds, iters.n, iters.k, otherHalting, acq, acqThresh, errorHandling, plotProgress, parallel, verbose) boundsDT <- boundsToDT(bounds) otherHalting <- formatOtherHalting(otherHalting) if (missing(initGrid) + missing(initPoints) != 1) stop("Please provide 1 of initGrid or initPoints, but not both.") if (!missing(initGrid)) { setDT(initGrid) inBounds <- checkBounds(initGrid, bounds) inBounds <- as.logical(apply(inBounds, 1, prod)) if (any(!inBounds)) stop("initGrid not within bounds.") optObj$initPars$initialSample <- "User Provided Grid" initPoints <- nrow(initGrid) } else { initGrid <- randParams(boundsDT, initPoints) optObj$initPars$initialSample <- "Latin Hypercube Sampling" } optObj$initPars$initGrid <- initGrid if (nrow(initGrid) <= 2) stop("Cannot initialize with less than 3 samples.") optObj$initPars$initPoints <- nrow(initGrid) if (initPoints <= length(bounds)) stop("initPoints must be greater than the number of FUN inputs.") sinkFile <- file() on.exit({ while (sink.number() > 0) sink() close(sinkFile) }) `%op%` <- ParMethod(parallel) if (parallel) Workers <- getDoParWorkers() else Workers <- 1 if (verbose > 0) cat("\nRunning initial scoring function", nrow(initGrid), "times in", Workers, "thread(s)...") sink(file = sinkFile) tm <- system.time(scoreSummary <- foreach(iter = 1:nrow(initGrid), .options.multicore = list(preschedule = FALSE), .combine = list, .multicombine = TRUE, .inorder = FALSE, .errorhandling = "pass", .verbose = FALSE) %op% { Params <- initGrid[get("iter"), ] Elapsed <- system.time(Result <- tryCatch({ do.call(what = FUN, args = as.list(Params)) }, error = function(e) e)) if (any(class(Result) %in% c("simpleError", "error", "condition"))) return(Result) if (!inherits(x = Result, what = "list")) stop("Object returned from FUN was not a list.") resLengths <- lengths(Result) if (!any(names(Result) == "Score")) stop("FUN must return list with element 'Score' at a minimum.") if (!is.numeric(Result$Score)) stop("Score returned from FUN was not numeric.") if (any(resLengths != 1)) { badReturns <- names(Result)[which(resLengths != 1)] stop("FUN returned these elements with length > 1: ", paste(badReturns, collapse = ",")) } data.table(Params, Elapsed = Elapsed[[3]], as.data.table(Result)) })[[3]] while (sink.number() > 0) sink() if (verbose > 0) cat(" ", tm, "seconds\n") se <- which(sapply(scoreSummary, function(cl) any(class(cl) %in% c("simpleError", "error", "condition")))) if (length(se) > 0) { print(data.table(initGrid[se, ], errorMessage = sapply(scoreSummary[se], function(x) x$message))) stop("Errors encountered in initialization are listed above.") } else { scoreSummary <- rbindlist(scoreSummary) } scoreSummary[, `:=`(("gpUtility"), rep(as.numeric(NA), nrow(scoreSummary)))] scoreSummary[, `:=`(("acqOptimum"), rep(FALSE, nrow(scoreSummary)))] scoreSummary[, `:=`(("Epoch"), rep(0, nrow(scoreSummary)))] scoreSummary[, `:=`(("Iteration"), 1:nrow(scoreSummary))] 
scoreSummary[, `:=`(("inBounds"), rep(TRUE, nrow(scoreSummary)))] scoreSummary[, `:=`(("errorMessage"), rep(NA, nrow(scoreSummary)))] extraRet <- setdiff(names(scoreSummary), c("Epoch", "Iteration", boundsDT$N, "inBounds", "Elapsed", "Score", "gpUtility", "acqOptimum")) setcolorder(scoreSummary, c("Epoch", "Iteration", boundsDT$N, "gpUtility", "acqOptimum", "inBounds", "Elapsed", "Score", extraRet)) if (any(scoreSummary$Elapsed < 1) & acq == "eips") { cat("\n FUN elapsed time is too low to be precise. Switching acq to 'ei'.\n") acq <- "ei" } optObj$optPars$acq <- acq optObj$optPars$kappa <- kappa optObj$optPars$eps <- eps optObj$optPars$parallel <- parallel optObj$optPars$gsPoints <- gsPoints optObj$optPars$convThresh <- convThresh optObj$optPars$acqThresh <- acqThresh optObj$scoreSummary <- scoreSummary optObj$GauProList$gpUpToDate <- FALSE optObj$iters <- nrow(scoreSummary) optObj$stopStatus <- "OK" optObj$elapsedTime <- as.numeric(difftime(Sys.time(), startT, units = "secs")) saveSoFar(optObj, 0) optObj <- addIterations(optObj, otherHalting = otherHalting, iters.n = iters.n, iters.k = iters.k, parallel = parallel, plotProgress = plotProgress, errorHandling = errorHandling, saveFile = saveFile, verbose = verbose, ...) return(optObj) })(FUN = function (...) { kwargs <- list(...) args <- .method_params_refactor(kwargs, method_helper) set.seed(self$seed) res <- do.call(private$fun_bayesian_scoring_function, args) if (isFALSE(self$metric_optimization_higher_better)) { res$Score <- as.numeric(I(res$Score * -1L)) } return(res) }, bounds = list(subsample = c(0.2, 1), colsample_bytree = c(0.2, 1), min_child_weight = c(1L, 10L), learning_rate = c(0.1, 0.2), max_depth = c(1L, 10L)), initGrid = structure(list(subsample = c(1, 0.8, 1, 0.6, 0.6, 0.8, 0.8, 0.6, 0.6, 0.6), colsample_bytree = c(0.8, 1, 1, 0.8, 0.8, 0.8, 0.8, 0.6, 1, 0.8), min_child_weight = c(1, 1, 5, 1, 5, 5, 1, 1, 1, 1), learning_rate = c(0.1, 0.2, 0.2, 0.1, 0.2, 0.2, 0.1, 0.2, 0.1, 0.2), max_depth = c(5, 5, 5, 5, 5, 5, 1, 5, 1, 5)), out.attrs = list(dim = c(subsample = 3L, colsample_bytree = 3L, min_child_weight = 2L, learning_rate = 2L, max_depth = 2L), dimnames = list(subsample = c("subsample=0.6", "subsample=0.8", "subsample=1.0"), colsample_bytree = c("colsample_bytree=0.6", "colsample_bytree=0.8", "colsample_bytree=1.0"), min_child_weight = c("min_child_weight=1", "min_child_weight=5"), learning_rate = c("learning_rate=0.1", "learning_rate=0.2"), max_depth = c("max_depth=1", "max_depth=5"))), row.names = c(NA, -10L), class = c("data.table", "data.frame"), .internal.selfref = <pointer: 0x1bf794a0>), iters.n = 2L, iters.k = 2L, otherHalting = list(timeLimit = Inf, minUtility = 0), acq = "ucb", kappa = 3.5, eps = 0, parallel = TRUE, gsPoints = 125, convThresh = 1e+08, acqThresh = 1, errorHandling = "stop", plotProgress = FALSE, verbose = 1)`: Errors encountered in initialization are listed above.
Backtrace:
▆
1. └─xgboost_optimizer$execute() at test-regression.R:309:5
2. └─mlexperiments:::.run_cv(self = self, private = private)
3. └─mlexperiments:::.fold_looper(self, private)
4. ├─base::do.call(private$cv_run_model, run_args)
5. └─mlexperiments (local) `<fn>`(train_index = `<int>`, fold_train = `<named list>`, fold_test = `<named list>`)
6. ├─base::do.call(.cv_run_nested_model, args)
7. └─mlexperiments (local) `<fn>`(...)
8. └─hparam_tuner$execute(k = self$k_tuning)
9. └─mlexperiments:::.run_tuning(self = self, private = private, optimizer = optimizer)
10. └─mlexperiments:::.run_optimizer(...)
11. └─optimizer$execute(x = private$x, y = private$y, method_helper = private$method_helper)
12. ├─base::do.call(...)
13. └─mlexperiments (local) `<fn>`(...)
14. ├─base::do.call(ParBayesianOptimization::bayesOpt, args)
15. └─ParBayesianOptimization (local) `<fn>`(...)
── Error ('test-regression.R:352:5'): test nested cv, grid - xgboost ───────────
Error in `.get_best_setting(results = outlist$summary, opt_metric = "metric_optim_mean", param_names = param_names, higher_better = metric_higher_better)`: nrow(best_row) == 1 is not TRUE
Backtrace:
▆
1. └─xgboost_optimizer$execute() at test-regression.R:352:5
2. └─mlexperiments:::.run_cv(self = self, private = private)
3. └─mlexperiments:::.fold_looper(self, private)
4. ├─base::do.call(private$cv_run_model, run_args)
5. └─mlexperiments (local) `<fn>`(train_index = `<int>`, fold_train = `<named list>`, fold_test = `<named list>`)
6. ├─base::do.call(.cv_run_nested_model, args)
7. └─mlexperiments (local) `<fn>`(...)
8. └─hparam_tuner$execute(k = self$k_tuning)
9. └─mlexperiments:::.run_tuning(self = self, private = private, optimizer = optimizer)
10. └─mlexperiments:::.run_optimizer(...)
11. └─mlexperiments:::.optimize_postprocessing(...)
12. └─mlexperiments:::.get_best_setting(...)
13. └─base::stopifnot(nrow(best_row) == 1)
[ FAIL 4 | WARN 3 | SKIP 3 | PASS 22 ]
Error:
! Test failures.
Execution halted
Flavor: r-devel-linux-x86_64-fedora-gcc
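The remaining three mllrnrs failures (grid search for binary, multiclass, and regression) all stop at `stopifnot(nrow(best_row) == 1)` inside `mlexperiments:::.get_best_setting()`: more than one hyperparameter setting apparently attains the optimal `metric_optim_mean`, so selecting the best row by metric equality yields several rows. Here is a hedged sketch of deterministic tie-breaking with data.table; `pick_best` is a hypothetical helper for illustration, not the mlexperiments internal.

```r
library(data.table)

# Hypothetical helper (not mlexperiments code): order by the metric and keep
# exactly one row, so ties on the optimum can no longer violate the assertion.
pick_best <- function(results, opt_metric, higher_better = FALSE) {
  ord <- results[[opt_metric]]
  if (higher_better) ord <- -ord
  best_row <- results[order(ord)][1L]  # deterministic: first row after ordering
  stopifnot(nrow(best_row) == 1L)
  best_row
}

res <- data.table(
  max_depth = c(1, 5, 7),
  metric_optim_mean = c(0.30, 0.20, 0.20)  # two settings tie on the optimum
)
pick_best(res, "metric_optim_mean")  # returns only the max_depth = 5 row
```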
Current CRAN status of mlsurvlrnrs: ERROR: 2, OK: 11
Version: 0.0.6
Check: tests
Result: ERROR
Running ‘testthat.R’ [2m/22m]
Running the tests in ‘tests/testthat.R’ failed.
Complete output:
> # This file is part of the standard setup for testthat.
> # It is recommended that you do not modify it.
> #
> # Where should you do additional test configuration?
> # Learn more about the roles of various files in:
> # * https://r-pkgs.org/tests.html
> # * https://testthat.r-lib.org/reference/test_package.html#special-files
>
> Sys.setenv("OMP_THREAD_LIMIT" = 2)
> Sys.setenv("Ncpu" = 2)
>
> library(testthat)
> library(mlsurvlrnrs)
>
> test_check("mlsurvlrnrs")
CV fold: Fold1
Parameter 'ncores' is ignored for learner 'LearnerSurvCoxPHCox'.
CV fold: Fold2
Parameter 'ncores' is ignored for learner 'LearnerSurvCoxPHCox'.
CV fold: Fold3
Parameter 'ncores' is ignored for learner 'LearnerSurvCoxPHCox'.
CV fold: Fold1
Registering parallel backend using 2 cores.
Running initial scoring function 6 times in 2 thread(s)... 45.792 seconds
Starting Epoch 1
1) Fitting Gaussian Process...
2) Running local optimum search... 12.811 seconds
3) Running FUN 2 times in 2 thread(s)... 9.047 seconds
CV fold: Fold2
Registering parallel backend using 2 cores.
Running initial scoring function 6 times in 2 thread(s)... 53.416 seconds
Starting Epoch 1
1) Fitting Gaussian Process...
2) Running local optimum search... 10.214 seconds
3) Running FUN 2 times in 2 thread(s)... 6.682 seconds
CV fold: Fold3
Registering parallel backend using 2 cores.
Running initial scoring function 6 times in 2 thread(s)... 49.207 seconds
Starting Epoch 1
1) Fitting Gaussian Process...
2) Running local optimum search... 6.111 seconds
3) Running FUN 2 times in 2 thread(s)... 6.285 seconds
CV fold: Fold1
Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
... reducing initialization grid to 10 rows.
Registering parallel backend using 2 cores.
Running initial scoring function 10 times in 2 thread(s)... 36.011 seconds
Starting Epoch 1
1) Fitting Gaussian Process...
- Could not obtain meaningful lengthscales.
2) Running local optimum search...
- Convergence Not Found. Trying again with tighter parameters...
- Convergence Not Found. Trying again with tighter parameters...
- Convergence Not Found. Trying again with tighter parameters...
- Maximum convergence attempts exceeded - process is probably sampling random points. 456.806 seconds
3) Running FUN 2 times in 2 thread(s)... 3.212 seconds
CV fold: Fold2
Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
... reducing initialization grid to 10 rows.
Registering parallel backend using 2 cores.
Running initial scoring function 10 times in 2 thread(s)... 35.469 seconds
Starting Epoch 1
1) Fitting Gaussian Process...
2) Running local optimum search... 164.892 seconds
3) Running FUN 2 times in 2 thread(s)... 2.571 seconds
CV fold: Fold3
Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
... reducing initialization grid to 10 rows.
Registering parallel backend using 2 cores.
Running initial scoring function 10 times in 2 thread(s)... 35.796 seconds
Starting Epoch 1
1) Fitting Gaussian Process...
2) Running local optimum search... 21.354 seconds
3) Running FUN 2 times in 2 thread(s)... 1.889 seconds
CV fold: Fold1
Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
... reducing initialization grid to 10 rows.
Registering parallel backend using 2 cores.
Running initial scoring function 10 times in 2 thread(s)... 38.85 seconds
Starting Epoch 1
1) Fitting Gaussian Process...
2) Running local optimum search... 5.304 seconds
3) Running FUN 2 times in 2 thread(s)... 2.544 seconds
CV fold: Fold2
Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
... reducing initialization grid to 10 rows.
Registering parallel backend using 2 cores.
Running initial scoring function 10 times in 2 thread(s)... 33.681 seconds
Starting Epoch 1
1) Fitting Gaussian Process...
2) Running local optimum search... 2.302 seconds
3) Running FUN 2 times in 2 thread(s)... 3.72 seconds
CV fold: Fold3
Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
... reducing initialization grid to 10 rows.
Registering parallel backend using 2 cores.
Running initial scoring function 10 times in 2 thread(s)... 39.529 seconds
Starting Epoch 1
1) Fitting Gaussian Process...
2) Running local optimum search... 4.655 seconds
3) Running FUN 2 times in 2 thread(s)... 2.705 seconds
CV fold: Fold1
Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
... reducing initialization grid to 10 rows.
Registering parallel backend using 2 cores.
Running initial scoring function 10 times in 2 thread(s)... 43.088 seconds
Starting Epoch 1
1) Fitting Gaussian Process...
Saving _problems/test-surv_xgboost_aft-116.R
CV fold: Fold1
Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
... reducing initialization grid to 10 rows.
Registering parallel backend using 2 cores.
Running initial scoring function 10 times in 2 thread(s)... 33.942 seconds
subsample colsample_bytree min_child_weight learning_rate max_depth
<num> <num> <num> <num> <num>
1: 0.6 0.8 5 0.2 1
2: 1.0 0.8 5 0.1 5
3: 0.8 0.8 5 0.1 1
4: 0.6 0.8 5 0.2 5
5: 1.0 0.8 1 0.1 5
6: 0.8 0.8 5 0.1 5
7: 0.6 1.0 1 0.1 5
8: 0.6 1.0 5 0.2 5
9: 1.0 1.0 5 0.1 5
10: 0.6 1.0 1 0.2 1
errorMessage
<char>
1: FUN returned these elements with length > 1: Score,metric_optim_mean
2: FUN returned these elements with length > 1: Score,metric_optim_mean
3: FUN returned these elements with length > 1: Score,metric_optim_mean
4: FUN returned these elements with length > 1: Score,metric_optim_mean
5: FUN returned these elements with length > 1: Score,metric_optim_mean
6: FUN returned these elements with length > 1: Score,metric_optim_mean
7: FUN returned these elements with length > 1: Score,metric_optim_mean
8: FUN returned these elements with length > 1: Score,metric_optim_mean
9: FUN returned these elements with length > 1: Score,metric_optim_mean
10: FUN returned these elements with length > 1: Score,metric_optim_mean
Saving _problems/test-surv_xgboost_cox-115.R
[ FAIL 2 | WARN 0 | SKIP 1 | PASS 11 ]
══ Skipped tests (1) ═══════════════════════════════════════════════════════════
• On CRAN (1): 'test-lints.R:10:5'
══ Failed tests ════════════════════════════════════════════════════════════════
── Error ('test-surv_xgboost_aft.R:116:5'): test nested cv, bayesian - surv_xgboost_aft ──
Error in `if (r == 0) stop("Results from FUN have 0 variance, cannot build GP.")`: missing value where TRUE/FALSE needed
Backtrace:
▆
1. └─surv_xgboost_aft_optimizer$execute() at test-surv_xgboost_aft.R:116:5
2. └─mlexperiments:::.run_cv(self = self, private = private)
3. └─mlexperiments:::.fold_looper(self, private)
4. ├─base::do.call(private$cv_run_model, run_args)
5. └─mlexperiments (local) `<fn>`(train_index = `<int>`, fold_train = `<named list>`, fold_test = `<named list>`)
6. ├─base::do.call(.cv_run_nested_model, args)
7. └─mlexperiments (local) `<fn>`(...)
8. └─hparam_tuner$execute(k = self$k_tuning)
9. └─mlexperiments:::.run_tuning(self = self, private = private, optimizer = optimizer)
10. └─mlexperiments:::.run_optimizer(...)
11. └─optimizer$execute(x = private$x, y = private$y, method_helper = private$method_helper)
12. ├─base::do.call(...)
13. └─mlexperiments (local) `<fn>`(...)
14. ├─base::do.call(ParBayesianOptimization::bayesOpt, args)
15. └─ParBayesianOptimization (local) `<fn>`(...)
16. └─ParBayesianOptimization::addIterations(...)
17. └─ParBayesianOptimization::updateGP(...)
18. └─ParBayesianOptimization:::zeroOneScale(scoreSummary$Score)
── Error ('test-surv_xgboost_cox.R:115:5'): test nested cv, bayesian - surv_xgboost_cox ──
Error in `(function (FUN, bounds, saveFile = NULL, initGrid, initPoints = 4, iters.n = 3, iters.k = 1, otherHalting = list(timeLimit = Inf, minUtility = 0), acq = "ucb", kappa = 2.576, eps = 0, parallel = FALSE, gsPoints = pmax(100, length(bounds)^3), convThresh = 1e+08, acqThresh = 1, errorHandling = "stop", plotProgress = FALSE, verbose = 1, ...) { startT <- Sys.time() optObj <- list() class(optObj) <- "bayesOpt" optObj$FUN <- FUN optObj$bounds <- bounds optObj$iters <- 0 optObj$initPars <- list() optObj$optPars <- list() optObj$GauProList <- list() optObj <- changeSaveFile(optObj, saveFile) checkParameters(bounds, iters.n, iters.k, otherHalting, acq, acqThresh, errorHandling, plotProgress, parallel, verbose) boundsDT <- boundsToDT(bounds) otherHalting <- formatOtherHalting(otherHalting) if (missing(initGrid) + missing(initPoints) != 1) stop("Please provide 1 of initGrid or initPoints, but not both.") if (!missing(initGrid)) { setDT(initGrid) inBounds <- checkBounds(initGrid, bounds) inBounds <- as.logical(apply(inBounds, 1, prod)) if (any(!inBounds)) stop("initGrid not within bounds.") optObj$initPars$initialSample <- "User Provided Grid" initPoints <- nrow(initGrid) } else { initGrid <- randParams(boundsDT, initPoints) optObj$initPars$initialSample <- "Latin Hypercube Sampling" } optObj$initPars$initGrid <- initGrid if (nrow(initGrid) <= 2) stop("Cannot initialize with less than 3 samples.") optObj$initPars$initPoints <- nrow(initGrid) if (initPoints <= length(bounds)) stop("initPoints must be greater than the number of FUN inputs.") sinkFile <- file() on.exit({ while (sink.number() > 0) sink() close(sinkFile) }) `%op%` <- ParMethod(parallel) if (parallel) Workers <- getDoParWorkers() else Workers <- 1 if (verbose > 0) cat("\nRunning initial scoring function", nrow(initGrid), "times in", Workers, "thread(s)...") sink(file = sinkFile) tm <- system.time(scoreSummary <- foreach(iter = 1:nrow(initGrid), .options.multicore = list(preschedule = FALSE), .combine = list, .multicombine = TRUE, .inorder = FALSE, .errorhandling = "pass", .verbose = FALSE) %op% { Params <- initGrid[get("iter"), ] Elapsed <- system.time(Result <- tryCatch({ do.call(what = FUN, args = as.list(Params)) }, error = function(e) e)) if (any(class(Result) %in% c("simpleError", "error", "condition"))) return(Result) if (!inherits(x = Result, what = "list")) stop("Object returned from FUN was not a list.") resLengths <- lengths(Result) if (!any(names(Result) == "Score")) stop("FUN must return list with element 'Score' at a minimum.") if (!is.numeric(Result$Score)) stop("Score returned from FUN was not numeric.") if (any(resLengths != 1)) { badReturns <- names(Result)[which(resLengths != 1)] stop("FUN returned these elements with length > 1: ", paste(badReturns, collapse = ",")) } data.table(Params, Elapsed = Elapsed[[3]], as.data.table(Result)) })[[3]] while (sink.number() > 0) sink() if (verbose > 0) cat(" ", tm, "seconds\n") se <- which(sapply(scoreSummary, function(cl) any(class(cl) %in% c("simpleError", "error", "condition")))) if (length(se) > 0) { print(data.table(initGrid[se, ], errorMessage = sapply(scoreSummary[se], function(x) x$message))) stop("Errors encountered in initialization are listed above.") } else { scoreSummary <- rbindlist(scoreSummary) } scoreSummary[, `:=`(("gpUtility"), rep(as.numeric(NA), nrow(scoreSummary)))] scoreSummary[, `:=`(("acqOptimum"), rep(FALSE, nrow(scoreSummary)))] scoreSummary[, `:=`(("Epoch"), rep(0, nrow(scoreSummary)))] scoreSummary[, `:=`(("Iteration"), 1:nrow(scoreSummary))] 
scoreSummary[, `:=`(("inBounds"), rep(TRUE, nrow(scoreSummary)))] scoreSummary[, `:=`(("errorMessage"), rep(NA, nrow(scoreSummary)))] extraRet <- setdiff(names(scoreSummary), c("Epoch", "Iteration", boundsDT$N, "inBounds", "Elapsed", "Score", "gpUtility", "acqOptimum")) setcolorder(scoreSummary, c("Epoch", "Iteration", boundsDT$N, "gpUtility", "acqOptimum", "inBounds", "Elapsed", "Score", extraRet)) if (any(scoreSummary$Elapsed < 1) & acq == "eips") { cat("\n FUN elapsed time is too low to be precise. Switching acq to 'ei'.\n") acq <- "ei" } optObj$optPars$acq <- acq optObj$optPars$kappa <- kappa optObj$optPars$eps <- eps optObj$optPars$parallel <- parallel optObj$optPars$gsPoints <- gsPoints optObj$optPars$convThresh <- convThresh optObj$optPars$acqThresh <- acqThresh optObj$scoreSummary <- scoreSummary optObj$GauProList$gpUpToDate <- FALSE optObj$iters <- nrow(scoreSummary) optObj$stopStatus <- "OK" optObj$elapsedTime <- as.numeric(difftime(Sys.time(), startT, units = "secs")) saveSoFar(optObj, 0) optObj <- addIterations(optObj, otherHalting = otherHalting, iters.n = iters.n, iters.k = iters.k, parallel = parallel, plotProgress = plotProgress, errorHandling = errorHandling, saveFile = saveFile, verbose = verbose, ...) return(optObj) })(FUN = function (...) { kwargs <- list(...) args <- .method_params_refactor(kwargs, method_helper) set.seed(self$seed) res <- do.call(private$fun_bayesian_scoring_function, args) if (isFALSE(self$metric_optimization_higher_better)) { res$Score <- as.numeric(I(res$Score * -1L)) } return(res) }, bounds = list(subsample = c(0.2, 1), colsample_bytree = c(0.2, 1), min_child_weight = c(1L, 10L), learning_rate = c(0.1, 0.2), max_depth = c(1L, 10L)), initGrid = structure(list(subsample = c(0.6, 1, 0.8, 0.6, 1, 0.8, 0.6, 0.6, 1, 0.6), colsample_bytree = c(0.8, 0.8, 0.8, 0.8, 0.8, 0.8, 1, 1, 1, 1), min_child_weight = c(5, 5, 5, 5, 1, 5, 1, 5, 5, 1), learning_rate = c(0.2, 0.1, 0.1, 0.2, 0.1, 0.1, 0.1, 0.2, 0.1, 0.2), max_depth = c(1, 5, 1, 5, 5, 5, 5, 5, 5, 1)), out.attrs = list(dim = c(objective = 1L, eval_metric = 1L, subsample = 3L, colsample_bytree = 3L, min_child_weight = 2L, learning_rate = 2L, max_depth = 2L), dimnames = list(objective = "objective=survival:cox", eval_metric = "eval_metric=cox-nloglik", subsample = c("subsample=0.6", "subsample=0.8", "subsample=1.0"), colsample_bytree = c("colsample_bytree=0.6", "colsample_bytree=0.8", "colsample_bytree=1.0"), min_child_weight = c("min_child_weight=1", "min_child_weight=5"), learning_rate = c("learning_rate=0.1", "learning_rate=0.2"), max_depth = c("max_depth=1", "max_depth=5"))), row.names = c(NA, -10L), class = c("data.table", "data.frame"), .internal.selfref = <pointer: 0x55bcc6834d10>), iters.n = 2L, iters.k = 2L, otherHalting = list(timeLimit = Inf, minUtility = 0), acq = "ucb", kappa = 3.5, eps = 0, parallel = TRUE, gsPoints = 125, convThresh = 1e+08, acqThresh = 1, errorHandling = "stop", plotProgress = FALSE, verbose = 1)`: Errors encountered in initialization are listed above.
Backtrace:
▆
1. └─surv_xgboost_cox_optimizer$execute() at test-surv_xgboost_cox.R:115:5
2. └─mlexperiments:::.run_cv(self = self, private = private)
3. └─mlexperiments:::.fold_looper(self, private)
4. ├─base::do.call(private$cv_run_model, run_args)
5. └─mlexperiments (local) `<fn>`(train_index = `<int>`, fold_train = `<named list>`, fold_test = `<named list>`)
6. ├─base::do.call(.cv_run_nested_model, args)
7. └─mlexperiments (local) `<fn>`(...)
8. └─hparam_tuner$execute(k = self$k_tuning)
9. └─mlexperiments:::.run_tuning(self = self, private = private, optimizer = optimizer)
10. └─mlexperiments:::.run_optimizer(...)
11. └─optimizer$execute(x = private$x, y = private$y, method_helper = private$method_helper)
12. ├─base::do.call(...)
13. └─mlexperiments (local) `<fn>`(...)
14. ├─base::do.call(ParBayesianOptimization::bayesOpt, args)
15. └─ParBayesianOptimization (local) `<fn>`(...)
[ FAIL 2 | WARN 0 | SKIP 1 | PASS 11 ]
Error:
! Test failures.
Execution halted
Flavor: r-devel-linux-x86_64-fedora-clang
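Both mlsurvlrnrs failures occur during Bayesian tuning. The aft test dies in `ParBayesianOptimization:::zeroOneScale(scoreSummary$Score)` with "missing value where TRUE/FALSE needed": the error quotes the check `if (r == 0)`, and if any collected `Score` is `NA` (for instance, a fold that produced a non-finite metric), the range `r` is itself `NA` and the condition cannot be evaluated. A small sketch of that failure mode follows, reconstructed from the error message quoted above rather than from the package source.

```r
# Reconstruction of the failure mode (assumed, based on the quoted check):
# an NA in the Score vector makes the range NA, so `if (r == 0)` errors out.
zero_one_scale_sketch <- function(vec) {
  r <- max(vec) - min(vec)  # NA whenever vec contains an NA
  if (r == 0) stop("Results from FUN have 0 variance, cannot build GP.")
  (vec - min(vec)) / r
}

err <- tryCatch(zero_one_scale_sketch(c(0.1, NA, 0.3)), error = identity)
conditionMessage(err)  # "missing value where TRUE/FALSE needed"
```

The cox test fails earlier, during initialization, with the same "FUN returned these elements with length > 1: Score,metric_optim_mean" messages already seen for mllrnrs above.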
Version: 0.0.6
Check: tests
Result: ERROR
Running ‘testthat.R’ [1m/15m]
Running the tests in ‘tests/testthat.R’ failed.
Complete output:
> # This file is part of the standard setup for testthat.
> # It is recommended that you do not modify it.
> #
> # Where should you do additional test configuration?
> # Learn more about the roles of various files in:
> # * https://r-pkgs.org/tests.html
> # * https://testthat.r-lib.org/reference/test_package.html#special-files
>
> Sys.setenv("OMP_THREAD_LIMIT" = 2)
> Sys.setenv("Ncpu" = 2)
>
> library(testthat)
> library(mlsurvlrnrs)
>
> test_check("mlsurvlrnrs")
CV fold: Fold1
Parameter 'ncores' is ignored for learner 'LearnerSurvCoxPHCox'.
CV fold: Fold2
Parameter 'ncores' is ignored for learner 'LearnerSurvCoxPHCox'.
CV fold: Fold3
Parameter 'ncores' is ignored for learner 'LearnerSurvCoxPHCox'.
CV fold: Fold1
Registering parallel backend using 2 cores.
Running initial scoring function 6 times in 2 thread(s)... 42.656 seconds
Starting Epoch 1
1) Fitting Gaussian Process...
2) Running local optimum search... 10.44 seconds
3) Running FUN 2 times in 2 thread(s)... 6.043 seconds
CV fold: Fold2
Registering parallel backend using 2 cores.
Running initial scoring function 6 times in 2 thread(s)... 49.692 seconds
Starting Epoch 1
1) Fitting Gaussian Process...
2) Running local optimum search... 7.875 seconds
3) Running FUN 2 times in 2 thread(s)... 5.798 seconds
CV fold: Fold3
Registering parallel backend using 2 cores.
Running initial scoring function 6 times in 2 thread(s)... 39.088 seconds
Starting Epoch 1
1) Fitting Gaussian Process...
2) Running local optimum search... 5.747 seconds
3) Running FUN 2 times in 2 thread(s)... 7.002 seconds
CV fold: Fold1
Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
... reducing initialization grid to 10 rows.
Registering parallel backend using 2 cores.
Running initial scoring function 10 times in 2 thread(s)... 36.944 seconds
Starting Epoch 1
1) Fitting Gaussian Process...
- Could not obtain meaningful lengthscales.
2) Running local optimum search...
- Convergence Not Found. Trying again with tighter parameters...
- Convergence Not Found. Trying again with tighter parameters... 58.207 seconds
3) Running FUN 2 times in 2 thread(s)... 2.263 seconds
CV fold: Fold2
Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
... reducing initialization grid to 10 rows.
Registering parallel backend using 2 cores.
Running initial scoring function 10 times in 2 thread(s)... 38.087 seconds
Starting Epoch 1
1) Fitting Gaussian Process...
2) Running local optimum search... 154.334 seconds
3) Running FUN 2 times in 2 thread(s)... 3.901 seconds
CV fold: Fold3
Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
... reducing initialization grid to 10 rows.
Registering parallel backend using 2 cores.
Running initial scoring function 10 times in 2 thread(s)... 36.473 seconds
Starting Epoch 1
1) Fitting Gaussian Process...
- Could not obtain meaningful lengthscales.
2) Running local optimum search... 4.045 seconds
3) Running FUN 2 times in 2 thread(s)... 2.601 seconds
CV fold: Fold1
Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
... reducing initialization grid to 10 rows.
Registering parallel backend using 2 cores.
Running initial scoring function 10 times in 2 thread(s)... 33.409 seconds
Starting Epoch 1
1) Fitting Gaussian Process...
2) Running local optimum search... 6.023 seconds
3) Running FUN 2 times in 2 thread(s)... 3.56 seconds
CV fold: Fold2
Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
... reducing initialization grid to 10 rows.
Registering parallel backend using 2 cores.
Running initial scoring function 10 times in 2 thread(s)... 40.704 seconds
Starting Epoch 1
1) Fitting Gaussian Process...
2) Running local optimum search... 3.225 seconds
3) Running FUN 2 times in 2 thread(s)... 2.454 seconds
CV fold: Fold3
Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
... reducing initialization grid to 10 rows.
Registering parallel backend using 2 cores.
Running initial scoring function 10 times in 2 thread(s)... 43.669 seconds
Starting Epoch 1
1) Fitting Gaussian Process...
2) Running local optimum search... 4.733 seconds
3) Running FUN 2 times in 2 thread(s)... 2.63 seconds
CV fold: Fold1
Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
... reducing initialization grid to 10 rows.
Registering parallel backend using 2 cores.
Running initial scoring function 10 times in 2 thread(s)... 38.037 seconds
Starting Epoch 1
1) Fitting Gaussian Process...
Saving _problems/test-surv_xgboost_aft-116.R
CV fold: Fold1
Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
... reducing initialization grid to 10 rows.
Registering parallel backend using 2 cores.
Running initial scoring function 10 times in 2 thread(s)... 38.798 seconds
| row | subsample | colsample_bytree | min_child_weight | learning_rate | max_depth |
|---|---|---|---|---|---|
| 1 | 0.6 | 0.8 | 5 | 0.2 | 1 |
| 2 | 1.0 | 0.8 | 5 | 0.1 | 5 |
| 3 | 0.8 | 0.8 | 5 | 0.1 | 1 |
| 4 | 0.6 | 0.8 | 5 | 0.2 | 5 |
| 5 | 1.0 | 0.8 | 1 | 0.1 | 5 |
| 6 | 0.8 | 0.8 | 5 | 0.1 | 5 |
| 7 | 0.6 | 1.0 | 1 | 0.1 | 5 |
| 8 | 0.6 | 1.0 | 5 | 0.2 | 5 |
| 9 | 1.0 | 1.0 | 5 | 0.1 | 5 |
| 10 | 0.6 | 1.0 | 1 | 0.2 | 1 |

errorMessage (identical for all 10 rows): FUN returned these elements with length > 1: Score,metric_optim_mean
Saving _problems/test-surv_xgboost_cox-115.R
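The repeated error above is `bayesOpt()` validating the user scoring function: every element of the list returned by `FUN` must have length 1, and a numeric `Score` is mandatory. A minimal sketch of a conforming scoring function, with a toy objective standing in for the real cross-validated metric (nothing below is mllrnrs code):

```r
library(ParBayesianOptimization)

# Minimal sketch with a toy objective: bayesOpt() requires FUN to
# return a list whose elements all have length 1, with a numeric
# 'Score' at a minimum. Returning per-fold vectors triggers the
# "FUN returned these elements with length > 1" error reported above.
scoring_fun <- function(subsample, max_depth) {
  fold_errors <- rnorm(3, mean = subsample / max_depth)  # stand-in for CV metrics
  list(
    Score = -mean(fold_errors),            # scalar; bayesOpt() maximizes Score
    metric_optim_mean = mean(fold_errors)  # extra elements must also be scalars
  )
}

opt <- bayesOpt(
  FUN = scoring_fun,
  bounds = list(subsample = c(0.2, 1), max_depth = c(1L, 10L)),
  initPoints = 6,  # must exceed the number of FUN inputs
  iters.n = 2
)
```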
[ FAIL 2 | WARN 1 | SKIP 1 | PASS 11 ]
══ Skipped tests (1) ═══════════════════════════════════════════════════════════
• On CRAN (1): 'test-lints.R:10:5'
══ Failed tests ════════════════════════════════════════════════════════════════
── Error ('test-surv_xgboost_aft.R:116:5'): test nested cv, bayesian - surv_xgboost_aft ──
Error in `if (r == 0) stop("Results from FUN have 0 variance, cannot build GP.")`: missing value where TRUE/FALSE needed
Backtrace:
▆
1. └─surv_xgboost_aft_optimizer$execute() at test-surv_xgboost_aft.R:116:5
2. └─mlexperiments:::.run_cv(self = self, private = private)
3. └─mlexperiments:::.fold_looper(self, private)
4. ├─base::do.call(private$cv_run_model, run_args)
5. └─mlexperiments (local) `<fn>`(train_index = `<int>`, fold_train = `<named list>`, fold_test = `<named list>`)
6. ├─base::do.call(.cv_run_nested_model, args)
7. └─mlexperiments (local) `<fn>`(...)
8. └─hparam_tuner$execute(k = self$k_tuning)
9. └─mlexperiments:::.run_tuning(self = self, private = private, optimizer = optimizer)
10. └─mlexperiments:::.run_optimizer(...)
11. └─optimizer$execute(x = private$x, y = private$y, method_helper = private$method_helper)
12. ├─base::do.call(...)
13. └─mlexperiments (local) `<fn>`(...)
14. ├─base::do.call(ParBayesianOptimization::bayesOpt, args)
15. └─ParBayesianOptimization (local) `<fn>`(...)
16. └─ParBayesianOptimization::addIterations(...)
17. └─ParBayesianOptimization::updateGP(...)
18. └─ParBayesianOptimization:::zeroOneScale(scoreSummary$Score)
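This aft failure has a different root cause than the cox one: `ParBayesianOptimization:::zeroOneScale()` rescales the collected scores by their range, and when that range is `NA` (for instance because the scoring function returned `NaN` or `NA`), the internal `if (r == 0)` check aborts with "missing value where TRUE/FALSE needed". A hedged sketch of one way to keep non-finite values out of `Score`; the wrapper and penalty value are assumptions, not part of mllrnrs:

```r
# Sketch: wrap a bayesOpt() scoring function so non-finite metrics are
# replaced by a fixed penalty instead of feeding NA/NaN into the
# Gaussian-process update. 'penalty' is a hypothetical choice.
make_safe_fun <- function(fun, penalty = -1e6) {
  function(...) {
    res <- fun(...)
    if (!is.finite(res$Score)) res$Score <- penalty
    res
  }
}
```

Such a wrapper would be passed as `FUN = make_safe_fun(scoring_fun)`; whether a fixed penalty is appropriate depends on the scale of the metric being optimized.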
── Error ('test-surv_xgboost_cox.R:115:5'): test nested cv, bayesian - surv_xgboost_cox ──
Error in `ParBayesianOptimization::bayesOpt(FUN = <mlexperiments scoring wrapper>, bounds = list(subsample = c(0.2, 1), colsample_bytree = c(0.2, 1), min_child_weight = c(1L, 10L), learning_rate = c(0.1, 0.2), max_depth = c(1L, 10L)), initGrid = <10-row data.table>, iters.n = 2L, iters.k = 2L, otherHalting = list(timeLimit = Inf, minUtility = 0), acq = "ucb", kappa = 3.5, eps = 0, parallel = TRUE, gsPoints = 125, convThresh = 1e+08, acqThresh = 1, errorHandling = "stop", plotProgress = FALSE, verbose = 1)`: Errors encountered in initialization are listed above.
[The check log inlines the full deparsed source of `bayesOpt()` at this point; only the call and its arguments are kept here.]
Backtrace:
▆
1. └─surv_xgboost_cox_optimizer$execute() at test-surv_xgboost_cox.R:115:5
2. └─mlexperiments:::.run_cv(self = self, private = private)
3. └─mlexperiments:::.fold_looper(self, private)
4. ├─base::do.call(private$cv_run_model, run_args)
5. └─mlexperiments (local) `<fn>`(train_index = `<int>`, fold_train = `<named list>`, fold_test = `<named list>`)
6. ├─base::do.call(.cv_run_nested_model, args)
7. └─mlexperiments (local) `<fn>`(...)
8. └─hparam_tuner$execute(k = self$k_tuning)
9. └─mlexperiments:::.run_tuning(self = self, private = private, optimizer = optimizer)
10. └─mlexperiments:::.run_optimizer(...)
11. └─optimizer$execute(x = private$x, y = private$y, method_helper = private$method_helper)
12. ├─base::do.call(...)
13. └─mlexperiments (local) `<fn>`(...)
14. ├─base::do.call(ParBayesianOptimization::bayesOpt, args)
15. └─ParBayesianOptimization (local) `<fn>`(...)
[ FAIL 2 | WARN 1 | SKIP 1 | PASS 11 ]
Error:
! Test failures.
Execution halted
Flavor: r-devel-linux-x86_64-fedora-gcc
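To iterate on the two failures without a full CRAN round-trip, the affected tests can be run selectively from the package source; a sketch assuming the working directory is the mllrnrs source tree:

```r
library(testthat)

# Run only the failing survival-xgboost tests; 'filter' is matched
# against the test file names (test-surv_xgboost_aft.R,
# test-surv_xgboost_cox.R).
test_local(filter = "surv_xgboost")
```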
Current CRAN status: OK: 13
Current CRAN status: OK: 13