Last updated on 2025-02-23 15:49:42 CET.
Flavor | Version | Tinstall | Tcheck | Ttotal | Status | Flags |
---|---|---|---|---|---|---|
r-devel-linux-x86_64-debian-clang | 1.0-0 | 13.35 | 241.39 | 254.74 | ERROR | |
r-devel-linux-x86_64-debian-gcc | 1.0-0 | 11.40 | 857.24 | 868.64 | OK | |
r-devel-linux-x86_64-fedora-clang | 1.0-0 | | | 353.31 | ERROR | |
r-devel-linux-x86_64-fedora-gcc | 1.0-0 | | | 1954.95 | OK | |
r-devel-macos-arm64 | 1.0-0 | | | 404.00 | OK | |
r-devel-macos-x86_64 | 1.0-0 | | | 1424.00 | OK | |
r-devel-windows-x86_64 | 1.0-0 | 13.00 | 882.00 | 895.00 | OK | |
r-patched-linux-x86_64 | 1.0-0 | 13.26 | 1185.92 | 1199.18 | OK | |
r-release-linux-x86_64 | 1.0-0 | 11.22 | 1174.08 | 1185.30 | OK | |
r-release-macos-arm64 | 1.0-0 | | | 376.00 | OK | |
r-release-macos-x86_64 | 1.0-0 | | | 1064.00 | OK | |
r-release-windows-x86_64 | 1.0-0 | 14.00 | 902.00 | 916.00 | OK | |
r-oldrel-macos-arm64 | 1.0-0 | | | 341.00 | OK | |
r-oldrel-macos-x86_64 | 1.0-0 | | | 646.00 | OK | |
r-oldrel-windows-x86_64 | 1.0-0 | 19.00 | 1191.00 | 1210.00 | OK | |
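The two ERROR results above come from the `tests` check on the clang-based Linux flavors; all other flavors pass. For reference, a CRAN-style check of the package sources can be approximated locally with the `rcmdcheck` package (a minimal sketch under that assumption; the exact compilers, flags, and environment variables CRAN applies differ by flavor and are not reproduced here):

```r
# Minimal sketch of a local CRAN-style check of the flocker sources.
# Assumes the rcmdcheck package is available; CRAN flavors add
# flavor-specific toolchains not reproduced by this call.
install.packages("rcmdcheck")

res <- rcmdcheck::rcmdcheck(
  path = ".",           # directory containing the flocker source package
  args = "--as-cran",   # apply the stricter checks CRAN uses
  error_on = "never"    # collect ERROR/WARNING/NOTE instead of stopping
)

print(res)              # summarises errors, warnings, and notes
```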
Version: 1.0-0
Check: tests
Result: ERROR
Running ‘spelling.R’ [0s/0s]
Running ‘testthat.R’ [158s/192s]
Running the tests in ‘tests/testthat.R’ failed.
Complete output:
> library(testthat)
> library(flocker)
>
> test_check("flocker")
Formatting data for a single-season occupancy model. For details, see make_flocker_data_static. All warnings and error messages should be interpreted in the context of make_flocker_data_static
Compiling Stan program...
Start sampling
SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 1).
Chain 1:
Chain 1: Gradient evaluation took 0.000233 seconds
Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 2.33 seconds.
Chain 1: Adjust your expectations accordingly!
Chain 1:
Chain 1:
Chain 1: WARNING: No variance estimation is
Chain 1: performed for num_warmup < 20
Chain 1:
Chain 1: Iteration: 1 / 8 [ 12%] (Warmup)
Chain 1: Iteration: 2 / 8 [ 25%] (Warmup)
Chain 1: Iteration: 3 / 8 [ 37%] (Warmup)
Chain 1: Iteration: 4 / 8 [ 50%] (Warmup)
Chain 1: Iteration: 5 / 8 [ 62%] (Sampling)
Chain 1: Iteration: 6 / 8 [ 75%] (Sampling)
Chain 1: Iteration: 7 / 8 [ 87%] (Sampling)
Chain 1: Iteration: 8 / 8 [100%] (Sampling)
Chain 1:
Chain 1: Elapsed Time: 0.002 seconds (Warm-up)
Chain 1: 0.003 seconds (Sampling)
Chain 1: 0.005 seconds (Total)
Chain 1:
SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 2).
Chain 2:
Chain 2: Gradient evaluation took 0.000115 seconds
Chain 2: 1000 transitions using 10 leapfrog steps per transition would take 1.15 seconds.
Chain 2: Adjust your expectations accordingly!
Chain 2:
Chain 2:
Chain 2: WARNING: No variance estimation is
Chain 2: performed for num_warmup < 20
Chain 2:
Chain 2: Iteration: 1 / 8 [ 12%] (Warmup)
Chain 2: Iteration: 2 / 8 [ 25%] (Warmup)
Chain 2: Iteration: 3 / 8 [ 37%] (Warmup)
Chain 2: Iteration: 4 / 8 [ 50%] (Warmup)
Chain 2: Iteration: 5 / 8 [ 62%] (Sampling)
Chain 2: Iteration: 6 / 8 [ 75%] (Sampling)
Chain 2: Iteration: 7 / 8 [ 87%] (Sampling)
Chain 2: Iteration: 8 / 8 [100%] (Sampling)
Chain 2:
Chain 2: Elapsed Time: 0.002 seconds (Warm-up)
Chain 2: 0.005 seconds (Sampling)
Chain 2: 0.007 seconds (Total)
Chain 2:
SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 3).
Chain 3:
Chain 3: Gradient evaluation took 0.000119 seconds
Chain 3: 1000 transitions using 10 leapfrog steps per transition would take 1.19 seconds.
Chain 3: Adjust your expectations accordingly!
Chain 3:
Chain 3:
Chain 3: WARNING: No variance estimation is
Chain 3: performed for num_warmup < 20
Chain 3:
Chain 3: Iteration: 1 / 8 [ 12%] (Warmup)
Chain 3: Iteration: 2 / 8 [ 25%] (Warmup)
Chain 3: Iteration: 3 / 8 [ 37%] (Warmup)
Chain 3: Iteration: 4 / 8 [ 50%] (Warmup)
Chain 3: Iteration: 5 / 8 [ 62%] (Sampling)
Chain 3: Iteration: 6 / 8 [ 75%] (Sampling)
Chain 3: Iteration: 7 / 8 [ 87%] (Sampling)
Chain 3: Iteration: 8 / 8 [100%] (Sampling)
Chain 3:
Chain 3: Elapsed Time: 0.008 seconds (Warm-up)
Chain 3: 0.006 seconds (Sampling)
Chain 3: 0.014 seconds (Total)
Chain 3:
SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 4).
Chain 4:
Chain 4: Gradient evaluation took 0.000115 seconds
Chain 4: 1000 transitions using 10 leapfrog steps per transition would take 1.15 seconds.
Chain 4: Adjust your expectations accordingly!
Chain 4:
Chain 4:
Chain 4: WARNING: No variance estimation is
Chain 4: performed for num_warmup < 20
Chain 4:
Chain 4: Iteration: 1 / 8 [ 12%] (Warmup)
Chain 4: Iteration: 2 / 8 [ 25%] (Warmup)
Chain 4: Iteration: 3 / 8 [ 37%] (Warmup)
Chain 4: Iteration: 4 / 8 [ 50%] (Warmup)
Chain 4: Iteration: 5 / 8 [ 62%] (Sampling)
Chain 4: Iteration: 6 / 8 [ 75%] (Sampling)
Chain 4: Iteration: 7 / 8 [ 87%] (Sampling)
Chain 4: Iteration: 8 / 8 [100%] (Sampling)
Chain 4:
Chain 4: Elapsed Time: 0.002 seconds (Warm-up)
Chain 4: 0.002 seconds (Sampling)
Chain 4: 0.004 seconds (Total)
Chain 4:
Formatting data for a single-season multispecies occupancy model with data augmentation for never-observed species. For details, see make_flocker_data_augmented. All warnings and error messages should be interpreted in the context of make_flocker_data_augmented
formatting rep indices
[console text progress bar advancing from 0% to 100% omitted]
Compiling Stan program...
0,248,Js_of_ocaml__Js.Error,18,TypeError: not a function
Error in `FUN()`:
! In path: "/home/hornik/tmp/R.check/r-devel-clang/Work/PKGS/flocker.Rcheck/tests/testthat/setup.R"
Caused by error in `model_cppcode$errors`:
! $ operator is invalid for atomic vectors
Backtrace:
▆
1. ├─testthat::test_check("flocker")
2. │ └─testthat::test_dir(...)
3. │ └─testthat:::test_files(...)
4. │ └─testthat:::test_files_serial(...)
5. │ └─testthat:::test_files_setup_state(...)
6. │ └─testthat::source_test_setup(".", env)
7. │ └─testthat::source_dir(path, "^setup.*\\.[rR]$", env = env, wrap = FALSE)
8. │ └─base::lapply(...)
9. │ └─testthat (local) FUN(X[[i]], ...)
10. │ └─testthat::source_file(path, env = env, chdir = chdir, wrap = wrap)
11. │ ├─base::withCallingHandlers(...)
12. │ └─base::eval(exprs, env)
13. │ └─base::eval(exprs, env)
14. │ ├─base::suppressWarnings(...) at tests/testthat/setup.R:6:1
15. │ │ └─base::withCallingHandlers(...)
16. │ └─flocker::flock(...) at tests/testthat/setup.R:31:3
17. │ └─flocker:::flock_(...)
18. │ └─flocker:::flocker_fit_code_util(...)
19. │ └─brms::brm(...)
20. │ └─brms::do_call(compile_model, compile_args)
21. │ └─brms:::eval2(call, envir = args, enclos = envir)
22. │ └─base::eval(expr, envir, ...)
23. │ └─base::eval(expr, envir, ...)
24. │ └─brms (local) .fun(...)
25. │ └─brms (local) .compile_model(model, ...)
26. │ ├─brms:::eval_silent(...)
27. │ │ └─base::eval(expr, envir)
28. │ │ └─base::eval(expr, envir)
29. │ └─brms::do_call(rstan::stan_model, args)
30. │ └─brms:::eval2(call, envir = args, enclos = envir)
31. │ └─base::eval(expr, envir, ...)
32. │ └─base::eval(expr, envir, ...)
33. │ └─rstan (local) .fun(model_code = .x1)
34. │ └─rstan::stanc(...)
35. └─base::.handleSimpleError(...)
36. └─testthat (local) h(simpleError(msg, call))
37. └─rlang::abort(...)
Execution halted
Flavor: r-devel-linux-x86_64-debian-clang
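The failure is not in flocker's own test code: `rstan::stanc()` aborts with the js_of_ocaml `TypeError` shown above, and the calling code then subsets the result with `model_cppcode$errors` even though it is no longer a list, which R rejects. A minimal sketch of that downstream error class (the `model_cppcode` object here is hypothetical, not the actual brms/rstan internals):

```r
# `$` only works on recursive objects such as lists and environments.
# When the expected list is replaced by an atomic vector (e.g. a bare
# character string), `$` fails with the message seen in the backtrace.
model_cppcode <- list(cppcode = "...", errors = character(0))
model_cppcode$errors        # fine: returns the `errors` element of the list

model_cppcode <- "compilation failed"   # atomic character vector instead
try(model_cppcode$errors)
#> Error in model_cppcode$errors : $ operator is invalid for atomic vectors
```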
Version: 1.0-0
Check: tests
Result: ERROR
Running ‘spelling.R’
Running ‘testthat.R’ [196s/247s]
Running the tests in ‘tests/testthat.R’ failed.
Complete output:
> library(testthat)
> library(flocker)
>
> test_check("flocker")
Formatting data for a single-season occupancy model. For details, see make_flocker_data_static. All warnings and error messages should be interpreted in the context of make_flocker_data_static
Compiling Stan program...
Start sampling
SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 1).
Chain 1:
Chain 1: Gradient evaluation took 0.000383 seconds
Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 3.83 seconds.
Chain 1: Adjust your expectations accordingly!
Chain 1:
Chain 1:
Chain 1: WARNING: No variance estimation is
Chain 1: performed for num_warmup < 20
Chain 1:
Chain 1: Iteration: 1 / 8 [ 12%] (Warmup)
Chain 1: Iteration: 2 / 8 [ 25%] (Warmup)
Chain 1: Iteration: 3 / 8 [ 37%] (Warmup)
Chain 1: Iteration: 4 / 8 [ 50%] (Warmup)
Chain 1: Iteration: 5 / 8 [ 62%] (Sampling)
Chain 1: Iteration: 6 / 8 [ 75%] (Sampling)
Chain 1: Iteration: 7 / 8 [ 87%] (Sampling)
Chain 1: Iteration: 8 / 8 [100%] (Sampling)
Chain 1:
Chain 1: Elapsed Time: 0.008 seconds (Warm-up)
Chain 1: 0.014 seconds (Sampling)
Chain 1: 0.022 seconds (Total)
Chain 1:
SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 2).
Chain 2:
Chain 2: Gradient evaluation took 0.000255 seconds
Chain 2: 1000 transitions using 10 leapfrog steps per transition would take 2.55 seconds.
Chain 2: Adjust your expectations accordingly!
Chain 2:
Chain 2:
Chain 2: WARNING: No variance estimation is
Chain 2: performed for num_warmup < 20
Chain 2:
Chain 2: Iteration: 1 / 8 [ 12%] (Warmup)
Chain 2: Iteration: 2 / 8 [ 25%] (Warmup)
Chain 2: Iteration: 3 / 8 [ 37%] (Warmup)
Chain 2: Iteration: 4 / 8 [ 50%] (Warmup)
Chain 2: Iteration: 5 / 8 [ 62%] (Sampling)
Chain 2: Iteration: 6 / 8 [ 75%] (Sampling)
Chain 2: Iteration: 7 / 8 [ 87%] (Sampling)
Chain 2: Iteration: 8 / 8 [100%] (Sampling)
Chain 2:
Chain 2: Elapsed Time: 0.02 seconds (Warm-up)
Chain 2: 0.024 seconds (Sampling)
Chain 2: 0.044 seconds (Total)
Chain 2:
SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 3).
Chain 3:
Chain 3: Gradient evaluation took 0.000227 seconds
Chain 3: 1000 transitions using 10 leapfrog steps per transition would take 2.27 seconds.
Chain 3: Adjust your expectations accordingly!
Chain 3:
Chain 3:
Chain 3: WARNING: No variance estimation is
Chain 3: performed for num_warmup < 20
Chain 3:
Chain 3: Iteration: 1 / 8 [ 12%] (Warmup)
Chain 3: Iteration: 2 / 8 [ 25%] (Warmup)
Chain 3: Iteration: 3 / 8 [ 37%] (Warmup)
Chain 3: Iteration: 4 / 8 [ 50%] (Warmup)
Chain 3: Iteration: 5 / 8 [ 62%] (Sampling)
Chain 3: Iteration: 6 / 8 [ 75%] (Sampling)
Chain 3: Iteration: 7 / 8 [ 87%] (Sampling)
Chain 3: Iteration: 8 / 8 [100%] (Sampling)
Chain 3:
Chain 3: Elapsed Time: 0.028 seconds (Warm-up)
Chain 3: 0.023 seconds (Sampling)
Chain 3: 0.051 seconds (Total)
Chain 3:
SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 4).
Chain 4:
Chain 4: Gradient evaluation took 0.000255 seconds
Chain 4: 1000 transitions using 10 leapfrog steps per transition would take 2.55 seconds.
Chain 4: Adjust your expectations accordingly!
Chain 4:
Chain 4:
Chain 4: WARNING: No variance estimation is
Chain 4: performed for num_warmup < 20
Chain 4:
Chain 4: Iteration: 1 / 8 [ 12%] (Warmup)
Chain 4: Iteration: 2 / 8 [ 25%] (Warmup)
Chain 4: Iteration: 3 / 8 [ 37%] (Warmup)
Chain 4: Iteration: 4 / 8 [ 50%] (Warmup)
Chain 4: Iteration: 5 / 8 [ 62%] (Sampling)
Chain 4: Iteration: 6 / 8 [ 75%] (Sampling)
Chain 4: Iteration: 7 / 8 [ 87%] (Sampling)
Chain 4: Iteration: 8 / 8 [100%] (Sampling)
Chain 4:
Chain 4: Elapsed Time: 0.012 seconds (Warm-up)
Chain 4: 0.008 seconds (Sampling)
Chain 4: 0.02 seconds (Total)
Chain 4:
Formatting data for a single-season multispecies occupancy model with data augmentation for never-observed species. For details, see make_flocker_data_augmented. All warnings and error messages should be interpreted in the context of make_flocker_data_augmented
formatting rep indices
[console text progress bar advancing from 0% to 100% omitted]
Compiling Stan program...
0,248,Js_of_ocaml__Js.Error,18,TypeError: not a function
Error in `FUN()`:
! In path: "/data/gannet/ripley/R/packages/tests-clang/flocker.Rcheck/tests/testthat/setup.R"
Caused by error in `model_cppcode$errors`:
! $ operator is invalid for atomic vectors
Backtrace:
▆
1. ├─testthat::test_check("flocker")
2. │ └─testthat::test_dir(...)
3. │ └─testthat:::test_files(...)
4. │ └─testthat:::test_files_serial(...)
5. │ └─testthat:::test_files_setup_state(...)
6. │ └─testthat::source_test_setup(".", env)
7. │ └─testthat::source_dir(path, "^setup.*\\.[rR]$", env = env, wrap = FALSE)
8. │ └─base::lapply(...)
9. │ └─testthat (local) FUN(X[[i]], ...)
10. │ └─testthat::source_file(path, env = env, chdir = chdir, wrap = wrap)
11. │ ├─base::withCallingHandlers(...)
12. │ └─base::eval(exprs, env)
13. │ └─base::eval(exprs, env)
14. │ ├─base::suppressWarnings(...) at tests/testthat/setup.R:6:1
15. │ │ └─base::withCallingHandlers(...)
16. │ └─flocker::flock(...) at tests/testthat/setup.R:31:3
17. │ └─flocker:::flock_(...)
18. │ └─flocker:::flocker_fit_code_util(...)
19. │ └─brms::brm(...)
20. │ └─brms::do_call(compile_model, compile_args)
21. │ └─brms:::eval2(call, envir = args, enclos = envir)
22. │ └─base::eval(expr, envir, ...)
23. │ └─base::eval(expr, envir, ...)
24. │ └─brms (local) .fun(...)
25. │ └─brms (local) .compile_model(model, ...)
26. │ ├─brms:::eval_silent(...)
27. │ │ └─base::eval(expr, envir)
28. │ │ └─base::eval(expr, envir)
29. │ └─brms::do_call(rstan::stan_model, args)
30. │ └─brms:::eval2(call, envir = args, enclos = envir)
31. │ └─base::eval(expr, envir, ...)
32. │ └─base::eval(expr, envir, ...)
33. │ └─rstan (local) .fun(model_code = .x1)
34. │ └─rstan::stanc(...)
35. └─base::.handleSimpleError(...)
36. └─testthat (local) h(simpleError(msg, call))
37. └─rlang::abort(...)
Execution halted
Flavor: r-devel-linux-x86_64-fedora-clang
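The same failure, including the identical `Js_of_ocaml__Js.Error` line, appears on the fedora-clang flavor, which points at the Stan parser used by rstan on those machines rather than at anything flocker-specific. One way to probe this is to call `rstan::stanc()` directly on a trivial model, bypassing brms and flocker entirely (a hedged sketch; the toy model string is illustrative and the outcome on the CRAN machines may differ):

```r
# Probe the stanc parser that rstan ships, independent of brms and flocker.
# If this minimal model also triggers the js_of_ocaml TypeError, the problem
# lies in the rstan toolchain on that machine, not in flocker's tests.
library(rstan)

toy_model <- "
parameters {
  real theta;
}
model {
  theta ~ normal(0, 1);
}
"

parsed <- rstan::stanc(model_code = toy_model, model_name = "toy")
parsed$status   # TRUE when parsing/translation to C++ succeeded
```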