CRAN Package Check Results for Maintainer ‘Dominique Makowski <officialeasystats at gmail.com>’

Last updated on 2025-12-28 03:52:13 CET.

Package       ERROR  NOTE  OK
bayestestR        -     -  13
insight           2     1  10
modelbased        1     -  12
parameters        1     2  10
performance       1     -  12

Package bayestestR

Current CRAN status: OK: 13

Package insight

Current CRAN status: ERROR: 2, NOTE: 1, OK: 10

Version: 1.4.4
Check: examples
Result: ERROR Running examples in 'insight-Ex.R' failed The error most likely occurred in: > ### Name: get_datagrid.emmGrid > ### Title: Extract a reference grid from objects created by '{emmeans}' and > ### '{marginaleffects}' > ### Aliases: get_datagrid.emmGrid > > ### ** Examples > > data("mtcars") > mtcars$cyl <- factor(mtcars$cyl) > > mod <- glm(am ~ cyl + hp + wt, + family = binomial("logit"), + data = mtcars + ) > > ## Don't show: > if (insight::check_if_installed("emmeans", quietly = TRUE)) withAutoprint({ # examplesIf + ## End(Don't show) + em1 <- emmeans::emmeans(mod, ~ cyl + hp, at = list(hp = c(100, 150))) + get_datagrid(em1) + + contr1 <- emmeans::contrast(em1, method = "consec", by = "hp") + get_datagrid(contr1) + + eml1 <- emmeans::emmeans(mod, pairwise ~ cyl | hp, at = list(hp = c(100, 150))) + get_datagrid(eml1) # not a "true" grid + ## Don't show: + }) # examplesIf > em1 <- emmeans::emmeans(mod, ~cyl + hp, at = list(hp = c(100, 150))) > get_datagrid(em1) cyl hp 1 4 100 2 6 100 3 8 100 4 4 150 5 6 150 6 8 150 > contr1 <- emmeans::contrast(em1, method = "consec", by = "hp") > get_datagrid(contr1) contrast hp 1 cyl6 - cyl4 100 2 cyl8 - cyl6 100 3 cyl6 - cyl4 150 4 cyl8 - cyl6 150 > eml1 <- emmeans::emmeans(mod, pairwise ~ cyl | hp, at = list(hp = c(100, + 150))) > get_datagrid(eml1) hp cyl contrast 1 100 4 <NA> 2 100 6 <NA> 3 100 8 <NA> 4 150 4 <NA> 5 150 6 <NA> 6 150 8 <NA> 7 100 <NA> cyl4 - cyl6 8 100 <NA> cyl4 - cyl8 9 100 <NA> cyl6 - cyl8 10 150 <NA> cyl4 - cyl6 11 150 <NA> cyl4 - cyl8 12 150 <NA> cyl6 - cyl8 > ## End(Don't show) > ## Don't show: > if (insight::check_if_installed("marginaleffects", quietly = TRUE, minimum_version = "0.29.0")) withAutoprint({ # examplesIf + ## End(Don't show) + mfx1 <- marginaleffects::slopes(mod, variables = "hp") + get_datagrid(mfx1) # not a "true" grid + + mfx2 <- marginaleffects::slopes(mod, variables = c("hp", "wt"), by = "am") + get_datagrid(mfx2) + + contr2 <- marginaleffects::avg_comparisons(mod) + get_datagrid(contr2) # not a "true" grid + ## Don't show: + }) # examplesIf > mfx1 <- marginaleffects::slopes(mod, variables = "hp") Error in `[.data.table`(out, , `:=`(tmp_idx, seq_len(.N)), by = tmp) : attempt access index 19/19 in VECTOR_ELT Calls: withAutoprint ... <Anonymous> -> do.call -> get_comparisons -> [ -> [.data.table Execution halted Flavor: r-devel-windows-x86_64
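The failing example boils down to a short reproduction. A minimal sketch, assembled from the transcript above (it assumes emmeans and marginaleffects are installed): the emmeans-based calls complete as expected, and the error is raised from data.table inside marginaleffects rather than from insight itself.

    # Sketch of the failing example in insight-Ex.R on r-devel-windows-x86_64
    data("mtcars")
    mtcars$cyl <- factor(mtcars$cyl)
    mod <- glm(am ~ cyl + hp + wt, family = binomial("logit"), data = mtcars)

    # These steps complete and print the expected reference grids:
    em1 <- emmeans::emmeans(mod, ~ cyl + hp, at = list(hp = c(100, 150)))
    insight::get_datagrid(em1)

    # This step fails with "attempt access index 19/19 in VECTOR_ELT",
    # raised from `[.data.table` inside marginaleffects:::get_comparisons():
    mfx1 <- marginaleffects::slopes(mod, variables = "hp")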

Version: 1.4.4
Check: tests
Result: ERROR Running 'testthat.R' [175s] Running the tests in 'tests/testthat.R' failed. Complete output: > library(testthat) > library(insight) > test_check("insight") Starting 2 test processes. > test-find_transformation.R: boundary (singular) fit: see help('isSingular') > test-gamlss.R: GAMLSS-RS iteration 1: Global Deviance = 365.2328 > test-gamlss.R: GAMLSS-RS iteration 2: Global Deviance = 365.1292 > test-gamlss.R: GAMLSS-RS iteration 3: Global Deviance = 365.1269 > test-gamlss.R: GAMLSS-RS iteration 4: Global Deviance = 365.1268 > test-gamlss.R: GAMLSS-RS iteration 1: Global Deviance = 5779.746 > test-gamlss.R: GAMLSS-RS iteration 2: Global Deviance = 5779.746 > test-gamlss.R: GAMLSS-RS iteration 1: Global Deviance = 703.1164 > test-gamlss.R: GAMLSS-RS iteration 2: Global Deviance = 703.1164 > test-get_model.R: Loading required namespace: GPArotation > test-get_random.R: boundary (singular) fit: see help('isSingular') Saving _problems/test-get_datagrid-486.R Saving _problems/test-get_datagrid-1143.R > test-glmmPQL.R: iteration 1 > test-is_converged.R: boundary (singular) fit: see help('isSingular') > test-mmrm.R: mmrm() registered as emmeans extension > test-mmrm.R: mmrm() registered as car::Anova extension > test-model_info.R: boundary (singular) fit: see help('isSingular') > test-nestedLogit.R: list(work = c(0L, 0L, 0L, 0L, 0L, 0L, 0L, 1L, 0L, 0L, 0L, 1L, > test-nestedLogit.R: 0L, 0L, 0L, 0L, 1L, 1L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, > test-nestedLogit.R: 0L, 1L, 1L, 1L, 0L, 1L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 1L, 1L, 1L, > test-nestedLogit.R: 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 1L, 1L, 1L, 1L, 1L, 1L, > test-nestedLogit.R: 1L, 1L, 0L, 0L, 0L, 1L, 1L, 1L, 0L, 0L, 0L, 1L, 1L, 0L, 0L, 1L, > test-nestedLogit.R: 1L, 1L, 1L, 1L, 1L, 0L, 1L, 1L, 1L, 1L, 1L, 0L, 0L, 0L, 0L, 0L, > test-nestedLogit.R: 1L, 1L, 0L, 0L, 1L, 0L, 0L, 1L, 1L, 1L, 0L, 1L, 1L, 1L, 0L, 0L > test-nestedLogit.R: ), full = c(1L, 1L, 0L, 1L, 1L, 0L, 0L, 0L, 1L, 1L, 0L, 0L, 0L, > test-nestedLogit.R: 1L, 0L, 1L, 1L, 0L, 1L, 0L, 0L, 0L, 1L, 1L, 0L, 1L, 1L, 0L, 0L, > test-nestedLogit.R: 0L, 1L, 1L, 1L, 1L, 0L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L)) > test-polr.R: > test-polr.R: Re-fitting to get Hessian > test-polr.R: > test-polr.R: > test-polr.R: Re-fitting to get Hessian > test-polr.R: > test-survey_coxph.R: Stratified Independent Sampling design (with replacement) > test-survey_coxph.R: dpbc <- survey::svydesign( > test-survey_coxph.R: id = ~1, > test-survey_coxph.R: prob = ~randprob, > test-survey_coxph.R: strata = ~edema, > test-survey_coxph.R: data = subset(pbc, randomized) > test-survey_coxph.R: ) > test-survey_coxph.R: Stratified Independent Sampling design (with replacement) > test-survey_coxph.R: dpbc <- survey::svydesign( > test-survey_coxph.R: id = ~1, > test-survey_coxph.R: prob = ~randprob, > test-survey_coxph.R: strata = ~edema, > test-survey_coxph.R: data = subset(pbc, randomized) > test-survey_coxph.R: ) > test-survey_coxph.R: Stratified Independent Sampling design (with replacement) > test-survey_coxph.R: dpbc <- survey::svydesign( > test-survey_coxph.R: id = ~1, > test-survey_coxph.R: prob = ~randprob, > test-survey_coxph.R: strata = ~edema, > test-survey_coxph.R: data = subset(pbc, randomized) > test-survey_coxph.R: ) [ FAIL 2 | WARN 2 | SKIP 93 | PASS 3609 ] ══ Skipped tests (93) ══════════════════════════════════════════════════════════ • On CRAN (89): 'test-GLMMadaptive.R:2:1', 'test-averaging.R:1:1', 'test-bias_correction.R:1:1', 'test-blmer.R:262:3', 'test-betareg.R:197:5', 
'test-brms_aterms.R:1:1', 'test-brms.R:1:1', 'test-brms_gr_random_effects.R:1:1', 'test-brms_missing.R:1:1', 'test-brms_mm.R:1:1', 'test-brms_von_mises.R:1:1', 'test-clean_names.R:109:3', 'test-clean_parameters.R:1:1', 'test-coxme.R:1:1', 'test-cpglmm.R:152:3', 'test-clmm.R:170:3', 'test-display.R:1:1', 'test-display.R:15:1', 'test-export_table.R:3:1', 'test-export_table.R:7:1', 'test-export_table.R:134:3', 'test-export_table.R:164:3', 'test-export_table.R:193:1', 'test-export_table.R:278:1', 'test-export_table.R:296:3', 'test-export_table.R:328:3', 'test-export_table.R:385:1', 'test-export_table.R:406:3', 'test-export_table.R:470:3', 'test-find_random.R:43:3', 'test-fixest.R:2:1', 'test-format_table.R:2:1', 'test-find_smooth.R:39:3', 'test-format_table_ci.R:72:1', 'test-gam.R:2:1', 'test-get_data.R:507:1', 'test-get_loglikelihood.R:143:3', 'test-get_loglikelihood.R:223:3', 'test-get_predicted.R:2:1', 'test-get_priors.R:1:1', 'test-get_varcov.R:43:3', 'test-get_varcov.R:57:3', 'test-get_datagrid.R:1068:3', 'test-get_datagrid.R:1105:5', 'test-is_converged.R:47:1', 'test-iv_robust.R:120:3', 'test-lavaan.R:1:1', 'test-lcmm.R:1:1', 'test-lme.R:28:3', 'test-lme.R:212:3', 'test-glmmTMB.R:67:3', 'test-glmmTMB.R:767:3', 'test-glmmTMB.R:803:3', 'test-glmmTMB.R:1142:3', 'test-marginaleffects.R:1:1', 'test-mgcv.R:1:1', 'test-mipo.R:1:1', 'test-mlogit.R:1:1', 'test-model_info.R:106:3', 'test-modelbased.R:1:1', 'test-mvrstanarm.R:1:1', 'test-null_model.R:85:3', 'test-panelr-asym.R:165:3', 'test-panelr.R:301:3', 'test-phylolm.R:1:1', 'test-print_parameters.R:1:1', 'test-r2_nakagawa_bernoulli.R:1:1', 'test-r2_nakagawa_beta.R:1:1', 'test-r2_nakagawa_binomial.R:1:1', 'test-r2_nakagawa_gamma.R:1:1', 'test-r2_nakagawa_linear.R:1:1', 'test-r2_nakagawa_negbin.R:1:1', 'test-r2_nakagawa_negbin_zi.R:1:1', 'test-r2_nakagawa_ordered_beta.R:1:1', 'test-r2_nakagawa_poisson.R:1:1', 'test-r2_nakagawa_poisson_zi.R:1:1', 'test-r2_nakagawa_truncated_poisson.R:1:1', 'test-r2_nakagawa_tweedie.R:1:1', 'test-rlmer.R:276:3', 'test-rms.R:1:1', 'test-rqss.R:1:1', 'test-rstanarm.R:1:1', 'test-sdmTMB.R:1:1', 'test-selection.R:2:1', 'test-spatial.R:2:1', 'test-svylme.R:1:1', 'test-tidymodels.R:1:1', 'test-vgam.R:2:1', 'test-weightit.R:1:1' • Package `logistf` is loaded and breaks `mmrm::mmrm()` (1): 'test-mmrm.R:4:1' • works interactively (2): 'test-coxph-panel.R:34:3', 'test-coxph.R:38:3' • {bigglm} is not installed (1): 'test-model_info.R:24:3' ══ Failed tests ════════════════════════════════════════════════════════════════ ── Error ('test-get_datagrid.R:481:3'): get_datagrid - marginaleffects ───────── Error in ``[.data.table`(out, , `:=`(tmp_idx, seq_len(.N)), by = tmp)`: attempt access index 17/17 in VECTOR_ELT Backtrace: ▆ 1. └─marginaleffects::comparisons(...) at test-get_datagrid.R:481:3 2. ├─base::do.call("get_comparisons", args) 3. └─marginaleffects:::get_comparisons(...) 4. ├─out[, `:=`(tmp_idx, seq_len(.N)), by = tmp] 5. └─data.table:::`[.data.table`(...) ── Error ('test-get_datagrid.R:1143:3'): get_datagrid - marginaleffects, avg_slopes, non-Bayesian ── Error in ``[.data.table`(out, , `:=`(tmp_idx, seq_len(.N)), by = tmp)`: attempt access index 19/19 in VECTOR_ELT Backtrace: ▆ 1. └─marginaleffects::comparisons(...) 2. ├─base::do.call("get_comparisons", args) 3. └─marginaleffects:::get_comparisons(...) 4. ├─out[, `:=`(tmp_idx, seq_len(.N)), by = tmp] 5. └─data.table:::`[.data.table`(...) [ FAIL 2 | WARN 2 | SKIP 93 | PASS 3609 ] Error: ! Test failures. Execution halted Flavor: r-devel-windows-x86_64

Version: 1.4.4
Check: tests
Result: ERROR
    Running 'testthat.R' [23s]
    Running the tests in 'tests/testthat.R' failed.
    Complete output:
      > library(testthat)
      > library(insight)
      > test_check("insight")
      Starting 2 test processes.
      > test-coxph.R: Error: ! testthat subprocess exited in file 'test-coxph.R'.
      Caused by error:
      ! R session crashed with exit code -1073741819
      Backtrace:
          ▆
       1. └─testthat::test_check("insight")
       2. └─testthat::test_dir(...)
       3. └─testthat:::test_files(...)
       4. └─testthat:::test_files_parallel(...)
       5. ├─withr::with_dir(...)
       6. │ └─base::force(code)
       7. ├─testthat::with_reporter(...)
       8. │ └─base::tryCatch(...)
       9. │ └─base (local) tryCatchList(expr, classes, parentenv, handlers)
      10. │ └─base (local) tryCatchOne(expr, names, parentenv, handlers[[1L]])
      11. │ └─base (local) doTryCatch(return(expr), name, parentenv, handler)
      12. └─testthat:::parallel_event_loop_chunky(queue, reporters, ".")
      13. └─queue$poll(Inf)
      14. └─base::lapply(...)
      15. └─testthat (local) FUN(X[[i]], ...)
      16. └─private$handle_error(msg, i)
      17. └─cli::cli_abort(...)
      18. └─rlang::abort(...)
      Execution halted
Flavor: r-release-windows-x86_64

Version: 1.4.4
Check: package dependencies
Result: NOTE Package suggested but not available for checking: 'fungible' Flavor: r-oldrel-windows-x86_64

Package modelbased

Current CRAN status: ERROR: 1, OK: 12

Version: 0.13.1
Check: examples
Result: ERROR Running examples in 'modelbased-Ex.R' failed The error most likely occurred in: > ### Name: estimate_slopes > ### Title: Estimate Marginal Effects > ### Aliases: estimate_slopes > > ### ** Examples > > ## Don't show: > if (all(insight::check_if_installed(c("marginaleffects", "emmeans", "effectsize", "mgcv", "ggplot2", "see"), quietly = TRUE))) withAutoprint({ # examplesIf + ## End(Don't show) + library(ggplot2) + # Get an idea of the data + ggplot(iris, aes(x = Petal.Length, y = Sepal.Width)) + + geom_point(aes(color = Species)) + + geom_smooth(color = "black", se = FALSE) + + geom_smooth(aes(color = Species), linetype = "dotted", se = FALSE) + + geom_smooth(aes(color = Species), method = "lm", se = FALSE) + + # Model it + model <- lm(Sepal.Width ~ Species * Petal.Length, data = iris) + # Compute the marginal effect of Petal.Length at each level of Species + slopes <- estimate_slopes(model, trend = "Petal.Length", by = "Species") + slopes + + # What is the *average* slope of Petal.Length? This can be calculated by + # taking the average of the slopes across all Species, using `comparison`. + # We pass a function to `comparison` that calculates the mean of the slopes. + estimate_slopes( + model, + trend = "Petal.Length", + by = "Species", + comparison = ~I(mean(x)) + ) + + ## Not run: + ##D # Plot it + ##D plot(slopes) + ##D standardize(slopes) + ##D + ##D model <- mgcv::gam(Sepal.Width ~ s(Petal.Length), data = iris) + ##D slopes <- estimate_slopes(model, by = "Petal.Length", length = 50) + ##D summary(slopes) + ##D plot(slopes) + ##D + ##D model <- mgcv::gam(Sepal.Width ~ s(Petal.Length, by = Species), data = iris) + ##D slopes <- estimate_slopes(model, + ##D trend = "Petal.Length", + ##D by = c("Petal.Length", "Species"), length = 20 + ##D ) + ##D summary(slopes) + ##D plot(slopes) + ##D + ##D # marginal effects, grouped by Species, at different values of Petal.Length + ##D estimate_slopes(model, + ##D trend = "Petal.Length", + ##D by = c("Petal.Length", "Species"), length = 10 + ##D ) + ##D + ##D # marginal effects at different values of Petal.Length + ##D estimate_slopes(model, trend = "Petal.Length", by = "Petal.Length", length = 10) + ##D + ##D # marginal effects at very specific values of Petal.Length + ##D estimate_slopes(model, trend = "Petal.Length", by = "Petal.Length=c(1, 3, 5)") + ##D + ##D # average marginal effects of Petal.Length, + ##D # just for the trend within a certain range + ##D estimate_slopes(model, trend = "Petal.Length=seq(2, 4, 0.01)") + ## End(Not run) + ## Don't show: + }) # examplesIf > library(ggplot2) > ggplot(iris, aes(x = Petal.Length, y = Sepal.Width)) + geom_point(aes(color = Species)) + + geom_smooth(color = "black", se = FALSE) + geom_smooth(aes(color = Species), + linetype = "dotted", se = FALSE) + geom_smooth(aes(color = Species), method = "lm", + se = FALSE) `geom_smooth()` using method = 'loess' and formula = 'y ~ x' `geom_smooth()` using method = 'loess' and formula = 'y ~ x' `geom_smooth()` using formula = 'y ~ x' > model <- lm(Sepal.Width ~ Species * Petal.Length, data = iris) > slopes <- estimate_slopes(model, trend = "Petal.Length", by = "Species") Error in `[.data.table`(out, , `:=`(tmp_idx, seq_len(.N)), by = tmp) : attempt access index 10/10 in VECTOR_ELT Calls: withAutoprint ... eval -> eval -> estimate_slopes -> get_marginaltrends Execution halted Flavor: r-devel-windows-x86_64
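The failing example likewise reduces to a few lines. A minimal sketch based on the transcript above (assuming marginaleffects is installed); the error is raised inside marginaleffects/data.table before estimate_slopes() returns.

    # Sketch of the failing estimate_slopes() example on r-devel-windows-x86_64
    library(modelbased)

    model <- lm(Sepal.Width ~ Species * Petal.Length, data = iris)

    # Fails with "attempt access index 10/10 in VECTOR_ELT",
    # raised from `[.data.table` inside marginaleffects:::get_comparisons():
    slopes <- estimate_slopes(model, trend = "Petal.Length", by = "Species")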

Version: 0.13.1
Check: tests
Result: ERROR Running 'testthat.R' [36s] Running the tests in 'tests/testthat.R' failed. Complete output: > # This file is part of the standard setup for testthat. > # It is recommended that you do not modify it. > # > # Where should you do additional test configuration? > # > # * https://r-pkgs.org/tests.html > # * https://testthat.r-lib.org/reference/test_package.html#special-files > library(testthat) > library(modelbased) > > test_check("modelbased") Starting 2 test processes. Saving _problems/test-attributes_estimatefun-108.R Saving _problems/test-estimate_slopes-10.R Saving _problems/test-estimate_slopes-63.R Saving _problems/test-estimate_slopes-124.R > test-multivariate_response.R: Confidence intervals are not yet supported for models of class `mlm`. > test-multivariate_response.R: Confidence intervals are not yet supported for models of class `mlm`. Saving _problems/test-print-17.R Saving _problems/test-print-27.R > test-scoping_issues.R: We selected `contrast=c("Species")`. > test-scoping_issues.R: We selected `contrast=c("Species")`. Saving _problems/test-transform_response-65.R Saving _problems/test-visualisation_recipe-151.R [ FAIL 8 | WARN 0 | SKIP 59 | PASS 198 ] ══ Skipped tests (59) ══════════════════════════════════════════════════════════ • On CRAN (59): 'test-backtransform_invlink.R:1:1', 'test-betareg.R:1:1', 'test-bias_correction.R:1:1', 'test-brms-marginaleffects.R:1:1', 'test-brms.R:1:1', 'test-equivalence.R:1:1', 'test-estimate_contrasts-average.R:1:1', 'test-estimate_contrasts_bookexamples.R:1:1', 'test-estimate_contrasts.R:1:1', 'test-estimate_contrasts_counterfactual.R:1:1', 'test-estimate_contrasts_effectsize.R:1:1', 'test-estimate_contrasts_inequality.R:1:1', 'test-estimate_contrasts_methods.R:1:1', 'test-estimate_filter.R:1:1', 'test-estimate_grouplevel.R:54:3', 'test-estimate_grouplevel.R:72:3', 'test-estimate_grouplevel.R:93:3', 'test-estimate_grouplevel.R:123:3', 'test-estimate_grouplevel.R:150:5', 'test-estimate_grouplevel.R:182:3', 'test-estimate_grouplevel.R:261:3', 'test-estimate_grouplevel.R:308:3', 'test-estimate_means-average.R:1:1', 'test-estimate_means_ci.R:1:1', 'test-estimate_means.R:1:1', 'test-estimate_means_counterfactuals.R:1:1', 'test-estimate_means_dotargs.R:1:1', 'test-estimate_means_marginalization.R:1:1', 'test-estimate_means_mixed.R:1:1', 'test-estimate_slopes.R:129:1', 'test-g_computation.R:1:1', 'test-get_marginaltrends.R:1:1', 'test-glmmTMB.R:1:1', 'test-joint_test.R:1:1', 'test-keep_iterations.R:1:1', 'test-maihda.R:1:1', 'test-mice.R:1:1', 'test-ordinal.R:1:1', 'test-plot-facet.R:7:1', 'test-plot-flexible_numeric.R:6:1', 'test-plot-grouplevel.R:1:1', 'test-plot-ordinal.R:8:1', 'test-plot-slopes.R:6:1', 'test-plot.R:7:1', 'test-predict-dpar.R:1:1', 'test-print.R:31:1', 'test-print.R:44:1', 'test-print.R:55:1', 'test-print.R:70:1', 'test-print.R:81:1', 'test-print.R:101:1', 'test-print.R:112:1', 'test-print.R:125:1', 'test-residualize_over_grid.R:6:1', 'test-standardize.R:1:1', 'test-summary_estimate_slopes.R:3:1', 'test-transform_response.R:3:1', 'test-vcov.R:1:1', 'test-zeroinfl.R:1:1' ══ Failed tests ════════════════════════════════════════════════════════════════ ── Error ('test-attributes_estimatefun.R:108:3'): attributes_means, slopes ───── Error in ``[.data.table`(out, , `:=`(tmp_idx, seq_len(.N)), by = tmp)`: attempt access index 13/13 in VECTOR_ELT Backtrace: ▆ 1. ├─base::suppressMessages(...) at test-attributes_estimatefun.R:108:3 2. │ └─base::withCallingHandlers(...) 3. 
└─modelbased::estimate_slopes(model, "Sepal.Width", backend = "marginaleffects") 4. └─modelbased::get_marginaltrends(...) 5. ├─base::suppressWarnings(do.call(marginaleffects::avg_slopes, fun_args)) 6. │ └─base::withCallingHandlers(...) 7. ├─base::do.call(marginaleffects::avg_slopes, fun_args) 8. ├─marginaleffects (local) `<fn>`(...) 9. │ └─base::eval.parent(call_attr) 10. │ └─base::eval(expr, p) 11. │ └─base::eval(expr, p) 12. ├─marginaleffects::slopes(...) 13. │ └─base::eval.parent(call_attr_c) 14. │ └─base::eval(expr, p) 15. │ └─base::eval(expr, p) 16. └─marginaleffects::comparisons(...) 17. ├─base::do.call("get_comparisons", args) 18. └─marginaleffects:::get_comparisons(...) 19. ├─out[, `:=`(tmp_idx, seq_len(.N)), by = tmp] 20. └─data.table:::`[.data.table`(...) ── Error ('test-estimate_slopes.R:10:3'): estimate_slopes ────────────────────── Error in ``[.data.table`(out, , `:=`(tmp_idx, seq_len(.N)), by = tmp)`: attempt access index 13/13 in VECTOR_ELT Backtrace: ▆ 1. ├─base::suppressMessages(estimate_slopes(model, backend = "marginaleffects")) at test-estimate_slopes.R:10:3 2. │ └─base::withCallingHandlers(...) 3. └─modelbased::estimate_slopes(model, backend = "marginaleffects") 4. └─modelbased::get_marginaltrends(...) 5. ├─base::suppressWarnings(do.call(marginaleffects::avg_slopes, fun_args)) 6. │ └─base::withCallingHandlers(...) 7. ├─base::do.call(marginaleffects::avg_slopes, fun_args) 8. ├─marginaleffects (local) `<fn>`(...) 9. │ └─base::eval.parent(call_attr) 10. │ └─base::eval(expr, p) 11. │ └─base::eval(expr, p) 12. ├─marginaleffects::slopes(...) 13. │ └─base::eval.parent(call_attr_c) 14. │ └─base::eval(expr, p) 15. │ └─base::eval(expr, p) 16. └─marginaleffects::comparisons(...) 17. ├─base::do.call("get_comparisons", args) 18. └─marginaleffects:::get_comparisons(...) 19. ├─out[, `:=`(tmp_idx, seq_len(.N)), by = tmp] 20. └─data.table:::`[.data.table`(...) ── Error ('test-estimate_slopes.R:63:3'): estimate_slopes, johnson-neyman p-adjust ── Error in ``[.data.table`(out, , `:=`(tmp_idx, seq_len(.N)), by = tmp)`: attempt access index 10/10 in VECTOR_ELT Backtrace: ▆ 1. └─modelbased::estimate_slopes(model, "Petal.Width", by = "Petal.Length") at test-estimate_slopes.R:63:3 2. └─modelbased::get_marginaltrends(...) 3. ├─base::suppressWarnings(do.call(marginaleffects::avg_slopes, fun_args)) 4. │ └─base::withCallingHandlers(...) 5. ├─base::do.call(marginaleffects::avg_slopes, fun_args) 6. ├─marginaleffects (local) `<fn>`(...) 7. │ └─base::eval.parent(call_attr) 8. │ └─base::eval(expr, p) 9. │ └─base::eval(expr, p) 10. ├─marginaleffects::slopes(...) 11. │ └─base::eval.parent(call_attr_c) 12. │ └─base::eval(expr, p) 13. │ └─base::eval(expr, p) 14. └─marginaleffects::comparisons(...) 15. ├─base::do.call("get_comparisons", args) 16. └─marginaleffects:::get_comparisons(...) 17. ├─out[, `:=`(tmp_idx, seq_len(.N)), by = tmp] 18. └─data.table:::`[.data.table`(...) ── Error ('test-estimate_slopes.R:124:3'): estimate_slopes, custom comparison ── Error in ``[.data.table`(out, , `:=`(tmp_idx, seq_len(.N)), by = tmp)`: attempt access index 10/10 in VECTOR_ELT Backtrace: ▆ 1. ├─modelbased::estimate_contrasts(...) at test-estimate_slopes.R:124:3 2. └─modelbased:::estimate_contrasts.default(...) 3. └─modelbased::get_marginalcontrasts(...) 4. └─modelbased::estimate_slopes(...) 5. └─modelbased::get_marginaltrends(...) 6. ├─base::suppressWarnings(do.call(marginaleffects::avg_slopes, fun_args)) 7. │ └─base::withCallingHandlers(...) 8. ├─base::do.call(marginaleffects::avg_slopes, fun_args) 9. 
├─marginaleffects (local) `<fn>`(...) 10. │ └─base::eval.parent(call_attr) 11. │ └─base::eval(expr, p) 12. │ └─base::eval(expr, p) 13. ├─marginaleffects::slopes(...) 14. │ └─base::eval.parent(call_attr_c) 15. │ └─base::eval(expr, p) 16. │ └─base::eval(expr, p) 17. └─marginaleffects::comparisons(...) 18. ├─base::do.call("get_comparisons", args) 19. └─marginaleffects:::get_comparisons(...) 20. ├─out[, `:=`(tmp_idx, seq_len(.N)), by = tmp] 21. └─data.table:::`[.data.table`(...) ── Error ('test-print.R:17:3'): estimate_slopes - print summary ──────────────── Error in ``[.data.table`(out, , `:=`(tmp_idx, seq_len(.N)), by = tmp)`: attempt access index 9/9 in VECTOR_ELT Backtrace: ▆ 1. └─modelbased::estimate_slopes(...) at test-print.R:17:3 2. └─modelbased::get_marginaltrends(...) 3. ├─base::suppressWarnings(do.call(marginaleffects::avg_slopes, fun_args)) 4. │ └─base::withCallingHandlers(...) 5. ├─base::do.call(marginaleffects::avg_slopes, fun_args) 6. ├─marginaleffects (local) `<fn>`(...) 7. │ └─base::eval.parent(call_attr) 8. │ └─base::eval(expr, p) 9. │ └─base::eval(expr, p) 10. ├─marginaleffects::slopes(...) 11. │ └─base::eval.parent(call_attr_c) 12. │ └─base::eval(expr, p) 13. │ └─base::eval(expr, p) 14. └─marginaleffects::comparisons(...) 15. ├─base::do.call("get_comparisons", args) 16. └─marginaleffects:::get_comparisons(...) 17. ├─out[, `:=`(tmp_idx, seq_len(.N)), by = tmp] 18. └─data.table:::`[.data.table`(...) ── Error ('test-print.R:27:3'): estimate_slopes - print regular ──────────────── Error in ``[.data.table`(out, , `:=`(tmp_idx, seq_len(.N)), by = tmp)`: attempt access index 13/13 in VECTOR_ELT Backtrace: ▆ 1. ├─base::print(estimate_slopes(model, trend = "Petal.Length", backend = "marginaleffects")) 2. └─modelbased::estimate_slopes(model, trend = "Petal.Length", backend = "marginaleffects") 3. └─modelbased::get_marginaltrends(...) 4. ├─base::suppressWarnings(do.call(marginaleffects::avg_slopes, fun_args)) 5. │ └─base::withCallingHandlers(...) 6. ├─base::do.call(marginaleffects::avg_slopes, fun_args) 7. ├─marginaleffects (local) `<fn>`(...) 8. │ └─base::eval.parent(call_attr) 9. │ └─base::eval(expr, p) 10. │ └─base::eval(expr, p) 11. ├─marginaleffects::slopes(...) 12. │ └─base::eval.parent(call_attr_c) 13. │ └─base::eval(expr, p) 14. │ └─base::eval(expr, p) 15. └─marginaleffects::comparisons(...) 16. ├─base::do.call("get_comparisons", args) 17. └─marginaleffects:::get_comparisons(...) 18. ├─out[, `:=`(tmp_idx, seq_len(.N)), by = tmp] 19. └─data.table:::`[.data.table`(...) ── Error ('test-transform_response.R:65:3'): estimate_slopes, transform ──────── Error in ``[.data.table`(out, , `:=`(tmp_idx, seq_len(.N)), by = tmp)`: attempt access index 10/10 in VECTOR_ELT Backtrace: ▆ 1. └─modelbased::estimate_slopes(mod, trend = "Sepal.Width", by = "Species") at test-transform_response.R:65:3 2. └─modelbased::get_marginaltrends(...) 3. ├─base::suppressWarnings(do.call(marginaleffects::avg_slopes, fun_args)) 4. │ └─base::withCallingHandlers(...) 5. ├─base::do.call(marginaleffects::avg_slopes, fun_args) 6. ├─marginaleffects (local) `<fn>`(...) 7. │ └─base::eval.parent(call_attr) 8. │ └─base::eval(expr, p) 9. │ └─base::eval(expr, p) 10. ├─marginaleffects::slopes(...) 11. │ └─base::eval.parent(call_attr_c) 12. │ └─base::eval(expr, p) 13. │ └─base::eval(expr, p) 14. └─marginaleffects::comparisons(...) 15. ├─base::do.call("get_comparisons", args) 16. └─marginaleffects:::get_comparisons(...) 17. ├─out[, `:=`(tmp_idx, seq_len(.N)), by = tmp] 18. └─data.table:::`[.data.table`(...) 
── Error ('test-visualisation_recipe.R:151:3'): visualization_recipe ─────────── Error in ``[.data.table`(out, , `:=`(tmp_idx, seq_len(.N)), by = tmp)`: attempt access index 15/15 in VECTOR_ELT Backtrace: ▆ 1. └─modelbased::estimate_slopes(model, trend = "Sepal.Width") at test-visualisation_recipe.R:151:3 2. └─modelbased::get_marginaltrends(...) 3. ├─base::suppressWarnings(do.call(marginaleffects::avg_slopes, fun_args)) 4. │ └─base::withCallingHandlers(...) 5. ├─base::do.call(marginaleffects::avg_slopes, fun_args) 6. ├─marginaleffects (local) `<fn>`(...) 7. │ └─base::eval.parent(call_attr) 8. │ └─base::eval(expr, p) 9. │ └─base::eval(expr, p) 10. ├─marginaleffects::slopes(...) 11. │ └─base::eval.parent(call_attr_c) 12. │ └─base::eval(expr, p) 13. │ └─base::eval(expr, p) 14. └─marginaleffects::comparisons(...) 15. ├─base::do.call("get_comparisons", args) 16. └─marginaleffects:::get_comparisons(...) 17. ├─out[, `:=`(tmp_idx, seq_len(.N)), by = tmp] 18. └─data.table:::`[.data.table`(...) [ FAIL 8 | WARN 0 | SKIP 59 | PASS 198 ] Error: ! Test failures. Execution halted Flavor: r-devel-windows-x86_64
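All eight test failures share the same backtrace: estimate_slopes() calls get_marginaltrends(), which dispatches to marginaleffects::avg_slopes() via do.call(), and the error surfaces further down in marginaleffects:::get_comparisons() / data.table. A rough sketch of the equivalent direct call, for illustration only (the exact fun_args are assembled internally by get_marginaltrends(), and the model here is the one from the examples above, not necessarily the one fitted in each test):

    # Roughly what get_marginaltrends() ends up executing for
    # estimate_slopes(model, trend = "Petal.Length", by = "Species"):
    model <- lm(Sepal.Width ~ Species * Petal.Length, data = iris)
    marginaleffects::avg_slopes(model, variables = "Petal.Length", by = "Species")
    # On r-devel-windows-x86_64 this fails with
    # "attempt access index N/N in VECTOR_ELT" from data.table.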

Package parameters

Current CRAN status: ERROR: 1, NOTE: 2, OK: 10

Version: 0.28.3
Check: tests
Result: ERROR Running 'testthat.R' [75s] Running the tests in 'tests/testthat.R' failed. Complete output: > library(parameters) > library(testthat) > > test_check("parameters") Starting 2 test processes. > test-include_reference.R: Your model may suffer from singularity (see `?lme4::isSingular` and > test-include_reference.R: `?performance::check_singularity`). > test-include_reference.R: Some of the confidence intervals of the random effects parameters are > test-include_reference.R: probably not meaningful! > test-include_reference.R: You may try to impose a prior on the random effects parameters, e.g. > test-include_reference.R: using the glmmTMB package. > test-include_reference.R: Your model may suffer from singularity (see `?lme4::isSingular` and > test-include_reference.R: `?performance::check_singularity`). > test-include_reference.R: Some of the confidence intervals of the random effects parameters are > test-include_reference.R: probably not meaningful! > test-include_reference.R: You may try to impose a prior on the random effects parameters, e.g. > test-include_reference.R: using the glmmTMB package. > test-model_parameters.afex_aov.R: Contrasts set to contr.sum for the following variables: condition, talk > test-model_parameters.afex_aov.R: Contrasts set to contr.sum for the following variables: condition, talk > test-model_parameters.afex_aov.R: Contrasts set to contr.sum for the following variables: treatment, gender Saving _problems/test-marginaleffects-11.R Saving _problems/test-marginaleffects-94.R Saving _problems/test-marginaleffects-130.R Saving _problems/test-marginaleffects-153.R > test-n_factors.R: [1] "# Method Agreement Procedure:" > test-n_factors.R: [2] "" > test-n_factors.R: [3] "The choice of 1 dimensions is supported by 11 (84.62%) methods out of 13 (Bartlett, Anderson, Lawley, Optimal coordinates, Acceleration factor, Parallel analysis, Kaiser criterion, Scree (SE), Scree (R2), VSS complexity 1, Velicer's MAP)." > test-n_factors.R: [1] "# Method Agreement Procedure:" > test-n_factors.R: [2] "" > test-n_factors.R: [3] "The choice of 1 dimensions is supported by 3 (60.00%) methods out of 5 (Velicer's MAP, BIC, BIC (adjusted))." 
[ FAIL 4 | WARN 0 | SKIP 130 | PASS 714 ] ══ Skipped tests (130) ═════════════════════════════════════════════════════════ • On CRAN (125): 'test-GLMMadaptive.R:1:1', 'test-Hmisc.R:1:1', 'test-averaging.R:1:1', 'test-backticks.R:1:1', 'test-bootstrap_emmeans.R:1:1', 'test-bootstrap_parameters.R:1:1', 'test-brms.R:1:1', 'test-compare_parameters.R:90:5', 'test-compare_parameters.R:95:5', 'test-complete_separation.R:4:1', 'test-complete_separation.R:18:1', 'test-complete_separation.R:28:1', 'test-coxph.R:69:1', 'test-efa.R:1:1', 'test-emmGrid-df_colname.R:1:1', 'test-equivalence_test.R:3:1', 'test-equivalence_test.R:13:1', 'test-equivalence_test.R:22:3', 'test-equivalence_test.R:112:3', 'test-factor_analysis.R:2:3', 'test-factor_analysis.R:124:3', 'test-format_model_parameters2.R:2:3', 'test-gam.R:30:1', 'test-get_scores.R:1:1', 'test-glmer.R:1:1', 'test-glmmTMB-2.R:1:1', 'test-glmmTMB-profile_CI.R:2:3', 'test-glmmTMB.R:1:1', 'test-group_level_total.R:2:1', 'test-helper.R:1:1', 'test-ivreg.R:45:1', 'test-lcmm.R:1:1', 'test-lmerTest.R:1:1', 'test-include_reference.R:4:1', 'test-include_reference.R:62:1', 'test-include_reference.R:110:1', 'test-mipo.R:5:1', 'test-mipo.R:23:1', 'test-mmrm.R:1:1', 'test-model_parameters.BFBayesFactor.R:4:3', 'test-model_parameters.BFBayesFactor.R:77:3', 'test-model_parameters.BFBayesFactor.R:114:3', 'test-model_parameters.anova.R:1:1', 'test-model_parameters.aov.R:1:1', 'test-marginaleffects.R:170:1', 'test-marginaleffects.R:199:3', 'test-model_parameters.bracl.R:1:1', 'test-model_parameters.cgam.R:1:1', 'test-model_parameters.coxme.R:1:1', 'test-model_parameters.aov_es_ci.R:183:3', 'test-model_parameters.aov_es_ci.R:294:3', 'test-model_parameters.aov_es_ci.R:344:3', 'test-model_parameters.aov_es_ci.R:397:3', 'test-model_parameters.epi2x2.R:1:1', 'test-model_parameters.efa_cfa.R:30:3', 'test-model_parameters.fixest_multi.R:1:1', 'test-model_parameters.fixest.R:2:3', 'test-model_parameters.fixest.R:77:3', 'test-model_parameters.fixest.R:145:1', 'test-model_parameters.glmgee.R:1:1', 'test-model_parameters.glm.R:35:1', 'test-model_parameters.glm.R:67:1', 'test-model_parameters.logistf.R:1:1', 'test-model_parameters.logitr.R:1:1', 'test-model_parameters.mclogit.R:1:1', 'test-model_parameters.mediate.R:1:1', 'test-model_parameters.mixed.R:2:1', 'test-model_parameters.nnet.R:5:1', 'test-model_parameters.vgam.R:3:1', 'test-model_parameters_df.R:1:1', 'test-model_parameters_ordinal.R:1:1', 'test-model_parameters_random_pars.R:1:1', 'test-model_parameters_std.R:1:1', 'test-model_parameters_std_mixed.R:1:1', 'test-n_factors.R:10:3', 'test-n_factors.R:26:3', 'test-n_factors.R:76:3', 'test-p_adjust.R:1:1', 'test-p_direction.R:1:1', 'test-p_significance.R:1:1', 'test-p_value.R:14:1', 'test-panelr.R:1:1', 'test-pipe.R:1:1', 'test-pca.R:64:1', 'test-polr.R:1:1', 'test-plm.R:97:1', 'test-posterior.R:1:1', 'test-pool_parameters.R:1:1', 'test-pool_parameters.R:32:1', 'test-print_AER_labels.R:5:1', 'test-printing-stan.R:1:1', 'test-printing.R:1:1', 'test-pretty_names.R:40:1', 'test-pretty_names.R:73:5', 'test-quantreg.R:1:1', 'test-random_effects_ci-glmmTMB.R:3:1', 'test-random_effects_ci.R:1:1', 'test-printing2.R:14:5', 'test-printing2.R:21:5', 'test-printing2.R:26:5', 'test-printing2.R:31:5', 'test-printing2.R:36:5', 'test-printing2.R:48:5', 'test-printing2.R:91:7', 'test-printing2.R:126:5', 'test-rstanarm.R:2:1', 'test-robust.R:1:1', 'test-sampleSelection.R:2:1', 'test-simulate_model.R:19:1', 'test-simulate_parameters.R:18:1', 'test-serp.R:5:1', 'test-svylme.R:1:1', 
'test-svyolr.R:1:1', 'test-visualisation_recipe.R:1:1', 'test-weightit.R:6:1', 'test-weightit.R:26:1', 'test-wrs2.R:55:1', 'test-standardize_parameters.R:28:1', 'test-standardize_parameters.R:36:3', 'test-standardize_parameters.R:61:3', 'test-standardize_parameters.R:173:3', 'test-standardize_parameters.R:297:3', 'test-standardize_parameters.R:332:3', 'test-standardize_parameters.R:425:3', 'test-standardize_parameters.R:515:3' • TODO: check this test locally, fails on CI, probably due to scoping issues? (1): 'test-marginaleffects.R:280:3' • TODO: fix this test (1): 'test-model_parameters.lqmm.R:40:3' • TODO: this one actually is not correct. (1): 'test-model_parameters_robust.R:127:3' • empty test (2): 'test-wrs2.R:69:1', 'test-wrs2.R:81:1' ══ Failed tests ════════════════════════════════════════════════════════════════ ── Error ('test-marginaleffects.R:7:3'): marginaleffects() ───────────────────── Error in ``[.data.table`(out, , `:=`(tmp_idx, seq_len(.N)), by = tmp)`: attempt access index 10/10 in VECTOR_ELT Backtrace: ▆ 1. └─marginaleffects::comparisons(...) 2. ├─base::do.call("get_comparisons", args) 3. └─marginaleffects:::get_comparisons(...) 4. ├─out[, `:=`(tmp_idx, seq_len(.N)), by = tmp] 5. └─data.table:::`[.data.table`(...) ── Error ('test-marginaleffects.R:90:3'): comparisons() ──────────────────────── Error in ``[.data.table`(out, , `:=`(tmp_idx, seq_len(.N)), by = tmp)`: attempt access index 10/10 in VECTOR_ELT Backtrace: ▆ 1. └─marginaleffects::comparisons(...) 2. ├─base::do.call("get_comparisons", args) 3. └─marginaleffects:::get_comparisons(...) 4. ├─out[, `:=`(tmp_idx, seq_len(.N)), by = tmp] 5. └─data.table:::`[.data.table`(...) ── Error ('test-marginaleffects.R:127:3'): slopes() ──────────────────────────── Error in ``[.data.table`(out, , `:=`(tmp_idx, seq_len(.N)), by = tmp)`: attempt access index 10/10 in VECTOR_ELT Backtrace: ▆ 1. └─marginaleffects::comparisons(...) 2. ├─base::do.call("get_comparisons", args) 3. └─marginaleffects:::get_comparisons(...) 4. ├─out[, `:=`(tmp_idx, seq_len(.N)), by = tmp] 5. └─data.table:::`[.data.table`(...) ── Error ('test-marginaleffects.R:148:3'): multiple contrasts: Issue #779 ────── Error in ``[.data.table`(out, , `:=`(tmp_idx, seq_len(.N)), by = tmp)`: attempt access index 13/13 in VECTOR_ELT Backtrace: ▆ 1. ├─base::suppressWarnings(...) at test-marginaleffects.R:148:3 2. │ └─base::withCallingHandlers(...) 3. └─marginaleffects::comparisons(...) 4. ├─base::do.call("get_comparisons", args) 5. └─marginaleffects:::get_comparisons(...) 6. ├─out[, `:=`(tmp_idx, seq_len(.N)), by = tmp] 7. └─data.table:::`[.data.table`(...) [ FAIL 4 | WARN 0 | SKIP 130 | PASS 714 ] Error: ! Test failures. Execution halted Flavor: r-devel-windows-x86_64
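The four parameters failures all occur in test-marginaleffects.R and originate in marginaleffects::comparisons() rather than in parameters itself. A sketch of the failing call pattern; the model is hypothetical (the models actually fitted in test-marginaleffects.R are not shown in the log), reusing the mtcars logit from the insight example above:

    # Hypothetical reproduction of the call pattern behind the four failures
    data("mtcars")
    mtcars$cyl <- factor(mtcars$cyl)
    mod <- glm(am ~ cyl + hp + wt, family = binomial("logit"), data = mtcars)

    # Fails on r-devel-windows-x86_64 with
    # "attempt access index N/N in VECTOR_ELT",
    # raised from `[.data.table` inside marginaleffects:::get_comparisons():
    cmp <- marginaleffects::comparisons(mod)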

Version: 0.28.3
Check: package dependencies
Result: NOTE Package suggested but not available for checking: ‘M3C’ Flavor: r-oldrel-macos-arm64

Version: 0.28.3
Check: package dependencies
Result: NOTE Package suggested but not available for checking: 'EGAnet' Flavor: r-oldrel-windows-x86_64

Package performance

Current CRAN status: ERROR: 1, OK: 12

Version: 0.15.3
Check: tests
Result: ERROR Running ‘testthat.R’ [19s/11s] Running the tests in ‘tests/testthat.R’ failed. Complete output: > library(testthat) > library(performance) > > test_check("performance") Starting 2 test processes. > test-check_itemscale.R: Some of the values are negative. Maybe affected items need to be > test-check_itemscale.R: reverse-coded, e.g. using `datawizard::reverse()`. > test-check_itemscale.R: Some of the values are negative. Maybe affected items need to be > test-check_itemscale.R: reverse-coded, e.g. using `datawizard::reverse()`. > test-check_itemscale.R: Some of the values are negative. Maybe affected items need to be > test-check_itemscale.R: reverse-coded, e.g. using `datawizard::reverse()`. > test-check_itemscale.R: Some of the values are negative. Maybe affected items need to be > test-check_itemscale.R: reverse-coded, e.g. using `datawizard::reverse()`. > test-check_itemscale.R: Some of the values are negative. Maybe affected items need to be > test-check_itemscale.R: reverse-coded, e.g. using `datawizard::reverse()`. > test-check_itemscale.R: Some of the values are negative. Maybe affected items need to be > test-check_itemscale.R: reverse-coded, e.g. using `datawizard::reverse()`. > test-check_itemscale.R: Some of the values are negative. Maybe affected items need to be > test-check_itemscale.R: reverse-coded, e.g. using `datawizard::reverse()`. > test-check_itemscale.R: Some of the values are negative. Maybe affected items need to be > test-check_itemscale.R: reverse-coded, e.g. using `datawizard::reverse()`. > test-check_itemscale.R: Some of the values are negative. Maybe affected items need to be > test-check_itemscale.R: reverse-coded, e.g. using `datawizard::reverse()`. > test-check_itemscale.R: Some of the values are negative. Maybe affected items need to be > test-check_itemscale.R: reverse-coded, e.g. using `datawizard::reverse()`. > test-check_collinearity.R: NOTE: 2 fixed-effect singletons were removed (2 observations). Saving _problems/test-check_collinearity-157.R Saving _problems/test-check_collinearity-185.R > test-check_overdispersion.R: Overdispersion detected. > test-check_overdispersion.R: Underdispersion detected. > test-check_outliers.R: No outliers were detected (p = 0.238). > test-glmmPQL.R: iteration 1 > test-item_discrimination.R: Some of the values are negative. Maybe affected items need to be > test-item_discrimination.R: reverse-coded, e.g. using `datawizard::reverse()`. > test-item_discrimination.R: Some of the values are negative. Maybe affected items need to be > test-item_discrimination.R: reverse-coded, e.g. using `datawizard::reverse()`. > test-item_discrimination.R: Some of the values are negative. Maybe affected items need to be > test-item_discrimination.R: reverse-coded, e.g. using `datawizard::reverse()`. > test-performance_aic.R: Model was not fitted with REML, however, `estimator = "REML"`. Set > test-performance_aic.R: `estimator = "ML"` to obtain identical results as from `AIC()`. 
[ FAIL 2 | WARN 2 | SKIP 41 | PASS 443 ] ══ Skipped tests (41) ══════════════════════════════════════════════════════════ • On CRAN (36): 'test-bootstrapped_icc_ci.R:2:3', 'test-bootstrapped_icc_ci.R:44:3', 'test-binned_residuals.R:163:3', 'test-binned_residuals.R:190:3', 'test-check_convergence.R:1:1', 'test-check_dag.R:1:1', 'test-check_distribution.R:1:1', 'test-check_itemscale.R:1:1', 'test-check_itemscale.R:100:1', 'test-check_model.R:1:1', 'test-check_collinearity.R:193:1', 'test-check_collinearity.R:226:1', 'test-check_residuals.R:2:3', 'test-check_singularity.R:2:3', 'test-check_singularity.R:30:3', 'test-check_zeroinflation.R:73:3', 'test-check_zeroinflation.R:112:3', 'test-check_outliers.R:115:3', 'test-check_outliers.R:339:3', 'test-helpers.R:1:1', 'test-item_omega.R:1:1', 'test-item_omega.R:31:3', 'test-compare_performance.R:1:1', 'test-mclogit.R:56:1', 'test-model_performance.bayesian.R:1:1', 'test-model_performance.lavaan.R:1:1', 'test-model_performance.merMod.R:2:3', 'test-model_performance.merMod.R:37:3', 'test-model_performance.psych.R:1:1', 'test-model_performance.rma.R:36:1', 'test-performance_reliability.R:23:3', 'test-pkg-ivreg.R:1:1', 'test-r2_nagelkerke.R:35:3', 'test-r2_bayes.R:39:3', 'test-rmse.R:39:3', 'test-test_likelihoodratio.R:55:1' • On Mac (4): 'test-check_predictions.R:1:1', 'test-icc.R:1:1', 'test-nestedLogit.R:1:1', 'test-r2_nakagawa.R:1:1' • getRversion() > "4.4.0" is TRUE (1): 'test-check_outliers.R:300:3' ══ Failed tests ════════════════════════════════════════════════════════════════ ── Failure ('test-check_collinearity.R:157:3'): check_collinearity | afex ────── Expected `expect_message(ccoW <- check_collinearity(aW))` to throw a warning. ── Failure ('test-check_collinearity.R:185:3'): check_collinearity | afex ────── Expected `expect_message(ccoW <- check_collinearity(aW))` to throw a warning. [ FAIL 2 | WARN 2 | SKIP 41 | PASS 443 ] Error: ! Test failures. Execution halted Flavor: r-release-macos-arm64
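Unlike the other packages, the two performance failures are unmet expectations rather than crashes: the 'check_collinearity | afex' tests wrap check_collinearity() in expect_warning(), and on r-release-macos-arm64 that warning is no longer emitted. A sketch of the failing expectation, with aW standing in for the afex model constructed earlier in test-check_collinearity.R (its definition is not shown in the log):

    library(testthat)
    library(performance)

    # Fails with: Expected `expect_message(ccoW <- check_collinearity(aW))`
    # to throw a warning -- the warning is no longer raised on this platform:
    expect_warning(
      expect_message(ccoW <- check_collinearity(aW))
    )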
