CRAN Package Check Results for Maintainer ‘Dominique Makowski <officialeasystats at gmail.com>’

Last updated on 2025-08-30 03:50:11 CEST.

Package       ERROR   OK
bayestestR        1   12
insight           4    9
modelbased        4    9
parameters        6    7
performance           13

Package bayestestR

Current CRAN status: ERROR: 1, OK: 12

Version: 0.17.0
Check: package dependencies
Result: ERROR
    Package required and available but unsuitable version: ‘insight’
    See section ‘The DESCRIPTION file’ in the ‘Writing R Extensions’ manual.
Flavor: r-oldrel-macos-x86_64
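This failure is a dependency-version mismatch rather than a bug in bayestestR itself: the flavor's repository offered an insight binary older than the minimum version that bayestestR declares. A minimal sketch of the kind of DESCRIPTION constraint that triggers it (the exact version number here is an assumption for illustration, not taken from the package's actual DESCRIPTION):

```
Imports:
    insight (>= 1.4.0)
```

With such a constraint, `R CMD check` stops with "required and available but unsuitable version" whenever only an older insight build is installed on the check machine; the error clears once that flavor's insight binary catches up.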

Package insight

Current CRAN status: ERROR: 4, OK: 9

Version: 1.4.0
Check: tests
Result: ERROR
    Running ‘testthat.R’ [206s/104s]
    Running the tests in ‘tests/testthat.R’ failed.
    Complete output:
      > library(testthat)
      > library(insight)
      > test_check("insight")
      Starting 2 test processes
      [ FAIL 2 | WARN 1 | SKIP 90 | PASS 3436 ]

      ══ Skipped tests (90) ══════════════════════════════════════════════════════════
      • On CRAN (82): 'test-GLMMadaptive.R:2:1', 'test-averaging.R:3:1', 'test-bias_correction.R:1:1', 'test-blmer.R:249:3', 'test-brms.R:1:1', 'test-brms_aterms.R:1:1', 'test-brms_gr_random_effects.R:1:1', 'test-brms_missing.R:1:1', 'test-brms_mm.R:1:1', 'test-brms_von_mises.R:1:1', 'test-clean_names.R:103:3', 'test-clean_parameters.R:1:1', 'test-coxme.R:7:1', 'test-clmm.R:165:3', 'test-cpglmm.R:145:3', 'test-display.R:10:3', 'test-display.R:32:3', 'test-export_table.R:4:3', 'test-export_table.R:8:3', 'test-export_table.R:106:3', 'test-export_table.R:133:3', 'test-export_table.R:164:3', 'test-export_table.R:192:3', 'test-export_table.R:204:3', 'test-export_table.R:232:3', 'test-export_table.R:292:3', 'test-export_table.R:309:3', 'test-export_table.R:372:3', 'test-find_smooth.R:31:3', 'test-find_random.R:27:3', 'test-format_table.R:2:1', 'test-format_table_ci.R:71:3', 'test-gam.R:2:1', 'test-get_data.R:385:1', 'test-get_loglikelihood.R:94:3', 'test-get_loglikelihood.R:159:3', 'test-get_predicted.R:2:1', 'test-get_priors.R:3:3', 'test-get_priors.R:18:3', 'test-get_varcov.R:40:3', 'test-get_datagrid.R:692:3', 'test-is_converged.R:28:1', 'test-iv_robust.R:120:3', 'test-lavaan.R:1:1', 'test-lme.R:34:3', 'test-lme.R:210:3', 'test-glmmTMB.R:71:3', 'test-glmmTMB.R:755:3', 'test-glmmTMB.R:787:3', 'test-glmmTMB.R:1095:3', 'test-marginaleffects.R:1:1', 'test-mgcv.R:1:1', 'test-mipo.R:1:1', 'test-mlogit.R:2:1', 'test-model_info.R:110:3', 'test-mvrstanarm.R:1:1', 'test-null_model.R:71:3', 'test-panelr-asym.R:142:3', 'test-panelr.R:272:3', 'test-phylolm.R:5:1', 'test-print_parameters.R:1:1', 'test-r2_nakagawa_bernoulli.R:1:1', 'test-r2_nakagawa_beta.R:1:1', 'test-r2_nakagawa_binomial.R:1:1', 'test-r2_nakagawa_gamma.R:1:1', 'test-r2_nakagawa_linear.R:1:1', 'test-r2_nakagawa_negbin.R:1:1', 'test-r2_nakagawa_negbin_zi.R:1:1', 'test-r2_nakagawa_ordered_beta.R:1:1', 'test-r2_nakagawa_poisson.R:1:1', 'test-r2_nakagawa_poisson_zi.R:1:1', 'test-r2_nakagawa_truncated_poisson.R:1:1', 'test-r2_nakagawa_tweedie.R:1:1', 'test-rlmer.R:259:3', 'test-rqss.R:1:1', 'test-rstanarm.R:1:1', 'test-sdmTMB.R:1:1', 'test-selection.R:2:1', 'test-spatial.R:2:1', 'test-svylme.R:1:1', 'test-vgam.R:2:1', 'test-weightit.R:1:1'
      • On Linux (3): 'test-BayesFactorBF.R:1:1', 'test-MCMCglmm.R:1:1', 'test-get_data.R:150:3'
      • Package `logistf` is loaded and breaks `mmrm::mmrm()` (1): 'test-mmrm.R:4:1'
      • TRUE is TRUE (1): 'test-fixest.R:2:1'
      • works interactively (2): 'test-coxph-panel.R:34:3', 'test-coxph.R:38:3'
      • {bigglm} is not installed (1): 'test-model_info.R:24:3'

      ══ Failed tests ════════════════════════════════════════════════════════════════
      ── Failure ('test-get_datagrid.R:352:3'): get_datagrid - marginaleffects ───────
      dim(res) (`actual`) not identical to c(6L, 2L) (`expected`).
      `actual`:   6 0
      `expected`: 6 2
      ── Error ('test-get_datagrid.R:353:3'): get_datagrid - marginaleffects ─────────
      <subscriptOutOfBoundsError/error/condition>
      Error in `.subset2(x, i, exact = exact)`: subscript out of bounds
      Backtrace:
          ▆
       1. ├─testthat::expect_true(all(c(4, 6, 8) %in% res[[1]])) at test-get_datagrid.R:353:3
       2. │ └─testthat::quasi_label(enquo(object), label, arg = "object")
       3. │ └─rlang::eval_bare(expr, quo_get_env(quo))
       4. ├─c(4, 6, 8) %in% res[[1]]
       5. ├─res[[1]]
       6. └─base::`[[.data.frame`(res, 1)
       7. └─(function(x, i, exact) if (is.matrix(i)) as.matrix(x)[[i]] else .subset2(x, ...
      [ FAIL 2 | WARN 1 | SKIP 90 | PASS 3436 ]
      Error: Test failures
      Execution halted
Flavor: r-devel-linux-x86_64-debian-gcc

Version: 1.4.0
Check: tests
Result: ERROR
    Running ‘testthat.R’ [117s/67s]
    Running the tests in ‘tests/testthat.R’ failed.
    Complete output:
      > library(testthat)
      > library(insight)
      > test_check("insight")
      Starting 2 test processes
      [ FAIL 1 | WARN 0 | SKIP 93 | PASS 3209 ]

      ══ Skipped tests (93) ══════════════════════════════════════════════════════════
      • Installed parameters is version 0.27.0; but 0.27.0.1 is required (1): 'test-export_table.R:231:3'
      • On CRAN (70): 'test-GLMMadaptive.R:2:1', 'test-averaging.R:3:1', 'test-bias_correction.R:1:1', 'test-brms.R:1:1', 'test-brms_aterms.R:1:1', 'test-brms_gr_random_effects.R:1:1', 'test-brms_missing.R:1:1', 'test-brms_mm.R:1:1', 'test-blmer.R:249:3', 'test-brms_von_mises.R:1:1', 'test-clean_parameters.R:1:1', 'test-clean_names.R:103:3', 'test-coxme.R:7:1', 'test-clmm.R:165:3', 'test-cpglmm.R:145:3', 'test-display.R:10:3', 'test-display.R:32:3', 'test-export_table.R:4:3', 'test-export_table.R:8:3', 'test-export_table.R:106:3', 'test-export_table.R:133:3', 'test-export_table.R:164:3', 'test-export_table.R:192:3', 'test-export_table.R:204:3', 'test-export_table.R:292:3', 'test-export_table.R:309:3', 'test-export_table.R:372:3', 'test-find_random.R:27:3', 'test-find_smooth.R:31:3', 'test-format_table.R:2:1', 'test-format_table_ci.R:71:3', 'test-gam.R:2:1', 'test-get_loglikelihood.R:94:3', 'test-get_loglikelihood.R:159:3', 'test-get_datagrid.R:692:3', 'test-get_varcov.R:40:3', 'test-iv_robust.R:120:3', 'test-lavaan.R:1:1', 'test-lme.R:34:3', 'test-lme.R:210:3', 'test-marginaleffects.R:1:1', 'test-mgcv.R:1:1', 'test-mipo.R:1:1', 'test-mlogit.R:2:1', 'test-model_info.R:110:3', 'test-mvrstanarm.R:1:1', 'test-panelr-asym.R:142:3', 'test-panelr.R:272:3', 'test-phylolm.R:5:1', 'test-print_parameters.R:1:1', 'test-r2_nakagawa_bernoulli.R:1:1', 'test-r2_nakagawa_beta.R:1:1', 'test-r2_nakagawa_binomial.R:1:1', 'test-r2_nakagawa_gamma.R:1:1', 'test-r2_nakagawa_linear.R:1:1', 'test-r2_nakagawa_negbin.R:1:1', 'test-r2_nakagawa_negbin_zi.R:1:1', 'test-r2_nakagawa_ordered_beta.R:1:1', 'test-r2_nakagawa_poisson.R:1:1', 'test-r2_nakagawa_poisson_zi.R:1:1', 'test-r2_nakagawa_truncated_poisson.R:1:1', 'test-r2_nakagawa_tweedie.R:1:1', 'test-rlmer.R:259:3', 'test-rqss.R:1:1', 'test-rstanarm.R:1:1', 'test-sdmTMB.R:1:1', 'test-spatial.R:2:1', 'test-svylme.R:1:1', 'test-vgam.R:2:1', 'test-weightit.R:1:1'
      • On Mac (14): 'test-MCMCglmm.R:1:1', 'test-epiR.R:1:1', 'test-get_data.R:1:1', 'test-get_datagrid.R:248:3', 'test-get_predicted.R:1:1', 'test-get_priors.R:2:3', 'test-get_priors.R:17:3', 'test-get_random.R:1:1', 'test-glmmTMB.R:1:1', 'test-is_converged.R:27:1', 'test-model_data.R:26:1', 'test-null_model.R:14:1', 'test-selection.R:1:1', 'test-vglm.R:1:1'
      • Package `logistf` is loaded and breaks `mmrm::mmrm()` (1): 'test-mmrm.R:4:1'
      • TRUE is TRUE (1): 'test-fixest.R:2:1'
      • getRversion() < "4.5.0" is TRUE (2): 'test-aov.R:2:3', 'test-dbart.R:2:1'
      • works interactively (2): 'test-coxph-panel.R:34:3', 'test-coxph.R:38:3'
      • {bigglm} is not installed (1): 'test-model_info.R:24:3'
      • {rstanarm} cannot be loaded (1): 'test-check_if_installed.R:4:3'

      ══ Failed tests ════════════════════════════════════════════════════════════════
      ── Error ('test-betareg.R:200:5'): get_predicted ───────────────────────────────
      Error in `FUN(X[[i]], ...)`: REAL() can only be applied to a 'numeric', not a 'logical'
      Backtrace:
          ▆
       1. ├─base::suppressWarnings(get_predicted(mp3)) at test-betareg.R:200:5
       2. │ └─base::withCallingHandlers(...)
       3. ├─insight::get_predicted(mp3)
       4. └─insight:::get_predicted.default(mp3)
       5. └─insight:::.get_predicted_transform(...)
       6. └─base::lapply(ci_data[!se_col], link_inv)
       7. └─stats (local) FUN(X[[i]], ...)
      [ FAIL 1 | WARN 0 | SKIP 93 | PASS 3209 ]
      Error: Test failures
      Execution halted
Flavor: r-oldrel-macos-arm64

Version: 1.4.1
Check: tests
Result: ERROR
    Running ‘testthat.R’ [197s/134s]
    Running the tests in ‘tests/testthat.R’ failed.
    Complete output:
      > library(testthat)
      > library(insight)
      > test_check("insight")
      Starting 2 test processes
      [ FAIL 1 | WARN 0 | SKIP 94 | PASS 3227 ]

      ══ Skipped tests (94) ══════════════════════════════════════════════════════════
      • Installed parameters is version 0.27.0; but 0.27.0.1 is required (1): 'test-export_table.R:231:3'
      • On CRAN (73): 'test-GLMMadaptive.R:2:1', 'test-averaging.R:3:1', 'test-bias_correction.R:1:1', 'test-blmer.R:249:3', 'test-brms.R:1:1', 'test-brms_aterms.R:1:1', 'test-brms_gr_random_effects.R:1:1', 'test-brms_missing.R:1:1', 'test-brms_mm.R:1:1', 'test-brms_von_mises.R:1:1', 'test-clean_names.R:103:3', 'test-clean_parameters.R:1:1', 'test-coxme.R:7:1', 'test-clmm.R:165:3', 'test-cpglmm.R:145:3', 'test-display.R:10:3', 'test-display.R:32:3', 'test-export_table.R:4:3', 'test-export_table.R:8:3', 'test-export_table.R:106:3', 'test-export_table.R:133:3', 'test-export_table.R:164:3', 'test-export_table.R:192:3', 'test-export_table.R:204:3', 'test-export_table.R:292:3', 'test-export_table.R:309:3', 'test-export_table.R:372:3', 'test-find_random.R:27:3', 'test-fixest.R:2:1', 'test-format_table.R:2:1', 'test-format_table_ci.R:71:3', 'test-gam.R:2:1', 'test-find_smooth.R:31:3', 'test-get_loglikelihood.R:94:3', 'test-get_loglikelihood.R:159:3', 'test-get_datagrid.R:717:3', 'test-get_datagrid.R:754:5', 'test-get_varcov.R:41:3', 'test-get_varcov.R:55:3', 'test-iv_robust.R:120:3', 'test-lavaan.R:1:1', 'test-lme.R:34:3', 'test-lme.R:210:3', 'test-marginaleffects.R:1:1', 'test-mgcv.R:1:1', 'test-mipo.R:1:1', 'test-mlogit.R:2:1', 'test-model_info.R:110:3', 'test-mvrstanarm.R:1:1', 'test-panelr-asym.R:142:3', 'test-panelr.R:272:3', 'test-phylolm.R:5:1', 'test-print_parameters.R:1:1', 'test-r2_nakagawa_bernoulli.R:1:1', 'test-r2_nakagawa_beta.R:1:1', 'test-r2_nakagawa_binomial.R:1:1', 'test-r2_nakagawa_gamma.R:1:1', 'test-r2_nakagawa_linear.R:1:1', 'test-r2_nakagawa_negbin.R:1:1', 'test-r2_nakagawa_negbin_zi.R:1:1', 'test-r2_nakagawa_ordered_beta.R:1:1', 'test-r2_nakagawa_poisson.R:1:1', 'test-r2_nakagawa_poisson_zi.R:1:1', 'test-r2_nakagawa_truncated_poisson.R:1:1', 'test-r2_nakagawa_tweedie.R:1:1', 'test-rlmer.R:259:3', 'test-rqss.R:1:1', 'test-rstanarm.R:1:1', 'test-sdmTMB.R:1:1', 'test-spatial.R:2:1', 'test-svylme.R:1:1', 'test-vgam.R:2:1', 'test-weightit.R:1:1'
      • On Mac (14): 'test-MCMCglmm.R:1:1', 'test-epiR.R:1:1', 'test-get_data.R:1:1', 'test-get_datagrid.R:248:3', 'test-get_predicted.R:1:1', 'test-get_priors.R:2:3', 'test-get_priors.R:17:3', 'test-get_random.R:1:1', 'test-glmmTMB.R:1:1', 'test-is_converged.R:27:1', 'test-model_data.R:26:1', 'test-null_model.R:14:1', 'test-selection.R:1:1', 'test-vglm.R:1:1'
      • Package `logistf` is loaded and breaks `mmrm::mmrm()` (1): 'test-mmrm.R:4:1'
      • getRversion() < "4.5.0" is TRUE (2): 'test-aov.R:2:3', 'test-dbart.R:2:1'
      • works interactively (2): 'test-coxph-panel.R:34:3', 'test-coxph.R:38:3'
      • {bigglm} is not installed (1): 'test-model_info.R:24:3'

      ══ Failed tests ════════════════════════════════════════════════════════════════
      ── Error ('test-betareg.R:200:5'): get_predicted ───────────────────────────────
      Error in `FUN(X[[i]], ...)`: REAL() can only be applied to a 'numeric', not a 'logical'
      Backtrace:
          ▆
       1. ├─base::suppressWarnings(get_predicted(mp3)) at test-betareg.R:200:5
       2. │ └─base::withCallingHandlers(...)
       3. ├─insight::get_predicted(mp3)
       4. └─insight:::get_predicted.default(mp3)
       5. └─insight:::.get_predicted_transform(...)
       6. └─base::lapply(ci_data[!se_col], link_inv)
       7. └─stats (local) FUN(X[[i]], ...)
      [ FAIL 1 | WARN 0 | SKIP 94 | PASS 3227 ]
      Error: Test failures
      Execution halted
Flavor: r-oldrel-macos-x86_64

Version: 1.4.0
Check: tests
Result: ERROR
    Running 'testthat.R' [221s]
    Running the tests in 'tests/testthat.R' failed.
    Complete output:
      > library(testthat)
      > library(insight)
      > test_check("insight")
      Starting 2 test processes
      [ FAIL 2 | WARN 0 | SKIP 89 | PASS 3532 ]

      ══ Skipped tests (89) ══════════════════════════════════════════════════════════
      • On CRAN (81): 'test-GLMMadaptive.R:2:1', 'test-averaging.R:3:1', 'test-bias_correction.R:1:1', 'test-blmer.R:249:3', 'test-brms_aterms.R:1:1', 'test-brms.R:1:1', 'test-brms_gr_random_effects.R:1:1', 'test-brms_missing.R:1:1', 'test-brms_mm.R:1:1', 'test-brms_von_mises.R:1:1', 'test-clean_names.R:103:3', 'test-clean_parameters.R:1:1', 'test-coxme.R:7:1', 'test-cpglmm.R:145:3', 'test-clmm.R:165:3', 'test-display.R:10:3', 'test-display.R:32:3', 'test-export_table.R:4:3', 'test-export_table.R:8:3', 'test-export_table.R:106:3', 'test-export_table.R:133:3', 'test-export_table.R:164:3', 'test-export_table.R:192:3', 'test-export_table.R:204:3', 'test-export_table.R:232:3', 'test-export_table.R:292:3', 'test-export_table.R:309:3', 'test-export_table.R:372:3', 'test-find_smooth.R:31:3', 'test-find_random.R:27:3', 'test-format_table.R:2:1', 'test-format_table_ci.R:71:3', 'test-gam.R:2:1', 'test-get_data.R:385:1', 'test-get_loglikelihood.R:94:3', 'test-get_loglikelihood.R:159:3', 'test-get_predicted.R:2:1', 'test-get_priors.R:18:3', 'test-get_varcov.R:40:3', 'test-get_datagrid.R:692:3', 'test-is_converged.R:28:1', 'test-iv_robust.R:120:3', 'test-lavaan.R:1:1', 'test-lme.R:34:3', 'test-lme.R:210:3', 'test-glmmTMB.R:71:3', 'test-glmmTMB.R:755:3', 'test-glmmTMB.R:787:3', 'test-glmmTMB.R:1095:3', 'test-marginaleffects.R:1:1', 'test-mgcv.R:1:1', 'test-mipo.R:1:1', 'test-mlogit.R:2:1', 'test-model_info.R:110:3', 'test-mvrstanarm.R:1:1', 'test-null_model.R:71:3', 'test-panelr-asym.R:142:3', 'test-panelr.R:272:3', 'test-phylolm.R:5:1', 'test-print_parameters.R:1:1', 'test-r2_nakagawa_bernoulli.R:1:1', 'test-r2_nakagawa_beta.R:1:1', 'test-r2_nakagawa_binomial.R:1:1', 'test-r2_nakagawa_gamma.R:1:1', 'test-r2_nakagawa_linear.R:1:1', 'test-r2_nakagawa_negbin.R:1:1', 'test-r2_nakagawa_negbin_zi.R:1:1', 'test-r2_nakagawa_ordered_beta.R:1:1', 'test-r2_nakagawa_poisson.R:1:1', 'test-r2_nakagawa_poisson_zi.R:1:1', 'test-r2_nakagawa_truncated_poisson.R:1:1', 'test-r2_nakagawa_tweedie.R:1:1', 'test-rlmer.R:259:3', 'test-rqss.R:1:1', 'test-rstanarm.R:1:1', 'test-sdmTMB.R:1:1', 'test-selection.R:2:1', 'test-spatial.R:2:1', 'test-svylme.R:1:1', 'test-vgam.R:2:1', 'test-weightit.R:1:1'
      • On Windows (1): 'test-get_priors.R:2:3'
      • Package `logistf` is loaded and breaks `mmrm::mmrm()` (1): 'test-mmrm.R:4:1'
      • TRUE is TRUE (1): 'test-fixest.R:2:1'
      • getRversion() < "4.5.0" is TRUE (2): 'test-aov.R:2:3', 'test-dbart.R:2:1'
      • works interactively (2): 'test-coxph-panel.R:34:3', 'test-coxph.R:38:3'
      • {bigglm} is not installed (1): 'test-model_info.R:24:3'

      ══ Failed tests ════════════════════════════════════════════════════════════════
      ── Failure ('test-get_datagrid.R:352:3'): get_datagrid - marginaleffects ───────
      dim(res) (`actual`) not identical to c(6L, 2L) (`expected`).
      `actual`:   6 0
      `expected`: 6 2
      ── Error ('test-get_datagrid.R:353:3'): get_datagrid - marginaleffects ─────────
      <subscriptOutOfBoundsError/error/condition>
      Error in `.subset2(x, i, exact = exact)`: subscript out of bounds
      Backtrace:
          ▆
       1. ├─testthat::expect_true(all(c(4, 6, 8) %in% res[[1]])) at test-get_datagrid.R:353:3
       2. │ └─testthat::quasi_label(enquo(object), label, arg = "object")
       3. │ └─rlang::eval_bare(expr, quo_get_env(quo))
       4. ├─c(4, 6, 8) %in% res[[1]]
       5. ├─res[[1]]
       6. └─base::`[[.data.frame`(res, 1)
       7. └─(function(x, i, exact) if (is.matrix(i)) as.matrix(x)[[i]] else .subset2(x, ...
      [ FAIL 2 | WARN 0 | SKIP 89 | PASS 3532 ]
      Error: Test failures
      Execution halted
Flavor: r-oldrel-windows-x86_64

Package modelbased

Current CRAN status: ERROR: 4, OK: 9

Version: 0.12.0
Check: tests
Result: ERROR
    Running ‘testthat.R’ [42s/22s]
    Running the tests in ‘tests/testthat.R’ failed.
    Complete output:
      > # This file is part of the standard setup for testthat.
      > # It is recommended that you do not modify it.
      > #
      > # Where should you do additional test configuration?
      > #
      > # * https://r-pkgs.org/tests.html
      > # * https://testthat.r-lib.org/reference/test_package.html#special-files
      > library(testthat)
      > library(modelbased)
      >
      > test_check("modelbased")
      Starting 2 test processes
      [ FAIL 2 | WARN 0 | SKIP 46 | PASS 224 ]

      ══ Skipped tests (46) ══════════════════════════════════════════════════════════
      • .Platform$OS.type == "windows" is not TRUE (1): 'test-estimate_predicted.R:58:3'
      • On CRAN (38): 'test-backtransform_invlink.R:1:1', 'test-betareg.R:1:1', 'test-bias_correction.R:1:1', 'test-brms-marginaleffects.R:1:1', 'test-brms.R:1:1', 'test-estimate_contrasts-average.R:1:1', 'test-estimate_contrasts.R:1:1', 'test-estimate_contrasts_bookexamples.R:1:1', 'test-estimate_contrasts_effectsize.R:1:1', 'test-estimate_contrasts_methods.R:1:1', 'test-estimate_filter.R:1:1', 'test-estimate_grouplevel.R:54:3', 'test-estimate_grouplevel.R:72:3', 'test-estimate_grouplevel.R:93:3', 'test-estimate_grouplevel.R:123:3', 'test-estimate_means-average.R:1:1', 'test-estimate_means.R:1:1', 'test-estimate_means_ci.R:1:1', 'test-estimate_means_counterfactuals.R:1:1', 'test-estimate_means_dotargs.R:1:1', 'test-estimate_means_marginalization.R:1:1', 'test-estimate_means_mixed.R:1:1', 'test-estimate_slopes.R:129:1', 'test-g_computation.R:1:1', 'test-get_marginaltrends.R:1:1', 'test-glmmTMB.R:1:1', 'test-joint_test.R:1:1', 'test-keep_iterations.R:1:1', 'test-maihda.R:1:1', 'test-mice.R:1:1', 'test-ordinal.R:1:1', 'test-plot-grouplevel.R:1:1', 'test-predict-dpar.R:1:1', 'test-standardize.R:1:1', 'test-summary_estimate_slopes.R:3:1', 'test-transform_response.R:16:3', 'test-vcov.R:1:1', 'test-zeroinfl.R:1:1'
      • On Linux (7): 'test-plot-facet.R:1:1', 'test-plot-flexible_numeric.R:1:1', 'test-plot-ordinal.R:1:1', 'test-plot-slopes.R:1:1', 'test-plot.R:1:1', 'test-print.R:1:1', 'test-scoping_issues.R:1:1'

      ══ Failed tests ════════════════════════════════════════════════════════════════
      ── Failure ('test-attributes_estimatefun.R:93:3'): attributes_means, slopes ────
      Names of attributes(estim) ('names', 'class', 'row.names', 'trend', 'p_adjust', 'transform', 'coef_name', 'slope', 'ci', 'model_info', 'keep_iterations', 'vcov', 'table_title', 'table_footer', 'model', 'response') don't match 'names', 'class', 'row.names', 'trend', 'comparison', 'p_adjust', 'transform', 'coef_name', 'slope', 'ci', 'model_info', 'keep_iterations', 'vcov', 'table_title', 'table_footer', 'model', 'response'
      ── Error ('test-estimate_slopes.R:81:3'): estimate_slopes, johnson-neyman p-adjust ──
      Error: The entered object is not a model object.
      Backtrace:
          ▆
       1. └─modelbased::estimate_slopes(...) at test-estimate_slopes.R:81:3
       2. └─modelbased::get_marginaltrends(...)
       3. └─modelbased:::.p_adjust(model, estimated, p_adjust, verbose, ...)
       4. └─modelbased:::.p_adjust_esarey(params)
       5. ├─insight::get_df(model, type = "wald")
       6. └─insight:::get_df.default(model, type = "wald")
       7. ├─insight::find_statistic(x)
       8. └─insight:::find_statistic.default(x)
       9. └─insight::format_error("The entered object is not a model object.")
       10. └─insight::format_alert(..., type = "error")
      [ FAIL 2 | WARN 0 | SKIP 46 | PASS 224 ]
      Error: Test failures
      Execution halted
Flavor: r-devel-linux-x86_64-debian-gcc

Version: 0.12.0
Check: tests
Result: ERROR
    Running ‘testthat.R’ [121s/217s]
    Running the tests in ‘tests/testthat.R’ failed.
    Complete output:
      > # This file is part of the standard setup for testthat.
      > # It is recommended that you do not modify it.
      > #
      > # Where should you do additional test configuration?
      > #
      > # * https://r-pkgs.org/tests.html
      > # * https://testthat.r-lib.org/reference/test_package.html#special-files
      > library(testthat)
      > library(modelbased)
      >
      > test_check("modelbased")
      Starting 2 test processes
      [ FAIL 2 | WARN 0 | SKIP 46 | PASS 224 ]

      ══ Skipped tests (46) ══════════════════════════════════════════════════════════
      • .Platform$OS.type == "windows" is not TRUE (1): 'test-estimate_predicted.R:58:3'
      • On CRAN (38): 'test-backtransform_invlink.R:1:1', 'test-betareg.R:1:1', 'test-bias_correction.R:1:1', 'test-brms-marginaleffects.R:1:1', 'test-brms.R:1:1', 'test-estimate_contrasts-average.R:1:1', 'test-estimate_contrasts.R:1:1', 'test-estimate_contrasts_bookexamples.R:1:1', 'test-estimate_contrasts_effectsize.R:1:1', 'test-estimate_contrasts_methods.R:1:1', 'test-estimate_filter.R:1:1', 'test-estimate_grouplevel.R:54:3', 'test-estimate_grouplevel.R:72:3', 'test-estimate_grouplevel.R:93:3', 'test-estimate_grouplevel.R:123:3', 'test-estimate_means-average.R:1:1', 'test-estimate_means.R:1:1', 'test-estimate_means_ci.R:1:1', 'test-estimate_means_counterfactuals.R:1:1', 'test-estimate_means_dotargs.R:1:1', 'test-estimate_means_marginalization.R:1:1', 'test-estimate_means_mixed.R:1:1', 'test-estimate_slopes.R:129:1', 'test-g_computation.R:1:1', 'test-get_marginaltrends.R:1:1', 'test-glmmTMB.R:1:1', 'test-joint_test.R:1:1', 'test-keep_iterations.R:1:1', 'test-maihda.R:1:1', 'test-mice.R:1:1', 'test-ordinal.R:1:1', 'test-plot-grouplevel.R:1:1', 'test-predict-dpar.R:1:1', 'test-standardize.R:1:1', 'test-summary_estimate_slopes.R:3:1', 'test-transform_response.R:16:3', 'test-vcov.R:1:1', 'test-zeroinfl.R:1:1'
      • On Linux (7): 'test-plot-facet.R:1:1', 'test-plot-flexible_numeric.R:1:1', 'test-plot-ordinal.R:1:1', 'test-plot-slopes.R:1:1', 'test-plot.R:1:1', 'test-print.R:1:1', 'test-scoping_issues.R:1:1'

      ══ Failed tests ════════════════════════════════════════════════════════════════
      ── Failure ('test-attributes_estimatefun.R:93:3'): attributes_means, slopes ────
      Names of attributes(estim) ('names', 'class', 'row.names', 'trend', 'p_adjust', 'transform', 'coef_name', 'slope', 'ci', 'model_info', 'keep_iterations', 'vcov', 'table_title', 'table_footer', 'model', 'response') don't match 'names', 'class', 'row.names', 'trend', 'comparison', 'p_adjust', 'transform', 'coef_name', 'slope', 'ci', 'model_info', 'keep_iterations', 'vcov', 'table_title', 'table_footer', 'model', 'response'
      ── Error ('test-estimate_slopes.R:81:3'): estimate_slopes, johnson-neyman p-adjust ──
      Error: The entered object is not a model object.
      Backtrace:
          ▆
       1. └─modelbased::estimate_slopes(...) at test-estimate_slopes.R:81:3
       2. └─modelbased::get_marginaltrends(...)
       3. └─modelbased:::.p_adjust(model, estimated, p_adjust, verbose, ...)
       4. └─modelbased:::.p_adjust_esarey(params)
       5. ├─insight::get_df(model, type = "wald")
       6. └─insight:::get_df.default(model, type = "wald")
       7. ├─insight::find_statistic(x)
       8. └─insight:::find_statistic.default(x)
       9. └─insight::format_error("The entered object is not a model object.")
       10. └─insight::format_alert(..., type = "error")
      [ FAIL 2 | WARN 0 | SKIP 46 | PASS 224 ]
      Error: Test failures
      Execution halted
Flavor: r-devel-linux-x86_64-fedora-clang

Version: 0.12.0
Check: tests
Result: ERROR
    Running ‘testthat.R’ [97s/128s]
    Running the tests in ‘tests/testthat.R’ failed.
    Complete output:
      > # This file is part of the standard setup for testthat.
      > # It is recommended that you do not modify it.
      > #
      > # Where should you do additional test configuration?
      > #
      > # * https://r-pkgs.org/tests.html
      > # * https://testthat.r-lib.org/reference/test_package.html#special-files
      > library(testthat)
      > library(modelbased)
      >
      > test_check("modelbased")
      Starting 2 test processes
      [ FAIL 2 | WARN 0 | SKIP 46 | PASS 224 ]

      ══ Skipped tests (46) ══════════════════════════════════════════════════════════
      • .Platform$OS.type == "windows" is not TRUE (1): 'test-estimate_predicted.R:58:3'
      • On CRAN (38): 'test-backtransform_invlink.R:1:1', 'test-betareg.R:1:1', 'test-bias_correction.R:1:1', 'test-brms-marginaleffects.R:1:1', 'test-brms.R:1:1', 'test-estimate_contrasts-average.R:1:1', 'test-estimate_contrasts.R:1:1', 'test-estimate_contrasts_bookexamples.R:1:1', 'test-estimate_contrasts_effectsize.R:1:1', 'test-estimate_contrasts_methods.R:1:1', 'test-estimate_filter.R:1:1', 'test-estimate_means-average.R:1:1', 'test-estimate_means.R:1:1', 'test-estimate_means_ci.R:1:1', 'test-estimate_means_counterfactuals.R:1:1', 'test-estimate_means_dotargs.R:1:1', 'test-estimate_means_marginalization.R:1:1', 'test-estimate_means_mixed.R:1:1', 'test-estimate_grouplevel.R:54:3', 'test-estimate_grouplevel.R:72:3', 'test-estimate_grouplevel.R:93:3', 'test-estimate_grouplevel.R:123:3', 'test-estimate_slopes.R:129:1', 'test-g_computation.R:1:1', 'test-get_marginaltrends.R:1:1', 'test-glmmTMB.R:1:1', 'test-joint_test.R:1:1', 'test-keep_iterations.R:1:1', 'test-maihda.R:1:1', 'test-mice.R:1:1', 'test-ordinal.R:1:1', 'test-plot-grouplevel.R:1:1', 'test-predict-dpar.R:1:1', 'test-standardize.R:1:1', 'test-summary_estimate_slopes.R:3:1', 'test-vcov.R:1:1', 'test-transform_response.R:16:3', 'test-zeroinfl.R:1:1'
      • On Linux (7): 'test-plot-facet.R:1:1', 'test-plot-flexible_numeric.R:1:1', 'test-plot-ordinal.R:1:1', 'test-plot-slopes.R:1:1', 'test-plot.R:1:1', 'test-print.R:1:1', 'test-scoping_issues.R:1:1'

      ══ Failed tests ════════════════════════════════════════════════════════════════
      ── Failure ('test-attributes_estimatefun.R:93:3'): attributes_means, slopes ────
      Names of attributes(estim) ('names', 'class', 'row.names', 'trend', 'p_adjust', 'transform', 'coef_name', 'slope', 'ci', 'model_info', 'keep_iterations', 'vcov', 'table_title', 'table_footer', 'model', 'response') don't match 'names', 'class', 'row.names', 'trend', 'comparison', 'p_adjust', 'transform', 'coef_name', 'slope', 'ci', 'model_info', 'keep_iterations', 'vcov', 'table_title', 'table_footer', 'model', 'response'
      ── Error ('test-estimate_slopes.R:81:3'): estimate_slopes, johnson-neyman p-adjust ──
      Error: The entered object is not a model object.
      Backtrace:
          ▆
       1. └─modelbased::estimate_slopes(...) at test-estimate_slopes.R:81:3
       2. └─modelbased::get_marginaltrends(...)
       3. └─modelbased:::.p_adjust(model, estimated, p_adjust, verbose, ...)
       4. └─modelbased:::.p_adjust_esarey(params)
       5. ├─insight::get_df(model, type = "wald")
       6. └─insight:::get_df.default(model, type = "wald")
       7. ├─insight::find_statistic(x)
       8. └─insight:::find_statistic.default(x)
       9. └─insight::format_error("The entered object is not a model object.")
       10. └─insight::format_alert(..., type = "error")
      [ FAIL 2 | WARN 0 | SKIP 46 | PASS 224 ]
      Error: Test failures
      Execution halted
Flavor: r-devel-linux-x86_64-fedora-gcc

Version: 0.12.0
Check: tests
Result: ERROR
    Running 'testthat.R' [48s]
    Running the tests in 'tests/testthat.R' failed.
    Complete output:
      > # This file is part of the standard setup for testthat.
      > # It is recommended that you do not modify it.
      > #
      > # Where should you do additional test configuration?
      > #
      > # * https://r-pkgs.org/tests.html
      > # * https://testthat.r-lib.org/reference/test_package.html#special-files
      > library(testthat)
      > library(modelbased)
      >
      > test_check("modelbased")
      Starting 2 test processes
      [ FAIL 2 | WARN 0 | SKIP 52 | PASS 228 ]

      ══ Skipped tests (52) ══════════════════════════════════════════════════════════
      • On CRAN (52): 'test-backtransform_invlink.R:1:1', 'test-betareg.R:1:1', 'test-bias_correction.R:1:1', 'test-brms-marginaleffects.R:1:1', 'test-brms.R:1:1', 'test-estimate_contrasts-average.R:1:1', 'test-estimate_contrasts.R:1:1', 'test-estimate_contrasts_bookexamples.R:1:1', 'test-estimate_contrasts_effectsize.R:1:1', 'test-estimate_contrasts_methods.R:1:1', 'test-estimate_filter.R:1:1', 'test-estimate_grouplevel.R:54:3', 'test-estimate_grouplevel.R:72:3', 'test-estimate_grouplevel.R:93:3', 'test-estimate_grouplevel.R:123:3', 'test-estimate_means-average.R:1:1', 'test-estimate_means.R:1:1', 'test-estimate_means_ci.R:1:1', 'test-estimate_means_counterfactuals.R:1:1', 'test-estimate_means_dotargs.R:1:1', 'test-estimate_means_marginalization.R:1:1', 'test-estimate_means_mixed.R:1:1', 'test-estimate_slopes.R:129:1', 'test-g_computation.R:1:1', 'test-get_marginaltrends.R:1:1', 'test-glmmTMB.R:1:1', 'test-joint_test.R:1:1', 'test-keep_iterations.R:1:1', 'test-maihda.R:1:1', 'test-mice.R:1:1', 'test-ordinal.R:1:1', 'test-plot-facet.R:7:1', 'test-plot-flexible_numeric.R:6:1', 'test-plot-grouplevel.R:1:1', 'test-plot-ordinal.R:8:1', 'test-plot-slopes.R:6:1', 'test-plot.R:7:1', 'test-predict-dpar.R:1:1', 'test-print.R:14:3', 'test-print.R:26:3', 'test-print.R:37:3', 'test-print.R:50:3', 'test-print.R:65:3', 'test-print.R:82:5', 'test-print.R:96:3', 'test-print.R:110:3', 'test-print.R:118:3', 'test-standardize.R:1:1', 'test-summary_estimate_slopes.R:3:1', 'test-transform_response.R:16:3', 'test-vcov.R:1:1', 'test-zeroinfl.R:1:1'

      ══ Failed tests ════════════════════════════════════════════════════════════════
      ── Failure ('test-attributes_estimatefun.R:93:3'): attributes_means, slopes ────
      Names of attributes(estim) ('names', 'class', 'row.names', 'trend', 'p_adjust', 'transform', 'coef_name', 'slope', 'ci', 'model_info', 'keep_iterations', 'vcov', 'table_title', 'table_footer', 'model', 'response') don't match 'names', 'class', 'row.names', 'trend', 'comparison', 'p_adjust', 'transform', 'coef_name', 'slope', 'ci', 'model_info', 'keep_iterations', 'vcov', 'table_title', 'table_footer', 'model', 'response'
      ── Error ('test-estimate_slopes.R:81:3'): estimate_slopes, johnson-neyman p-adjust ──
      Error: The entered object is not a model object.
      Backtrace:
          ▆
       1. └─modelbased::estimate_slopes(...) at test-estimate_slopes.R:81:3
       2. └─modelbased::get_marginaltrends(...)
       3. └─modelbased:::.p_adjust(model, estimated, p_adjust, verbose, ...)
       4. └─modelbased:::.p_adjust_esarey(params)
       5. ├─insight::get_df(model, type = "wald")
       6. └─insight:::get_df.default(model, type = "wald")
       7. ├─insight::find_statistic(x)
       8. └─insight:::find_statistic.default(x)
       9. └─insight::format_error("The entered object is not a model object.")
       10. └─insight::format_alert(..., type = "error")
      [ FAIL 2 | WARN 0 | SKIP 52 | PASS 228 ]
      Error: Test failures
      Execution halted
Flavor: r-oldrel-windows-x86_64

Package parameters

Current CRAN status: ERROR: 6, OK: 7

Version: 0.28.0
Check: tests
Result: ERROR Running ‘testthat.R’ [105s/55s] Running the tests in ‘tests/testthat.R’ failed. Complete output: > library(parameters) > library(testthat) > > test_check("parameters") Starting 2 test processes [ FAIL 1 | WARN 4 | SKIP 124 | PASS 692 ] ══ Skipped tests (124) ═════════════════════════════════════════════════════════ • On CRAN (115): 'test-GLMMadaptive.R:1:1', 'test-averaging.R:1:1', 'test-backticks.R:1:1', 'test-bootstrap_emmeans.R:1:1', 'test-bootstrap_parameters.R:1:1', 'test-brms.R:1:1', 'test-compare_parameters.R:91:7', 'test-compare_parameters.R:95:5', 'test-complete_separation.R:14:5', 'test-complete_separation.R:24:5', 'test-complete_separation.R:35:5', 'test-coxph.R:79:5', 'test-efa.R:1:1', 'test-emmGrid-df_colname.R:1:1', 'test-equivalence_test.R:10:3', 'test-equivalence_test.R:18:3', 'test-equivalence_test.R:82:3', 'test-factor_analysis.R:2:3', 'test-factor_analysis.R:124:3', 'test-format_model_parameters2.R:2:3', 'test-gam.R:30:1', 'test-get_scores.R:1:1', 'test-glmer.R:1:1', 'test-glmmTMB-2.R:1:1', 'test-glmmTMB-profile_CI.R:2:3', 'test-glmmTMB.R:8:1', 'test-group_level_total.R:2:1', 'test-helper.R:1:1', 'test-ivreg.R:54:3', 'test-include_reference.R:16:3', 'test-include_reference.R:69:3', 'test-include_reference.R:121:3', 'test-lmerTest.R:1:1', 'test-mipo.R:19:3', 'test-mipo.R:33:3', 'test-mmrm.R:1:1', 'test-model_parameters.anova.R:1:1', 'test-model_parameters.aov.R:1:1', 'test-marginaleffects.R:131:3', 'test-marginaleffects.R:154:3', 'test-marginaleffects.R:173:3', 'test-model_parameters.aov_es_ci.R:158:3', 'test-model_parameters.aov_es_ci.R:269:3', 'test-model_parameters.aov_es_ci.R:319:3', 'test-model_parameters.aov_es_ci.R:372:3', 'test-model_parameters.bracl.R:5:1', 'test-model_parameters.cgam.R:1:1', 'test-model_parameters.coxme.R:1:1', 'test-model_parameters.epi2x2.R:1:1', 'test-model_parameters.fixest_multi.R:3:1', 'test-model_parameters.fixest.R:2:3', 'test-model_parameters.fixest.R:77:3', 'test-model_parameters.fixest.R:147:5', 
'test-model_parameters.glmgee.R:1:1', 'test-model_parameters.glm.R:40:3', 'test-model_parameters.glm.R:76:3', 'test-model_parameters.logistf.R:1:1', 'test-model_parameters.logitr.R:1:1', 'test-model_parameters.mclogit.R:5:1', 'test-model_parameters.mediate.R:32:3', 'test-model_parameters.mixed.R:2:1', 'test-model_parameters.nnet.R:5:1', 'test-model_parameters.vgam.R:3:1', 'test-model_parameters_df.R:1:1', 'test-model_parameters_ordinal.R:1:1', 'test-model_parameters_random_pars.R:1:1', 'test-model_parameters_std.R:1:1', 'test-model_parameters_std_mixed.R:3:1', 'test-n_factors.R:10:3', 'test-n_factors.R:26:3', 'test-n_factors.R:76:3', 'test-p_adjust.R:1:1', 'test-p_direction.R:1:1', 'test-p_significance.R:1:1', 'test-p_value.R:14:1', 'test-panelr.R:1:1', 'test-pipe.R:1:1', 'test-pca.R:66:3', 'test-polr.R:2:1', 'test-plm.R:111:3', 'test-posterior.R:2:1', 'test-pool_parameters.R:11:3', 'test-pool_parameters.R:32:1', 'test-print_AER_labels.R:11:5', 'test-printing-stan.R:2:1', 'test-printing.R:1:1', 'test-pretty_names.R:65:5', 'test-pretty_names.R:82:7', 'test-quantreg.R:1:1', 'test-random_effects_ci.R:4:1', 'test-printing2.R:15:7', 'test-printing2.R:22:7', 'test-printing2.R:27:7', 'test-printing2.R:32:7', 'test-printing2.R:37:7', 'test-printing2.R:49:7', 'test-printing2.R:91:7', 'test-printing2.R:127:7', 'test-robust.R:2:1', 'test-rstanarm.R:2:1', 'test-sampleSelection.R:2:1', 'test-serp.R:16:5', 'test-svylme.R:1:1', 'test-visualisation_recipe.R:7:3', 'test-weightit.R:23:3', 'test-weightit.R:43:3', 'test-wrs2.R:58:3', 'test-standardize_parameters.R:31:3', 'test-standardize_parameters.R:36:3', 'test-standardize_parameters.R:61:3', 'test-standardize_parameters.R:173:3', 'test-standardize_parameters.R:298:3', 'test-standardize_parameters.R:333:3', 'test-standardize_parameters.R:426:3', 'test-standardize_parameters.R:516:3' • On Linux (5): 'test-model_parameters.BFBayesFactor.R:1:1', 'test-nestedLogit.R:78:3', 'test-random_effects_ci-glmmTMB.R:3:1', 
'test-simulate_model.R:1:1', 'test-simulate_parameters.R:1:1'
• TODO: fix this test (1): 'test-model_parameters.lqmm.R:40:3'
• TODO: this one actually is not correct. (1): 'test-model_parameters_robust.R:127:3'
• empty test (2): 'test-wrs2.R:69:1', 'test-wrs2.R:81:1'
══ Failed tests ════════════════════════════════════════════════════════════════
── Failure ('test-marginaleffects.R:52:3'): predictions() ──────────────────────
Names of `out` ('Predicted', 'SE', 'CI', 'CI_low', 'CI_high', 'S', 'Statistic', 'df', 'p', 'Species')
don't match 'Predicted', 'Species', 'SE', 'CI', 'CI_low', 'CI_high', 'S', 'Statistic', 'df', 'p'
[ FAIL 1 | WARN 4 | SKIP 124 | PASS 692 ]
Error: Test failures
Execution halted
Flavor: r-devel-linux-x86_64-debian-gcc
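The predictions() failure above is a column-ordering mismatch: `out` contains exactly the expected names, but `Species` now appears last instead of second. If column order is not part of the contract being tested, the expectation can be made order-insensitive. A minimal sketch, assuming the test objects follow the names shown in the output above:

```r
library(testthat)

# Hypothetical reconstruction of the expectation in test-marginaleffects.R.
expected <- c(
  "Predicted", "Species", "SE", "CI", "CI_low",
  "CI_high", "S", "Statistic", "df", "p"
)

# expect_named() can ignore ordering, so upstream changes in the
# column layout of marginaleffects::predictions() no longer break
# the test as long as the same names are present.
expect_named(out, expected, ignore.order = TRUE)
```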

Version: 0.28.0
Check: tests
Result: ERROR Running ‘testthat.R’ [250s/335s] Running the tests in ‘tests/testthat.R’ failed. Complete output: > library(parameters) > library(testthat) > > test_check("parameters") Starting 2 test processes [ FAIL 1 | WARN 0 | SKIP 124 | PASS 692 ] ══ Skipped tests (124) ═════════════════════════════════════════════════════════ • On CRAN (115): 'test-GLMMadaptive.R:1:1', 'test-averaging.R:1:1', 'test-backticks.R:1:1', 'test-bootstrap_emmeans.R:1:1', 'test-bootstrap_parameters.R:1:1', 'test-brms.R:1:1', 'test-compare_parameters.R:91:7', 'test-compare_parameters.R:95:5', 'test-complete_separation.R:14:5', 'test-complete_separation.R:24:5', 'test-complete_separation.R:35:5', 'test-coxph.R:79:5', 'test-efa.R:1:1', 'test-emmGrid-df_colname.R:1:1', 'test-equivalence_test.R:10:3', 'test-equivalence_test.R:18:3', 'test-equivalence_test.R:82:3', 'test-factor_analysis.R:2:3', 'test-factor_analysis.R:124:3', 'test-format_model_parameters2.R:2:3', 'test-gam.R:30:1', 'test-get_scores.R:1:1', 'test-glmer.R:1:1', 'test-glmmTMB-2.R:1:1', 'test-glmmTMB-profile_CI.R:2:3', 'test-glmmTMB.R:8:1', 'test-group_level_total.R:2:1', 'test-helper.R:1:1', 'test-ivreg.R:54:3', 'test-include_reference.R:16:3', 'test-include_reference.R:69:3', 'test-include_reference.R:121:3', 'test-lmerTest.R:1:1', 'test-mipo.R:19:3', 'test-mipo.R:33:3', 'test-mmrm.R:1:1', 'test-model_parameters.anova.R:1:1', 'test-model_parameters.aov.R:1:1', 'test-model_parameters.aov_es_ci.R:158:3', 'test-model_parameters.aov_es_ci.R:269:3', 'test-model_parameters.aov_es_ci.R:319:3', 'test-model_parameters.aov_es_ci.R:372:3', 'test-marginaleffects.R:131:3', 'test-marginaleffects.R:154:3', 'test-marginaleffects.R:173:3', 'test-model_parameters.bracl.R:5:1', 'test-model_parameters.cgam.R:1:1', 'test-model_parameters.coxme.R:1:1', 'test-model_parameters.epi2x2.R:1:1', 'test-model_parameters.fixest.R:2:3', 'test-model_parameters.fixest.R:77:3', 'test-model_parameters.fixest.R:147:5', 'test-model_parameters.fixest_multi.R:3:1', 
'test-model_parameters.glmgee.R:1:1', 'test-model_parameters.logistf.R:1:1', 'test-model_parameters.logitr.R:1:1', 'test-model_parameters.glm.R:40:3', 'test-model_parameters.glm.R:76:3', 'test-model_parameters.mclogit.R:5:1', 'test-model_parameters.mediate.R:32:3', 'test-model_parameters.mixed.R:2:1', 'test-model_parameters.nnet.R:5:1', 'test-model_parameters.vgam.R:3:1', 'test-model_parameters_df.R:1:1', 'test-model_parameters_ordinal.R:1:1', 'test-model_parameters_random_pars.R:1:1', 'test-model_parameters_std.R:1:1', 'test-model_parameters_std_mixed.R:3:1', 'test-n_factors.R:10:3', 'test-n_factors.R:26:3', 'test-n_factors.R:76:3', 'test-p_adjust.R:1:1', 'test-p_direction.R:1:1', 'test-p_significance.R:1:1', 'test-p_value.R:14:1', 'test-panelr.R:1:1', 'test-pipe.R:1:1', 'test-pca.R:66:3', 'test-polr.R:2:1', 'test-plm.R:111:3', 'test-posterior.R:2:1', 'test-pretty_names.R:65:5', 'test-pretty_names.R:82:7', 'test-pool_parameters.R:11:3', 'test-pool_parameters.R:32:1', 'test-printing-stan.R:2:1', 'test-printing.R:1:1', 'test-print_AER_labels.R:11:5', 'test-quantreg.R:1:1', 'test-random_effects_ci.R:4:1', 'test-robust.R:2:1', 'test-rstanarm.R:2:1', 'test-sampleSelection.R:2:1', 'test-serp.R:16:5', 'test-printing2.R:15:7', 'test-printing2.R:22:7', 'test-printing2.R:27:7', 'test-printing2.R:32:7', 'test-printing2.R:37:7', 'test-printing2.R:49:7', 'test-printing2.R:91:7', 'test-printing2.R:127:7', 'test-svylme.R:1:1', 'test-visualisation_recipe.R:7:3', 'test-weightit.R:23:3', 'test-weightit.R:43:3', 'test-wrs2.R:58:3', 'test-standardize_parameters.R:31:3', 'test-standardize_parameters.R:36:3', 'test-standardize_parameters.R:61:3', 'test-standardize_parameters.R:173:3', 'test-standardize_parameters.R:298:3', 'test-standardize_parameters.R:333:3', 'test-standardize_parameters.R:426:3', 'test-standardize_parameters.R:516:3' • On Linux (5): 'test-model_parameters.BFBayesFactor.R:1:1', 'test-nestedLogit.R:78:3', 'test-random_effects_ci-glmmTMB.R:3:1', 
'test-simulate_model.R:1:1', 'test-simulate_parameters.R:1:1'
• TODO: fix this test (1): 'test-model_parameters.lqmm.R:40:3'
• TODO: this one actually is not correct. (1): 'test-model_parameters_robust.R:127:3'
• empty test (2): 'test-wrs2.R:69:1', 'test-wrs2.R:81:1'
══ Failed tests ════════════════════════════════════════════════════════════════
── Failure ('test-marginaleffects.R:52:3'): predictions() ──────────────────────
Names of `out` ('Predicted', 'SE', 'CI', 'CI_low', 'CI_high', 'S', 'Statistic', 'df', 'p', 'Species')
don't match 'Predicted', 'Species', 'SE', 'CI', 'CI_low', 'CI_high', 'S', 'Statistic', 'df', 'p'
[ FAIL 1 | WARN 0 | SKIP 124 | PASS 692 ]
Error: Test failures
Execution halted
Flavor: r-devel-linux-x86_64-fedora-clang

Version: 0.28.0
Check: tests
Result: ERROR Running ‘testthat.R’ [248s/334s] Running the tests in ‘tests/testthat.R’ failed. Complete output: > library(parameters) > library(testthat) > > test_check("parameters") Starting 2 test processes [ FAIL 1 | WARN 4 | SKIP 124 | PASS 692 ] ══ Skipped tests (124) ═════════════════════════════════════════════════════════ • On CRAN (115): 'test-GLMMadaptive.R:1:1', 'test-averaging.R:1:1', 'test-backticks.R:1:1', 'test-bootstrap_emmeans.R:1:1', 'test-bootstrap_parameters.R:1:1', 'test-brms.R:1:1', 'test-compare_parameters.R:91:7', 'test-compare_parameters.R:95:5', 'test-complete_separation.R:14:5', 'test-complete_separation.R:24:5', 'test-complete_separation.R:35:5', 'test-coxph.R:79:5', 'test-efa.R:1:1', 'test-emmGrid-df_colname.R:1:1', 'test-equivalence_test.R:10:3', 'test-equivalence_test.R:18:3', 'test-equivalence_test.R:82:3', 'test-factor_analysis.R:2:3', 'test-factor_analysis.R:124:3', 'test-format_model_parameters2.R:2:3', 'test-gam.R:30:1', 'test-get_scores.R:1:1', 'test-glmer.R:1:1', 'test-glmmTMB-2.R:1:1', 'test-glmmTMB.R:8:1', 'test-glmmTMB-profile_CI.R:2:3', 'test-group_level_total.R:2:1', 'test-helper.R:1:1', 'test-ivreg.R:54:3', 'test-include_reference.R:16:3', 'test-include_reference.R:69:3', 'test-include_reference.R:121:3', 'test-lmerTest.R:1:1', 'test-mipo.R:19:3', 'test-mipo.R:33:3', 'test-mmrm.R:1:1', 'test-model_parameters.anova.R:1:1', 'test-model_parameters.aov.R:1:1', 'test-marginaleffects.R:131:3', 'test-marginaleffects.R:154:3', 'test-marginaleffects.R:173:3', 'test-model_parameters.aov_es_ci.R:158:3', 'test-model_parameters.aov_es_ci.R:269:3', 'test-model_parameters.aov_es_ci.R:319:3', 'test-model_parameters.aov_es_ci.R:372:3', 'test-model_parameters.bracl.R:5:1', 'test-model_parameters.cgam.R:1:1', 'test-model_parameters.coxme.R:1:1', 'test-model_parameters.epi2x2.R:1:1', 'test-model_parameters.fixest_multi.R:3:1', 'test-model_parameters.fixest.R:2:3', 'test-model_parameters.fixest.R:77:3', 'test-model_parameters.fixest.R:147:5', 
'test-model_parameters.glmgee.R:1:1', 'test-model_parameters.logistf.R:1:1', 'test-model_parameters.logitr.R:1:1', 'test-model_parameters.glm.R:40:3', 'test-model_parameters.glm.R:76:3', 'test-model_parameters.mclogit.R:5:1', 'test-model_parameters.mediate.R:32:3', 'test-model_parameters.mixed.R:2:1', 'test-model_parameters.nnet.R:5:1', 'test-model_parameters.vgam.R:3:1', 'test-model_parameters_df.R:1:1', 'test-model_parameters_ordinal.R:1:1', 'test-model_parameters_random_pars.R:1:1', 'test-model_parameters_std.R:1:1', 'test-model_parameters_std_mixed.R:3:1', 'test-n_factors.R:10:3', 'test-n_factors.R:26:3', 'test-n_factors.R:76:3', 'test-p_adjust.R:1:1', 'test-p_direction.R:1:1', 'test-p_significance.R:1:1', 'test-p_value.R:14:1', 'test-panelr.R:1:1', 'test-pipe.R:1:1', 'test-pca.R:66:3', 'test-polr.R:2:1', 'test-pool_parameters.R:11:3', 'test-pool_parameters.R:32:1', 'test-posterior.R:2:1', 'test-plm.R:111:3', 'test-print_AER_labels.R:11:5', 'test-printing-stan.R:2:1', 'test-printing.R:1:1', 'test-pretty_names.R:65:5', 'test-pretty_names.R:82:7', 'test-quantreg.R:1:1', 'test-random_effects_ci.R:4:1', 'test-robust.R:2:1', 'test-rstanarm.R:2:1', 'test-sampleSelection.R:2:1', 'test-serp.R:16:5', 'test-printing2.R:15:7', 'test-printing2.R:22:7', 'test-printing2.R:27:7', 'test-printing2.R:32:7', 'test-printing2.R:37:7', 'test-printing2.R:49:7', 'test-printing2.R:91:7', 'test-printing2.R:127:7', 'test-svylme.R:1:1', 'test-visualisation_recipe.R:7:3', 'test-weightit.R:23:3', 'test-weightit.R:43:3', 'test-wrs2.R:58:3', 'test-standardize_parameters.R:31:3', 'test-standardize_parameters.R:36:3', 'test-standardize_parameters.R:61:3', 'test-standardize_parameters.R:173:3', 'test-standardize_parameters.R:298:3', 'test-standardize_parameters.R:333:3', 'test-standardize_parameters.R:426:3', 'test-standardize_parameters.R:516:3' • On Linux (5): 'test-model_parameters.BFBayesFactor.R:1:1', 'test-nestedLogit.R:78:3', 'test-random_effects_ci-glmmTMB.R:3:1', 
'test-simulate_model.R:1:1', 'test-simulate_parameters.R:1:1'
• TODO: fix this test (1): 'test-model_parameters.lqmm.R:40:3'
• TODO: this one actually is not correct. (1): 'test-model_parameters_robust.R:127:3'
• empty test (2): 'test-wrs2.R:69:1', 'test-wrs2.R:81:1'
══ Failed tests ════════════════════════════════════════════════════════════════
── Failure ('test-marginaleffects.R:52:3'): predictions() ──────────────────────
Names of `out` ('Predicted', 'SE', 'CI', 'CI_low', 'CI_high', 'S', 'Statistic', 'df', 'p', 'Species')
don't match 'Predicted', 'Species', 'SE', 'CI', 'CI_low', 'CI_high', 'S', 'Statistic', 'df', 'p'
[ FAIL 1 | WARN 4 | SKIP 124 | PASS 692 ]
Error: Test failures
Execution halted
Flavor: r-devel-linux-x86_64-fedora-gcc

Version: 0.28.0
Check: dependencies in R code
Result: WARN Missing or unexported object: ‘insight::get_mixed_info’ Flavors: r-oldrel-macos-arm64, r-oldrel-macos-x86_64
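This WARN means parameters 0.28.0 calls `insight::get_mixed_info()`, which the insight release available on the r-oldrel macOS builds does not yet export. The usual remedy is to declare a minimum version of insight in the DESCRIPTION of parameters, so that CRAN and users install a suitable release. A sketch of the relevant DESCRIPTION field; the version number is an assumption and should be the first insight release that exports `get_mixed_info`:

```
Imports:
    insight (>= 1.4.0)
```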

Version: 0.28.0
Check: examples
Result: ERROR
    Running examples in ‘parameters-Ex.R’ failed
    The error most likely occurred in:

    > ### Name: model_parameters.glmmTMB
    > ### Title: Parameters from Mixed Models
    > ### Aliases: model_parameters.glmmTMB
    >
    > ### ** Examples
    >
    > ## Don't show:
    > if (require("lme4") && require("glmmTMB")) (if (getRversion() >= "3.4") withAutoprint else force)({ # examplesIf
    + ## End(Don't show)
    + library(parameters)
    + data(mtcars)
    + model <- lme4::lmer(mpg ~ wt + (1 | gear), data = mtcars)
    + model_parameters(model)
    +
    + ## Don't show:
    + }) # examplesIf
    Loading required package: lme4
    Loading required package: Matrix
    Loading required package: glmmTMB
    > library(parameters)
    > data(mtcars)
    > model <- lme4::lmer(mpg ~ wt + (1 | gear), data = mtcars)
    > model_parameters(model)
    Error: 'get_mixed_info' is not an exported object from 'namespace:insight'
    Execution halted
Flavors: r-oldrel-macos-arm64, r-oldrel-macos-x86_64
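The example error is the same missing export surfacing at run time. Until the version requirement is bumped, calling code can check whether the installed insight actually exports the helper before using it. A minimal base-R sketch, assuming nothing beyond the error message above:

```r
# getNamespaceExports() lists only exported names, so this test fails
# exactly when the installed insight predates get_mixed_info().
has_mixed_info <- "get_mixed_info" %in% getNamespaceExports("insight")

if (has_mixed_info) {
  mixed_info_fun <- insight::get_mixed_info  # safe to call now
} else {
  message(
    "Installed 'insight' (", utils::packageVersion("insight"),
    ") does not export get_mixed_info(); please update the package."
  )
}
```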

Version: 0.28.0
Check: tests
Result: ERROR Running ‘testthat.R’ [39s/20s] Running the tests in ‘tests/testthat.R’ failed. Complete output: > library(parameters) > library(testthat) > > test_check("parameters") Starting 2 test processes [ FAIL 7 | WARN 0 | SKIP 125 | PASS 685 ] ══ Skipped tests (125) ═════════════════════════════════════════════════════════ • On CRAN (111): 'test-GLMMadaptive.R:1:1', 'test-averaging.R:1:1', 'test-backticks.R:1:1', 'test-bootstrap_emmeans.R:1:1', 'test-bootstrap_parameters.R:1:1', 'test-brms.R:1:1', 'test-compare_parameters.R:91:7', 'test-compare_parameters.R:95:5', 'test-complete_separation.R:14:5', 'test-complete_separation.R:24:5', 'test-complete_separation.R:35:5', 'test-coxph.R:79:5', 'test-efa.R:1:1', 'test-emmGrid-df_colname.R:1:1', 'test-equivalence_test.R:10:3', 'test-equivalence_test.R:18:3', 'test-equivalence_test.R:82:3', 'test-factor_analysis.R:2:3', 'test-factor_analysis.R:124:3', 'test-format_model_parameters2.R:2:3', 'test-gam.R:30:1', 'test-get_scores.R:1:1', 'test-glmer.R:1:1', 'test-glmmTMB-2.R:1:1', 'test-glmmTMB-profile_CI.R:2:3', 'test-glmmTMB.R:8:1', 'test-helper.R:1:1', 'test-ivreg.R:54:3', 'test-include_reference.R:16:3', 'test-include_reference.R:69:3', 'test-lmerTest.R:1:1', 'test-mipo.R:19:3', 'test-mipo.R:33:3', 'test-mmrm.R:1:1', 'test-model_parameters.BFBayesFactor.R:4:3', 'test-model_parameters.BFBayesFactor.R:77:3', 'test-model_parameters.BFBayesFactor.R:114:3', 'test-model_parameters.anova.R:1:1', 'test-model_parameters.aov.R:1:1', 'test-marginaleffects.R:131:3', 'test-marginaleffects.R:154:3', 'test-marginaleffects.R:173:3', 'test-model_parameters.aov_es_ci.R:158:3', 'test-model_parameters.aov_es_ci.R:269:3', 'test-model_parameters.aov_es_ci.R:319:3', 'test-model_parameters.aov_es_ci.R:372:3', 'test-model_parameters.bracl.R:5:1', 'test-model_parameters.cgam.R:1:1', 'test-model_parameters.coxme.R:1:1', 'test-model_parameters.epi2x2.R:1:1', 'test-model_parameters.fixest_multi.R:3:1', 'test-model_parameters.fixest.R:2:3', 
'test-model_parameters.fixest.R:77:3', 'test-model_parameters.fixest.R:147:5', 'test-model_parameters.glmgee.R:1:1', 'test-model_parameters.glm.R:40:3', 'test-model_parameters.glm.R:76:3', 'test-model_parameters.logistf.R:1:1', 'test-model_parameters.logitr.R:1:1', 'test-model_parameters.mclogit.R:5:1', 'test-model_parameters.mediate.R:32:3', 'test-model_parameters.mixed.R:2:1', 'test-model_parameters.nnet.R:5:1', 'test-model_parameters_df.R:1:1', 'test-model_parameters.vgam.R:3:1', 'test-model_parameters_ordinal.R:1:1', 'test-model_parameters_random_pars.R:1:1', 'test-model_parameters_std.R:1:1', 'test-model_parameters_std_mixed.R:3:1', 'test-n_factors.R:10:3', 'test-n_factors.R:26:3', 'test-n_factors.R:76:3', 'test-p_adjust.R:1:1', 'test-p_direction.R:1:1', 'test-p_significance.R:1:1', 'test-p_value.R:14:1', 'test-panelr.R:1:1', 'test-pipe.R:1:1', 'test-pca.R:66:3', 'test-polr.R:2:1', 'test-plm.R:111:3', 'test-posterior.R:2:1', 'test-pool_parameters.R:11:3', 'test-pool_parameters.R:32:1', 'test-print_AER_labels.R:11:5', 'test-printing-stan.R:2:1', 'test-printing.R:1:1', 'test-pretty_names.R:65:5', 'test-pretty_names.R:82:7', 'test-quantreg.R:1:1', 'test-robust.R:2:1', 'test-printing2.R:15:7', 'test-printing2.R:22:7', 'test-printing2.R:27:7', 'test-printing2.R:32:7', 'test-printing2.R:37:7', 'test-printing2.R:49:7', 'test-printing2.R:91:7', 'test-printing2.R:127:7', 'test-serp.R:16:5', 'test-svylme.R:1:1', 'test-visualisation_recipe.R:7:3', 'test-wrs2.R:58:3', 'test-standardize_parameters.R:31:3', 'test-standardize_parameters.R:36:3', 'test-standardize_parameters.R:61:3', 'test-standardize_parameters.R:173:3', 'test-standardize_parameters.R:298:3', 'test-standardize_parameters.R:333:3', 'test-standardize_parameters.R:426:3', 'test-standardize_parameters.R:516:3' • On Mac (9): 'test-group_level_total.R:1:1', 'test-nestedLogit.R:78:3', 'test-random_effects_ci-glmmTMB.R:3:1', 'test-random_effects_ci.R:1:1', 'test-rstanarm.R:1:1', 'test-sampleSelection.R:1:1', 
'test-simulate_model.R:1:1', 'test-simulate_parameters.R:1:1', 'test-weightit.R:1:1'
• TODO: fix this test (1): 'test-model_parameters.lqmm.R:40:3'
• TODO: this one actually is not correct. (1): 'test-model_parameters_robust.R:127:3'
• empty test (2): 'test-wrs2.R:69:1', 'test-wrs2.R:81:1'
• getRversion() < "4.5" is TRUE (1): 'test-include_reference.R:112:3'
══ Failed tests ════════════════════════════════════════════════════════════════
── Failure ('test-compare_parameters.R:30:7'): compare_parameters, default ─────
colnames(out) (`actual`) not identical to c("Parameter", "m1", "m2", "m3") (`expected`).
`actual`:   "Parameter" "m1" "m2" "Log-Mean (CI) (m3)" "Log-Mean (m3)"
`expected`: "Parameter" "m1" "m2" "m3"
── Failure ('test-compare_parameters.R:58:7'): compare_parameters, se_p2 ───────
colnames(out) (`actual`) not identical to c(...) (`expected`).
`actual[5:8]`:   "p (m2)" "Log-Mean (SE) (m3)" "p (m3)" "Log-Mean (m3)"
`expected[5:7]`: "p (m2)" "Log-Mean (SE) (m3)" "p (m3)"
── Failure ('test-compare_parameters.R:83:7'): compare_parameters, column name with escaping regex characters ──
out[1] (`actual`) not identical to "Parameter | linear model (m1) | logistic reg. (m2)" (`expected`).
actual vs expected
- "Parameter | linear model (m1) | Log-Odds (CI) (logistic reg. (m2)) | Log-Odds (logistic reg. (m2))"
+ "Parameter | linear model (m1) | logistic reg. (m2)"
── Error ('test-format_model_parameters.R:190:3'): format, compare_parameters, mixed models ──
Error: 'get_mixed_info' is not an exported object from 'namespace:insight'
Backtrace:
    ▆
 1. └─parameters::compare_parameters(...) at test-format_model_parameters.R:190:3
 2.   └─base::lapply(...)
 3.     └─parameters (local) FUN(X[[i]], ...)
 4.       ├─parameters::model_parameters(...)
 5.       └─parameters:::model_parameters.merMod(...)
 6.         └─parameters:::.add_random_effects_lme4(...)
 7.           ├─parameters:::.extract_random_variances(...)
 8.           └─parameters:::.extract_random_variances.default(...)
 9.             ├─base::suppressWarnings(...)
10.             │ └─base::withCallingHandlers(...)
11.             └─parameters:::.extract_random_variances_helper(...)
── Error ('test-model_parameters.blmerMod.R:19:3'): model_parameters.blmerMod-all ──
Error: 'get_mixed_info' is not an exported object from 'namespace:insight'
Backtrace:
    ▆
 1. ├─parameters::model_parameters(model, effects = "all") at test-model_parameters.blmerMod.R:19:3
 2. └─parameters:::model_parameters.merMod(model, effects = "all")
 3.   └─parameters:::.add_random_effects_lme4(...)
 4.     ├─parameters:::.extract_random_variances(...)
 5.     └─parameters:::.extract_random_variances.default(...)
 6.       ├─base::suppressWarnings(...)
 7.       │ └─base::withCallingHandlers(...)
 8.       └─parameters:::.extract_random_variances_helper(...)
── Error ('test-model_parameters_labels.R:13:7'): model_parameters_labels ──────
Error: 'get_mixed_info' is not an exported object from 'namespace:insight'
Backtrace:
    ▆
 1. ├─testthat::expect_equal(...) at test-model_parameters_labels.R:13:7
 2. │ └─testthat::quasi_label(enquo(object), label, arg = "object")
 3. │   └─rlang::eval_bare(expr, quo_get_env(quo))
 4. ├─parameters::model_parameters(m1)
 5. └─parameters:::model_parameters.merMod(m1)
 6.   └─parameters:::.add_random_effects_lme4(...)
 7.     ├─parameters:::.extract_random_variances(...)
 8.     └─parameters:::.extract_random_variances.default(...)
 9.       ├─base::suppressWarnings(...)
10.       │ └─base::withCallingHandlers(...)
11.       └─parameters:::.extract_random_variances_helper(...)
── Error ('test-model_parameters_labels.R:119:7'): Issue #806: Missing label for variance component in lme4 ──
Error: 'get_mixed_info' is not an exported object from 'namespace:insight'
Backtrace:
    ▆
 1. ├─parameters::parameters(mod, pretty_names = "labels") at test-model_parameters_labels.R:119:7
 2. └─parameters:::model_parameters.merMod(mod, pretty_names = "labels")
 3.   └─parameters:::.add_random_effects_lme4(...)
 4.     ├─parameters:::.extract_random_variances(...)
 5.     └─parameters:::.extract_random_variances.default(...)
 6.       ├─base::suppressWarnings(...)
 7.       │ └─base::withCallingHandlers(...)
 8.       └─parameters:::.extract_random_variances_helper(...)
[ FAIL 7 | WARN 0 | SKIP 125 | PASS 685 ]
Error: Test failures
Execution halted
Flavor: r-oldrel-macos-arm64

Version: 0.28.0
Check: tests
Result: ERROR Running ‘testthat.R’ [90s/52s] Running the tests in ‘tests/testthat.R’ failed. Complete output: > library(parameters) > library(testthat) > > test_check("parameters") Starting 2 test processes [ FAIL 8 | WARN 4 | SKIP 125 | PASS 684 ] ══ Skipped tests (125) ═════════════════════════════════════════════════════════ • On CRAN (111): 'test-GLMMadaptive.R:1:1', 'test-averaging.R:1:1', 'test-backticks.R:1:1', 'test-bootstrap_emmeans.R:1:1', 'test-bootstrap_parameters.R:1:1', 'test-brms.R:1:1', 'test-compare_parameters.R:91:7', 'test-compare_parameters.R:95:5', 'test-complete_separation.R:14:5', 'test-complete_separation.R:24:5', 'test-complete_separation.R:35:5', 'test-coxph.R:79:5', 'test-efa.R:1:1', 'test-emmGrid-df_colname.R:1:1', 'test-equivalence_test.R:10:3', 'test-equivalence_test.R:18:3', 'test-equivalence_test.R:82:3', 'test-factor_analysis.R:2:3', 'test-factor_analysis.R:124:3', 'test-format_model_parameters2.R:2:3', 'test-gam.R:30:1', 'test-get_scores.R:1:1', 'test-glmer.R:1:1', 'test-glmmTMB-2.R:1:1', 'test-glmmTMB-profile_CI.R:2:3', 'test-glmmTMB.R:8:1', 'test-helper.R:1:1', 'test-ivreg.R:54:3', 'test-include_reference.R:16:3', 'test-include_reference.R:69:3', 'test-lmerTest.R:1:1', 'test-mipo.R:19:3', 'test-mipo.R:33:3', 'test-mmrm.R:1:1', 'test-model_parameters.BFBayesFactor.R:4:3', 'test-model_parameters.BFBayesFactor.R:77:3', 'test-model_parameters.BFBayesFactor.R:114:3', 'test-marginaleffects.R:131:3', 'test-marginaleffects.R:154:3', 'test-marginaleffects.R:173:3', 'test-model_parameters.anova.R:1:1', 'test-model_parameters.aov.R:1:1', 'test-model_parameters.bracl.R:5:1', 'test-model_parameters.cgam.R:1:1', 'test-model_parameters.coxme.R:1:1', 'test-model_parameters.aov_es_ci.R:158:3', 'test-model_parameters.aov_es_ci.R:269:3', 'test-model_parameters.aov_es_ci.R:319:3', 'test-model_parameters.aov_es_ci.R:372:3', 'test-model_parameters.epi2x2.R:1:1', 'test-model_parameters.fixest_multi.R:3:1', 'test-model_parameters.fixest.R:2:3', 
'test-model_parameters.fixest.R:77:3', 'test-model_parameters.fixest.R:147:5', 'test-model_parameters.glmgee.R:1:1', 'test-model_parameters.logistf.R:1:1', 'test-model_parameters.logitr.R:1:1', 'test-model_parameters.glm.R:40:3', 'test-model_parameters.glm.R:76:3', 'test-model_parameters.mclogit.R:5:1', 'test-model_parameters.mediate.R:32:3', 'test-model_parameters.mixed.R:2:1', 'test-model_parameters.nnet.R:5:1', 'test-model_parameters_df.R:1:1', 'test-model_parameters.vgam.R:3:1', 'test-model_parameters_ordinal.R:1:1', 'test-model_parameters_random_pars.R:1:1', 'test-model_parameters_std.R:1:1', 'test-model_parameters_std_mixed.R:3:1', 'test-n_factors.R:10:3', 'test-n_factors.R:26:3', 'test-n_factors.R:76:3', 'test-p_adjust.R:1:1', 'test-p_direction.R:1:1', 'test-p_significance.R:1:1', 'test-p_value.R:14:1', 'test-panelr.R:1:1', 'test-pipe.R:1:1', 'test-pca.R:66:3', 'test-polr.R:2:1', 'test-plm.R:111:3', 'test-posterior.R:2:1', 'test-pool_parameters.R:11:3', 'test-pool_parameters.R:32:1', 'test-print_AER_labels.R:11:5', 'test-printing-stan.R:2:1', 'test-printing.R:1:1', 'test-pretty_names.R:65:5', 'test-pretty_names.R:82:7', 'test-quantreg.R:1:1', 'test-robust.R:2:1', 'test-printing2.R:15:7', 'test-printing2.R:22:7', 'test-printing2.R:27:7', 'test-printing2.R:32:7', 'test-printing2.R:37:7', 'test-printing2.R:49:7', 'test-printing2.R:91:7', 'test-printing2.R:127:7', 'test-serp.R:16:5', 'test-svylme.R:1:1', 'test-visualisation_recipe.R:7:3', 'test-wrs2.R:58:3', 'test-standardize_parameters.R:31:3', 'test-standardize_parameters.R:36:3', 'test-standardize_parameters.R:61:3', 'test-standardize_parameters.R:173:3', 'test-standardize_parameters.R:298:3', 'test-standardize_parameters.R:333:3', 'test-standardize_parameters.R:426:3', 'test-standardize_parameters.R:516:3' • On Mac (9): 'test-group_level_total.R:1:1', 'test-nestedLogit.R:78:3', 'test-random_effects_ci-glmmTMB.R:3:1', 'test-random_effects_ci.R:1:1', 'test-rstanarm.R:1:1', 'test-sampleSelection.R:1:1', 
'test-simulate_model.R:1:1', 'test-simulate_parameters.R:1:1', 'test-weightit.R:1:1'
• TODO: fix this test (1): 'test-model_parameters.lqmm.R:40:3'
• TODO: this one actually is not correct. (1): 'test-model_parameters_robust.R:127:3'
• empty test (2): 'test-wrs2.R:69:1', 'test-wrs2.R:81:1'
• getRversion() < "4.5" is TRUE (1): 'test-include_reference.R:112:3'
══ Failed tests ════════════════════════════════════════════════════════════════
── Failure ('test-compare_parameters.R:30:7'): compare_parameters, default ─────
colnames(out) (`actual`) not identical to c("Parameter", "m1", "m2", "m3") (`expected`).
`actual`:   "Parameter" "m1" "m2" "Log-Mean (CI) (m3)" "Log-Mean (m3)"
`expected`: "Parameter" "m1" "m2" "m3"
── Failure ('test-compare_parameters.R:58:7'): compare_parameters, se_p2 ───────
colnames(out) (`actual`) not identical to c(...) (`expected`).
`actual[5:8]`:   "p (m2)" "Log-Mean (SE) (m3)" "p (m3)" "Log-Mean (m3)"
`expected[5:7]`: "p (m2)" "Log-Mean (SE) (m3)" "p (m3)"
── Failure ('test-compare_parameters.R:83:7'): compare_parameters, column name with escaping regex characters ──
out[1] (`actual`) not identical to "Parameter | linear model (m1) | logistic reg. (m2)" (`expected`).
actual vs expected
- "Parameter | linear model (m1) | Log-Odds (CI) (logistic reg. (m2)) | Log-Odds (logistic reg. (m2))"
+ "Parameter | linear model (m1) | logistic reg. (m2)"
── Error ('test-format_model_parameters.R:190:3'): format, compare_parameters, mixed models ──
Error: 'get_mixed_info' is not an exported object from 'namespace:insight'
Backtrace:
    ▆
 1. └─parameters::compare_parameters(...) at test-format_model_parameters.R:190:3
 2.   └─base::lapply(...)
 3.     └─parameters (local) FUN(X[[i]], ...)
 4.       ├─parameters::model_parameters(...)
 5.       └─parameters:::model_parameters.merMod(...)
 6.         └─parameters:::.add_random_effects_lme4(...)
 7.           ├─parameters:::.extract_random_variances(...)
 8.           └─parameters:::.extract_random_variances.default(...)
 9.             ├─base::suppressWarnings(...)
10.             │ └─base::withCallingHandlers(...)
11.             └─parameters:::.extract_random_variances_helper(...)
── Failure ('test-marginaleffects.R:52:3'): predictions() ──────────────────────
Names of `out` ('Predicted', 'SE', 'CI', 'CI_low', 'CI_high', 'S', 'Statistic', 'df', 'p', 'Species')
don't match 'Predicted', 'Species', 'SE', 'CI', 'CI_low', 'CI_high', 'S', 'Statistic', 'df', 'p'
── Error ('test-model_parameters.blmerMod.R:19:3'): model_parameters.blmerMod-all ──
Error: 'get_mixed_info' is not an exported object from 'namespace:insight'
Backtrace:
    ▆
 1. ├─parameters::model_parameters(model, effects = "all") at test-model_parameters.blmerMod.R:19:3
 2. └─parameters:::model_parameters.merMod(model, effects = "all")
 3.   └─parameters:::.add_random_effects_lme4(...)
 4.     ├─parameters:::.extract_random_variances(...)
 5.     └─parameters:::.extract_random_variances.default(...)
 6.       ├─base::suppressWarnings(...)
 7.       │ └─base::withCallingHandlers(...)
 8.       └─parameters:::.extract_random_variances_helper(...)
── Error ('test-model_parameters_labels.R:13:7'): model_parameters_labels ──────
Error: 'get_mixed_info' is not an exported object from 'namespace:insight'
Backtrace:
    ▆
 1. ├─testthat::expect_equal(...) at test-model_parameters_labels.R:13:7
 2. │ └─testthat::quasi_label(enquo(object), label, arg = "object")
 3. │   └─rlang::eval_bare(expr, quo_get_env(quo))
 4. ├─parameters::model_parameters(m1)
 5. └─parameters:::model_parameters.merMod(m1)
 6.   └─parameters:::.add_random_effects_lme4(...)
 7.     ├─parameters:::.extract_random_variances(...)
 8.     └─parameters:::.extract_random_variances.default(...)
 9.       ├─base::suppressWarnings(...)
10.       │ └─base::withCallingHandlers(...)
11.       └─parameters:::.extract_random_variances_helper(...)
── Error ('test-model_parameters_labels.R:119:7'): Issue #806: Missing label for variance component in lme4 ──
Error: 'get_mixed_info' is not an exported object from 'namespace:insight'
Backtrace:
    ▆
 1. ├─parameters::parameters(mod, pretty_names = "labels") at test-model_parameters_labels.R:119:7
 2. └─parameters:::model_parameters.merMod(mod, pretty_names = "labels")
 3.   └─parameters:::.add_random_effects_lme4(...)
 4.     ├─parameters:::.extract_random_variances(...)
 5.     └─parameters:::.extract_random_variances.default(...)
 6.       ├─base::suppressWarnings(...)
 7.       │ └─base::withCallingHandlers(...)
 8.       └─parameters:::.extract_random_variances_helper(...)
[ FAIL 8 | WARN 4 | SKIP 125 | PASS 684 ]
Error: Test failures
Execution halted
Flavor: r-oldrel-macos-x86_64

Version: 0.28.0
Check: tests
Result: ERROR Running 'testthat.R' [100s] Running the tests in 'tests/testthat.R' failed. Complete output: > library(parameters) > library(testthat) > > test_check("parameters") Starting 2 test processes [ FAIL 1 | WARN 4 | SKIP 125 | PASS 711 ] ══ Skipped tests (125) ═════════════════════════════════════════════════════════ • On CRAN (120): 'test-GLMMadaptive.R:1:1', 'test-averaging.R:1:1', 'test-backticks.R:1:1', 'test-bootstrap_emmeans.R:1:1', 'test-bootstrap_parameters.R:1:1', 'test-brms.R:1:1', 'test-compare_parameters.R:91:7', 'test-compare_parameters.R:95:5', 'test-complete_separation.R:14:5', 'test-complete_separation.R:24:5', 'test-complete_separation.R:35:5', 'test-coxph.R:79:5', 'test-efa.R:1:1', 'test-emmGrid-df_colname.R:1:1', 'test-equivalence_test.R:10:3', 'test-equivalence_test.R:18:3', 'test-equivalence_test.R:82:3', 'test-factor_analysis.R:2:3', 'test-factor_analysis.R:124:3', 'test-format_model_parameters2.R:2:3', 'test-gam.R:30:1', 'test-get_scores.R:1:1', 'test-glmer.R:1:1', 'test-glmmTMB-2.R:1:1', 'test-glmmTMB-profile_CI.R:2:3', 'test-glmmTMB.R:8:1', 'test-group_level_total.R:2:1', 'test-helper.R:1:1', 'test-ivreg.R:54:3', 'test-include_reference.R:16:3', 'test-include_reference.R:69:3', 'test-lmerTest.R:1:1', 'test-mipo.R:19:3', 'test-mipo.R:33:3', 'test-mmrm.R:1:1', 'test-model_parameters.BFBayesFactor.R:4:3', 'test-model_parameters.BFBayesFactor.R:77:3', 'test-model_parameters.BFBayesFactor.R:114:3', 'test-model_parameters.anova.R:1:1', 'test-marginaleffects.R:131:3', 'test-marginaleffects.R:154:3', 'test-marginaleffects.R:173:3', 'test-model_parameters.aov.R:1:1', 'test-model_parameters.bracl.R:5:1', 'test-model_parameters.cgam.R:1:1', 'test-model_parameters.coxme.R:1:1', 'test-model_parameters.aov_es_ci.R:158:3', 'test-model_parameters.aov_es_ci.R:269:3', 'test-model_parameters.aov_es_ci.R:319:3', 'test-model_parameters.aov_es_ci.R:372:3', 'test-model_parameters.epi2x2.R:1:1', 'test-model_parameters.fixest.R:2:3', 
'test-model_parameters.fixest.R:77:3', 'test-model_parameters.fixest.R:147:5', 'test-model_parameters.fixest_multi.R:3:1', 'test-model_parameters.glmgee.R:1:1', 'test-model_parameters.glm.R:40:3', 'test-model_parameters.glm.R:76:3', 'test-model_parameters.logistf.R:1:1', 'test-model_parameters.logitr.R:1:1', 'test-model_parameters.mclogit.R:5:1', 'test-model_parameters.mediate.R:32:3', 'test-model_parameters.mixed.R:2:1', 'test-model_parameters.nnet.R:5:1', 'test-model_parameters.vgam.R:3:1', 'test-model_parameters_df.R:1:1', 'test-model_parameters_ordinal.R:1:1', 'test-model_parameters_random_pars.R:1:1', 'test-model_parameters_std.R:1:1', 'test-model_parameters_std_mixed.R:3:1', 'test-n_factors.R:10:3', 'test-n_factors.R:26:3', 'test-n_factors.R:76:3', 'test-p_adjust.R:1:1', 'test-p_direction.R:1:1', 'test-p_significance.R:1:1', 'test-p_value.R:14:1', 'test-panelr.R:1:1', 'test-pipe.R:1:1', 'test-pca.R:66:3', 'test-polr.R:2:1', 'test-plm.R:111:3', 'test-posterior.R:2:1', 'test-pool_parameters.R:11:3', 'test-pool_parameters.R:32:1', 'test-print_AER_labels.R:11:5', 'test-printing-stan.R:2:1', 'test-printing.R:1:1', 'test-pretty_names.R:65:5', 'test-pretty_names.R:82:7', 'test-quantreg.R:1:1', 'test-random_effects_ci-glmmTMB.R:6:1', 'test-random_effects_ci.R:4:1', 'test-robust.R:2:1', 'test-rstanarm.R:2:1', 'test-sampleSelection.R:2:1', 'test-printing2.R:15:7', 'test-printing2.R:22:7', 'test-printing2.R:27:7', 'test-printing2.R:32:7', 'test-printing2.R:37:7', 'test-printing2.R:49:7', 'test-printing2.R:91:7', 'test-printing2.R:127:7', 'test-simulate_model.R:19:1', 'test-serp.R:16:5', 'test-simulate_parameters.R:18:1', 'test-svylme.R:1:1', 'test-visualisation_recipe.R:7:3', 'test-weightit.R:23:3', 'test-weightit.R:43:3', 'test-wrs2.R:58:3', 'test-standardize_parameters.R:31:3', 'test-standardize_parameters.R:36:3', 'test-standardize_parameters.R:61:3', 'test-standardize_parameters.R:173:3', 'test-standardize_parameters.R:298:3', 'test-standardize_parameters.R:333:3', 
    'test-standardize_parameters.R:426:3', 'test-standardize_parameters.R:516:3'
    • TODO: fix this test (1): 'test-model_parameters.lqmm.R:40:3'
    • TODO: this one actually is not correct. (1): 'test-model_parameters_robust.R:127:3'
    • empty test (2): 'test-wrs2.R:69:1', 'test-wrs2.R:81:1'
    • getRversion() < "4.5" is TRUE (1): 'test-include_reference.R:112:3'
    ══ Failed tests ════════════════════════════════════════════════════════════════
    ── Failure ('test-marginaleffects.R:52:3'): predictions() ──────────────────────
    Names of `out` ('Predicted', 'SE', 'CI', 'CI_low', 'CI_high', 'S', 'Statistic', 'df', 'p', 'Species') don't match 'Predicted', 'Species', 'SE', 'CI', 'CI_low', 'CI_high', 'S', 'Statistic', 'df', 'p'
    [ FAIL 1 | WARN 4 | SKIP 125 | PASS 711 ]
    Error: Test failures
    Execution halted
Flavor: r-oldrel-windows-x86_64
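The single failure above is an order-sensitive name comparison: `out` contains exactly the expected columns, but with 'Species' in a different position, so a strict names check fails even though the data is intact. A minimal sketch of that distinction (in Python for illustration; the actual check is testthat's name expectation in the parameters test suite):

```python
# Column names from the failure message above.
expected = ["Predicted", "Species", "SE", "CI", "CI_low", "CI_high",
            "S", "Statistic", "df", "p"]
actual = ["Predicted", "SE", "CI", "CI_low", "CI_high",
          "S", "Statistic", "df", "p", "Species"]

# As sets the names agree: no column is missing or extra...
same_names = set(actual) == set(expected)

# ...but an order-sensitive comparison, like the one the test performs, fails.
same_order = actual == expected

print(same_names, same_order)  # True False
```

Failures like this typically appear when an upstream dependency (here, likely marginaleffects) changes the column order of its output, rather than when anything is computed incorrectly.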

Package performance

Current CRAN status: OK: 13
