CRAN Package Check Results for Package performance

Last updated on 2025-10-09 23:51:21 CEST.

Flavor                             Version  Tinstall  Tcheck  Ttotal  Status  Flags
r-devel-linux-x86_64-debian-clang  0.15.2      21.81  238.58  260.39  OK
r-devel-linux-x86_64-debian-gcc    0.15.2      12.68  151.55  164.23  OK
r-devel-linux-x86_64-fedora-clang  0.15.2                     437.74  OK
r-devel-linux-x86_64-fedora-gcc    0.15.2                     420.44  OK
r-devel-windows-x86_64             0.15.2      21.00  224.00  245.00  OK
r-patched-linux-x86_64             0.15.2      21.03  223.92  244.95  OK
r-release-linux-x86_64             0.15.1      19.07  227.92  246.99  ERROR
r-release-macos-arm64              0.15.2                      88.00  OK
r-release-macos-x86_64             0.15.2                     205.00  OK
r-release-windows-x86_64           0.15.1      21.00  222.00  243.00  ERROR
r-oldrel-macos-arm64               0.15.2                      90.00  OK
r-oldrel-macos-x86_64              0.15.2                     190.00  OK
r-oldrel-windows-x86_64            0.15.2      30.00  248.00  278.00  OK

Check Details

Version: 0.15.1
Check: tests
Result: ERROR
    Running ‘testthat.R’ [76s/40s]
  Running the tests in ‘tests/testthat.R’ failed.
  Complete output:
    > library(testthat)
    > library(performance)
    >
    > test_check("performance")
    Starting 2 test processes
    [ FAIL 1 | WARN 0 | SKIP 40 | PASS 406 ]

    ══ Skipped tests (40) ══════════════════════════════════════════════════════
    • On CRAN (36): 'test-bootstrapped_icc_ci.R:2:3', 'test-bootstrapped_icc_ci.R:44:3', 'test-binned_residuals.R:137:3', 'test-binned_residuals.R:164:3', 'test-check_dag.R:1:1', 'test-check_distribution.R:35:3', 'test-check_itemscale.R:31:3', 'test-check_itemscale.R:103:3', 'test-check_model.R:1:1', 'test-check_collinearity.R:181:3', 'test-check_collinearity.R:218:3', 'test-check_predictions.R:2:1', 'test-check_residuals.R:2:3', 'test-check_singularity.R:2:3', 'test-check_singularity.R:30:3', 'test-check_zeroinflation.R:73:3', 'test-check_zeroinflation.R:112:3', 'test-compare_performance.R:21:3', 'test-helpers.R:1:1', 'test-icc.R:2:1', 'test-item_omega.R:10:3', 'test-item_omega.R:31:3', 'test-mclogit.R:53:3', 'test-check_outliers.R:110:3', 'test-model_performance.bayesian.R:1:1', 'test-model_performance.lavaan.R:1:1', 'test-model_performance.merMod.R:2:3', 'test-model_performance.merMod.R:25:3', 'test-model_performance.psych.R:1:1', 'test-model_performance.rma.R:33:3', 'test-performance_reliability.R:23:3', 'test-pkg-ivreg.R:7:3', 'test-r2_nagelkerke.R:22:3', 'test-r2_nakagawa.R:20:1', 'test-rmse.R:35:3', 'test-test_likelihoodratio.R:55:1'
    • On Linux (3): 'test-nestedLogit.R:1:1', 'test-r2_bayes.R:1:1', 'test-test_wald.R:1:1'
    • getRversion() > "4.4.0" is TRUE (1): 'test-check_outliers.R:258:3'

    ══ Failed tests ════════════════════════════════════════════════════════════
    ── Failure ('test-check_outliers.R:304:3'): pareto which ───────────────────
    which(check_outliers(model, method = "pareto", threshold = list(pareto = 0.5))) (`actual`) not identical to 17L (`expected`).

    `actual`:   17 18
    `expected`: 17

    [ FAIL 1 | WARN 0 | SKIP 40 | PASS 406 ]
    Error: Test failures
    Execution halted
Flavor: r-release-linux-x86_64

Version: 0.15.1
Check: tests
Result: ERROR
    Running 'testthat.R' [41s]
  Running the tests in 'tests/testthat.R' failed.
  Complete output:
    > library(testthat)
    > library(performance)
    >
    > test_check("performance")
    Starting 2 test processes
    [ FAIL 1 | WARN 0 | SKIP 39 | PASS 428 ]

    ══ Skipped tests (39) ══════════════════════════════════════════════════════
    • On CRAN (38): 'test-bootstrapped_icc_ci.R:2:3', 'test-bootstrapped_icc_ci.R:44:3', 'test-binned_residuals.R:137:3', 'test-binned_residuals.R:164:3', 'test-check_dag.R:1:1', 'test-check_distribution.R:35:3', 'test-check_collinearity.R:181:3', 'test-check_collinearity.R:218:3', 'test-check_model.R:1:1', 'test-check_itemscale.R:31:3', 'test-check_itemscale.R:103:3', 'test-check_predictions.R:2:1', 'test-check_residuals.R:2:3', 'test-check_singularity.R:2:3', 'test-check_singularity.R:30:3', 'test-check_zeroinflation.R:73:3', 'test-check_zeroinflation.R:112:3', 'test-compare_performance.R:21:3', 'test-helpers.R:1:1', 'test-icc.R:2:1', 'test-item_omega.R:10:3', 'test-item_omega.R:31:3', 'test-mclogit.R:53:3', 'test-check_outliers.R:110:3', 'test-model_performance.bayesian.R:1:1', 'test-model_performance.lavaan.R:1:1', 'test-model_performance.merMod.R:2:3', 'test-model_performance.merMod.R:25:3', 'test-model_performance.psych.R:1:1', 'test-model_performance.rma.R:33:3', 'test-nestedLogit.R:64:3', 'test-performance_reliability.R:23:3', 'test-pkg-ivreg.R:7:3', 'test-r2_nagelkerke.R:22:3', 'test-r2_nakagawa.R:20:1', 'test-r2_bayes.R:34:3', 'test-rmse.R:35:3', 'test-test_likelihoodratio.R:55:1'
    • getRversion() > "4.4.0" is TRUE (1): 'test-check_outliers.R:258:3'

    ══ Failed tests ════════════════════════════════════════════════════════════
    ── Failure ('test-check_outliers.R:304:3'): pareto which ───────────────────
    which(check_outliers(model, method = "pareto", threshold = list(pareto = 0.5))) (`actual`) not identical to 17L (`expected`).

    `actual`:   17 18
    `expected`: 17

    [ FAIL 1 | WARN 0 | SKIP 39 | PASS 428 ]
    Error: Test failures
    Execution halted
Flavor: r-release-windows-x86_64
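Both errors trace to the same expectation in 'test-check_outliers.R': on the two failing r-release flavors, the Pareto method flags observations 17 and 18, while the test expects exactly 17, so the strict `expect_identical()` comparison fails. A minimal, self-contained sketch of that comparison; the logical vector here is hypothetical stand-in data, since the real `check_outliers()` call requires a fitted model that is not reproduced in this report:

```r
# Hypothetical stand-in for the per-observation outlier flags returned by
# check_outliers(); on the failing flavors, observations 17 AND 18 are flagged.
flagged <- c(rep(FALSE, 16), TRUE, TRUE)

actual   <- which(flagged)  # integer vector: 17 18
expected <- 17L

identical(actual, expected)  # FALSE -> "`actual` not identical to 17L"
```

Because `identical()` requires the same length, type, and values, a single extra observation crossing the 0.5 Pareto-k threshold on one platform is enough to turn the check from OK to ERROR.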
