• using R version 4.5.2 (2025-10-31)
  • using platform: aarch64-apple-darwin20
  • R was compiled by
      Apple clang version 16.0.0 (clang-1600.0.26.6)
      GNU Fortran (GCC) 14.2.0
  • running under: macOS Ventura 13.7.8
  • using session charset: UTF-8
  • checking for file ‘performance/DESCRIPTION’ ... OK
  • checking extension type ... Package
  • this is package ‘performance’ version ‘0.15.3’
  • package encoding: UTF-8
  • checking package namespace information ... OK
  • checking package dependencies ... OK
  • checking if this is a source package ... OK
  • checking if there is a namespace ... OK
  • checking for executable files ... OK
  • checking for hidden files and directories ... OK
  • checking for portable file names ... OK
  • checking for sufficient/correct file permissions ... OK
  • checking whether package ‘performance’ can be installed ... [5s/5s] OK
    See the install log for details.
  • checking installed package size ... OK
  • checking package directory ... OK
  • checking DESCRIPTION meta-information ... OK
  • checking top-level files ... OK
  • checking for left-over files ... OK
  • checking index information ... OK
  • checking package subdirectories ... OK
  • checking code files for non-ASCII characters ... OK
  • checking R files for syntax errors ... OK
  • checking whether the package can be loaded ... [0s/0s] OK
  • checking whether the package can be loaded with stated dependencies ... [0s/0s] OK
  • checking whether the package can be unloaded cleanly ... [0s/0s] OK
  • checking whether the namespace can be loaded with stated dependencies ... [0s/0s] OK
  • checking whether the namespace can be unloaded cleanly ... [0s/0s] OK
  • checking loading without being on the library search path ... [0s/0s] OK
  • checking dependencies in R code ... OK
  • checking S3 generic/method consistency ... OK
  • checking replacement functions ... OK
  • checking foreign function calls ... OK
  • checking R code for possible problems ... [7s/7s] OK
  • checking Rd files ... [0s/0s] OK
  • checking Rd metadata ... OK
  • checking Rd cross-references ... OK
  • checking for missing documentation entries ... OK
  • checking for code/documentation mismatches ... OK
  • checking Rd \usage sections ... OK
  • checking Rd contents ... OK
  • checking for unstated dependencies in examples ... OK
  • checking R/sysdata.rda ... OK
  • checking examples ... [10s/17s] OK
  • checking for unstated dependencies in ‘tests’ ... OK
  • checking tests ... [19s/11s] ERROR
      Running ‘testthat.R’ [19s/11s]
    Running the tests in ‘tests/testthat.R’ failed.
    Complete output:
      > library(testthat)
      > library(performance)
      >
      > test_check("performance")
      Starting 2 test processes.
      > test-check_itemscale.R: Some of the values are negative. Maybe affected items need to be
      > test-check_itemscale.R: reverse-coded, e.g. using `datawizard::reverse()`.
      > test-check_itemscale.R: Some of the values are negative. Maybe affected items need to be
      > test-check_itemscale.R: reverse-coded, e.g. using `datawizard::reverse()`.
      > test-check_itemscale.R: Some of the values are negative. Maybe affected items need to be
      > test-check_itemscale.R: reverse-coded, e.g. using `datawizard::reverse()`.
      > test-check_itemscale.R: Some of the values are negative. Maybe affected items need to be
      > test-check_itemscale.R: reverse-coded, e.g. using `datawizard::reverse()`.
      > test-check_itemscale.R: Some of the values are negative. Maybe affected items need to be
      > test-check_itemscale.R: reverse-coded, e.g. using `datawizard::reverse()`.
      > test-check_itemscale.R: Some of the values are negative. Maybe affected items need to be
      > test-check_itemscale.R: reverse-coded, e.g. using `datawizard::reverse()`.
      > test-check_itemscale.R: Some of the values are negative. Maybe affected items need to be
      > test-check_itemscale.R: reverse-coded, e.g. using `datawizard::reverse()`.
      > test-check_itemscale.R: Some of the values are negative. Maybe affected items need to be
      > test-check_itemscale.R: reverse-coded, e.g. using `datawizard::reverse()`.
      > test-check_itemscale.R: Some of the values are negative. Maybe affected items need to be
      > test-check_itemscale.R: reverse-coded, e.g. using `datawizard::reverse()`.
      > test-check_itemscale.R: Some of the values are negative. Maybe affected items need to be
      > test-check_itemscale.R: reverse-coded, e.g. using `datawizard::reverse()`.
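The repeated `test-check_itemscale.R` notes above flag items that correlate negatively with the rest of the scale. As a side note on what the suggested fix does: a minimal base-R sketch of reverse-coding, where `reverse_item` is a hypothetical helper for illustration only (in practice `datawizard::reverse()` provides this, with support for factors, custom ranges, and selected items):

```r
# Reverse-coding flips an item's scale so that high raw scores become low
# ones and vice versa. For an item on a known range [lo, hi], the reversed
# score is lo + hi - x.
reverse_item <- function(x, lo = min(x, na.rm = TRUE), hi = max(x, na.rm = TRUE)) {
  lo + hi - x
}

item <- c(1, 2, 5, 4)              # e.g. a 1-5 Likert item
reverse_item(item, lo = 1, hi = 5)
# -> 5 4 1 2
```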
      > test-check_collinearity.R: NOTE: 2 fixed-effect singletons were removed (2 observations).
      Saving _problems/test-check_collinearity-157.R
      Saving _problems/test-check_collinearity-185.R
      > test-check_overdispersion.R: Overdispersion detected.
      > test-check_overdispersion.R: Underdispersion detected.
      > test-check_outliers.R: No outliers were detected (p = 0.238).
      > test-glmmPQL.R: iteration 1
      > test-item_discrimination.R: Some of the values are negative. Maybe affected items need to be
      > test-item_discrimination.R: reverse-coded, e.g. using `datawizard::reverse()`.
      > test-item_discrimination.R: Some of the values are negative. Maybe affected items need to be
      > test-item_discrimination.R: reverse-coded, e.g. using `datawizard::reverse()`.
      > test-item_discrimination.R: Some of the values are negative. Maybe affected items need to be
      > test-item_discrimination.R: reverse-coded, e.g. using `datawizard::reverse()`.
      > test-performance_aic.R: Model was not fitted with REML, however, `estimator = "REML"`. Set
      > test-performance_aic.R: `estimator = "ML"` to obtain identical results as from `AIC()`.
      [ FAIL 2 | WARN 2 | SKIP 41 | PASS 443 ]

      ══ Skipped tests (41) ══════════════════════════════════════════════════════════
      • On CRAN (36): 'test-bootstrapped_icc_ci.R:2:3',
        'test-bootstrapped_icc_ci.R:44:3', 'test-binned_residuals.R:163:3',
        'test-binned_residuals.R:190:3', 'test-check_convergence.R:1:1',
        'test-check_dag.R:1:1', 'test-check_distribution.R:1:1',
        'test-check_itemscale.R:1:1', 'test-check_itemscale.R:100:1',
        'test-check_model.R:1:1', 'test-check_collinearity.R:193:1',
        'test-check_collinearity.R:226:1', 'test-check_residuals.R:2:3',
        'test-check_singularity.R:2:3', 'test-check_singularity.R:30:3',
        'test-check_zeroinflation.R:73:3', 'test-check_zeroinflation.R:112:3',
        'test-check_outliers.R:115:3', 'test-check_outliers.R:339:3',
        'test-helpers.R:1:1', 'test-item_omega.R:1:1', 'test-item_omega.R:31:3',
        'test-compare_performance.R:1:1', 'test-mclogit.R:56:1',
        'test-model_performance.bayesian.R:1:1',
        'test-model_performance.lavaan.R:1:1', 'test-model_performance.merMod.R:2:3',
        'test-model_performance.merMod.R:37:3', 'test-model_performance.psych.R:1:1',
        'test-model_performance.rma.R:36:1', 'test-performance_reliability.R:23:3',
        'test-pkg-ivreg.R:1:1', 'test-r2_bayes.R:39:3', 'test-r2_nagelkerke.R:35:3',
        'test-rmse.R:39:3', 'test-test_likelihoodratio.R:55:1'
      • On Mac (4): 'test-check_predictions.R:1:1', 'test-icc.R:1:1',
        'test-nestedLogit.R:1:1', 'test-r2_nakagawa.R:1:1'
      • getRversion() > "4.4.0" is TRUE (1): 'test-check_outliers.R:300:3'

      ══ Failed tests ════════════════════════════════════════════════════════════════
      ── Failure ('test-check_collinearity.R:157:3'): check_collinearity | afex ──────
      Expected `expect_message(ccoW <- check_collinearity(aW))` to throw a warning.
      ── Failure ('test-check_collinearity.R:185:3'): check_collinearity | afex ──────
      Expected `expect_message(ccoW <- check_collinearity(aW))` to throw a warning.
      [ FAIL 2 | WARN 2 | SKIP 41 | PASS 443 ]
      Error:
      ! Test failures.
      Execution halted
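Both failures say the same thing: the tests wrap `expect_message(check_collinearity(aW))` in an outer `expect_warning()`, so they pass only when the call emits both a message and a warning, and the warning is no longer thrown. A minimal base-R reproduction of that condition-handling difference (the stand-ins `f_old`/`f_new` and the helper `catches_warning` are illustrative, not performance-package code):

```r
# f_old mimics the behaviour the test was written against (message + warning);
# f_new mimics the current behaviour suggested by the failure (message only).
f_old <- function() { message("msg"); warning("warn") }
f_new <- function() { message("msg") }

# Returns TRUE if calling f() raises a warning, ignoring any messages --
# roughly what the outer expect_warning() checks for.
catches_warning <- function(f) {
  tryCatch({ suppressMessages(f()); FALSE },
           warning = function(w) TRUE)
}

catches_warning(f_old)  # -> TRUE:  expect_warning() would be satisfied
catches_warning(f_new)  # -> FALSE: "Expected ... to throw a warning"
```

Depending on which behaviour is intended, the fix is either to restore the warning in `check_collinearity()` or to drop the outer `expect_warning()` from the two tests.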
  • checking PDF version of manual ... [3s/3s] OK
  • DONE
    Status: 1 ERROR
  • using check arguments '--no-clean-on-error '