- using R Under development (unstable) (2026-01-12 r89299)
- using platform: x86_64-pc-linux-gnu
- R was compiled by
gcc (GCC) 15.1.1 20250521 (Red Hat 15.1.1-2)
GNU Fortran (GCC) 15.1.1 20250521 (Red Hat 15.1.1-2)
- running under: Fedora Linux 42 (Workstation Edition)
- using session charset: UTF-8
- using option ‘--no-stop-on-test-error’
- checking for file ‘mllrnrs/DESCRIPTION’ ... OK
- this is package ‘mllrnrs’ version ‘0.0.7’
- package encoding: UTF-8
- checking package namespace information ... OK
- checking package dependencies ... INFO
Package suggested but not available for checking: ‘ParBayesianOptimization’
- checking if this is a source package ... OK
- checking if there is a namespace ... OK
- checking for executable files ... OK
- checking for hidden files and directories ... OK
- checking for portable file names ... OK
- checking for sufficient/correct file permissions ... OK
- checking whether package ‘mllrnrs’ can be installed ... [9s/11s] OK
See the install log for details.
- checking package directory ... OK
- checking ‘build’ directory ... OK
- checking DESCRIPTION meta-information ... OK
- checking top-level files ... OK
- checking for left-over files ... OK
- checking index information ... OK
- checking package subdirectories ... OK
- checking code files for non-ASCII characters ... OK
- checking R files for syntax errors ... OK
- checking whether the package can be loaded ... OK
- checking whether the package can be loaded with stated dependencies ... OK
- checking whether the package can be unloaded cleanly ... OK
- checking whether the namespace can be loaded with stated dependencies ... OK
- checking whether the namespace can be unloaded cleanly ... OK
- checking loading without being on the library search path ... OK
- checking whether startup messages can be suppressed ... OK
- checking use of S3 registration ... OK
- checking dependencies in R code ... OK
- checking S3 generic/method consistency ... OK
- checking replacement functions ... OK
- checking foreign function calls ... OK
- checking R code for possible problems ... OK
- checking Rd files ... OK
- checking Rd metadata ... OK
- checking Rd line widths ... OK
- checking Rd cross-references ... OK
- checking for missing documentation entries ... OK
- checking for code/documentation mismatches ... OK
- checking Rd \usage sections ... OK
- checking Rd contents ... OK
- checking for unstated dependencies in examples ... OK
- checking installed files from ‘inst/doc’ ... OK
- checking files in ‘vignettes’ ... OK
- checking examples ... OK
- checking for unstated dependencies in ‘tests’ ... OK
- checking tests ... [113s/117s] ERROR
Running ‘testthat.R’ [112s/117s]
Running the tests in ‘tests/testthat.R’ failed.
Complete output:
> # This file is part of the standard setup for testthat.
> # It is recommended that you do not modify it.
> #
> # Where should you do additional test configuration?
> # Learn more about the roles of various files in:
> # * https://r-pkgs.org/tests.html
> # * https://testthat.r-lib.org/reference/test_package.html#special-files
> # https://github.com/Rdatatable/data.table/issues/5658
> Sys.setenv("OMP_THREAD_LIMIT" = 2)
> Sys.setenv("Ncpu" = 2)
>
> library(testthat)
> library(mllrnrs)
>
> test_check("mllrnrs")
CV fold: Fold1
CV fold: Fold1
Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
... reducing initialization grid to 10 rows.
Saving _problems/test-binary-225.R
CV fold: Fold1
Classification: using 'mean classification error' as optimization metric.
Classification: using 'mean classification error' as optimization metric.
Classification: using 'mean classification error' as optimization metric.
CV fold: Fold2
Classification: using 'mean classification error' as optimization metric.
Classification: using 'mean classification error' as optimization metric.
Classification: using 'mean classification error' as optimization metric.
CV fold: Fold3
Classification: using 'mean classification error' as optimization metric.
Classification: using 'mean classification error' as optimization metric.
Classification: using 'mean classification error' as optimization metric.
CV fold: Fold1
CV fold: Fold2
CV fold: Fold3
CV fold: Fold1
CV fold: Fold2
CV fold: Fold3
CV fold: Fold1
Classification: using 'mean classification error' as optimization metric.
Classification: using 'mean classification error' as optimization metric.
Classification: using 'mean classification error' as optimization metric.
CV fold: Fold2
Classification: using 'mean classification error' as optimization metric.
Classification: using 'mean classification error' as optimization metric.
Classification: using 'mean classification error' as optimization metric.
CV fold: Fold3
Classification: using 'mean classification error' as optimization metric.
Classification: using 'mean classification error' as optimization metric.
Classification: using 'mean classification error' as optimization metric.
CV fold: Fold1
CV fold: Fold2
CV fold: Fold3
CV fold: Fold1
Saving _problems/test-regression-107.R
CV fold: Fold1
CV fold: Fold2
CV fold: Fold3
CV fold: Fold1
Regression: using 'mean squared error' as optimization metric.
Regression: using 'mean squared error' as optimization metric.
Regression: using 'mean squared error' as optimization metric.
CV fold: Fold2
Regression: using 'mean squared error' as optimization metric.
Regression: using 'mean squared error' as optimization metric.
Regression: using 'mean squared error' as optimization metric.
CV fold: Fold3
Regression: using 'mean squared error' as optimization metric.
Regression: using 'mean squared error' as optimization metric.
Regression: using 'mean squared error' as optimization metric.
CV fold: Fold1
Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
... reducing initialization grid to 10 rows.
Saving _problems/test-regression-309.R
CV fold: Fold1
CV fold: Fold2
CV fold: Fold3
[ FAIL 3 | WARN 0 | SKIP 3 | PASS 25 ]
══ Skipped tests (3) ═══════════════════════════════════════════════════════════
• On CRAN (3): 'test-binary.R:57:5', 'test-lints.R:10:5',
'test-multiclass.R:57:5'
══ Failed tests ════════════════════════════════════════════════════════════════
── Error ('test-binary.R:225:5'): test nested cv, bayesian, binary - lightgbm ──
Error: Package "ParBayesianOptimization" must be installed to use 'strategy = "bayesian"'.
Backtrace:
▆
1. └─lightgbm_optimizer$execute() at test-binary.R:225:5
2. └─mlexperiments:::.run_cv(self = self, private = private)
3. └─mlexperiments:::.fold_looper(self, private)
4. ├─base::do.call(private$cv_run_model, run_args)
5. └─mlexperiments (local) `<fn>`(train_index = `<int>`, fold_train = `<named list>`, fold_test = `<named list>`)
6. ├─base::do.call(.cv_run_nested_model, args)
7. └─mlexperiments (local) `<fn>`(...)
8. └─hparam_tuner$execute(k = self$k_tuning)
9. └─private$select_optimizer(self, private)
10. └─BayesianOptimizer$new(...)
11. └─mlexperiments (local) initialize(...)
── Error ('test-regression.R:107:5'): test nested cv, bayesian, regression - glmnet ──
Error: Package "ParBayesianOptimization" must be installed to use 'strategy = "bayesian"'.
Backtrace:
▆
1. └─glmnet_optimizer$execute() at test-regression.R:107:5
2. └─mlexperiments:::.run_cv(self = self, private = private)
3. └─mlexperiments:::.fold_looper(self, private)
4. ├─base::do.call(private$cv_run_model, run_args)
5. └─mlexperiments (local) `<fn>`(train_index = `<int>`, fold_train = `<named list>`, fold_test = `<named list>`)
6. ├─base::do.call(.cv_run_nested_model, args)
7. └─mlexperiments (local) `<fn>`(...)
8. └─hparam_tuner$execute(k = self$k_tuning)
9. └─private$select_optimizer(self, private)
10. └─BayesianOptimizer$new(...)
11. └─mlexperiments (local) initialize(...)
── Error ('test-regression.R:309:5'): test nested cv, bayesian, reg:squarederror - xgboost ──
Error: Package "ParBayesianOptimization" must be installed to use 'strategy = "bayesian"'.
Backtrace:
▆
1. └─xgboost_optimizer$execute() at test-regression.R:309:5
2. └─mlexperiments:::.run_cv(self = self, private = private)
3. └─mlexperiments:::.fold_looper(self, private)
4. ├─base::do.call(private$cv_run_model, run_args)
5. └─mlexperiments (local) `<fn>`(train_index = `<int>`, fold_train = `<named list>`, fold_test = `<named list>`)
6. ├─base::do.call(.cv_run_nested_model, args)
7. └─mlexperiments (local) `<fn>`(...)
8. └─hparam_tuner$execute(k = self$k_tuning)
9. └─private$select_optimizer(self, private)
10. └─BayesianOptimizer$new(...)
11. └─mlexperiments (local) initialize(...)
[ FAIL 3 | WARN 0 | SKIP 3 | PASS 25 ]
Error:
! Test failures.
Execution halted
- checking for unstated dependencies in vignettes ... OK
- checking package vignettes ... OK
- checking re-building of vignette outputs ... [144s/184s] OK
- checking PDF version of manual ... OK
- checking HTML version of manual ... OK
- checking for non-standard things in the check directory ... OK
- checking for detritus in the temp directory ... OK
- DONE
Status: 1 ERROR
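
All three test failures share one cause: `ParBayesianOptimization` is declared only as a suggested dependency and is not installed on the check machine (see the dependency INFO near the top of the log), yet the failing tests exercise `strategy = "bayesian"`, which requires it. The standard remedy is to skip such tests when the suggested package is absent. A minimal sketch (the test name and body here are illustrative, not the package's actual test code):

```r
# In each test that uses the Bayesian tuning strategy, skip cleanly
# when the suggested package is missing instead of erroring:
test_that("test nested cv, bayesian, regression - glmnet", {
  testthat::skip_if_not_installed("ParBayesianOptimization")
  # ... existing test body using strategy = "bayesian" ...
})
```

With this guard in `test-binary.R` and `test-regression.R`, a check run on a machine without the suggested package would report these three tests as additional skips rather than failures, turning `Status: 1 ERROR` into a clean result while still running the full Bayesian tests wherever `ParBayesianOptimization` is available.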