CRAN Package Check Results for Package mllrnrs

Last updated on 2025-09-08 09:50:24 CEST.

Flavor Version Tinstall (s) Tcheck (s) Ttotal (s) Status Flags
r-devel-linux-x86_64-debian-clang 0.0.5 5.96 190.22 196.18 OK
r-devel-linux-x86_64-debian-gcc 0.0.5 3.75 90.24 93.99 ERROR
r-devel-linux-x86_64-fedora-clang 0.0.5 192.48 ERROR
r-devel-linux-x86_64-fedora-gcc 0.0.5 262.13 ERROR
r-devel-windows-x86_64 0.0.5 7.00 628.00 635.00 OK
r-patched-linux-x86_64 0.0.5 5.22 195.34 200.56 OK
r-release-linux-x86_64 0.0.5 5.06 197.48 202.54 OK
r-release-macos-arm64 0.0.5 322.00 OK
r-release-macos-x86_64 0.0.5 354.00 OK
r-release-windows-x86_64 0.0.5 7.00 639.00 646.00 OK
r-oldrel-macos-arm64 0.0.5 205.00 OK
r-oldrel-macos-x86_64 0.0.5 359.00 OK
r-oldrel-windows-x86_64 0.0.5 9.00 898.00 907.00 OK

Check Details

Version: 0.0.5
Check: examples
Result: ERROR
    Running examples in ‘mllrnrs-Ex.R’ failed
    The error most likely occurred in:

    > base::assign(".ptime", proc.time(), pos = "CheckExEnv")
    > ### Name: LearnerGlmnet
    > ### Title: R6 Class to construct a Glmnet learner
    > ### Aliases: LearnerGlmnet
    >
    > ### ** Examples
    >
    > # binary classification
    >
    > library(mlbench)
    > data("PimaIndiansDiabetes2")
    > dataset <- PimaIndiansDiabetes2 |>
    +   data.table::as.data.table() |>
    +   na.omit()
    >
    > seed <- 123
    > feature_cols <- colnames(dataset)[1:8]
    >
    > train_x <- model.matrix(
    +   ~ -1 + .,
    +   dataset[, .SD, .SDcols = feature_cols]
    + )
    > train_y <- as.integer(dataset[, get("diabetes")]) - 1L
    >
    > fold_list <- splitTools::create_folds(
    +   y = train_y,
    +   k = 3,
    +   type = "stratified",
    +   seed = seed
    + )
    > glmnet_cv <- mlexperiments::MLCrossValidation$new(
    +   learner = mllrnrs::LearnerGlmnet$new(
    +     metric_optimization_higher_better = FALSE
    +   ),
    +   fold_list = fold_list,
    +   ncores = 2,
    +   seed = 123
    + )
    > glmnet_cv$learner_args <- list(
    +   alpha = 1,
    +   lambda = 0.1,
    +   family = "binomial",
    +   type.measure = "class",
    +   standardize = TRUE
    + )
    > glmnet_cv$predict_args <- list(type = "response")
    > glmnet_cv$performance_metric_args <- list(positive = "1")
    > glmnet_cv$performance_metric <- mlexperiments::metric("auc")
    Error: Package "measures" must be installed to use function 'metric()'.
    Execution halted
Flavor: r-devel-linux-x86_64-debian-gcc
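The example fails because it calls mlexperiments::metric() while the suggested package "measures" is not available on this check machine. A minimal sketch of how the metric-dependent part of the example could be guarded (assuming "measures" remains a soft dependency; only the lines visible in the log are repeated here):

    # Run the metric-dependent part of the example only if the suggested
    # package is installed; requireNamespace() returns FALSE instead of
    # throwing an error when it is missing.
    if (requireNamespace("measures", quietly = TRUE)) {
      glmnet_cv$performance_metric <- mlexperiments::metric("auc")
      # ... remainder of the example that needs the performance metric ...
    }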

Version: 0.0.5
Check: tests
Result: ERROR
    Running ‘testthat.R’ [3s/5s]
    Running the tests in ‘tests/testthat.R’ failed.
    Complete output:
    > # This file is part of the standard setup for testthat.
    > # It is recommended that you do not modify it.
    > #
    > # Where should you do additional test configuration?
    > # Learn more about the roles of various files in:
    > # * https://r-pkgs.org/tests.html
    > # * https://testthat.r-lib.org/reference/test_package.html#special-files
    > # https://github.com/Rdatatable/data.table/issues/5658
    > Sys.setenv("OMP_THREAD_LIMIT" = 2)
    > Sys.setenv("Ncpu" = 2)
    >
    > library(testthat)
    > library(mllrnrs)
    >
    > test_check("mllrnrs")
    [ FAIL 17 | WARN 0 | SKIP 1 | PASS 0 ]

    ══ Skipped tests (1) ═══════════════════════════════════════════════════════════
    • On CRAN (1): 'test-lints.R:10:5'

    ══ Failed tests ════════════════════════════════════════════════════════════════
    ── Error ('test-glmnet_binary.R:88:5'): test nested cv, bayesian, binary - glmnet ──
    Error: Package "measures" must be installed to use function 'metric()'.
    Backtrace:
        ▆
     1. └─mlexperiments::metric("prauc") at test-glmnet_binary.R:88:5
    ── Error ('test-glmnet_binary.R:134:5'): test nested cv, grid - glmnet, errors ──
    Error: Package "measures" must be installed to use function 'metric()'.
    Backtrace:
        ▆
     1. └─mlexperiments::metric("prauc") at test-glmnet_binary.R:134:5
    ── Error ('test-glmnet_multiclass.R:89:5'): test nested cv, bayesian, multiclass - glmnet ──
    Error: Package "measures" must be installed to use function 'metric()'.
    Backtrace:
        ▆
     1. └─mlexperiments::metric("bacc") at test-glmnet_multiclass.R:89:5
    ── Error ('test-glmnet_regression.R:87:5'): test nested cv, bayesian, regression - glmnet ──
    Error: Package "measures" must be installed to use function 'metric()'.
    Backtrace:
        ▆
     1. └─mlexperiments::metric("rmsle") at test-glmnet_regression.R:87:5
    ── Error ('test-glmnet_regression.R:134:5'): test nested cv, grid - glmnet ─────
    Error: Package "measures" must be installed to use function 'metric()'.
    Backtrace:
        ▆
     1. └─mlexperiments::metric("rmsle") at test-glmnet_regression.R:134:5
    ── Error ('test-lightgbm_binary.R:99:5'): test nested cv, bayesian, binary - lightgbm ──
    Error: Package "measures" must be installed to use function 'metric()'.
    Backtrace:
        ▆
     1. └─mlexperiments::metric("auc") at test-lightgbm_binary.R:99:5
    ── Error ('test-lightgbm_multiclass.R:101:5'): test nested cv, bayesian, multiclass - lightgbm ──
    Error: Package "measures" must be installed to use function 'metric()'.
    Backtrace:
        ▆
     1. └─mlexperiments::metric("bacc") at test-lightgbm_multiclass.R:101:5
    ── Error ('test-lightgbm_regression.R:99:5'): test nested cv, bayesian, regression - lightgbm ──
    Error: Package "measures" must be installed to use function 'metric()'.
    Backtrace:
        ▆
     1. └─mlexperiments::metric("msle") at test-lightgbm_regression.R:99:5
    ── Error ('test-lightgbm_regression.R:141:5'): test nested cv, grid - lightgbm ──
    Error: Package "measures" must be installed to use function 'metric()'.
    Backtrace:
        ▆
     1. └─mlexperiments::metric("msle") at test-lightgbm_regression.R:141:5
    ── Error ('test-ranger_binary.R:92:5'): test nested cv, bayesian, binary - ranger ──
    Error: Package "measures" must be installed to use function 'metric()'.
    Backtrace:
        ▆
     1. └─mlexperiments::metric("auc") at test-ranger_binary.R:92:5
    ── Error ('test-ranger_multiclass.R:93:5'): test nested cv, bayesian, regression - ranger ──
    Error: Package "measures" must be installed to use function 'metric()'.
    Backtrace:
        ▆
     1. └─mlexperiments::metric("bacc") at test-ranger_multiclass.R:93:5
    ── Error ('test-ranger_regression.R:87:5'): test nested cv, bayesian, regression - ranger ──
    Error: Package "measures" must be installed to use function 'metric()'.
    Backtrace:
        ▆
     1. └─mlexperiments::metric("msle") at test-ranger_regression.R:87:5
    ── Error ('test-ranger_regression.R:125:5'): test nested cv, grid - ranger ─────
    Error: Package "measures" must be installed to use function 'metric()'.
    Backtrace:
        ▆
     1. └─mlexperiments::metric("msle") at test-ranger_regression.R:125:5
    ── Error ('test-xgboost_binary.R:83:5'): test nested cv, bayesian, binary:logistic - xgboost ──
    Error: Package "measures" must be installed to use function 'metric()'.
    Backtrace:
        ▆
     1. └─mlexperiments::metric("auc") at test-xgboost_binary.R:83:5
    ── Error ('test-xgboost_multiclass.R:86:5'): test nested cv, bayesian, multi:softprob - xgboost, with weights ──
    Error: Package "measures" must be installed to use function 'metric()'.
    Backtrace:
        ▆
     1. └─mlexperiments::metric("bacc") at test-xgboost_multiclass.R:86:5
    ── Error ('test-xgboost_regression.R:82:5'): test nested cv, bayesian, reg:squarederror - xgboost ──
    Error: Package "measures" must be installed to use function 'metric()'.
    Backtrace:
        ▆
     1. └─mlexperiments::metric("msle") at test-xgboost_regression.R:82:5
    ── Error ('test-xgboost_regression.R:125:5'): test nested cv, grid - xgboost ───
    Error: Package "measures" must be installed to use function 'metric()'.
    Backtrace:
        ▆
     1. └─mlexperiments::metric("msle") at test-xgboost_regression.R:125:5

    [ FAIL 17 | WARN 0 | SKIP 1 | PASS 0 ]
    Error: Test failures
    Execution halted
Flavor: r-devel-linux-x86_64-debian-gcc
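All 17 failures share the same cause as the example error: mlexperiments::metric() requires the suggested package "measures". A minimal sketch of how an affected test could skip cleanly on machines without that package (the test name is taken from the log; the placement of the skip is an assumption):

    test_that("test nested cv, bayesian, binary - glmnet", {
      # Skip instead of erroring when the suggested package is missing.
      testthat::skip_if_not_installed("measures")

      # ... existing test body, which eventually calls ...
      # mlexperiments::metric("prauc")
    })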

Version: 0.0.5
Check: examples
Result: ERROR
    Running examples in ‘mllrnrs-Ex.R’ failed
    The error most likely occurred in:

    > ### Name: LearnerGlmnet
    > ### Title: R6 Class to construct a Glmnet learner
    > ### Aliases: LearnerGlmnet
    >
    > ### ** Examples
    >
    > # binary classification
    >
    > library(mlbench)
    > data("PimaIndiansDiabetes2")
    > dataset <- PimaIndiansDiabetes2 |>
    +   data.table::as.data.table() |>
    +   na.omit()
    >
    > seed <- 123
    > feature_cols <- colnames(dataset)[1:8]
    >
    > train_x <- model.matrix(
    +   ~ -1 + .,
    +   dataset[, .SD, .SDcols = feature_cols]
    + )
    > train_y <- as.integer(dataset[, get("diabetes")]) - 1L
    >
    > fold_list <- splitTools::create_folds(
    +   y = train_y,
    +   k = 3,
    +   type = "stratified",
    +   seed = seed
    + )
    > glmnet_cv <- mlexperiments::MLCrossValidation$new(
    +   learner = mllrnrs::LearnerGlmnet$new(
    +     metric_optimization_higher_better = FALSE
    +   ),
    +   fold_list = fold_list,
    +   ncores = 2,
    +   seed = 123
    + )
    > glmnet_cv$learner_args <- list(
    +   alpha = 1,
    +   lambda = 0.1,
    +   family = "binomial",
    +   type.measure = "class",
    +   standardize = TRUE
    + )
    > glmnet_cv$predict_args <- list(type = "response")
    > glmnet_cv$performance_metric_args <- list(positive = "1")
    > glmnet_cv$performance_metric <- mlexperiments::metric("auc")
    Error: Package "measures" must be installed to use function 'metric()'.
    Execution halted
Flavor: r-devel-linux-x86_64-fedora-clang

Version: 0.0.5
Check: tests
Result: ERROR
    Running ‘testthat.R’
    Running the tests in ‘tests/testthat.R’ failed.
    Complete output:
    > # This file is part of the standard setup for testthat.
    > # It is recommended that you do not modify it.
    > #
    > # Where should you do additional test configuration?
    > # Learn more about the roles of various files in:
    > # * https://r-pkgs.org/tests.html
    > # * https://testthat.r-lib.org/reference/test_package.html#special-files
    > # https://github.com/Rdatatable/data.table/issues/5658
    > Sys.setenv("OMP_THREAD_LIMIT" = 2)
    > Sys.setenv("Ncpu" = 2)
    >
    > library(testthat)
    > library(mllrnrs)
    >
    > test_check("mllrnrs")
    [ FAIL 17 | WARN 0 | SKIP 1 | PASS 0 ]

    ══ Skipped tests (1) ═══════════════════════════════════════════════════════════
    • On CRAN (1): 'test-lints.R:10:5'

    ══ Failed tests ════════════════════════════════════════════════════════════════
    ── Error ('test-glmnet_binary.R:88:5'): test nested cv, bayesian, binary - glmnet ──
    Error: Package "measures" must be installed to use function 'metric()'.
    Backtrace:
        ▆
     1. └─mlexperiments::metric("prauc") at test-glmnet_binary.R:88:5
    ── Error ('test-glmnet_binary.R:134:5'): test nested cv, grid - glmnet, errors ──
    Error: Package "measures" must be installed to use function 'metric()'.
    Backtrace:
        ▆
     1. └─mlexperiments::metric("prauc") at test-glmnet_binary.R:134:5
    ── Error ('test-glmnet_multiclass.R:89:5'): test nested cv, bayesian, multiclass - glmnet ──
    Error: Package "measures" must be installed to use function 'metric()'.
    Backtrace:
        ▆
     1. └─mlexperiments::metric("bacc") at test-glmnet_multiclass.R:89:5
    ── Error ('test-glmnet_regression.R:87:5'): test nested cv, bayesian, regression - glmnet ──
    Error: Package "measures" must be installed to use function 'metric()'.
    Backtrace:
        ▆
     1. └─mlexperiments::metric("rmsle") at test-glmnet_regression.R:87:5
    ── Error ('test-glmnet_regression.R:134:5'): test nested cv, grid - glmnet ─────
    Error: Package "measures" must be installed to use function 'metric()'.
    Backtrace:
        ▆
     1. └─mlexperiments::metric("rmsle") at test-glmnet_regression.R:134:5
    ── Error ('test-lightgbm_binary.R:99:5'): test nested cv, bayesian, binary - lightgbm ──
    Error: Package "measures" must be installed to use function 'metric()'.
    Backtrace:
        ▆
     1. └─mlexperiments::metric("auc") at test-lightgbm_binary.R:99:5
    ── Error ('test-lightgbm_multiclass.R:101:5'): test nested cv, bayesian, multiclass - lightgbm ──
    Error: Package "measures" must be installed to use function 'metric()'.
    Backtrace:
        ▆
     1. └─mlexperiments::metric("bacc") at test-lightgbm_multiclass.R:101:5
    ── Error ('test-lightgbm_regression.R:99:5'): test nested cv, bayesian, regression - lightgbm ──
    Error: Package "measures" must be installed to use function 'metric()'.
    Backtrace:
        ▆
     1. └─mlexperiments::metric("msle") at test-lightgbm_regression.R:99:5
    ── Error ('test-lightgbm_regression.R:141:5'): test nested cv, grid - lightgbm ──
    Error: Package "measures" must be installed to use function 'metric()'.
    Backtrace:
        ▆
     1. └─mlexperiments::metric("msle") at test-lightgbm_regression.R:141:5
    ── Error ('test-ranger_binary.R:92:5'): test nested cv, bayesian, binary - ranger ──
    Error: Package "measures" must be installed to use function 'metric()'.
    Backtrace:
        ▆
     1. └─mlexperiments::metric("auc") at test-ranger_binary.R:92:5
    ── Error ('test-ranger_multiclass.R:93:5'): test nested cv, bayesian, regression - ranger ──
    Error: Package "measures" must be installed to use function 'metric()'.
    Backtrace:
        ▆
     1. └─mlexperiments::metric("bacc") at test-ranger_multiclass.R:93:5
    ── Error ('test-ranger_regression.R:87:5'): test nested cv, bayesian, regression - ranger ──
    Error: Package "measures" must be installed to use function 'metric()'.
    Backtrace:
        ▆
     1. └─mlexperiments::metric("msle") at test-ranger_regression.R:87:5
    ── Error ('test-ranger_regression.R:125:5'): test nested cv, grid - ranger ─────
    Error: Package "measures" must be installed to use function 'metric()'.
    Backtrace:
        ▆
     1. └─mlexperiments::metric("msle") at test-ranger_regression.R:125:5
    ── Error ('test-xgboost_binary.R:83:5'): test nested cv, bayesian, binary:logistic - xgboost ──
    Error: Package "measures" must be installed to use function 'metric()'.
    Backtrace:
        ▆
     1. └─mlexperiments::metric("auc") at test-xgboost_binary.R:83:5
    ── Error ('test-xgboost_multiclass.R:86:5'): test nested cv, bayesian, multi:softprob - xgboost, with weights ──
    Error: Package "measures" must be installed to use function 'metric()'.
    Backtrace:
        ▆
     1. └─mlexperiments::metric("bacc") at test-xgboost_multiclass.R:86:5
    ── Error ('test-xgboost_regression.R:82:5'): test nested cv, bayesian, reg:squarederror - xgboost ──
    Error: Package "measures" must be installed to use function 'metric()'.
    Backtrace:
        ▆
     1. └─mlexperiments::metric("msle") at test-xgboost_regression.R:82:5
    ── Error ('test-xgboost_regression.R:125:5'): test nested cv, grid - xgboost ───
    Error: Package "measures" must be installed to use function 'metric()'.
    Backtrace:
        ▆
     1. └─mlexperiments::metric("msle") at test-xgboost_regression.R:125:5

    [ FAIL 17 | WARN 0 | SKIP 1 | PASS 0 ]
    Error: Test failures
    Execution halted
Flavor: r-devel-linux-x86_64-fedora-clang

Version: 0.0.5
Check: tests
Result: ERROR
    Running ‘testthat.R’ [1m/11m]
    Running the tests in ‘tests/testthat.R’ failed.
    Complete output:
    > # This file is part of the standard setup for testthat.
    > # It is recommended that you do not modify it.
    > #
    > # Where should you do additional test configuration?
    > # Learn more about the roles of various files in:
    > # * https://r-pkgs.org/tests.html
    > # * https://testthat.r-lib.org/reference/test_package.html#special-files
    > # https://github.com/Rdatatable/data.table/issues/5658
    > Sys.setenv("OMP_THREAD_LIMIT" = 2)
    > Sys.setenv("Ncpu" = 2)
    >
    > library(testthat)
    > library(mllrnrs)
    >
    > test_check("mllrnrs")
    CV fold: Fold1
    Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
    ... reducing initialization grid to 10 rows.
    Registering parallel backend using 2 cores.
    Running initial scoring function 10 times in 2 thread(s)... 18.377 seconds
    Starting Epoch 1
    1) Fitting Gaussian Process...
    2) Running local optimum search... 1.961 seconds
    3) Running FUN 2 times in 2 thread(s)... 1.236 seconds
    CV fold: Fold2
    Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
    ... reducing initialization grid to 10 rows.
    Registering parallel backend using 2 cores.
    Running initial scoring function 10 times in 2 thread(s)... 14.076 seconds
    Starting Epoch 1
    1) Fitting Gaussian Process...
    2) Running local optimum search... 2.039 seconds
    3) Running FUN 2 times in 2 thread(s)... 1.222 seconds
    CV fold: Fold3
    Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
    ... reducing initialization grid to 10 rows.
    Registering parallel backend using 2 cores.
    Running initial scoring function 10 times in 2 thread(s)... 14.338 seconds
    Starting Epoch 1
    1) Fitting Gaussian Process...
    2) Running local optimum search... 2.05 seconds
    3) Running FUN 2 times in 2 thread(s)... 1.085 seconds
    CV fold: Fold1
    CV fold: Fold2
    CV fold: Fold3
    CV fold: Fold1
    Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
    ... reducing initialization grid to 10 rows.
    Registering parallel backend using 2 cores.
    Running initial scoring function 10 times in 2 thread(s)... 16.241 seconds
    Starting Epoch 1
    1) Fitting Gaussian Process...
    2) Running local optimum search... 27.775 seconds
    3) Running FUN 2 times in 2 thread(s)... 1.249 seconds
    CV fold: Fold2
    Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
    ... reducing initialization grid to 10 rows.
    Registering parallel backend using 2 cores.
    Running initial scoring function 10 times in 2 thread(s)... 15.659 seconds
    Starting Epoch 1
    1) Fitting Gaussian Process...
    2) Running local optimum search... 22.895 seconds
    3) Running FUN 2 times in 2 thread(s)... 0.753 seconds
    CV fold: Fold3
    Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
    ... reducing initialization grid to 10 rows.
    Registering parallel backend using 2 cores.
    Running initial scoring function 10 times in 2 thread(s)... 12.79 seconds
    Starting Epoch 1
    1) Fitting Gaussian Process...
    2) Running local optimum search... 38.675 seconds
    3) Running FUN 2 times in 2 thread(s)... 0.968 seconds
    CV fold: Fold1
    Registering parallel backend using 2 cores.
    Running initial scoring function 10 times in 2 thread(s)... 12.992 seconds
    Starting Epoch 1
    1) Fitting Gaussian Process...
    2) Running local optimum search... 23.837 seconds
    3) Running FUN 2 times in 2 thread(s)... 1.05 seconds
    CV fold: Fold2
    Registering parallel backend using 2 cores.
    Running initial scoring function 10 times in 2 thread(s)... 11.24 seconds
    Starting Epoch 1
    1) Fitting Gaussian Process...
    2) Running local optimum search... 29.711 seconds
    3) Running FUN 2 times in 2 thread(s)... 0.895 seconds
    CV fold: Fold3
    Registering parallel backend using 2 cores.
    Running initial scoring function 10 times in 2 thread(s)... 10.93 seconds
    Starting Epoch 1
    1) Fitting Gaussian Process...
    2) Running local optimum search... 32.119 seconds
    3) Running FUN 2 times in 2 thread(s)... 0.79 seconds
    CV fold: Fold1
    CV fold: Fold2
    CV fold: Fold3
    CV fold: Fold1
    Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
    ... reducing initialization grid to 10 rows.
    Registering parallel backend using 2 cores.
    Running initial scoring function 10 times in 2 thread(s)... 9.538 seconds
        num.trees  mtry min.node.size max.depth sample.fraction
            <num> <num>         <num>     <num>           <num>
     1:      1000     6             1         9             0.5
     2:      1000     2             5         1             0.8
     3:      1000     6             9         9             0.8
     4:      1000     4             1         9             0.5
     5:      1000     4             5         9             0.8
     6:      1000     2             9         9             0.8
     7:       500     2             9         1             0.8
     8:       500     4             1         9             0.8
     9:      1000     6             5         9             0.8
    10:       500     6             1         1             0.5
                                                         errorMessage
                                                               <char>
     1: `name` is not a function exported from R package {measures}
     2: `name` is not a function exported from R package {measures}
     3: `name` is not a function exported from R package {measures}
     4: `name` is not a function exported from R package {measures}
     5: `name` is not a function exported from R package {measures}
     6: `name` is not a function exported from R package {measures}
     7: `name` is not a function exported from R package {measures}
     8: `name` is not a function exported from R package {measures}
     9: `name` is not a function exported from R package {measures}
    10: `name` is not a function exported from R package {measures}
    CV fold: Fold1
    Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
    ... reducing initialization grid to 10 rows.
    Registering parallel backend using 2 cores.
    Running initial scoring function 10 times in 2 thread(s)... 9.594 seconds
    Starting Epoch 1
    1) Fitting Gaussian Process...
    2) Running local optimum search... 1.704 seconds
    3) Running FUN 2 times in 2 thread(s)... 0.753 seconds
    CV fold: Fold2
    Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
    ... reducing initialization grid to 10 rows.
    Registering parallel backend using 2 cores.
    Running initial scoring function 10 times in 2 thread(s)... 9.062 seconds
    Starting Epoch 1
    1) Fitting Gaussian Process...
    2) Running local optimum search... 1.846 seconds
    3) Running FUN 2 times in 2 thread(s)... 0.777 seconds
    CV fold: Fold3
    Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
    ... reducing initialization grid to 10 rows.
    Registering parallel backend using 2 cores.
    Running initial scoring function 10 times in 2 thread(s)... 8.811 seconds
    Starting Epoch 1
    1) Fitting Gaussian Process...
    2) Running local optimum search... 2.125 seconds
    3) Running FUN 2 times in 2 thread(s)... 0.888 seconds
    CV fold: Fold1
    Regression: using 'mean squared error' as optimization metric.
    Regression: using 'mean squared error' as optimization metric.
    Regression: using 'mean squared error' as optimization metric.
    CV fold: Fold2
    Regression: using 'mean squared error' as optimization metric.
    Regression: using 'mean squared error' as optimization metric.
    Regression: using 'mean squared error' as optimization metric.
    CV fold: Fold3
    Regression: using 'mean squared error' as optimization metric.
    Regression: using 'mean squared error' as optimization metric.
    Regression: using 'mean squared error' as optimization metric.
    CV fold: Fold1
    Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
    ... reducing initialization grid to 10 rows.
    Registering parallel backend using 2 cores.
    Running initial scoring function 10 times in 2 thread(s)... 6.065 seconds
    Starting Epoch 1
    1) Fitting Gaussian Process...
    2) Running local optimum search... 13.027 seconds
    3) Running FUN 2 times in 2 thread(s)... 0.52 seconds
    CV fold: Fold2
    Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
    ... reducing initialization grid to 10 rows.
    Registering parallel backend using 2 cores.
    Running initial scoring function 10 times in 2 thread(s)... 5.982 seconds
    Starting Epoch 1
    1) Fitting Gaussian Process...
    2) Running local optimum search... 9.738 seconds
    3) Running FUN 2 times in 2 thread(s)... 0.441 seconds
    CV fold: Fold3
    Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
    ... reducing initialization grid to 10 rows.
    Registering parallel backend using 2 cores.
    Running initial scoring function 10 times in 2 thread(s)... 5.971 seconds
    Starting Epoch 1
    1) Fitting Gaussian Process...
    2) Running local optimum search... 16.539 seconds
    3) Running FUN 2 times in 2 thread(s)... 0.588 seconds
    CV fold: Fold1
    Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
    ... reducing initialization grid to 10 rows.
    Registering parallel backend using 2 cores.
    Running initial scoring function 10 times in 2 thread(s)... 6.979 seconds
    Starting Epoch 1
    1) Fitting Gaussian Process...
    2) Running local optimum search... 34.856 seconds
    3) Running FUN 2 times in 2 thread(s)... 0.734 seconds
    CV fold: Fold2
    Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
    ... reducing initialization grid to 10 rows.
    Registering parallel backend using 2 cores.
    Running initial scoring function 10 times in 2 thread(s)... 7.295 seconds
    Starting Epoch 1
    1) Fitting Gaussian Process...
    2) Running local optimum search... 4.398 seconds
    3) Running FUN 2 times in 2 thread(s)... 0.61 seconds
    CV fold: Fold3
    Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
    ... reducing initialization grid to 10 rows.
    Registering parallel backend using 2 cores.
    Running initial scoring function 10 times in 2 thread(s)... 7.21 seconds
    Starting Epoch 1
    1) Fitting Gaussian Process...
    2) Running local optimum search... 18.203 seconds
    3) Running FUN 2 times in 2 thread(s)... 0.62 seconds
    CV fold: Fold1
    CV fold: Fold2
    CV fold: Fold3
    [ FAIL 7 | WARN 3 | SKIP 1 | PASS 30 ]

    ══ Skipped tests (1) ═══════════════════════════════════════════════════════════
    • On CRAN (1): 'test-lints.R:10:5'

    ══ Failed tests ════════════════════════════════════════════════════════════════
    ── Error ('test-glmnet_binary.R:88:5'): test nested cv, bayesian, binary - glmnet ──
    Error in `mlexperiments::metric("prauc")`: `name` is not a function exported from R package {measures}
    Backtrace:
        ▆
     1. └─mlexperiments::metric("prauc") at test-glmnet_binary.R:88:5
    ── Error ('test-glmnet_binary.R:134:5'): test nested cv, grid - glmnet, errors ──
    Error in `mlexperiments::metric("prauc")`: `name` is not a function exported from R package {measures}
    Backtrace:
        ▆
     1. └─mlexperiments::metric("prauc") at test-glmnet_binary.R:134:5
    ── Error ('test-glmnet_multiclass.R:89:5'): test nested cv, bayesian, multiclass - glmnet ──
    Error in `mlexperiments::metric("bacc")`: `name` is not a function exported from R package {measures}
    Backtrace:
        ▆
     1. └─mlexperiments::metric("bacc") at test-glmnet_multiclass.R:89:5
    ── Error ('test-lightgbm_multiclass.R:101:5'): test nested cv, bayesian, multiclass - lightgbm ──
    Error in `mlexperiments::metric("bacc")`: `name` is not a function exported from R package {measures}
    Backtrace:
        ▆
     1. └─mlexperiments::metric("bacc") at test-lightgbm_multiclass.R:101:5
    ── Error ('test-ranger_binary.R:100:5'): test nested cv, bayesian, binary - ranger ──
    Error in `(function (FUN, bounds, saveFile = NULL, initGrid, initPoints = 4, iters.n = 3, iters.k = 1,
      otherHalting = list(timeLimit = Inf, minUtility = 0), acq = "ucb", kappa = 2.576, eps = 0, parallel = FALSE,
      gsPoints = pmax(100, length(bounds)^3), convThresh = 1e+08, acqThresh = 1, errorHandling = "stop",
      plotProgress = FALSE, verbose = 1, ...) { startT <- Sys.time() optObj <- list() class(optObj) <- "bayesOpt"
      optObj$FUN <- FUN optObj$bounds <- bounds optObj$iters <- 0 optObj$initPars <- list() optObj$optPars <- list()
      optObj$GauProList <- list() optObj <- changeSaveFile(optObj, saveFile) checkParameters(bounds, iters.n, iters.k,
      otherHalting, acq, acqThresh, errorHandling, plotProgress, parallel, verbose) boundsDT <- boundsToDT(bounds)
      otherHalting <- formatOtherHalting(otherHalting) if (missing(initGrid) + missing(initPoints) != 1)
      stop("Please provide 1 of initGrid or initPoints, but not both.") if (!missing(initGrid)) { setDT(initGrid)
      inBounds <- checkBounds(initGrid, bounds) inBounds <- as.logical(apply(inBounds, 1, prod)) if (any(!inBounds))
      stop("initGrid not within bounds.") optObj$initPars$initialSample <- "User Provided Grid"
      initPoints <- nrow(initGrid) } else { initGrid <- randParams(boundsDT, initPoints)
      optObj$initPars$initialSample <- "Latin Hypercube Sampling" } optObj$initPars$initGrid <- initGrid
      if (nrow(initGrid) <= 2) stop("Cannot initialize with less than 3 samples.")
      optObj$initPars$initPoints <- nrow(initGrid) if (initPoints <= length(bounds))
      stop("initPoints must be greater than the number of FUN inputs.") sinkFile <- file()
      on.exit({ while (sink.number() > 0) sink() close(sinkFile) }) `%op%` <- ParMethod(parallel)
      if (parallel) Workers <- getDoParWorkers() else Workers <- 1 if (verbose > 0)
      cat("\nRunning initial scoring function", nrow(initGrid), "times in", Workers, "thread(s)...")
      sink(file = sinkFile) tm <- system.time(scoreSummary <- foreach(iter = 1:nrow(initGrid),
      .options.multicore = list(preschedule = FALSE), .combine = list, .multicombine = TRUE, .inorder = FALSE,
      .errorhandling = "pass", .verbose = FALSE) %op% { Params <- initGrid[get("iter"), ]
      Elapsed <- system.time(Result <- tryCatch({ do.call(what = FUN, args = as.list(Params)) },
      error = function(e) e)) if (any(class(Result) %in% c("simpleError", "error", "condition"))) return(Result)
      if (!inherits(x = Result, what = "list")) stop("Object returned from FUN was not a list.")
      resLengths <- lengths(Result) if (!any(names(Result) == "Score"))
      stop("FUN must return list with element 'Score' at a minimum.") if (!is.numeric(Result$Score))
      stop("Score returned from FUN was not numeric.") if (any(resLengths != 1)) {
      badReturns <- names(Result)[which(resLengths != 1)] stop("FUN returned these elements with length > 1: ",
      paste(badReturns, collapse = ",")) } data.table(Params, Elapsed = Elapsed[[3]], as.data.table(Result)) })[[3]]
      while (sink.number() > 0) sink() if (verbose > 0) cat(" ", tm, "seconds\n")
      se <- which(sapply(scoreSummary, function(cl) any(class(cl) %in% c("simpleError", "error", "condition"))))
      if (length(se) > 0) { print(data.table(initGrid[se, ], errorMessage = sapply(scoreSummary[se],
      function(x) x$message))) stop("Errors encountered in initialization are listed above.") } else {
      scoreSummary <- rbindlist(scoreSummary) } scoreSummary[, `:=`(("gpUtility"), rep(as.numeric(NA),
      nrow(scoreSummary)))] scoreSummary[, `:=`(("acqOptimum"), rep(FALSE, nrow(scoreSummary)))]
      scoreSummary[, `:=`(("Epoch"), rep(0, nrow(scoreSummary)))] scoreSummary[, `:=`(("Iteration"),
      1:nrow(scoreSummary))] scoreSummary[, `:=`(("inBounds"), rep(TRUE, nrow(scoreSummary)))]
      scoreSummary[, `:=`(("errorMessage"), rep(NA, nrow(scoreSummary)))] extraRet <- setdiff(names(scoreSummary),
      c("Epoch", "Iteration", boundsDT$N, "inBounds", "Elapsed", "Score", "gpUtility", "acqOptimum"))
      setcolorder(scoreSummary, c("Epoch", "Iteration", boundsDT$N, "gpUtility", "acqOptimum", "inBounds",
      "Elapsed", "Score", extraRet)) if (any(scoreSummary$Elapsed < 1) & acq == "eips") {
      cat("\n FUN elapsed time is too low to be precise. Switching acq to 'ei'.\n") acq <- "ei" }
      optObj$optPars$acq <- acq optObj$optPars$kappa <- kappa optObj$optPars$eps <- eps
      optObj$optPars$parallel <- parallel optObj$optPars$gsPoints <- gsPoints optObj$optPars$convThresh <- convThresh
      optObj$optPars$acqThresh <- acqThresh optObj$scoreSummary <- scoreSummary optObj$GauProList$gpUpToDate <- FALSE
      optObj$iters <- nrow(scoreSummary) optObj$stopStatus <- "OK" optObj$elapsedTime <- as.numeric(difftime(Sys.time(),
      startT, units = "secs")) saveSoFar(optObj, 0) optObj <- addIterations(optObj, otherHalting = otherHalting,
      iters.n = iters.n, iters.k = iters.k, parallel = parallel, plotProgress = plotProgress,
      errorHandling = errorHandling, saveFile = saveFile, verbose = verbose, ...) return(optObj) })(FUN = function (...) {
      kwargs <- list(...) args <- .method_params_refactor(kwargs, method_helper) set.seed(self$seed)
      res <- do.call(private$fun_bayesian_scoring_function, args) if (isFALSE(self$metric_optimization_higher_better)) {
      res$Score <- as.numeric(I(res$Score * -1L)) } return(res) }, bounds = list(num.trees = c(100L, 1000L),
      mtry = c(2L, 9L), min.node.size = c(1L, 20L), max.depth = c(1L, 40L), sample.fraction = c(0.3, 1)),
      initGrid = structure(list(num.trees = c(1000, 1000, 1000, 1000, 1000, 1000, 500, 500, 1000, 500),
      mtry = c(6, 2, 6, 4, 4, 2, 2, 4, 6, 6), min.node.size = c(1, 5, 9, 1, 5, 9, 9, 1, 5, 1),
      max.depth = c(9, 1, 9, 9, 9, 9, 1, 9, 9, 1), sample.fraction = c(0.5, 0.8, 0.8, 0.5, 0.8, 0.8, 0.8, 0.8, 0.8, 0.5)),
      out.attrs = list(dim = c(num.trees = 2L, mtry = 3L, min.node.size = 3L, max.depth = 3L, sample.fraction = 2L),
      dimnames = list(num.trees = c("num.trees= 500", "num.trees=1000"), mtry = c("mtry=2", "mtry=4", "mtry=6"),
      min.node.size = c("min.node.size=1", "min.node.size=5", "min.node.size=9"), max.depth = c("max.depth=1",
      "max.depth=5", "max.depth=9"), sample.fraction = c("sample.fraction=0.5", "sample.fraction=0.8"))),
      row.names = c(NA, -10L), class = c("data.table", "data.frame"), .internal.selfref = <pointer: 0x834b550>),
      iters.n = 2L, iters.k = 2L, otherHalting = list(timeLimit = Inf, minUtility = 0), acq = "ucb", kappa = 3.5,
      eps = 0, parallel = TRUE, gsPoints = 125, convThresh = 1e+08, acqThresh = 1, errorHandling = "stop",
      plotProgress = FALSE, verbose = 1)`: Errors encountered in initialization are listed above.
    Backtrace:
        ▆
      1. └─ranger_optimizer$execute() at test-ranger_binary.R:100:5
      2. └─mlexperiments:::.run_cv(self = self, private = private)
      3. └─mlexperiments:::.fold_looper(self, private)
      4. ├─base::do.call(private$cv_run_model, run_args)
      5. └─mlexperiments (local) `<fn>`(train_index = `<int>`, fold_train = `<named list>`, fold_test = `<named list>`)
      6. ├─base::do.call(.cv_run_nested_model, args)
      7. └─mlexperiments (local) `<fn>`(...)
      8. └─hparam_tuner$execute(k = self$k_tuning)
      9. └─mlexperiments:::.run_tuning(self = self, private = private, optimizer = optimizer)
     10. └─mlexperiments:::.run_optimizer(...)
     11. └─optimizer$execute(x = private$x, y = private$y, method_helper = private$method_helper)
     12. ├─base::do.call(...)
     13. └─mlexperiments (local) `<fn>`(...)
     14. ├─base::do.call(ParBayesianOptimization::bayesOpt, args)
     15. └─ParBayesianOptimization (local) `<fn>`(...)
    ── Error ('test-ranger_multiclass.R:93:5'): test nested cv, bayesian, regression - ranger ──
    Error in `mlexperiments::metric("bacc")`: `name` is not a function exported from R package {measures}
    Backtrace:
        ▆
     1. └─mlexperiments::metric("bacc") at test-ranger_multiclass.R:93:5
    ── Error ('test-xgboost_multiclass.R:86:5'): test nested cv, bayesian, multi:softprob - xgboost, with weights ──
    Error in `mlexperiments::metric("bacc")`: `name` is not a function exported from R package {measures}
    Backtrace:
        ▆
     1. └─mlexperiments::metric("bacc") at test-xgboost_multiclass.R:86:5

    [ FAIL 7 | WARN 3 | SKIP 1 | PASS 30 ]
    Error: Test failures
    Execution halted
Flavor: r-devel-linux-x86_64-fedora-gcc
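This flavor fails differently from the other two: "measures" appears to be installed, but the requested metric function is not exported from it, so a plain skip_if_not_installed() would not be sufficient. A sketch of a stricter guard for the affected tests (the helper name and its placement are hypothetical):

    # Skip unless `measures` is installed *and* exports the function that
    # mlexperiments::metric() will look up by name.
    skip_if_metric_unavailable <- function(name) {
      testthat::skip_if_not_installed("measures")
      if (!name %in% getNamespaceExports("measures")) {
        testthat::skip(sprintf("'%s' is not exported from {measures}", name))
      }
    }

    test_that("test nested cv, bayesian, multiclass - glmnet", {
      skip_if_metric_unavailable("bacc")
      # ... existing test body using mlexperiments::metric("bacc") ...
    })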