- using R Under development (unstable) (2025-12-31 r89265)
- using platform: x86_64-pc-linux-gnu
- R was compiled by
gcc-15 (Debian 15.2.0-12) 15.2.0
GNU Fortran (Debian 15.2.0-12) 15.2.0
- running under: Debian GNU/Linux forky/sid
- using session charset: UTF-8
- checking for file ‘deepregression/DESCRIPTION’ ... OK
- this is package ‘deepregression’ version ‘2.3.2’
- package encoding: UTF-8
- checking CRAN incoming feasibility ... [1s/2s] OK
- checking package namespace information ... OK
- checking package dependencies ... OK
- checking if this is a source package ... OK
- checking if there is a namespace ... OK
- checking for executable files ... OK
- checking for hidden files and directories ... OK
- checking for portable file names ... OK
- checking for sufficient/correct file permissions ... OK
- checking whether package ‘deepregression’ can be installed ... OK
See the install log for details.
- checking package directory ... OK
- checking for future file timestamps ... OK
- checking DESCRIPTION meta-information ... OK
- checking top-level files ... OK
- checking for left-over files ... OK
- checking index information ... OK
- checking package subdirectories ... OK
- checking code files for non-ASCII characters ... OK
- checking R files for syntax errors ... OK
- checking whether the package can be loaded ... [4s/5s] OK
- checking whether the package can be loaded with stated dependencies ... [3s/4s] OK
- checking whether the package can be unloaded cleanly ... [3s/4s] OK
- checking whether the namespace can be loaded with stated dependencies ... [3s/4s] OK
- checking whether the namespace can be unloaded cleanly ... [4s/4s] OK
- checking loading without being on the library search path ... [3s/4s] OK
- checking whether startup messages can be suppressed ... [4s/4s] OK
- checking use of S3 registration ... OK
- checking dependencies in R code ... OK
- checking S3 generic/method consistency ... OK
- checking replacement functions ... OK
- checking foreign function calls ... OK
- checking R code for possible problems ... [24s/33s] OK
- checking Rd files ... [1s/1s] OK
- checking Rd metadata ... OK
- checking Rd line widths ... OK
- checking Rd cross-references ... OK
- checking for missing documentation entries ... OK
- checking for code/documentation mismatches ... OK
- checking Rd \usage sections ... OK
- checking Rd contents ... OK
- checking for unstated dependencies in examples ... OK
- checking examples ... [3s/3s] OK
- checking for unstated dependencies in ‘tests’ ... OK
- checking tests ... [358s/514s] ERROR
Running ‘testthat.R’ [358s/513s]
Running the tests in ‘tests/testthat.R’ failed.
Complete output:
> library(testthat)
> library(deepregression)
Loading required package: tensorflow
Loading required package: tfprobability
Loading required package: keras
The keras package is deprecated. Use the keras3 package instead.
>
> if (reticulate::py_module_available("tensorflow") &
+ reticulate::py_module_available("keras") &
+ .Platform$OS.type != "windows"){
+ test_check("deepregression")
+ }
Downloading pygments (1.2MiB)
Downloading keras (1.4MiB)
Downloading tensorflow-probability (6.7MiB)
Downloading tensorboard (5.2MiB)
Downloading tf-keras (1.6MiB)
Downloading numpy (17.1MiB)
Downloading ml-dtypes (4.8MiB)
Downloading h5py (4.9MiB)
Downloading grpcio (6.3MiB)
Downloading tensorflow (615.1MiB)
Installed 43 packages in 496ms
2026-01-01 15:42:48.257847: I tensorflow/core/util/port.cc:153] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2026-01-01 15:42:48.259513: I external/local_xla/xla/tsl/cuda/cudart_stub.cc:32] Could not find cuda drivers on your machine, GPU will not be used.
2026-01-01 15:42:48.265968: I external/local_xla/xla/tsl/cuda/cudart_stub.cc:32] Could not find cuda drivers on your machine, GPU will not be used.
2026-01-01 15:42:48.280147: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:467] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
E0000 00:00:1767278568.306446 157091 cuda_dnn.cc:8579] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
E0000 00:00:1767278568.315039 157091 cuda_blas.cc:1407] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
W0000 00:00:1767278568.335017 157091 computation_placer.cc:177] computation placer already registered. Please check linkage and avoid linking the same target more than once.
2026-01-01 15:42:48.339558: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 AVX512F AVX512_VNNI AVX512_BF16 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
/tmp/check-CRAN-regular-hornik/cache/R/reticulate/uv/cache/archive-v0/m9gGv5_W1a-Izt_MzIg74/lib/python3.12/site-packages/keras/src/export/tf2onnx_lib.py:8: FutureWarning: In the future `np.object` will be defined as the corresponding NumPy scalar.
if not hasattr(np, "object"):
Saving _problems/test_customtraining_torch-6.R
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz
11490434/11490434 [==============================] - 1s 0us/step
Saving _problems/test_data_handler_torch-78.R
2026-01-01 15:43:05.911699: E external/local_xla/xla/stream_executor/cuda/cuda_platform.cc:51] failed call to cuInit: INTERNAL: CUDA error: Failed call to cuInit: UNKNOWN ERROR (303)
Epoch 1/2
15/15 [==============================] - 3s 47ms/step - loss: 2.1993 - val_loss: 2.1406
Epoch 2/2
15/15 [==============================] - 0s 17ms/step - loss: 2.0527 - val_loss: 1.9999
Epoch 1/2
15/15 [==============================] - 1s 32ms/step - loss: 2.6533 - val_loss: 2.6393
Epoch 2/2
15/15 [==============================] - 0s 14ms/step - loss: 2.6365 - val_loss: 2.6227
Epoch 1/2
15/15 [==============================] - 1s 33ms/step - loss: 2.8998 - val_loss: 1.6201
Epoch 2/2
15/15 [==============================] - 0s 10ms/step - loss: 2.6616 - val_loss: 1.5446
Epoch 1/2
15/15 [==============================] - 2s 25ms/step - loss: 4.0062 - val_loss: 3.4240
Epoch 2/2
15/15 [==============================] - 0s 10ms/step - loss: 3.9103 - val_loss: 3.3606
Epoch 1/3
3/3 [==============================] - 1s 157ms/step - loss: 8.7055 - val_loss: 7.2396
Epoch 2/3
3/3 [==============================] - 0s 49ms/step - loss: 8.6326 - val_loss: 7.1833
Epoch 3/3
3/3 [==============================] - 0s 69ms/step - loss: 8.5566 - val_loss: 7.1282
Epoch 1/2
15/15 [==============================] - 2s 36ms/step - loss: 2.6282 - val_loss: 2.6186
Epoch 2/2
15/15 [==============================] - 0s 13ms/step - loss: 2.6124 - val_loss: 2.6029
Epoch 1/2
15/15 [==============================] - 1s 27ms/step - loss: 1248.0040 - val_loss: 1370.1050
Epoch 2/2
15/15 [==============================] - 0s 12ms/step - loss: 960.7450 - val_loss: 1055.7158
Epoch 1/2
15/15 [==============================] - 1s 26ms/step - loss: 3026.8003 - val_loss: 321.5525
Epoch 2/2
15/15 [==============================] - 0s 14ms/step - loss: 2679.4148 - val_loss: 292.7626
Epoch 1/2
15/15 [==============================] - 1s 29ms/step - loss: 2.2516 - val_loss: 2.2750
Epoch 2/2
15/15 [==============================] - 0s 15ms/step - loss: 2.2132 - val_loss: 2.2328
Saving _problems/test_deepregression_torch-10.R
Saving _problems/test_deepregression_torch-117.R
Saving _problems/test_deepregression_torch-158.R
Saving _problems/test_deepregression_torch-190.R
Saving _problems/test_deepregression_torch-229.R
Fitting member 1 ...Epoch 1/10
32/32 [==============================] - 1s 5ms/step - loss: 2.3303
Epoch 2/10
32/32 [==============================] - 0s 5ms/step - loss: 2.2977
Epoch 3/10
32/32 [==============================] - 0s 4ms/step - loss: 2.2650
Epoch 4/10
32/32 [==============================] - 0s 4ms/step - loss: 2.2325
Epoch 5/10
32/32 [==============================] - 0s 4ms/step - loss: 2.2002
Epoch 6/10
32/32 [==============================] - 0s 4ms/step - loss: 2.1681
Epoch 7/10
32/32 [==============================] - 0s 4ms/step - loss: 2.1359
Epoch 8/10
32/32 [==============================] - 0s 3ms/step - loss: 2.1036
Epoch 9/10
32/32 [==============================] - 0s 4ms/step - loss: 2.0717
Epoch 10/10
32/32 [==============================] - 0s 4ms/step - loss: 2.0400
Done in 2.178689 secs
Fitting member 2 ...Epoch 1/10
32/32 [==============================] - 0s 4ms/step - loss: 2.3312
Epoch 2/10
32/32 [==============================] - 0s 5ms/step - loss: 2.2785
Epoch 3/10
32/32 [==============================] - 0s 3ms/step - loss: 2.2334
Epoch 4/10
32/32 [==============================] - 0s 4ms/step - loss: 2.1937
Epoch 5/10
32/32 [==============================] - 0s 4ms/step - loss: 2.1597
Epoch 6/10
32/32 [==============================] - 0s 3ms/step - loss: 2.1276
Epoch 7/10
32/32 [==============================] - 0s 3ms/step - loss: 2.0961
Epoch 8/10
32/32 [==============================] - 0s 3ms/step - loss: 2.0644
Epoch 9/10
32/32 [==============================] - 0s 3ms/step - loss: 2.0336
Epoch 10/10
32/32 [==============================] - 0s 4ms/step - loss: 2.0026
Done in 1.370645 secs
Fitting member 3 ...Epoch 1/10
32/32 [==============================] - 0s 3ms/step - loss: 39.2828
Epoch 2/10
32/32 [==============================] - 0s 4ms/step - loss: 27.2884
Epoch 3/10
32/32 [==============================] - 0s 4ms/step - loss: 21.7021
Epoch 4/10
32/32 [==============================] - 0s 3ms/step - loss: 18.2036
Epoch 5/10
32/32 [==============================] - 0s 3ms/step - loss: 15.8282
Epoch 6/10
32/32 [==============================] - 0s 4ms/step - loss: 14.0666
Epoch 7/10
32/32 [==============================] - 0s 4ms/step - loss: 12.6950
Epoch 8/10
32/32 [==============================] - 0s 4ms/step - loss: 11.5684
Epoch 9/10
32/32 [==============================] - 0s 5ms/step - loss: 10.6440
Epoch 10/10
32/32 [==============================] - 0s 5ms/step - loss: 9.8495
Done in 1.383712 secs
Fitting member 4 ...Epoch 1/10
32/32 [==============================] - 0s 4ms/step - loss: 2.9588
Epoch 2/10
32/32 [==============================] - 0s 5ms/step - loss: 2.9011
Epoch 3/10
32/32 [==============================] - 0s 4ms/step - loss: 2.8534
Epoch 4/10
32/32 [==============================] - 0s 4ms/step - loss: 2.8062
Epoch 5/10
32/32 [==============================] - 0s 4ms/step - loss: 2.7623
Epoch 6/10
32/32 [==============================] - 0s 4ms/step - loss: 2.7193
Epoch 7/10
32/32 [==============================] - 0s 4ms/step - loss: 2.6774
Epoch 8/10
32/32 [==============================] - 0s 4ms/step - loss: 2.6383
Epoch 9/10
32/32 [==============================] - 0s 4ms/step - loss: 2.6007
Epoch 10/10
32/32 [==============================] - 0s 4ms/step - loss: 2.5630
Done in 1.466803 secs
Fitting member 5 ...Epoch 1/10
32/32 [==============================] - 0s 4ms/step - loss: 139.0890
Epoch 2/10
32/32 [==============================] - 0s 5ms/step - loss: 95.6168
Epoch 3/10
32/32 [==============================] - 0s 4ms/step - loss: 74.6476
Epoch 4/10
32/32 [==============================] - 0s 4ms/step - loss: 61.8040
Epoch 5/10
32/32 [==============================] - 0s 4ms/step - loss: 53.1899
Epoch 6/10
32/32 [==============================] - 0s 4ms/step - loss: 46.9254
Epoch 7/10
32/32 [==============================] - 0s 4ms/step - loss: 42.0744
Epoch 8/10
32/32 [==============================] - 0s 5ms/step - loss: 38.1152
Epoch 9/10
32/32 [==============================] - 0s 4ms/step - loss: 34.8808
Epoch 10/10
32/32 [==============================] - 0s 4ms/step - loss: 32.1088
Done in 1.557684 secs
Epoch 1/2
3/3 [==============================] - 1s 142ms/step - loss: 2.3038 - val_loss: 2.2154
Epoch 2/2
3/3 [==============================] - 0s 59ms/step - loss: 2.3004 - val_loss: 2.2128
Epoch 1/2
3/3 [==============================] - 0s 50ms/step - loss: 47.0024 - val_loss: 27.1291
Epoch 2/2
3/3 [==============================] - 0s 37ms/step - loss: 46.6172 - val_loss: 26.8568
Saving _problems/test_ensemble_torch-17.R
Saving _problems/test_ensemble_torch-63.R
Fitting normal
Fitting bernoulli
Fitting bernoulli_prob
WARNING:tensorflow:5 out of the last 13 calls to <function Model.make_test_function.<locals>.test_function at 0x7fa22c7d0d60> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has reduce_retracing=True option that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details.
Fitting beta
WARNING:tensorflow:5 out of the last 11 calls to <function Model.make_test_function.<locals>.test_function at 0x7fa22c58c400> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has reduce_retracing=True option that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details.
Fitting betar
Fitting chi2
Fitting chi
Fitting exponential
Fitting gamma
Fitting gammar
Fitting gumbel
Fitting half_normal
Fitting horseshoe
Fitting inverse_gaussian
Fitting laplace
Fitting log_normal
Fitting logistic
Fitting negbinom
Fitting negbinom
Fitting pareto_ls
Fitting poisson
Fitting poisson_lograte
Saving _problems/test_families_torch-82.R
Saving _problems/test_layers_torch-6.R
Saving _problems/test_methods_torch-23.R
Epoch 1/2
29/29 [==============================] - 1s 16ms/step - loss: 10.6607 - val_loss: 7.6350
Epoch 2/2
29/29 [==============================] - 0s 11ms/step - loss: 9.5296 - val_loss: 6.8647
Epoch 1/10
29/29 [==============================] - 4s 38ms/step - loss: 7.8712 - val_loss: 8.8751
Epoch 2/10
29/29 [==============================] - 1s 21ms/step - loss: 7.4694 - val_loss: 8.4332
Epoch 3/10
29/29 [==============================] - 1s 21ms/step - loss: 7.1170 - val_loss: 8.0236
Epoch 4/10
29/29 [==============================] - 1s 19ms/step - loss: 6.7865 - val_loss: 7.6687
Epoch 5/10
29/29 [==============================] - 0s 17ms/step - loss: 6.4899 - val_loss: 7.3231
Epoch 6/10
29/29 [==============================] - 1s 21ms/step - loss: 6.2000 - val_loss: 6.9819
Epoch 7/10
29/29 [==============================] - 1s 19ms/step - loss: 5.9123 - val_loss: 6.6514
Epoch 8/10
29/29 [==============================] - 1s 19ms/step - loss: 5.6402 - val_loss: 6.3442
Epoch 9/10
29/29 [==============================] - 1s 18ms/step - loss: 5.3794 - val_loss: 6.0578
Epoch 10/10
1/29 [>.............................] - ETA: 0s - loss: 4.1903
5/29 [====>.........................] - ETA: 0s - loss: 5.5030
9/29 [========>.....................] - ETA: 0s - loss: 5.3980
13/29 [============>.................] - ETA: 0s - loss: 4.9643
17/29 [================>.............] - ETA: 0s - loss: 5.1210
20/29 [===================>..........] - ETA: 0s - loss: 5.0690
24/29 [=======================>......] - ETA: 0s - loss: 5.2108
27/29 [==========================>...] - ETA: 0s - loss: 5.1396
29/29 [==============================] - 1s 21ms/step - loss: 5.1438 - val_loss: 5.7953
Epoch 1/10
29/29 [==============================] - 4s 35ms/step - loss: 1.4848 - val_loss: 1.4755
Epoch 2/10
29/29 [==============================] - 1s 18ms/step - loss: 1.4679 - val_loss: 1.4596
Epoch 3/10
29/29 [==============================] - 1s 22ms/step - loss: 1.4531 - val_loss: 1.4458
Epoch 4/10
29/29 [==============================] - 1s 23ms/step - loss: 1.4401 - val_loss: 1.4336
Epoch 5/10
29/29 [==============================] - 1s 18ms/step - loss: 1.4287 - val_loss: 1.4230
Epoch 6/10
29/29 [==============================] - 0s 16ms/step - loss: 1.4187 - val_loss: 1.4137
Epoch 7/10
29/29 [==============================] - 0s 17ms/step - loss: 1.4101 - val_loss: 1.4059
Epoch 8/10
29/29 [==============================] - 0s 16ms/step - loss: 1.4030 - val_loss: 1.3995
Epoch 9/10
29/29 [==============================] - 1s 18ms/step - loss: 1.3973 - val_loss: 1.3948
Epoch 10/10
29/29 [==============================] - 1s 18ms/step - loss: 1.3933 - val_loss: 1.3916
Epoch 1/10
29/29 [==============================] - 4s 39ms/step - loss: 1.1842 - val_loss: 2.1275
Epoch 2/10
29/29 [==============================] - 0s 17ms/step - loss: 1.1645 - val_loss: 2.0574
Epoch 3/10
29/29 [==============================] - 1s 19ms/step - loss: 1.1441 - val_loss: 1.9952
Epoch 4/10
29/29 [==============================] - 1s 20ms/step - loss: 1.1216 - val_loss: 1.9393
Epoch 5/10
29/29 [==============================] - 1s 19ms/step - loss: 1.0955 - val_loss: 1.8846
Epoch 6/10
29/29 [==============================] - 1s 18ms/step - loss: 1.0666 - val_loss: 1.8390
Epoch 7/10
29/29 [==============================] - 0s 17ms/step - loss: 1.0385 - val_loss: 1.7931
Epoch 8/10
29/29 [==============================] - 1s 19ms/step - loss: 1.0098 - val_loss: 1.7559
Epoch 9/10
29/29 [==============================] - 1s 19ms/step - loss: 0.9810 - val_loss: 1.7130
Epoch 10/10
29/29 [==============================] - 1s 19ms/step - loss: 0.9513 - val_loss: 1.6782
2026-01-01 15:48:05.046515: E tensorflow/core/util/util.cc:131] oneDNN supports DT_INT64 only on platforms with AVX-512. Falling back to the default Eigen-based implementation if present.
Model: "model_43"
____________________________________________________________________________________________________
Layer (type)                                           Output Shape            Param #  Connected to                                                     Trainable
====================================================================================================
input_node_x1_x2_n_trees_2_n_layers_3_tree_depth_5__1  [(None, 2)]             0        []                                                               Y
  (InputLayer)
input__Intercept__1 (InputLayer)                       [(None, 1)]             0        []                                                               Y
node_2 (NODE)                                          (None, 3)               1754     ['input_node_x1_x2_n_trees_2_n_layers_3_tree_depth_5__1[0][0]']  Y
1_1 (Dense)                                            (None, 3)               3        ['input__Intercept__1[0][0]']                                    Y
add_77 (Add)                                           (None, 3)               0        ['node_2[0][0]', '1_1[0][0]']                                    Y
distribution_lambda_43 (DistributionLambda)            ((None, 3), (None, 3))  0        ['add_77[0][0]']                                                 Y
====================================================================================================
Total params: 1757 (6.86 KB)
Trainable params: 793 (3.10 KB)
Non-trainable params: 964 (3.77 KB)
____________________________________________________________________________________________________
Model formulas:
---------------
loc :
~node(x1, x2, n_trees = 2, n_layers = 3, tree_depth = 5)
<environment: 0x555799c60130>
Fitting model with 1 orthogonalization(s) ...
Fitting model with 2 orthogonalization(s) ...
Fitting model with 3 orthogonalization(s) ...
Fitting model with 4 orthogonalization(s) ...
Fitting model with 5 orthogonalization(s) ...
Saving _problems/test_reproducibility_torch-30.R
Saving _problems/test_subnetwork_init_torch-20.R
Fitting Fold 1 ...
Done in 3.605067 secs
Fitting Fold 2 ...
Done in 0.6917861 secs
Epoch 1/2
2/2 [==============================] - 0s 16ms/step - loss: 22.4672
Epoch 2/2
2/2 [==============================] - 0s 19ms/step - loss: 20.5671
Fitting Fold 1 ...
Done in 1.357061 secs
Fitting Fold 2 ...
Done in 0.4739804 secs
Epoch 1/2
2/2 [==============================] - 0s 15ms/step - loss: 22.4672
Epoch 2/2
2/2 [==============================] - 0s 14ms/step - loss: 20.5671
[ FAIL 14 | WARN 0 | SKIP 1 | PASS 680 ]
══ Skipped tests (1) ═══════════════════════════════════════════════════════════
• empty test (1):
══ Failed tests ════════════════════════════════════════════════════════════════
── Error ('test_customtraining_torch.R:6:3'): Use multiple optimizers torch ────
<std::runtime_error/C++Error/error/condition>
Error in `(function (size, options, memory_format) { .Call(`_torch_cpp_torch_namespace_empty_size_IntArrayRef`, size, options, memory_format) })(size = list(50, 1), options = list(dtype = NULL, layout = NULL, device = NULL, requires_grad = FALSE), memory_format = NULL)`: Lantern is not loaded. Please use `install_torch()` to install additional dependencies.
Backtrace:
▆
1. └─torch::nn_linear(1, 50) at test_customtraining_torch.R:6:3
2. └─Module$new(...)
3. └─torch (local) initialize(...)
4. ├─torch::nn_parameter(torch_empty(out_features, in_features))
5. │ └─torch:::is_torch_tensor(x)
6. └─torch::torch_empty(out_features, in_features)
7. ├─base::do.call(.torch_empty, args)
8. └─torch (local) `<fn>`(options = `<named list>`, size = `<list>`)
9. └─torch:::call_c_function(...)
10. └─torch:::do_call(f, args)
11. ├─base::do.call(fun, args)
12. └─torch (local) `<fn>`(size = `<list>`, options = `<named list>`, memory_format = NULL)
── Error ('test_data_handler_torch.R:75:3'): properties of dataset torch ───────
<std::runtime_error/C++Error/error/condition>
Error in `(function (size, options, memory_format) { .Call(`_torch_cpp_torch_namespace_empty_size_IntArrayRef`, size, options, memory_format) })(size = list(1L, 1L), options = list(dtype = NULL, layout = NULL, device = NULL, requires_grad = FALSE), memory_format = NULL)`: Lantern is not loaded. Please use `install_torch()` to install additional dependencies.
Backtrace:
▆
1. └─deepregression::deepregression(...) at test_data_handler_torch.R:75:3
2. └─base::lapply(...)
3. └─deepregression (local) FUN(X[[i]], ...)
4. └─subnetwork_builder[[i]](...)
5. └─base::lapply(...)
6. └─deepregression (local) FUN(X[[i]], ...)
7. └─pp_lay[[i]]$layer()
8. ├─base::do.call(layer_class, layer_args)
9. └─deepregression (local) `<fn>`(...)
10. └─torch (local) layer_module(...)
11. └─Module$new(...)
12. └─deepregression (local) initialize(...)
13. ├─torch::nn_parameter(torch_empty(out_features, in_features))
14. │ └─torch:::is_torch_tensor(x)
15. └─torch::torch_empty(out_features, in_features)
16. ├─base::do.call(.torch_empty, args)
17. └─torch (local) `<fn>`(options = `<named list>`, size = `<list>`)
18. └─torch:::call_c_function(...)
19. └─torch:::do_call(f, args)
20. ├─base::do.call(fun, args)
21. └─torch (local) `<fn>`(size = `<list>`, options = `<named list>`, memory_format = NULL)
── Error ('test_deepregression_torch.R:6:5'): Simple additive model ────────────
<std::runtime_error/C++Error/error/condition>
Error in `(function (size, options, memory_format) { .Call(`_torch_cpp_torch_namespace_empty_size_IntArrayRef`, size, options, memory_format) })(size = list(2, 1), options = list(dtype = NULL, layout = NULL, device = NULL, requires_grad = FALSE), memory_format = NULL)`: Lantern is not loaded. Please use `install_torch()` to install additional dependencies.
Backtrace:
▆
1. └─deepregression::deepregression(...) at test_deepregression_torch.R:21:5
2. └─base::lapply(...)
3. └─deepregression (local) FUN(X[[i]], ...)
4. └─subnetwork_builder[[i]](...)
5. └─base::lapply(...)
6. └─deepregression (local) FUN(X[[i]], ...)
7. └─pp_lay[[i]]$layer()
8. ├─torch::nn_sequential(...) at test_deepregression_torch.R:6:5
9. │ └─Module$new(...)
10. │ └─torch (local) initialize(...)
11. │ └─rlang::list2(...)
12. └─torch::nn_linear(in_features = i, out_features = 2, bias = F)
13. └─Module$new(...)
14. └─torch (local) initialize(...)
15. ├─torch::nn_parameter(torch_empty(out_features, in_features))
16. │ └─torch:::is_torch_tensor(x)
17. └─torch::torch_empty(out_features, in_features)
18. ├─base::do.call(.torch_empty, args)
19. └─torch (local) `<fn>`(options = `<named list>`, size = `<list>`)
20. └─torch:::call_c_function(...)
21. └─torch:::do_call(f, args)
22. ├─base::do.call(fun, args)
23. └─torch (local) `<fn>`(size = `<list>`, options = `<named list>`, memory_format = NULL)
── Error ('test_deepregression_torch.R:110:3'): Generalized additive model ─────
<std::runtime_error/C++Error/error/condition>
Error in `torch_tensor_cpp(data, dtype, device, requires_grad, pin_memory)`: Lantern is not loaded. Please use `install_torch()` to install additional dependencies.
Backtrace:
▆
1. └─deepregression::deepregression(...) at test_deepregression_torch.R:110:3
2. └─base::lapply(...)
3. └─deepregression (local) FUN(X[[i]], ...)
4. └─subnetwork_builder[[i]](...)
5. └─base::lapply(...)
6. └─deepregression (local) FUN(X[[i]], ...)
7. └─pp_lay[[i]]$layer()
8. ├─base::do.call(layer_class, layer_args)
9. └─deepregression (local) `<fn>`(...)
10. └─torch::torch_tensor(P)
11. └─Tensor$new(data, dtype, device, requires_grad, pin_memory)
12. └─methods$initialize(NULL, NULL, ...)
13. └─torch:::torch_tensor_cpp(...)
── Error ('test_deepregression_torch.R:151:3'): Deep generalized additive model with LSS ──
<std::runtime_error/C++Error/error/condition>
Error in `torch_tensor_cpp(data, dtype, device, requires_grad, pin_memory)`: Lantern is not loaded. Please use `install_torch()` to install additional dependencies.
Backtrace:
▆
1. └─deepregression::deepregression(...) at test_deepregression_torch.R:151:3
2. └─base::lapply(...)
3. └─deepregression (local) FUN(X[[i]], ...)
4. └─subnetwork_builder[[i]](...)
5. └─base::lapply(...)
6. └─deepregression (local) FUN(X[[i]], ...)
7. └─pp_lay[[i]]$layer()
8. ├─base::do.call(layer_class, layer_args)
9. └─deepregression (local) `<fn>`(...)
10. └─torch::torch_tensor(P)
11. └─Tensor$new(data, dtype, device, requires_grad, pin_memory)
12. └─methods$initialize(NULL, NULL, ...)
13. └─torch:::torch_tensor_cpp(...)
── Error ('test_deepregression_torch.R:181:3'): GAMs with shared weights ───────
<std::runtime_error/C++Error/error/condition>
Error in `torch_tensor_cpp(data, dtype, device, requires_grad, pin_memory)`: Lantern is not loaded. Please use `install_torch()` to install additional dependencies.
Backtrace:
▆
1. └─deepregression::deepregression(...) at test_deepregression_torch.R:181:3
2. └─base::lapply(...)
3. └─deepregression (local) FUN(X[[i]], ...)
4. └─subnetwork_builder[[i]](...)
5. ├─base::do.call(...)
6. └─deepregression (local) `<fn>`(...)
7. └─torch::torch_tensor(P)
8. └─Tensor$new(data, dtype, device, requires_grad, pin_memory)
9. └─methods$initialize(NULL, NULL, ...)
10. └─torch:::torch_tensor_cpp(...)
── Error ('test_deepregression_torch.R:220:3'): GAMs with fixed weights ────────
<std::runtime_error/C++Error/error/condition>
Error in `torch_tensor_cpp(data, dtype, device, requires_grad, pin_memory)`: Lantern is not loaded. Please use `install_torch()` to install additional dependencies.
Backtrace:
▆
1. └─deepregression::deepregression(...) at test_deepregression_torch.R:220:3
2. └─base::lapply(...)
3. └─deepregression (local) FUN(X[[i]], ...)
4. └─subnetwork_builder[[i]](...)
5. └─base::lapply(...)
6. └─deepregression (local) FUN(X[[i]], ...)
7. └─pp_lay[[i]]$layer()
8. ├─base::do.call(layer_class, layer_args)
9. └─deepregression (local) `<fn>`(...)
10. └─torch::torch_tensor(P)
11. └─Tensor$new(data, dtype, device, requires_grad, pin_memory)
12. └─methods$initialize(NULL, NULL, ...)
13. └─torch:::torch_tensor_cpp(...)
── Error ('test_ensemble_torch.R:13:3'): deep ensemble ─────────────────────────
<std::runtime_error/C++Error/error/condition>
Error in `torch_tensor_cpp(data, dtype, device, requires_grad, pin_memory)`: Lantern is not loaded. Please use `install_torch()` to install additional dependencies.
Backtrace:
▆
1. └─deepregression::deepregression(...) at test_ensemble_torch.R:13:3
2. └─base::lapply(...)
3. └─deepregression (local) FUN(X[[i]], ...)
4. └─subnetwork_builder[[i]](...)
5. └─base::lapply(...)
6. └─deepregression (local) FUN(X[[i]], ...)
7. └─pp_lay[[i]]$layer()
8. ├─base::do.call(layer_class, layer_args)
9. └─deepregression (local) `<fn>`(...)
10. └─torch::torch_tensor(P)
11. └─Tensor$new(data, dtype, device, requires_grad, pin_memory)
12. └─methods$initialize(NULL, NULL, ...)
13. └─torch:::torch_tensor_cpp(...)
── Error ('test_ensemble_torch.R:55:3'): reinitializing weights ────────────────
<std::runtime_error/C++Error/error/condition>
Error in `torch_tensor_cpp(data, dtype, device, requires_grad, pin_memory)`: Lantern is not loaded. Please use `install_torch()` to install additional dependencies.
Backtrace:
▆
1. └─deepregression::deepregression(...) at test_ensemble_torch.R:55:3
2. └─base::lapply(...)
3. └─deepregression (local) FUN(X[[i]], ...)
4. └─subnetwork_builder[[i]](...)
5. └─base::lapply(...)
6. └─deepregression (local) FUN(X[[i]], ...)
7. └─pp_lay[[i]]$layer()
8. ├─base::do.call(layer_class, layer_args)
9. └─deepregression (local) `<fn>`(...)
10. └─torch::torch_tensor(P)
11. └─Tensor$new(data, dtype, device, requires_grad, pin_memory)
12. └─methods$initialize(NULL, NULL, ...)
13. └─torch:::torch_tensor_cpp(...)
── Error ('test_families_torch.R:76:7'): torch families can be fitted ──────────
<std::runtime_error/C++Error/error/condition>
Error in `(function (size, options, memory_format) { .Call(`_torch_cpp_torch_namespace_empty_size_IntArrayRef`, size, options, memory_format) })(size = list(1L, 1L), options = list(dtype = NULL, layout = NULL, device = NULL, requires_grad = FALSE), memory_format = NULL)`: Lantern is not loaded. Please use `install_torch()` to install additional dependencies.
Backtrace:
▆
1. └─deepregression::deepregression(...) at test_families_torch.R:76:7
2. └─base::lapply(...)
3. └─deepregression (local) FUN(X[[i]], ...)
4. └─subnetwork_builder[[i]](...)
5. └─base::lapply(...)
6. └─deepregression (local) FUN(X[[i]], ...)
7. └─pp_lay[[i]]$layer()
8. ├─base::do.call(layer_class, layer_args)
9. └─deepregression (local) `<fn>`(...)
10. └─torch (local) layer_module(...)
11. └─Module$new(...)
12. └─deepregression (local) initialize(...)
13. ├─torch::nn_parameter(torch_empty(out_features, in_features))
14. │ └─torch:::is_torch_tensor(x)
15. └─torch::torch_empty(out_features, in_features)
16. ├─base::do.call(.torch_empty, args)
17. └─torch (local) `<fn>`(options = `<named list>`, size = `<list>`)
18. └─torch:::call_c_function(...)
19. └─torch:::do_call(f, args)
20. ├─base::do.call(fun, args)
21. └─torch (local) `<fn>`(size = `<list>`, options = `<named list>`, memory_format = NULL)
── Error ('test_layers_torch.R:6:3'): lasso layers ─────────────────────────────
<std::runtime_error/C++Error/error/condition>
Error in `cpp_torch_manual_seed(as.character(seed))`: Lantern is not loaded. Please use `install_torch()` to install additional dependencies.
Backtrace:
▆
1. └─torch::torch_manual_seed(42) at test_layers_torch.R:6:3
2. └─torch:::cpp_torch_manual_seed(as.character(seed))
── Error ('test_methods_torch.R:18:3'): all methods ────────────────────────────
<std::runtime_error/C++Error/error/condition>
Error in `(function (size, options, memory_format) { .Call(`_torch_cpp_torch_namespace_empty_size_IntArrayRef`, size, options, memory_format) })(size = list(1L, 1L), options = list(dtype = NULL, layout = NULL, device = NULL, requires_grad = FALSE), memory_format = NULL)`: Lantern is not loaded. Please use `install_torch()` to install additional dependencies.
Backtrace:
▆
1. └─deepregression::deepregression(...) at test_methods_torch.R:18:3
2. └─base::lapply(...)
3. └─deepregression (local) FUN(X[[i]], ...)
4. └─subnetwork_builder[[i]](...)
5. └─base::lapply(...)
6. └─deepregression (local) FUN(X[[i]], ...)
7. └─pp_lay[[i]]$layer()
8. ├─base::do.call(layer_class, layer_args)
9. └─deepregression (local) `<fn>`(...)
10. └─torch (local) layer_module(...)
11. └─Module$new(...)
12. └─deepregression (local) initialize(...)
13. ├─torch::nn_parameter(torch_empty(out_features, in_features))
14. │ └─torch:::is_torch_tensor(x)
15. └─torch::torch_empty(out_features, in_features)
16. ├─base::do.call(.torch_empty, args)
17. └─torch (local) `<fn>`(options = `<named list>`, size = `<list>`)
18. └─torch:::call_c_function(...)
19. └─torch:::do_call(f, args)
20. ├─base::do.call(fun, args)
21. └─torch (local) `<fn>`(size = `<list>`, options = `<named list>`, memory_format = NULL)
── Error ('test_reproducibility_torch.R:21:17'): reproducibility ───────────────
<std::runtime_error/C++Error/error/condition>
Error in `(function (size, options, memory_format) { .Call(`_torch_cpp_torch_namespace_empty_size_IntArrayRef`, size, options, memory_format) })(size = list(64, 1), options = list(dtype = NULL, layout = NULL, device = NULL, requires_grad = FALSE), memory_format = NULL)`: Lantern is not loaded. Please use `install_torch()` to install additional dependencies.
Backtrace:
▆
1. └─deepregression::deepregression(...) at test_reproducibility_torch.R:33:3
2. └─base::lapply(...)
3. └─deepregression (local) FUN(X[[i]], ...)
4. └─subnetwork_builder[[i]](...)
5. └─base::lapply(...)
6. └─deepregression (local) FUN(X[[i]], ...)
7. └─pp_lay[[i]]$layer()
8. ├─torch::nn_sequential(...) at test_reproducibility_torch.R:21:17
9. │ └─Module$new(...)
10. │ └─torch (local) initialize(...)
11. │ └─rlang::list2(...)
12. └─torch::nn_linear(in_features = 1, out_features = 64, bias = F)
13. └─Module$new(...)
14. └─torch (local) initialize(...)
15. ├─torch::nn_parameter(torch_empty(out_features, in_features))
16. │ └─torch:::is_torch_tensor(x)
17. └─torch::torch_empty(out_features, in_features)
18. ├─base::do.call(.torch_empty, args)
19. └─torch (local) `<fn>`(options = `<named list>`, size = `<list>`)
20. └─torch:::call_c_function(...)
21. └─torch:::do_call(f, args)
22. ├─base::do.call(fun, args)
23. └─torch (local) `<fn>`(size = `<list>`, options = `<named list>`, memory_format = NULL)
── Error ('test_subnetwork_init_torch.R:15:33'): subnetwork_init ───────────────
<std::runtime_error/C++Error/error/condition>
Error in `(function (size, options, memory_format) { .Call(`_torch_cpp_torch_namespace_empty_size_IntArrayRef`, size, options, memory_format) })(size = list(5, 1), options = list(dtype = NULL, layout = NULL, device = NULL, requires_grad = FALSE), memory_format = NULL)`: Lantern is not loaded. Please use `install_torch()` to install additional dependencies.
Backtrace:
▆
1. └─deepregression::subnetwork_init_torch(list(pp)) at test_subnetwork_init_torch.R:38:3
2. └─base::lapply(...)
3. └─deepregression (local) FUN(X[[i]], ...)
4. └─pp_lay[[i]]$layer()
5. ├─torch::nn_sequential(...) at test_subnetwork_init_torch.R:15:33
6. │ └─Module$new(...)
7. │ └─torch (local) initialize(...)
8. │ └─rlang::list2(...)
9. └─torch::nn_linear(in_features = 1, out_features = 5)
10. └─Module$new(...)
11. └─torch (local) initialize(...)
12. ├─torch::nn_parameter(torch_empty(out_features, in_features))
13. │ └─torch:::is_torch_tensor(x)
14. └─torch::torch_empty(out_features, in_features)
15. ├─base::do.call(.torch_empty, args)
16. └─torch (local) `<fn>`(options = `<named list>`, size = `<list>`)
17. └─torch:::call_c_function(...)
18. └─torch:::do_call(f, args)
19. ├─base::do.call(fun, args)
20. └─torch (local) `<fn>`(size = `<list>`, options = `<named list>`, memory_format = NULL)
[ FAIL 14 | WARN 0 | SKIP 1 | PASS 680 ]
Error:
! Test failures.
Execution halted
- checking PDF version of manual ... [5s/6s] OK
- checking HTML version of manual ... [4s/6s] OK
- checking for non-standard things in the check directory ... OK
- DONE
Status: 1 ERROR