- using R Under development (unstable) (2025-12-12 r89163)
- using platform: x86_64-pc-linux-gnu
- R was compiled by
gcc-15 (Debian 15.2.0-9) 15.2.0
GNU Fortran (Debian 15.2.0-9) 15.2.0
- running under: Debian GNU/Linux forky/sid
- using session charset: UTF-8
- checking for file ‘deepregression/DESCRIPTION’ ... OK
- this is package ‘deepregression’ version ‘2.3.2’
- package encoding: UTF-8
- checking CRAN incoming feasibility ... [1s/2s] OK
- checking package namespace information ... OK
- checking package dependencies ... OK
- checking if this is a source package ... OK
- checking if there is a namespace ... OK
- checking for executable files ... OK
- checking for hidden files and directories ... OK
- checking for portable file names ... OK
- checking for sufficient/correct file permissions ... OK
- checking whether package ‘deepregression’ can be installed ... OK
See the install log for details.
- checking package directory ... OK
- checking for future file timestamps ... OK
- checking DESCRIPTION meta-information ... OK
- checking top-level files ... OK
- checking for left-over files ... OK
- checking index information ... OK
- checking package subdirectories ... OK
- checking code files for non-ASCII characters ... OK
- checking R files for syntax errors ... OK
- checking whether the package can be loaded ... [3s/4s] OK
- checking whether the package can be loaded with stated dependencies ... [3s/3s] OK
- checking whether the package can be unloaded cleanly ... [3s/4s] OK
- checking whether the namespace can be loaded with stated dependencies ... [3s/4s] OK
- checking whether the namespace can be unloaded cleanly ... [3s/5s] OK
- checking loading without being on the library search path ... [3s/4s] OK
- checking whether startup messages can be suppressed ... [4s/5s] OK
- checking use of S3 registration ... OK
- checking dependencies in R code ... OK
- checking S3 generic/method consistency ... OK
- checking replacement functions ... OK
- checking foreign function calls ... OK
- checking R code for possible problems ... [23s/28s] OK
- checking Rd files ... [1s/1s] OK
- checking Rd metadata ... OK
- checking Rd line widths ... OK
- checking Rd cross-references ... OK
- checking for missing documentation entries ... OK
- checking for code/documentation mismatches ... OK
- checking Rd \usage sections ... OK
- checking Rd contents ... OK
- checking for unstated dependencies in examples ... OK
- checking examples ... [3s/3s] OK
- checking for unstated dependencies in ‘tests’ ... OK
- checking tests ... [365s/531s] ERROR
Running ‘testthat.R’ [365s/531s]
Running the tests in ‘tests/testthat.R’ failed.
Complete output:
> library(testthat)
> library(deepregression)
Loading required package: tensorflow
Loading required package: tfprobability
Loading required package: keras
The keras package is deprecated. Use the keras3 package instead.
>
> if (reticulate::py_module_available("tensorflow") &
+ reticulate::py_module_available("keras") &
+ .Platform$OS.type != "windows"){
+ test_check("deepregression")
+ }
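Note: the gate above checks only the Python-side TensorFlow/Keras modules; every failure reported further down comes from the torch backend, whose native libraries (LibTorch/Lantern) were never installed on this machine. A minimal sketch of a stricter gate, assuming `torch::torch_is_installed()` (which reports whether those binaries are present) is an acceptable extra condition:

library(testthat)
library(deepregression)

# Python-side backends, as in the original gate:
have_tf <- reticulate::py_module_available("tensorflow") &&
  reticulate::py_module_available("keras")
# Native torch backend: the package can be attached, but nn_linear() etc.
# still fail unless the LibTorch/Lantern binaries have been downloaded.
have_torch <- requireNamespace("torch", quietly = TRUE) &&
  torch::torch_is_installed()

if (have_tf && have_torch && .Platform$OS.type != "windows") {
  test_check("deepregression")
}

With such a gate the torch tests would be skipped wholesale instead of erroring; a finer-grained per-test skip is sketched at the end of this log.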
Downloading grpcio (6.3MiB)
Downloading numpy (17.1MiB)
Downloading tensorflow (615.1MiB)
Downloading tensorflow-probability (6.7MiB)
Downloading pygments (1.2MiB)
Downloading keras (1.4MiB)
Downloading h5py (4.9MiB)
Downloading ml-dtypes (4.8MiB)
Downloading tensorboard (5.2MiB)
Downloading tf-keras (1.6MiB)
Downloaded pygments
Downloaded ml-dtypes
Downloaded h5py
Downloaded keras
Downloaded grpcio
Downloaded tf-keras
Downloaded tensorboard
Downloaded numpy
Downloaded tensorflow-probability
Downloaded tensorflow
Installed 43 packages in 545ms
2025-12-13 15:43:12.555254: I tensorflow/core/util/port.cc:153] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2025-12-13 15:43:12.561273: I external/local_xla/xla/tsl/cuda/cudart_stub.cc:32] Could not find cuda drivers on your machine, GPU will not be used.
2025-12-13 15:43:12.573205: I external/local_xla/xla/tsl/cuda/cudart_stub.cc:32] Could not find cuda drivers on your machine, GPU will not be used.
2025-12-13 15:43:12.597356: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:467] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
E0000 00:00:1765636992.647292 2254377 cuda_dnn.cc:8579] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
E0000 00:00:1765636992.662836 2254377 cuda_blas.cc:1407] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
W0000 00:00:1765636992.702301 2254377 computation_placer.cc:177] computation placer already registered. Please check linkage and avoid linking the same target more than once.
W0000 00:00:1765636992.702381 2254377 computation_placer.cc:177] computation placer already registered. Please check linkage and avoid linking the same target more than once.
W0000 00:00:1765636992.702387 2254377 computation_placer.cc:177] computation placer already registered. Please check linkage and avoid linking the same target more than once.
W0000 00:00:1765636992.702391 2254377 computation_placer.cc:177] computation placer already registered. Please check linkage and avoid linking the same target more than once.
2025-12-13 15:43:12.708738: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 AVX512F AVX512_VNNI AVX512_BF16 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
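Note: the oneDNN message at the start of this block explains the small run-to-run numerical differences in the losses below. When exact reproducibility matters, the environment variable it names can be set from R, but only before TensorFlow is first initialized. A sketch:

# The flag from the log message above is read when the TensorFlow runtime
# starts, so it must be set before the first call into tf.
Sys.setenv(TF_ENABLE_ONEDNN_OPTS = "0")
library(tensorflow)
tf$constant(1)  # first TF call; runtime starts with oneDNN custom ops disabled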
Saving _problems/test_customtraining_torch-6.R
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz
11490434/11490434 [==============================] - 1s 0us/step
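Note: the 11 MB download above is the MNIST archive that keras fetches on first use (the URL is the standard tf-keras-datasets endpoint). Assuming the tests pull it via `keras::dataset_mnist()`, the file is cached under ~/.keras/datasets, so warming the cache once avoids repeating the download on later check runs:

library(keras)
# First call downloads mnist.npz into ~/.keras/datasets; later calls reuse it.
mnist <- dataset_mnist()
str(mnist$train$x)  # 60000 x 28 x 28 array of pixel values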
Saving _problems/test_data_handler_torch-78.R
2025-12-13 15:43:30.096800: E external/local_xla/xla/stream_executor/cuda/cuda_platform.cc:51] failed call to cuInit: INTERNAL: CUDA error: Failed call to cuInit: UNKNOWN ERROR (303)
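Note: the cuInit error above is TensorFlow probing for a CUDA driver that this check machine lacks; everything below runs on CPU. The fallback can be confirmed from R with the standard TensorFlow API:

library(tensorflow)
# An empty list means no usable GPU was found, matching the messages above.
tf$config$list_physical_devices("GPU")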
Epoch 1/2
15/15 [==============================] - 2s 43ms/step - loss: 2.2049 - val_loss: 2.1441
Epoch 2/2
15/15 [==============================] - 0s 15ms/step - loss: 2.0575 - val_loss: 2.0101
Epoch 1/2
15/15 [==============================] - 1s 35ms/step - loss: 2.6533 - val_loss: 2.6393
Epoch 2/2
15/15 [==============================] - 0s 12ms/step - loss: 2.6365 - val_loss: 2.6227
Epoch 1/2
15/15 [==============================] - 1s 28ms/step - loss: 2.8998 - val_loss: 1.6201
Epoch 2/2
15/15 [==============================] - 0s 12ms/step - loss: 2.6616 - val_loss: 1.5446
Epoch 1/2
15/15 [==============================] - 2s 29ms/step - loss: 4.0062 - val_loss: 3.4240
Epoch 2/2
15/15 [==============================] - 0s 14ms/step - loss: 3.9103 - val_loss: 3.3606
Epoch 1/3
3/3 [==============================] - 1s 160ms/step - loss: 8.7055 - val_loss: 7.2396
Epoch 2/3
3/3 [==============================] - 0s 39ms/step - loss: 8.6326 - val_loss: 7.1833
Epoch 3/3
3/3 [==============================] - 0s 41ms/step - loss: 8.5566 - val_loss: 7.1282
Epoch 1/2
15/15 [==============================] - 2s 44ms/step - loss: 2.6282 - val_loss: 2.6186
Epoch 2/2
15/15 [==============================] - 0s 11ms/step - loss: 2.6124 - val_loss: 2.6029
Epoch 1/2
15/15 [==============================] - 1s 31ms/step - loss: 1248.0040 - val_loss: 1370.1050
Epoch 2/2
15/15 [==============================] - 0s 13ms/step - loss: 960.7450 - val_loss: 1055.7158
Epoch 1/2
15/15 [==============================] - 1s 31ms/step - loss: 3026.8003 - val_loss: 321.5525
Epoch 2/2
15/15 [==============================] - 0s 16ms/step - loss: 2679.4148 - val_loss: 292.7626
Epoch 1/2
15/15 [==============================] - 2s 40ms/step - loss: 2.2516 - val_loss: 2.2750
Epoch 2/2
15/15 [==============================] - 0s 11ms/step - loss: 2.2132 - val_loss: 2.2328
Saving _problems/test_deepregression_torch-10.R
Saving _problems/test_deepregression_torch-117.R
Saving _problems/test_deepregression_torch-158.R
Saving _problems/test_deepregression_torch-190.R
Saving _problems/test_deepregression_torch-229.R
Fitting member 1 ...Epoch 1/10
32/32 [==============================] - 1s 5ms/step - loss: 2.3303
Epoch 2/10
32/32 [==============================] - 0s 6ms/step - loss: 2.2977
Epoch 3/10
32/32 [==============================] - 0s 4ms/step - loss: 2.2650
Epoch 4/10
32/32 [==============================] - 0s 3ms/step - loss: 2.2325
Epoch 5/10
32/32 [==============================] - 0s 2ms/step - loss: 2.2002
Epoch 6/10
32/32 [==============================] - 0s 4ms/step - loss: 2.1681
Epoch 7/10
32/32 [==============================] - 0s 4ms/step - loss: 2.1359
Epoch 8/10
32/32 [==============================] - 0s 7ms/step - loss: 2.1036
Epoch 9/10
32/32 [==============================] - 0s 5ms/step - loss: 2.0717
Epoch 10/10
32/32 [==============================] - 0s 3ms/step - loss: 2.0400
Done in 2.464089 secs
Fitting member 2 ...Epoch 1/10
32/32 [==============================] - 0s 4ms/step - loss: 2.3312
Epoch 2/10
32/32 [==============================] - 0s 4ms/step - loss: 2.2785
Epoch 3/10
32/32 [==============================] - 0s 4ms/step - loss: 2.2334
Epoch 4/10
32/32 [==============================] - 0s 3ms/step - loss: 2.1937
Epoch 5/10
32/32 [==============================] - 0s 3ms/step - loss: 2.1597
Epoch 6/10
32/32 [==============================] - 0s 3ms/step - loss: 2.1276
Epoch 7/10
32/32 [==============================] - 0s 4ms/step - loss: 2.0961
Epoch 8/10
32/32 [==============================] - 0s 3ms/step - loss: 2.0644
Epoch 9/10
32/32 [==============================] - 0s 4ms/step - loss: 2.0336
Epoch 10/10
32/32 [==============================] - 0s 2ms/step - loss: 2.0026
Done in 1.384029 secs
Fitting member 3 ...Epoch 1/10
32/32 [==============================] - 0s 4ms/step - loss: 39.2828
Epoch 2/10
32/32 [==============================] - 0s 3ms/step - loss: 27.2884
Epoch 3/10
32/32 [==============================] - 0s 3ms/step - loss: 21.7021
Epoch 4/10
32/32 [==============================] - 0s 3ms/step - loss: 18.2036
Epoch 5/10
32/32 [==============================] - 0s 3ms/step - loss: 15.8282
Epoch 6/10
32/32 [==============================] - 0s 4ms/step - loss: 14.0666
Epoch 7/10
32/32 [==============================] - 0s 3ms/step - loss: 12.6950
Epoch 8/10
32/32 [==============================] - 0s 3ms/step - loss: 11.5684
Epoch 9/10
32/32 [==============================] - 0s 3ms/step - loss: 10.6440
Epoch 10/10
32/32 [==============================] - 0s 3ms/step - loss: 9.8495
Done in 1.390871 secs
Fitting member 4 ...Epoch 1/10
32/32 [==============================] - 0s 2ms/step - loss: 2.9588
Epoch 2/10
32/32 [==============================] - 0s 3ms/step - loss: 2.9011
Epoch 3/10
32/32 [==============================] - 0s 4ms/step - loss: 2.8534
Epoch 4/10
32/32 [==============================] - 0s 4ms/step - loss: 2.8062
Epoch 5/10
32/32 [==============================] - 0s 3ms/step - loss: 2.7623
Epoch 6/10
32/32 [==============================] - 0s 3ms/step - loss: 2.7193
Epoch 7/10
32/32 [==============================] - 0s 3ms/step - loss: 2.6774
Epoch 8/10
32/32 [==============================] - 0s 3ms/step - loss: 2.6383
Epoch 9/10
32/32 [==============================] - 0s 4ms/step - loss: 2.6007
Epoch 10/10
32/32 [==============================] - 0s 4ms/step - loss: 2.5630
Done in 1.402577 secs
Fitting member 5 ...Epoch 1/10
32/32 [==============================] - 0s 4ms/step - loss: 139.0890
Epoch 2/10
32/32 [==============================] - 0s 3ms/step - loss: 95.6168
Epoch 3/10
32/32 [==============================] - 0s 3ms/step - loss: 74.6476
Epoch 4/10
32/32 [==============================] - 0s 3ms/step - loss: 61.8040
Epoch 5/10
32/32 [==============================] - 0s 2ms/step - loss: 53.1899
Epoch 6/10
32/32 [==============================] - 0s 2ms/step - loss: 46.9254
Epoch 7/10
32/32 [==============================] - 0s 3ms/step - loss: 42.0744
Epoch 8/10
32/32 [==============================] - 0s 3ms/step - loss: 38.1152
Epoch 9/10
32/32 [==============================] - 0s 4ms/step - loss: 34.8808
Epoch 10/10
32/32 [==============================] - 0s 4ms/step - loss: 32.1088
Done in 1.259778 secs
Epoch 1/2
3/3 [==============================] - 1s 168ms/step - loss: 2.3038 - val_loss: 2.2154
Epoch 2/2
3/3 [==============================] - 0s 55ms/step - loss: 2.3004 - val_loss: 2.2128
Epoch 1/2
3/3 [==============================] - 0s 85ms/step - loss: 47.0024 - val_loss: 27.1291
Epoch 2/2
3/3 [==============================] - 0s 36ms/step - loss: 46.6172 - val_loss: 26.8568
Saving _problems/test_ensemble_torch-17.R
Saving _problems/test_ensemble_torch-63.R
Fitting normal
Fitting bernoulli
Fitting bernoulli_prob
WARNING:tensorflow:5 out of the last 13 calls to <function Model.make_test_function.<locals>.test_function at 0x7fb5b017fec0> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has reduce_retracing=True option that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details.
Fitting beta
WARNING:tensorflow:5 out of the last 11 calls to <function Model.make_test_function.<locals>.test_function at 0x7fb5a3f2fba0> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has reduce_retracing=True option that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details.
Fitting betar
Fitting chi2
Fitting chi
Fitting exponential
Fitting gamma
Fitting gammar
Fitting gumbel
Fitting half_normal
Fitting horseshoe
Fitting inverse_gaussian
Fitting laplace
Fitting log_normal
Fitting logistic
Fitting negbinom
Fitting negbinom
Fitting pareto_ls
Fitting poisson
Fitting poisson_lograte
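Note: the two retracing warnings above appear while looping over families; each fit builds a fresh test function, so tf.function traces pile up. The warning text itself lists the remedies. A sketch of remedy (2), passing `reduce_retracing = TRUE` through to tf.function via reticulate (illustrative only, not how deepregression constructs its models):

library(tensorflow)
# Trace once, outside any loop, and let TF relax shape constraints
# instead of retracing for each new input signature.
mse <- tf$`function`(
  function(x) tf$reduce_mean(tf$square(x)),
  reduce_retracing = TRUE
)
mse(tf$constant(matrix(rnorm(10), ncol = 2)))
mse(tf$constant(matrix(rnorm(12), ncol = 3)))  # reuses the relaxed trace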
Saving _problems/test_families_torch-82.R
Saving _problems/test_layers_torch-6.R
Saving _problems/test_methods_torch-23.R
Epoch 1/2
29/29 [==============================] - 1s 19ms/step - loss: 10.6607 - val_loss: 7.6350
Epoch 2/2
29/29 [==============================] - 0s 9ms/step - loss: 9.5296 - val_loss: 6.8647
Epoch 1/10
29/29 [==============================] - 4s 44ms/step - loss: 7.8712 - val_loss: 8.8751
Epoch 2/10
29/29 [==============================] - 1s 21ms/step - loss: 7.4694 - val_loss: 8.4332
Epoch 3/10
29/29 [==============================] - 1s 23ms/step - loss: 7.1170 - val_loss: 8.0236
Epoch 4/10
29/29 [==============================] - 1s 22ms/step - loss: 6.7865 - val_loss: 7.6687
Epoch 5/10
29/29 [==============================] - 1s 21ms/step - loss: 6.4899 - val_loss: 7.3231
Epoch 6/10
29/29 [==============================] - 1s 21ms/step - loss: 6.2000 - val_loss: 6.9819
Epoch 7/10
29/29 [==============================] - 1s 20ms/step - loss: 5.9123 - val_loss: 6.6514
Epoch 8/10
29/29 [==============================] - 1s 21ms/step - loss: 5.6402 - val_loss: 6.3442
Epoch 9/10
29/29 [==============================] - 1s 20ms/step - loss: 5.3794 - val_loss: 6.0578
Epoch 10/10
29/29 [==============================] - 1s 20ms/step - loss: 5.1438 - val_loss: 5.7953
Epoch 1/10
29/29 [==============================] - 3s 35ms/step - loss: 1.4848 - val_loss: 1.4755
Epoch 2/10
29/29 [==============================] - 1s 20ms/step - loss: 1.4679 - val_loss: 1.4596
Epoch 3/10
29/29 [==============================] - 1s 23ms/step - loss: 1.4531 - val_loss: 1.4458
Epoch 4/10
29/29 [==============================] - 1s 19ms/step - loss: 1.4401 - val_loss: 1.4336
Epoch 5/10
29/29 [==============================] - 1s 22ms/step - loss: 1.4287 - val_loss: 1.4230
Epoch 6/10
29/29 [==============================] - 1s 22ms/step - loss: 1.4187 - val_loss: 1.4137
Epoch 7/10
29/29 [==============================] - 1s 21ms/step - loss: 1.4101 - val_loss: 1.4059
Epoch 8/10
29/29 [==============================] - 1s 21ms/step - loss: 1.4030 - val_loss: 1.3995
Epoch 9/10
29/29 [==============================] - 1s 20ms/step - loss: 1.3973 - val_loss: 1.3948
Epoch 10/10
29/29 [==============================] - 1s 19ms/step - loss: 1.3933 - val_loss: 1.3916
Epoch 1/10
29/29 [==============================] - 4s 42ms/step - loss: 1.1842 - val_loss: 2.1275
Epoch 2/10
29/29 [==============================] - 1s 25ms/step - loss: 1.1645 - val_loss: 2.0574
Epoch 3/10
29/29 [==============================] - 1s 24ms/step - loss: 1.1441 - val_loss: 1.9952
Epoch 4/10
29/29 [==============================] - 1s 21ms/step - loss: 1.1216 - val_loss: 1.9393
Epoch 5/10
29/29 [==============================] - 1s 19ms/step - loss: 1.0955 - val_loss: 1.8846
Epoch 6/10
29/29 [==============================] - 1s 22ms/step - loss: 1.0666 - val_loss: 1.8390
Epoch 7/10
29/29 [==============================] - 1s 24ms/step - loss: 1.0385 - val_loss: 1.7931
Epoch 8/10
29/29 [==============================] - 1s 23ms/step - loss: 1.0098 - val_loss: 1.7559
Epoch 9/10
29/29 [==============================] - 1s 24ms/step - loss: 0.9810 - val_loss: 1.7130
Epoch 10/10
29/29 [==============================] - 1s 24ms/step - loss: 0.9513 - val_loss: 1.6782
2025-12-13 15:48:35.086958: E tensorflow/core/util/util.cc:131] oneDNN supports DT_INT64 only on platforms with AVX-512. Falling back to the default Eigen-based implementation if present.
Model: "model_43"
________________________________________________________________________________
Layer (type) Output Shape Para Connected to Trainable
m #
================================================================================
input_node_x1_x2_ [(None, 2)] 0 [] Y
n_trees_2_n_layer
s_3_tree_depth_5_
_1 (InputLayer)
input__Intercept_ [(None, 1)] 0 [] Y
_1 (InputLayer)
node_2 (NODE) (None, 3) 1754 ['input_node_x1_x2 Y
_n_trees_2_n_layer
s_3_tree_depth_5__
1[0][0]']
1_1 (Dense) (None, 3) 3 ['input__Intercept Y
__1[0][0]']
add_77 (Add) (None, 3) 0 ['node_2[0][0]', Y
'1_1[0][0]']
distribution_lamb ((None, 3), 0 ['add_77[0][0]'] Y
da_43 (Distributi (None, 3))
onLambda)
================================================================================
Total params: 1757 (6.86 KB)
Trainable params: 793 (3.10 KB)
Non-trainable params: 964 (3.77 KB)
________________________________________________________________________________
Model formulas:
---------------
loc :
~node(x1, x2, n_trees = 2, n_layers = 3, tree_depth = 5)
<environment: 0x55d246048b38>
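Note: for orientation, the summary and formula above describe a model whose loc parameter is a single NODE term plus an intercept. A rough reconstruction with simulated data, assuming the documented `deepregression()` interface (`data`, `list_of_formulas`); the data and sizes here are invented:

library(deepregression)
set.seed(42)
n <- 100
data <- data.frame(x1 = rnorm(n), x2 = rnorm(n))
y <- rnorm(n)

mod <- deepregression(
  y = y,
  data = data,
  list_of_formulas = list(
    loc   = ~ node(x1, x2, n_trees = 2, n_layers = 3, tree_depth = 5),
    scale = ~ 1
  )
)
# deepregression() returns NULL when no backend is available
if (!is.null(mod)) fit(mod, epochs = 2, verbose = FALSE)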
Fitting model with 1 orthogonalization(s) ...
Fitting model with 2 orthogonalization(s) ...
Fitting model with 3 orthogonalization(s) ...
Fitting model with 4 orthogonalization(s) ...
Fitting model with 5 orthogonalization(s) ...
Saving _problems/test_reproducibility_torch-30.R
Saving _problems/test_subnetwork_init_torch-20.R
Fitting Fold 1 ...
Done in 2.008227 secs
Fitting Fold 2 ...
Done in 0.4814897 secs
Epoch 1/2
2/2 [==============================] - 0s 17ms/step - loss: 22.4672
Epoch 2/2
2/2 [==============================] - 0s 26ms/step - loss: 20.5671
Fitting Fold 1 ...
Done in 1.558068 secs
Fitting Fold 2 ...
Done in 0.4165711 secs
Epoch 1/2
2/2 [==============================] - 0s 21ms/step - loss: 22.4672
Epoch 2/2
2/2 [==============================] - 0s 17ms/step - loss: 20.5671
[ FAIL 14 | WARN 0 | SKIP 1 | PASS 680 ]
══ Skipped tests (1) ═══════════════════════════════════════════════════════════
• empty test (1):
══ Failed tests ════════════════════════════════════════════════════════════════
── Error ('test_customtraining_torch.R:6:3'): Use multiple optimizers torch ────
<std::runtime_error/C++Error/error/condition>
Error in `(function (size, options, memory_format) { .Call(`_torch_cpp_torch_namespace_empty_size_IntArrayRef`, size, options, memory_format) })(size = list(50, 1), options = list(dtype = NULL, layout = NULL, device = NULL, requires_grad = FALSE), memory_format = NULL)`: Lantern is not loaded. Please use `install_torch()` to install additional dependencies.
Backtrace:
▆
1. └─torch::nn_linear(1, 50) at test_customtraining_torch.R:6:3
2. └─Module$new(...)
3. └─torch (local) initialize(...)
4. ├─torch::nn_parameter(torch_empty(out_features, in_features))
5. │ └─torch:::is_torch_tensor(x)
6. └─torch::torch_empty(out_features, in_features)
7. ├─base::do.call(.torch_empty, args)
8. └─torch (local) `<fn>`(options = `<named list>`, size = `<list>`)
9. └─torch:::call_c_function(...)
10. └─torch:::do_call(f, args)
11. ├─base::do.call(fun, args)
12. └─torch (local) `<fn>`(size = `<list>`, options = `<named list>`, memory_format = NULL)
── Error ('test_data_handler_torch.R:75:3'): properties of dataset torch ───────
<std::runtime_error/C++Error/error/condition>
Error in `(function (size, options, memory_format) { .Call(`_torch_cpp_torch_namespace_empty_size_IntArrayRef`, size, options, memory_format) })(size = list(1L, 1L), options = list(dtype = NULL, layout = NULL, device = NULL, requires_grad = FALSE), memory_format = NULL)`: Lantern is not loaded. Please use `install_torch()` to install additional dependencies.
Backtrace:
▆
1. └─deepregression::deepregression(...) at test_data_handler_torch.R:75:3
2. └─base::lapply(...)
3. └─deepregression (local) FUN(X[[i]], ...)
4. └─subnetwork_builder[[i]](...)
5. └─base::lapply(...)
6. └─deepregression (local) FUN(X[[i]], ...)
7. └─pp_lay[[i]]$layer()
8. ├─base::do.call(layer_class, layer_args)
9. └─deepregression (local) `<fn>`(...)
10. └─torch (local) layer_module(...)
11. └─Module$new(...)
12. └─deepregression (local) initialize(...)
13. ├─torch::nn_parameter(torch_empty(out_features, in_features))
14. │ └─torch:::is_torch_tensor(x)
15. └─torch::torch_empty(out_features, in_features)
16. ├─base::do.call(.torch_empty, args)
17. └─torch (local) `<fn>`(options = `<named list>`, size = `<list>`)
18. └─torch:::call_c_function(...)
19. └─torch:::do_call(f, args)
20. ├─base::do.call(fun, args)
21. └─torch (local) `<fn>`(size = `<list>`, options = `<named list>`, memory_format = NULL)
── Error ('test_deepregression_torch.R:6:5'): Simple additive model ────────────
<std::runtime_error/C++Error/error/condition>
Error in `(function (size, options, memory_format) { .Call(`_torch_cpp_torch_namespace_empty_size_IntArrayRef`, size, options, memory_format) })(size = list(2, 1), options = list(dtype = NULL, layout = NULL, device = NULL, requires_grad = FALSE), memory_format = NULL)`: Lantern is not loaded. Please use `install_torch()` to install additional dependencies.
Backtrace:
▆
1. └─deepregression::deepregression(...) at test_deepregression_torch.R:21:5
2. └─base::lapply(...)
3. └─deepregression (local) FUN(X[[i]], ...)
4. └─subnetwork_builder[[i]](...)
5. └─base::lapply(...)
6. └─deepregression (local) FUN(X[[i]], ...)
7. └─pp_lay[[i]]$layer()
8. ├─torch::nn_sequential(...) at test_deepregression_torch.R:6:5
9. │ └─Module$new(...)
10. │ └─torch (local) initialize(...)
11. │ └─rlang::list2(...)
12. └─torch::nn_linear(in_features = i, out_features = 2, bias = F)
13. └─Module$new(...)
14. └─torch (local) initialize(...)
15. ├─torch::nn_parameter(torch_empty(out_features, in_features))
16. │ └─torch:::is_torch_tensor(x)
17. └─torch::torch_empty(out_features, in_features)
18. ├─base::do.call(.torch_empty, args)
19. └─torch (local) `<fn>`(options = `<named list>`, size = `<list>`)
20. └─torch:::call_c_function(...)
21. └─torch:::do_call(f, args)
22. ├─base::do.call(fun, args)
23. └─torch (local) `<fn>`(size = `<list>`, options = `<named list>`, memory_format = NULL)
── Error ('test_deepregression_torch.R:110:3'): Generalized additive model ─────
<std::runtime_error/C++Error/error/condition>
Error in `torch_tensor_cpp(data, dtype, device, requires_grad, pin_memory)`: Lantern is not loaded. Please use `install_torch()` to install additional dependencies.
Backtrace:
▆
1. └─deepregression::deepregression(...) at test_deepregression_torch.R:110:3
2. └─base::lapply(...)
3. └─deepregression (local) FUN(X[[i]], ...)
4. └─subnetwork_builder[[i]](...)
5. └─base::lapply(...)
6. └─deepregression (local) FUN(X[[i]], ...)
7. └─pp_lay[[i]]$layer()
8. ├─base::do.call(layer_class, layer_args)
9. └─deepregression (local) `<fn>`(...)
10. └─torch::torch_tensor(P)
11. └─Tensor$new(data, dtype, device, requires_grad, pin_memory)
12. └─methods$initialize(NULL, NULL, ...)
13. └─torch:::torch_tensor_cpp(...)
── Error ('test_deepregression_torch.R:151:3'): Deep generalized additive model with LSS ──
<std::runtime_error/C++Error/error/condition>
Error in `torch_tensor_cpp(data, dtype, device, requires_grad, pin_memory)`: Lantern is not loaded. Please use `install_torch()` to install additional dependencies.
Backtrace:
▆
1. └─deepregression::deepregression(...) at test_deepregression_torch.R:151:3
2. └─base::lapply(...)
3. └─deepregression (local) FUN(X[[i]], ...)
4. └─subnetwork_builder[[i]](...)
5. └─base::lapply(...)
6. └─deepregression (local) FUN(X[[i]], ...)
7. └─pp_lay[[i]]$layer()
8. ├─base::do.call(layer_class, layer_args)
9. └─deepregression (local) `<fn>`(...)
10. └─torch::torch_tensor(P)
11. └─Tensor$new(data, dtype, device, requires_grad, pin_memory)
12. └─methods$initialize(NULL, NULL, ...)
13. └─torch:::torch_tensor_cpp(...)
── Error ('test_deepregression_torch.R:181:3'): GAMs with shared weights ───────
<std::runtime_error/C++Error/error/condition>
Error in `torch_tensor_cpp(data, dtype, device, requires_grad, pin_memory)`: Lantern is not loaded. Please use `install_torch()` to install additional dependencies.
Backtrace:
▆
1. └─deepregression::deepregression(...) at test_deepregression_torch.R:181:3
2. └─base::lapply(...)
3. └─deepregression (local) FUN(X[[i]], ...)
4. └─subnetwork_builder[[i]](...)
5. ├─base::do.call(...)
6. └─deepregression (local) `<fn>`(...)
7. └─torch::torch_tensor(P)
8. └─Tensor$new(data, dtype, device, requires_grad, pin_memory)
9. └─methods$initialize(NULL, NULL, ...)
10. └─torch:::torch_tensor_cpp(...)
── Error ('test_deepregression_torch.R:220:3'): GAMs with fixed weights ────────
<std::runtime_error/C++Error/error/condition>
Error in `torch_tensor_cpp(data, dtype, device, requires_grad, pin_memory)`: Lantern is not loaded. Please use `install_torch()` to install additional dependencies.
Backtrace:
▆
1. └─deepregression::deepregression(...) at test_deepregression_torch.R:220:3
2. └─base::lapply(...)
3. └─deepregression (local) FUN(X[[i]], ...)
4. └─subnetwork_builder[[i]](...)
5. └─base::lapply(...)
6. └─deepregression (local) FUN(X[[i]], ...)
7. └─pp_lay[[i]]$layer()
8. ├─base::do.call(layer_class, layer_args)
9. └─deepregression (local) `<fn>`(...)
10. └─torch::torch_tensor(P)
11. └─Tensor$new(data, dtype, device, requires_grad, pin_memory)
12. └─methods$initialize(NULL, NULL, ...)
13. └─torch:::torch_tensor_cpp(...)
── Error ('test_ensemble_torch.R:13:3'): deep ensemble ─────────────────────────
<std::runtime_error/C++Error/error/condition>
Error in `torch_tensor_cpp(data, dtype, device, requires_grad, pin_memory)`: Lantern is not loaded. Please use `install_torch()` to install additional dependencies.
Backtrace:
▆
1. └─deepregression::deepregression(...) at test_ensemble_torch.R:13:3
2. └─base::lapply(...)
3. └─deepregression (local) FUN(X[[i]], ...)
4. └─subnetwork_builder[[i]](...)
5. └─base::lapply(...)
6. └─deepregression (local) FUN(X[[i]], ...)
7. └─pp_lay[[i]]$layer()
8. ├─base::do.call(layer_class, layer_args)
9. └─deepregression (local) `<fn>`(...)
10. └─torch::torch_tensor(P)
11. └─Tensor$new(data, dtype, device, requires_grad, pin_memory)
12. └─methods$initialize(NULL, NULL, ...)
13. └─torch:::torch_tensor_cpp(...)
── Error ('test_ensemble_torch.R:55:3'): reinitializing weights ────────────────
<std::runtime_error/C++Error/error/condition>
Error in `torch_tensor_cpp(data, dtype, device, requires_grad, pin_memory)`: Lantern is not loaded. Please use `install_torch()` to install additional dependencies.
Backtrace:
▆
1. └─deepregression::deepregression(...) at test_ensemble_torch.R:55:3
2. └─base::lapply(...)
3. └─deepregression (local) FUN(X[[i]], ...)
4. └─subnetwork_builder[[i]](...)
5. └─base::lapply(...)
6. └─deepregression (local) FUN(X[[i]], ...)
7. └─pp_lay[[i]]$layer()
8. ├─base::do.call(layer_class, layer_args)
9. └─deepregression (local) `<fn>`(...)
10. └─torch::torch_tensor(P)
11. └─Tensor$new(data, dtype, device, requires_grad, pin_memory)
12. └─methods$initialize(NULL, NULL, ...)
13. └─torch:::torch_tensor_cpp(...)
── Error ('test_families_torch.R:76:7'): torch families can be fitted ──────────
<std::runtime_error/C++Error/error/condition>
Error in `(function (size, options, memory_format) { .Call(`_torch_cpp_torch_namespace_empty_size_IntArrayRef`, size, options, memory_format) })(size = list(1L, 1L), options = list(dtype = NULL, layout = NULL, device = NULL, requires_grad = FALSE), memory_format = NULL)`: Lantern is not loaded. Please use `install_torch()` to install additional dependencies.
Backtrace:
▆
1. └─deepregression::deepregression(...) at test_families_torch.R:76:7
2. └─base::lapply(...)
3. └─deepregression (local) FUN(X[[i]], ...)
4. └─subnetwork_builder[[i]](...)
5. └─base::lapply(...)
6. └─deepregression (local) FUN(X[[i]], ...)
7. └─pp_lay[[i]]$layer()
8. ├─base::do.call(layer_class, layer_args)
9. └─deepregression (local) `<fn>`(...)
10. └─torch (local) layer_module(...)
11. └─Module$new(...)
12. └─deepregression (local) initialize(...)
13. ├─torch::nn_parameter(torch_empty(out_features, in_features))
14. │ └─torch:::is_torch_tensor(x)
15. └─torch::torch_empty(out_features, in_features)
16. ├─base::do.call(.torch_empty, args)
17. └─torch (local) `<fn>`(options = `<named list>`, size = `<list>`)
18. └─torch:::call_c_function(...)
19. └─torch:::do_call(f, args)
20. ├─base::do.call(fun, args)
21. └─torch (local) `<fn>`(size = `<list>`, options = `<named list>`, memory_format = NULL)
── Error ('test_layers_torch.R:6:3'): lasso layers ─────────────────────────────
<std::runtime_error/C++Error/error/condition>
Error in `cpp_torch_manual_seed(as.character(seed))`: Lantern is not loaded. Please use `install_torch()` to install additional dependencies.
Backtrace:
▆
1. └─torch::torch_manual_seed(42) at test_layers_torch.R:6:3
2. └─torch:::cpp_torch_manual_seed(as.character(seed))
── Error ('test_methods_torch.R:18:3'): all methods ────────────────────────────
<std::runtime_error/C++Error/error/condition>
Error in `(function (size, options, memory_format) { .Call(`_torch_cpp_torch_namespace_empty_size_IntArrayRef`, size, options, memory_format) })(size = list(1L, 1L), options = list(dtype = NULL, layout = NULL, device = NULL, requires_grad = FALSE), memory_format = NULL)`: Lantern is not loaded. Please use `install_torch()` to install additional dependencies.
Backtrace:
▆
1. └─deepregression::deepregression(...) at test_methods_torch.R:18:3
2. └─base::lapply(...)
3. └─deepregression (local) FUN(X[[i]], ...)
4. └─subnetwork_builder[[i]](...)
5. └─base::lapply(...)
6. └─deepregression (local) FUN(X[[i]], ...)
7. └─pp_lay[[i]]$layer()
8. ├─base::do.call(layer_class, layer_args)
9. └─deepregression (local) `<fn>`(...)
10. └─torch (local) layer_module(...)
11. └─Module$new(...)
12. └─deepregression (local) initialize(...)
13. ├─torch::nn_parameter(torch_empty(out_features, in_features))
14. │ └─torch:::is_torch_tensor(x)
15. └─torch::torch_empty(out_features, in_features)
16. ├─base::do.call(.torch_empty, args)
17. └─torch (local) `<fn>`(options = `<named list>`, size = `<list>`)
18. └─torch:::call_c_function(...)
19. └─torch:::do_call(f, args)
20. ├─base::do.call(fun, args)
21. └─torch (local) `<fn>`(size = `<list>`, options = `<named list>`, memory_format = NULL)
── Error ('test_reproducibility_torch.R:21:17'): reproducibility ───────────────
<std::runtime_error/C++Error/error/condition>
Error in `(function (size, options, memory_format) { .Call(`_torch_cpp_torch_namespace_empty_size_IntArrayRef`, size, options, memory_format) })(size = list(64, 1), options = list(dtype = NULL, layout = NULL, device = NULL, requires_grad = FALSE), memory_format = NULL)`: Lantern is not loaded. Please use `install_torch()` to install additional dependencies.
Backtrace:
▆
1. └─deepregression::deepregression(...) at test_reproducibility_torch.R:33:3
2. └─base::lapply(...)
3. └─deepregression (local) FUN(X[[i]], ...)
4. └─subnetwork_builder[[i]](...)
5. └─base::lapply(...)
6. └─deepregression (local) FUN(X[[i]], ...)
7. └─pp_lay[[i]]$layer()
8. ├─torch::nn_sequential(...) at test_reproducibility_torch.R:21:17
9. │ └─Module$new(...)
10. │ └─torch (local) initialize(...)
11. │ └─rlang::list2(...)
12. └─torch::nn_linear(in_features = 1, out_features = 64, bias = F)
13. └─Module$new(...)
14. └─torch (local) initialize(...)
15. ├─torch::nn_parameter(torch_empty(out_features, in_features))
16. │ └─torch:::is_torch_tensor(x)
17. └─torch::torch_empty(out_features, in_features)
18. ├─base::do.call(.torch_empty, args)
19. └─torch (local) `<fn>`(options = `<named list>`, size = `<list>`)
20. └─torch:::call_c_function(...)
21. └─torch:::do_call(f, args)
22. ├─base::do.call(fun, args)
23. └─torch (local) `<fn>`(size = `<list>`, options = `<named list>`, memory_format = NULL)
── Error ('test_subnetwork_init_torch.R:15:33'): subnetwork_init ───────────────
<std::runtime_error/C++Error/error/condition>
Error in `(function (size, options, memory_format) { .Call(`_torch_cpp_torch_namespace_empty_size_IntArrayRef`, size, options, memory_format) })(size = list(5, 1), options = list(dtype = NULL, layout = NULL, device = NULL, requires_grad = FALSE), memory_format = NULL)`: Lantern is not loaded. Please use `install_torch()` to install additional dependencies.
Backtrace:
▆
1. └─deepregression::subnetwork_init_torch(list(pp)) at test_subnetwork_init_torch.R:38:3
2. └─base::lapply(...)
3. └─deepregression (local) FUN(X[[i]], ...)
4. └─pp_lay[[i]]$layer()
5. ├─torch::nn_sequential(...) at test_subnetwork_init_torch.R:15:33
6. │ └─Module$new(...)
7. │ └─torch (local) initialize(...)
8. │ └─rlang::list2(...)
9. └─torch::nn_linear(in_features = 1, out_features = 5)
10. └─Module$new(...)
11. └─torch (local) initialize(...)
12. ├─torch::nn_parameter(torch_empty(out_features, in_features))
13. │ └─torch:::is_torch_tensor(x)
14. └─torch::torch_empty(out_features, in_features)
15. ├─base::do.call(.torch_empty, args)
16. └─torch (local) `<fn>`(options = `<named list>`, size = `<list>`)
17. └─torch:::call_c_function(...)
18. └─torch:::do_call(f, args)
19. ├─base::do.call(fun, args)
20. └─torch (local) `<fn>`(size = `<list>`, options = `<named list>`, memory_format = NULL)
[ FAIL 14 | WARN 0 | SKIP 1 | PASS 680 ]
Error:
! Test failures.
Execution halted
- checking PDF version of manual ... [6s/8s] OK
- checking HTML version of manual ... [4s/6s] OK
- checking for non-standard things in the check directory ... OK
- DONE
Status: 1 ERROR
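
Note: all 14 failures above share one root cause. The torch R package's native backend ("Lantern", its libtorch bindings) is not installed on the check machine, so every call that touches a tensor (torch_tensor(), torch_empty(), torch_manual_seed()) aborts with "Lantern is not loaded. Please use `install_torch()` to install additional dependencies." As that message itself suggests, the backend is a one-time, per-machine download:

    # One-time download of the Lantern/libtorch binaries (needs network access):
    torch::install_torch()

Where installing the backend is not an option, the torch test files could skip instead of erroring. A minimal sketch, assuming the usual testthat layout of these files; torch::torch_is_installed() reports whether the backend binaries are present, and the test name "lasso layers" and the torch_manual_seed(42) call are taken from the failure at test_layers_torch.R:6 above (the test body shown is otherwise hypothetical):

    test_that("lasso layers", {
      # Skip cleanly when the Lantern/libtorch backend is missing, instead of
      # letting the first tensor allocation raise a C++ runtime error:
      skip_if_not(torch::torch_is_installed(),
                  "Lantern/libtorch backend not installed")
      torch::torch_manual_seed(42)
      # ... remainder of the original test ...
    })

With such a guard the check would record these 14 tests as SKIP rather than FAIL, consistent with the one skip already reported in the summary line above.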