- using R Under development (unstable) (2025-11-21 r89046)
- using platform: x86_64-pc-linux-gnu
- R was compiled by
gcc-15 (Debian 15.2.0-7) 15.2.0
GNU Fortran (Debian 15.2.0-7) 15.2.0
- running under: Debian GNU/Linux forky/sid
- using session charset: UTF-8
- checking for file ‘deepregression/DESCRIPTION’ ... OK
- this is package ‘deepregression’ version ‘2.3.2’
- package encoding: UTF-8
- checking CRAN incoming feasibility ... [1s/1s] OK
- checking package namespace information ... OK
- checking package dependencies ... OK
- checking if this is a source package ... OK
- checking if there is a namespace ... OK
- checking for executable files ... OK
- checking for hidden files and directories ... OK
- checking for portable file names ... OK
- checking for sufficient/correct file permissions ... OK
- checking whether package ‘deepregression’ can be installed ... OK
See the install log for details.
- checking package directory ... OK
- checking for future file timestamps ... OK
- checking DESCRIPTION meta-information ... OK
- checking top-level files ... OK
- checking for left-over files ... OK
- checking index information ... OK
- checking package subdirectories ... OK
- checking code files for non-ASCII characters ... OK
- checking R files for syntax errors ... OK
- checking whether the package can be loaded ... [3s/4s] OK
- checking whether the package can be loaded with stated dependencies ... [3s/4s] OK
- checking whether the package can be unloaded cleanly ... [3s/3s] OK
- checking whether the namespace can be loaded with stated dependencies ... [3s/4s] OK
- checking whether the namespace can be unloaded cleanly ... [3s/4s] OK
- checking loading without being on the library search path ... [3s/4s] OK
- checking whether startup messages can be suppressed ... [3s/4s] OK
- checking use of S3 registration ... OK
- checking dependencies in R code ... OK
- checking S3 generic/method consistency ... OK
- checking replacement functions ... OK
- checking foreign function calls ... OK
- checking R code for possible problems ... [21s/26s] OK
- checking Rd files ... [1s/1s] OK
- checking Rd metadata ... OK
- checking Rd line widths ... OK
- checking Rd cross-references ... OK
- checking for missing documentation entries ... OK
- checking for code/documentation mismatches ... OK
- checking Rd \usage sections ... OK
- checking Rd contents ... OK
- checking for unstated dependencies in examples ... OK
- checking examples ... [3s/3s] OK
- checking for unstated dependencies in ‘tests’ ... OK
- checking tests ... [359s/532s] ERROR
Running ‘testthat.R’ [359s/532s]
Running the tests in ‘tests/testthat.R’ failed.
Complete output:
> library(testthat)
> library(deepregression)
Loading required package: tensorflow
Loading required package: tfprobability
Loading required package: keras
The keras package is deprecated. Use the keras3 package instead.
>
> if (reticulate::py_module_available("tensorflow") &
+ reticulate::py_module_available("keras") &
+ .Platform$OS.type != "windows"){
+ test_check("deepregression")
+ }
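
The 14 torch failures reported at the end of this log all trace back to a missing Lantern backend rather than to the Python stack, and the guard above only probes the Python modules. A minimal sketch of a stricter guard, assuming the same tests/testthat.R layout (`torch::torch_is_installed()` reports whether the Lantern backend has been downloaded; `&&` is the idiomatic scalar conjunction here):

    # Hypothetical stricter guard: also require the torch backend so the
    # *_torch tests are skipped instead of erroring when Lantern is absent.
    if (reticulate::py_module_available("tensorflow") &&
        reticulate::py_module_available("keras") &&
        torch::torch_is_installed() &&
        .Platform$OS.type != "windows") {
      test_check("deepregression")
    }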
Downloading pygments (1.2MiB)
Downloading tf-keras (1.6MiB)
Downloading tensorboard (5.2MiB)
Downloading keras (1.4MiB)
Downloading h5py (4.9MiB)
Downloading grpcio (6.3MiB)
Downloading tensorflow (615.1MiB)
Downloading ml-dtypes (4.8MiB)
Downloading numpy (17.1MiB)
Downloading tensorflow-probability (6.7MiB)
Downloaded pygments
Downloaded ml-dtypes
Downloaded h5py
Downloaded grpcio
Downloaded keras
Downloaded tf-keras
Downloaded tensorboard
Downloaded numpy
Downloaded tensorflow-probability
Downloaded tensorflow
Installed 43 packages in 371ms
2025-11-22 15:35:45.912348: I tensorflow/core/util/port.cc:153] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2025-11-22 15:35:45.913885: I external/local_xla/xla/tsl/cuda/cudart_stub.cc:32] Could not find cuda drivers on your machine, GPU will not be used.
2025-11-22 15:35:45.919636: I external/local_xla/xla/tsl/cuda/cudart_stub.cc:32] Could not find cuda drivers on your machine, GPU will not be used.
2025-11-22 15:35:45.932965: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:467] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
E0000 00:00:1763822145.953673 1037078 cuda_dnn.cc:8579] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
E0000 00:00:1763822145.958787 1037078 cuda_blas.cc:1407] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
W0000 00:00:1763822145.975507 1037078 computation_placer.cc:177] computation placer already registered. Please check linkage and avoid linking the same target more than once.
W0000 00:00:1763822145.975702 1037078 computation_placer.cc:177] computation placer already registered. Please check linkage and avoid linking the same target more than once.
W0000 00:00:1763822145.975707 1037078 computation_placer.cc:177] computation placer already registered. Please check linkage and avoid linking the same target more than once.
W0000 00:00:1763822145.975712 1037078 computation_placer.cc:177] computation placer already registered. Please check linkage and avoid linking the same target more than once.
2025-11-22 15:35:45.980265: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 AVX512F AVX512_VNNI AVX512_BF16 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
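
The oneDNN notice above names its own off switch; in R the variable must be set before reticulate loads TensorFlow for the first time. A minimal sketch of doing so from a session (an .Renviron entry would work equally well):

    # Disable oneDNN custom ops for bit-reproducible TensorFlow results;
    # this must happen before TensorFlow is initialized via reticulate.
    Sys.setenv(TF_ENABLE_ONEDNN_OPTS = "0")
    library(tensorflow)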
Saving _problems/test_customtraining_torch-6.R
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz
11490434/11490434 [==============================] - 1s 0us/step
Saving _problems/test_data_handler_torch-78.R
2025-11-22 15:36:04.437852: E external/local_xla/xla/stream_executor/cuda/cuda_platform.cc:51] failed call to cuInit: INTERNAL: CUDA error: Failed call to cuInit: UNKNOWN ERROR (303)
Epoch 1/2
15/15 [==============================] - 2s 34ms/step - loss: 2.1942 - val_loss: 2.1377
Epoch 2/2
15/15 [==============================] - 0s 15ms/step - loss: 2.0505 - val_loss: 1.9994
Epoch 1/2
15/15 [==============================] - 2s 35ms/step - loss: 2.6533 - val_loss: 2.6393
Epoch 2/2
15/15 [==============================] - 0s 12ms/step - loss: 2.6365 - val_loss: 2.6227
Epoch 1/2
15/15 [==============================] - 2s 30ms/step - loss: 2.8998 - val_loss: 1.6201
Epoch 2/2
15/15 [==============================] - 0s 12ms/step - loss: 2.6616 - val_loss: 1.5446
Epoch 1/2
15/15 [==============================] - 2s 30ms/step - loss: 4.0062 - val_loss: 3.4240
Epoch 2/2
15/15 [==============================] - 0s 11ms/step - loss: 3.9103 - val_loss: 3.3606
Epoch 1/3
3/3 [==============================] - 1s 143ms/step - loss: 8.7055 - val_loss: 7.2396
Epoch 2/3
3/3 [==============================] - 0s 44ms/step - loss: 8.6326 - val_loss: 7.1833
Epoch 3/3
3/3 [==============================] - 0s 38ms/step - loss: 8.5566 - val_loss: 7.1282
Epoch 1/2
15/15 [==============================] - 2s 39ms/step - loss: 2.6282 - val_loss: 2.6186
Epoch 2/2
15/15 [==============================] - 0s 10ms/step - loss: 2.6124 - val_loss: 2.6029
Epoch 1/2
15/15 [==============================] - 2s 39ms/step - loss: 1248.0040 - val_loss: 1370.1050
Epoch 2/2
15/15 [==============================] - 0s 14ms/step - loss: 960.7450 - val_loss: 1055.7158
Epoch 1/2
15/15 [==============================] - 2s 31ms/step - loss: 3026.8003 - val_loss: 321.5525
Epoch 2/2
15/15 [==============================] - 0s 14ms/step - loss: 2679.4148 - val_loss: 292.7626
Epoch 1/2
15/15 [==============================] - 2s 43ms/step - loss: 2.2516 - val_loss: 2.2750
Epoch 2/2
15/15 [==============================] - 0s 14ms/step - loss: 2.2132 - val_loss: 2.2328
Saving _problems/test_deepregression_torch-10.R
Saving _problems/test_deepregression_torch-117.R
Saving _problems/test_deepregression_torch-158.R
Saving _problems/test_deepregression_torch-190.R
Saving _problems/test_deepregression_torch-229.R
Fitting member 1 ...
Epoch 1/10
32/32 [==============================] - 1s 6ms/step - loss: 2.3303
Epoch 2/10
32/32 [==============================] - 0s 6ms/step - loss: 2.2977
Epoch 3/10
32/32 [==============================] - 0s 6ms/step - loss: 2.2650
Epoch 4/10
32/32 [==============================] - 0s 5ms/step - loss: 2.2325
Epoch 5/10
32/32 [==============================] - 0s 6ms/step - loss: 2.2002
Epoch 6/10
32/32 [==============================] - 0s 6ms/step - loss: 2.1681
Epoch 7/10
32/32 [==============================] - 0s 5ms/step - loss: 2.1359
Epoch 8/10
32/32 [==============================] - 0s 5ms/step - loss: 2.1036
Epoch 9/10
32/32 [==============================] - 0s 5ms/step - loss: 2.0717
Epoch 10/10
32/32 [==============================] - 0s 4ms/step - loss: 2.0400
Done in 2.809544 secs
Fitting member 2 ...
Epoch 1/10
32/32 [==============================] - 0s 6ms/step - loss: 2.3312
Epoch 2/10
32/32 [==============================] - 0s 7ms/step - loss: 2.2785
Epoch 3/10
32/32 [==============================] - 0s 6ms/step - loss: 2.2334
Epoch 4/10
32/32 [==============================] - 0s 6ms/step - loss: 2.1937
Epoch 5/10
32/32 [==============================] - 0s 8ms/step - loss: 2.1597
Epoch 6/10
32/32 [==============================] - 0s 6ms/step - loss: 2.1276
Epoch 7/10
32/32 [==============================] - 0s 7ms/step - loss: 2.0961
Epoch 8/10
32/32 [==============================] - 0s 6ms/step - loss: 2.0644
Epoch 9/10
32/32 [==============================] - 0s 5ms/step - loss: 2.0336
Epoch 10/10
32/32 [==============================] - 0s 7ms/step - loss: 2.0026
Done in 2.270218 secs
Fitting member 3 ...
Epoch 1/10
32/32 [==============================] - 0s 6ms/step - loss: 39.2828
Epoch 2/10
32/32 [==============================] - 0s 6ms/step - loss: 27.2884
Epoch 3/10
32/32 [==============================] - 0s 6ms/step - loss: 21.7021
Epoch 4/10
32/32 [==============================] - 0s 6ms/step - loss: 18.2036
Epoch 5/10
32/32 [==============================] - 0s 6ms/step - loss: 15.8282
Epoch 6/10
32/32 [==============================] - 0s 5ms/step - loss: 14.0666
Epoch 7/10
32/32 [==============================] - 0s 5ms/step - loss: 12.6950
Epoch 8/10
32/32 [==============================] - 0s 5ms/step - loss: 11.5684
Epoch 9/10
32/32 [==============================] - 0s 5ms/step - loss: 10.6440
Epoch 10/10
32/32 [==============================] - 0s 5ms/step - loss: 9.8495
Done in 2.001302 secs
Fitting member 4 ...
Epoch 1/10
32/32 [==============================] - 0s 6ms/step - loss: 2.9588
Epoch 2/10
32/32 [==============================] - 0s 4ms/step - loss: 2.9011
Epoch 3/10
32/32 [==============================] - 0s 4ms/step - loss: 2.8534
Epoch 4/10
32/32 [==============================] - 0s 4ms/step - loss: 2.8062
Epoch 5/10
32/32 [==============================] - 0s 3ms/step - loss: 2.7623
Epoch 6/10
32/32 [==============================] - 0s 3ms/step - loss: 2.7193
Epoch 7/10
32/32 [==============================] - 0s 4ms/step - loss: 2.6774
Epoch 8/10
32/32 [==============================] - 0s 3ms/step - loss: 2.6383
Epoch 9/10
32/32 [==============================] - 0s 4ms/step - loss: 2.6007
Epoch 10/10
32/32 [==============================] - 0s 4ms/step - loss: 2.5630
Done in 1.409164 secs
Fitting member 5 ...
Epoch 1/10
32/32 [==============================] - 0s 5ms/step - loss: 139.0890
Epoch 2/10
32/32 [==============================] - 0s 5ms/step - loss: 95.6168
Epoch 3/10
32/32 [==============================] - 0s 4ms/step - loss: 74.6476
Epoch 4/10
32/32 [==============================] - 0s 4ms/step - loss: 61.8040
Epoch 5/10
32/32 [==============================] - 0s 3ms/step - loss: 53.1899
Epoch 6/10
32/32 [==============================] - 0s 4ms/step - loss: 46.9254
Epoch 7/10
32/32 [==============================] - 0s 3ms/step - loss: 42.0744
Epoch 8/10
32/32 [==============================] - 0s 5ms/step - loss: 38.1152
Epoch 9/10
32/32 [==============================] - 0s 5ms/step - loss: 34.8808
Epoch 10/10
32/32 [==============================] - 0s 3ms/step - loss: 32.1088
Done in 2.763182 secs
Epoch 1/2
3/3 [==============================] - 2s 284ms/step - loss: 2.3038 - val_loss: 2.2154
Epoch 2/2
3/3 [==============================] - 0s 66ms/step - loss: 2.3004 - val_loss: 2.2128
Epoch 1/2
3/3 [==============================] - 0s 105ms/step - loss: 47.0024 - val_loss: 27.1291
Epoch 2/2
3/3 [==============================] - 0s 35ms/step - loss: 46.6172 - val_loss: 26.8568
Saving _problems/test_ensemble_torch-17.R
Saving _problems/test_ensemble_torch-63.R
Fitting normal
Fitting bernoulli
Fitting bernoulli_prob
WARNING:tensorflow:5 out of the last 13 calls to <function Model.make_test_function.<locals>.test_function at 0x7f12b51defc0> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has reduce_retracing=True option that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details.
Fitting beta
WARNING:tensorflow:5 out of the last 11 calls to <function Model.make_test_function.<locals>.test_function at 0x7f12b4faede0> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has reduce_retracing=True option that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details.
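
Both retracing warnings point to remedy (1) from their own message: a compiled test function is apparently being re-created for each fitted family. A minimal sketch of the recommended pattern, using `tensorflow::tf_function()` (illustrative only, not the package's actual code):

    library(tensorflow)
    # Trace once, outside the loop; re-creating the compiled function
    # inside the loop is what triggers repeated retracing.
    mse <- tf_function(function(x) tf$reduce_mean(tf$square(x)))
    for (i in 1:20) mse(tf$constant(runif(10)))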
Fitting betar
Fitting chi2
Fitting chi
Fitting exponential
Fitting gamma
Fitting gammar
Fitting gumbel
Fitting half_normal
Fitting horseshoe
Fitting inverse_gaussian
Fitting laplace
Fitting log_normal
Fitting logistic
Fitting negbinom
Fitting negbinom
Fitting pareto_ls
Fitting poisson
Fitting poisson_lograte
Saving _problems/test_families_torch-82.R
Saving _problems/test_layers_torch-6.R
Saving _problems/test_methods_torch-23.R
Epoch 1/2
29/29 [==============================] - 2s 20ms/step - loss: 10.6607 - val_loss: 7.6350
Epoch 2/2
29/29 [==============================] - 0s 8ms/step - loss: 9.5296 - val_loss: 6.8647
Epoch 1/10
29/29 [==============================] - 4s 38ms/step - loss: 7.8712 - val_loss: 8.8751
Epoch 2/10
29/29 [==============================] - 1s 21ms/step - loss: 7.4694 - val_loss: 8.4332
Epoch 3/10
29/29 [==============================] - 1s 19ms/step - loss: 7.1170 - val_loss: 8.0236
Epoch 4/10
29/29 [==============================] - 1s 22ms/step - loss: 6.7865 - val_loss: 7.6687
Epoch 5/10
29/29 [==============================] - 1s 20ms/step - loss: 6.4899 - val_loss: 7.3231
Epoch 6/10
29/29 [==============================] - 1s 18ms/step - loss: 6.2000 - val_loss: 6.9819
Epoch 7/10
29/29 [==============================] - 1s 20ms/step - loss: 5.9123 - val_loss: 6.6514
Epoch 8/10
29/29 [==============================] - 1s 20ms/step - loss: 5.6402 - val_loss: 6.3442
Epoch 9/10
29/29 [==============================] - 1s 18ms/step - loss: 5.3794 - val_loss: 6.0578
Epoch 10/10
29/29 [==============================] - 1s 20ms/step - loss: 5.1438 - val_loss: 5.7953
Epoch 1/10
29/29 [==============================] - 4s 42ms/step - loss: 1.4848 - val_loss: 1.4755
Epoch 2/10
29/29 [==============================] - 1s 20ms/step - loss: 1.4679 - val_loss: 1.4596
Epoch 3/10
29/29 [==============================] - 1s 26ms/step - loss: 1.4531 - val_loss: 1.4458
Epoch 4/10
29/29 [==============================] - 1s 23ms/step - loss: 1.4401 - val_loss: 1.4336
Epoch 5/10
29/29 [==============================] - 1s 23ms/step - loss: 1.4287 - val_loss: 1.4230
Epoch 6/10
29/29 [==============================] - 1s 22ms/step - loss: 1.4187 - val_loss: 1.4137
Epoch 7/10
29/29 [==============================] - 1s 19ms/step - loss: 1.4101 - val_loss: 1.4059
Epoch 8/10
29/29 [==============================] - 1s 20ms/step - loss: 1.4030 - val_loss: 1.3995
Epoch 9/10
29/29 [==============================] - 1s 20ms/step - loss: 1.3973 - val_loss: 1.3948
Epoch 10/10
29/29 [==============================] - 0s 17ms/step - loss: 1.3933 - val_loss: 1.3916
Epoch 1/10
29/29 [==============================] - 4s 49ms/step - loss: 1.1842 - val_loss: 2.1275
Epoch 2/10
29/29 [==============================] - 1s 20ms/step - loss: 1.1645 - val_loss: 2.0574
Epoch 3/10
29/29 [==============================] - 1s 18ms/step - loss: 1.1441 - val_loss: 1.9952
Epoch 4/10
29/29 [==============================] - 1s 19ms/step - loss: 1.1216 - val_loss: 1.9393
Epoch 5/10
29/29 [==============================] - 0s 17ms/step - loss: 1.0955 - val_loss: 1.8846
Epoch 6/10
29/29 [==============================] - 1s 18ms/step - loss: 1.0666 - val_loss: 1.8390
Epoch 7/10
29/29 [==============================] - 1s 19ms/step - loss: 1.0385 - val_loss: 1.7931
Epoch 8/10
29/29 [==============================] - 1s 19ms/step - loss: 1.0098 - val_loss: 1.7559
Epoch 9/10
29/29 [==============================] - 1s 18ms/step - loss: 0.9810 - val_loss: 1.7130
Epoch 10/10
29/29 [==============================] - 1s 18ms/step - loss: 0.9513 - val_loss: 1.6782
2025-11-22 15:41:08.725657: E tensorflow/core/util/util.cc:131] oneDNN supports DT_INT64 only on platforms with AVX-512. Falling back to the default Eigen-based implementation if present.
Model: "model_43"
________________________________________________________________________________
Layer (type) Output Shape Para Connected to Trainable
m #
================================================================================
input_node_x1_x2_ [(None, 2)] 0 [] Y
n_trees_2_n_layer
s_3_tree_depth_5_
_1 (InputLayer)
input__Intercept_ [(None, 1)] 0 [] Y
_1 (InputLayer)
node_2 (NODE) (None, 3) 1754 ['input_node_x1_x2 Y
_n_trees_2_n_layer
s_3_tree_depth_5__
1[0][0]']
1_1 (Dense) (None, 3) 3 ['input__Intercept Y
__1[0][0]']
add_77 (Add) (None, 3) 0 ['node_2[0][0]', Y
'1_1[0][0]']
distribution_lamb ((None, 3), 0 ['add_77[0][0]'] Y
da_43 (Distributi (None, 3))
onLambda)
================================================================================
Total params: 1757 (6.86 KB)
Trainable params: 793 (3.10 KB)
Non-trainable params: 964 (3.77 KB)
________________________________________________________________________________
Model formulas:
---------------
loc :
~node(x1, x2, n_trees = 2, n_layers = 3, tree_depth = 5)
<environment: 0x564b65b97cc0>
Fitting model with 1 orthogonalization(s) ...
Fitting model with 2 orthogonalization(s) ...
Fitting model with 3 orthogonalization(s) ...
Fitting model with 4 orthogonalization(s) ...
Fitting model with 5 orthogonalization(s) ...
Saving _problems/test_reproducibility_torch-30.R
Saving _problems/test_subnetwork_init_torch-20.R
Fitting Fold 1 ...
Done in 2.045503 secs
Fitting Fold 2 ...
Done in 0.6135597 secs
Epoch 1/2
2/2 [==============================] - 0s 11ms/step - loss: 22.4672
Epoch 2/2
2/2 [==============================] - 0s 22ms/step - loss: 20.5671
Fitting Fold 1 ...
Done in 2.37188 secs
Fitting Fold 2 ...
Done in 0.4814572 secs
Epoch 1/2
2/2 [==============================] - 0s 27ms/step - loss: 22.4672
Epoch 2/2
2/2 [==============================] - 0s 13ms/step - loss: 20.5671
[ FAIL 14 | WARN 0 | SKIP 1 | PASS 680 ]
══ Skipped tests (1) ═══════════════════════════════════════════════════════════
• empty test (1):
══ Failed tests ════════════════════════════════════════════════════════════════
── Error ('test_customtraining_torch.R:6:3'): Use multiple optimizers torch ────
<std::runtime_error/C++Error/error/condition>
Error in `(function (size, options, memory_format) { .Call(`_torch_cpp_torch_namespace_empty_size_IntArrayRef`, size, options, memory_format) })(size = list(50, 1), options = list(dtype = NULL, layout = NULL, device = NULL, requires_grad = FALSE), memory_format = NULL)`: Lantern is not loaded. Please use `install_torch()` to install additional dependencies.
Backtrace:
▆
1. └─torch::nn_linear(1, 50) at test_customtraining_torch.R:6:3
2. └─Module$new(...)
3. └─torch (local) initialize(...)
4. ├─torch::nn_parameter(torch_empty(out_features, in_features))
5. │ └─torch:::is_torch_tensor(x)
6. └─torch::torch_empty(out_features, in_features)
7. ├─base::do.call(.torch_empty, args)
8. └─torch (local) `<fn>`(options = `<named list>`, size = `<list>`)
9. └─torch:::call_c_function(...)
10. └─torch:::do_call(f, args)
11. ├─base::do.call(fun, args)
12. └─torch (local) `<fn>`(size = `<list>`, options = `<named list>`, memory_format = NULL)
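
The remaining 13 failures repeat this pattern: every backtrace bottoms out in the same "Lantern is not loaded" error. Two hedged sketches of the fix, assuming testthat edition 3 (one installs the backend in the check environment, as the error message itself suggests; the other makes the *_torch test files skip cleanly instead of erroring):

    # In the CI/check setup, before running R CMD check:
    if (!torch::torch_is_installed()) torch::install_torch()

    # Or at the top of each *_torch test file:
    testthat::skip_if_not(torch::torch_is_installed(),
                          "torch backend (Lantern) not installed")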
── Error ('test_data_handler_torch.R:75:3'): properties of dataset torch ───────
<std::runtime_error/C++Error/error/condition>
Error in `(function (size, options, memory_format) { .Call(`_torch_cpp_torch_namespace_empty_size_IntArrayRef`, size, options, memory_format) })(size = list(1L, 1L), options = list(dtype = NULL, layout = NULL, device = NULL, requires_grad = FALSE), memory_format = NULL)`: Lantern is not loaded. Please use `install_torch()` to install additional dependencies.
Backtrace:
▆
1. └─deepregression::deepregression(...) at test_data_handler_torch.R:75:3
2. └─base::lapply(...)
3. └─deepregression (local) FUN(X[[i]], ...)
4. └─subnetwork_builder[[i]](...)
5. └─base::lapply(...)
6. └─deepregression (local) FUN(X[[i]], ...)
7. └─pp_lay[[i]]$layer()
8. ├─base::do.call(layer_class, layer_args)
9. └─deepregression (local) `<fn>`(...)
10. └─torch (local) layer_module(...)
11. └─Module$new(...)
12. └─deepregression (local) initialize(...)
13. ├─torch::nn_parameter(torch_empty(out_features, in_features))
14. │ └─torch:::is_torch_tensor(x)
15. └─torch::torch_empty(out_features, in_features)
16. ├─base::do.call(.torch_empty, args)
17. └─torch (local) `<fn>`(options = `<named list>`, size = `<list>`)
18. └─torch:::call_c_function(...)
19. └─torch:::do_call(f, args)
20. ├─base::do.call(fun, args)
21. └─torch (local) `<fn>`(size = `<list>`, options = `<named list>`, memory_format = NULL)
── Error ('test_deepregression_torch.R:6:5'): Simple additive model ────────────
<std::runtime_error/C++Error/error/condition>
Error in `(function (size, options, memory_format) { .Call(`_torch_cpp_torch_namespace_empty_size_IntArrayRef`, size, options, memory_format) })(size = list(2, 1), options = list(dtype = NULL, layout = NULL, device = NULL, requires_grad = FALSE), memory_format = NULL)`: Lantern is not loaded. Please use `install_torch()` to install additional dependencies.
Backtrace:
▆
1. └─deepregression::deepregression(...) at test_deepregression_torch.R:21:5
2. └─base::lapply(...)
3. └─deepregression (local) FUN(X[[i]], ...)
4. └─subnetwork_builder[[i]](...)
5. └─base::lapply(...)
6. └─deepregression (local) FUN(X[[i]], ...)
7. └─pp_lay[[i]]$layer()
8. ├─torch::nn_sequential(...) at test_deepregression_torch.R:6:5
9. │ └─Module$new(...)
10. │ └─torch (local) initialize(...)
11. │ └─rlang::list2(...)
12. └─torch::nn_linear(in_features = i, out_features = 2, bias = F)
13. └─Module$new(...)
14. └─torch (local) initialize(...)
15. ├─torch::nn_parameter(torch_empty(out_features, in_features))
16. │ └─torch:::is_torch_tensor(x)
17. └─torch::torch_empty(out_features, in_features)
18. ├─base::do.call(.torch_empty, args)
19. └─torch (local) `<fn>`(options = `<named list>`, size = `<list>`)
20. └─torch:::call_c_function(...)
21. └─torch:::do_call(f, args)
22. ├─base::do.call(fun, args)
23. └─torch (local) `<fn>`(size = `<list>`, options = `<named list>`, memory_format = NULL)
── Error ('test_deepregression_torch.R:110:3'): Generalized additive model ─────
<std::runtime_error/C++Error/error/condition>
Error in `torch_tensor_cpp(data, dtype, device, requires_grad, pin_memory)`: Lantern is not loaded. Please use `install_torch()` to install additional dependencies.
Backtrace:
▆
1. └─deepregression::deepregression(...) at test_deepregression_torch.R:110:3
2. └─base::lapply(...)
3. └─deepregression (local) FUN(X[[i]], ...)
4. └─subnetwork_builder[[i]](...)
5. └─base::lapply(...)
6. └─deepregression (local) FUN(X[[i]], ...)
7. └─pp_lay[[i]]$layer()
8. ├─base::do.call(layer_class, layer_args)
9. └─deepregression (local) `<fn>`(...)
10. └─torch::torch_tensor(P)
11. └─Tensor$new(data, dtype, device, requires_grad, pin_memory)
12. └─methods$initialize(NULL, NULL, ...)
13. └─torch:::torch_tensor_cpp(...)
── Error ('test_deepregression_torch.R:151:3'): Deep generalized additive model with LSS ──
<std::runtime_error/C++Error/error/condition>
Error in `torch_tensor_cpp(data, dtype, device, requires_grad, pin_memory)`: Lantern is not loaded. Please use `install_torch()` to install additional dependencies.
Backtrace:
▆
1. └─deepregression::deepregression(...) at test_deepregression_torch.R:151:3
2. └─base::lapply(...)
3. └─deepregression (local) FUN(X[[i]], ...)
4. └─subnetwork_builder[[i]](...)
5. └─base::lapply(...)
6. └─deepregression (local) FUN(X[[i]], ...)
7. └─pp_lay[[i]]$layer()
8. ├─base::do.call(layer_class, layer_args)
9. └─deepregression (local) `<fn>`(...)
10. └─torch::torch_tensor(P)
11. └─Tensor$new(data, dtype, device, requires_grad, pin_memory)
12. └─methods$initialize(NULL, NULL, ...)
13. └─torch:::torch_tensor_cpp(...)
── Error ('test_deepregression_torch.R:181:3'): GAMs with shared weights ───────
<std::runtime_error/C++Error/error/condition>
Error in `torch_tensor_cpp(data, dtype, device, requires_grad, pin_memory)`: Lantern is not loaded. Please use `install_torch()` to install additional dependencies.
Backtrace:
▆
1. └─deepregression::deepregression(...) at test_deepregression_torch.R:181:3
2. └─base::lapply(...)
3. └─deepregression (local) FUN(X[[i]], ...)
4. └─subnetwork_builder[[i]](...)
5. ├─base::do.call(...)
6. └─deepregression (local) `<fn>`(...)
7. └─torch::torch_tensor(P)
8. └─Tensor$new(data, dtype, device, requires_grad, pin_memory)
9. └─methods$initialize(NULL, NULL, ...)
10. └─torch:::torch_tensor_cpp(...)
── Error ('test_deepregression_torch.R:220:3'): GAMs with fixed weights ────────
<std::runtime_error/C++Error/error/condition>
Error in `torch_tensor_cpp(data, dtype, device, requires_grad, pin_memory)`: Lantern is not loaded. Please use `install_torch()` to install additional dependencies.
Backtrace:
▆
1. └─deepregression::deepregression(...) at test_deepregression_torch.R:220:3
2. └─base::lapply(...)
3. └─deepregression (local) FUN(X[[i]], ...)
4. └─subnetwork_builder[[i]](...)
5. └─base::lapply(...)
6. └─deepregression (local) FUN(X[[i]], ...)
7. └─pp_lay[[i]]$layer()
8. ├─base::do.call(layer_class, layer_args)
9. └─deepregression (local) `<fn>`(...)
10. └─torch::torch_tensor(P)
11. └─Tensor$new(data, dtype, device, requires_grad, pin_memory)
12. └─methods$initialize(NULL, NULL, ...)
13. └─torch:::torch_tensor_cpp(...)
── Error ('test_ensemble_torch.R:13:3'): deep ensemble ─────────────────────────
<std::runtime_error/C++Error/error/condition>
Error in `torch_tensor_cpp(data, dtype, device, requires_grad, pin_memory)`: Lantern is not loaded. Please use `install_torch()` to install additional dependencies.
Backtrace:
▆
1. └─deepregression::deepregression(...) at test_ensemble_torch.R:13:3
2. └─base::lapply(...)
3. └─deepregression (local) FUN(X[[i]], ...)
4. └─subnetwork_builder[[i]](...)
5. └─base::lapply(...)
6. └─deepregression (local) FUN(X[[i]], ...)
7. └─pp_lay[[i]]$layer()
8. ├─base::do.call(layer_class, layer_args)
9. └─deepregression (local) `<fn>`(...)
10. └─torch::torch_tensor(P)
11. └─Tensor$new(data, dtype, device, requires_grad, pin_memory)
12. └─methods$initialize(NULL, NULL, ...)
13. └─torch:::torch_tensor_cpp(...)
── Error ('test_ensemble_torch.R:55:3'): reinitializing weights ────────────────
<std::runtime_error/C++Error/error/condition>
Error in `torch_tensor_cpp(data, dtype, device, requires_grad, pin_memory)`: Lantern is not loaded. Please use `install_torch()` to install additional dependencies.
Backtrace:
▆
1. └─deepregression::deepregression(...) at test_ensemble_torch.R:55:3
2. └─base::lapply(...)
3. └─deepregression (local) FUN(X[[i]], ...)
4. └─subnetwork_builder[[i]](...)
5. └─base::lapply(...)
6. └─deepregression (local) FUN(X[[i]], ...)
7. └─pp_lay[[i]]$layer()
8. ├─base::do.call(layer_class, layer_args)
9. └─deepregression (local) `<fn>`(...)
10. └─torch::torch_tensor(P)
11. └─Tensor$new(data, dtype, device, requires_grad, pin_memory)
12. └─methods$initialize(NULL, NULL, ...)
13. └─torch:::torch_tensor_cpp(...)
── Error ('test_families_torch.R:76:7'): torch families can be fitted ──────────
<std::runtime_error/C++Error/error/condition>
Error in `(function (size, options, memory_format) { .Call(`_torch_cpp_torch_namespace_empty_size_IntArrayRef`, size, options, memory_format) })(size = list(1L, 1L), options = list(dtype = NULL, layout = NULL, device = NULL, requires_grad = FALSE), memory_format = NULL)`: Lantern is not loaded. Please use `install_torch()` to install additional dependencies.
Backtrace:
▆
1. └─deepregression::deepregression(...) at test_families_torch.R:76:7
2. └─base::lapply(...)
3. └─deepregression (local) FUN(X[[i]], ...)
4. └─subnetwork_builder[[i]](...)
5. └─base::lapply(...)
6. └─deepregression (local) FUN(X[[i]], ...)
7. └─pp_lay[[i]]$layer()
8. ├─base::do.call(layer_class, layer_args)
9. └─deepregression (local) `<fn>`(...)
10. └─torch (local) layer_module(...)
11. └─Module$new(...)
12. └─deepregression (local) initialize(...)
13. ├─torch::nn_parameter(torch_empty(out_features, in_features))
14. │ └─torch:::is_torch_tensor(x)
15. └─torch::torch_empty(out_features, in_features)
16. ├─base::do.call(.torch_empty, args)
17. └─torch (local) `<fn>`(options = `<named list>`, size = `<list>`)
18. └─torch:::call_c_function(...)
19. └─torch:::do_call(f, args)
20. ├─base::do.call(fun, args)
21. └─torch (local) `<fn>`(size = `<list>`, options = `<named list>`, memory_format = NULL)
── Error ('test_layers_torch.R:6:3'): lasso layers ─────────────────────────────
<std::runtime_error/C++Error/error/condition>
Error in `cpp_torch_manual_seed(as.character(seed))`: Lantern is not loaded. Please use `install_torch()` to install additional dependencies.
Backtrace:
▆
1. └─torch::torch_manual_seed(42) at test_layers_torch.R:6:3
2. └─torch:::cpp_torch_manual_seed(as.character(seed))
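The 'lasso layers' failure is the most direct evidence that nothing deepregression-specific is at fault: it fails on a bare torch::torch_manual_seed(42) call, only two frames deep into torch itself. The same error should be reproducible in an interactive session on this machine, roughly as:

    > torch::torch_manual_seed(42)
    Error in `cpp_torch_manual_seed(as.character(seed))`: Lantern is not loaded.
    Please use `install_torch()` to install additional dependencies.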
── Error ('test_methods_torch.R:18:3'): all methods ────────────────────────────
<std::runtime_error/C++Error/error/condition>
Error in `(function (size, options, memory_format) { .Call(`_torch_cpp_torch_namespace_empty_size_IntArrayRef`, size, options, memory_format) })(size = list(1L, 1L), options = list(dtype = NULL, layout = NULL, device = NULL, requires_grad = FALSE), memory_format = NULL)`: Lantern is not loaded. Please use `install_torch()` to install additional dependencies.
Backtrace:
▆
1. └─deepregression::deepregression(...) at test_methods_torch.R:18:3
2. └─base::lapply(...)
3. └─deepregression (local) FUN(X[[i]], ...)
4. └─subnetwork_builder[[i]](...)
5. └─base::lapply(...)
6. └─deepregression (local) FUN(X[[i]], ...)
7. └─pp_lay[[i]]$layer()
8. ├─base::do.call(layer_class, layer_args)
9. └─deepregression (local) `<fn>`(...)
10. └─torch (local) layer_module(...)
11. └─Module$new(...)
12. └─deepregression (local) initialize(...)
13. ├─torch::nn_parameter(torch_empty(out_features, in_features))
14. │ └─torch:::is_torch_tensor(x)
15. └─torch::torch_empty(out_features, in_features)
16. ├─base::do.call(.torch_empty, args)
17. └─torch (local) `<fn>`(options = `<named list>`, size = `<list>`)
18. └─torch:::call_c_function(...)
19. └─torch:::do_call(f, args)
20. ├─base::do.call(fun, args)
21. └─torch (local) `<fn>`(size = `<list>`, options = `<named list>`, memory_format = NULL)
── Error ('test_reproducibility_torch.R:21:17'): reproducibility ───────────────
<std::runtime_error/C++Error/error/condition>
Error in `(function (size, options, memory_format) { .Call(`_torch_cpp_torch_namespace_empty_size_IntArrayRef`, size, options, memory_format) })(size = list(64, 1), options = list(dtype = NULL, layout = NULL, device = NULL, requires_grad = FALSE), memory_format = NULL)`: Lantern is not loaded. Please use `install_torch()` to install additional dependencies.
Backtrace:
▆
1. └─deepregression::deepregression(...) at test_reproducibility_torch.R:33:3
2. └─base::lapply(...)
3. └─deepregression (local) FUN(X[[i]], ...)
4. └─subnetwork_builder[[i]](...)
5. └─base::lapply(...)
6. └─deepregression (local) FUN(X[[i]], ...)
7. └─pp_lay[[i]]$layer()
8. ├─torch::nn_sequential(...) at test_reproducibility_torch.R:21:17
9. │ └─Module$new(...)
10. │ └─torch (local) initialize(...)
11. │ └─rlang::list2(...)
12. └─torch::nn_linear(in_features = 1, out_features = 64, bias = F)
13. └─Module$new(...)
14. └─torch (local) initialize(...)
15. ├─torch::nn_parameter(torch_empty(out_features, in_features))
16. │ └─torch:::is_torch_tensor(x)
17. └─torch::torch_empty(out_features, in_features)
18. ├─base::do.call(.torch_empty, args)
19. └─torch (local) `<fn>`(options = `<named list>`, size = `<list>`)
20. └─torch:::call_c_function(...)
21. └─torch:::do_call(f, args)
22. ├─base::do.call(fun, args)
23. └─torch (local) `<fn>`(size = `<list>`, options = `<named list>`, memory_format = NULL)
── Error ('test_subnetwork_init_torch.R:15:33'): subnetwork_init ───────────────
<std::runtime_error/C++Error/error/condition>
Error in `(function (size, options, memory_format) { .Call(`_torch_cpp_torch_namespace_empty_size_IntArrayRef`, size, options, memory_format) })(size = list(5, 1), options = list(dtype = NULL, layout = NULL, device = NULL, requires_grad = FALSE), memory_format = NULL)`: Lantern is not loaded. Please use `install_torch()` to install additional dependencies.
Backtrace:
▆
1. └─deepregression::subnetwork_init_torch(list(pp)) at test_subnetwork_init_torch.R:38:3
2. └─base::lapply(...)
3. └─deepregression (local) FUN(X[[i]], ...)
4. └─pp_lay[[i]]$layer()
5. ├─torch::nn_sequential(...) at test_subnetwork_init_torch.R:15:33
6. │ └─Module$new(...)
7. │ └─torch (local) initialize(...)
8. │ └─rlang::list2(...)
9. └─torch::nn_linear(in_features = 1, out_features = 5)
10. └─Module$new(...)
11. └─torch (local) initialize(...)
12. ├─torch::nn_parameter(torch_empty(out_features, in_features))
13. │ └─torch:::is_torch_tensor(x)
14. └─torch::torch_empty(out_features, in_features)
15. ├─base::do.call(.torch_empty, args)
16. └─torch (local) `<fn>`(options = `<named list>`, size = `<list>`)
17. └─torch:::call_c_function(...)
18. └─torch:::do_call(f, args)
19. ├─base::do.call(fun, args)
20. └─torch (local) `<fn>`(size = `<list>`, options = `<named list>`, memory_format = NULL)
[ FAIL 14 | WARN 0 | SKIP 1 | PASS 680 ]
Error:
! Test failures.
Execution halted
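All 14 failures above share a single root cause: the torch R package itself is installed, but its native backend (libtorch plus the Lantern bindings) was never downloaded on the check machine, so the first call that reaches the C++ layer aborts with "Lantern is not loaded". A minimal sketch of how the backend could be provisioned before the tests run, assuming the check machine has network access (torch_is_installed() and install_torch() are both exported by the torch package):

    # One-time download of libtorch and the Lantern bindings into torch's
    # cache directory; a no-op if the backend is already present.
    if (!torch::torch_is_installed()) {
      torch::install_torch()
    }

After that one-time download, re-running the checks should exercise the *_torch.R test files against a working backend.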
- checking PDF version of manual ... [7s/9s] OK
- checking HTML version of manual ... [4s/6s] OK
- checking for non-standard things in the check directory ... OK
- DONE
Status: 1 ERROR
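Independently of provisioning the backend, the torch test files could skip rather than error when Lantern is absent, so that machines without the native libraries report these tests as SKIP instead of ERROR. A hypothetical guard for the top of each test_*_torch.R file (a sketch, not the package's current code; the message text is illustrative):

    # Skip every test in this file when the torch backend is unavailable.
    testthat::skip_if_not(
      requireNamespace("torch", quietly = TRUE) && torch::torch_is_installed(),
      message = "torch backend (Lantern) is not installed"
    )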