- using R Under development (unstable) (2026-01-20 r89309)
- using platform: x86_64-pc-linux-gnu
- R was compiled by
gcc-15 (Debian 15.2.0-12) 15.2.0
GNU Fortran (Debian 15.2.0-12) 15.2.0
- running under: Debian GNU/Linux forky/sid
- using session charset: UTF-8
- checking for file ‘deepregression/DESCRIPTION’ ... OK
- this is package ‘deepregression’ version ‘2.3.2’
- package encoding: UTF-8
- checking CRAN incoming feasibility ... [1s/1s] OK
- checking package namespace information ... OK
- checking package dependencies ... OK
- checking if this is a source package ... OK
- checking if there is a namespace ... OK
- checking for executable files ... OK
- checking for hidden files and directories ... OK
- checking for portable file names ... OK
- checking for sufficient/correct file permissions ... OK
- checking whether package ‘deepregression’ can be installed ... OK
See the install log for details.
- checking package directory ... OK
- checking for future file timestamps ... OK
- checking DESCRIPTION meta-information ... OK
- checking top-level files ... OK
- checking for left-over files ... OK
- checking index information ... OK
- checking package subdirectories ... OK
- checking code files for non-ASCII characters ... OK
- checking R files for syntax errors ... OK
- checking whether the package can be loaded ... [3s/5s] OK
- checking whether the package can be loaded with stated dependencies ... [3s/4s] OK
- checking whether the package can be unloaded cleanly ... [3s/4s] OK
- checking whether the namespace can be loaded with stated dependencies ... [3s/4s] OK
- checking whether the namespace can be unloaded cleanly ... [3s/5s] OK
- checking loading without being on the library search path ... [3s/3s] OK
- checking whether startup messages can be suppressed ... [3s/4s] OK
- checking use of S3 registration ... OK
- checking dependencies in R code ... OK
- checking S3 generic/method consistency ... OK
- checking replacement functions ... OK
- checking foreign function calls ... OK
- checking R code for possible problems ... [24s/30s] OK
- checking Rd files ... [1s/1s] OK
- checking Rd metadata ... OK
- checking Rd line widths ... OK
- checking Rd cross-references ... OK
- checking for missing documentation entries ... OK
- checking for code/documentation mismatches ... OK
- checking Rd \usage sections ... OK
- checking Rd contents ... OK
- checking for unstated dependencies in examples ... OK
- checking examples ... [4s/6s] OK
- checking for unstated dependencies in ‘tests’ ... OK
- checking tests ... [334s/468s] ERROR
Running ‘testthat.R’ [334s/468s]
Running the tests in ‘tests/testthat.R’ failed.
Complete output:
> library(testthat)
> library(deepregression)
Loading required package: tensorflow
Loading required package: tfprobability
Loading required package: keras
The keras package is deprecated. Use the keras3 package instead.
>
> if (reticulate::py_module_available("tensorflow") &
+ reticulate::py_module_available("keras") &
+ .Platform$OS.type != "windows"){
+ test_check("deepregression")
+ }
Downloading pygments (1.2MiB)
Downloading tensorflow (615.1MiB)
Downloading grpcio (6.3MiB)
Downloading tensorboard (5.2MiB)
Downloading tensorflow-probability (6.7MiB)
Downloading tf-keras (1.6MiB)
Downloading ml-dtypes (4.8MiB)
Downloading h5py (4.9MiB)
Downloading keras (1.4MiB)
Downloading numpy (17.1MiB)
Downloaded pygments
Downloaded keras
Downloaded ml-dtypes
Downloaded tf-keras
Downloaded h5py
Downloaded grpcio
Downloaded tensorboard
Downloaded numpy
Downloaded tensorflow-probability
Downloaded tensorflow
Installed 43 packages in 527ms
2026-01-21 15:41:18.075659: I tensorflow/core/util/port.cc:153] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2026-01-21 15:41:18.077075: I external/local_xla/xla/tsl/cuda/cudart_stub.cc:32] Could not find cuda drivers on your machine, GPU will not be used.
2026-01-21 15:41:18.082463: I external/local_xla/xla/tsl/cuda/cudart_stub.cc:32] Could not find cuda drivers on your machine, GPU will not be used.
2026-01-21 15:41:18.094431: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:467] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
E0000 00:00:1769006478.115870 56620 cuda_dnn.cc:8579] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
E0000 00:00:1769006478.124578 56620 cuda_blas.cc:1407] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
W0000 00:00:1769006478.159533 56620 computation_placer.cc:177] computation placer already registered. Please check linkage and avoid linking the same target more than once.
W0000 00:00:1769006478.159605 56620 computation_placer.cc:177] computation placer already registered. Please check linkage and avoid linking the same target more than once.
W0000 00:00:1769006478.159609 56620 computation_placer.cc:177] computation placer already registered. Please check linkage and avoid linking the same target more than once.
W0000 00:00:1769006478.159612 56620 computation_placer.cc:177] computation placer already registered. Please check linkage and avoid linking the same target more than once.
2026-01-21 15:41:18.167862: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 AVX512F AVX512_VNNI AVX512_BF16 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
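The oneDNN notice above names the switch for disabling those custom operations. A minimal sketch of silencing it (plain Python; only the variable name `TF_ENABLE_ONEDNN_OPTS` comes from the log itself, the surrounding code is illustrative):

```python
# Hypothetical reproduction: disable oneDNN custom ops (the source of the
# rounding-order differences the notice describes). The variable must be
# set before TensorFlow is imported for the first time.
import os

os.environ["TF_ENABLE_ONEDNN_OPTS"] = "0"
print(os.environ["TF_ENABLE_ONEDNN_OPTS"])  # -> 0
```

From an R test session the equivalent would be `Sys.setenv(TF_ENABLE_ONEDNN_OPTS = "0")` before tensorflow is initialised via reticulate.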
Saving _problems/test_customtraining_torch-6.R
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz
11490434/11490434 [==============================] - 1s 0us/step
Saving _problems/test_data_handler_torch-78.R
2026-01-21 15:41:37.832176: E external/local_xla/xla/stream_executor/cuda/cuda_platform.cc:51] failed call to cuInit: INTERNAL: CUDA error: Failed call to cuInit: UNKNOWN ERROR (303)
Epoch 1/2
15/15 [==============================] - 3s 42ms/step - loss: 2.1949 - val_loss: 2.0942
Epoch 2/2
15/15 [==============================] - 0s 14ms/step - loss: 2.0484 - val_loss: 1.9518
Epoch 1/2
15/15 [==============================] - 2s 31ms/step - loss: 2.6533 - val_loss: 2.6393
Epoch 2/2
15/15 [==============================] - 0s 12ms/step - loss: 2.6365 - val_loss: 2.6227
Epoch 1/2
15/15 [==============================] - 2s 41ms/step - loss: 2.8998 - val_loss: 1.6201
Epoch 2/2
15/15 [==============================] - 0s 12ms/step - loss: 2.6616 - val_loss: 1.5446
Epoch 1/2
15/15 [==============================] - 2s 34ms/step - loss: 4.0062 - val_loss: 3.4240
Epoch 2/2
15/15 [==============================] - 0s 11ms/step - loss: 3.9103 - val_loss: 3.3606
Epoch 1/3
3/3 [==============================] - 1s 131ms/step - loss: 8.7055 - val_loss: 7.2396
Epoch 2/3
3/3 [==============================] - 0s 40ms/step - loss: 8.6326 - val_loss: 7.1833
Epoch 3/3
3/3 [==============================] - 0s 50ms/step - loss: 8.5566 - val_loss: 7.1282
Epoch 1/2
15/15 [==============================] - 2s 60ms/step - loss: 2.6282 - val_loss: 2.6186
Epoch 2/2
15/15 [==============================] - 0s 12ms/step - loss: 2.6124 - val_loss: 2.6029
Epoch 1/2
15/15 [==============================] - 1s 36ms/step - loss: 1248.0040 - val_loss: 1370.1050
Epoch 2/2
15/15 [==============================] - 0s 13ms/step - loss: 960.7450 - val_loss: 1055.7158
Epoch 1/2
15/15 [==============================] - 2s 39ms/step - loss: 3026.8003 - val_loss: 321.5525
Epoch 2/2
15/15 [==============================] - 0s 12ms/step - loss: 2679.4148 - val_loss: 292.7626
Epoch 1/2
15/15 [==============================] - 1s 29ms/step - loss: 2.2516 - val_loss: 2.2750
Epoch 2/2
15/15 [==============================] - 0s 9ms/step - loss: 2.2132 - val_loss: 2.2328
Saving _problems/test_deepregression_torch-10.R
Saving _problems/test_deepregression_torch-117.R
Saving _problems/test_deepregression_torch-158.R
Saving _problems/test_deepregression_torch-190.R
Saving _problems/test_deepregression_torch-229.R
Fitting member 1 ...Epoch 1/10
32/32 [==============================] - 1s 4ms/step - loss: 2.3303
Epoch 2/10
32/32 [==============================] - 0s 3ms/step - loss: 2.2977
Epoch 3/10
32/32 [==============================] - 0s 3ms/step - loss: 2.2650
Epoch 4/10
32/32 [==============================] - 0s 4ms/step - loss: 2.2325
Epoch 5/10
32/32 [==============================] - 0s 4ms/step - loss: 2.2002
Epoch 6/10
32/32 [==============================] - 0s 3ms/step - loss: 2.1681
Epoch 7/10
32/32 [==============================] - 0s 4ms/step - loss: 2.1359
Epoch 8/10
32/32 [==============================] - 0s 3ms/step - loss: 2.1036
Epoch 9/10
32/32 [==============================] - 0s 4ms/step - loss: 2.0717
Epoch 10/10
32/32 [==============================] - 0s 3ms/step - loss: 2.0400
Done in 2.082653 secs
Fitting member 2 ...Epoch 1/10
32/32 [==============================] - 0s 2ms/step - loss: 2.3312
Epoch 2/10
32/32 [==============================] - 0s 3ms/step - loss: 2.2785
Epoch 3/10
32/32 [==============================] - 0s 4ms/step - loss: 2.2334
Epoch 4/10
32/32 [==============================] - 0s 4ms/step - loss: 2.1937
Epoch 5/10
32/32 [==============================] - 0s 3ms/step - loss: 2.1597
Epoch 6/10
32/32 [==============================] - 0s 4ms/step - loss: 2.1276
Epoch 7/10
32/32 [==============================] - 0s 4ms/step - loss: 2.0961
Epoch 8/10
32/32 [==============================] - 0s 4ms/step - loss: 2.0644
Epoch 9/10
32/32 [==============================] - 0s 3ms/step - loss: 2.0336
Epoch 10/10
32/32 [==============================] - 0s 3ms/step - loss: 2.0026
Done in 1.276942 secs
Fitting member 3 ...Epoch 1/10
32/32 [==============================] - 0s 4ms/step - loss: 39.2828
Epoch 2/10
32/32 [==============================] - 0s 5ms/step - loss: 27.2884
Epoch 3/10
32/32 [==============================] - 0s 3ms/step - loss: 21.7021
Epoch 4/10
32/32 [==============================] - 0s 5ms/step - loss: 18.2036
Epoch 5/10
32/32 [==============================] - 0s 4ms/step - loss: 15.8282
Epoch 6/10
32/32 [==============================] - 0s 4ms/step - loss: 14.0666
Epoch 7/10
32/32 [==============================] - 0s 4ms/step - loss: 12.6950
Epoch 8/10
32/32 [==============================] - 0s 4ms/step - loss: 11.5684
Epoch 9/10
32/32 [==============================] - 0s 4ms/step - loss: 10.6440
Epoch 10/10
32/32 [==============================] - 0s 4ms/step - loss: 9.8495
Done in 1.540925 secs
Fitting member 4 ...Epoch 1/10
32/32 [==============================] - 0s 5ms/step - loss: 2.9588
Epoch 2/10
32/32 [==============================] - 0s 4ms/step - loss: 2.9011
Epoch 3/10
32/32 [==============================] - 0s 4ms/step - loss: 2.8534
Epoch 4/10
32/32 [==============================] - 0s 3ms/step - loss: 2.8062
Epoch 5/10
32/32 [==============================] - 0s 4ms/step - loss: 2.7623
Epoch 6/10
32/32 [==============================] - 0s 4ms/step - loss: 2.7193
Epoch 7/10
32/32 [==============================] - 0s 5ms/step - loss: 2.6774
Epoch 8/10
32/32 [==============================] - 0s 3ms/step - loss: 2.6383
Epoch 9/10
32/32 [==============================] - 0s 4ms/step - loss: 2.6007
Epoch 10/10
32/32 [==============================] - 0s 4ms/step - loss: 2.5630
Done in 1.538869 secs
Fitting member 5 ...Epoch 1/10
32/32 [==============================] - 0s 4ms/step - loss: 139.0890
Epoch 2/10
32/32 [==============================] - 0s 4ms/step - loss: 95.6168
Epoch 3/10
32/32 [==============================] - 0s 3ms/step - loss: 74.6476
Epoch 4/10
32/32 [==============================] - 0s 3ms/step - loss: 61.8040
Epoch 5/10
32/32 [==============================] - 0s 4ms/step - loss: 53.1899
Epoch 6/10
32/32 [==============================] - 0s 4ms/step - loss: 46.9254
Epoch 7/10
32/32 [==============================] - 0s 4ms/step - loss: 42.0744
Epoch 8/10
32/32 [==============================] - 0s 4ms/step - loss: 38.1152
Epoch 9/10
32/32 [==============================] - 0s 4ms/step - loss: 34.8808
Epoch 10/10
32/32 [==============================] - 0s 5ms/step - loss: 32.1088
Done in 2.711755 secs
Epoch 1/2
3/3 [==============================] - 1s 212ms/step - loss: 2.3038 - val_loss: 2.2154
Epoch 2/2
3/3 [==============================] - 0s 49ms/step - loss: 2.3004 - val_loss: 2.2128
Epoch 1/2
3/3 [==============================] - 0s 110ms/step - loss: 47.0024 - val_loss: 27.1291
Epoch 2/2
3/3 [==============================] - 0s 40ms/step - loss: 46.6172 - val_loss: 26.8568
Saving _problems/test_ensemble_torch-17.R
Saving _problems/test_ensemble_torch-63.R
Fitting normal
Fitting bernoulli
Fitting bernoulli_prob
WARNING:tensorflow:5 out of the last 13 calls to <function Model.make_test_function.<locals>.test_function at 0x7fec2ceb31a0> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has reduce_retracing=True option that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details.
Fitting beta
WARNING:tensorflow:5 out of the last 11 calls to <function Model.make_test_function.<locals>.test_function at 0x7fec2cd074c0> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has reduce_retracing=True option that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details.
Fitting betar
Fitting chi2
Fitting chi
Fitting exponential
Fitting gamma
Fitting gammar
Fitting gumbel
Fitting half_normal
Fitting horseshoe
Fitting inverse_gaussian
Fitting laplace
Fitting log_normal
Fitting logistic
Fitting negbinom
Fitting negbinom
Fitting pareto_ls
Fitting poisson
Fitting poisson_lograte
Saving _problems/test_families_torch-82.R
Saving _problems/test_layers_torch-6.R
Saving _problems/test_methods_torch-23.R
Epoch 1/2
29/29 [==============================] - 1s 15ms/step - loss: 10.6607 - val_loss: 7.6350
Epoch 2/2
29/29 [==============================] - 0s 7ms/step - loss: 9.5296 - val_loss: 6.8647
Epoch 1/10
29/29 [==============================] - 4s 42ms/step - loss: 7.8712 - val_loss: 8.8751
Epoch 2/10
29/29 [==============================] - 1s 20ms/step - loss: 7.4694 - val_loss: 8.4332
Epoch 3/10
29/29 [==============================] - 1s 19ms/step - loss: 7.1170 - val_loss: 8.0236
Epoch 4/10
29/29 [==============================] - 0s 15ms/step - loss: 6.7865 - val_loss: 7.6687
Epoch 5/10
29/29 [==============================] - 0s 17ms/step - loss: 6.4899 - val_loss: 7.3231
Epoch 6/10
29/29 [==============================] - 0s 16ms/step - loss: 6.2000 - val_loss: 6.9819
Epoch 7/10
29/29 [==============================] - 0s 16ms/step - loss: 5.9123 - val_loss: 6.6514
Epoch 8/10
29/29 [==============================] - 0s 15ms/step - loss: 5.6402 - val_loss: 6.3442
Epoch 9/10
29/29 [==============================] - 0s 16ms/step - loss: 5.3794 - val_loss: 6.0578
Epoch 10/10
29/29 [==============================] - 1s 19ms/step - loss: 5.1438 - val_loss: 5.7953
Epoch 1/10
29/29 [==============================] - 3s 31ms/step - loss: 1.4848 - val_loss: 1.4755
Epoch 2/10
29/29 [==============================] - 1s 19ms/step - loss: 1.4679 - val_loss: 1.4596
Epoch 3/10
29/29 [==============================] - 1s 20ms/step - loss: 1.4531 - val_loss: 1.4458
Epoch 4/10
29/29 [==============================] - 0s 15ms/step - loss: 1.4401 - val_loss: 1.4336
Epoch 5/10
29/29 [==============================] - 1s 20ms/step - loss: 1.4287 - val_loss: 1.4230
Epoch 6/10
29/29 [==============================] - 1s 18ms/step - loss: 1.4187 - val_loss: 1.4137
Epoch 7/10
29/29 [==============================] - 1s 18ms/step - loss: 1.4101 - val_loss: 1.4059
Epoch 8/10
29/29 [==============================] - 0s 16ms/step - loss: 1.4030 - val_loss: 1.3995
Epoch 9/10
29/29 [==============================] - 1s 18ms/step - loss: 1.3973 - val_loss: 1.3948
Epoch 10/10
29/29 [==============================] - 1s 19ms/step - loss: 1.3933 - val_loss: 1.3916
Epoch 1/10
29/29 [==============================] - 4s 33ms/step - loss: 1.1842 - val_loss: 2.1275
Epoch 2/10
29/29 [==============================] - 1s 18ms/step - loss: 1.1645 - val_loss: 2.0574
Epoch 3/10
29/29 [==============================] - 1s 20ms/step - loss: 1.1441 - val_loss: 1.9952
Epoch 4/10
29/29 [==============================] - 0s 17ms/step - loss: 1.1216 - val_loss: 1.9393
Epoch 5/10
29/29 [==============================] - 0s 16ms/step - loss: 1.0955 - val_loss: 1.8846
Epoch 6/10
29/29 [==============================] - 0s 15ms/step - loss: 1.0666 - val_loss: 1.8390
Epoch 7/10
29/29 [==============================] - 1s 18ms/step - loss: 1.0385 - val_loss: 1.7931
Epoch 8/10
29/29 [==============================] - 0s 14ms/step - loss: 1.0098 - val_loss: 1.7559
Epoch 9/10
29/29 [==============================] - 1s 18ms/step - loss: 0.9810 - val_loss: 1.7130
Epoch 10/10
29/29 [==============================] - 0s 16ms/step - loss: 0.9513 - val_loss: 1.6782
2026-01-21 15:46:30.575306: E tensorflow/core/util/util.cc:131] oneDNN supports DT_INT64 only on platforms with AVX-512. Falling back to the default Eigen-based implementation if present.
Model: "model_43"
________________________________________________________________________________
 Layer (type)                    Output Shape            Param #  Connected to                       Trainable
================================================================================
 input_node_x1_x2_n_trees_2_n_layers_3_tree_depth_5__1 (InputLayer)
                                 [(None, 2)]             0        []                                 Y
 input__Intercept__1 (InputLayer)
                                 [(None, 1)]             0        []                                 Y
 node_2 (NODE)                   (None, 3)               1754     ['input_node_x1_x2_n_trees_2_n_layers_3_tree_depth_5__1[0][0]']
                                                                                                     Y
 1_1 (Dense)                     (None, 3)               3        ['input__Intercept__1[0][0]']      Y
 add_77 (Add)                    (None, 3)               0        ['node_2[0][0]', '1_1[0][0]']      Y
 distribution_lambda_43 (DistributionLambda)
                                 ((None, 3), (None, 3))  0        ['add_77[0][0]']                   Y
================================================================================
Total params: 1757 (6.86 KB)
Trainable params: 793 (3.10 KB)
Non-trainable params: 964 (3.77 KB)
________________________________________________________________________________
Model formulas:
---------------
loc :
~node(x1, x2, n_trees = 2, n_layers = 3, tree_depth = 5)
<environment: 0x55bfae29a438>
Fitting model with 1 orthogonalization(s) ...
Fitting model with 2 orthogonalization(s) ...
Fitting model with 3 orthogonalization(s) ...
Fitting model with 4 orthogonalization(s) ...
Fitting model with 5 orthogonalization(s) ...
Saving _problems/test_reproducibility_torch-30.R
Saving _problems/test_subnetwork_init_torch-20.R
Fitting Fold 1 ...
Done in 2.318894 secs
Fitting Fold 2 ...
Done in 0.5824044 secs
Epoch 1/2
1/2 [==============>...............] - ETA: 0s - loss: 22.0463
2/2 [==============================] - 0s 19ms/step - loss: 22.4672
Epoch 2/2
1/2 [==============>...............] - ETA: 0s - loss: 21.8272
2/2 [==============================] - 0s 17ms/step - loss: 20.5671
Fitting Fold 1 ...
Done in 1.537049 secs
Fitting Fold 2 ...
Done in 0.5929973 secs
Epoch 1/2
1/2 [==============>...............] - ETA: 0s - loss: 22.0463
2/2 [==============================] - 0s 12ms/step - loss: 22.4672
Epoch 2/2
1/2 [==============>...............] - ETA: 0s - loss: 21.8272
2/2 [==============================] - 0s 16ms/step - loss: 20.5671
[ FAIL 14 | WARN 0 | SKIP 1 | PASS 680 ]
══ Skipped tests (1) ═══════════════════════════════════════════════════════════
• empty test (1):
══ Failed tests ════════════════════════════════════════════════════════════════
── Error ('test_customtraining_torch.R:6:3'): Use multiple optimizers torch ────
<std::runtime_error/C++Error/error/condition>
Error: Lantern is not loaded. Please use `install_torch()` to install additional dependencies.
Backtrace:
▆
1. └─torch::nn_linear(1, 50) at test_customtraining_torch.R:6:3
2. └─Module$new(...)
3. └─torch (local) initialize(...)
4. ├─torch::nn_parameter(torch_empty(out_features, in_features))
5. │ └─torch:::is_torch_tensor(x)
6. └─torch::torch_empty(out_features, in_features)
7. ├─base::do.call(.torch_empty, args)
8. └─torch (local) `<fn>`(options = `<named list>`, size = `<list>`)
9. └─torch:::call_c_function(...)
10. └─torch:::do_call(f, args)
11. ├─base::do.call(fun, args)
12. └─torch (local) `<fn>`(size = `<list>`, options = `<named list>`, memory_format = NULL)
── Error ('test_data_handler_torch.R:75:3'): properties of dataset torch ───────
<std::runtime_error/C++Error/error/condition>
Error: Lantern is not loaded. Please use `install_torch()` to install additional dependencies.
Backtrace:
▆
1. └─deepregression::deepregression(...) at test_data_handler_torch.R:75:3
2. └─base::lapply(...)
3. └─deepregression (local) FUN(X[[i]], ...)
4. └─subnetwork_builder[[i]](...)
5. └─base::lapply(...)
6. └─deepregression (local) FUN(X[[i]], ...)
7. └─pp_lay[[i]]$layer()
8. ├─base::do.call(layer_class, layer_args)
9. └─deepregression (local) `<fn>`(...)
10. └─torch (local) layer_module(...)
11. └─Module$new(...)
12. └─deepregression (local) initialize(...)
13. ├─torch::nn_parameter(torch_empty(out_features, in_features))
14. │ └─torch:::is_torch_tensor(x)
15. └─torch::torch_empty(out_features, in_features)
16. ├─base::do.call(.torch_empty, args)
17. └─torch (local) `<fn>`(options = `<named list>`, size = `<list>`)
18. └─torch:::call_c_function(...)
19. └─torch:::do_call(f, args)
20. ├─base::do.call(fun, args)
21. └─torch (local) `<fn>`(size = `<list>`, options = `<named list>`, memory_format = NULL)
── Error ('test_deepregression_torch.R:6:5'): Simple additive model ────────────
<std::runtime_error/C++Error/error/condition>
Error: Lantern is not loaded. Please use `install_torch()` to install additional dependencies.
Backtrace:
▆
1. └─deepregression::deepregression(...) at test_deepregression_torch.R:21:5
2. └─base::lapply(...)
3. └─deepregression (local) FUN(X[[i]], ...)
4. └─subnetwork_builder[[i]](...)
5. └─base::lapply(...)
6. └─deepregression (local) FUN(X[[i]], ...)
7. └─pp_lay[[i]]$layer()
8. ├─torch::nn_sequential(...) at test_deepregression_torch.R:6:5
9. │ └─Module$new(...)
10. │ └─torch (local) initialize(...)
11. │ └─rlang::list2(...)
12. └─torch::nn_linear(in_features = i, out_features = 2, bias = F)
13. └─Module$new(...)
14. └─torch (local) initialize(...)
15. ├─torch::nn_parameter(torch_empty(out_features, in_features))
16. │ └─torch:::is_torch_tensor(x)
17. └─torch::torch_empty(out_features, in_features)
18. ├─base::do.call(.torch_empty, args)
19. └─torch (local) `<fn>`(options = `<named list>`, size = `<list>`)
20. └─torch:::call_c_function(...)
21. └─torch:::do_call(f, args)
22. ├─base::do.call(fun, args)
23. └─torch (local) `<fn>`(size = `<list>`, options = `<named list>`, memory_format = NULL)
── Error ('test_deepregression_torch.R:110:3'): Generalized additive model ─────
<std::runtime_error/C++Error/error/condition>
Error: Lantern is not loaded. Please use `install_torch()` to install additional dependencies.
Backtrace:
▆
1. └─deepregression::deepregression(...) at test_deepregression_torch.R:110:3
2. └─base::lapply(...)
3. └─deepregression (local) FUN(X[[i]], ...)
4. └─subnetwork_builder[[i]](...)
5. └─base::lapply(...)
6. └─deepregression (local) FUN(X[[i]], ...)
7. └─pp_lay[[i]]$layer()
8. ├─base::do.call(layer_class, layer_args)
9. └─deepregression (local) `<fn>`(...)
10. └─torch::torch_tensor(P)
11. └─Tensor$new(data, dtype, device, requires_grad, pin_memory)
12. └─methods$initialize(NULL, NULL, ...)
13. └─torch:::torch_tensor_cpp(...)
── Error ('test_deepregression_torch.R:151:3'): Deep generalized additive model with LSS ──
<std::runtime_error/C++Error/error/condition>
Error: Lantern is not loaded. Please use `install_torch()` to install additional dependencies.
Backtrace:
▆
1. └─deepregression::deepregression(...) at test_deepregression_torch.R:151:3
2. └─base::lapply(...)
3. └─deepregression (local) FUN(X[[i]], ...)
4. └─subnetwork_builder[[i]](...)
5. └─base::lapply(...)
6. └─deepregression (local) FUN(X[[i]], ...)
7. └─pp_lay[[i]]$layer()
8. ├─base::do.call(layer_class, layer_args)
9. └─deepregression (local) `<fn>`(...)
10. └─torch::torch_tensor(P)
11. └─Tensor$new(data, dtype, device, requires_grad, pin_memory)
12. └─methods$initialize(NULL, NULL, ...)
13. └─torch:::torch_tensor_cpp(...)
── Error ('test_deepregression_torch.R:181:3'): GAMs with shared weights ───────
<std::runtime_error/C++Error/error/condition>
Error: Lantern is not loaded. Please use `install_torch()` to install additional dependencies.
Backtrace:
▆
1. └─deepregression::deepregression(...) at test_deepregression_torch.R:181:3
2. └─base::lapply(...)
3. └─deepregression (local) FUN(X[[i]], ...)
4. └─subnetwork_builder[[i]](...)
5. ├─base::do.call(...)
6. └─deepregression (local) `<fn>`(...)
7. └─torch::torch_tensor(P)
8. └─Tensor$new(data, dtype, device, requires_grad, pin_memory)
9. └─methods$initialize(NULL, NULL, ...)
10. └─torch:::torch_tensor_cpp(...)
── Error ('test_deepregression_torch.R:220:3'): GAMs with fixed weights ────────
<std::runtime_error/C++Error/error/condition>
Error: Lantern is not loaded. Please use `install_torch()` to install additional dependencies.
Backtrace:
▆
1. └─deepregression::deepregression(...) at test_deepregression_torch.R:220:3
2. └─base::lapply(...)
3. └─deepregression (local) FUN(X[[i]], ...)
4. └─subnetwork_builder[[i]](...)
5. └─base::lapply(...)
6. └─deepregression (local) FUN(X[[i]], ...)
7. └─pp_lay[[i]]$layer()
8. ├─base::do.call(layer_class, layer_args)
9. └─deepregression (local) `<fn>`(...)
10. └─torch::torch_tensor(P)
11. └─Tensor$new(data, dtype, device, requires_grad, pin_memory)
12. └─methods$initialize(NULL, NULL, ...)
13. └─torch:::torch_tensor_cpp(...)
── Error ('test_ensemble_torch.R:13:3'): deep ensemble ─────────────────────────
<std::runtime_error/C++Error/error/condition>
Error: Lantern is not loaded. Please use `install_torch()` to install additional dependencies.
Backtrace:
▆
1. └─deepregression::deepregression(...) at test_ensemble_torch.R:13:3
2. └─base::lapply(...)
3. └─deepregression (local) FUN(X[[i]], ...)
4. └─subnetwork_builder[[i]](...)
5. └─base::lapply(...)
6. └─deepregression (local) FUN(X[[i]], ...)
7. └─pp_lay[[i]]$layer()
8. ├─base::do.call(layer_class, layer_args)
9. └─deepregression (local) `<fn>`(...)
10. └─torch::torch_tensor(P)
11. └─Tensor$new(data, dtype, device, requires_grad, pin_memory)
12. └─methods$initialize(NULL, NULL, ...)
13. └─torch:::torch_tensor_cpp(...)
── Error ('test_ensemble_torch.R:55:3'): reinitializing weights ────────────────
<std::runtime_error/C++Error/error/condition>
Error: Lantern is not loaded. Please use `install_torch()` to install additional dependencies.
Backtrace:
▆
1. └─deepregression::deepregression(...) at test_ensemble_torch.R:55:3
2. └─base::lapply(...)
3. └─deepregression (local) FUN(X[[i]], ...)
4. └─subnetwork_builder[[i]](...)
5. └─base::lapply(...)
6. └─deepregression (local) FUN(X[[i]], ...)
7. └─pp_lay[[i]]$layer()
8. ├─base::do.call(layer_class, layer_args)
9. └─deepregression (local) `<fn>`(...)
10. └─torch::torch_tensor(P)
11. └─Tensor$new(data, dtype, device, requires_grad, pin_memory)
12. └─methods$initialize(NULL, NULL, ...)
13. └─torch:::torch_tensor_cpp(...)
── Error ('test_families_torch.R:76:7'): torch families can be fitted ──────────
<std::runtime_error/C++Error/error/condition>
Error: Lantern is not loaded. Please use `install_torch()` to install additional dependencies.
Backtrace:
▆
1. └─deepregression::deepregression(...) at test_families_torch.R:76:7
2. └─base::lapply(...)
3. └─deepregression (local) FUN(X[[i]], ...)
4. └─subnetwork_builder[[i]](...)
5. └─base::lapply(...)
6. └─deepregression (local) FUN(X[[i]], ...)
7. └─pp_lay[[i]]$layer()
8. ├─base::do.call(layer_class, layer_args)
9. └─deepregression (local) `<fn>`(...)
10. └─torch (local) layer_module(...)
11. └─Module$new(...)
12. └─deepregression (local) initialize(...)
13. ├─torch::nn_parameter(torch_empty(out_features, in_features))
14. │ └─torch:::is_torch_tensor(x)
15. └─torch::torch_empty(out_features, in_features)
16. ├─base::do.call(.torch_empty, args)
17. └─torch (local) `<fn>`(options = `<named list>`, size = `<list>`)
18. └─torch:::call_c_function(...)
19. └─torch:::do_call(f, args)
20. ├─base::do.call(fun, args)
21. └─torch (local) `<fn>`(size = `<list>`, options = `<named list>`, memory_format = NULL)
── Error ('test_layers_torch.R:6:3'): lasso layers ─────────────────────────────
<std::runtime_error/C++Error/error/condition>
Error: Lantern is not loaded. Please use `install_torch()` to install additional dependencies.
Backtrace:
▆
1. └─torch::torch_manual_seed(42) at test_layers_torch.R:6:3
2. └─torch:::cpp_torch_manual_seed(as.character(seed))
── Error ('test_methods_torch.R:18:3'): all methods ────────────────────────────
<std::runtime_error/C++Error/error/condition>
Error: Lantern is not loaded. Please use `install_torch()` to install additional dependencies.
Backtrace:
▆
1. └─deepregression::deepregression(...) at test_methods_torch.R:18:3
2. └─base::lapply(...)
3. └─deepregression (local) FUN(X[[i]], ...)
4. └─subnetwork_builder[[i]](...)
5. └─base::lapply(...)
6. └─deepregression (local) FUN(X[[i]], ...)
7. └─pp_lay[[i]]$layer()
8. ├─base::do.call(layer_class, layer_args)
9. └─deepregression (local) `<fn>`(...)
10. └─torch (local) layer_module(...)
11. └─Module$new(...)
12. └─deepregression (local) initialize(...)
13. ├─torch::nn_parameter(torch_empty(out_features, in_features))
14. │ └─torch:::is_torch_tensor(x)
15. └─torch::torch_empty(out_features, in_features)
16. ├─base::do.call(.torch_empty, args)
17. └─torch (local) `<fn>`(options = `<named list>`, size = `<list>`)
18. └─torch:::call_c_function(...)
19. └─torch:::do_call(f, args)
20. ├─base::do.call(fun, args)
21. └─torch (local) `<fn>`(size = `<list>`, options = `<named list>`, memory_format = NULL)
── Error ('test_reproducibility_torch.R:21:17'): reproducibility ───────────────
<std::runtime_error/C++Error/error/condition>
Error: Lantern is not loaded. Please use `install_torch()` to install additional dependencies.
Backtrace:
▆
1. └─deepregression::deepregression(...) at test_reproducibility_torch.R:33:3
2. └─base::lapply(...)
3. └─deepregression (local) FUN(X[[i]], ...)
4. └─subnetwork_builder[[i]](...)
5. └─base::lapply(...)
6. └─deepregression (local) FUN(X[[i]], ...)
7. └─pp_lay[[i]]$layer()
8. ├─torch::nn_sequential(...) at test_reproducibility_torch.R:21:17
9. │ └─Module$new(...)
10. │ └─torch (local) initialize(...)
11. │ └─rlang::list2(...)
12. └─torch::nn_linear(in_features = 1, out_features = 64, bias = F)
13. └─Module$new(...)
14. └─torch (local) initialize(...)
15. ├─torch::nn_parameter(torch_empty(out_features, in_features))
16. │ └─torch:::is_torch_tensor(x)
17. └─torch::torch_empty(out_features, in_features)
18. ├─base::do.call(.torch_empty, args)
19. └─torch (local) `<fn>`(options = `<named list>`, size = `<list>`)
20. └─torch:::call_c_function(...)
21. └─torch:::do_call(f, args)
22. ├─base::do.call(fun, args)
23. └─torch (local) `<fn>`(size = `<list>`, options = `<named list>`, memory_format = NULL)
── Error ('test_subnetwork_init_torch.R:15:33'): subnetwork_init ───────────────
<std::runtime_error/C++Error/error/condition>
Error: Lantern is not loaded. Please use `install_torch()` to install additional dependencies.
Backtrace:
▆
1. └─deepregression::subnetwork_init_torch(list(pp)) at test_subnetwork_init_torch.R:38:3
2. └─base::lapply(...)
3. └─deepregression (local) FUN(X[[i]], ...)
4. └─pp_lay[[i]]$layer()
5. ├─torch::nn_sequential(...) at test_subnetwork_init_torch.R:15:33
6. │ └─Module$new(...)
7. │ └─torch (local) initialize(...)
8. │ └─rlang::list2(...)
9. └─torch::nn_linear(in_features = 1, out_features = 5)
10. └─Module$new(...)
11. └─torch (local) initialize(...)
12. ├─torch::nn_parameter(torch_empty(out_features, in_features))
13. │ └─torch:::is_torch_tensor(x)
14. └─torch::torch_empty(out_features, in_features)
15. ├─base::do.call(.torch_empty, args)
16. └─torch (local) `<fn>`(options = `<named list>`, size = `<list>`)
17. └─torch:::call_c_function(...)
18. └─torch:::do_call(f, args)
19. ├─base::do.call(fun, args)
20. └─torch (local) `<fn>`(size = `<list>`, options = `<named list>`, memory_format = NULL)
[ FAIL 14 | WARN 0 | SKIP 1 | PASS 680 ]
Error:
! Test failures.
Execution halted
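All 14 failures above share one root cause: the `torch` R package is installed, but its native Lantern/libtorch binaries are not present on the check machine, so the first tensor allocation in each torch-backend test aborts. A minimal sketch of the usual remedy, using the documented `torch` and `testthat` APIs (`torch_is_installed()`, `install_torch()`, and `skip_if_not()` are real exports; where and whether to apply this guard in deepregression's test suite is an assumption):

```r
# One-time setup on the machine: download the Lantern/libtorch binaries.
# (Interactive sessions prompt for this automatically on first use.)
if (requireNamespace("torch", quietly = TRUE) && !torch::torch_is_installed()) {
  torch::install_torch()
}

# Inside each torch-backend test: skip cleanly instead of erroring when
# the binaries are unavailable (e.g. on CRAN or offline check machines).
testthat::skip_if_not(
  requireNamespace("torch", quietly = TRUE) && torch::torch_is_installed(),
  message = "torch backend (Lantern) not installed"
)
```

With such a guard, these 14 errors would become skips, and the check would not halt with a test-failure ERROR on machines lacking the torch runtime.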
- checking PDF version of manual ... [6s/9s] OK
- checking HTML version of manual ... [4s/7s] OK
- checking for non-standard things in the check directory ... OK
- DONE
Status: 1 ERROR