- using R Under development (unstable) (2026-03-18 r89649)
- using platform: x86_64-apple-darwin20
- R was compiled by
Apple clang version 14.0.0 (clang-1400.0.29.202)
GNU Fortran (GCC) 14.2.0
- running under: macOS Ventura 13.3.1
- using session charset: UTF-8
* current time: 2026-03-31 01:10:39 UTC
- checking for file ‘evanverse/DESCRIPTION’ ... OK
- checking extension type ... Package
- this is package ‘evanverse’ version ‘0.5.1’
- package encoding: UTF-8
- checking package namespace information ... OK
- checking package dependencies ... OK
- checking if this is a source package ... OK
- checking if there is a namespace ... OK
- checking for executable files ... OK
- checking for hidden files and directories ... OK
- checking for portable file names ... OK
- checking for sufficient/correct file permissions ... OK
- checking whether package ‘evanverse’ can be installed ... [5s/5s] OK
See the install log for details.
- checking installed package size ... OK
- checking package directory ... OK
- checking ‘build’ directory ... OK
- checking DESCRIPTION meta-information ... OK
- checking top-level files ... OK
- checking for left-over files ... OK
- checking index information ... OK
- checking package subdirectories ... OK
- checking code files for non-ASCII characters ... OK
- checking R files for syntax errors ... OK
- checking whether the package can be loaded ... [0s/0s] OK
- checking whether the package can be loaded with stated dependencies ... [0s/0s] OK
- checking whether the package can be unloaded cleanly ... [0s/0s] OK
- checking whether the namespace can be loaded with stated dependencies ... [0s/0s] OK
- checking whether the namespace can be unloaded cleanly ... [0s/0s] OK
- checking loading without being on the library search path ... [0s/1s] OK
- checking whether startup messages can be suppressed ... [0s/1s] OK
- checking dependencies in R code ... OK
- checking S3 generic/method consistency ... OK
- checking replacement functions ... OK
- checking foreign function calls ... OK
- checking R code for possible problems ... [6s/7s] OK
- checking Rd files ... [1s/1s] OK
- checking Rd metadata ... OK
- checking Rd cross-references ... OK
- checking for missing documentation entries ... OK
- checking for code/documentation mismatches ... OK
- checking Rd \usage sections ... OK
- checking Rd contents ... OK
- checking for unstated dependencies in examples ... OK
- checking contents of ‘data’ directory ... OK
- checking data for non-ASCII characters ... [0s/0s] OK
- checking LazyData ... OK
- checking data for ASCII and uncompressed saves ... OK
- checking installed files from ‘inst/doc’ ... OK
- checking files in ‘vignettes’ ... OK
- checking examples ... [6s/6s] OK
- checking for unstated dependencies in ‘tests’ ... OK
- checking tests ... [29s/35s] ERROR
Running ‘testthat.R’ [29s/34s]
Running the tests in ‘tests/testthat.R’ failed.
Complete output:
> # This file is part of the standard setup for testthat.
> # It is recommended that you do not modify it.
> #
> # Where should you do additional test configuration?
> # Learn more about the roles of various files in:
> # * https://r-pkgs.org/testing-design.html#sec-tests-files-overview
> # * https://testthat.r-lib.org/articles/special-files.html
>
> library(testthat)
> library(evanverse)
>
> test_check("evanverse")
/Volumes/Temp/tmp/RtmpO1O5Pk/file1109e4e11bf33
+-- a.R
/Volumes/Temp/tmp/RtmpO1O5Pk/file1109e530a654
! P(1000, 999) exceeds double range and will return Inf.
i Downloading human gene annotation (Ensembl 110)...
v Retrieved 5 annotated genes.
v Saved to '/Volumes/Temp/tmp/RtmpO1O5Pk/file1109e7ae09ef4.rds'
i Downloading human gene annotation (Ensembl 110)...
v Retrieved 5 annotated genes.
v Saved to '/Volumes/Temp/tmp/RtmpO1O5Pk/file1109e130cc475.rds'
i Downloading human gene annotation (Ensembl 110)...
v Retrieved 5 annotated genes.
Saving _problems/test-download-399.R
v Palette saved: '/Volumes/Temp/tmp/RtmpO1O5Pk/cp_test_69790/sequential/my_blues.json'
v Palette saved: '/Volumes/Temp/tmp/RtmpO1O5Pk/cp_json_69790/qualitative/qual_trio.json'
v Palette saved: '/Volumes/Temp/tmp/RtmpO1O5Pk/cp_ow_69790/sequential/blues.json'
v Palette saved: '/Volumes/Temp/tmp/RtmpO1O5Pk/cp_owt_69790/sequential/blues.json'
i Overwriting existing palette: "blues"
v Palette saved: '/Volumes/Temp/tmp/RtmpO1O5Pk/cp_owt_69790/sequential/blues.json'
v Palette saved: '/Volumes/Temp/tmp/RtmpO1O5Pk/rm_test_69790/sequential/to_remove.json'
v Removed "to_remove" from sequential
v Palette saved: '/Volumes/Temp/tmp/RtmpO1O5Pk/rm_auto_69790/diverging/find_me.json'
v Removed "find_me" from diverging
v Compiled 3 palettes: Sequential=1, Diverging=1, Qualitative=1
v Compiled 3 palettes: Sequential=1, Diverging=1, Qualitative=1
v Compiled 3 palettes: Sequential=1, Diverging=1, Qualitative=1
! Failed to parse JSON: '/Volumes/Temp/tmp/RtmpO1O5Pk/pal_test_69790/sequential/broken.json'
v Compiled 3 palettes: Sequential=1, Diverging=1, Qualitative=1
v CRAN mirror set to: <https://mirrors.tuna.tsinghua.edu.cn/CRAN>
v CRAN mirror set to: <https://mirrors.ustc.edu.cn/CRAN>
v CRAN mirror set to: <https://mirrors.westlake.edu.cn/CRAN>
v Bioconductor mirror set to: <https://mirrors.tuna.tsinghua.edu.cn/bioconductor>
v Bioconductor mirror set to: <https://mirrors.ustc.edu.cn/bioconductor>
v CRAN mirror set to: <https://mirrors.tuna.tsinghua.edu.cn/CRAN>
v Bioconductor mirror set to: <https://mirrors.tuna.tsinghua.edu.cn/bioconductor>
v CRAN mirror set to: <https://mirrors.tuna.tsinghua.edu.cn/CRAN>
v Bioconductor mirror set to: <https://mirrors.tuna.tsinghua.edu.cn/bioconductor>
v CRAN mirror set to: <https://mirrors.tuna.tsinghua.edu.cn/CRAN>
v Bioconductor mirror set to: <https://mirrors.tuna.tsinghua.edu.cn/bioconductor>
v CRAN mirror set to: <https://cloud.r-project.org>
v Bioconductor mirror set to: <https://bioconductor.org>
v CRAN mirror set to: <https://mirrors.tuna.tsinghua.edu.cn/CRAN>
v Installed: stats
v Installed: stats
v Installed: stats
v Installed: utils
v Installed: cli
x Power: 47.8% (very low) | Two-sample t-test
n = 30 per group, effect size = 0.500, alpha = 0.050
With only 47.8% power, the study is unlikely to detect a true effect of size
0.50.
-- Statistical Power Analysis --------------------------------------------------
-- Parameters --
Test: Two-sample t-test
n: 30 per group
Effect size: 0.5000
alpha: 0.0500
Alternative: two.sided
-- Result --
x Power (1−β): 47.79% (very low)
-- Interpretation --
With only 47.8% power, the study is unlikely to detect a true effect of size
0.50.
-- Recommendation --
i To reach 80% power, increase n from 30 to 64 per group.
v n = 64 per group (128 total) | Two-sample t-test
Target power = 80%, effect size = 0.500, alpha = 0.050
To detect an effect of size 0.50 with 80% power, recruit 64 subjects per group
(128 total).
-- Sample Size Estimation ------------------------------------------------------
-- Parameters --
Test: Two-sample t-test
Target power: 80% (0.8000)
Effect size: 0.5000
alpha: 0.0500
Alternative: two.sided
-- Result --
v n per group: 64
v n total: 128
-- Interpretation --
To detect an effect of size 0.50 with 80% power, recruit 64 subjects per group
(128 total).
-- Recommendation --
i Recruit 10–20% extra to account for dropout, missing data, or protocol violations.
t.test | p < 0.001* | A n=30, B n=30
-- Two-group Comparison --------------------------------------------------------
-- Parameters --
Test: Welch two-sample t-test
Direction: A - B
Alternative: two.sided
alpha: 0.050
Paired: FALSE
-- Result --
v p < 0.001 (significant at alpha = 0.05)
Welch Two Sample t-test
data: value by group
t = -4.7184, df = 56.741, p-value = 1.594e-05
alternative hypothesis: true difference in means between group A and group B is not equal to 0
95 percent confidence interval:
-1.4961113 -0.6045215
sample estimates:
mean in group A mean in group B
0.08245817 1.13277458
-- Descriptive statistics --
# A tibble: 2 x 7
group n mean sd median min max
<fct> <int> <dbl> <dbl> <dbl> <dbl> <dbl>
1 A 30 0.0825 0.924 0.257 -2.21 1.60
2 B 30 1.13 0.795 0.943 -0.377 2.98
-- Normality (Shapiro-Wilk) --
A: n = 30, p = 0.170
B: n = 30, p = 0.948
→ Medium samples (min n = 30). Data reasonably normal (all Shapiro p ≥ 0.01).
One-way ANOVA | p < 0.001* | A n=25, B n=25, C n=25
-- One-way Comparison ----------------------------------------------------------
-- Parameters --
Test: One-way ANOVA
alpha: 0.050
-- Omnibus Test --
v p < 0.001 (significant at alpha = 0.05)
eta_squared = 0.357, omega_squared = 0.336
Df Sum Sq Mean Sq F value Pr(>F)
group 2 30.11 15.054 19.99 1.24e-07 ***
Residuals 72 54.21 0.753
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
-- Descriptive statistics --
# A tibble: 3 x 7
group n mean sd median min max
<fct> <int> <dbl> <dbl> <dbl> <dbl> <dbl>
1 A 25 -0.289 0.774 -0.308 -1.67 1.27
2 B 25 0.661 0.955 0.848 -1.77 2.20
3 C 25 1.25 0.865 1.46 -0.537 2.56
-- Normality (Shapiro-Wilk) --
A: n = 25, p = 0.267
B: n = 25, p = 0.211
C: n = 25, p = 0.264
-> Small samples (min n = 25). Data appears normal (all Shapiro p ≥ 0.05).
-- Variance (Levene's test) --
p = 0.467 | equal variances: TRUE
-- Post-hoc (tukey) --
# A tibble: 3 x 6
group2 group1 diff lwr upr `p adj`
<chr> <chr> <dbl> <dbl> <dbl> <dbl>
1 B A 0.950 0.362 1.54 0.000687
2 C A 1.54 0.951 2.13 0.0000000721
3 C B 0.588 0.000817 1.18 0.0496
Chi-square test | p = 0.3156 | 3x2 | V = 0.152 (small)
-- Categorical Association Test ------------------------------------------------
-- Parameters --
Test: Chi-square test
Variables: treatment × response
Table size: 3x2
alpha: 0.050
-- Result --
i p = 0.3156 (not significant at alpha = 0.05)
Pearson's Chi-squared test
data: cont_table
X-squared = 2.3064, df = 2, p-value = 0.3156
-- Effect Size (Cramer's V) --
V: 0.152
Interpretation: small
-- Observed Frequencies --
No Yes
A 12 24
B 16 17
C 10 21
-- Expected Frequencies --
No Yes
A 13.68 22.32
B 12.54 20.46
C 11.78 19.22
-- Pearson Residuals --
No Yes
A -0.45 0.36
B 0.98 -0.76
C -0.52 0.41
→ |residual| > 2 indicates significant deviation from independence
-- Method Selection --
Table size: 3x2
Total N: 100
Min expected freq: 11.78
Cells with freq < 5: 0
Decision: All expected frequencies adequate: using standard chi-square test
i Found 0 significant pairs out of 6 tests.
i Found 0 significant pairs out of 6 tests.
i Found 0 significant pairs out of 6 tests.
i Found 0 significant pairs out of 6 tests.
i Found 0 significant pairs out of 6 tests.
i Found 0 significant pairs out of 3 tests.
i Found 0 significant pairs out of 6 tests.
i Found 0 significant pairs out of 6 tests.
i Found 0 significant pairs out of 6 tests.
i Found 0 significant pairs out of 6 tests.
i Found 0 significant pairs out of 6 tests.
i Found 0 significant pairs out of 10 tests.
i Found 0 significant pairs out of 6 tests.
i Found 0 significant pairs out of 6 tests.
i Found 0 significant pairs out of 6 tests.
pearson | 4 vars | 0/6 significant pairs (alpha = 0.05)
i Found 0 significant pairs out of 6 tests.
-- Correlation Analysis --------------------------------------------------------
-- Parameters --
Method: pearson
Missing obs: pairwise.complete.obs
P-adjust: none
Variables: 4
alpha: 0.050
-- Descriptive Statistics --
variable n mean sd median min max
x1 50 0.064934373 1.0686820 -0.14079713 -2.183967 2.215461
x2 50 -0.001664339 0.8130295 -0.06993174 -2.102329 2.387233
x3 50 -0.033696299 1.1076103 0.01153808 -1.995387 2.600142
x4 50 0.066717211 0.9896570 -0.02757658 -2.621345 2.246255
-- Correlation Summary --
Min: -0.136
Max: 0.262
Mean |r|: 0.143
-- Significant Pairs --
i Based on unadjusted p-values.
0 out of 6 pairs significant at alpha = 0.05
i Found 0 significant pairs out of 6 tests.
i Found 0 significant pairs out of 6 tests.
i Created directory: '/Volumes/Temp/tmp/RtmpO1O5Pk/evanverse_dest_1109e5155b8d4/nested'
[ FAIL 1 | WARN 0 | SKIP 9 | PASS 906 ]
══ Skipped tests (9) ═══════════════════════════════════════════════════════════
• On CRAN (8): 'test-download.R:94:3', 'test-download.R:190:3',
'test-download.R:200:3', 'test-download.R:341:3', 'test-download.R:425:3',
'test-pkg.R:362:3', 'test-pkg.R:373:3', 'test-pkg.R:385:3'
• {GEOquery} cannot be loaded (1): 'test-download.R:408:3'
══ Failed tests ════════════════════════════════════════════════════════════════
── Error ('test-download.R:399:3'): download_geo orchestrates helpers and returns correct list structure ──
Error: Please install GEOquery: BiocManager::install('GEOquery')
Backtrace:
▆
1. ├─base::suppressWarnings(download_geo("GSE99999", dest_dir = tmp)) at test-download.R:399:3
2. │ └─base::withCallingHandlers(...)
3. └─evanverse::download_geo("GSE99999", dest_dir = tmp)
4. └─cli::cli_abort(...)
5. └─rlang::abort(...)
[ FAIL 1 | WARN 0 | SKIP 9 | PASS 906 ]
Error:
! Test failures.
Execution halted
- checking for unstated dependencies in vignettes ... OK
- checking package vignettes ... OK
- checking re-building of vignette outputs ... [22s/23s] OK
- checking PDF version of manual ... [7s/7s] OK
- DONE
Status: 1 ERROR
- using check arguments '--no-clean-on-error '
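
The single ERROR traces to 'test-download.R:399:3', where download_geo() aborts because the Bioconductor package GEOquery is not installed on the check machine. Notably, the skip summary shows that another test ('test-download.R:408:3') already skips cleanly under the same condition, so the failing test most likely just lacks the same guard. Below is a minimal sketch of a guarded version of the test, assuming testthat edition-3 conventions; the test title is taken verbatim from the failure header, while the tmp setup and the expectation are hypothetical placeholders (the header only says the test checks the returned list structure):

```r
# tests/testthat/test-download.R -- sketch of the guarded test.
# Skip on machines where the Bioconductor package GEOquery is not
# installed, mirroring the skip already applied at test-download.R:408.
test_that("download_geo orchestrates helpers and returns correct list structure", {
  skip_if_not_installed("GEOquery")

  tmp <- tempdir()  # placeholder; the original test builds its own tmp dir
  res <- suppressWarnings(download_geo("GSE99999", dest_dir = tmp))

  # Hypothetical expectation: the failure header indicates the test
  # verifies the structure of the returned list.
  expect_type(res, "list")
})
```

Alternatively, installing GEOquery on the check machine (BiocManager::install("GEOquery")) would let the test run; either way, GEOquery should remain a conditional (Suggests) dependency, as the existing skip at line 408 implies it already is.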