- using R Under development (unstable) (2023-11-28 r85645)
- using platform: x86_64-pc-linux-gnu
- R was compiled by
gcc-13 (Debian 13.2.0-5) 13.2.0
GNU Fortran (Debian 13.2.0-5) 13.2.0
- running under: Debian GNU/Linux trixie/sid
- using session charset: UTF-8
- checking for file ‘spaMM/DESCRIPTION’ ... OK
- checking extension type ... Package
- this is package ‘spaMM’ version ‘4.4.0’
- package encoding: UTF-8
- checking package namespace information ... OK
- checking package dependencies ... OK
- checking if this is a source package ... OK
- checking if there is a namespace ... OK
- checking for executable files ... OK
- checking for hidden files and directories ... OK
- checking for portable file names ... OK
- checking for sufficient/correct file permissions ... OK
- checking whether package ‘spaMM’ can be installed ... OK
See the install log for details.
- used C++ compiler: ‘g++-13 (Debian 13.2.0-5) 13.2.0’
- checking package directory ... OK
- checking for future file timestamps ... OK
- checking DESCRIPTION meta-information ... OK
- checking top-level files ... OK
- checking for left-over files ... OK
- checking index information ... OK
- checking package subdirectories ... OK
- checking R files for non-ASCII characters ... OK
- checking R files for syntax errors ... OK
- checking whether the package can be loaded ... [2s/4s] OK
- checking whether the package can be loaded with stated dependencies ... [2s/3s] OK
- checking whether the package can be unloaded cleanly ... [2s/4s] OK
- checking whether the namespace can be loaded with stated dependencies ... [2s/4s] OK
- checking whether the namespace can be unloaded cleanly ... [2s/4s] OK
- checking loading without being on the library search path ... [2s/3s] OK
- checking whether startup messages can be suppressed ... [2s/4s] OK
- checking use of S3 registration ... OK
- checking dependencies in R code ... OK
- checking S3 generic/method consistency ... OK
- checking replacement functions ... OK
- checking foreign function calls ... OK
- checking R code for possible problems ... [163s/250s] OK
- checking Rd files ... [2s/3s] NOTE
checkRd: (-1) NEWS.Rd:114: Lost braces; missing escapes or markup?
114 | { * } When fitted by "inner" estimation methods [in particular, by HLfit() or HLCor() rather than fitme()], the fit could stop in cases where the observed information matrix differs from the expected one (e.g, for the Gamma(log) response family). \cr
| ^
checkRd: (-1) NEWS.Rd:115: Lost braces; missing escapes or markup?
115 | { * } Using fitme() to fit such models worked for all response families, but a profile confint() procedure for fixed-effect coefficients on the resulting fit object could then stop.\cr \cr
| ^
checkRd: (-1) NEWS.Rd:145: Lost braces; missing escapes or markup?
145 | \item predict(., {large newdata,> 2000 rows}, binding= < some name >, ...) retained only the first 2000 element of any attribute for variances or intervals.
| ^
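The three NEWS.Rd hits share one cause: literal braces in plain Rd text are silently dropped when the page is rendered. A minimal sketch of a fix, assuming the hand-rolled "{ * }" bullets are to be kept rather than rewritten as \itemize{} markup, is to escape the braces:
    \{ * \} When fitted by "inner" estimation methods [...] \cr
    \item predict(., \{large newdata, > 2000 rows\}, binding= < some name >, ...) retained [...]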
checkRd: (-1) IMRF.Rd:29: Lost braces; missing escapes or markup?
29 | { * } for grids given by \code{model=<inla.spde2 object>}, the non-zero weights are the barycentric coordinates of the focal point in the enclosing triangle from the mesh triangulation (points from outside the mesh would have zero weights, so the predicted effect \bold{Ab=0});\cr { * }for regular grids (NULL \code{model}), the weights are computed as <Wendland function>(<scaled Euclidean distances between focal point and vertices>).
| ^
checkRd: (-1) IMRF.Rd:29: Lost braces; missing escapes or markup?
29 | { * } for grids given by \code{model=<inla.spde2 object>}, the non-zero weights are the barycentric coordinates of the focal point in the enclosing triangle from the mesh triangulation (points from outside the mesh would have zero weights, so the predicted effect \bold{Ab=0});\cr { * }for regular grids (NULL \code{model}), the weights are computed as <Wendland function>(<scaled Euclidean distances between focal point and vertices>).
| ^
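Both IMRF.Rd hits are the same "{ * }" bullet pattern on one source line; escaping the braces (or switching to \itemize{}) clears them:
    \{ * \} for grids given by \code{model=<inla.spde2 object>}, [...] \cr \{ * \} for regular grids (NULL \code{model}), [...]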
checkRd: (-1) Leuca.Rd:32: Lost braces
32 | \seealso{\code{\link{MaternCorr}} and code{\link{composite-ranef}} for examples using these data.}
| ^
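This one is a plain typo rather than an escaping question: "code{...}" is missing its backslash, so checkRd sees bare braces. Corrected line:
    \seealso{\code{\link{MaternCorr}} and \code{\link{composite-ranef}} for examples using these data.}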
checkRd: (-1) autoregressive.Rd:24: Lost braces
24 | In \bold{CAR} models, the covariance matrix of random effects \bold{u} can be described as \eqn{\lambda}(\bold{I}\eqn{-\rho} \bold{W}\eqn{)^{-1}} where \bold{W} is the (symmetric) adjacency matrix. \code{HLCor} uses the spectral decomposition of the adjacency matrix, written as bold{W=VDV'} where \bold{D} is a diagonal matrix of eigenvalues \eqn{d_i}. The covariance of \bold{V'u} is
| ^
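Likewise a missing backslash: "bold{W=VDV'}" is not parsed as a macro. Corrected fragment:
    [...] written as \bold{W=VDV'} where \bold{D} is a diagonal matrix of eigenvalues \eqn{d_i}.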
checkRd: (-1) corrMatrix.Rd:7: Lost braces; missing escapes or markup?
7 | \code{corrMatrix} is an argument of \code{HLCor}, of class \code{dist} or \code{matrix}, with can be used if the model formula contains a term of the form \code{corrMatrix(1|<...>)}. It describes a correlation matrix, possibly as a \code{dist} object. A covariance matrix can actually be passed through this argument, but then it must be a full matrix, not a \code{dist} object. The way the rows and columns of the matrix are matched to the rows of the \code{data} depends on the nature of the grouping term {<...>}.
| ^
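The flagged "{<...>}" is plain text, so its braces vanish on rendering. Reusing the \code markup already applied to corrMatrix(1|<...>) in the same sentence seems the natural repair:
    [...] depends on the nature of the grouping term \code{<...>}.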
checkRd: (-1) predict.Rd:18: Lost braces; missing escapes or markup?
18 | { * }\code{predict} can be used for prediction of the response variable by its expected value obtained as (the inverse link transformation of) the linear predictor (\eqn{\eta}) and more generally for terms of the form \bold{X}_n\eqn{\beta}+\bold{Z}_n\bold{L}\bold{v}, for new design matrices \bold{X}_n and \bold{Z}_n.\cr
| ^
checkRd: (-1) predict.Rd:19: Lost braces; missing escapes or markup?
19 | { * }Various components of prediction variances and predictions intervals can also be computed using \code{predict}.
| ^
checkRd: (-1) predict.Rd:21: Lost braces; missing escapes or markup?
21 | { * }\code{get_predCov_var_fix} extracts a block of a prediction covariance matrix. It was conceived for the specific purpose of computing the spatial prediction covariances between two \dQuote{new} sets of geographic locations, without computing the full covariance matrix for both the new locations and the original (fitted) locations. When one of the two sets of new locations is fixed while the other varies, some expensive computations can be performed once for all sets of new locations, and be provided as the \code{fix_X_ZAC.object} argument. The \code{preprocess_fix_corr} extractor is designed to compute this argument.
| ^
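The three predict.Rd hits are again the "{ * }" bullet pattern; the same escape (or an \itemize{} list) applies, e.g.:
    \{ * \}\code{predict} can be used for prediction of the response variable [...]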
checkRd: (-1) resid.model.Rd:14: Lost braces
14 | dispersion parameter{ }=\code{ exp(}\bold{X}\eqn{\beta}\code{+offset}),\cr
| ^
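Here the lost braces are an empty pair, "parameter{ }=", presumably intended to render as a space. A plain space avoids the note:
    dispersion parameter = \code{exp(}\bold{X}\eqn{\beta}\code{+offset}),\cr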
checkRd: (-1) spaMM-conventions.Rd:11: Lost braces
11 | The \\bold{default likelihood target} for dispersion parameters is restricted likelihood (REML estimation) for \code{corrHLfit} and (marginal) likelihood (ML estimation) for \code{fitme}.
| ^
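The doubled backslash makes "\\bold" render as the literal text "\bold", leaving its braces orphaned. A single backslash fixes it:
    The \bold{default likelihood target} for dispersion parameters is restricted likelihood (REML estimation) for \code{corrHLfit} [...]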
checkRd: (-1) spaMM.Rd:62: Lost braces; missing escapes or markup?
62 | If one wishes to fit uncorrelated group-specific random-effects with distinct variances for different groups or for different response variables, three syntaxes are thus possible. The most general, suitable for fitting several variances (see {help("GxE")} for an example), is to fit a (0 + <factor>| <RHS>) random-coefficient term with correlation(s) fixed to 0. Alternatively, one can define \bold{numeric} (0|1) variables for each group (as \code{as.numeric(<boolean for given group membership>)}), and use each of them in a \verb{0 + <numeric>} LHS (so that the variance of each such random effect is zero for response not belonging to the given group). See \code{\link{lev2bool}} for various ways of specifying such indicator variables for several levels.
| ^
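"{help("GxE")}" is bare braces around plain text; wrapping the call in \code both keeps it visible and renders it as code:
    [...] (see \code{help("GxE")} for an example) [...]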
checkRd: (-1) update.Rd:21: Lost braces; missing escapes or markup?
21 | However, in some cases, dynamic evaluation of the response variable may be helpful. For example, for bootstrapping hurdle models, the zero-truncated response may be specified as I({count[presence>0] <- NA; count}) (where both the zero-truncated \code{count} and binary \code{presence} variables are both updated by the bootstrap simulation). In that case the names of the two variables to be updated is provided by setting (say)\cr
| ^
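The braced compound expression sits in running text, so its braces are dropped. Wrapping it in \code should clear the note (balanced braces are kept inside \code; escaping them as \{ ... \} is the belt-and-braces variant):
    [...] may be specified as \code{I({count[presence>0] <- NA; count})} (where both the zero-truncated \code{count} and binary \code{presence} variables are [...])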
- checking Rd metadata ... OK
- checking Rd line widths ... OK
- checking Rd cross-references ... OK
- checking for missing documentation entries ... OK
- checking for code/documentation mismatches ... OK
- checking Rd \usage sections ... OK
- checking Rd contents ... OK
- checking for unstated dependencies in examples ... OK
- checking contents of ‘data’ directory ... OK
- checking data for non-ASCII characters ... [2s/3s] OK
- checking data for ASCII and uncompressed saves ... OK
- checking R/sysdata.rda ... OK
- checking line endings in C/C++/Fortran sources/headers ... OK
- checking line endings in Makefiles ... OK
- checking compilation flags in Makevars ... OK
- checking for GNU extensions in Makefiles ... OK
- checking for portable use of $(BLAS_LIBS) and $(LAPACK_LIBS) ... OK
- checking use of PKG_*FLAGS in Makefiles ... OK
- checking use of SHLIB_OPENMP_*FLAGS in Makefiles ... OK
- checking pragmas in C/C++ headers and code ... OK
- checking compilation flags used ... OK
- checking compiled code ... OK
- checking examples ... [40s/74s] OK
Examples with CPU (user + system) or elapsed time > 5s
             user system elapsed
fitmv       6.963  0.009  13.105
register_cF 6.028  0.020  10.378
- checking for unstated dependencies in ‘tests’ ... OK
- checking tests ... [1s/1s] OK
Running ‘test-all.R’ [0s/1s]
- checking PDF version of manual ... [13s/28s] OK
- checking HTML version of manual ... [9s/19s] OK
- checking for non-standard things in the check directory ... OK
- DONE
Status: 1 NOTE