- `add.distance`
- `mode_rep` and `mode_fold`
- `imp_sample_from`
- `partition_disc()`: set default value of arg `buffer` to 0 instead of `NULL`, fixes #61
- `partition_loo()`: sequence along observations instead of columns. Before, the train set was only composed of `ncol` observations. (#60)
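As an illustration (not part of the original entries), a minimal sketch of disc-based partitioning in which `buffer` is simply left at its new default of 0; the `radius` and `ndisc` argument names are taken from recent releases and may differ in older ones:

```r
# Sketch: leave-one-disc-out resampling on the bundled ecuador data.
# 'buffer' now defaults to 0, so omitting it is equivalent to buffer = 0.
library(sperrorest)
data(ecuador)
parti <- partition_disc(ecuador, coords = c("x", "y"),
                        radius = 200, ndisc = 10)
summary(parti)
```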
- `sperrorest()` runs sequentially by default again rather than in parallel.
- Handle the case in which `err_fun()` throws an error during performance calculation, for example a binary classification in which only one level of the response exists in the test data (due to spatial partitioning).
- Use `future_lapply()` from {future.apply} instead of {future}.
- `train_fun` and `test_fun` are now handled correctly, and any sub-sampling is correctly reflected in the resulting 'resampling' object.
- If a model fails in a fold, the fold's result is set to `NA` and a message is printed to the console. `sperrorest()` will continue normally and use the successful folds to calculate the repetition error. This helps when running CV with many repetitions using models which do not always converge, such as `maxnet()`, `gamm()` or `svm()`.
- The `ecuador` data set has been adjusted to avoid exact duplicates of partitions when using `partition_kmeans()`.
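A small sketch of how such k-means partitions might be generated; the `nfold`, `repetition` and `seed1` argument names are assumptions based on recent releases:

```r
# Sketch: k-means-based spatial partitioning of the ecuador data into
# 5 folds, repeated twice with a fixed seed for reproducibility.
library(sperrorest)
data(ecuador)
km <- partition_kmeans(ecuador, coords = c("x", "y"),
                       nfold = 5, repetition = 1:2, seed1 = 123)
summary(km)
```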
- Merged `parsperrorest()` into `sperrorest()`.
- `sperrorest()` now runs in parallel using all available cores.
- `runfolds()` and `runreps()` now do the heavy lifting in the background. All modes now run on the same code base; previously, each parallel mode had its own implementation.
- Available parallel modes:
  - `apply`: calls `pbmclapply()` on Unix and `pbapply()` on Windows.
  - `future`: calls `future_lapply()` with various `future` options (`multiprocess`, `multicore`, etc.).
  - `foreach`: `foreach()` with various `future` options (`multiprocess`, `multicore`, etc.). The default option is `cluster`. This is also the overall default mode for `sperrorest()`.
  - `sequential`: sequential execution using the `future` backend.
- `repetition` argument of `sperrorest()`: repetitions can now be passed as a vector; specifying a range like `repetition = 1:10` will also stay valid.
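For orientation, a hedged sketch of a basic `sperrorest()` call using the current snake_case argument names (older releases used dot.case equivalents); the formula follows the package's documentation example, and the `error_rep` element name is assumed from recent releases:

```r
# Sketch: spatial CV of a binomial GLM on the bundled ecuador landslide data.
# Repetitions are passed to the sampling function via smp_args.
library(sperrorest)
data(ecuador)
fo <- slides ~ dem + slope + hcurv + vcurv + log.carea + cslope
res <- sperrorest(formula = fo, data = ecuador, coords = c("x", "y"),
                  model_fun = glm,
                  model_args = list(family = "binomial"),
                  pred_args = list(type = "response"),
                  smp_fun = partition_kmeans,
                  smp_args = list(repetition = 1:2, nfold = 4))
summary(res$error_rep)  # repetition-level error summary
```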
- New vignettes:
  - `sperrorest::parallel-modes`, comparing the various parallel modes.
  - `sperrorest::custom-pred-and-model-functions`, explaining why and how custom-defined model and predict functions are needed for some model setups.
- The `do_try` argument has been removed.
- The `error.fold`, `error.rep` and `err.train` arguments have been removed because they are all calculated by default now.
- Add `parsperrorest()`: This function lets you execute `sperrorest()` in parallel. It includes two modes (`par.mode = 1` and `par.mode = 2`) which use different parallelization approaches in the background. See `?parsperrorest` for more details.
- Add `partition.factor.cv()`: This resampling method enables partitioning based on a given factor variable. This can be used, for example, to resample agricultural data that is grouped by fields at the field level, in order to preserve spatial autocorrelation within fields.
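A sketch with made-up data of what field-level partitioning could look like; the snake_case name `partition_factor_cv()` and its `fac`/`nfold` arguments reflect later releases and are assumptions here:

```r
# Sketch with simulated data: group-wise CV at the field level so that all
# observations of one field end up in the same fold.
library(sperrorest)
set.seed(1)
d <- data.frame(x = runif(200), y = runif(200),
                field = factor(sample(paste0("f", 1:20), 200, replace = TRUE)),
                yield = rnorm(200))
parti <- partition_factor_cv(d, coords = c("x", "y"), fac = "field", nfold = 5)
summary(parti)
```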
- `sperrorest()` and `parsperrorest()`: Add a `benchmark` item to the returned object, giving information about execution time, used cores and other system details.
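Continuing the sketch above, the benchmark information could then be inspected directly (element name as stated in this entry):

```r
# Inspect execution time, number of cores and other system details
# collected for the run from the previous sketch.
res$benchmark
```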
- Changes to functions:
  - `sperrorest()`: Change argument naming. `err.unpooled` is now `error.fold` and `err.pooled` is now `error.rep`.
  - `sperrorest()` and `parsperrorest()`: Change order and naming of the returned object: `sperrorestpoolederror` is now `sperrorestreperror`.
- Add package NEWS file.
- Add package vignette: `vignette("sperrorest-vignette", package = "sperrorest")`.
- The package is now byte-compiled.
- The GitHub repository of {sperrorest} is now at https://github.com/giscience-fsu/sperrorest/.