* `LearnerTorchModel` can now be parallelized and trained with encapsulation activated.
* `jit_trace` now works in combination with batch normalization.
* Fixed compatibility with R6 version 2.6.0.
* The private `LearnerTorch$.dataloader()` method no longer operates on the task but on the dataset generated by the private `LearnerTorch$.dataset()` method.
* The `shuffle` parameter during model training is now initialized to `TRUE` to sidestep issues where the data is sorted.
* The `jit_trace` parameter was added to `LearnerTorch`, which, when set to `TRUE`, can lead to significant speedups. This should only be enabled for 'static' models; see the torch tutorial for more information.
* Added `num_interop_threads` to `LearnerTorch`.
* The `tensor_dataset` parameter was added, which allows stacking all batches at the beginning of training to make loading batches faster afterwards (a usage sketch for these new learner options follows this list).
* Added a `PipeOp` for adaptive average pooling.
* The `n_layers` parameter was added to the MLP learner (see the sketch after this list).
* Early stopping in combination with `AutoTuner` now uses `epochs - patience` for the internally tuned values instead of the trained number of epochs as it was before.
* The `dataset` of a learner must no longer return the tensors on the specified device, which allows for parallel dataloading on GPUs.
* `PipeOpBlock` should no longer create ID clashes with other PipeOps in the graph (#260).
* The deprecated `data_formats` are not used anymore.
* Added `CallbackSetTB`, which allows logging that can be viewed by TensorBoard (sketched below).
* Fixed `PipeOp`s such as `po("trafo_resize")` that failed in some cases.
* `LearnerTabResnet` now works correctly.
* Added the `nn()` helper function to simplify the creation of neural network layers (see the example after this list).
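The new learner-level options listed above (`jit_trace`, `num_interop_threads`, `tensor_dataset`, `shuffle`) are set like any other hyperparameter. A minimal sketch, assuming they are exposed on `classif.mlp` through `LearnerTorch`; defaults and interactions may differ between versions:

```r
library(mlr3)
library(mlr3torch)

learner = lrn("classif.mlp",
  epochs = 10,
  batch_size = 32,
  jit_trace = TRUE,        # only for 'static' models; can speed up training
  num_interop_threads = 2, # number of interop threads used by torch
  tensor_dataset = TRUE,   # stack all batches once before training starts
  shuffle = TRUE           # now the initialized value during training
)
learner$train(tsk("iris"))
```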
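For the new `n_layers` parameter of the MLP learner, a sketch follows; it assumes `neurons` gives the width used for each of the `n_layers` hidden layers, which may differ from the semantics documented for the learner:

```r
library(mlr3)
library(mlr3torch)

mlp = lrn("classif.mlp",
  n_layers = 3,   # new in this release: number of hidden layers
  neurons = 64,   # assumed to be the width of each hidden layer
  epochs = 20,
  batch_size = 64
)
mlp$train(tsk("sonar"))
```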
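`CallbackSetTB` writes training logs in a format TensorBoard can read. The sketch below assumes the callback is registered under the key `"tb"` and takes the log directory via a `path` argument; check the `CallbackSetTB` documentation for the exact key and parameters:

```r
library(mlr3)
library(mlr3torch)

logdir = tempfile("tb-logs")

learner = lrn("classif.mlp",
  epochs = 10,
  batch_size = 32,
  callbacks = t_clbk("tb", path = logdir),  # assumed key and argument name
  validate = 0.3,                           # track a validation measure per epoch
  measures_valid = msr("classif.acc")
)
learner$train(tsk("iris"))

# Afterwards, point TensorBoard at the log directory:
#   tensorboard --logdir <logdir>
```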
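The `nn()` helper shortens the construction of network `PipeOp`s when assembling a graph learner. A sketch, assuming `nn("linear")` is shorthand for `po("nn_linear")` and likewise for the other layers; the new adaptive average pooling `PipeOp` would be inserted the same way (presumably something like `nn("adaptive_avg_pool2d")` for image input):

```r
library(mlr3)
library(mlr3pipelines)
library(mlr3torch)

# A small classification network assembled from PipeOps.
graph = po("torch_ingress_num") %>>%
  nn("linear", out_features = 32) %>>%
  nn("relu") %>>%
  nn("head") %>>%
  po("torch_loss", loss = t_loss("cross_entropy")) %>>%
  po("torch_optimizer", optimizer = t_opt("adam")) %>>%
  po("torch_model_classif", epochs = 10, batch_size = 32)

glrn = as_learner(graph)
glrn$train(tsk("iris"))
```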