* Support for `workflows::add_case_weights()` parameters (#151); see the sketch below.
* New `tabnet_model` and `from_epoch` parameters (#143)
* Fix the `tune::finalize_workflow()` test for the {parsnip} v1.2 breaking change. (#155)
* `autoplot()` now positions the "has_checkpoint" points correctly when a `tabnet_fit()` is continuing a previous training using `tabnet_model=`. (#150)
* The `tabnet_model` option will not be used in `tabnet_pretrain()` tasks. (#150)
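A minimal sketch of the case-weights support, assuming the standard {tidymodels} workflow API; the dataset, the weight rule, and the hyperparameter values are hypothetical:

```r
library(tidymodels)
library(tabnet)

# Hypothetical example weights: over-weight one subgroup
data("attrition", package = "modeldata")
attrition$case_wts <- hardhat::importance_weights(
  ifelse(attrition$OverTime == "Yes", 2, 1)
)

wf <- workflow() %>%
  add_case_weights(case_wts) %>%
  add_formula(Attrition ~ .) %>%
  add_model(
    tabnet(epochs = 5) %>%
      set_engine("torch") %>%
      set_mode("classification")
  )

fitted_wf <- fit(wf, data = attrition)
```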
* Support for hierarchical multi-label classification through {data.tree} `Node` datasets. (#126)
* `tabnet_pretrain()` now allows different GLU blocks in the GLU layers of the encoder and of the decoder, through the `tabnet_config()` parameters `num_independent_decoder` and `num_shared_decoder` (#129); see the sketch below.
* Add `reduce_on_plateau` as an option for `lr_scheduler` in `tabnet_config()` (@SvenVw, #120)
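A short configuration sketch for the `tabnet_config()` entries above; the parameter values are arbitrary:

```r
library(tabnet)

config <- tabnet_config(
  num_independent_decoder = 3,          # independent GLU blocks in the decoder
  num_shared_decoder = 1,               # shared GLU blocks in the decoder
  lr_scheduler = "reduce_on_plateau",   # new scheduler option
  learn_rate = 0.02
)
# pretrained <- tabnet_pretrain(x, y, config = config)
```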
* Improvements to `autoplot.tabnet_fit()` (#67)
* `tabnet_pretrain()` now allows missing values in predictors. (#68)
* `tabnet_explain()` now works for `tabnet_pretrain` models. (#68)
* Updates to the `random_obfuscator()` torch_nn module. (#68)
* `tabnet_fit()` and `predict()` now allow missing values in predictors (#76); see the sketch below.
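A sketch of the missing-values support; the dataset and the injected NA pattern are hypothetical:

```r
library(tabnet)

data("ames", package = "modeldata")
ames$Lot_Frontage[sample.int(nrow(ames), 100)] <- NA  # inject missing values

# No imputation step is needed: NAs in predictors are handled natively
fit <- tabnet_fit(Sale_Price ~ ., data = ames, epochs = 1)
pred <- predict(fit, new_data = ames)
```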
* `tabnet_config()` now supports a `num_workers=` parameter to control parallel dataloading (#83); see the sketch below.
* `tabnet_config()` now has a `skip_importance` flag to skip calculating feature importance (@egillax, #91)
* `tabnet_nn`
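A one-line sketch of the two new `tabnet_config()` options; the values are arbitrary:

```r
library(tabnet)

config <- tabnet_config(
  num_workers = 4,        # parallel dataloading workers
  skip_importance = TRUE  # skip the feature-importance computation
)
```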
* New `min_grid.tabnet` method for {tune} (@cphaarmeyer, #107)
* New `tabnet_explain()` method for parsnip models (@cphaarmeyer, #108); see the sketch below.
* `tabnet_fit()` and `predict()` now allow multiple outcomes, either all numeric or all factors but not mixed. (#118)
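A sketch of `tabnet_explain()` on a parsnip fit, reusing the hypothetical objects from the case-weights sketch above and assuming the usual `new_data` argument:

```r
library(workflows)
library(tabnet)

# fitted_wf and attrition come from the case-weights sketch
explained <- tabnet_explain(
  extract_fit_parsnip(fitted_wf),
  new_data = attrition
)
```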
* `tabnet_explain()` now correctly handles missing values in predictors. (#77)
* `dataloader` can now use `num_workers > 0` (#83)
* Updated default values for `batch_size` and `virtual_batch_size` improve performance on mid-range devices.
* Add `engine="torch"` to the tabnet parsnip model (#114); see the sketch below.
* Fix `autoplot()` warnings turned into errors with {ggplot2} v3.4 (#113)
* New `update` method for tabnet models to allow the correct usage of `finalize_workflow` (#60).
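A minimal sketch of the parsnip specification with the "torch" engine; the hyperparameter values are arbitrary:

```r
library(parsnip)
library(tabnet)

spec <- tabnet(epochs = 10, batch_size = 256) %>%
  set_engine("torch") %>%
  set_mode("regression")

# fit(spec, Sale_Price ~ ., data = ames)  # hypothetical data
```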
* Updates to `tabnet_fit()` (@cregouby, #26)
* Improvements to `tabnet_explain()`.
* New `tabnet_pretrain()` for unsupervised pretraining (@cregouby, #29); see the sketch below.
* New `autoplot()` of model loss among epochs (@cregouby, #36)
* New `config` argument to `fit()` / `pretrain()` so one can pass a pre-made config list. (#42)
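A sketch combining unsupervised pretraining, the pre-made config list, and the loss `autoplot()`; the data and configuration values are hypothetical:

```r
library(tabnet)
library(ggplot2)

data("ames", package = "modeldata")
cfg <- tabnet_config(valid_split = 0.2, epochs = 20)

# Unsupervised pretraining, then supervised training warm-started from it
pre <- tabnet_pretrain(Sale_Price ~ ., data = ames, config = cfg)
fit <- tabnet_fit(Sale_Price ~ ., data = ames,
                  tabnet_model = pre, config = cfg)

autoplot(fit)  # model loss among epochs
```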
* In `tabnet_config()`, new `mask_type` option with `entmax` in addition to the default `sparsemax` (@cmcmaster1, #48)
* In `tabnet_config()`, `loss` now also takes a function (@cregouby, #55); see the sketch below.
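A configuration sketch for these two entries; the custom loss and its `(input, target)` signature are assumptions:

```r
library(tabnet)
library(torch)

# Hypothetical custom loss function
my_loss <- function(input, target) nnf_mse_loss(input, target)

cfg <- tabnet_config(
  mask_type = "entmax",  # alternative to the default "sparsemax"
  loss = my_loss
)
```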
* Added a `NEWS.md` file to track changes to the package.