- Generalized `nn_module()` expression generator to generate a
  `torch::nn_module()` expression for sequential NN architectures. For
  example, an `nn_module()` for a 1D-CNN (convolutional neural network)
  with 3 hidden layers:

  ``` r
  nn_module_generator(
    nn_name = "CNN1DClassifier",
    nn_layer = "nn_conv1d",
    layer_arg_fn = ~ if (.is_output) {
      list(.in, .out)
    } else {
      list(
        in_channels = .in,
        out_channels = .out,
        kernel_size = 3L,
        stride = 1L,
        padding = 1L
      )
    },
    after_output_transform = ~ .$mean(dim = 2),
    last_layer_args = list(kernel_size = 1, stride = 2),
    hd_neurons = c(16, 32, 64),
    no_x = 1,
    no_y = 10,
    activations = "relu"
  )
  ```

- `train_nn()` to execute
  the `nn_module_generator()` output. `nn_arch()` must be supplied to
  inherit extra arguments from the `nn_module_generator()` function.
  Early stopping is supplied with `early_stop()`. Accepts a matrix, a
  data.frame, a dataset (a {torch} dataset), and a formula interface.

- `train_nnsnip()` is now provided to make `train_nn()` bridge with
  {tidymodels}.

- You can supply a customized activation function under
  `act_funs()` with `new_act_fn()`, in addition to the built-in
  `torch::nnf_*()` functions. `new_act_fn()` must return a torch tensor
  object, e.g. `act_funs(new_act_fn(torch::torch_tanh))` or
  `act_funs(new_act_fn(\(x) torch::torch_tanh(x)))`. Use `name` to set a
  displayed name for the custom activation function.

- `act_funs()` as a DSL function now supports index-style parameter
  specification for parametric activation functions
  via the `[` syntax (e.g. `softplus[beta = 0.2]`). The `args()` style
  (e.g. `softplus = args(beta = 0.2)`) is now superseded by it.

- No suffix was generated for 13 by `ordinal_gen()`. Now fixed.
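  The bug class here is the English teens exception. A minimal base-R
  sketch of ordinal-suffix logic (an illustration, not {kindling}'s
  actual implementation) that handles it:

  ``` r
  # 11, 12 and 13 take "th" even though they end in 1, 2, 3 -- the
  # teens edge case behind the missing suffix for 13.
  ordinal_suffix <- function(n) {
    if ((n %% 100) %in% 11:13) return("th")
    switch(as.character(n %% 10), "1" = "st", "2" = "nd", "3" = "rd", "th")
  }

  paste0(13, ordinal_suffix(13))    # "13th"
  paste0(22, ordinal_suffix(22))    # "22nd"
  paste0(111, ordinal_suffix(111))  # "111th"
  ```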
- `hd_neurons` for both `ffnn_generator()` and `rnn_generator()` accepts
  an empty argument, which implies no hidden layers are applied.
- Added regularization support for neural network models: `mixture = 1`
  (lasso), `mixture = 0` (ridge), and `0 < mixture < 1` (elastic net),
  via the `penalty` (regularization strength) and `mixture` (L1/L2
  balance) parameters, consistent with {glmnet} and other packages.

- `n_hlayers()` now fully supports tuning the number of hidden layers.
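  For reference, a minimal base-R sketch of the penalty that `penalty`
  and `mixture` describe, following {glmnet}'s convention (an assumption
  about the parameterization, not {kindling}'s internals):

  ``` r
  # Elastic-net penalty (glmnet-style parameterization):
  #   penalty * ( mixture * sum(|w|) + (1 - mixture) / 2 * sum(w^2) )
  # mixture = 1 -> pure L1 (lasso); mixture = 0 -> pure L2 (ridge).
  elastic_net_penalty <- function(w, penalty, mixture) {
    penalty * (mixture * sum(abs(w)) + (1 - mixture) / 2 * sum(w^2))
  }

  w <- c(-0.5, 1.0, 0.25)
  elastic_net_penalty(w, penalty = 0.1, mixture = 1)  # 0.175    (lasso)
  elastic_net_penalty(w, penalty = 0.1, mixture = 0)  # 0.065625 (ridge)
  ```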
- `hidden_neurons()` gains support for discrete values via the
  `disc_values` argument (e.g. `disc_values = c(32L, 64L, 128L, 256L)`
  is now allowed).

- Tuning methods and `grid_depth()` are now fixed for
  `n_hlayers` (no more invalid sampling when `x > 1`). Grids are now
  built with `tidyr::expand_grid()`, not `purrr::cross*()`, so
  {kindling}'s own 'dials' parameters (e.g. `n_hlayers = 1`) are handled
  correctly.

- The supported models now use `hardhat::mold()` instead of
  `model.frame()` and `model.matrix()`.
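  For context, the base-R path being replaced looks like the sketch
  below; `hardhat::mold()` performs the same formula-to-design-matrix
  preprocessing but also records a blueprint so the identical steps can
  be replayed on new data at predict time:

  ``` r
  # The old base-R path: formula -> model frame -> numeric design matrix.
  df <- data.frame(
    y  = c(1, 0, 1),
    x1 = c(0.2, 1.5, -0.3),
    g  = factor(c("a", "b", "a"))
  )
  mf <- model.frame(y ~ x1 + g, data = df)
  X  <- model.matrix(attr(mf, "terms"), mf)
  colnames(X)  # "(Intercept)" "x1" "gb"
  ```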
- Added a vignette to showcase the comparison with other similar
  packages.

- The package description got a few clarifications.
- The `hidden_neurons` parameter now supports discrete value
  specification via the `values` parameter (e.g.
  `hidden_neurons(values = c(32, 64, 128))`), alongside the range style
  (`hidden_neurons(range = c(8L, 512L))` / `hidden_neurons(c(8L, 512L))`).

- Added `\value` documentation to `kindling-nn-wrappers` for CRAN
  compliance.

- Documented argument handling and list-column unwrapping in the
  tidymodels wrapper functions.

- Clarified the relationship between `grid_depth()` and the wrapper
  functions.