The biggest strength of {kindling} when modelling neural
networks is its versatility: it inherits {torch}'s
flexibility while remaining human-friendly, including the ability to apply
custom optimizer functions, loss functions, and per-layer activation
functions. Learn more: https://kindling.joshuamarie.com/articles/special-cases.
With act_funs(), you are not limited to the activation
functions available in {torch}'s namespace. Use
new_act_fn() to wrap any compatible function into a
validated custom activation. Note, however, that this feature is only
available in version 0.3.0 and above.
To do this, use new_act_fn(). It takes a user-supplied
function, validates it against a small dummy tensor at definition
time (a dry-run probe), and wraps it in a call-time type guard.
This means errors surface early — before your model ever starts
training.
The function you supply must:

- accept a tensor as its first argument, and
- return a torch_tensor.

Currently, nnf_tanh doesn't exist in the
{torch} namespace, so tanh is not a valid
argument to act_funs(). With new_act_fn(), you
can wrap torch::torch_tanh() to make it usable.
Here’s a basic example that wraps torch::torch_tanh() as
a custom activation:
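A minimal sketch of such a wrapper (the exact printed output will depend on your {kindling} version):

```r
library(kindling)

# Wrap torch's tanh as a validated custom activation.
# The dry-run probe runs here, at definition time.
tanh_act <- new_act_fn(\(x) torch::torch_tanh(x))
tanh_act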
You can also pass it directly into act_funs(), just like
any built-in activation:
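For example, mixing the wrapped tanh with built-in activations (a sketch; relu here is the built-in activation name accepted by act_funs()):

```r
acts <- act_funs(
  relu,
  new_act_fn(\(x) torch::torch_tanh(x))
)
```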
Naturally, modelling functions such as ffnn() accept
act_funs() through their activations argument.
As before, you can create a custom activation with
new_act_fn() and include it in
act_funs().
Here’s a basic example:
```r
model = ffnn(
  Sepal.Length ~ .,
  data = iris[, 1:4],
  hidden_neurons = c(64, 32, 16),
  activations = act_funs(
    relu,
    silu,
    new_act_fn(\(x) torch::torch_tanh(x))
  ),
  epochs = 50
)

model
```

Each element of act_funs() corresponds to one hidden
layer, in order. Here, the first hidden layer uses ReLU, the second uses
SiLU (Swish), and the third uses Tanh.
You can also use a single custom activation recycled across all layers:
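One plausible form, assuming a one-element act_funs() is recycled over every hidden layer:

```r
model = ffnn(
  Sepal.Length ~ .,
  data = iris[, 1:4],
  hidden_neurons = c(64, 32, 16),
  # one custom activation, applied to all three hidden layers
  activations = act_funs(new_act_fn(\(x) torch::torch_tanh(x))),
  epochs = 50
)
```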
By default, new_act_fn() runs a quick dry-run with a
small dummy tensor to validate your function before training. You can
disable this with probe = FALSE, though this is generally
not recommended:
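A sketch of disabling the probe:

```r
# Skip the definition-time dry run; problems will only surface
# later, at call time, via the type guard
unchecked_tanh <- new_act_fn(\(x) torch::torch_tanh(x), probe = FALSE)
```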
You can provide a human-readable name via .name, which
is used in print output and diagnostics:
Here’s a simple application:
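For instance (a sketch; the exact print format may differ):

```r
tanh_act <- new_act_fn(\(x) torch::torch_tanh(x), .name = "tanh")
tanh_act  # print output and diagnostics use the supplied name
```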
new_act_fn() is designed to fail loudly and early.
Common errors include:
Function returns a non-tensor. This will error at definition time:
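A sketch of this failure mode (the exact error message may differ):

```r
# as.numeric() returns a plain numeric vector, not a tensor,
# so the dry-run probe errors at definition time
new_act_fn(\(x) as.numeric(x))
```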
Function accepts no arguments. This will error immediately:
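A sketch (again, the exact message may differ):

```r
# A zero-argument function cannot receive the layer input,
# so new_act_fn() rejects it immediately
new_act_fn(function() torch::torch_randn(1))
```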
These checks ensure your model’s architecture is valid before any data ever flows through it.
| Feature | Details |
|---|---|
| Wraps any R function | Must accept a tensor, return a tensor |
| Dry-run probe | Validates at definition time (probe = TRUE by default) |
| Call-time guard | Type-checks output on every forward pass |
| Compatible with act_funs() | Use alongside built-in activations freely |
| Closures supported | Parametric activations work naturally |