Luz is a higher-level API for torch that provides abstractions for writing much less verbose training loops.
This package is still under development.
It is heavily inspired by other higher-level frameworks for deep learning, to cite a few:

- FastAI: We are heavily inspired by the FastAI library, especially the Learner object and the callbacks API.
- Keras: We are also heavily inspired by Keras, especially the callback names. The lightning module interface is similar to compile, too.
- PyTorch Lightning: The idea of the luz_module being a subclass of nn_module is inspired by the LightningModule object in lightning.
- HuggingFace Accelerate: The internal device placement API is heavily inspired by Accelerate, but is much more modest in features. Currently only CPU and single GPU are supported.
You can install the released version from CRAN with:
install.packages("luz")
or the development version with:
remotes::install_github("mlverse/luz")
Luz lets you take your torch nn_module definition and fit it to a dataloader, while handling the boring parts like moving data between devices, updating the weights, showing progress bars, and tracking metrics.
Here’s an example defining and training an Autoencoder for the MNIST dataset. We selected parts of the code to highlight luz functionality. You can find the full example code here.
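The example below assumes train_dl and test_dl dataloaders over MNIST. As a minimal sketch (not part of the original example; the autoencoder_dataset wrapper, root path, and batch sizes are illustrative), they could be built with torchvision so that each target is the input image, since an autoencoder is trained to reconstruct its input:

library(torch)
library(torchvision)
library(luz)

# Illustrative wrapper: serve each MNIST image as both input and target,
# since the autoencoder reconstructs its input.
autoencoder_dataset <- dataset(
  "AutoencoderDataset",
  initialize = function(train) {
    self$data <- mnist_dataset(
      root = "./mnist", train = train, download = TRUE,
      transform = transform_to_tensor
    )
  },
  .getitem = function(i) {
    item <- self$data[i]
    list(x = item$x, y = item$x)
  },
  .length = function() {
    length(self$data)
  }
)

train_dl <- dataloader(autoencoder_dataset(train = TRUE), batch_size = 128, shuffle = TRUE)
test_dl <- dataloader(autoencoder_dataset(train = FALSE), batch_size = 128)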
net <- nn_module(
  "Net",
  initialize = function() {
    self$encoder <- nn_sequential(
      nn_conv2d(1, 6, kernel_size = 5),
      nn_relu(),
      nn_conv2d(6, 16, kernel_size = 5),
      nn_relu()
    )
    self$decoder <- nn_sequential(
      nn_conv_transpose2d(16, 6, kernel_size = 5),
      nn_relu(),
      nn_conv_transpose2d(6, 1, kernel_size = 5),
      nn_sigmoid()
    )
  },
  forward = function(x) {
    x %>%
      self$encoder() %>%
      self$decoder()
  }
)
Now that we have defined the Autoencoder architecture using torch::nn_module(), we can fit it using luz:
fitted <- net %>%
  setup(
    loss = nn_mse_loss(),
    optimizer = optim_adam
  ) %>%
  fit(train_dl, epochs = 1, valid_data = test_dl)
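Once training finishes, the fitted object can be used for inference and saved to disk. The snippet below is a short sketch (the file name is a placeholder) using luz's predict(), evaluate(), and luz_save():

# Reconstructions for the validation dataloader.
preds <- predict(fitted, test_dl)

# Compute the loss (and any metrics passed to setup()) on a dataloader.
evaluation <- evaluate(fitted, test_dl)

# Serialize the fitted model; it can be restored later with luz_load().
luz_save(fitted, "mnist-autoencoder.pt")

Callbacks, mentioned above, plug directly into fit(). For example, luz_callback_early_stopping() stops training once a monitored quantity stops improving (the epoch count and patience here are illustrative):

fitted <- net %>%
  setup(
    loss = nn_mse_loss(),
    optimizer = optim_adam
  ) %>%
  fit(
    train_dl, epochs = 10, valid_data = test_dl,
    callbacks = list(
      luz_callback_early_stopping(monitor = "valid_loss", patience = 2)
    )
  )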