Note: This is an R port of the official tutorial available here. All credit goes to Vincent Quenneville-Bélair.
{torch} is an open source deep learning platform that provides a seamless path from research prototyping to production deployment with GPU support.
Significant effort in solving machine learning problems goes into
data preparation. torchaudio
leverages torch’s GPU support,
and provides many tools to make data loading easy and more readable. In
this tutorial, we will see how to load and preprocess data from a simple
dataset.
library(torchaudio)
library(viridis)
torchaudio also supports loading sound files in the wav and mp3 formats. We call the resulting raw audio signal the waveform.
= "https://pytorch.org/tutorials/_static/img/steam-train-whistle-daniel_simon-converted-from-mp3.wav"
url = tempfile(fileext = ".wav")
filename = httr::GET(url, httr::write_disk(filename, overwrite = TRUE))
r
waveform_and_sample_rate = transform_to_tensor(tuneR_loader(filename))
waveform = waveform_and_sample_rate[[1]]
sample_rate = waveform_and_sample_rate[[2]]
paste("Shape of waveform: ", paste(dim(waveform), collapse = " "))
paste("Sample rate of waveform: ", sample_rate)
plot(waveform[1], col = "royalblue", type = "l")
lines(waveform[2], col = "orange")
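As a quick sanity check (a small addition, not in the original tutorial), the duration of the clip in seconds is simply the number of frames divided by the sample rate:
# Duration in seconds = number of frames / sample rate
n_frames <- dim(waveform)[2]
n_frames / sample_rate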
Package {tuneR} is currently the only implemented backend.
torchaudio also supports a growing list of transformations. Each transform supports batching: you can perform a transform on a single raw audio signal or spectrogram, or on many of the same shape. Since all transforms are torch::nn_modules, they can be used as part of a neural network at any point.
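As a minimal sketch of both points (not from the original tutorial, and assuming the transforms broadcast over a leading batch dimension as their Python counterparts do), we can stack two waveforms into a batch and run them through a small nn_sequential pipeline:
# A sketch: apply a composed module to a batch of waveforms.
# torch_stack adds a leading batch dimension: (batch, channel, time).
batch <- torch::torch_stack(list(waveform, waveform))
pipeline <- torch::nn_sequential(
  transform_spectrogram(),
  torch::nn_identity()  # stand-in for any downstream network layers
)
dim(pipeline(batch))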
To start, we can look at the spectrogram on a log scale.
specgram <- transform_spectrogram()(waveform)
paste("Shape of spectrogram: ", paste(dim(specgram), collapse = " "))
specgram_as_array <- as.array(specgram$log2()[1]$t())
image(specgram_as_array[,ncol(specgram_as_array):1], col = viridis(n = 257, option = "magma"))
Or we can look at the Mel Spectrogram on a log scale.
specgram <- transform_mel_spectrogram()(waveform)
paste("Shape of spectrogram: ", paste(dim(specgram), collapse = " "))
specgram_as_array <- as.array(specgram$log2()[1]$t())
image(specgram_as_array[,ncol(specgram_as_array):1], col = viridis(n = 257, option = "magma"))
We can resample the waveform, one channel at a time.
new_sample_rate <- sample_rate/10

# Since Resample applies to a single channel, we resample first channel here
channel <- 1
transformed <- transform_resample(sample_rate, new_sample_rate)(waveform[channel, ]$view(c(1,-1)))
paste("Shape of transformed waveform: ", paste(dim(transformed), collapse = " "))
plot(transformed[1], col = "royalblue", type = "l")
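As a rough check (a sketch, not in the original tutorial), the resampled signal should contain about one tenth as many frames, since new_sample_rate is sample_rate/10:
# Expected length after resampling ~ original length * new_sample_rate / sample_rate
dim(waveform)[2] * new_sample_rate / sample_rate
dim(transformed)[2]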
As another example of transformations, we can encode the signal based on mu-law encoding. But to do so, we need the signal to be between -1 and 1. Since the tensor is just a regular {torch} tensor, we can apply standard operators on it.
# Let's check if the tensor is in the interval [-1,1]
cat(sprintf("Min of waveform: %f \nMax of waveform: %f \nMean of waveform: %f", as.numeric(waveform$min()), as.numeric(waveform$max()), as.numeric(waveform$mean())))
Since the waveform is already between -1 and 1, we do not need to normalize it.
normalize <- function(tensor) {
  # Subtract the mean, and scale to the interval [-1,1]
  tensor_minusmean <- tensor - tensor$mean()
  return(tensor_minusmean/tensor_minusmean$abs()$max())
}
# Let's normalize to the full interval [-1,1]
# waveform = normalize(waveform)
Let’s now encode the waveform.
transformed <- transform_mu_law_encoding()(waveform)
paste("Shape of transformed waveform: ", paste(dim(transformed), collapse = " "))
plot(transformed[1], col = "royalblue", type = "l")
And now decode.
reconstructed <- transform_mu_law_decoding()(transformed)
paste("Shape of recovered waveform: ", paste(dim(reconstructed), collapse = " "))
plot(reconstructed[1], col = "royalblue", type = "l")
We can finally compare the original waveform with its reconstructed version.
# Compute median relative difference
err <- as.numeric(((waveform - reconstructed)$abs() / waveform$abs())$median())
paste("Median relative difference between original and MuLaw reconstucted signals:", scales::percent(err, accuracy = 0.01))
The transformations seen above rely on lower-level stateless functions for their computations. These functions are identified by the torchaudio::functional_* prefix. For example, let's try functional_mu_law_encoding:
mu_law_encoding_waveform <- functional_mu_law_encoding(waveform, quantization_channels = 256)
paste("Shape of transformed waveform: ", paste(dim(mu_law_encoding_waveform), collapse = " "))
plot(mu_law_encoding_waveform[1], col = "royalblue", type = "l")
You can see how the output from functional_mu_law_encoding is the same as the output from transform_mu_law_encoding.
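For intuition, the companding step behind mu-law encoding can be written out by hand: y = sign(x) * log(1 + mu*|x|) / log(1 + mu), followed by quantizing y from [-1, 1] to integer levels in [0, mu]. The sketch below (not part of the original tutorial) applies that formula with mu = 255; the difference from functional_mu_law_encoding should be zero, up to rounding:
# Manual mu-law companding and quantization (a sketch).
mu <- 255
compressed <- torch::torch_sign(waveform) *
  torch::torch_log1p(mu * torch::torch_abs(waveform)) / log1p(mu)
quantized <- ((compressed + 1) / 2 * mu + 0.5)$to(dtype = torch::torch_int64())
# Largest absolute difference from the library output (expected to be 0).
as.numeric((quantized - mu_law_encoding_waveform)$abs()$max())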
Now let’s experiment with a few of the other functionals and visualize their output. Taking our spectrogram, we can compute its deltas:
computed <- functional_compute_deltas(specgram$contiguous(), win_length=3)
paste("Shape of computed deltas: ", paste(dim(computed), collapse = " "))
computed_as_array <- as.array(computed[1]$t())
image(computed_as_array[,ncol(computed_as_array):1], col = viridis(n = 257, option = "magma"))
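Here the delta follows the standard definition d_t = sum(n * (c_{t+n} - c_{t-n})) / (2 * sum(n^2)) for n = 1..(win_length - 1)/2, which for win_length = 3 reduces to (c_{t+1} - c_{t-1}) / 2 on interior frames. A rough verification (a sketch, not in the original tutorial; the padded edge frames are skipped):
# Manual delta for win_length = 3 on interior frames.
spec1 <- as.array(specgram[1, , ])   # first channel: frequency x time
manual <- (spec1[, 3:ncol(spec1)] - spec1[, 1:(ncol(spec1) - 2)]) / 2
# Largest deviation from functional_compute_deltas (expected to be ~0).
max(abs(manual - as.array(computed[1, , ])[, 2:(ncol(spec1) - 1)]))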
We can take the original waveform and apply different effects to it.
gain_waveform <- as.numeric(functional_gain(waveform, gain_db=5.0))
cat(sprintf("Min of gain_waveform: %f\nMax of gain_waveform: %f\nMean of gain_waveform: %f", min(gain_waveform), max(gain_waveform), mean(gain_waveform)))
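Since 5 dB of gain corresponds to multiplying the amplitude by 10^(5/20), about 1.78, the peak of gain_waveform should be roughly 1.78 times the original peak (a quick sanity check, not in the original tutorial):
# Amplitude ratio implied by 5 dB of gain, and the observed ratio of maxima.
10^(5 / 20)
max(gain_waveform) / as.numeric(waveform$max())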
dither_waveform <- as.numeric(functional_dither(waveform))
cat(sprintf("Min of dither_waveform: %f\nMax of dither_waveform: %f\nMean of dither_waveform: %f", min(dither_waveform), max(dither_waveform), mean(dither_waveform)))
Another example of the capabilities in torchaudio::functional_* is applying filters to our waveform. Applying the lowpass biquad filter to our waveform will output a new waveform with the frequencies above the cutoff attenuated.
lowpass_waveform <- as.array(functional_lowpass_biquad(waveform, sample_rate, cutoff_freq=3000))
cat(sprintf("Min of lowpass_waveform: %f\nMax of lowpass_waveform: %f\nMean of lowpass_waveform: %f", min(lowpass_waveform), max(lowpass_waveform), mean(lowpass_waveform)))
plot(lowpass_waveform[1,], col = "royalblue", type = "l")
lines(lowpass_waveform[2,], col = "orange")
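To see the effect in the frequency domain, we can reuse transform_spectrogram() on the filtered signal (a sketch, not part of the original tutorial); energy above the cutoff should be visibly attenuated compared to the earlier spectrogram:
# Log-scale spectrogram of the low-pass filtered signal, plotted as before.
lowpass_tensor <- torch::torch_tensor(lowpass_waveform)
lowpass_spec <- transform_spectrogram()(lowpass_tensor)
lowpass_spec_as_array <- as.array(lowpass_spec$log2()[1]$t())
image(lowpass_spec_as_array[, ncol(lowpass_spec_as_array):1], col = viridis(n = 257, option = "magma"))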
We can also visualize a waveform with the highpass biquad filter.
highpass_waveform <- as.array(functional_highpass_biquad(waveform, sample_rate, cutoff_freq=3000))
cat(sprintf("Min of highpass_waveform: %f\nMax of highpass_waveform: %f\nMean of highpass_waveform: %f", min(highpass_waveform), max(highpass_waveform), mean(highpass_waveform)))
plot(highpass_waveform[1,], col = "royalblue", type = "l")
lines(highpass_waveform[2,], col = "orange")
Users may be familiar with Kaldi, a toolkit for
speech recognition. torchaudio
will offer compatibility
with it in torchaudio::kaldi_*
in the future.
If you do not want to create your own dataset to train your model,
torchaudio
offers a unified dataset interface. This
interface supports lazy-loading of files to memory, download and extract
functions, and datasets to build models.
The datasets torchaudio
currently supports are:
Yesno
SpeechCommands
CMUArctic
temp <- tempdir()
yesno_data <- yesno_dataset(temp, download=TRUE)
# A data point in Yesno is a list (waveform, sample_rate, labels) where labels is a list of integers with 1 for yes and 0 for no.

# Pick data point number 3 to see an example of the yesno_data:
n <- 3
sample <- yesno_data[n]
sample
plot(sample[[1]][1], col = "royalblue", type = "l")
Now, whenever you ask for a sound file from the dataset, it is loaded into memory only at that point: the dataset loads and keeps in memory only the items you actually use, saving memory.
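As a small illustration (a sketch, assuming each item follows the (waveform, sample_rate, labels) structure described above), indexing a handful of items reads only those files from disk:
# Only the indexed items are loaded into memory.
length(yesno_data)  # number of recordings available
for (i in 1:3) {
  item <- yesno_data[i]
  cat("item", i, "- frames:", dim(item[[1]])[2],
      "- labels:", paste(unlist(item[[3]]), collapse = " "), "\n")
}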
We used an example raw audio signal, or waveform, to illustrate how
to open an audio file using torchaudio
, and how to
pre-process, transform, and apply functions to such a waveform. We also
demonstrated built-in datasets to construct our models. Given that
torchaudio
is built on {torch}, these techniques can be
used as building blocks for more advanced audio applications, such as
speech recognition, while leveraging GPUs.