attention: Self-Attention Algorithm

Self-Attention algorithm helper functions and demonstration vignettes of increasing depth on how to construct the Self-Attention algorithm. Based on Vaswani et al. (2017) <doi:10.48550/arXiv.1706.03762>, Dan Jurafsky and James H. Martin (2022, ISBN:978-0131873216) "Speech and Language Processing (3rd ed.)" <https://web.stanford.edu/~jurafsky/slp3/>, and Alex Graves (2020) "Attention and Memory in Deep Learning" <https://www.youtube.com/watch?v=AIiwuClvH6k>.
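
As a quick illustration of what the vignettes build up to, below is a minimal base-R sketch of scaled dot-product self-attention as described in Vaswani et al. (2017). All names here (X, W_Q, W_K, W_V, softmax_rows) are illustrative and are not the package's own API.

    set.seed(42)

    # toy input: 3 tokens, each a 4-dimensional embedding (one row per token)
    X <- matrix(rnorm(3 * 4), nrow = 3, ncol = 4)

    # illustrative projection matrices for queries, keys, and values
    W_Q <- matrix(rnorm(4 * 4), nrow = 4)
    W_K <- matrix(rnorm(4 * 4), nrow = 4)
    W_V <- matrix(rnorm(4 * 4), nrow = 4)

    Q <- X %*% W_Q
    K <- X %*% W_K
    V <- X %*% W_V

    # row-wise softmax (subtract the row maximum for numerical stability)
    softmax_rows <- function(m) {
      e <- exp(m - apply(m, 1, max))
      e / rowSums(e)
    }

    # scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V
    scores  <- Q %*% t(K) / sqrt(ncol(K))
    weights <- softmax_rows(scores)
    output  <- weights %*% V
    output   # one attended representation per input token

The vignettes listed under Documentation walk through this construction step by step and in more detail.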

Version: 0.4.0
Suggests: covr, knitr, rmarkdown, testthat (≥ 3.0.0)
Published: 2023-11-10
Author: Bastiaan Quast [aut, cre]
Maintainer: Bastiaan Quast <bquast at gmail.com>
License: GPL (≥ 3)
NeedsCompilation: no
Materials: README NEWS
CRAN checks: attention results

Documentation:

Reference manual: attention.pdf
Vignettes: Complete Self-Attention from Scratch, Simple Self-Attention from Scratch

Downloads:

Package source: attention_0.4.0.tar.gz
Windows binaries: r-devel: attention_0.4.0.zip, r-release: attention_0.4.0.zip, r-oldrel: attention_0.4.0.zip
macOS binaries: r-release (arm64): attention_0.4.0.tgz, r-oldrel (arm64): attention_0.4.0.tgz, r-release (x86_64): attention_0.4.0.tgz, r-oldrel (x86_64): attention_0.4.0.tgz
Old sources: attention archive
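
For reference, the usual way to install the released version from within R (not specific to any particular mirror) is:

    # install the released package from CRAN, then load it
    install.packages("attention")
    library(attention)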

Reverse dependencies:

Reverse imports: rnn, transformer

Linking:

Please use the canonical form https://CRAN.R-project.org/package=attention to link to this page.
