The L-BAM library comprises information about pre-trained models. The models can be called with textPredict(), textAssess(), or textClassify() like this:
library(text)

# Example calling a model using the URL
textPredict(
  model_info = "facebook_valence",
  texts = "what is the valence of this text?"
)

# Example calling a model using its abbreviation
textClassify(
  model_info = "implicit_power_fine_tuned_roberta",
  texts = "It looks like they have problems collaborating."
)
The text prediction functions take a model and texts, automatically transform the texts into word embeddings, and produce estimated scores or probabilities.
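For instance, the returned predictions can be stored and inspected. Below is a minimal sketch assuming that textPredict() returns a data frame of estimated scores, one row per input text; the example texts are illustrative and the exact output structure may differ:

library(text)

# Sketch: score several texts with the same pre-trained valence model
# (assumes the result is a data frame of estimated scores per text)
valence_scores <- textPredict(
  model_info = "facebook_valence",
  texts = c(
    "I had a wonderful day at the beach.",
    "The meeting was exhausting and frustrating."
  )
)

# Inspect the estimated scores
valence_scores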
If you want to add a pre-trained model to the L-BAM library, please fill out the details in this Google sheet and email us (oscar [ d_o t] kjell [a _ t] psy [DOT] lu [d_o_t]se) so that we can update the table online.
Note that you can adjust the width of the columns when scrolling the table.
Gu, Kjell, Schwartz, & Kjell (2024). Natural Language Response Formats for Assessing Depression and Worry with Large Language Models: A Sequential Evaluation with Model Pre-registration.
Kjell, O. N., Sikström, S., Kjell, K., & Schwartz, H. A. (2022). Natural language analyzed with AI-based transformers predict traditional subjective well-being measures approaching the theoretical upper limits in accuracy. Scientific Reports, 12(1), 3918.
Nilsson, Runge, Ganesan, Lövenstierne, Soni, & Kjell (2024). Automatic Implicit Motives Codings are at Least as Accurate as Humans’ and 99% Faster.