LLMR offers a unified interface for Large Language Models in R. It supports multiple providers, robust retries, structured output, and embeddings.
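Because the interface is unified, the same calling code can target different providers by swapping only the config object. A minimal sketch (the `anthropic` provider name and the model identifiers are assumptions based on the providers mentioned below):

```r
library(LLMR)

# Two configs, one calling convention. Swap cfg_openai for cfg_claude
# without changing any call_llm() code.
cfg_openai <- llm_config(provider = "openai",    model = "gpt-4o-mini")
cfg_claude <- llm_config(provider = "anthropic", model = "claude-3-5-haiku-latest")
```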
install.packages("LLMR") # CRAN
# Development:
# remotes::install_github("asanaei/LLMR")
library(LLMR)
cfg <- llm_config(
  provider = "openai",
  model = "gpt-4o-mini",
  temperature = 0.2,
  max_tokens = 256
)
Store keys in environment variables such as OPENAI_API_KEY, ANTHROPIC_API_KEY, GEMINI_API_KEY.
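In R, a key can be set for the current session with base `Sys.setenv()`, or persisted in `~/.Renviron`; a sketch (the key value is a placeholder):

```r
# Set for the current session only (replace the placeholder with your real key):
Sys.setenv(OPENAI_API_KEY = "sk-...")

# Or persist across sessions by adding a line such as
#   OPENAI_API_KEY=sk-...
# to ~/.Renviron, then restarting R.
```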
r <- call_llm(
  config = cfg,
  messages = c(
    system = "You are a branding expert.",
    user = "Six-word catch-phrase for eco-friendly balloons."
  )
)
print(r) # text + status line
as.character(r) # just the text
finish_reason(r)
tokens(r)
is_truncated(r)
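These helpers make it easy to recover from a capped response. A sketch, assuming the `cfg` and message vector from above, that retries once with a larger token budget:

```r
# If the reply hit the max_tokens cap, retry once with a larger budget.
if (is_truncated(r)) {
  cfg_big <- llm_config(
    provider = "openai", model = "gpt-4o-mini",
    temperature = 0.2, max_tokens = 1024
  )
  r <- call_llm(cfg_big, c(
    system = "You are a branding expert.",
    user = "Six-word catch-phrase for eco-friendly balloons."
  ))
}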
schema <- list(
  type = "object",
  properties = list(
    label = list(type = "string"),
    score = list(type = "number")
  ),
  required = list("label", "score"),
  additionalProperties = FALSE
)
cfg_s  <- enable_structured_output(cfg, schema = schema)
resp   <- call_llm(cfg_s, c(system = "Reply JSON only.", user = "Label and score for 'MNIST'."))
parsed <- llm_parse_structured(resp)
str(parsed)
Or use higher-level helpers:
words <- c("excellent", "awful", "fine")
out <- llm_fn_structured(
  x = words,
  prompt = "Classify '{x}' and output {label, score in [0,1]} as JSON.",
  .config = cfg,
  .schema = schema,
  .fields = c("label", "score")
)
out
sentences <- c(
  one = "Quiet rivers mirror bright skies.",
  two = "Thunder shakes the mountain path."
)
emb_cfg <- llm_config(
  provider = "voyage",
  model = "voyage-large-2",
  embedding = TRUE
)
emb <- call_llm(emb_cfg, sentences) |> parse_embeddings()
dim(emb)
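Once `parse_embeddings()` yields a numeric matrix, similarities can be computed with base R alone. A sketch, assuming `emb` has one row per input text:

```r
# Cosine similarity between the two sentence embeddings (base R only).
unit <- emb / sqrt(rowSums(emb^2))  # L2-normalize each row
sim  <- tcrossprod(unit)            # pairwise cosine similarity matrix
sim[1, 2]
```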
Batch embeddings:
emb <- get_batched_embeddings(
  texts = sentences,
  embed_config = emb_cfg,
  batch_size = 8
)
chat <- chat_session(cfg, system = "You teach statistics tersely.")
chat$send("Explain p-values in 12 words.")
chat$send("Now give a three-word analogy.")
print(chat)
setup_llm_parallel(workers = 4)
experiments <- build_factorial_experiments(
  configs = list(cfg),
  user_prompts = c("Summarize in one sentence: The Apollo program."),
  system_prompts = "Be concise."
)
res <- call_llm_par(experiments, progress = TRUE)
reset_llm_parallel()
Issues and pull requests are welcome. Please include a minimal reproducible example.