chatLLM is an R package providing a single, consistent interface to multiple “OpenAI‑compatible” chat APIs (OpenAI, Groq, Anthropic, DeepSeek, Alibaba DashScope, and GitHub Models).
Key features:

- One `call_llm()` entry point for every supported provider
- Single prompts or full multi-message conversations
- Common tuning knobs (`max_tokens`, `top_p`, `presence_penalty`, `frequency_penalty`)
- Automatic retries via `n_tries` / `backoff`
- Quiet or chatty output with `verbose = TRUE/FALSE`
- Model discovery with `list_models()`
From CRAN:
```r
install.packages("chatLLM")
```
Development version:
```r
# install.packages("remotes")  # if needed
remotes::install_github("knowusuboaky/chatLLM")
```
Set your API keys or tokens once per session:
```r
Sys.setenv(
  OPENAI_API_KEY    = "your-openai-key",
  GROQ_API_KEY      = "your-groq-key",
  ANTHROPIC_API_KEY = "your-anthropic-key",
  DEEPSEEK_API_KEY  = "your-deepseek-key",
  DASHSCOPE_API_KEY = "your-dashscope-key",
  GH_MODELS_TOKEN   = "your-github-models-token"
)
```
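As a quick sanity check (plain base R, not a chatLLM function), you can confirm a key is actually visible to the session before making calls:

```r
# Returns TRUE when the variable is set to a non-empty value
nzchar(Sys.getenv("OPENAI_API_KEY"))
```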
A basic one-shot prompt:

```r
library(chatLLM)

response <- call_llm(
  prompt     = "Who is Messi?",
  provider   = "openai",
  max_tokens = 300
)
cat(response)
```
A multi-message conversation with sampling parameters:

```r
conv <- list(
  list(role = "system", content = "You are a helpful assistant."),
  list(role = "user",   content = "Explain recursion in R.")
)

response <- call_llm(
  messages          = conv,
  provider          = "openai",
  max_tokens        = 200,
  presence_penalty  = 0.2,
  frequency_penalty = 0.1,
  top_p             = 0.95
)
cat(response)
```
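To keep the exchange going, one option is to append the assistant's reply and a new user turn to the same list; this sketch assumes `call_llm()` accepts the usual `assistant` role in `messages`, mirroring the OpenAI-style schema above:

```r
# Sketch: extend the conversation (assumes the standard "assistant" role is accepted)
conv <- c(conv, list(
  list(role = "assistant", content = response),
  list(role = "user",      content = "Show a small recursive factorial in R.")
))

followup <- call_llm(
  messages   = conv,
  provider   = "openai",
  max_tokens = 200
)
cat(followup)
```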
Suppress informational messages:
```r
res <- call_llm(
  prompt   = "Tell me a joke",
  provider = "openai",
  verbose  = FALSE
)
cat(res)
```
Create a reusable LLM function:
```r
# Build a "GitHub Models" engine with defaults baked in
GitHubLLM <- call_llm(
  provider   = "github",
  max_tokens = 60,
  verbose    = FALSE
)

# Invoke it like a function:
story <- GitHubLLM("Tell me a short story about libraries.")
cat(story)
```
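Because the returned engine is just an R function, it composes with base R as usual. A small sketch, assuming it returns a single character string (as the example above suggests):

```r
# Sketch: map the reusable engine over several prompts with vapply (base R)
prompts <- c("Define recursion in one sentence.", "Define a closure in one sentence.")
answers <- vapply(prompts, GitHubLLM, character(1))
cat(answers, sep = "\n\n")
```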
List the available models:

```r
# All providers at once
all_models <- list_models("all")
names(all_models)

# Only OpenAI models
openai_models <- list_models("openai")
head(openai_models)
```
Pick a model from the list and pass it to `call_llm()`:

```r
anthro_models <- list_models("anthropic")

cat(call_llm(
  prompt     = "Write a haiku about autumn.",
  provider   = "anthropic",
  model      = anthro_models[1],
  max_tokens = 60
))
```
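In scripts it can help to guard calls so a provider outage doesn't abort the run; this is plain base R error handling, not a chatLLM feature:

```r
# Sketch: fall back to an error message instead of stopping the script
safe_reply <- tryCatch(
  call_llm(
    prompt     = "Write a haiku about autumn.",
    provider   = "anthropic",
    model      = anthro_models[1],
    max_tokens = 60
  ),
  error = function(e) paste("LLM call failed:", conditionMessage(e))
)
cat(safe_reply)
```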
Troubleshooting:

- Transient failures or rate limits: increase `n_tries` / `backoff`.
- Slow or timed-out requests: supply a custom `.post_func` with a higher `timeout()`.
- Unrecognised model names: run `list_models("<provider>")` or consult the provider's docs.
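A hedged sketch of those knobs together; it assumes `n_tries` counts attempts, `backoff` is the pause in seconds between retries, and `.post_func` is a drop-in replacement for the package's internal `httr::POST` call (check `?call_llm` for the authoritative signatures):

```r
library(httr)

# Hypothetical helper: same interface as httr::POST, but with a 120 s timeout
patient_post <- function(...) POST(..., timeout(120))

res <- call_llm(
  prompt     = "Summarise the tidyverse in one sentence.",
  provider   = "openai",
  n_tries    = 5,            # assumed: total attempts before giving up
  backoff    = 2,            # assumed: seconds to wait between retries
  .post_func = patient_post  # assumed: swaps in the longer-timeout POST
)
cat(res)
```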
Issues and PRs are welcome at https://github.com/knowusuboaky/chatLLM
MIT © Kwadwo Daddy Nyame Owusu - Boakye
Inspired by RAGFlowChainR, powered by httr and the R community. Enjoy!