
chatLLM


Overview

chatLLM is an R package that provides a unified, flexible interface for interacting with popular Large Language Model (LLM) providers such as OpenAI, Groq, and Anthropic. It lets you switch providers with a single argument, compose multi-message conversations, and simulate API calls for testing via the customizable .post_func parameter.

Features include:

- A single call_llm() interface for OpenAI, Groq, and Anthropic
- One-shot prompts as well as multi-message conversations
- Fine-grained generation controls (model, temperature, max_tokens, top_p, presence_penalty, frequency_penalty)
- Automatic retries with configurable n_tries and backoff
- A pluggable .post_func argument for mocking HTTP calls in tests

Installation

# Install from GitHub
if (!requireNamespace("remotes", quietly = TRUE)) install.packages("remotes")
remotes::install_github("knowusuboaky/chatLLM")
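
If a released version has been published to CRAN, you can install it from there instead:

# Install from CRAN (if a released version is available)
install.packages("chatLLM")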

Environment Setup

To use chatLLM with LLM providers, please ensure your API keys are set as environment variables so that sensitive credentials are not hardcoded in your scripts.

Example: Setting Environment Variables in R

# You can add these to your .Renviron file or run them once per session
Sys.setenv(OPENAI_API_KEY    = "your-openai-api-key")
Sys.setenv(GROQ_API_KEY      = "your-groq-api-key")
Sys.setenv(ANTHROPIC_API_KEY = "your-anthropic-api-key")

💡 Tip: To persist these keys across sessions, add them to your ~/.Renviron file (this file should be excluded from version control).
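
For reference, the corresponding lines in ~/.Renviron are plain KEY=value pairs (no quotes, no Sys.setenv()):

OPENAI_API_KEY=your-openai-api-key
GROQ_API_KEY=your-groq-api-key
ANTHROPIC_API_KEY=your-anthropic-api-key

After editing the file, restart R or run readRenviron("~/.Renviron") for the new values to take effect.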


Usage

1. Simple Prompt Call

response <- call_llm(
  prompt = "Who is messi?",
  provider = "openai",
  max_tokens = 50,
  n_tries = 3,
  backoff = 2
)
cat(response)
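
The n_tries and backoff arguments are not documented here, but from their names they presumably control the number of attempts and the pause (in seconds) between them. If every attempt fails, expect the call to raise a regular R error, which you can trap with base R's tryCatch; a minimal sketch:

# Sketch: trap a failed call so a script can continue (assumes call_llm
# signals an ordinary R error once its retries are exhausted)
response <- tryCatch(
  call_llm(prompt = "Who is Messi?", provider = "openai", n_tries = 3, backoff = 2),
  error = function(e) {
    message("LLM call failed: ", conditionMessage(e))
    NA_character_
  }
)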

2. Multi-Message Conversation

conv <- list(
  list(role = "system", content = "You are a helpful assistant."),
  list(role = "user", content = "Explain recursion in R.")
)
response <- call_llm(
  messages = conv,
  provider = "openai",
  max_tokens = 200,
  presence_penalty = 0.2,
  frequency_penalty = 0.1,
  top_p = 0.95
)
cat(response)
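
Because a conversation is just a list of role/content pairs, you can keep it going by appending the assistant's reply plus a new user message and calling call_llm() again. A minimal sketch, assuming the call above succeeded:

# Sketch: extend the conversation with the previous answer and a follow-up
conv <- c(conv, list(
  list(role = "assistant", content = response),
  list(role = "user",      content = "Now write a recursive factorial function in R.")
))
followup <- call_llm(messages = conv, provider = "openai", max_tokens = 200)
cat(followup)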

3. Using a Custom Fake POST Function for Testing

For testing or simulation, you can override the default HTTP POST call via the .post_func argument. The snippet below defines a fake POST function that mimics a real httr::response object:

# Helper that wraps a JSON string in a minimal httr-style response object
make_fake_response <- function(url, json) {
  structure(
    list(
      status_code = 200L,
      url         = url,
      headers     = c("Content-Type" = "application/json"),
      all_headers = list(list(
        status  = 200L,
        version = "HTTP/1.1",
        headers = c("Content-Type" = "application/json")
      )),
      content = charToRaw(json),
      date    = Sys.time()
    ),
    class = "response"
  )
}

# A fake POST function that returns a closer-to-real httr::response object
fake_post <- function(url, encode, body, req_headers, ...) {
  # If any message mentions "recursion", return a recursion-related answer
  if (!is.null(body$messages) &&
      any(grepl("recursion",
                unlist(lapply(body$messages, `[[`, "content")),
                ignore.case = TRUE))) {
    return(make_fake_response(
      url,
      '{"choices": [{"message": {"content": "Fake explanation: In R, recursion is a technique where a function calls itself. It helps with tree traversals, etc."}}]}'
    ))
  }
  # Otherwise, return a generic Lionel Messi answer
  make_fake_response(
    url,
    '{"choices": [{"message": {"content": "Fake answer: Lionel Messi is a renowned professional footballer."}}]}'
  )
}

# Using fake_post for a simple prompt:
response <- call_llm(
  prompt = "Who is messi?",
  provider = "openai",
  max_tokens = 50,
  n_tries = 3,
  backoff = 2,
  .post_func = fake_post
)
cat(response, "\n\n")

# And for a multi-message conversation:
conv <- list(
  list(role = "system", content = "You are a helpful assistant."),
  list(role = "user", content = "Explain recursion in R.")
)
response <- call_llm(
  messages = conv,
  provider = "openai",
  max_tokens = 200,
  presence_penalty = 0.2,
  frequency_penalty = 0.1,
  top_p = 0.95,
  .post_func = fake_post
)
cat(response)
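
The same fake_post stub slots naturally into unit tests. The sketch below uses the testthat package, which is not part of chatLLM's own examples, purely as an illustration:

# Sketch: a testthat unit test built on the fake_post stub above
library(testthat)

test_that("call_llm returns the stubbed recursion answer", {
  conv <- list(list(role = "user", content = "Explain recursion in R."))
  out  <- call_llm(messages = conv, provider = "openai", .post_func = fake_post)
  expect_match(out, "recursion", ignore.case = TRUE)
})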

LLM Support

chatLLM interfaces with every provider through the same call_llm() function. For example, switching to Groq’s API is as simple as changing the provider and model arguments:

call_llm(
  prompt = "Summarize the capital of France.",
  provider = "groq",
  model = "mixtral-8x7b-32768",
  temperature = 0.7,
  max_tokens = 200
)
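
Anthropic works the same way via provider = "anthropic". The model name below is illustrative, so substitute whichever Claude model your account can access:

call_llm(
  prompt     = "Summarize the capital of France.",
  provider   = "anthropic",
  model      = "claude-3-haiku-20240307",  # illustrative model name
  max_tokens = 200
)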

Issues

If you encounter any issues or have suggestions, please open an issue on the project's GitHub issue tracker.


License

MIT © Kwadwo Daddy Nyame Owusu Boakye


Acknowledgements

chatLLM draws inspiration from innovative projects such as RAGFlowChainR and benefits from the vibrant R community dedicated to open-source development. Enjoy chatting with your LLMs!
