hellmer enables sequential or parallel batch processing for chat models from ellmer.
Process multiple chat interactions with support for tool calling, structured data extraction, state persistence and recovery, configurable verbosity, automatic retry with backoff, timeout control, and sound notifications.
You can install hellmer from CRAN with:

```r
install.packages("hellmer")
```
ellmer will look for API keys in your environment variables. I recommend the usethis package to set up API keys in your .Renviron, such as OPENAI_API_KEY=your-key.

```r
usethis::edit_r_environ(scope = c("user", "project"))
```
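After editing .Renviron, restart R and confirm the key is visible to your session; a quick check using base R:

```r
# TRUE if the key is set in the current session
nzchar(Sys.getenv("OPENAI_API_KEY"))
```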
```r
library(hellmer)

chat <- chat_sequential(chat_openai,
  system_prompt = "Reply concisely, one sentence")
```
```r
prompts <- list(
  "What is R?",
  "Explain base R versus tidyverse",
  "Explain vectors, lists, and data frames",
  "How do environments work in R?",
  "Compare R and Python for data analysis",
  "Explain lazy evaluation in R",
  "What are R's apply functions?",
  "How do R packages work?",
  "Explain R's object-oriented programming systems.",
  "What are closures in R?",
  "Describe R memory management",
  "How does R handle missing values?",
  "Explain R's integration with C++",
  "Compare dplyr and data.table approaches",
  "What are R formulas and when to use them?"
)

result <- chat$batch(prompts)

result$progress()
result$texts()
result$chats()
```
```r
chat <- chat_future(chat_openai,
  system_prompt = "Reply concisely, one sentence")
```
When using parallel processing with chat_future, there's a trade-off between performance and safety:

- A chunk_size equal to the number of prompts results in 4-5x faster processing:

```r
chat$batch(prompts, chunk_size = length(prompts))
```

- A smaller chunk_size ensures state is saved more frequently, allowing recovery if something goes wrong (default: number of prompts / 10); a conservative sketch follows below.

chat_future isn't named chat_parallel because the latter will be included in ellmer (#143).
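For example, a more conservative run (the chunk size of 5 here is an arbitrary illustration) trades speed for more frequent checkpoints:

```r
# Smaller chunks mean state is saved after every 5 prompts
chat$batch(prompts, chunk_size = 5)
```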
Register and use tools/function calling:
```r
square_number <- function(num) num^2

chat$register_tool(tool(
  square_number,
  "Calculates the square of a given number",
  num = type_integer("The number to square")
))

prompts <- list(
  "What is the square of 3?",
  "Calculate the square of 5."
)
```
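The batch call itself is unchanged, and responses can be inspected with the same methods shown earlier:

```r
result <- chat$batch(prompts)
result$texts()
```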
Extract structured data using type specifications:
```r
type_sentiment <- type_object(
  "Extract sentiment scores",
  positive_score = type_number("Positive sentiment score, 0.0 to 1.0"),
  negative_score = type_number("Negative sentiment score, 0.0 to 1.0"),
  neutral_score = type_number("Neutral sentiment score, 0.0 to 1.0")
)

prompts <- list(
  "I love this product! It's amazing!",
  "This is okay, nothing special.",
  "Terrible experience, very disappointed."
)

result <- chat$batch(prompts, type_spec = type_sentiment)
structured_data <- result$structured_data()
```
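To tabulate the scores, a minimal sketch, assuming structured_data() returns one named list of scores per prompt:

```r
# Bind the per-prompt score lists into a data frame (base R)
scores <- do.call(rbind, lapply(structured_data, as.data.frame))
scores
```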
Batch processing automatically saves state and can resume interrupted operations:
```r
result <- chat$batch(prompts, state_path = "chat_state.rds")
```
If state_path is not defined, a temporary file will be created by default.
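A sketch of recovery, assuming an interrupted batch resumes when batch() is called again with the same state_path:

```r
# Re-running the same call picks up from the last saved state
# instead of reprocessing completed prompts
result <- chat$batch(prompts, state_path = "chat_state.rds")
```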
Control verbosity with the echo parameter (sequential only):

- "none": Silent operation with progress bar
- "text": Show chat responses only
- "all": Show both prompts and responses

```r
chat <- chat_sequential(
  chat_openai,
  echo = "none"
)
```
Automatically retry failed requests with backoff, which serves as a wide guardrail against errors, while ellmer and httr2 serve as a narrow guardrail against specific API limits:
```r
chat <- chat_sequential(
  chat_openai,         # ellmer chat model
  max_retries = 3,     # Maximum retry attempts
  initial_delay = 20,  # Initial delay in seconds
  max_delay = 60,      # Maximum delay between retries
  backoff_factor = 2   # Multiply delay by this factor after each retry
)
```
If a request fails, the code will:

1. Wait for initial_delay seconds
2. Increase the delay by backoff_factor after each subsequent failure (up to max_delay)
3. Retry until the request succeeds or max_retries is reached

If the code detects an authorization or API key issue, it will stop immediately.
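With the settings above, for example, the waits would be 20s, then 40s, then 60s: 20 × 2 = 40, and the next step, 40 × 2 = 80, is capped at max_delay = 60.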
The timeout parameter specifies the maximum time to wait for a response from the chat model for each prompt. However, this parameter is still limited by the timeouts propagated up from the chat model functions.
```r
chat <- chat_future(
  chat_openai,
  system_prompt = "Reply concisely, one sentence",
  timeout = 60
)
```
Toggle sound notifications on batch completion, interruption, and error:
```r
chat <- chat_sequential(
  chat_openai,
  beep = TRUE
)
```
The batch result provides several methods:

- texts(): Returns response texts in the same format as the input prompts (i.e., a list if prompts were provided as a list, or a character vector if prompts were provided as a vector)
- chats(): Returns a list of chat objects
- progress(): Returns processing statistics
- structured_data(): Returns extracted structured data (if type_spec is provided)

Want to use chat_openai() as a user-defined object instead of the chat_openai function? Of course you can! Learn more about the two methods and the default interface.
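A sketch of the two styles, assuming the batch constructors accept either the model function or a pre-configured ellmer chat object:

```r
# Pass the function; hellmer applies the arguments for you
chat <- chat_sequential(chat_openai,
  system_prompt = "Reply concisely, one sentence")

# Or pass a user-defined chat object configured directly with ellmer
chat <- chat_sequential(chat_openai(
  system_prompt = "Reply concisely, one sentence"))
```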