Small experiment with LLMR

knitr::opts_chunk$set(
  collapse = TRUE,
  comment = "#>",
  eval = identical(tolower(Sys.getenv("LLMR_RUN_VIGNETTES", "false")), "true")
)
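
The chunks below evaluate only when the LLMR_RUN_VIGNETTES environment variable is set to "true" (and the relevant provider API keys are available). To opt in before building the vignette, for example:

Sys.setenv(LLMR_RUN_VIGNETTES = "true")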

We compare three configurations on two prompts, first without and then with structured output. When choosing models, note that at the time of writing Gemini models do not guarantee schema-conforming output and are more likely to run into trouble, so the third configuration below uses a Groq-hosted open-weight model instead.

library(LLMR)
library(dplyr)
cfg_openai <- llm_config("openai",    "gpt-5-nano")
cfg_cld    <- llm_config("anthropic", "claude-sonnet-4-20250514", max_tokens = 512)  # cap output at 512 tokens
cfg_gem    <- llm_config("groq",      "openai/gpt-oss-20b")                          # Groq-hosted open-weight model

Build a factorial design

experiments <- build_factorial_experiments(
  configs       = list(cfg_openai, cfg_cld, cfg_gem),
  user_prompts  = c("Summarize in one sentence: The Apollo program.",
                    "List two benefits of green tea."),
  system_prompts = c("Be concise.")
)
experiments
#> # A tibble: 6 × 6
#>   config     messages  config_label        user_prompt_label system_prompt_label
#>   <list>     <list>    <chr>               <chr>             <chr>              
#> 1 <llm_cnfg> <chr [2]> openai_gpt-5-nano   user_1            system_1           
#> 2 <llm_cnfg> <chr [2]> openai_gpt-5-nano   user_2            system_1           
#> 3 <llm_cnfg> <chr [2]> anthropic_claude-s… user_1            system_1           
#> 4 <llm_cnfg> <chr [2]> anthropic_claude-s… user_2            system_1           
#> 5 <llm_cnfg> <chr [2]> groq_openai/gpt-os… user_1            system_1           
#> 6 <llm_cnfg> <chr [2]> groq_openai/gpt-os… user_2            system_1           
#> # ℹ 1 more variable: repetition <int>
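
The design fully crosses the three configurations with the two user prompts under a single system prompt, giving 3 × 2 = 6 rows. A quick sanity check with plain dplyr (nothing LLMR-specific):

count(experiments, config_label)   # expect n = 2 for each of the three configurations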

Run unstructured

setup_llm_parallel(workers = 10)
res_unstructured <- call_llm_par(experiments, progress = TRUE)
#> [2025-08-26 01:03:04.059187] LLMR Error: LLM API request failed.
#> HTTP status: 529
#> Reason: Overloaded
#> Tip: check model params for provider/API version.
#> [2025-08-26 01:03:04.243258] LLMR Error: LLM API request failed.
#> HTTP status: 529
#> Reason: Overloaded
#> Tip: check model params for provider/API version.
reset_llm_parallel()
res_unstructured |>
  select(provider, model, user_prompt_label, response_text, finish_reason) |>
  head()
#> # A tibble: 6 × 5
#>   provider  model                  user_prompt_label response_text finish_reason
#>   <chr>     <chr>                  <chr>             <chr>         <chr>        
#> 1 openai    gpt-5-nano             user_1            "NASA's Apol… stop         
#> 2 openai    gpt-5-nano             user_2            "- Rich in a… stop         
#> 3 anthropic claude-sonnet-4-20250… user_1             <NA>         error:server 
#> 4 anthropic claude-sonnet-4-20250… user_2             <NA>         error:server 
#> 5 groq      openai/gpt-oss-20b     user_1            "The Apollo … stop         
#> 6 groq      openai/gpt-oss-20b     user_2            "- **Rich in… stop
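
The 529 "Overloaded" messages above are transient server-side failures; the affected Anthropic rows come back with an NA response and finish_reason error:server. A minimal recovery sketch, assuming the result tibble retains the original config and messages columns so the failed rows can be resubmitted directly:

# Tally outcomes per provider.
count(res_unstructured, provider, finish_reason)

# Re-run only the rows that failed (assumption: config/messages are retained in the result).
failed <- filter(res_unstructured, is.na(response_text))
if (nrow(failed) > 0) {
  setup_llm_parallel(workers = 10)
  res_retry <- call_llm_par(select(failed, config, messages), progress = TRUE)
  reset_llm_parallel()
}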

Structured version

schema <- list(
  type = "object",
  properties = list(
    answer = list(type="string"),
    keywords = list(type="array", items = list(type="string"))
  ),
  required = list("answer","keywords"),
  additionalProperties = FALSE
)
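
For reference, a payload conforming to this schema is simply a JSON object with an answer string and a keywords array of strings; an illustrative, hand-written example (not model output):

# Illustration only: a schema-conforming object serialized with jsonlite.
jsonlite::toJSON(
  list(answer = "Green tea is rich in antioxidants.",
       keywords = c("antioxidants", "catechins")),
  auto_unbox = TRUE
)
#> {"answer":"Green tea is rich in antioxidants.","keywords":["antioxidants","catechins"]}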

experiments2 <- experiments
experiments2$config <- lapply(experiments2$config, enable_structured_output, schema = schema)

setup_llm_parallel(workers = 10)
res_structured <- call_llm_par_structured(experiments2, .fields = c("answer", "keywords"))
reset_llm_parallel()

res_structured |>
  select(provider, model, user_prompt_label, structured_ok, answer) |>
  head()
#> # A tibble: 6 × 5
#>   provider  model                    user_prompt_label structured_ok answer     
#>   <chr>     <chr>                    <chr>             <lgl>         <chr>      
#> 1 openai    gpt-5-nano               user_1            TRUE          "The Apoll…
#> 2 openai    gpt-5-nano               user_2            TRUE          "1) Rich i…
#> 3 anthropic claude-sonnet-4-20250514 user_1            TRUE          "The Apoll…
#> 4 anthropic claude-sonnet-4-20250514 user_2            TRUE          "Green tea…
#> 5 groq      openai/gpt-oss-20b       user_1            TRUE          "The Apoll…
#> 6 groq      openai/gpt-oss-20b       user_2            TRUE          "Green tea…
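
With .fields = c("answer", "keywords"), the keywords field is extracted alongside answer; assuming it arrives as a list-column of character vectors, tidyr can flatten it to one row per keyword:

# Flatten the keywords list-column (assumes tidyr is installed and
# that keywords is a list-column of character vectors).
library(tidyr)
res_structured |>
  select(provider, user_prompt_label, keywords) |>
  unnest_longer(keywords)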
