Parallel API calls: Multiple Configs, Fixed Message
Source:R/LLM_parallel_utils.R
call_llm_compare.Rd
Compares different configurations (models, providers, settings) using the same message. Useful for benchmarking across models or providers.
This function requires the parallel environment to be set up with setup_llm_parallel().
Value
A tibble with columns: config_index (metadata), provider, model, all varying model parameters, response_text, raw_response_json, success, error_message.
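The columns above can be worked with using ordinary data-frame subsetting. The sketch below builds a hypothetical result object by hand (the values are illustrative, not real API output) and keeps only the successful calls:

```r
# Hypothetical example of the returned structure (mock data, not real responses):
results <- data.frame(
  config_index  = 1:2,
  provider      = c("openai", "openai"),
  model         = c("gpt-4o-mini", "gpt-4.1-nano"),
  response_text = c("Quantum computing uses qubits...", NA),
  success       = c(TRUE, FALSE),
  error_message = c(NA, "rate limit exceeded"),
  stringsAsFactors = FALSE
)

# Keep only successful calls and the columns of interest:
ok <- results[results$success, c("provider", "model", "response_text")]
```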
Parallel Workflow
All parallel functions require the future backend to be configured. The recommended workflow is:
1. Call setup_llm_parallel() once at the start of your script.
2. Run one or more parallel experiments (e.g., call_llm_broadcast()).
3. Call reset_llm_parallel() at the end to restore sequential processing.
Examples
if (FALSE) { # \dontrun{
# Compare different models
config1 <- llm_config(provider = "openai", model = "gpt-4o-mini")
config2 <- llm_config(provider = "openai", model = "gpt-4.1-nano")
configs_list <- list(config1, config2)
message <- "Explain quantum computing"
setup_llm_parallel(workers = 4, verbose = TRUE)
results <- call_llm_compare(configs_list, message)
reset_llm_parallel(verbose = TRUE)
} # }