Parallel API calls: Fixed Config, Multiple Messages
Source: R/LLM_parallel_utils.R, call_llm_broadcast.Rd
Broadcasts different messages using the same configuration in parallel. Useful for batch processing many prompts with consistent settings. The parallel environment must be configured first with setup_llm_parallel().
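Each element of messages is a complete message list (a list of role/content pairs), so a plain character vector of prompts can be wrapped into the expected structure with lapply. A minimal sketch (the prompts are illustrative):

prompts <- c("What is 2+2?", "What is 3*5?", "What is 10/2?")
# Wrap each prompt as a single-turn user message, matching the structure
# shown in the Examples section below
messages <- lapply(prompts, function(p) list(list(role = "user", content = p)))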
Value
A tibble with one row per input message and the columns message_index (metadata), provider, model, all model parameters from the config, response_text, raw_response_json, success, and error_message.
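The success flag can be used to separate completed calls from failures. A small sketch, assuming the dplyr package is available:

# Keep the rows whose API call succeeded and inspect the answers
ok <- dplyr::filter(results, success)
ok$response_text
# Failed rows carry the reason in error_message
failed <- dplyr::filter(results, !success)
failed$error_message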
Parallel Workflow
All parallel functions require the future backend to be configured.
The recommended workflow is:

1. Call setup_llm_parallel() once at the start of your script.
2. Run one or more parallel experiments (e.g., call_llm_broadcast()), as sketched below.
3. Call reset_llm_parallel() at the end to restore sequential processing.
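A minimal skeleton of this pattern (config, messages, and the second batch more_messages are placeholders for objects built as in the Examples below; the worker count shown is just one reasonable choice, not a requirement):

setup_llm_parallel(workers = max(1, parallel::detectCores() - 1), verbose = TRUE)
# Any number of parallel experiments can run while the backend is active
results_a <- call_llm_broadcast(config, messages)
results_b <- call_llm_broadcast(config, more_messages)  # more_messages: a second, hypothetical batch
reset_llm_parallel(verbose = TRUE)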
Examples
if (FALSE) { # \dontrun{
# Broadcast different questions with a single shared configuration
config <- llm_config(provider = "openai", model = "gpt-4.1-nano")

messages <- list(
  list(list(role = "user", content = "What is 2+2?")),
  list(list(role = "user", content = "What is 3*5?")),
  list(list(role = "user", content = "What is 10/2?"))
)

# Configure the parallel backend, run the batch, then restore sequential processing
setup_llm_parallel(workers = 4, verbose = TRUE)
results <- call_llm_broadcast(config, messages)
reset_llm_parallel(verbose = TRUE)
} # }