Configure and Call

Manages interactions with LLMs across providers. Provides functions for configuration, initialization, response processing, retries, and fallbacks, exposing a unified interface for LLM calls that works with OpenAI, Anthropic, Amazon Bedrock, Google, and others.

configure_llm_backend(provider, model, **kwargs)

Configures the LLM backend for subsequent calls. Call this function before using any LLM integrations.

get_llm_config()

Retrieves the LLM configuration from environment variables.
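Since `get_llm_config` reads from environment variables, a natural pairing is for `configure_llm_backend` to write them. The following is a minimal sketch of that mechanism, not the library's actual implementation; the `LLM_` variable prefix and the string coercion of extra options are assumptions.

```python
import os

def configure_llm_backend(provider, model, **kwargs):
    # Sketch: persist the configuration as environment variables so that
    # get_llm_config can retrieve it later (assumed storage mechanism).
    os.environ["LLM_PROVIDER"] = provider
    os.environ["LLM_MODEL"] = model
    for key, value in kwargs.items():
        os.environ[f"LLM_{key.upper()}"] = str(value)

def get_llm_config():
    # Collect every LLM_* variable back into a plain dict,
    # lowercasing keys and stripping the prefix.
    prefix = "LLM_"
    return {
        key[len(prefix):].lower(): value
        for key, value in os.environ.items()
        if key.startswith(prefix)
    }

configure_llm_backend("openai", "gpt-4o", temperature=0.2)
config = get_llm_config()
```

Note that environment variables only hold strings, so non-string options (like `temperature`) come back as strings and must be parsed by the caller.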

call_llm(messages, **kwargs)

Calls the configured LLM with the given messages and parameters.
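The "unified interface" described above can be pictured as a dispatch table keyed by provider. This sketch uses stubbed backends and an inlined config dict purely for illustration; a real implementation would read the stored configuration and call the provider SDKs.

```python
def call_llm(messages, **kwargs):
    # Sketch of a unified interface: look up the configured provider and
    # dispatch to a provider-specific client. The clients here are stubs
    # that echo the last user message; real ones would call the
    # OpenAI/Anthropic/... APIs.
    config = {"provider": "openai", "model": "gpt-4o"}  # stand-in for get_llm_config()
    backends = {
        "openai": lambda msgs, **kw: f"[openai:{config['model']}] echo: {msgs[-1]['content']}",
        "anthropic": lambda msgs, **kw: f"[anthropic:{config['model']}] echo: {msgs[-1]['content']}",
    }
    backend = backends.get(config["provider"])
    if backend is None:
        raise ValueError(f"Unsupported provider: {config['provider']}")
    return backend(messages, **kwargs)

reply = call_llm([{"role": "user", "content": "ping"}])
```

Keeping the dispatch behind one function means callers never branch on the provider themselves; switching backends is a configuration change, not a code change.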

retry_call_llm(messages, process_response, ...)

A retry wrapper around call_llm that supports custom response processing and failure handling.
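The retry-with-processing pattern can be sketched as follows. Parameter names beyond `messages` and `process_response` (`max_retries`, the injectable `call`) are assumptions for illustration, not the library's actual signature.

```python
def retry_call_llm(messages, process_response, max_retries=3, call=None):
    # Sketch of a retry wrapper: call the LLM, hand the raw response to
    # process_response, and retry when processing raises. `call` is
    # injectable here so the sketch is testable without a live backend.
    last_error = None
    for attempt in range(max_retries):
        raw = call(messages)  # stand-in for call_llm(messages, ...)
        try:
            return process_response(raw)
        except Exception as err:  # failure handling: record and retry
            last_error = err
    raise RuntimeError(f"All {max_retries} attempts failed") from last_error

# Usage with a flaky fake backend that returns parseable output
# only on the second attempt.
attempts = []
def fake_call(messages):
    attempts.append(1)
    return "42" if len(attempts) > 1 else "not a number"

result = retry_call_llm(
    [{"role": "user", "content": "Answer with a number"}],
    process_response=lambda raw: int(raw),  # raises ValueError on bad output
    call=fake_call,
)
```

Separating the raw call from `process_response` lets validation (JSON parsing, schema checks) drive the retry decision rather than only transport errors.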