Configure and Call
Manages interactions with LLMs across providers. It provides functional configuration, initialization, and response-processing, retry, and fallback mechanisms for making LLM calls through a unified interface that works with OpenAI, Anthropic, Amazon Bedrock, Google, and other providers.
Retrieves the LLM configuration from environment variables. Call this function before using any LLM integrations.
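A minimal sketch of what reading the configuration from environment variables could look like. The function name `get_llm_config`, the variable names, and the defaults are illustrative assumptions, not the library's actual API:

```python
import os

# Hypothetical sketch -- the real function, environment variable names,
# and defaults may differ from what this library actually uses.
def get_llm_config():
    """Read LLM settings from environment variables, falling back to defaults."""
    return {
        "provider": os.environ.get("LLM_PROVIDER", "openai"),
        "model": os.environ.get("LLM_MODEL", "gpt-4o-mini"),
        "api_key": os.environ.get("LLM_API_KEY"),  # None if unset
        "temperature": float(os.environ.get("LLM_TEMPERATURE", "0.7")),
    }
```

Centralizing configuration this way lets the same code run against different providers by changing only the environment, with no code edits.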
Calls the configured LLM model with the given messages and parameters.
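One way such a unified call interface can be structured is a registry that dispatches chat-style messages to a provider-specific backend. This is a sketch under assumed names (`call_llm`, `register_provider`, `PROVIDERS`), not the library's confirmed implementation:

```python
# Hypothetical sketch of a unified multi-provider call interface.
# Each backend is a callable: fn(model, messages, **params) -> str.
PROVIDERS = {}

def register_provider(name, fn):
    """Register a provider backend under a name (e.g. 'openai')."""
    PROVIDERS[name] = fn

def call_llm(messages, provider="openai", model="gpt-4o-mini", **params):
    """Route a chat-style request to the configured provider backend."""
    try:
        backend = PROVIDERS[provider]
    except KeyError:
        raise ValueError(f"Unknown provider: {provider!r}")
    return backend(model, messages, **params)
```

Because every backend receives the same `(model, messages, **params)` shape, caller code stays identical regardless of which provider handles the request.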
A retry wrapper for call_llm that allows custom response processing and failure handling.
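The retry wrapper described above can be sketched as follows. The signature, parameter names, and exponential-backoff policy here are assumptions for illustration; the library's actual wrapper may differ:

```python
import time

def call_llm_with_retry(call, messages, process=lambda r: r,
                        on_failure=None, retries=3, backoff=0.5):
    """Retry call(messages) up to `retries` times with exponential backoff.

    `process` validates or transforms each response; raising inside it
    also triggers a retry. If every attempt fails, `on_failure(last_error)`
    is returned when provided, otherwise the last error is re-raised.
    """
    last_err = None
    for attempt in range(retries):
        try:
            return process(call(messages))
        except Exception as err:
            last_err = err
            time.sleep(backoff * (2 ** attempt))
    if on_failure is not None:
        return on_failure(last_err)
    raise last_err
```

Routing validation through `process` means a syntactically valid but unusable response (e.g. malformed JSON) is retried exactly like a transport error, while `on_failure` gives a single place to supply a fallback answer.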