LLM Management
LLM Configuration Module
This module handles the configuration and use of LLMs from different providers through a unified interface. It manages provider configurations, initialization strategies, and rate limiting for each supported LLM provider.
The module supports dynamic configuration of various LLM backends, including:

- OpenAI
- Anthropic
- AWS Bedrock
- Google AI
- Azure OpenAI
- Azure ML endpoints
- Cohere
- HuggingFace
- Vertex AI
- Ollama
Key components:

- Provider configuration using dataclasses
- Provider-specific initialization strategies
- Abstract base classes for provider initialization
- LLM calling with:
  - retry logic
  - rate limiting
  - customizable response processing and failure handling
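To make the components above concrete, here is a minimal, self-contained sketch of how dataclass-based provider configuration, an abstract initializer base class, and retry-wrapped LLM calling can fit together. All names here (`ProviderConfig`, `ProviderInitializer`, `call_llm_with_retry`, `EchoInitializer`) are hypothetical illustrations, not anndict's actual internals:

```python
# Illustrative sketch only -- these class and function names are
# hypothetical, not anndict's real internal API.
from abc import ABC, abstractmethod
from dataclasses import dataclass, field
import time


@dataclass
class ProviderConfig:
    """Configuration for one LLM provider (hypothetical example)."""
    provider: str
    model: str
    requests_per_minute: int = 60
    extra: dict = field(default_factory=dict)


class ProviderInitializer(ABC):
    """Abstract base class: each provider supplies its own init strategy."""

    @abstractmethod
    def initialize(self, config: ProviderConfig):
        """Return a callable that sends a prompt to the provider."""


class EchoInitializer(ProviderInitializer):
    """Stand-in 'provider' that just echoes the prompt back."""

    def initialize(self, config: ProviderConfig):
        return lambda prompt: f"[{config.model}] {prompt}"


def call_llm_with_retry(llm, prompt, retries=3, delay=0.01):
    """Call the LLM, retrying on failure; real code would catch
    provider-specific errors and honor rate limits between attempts."""
    for attempt in range(retries):
        try:
            return llm(prompt)
        except Exception:
            if attempt == retries - 1:
                raise
            time.sleep(delay)


config = ProviderConfig(provider="echo", model="demo-model")
llm = EchoInitializer().initialize(config)
print(call_llm_with_retry(llm, "hello"))  # -> [demo-model] hello
```

The dataclass keeps per-provider settings declarative, while the abstract base class lets each backend define its own client construction without the calling code changing.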
This module is used internally by AnnDictionary and shouldn't generally be imported directly by end users. Instead, use the main package interface:
```python
import anndict as adt

adt.configure_llm_backend(...)
```
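A fuller call might look like the following. Note that the specific parameter names and values shown (`provider`, `model`, `api_key`) are assumptions for illustration; consult the `configure_llm_backend` API reference for the exact signature:

```python
import anndict as adt

# Hypothetical example -- parameter names and values are assumptions,
# not a verified signature; see the anndict API docs.
adt.configure_llm_backend(
    provider='anthropic',
    model='claude-3-5-sonnet-20240620',
    api_key='my-api-key',
)
```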