anndict.retry_call_llm

anndict.retry_call_llm(messages, process_response, failure_handler, *, max_attempts=5, call_llm_kwargs=None, process_response_kwargs=None, failure_handler_kwargs=None)

A retry wrapper for call_llm that allows custom response processing and failure handling.

Parameters:
messages list[dict[str, str]]

List of message dictionaries, where each dictionary contains:

  • 'role' (str): the role of the message sender ('system', 'user', or 'assistant')

  • 'content' (str): the content of the message

process_response callable

Function to process the LLM response. Must accept the response string as its first argument. If processing fails, the retry logic is triggered.

failure_handler callable

Function called after all retry attempts are exhausted. Should handle the complete failure case.

max_attempts int (default: 5)

Maximum number of retry attempts.

call_llm_kwargs dict[str, Any] | None (default: None)

Keyword arguments passed to call_llm(). If it contains 'temperature', the value is adjusted on retries:

  • Attempts 1-2: temperature = 0

  • Attempts 3+: temperature = (attempt - 2) * 0.025

process_response_kwargs dict[str, Any] | None (default: None)

Keyword arguments passed to the process_response function.

failure_handler_kwargs dict[str, Any] | None (default: None)

Keyword arguments passed to the failure_handler function.
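The temperature schedule described for call_llm_kwargs can be sketched as a small helper. The function name here is hypothetical, used only to illustrate the documented formula:

```python
def temperature_for_attempt(attempt: int) -> float:
    # Attempts 1-2 stay deterministic (temperature 0);
    # later attempts warm up in 0.025 increments.
    if attempt <= 2:
        return 0.0
    return (attempt - 2) * 0.025

# With max_attempts=5, the schedule is roughly 0, 0, 0.025, 0.05, 0.075.
schedule = [temperature_for_attempt(a) for a in range(1, 6)]
```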

Return type:

Any

Returns:

The processed result if successful, or the failure handler’s return value.
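A minimal, self-contained sketch of how the wrapper could behave. The fake_call_llm stub stands in for the real call_llm() so the retry path can be exercised without an API key; the actual anndict implementation may differ in details:

```python
import json

# Hypothetical stub: fails to produce valid JSON on the first call,
# then succeeds, so the retry loop runs exactly twice.
calls = {"n": 0}

def fake_call_llm(messages, **kwargs):
    calls["n"] += 1
    return "not json" if calls["n"] == 1 else '{"label": "T cell"}'

def retry_call_llm_sketch(messages, process_response, failure_handler, *,
                          call_llm=fake_call_llm, max_attempts=5,
                          call_llm_kwargs=None, process_response_kwargs=None,
                          failure_handler_kwargs=None):
    """Sketch of the documented retry wrapper (not the library code)."""
    call_llm_kwargs = dict(call_llm_kwargs or {})
    process_response_kwargs = process_response_kwargs or {}
    failure_handler_kwargs = failure_handler_kwargs or {}
    for attempt in range(1, max_attempts + 1):
        # Apply the documented temperature schedule, if a temperature was given.
        if "temperature" in call_llm_kwargs:
            call_llm_kwargs["temperature"] = (
                0 if attempt <= 2 else (attempt - 2) * 0.025
            )
        response = call_llm(messages, **call_llm_kwargs)
        try:
            return process_response(response, **process_response_kwargs)
        except Exception:
            continue  # processing failed: retry
    return failure_handler(**failure_handler_kwargs)

messages = [
    {"role": "system", "content": "Reply with JSON."},
    {"role": "user", "content": "Annotate this cluster."},
]
result = retry_call_llm_sketch(messages, json.loads,
                               lambda: {"label": "unknown"})
# result == {"label": "T cell"}, reached on the second attempt
```

Here json.loads is the process_response callable: it raises on the first (invalid) response, which triggers a retry, and the second response parses cleanly.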

See also

call_llm()

The wrapped LLM call function.