clients
Attribute PROVIDERS
Type: tuple[Provider, ...]
The tuple of supported provider names, computed as `get_args(Provider)`.
Class AnthropicClient
The client for the Anthropic Messages API.
Bases: BaseClient[AnthropicModelId, Anthropic]
Attributes
| Name | Type | Description |
|---|---|---|
| client | Anthropic | The underlying synchronous Anthropic SDK client, constructed as `Anthropic(api_key=api_key, base_url=base_url)`. |
| async_client | AsyncAnthropic | The underlying asynchronous Anthropic SDK client, constructed as `AsyncAnthropic(api_key=api_key, base_url=base_url)`. |
Function call
Generate an llm.Response by synchronously calling the Anthropic Messages API.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
| model_id | AnthropicModelId | Model identifier to use. |
| messages | Sequence[Message] | Messages to send to the LLM. |
| tools= None | Sequence[Tool] | Toolkit | None | Optional tools that the model may invoke. |
| format= None | type[FormattableT] | Format[FormattableT] | None | Optional response format specifier. |
| params= {} | Unpack[Params] | - |
Returns
| Type | Description |
|---|---|
| Response | Response[FormattableT] | An `llm.Response` object containing the LLM-generated content. |
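For illustration, a minimal synchronous call might look like the following sketch. Only the `call(...)` signature comes from this reference; the import path, constructor arguments, and dict-shaped message are assumptions that may differ from the actual API.

```python
# Hedged sketch: import path, constructor args, and message shape are assumptions;
# only the call(...) signature is taken from this reference.
from mirascope import llm

client = llm.clients.AnthropicClient(api_key="sk-ant-...")  # assumed constructor
response = client.call(
    model_id="claude-sonnet-4-5",  # any registered AnthropicModelId
    messages=[{"role": "user", "content": "Summarize RFC 2119 in one sentence."}],
)
print(response)  # an llm.Response containing the generated content
```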
Function context_call
Generate an llm.ContextResponse by synchronously calling the Anthropic Messages API.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
| ctx | Context[DepsT] | Context object with dependencies for tools. |
| model_id | AnthropicModelId | Model identifier to use. |
| messages | Sequence[Message] | Messages to send to the LLM. |
| tools= None | Sequence[Tool | ContextTool[DepsT]] | ContextToolkit[DepsT] | None | Optional tools that the model may invoke. |
| format= None | type[FormattableT] | Format[FormattableT] | None | Optional response format specifier. |
| params= {} | Unpack[Params] | - |
Returns
| Type | Description |
|---|---|
| ContextResponse[DepsT, None] | ContextResponse[DepsT, FormattableT] | An `llm.ContextResponse` object containing the LLM-generated content. |
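A hedged sketch of a context call with typed dependencies. Only the `context_call(...)` signature is documented above; the `Context` constructor and the way tools read dependencies are assumptions for illustration.

```python
from dataclasses import dataclass

from mirascope import llm

@dataclass
class Deps:
    user_name: str

# Assumed: Context wraps dependencies that ContextTool implementations can read.
ctx = llm.Context(deps=Deps(user_name="Ada"))

client = llm.clients.AnthropicClient(api_key="sk-ant-...")
response = client.context_call(
    ctx=ctx,
    model_id="claude-sonnet-4-5",
    messages=[{"role": "user", "content": "Greet the current user by name."}],
    tools=None,  # ContextTool[Deps] instances would receive ctx here
)
```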
Function call_async
Generate an llm.AsyncResponse by asynchronously calling the Anthropic Messages API.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
| model_id | AnthropicModelId | Model identifier to use. |
| messages | Sequence[Message] | Messages to send to the LLM. |
| tools= None | Sequence[AsyncTool] | AsyncToolkit | None | Optional tools that the model may invoke. |
| format= None | type[FormattableT] | Format[FormattableT] | None | Optional response format specifier. |
| params= {} | Unpack[Params] | - |
Returns
| Type | Description |
|---|---|
| AsyncResponse | AsyncResponse[FormattableT] | An `llm.AsyncResponse` object containing the LLM-generated content. |
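The async variant is awaited but otherwise mirrors `call`; a sketch under the same assumptions as the earlier example:

```python
import asyncio

from mirascope import llm

async def main() -> None:
    client = llm.clients.AnthropicClient(api_key="sk-ant-...")
    response = await client.call_async(
        model_id="claude-sonnet-4-5",
        messages=[{"role": "user", "content": "Name three prime numbers."}],
    )
    print(response)

asyncio.run(main())
```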
Function context_call_async
Generate an llm.AsyncContextResponse by asynchronously calling the Anthropic Messages API.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
| ctx | Context[DepsT] | Context object with dependencies for tools. |
| model_id | AnthropicModelId | Model identifier to use. |
| messages | Sequence[Message] | Messages to send to the LLM. |
| tools= None | Sequence[AsyncTool | AsyncContextTool[DepsT]] | AsyncContextToolkit[DepsT] | None | Optional tools that the model may invoke. |
| format= None | type[FormattableT] | Format[FormattableT] | None | Optional response format specifier. |
| params= {} | Unpack[Params] | - |
Returns
| Type | Description |
|---|---|
| AsyncContextResponse[DepsT, None] | AsyncContextResponse[DepsT, FormattableT] | An `llm.AsyncContextResponse` object containing the LLM-generated content. |
Function stream
Generate an llm.StreamResponse by synchronously streaming from the Anthropic Messages API.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
| model_id | AnthropicModelId | Model identifier to use. |
| messages | Sequence[Message] | Messages to send to the LLM. |
| tools= None | Sequence[Tool] | Toolkit | None | Optional tools that the model may invoke. |
| format= None | type[FormattableT] | Format[FormattableT] | None | Optional response format specifier. |
| params= {} | Unpack[Params] | - |
Returns
| Type | Description |
|---|---|
| StreamResponse | StreamResponse[FormattableT] | An `llm.StreamResponse` object for iterating over the LLM-generated content. |
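The reference describes `StreamResponse` as an object "for iterating over the LLM-generated content", which suggests usage like the following sketch; the chunk type and how chunks print are assumptions.

```python
from mirascope import llm

client = llm.clients.AnthropicClient(api_key="sk-ant-...")
stream = client.stream(
    model_id="claude-sonnet-4-5",
    messages=[{"role": "user", "content": "Tell a one-paragraph story."}],
)
for chunk in stream:  # assumed: StreamResponse is a sync iterable of content chunks
    print(chunk, end="", flush=True)
```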
Function context_stream
Generate an llm.ContextStreamResponse by synchronously streaming from the Anthropic Messages API.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
| ctx | Context[DepsT] | Context object with dependencies for tools. |
| model_id | AnthropicModelId | Model identifier to use. |
| messages | Sequence[Message] | Messages to send to the LLM. |
| tools= None | Sequence[Tool | ContextTool[DepsT]] | ContextToolkit[DepsT] | None | Optional tools that the model may invoke. |
| format= None | type[FormattableT] | Format[FormattableT] | None | Optional response format specifier. |
| params= {} | Unpack[Params] | - |
Returns
| Type | Description |
|---|---|
| ContextStreamResponse[DepsT, None] | ContextStreamResponse[DepsT, FormattableT] | An `llm.ContextStreamResponse` object for iterating over the LLM-generated content. |
Function stream_async
Generate an llm.AsyncStreamResponse by asynchronously streaming from the Anthropic Messages API.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
| model_id | AnthropicModelId | Model identifier to use. |
| messages | Sequence[Message] | Messages to send to the LLM. |
| tools= None | Sequence[AsyncTool] | AsyncToolkit | None | Optional tools that the model may invoke. |
| format= None | type[FormattableT] | Format[FormattableT] | None | Optional response format specifier. |
| params= {} | Unpack[Params] | - |
Returns
| Type | Description |
|---|---|
| AsyncStreamResponse | AsyncStreamResponse[FormattableT] | An `llm.AsyncStreamResponse` object for asynchronously iterating over the LLM-generated content. |
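An async-iteration sketch. Whether `stream_async` must itself be awaited before iterating is not specified in this reference, so that detail is an assumption.

```python
import asyncio

from mirascope import llm

async def main() -> None:
    client = llm.clients.AnthropicClient(api_key="sk-ant-...")
    # Assumed awaitable; the method may instead return the AsyncStreamResponse directly.
    stream = await client.stream_async(
        model_id="claude-sonnet-4-5",
        messages=[{"role": "user", "content": "Count from one to five."}],
    )
    async for chunk in stream:  # assumed: AsyncStreamResponse supports async iteration
        print(chunk, end="", flush=True)

asyncio.run(main())
```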
Function context_stream_async
Generate an llm.AsyncContextStreamResponse by asynchronously streaming from the Anthropic Messages API.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
| ctx | Context[DepsT] | Context object with dependencies for tools. |
| model_id | AnthropicModelId | Model identifier to use. |
| messages | Sequence[Message] | Messages to send to the LLM. |
| tools= None | Sequence[AsyncTool | AsyncContextTool[DepsT]] | AsyncContextToolkit[DepsT] | None | Optional tools that the model may invoke. |
| format= None | type[FormattableT] | Format[FormattableT] | None | Optional response format specifier. |
| params= {} | Unpack[Params] | - |
Returns
| Type | Description |
|---|---|
| AsyncContextStreamResponse[DepsT, None] | AsyncContextStreamResponse[DepsT, FormattableT] | An `llm.AsyncContextStreamResponse` object for asynchronously iterating over the LLM-generated content. |
Attribute AnthropicModelId
Type: TypeAlias
The Anthropic model ids registered with Mirascope.
Class BaseClient
Base abstract client for provider-specific implementations.
This class defines explicit methods for each type of call, eliminating the need for complex overloads in provider implementations.
Bases: Generic[ModelIdT, ProviderClientT], ABC
Attributes
| Name | Type | Description |
|---|---|---|
| client | ProviderClientT | - |
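Because every provider client exposes these same explicit methods, helper code can be written once against `BaseClient` and reused across providers. A hedged sketch, where the message shape and response stringification are assumptions:

```python
from mirascope import llm

def ask(client: llm.clients.BaseClient, model_id: str, question: str) -> str:
    """Provider-agnostic helper: accepts any BaseClient subclass."""
    response = client.call(
        model_id=model_id,
        messages=[{"role": "user", "content": question}],
    )
    return str(response)  # assumed: Response renders its text content via str()
```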
Function call
Generate an llm.Response by synchronously calling this client's LLM provider.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
| model_id | ModelIdT | Model identifier to use. |
| messages | Sequence[Message] | Messages to send to the LLM. |
| tools= None | Sequence[Tool] | Toolkit | None | Optional tools that the model may invoke. |
| format= None | type[FormattableT] | Format[FormattableT] | None | Optional response format specifier. |
| params= {} | Unpack[Params] | - |
Returns
| Type | Description |
|---|---|
| Response | Response[FormattableT] | An `llm.Response` object containing the LLM-generated content. |
Function context_call
Generate an llm.ContextResponse by synchronously calling this client's LLM provider.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
| ctx | Context[DepsT] | Context object with dependencies for tools. |
| model_id | ModelIdT | Model identifier to use. |
| messages | Sequence[Message] | Messages to send to the LLM. |
| tools= None | Sequence[Tool | ContextTool[DepsT]] | ContextToolkit[DepsT] | None | Optional tools that the model may invoke. |
| format= None | type[FormattableT] | Format[FormattableT] | None | Optional response format specifier. |
| params= {} | Unpack[Params] | - |
Returns
| Type | Description |
|---|---|
| ContextResponse[DepsT, None] | ContextResponse[DepsT, FormattableT] | An `llm.ContextResponse` object containing the LLM-generated content. |
Function call_async
Generate an llm.AsyncResponse by asynchronously calling this client's LLM provider.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
| model_id | ModelIdT | Model identifier to use. |
| messages | Sequence[Message] | Messages to send to the LLM. |
| tools= None | Sequence[AsyncTool] | AsyncToolkit | None | Optional tools that the model may invoke. |
| format= None | type[FormattableT] | Format[FormattableT] | None | Optional response format specifier. |
| params= {} | Unpack[Params] | - |
Returns
| Type | Description |
|---|---|
| AsyncResponse | AsyncResponse[FormattableT] | An `llm.AsyncResponse` object containing the LLM-generated content. |
Function context_call_async
Generate an llm.AsyncContextResponse by asynchronously calling this client's LLM provider.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
| ctx | Context[DepsT] | Context object with dependencies for tools. |
| model_id | ModelIdT | Model identifier to use. |
| messages | Sequence[Message] | Messages to send to the LLM. |
| tools= None | Sequence[AsyncTool | AsyncContextTool[DepsT]] | AsyncContextToolkit[DepsT] | None | Optional tools that the model may invoke. |
| format= None | type[FormattableT] | Format[FormattableT] | None | Optional response format specifier. |
| params= {} | Unpack[Params] | - |
Returns
| Type | Description |
|---|---|
| AsyncContextResponse[DepsT, None] | AsyncContextResponse[DepsT, FormattableT] | An `llm.AsyncContextResponse` object containing the LLM-generated content. |
Function stream
Generate an llm.StreamResponse by synchronously streaming from this client's LLM provider.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
| model_id | ModelIdT | Model identifier to use. |
| messages | Sequence[Message] | Messages to send to the LLM. |
| tools= None | Sequence[Tool] | Toolkit | None | Optional tools that the model may invoke. |
| format= None | type[FormattableT] | Format[FormattableT] | None | Optional response format specifier. |
| params= {} | Unpack[Params] | - |
Returns
| Type | Description |
|---|---|
| StreamResponse | StreamResponse[FormattableT] | An `llm.StreamResponse` object for iterating over the LLM-generated content. |
Function context_stream
Generate an llm.ContextStreamResponse by synchronously streaming from this client's LLM provider.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
| ctx | Context[DepsT] | Context object with dependencies for tools. |
| model_id | ModelIdT | Model identifier to use. |
| messages | Sequence[Message] | Messages to send to the LLM. |
| tools= None | Sequence[Tool | ContextTool[DepsT]] | ContextToolkit[DepsT] | None | Optional tools that the model may invoke. |
| format= None | type[FormattableT] | Format[FormattableT] | None | Optional response format specifier. |
| params= {} | Unpack[Params] | - |
Returns
| Type | Description |
|---|---|
| ContextStreamResponse[DepsT, None] | ContextStreamResponse[DepsT, FormattableT] | An `llm.ContextStreamResponse` object for iterating over the LLM-generated content. |
Function stream_async
Generate an llm.AsyncStreamResponse by asynchronously streaming from this client's LLM provider.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
| model_id | ModelIdT | Model identifier to use. |
| messages | Sequence[Message] | Messages to send to the LLM. |
| tools= None | Sequence[AsyncTool] | AsyncToolkit | None | Optional tools that the model may invoke. |
| format= None | type[FormattableT] | Format[FormattableT] | None | Optional response format specifier. |
| params= {} | Unpack[Params] | - |
Returns
| Type | Description |
|---|---|
| AsyncStreamResponse | AsyncStreamResponse[FormattableT] | An `llm.AsyncStreamResponse` object for asynchronously iterating over the LLM-generated content. |
Function context_stream_async
Generate an llm.AsyncContextStreamResponse by asynchronously streaming from this client's LLM provider.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
| ctx | Context[DepsT] | Context object with dependencies for tools. |
| model_id | ModelIdT | Model identifier to use. |
| messages | Sequence[Message] | Messages to send to the LLM. |
| tools= None | Sequence[AsyncTool | AsyncContextTool[DepsT]] | AsyncContextToolkit[DepsT] | None | Optional tools that the model may invoke. |
| format= None | type[FormattableT] | Format[FormattableT] | None | Optional response format specifier. |
| params= {} | Unpack[Params] | - |
Returns
| Type | Description |
|---|---|
| AsyncContextStreamResponse[DepsT, None] | AsyncContextStreamResponse[DepsT, FormattableT] | An `llm.AsyncContextStreamResponse` object for asynchronously iterating over the LLM-generated content. |
Function resume
Generate a new llm.Response by extending another response's messages with additional user content.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
| model_id | ModelIdT | Model identifier to use. |
| response | Response | Response[FormattableT] | Previous response to extend. |
| content | UserContent | Additional user content to append. |
| params= {} | Unpack[Params] | - |
Returns
| Type | Description |
|---|---|
| Response | Response[FormattableT] | A new `llm.Response` object containing the extended conversation. |
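`resume` extends an existing response's message history with new user content and calls the model again, enabling multi-turn conversations without manually rebuilding the message list. A sketch under the same import and message-shape assumptions as the earlier examples:

```python
from mirascope import llm

client = llm.clients.AnthropicClient(api_key="sk-ant-...")
first = client.call(
    model_id="claude-sonnet-4-5",
    messages=[{"role": "user", "content": "Pick a number between 1 and 10."}],
)
# resume() appends new user content to first's messages and re-calls the model.
follow_up = client.resume(
    model_id="claude-sonnet-4-5",
    response=first,
    content="Why did you pick that number?",  # assumed: a plain str is valid UserContent
)
```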
Function resume_async
Generate a new llm.AsyncResponse by extending another response's messages with additional user content.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
| model_id | ModelIdT | Model identifier to use. |
| response | AsyncResponse | AsyncResponse[FormattableT] | Previous async response to extend. |
| content | UserContent | Additional user content to append. |
| params= {} | Unpack[Params] | - |
Returns
| Type | Description |
|---|---|
| AsyncResponse | AsyncResponse[FormattableT] | A new `llm.AsyncResponse` object containing the extended conversation. |
Function context_resume
Generate a new llm.ContextResponse by extending another response's messages with additional user content.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
| ctx | Context[DepsT] | Context object with dependencies for tools. |
| model_id | ModelIdT | Model identifier to use. |
| response | ContextResponse[DepsT, None] | ContextResponse[DepsT, FormattableT] | Previous context response to extend. |
| content | UserContent | Additional user content to append. |
| params= {} | Unpack[Params] | - |
Returns
| Type | Description |
|---|---|
| ContextResponse[DepsT, None] | ContextResponse[DepsT, FormattableT] | A new `llm.ContextResponse` object containing the extended conversation. |
Function context_resume_async
Generate a new llm.AsyncContextResponse by extending another response's messages with additional user content.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
| ctx | Context[DepsT] | Context object with dependencies for tools. |
| model_id | ModelIdT | Model identifier to use. |
| response | AsyncContextResponse[DepsT, None] | AsyncContextResponse[DepsT, FormattableT] | Previous async context response to extend. |
| content | UserContent | Additional user content to append. |
| params= {} | Unpack[Params] | - |
Returns
| Type | Description |
|---|---|
| AsyncContextResponse[DepsT, None] | AsyncContextResponse[DepsT, FormattableT] | A new `llm.AsyncContextResponse` object containing the extended conversation. |
Function resume_stream
Generate a new llm.StreamResponse by extending another response's messages with additional user content.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
| model_id | ModelIdT | Model identifier to use. |
| response | StreamResponse | StreamResponse[FormattableT] | Previous stream response to extend. |
| content | UserContent | Additional user content to append. |
| params= {} | Unpack[Params] | - |
Returns
| Type | Description |
|---|---|
| StreamResponse | StreamResponse[FormattableT] | A new `llm.StreamResponse` object for streaming the extended conversation. |
Function resume_stream_async
Generate a new llm.AsyncStreamResponse by extending another response's messages with additional user content.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
| model_id | ModelIdT | Model identifier to use. |
| response | AsyncStreamResponse | AsyncStreamResponse[FormattableT] | Previous async stream response to extend. |
| content | UserContent | Additional user content to append. |
| params= {} | Unpack[Params] | - |
Returns
| Type | Description |
|---|---|
| AsyncStreamResponse | AsyncStreamResponse[FormattableT] | A new `llm.AsyncStreamResponse` object for asynchronously streaming the extended conversation. |
Function context_resume_stream
Generate a new llm.ContextStreamResponse by extending another response's messages with additional user content.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
| ctx | Context[DepsT] | Context object with dependencies for tools. |
| model_id | ModelIdT | Model identifier to use. |
| response | ContextStreamResponse[DepsT, None] | ContextStreamResponse[DepsT, FormattableT] | Previous context stream response to extend. |
| content | UserContent | Additional user content to append. |
| params= {} | Unpack[Params] | - |
Returns
| Type | Description |
|---|---|
| ContextStreamResponse[DepsT, None] | ContextStreamResponse[DepsT, FormattableT] | A new `llm.ContextStreamResponse` object for streaming the extended conversation. |
Function context_resume_stream_async
Generate a new llm.AsyncContextStreamResponse by extending another response's messages with additional user content.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
| ctx | Context[DepsT] | Context object with dependencies for tools. |
| model_id | ModelIdT | Model identifier to use. |
| response | AsyncContextStreamResponse[DepsT, None] | AsyncContextStreamResponse[DepsT, FormattableT] | Previous async context stream response to extend. |
| content | UserContent | Additional user content to append. |
| params= {} | Unpack[Params] | - |
Returns
| Type | Description |
|---|---|
| AsyncContextStreamResponse[DepsT, None] | AsyncContextStreamResponse[DepsT, FormattableT] | A new `llm.AsyncContextStreamResponse` object for asynchronously streaming the extended conversation. |
Attribute ClientT
Type: TypeVar('ClientT', bound='BaseClient')
Type variable for an LLM client.
Class GoogleClient
The client for the Google GenAI API.
Bases: BaseClient[GoogleModelId, Client]
Attributes
| Name | Type | Description |
|---|---|---|
| client | Client | The underlying Google GenAI SDK client, constructed as `Client(api_key=api_key, http_options=http_options)`. |
Function call
Generate an llm.Response by synchronously calling the Google GenAI API.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
| model_id | GoogleModelId | Model identifier to use. |
| messages | Sequence[Message] | Messages to send to the LLM. |
| tools= None | Sequence[Tool] | Toolkit | None | Optional tools that the model may invoke. |
| format= None | type[FormattableT] | Format[FormattableT] | None | Optional response format specifier. |
| params= {} | Unpack[Params] | - |
Returns
| Type | Description |
|---|---|
| Response | Response[FormattableT] | An `llm.Response` object containing the LLM-generated content. |
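The `format` parameter accepts a `type[FormattableT]`, which suggests passing a class describing the desired output shape. A hedged sketch that assumes a Pydantic model qualifies as `FormattableT`; how the parsed object is accessed on the resulting `Response[FormattableT]` is not documented in this reference.

```python
from pydantic import BaseModel

from mirascope import llm

class Capital(BaseModel):  # assumption: a Pydantic model is a valid FormattableT
    city: str
    country: str

client = llm.clients.GoogleClient(api_key="...")
response = client.call(
    model_id="gemini-2.5-flash",  # any registered GoogleModelId
    messages=[{"role": "user", "content": "What is the capital of France?"}],
    format=Capital,
)
# The returned Response[Capital] carries the structured output; the accessor
# for the parsed Capital instance is not specified in this reference.
print(response)
```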
Function context_call
Generate an llm.ContextResponse by synchronously calling the Google GenAI API.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
| ctx | Context[DepsT] | Context object with dependencies for tools. |
| model_id | GoogleModelId | Model identifier to use. |
| messages | Sequence[Message] | Messages to send to the LLM. |
| tools= None | Sequence[Tool | ContextTool[DepsT]] | ContextToolkit[DepsT] | None | Optional tools that the model may invoke. |
| format= None | type[FormattableT] | Format[FormattableT] | None | Optional response format specifier. |
| params= {} | Unpack[Params] | - |
Returns
| Type | Description |
|---|---|
| ContextResponse[DepsT, None] | ContextResponse[DepsT, FormattableT] | An `llm.ContextResponse` object containing the LLM-generated content. |
Function call_async
Generate an llm.AsyncResponse by asynchronously calling the Google GenAI API.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
| model_id | GoogleModelId | Model identifier to use. |
| messages | Sequence[Message] | Messages to send to the LLM. |
| tools= None | Sequence[AsyncTool] | AsyncToolkit | None | Optional tools that the model may invoke. |
| format= None | type[FormattableT] | Format[FormattableT] | None | Optional response format specifier. |
| params= {} | Unpack[Params] | - |
Returns
| Type | Description |
|---|---|
| AsyncResponse | AsyncResponse[FormattableT] | An `llm.AsyncResponse` object containing the LLM-generated content. |
Function context_call_async
Generate an llm.AsyncContextResponse by asynchronously calling the Google GenAI API.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
| ctx | Context[DepsT] | Context object with dependencies for tools. |
| model_id | GoogleModelId | Model identifier to use. |
| messages | Sequence[Message] | Messages to send to the LLM. |
| tools= None | Sequence[AsyncTool | AsyncContextTool[DepsT]] | AsyncContextToolkit[DepsT] | None | Optional tools that the model may invoke. |
| format= None | type[FormattableT] | Format[FormattableT] | None | Optional response format specifier. |
| params= {} | Unpack[Params] | - |
Returns
| Type | Description |
|---|---|
| AsyncContextResponse[DepsT, None] | AsyncContextResponse[DepsT, FormattableT] | An `llm.AsyncContextResponse` object containing the LLM-generated content. |
Function stream
Generate an llm.StreamResponse by synchronously streaming from the Google GenAI API.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
| model_id | GoogleModelId | Model identifier to use. |
| messages | Sequence[Message] | Messages to send to the LLM. |
| tools= None | Sequence[Tool] | Toolkit | None | Optional tools that the model may invoke. |
| format= None | type[FormattableT] | Format[FormattableT] | None | Optional response format specifier. |
| params= {} | Unpack[Params] | - |
Returns
| Type | Description |
|---|---|
| StreamResponse | StreamResponse[FormattableT] | An `llm.StreamResponse` object for iterating over the LLM-generated content. |
Function context_stream
Generate an llm.ContextStreamResponse by synchronously streaming from the Google GenAI API.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
| ctx | Context[DepsT] | Context object with dependencies for tools. |
| model_id | GoogleModelId | Model identifier to use. |
| messages | Sequence[Message] | Messages to send to the LLM. |
| tools= None | Sequence[Tool | ContextTool[DepsT]] | ContextToolkit[DepsT] | None | Optional tools that the model may invoke. |
| format= None | type[FormattableT] | Format[FormattableT] | None | Optional response format specifier. |
| params= {} | Unpack[Params] | - |
Returns
| Type | Description |
|---|---|
| ContextStreamResponse[DepsT, None] | ContextStreamResponse[DepsT, FormattableT] | An `llm.ContextStreamResponse` object for iterating over the LLM-generated content. |
Function stream_async
Generate an llm.AsyncStreamResponse by asynchronously streaming from the Google GenAI API.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
| model_id | GoogleModelId | Model identifier to use. |
| messages | Sequence[Message] | Messages to send to the LLM. |
| tools= None | Sequence[AsyncTool] | AsyncToolkit | None | Optional tools that the model may invoke. |
| format= None | type[FormattableT] | Format[FormattableT] | None | Optional response format specifier. |
| params= {} | Unpack[Params] | - |
Returns
| Type | Description |
|---|---|
| AsyncStreamResponse | AsyncStreamResponse[FormattableT] | An `llm.AsyncStreamResponse` object for asynchronously iterating over the LLM-generated content. |
Function context_stream_async
Generate an llm.AsyncContextStreamResponse by asynchronously streaming from the Google GenAI API.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
| ctx | Context[DepsT] | Context object with dependencies for tools. |
| model_id | GoogleModelId | Model identifier to use. |
| messages | Sequence[Message] | Messages to send to the LLM. |
| tools= None | Sequence[AsyncTool | AsyncContextTool[DepsT]] | AsyncContextToolkit[DepsT] | None | Optional tools that the model may invoke. |
| format= None | type[FormattableT] | Format[FormattableT] | None | Optional response format specifier. |
| params= {} | Unpack[Params] | - |
Returns
| Type | Description |
|---|---|
| AsyncContextStreamResponse[DepsT, None] | AsyncContextStreamResponse[DepsT, FormattableT] | An `llm.AsyncContextStreamResponse` object for asynchronously iterating over the LLM-generated content. |
Attribute GoogleModelId
Type: TypeAlias
The Google model ids registered with Mirascope.
Attribute ModelId
Type: TypeAlias
The model ids registered with Mirascope, across all supported providers.
Class OpenAICompletionsClient
The client for the OpenAI ChatCompletions API.
Bases: BaseClient[OpenAICompletionsModelId, OpenAI]
Attributes
| Name | Type | Description |
|---|---|---|
| client | OpenAI | The underlying synchronous OpenAI SDK client, constructed as `OpenAI(api_key=api_key, base_url=base_url)`. |
| async_client | AsyncOpenAI | The underlying asynchronous OpenAI SDK client, constructed as `AsyncOpenAI(api_key=api_key, base_url=base_url)`. |
Function call
Generate an llm.Response by synchronously calling the OpenAI ChatCompletions API.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
| model_id | OpenAICompletionsModelId | Model identifier to use. |
| messages | Sequence[Message] | Messages to send to the LLM. |
| tools= None | Sequence[Tool] | Toolkit | None | Optional tools that the model may invoke. |
| format= None | type[FormattableT] | Format[FormattableT] | None | Optional response format specifier. |
| params= {} | Unpack[Params] | - |
Returns
| Type | Description |
|---|---|
| Response | Response[FormattableT] | An `llm.Response` object containing the LLM-generated content. |
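Passing `tools` lets the model request function invocations. The sketch below assumes a `@llm.tool` decorator (not documented in this reference) that turns a plain function into a `Tool`:

```python
from mirascope import llm

@llm.tool  # assumption: a decorator like this produces a Tool from a function
def get_weather(city: str) -> str:
    """Return the current weather for a city (stub for illustration)."""
    return f"It is sunny in {city}."

client = llm.clients.OpenAICompletionsClient(api_key="sk-...")
response = client.call(
    model_id="gpt-4o-mini",  # any registered OpenAICompletionsModelId
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=[get_weather],
)
```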
Function context_call
Generate an llm.ContextResponse by synchronously calling the OpenAI ChatCompletions API.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
| ctx | Context[DepsT] | Context object with dependencies for tools. |
| model_id | OpenAICompletionsModelId | Model identifier to use. |
| messages | Sequence[Message] | Messages to send to the LLM. |
| tools= None | Sequence[Tool | ContextTool[DepsT]] | ContextToolkit[DepsT] | None | Optional tools that the model may invoke. |
| format= None | type[FormattableT] | Format[FormattableT] | None | Optional response format specifier. |
| params= {} | Unpack[Params] | - |
Returns
| Type | Description |
|---|---|
| ContextResponse[DepsT, None] | ContextResponse[DepsT, FormattableT] | An `llm.ContextResponse` object containing the LLM-generated content. |
Function call_async
Generate an llm.AsyncResponse by asynchronously calling the OpenAI ChatCompletions API.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
| model_id | OpenAICompletionsModelId | Model identifier to use. |
| messages | Sequence[Message] | Messages to send to the LLM. |
| tools= None | Sequence[AsyncTool] | AsyncToolkit | None | Optional tools that the model may invoke. |
| format= None | type[FormattableT] | Format[FormattableT] | None | Optional response format specifier. |
| params= {} | Unpack[Params] | - |
Returns
| Type | Description |
|---|---|
| AsyncResponse | AsyncResponse[FormattableT] | An `llm.AsyncResponse` object containing the LLM-generated content. |
Function context_call_async
Generate an llm.AsyncContextResponse by asynchronously calling the OpenAI ChatCompletions API.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
| ctx | Context[DepsT] | Context object with dependencies for tools. |
| model_id | OpenAICompletionsModelId | Model identifier to use. |
| messages | Sequence[Message] | Messages to send to the LLM. |
| tools= None | Sequence[AsyncTool | AsyncContextTool[DepsT]] | AsyncContextToolkit[DepsT] | None | Optional tools that the model may invoke. |
| format= None | type[FormattableT] | Format[FormattableT] | None | Optional response format specifier. |
| params= {} | Unpack[Params] | - |
Returns
| Type | Description |
|---|---|
| AsyncContextResponse[DepsT, None] | AsyncContextResponse[DepsT, FormattableT] | An `llm.AsyncContextResponse` object containing the LLM-generated content. |
Function stream
Generate an llm.StreamResponse by synchronously streaming from the OpenAI ChatCompletions API.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
| model_id | OpenAICompletionsModelId | Model identifier to use. |
| messages | Sequence[Message] | Messages to send to the LLM. |
| tools= None | Sequence[Tool] | Toolkit | None | Optional tools that the model may invoke. |
| format= None | type[FormattableT] | Format[FormattableT] | None | Optional response format specifier. |
| params= {} | Unpack[Params] | - |
Returns
| Type | Description |
|---|---|
| StreamResponse | StreamResponse[FormattableT] | An `llm.StreamResponse` object for iterating over the LLM-generated content. |
Function context_stream
Generate an llm.ContextStreamResponse by synchronously streaming from the OpenAI ChatCompletions API.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
| ctx | Context[DepsT] | Context object with dependencies for tools. |
| model_id | OpenAICompletionsModelId | Model identifier to use. |
| messages | Sequence[Message] | Messages to send to the LLM. |
| tools= None | Sequence[Tool | ContextTool[DepsT]] | ContextToolkit[DepsT] | None | Optional tools that the model may invoke. |
| format= None | type[FormattableT] | Format[FormattableT] | None | Optional response format specifier. |
| params= {} | Unpack[Params] | - |
Returns
| Type | Description |
|---|---|
| ContextStreamResponse[DepsT, None] | ContextStreamResponse[DepsT, FormattableT] | An `llm.ContextStreamResponse` object for iterating over the LLM-generated content. |
Function stream_async
Generate an llm.AsyncStreamResponse by asynchronously streaming from the OpenAI ChatCompletions API.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
| model_id | OpenAICompletionsModelId | Model identifier to use. |
| messages | Sequence[Message] | Messages to send to the LLM. |
| tools= None | Sequence[AsyncTool] | AsyncToolkit | None | Optional tools that the model may invoke. |
| format= None | type[FormattableT] | Format[FormattableT] | None | Optional response format specifier. |
| params= {} | Unpack[Params] | - |
Returns
| Type | Description |
|---|---|
| AsyncStreamResponse | AsyncStreamResponse[FormattableT] | An `llm.AsyncStreamResponse` object for asynchronously iterating over the LLM-generated content. |
Function context_stream_async
Generate an llm.AsyncContextStreamResponse by asynchronously streaming from the OpenAI ChatCompletions API.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
| ctx | Context[DepsT] | Context object with dependencies for tools. |
| model_id | OpenAICompletionsModelId | Model identifier to use. |
| messages | Sequence[Message] | Messages to send to the LLM. |
| tools= None | Sequence[AsyncTool | AsyncContextTool[DepsT]] | AsyncContextToolkit[DepsT] | None | Optional tools that the model may invoke. |
| format= None | type[FormattableT] | Format[FormattableT] | None | Optional response format specifier. |
| params= {} | Unpack[Params] | - |
Returns
| Type | Description |
|---|---|
| AsyncContextStreamResponse[DepsT, None] | AsyncContextStreamResponse[DepsT, FormattableT] | An `llm.AsyncContextStreamResponse` object for asynchronously iterating over the LLM-generated content. |
Attribute OpenAICompletionsModelId
Type: TypeAlias
The OpenAI ChatCompletions model ids registered with Mirascope.
Class OpenAIResponsesClient
The client for the OpenAI Responses API.
Bases: BaseClient[OpenAIResponsesModelId, OpenAI]
Attributes
| Name | Type | Description |
|---|---|---|
| client | OpenAI | The underlying synchronous OpenAI SDK client, constructed as `OpenAI(api_key=api_key, base_url=base_url)`. |
| async_client | AsyncOpenAI | The underlying asynchronous OpenAI SDK client, constructed as `AsyncOpenAI(api_key=api_key, base_url=base_url)`. |
Function call
Generate an llm.Response by synchronously calling the OpenAI Responses API.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
| model_id | OpenAIResponsesModelId | Model identifier to use. |
| messages | Sequence[Message] | Messages to send to the LLM. |
| tools= None | Sequence[Tool] | Toolkit | None | Optional tools that the model may invoke. |
| format= None | type[FormattableT] | Format[FormattableT] | None | Optional response format specifier. |
| params= {} | Unpack[Params] | - |
Returns
| Type | Description |
|---|---|
| Response | Response[FormattableT] | An `llm.Response` object containing the LLM-generated content. |
Function call_async
Generate an llm.AsyncResponse by asynchronously calling the OpenAI Responses API.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
| model_id | OpenAIResponsesModelId | Model identifier to use. |
| messages | Sequence[Message] | Messages to send to the LLM. |
| tools= None | Sequence[AsyncTool] | AsyncToolkit | None | Optional tools that the model may invoke. |
| format= None | type[FormattableT] | Format[FormattableT] | None | Optional response format specifier. |
| params= {} | Unpack[Params] | - |
Returns
| Type | Description |
|---|---|
| AsyncResponse | AsyncResponse[FormattableT] | An `llm.AsyncResponse` object containing the LLM-generated content. |
Function stream
Generate an llm.StreamResponse by synchronously streaming from the OpenAI Responses API.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
| model_id | OpenAIResponsesModelId | Model identifier to use. |
| messages | Sequence[Message] | Messages to send to the LLM. |
| tools= None | Sequence[Tool] | Toolkit | None | Optional tools that the model may invoke. |
| format= None | type[FormattableT] | Format[FormattableT] | None | Optional response format specifier. |
| params= {} | Unpack[Params] | - |
Returns
| Type | Description |
|---|---|
| StreamResponse | StreamResponse[FormattableT] | An `llm.StreamResponse` object containing the LLM-generated content stream. |
Function stream_async
Generate an llm.AsyncStreamResponse by asynchronously streaming from the OpenAI Responses API.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
| model_id | OpenAIResponsesModelId | Model identifier to use. |
| messages | Sequence[Message] | Messages to send to the LLM. |
| tools= None | Sequence[AsyncTool] | AsyncToolkit | None | Optional tools that the model may invoke. |
| format= None | type[FormattableT] | Format[FormattableT] | None | Optional response format specifier. |
| params= {} | Unpack[Params] | - |
Returns
| Type | Description |
|---|---|
| AsyncStreamResponse | AsyncStreamResponse[FormattableT] | An `llm.AsyncStreamResponse` object containing the LLM-generated content stream. |
Function context_call
Generate an llm.ContextResponse by synchronously calling the OpenAI Responses API with context.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
| ctx | Context[DepsT] | The context object containing dependencies. |
| model_id | OpenAIResponsesModelId | Model identifier to use. |
| messages | Sequence[Message] | Messages to send to the LLM. |
| tools= None | Sequence[Tool | ContextTool[DepsT]] | ContextToolkit[DepsT] | None | Optional tools that the model may invoke. |
| format= None | type[FormattableT] | Format[FormattableT] | None | Optional response format specifier. |
| params= {} | Unpack[Params] | - |
Returns
| Type | Description |
|---|---|
| ContextResponse[DepsT, None] | ContextResponse[DepsT, FormattableT] | An `llm.ContextResponse` object containing the LLM-generated content and context. |
Function context_call_async
Generate an llm.AsyncContextResponse by asynchronously calling the OpenAI Responses API with context.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
| ctx | Context[DepsT] | The context object containing dependencies. |
| model_id | OpenAIResponsesModelId | Model identifier to use. |
| messages | Sequence[Message] | Messages to send to the LLM. |
| tools= None | Sequence[AsyncTool | AsyncContextTool[DepsT]] | AsyncContextToolkit[DepsT] | None | Optional tools that the model may invoke. |
| format= None | type[FormattableT] | Format[FormattableT] | None | Optional response format specifier. |
| params= {} | Unpack[Params] | - |
Returns
| Type | Description |
|---|---|
| AsyncContextResponse[DepsT, None] | AsyncContextResponse[DepsT, FormattableT] | An `llm.AsyncContextResponse` object containing the LLM-generated content and context. |
Function context_stream
Generate an llm.ContextStreamResponse by synchronously streaming from the OpenAI Responses API with context.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
| ctx | Context[DepsT] | The context object containing dependencies. |
| model_id | OpenAIResponsesModelId | Model identifier to use. |
| messages | Sequence[Message] | Messages to send to the LLM. |
| tools= None | Sequence[Tool | ContextTool[DepsT]] | ContextToolkit[DepsT] | None | Optional tools that the model may invoke. |
| format= None | type[FormattableT] | Format[FormattableT] | None | Optional response format specifier. |
| params= {} | Unpack[Params] | - |
Returns
| Type | Description |
|---|---|
| ContextStreamResponse[DepsT, None] | ContextStreamResponse[DepsT, FormattableT] | An `llm.ContextStreamResponse` object containing the LLM-generated content stream and context. |
Function context_stream_async
Generate an llm.AsyncContextStreamResponse by asynchronously streaming from the OpenAI Responses API with context.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
| ctx | Context[DepsT] | The context object containing dependencies. |
| model_id | OpenAIResponsesModelId | Model identifier to use. |
| messages | Sequence[Message] | Messages to send to the LLM. |
| tools= None | Sequence[AsyncTool | AsyncContextTool[DepsT]] | AsyncContextToolkit[DepsT] | None | Optional tools that the model may invoke. |
| format= None | type[FormattableT] | Format[FormattableT] | None | Optional response format specifier. |
| params= {} | Unpack[Params] | - |
Returns
| Type | Description |
|---|---|
| AsyncContextStreamResponse[DepsT, None] | AsyncContextStreamResponse[DepsT, FormattableT] | An `llm.AsyncContextStreamResponse` object containing the LLM-generated content stream and context. |
Attribute OpenAIResponsesModelId
Type: TypeAlias
The OpenAI Responses model ids registered with Mirascope.
Class Params
Common parameters shared across LLM providers.
Note: Each provider may handle these parameters differently or not support them at all. Please check provider-specific documentation for parameter support and behavior.
Bases: TypedDict
Attributes
| Name | Type | Description |
|---|---|---|
| temperature | float | Controls randomness in the output (0.0 to 1.0). Lower temperatures are good for prompts that require a less open-ended or creative response, while higher temperatures can lead to more diverse or creative results. |
| max_tokens | int | Maximum number of tokens to generate. |
| top_p | float | Nucleus sampling parameter (0.0 to 1.0). Tokens are selected from the most to least probable until the sum of their probabilities equals this value. Use a lower value for less random responses and a higher value for more random responses. |
| top_k | int | Limits token selection to the k most probable tokens (typically 1 to 100). For each token selection step, the `top_k` tokens with the highest probabilities are sampled. Then tokens are further filtered based on `top_p`, with the final token selected using temperature sampling. Use a lower number for less random responses and a higher number for more random responses. |
| seed | int | Random seed for reproducibility. When `seed` is fixed to a specific number, the model makes a best effort to provide the same response for repeated requests. Not supported by all providers, and does not guarantee strict reproducibility. |
| stop_sequences | list[str] | Stop sequences to end generation. The model will stop generating text if one of these strings is encountered in the response. |
| thinking | bool | Configures whether the model should use thinking. Thinking is a process where the model spends additional tokens reasoning about the prompt before generating a response. You may configure thinking by passing a bool to enable or disable it. If `params.thinking` is `True`, thinking and thought summaries will be enabled (if supported by the model/provider), with a default budget for thinking tokens. If `params.thinking` is `False`, thinking will be wholly disabled, assuming the model allows this (some models, e.g. `google:gemini-2.5-pro`, do not allow disabling thinking). If `params.thinking` is unset (or `None`), provider-specific default behavior for the chosen model applies. |
| encode_thoughts_as_text | bool | Configures whether `Thought` content should be re-encoded as text for model consumption. If `True`, then when an `AssistantMessage` contains `Thoughts` and is being passed back to an LLM, those `Thoughts` will be encoded as `Text`, so that the assistant can read those thoughts. That ensures the assistant has access to (at least the summarized output of) its reasoning process, and contrasts with provider default behaviors which may ignore prior thoughts, particularly if tool calls are not involved. When `True`, we will always re-encode Mirascope messages being passed to the provider, rather than reusing raw provider response content. This may disable provider-specific behavior like cached reasoning tokens. If `False`, then `Thoughts` will not be encoded as text, and whether reasoning context is available to the model depends entirely on the provider's behavior. Defaults to `False` if unset. |
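Since the call methods accept `Unpack[Params]`, these entries are passed as ordinary keyword arguments. A sketch, subject to the per-provider support caveat above (import path and message shape remain assumptions):

```python
from mirascope import llm

client = llm.clients.AnthropicClient(api_key="sk-ant-...")
response = client.call(
    model_id="claude-sonnet-4-5",
    messages=[{"role": "user", "content": "Write a haiku about entropy."}],
    # Params entries arrive via Unpack[Params] as keyword arguments:
    temperature=0.3,
    max_tokens=200,
    stop_sequences=["\n\n"],
)
```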
Attribute Provider
Type: TypeAlias
The names of the LLM providers supported by Mirascope.
Function client
Create a cached client instance for the specified provider.
Parameters
Returns
| Type | Description |
|---|---|
| AnthropicClient | GoogleClient | OpenAICompletionsClient | OpenAIResponsesClient | A cached client instance for the specified provider with the given parameters. |
Function get_client
Get a client instance for the specified provider.
Multiple calls to `get_client` with the same provider return the same client instance rather than constructing a new one.
Parameters
| Name | Type | Description |
|---|---|---|
| provider | Provider | The provider name (e.g. "anthropic", "google", or "openai:completions"). |
Returns
| Type | Description |
|---|---|
| AnthropicClient | GoogleClient | OpenAICompletionsClient | OpenAIResponsesClient | A client instance for the specified provider. The specific client type depends on the provider name. |
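A usage sketch for `get_client`, relying on the caching behavior documented above; the import path is an assumption.

```python
from mirascope import llm

client = llm.clients.get_client("anthropic")
assert client is llm.clients.get_client("anthropic")  # cached: same instance returned

response = client.call(
    model_id="claude-sonnet-4-5",  # any registered AnthropicModelId
    messages=[{"role": "user", "content": "Hello!"}],
)
```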