
clients

Attribute PROVIDERS

Type: get_args(Provider)

Class AnthropicClient

The client for Anthropic LLM models.

Bases: BaseClient[AnthropicModelId, Anthropic]

Attributes

| Name | Type | Description |
| --- | --- | --- |
| client | Anthropic(api_key=api_key, base_url=base_url) | - |
| async_client | AsyncAnthropic(api_key=api_key, base_url=base_url) | - |

Function call

Generate an llm.Response by synchronously calling the Anthropic Messages API.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| self | Any | - |
| model_id | AnthropicModelId | Model identifier to use. |
| messages | Sequence[Message] | Messages to send to the LLM. |
| tools = None | Sequence[Tool] \| Toolkit \| None | Optional tools that the model may invoke. |
| format = None | type[FormattableT] \| Format[FormattableT] \| None | Optional response format specifier. |
| params = {} | Unpack[Params] | - |

Returns

| Type | Description |
| --- | --- |
| Response \| Response[FormattableT] | An `llm.Response` object containing the LLM-generated content. |
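For orientation, here is a minimal sketch of a synchronous call. The model id, the `llm.clients.get_client` import path, and the `llm.messages.user` message helper are illustrative assumptions; check your installed version for the exact constructors.

```python
from mirascope import llm

# Sketch: obtain the cached Anthropic client and make a one-shot call.
client = llm.clients.get_client("anthropic")
response = client.call(
    model_id="claude-sonnet-4-0",  # hypothetical model id
    messages=[llm.messages.user("What is the capital of France?")],
)
print(response)  # how text is exposed on the response is version-dependent
```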

Function context_call

Generate an llm.ContextResponse by synchronously calling the Anthropic Messages API.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| self | Any | - |
| ctx | Context[DepsT] | Context object with dependencies for tools. |
| model_id | AnthropicModelId | Model identifier to use. |
| messages | Sequence[Message] | Messages to send to the LLM. |
| tools = None | Sequence[Tool \| ContextTool[DepsT]] \| ContextToolkit[DepsT] \| None | Optional tools that the model may invoke. |
| format = None | type[FormattableT] \| Format[FormattableT] \| None | Optional response format specifier. |
| params = {} | Unpack[Params] | - |

Returns

| Type | Description |
| --- | --- |
| ContextResponse[DepsT, None] \| ContextResponse[DepsT, FormattableT] | An `llm.ContextResponse` object containing the LLM-generated content. |
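As a sketch of the context variant, the example below threads a dependencies object through `ctx`. The `llm.Context(deps=...)` constructor is an assumption inferred from the `ctx: Context[DepsT]` parameter above, as are the other helpers.

```python
from dataclasses import dataclass

from mirascope import llm


@dataclass
class Deps:
    user_name: str


client = llm.clients.get_client("anthropic")
ctx = llm.Context(deps=Deps(user_name="Ada"))  # assumed constructor
response = client.context_call(
    ctx,
    model_id="claude-sonnet-4-0",  # hypothetical model id
    messages=[llm.messages.user("Greet the user by name.")],
    # tools=[...] would hold ContextTool[Deps] instances that can read
    # ctx.deps; omitted here to keep the sketch minimal.
)
```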

Function call_async

Generate an llm.AsyncResponse by asynchronously calling the Anthropic Messages API.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| self | Any | - |
| model_id | AnthropicModelId | Model identifier to use. |
| messages | Sequence[Message] | Messages to send to the LLM. |
| tools = None | Sequence[AsyncTool] \| AsyncToolkit \| None | Optional tools that the model may invoke. |
| format = None | type[FormattableT] \| Format[FormattableT] \| None | Optional response format specifier. |
| params = {} | Unpack[Params] | - |

Returns

| Type | Description |
| --- | --- |
| AsyncResponse \| AsyncResponse[FormattableT] | An `llm.AsyncResponse` object containing the LLM-generated content. |
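The async variant is awaited. A sketch under the same assumptions as the earlier example:

```python
import asyncio

from mirascope import llm


async def main() -> None:
    client = llm.clients.get_client("anthropic")
    response = await client.call_async(
        model_id="claude-sonnet-4-0",  # hypothetical model id
        messages=[llm.messages.user("Name three uses for a paperclip.")],
    )
    print(response)


asyncio.run(main())
```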

Function context_call_async

Generate an llm.AsyncContextResponse by asynchronously calling the Anthropic Messages API.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| self | Any | - |
| ctx | Context[DepsT] | Context object with dependencies for tools. |
| model_id | AnthropicModelId | Model identifier to use. |
| messages | Sequence[Message] | Messages to send to the LLM. |
| tools = None | Sequence[AsyncTool \| AsyncContextTool[DepsT]] \| AsyncContextToolkit[DepsT] \| None | Optional tools that the model may invoke. |
| format = None | type[FormattableT] \| Format[FormattableT] \| None | Optional response format specifier. |
| params = {} | Unpack[Params] | - |

Returns

| Type | Description |
| --- | --- |
| AsyncContextResponse[DepsT, None] \| AsyncContextResponse[DepsT, FormattableT] | An `llm.AsyncContextResponse` object containing the LLM-generated content. |

Function stream

Generate an llm.StreamResponse by synchronously streaming from the Anthropic Messages API.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| self | Any | - |
| model_id | AnthropicModelId | Model identifier to use. |
| messages | Sequence[Message] | Messages to send to the LLM. |
| tools = None | Sequence[Tool] \| Toolkit \| None | Optional tools that the model may invoke. |
| format = None | type[FormattableT] \| Format[FormattableT] \| None | Optional response format specifier. |
| params = {} | Unpack[Params] | - |

Returns

| Type | Description |
| --- | --- |
| StreamResponse \| StreamResponse[FormattableT] | An `llm.StreamResponse` object for iterating over the LLM-generated content. |
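A `StreamResponse` is consumed by iteration, per the description above. In this sketch the chunk's shape is left unspecified because it depends on the installed version; the other helpers carry the same assumptions as earlier examples.

```python
from mirascope import llm

client = llm.clients.get_client("anthropic")
stream = client.stream(
    model_id="claude-sonnet-4-0",  # hypothetical model id
    messages=[llm.messages.user("Tell me a short story.")],
)
for chunk in stream:  # assumed: StreamResponse is directly iterable
    print(chunk, end="", flush=True)  # chunk type/attributes are version-dependent
```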

Function context_stream

Generate an llm.ContextStreamResponse by synchronously streaming from the Anthropic Messages API.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| self | Any | - |
| ctx | Context[DepsT] | Context object with dependencies for tools. |
| model_id | AnthropicModelId | Model identifier to use. |
| messages | Sequence[Message] | Messages to send to the LLM. |
| tools = None | Sequence[Tool \| ContextTool[DepsT]] \| ContextToolkit[DepsT] \| None | Optional tools that the model may invoke. |
| format = None | type[FormattableT] \| Format[FormattableT] \| None | Optional response format specifier. |
| params = {} | Unpack[Params] | - |

Returns

| Type | Description |
| --- | --- |
| ContextStreamResponse[DepsT] \| ContextStreamResponse[DepsT, FormattableT] | An `llm.ContextStreamResponse` object for iterating over the LLM-generated content. |

Function stream_async

Generate an llm.AsyncStreamResponse by asynchronously streaming from the Anthropic Messages API.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| self | Any | - |
| model_id | AnthropicModelId | Model identifier to use. |
| messages | Sequence[Message] | Messages to send to the LLM. |
| tools = None | Sequence[AsyncTool] \| AsyncToolkit \| None | Optional tools that the model may invoke. |
| format = None | type[FormattableT] \| Format[FormattableT] \| None | Optional response format specifier. |
| params = {} | Unpack[Params] | - |

Returns

| Type | Description |
| --- | --- |
| AsyncStreamResponse \| AsyncStreamResponse[FormattableT] | An `llm.AsyncStreamResponse` object for asynchronously iterating over the LLM-generated content. |
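The async stream is consumed with `async for`. A sketch under the same assumptions; whether `stream_async` must itself be awaited before iterating is version-dependent.

```python
import asyncio

from mirascope import llm


async def main() -> None:
    client = llm.clients.get_client("anthropic")
    stream = client.stream_async(
        model_id="claude-sonnet-4-0",  # hypothetical model id
        messages=[llm.messages.user("Tell me a short story.")],
    )
    async for chunk in stream:  # assumed async-iterable; await first if needed
        print(chunk, end="", flush=True)


asyncio.run(main())
```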

Function context_stream_async

Generate an llm.AsyncContextStreamResponse by asynchronously streaming from the Anthropic Messages API.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| self | Any | - |
| ctx | Context[DepsT] | Context object with dependencies for tools. |
| model_id | AnthropicModelId | Model identifier to use. |
| messages | Sequence[Message] | Messages to send to the LLM. |
| tools = None | Sequence[AsyncTool \| AsyncContextTool[DepsT]] \| AsyncContextToolkit[DepsT] \| None | Optional tools that the model may invoke. |
| format = None | type[FormattableT] \| Format[FormattableT] \| None | Optional response format specifier. |
| params = {} | Unpack[Params] | - |

Returns

| Type | Description |
| --- | --- |
| AsyncContextStreamResponse[DepsT] \| AsyncContextStreamResponse[DepsT, FormattableT] | An `llm.AsyncContextStreamResponse` object for asynchronously iterating over the LLM-generated content. |

Attribute AnthropicModelId

Type: TypeAlias

The Anthropic model ids registered with Mirascope.

Class BaseClient

Base abstract client for provider-specific implementations.

This class defines explicit methods for each type of call, eliminating the need for complex overloads in provider implementations.

Bases: Generic[ModelIdT, ProviderClientT], ABC

Attributes

| Name | Type | Description |
| --- | --- | --- |
| client | ProviderClientT | - |

Function call

Generate an llm.Response by synchronously calling this client's LLM provider.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| self | Any | - |
| model_id | ModelIdT | Model identifier to use. |
| messages | Sequence[Message] | Messages to send to the LLM. |
| tools = None | Sequence[Tool] \| Toolkit \| None | Optional tools that the model may invoke. |
| format = None | type[FormattableT] \| Format[FormattableT] \| None | Optional response format specifier. |
| params = {} | Unpack[Params] | - |

Returns

| Type | Description |
| --- | --- |
| Response \| Response[FormattableT] | An `llm.Response` object containing the LLM-generated content. |

Function context_call

Generate an llm.ContextResponse by synchronously calling this client's LLM provider.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| self | Any | - |
| ctx | Context[DepsT] | Context object with dependencies for tools. |
| model_id | ModelIdT | Model identifier to use. |
| messages | Sequence[Message] | Messages to send to the LLM. |
| tools = None | Sequence[Tool \| ContextTool[DepsT]] \| ContextToolkit[DepsT] \| None | Optional tools that the model may invoke. |
| format = None | type[FormattableT] \| Format[FormattableT] \| None | Optional response format specifier. |
| params = {} | Unpack[Params] | - |

Returns

| Type | Description |
| --- | --- |
| ContextResponse[DepsT, None] \| ContextResponse[DepsT, FormattableT] | An `llm.ContextResponse` object containing the LLM-generated content. |

Function call_async

Generate an llm.AsyncResponse by asynchronously calling this client's LLM provider.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| self | Any | - |
| model_id | ModelIdT | Model identifier to use. |
| messages | Sequence[Message] | Messages to send to the LLM. |
| tools = None | Sequence[AsyncTool] \| AsyncToolkit \| None | Optional tools that the model may invoke. |
| format = None | type[FormattableT] \| Format[FormattableT] \| None | Optional response format specifier. |
| params = {} | Unpack[Params] | - |

Returns

| Type | Description |
| --- | --- |
| AsyncResponse \| AsyncResponse[FormattableT] | An `llm.AsyncResponse` object containing the LLM-generated content. |

Function context_call_async

Generate an llm.AsyncContextResponse by asynchronously calling this client's LLM provider.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| self | Any | - |
| ctx | Context[DepsT] | Context object with dependencies for tools. |
| model_id | ModelIdT | Model identifier to use. |
| messages | Sequence[Message] | Messages to send to the LLM. |
| tools = None | Sequence[AsyncTool \| AsyncContextTool[DepsT]] \| AsyncContextToolkit[DepsT] \| None | Optional tools that the model may invoke. |
| format = None | type[FormattableT] \| Format[FormattableT] \| None | Optional response format specifier. |
| params = {} | Unpack[Params] | - |

Returns

| Type | Description |
| --- | --- |
| AsyncContextResponse[DepsT, None] \| AsyncContextResponse[DepsT, FormattableT] | An `llm.AsyncContextResponse` object containing the LLM-generated content. |

Function stream

Generate an llm.StreamResponse by synchronously streaming from this client's LLM provider.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| self | Any | - |
| model_id | ModelIdT | Model identifier to use. |
| messages | Sequence[Message] | Messages to send to the LLM. |
| tools = None | Sequence[Tool] \| Toolkit \| None | Optional tools that the model may invoke. |
| format = None | type[FormattableT] \| Format[FormattableT] \| None | Optional response format specifier. |
| params = {} | Unpack[Params] | - |

Returns

| Type | Description |
| --- | --- |
| StreamResponse \| StreamResponse[FormattableT] | An `llm.StreamResponse` object for iterating over the LLM-generated content. |

Function context_stream

Generate an llm.ContextStreamResponse by synchronously streaming from this client's LLM provider.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| self | Any | - |
| ctx | Context[DepsT] | Context object with dependencies for tools. |
| model_id | ModelIdT | Model identifier to use. |
| messages | Sequence[Message] | Messages to send to the LLM. |
| tools = None | Sequence[Tool \| ContextTool[DepsT]] \| ContextToolkit[DepsT] \| None | Optional tools that the model may invoke. |
| format = None | type[FormattableT] \| Format[FormattableT] \| None | Optional response format specifier. |
| params = {} | Unpack[Params] | - |

Returns

| Type | Description |
| --- | --- |
| ContextStreamResponse[DepsT, None] \| ContextStreamResponse[DepsT, FormattableT] | An `llm.ContextStreamResponse` object for iterating over the LLM-generated content. |

Function stream_async

Generate an llm.AsyncStreamResponse by asynchronously streaming from this client's LLM provider.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| self | Any | - |
| model_id | ModelIdT | Model identifier to use. |
| messages | Sequence[Message] | Messages to send to the LLM. |
| tools = None | Sequence[AsyncTool] \| AsyncToolkit \| None | Optional tools that the model may invoke. |
| format = None | type[FormattableT] \| Format[FormattableT] \| None | Optional response format specifier. |
| params = {} | Unpack[Params] | - |

Returns

| Type | Description |
| --- | --- |
| AsyncStreamResponse \| AsyncStreamResponse[FormattableT] | An `llm.AsyncStreamResponse` object for asynchronously iterating over the LLM-generated content. |

Function context_stream_async

Generate an llm.AsyncContextStreamResponse by asynchronously streaming from this client's LLM provider.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| self | Any | - |
| ctx | Context[DepsT] | Context object with dependencies for tools. |
| model_id | ModelIdT | Model identifier to use. |
| messages | Sequence[Message] | Messages to send to the LLM. |
| tools = None | Sequence[AsyncTool \| AsyncContextTool[DepsT]] \| AsyncContextToolkit[DepsT] \| None | Optional tools that the model may invoke. |
| format = None | type[FormattableT] \| Format[FormattableT] \| None | Optional response format specifier. |
| params = {} | Unpack[Params] | - |

Returns

| Type | Description |
| --- | --- |
| AsyncContextStreamResponse[DepsT, None] \| AsyncContextStreamResponse[DepsT, FormattableT] | An `llm.AsyncContextStreamResponse` object for asynchronously iterating over the LLM-generated content. |

Function resume

Generate a new llm.Response by extending another response's messages with additional user content.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| self | Any | - |
| model_id | ModelIdT | Model identifier to use. |
| response | Response \| Response[FormattableT] | Previous response to extend. |
| content | UserContent | Additional user content to append. |
| params = {} | Unpack[Params] | - |

Returns

| Type | Description |
| --- | --- |
| Response \| Response[FormattableT] | A new `llm.Response` object containing the extended conversation. |
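The resume family appends new user content to a prior response's conversation and re-calls the model. A sketch, assuming a plain string qualifies as `UserContent` and reusing the assumed helpers from earlier examples:

```python
from mirascope import llm

client = llm.clients.get_client("anthropic")
first = client.call(
    model_id="claude-sonnet-4-0",  # hypothetical model id
    messages=[llm.messages.user("Recommend a fantasy novel.")],
)
# Continue the same conversation without rebuilding the message list.
followup = client.resume(
    model_id="claude-sonnet-4-0",
    response=first,
    content="Why did you pick that one?",  # assumed: str is valid UserContent
)
```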

Function resume_async

Generate a new llm.AsyncResponse by extending another response's messages with additional user content.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| self | Any | - |
| model_id | ModelIdT | Model identifier to use. |
| response | AsyncResponse \| AsyncResponse[FormattableT] | Previous async response to extend. |
| content | UserContent | Additional user content to append. |
| params = {} | Unpack[Params] | - |

Returns

| Type | Description |
| --- | --- |
| AsyncResponse \| AsyncResponse[FormattableT] | A new `llm.AsyncResponse` object containing the extended conversation. |

Function context_resume

Generate a new llm.ContextResponse by extending another response's messages with additional user content.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| self | Any | - |
| ctx | Context[DepsT] | Context object with dependencies for tools. |
| model_id | ModelIdT | Model identifier to use. |
| response | ContextResponse[DepsT, None] \| ContextResponse[DepsT, FormattableT] | Previous context response to extend. |
| content | UserContent | Additional user content to append. |
| params = {} | Unpack[Params] | - |

Returns

| Type | Description |
| --- | --- |
| ContextResponse[DepsT, None] \| ContextResponse[DepsT, FormattableT] | A new `llm.ContextResponse` object containing the extended conversation. |

Function context_resume_async

Generate a new llm.AsyncContextResponse by extending another response's messages with additional user content.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| self | Any | - |
| ctx | Context[DepsT] | Context object with dependencies for tools. |
| model_id | ModelIdT | Model identifier to use. |
| response | AsyncContextResponse[DepsT, None] \| AsyncContextResponse[DepsT, FormattableT] | Previous async context response to extend. |
| content | UserContent | Additional user content to append. |
| params = {} | Unpack[Params] | - |

Returns

| Type | Description |
| --- | --- |
| AsyncContextResponse[DepsT, None] \| AsyncContextResponse[DepsT, FormattableT] | A new `llm.AsyncContextResponse` object containing the extended conversation. |

Function resume_stream

Generate a new llm.StreamResponse by extending another response's messages with additional user content.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| self | Any | - |
| model_id | ModelIdT | Model identifier to use. |
| response | StreamResponse \| StreamResponse[FormattableT] | Previous stream response to extend. |
| content | UserContent | Additional user content to append. |
| params = {} | Unpack[Params] | - |

Returns

| Type | Description |
| --- | --- |
| StreamResponse \| StreamResponse[FormattableT] | A new `llm.StreamResponse` object for streaming the extended conversation. |

Function resume_stream_async

Generate a new llm.AsyncStreamResponse by extending another response's messages with additional user content.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| self | Any | - |
| model_id | ModelIdT | Model identifier to use. |
| response | AsyncStreamResponse \| AsyncStreamResponse[FormattableT] | Previous async stream response to extend. |
| content | UserContent | Additional user content to append. |
| params = {} | Unpack[Params] | - |

Returns

| Type | Description |
| --- | --- |
| AsyncStreamResponse \| AsyncStreamResponse[FormattableT] | A new `llm.AsyncStreamResponse` object for asynchronously streaming the extended conversation. |

Function context_resume_stream

Generate a new llm.ContextStreamResponse by extending another response's messages with additional user content.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| self | Any | - |
| ctx | Context[DepsT] | Context object with dependencies for tools. |
| model_id | ModelIdT | Model identifier to use. |
| response | ContextStreamResponse[DepsT, None] \| ContextStreamResponse[DepsT, FormattableT] | Previous context stream response to extend. |
| content | UserContent | Additional user content to append. |
| params = {} | Unpack[Params] | - |

Returns

| Type | Description |
| --- | --- |
| ContextStreamResponse[DepsT, None] \| ContextStreamResponse[DepsT, FormattableT] | A new `llm.ContextStreamResponse` object for streaming the extended conversation. |

Function context_resume_stream_async

Generate a new llm.AsyncContextStreamResponse by extending another response's messages with additional user content.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| self | Any | - |
| ctx | Context[DepsT] | Context object with dependencies for tools. |
| model_id | ModelIdT | Model identifier to use. |
| response | AsyncContextStreamResponse[DepsT, None] \| AsyncContextStreamResponse[DepsT, FormattableT] | Previous async context stream response to extend. |
| content | UserContent | Additional user content to append. |
| params = {} | Unpack[Params] | - |

Returns

| Type | Description |
| --- | --- |
| AsyncContextStreamResponse[DepsT, None] \| AsyncContextStreamResponse[DepsT, FormattableT] | A new `llm.AsyncContextStreamResponse` object for asynchronously streaming the extended conversation. |

Attribute ClientT

Type: TypeVar('ClientT', bound='BaseClient')

Type variable for an LLM client.

Class GoogleClient

The client for Google LLM models.

Bases: BaseClient[GoogleModelId, Client]

Attributes

| Name | Type | Description |
| --- | --- | --- |
| client | Client(api_key=api_key, http_options=http_options) | - |

Function call

Generate an llm.Response by synchronously calling the Google GenAI API.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| self | Any | - |
| model_id | GoogleModelId | Model identifier to use. |
| messages | Sequence[Message] | Messages to send to the LLM. |
| tools = None | Sequence[Tool] \| Toolkit \| None | Optional tools that the model may invoke. |
| format = None | type[FormattableT] \| Format[FormattableT] \| None | Optional response format specifier. |
| params = {} | Unpack[Params] | - |

Returns

| Type | Description |
| --- | --- |
| Response \| Response[FormattableT] | An `llm.Response` object containing the LLM-generated content. |
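The `format` parameter requests structured output. The sketch below assumes a plain dataclass qualifies as `FormattableT` and that the parsed value is exposed via an accessor on the response; both are assumptions to verify against your installed version.

```python
from dataclasses import dataclass

from mirascope import llm


@dataclass
class Book:
    title: str
    author: str


client = llm.clients.get_client("google")
response = client.call(
    model_id="gemini-2.5-flash",  # hypothetical model id
    messages=[llm.messages.user("Recommend a science fiction book.")],
    format=Book,  # assumed: a plain dataclass qualifies as FormattableT
)
book = response.format()  # hypothetical accessor for the parsed Book
```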

Function context_call

Generate an llm.ContextResponse by synchronously calling the Google GenAI API.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| self | Any | - |
| ctx | Context[DepsT] | Context object with dependencies for tools. |
| model_id | GoogleModelId | Model identifier to use. |
| messages | Sequence[Message] | Messages to send to the LLM. |
| tools = None | Sequence[Tool \| ContextTool[DepsT]] \| ContextToolkit[DepsT] \| None | Optional tools that the model may invoke. |
| format = None | type[FormattableT] \| Format[FormattableT] \| None | Optional response format specifier. |
| params = {} | Unpack[Params] | - |

Returns

| Type | Description |
| --- | --- |
| ContextResponse[DepsT, None] \| ContextResponse[DepsT, FormattableT] | An `llm.ContextResponse` object containing the LLM-generated content. |

Function call_async

Generate an llm.AsyncResponse by asynchronously calling the Google GenAI API.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| self | Any | - |
| model_id | GoogleModelId | Model identifier to use. |
| messages | Sequence[Message] | Messages to send to the LLM. |
| tools = None | Sequence[AsyncTool] \| AsyncToolkit \| None | Optional tools that the model may invoke. |
| format = None | type[FormattableT] \| Format[FormattableT] \| None | Optional response format specifier. |
| params = {} | Unpack[Params] | - |

Returns

| Type | Description |
| --- | --- |
| AsyncResponse \| AsyncResponse[FormattableT] | An `llm.AsyncResponse` object containing the LLM-generated content. |

Function context_call_async

Generate an llm.AsyncContextResponse by asynchronously calling the Google GenAI API.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| self | Any | - |
| ctx | Context[DepsT] | Context object with dependencies for tools. |
| model_id | GoogleModelId | Model identifier to use. |
| messages | Sequence[Message] | Messages to send to the LLM. |
| tools = None | Sequence[AsyncTool \| AsyncContextTool[DepsT]] \| AsyncContextToolkit[DepsT] \| None | Optional tools that the model may invoke. |
| format = None | type[FormattableT] \| Format[FormattableT] \| None | Optional response format specifier. |
| params = {} | Unpack[Params] | - |

Returns

| Type | Description |
| --- | --- |
| AsyncContextResponse[DepsT, None] \| AsyncContextResponse[DepsT, FormattableT] | An `llm.AsyncContextResponse` object containing the LLM-generated content. |

Function stream

Generate an llm.StreamResponse by synchronously streaming from the Google GenAI API.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| self | Any | - |
| model_id | GoogleModelId | Model identifier to use. |
| messages | Sequence[Message] | Messages to send to the LLM. |
| tools = None | Sequence[Tool] \| Toolkit \| None | Optional tools that the model may invoke. |
| format = None | type[FormattableT] \| Format[FormattableT] \| None | Optional response format specifier. |
| params = {} | Unpack[Params] | - |

Returns

| Type | Description |
| --- | --- |
| StreamResponse \| StreamResponse[FormattableT] | An `llm.StreamResponse` object for iterating over the LLM-generated content. |

Function context_stream

Generate an llm.ContextStreamResponse by synchronously streaming from the Google GenAI API.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| self | Any | - |
| ctx | Context[DepsT] | Context object with dependencies for tools. |
| model_id | GoogleModelId | Model identifier to use. |
| messages | Sequence[Message] | Messages to send to the LLM. |
| tools = None | Sequence[Tool \| ContextTool[DepsT]] \| ContextToolkit[DepsT] \| None | Optional tools that the model may invoke. |
| format = None | type[FormattableT] \| Format[FormattableT] \| None | Optional response format specifier. |
| params = {} | Unpack[Params] | - |

Returns

| Type | Description |
| --- | --- |
| ContextStreamResponse[DepsT] \| ContextStreamResponse[DepsT, FormattableT] | An `llm.ContextStreamResponse` object for iterating over the LLM-generated content. |

Function stream_async

Generate an llm.AsyncStreamResponse by asynchronously streaming from the Google GenAI API.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| self | Any | - |
| model_id | GoogleModelId | Model identifier to use. |
| messages | Sequence[Message] | Messages to send to the LLM. |
| tools = None | Sequence[AsyncTool] \| AsyncToolkit \| None | Optional tools that the model may invoke. |
| format = None | type[FormattableT] \| Format[FormattableT] \| None | Optional response format specifier. |
| params = {} | Unpack[Params] | - |

Returns

| Type | Description |
| --- | --- |
| AsyncStreamResponse \| AsyncStreamResponse[FormattableT] | An `llm.AsyncStreamResponse` object for asynchronously iterating over the LLM-generated content. |

Function context_stream_async

Generate an llm.AsyncContextStreamResponse by asynchronously streaming from the Google GenAI API.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| self | Any | - |
| ctx | Context[DepsT] | Context object with dependencies for tools. |
| model_id | GoogleModelId | Model identifier to use. |
| messages | Sequence[Message] | Messages to send to the LLM. |
| tools = None | Sequence[AsyncTool \| AsyncContextTool[DepsT]] \| AsyncContextToolkit[DepsT] \| None | Optional tools that the model may invoke. |
| format = None | type[FormattableT] \| Format[FormattableT] \| None | Optional response format specifier. |
| params = {} | Unpack[Params] | - |

Returns

| Type | Description |
| --- | --- |
| AsyncContextStreamResponse[DepsT] \| AsyncContextStreamResponse[DepsT, FormattableT] | An `llm.AsyncContextStreamResponse` object for asynchronously iterating over the LLM-generated content. |

Attribute GoogleModelId

Type: TypeAlias

The Google model ids registered with Mirascope.

Attribute ModelId

Type: TypeAlias

Class OpenAICompletionsClient

The client for OpenAI LLM models via the ChatCompletions API.

Bases: BaseClient[OpenAICompletionsModelId, OpenAI]

Attributes

| Name | Type | Description |
| --- | --- | --- |
| client | OpenAI(api_key=api_key, base_url=base_url) | - |
| async_client | AsyncOpenAI(api_key=api_key, base_url=base_url) | - |

Function call

Generate an llm.Response by synchronously calling the OpenAI ChatCompletions API.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| self | Any | - |
| model_id | OpenAICompletionsModelId | Model identifier to use. |
| messages | Sequence[Message] | Messages to send to the LLM. |
| tools = None | Sequence[Tool] \| Toolkit \| None | Optional tools that the model may invoke. |
| format = None | type[FormattableT] \| Format[FormattableT] \| None | Optional response format specifier. |
| params = {} | Unpack[Params] | - |

Returns

| Type | Description |
| --- | --- |
| Response \| Response[FormattableT] | An `llm.Response` object containing the LLM-generated content. |

Function context_call

Generate an llm.ContextResponse by synchronously calling the OpenAI ChatCompletions API.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| self | Any | - |
| ctx | Context[DepsT] | Context object with dependencies for tools. |
| model_id | OpenAICompletionsModelId | Model identifier to use. |
| messages | Sequence[Message] | Messages to send to the LLM. |
| tools = None | Sequence[Tool \| ContextTool[DepsT]] \| ContextToolkit[DepsT] \| None | Optional tools that the model may invoke. |
| format = None | type[FormattableT] \| Format[FormattableT] \| None | Optional response format specifier. |
| params = {} | Unpack[Params] | - |

Returns

| Type | Description |
| --- | --- |
| ContextResponse[DepsT, None] \| ContextResponse[DepsT, FormattableT] | An `llm.ContextResponse` object containing the LLM-generated content. |

Function call_async

Generate an llm.AsyncResponse by asynchronously calling the OpenAI ChatCompletions API.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| self | Any | - |
| model_id | OpenAICompletionsModelId | Model identifier to use. |
| messages | Sequence[Message] | Messages to send to the LLM. |
| tools = None | Sequence[AsyncTool] \| AsyncToolkit \| None | Optional tools that the model may invoke. |
| format = None | type[FormattableT] \| Format[FormattableT] \| None | Optional response format specifier. |
| params = {} | Unpack[Params] | - |

Returns

| Type | Description |
| --- | --- |
| AsyncResponse \| AsyncResponse[FormattableT] | An `llm.AsyncResponse` object containing the LLM-generated content. |

Function context_call_async

Generate an llm.AsyncContextResponse by asynchronously calling the OpenAI ChatCompletions API.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| self | Any | - |
| ctx | Context[DepsT] | Context object with dependencies for tools. |
| model_id | OpenAICompletionsModelId | Model identifier to use. |
| messages | Sequence[Message] | Messages to send to the LLM. |
| tools = None | Sequence[AsyncTool \| AsyncContextTool[DepsT]] \| AsyncContextToolkit[DepsT] \| None | Optional tools that the model may invoke. |
| format = None | type[FormattableT] \| Format[FormattableT] \| None | Optional response format specifier. |
| params = {} | Unpack[Params] | - |

Returns

| Type | Description |
| --- | --- |
| AsyncContextResponse[DepsT, None] \| AsyncContextResponse[DepsT, FormattableT] | An `llm.AsyncContextResponse` object containing the LLM-generated content. |

Function stream

Generate an llm.StreamResponse by synchronously streaming from the OpenAI ChatCompletions API.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| self | Any | - |
| model_id | OpenAICompletionsModelId | Model identifier to use. |
| messages | Sequence[Message] | Messages to send to the LLM. |
| tools = None | Sequence[Tool] \| Toolkit \| None | Optional tools that the model may invoke. |
| format = None | type[FormattableT] \| Format[FormattableT] \| None | Optional response format specifier. |
| params = {} | Unpack[Params] | - |

Returns

| Type | Description |
| --- | --- |
| StreamResponse \| StreamResponse[FormattableT] | An `llm.StreamResponse` object for iterating over the LLM-generated content. |

Function context_stream

Generate an llm.ContextStreamResponse by synchronously streaming from the OpenAI ChatCompletions API.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| self | Any | - |
| ctx | Context[DepsT] | Context object with dependencies for tools. |
| model_id | OpenAICompletionsModelId | Model identifier to use. |
| messages | Sequence[Message] | Messages to send to the LLM. |
| tools = None | Sequence[Tool \| ContextTool[DepsT]] \| ContextToolkit[DepsT] \| None | Optional tools that the model may invoke. |
| format = None | type[FormattableT] \| Format[FormattableT] \| None | Optional response format specifier. |
| params = {} | Unpack[Params] | - |

Returns

| Type | Description |
| --- | --- |
| ContextStreamResponse[DepsT] \| ContextStreamResponse[DepsT, FormattableT] | An `llm.ContextStreamResponse` object for iterating over the LLM-generated content. |

Function stream_async

Generate an llm.AsyncStreamResponse by asynchronously streaming from the OpenAI ChatCompletions API.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| self | Any | - |
| model_id | OpenAICompletionsModelId | Model identifier to use. |
| messages | Sequence[Message] | Messages to send to the LLM. |
| tools = None | Sequence[AsyncTool] \| AsyncToolkit \| None | Optional tools that the model may invoke. |
| format = None | type[FormattableT] \| Format[FormattableT] \| None | Optional response format specifier. |
| params = {} | Unpack[Params] | - |

Returns

| Type | Description |
| --- | --- |
| AsyncStreamResponse \| AsyncStreamResponse[FormattableT] | An `llm.AsyncStreamResponse` object for asynchronously iterating over the LLM-generated content. |

Function context_stream_async

Generate an llm.AsyncContextStreamResponse by asynchronously streaming from the OpenAI ChatCompletions API.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| self | Any | - |
| ctx | Context[DepsT] | Context object with dependencies for tools. |
| model_id | OpenAICompletionsModelId | Model identifier to use. |
| messages | Sequence[Message] | Messages to send to the LLM. |
| tools = None | Sequence[AsyncTool \| AsyncContextTool[DepsT]] \| AsyncContextToolkit[DepsT] \| None | Optional tools that the model may invoke. |
| format = None | type[FormattableT] \| Format[FormattableT] \| None | Optional response format specifier. |
| params = {} | Unpack[Params] | - |

Returns

| Type | Description |
| --- | --- |
| AsyncContextStreamResponse[DepsT] \| AsyncContextStreamResponse[DepsT, FormattableT] | An `llm.AsyncContextStreamResponse` object for asynchronously iterating over the LLM-generated content. |

Attribute OpenAICompletionsModelId

Type: TypeAlias

The OpenAI ChatCompletions model ids registered with Mirascope.

Class OpenAIResponsesClient

The client for the OpenAI Responses API.

Bases: BaseClient[OpenAIResponsesModelId, OpenAI]

Attributes

| Name | Type | Description |
| --- | --- | --- |
| client | OpenAI(api_key=api_key, base_url=base_url) | - |
| async_client | AsyncOpenAI(api_key=api_key, base_url=base_url) | - |

Function call

Generate an llm.Response by synchronously calling the OpenAI Responses API.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| self | Any | - |
| model_id | OpenAIResponsesModelId | Model identifier to use. |
| messages | Sequence[Message] | Messages to send to the LLM. |
| tools = None | Sequence[Tool] \| Toolkit \| None | Optional tools that the model may invoke. |
| format = None | type[FormattableT] \| Format[FormattableT] \| None | Optional response format specifier. |
| params = {} | Unpack[Params] | - |

Returns

| Type | Description |
| --- | --- |
| Response \| Response[FormattableT] | An `llm.Response` object containing the LLM-generated content. |

Function call_async

Generate an llm.AsyncResponse by asynchronously calling the OpenAI Responses API.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| self | Any | - |
| model_id | OpenAIResponsesModelId | Model identifier to use. |
| messages | Sequence[Message] | Messages to send to the LLM. |
| tools = None | Sequence[AsyncTool] \| AsyncToolkit \| None | Optional tools that the model may invoke. |
| format = None | type[FormattableT] \| Format[FormattableT] \| None | Optional response format specifier. |
| params = {} | Unpack[Params] | - |

Returns

| Type | Description |
| --- | --- |
| AsyncResponse \| AsyncResponse[FormattableT] | An `llm.AsyncResponse` object containing the LLM-generated content. |

Function stream

Generate an llm.StreamResponse by synchronously streaming from the OpenAI Responses API.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| self | Any | - |
| model_id | OpenAIResponsesModelId | Model identifier to use. |
| messages | Sequence[Message] | Messages to send to the LLM. |
| tools = None | Sequence[Tool] \| Toolkit \| None | Optional tools that the model may invoke. |
| format = None | type[FormattableT] \| Format[FormattableT] \| None | Optional response format specifier. |
| params = {} | Unpack[Params] | - |

Returns

| Type | Description |
| --- | --- |
| StreamResponse \| StreamResponse[FormattableT] | An `llm.StreamResponse` object containing the LLM-generated content stream. |

Function stream_async

Generate an llm.AsyncStreamResponse by asynchronously streaming from the OpenAI Responses API.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| self | Any | - |
| model_id | OpenAIResponsesModelId | Model identifier to use. |
| messages | Sequence[Message] | Messages to send to the LLM. |
| tools = None | Sequence[AsyncTool] \| AsyncToolkit \| None | Optional tools that the model may invoke. |
| format = None | type[FormattableT] \| Format[FormattableT] \| None | Optional response format specifier. |
| params = {} | Unpack[Params] | - |

Returns

| Type | Description |
| --- | --- |
| AsyncStreamResponse \| AsyncStreamResponse[FormattableT] | An `llm.AsyncStreamResponse` object containing the LLM-generated content stream. |

Function context_call

Generate an llm.ContextResponse by synchronously calling the OpenAI Responses API with context.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| self | Any | - |
| ctx | Context[DepsT] | The context object containing dependencies. |
| model_id | OpenAIResponsesModelId | Model identifier to use. |
| messages | Sequence[Message] | Messages to send to the LLM. |
| tools = None | Sequence[Tool \| ContextTool[DepsT]] \| ContextToolkit[DepsT] \| None | Optional tools that the model may invoke. |
| format = None | type[FormattableT] \| Format[FormattableT] \| None | Optional response format specifier. |
| params = {} | Unpack[Params] | - |

Returns

| Type | Description |
| --- | --- |
| ContextResponse[DepsT] \| ContextResponse[DepsT, FormattableT] | An `llm.ContextResponse` object containing the LLM-generated content and context. |

Function context_call_async

Generate an llm.AsyncContextResponse by asynchronously calling the OpenAI Responses API with context.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| self | Any | - |
| ctx | Context[DepsT] | The context object containing dependencies. |
| model_id | OpenAIResponsesModelId | Model identifier to use. |
| messages | Sequence[Message] | Messages to send to the LLM. |
| tools = None | Sequence[AsyncTool \| AsyncContextTool[DepsT]] \| AsyncContextToolkit[DepsT] \| None | Optional tools that the model may invoke. |
| format = None | type[FormattableT] \| Format[FormattableT] \| None | Optional response format specifier. |
| params = {} | Unpack[Params] | - |

Returns

| Type | Description |
| --- | --- |
| AsyncContextResponse[DepsT] \| AsyncContextResponse[DepsT, FormattableT] | An `llm.AsyncContextResponse` object containing the LLM-generated content and context. |

Function context_stream

Generate an llm.ContextStreamResponse by synchronously streaming from the OpenAI Responses API with context.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| self | Any | - |
| ctx | Context[DepsT] | The context object containing dependencies. |
| model_id | OpenAIResponsesModelId | Model identifier to use. |
| messages | Sequence[Message] | Messages to send to the LLM. |
| tools = None | Sequence[Tool \| ContextTool[DepsT]] \| ContextToolkit[DepsT] \| None | Optional tools that the model may invoke. |
| format = None | type[FormattableT] \| Format[FormattableT] \| None | Optional response format specifier. |
| params = {} | Unpack[Params] | - |

Returns

| Type | Description |
| --- | --- |
| ContextStreamResponse[DepsT] \| ContextStreamResponse[DepsT, FormattableT] | An `llm.ContextStreamResponse` object containing the LLM-generated content stream and context. |

Function context_stream_async

Generate an llm.AsyncContextStreamResponse by asynchronously streaming from the OpenAI Responses API with context.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| self | Any | - |
| ctx | Context[DepsT] | The context object containing dependencies. |
| model_id | OpenAIResponsesModelId | Model identifier to use. |
| messages | Sequence[Message] | Messages to send to the LLM. |
| tools = None | Sequence[AsyncTool \| AsyncContextTool[DepsT]] \| AsyncContextToolkit[DepsT] \| None | Optional tools that the model may invoke. |
| format = None | type[FormattableT] \| Format[FormattableT] \| None | Optional response format specifier. |
| params = {} | Unpack[Params] | - |

Returns

| Type | Description |
| --- | --- |
| AsyncContextStreamResponse[DepsT] \| AsyncContextStreamResponse[DepsT, FormattableT] | An `llm.AsyncContextStreamResponse` object containing the LLM-generated content stream and context. |

Attribute OpenAIResponsesModelId

Type: TypeAlias

The OpenAI Responses model ids registered with Mirascope.

Class Params

Common parameters shared across LLM providers.

Note: Each provider may handle these parameters differently or not support them at all. Please check provider-specific documentation for parameter support and behavior.

Bases: TypedDict

Attributes

| Name | Type | Description |
| --- | --- | --- |
| temperature | float | Controls randomness in the output (0.0 to 1.0). Lower temperatures are good for prompts that require a less open-ended or creative response, while higher temperatures can lead to more diverse or creative results. |
| max_tokens | int | Maximum number of tokens to generate. |
| top_p | float | Nucleus sampling parameter (0.0 to 1.0). Tokens are selected from the most to least probable until the sum of their probabilities equals this value. Use a lower value for less random responses and a higher value for more random responses. |
| top_k | int | Limits token selection to the k most probable tokens (typically 1 to 100). For each token selection step, the `top_k` tokens with the highest probabilities are sampled; tokens are then further filtered based on `top_p`, with the final token selected using temperature sampling. Use a lower number for less random responses and a higher number for more random responses. |
| seed | int | Random seed for reproducibility. When `seed` is fixed to a specific number, the model makes a best effort to provide the same response for repeated requests. Not supported by all providers, and does not guarantee strict reproducibility. |
| stop_sequences | list[str] | Stop sequences to end generation. The model will stop generating text if one of these strings is encountered in the response. |
| thinking | bool | Configures whether the model should use thinking, a process where the model spends additional tokens reasoning about the prompt before generating a response. If `params.thinking` is `True`, thinking and thought summaries are enabled (if supported by the model/provider), with a default budget for thinking tokens. If `params.thinking` is `False`, thinking is wholly disabled, assuming the model allows this (some models, e.g. `google:gemini-2.5-pro`, do not allow disabling thinking). If `params.thinking` is unset (or `None`), provider-specific default behavior for the chosen model is used. |
| encode_thoughts_as_text | bool | Configures whether `Thought` content should be re-encoded as text for model consumption. If `True`, then when an `AssistantMessage` containing `Thoughts` is passed back to an LLM, those `Thoughts` are encoded as `Text` so the assistant can read them. This ensures the assistant has access to (at least the summarized output of) its reasoning process, in contrast with provider defaults that may ignore prior thoughts, particularly when tool calls are not involved. When `True`, Mirascope messages passed to the provider are always re-encoded rather than reusing raw provider response content, which may disable provider-specific behavior like cached reasoning tokens. If `False`, `Thoughts` are not encoded as text, and whether reasoning context is available to the model depends entirely on the provider's behavior. Defaults to `False` if unset. |
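Because `Params` is a `TypedDict`, a plain dict literal type-checks against it. A sketch; the `llm.clients.Params` import path is an assumption:

```python
from mirascope import llm

# Common parameters assembled once and reused across calls. Remember that
# individual providers may ignore or reinterpret any of these.
params: llm.clients.Params = {
    "temperature": 0.2,
    "max_tokens": 256,
    "stop_sequences": ["\n\n"],
    "thinking": True,  # enable thinking with a default token budget, where supported
}
```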

Attribute Provider

Type: TypeAlias

Function client

Create a cached client instance for the specified provider.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| provider | Provider | The provider name ("openai:completions", "anthropic", or "google"). |
| api_key = None | str \| None | API key for authentication. If None, uses provider-specific env var. |
| base_url = None | str \| None | Base URL for the API. If None, uses provider-specific env var. |

Returns

| Type | Description |
| --- | --- |
| AnthropicClient \| GoogleClient \| OpenAICompletionsClient \| OpenAIResponsesClient | A cached client instance for the specified provider with the given parameters. |
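A sketch of constructing a client with explicit credentials rather than environment variables; the `llm.clients.client` import path is an assumption:

```python
from mirascope import llm

# Explicit credentials; with api_key=None, the provider-specific
# environment variable would be used instead.
anthropic_client = llm.clients.client(
    "anthropic",
    api_key="sk-...",  # placeholder, not a real key
)
```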

Function get_client

Get a client instance for the specified provider.

Multiple calls to get_client will return the same Client rather than constructing new ones.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| provider | Provider | The provider name ("openai:completions", "anthropic", or "google"). |

Returns

| Type | Description |
| --- | --- |
| AnthropicClient \| GoogleClient \| OpenAICompletionsClient \| OpenAIResponsesClient | A client instance for the specified provider. The specific client type depends on the provider. |
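Since `get_client` caches per provider, repeated calls return the identical instance. A sketch; the import path is an assumption:

```python
from mirascope import llm

a = llm.clients.get_client("anthropic")
b = llm.clients.get_client("anthropic")
assert a is b  # same cached client, per the note above
```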