Mirascope v2

providers

Attribute KNOWN_PROVIDER_IDS

Type: get_args(KnownProviderId)

Attribute AnthropicModelId

Type: TypeAlias

The Anthropic model IDs registered with Mirascope.

Class AnthropicProvider

The provider for Anthropic LLM models.

Bases:

BaseProvider[Anthropic]

Attributes

| Name | Type | Description |
| --- | --- | --- |
| `id` | `'anthropic'` | - |
| `default_scope` | `'anthropic/'` | - |
| `client` | `Anthropic(api_key=api_key, base_url=base_url)` | - |
| `async_client` | `AsyncAnthropic(api_key=api_key, base_url=base_url)` | - |
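
For orientation, here is a minimal sketch of obtaining an Anthropic provider via the `load_provider` / `llm.providers.load` helpers documented later on this page; the `from mirascope import llm` import path is an assumption.

```python
from mirascope import llm  # import path assumed; adjust to your installation

# Load a cached Anthropic provider. With api_key omitted, the provider-specific
# environment variable is used (see load_provider below).
provider = llm.providers.load("anthropic")

print(provider.id)             # 'anthropic'
print(provider.default_scope)  # 'anthropic/'
```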

Class BaseProvider

Base abstract provider for LLM interactions.

This class defines explicit methods for each type of call, eliminating the need for complex overloads in provider implementations.

Bases: Generic[ProviderClientT], ABC

Attributes

| Name | Type | Description |
| --- | --- | --- |
| `id` | `ProviderId` | Provider identifier (e.g., "anthropic", "openai"). |
| `default_scope` | `str \| list[str]` | Default scope(s) for this provider when explicitly registered. Can be a single scope string or a list of scopes, e.g. `"anthropic/"` (single scope) or `["anthropic/", "openai/"]` (multiple scopes, e.g. for AWS Bedrock). |
| `client` | `ProviderClientT` | - |

Function call

Generate an llm.Response by synchronously calling this provider's LLM API.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| `self` | `Any` | - |
| `model_id` | `str` | Model identifier to use. |
| `messages` | `Sequence[Message]` | Messages to send to the LLM. |
| `tools = None` | `Sequence[Tool] \| Toolkit \| None` | Optional tools that the model may invoke. |
| `format = None` | `type[FormattableT] \| Format[FormattableT] \| None` | Optional response format specifier. |
| `params = {}` | `Unpack[Params]` | - |

Returns

| Type | Description |
| --- | --- |
| `Response \| Response[FormattableT]` | An `llm.Response` object containing the LLM-generated content. |
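
A minimal sketch of a synchronous call using only the parameters documented above; how `Message` objects are constructed is not shown on this page, so the `messages` argument is left abstract.

```python
def ask(provider, messages):
    """Sketch: synchronous call with the documented `call` signature."""
    return provider.call(
        model_id="anthropic/claude-4-5-sonnet",  # model ID, as elsewhere on this page
        messages=messages,   # Sequence[Message], built elsewhere
        tools=None,          # optionally a Sequence[Tool] or a Toolkit
        format=None,         # optionally type[FormattableT] or Format[FormattableT]
        temperature=0.7,     # Params fields are passed as keyword arguments (Unpack[Params])
    )
```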

Function context_call

Generate an llm.ContextResponse by synchronously calling this provider's LLM API.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| `self` | `Any` | - |
| `ctx` | `Context[DepsT]` | Context object with dependencies for tools. |
| `model_id` | `str` | Model identifier to use. |
| `messages` | `Sequence[Message]` | Messages to send to the LLM. |
| `tools = None` | `Sequence[Tool \| ContextTool[DepsT]] \| ContextToolkit[DepsT] \| None` | Optional tools that the model may invoke. |
| `format = None` | `type[FormattableT] \| Format[FormattableT] \| None` | Optional response format specifier. |
| `params = {}` | `Unpack[Params]` | - |

Returns

| Type | Description |
| --- | --- |
| `ContextResponse[DepsT, None] \| ContextResponse[DepsT, FormattableT]` | An `llm.ContextResponse` object containing the LLM-generated content. |
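
A hedged sketch of a context-aware call; constructing the `Context[DepsT]` and the context tools is outside the scope of this page, so both are taken as inputs here.

```python
def ask_with_deps(provider, ctx, messages, context_tools):
    """Sketch: context_call threads `ctx` through to context tool invocations."""
    return provider.context_call(
        ctx=ctx,              # Context[DepsT] with dependencies for tools
        model_id="anthropic/claude-4-5-sonnet",
        messages=messages,
        tools=context_tools,  # Sequence[Tool | ContextTool[DepsT]] or a ContextToolkit[DepsT]
    )
```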

Function call_async

Generate an llm.AsyncResponse by asynchronously calling this provider's LLM API.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| `self` | `Any` | - |
| `model_id` | `str` | Model identifier to use. |
| `messages` | `Sequence[Message]` | Messages to send to the LLM. |
| `tools = None` | `Sequence[AsyncTool] \| AsyncToolkit \| None` | Optional tools that the model may invoke. |
| `format = None` | `type[FormattableT] \| Format[FormattableT] \| None` | Optional response format specifier. |
| `params = {}` | `Unpack[Params]` | - |

Returns

| Type | Description |
| --- | --- |
| `AsyncResponse \| AsyncResponse[FormattableT]` | An `llm.AsyncResponse` object containing the LLM-generated content. |
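
The async variant mirrors `call` but must be awaited; a sketch, assuming a provider and messages built elsewhere.

```python
import asyncio

async def ask_async(provider, messages):
    """Sketch: awaitable call returning an llm.AsyncResponse."""
    return await provider.call_async(
        model_id="anthropic/claude-4-5-sonnet",
        messages=messages,
        max_tokens=512,  # Params field passed via Unpack[Params]
    )

# response = asyncio.run(ask_async(provider, messages))
```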

Function context_call_async

Generate an llm.AsyncContextResponse by asynchronously calling this provider's LLM API.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| `self` | `Any` | - |
| `ctx` | `Context[DepsT]` | Context object with dependencies for tools. |
| `model_id` | `str` | Model identifier to use. |
| `messages` | `Sequence[Message]` | Messages to send to the LLM. |
| `tools = None` | `Sequence[AsyncTool \| AsyncContextTool[DepsT]] \| AsyncContextToolkit[DepsT] \| None` | Optional tools that the model may invoke. |
| `format = None` | `type[FormattableT] \| Format[FormattableT] \| None` | Optional response format specifier. |
| `params = {}` | `Unpack[Params]` | - |

Returns

| Type | Description |
| --- | --- |
| `AsyncContextResponse[DepsT, None] \| AsyncContextResponse[DepsT, FormattableT]` | An `llm.AsyncContextResponse` object containing the LLM-generated content. |

Function stream

Generate an llm.StreamResponse by synchronously streaming from this provider's LLM API.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| `self` | `Any` | - |
| `model_id` | `str` | Model identifier to use. |
| `messages` | `Sequence[Message]` | Messages to send to the LLM. |
| `tools = None` | `Sequence[Tool] \| Toolkit \| None` | Optional tools that the model may invoke. |
| `format = None` | `type[FormattableT] \| Format[FormattableT] \| None` | Optional response format specifier. |
| `params = {}` | `Unpack[Params]` | - |

Returns

| Type | Description |
| --- | --- |
| `StreamResponse \| StreamResponse[FormattableT]` | An `llm.StreamResponse` object for iterating over the LLM-generated content. |
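
A sketch of starting a synchronous stream. This page only documents that a `StreamResponse` is returned; iterating it with a plain `for` loop is an assumption, so check the `StreamResponse` reference for the actual consumption pattern.

```python
def stream_answer(provider, messages):
    """Sketch: synchronous streaming with the documented `stream` signature."""
    stream = provider.stream(
        model_id="anthropic/claude-4-5-sonnet",
        messages=messages,
    )
    # Assumed consumption pattern -- see the StreamResponse docs for the real API.
    for chunk in stream:
        print(chunk, end="", flush=True)
    return stream
```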

Function context_stream

Generate an llm.ContextStreamResponse by synchronously streaming from this provider's LLM API.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| `self` | `Any` | - |
| `ctx` | `Context[DepsT]` | Context object with dependencies for tools. |
| `model_id` | `str` | Model identifier to use. |
| `messages` | `Sequence[Message]` | Messages to send to the LLM. |
| `tools = None` | `Sequence[Tool \| ContextTool[DepsT]] \| ContextToolkit[DepsT] \| None` | Optional tools that the model may invoke. |
| `format = None` | `type[FormattableT] \| Format[FormattableT] \| None` | Optional response format specifier. |
| `params = {}` | `Unpack[Params]` | - |

Returns

| Type | Description |
| --- | --- |
| `ContextStreamResponse[DepsT, None] \| ContextStreamResponse[DepsT, FormattableT]` | An `llm.ContextStreamResponse` object for iterating over the LLM-generated content. |

Function stream_async

Generate an llm.AsyncStreamResponse by asynchronously streaming from this provider's LLM API.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| `self` | `Any` | - |
| `model_id` | `str` | Model identifier to use. |
| `messages` | `Sequence[Message]` | Messages to send to the LLM. |
| `tools = None` | `Sequence[AsyncTool] \| AsyncToolkit \| None` | Optional tools that the model may invoke. |
| `format = None` | `type[FormattableT] \| Format[FormattableT] \| None` | Optional response format specifier. |
| `params = {}` | `Unpack[Params]` | - |

Returns

| Type | Description |
| --- | --- |
| `AsyncStreamResponse \| AsyncStreamResponse[FormattableT]` | An `llm.AsyncStreamResponse` object for asynchronously iterating over the LLM-generated content. |
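
A sketch of the async streaming variant; whether `stream_async` itself must be awaited and whether the result supports `async for` are assumptions based only on the return type name shown above.

```python
async def stream_answer_async(provider, messages):
    """Sketch: asynchronous streaming via `stream_async`."""
    stream = await provider.stream_async(  # awaiting the call itself is an assumption
        model_id="anthropic/claude-4-5-sonnet",
        messages=messages,
    )
    async for chunk in stream:             # assumed AsyncStreamResponse iteration
        print(chunk, end="", flush=True)
```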

Function context_stream_async

Generate an llm.AsyncContextStreamResponse by asynchronously streaming from this provider's LLM API.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| `self` | `Any` | - |
| `ctx` | `Context[DepsT]` | Context object with dependencies for tools. |
| `model_id` | `str` | Model identifier to use. |
| `messages` | `Sequence[Message]` | Messages to send to the LLM. |
| `tools = None` | `Sequence[AsyncTool \| AsyncContextTool[DepsT]] \| AsyncContextToolkit[DepsT] \| None` | Optional tools that the model may invoke. |
| `format = None` | `type[FormattableT] \| Format[FormattableT] \| None` | Optional response format specifier. |
| `params = {}` | `Unpack[Params]` | - |

Returns

| Type | Description |
| --- | --- |
| `AsyncContextStreamResponse[DepsT, None] \| AsyncContextStreamResponse[DepsT, FormattableT]` | An `llm.AsyncContextStreamResponse` object for asynchronously iterating over the LLM-generated content. |

Function resume

Generate a new llm.Response by extending another response's messages with additional user content.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| `self` | `Any` | - |
| `model_id` | `str` | Model identifier to use. |
| `response` | `Response \| Response[FormattableT]` | Previous response to extend. |
| `content` | `UserContent` | Additional user content to append. |
| `params = {}` | `Unpack[Params]` | - |

Returns

| Type | Description |
| --- | --- |
| `Response \| Response[FormattableT]` | A new `llm.Response` object containing the extended conversation. |
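
A sketch of continuing a conversation with `resume`; passing a plain string as `UserContent` is an assumption.

```python
def follow_up(provider, response):
    """Sketch: extend a prior llm.Response with one more user turn."""
    return provider.resume(
        model_id="anthropic/claude-4-5-sonnet",
        response=response,                   # the previous llm.Response
        content="Can you expand on that?",   # UserContent; a bare str is assumed to be accepted
    )
```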

Function resume_async

Generate a new llm.AsyncResponse by extending another response's messages with additional user content.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| `self` | `Any` | - |
| `model_id` | `str` | Model identifier to use. |
| `response` | `AsyncResponse \| AsyncResponse[FormattableT]` | Previous async response to extend. |
| `content` | `UserContent` | Additional user content to append. |
| `params = {}` | `Unpack[Params]` | - |

Returns

| Type | Description |
| --- | --- |
| `AsyncResponse \| AsyncResponse[FormattableT]` | A new `llm.AsyncResponse` object containing the extended conversation. |

Function context_resume

Generate a new llm.ContextResponse by extending another response's messages with additional user content.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| `self` | `Any` | - |
| `ctx` | `Context[DepsT]` | Context object with dependencies for tools. |
| `model_id` | `str` | Model identifier to use. |
| `response` | `ContextResponse[DepsT, None] \| ContextResponse[DepsT, FormattableT]` | Previous context response to extend. |
| `content` | `UserContent` | Additional user content to append. |
| `params = {}` | `Unpack[Params]` | - |

Returns

| Type | Description |
| --- | --- |
| `ContextResponse[DepsT, None] \| ContextResponse[DepsT, FormattableT]` | A new `llm.ContextResponse` object containing the extended conversation. |

Function context_resume_async

Generate a new llm.AsyncContextResponse by extending another response's messages with additional user content.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| `self` | `Any` | - |
| `ctx` | `Context[DepsT]` | Context object with dependencies for tools. |
| `model_id` | `str` | Model identifier to use. |
| `response` | `AsyncContextResponse[DepsT, None] \| AsyncContextResponse[DepsT, FormattableT]` | Previous async context response to extend. |
| `content` | `UserContent` | Additional user content to append. |
| `params = {}` | `Unpack[Params]` | - |

Returns

| Type | Description |
| --- | --- |
| `AsyncContextResponse[DepsT, None] \| AsyncContextResponse[DepsT, FormattableT]` | A new `llm.AsyncContextResponse` object containing the extended conversation. |

Function resume_stream

Generate a new llm.StreamResponse by extending another response's messages with additional user content.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| `self` | `Any` | - |
| `model_id` | `str` | Model identifier to use. |
| `response` | `StreamResponse \| StreamResponse[FormattableT]` | Previous stream response to extend. |
| `content` | `UserContent` | Additional user content to append. |
| `params = {}` | `Unpack[Params]` | - |

Returns

| Type | Description |
| --- | --- |
| `StreamResponse \| StreamResponse[FormattableT]` | A new `llm.StreamResponse` object for streaming the extended conversation. |

Function resume_stream_async

Generate a new llm.AsyncStreamResponse by extending another response's messages with additional user content.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| `self` | `Any` | - |
| `model_id` | `str` | Model identifier to use. |
| `response` | `AsyncStreamResponse \| AsyncStreamResponse[FormattableT]` | Previous async stream response to extend. |
| `content` | `UserContent` | Additional user content to append. |
| `params = {}` | `Unpack[Params]` | - |

Returns

| Type | Description |
| --- | --- |
| `AsyncStreamResponse \| AsyncStreamResponse[FormattableT]` | A new `llm.AsyncStreamResponse` object for asynchronously streaming the extended conversation. |

Function context_resume_stream

Generate a new llm.ContextStreamResponse by extending another response's messages with additional user content.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| `self` | `Any` | - |
| `ctx` | `Context[DepsT]` | Context object with dependencies for tools. |
| `model_id` | `str` | Model identifier to use. |
| `response` | `ContextStreamResponse[DepsT, None] \| ContextStreamResponse[DepsT, FormattableT]` | Previous context stream response to extend. |
| `content` | `UserContent` | Additional user content to append. |
| `params = {}` | `Unpack[Params]` | - |

Returns

| Type | Description |
| --- | --- |
| `ContextStreamResponse[DepsT, None] \| ContextStreamResponse[DepsT, FormattableT]` | A new `llm.ContextStreamResponse` object for streaming the extended conversation. |

Function context_resume_stream_async

Generate a new llm.AsyncContextStreamResponse by extending another response's messages with additional user content.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| `self` | `Any` | - |
| `ctx` | `Context[DepsT]` | Context object with dependencies for tools. |
| `model_id` | `str` | Model identifier to use. |
| `response` | `AsyncContextStreamResponse[DepsT, None] \| AsyncContextStreamResponse[DepsT, FormattableT]` | Previous async context stream response to extend. |
| `content` | `UserContent` | Additional user content to append. |
| `params = {}` | `Unpack[Params]` | - |

Returns

| Type | Description |
| --- | --- |
| `AsyncContextStreamResponse[DepsT, None] \| AsyncContextStreamResponse[DepsT, FormattableT]` | A new `llm.AsyncContextStreamResponse` object for asynchronously streaming the extended conversation. |

Attribute GoogleModelId

Type: TypeAlias

The Google model IDs registered with Mirascope.

Class GoogleProvider

The provider for Google LLM models.

Bases:

BaseProvider[Client]

Attributes

| Name | Type | Description |
| --- | --- | --- |
| `id` | `'google'` | - |
| `default_scope` | `'google/'` | - |
| `client` | `Client(api_key=api_key, http_options=http_options)` | - |

Attribute MLXModelId

Type: TypeAlias

The identifier of the MLX model to be loaded by the MLX client.

An MLX model identifier may be a local path to a model, or a Hugging Face repository such as:

  • "mlx-community/Qwen3-8B-4bit-DWQ-053125"
  • "mlx-community/gpt-oss-20b-MXFP4-Q8"


Class MLXProvider

Client for interacting with MLX language models.

This client provides methods for generating responses from MLX models, supporting both synchronous and asynchronous operations, as well as streaming responses.

Bases:

BaseProvider[None]

Attributes

| Name | Type | Description |
| --- | --- | --- |
| `id` | `'mlx'` | - |
| `default_scope` | `'mlx-community/'` | - |

Attribute ModelId

Type: TypeAlias

Attribute OpenAIModelId

Type: OpenAIKnownModels | str

Valid OpenAI model IDs including API-specific variants.

Class OpenAIProvider

Unified provider for OpenAI that routes to the Completions or Responses API based on the `model_id`.

Bases:

BaseProvider[OpenAI]

Attributes

| Name | Type | Description |
| --- | --- | --- |
| `id` | `'openai'` | - |
| `default_scope` | `'openai/'` | - |
| `client` | `self._completions_provider.client` | - |

Class Params

Common parameters shared across LLM providers.

Note: Each provider may handle these parameters differently or not support them at all. Please check provider-specific documentation for parameter support and behavior.

Bases:

TypedDict

Attributes

| Name | Type | Description |
| --- | --- | --- |
| `temperature` | `float` | Controls randomness in the output (0.0 to 1.0). Lower temperatures are good for prompts that require a less open-ended or creative response, while higher temperatures can lead to more diverse or creative results. |
| `max_tokens` | `int` | Maximum number of tokens to generate. |
| `top_p` | `float` | Nucleus sampling parameter (0.0 to 1.0). Tokens are selected from the most to least probable until the sum of their probabilities equals this value. Use a lower value for less random responses and a higher value for more random responses. |
| `top_k` | `int` | Limits token selection to the k most probable tokens (typically 1 to 100). For each token selection step, the `top_k` tokens with the highest probabilities are sampled; tokens are then further filtered based on `top_p`, with the final token selected using temperature sampling. Use a lower number for less random responses and a higher number for more random responses. |
| `seed` | `int` | Random seed for reproducibility. When `seed` is fixed to a specific number, the model makes a best effort to provide the same response for repeated requests. Not supported by all providers, and does not guarantee strict reproducibility. |
| `stop_sequences` | `list[str]` | Stop sequences to end generation. The model will stop generating text if one of these strings is encountered in the response. |
| `thinking` | `bool` | Configures whether the model should use thinking. Thinking is a process where the model spends additional tokens reasoning about the prompt before generating a response. You may configure thinking by passing a bool to enable or disable it. If `params.thinking` is `True`, thinking and thought summaries will be enabled (if supported by the model/provider), with a default budget for thinking tokens. If `params.thinking` is `False`, thinking will be wholly disabled, assuming the model allows this (some models, e.g. `google:gemini-2.5-pro`, do not allow disabling thinking). If `params.thinking` is unset (or `None`), provider-specific default behavior for the chosen model is used. |
| `encode_thoughts_as_text` | `bool` | Configures whether `Thought` content should be re-encoded as text for model consumption. If `True`, then when an `AssistantMessage` containing `Thoughts` is passed back to an LLM, those `Thoughts` are encoded as `Text` so the assistant can read them. This ensures the assistant has access to (at least the summarized output of) its reasoning process, in contrast with provider default behaviors that may ignore prior thoughts, particularly when tool calls are not involved. When `True`, Mirascope messages passed to the provider are always re-encoded rather than reusing raw provider response content, which may disable provider-specific behavior like cached reasoning tokens. If `False`, `Thoughts` are not encoded as text, and whether reasoning context is available to the model depends entirely on the provider's behavior. Defaults to `False` if unset. |
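
Because `Params` is a `TypedDict`, its fields can be collected in a plain dict and splatted into any of the call methods, or passed directly as keyword arguments; a sketch:

```python
params = {
    "temperature": 0.3,          # less open-ended output
    "max_tokens": 1024,
    "stop_sequences": ["\n\n"],
    "thinking": False,           # disable thinking where the model allows it
}

def concise_answer(provider, messages):
    """Sketch: forwarding Params fields through the Unpack[Params] keyword arguments."""
    return provider.call(
        model_id="anthropic/claude-4-5-sonnet",
        messages=messages,
        **params,
    )
```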

Attribute Provider

Type: TypeAlias

Type alias for BaseProvider with any client type.

Attribute ProviderId

Type: KnownProviderId | str

Function get_provider_for_model

Get the provider for a model_id based on the registry.

Uses longest prefix matching to find the most specific provider for the model. If no explicit registration is found, checks for auto-registration defaults and automatically registers the provider on first use.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| `model_id` | `str` | The full model ID (e.g., "anthropic/claude-4-5-sonnet"). |

Returns

| Type | Description |
| --- | --- |
| `Provider` | The provider instance registered for this model. |
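
A sketch of resolving a provider from a full model ID, assuming the function is exposed on the `llm.providers` module documented here:

```python
from mirascope import llm  # import path assumed

# Longest-prefix matching over registered scopes; if nothing matches,
# a default provider is auto-registered on first use.
provider = llm.providers.get_provider_for_model("anthropic/claude-4-5-sonnet")
assert provider.id == "anthropic"
```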

Attribute load

Type: load_provider

A convenience alias for `load_provider`, available as `llm.providers.load`.

Function load_provider

Create a cached provider instance for the specified provider id.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| `provider_id` | `ProviderId` | The provider name ("openai", "anthropic", or "google"). |
| `api_key = None` | `str \| None` | API key for authentication. If None, uses provider-specific env var. |
| `base_url = None` | `str \| None` | Base URL for the API. If None, uses provider-specific env var. |

Returns

| Type | Description |
| --- | --- |
| `Provider` | A cached provider instance for the specified provider with the given parameters. |
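
A sketch of loading providers explicitly; the key and URL below are placeholders, and the `llm.providers` module path follows the `load` alias documented above.

```python
# With no key or URL, the provider-specific environment variables are used.
openai_provider = llm.providers.load_provider("openai")

# Override credentials and endpoint, e.g. for an OpenAI-compatible proxy.
custom = llm.providers.load_provider(
    "openai",
    api_key="sk-...",                       # placeholder, not a real key
    base_url="https://example.invalid/v1",  # placeholder base URL
)
```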

Function register_provider

Register a provider with scope(s) in the global registry.

Scopes use prefix matching on model IDs:

  • "anthropic/" matches "anthropic/*"
  • "anthropic/claude-4-5" matches "anthropic/claude-4-5*"
  • "anthropic/claude-4-5-sonnet" matches exactly "anthropic/claude-4-5-sonnet"

When multiple scopes match a model_id, the longest match wins.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| `provider` | `ProviderId \| Provider` | Either a provider ID string or a provider instance. |
| `scope = None` | `str \| list[str] \| None` | Scope string or list of scopes for prefix matching on model IDs. If None, uses the provider's default_scope attribute. Can be a single string or a list of strings. |
| `api_key = None` | `str \| None` | API key for authentication (only used if provider is a string). |
| `base_url = None` | `str \| None` | Base URL for the API (only used if provider is a string). |

Returns

| Type | Description |
| --- | --- |
| `Provider` | - |
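
A sketch of both registration styles described above, assuming the `llm.providers` module path; the scope strings follow the prefix-matching rules listed for this function.

```python
# Register by provider ID; with scope=None the provider's default_scope is used
# ("openai/" for OpenAI).
llm.providers.register_provider("openai")

# Register an existing provider instance under a narrower scope. When several
# scopes match a model_id, the longest match wins, so this instance handles
# "anthropic/claude-4-5*" models specifically.
anthropic = llm.providers.load_provider("anthropic")
llm.providers.register_provider(anthropic, scope="anthropic/claude-4-5")
```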