models

Class Model

The unified LLM interface that delegates to provider-specific clients.

This class provides a consistent interface for interacting with language models from various providers. It handles common operations such as generating responses, streaming, and their async variants by delegating to the appropriate client methods.

Usage Note: In most cases, you should use llm.use_model() instead of instantiating Model directly. This preserves the ability to override the model at runtime using the llm.model() context manager. Only instantiate Model directly if you want to hardcode a specific model and prevent it from being overridden by context.

Example (recommended - allows override):

```python
from mirascope import llm

def recommend_book(genre: str) -> llm.Response:
    # Uses context model if available, otherwise creates default
    model = llm.use_model(provider="openai", model_id="gpt-4o-mini")
    message = llm.messages.user(f"Please recommend a book in {genre}.")
    return model.call(messages=[message])

# Uses default model
response = recommend_book("fantasy")

# Override with different model
with llm.model(provider="anthropic", model_id="claude-sonnet-4-0"):
    response = recommend_book("fantasy")  # Uses Claude
```

Example (direct instantiation - prevents override):

```python
from mirascope import llm

def recommend_book(genre: str) -> llm.Response:
    # Hardcoded model, cannot be overridden by context
    model = llm.Model(provider="openai", model_id="gpt-4o-mini")
    message = llm.messages.user(f"Please recommend a book in {genre}.")
    return model.call(messages=[message])
```

Attributes

| Name | Type | Description |
| --- | --- | --- |
| provider | `Provider` | The provider being used (e.g. `openai`). |
| model_id | `ModelId` | The model being used (e.g. `gpt-4o-mini`). |
| params | `Params` | The default parameters for the model (temperature, max_tokens, etc.). |

Function call

Generate an llm.Response by synchronously calling this model's LLM provider.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| self | `Any` | - |
| messages | `Sequence[Message]` | Messages to send to the LLM. |
| tools = None | `Sequence[Tool] \| Toolkit \| None` | Optional tools that the model may invoke. |
| format = None | `type[FormattableT] \| Format[FormattableT] \| None` | Optional response format specifier. |

Returns

| Type | Description |
| --- | --- |
| `Response \| Response[FormattableT]` | An `llm.Response` object containing the LLM-generated content. |
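
For structured output, `format` accepts a formattable type (or an `llm.Format` wrapper). A minimal sketch, assuming a Pydantic model qualifies as a formattable type; how the parsed value is accessed depends on the `Response` API, so check the `Response` reference:

```python
from pydantic import BaseModel

from mirascope import llm


class Book(BaseModel):
    """Assumption: a Pydantic model is a valid formattable type."""

    title: str
    author: str


model = llm.use_model(provider="openai", model_id="gpt-4o-mini")
message = llm.messages.user("Please recommend a fantasy book.")
# Passing `format=Book` yields a `Response[Book]` per the table above.
response = model.call(messages=[message], format=Book)
```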

Function call_async

Generate an llm.AsyncResponse by asynchronously calling this model's LLM provider.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| self | `Any` | - |
| messages | `Sequence[Message]` | Messages to send to the LLM. |
| tools = None | `Sequence[AsyncTool] \| AsyncToolkit \| None` | Optional tools that the model may invoke. |
| format = None | `type[FormattableT] \| Format[FormattableT] \| None` | Optional response format specifier. |

Returns

| Type | Description |
| --- | --- |
| `AsyncResponse \| AsyncResponse[FormattableT]` | An `llm.AsyncResponse` object containing the LLM-generated content. |
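
A minimal async sketch that mirrors the synchronous example above; it assumes `call_async` is a coroutine to be awaited directly:

```python
import asyncio

from mirascope import llm


async def recommend_book(genre: str) -> llm.AsyncResponse:
    model = llm.use_model(provider="openai", model_id="gpt-4o-mini")
    message = llm.messages.user(f"Please recommend a book in {genre}.")
    # Assumption: call_async is awaited and returns an AsyncResponse.
    return await model.call_async(messages=[message])


response = asyncio.run(recommend_book("fantasy"))
```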

Function stream

Generate an llm.StreamResponse by synchronously streaming from this model's LLM provider.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| self | `Any` | - |
| messages | `Sequence[Message]` | Messages to send to the LLM. |
| tools = None | `Sequence[Tool] \| Toolkit \| None` | Optional tools that the model may invoke. |
| format = None | `type[FormattableT] \| Format[FormattableT] \| None` | Optional response format specifier. |

Returns

| Type | Description |
| --- | --- |
| `StreamResponse \| StreamResponse[FormattableT]` | An `llm.StreamResponse` object for iterating over the LLM-generated content. |
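
A minimal streaming sketch. The `StreamResponse` is documented as iterable; the exact chunk type it yields is an assumption here, so consult the `StreamResponse` reference:

```python
from mirascope import llm

model = llm.use_model(provider="openai", model_id="gpt-4o-mini")
message = llm.messages.user("Please recommend a fantasy book.")
stream = model.stream(messages=[message])
# Assumption: iterating the StreamResponse yields content chunks as
# they arrive from the provider.
for chunk in stream:
    print(chunk)
```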

Function stream_async

Generate an llm.AsyncStreamResponse by asynchronously streaming from this model's LLM provider.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| self | `Any` | - |
| messages | `list[Message]` | Messages to send to the LLM. |
| tools = None | `Sequence[AsyncTool] \| AsyncToolkit \| None` | Optional tools that the model may invoke. |
| format = None | `type[FormattableT] \| Format[FormattableT] \| None` | Optional response format specifier. |

Returns

| Type | Description |
| --- | --- |
| `AsyncStreamResponse \| AsyncStreamResponse[FormattableT]` | An `llm.AsyncStreamResponse` object for asynchronously iterating over the LLM-generated content. |
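
The async variant follows the same pattern with `async for`; whether `stream_async` must itself be awaited before iteration is an assumption here:

```python
import asyncio

from mirascope import llm


async def main() -> None:
    model = llm.use_model(provider="openai", model_id="gpt-4o-mini")
    message = llm.messages.user("Please recommend a fantasy book.")
    # Assumption: stream_async is awaited to obtain the AsyncStreamResponse,
    # which then supports `async for` iteration.
    stream = await model.stream_async(messages=[message])
    async for chunk in stream:
        print(chunk)


asyncio.run(main())
```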

Function context_call

Generate an llm.ContextResponse by synchronously calling this model's LLM provider.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| self | `Any` | - |
| ctx | `Context[DepsT]` | Context object with dependencies for tools. |
| messages | `Sequence[Message]` | Messages to send to the LLM. |
| tools = None | `Sequence[Tool \| ContextTool[DepsT]] \| ContextToolkit[DepsT] \| None` | Optional tools that the model may invoke. |
| format = None | `type[FormattableT] \| Format[FormattableT] \| None` | Optional response format specifier. |

Returns

| Type | Description |
| --- | --- |
| `ContextResponse[DepsT, None] \| ContextResponse[DepsT, FormattableT]` | An `llm.ContextResponse` object containing the LLM-generated content. |
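
A minimal context-call sketch. The `llm.Context` constructor and its `deps` argument are illustrative assumptions; the documented signature only requires a `Context[DepsT]` carrying the dependencies your tools need:

```python
from dataclasses import dataclass

from mirascope import llm


@dataclass
class Library:
    """Hypothetical dependency type standing in for DepsT."""

    catalog: list[str]


model = llm.use_model(provider="openai", model_id="gpt-4o-mini")
# Assumption: Context is constructed with a `deps` argument; check the
# Context reference for the actual constructor.
ctx = llm.Context(deps=Library(catalog=["Mistborn", "The Hobbit"]))
message = llm.messages.user("Recommend a book from the catalog.")
response = model.context_call(ctx=ctx, messages=[message])
```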

Function context_call_async

Generate an llm.AsyncContextResponse by asynchronously calling this model's LLM provider.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| self | `Any` | - |
| ctx | `Context[DepsT]` | Context object with dependencies for tools. |
| messages | `Sequence[Message]` | Messages to send to the LLM. |
| tools = None | `Sequence[AsyncTool \| AsyncContextTool[DepsT]] \| AsyncContextToolkit[DepsT] \| None` | Optional tools that the model may invoke. |
| format = None | `type[FormattableT] \| Format[FormattableT] \| None` | Optional response format specifier. |

Returns

| Type | Description |
| --- | --- |
| `AsyncContextResponse[DepsT, None] \| AsyncContextResponse[DepsT, FormattableT]` | An `llm.AsyncContextResponse` object containing the LLM-generated content. |

Function context_stream

Generate an llm.ContextStreamResponse by synchronously streaming from this model's LLM provider.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| self | `Any` | - |
| ctx | `Context[DepsT]` | Context object with dependencies for tools. |
| messages | `Sequence[Message]` | Messages to send to the LLM. |
| tools = None | `Sequence[Tool \| ContextTool[DepsT]] \| ContextToolkit[DepsT] \| None` | Optional tools that the model may invoke. |
| format = None | `type[FormattableT] \| Format[FormattableT] \| None` | Optional response format specifier. |

Returns

| Type | Description |
| --- | --- |
| `ContextStreamResponse[DepsT, None] \| ContextStreamResponse[DepsT, FormattableT]` | An `llm.ContextStreamResponse` object for iterating over the LLM-generated content. |

Function context_stream_async

Generate an llm.AsyncContextStreamResponse by asynchronously streaming from this model's LLM provider.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| self | `Any` | - |
| ctx | `Context[DepsT]` | Context object with dependencies for tools. |
| messages | `list[Message]` | Messages to send to the LLM. |
| tools = None | `Sequence[AsyncTool \| AsyncContextTool[DepsT]] \| AsyncContextToolkit[DepsT] \| None` | Optional tools that the model may invoke. |
| format = None | `type[FormattableT] \| Format[FormattableT] \| None` | Optional response format specifier. |

Returns

| Type | Description |
| --- | --- |
| `AsyncContextStreamResponse[DepsT, None] \| AsyncContextStreamResponse[DepsT, FormattableT]` | An `llm.AsyncContextStreamResponse` object for asynchronously iterating over the LLM-generated content. |

Function resume

Generate a new llm.Response by extending another response's messages with additional user content.

Uses the previous response's tools and output format, and this model's params.

Depending on the client, this may simply invoke the client's call methods with the response's messages plus the new content, or it may use a provider-specific API for resuming an existing interaction.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| self | `Any` | - |
| response | `Response \| Response[FormattableT]` | Previous response to extend. |
| content | `UserContent` | Additional user content to append. |

Returns

| Type | Description |
| --- | --- |
| `Response \| Response[FormattableT]` | A new `llm.Response` object containing the extended conversation. |
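
A minimal resume sketch; it assumes a plain string is accepted as `UserContent`:

```python
from mirascope import llm

model = llm.use_model(provider="openai", model_id="gpt-4o-mini")
message = llm.messages.user("Please recommend a fantasy book.")
response = model.call(messages=[message])

# Extend the conversation with a follow-up turn. The new response reuses
# the previous response's tools and output format with this model's params.
followup = model.resume(response=response, content="Why that one?")
```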

Function resume_async

Generate a new llm.AsyncResponse by extending another response's messages with additional user content.

Uses the previous response's tools and output format, and this model's params.

Depending on the client, this may simply invoke the client's call methods with the response's messages plus the new content, or it may use a provider-specific API for resuming an existing interaction.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| self | `Any` | - |
| response | `AsyncResponse \| AsyncResponse[FormattableT]` | Previous async response to extend. |
| content | `UserContent` | Additional user content to append. |

Returns

| Type | Description |
| --- | --- |
| `AsyncResponse \| AsyncResponse[FormattableT]` | A new `llm.AsyncResponse` object containing the extended conversation. |

Function context_resume

Generate a new llm.ContextResponse by extending another response's messages with additional user content.

Uses the previous response's tools and output format, and this model's params.

Depending on the client, this may simply invoke the client's call methods with the response's messages plus the new content, or it may use a provider-specific API for resuming an existing interaction.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| self | `Any` | - |
| ctx | `Context[DepsT]` | Context object with dependencies for tools. |
| response | `ContextResponse[DepsT, None] \| ContextResponse[DepsT, FormattableT]` | Previous context response to extend. |
| content | `UserContent` | Additional user content to append. |

Returns

| Type | Description |
| --- | --- |
| `ContextResponse[DepsT, None] \| ContextResponse[DepsT, FormattableT]` | A new `llm.ContextResponse` object containing the extended conversation. |

Function context_resume_async

Generate a new llm.AsyncContextResponse by extending another response's messages with additional user content.

Uses the previous response's tools and output format, and this model's params.

Depending on the client, this may simply invoke the client's call methods with the response's messages plus the new content, or it may use a provider-specific API for resuming an existing interaction.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| self | `Any` | - |
| ctx | `Context[DepsT]` | Context object with dependencies for tools. |
| response | `AsyncContextResponse[DepsT, None] \| AsyncContextResponse[DepsT, FormattableT]` | Previous async context response to extend. |
| content | `UserContent` | Additional user content to append. |

Returns

| Type | Description |
| --- | --- |
| `AsyncContextResponse[DepsT, None] \| AsyncContextResponse[DepsT, FormattableT]` | A new `llm.AsyncContextResponse` object containing the extended conversation. |

Function resume_stream

Generate a new llm.StreamResponse by extending another response's messages with additional user content.

Uses the previous response's tools and output format, and this model's params.

Depending on the client, this may simply invoke the client's call methods with the response's messages plus the new content, or it may use a provider-specific API for resuming an existing interaction.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| self | `Any` | - |
| response | `StreamResponse \| StreamResponse[FormattableT]` | Previous stream response to extend. |
| content | `UserContent` | Additional user content to append. |

Returns

| Type | Description |
| --- | --- |
| `StreamResponse \| StreamResponse[FormattableT]` | A new `llm.StreamResponse` object for streaming the extended conversation. |
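
A minimal sketch of resuming as a stream. Note that `resume_stream` extends a previous `StreamResponse`, not a plain `Response`; string-as-`UserContent` and direct iteration remain assumptions:

```python
from mirascope import llm

model = llm.use_model(provider="openai", model_id="gpt-4o-mini")
stream = model.stream(messages=[llm.messages.user("Recommend a fantasy book.")])
for chunk in stream:  # consume the first turn before resuming
    print(chunk)

# Stream a follow-up turn that extends the previous stream's conversation.
followup = model.resume_stream(response=stream, content="Why that one?")
for chunk in followup:
    print(chunk)
```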

Function resume_stream_async

Generate a new llm.AsyncStreamResponse by extending another response's messages with additional user content.

Uses the previous response's tools and output format, and this model's params.

Depending on the client, this may simply invoke the client's call methods with the response's messages plus the new content, or it may use a provider-specific API for resuming an existing interaction.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| self | `Any` | - |
| response | `AsyncStreamResponse \| AsyncStreamResponse[FormattableT]` | Previous async stream response to extend. |
| content | `UserContent` | Additional user content to append. |

Returns

| Type | Description |
| --- | --- |
| `AsyncStreamResponse \| AsyncStreamResponse[FormattableT]` | A new `llm.AsyncStreamResponse` object for asynchronously streaming the extended conversation. |

Function context_resume_stream

Generate a new llm.ContextStreamResponse by extending another response's messages with additional user content.

Uses the previous response's tools and output format, and this model's params.

Depending on the client, this may simply invoke the client's call methods with the response's messages plus the new content, or it may use a provider-specific API for resuming an existing interaction.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| self | `Any` | - |
| ctx | `Context[DepsT]` | Context object with dependencies for tools. |
| response | `ContextStreamResponse[DepsT, None] \| ContextStreamResponse[DepsT, FormattableT]` | Previous context stream response to extend. |
| content | `UserContent` | Additional user content to append. |

Returns

| Type | Description |
| --- | --- |
| `ContextStreamResponse[DepsT, None] \| ContextStreamResponse[DepsT, FormattableT]` | A new `llm.ContextStreamResponse` object for streaming the extended conversation. |

Function context_resume_stream_async

Generate a new llm.AsyncContextStreamResponse by extending another response's messages with additional user content.

Uses the previous response's tools and output format, and this model's params.

Depending on the client, this may simply invoke the client's call methods with the response's messages plus the new content, or it may use a provider-specific API for resuming an existing interaction.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| self | `Any` | - |
| ctx | `Context[DepsT]` | Context object with dependencies for tools. |
| response | `AsyncContextStreamResponse[DepsT, None] \| AsyncContextStreamResponse[DepsT, FormattableT]` | Previous async context stream response to extend. |
| content | `UserContent` | Additional user content to append. |

Returns

| Type | Description |
| --- | --- |
| `AsyncContextStreamResponse[DepsT, None] \| AsyncContextStreamResponse[DepsT, FormattableT]` | A new `llm.AsyncContextStreamResponse` object for asynchronously streaming the extended conversation. |

Function get_model_from_context

Get the model currently set via context, if any.

Returns

| Type | Description |
| --- | --- |
| `Model \| None` | The model set in context, or `None` if no model is set. |
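
A minimal sketch of the context lookup inside and outside an `llm.model()` block:

```python
from mirascope import llm

assert llm.get_model_from_context() is None  # no context model set yet

with llm.model(provider="openai", model_id="gpt-4o-mini"):
    current = llm.get_model_from_context()
    # Inside the block, the context model is returned.
    assert current is not None and current.model_id == "gpt-4o-mini"
```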

Function model

Set a model in context for the duration of the context manager.

This context manager sets a model that will be used by llm.use_model() calls within the context. This allows you to override the default model at runtime.

Example:

```python
import mirascope.llm as llm

def recommend_book(genre: str) -> llm.Response:
    model = llm.use_model(provider="openai", model_id="gpt-4o-mini")
    message = llm.messages.user(f"Please recommend a book in {genre}.")
    return model.call(messages=[message])

# Override the default model at runtime
with llm.model(provider="anthropic", model_id="claude-sonnet-4-0"):
    response = recommend_book("fantasy")  # Uses Claude instead of GPT
```

Parameters

| Name | Type | Description |
| --- | --- | --- |
| provider | `Provider` | The LLM provider to use (e.g., "openai:completions", "anthropic", "google"). |
| model_id | `ModelId` | The specific model identifier for the chosen provider. |
| params = {} | `Unpack[Params]` | - |

Returns

| Type | Description |
| --- | --- |
| `Iterator[None]` | A generator used as a context manager; the model stays set for the duration of the `with` block. |

Function use_model

Get the model from context if available, otherwise create a new Model.

This function checks if a model has been set in the context (via llm.model() context manager). If a model is found in the context, it returns that model. Otherwise, it creates and returns a new llm.Model instance with the provided arguments as defaults.

This allows you to write functions that work with a default model but can be overridden at runtime using the llm.model() context manager.

Example:

```python
import mirascope.llm as llm

def recommend_book(genre: str) -> llm.Response:
    model = llm.use_model(provider="openai", model_id="gpt-4o-mini")
    message = llm.messages.user(f"Please recommend a book in {genre}.")
    return model.call(messages=[message])

# Uses the default model (gpt-4o-mini)
response = recommend_book("fantasy")

# Override with a different model
with llm.model(provider="anthropic", model_id="claude-sonnet-4-0"):
    response = recommend_book("fantasy")  # Uses Claude instead
```

Parameters

| Name | Type | Description |
| --- | --- | --- |
| provider | `Provider` | The LLM provider to use (e.g., "openai:completions", "anthropic", "google"). |
| model_id | `ModelId` | The specific model identifier for the chosen provider. |
| params = {} | `Unpack[Params]` | - |

Returns

| Type | Description |
| --- | --- |
| `Model` | An `llm.Model` instance from context, or a new instance with the specified settings. |