llm
Class APIError
Base class for API-related errors.
Bases: MirascopeError
Attributes
| Name | Type | Description |
|---|---|---|
| status_code | int | None | - |
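Example (sketch): a minimal illustration of catching API errors around a call. The call pattern follows the `Model` examples later in this reference; which subclass is raised (e.g. `RateLimitError` vs. `ServerError`) depends on the provider response.

```python
from mirascope import llm

model = llm.use_model(provider="openai", model_id="gpt-4o-mini")

try:
    response = model.call(messages=[llm.messages.user("Hello!")])
except llm.RateLimitError as e:
    # Rate limits (429); status_code is inherited from APIError.
    print(f"Rate limited (status {e.status_code}); retry later.")
except llm.APIError as e:
    # Any other API-related failure (auth, bad request, server error, ...).
    print(f"API error: {e} (status {e.status_code})")
```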
Attribute AssistantContent
Type: TypeAlias
Type alias for content that can fit into an AssistantMessage.
Attribute AssistantContentChunk
Type: TypeAlias
Chunks of assistant content that may be streamed as generated by the LLM.
Attribute AssistantContentPart
Type: TypeAlias
Content parts that can be included in an AssistantMessage.
Class AssistantMessage
An assistant message containing the model's response.
Attributes
| Name | Type | Description |
|---|---|---|
| role | Literal['assistant'] | The role of this message. Always "assistant". |
| content | Sequence[AssistantContentPart] | The content of the assistant message. |
| name | str | None | A name identifying the creator of this message. |
| provider | Provider | None | The LLM provider that generated this assistant message, if available. |
| model_id | ModelId | None | The model identifier of the LLM that generated this assistant message, if available. |
| raw_message | Jsonable | None | The provider-specific raw representation of this assistant message, if available. If raw_message is truthy, it may be used for provider-specific behavior when resuming an LLM interaction that included this assistant message. For example, we can reuse the provider-specific raw encoding rather than re-encoding the message from its Mirascope content representation. This may also take advantage of server-side provider context, e.g. identifiers of reasoning context tokens that the provider generated. If present, the content should be encoded as JSON-serializable data, in the format the provider expects for representing the Mirascope data. This may involve e.g. converting Pydantic `BaseModel`s into plain dicts via `model_dump`. Raw content is not required, as the Mirascope content can also be used to generate a valid input to the provider (potentially without taking advantage of provider-specific reasoning caches, etc.). In that case raw content should be left empty. |
Attribute AsyncChunkIterator
Type: TypeAlias
Asynchronous iterator yielding chunks with raw data.
Class AsyncContextResponse
The response generated by an LLM from an async context call.
Bases: BaseResponse[AsyncContextToolkit[DepsT], FormattableT], Generic[DepsT, FormattableT]
Function execute_tools
Execute and return all of the tool calls in the response.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
| ctx | Context[DepsT] | A `Context` with the required deps type. |
Returns
| Type | Description |
|---|---|
| Sequence[ToolOutput] | A sequence containing a `ToolOutput` for every tool call in the order they appeared. |
Function resume
Generate a new AsyncContextResponse using this response's messages with additional user content.
Uses this response's tools and format type. Also uses this response's provider, model, client, and params, unless the model context manager is being used to provide a new LLM as an override.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
| ctx | Context[DepsT] | A Context with the required deps type. |
| content | UserContent | The new user message content to append to the message history. |
Returns
| Type | Description |
|---|---|
| AsyncContextResponse[DepsT] | AsyncContextResponse[DepsT, FormattableT] | A new `AsyncContextResponse` instance generated from the extended message history. |
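Example (sketch): an async context call that executes tool calls and resumes the conversation. Several details here are assumptions rather than documented API: the `@llm.tool()` decorator location and the context-tool signature (taking `ctx` as its first parameter), the keyword-based `llm.Context(deps=...)` constructor, and passing the returned `ToolOutput`s directly as the `resume` content (plausible since `ToolOutput` is documented as user-message content).

```python
import asyncio
from dataclasses import dataclass

from mirascope import llm


@dataclass
class Library:
    books: list[str]


# Hypothetical tool definition: decorator name and ctx-first signature are assumptions.
@llm.tool()
async def list_books(ctx: llm.Context[Library]) -> list[str]:
    """List the books available in the library."""
    return ctx.deps.books


async def main() -> None:
    model = llm.use_model(provider="openai", model_id="gpt-4o-mini")
    ctx = llm.Context(deps=Library(books=["Mistborn", "Dune"]))  # assumed constructor
    response = await model.context_call_async(
        ctx,
        messages=[llm.messages.user("Which books do you have?")],
        tools=[list_books],
    )
    if response.tool_calls:
        outputs = await response.execute_tools(ctx)
        # Assumption: tool outputs are valid user content for resume().
        response = await response.resume(ctx, outputs)
    print(response)


asyncio.run(main())
```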
Class AsyncContextStreamResponse
An AsyncContextStreamResponse wraps response content from the LLM with a streaming interface.
This class supports iteration to process chunks as they arrive from the model.
Content can be streamed in one of four ways:
- Via .streams(), which provides an iterator of streams, where each stream contains chunks of streamed data. The chunks contain deltas (new content in that particular chunk), and the stream itself accumulates the collected state of all the chunks processed thus far.
- Via .chunk_stream(), which allows iterating over Mirascope's provider-agnostic chunk representation.
- Via .pretty_stream(), a helper method which provides all response content as str deltas. Iterating through pretty_stream will yield text content and optionally placeholder representations for other content types, but it will still consume the full stream.
- Via .structured_stream(), a helper method which provides partial structured outputs from a response (useful when FormatT is set). Iterating through structured_stream will only yield structured partials, but it will still consume the full stream.
As chunks are consumed, they are collected in-memory on the AsyncContextStreamResponse, and they
become available in .content, .messages, .tool_calls, etc. All of the stream
iterators can be restarted after the stream has been consumed, in which case they
will yield chunks from memory in the original sequence that came from the LLM. If
the stream is only partially consumed, a fresh iterator will first iterate through
in-memory content, and then will continue consuming fresh chunks from the LLM.
In the specific case of text chunks, they are included in the response content as soon
as they become available, via an llm.Text part that updates as more deltas come in.
This enables the behavior where resuming a partially-streamed response will include
as much text as the model generated.
For other chunks, like Thinking or ToolCall, they are only added to response
content once the corresponding part has fully streamed. This avoids issues like
adding incomplete tool calls, or thinking blocks missing signatures, to the response.
For each iterator, fully iterating through the iterator will consume the whole LLM stream. You can pause stream execution midway by breaking out of the iterator, and you can safely resume execution from the same iterator if desired.
Bases: BaseAsyncStreamResponse[AsyncContextToolkit, FormattableT], Generic[DepsT, FormattableT]
Function execute_tools
Execute and return all of the tool calls in the response.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
| ctx | Context[DepsT] | A `Context` with the required deps type. |
Returns
| Type | Description |
|---|---|
| Sequence[ToolOutput] | A sequence containing a `ToolOutput` for every tool call in the order they appeared. |
Function resume
Generate a new AsyncContextStreamResponse using this response's messages with additional user content.
Uses this response's tools and format type. Also uses this response's provider, model, client, and params, unless the model context manager is being used to provide a new LLM as an override.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
| ctx | Context[DepsT] | A Context with the required deps type. |
| content | UserContent | The new user message content to append to the message history. |
Returns
| Type | Description |
|---|---|
| AsyncContextStreamResponse[DepsT] | AsyncContextStreamResponse[DepsT, FormattableT] | A new `AsyncContextStreamResponse` instance generated from the extended message history. |
Class AsyncContextTool
Protocol defining an async tool that can be used by LLMs with context.
An AsyncContextTool represents an async function that can be called by an LLM during a call.
It includes metadata like name, description, and parameter schema.
This class is not instantiated directly but created by the @tool() decorator.
Bases: ToolSchema[AsyncContextToolFn[DepsT, AnyP, JsonableCovariantT]], Generic[DepsT, JsonableCovariantT, AnyP]
Function execute
Execute the async context tool using an LLM-provided ToolCall.
Returns
| Type | Description |
|---|---|
| ToolOutput[JsonableCovariantT] | - |
Class AsyncContextToolkit
A collection of AsyncContextTools, with helpers for getting and executing specific tools.
Bases: BaseToolkit[AsyncTool | AsyncContextTool[DepsT]], Generic[DepsT]
Function execute
Execute an AsyncContextTool using the provided tool call.
Parameters
Returns
| Type | Description |
|---|---|
| ToolOutput[Jsonable] | The output from executing the `AsyncContextTool`. |
Class AsyncResponse
The response generated by an LLM in async mode.
Bases: BaseResponse[AsyncToolkit, FormattableT]
Function execute_tools
Execute and return all of the tool calls in the response.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
Returns
| Type | Description |
|---|---|
| Sequence[ToolOutput] | A sequence containing a `ToolOutput` for every tool call in the order they appeared. |
Function resume
Generate a new AsyncResponse using this response's messages with additional user content.
Uses this response's tools and format type. Also uses this response's provider, model, client, and params, unless the model context manager is being used to provide a new LLM as an override.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
| content | UserContent | The new user message content to append to the message history. |
Returns
| Type | Description |
|---|---|
| AsyncResponse | AsyncResponse[FormattableT] | A new `AsyncResponse` instance generated from the extended message history. |
Attribute AsyncStream
Type: TypeAlias
An asynchronous assistant content stream.
Class AsyncStreamResponse
An AsyncStreamResponse wraps response content from the LLM with a streaming interface.
This class supports iteration to process chunks as they arrive from the model.
Content can be streamed in one of four ways:
- Via .streams(), which provides an iterator of streams, where each stream contains chunks of streamed data. The chunks contain deltas (new content in that particular chunk), and the stream itself accumulates the collected state of all the chunks processed thus far.
- Via .chunk_stream(), which allows iterating over Mirascope's provider-agnostic chunk representation.
- Via .pretty_stream(), a helper method which provides all response content as str deltas. Iterating through pretty_stream will yield text content and optionally placeholder representations for other content types, but it will still consume the full stream.
- Via .structured_stream(), a helper method which provides partial structured outputs from a response (useful when FormatT is set). Iterating through structured_stream will only yield structured partials, but it will still consume the full stream.
As chunks are consumed, they are collected in-memory on the AsyncStreamResponse, and they
become available in .content, .messages, .tool_calls, etc. All of the stream
iterators can be restarted after the stream has been consumed, in which case they
will yield chunks from memory in the original sequence that came from the LLM. If
the stream is only partially consumed, a fresh iterator will first iterate through
in-memory content, and then will continue consuming fresh chunks from the LLM.
In the specific case of text chunks, they are included in the response content as soon
as they become available, via an llm.Text part that updates as more deltas come in.
This enables the behavior where resuming a partially-streamed response will include
as much text as the model generated.
For other chunks, like Thinking or ToolCall, they are only added to response
content once the corresponding part has fully streamed. This avoids issues like
adding incomplete tool calls, or thinking blocks missing signatures, to the response.
For each iterator, fully iterating through the iterator will consume the whole LLM stream. You can pause stream execution midway by breaking out of the iterator, and you can safely resume execution from the same iterator if desired.
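Example (sketch): consuming an async stream via pretty_stream. Two details are assumptions: that `stream_async` is awaited to obtain the `AsyncStreamResponse`, and that `pretty_stream()` yields an async iterator on async stream responses.

```python
import asyncio

from mirascope import llm


async def main() -> None:
    model = llm.use_model(provider="openai", model_id="gpt-4o-mini")
    # Assumption: stream_async is awaited to obtain the AsyncStreamResponse.
    stream_response = await model.stream_async(
        messages=[llm.messages.user("Tell me a short story.")]
    )
    # Print text deltas as they arrive; this consumes the full stream.
    async for delta in stream_response.pretty_stream():
        print(delta, end="", flush=True)
    # After consumption, the accumulated content is available in memory.
    print("\n---\n", stream_response.content)


asyncio.run(main())
```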
Bases: BaseAsyncStreamResponse[AsyncToolkit, FormattableT]
Function execute_tools
Execute and return all of the tool calls in the response.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
Returns
| Type | Description |
|---|---|
| Sequence[ToolOutput] | A sequence containing a `ToolOutput` for every tool call in the order they appeared. |
Function resume
Generate a new AsyncStreamResponse using this response's messages with additional user content.
Uses this response's tools and format type. Also uses this response's provider, model, client, and params, unless the model context manager is being used to provide a new LLM as an override.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
| content | UserContent | The new user message content to append to the message history. |
Returns
| Type | Description |
|---|---|
| AsyncStreamResponse | AsyncStreamResponse[FormattableT] | A new `AsyncStreamResponse` instance generated from the extended message history. |
Class AsyncTextStream
Asynchronous text stream implementation.
Bases: BaseAsyncStream[Text, str]
Attributes
| Name | Type | Description |
|---|---|---|
| type | Literal['async_text_stream'] | - |
| content_type | Literal['text'] | The type of content stored in this stream. |
| partial_text | str | The accumulated text content as chunks are received. |
Function collect
Asynchronously collect all chunks and return the final Text content.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
Returns
| Type | Description |
|---|---|
| Text | The complete text content after consuming all chunks. |
Class AsyncThoughtStream
Asynchronous thought stream implementation.
Bases: BaseAsyncStream[Thought, str]
Attributes
| Name | Type | Description |
|---|---|---|
| type | Literal['async_thought_stream'] | - |
| content_type | Literal['thought'] | The type of content stored in this stream. |
| partial_thought | str | The accumulated thought content as chunks are received. |
Function collect
Asynchronously collect all chunks and return the final Thought content.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
Returns
| Type | Description |
|---|---|
| Thought | The complete thought content after consuming all chunks. |
Class AsyncTool
An async tool that can be used by LLMs.
An AsyncTool represents an async function that can be called by an LLM during a call.
It includes metadata like name, description, and parameter schema.
This class is not instantiated directly but created by the @tool() decorator.
Bases: ToolSchema[AsyncToolFn[AnyP, JsonableCovariantT]], Generic[AnyP, JsonableCovariantT]
Function execute
Execute the async tool using an LLM-provided ToolCall.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
| tool_call | ToolCall | - |
Returns
| Type | Description |
|---|---|
| ToolOutput[JsonableCovariantT] | - |
Class AsyncToolCallStream
Asynchronous tool call stream implementation.
Bases: BaseAsyncStream[ToolCall, str]
Attributes
| Name | Type | Description |
|---|---|---|
| type | Literal['async_tool_call_stream'] | - |
| content_type | Literal['tool_call'] | The type of content stored in this stream. |
| tool_id | str | A unique identifier for this tool call. |
| tool_name | str | The name of the tool being called. |
| partial_args | str | The accumulated tool arguments as chunks are received. |
Function collect
Asynchronously collect all chunks and return the final ToolCall content.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
Returns
| Type | Description |
|---|---|
| ToolCall | The complete tool call after consuming all chunks. |
Class AsyncToolkit
A collection of AsyncTools, with helpers for getting and executing specific tools.
Bases: BaseToolkit[AsyncTool]
Function execute
Execute an AsyncTool using the provided tool call.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
| tool_call | ToolCall | The tool call to execute. |
Returns
| Type | Description |
|---|---|
| ToolOutput[Jsonable] | The output from executing the `AsyncTool`. |
Class Audio
Audio content for a message.
Audio can be included in messages for voice or sound-based interactions.
Attributes
| Name | Type | Description |
|---|---|---|
| type | Literal['audio'] | - |
| source | Base64AudioSource | - |
Function download
Download and encode an audio file from a URL.
Parameters
Returns
| Type | Description |
|---|---|
| Audio | An `Audio` with a `Base64AudioSource` |
Function download_async
Asynchronously download and encode an audio file from a URL.
Parameters
Returns
| Type | Description |
|---|---|
| Audio | An `Audio` with a `Base64AudioSource` |
Function from_file
Create an Audio from a file path.
Parameters
Returns
| Type | Description |
|---|---|
| Audio | - |
Function from_bytes
Create an Audio from raw bytes.
Parameters
Returns
| Type | Description |
|---|---|
| Audio | - |
Class AuthenticationError
Raised for authentication failures (401, invalid API keys).
Bases: APIError
Class BadRequestError
Raised for malformed requests (400, 422).
Bases: APIError
Class Base64AudioSource
Audio data represented as a base64 encoded string.
Attributes
| Name | Type | Description |
|---|---|---|
| type | Literal['base64_audio_source'] | - |
| data | str | The audio data, as a base64 encoded string. |
| mime_type | AudioMimeType | The mime type of the audio (e.g. audio/mp3). |
Class Base64ImageSource
Image data represented as a base64 encoded string.
Attributes
| Name | Type | Description |
|---|---|---|
| type | Literal['base64_image_source'] | - |
| data | str | The image data, as a base64 encoded string. |
| mime_type | ImageMimeType | The mime type of the image (e.g. image/png). |
Attribute ChunkIterator
Type: TypeAlias
Synchronous iterator yielding chunks with raw data.
Class ConnectionError
Raised when unable to connect to the API (network issues, timeouts).
Bases: MirascopeError
Class Context
Context for LLM calls.
This class provides a context for LLM calls, including the model, parameters, and any dependencies needed for the call.
Bases: Generic[DepsT]
Attributes
| Name | Type | Description |
|---|---|---|
| deps | DepsT | The dependencies needed for a call. |
Class ContextResponse
The response generated by an LLM from a context call.
Bases: BaseResponse[ContextToolkit[DepsT], FormattableT], Generic[DepsT, FormattableT]
Function execute_tools
Execute and return all of the tool calls in the response.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
| ctx | Context[DepsT] | A `Context` with the required deps type. |
Returns
| Type | Description |
|---|---|
| Sequence[ToolOutput] | A sequence containing a `ToolOutput` for every tool call. |
Function resume
Generate a new ContextResponse using this response's messages with additional user content.
Uses this response's tools and format type. Also uses this response's provider, model, client, and params, unless the model context manager is being used to provide a new LLM as an override.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
| ctx | Context[DepsT] | A `Context` with the required deps type. |
| content | UserContent | The new user message content to append to the message history. |
Returns
| Type | Description |
|---|---|
| ContextResponse[DepsT] | ContextResponse[DepsT, FormattableT] | A new `ContextResponse` instance generated from the extended message history. |
Class ContextStreamResponse
A ContextStreamResponse wraps response content from the LLM with a streaming interface.
This class supports iteration to process chunks as they arrive from the model.
Content can be streamed in one of four ways:
- Via .streams(), which provides an iterator of streams, where each stream contains chunks of streamed data. The chunks contain deltas (new content in that particular chunk), and the stream itself accumulates the collected state of all the chunks processed thus far.
- Via .chunk_stream(), which allows iterating over Mirascope's provider-agnostic chunk representation.
- Via .pretty_stream(), a helper method which provides all response content as str deltas. Iterating through pretty_stream will yield text content and optionally placeholder representations for other content types, but it will still consume the full stream.
- Via .structured_stream(), a helper method which provides partial structured outputs from a response (useful when FormatT is set). Iterating through structured_stream will only yield structured partials, but it will still consume the full stream.
As chunks are consumed, they are collected in-memory on the ContextStreamResponse, and they
become available in .content, .messages, .tool_calls, etc. All of the stream
iterators can be restarted after the stream has been consumed, in which case they
will yield chunks from memory in the original sequence that came from the LLM. If
the stream is only partially consumed, a fresh iterator will first iterate through
in-memory content, and then will continue consuming fresh chunks from the LLM.
In the specific case of text chunks, they are included in the response content as soon
as they become available, via an llm.Text part that updates as more deltas come in.
This enables the behavior where resuming a partially-streamed response will include
as much text as the model generated.
For other chunks, like Thinking or ToolCall, they are only added to response
content once the corresponding part has fully streamed. This avoids issues like
adding incomplete tool calls, or thinking blocks missing signatures, to the response.
For each iterator, fully iterating through the iterator will consume the whole LLM stream. You can pause stream execution midway by breaking out of the iterator, and you can safely resume execution from the same iterator if desired.
Bases: BaseSyncStreamResponse[ContextToolkit, FormattableT], Generic[DepsT, FormattableT]
Function execute_tools
Execute and return all of the tool calls in the response.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
| ctx | Context[DepsT] | A `Context` with the required deps type. |
Returns
| Type | Description |
|---|---|
| Sequence[ToolOutput] | A sequence containing a `ToolOutput` for every tool call. |
Function resume
Generate a new ContextStreamResponse using this response's messages with additional user content.
Uses this response's tools and format type. Also uses this response's provider, model, client, and params, unless the model context manager is being used to provide a new LLM as an override.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
| ctx | Context[DepsT] | A Context with the required deps type. |
| content | UserContent | The new user message content to append to the message history. |
Returns
| Type | Description |
|---|---|
| ContextStreamResponse[DepsT] | ContextStreamResponse[DepsT, FormattableT] | A new `ContextStreamResponse` instance generated from the extended message history. |
Class ContextTool
Protocol defining a tool that can be used by LLMs.
A ContextTool represents a function that can be called by an LLM during a call.
It includes metadata like name, description, and parameter schema.
This class is not instantiated directly but created by the @tool() decorator.
Bases: ToolSchema[ContextToolFn[DepsT, AnyP, JsonableCovariantT]], Generic[DepsT, JsonableCovariantT, AnyP]
Function execute
Execute the context tool using an LLM-provided ToolCall.
Returns
| Type | Description |
|---|---|
| ToolOutput[JsonableCovariantT] | - |
Class ContextToolkit
A collection of ContextTools, with helpers for getting and executing specific tools.
Bases: BaseToolkit[Tool | ContextTool[DepsT]], Generic[DepsT]
Function execute
Execute a ContextTool using the provided tool call.
Parameters
Returns
| Type | Description |
|---|---|
| ToolOutput[Jsonable] | The output from executing the `ContextTool`. |
Class Document
Document content for a message.
Documents (like PDFs) can be included for the model to analyze or reference.
Attributes
| Name | Type | Description |
|---|---|---|
| type | Literal['document'] | - |
| source | Base64DocumentSource | TextDocumentSource | URLDocumentSource | - |
Function from_url
Create a Document from a URL.
Returns
| Type | Description |
|---|---|
| Document | - |
Function from_file
Create a Document from a file path.
Parameters
Returns
| Type | Description |
|---|---|
| Document | - |
Function from_bytes
Create a Document from raw bytes.
Parameters
Returns
| Type | Description |
|---|---|
| Document | - |
Class FeatureNotSupportedError
Raised if a Mirascope feature is unsupported by chosen provider.
If compatibility is model-specific, then model_id should be specified.
If the feature is not supported by the provider at all, then model_id may be None.
Bases: MirascopeError
Attributes
| Name | Type | Description |
|---|---|---|
| provider | Provider | - |
| model_id | ModelId | None | - |
| feature | str | - |
Class FinishReason
The reason why the LLM finished generating a response.
FinishReason is only set when the response did not have a normal finish (e.g. it
ran out of tokens). When a response finishes generating normally, no finish reason
is set.
Attributes
| Name | Type | Description |
|---|---|---|
| MAX_TOKENS | 'max_tokens' | - |
| REFUSAL | 'refusal' | - |
Class Format
Class representing a structured output format for LLM responses.
A Format contains metadata needed to describe a structured output type
to the LLM, including the expected schema. This class is not instantiated directly,
but is created by calling llm.format, or is automatically generated by LLM
providers when a Formattable is passed to a call method.
Example:
```python
from mirascope import llm


class Book:
    title: str
    author: str


print(llm.format(Book, mode="tool"))
```
Bases: Generic[FormattableT]
Attributes
| Name | Type | Description |
|---|---|---|
| name | str | The name of the response format. |
| description | str | None | A description of the response format, if available. |
| schema | dict[str, object] | JSON schema representation of the structured output format. |
| mode | FormattingMode | The decorator-provided mode of the response format. Determines how the LLM call may be modified in order to extract the expected format. |
| formatting_instructions | str | None | The formatting instructions that will be added to the LLM system prompt. If the format type has a `formatting_instructions` class method, the output of that call will be used for instructions. Otherwise, instructions may be auto-generated based on the formatting mode. |
| formattable | type[FormattableT] | The `Formattable` type that this `Format` describes. While the `FormattbleT` typevar allows for `None`, a `Format` will never be constructed when the `FormattableT` is `None`, so you may treat this as a `RequiredFormattableT` in practice. |
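Example (sketch): requesting structured output by passing a format to a call. This follows the `Book` example above and the documented `format=` parameter on call methods; how the parsed value is ultimately accessed on the response is not shown, since this reference does not document that accessor.

```python
from mirascope import llm


class Book:
    title: str
    author: str


model = llm.use_model(provider="openai", model_id="gpt-4o-mini")

# Pass the Formattable type directly (a Format is generated for you)...
response = model.call(
    messages=[llm.messages.user("Recommend a fantasy book.")],
    format=Book,
)

# ...or build an explicit Format to pin the formatting mode.
book_format = llm.format(Book, mode="json")
response = model.call(
    messages=[llm.messages.user("Recommend a sci-fi book.")],
    format=book_format,
)
print(book_format.schema, book_format.mode)
```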
Attribute FormattingMode
Type: Literal['strict', 'json', 'tool']
Available modes for response format generation.
- "strict": Use strict mode for structured outputs, asking the LLM to strictly adhere to a given JSON schema. Not all providers or models support it, and it may not be compatible with tool calling. When making a call using this mode, an llm.FormattingModeNotSupportedError may be raised (if "strict" mode is wholly unsupported), or an llm.FeatureNotSupportedError may be raised (if trying to use strict along with tools and that is unsupported).
- "json": Use JSON mode for structured outputs. In contrast to strict mode, we ask the LLM to output JSON as text, though without guarantees that the model will output the expected format schema. If the provider has an explicit JSON mode, it will be used; otherwise, Mirascope will modify the system prompt to request JSON output. May raise an llm.FeatureNotSupportedError if tools are present and the model does not support tool calling when using JSON mode.
- "tool": Use forced tool calling to structure outputs. Mirascope will construct an ad-hoc tool with the required JSON schema as tool args. When the LLM chooses that tool, it will automatically be converted from a ToolCall into regular response content (abstracting over the tool call). If other tools are present, they will be handled as regular tool calls.
Note: When llm.format is not used, the provider will automatically choose a mode at call time.
Class FormattingModeNotSupportedError
Raised when trying to use a formatting mode that is not supported by the chosen model.
Bases: FeatureNotSupportedError
Attributes
| Name | Type | Description |
|---|---|---|
| formatting_mode | FormattingMode | - |
Class Image
Image content for a message.
Images can be included in messages to provide visual context. This can be used for both input (e.g., user uploading an image) and output (e.g., model generating an image).
Attributes
| Name | Type | Description |
|---|---|---|
| type | Literal['image'] | - |
| source | Base64ImageSource | URLImageSource | - |
Function from_url
Create an Image reference from a URL, without downloading it.
Parameters
| Name | Type | Description |
|---|---|---|
| cls | Any | - |
| url | str | The URL of the image |
Returns
| Type | Description |
|---|---|
| Image | An `Image` with a `URLImageSource` |
Function download
Download and encode an image from a URL.
Parameters
Returns
| Type | Description |
|---|---|
| Image | An `Image` with a `Base64ImageSource` |
Function download_async
Asynchronously download and encode an image from a URL.
Parameters
Returns
| Type | Description |
|---|---|
| Image | An `Image` with a `Base64ImageSource` |
Function from_file
Create an Image from a file path.
Parameters
Returns
| Type | Description |
|---|---|
| Image | - |
Function from_bytes
Create an Image from raw bytes.
Parameters
Returns
| Type | Description |
|---|---|
| Image | - |
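Example (sketch): attaching an image to a user message. The constructors follow the documented from_url/from_file methods; that `llm.messages.user` accepts a sequence of mixed content parts (text plus images) is an assumption of this sketch.

```python
from mirascope import llm

# Reference an image by URL without downloading it...
remote_image = llm.Image.from_url("https://example.com/photo.png")

# ...or load and base64-encode one from disk.
local_image = llm.Image.from_file("photo.png")

model = llm.use_model(provider="openai", model_id="gpt-4o-mini")
# Assumption: user() accepts mixed content parts, not just a single string.
message = llm.messages.user(["What is in this picture?", remote_image])
response = model.call(messages=[message])
print(response)
```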
Attribute Message
Type: TypeAlias
A message in an LLM interaction.
Messages have a role (system, user, or assistant) and content that is a sequence of content parts. The content can include text, images, audio, documents, and tool interactions.
For most use cases, prefer the convenience functions system(), user(), and
assistant() instead of directly creating Message objects.
Example:
```python
from mirascope import llm

messages = [
    llm.messages.system("You are a helpful assistant."),
    llm.messages.user("Hello, how are you?"),
]
```
Class MirascopeError
Base exception for all Mirascope errors.
Bases: Exception
Attributes
| Name | Type | Description |
|---|---|---|
| original_exception | Exception | None | - |
Class Model
The unified LLM interface that delegates to provider-specific clients.
This class provides a consistent interface for interacting with language models from various providers. It handles the common operations like generating responses, streaming, and async variants by delegating to the appropriate client methods.
Usage Note: In most cases, you should use llm.use_model() instead of instantiating
Model directly. This preserves the ability to override the model at runtime using
the llm.model() context manager. Only instantiate Model directly if you want to
hardcode a specific model and prevent it from being overridden by context.
Example (recommended - allows override):
```python
from mirascope import llm


def recommend_book(genre: str) -> llm.Response:
    # Uses context model if available, otherwise creates default
    model = llm.use_model(provider="openai", model_id="gpt-4o-mini")
    message = llm.messages.user(f"Please recommend a book in {genre}.")
    return model.call(messages=[message])


# Uses default model
response = recommend_book("fantasy")

# Override with different model
with llm.model(provider="anthropic", model_id="claude-sonnet-4-0"):
    response = recommend_book("fantasy")  # Uses Claude
```
Example (direct instantiation - prevents override):
```python
from mirascope import llm


def recommend_book(genre: str) -> llm.Response:
    # Hardcoded model, cannot be overridden by context
    model = llm.Model(provider="openai", model_id="gpt-4o-mini")
    message = llm.messages.user(f"Please recommend a book in {genre}.")
    return model.call(messages=[message])
```
Attributes
| Name | Type | Description |
|---|---|---|
| provider | Provider | The provider being used (e.g. `openai`). |
| model_id | ModelId | The model being used (e.g. `gpt-4o-mini`). |
| params | Params | The default parameters for the model (temperature, max_tokens, etc.). |
Function call
Generate an llm.Response by synchronously calling this model's LLM provider.
Parameters
Returns
| Type | Description |
|---|---|
| Response | Response[FormattableT] | An `llm.Response` object containing the LLM-generated content. |
Function call_async
Generate an llm.AsyncResponse by asynchronously calling this model's LLM provider.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
| messages | Sequence[Message] | Messages to send to the LLM. |
| tools= None | Sequence[AsyncTool] | AsyncToolkit | None | Optional tools that the model may invoke. |
| format= None | type[FormattableT] | Format[FormattableT] | None | Optional response format specifier. |
Returns
| Type | Description |
|---|---|
| AsyncResponse | AsyncResponse[FormattableT] | An `llm.AsyncResponse` object containing the LLM-generated content. |
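Example (sketch): a basic asynchronous call, using the same provider/model identifiers as the `Model` examples above.

```python
import asyncio

from mirascope import llm


async def main() -> None:
    model = llm.use_model(provider="anthropic", model_id="claude-sonnet-4-0")
    response = await model.call_async(
        messages=[llm.messages.user("Summarize the plot of Dune in one sentence.")]
    )
    print(response)


asyncio.run(main())
```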
Function stream
Generate an llm.StreamResponse by synchronously streaming from this model's LLM provider.
Parameters
Returns
| Type | Description |
|---|---|
| StreamResponse | StreamResponse[FormattableT] | An `llm.StreamResponse` object for iterating over the LLM-generated content. |
Function stream_async
Generate an llm.AsyncStreamResponse by asynchronously streaming from this model's LLM provider.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
| messages | list[Message] | Messages to send to the LLM. |
| tools= None | Sequence[AsyncTool] | AsyncToolkit | None | Optional tools that the model may invoke. |
| format= None | type[FormattableT] | Format[FormattableT] | None | Optional response format specifier. |
Returns
| Type | Description |
|---|---|
| AsyncStreamResponse | AsyncStreamResponse[FormattableT] | An `llm.AsyncStreamResponse` object for asynchronously iterating over the LLM-generated content. |
Function context_call
Generate an llm.ContextResponse by synchronously calling this model's LLM provider.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
| ctx | Context[DepsT] | Context object with dependencies for tools. |
| messages | Sequence[Message] | Messages to send to the LLM. |
| tools= None | Sequence[Tool | ContextTool[DepsT]] | ContextToolkit[DepsT] | None | Optional tools that the model may invoke. |
| format= None | type[FormattableT] | Format[FormattableT] | None | Optional response format specifier. |
Returns
| Type | Description |
|---|---|
| ContextResponse[DepsT, None] | ContextResponse[DepsT, FormattableT] | An `llm.ContextResponse` object containing the LLM-generated content. |
Function context_call_async
Generate an llm.AsyncContextResponse by asynchronously calling this model's LLM provider.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
| ctx | Context[DepsT] | Context object with dependencies for tools. |
| messages | Sequence[Message] | Messages to send to the LLM. |
| tools= None | Sequence[AsyncTool | AsyncContextTool[DepsT]] | AsyncContextToolkit[DepsT] | None | Optional tools that the model may invoke. |
| format= None | type[FormattableT] | Format[FormattableT] | None | Optional response format specifier. |
Returns
| Type | Description |
|---|---|
| AsyncContextResponse[DepsT, None] | AsyncContextResponse[DepsT, FormattableT] | An `llm.AsyncContextResponse` object containing the LLM-generated content. |
Function context_stream
Generate an llm.ContextStreamResponse by synchronously streaming from this model's LLM provider.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
| ctx | Context[DepsT] | Context object with dependencies for tools. |
| messages | Sequence[Message] | Messages to send to the LLM. |
| tools= None | Sequence[Tool | ContextTool[DepsT]] | ContextToolkit[DepsT] | None | Optional tools that the model may invoke. |
| format= None | type[FormattableT] | Format[FormattableT] | None | Optional response format specifier. |
Returns
| Type | Description |
|---|---|
| ContextStreamResponse[DepsT, None] | ContextStreamResponse[DepsT, FormattableT] | An `llm.ContextStreamResponse` object for iterating over the LLM-generated content. |
Function context_stream_async
Generate an llm.AsyncContextStreamResponse by asynchronously streaming from this model's LLM provider.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
| ctx | Context[DepsT] | Context object with dependencies for tools. |
| messages | list[Message] | Messages to send to the LLM. |
| tools= None | Sequence[AsyncTool | AsyncContextTool[DepsT]] | AsyncContextToolkit[DepsT] | None | Optional tools that the model may invoke. |
| format= None | type[FormattableT] | Format[FormattableT] | None | Optional response format specifier. |
Returns
| Type | Description |
|---|---|
| AsyncContextStreamResponse[DepsT, None] | AsyncContextStreamResponse[DepsT, FormattableT] | An `llm.AsyncContextStreamResponse` object for asynchronously iterating over the LLM-generated content. |
Function resume
Generate a new llm.Response by extending another response's messages with additional user content.
Uses the previous response's tools and output format, and this model's params.
Depending on the client, this may be a wrapper around using client call methods with the response's messages and the new content, or it may use a provider-specific API for resuming an existing interaction.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
| response | Response | Response[FormattableT] | Previous response to extend. |
| content | UserContent | Additional user content to append. |
Returns
| Type | Description |
|---|---|
| Response | Response[FormattableT] | A new `llm.Response` object containing the extended conversation. |
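Example (sketch): continuing a conversation with model.resume, reusing the previous response's tools and output format as documented above.

```python
from mirascope import llm

model = llm.use_model(provider="openai", model_id="gpt-4o-mini")

response = model.call(messages=[llm.messages.user("Recommend a fantasy book.")])
# Extend the same conversation with new user content.
follow_up = model.resume(response, "Why did you pick that one?")
print(follow_up)
```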
Function resume_async
Generate a new llm.AsyncResponse by extending another response's messages with additional user content.
Uses the previous response's tools and output format, and this model's params.
Depending on the client, this may be a wrapper around using client call methods with the response's messages and the new content, or it may use a provider-specific API for resuming an existing interaction.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
| response | AsyncResponse | AsyncResponse[FormattableT] | Previous async response to extend. |
| content | UserContent | Additional user content to append. |
Returns
| Type | Description |
|---|---|
| AsyncResponse | AsyncResponse[FormattableT] | A new `llm.AsyncResponse` object containing the extended conversation. |
Function context_resume
Generate a new llm.ContextResponse by extending another response's messages with additional user content.
Uses the previous response's tools and output format, and this model's params.
Depending on the client, this may be a wrapper around using client call methods with the response's messages and the new content, or it may use a provider-specific API for resuming an existing interaction.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
| ctx | Context[DepsT] | Context object with dependencies for tools. |
| response | ContextResponse[DepsT, None] | ContextResponse[DepsT, FormattableT] | Previous context response to extend. |
| content | UserContent | Additional user content to append. |
Returns
| Type | Description |
|---|---|
| ContextResponse[DepsT, None] | ContextResponse[DepsT, FormattableT] | A new `llm.ContextResponse` object containing the extended conversation. |
Function context_resume_async
Generate a new llm.AsyncContextResponse by extending another response's messages with additional user content.
Uses the previous response's tools and output format, and this model's params.
Depending on the client, this may be a wrapper around using client call methods with the response's messages and the new content, or it may use a provider-specific API for resuming an existing interaction.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
| ctx | Context[DepsT] | Context object with dependencies for tools. |
| response | AsyncContextResponse[DepsT, None] | AsyncContextResponse[DepsT, FormattableT] | Previous async context response to extend. |
| content | UserContent | Additional user content to append. |
Returns
| Type | Description |
|---|---|
| AsyncContextResponse[DepsT, None] | AsyncContextResponse[DepsT, FormattableT] | A new `llm.AsyncContextResponse` object containing the extended conversation. |
Function resume_stream
Generate a new llm.StreamResponse by extending another response's messages with additional user content.
Uses the previous response's tools and output format, and this model's params.
Depending on the client, this may be a wrapper around using client call methods with the response's messages and the new content, or it may use a provider-specific API for resuming an existing interaction.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
| response | StreamResponse | StreamResponse[FormattableT] | Previous stream response to extend. |
| content | UserContent | Additional user content to append. |
Returns
| Type | Description |
|---|---|
| StreamResponse | StreamResponse[FormattableT] | A new `llm.StreamResponse` object for streaming the extended conversation. |
Function resume_stream_async
Generate a new llm.AsyncStreamResponse by extending another response's messages with additional user content.
Uses the previous response's tools and output format, and this model's params.
Depending on the client, this may be a wrapper around using client call methods with the response's messages and the new content, or it may use a provider-specific API for resuming an existing interaction.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
| response | AsyncStreamResponse | AsyncStreamResponse[FormattableT] | Previous async stream response to extend. |
| content | UserContent | Additional user content to append. |
Returns
| Type | Description |
|---|---|
| AsyncStreamResponse | AsyncStreamResponse[FormattableT] | A new `llm.AsyncStreamResponse` object for asynchronously streaming the extended conversation. |
Function context_resume_stream
Generate a new llm.ContextStreamResponse by extending another response's messages with additional user content.
Uses the previous response's tools and output format, and this model's params.
Depending on the client, this may be a wrapper around using client call methods with the response's messages and the new content, or it may use a provider-specific API for resuming an existing interaction.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
| ctx | Context[DepsT] | Context object with dependencies for tools. |
| response | ContextStreamResponse[DepsT, None] | ContextStreamResponse[DepsT, FormattableT] | Previous context stream response to extend. |
| content | UserContent | Additional user content to append. |
Returns
| Type | Description |
|---|---|
| ContextStreamResponse[DepsT, None] | ContextStreamResponse[DepsT, FormattableT] | A new `llm.ContextStreamResponse` object for streaming the extended conversation. |
Function context_resume_stream_async
Generate a new llm.AsyncContextStreamResponse by extending another response's messages with additional user content.
Uses the previous response's tools and output format, and this model's params.
Depending on the client, this may be a wrapper around using client call methods with the response's messages and the new content, or it may use a provider-specific API for resuming an existing interaction.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
| ctx | Context[DepsT] | Context object with dependencies for tools. |
| response | AsyncContextStreamResponse[DepsT, None] | AsyncContextStreamResponse[DepsT, FormattableT] | Previous async context stream response to extend. |
| content | UserContent | Additional user content to append. |
Returns
| Type | Description |
|---|---|
| AsyncContextStreamResponse[DepsT, None] | AsyncContextStreamResponse[DepsT, FormattableT] | A new `llm.AsyncContextStreamResponse` object for asynchronously streaming the extended conversation. |
Class NotFoundError
Raised when requested resource is not found (404).
Bases:
APIErrorClass Params
Common parameters shared across LLM providers.
Note: Each provider may handle these parameters differently or not support them at all. Please check provider-specific documentation for parameter support and behavior.
Bases: TypedDict
Attributes
| Name | Type | Description |
|---|---|---|
| temperature | float | Controls randomness in the output (0.0 to 1.0). Lower temperatures are good for prompts that require a less open-ended or creative response, while higher temperatures can lead to more diverse or creative results. |
| max_tokens | int | Maximum number of tokens to generate. |
| top_p | float | Nucleus sampling parameter (0.0 to 1.0). Tokens are selected from the most to least probable until the sum of their probabilities equals this value. Use a lower value for less random responses and a higher value for more random responses. |
| top_k | int | Limits token selection to the k most probable tokens (typically 1 to 100). For each token selection step, the ``top_k`` tokens with the highest probabilities are sampled. Then tokens are further filtered based on ``top_p`` with the final token selected using temperature sampling. Use a lower number for less random responses and a higher number for more random responses. |
| seed | int | Random seed for reproducibility. When ``seed`` is fixed to a specific number, the model makes a best effort to provide the same response for repeated requests. Not supported by all providers, and does not guarantee strict reproducibility. |
| stop_sequences | list[str] | Stop sequences to end generation. The model will stop generating text if one of these strings is encountered in the response. |
| thinking | bool | Configures whether the model should use thinking. Thinking is a process where the model spends additional tokens thinking about the prompt before generating a response. You may configure thinking either by passing a bool to enable or disable it. If `params.thinking` is `True`, then thinking and thought summaries will be enabled (if supported by the model/provider), with a default budget for thinking tokens. If `params.thinking` is `False`, then thinking will be wholly disabled, assuming the model allows this (some models, e.g. `google:gemini-2.5-pro`, do not allow disabling thinking). If `params.thinking` is unset (or `None`), then we will use provider-specific default behavior for the chosen model. |
| encode_thoughts_as_text | bool | Configures whether `Thought` content should be re-encoded as text for model consumption. If `True`, then when an `AssistantMessage` contains `Thoughts` and is being passed back to an LLM, those `Thoughts` will be encoded as `Text`, so that the assistant can read those thoughts. That ensures the assistant has access to (at least the summarized output of) its reasoning process, and contrasts with provider default behaviors which may ignore prior thoughts, particularly if tool calls are not involved. When `True`, we will always re-encode Mirascope messages being passed to the provider, rather than reusing raw provider response content. This may disable provider-specific behavior like cached reasoning tokens. If `False`, then `Thoughts` will not be encoded as text, and whether reasoning context is available to the model depends entirely on the provider's behavior. Defaults to `False` if unset. |
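Example (sketch): constructing Params. Params is a TypedDict, so keyword construction works; that `Model` accepts a `params` argument at construction is an assumption based on its documented `params` attribute.

```python
from mirascope import llm

# Provider-agnostic settings; providers may ignore settings they do not support.
params = llm.Params(
    temperature=0.3,
    max_tokens=512,
    stop_sequences=["\n\n"],
    thinking=False,
)

# Assumption: Model accepts default params at construction, matching its
# documented `params` attribute.
model = llm.Model(provider="openai", model_id="gpt-4o-mini", params=params)
response = model.call(messages=[llm.messages.user("Give me a haiku about rain.")])
```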
Class Partial
Generate a new class with all attributes optional.
Bases: Generic[FormattableT]
Class PermissionError
Raised for permission/authorization failures (403).
Bases: APIError
Class RateLimitError
Raised when rate limits are exceeded (429).
Bases: APIError
Class RawMessageChunk
A chunk containing provider-specific raw message content that will be added to the AssistantMessage.
This chunk contains a provider-specific representation of a piece of content that
will be added to the AssistantMessage reconstructed by the containing stream.
This content should be a Jsonable Python object for serialization purposes.
The intention is that this content may be passed as-is back to the provider when the
generated AssistantMessage is being reused in conversation.
Attributes
| Name | Type | Description |
|---|---|---|
| type | Literal['raw_message_chunk'] | - |
| raw_message | Jsonable | The provider-specific raw content. Should be a Jsonable object. |
Class Response
The response generated by an LLM.
Bases: BaseResponse[Toolkit, FormattableT]
Function execute_tools
Execute and return all of the tool calls in the response.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
Returns
| Type | Description |
|---|---|
| Sequence[ToolOutput] | A sequence containing a `ToolOutput` for every tool call in the order they appeared. |
Function resume
Generate a new Response using this response's messages with additional user content.
Uses this response's tools and format type. Also uses this response's provider, model, client, and params, unless the model context manager is being used to provide a new LLM as an override.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
| content | UserContent | The new user message content to append to the message history. |
Returns
| Type | Description |
|---|---|
| Response | Response[FormattableT] | A new `Response` instance generated from the extended message history. |
Class ServerError
Raised for server-side errors (500+).
Bases: APIError
Attribute Stream
Type: TypeAlias
A synchronous assistant content stream.
Class StreamResponse
A StreamResponse wraps response content from the LLM with a streaming interface.
This class supports iteration to process chunks as they arrive from the model.
Content can be streamed in one of four ways:
- Via .streams(), which provides an iterator of streams, where each stream contains chunks of streamed data. The chunks contain deltas (new content in that particular chunk), and the stream itself accumulates the collected state of all the chunks processed thus far.
- Via .chunk_stream(), which allows iterating over Mirascope's provider-agnostic chunk representation.
- Via .pretty_stream(), a helper method which provides all response content as str deltas. Iterating through pretty_stream will yield text content and optionally placeholder representations for other content types, but it will still consume the full stream.
- Via .structured_stream(), a helper method which provides partial structured outputs from a response (useful when FormatT is set). Iterating through structured_stream will only yield structured partials, but it will still consume the full stream.
As chunks are consumed, they are collected in-memory on the StreamResponse, and they
become available in .content, .messages, .tool_calls, etc. All of the stream
iterators can be restarted after the stream has been consumed, in which case they
will yield chunks from memory in the original sequence that came from the LLM. If
the stream is only partially consumed, a fresh iterator will first iterate through
in-memory content, and then will continue consuming fresh chunks from the LLM.
In the specific case of text chunks, they are included in the response content as soon
as they become available, via an llm.Text part that updates as more deltas come in.
This enables the behavior where resuming a partially-streamed response will include
as much text as the model generated.
For other chunks, like Thinking or ToolCall, they are only added to response
content once the corresponding part has fully streamed. This avoids issues like
adding incomplete tool calls, or thinking blocks missing signatures, to the response.
For each iterator, fully iterating through the iterator will consume the whole LLM stream. You can pause stream execution midway by breaking out of the iterator, and you can safely resume execution from the same iterator if desired.
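Example (sketch): synchronous streaming with partial structured outputs. That `stream` accepts a `format=` argument is implied by its `StreamResponse[FormattableT]` return type but is not spelled out in the parameter table above, so treat it as an assumption of this sketch.

```python
from mirascope import llm


class Book:
    title: str
    author: str


model = llm.use_model(provider="openai", model_id="gpt-4o-mini")
stream_response = model.stream(
    messages=[llm.messages.user("Recommend a fantasy book.")],
    format=Book,
)

# structured_stream yields partial structured outputs as the JSON arrives;
# iterating it still consumes the full stream.
for partial_book in stream_response.structured_stream():
    print(partial_book)

# Once consumed, the accumulated content and messages are available in memory.
print(stream_response.content)
```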
Bases: BaseSyncStreamResponse[Toolkit, FormattableT]
Function execute_tools
Execute and return all of the tool calls in the response.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
Returns
| Type | Description |
|---|---|
| Sequence[ToolOutput] | A sequence containing a `ToolOutput` for every tool call in the order they appeared. |
Function resume
Generate a new StreamResponse using this response's messages with additional user content.
Uses this response's tools and format type. Also uses this response's provider, model, client, and params, unless the model context manager is being used to provide a new LLM as an override.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
| content | UserContent | The new user message content to append to the message history. |
Returns
| Type | Description |
|---|---|
| StreamResponse | StreamResponse[FormattableT] | A new `StreamResponse` instance generated from the extended message history. |
Attribute StreamResponseChunk
Type: TypeAlias
Attribute SystemContent
Type: TypeAlias
Type alias for content that can fit into a SystemMessage.
Class SystemMessage
A system message that sets context and instructions for the conversation.
Attributes
| Name | Type | Description |
|---|---|---|
| role | Literal['system'] | The role of this message. Always "system". |
| content | Text | The content of this `SystemMessage`. |
Class Text
Text content for a message.
Attributes
| Name | Type | Description |
|---|---|---|
| type | Literal['text'] | - |
| text | str | The text content. |
Class TextChunk
Represents an incremental text chunk in a stream.
Attributes
| Name | Type | Description |
|---|---|---|
| type | Literal['text_chunk'] | - |
| content_type | Literal['text'] | The type of content reconstructed by this chunk. |
| delta | str | The incremental text added in this chunk. |
Class TextEndChunk
Represents the end of a text chunk stream.
Attributes
| Name | Type | Description |
|---|---|---|
| type | Literal['text_end_chunk'] | - |
| content_type | Literal['text'] | The type of content reconstructed by this chunk. |
Class TextStartChunk
Represents the start of a text chunk stream.
Attributes
| Name | Type | Description |
|---|---|---|
| type | Literal['text_start_chunk'] | - |
| content_type | Literal['text'] | The type of content reconstructed by this chunk. |
Class TextStream
Synchronous text stream implementation.
Bases: BaseStream[Text, str]
Attributes
| Name | Type | Description |
|---|---|---|
| type | Literal['text_stream'] | - |
| content_type | Literal['text'] | The type of content stored in this stream. |
| partial_text | str | The accumulated text content as chunks are received. |
Function collect
Collect all chunks and return the final Text content.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
Returns
| Type | Description |
|---|---|
| Text | The complete text content after consuming all chunks. |
Class Thought
Thinking content for a message.
Represents the thinking or thought process of the assistant. These generally are summaries of the model's reasoning process, rather than the direct reasoning tokens, although this behavior is model and provider specific.
Attributes
| Name | Type | Description |
|---|---|---|
| type | Literal['thought'] | - |
| thought | str | The thoughts or reasoning of the assistant. |
Class ThoughtChunk
Represents an incremental thought chunk in a stream.
Attributes
| Name | Type | Description |
|---|---|---|
| type | Literal['thought_chunk'] | - |
| content_type | Literal['thought'] | The type of content reconstructed by this chunk. |
| delta | str | The incremental thoughts added in this chunk. |
Class ThoughtEndChunk
Represents the end of a thought chunk stream.
Attributes
| Name | Type | Description |
|---|---|---|
| type | Literal['thought_end_chunk'] | - |
| content_type | Literal['thought'] | The type of content reconstructed by this chunk. |
Class ThoughtStartChunk
Represents the start of a thought chunk stream.
Attributes
| Name | Type | Description |
|---|---|---|
| type | Literal['thought_start_chunk'] | - |
| content_type | Literal['thought'] | The type of content reconstructed by this chunk. |
Class ThoughtStream
Synchronous thought stream implementation.
Bases: BaseStream[Thought, str]
Attributes
| Name | Type | Description |
|---|---|---|
| type | Literal['thought_stream'] | - |
| content_type | Literal['thought'] | The type of content stored in this stream. |
| partial_thought | str | The accumulated thought content as chunks are received. |
Function collect
Collect all chunks and return the final Thought content.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
Returns
| Type | Description |
|---|---|
| Thought | The complete thought content after consuming all chunks. |
Class TimeoutError
Raised when requests timeout or deadline exceeded.
Bases:
MirascopeError
Class Tool
A tool that can be used by LLMs.
A Tool represents a function that can be called by an LLM during a call.
It includes metadata like name, description, and parameter schema.
This class is not instantiated directly but created by the @tool() decorator.
Bases: ToolSchema[ToolFn[AnyP, JsonableCovariantT]], Generic[AnyP, JsonableCovariantT]
Function execute
Execute the tool using an LLM-provided ToolCall.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
| tool_call | ToolCall | - |
Returns
| Type | Description |
|---|---|
| ToolOutput[JsonableCovariantT] | - |
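Example: a minimal sketch of executing a decorated tool against a hand-built `ToolCall`. The manually constructed `ToolCall` is for illustration only (field names follow the `ToolCall` attributes documented below); in practice these are generated by the LLM.
```python
from mirascope import llm


@llm.tool
def add(a: int, b: int) -> int:
    """Adds two integers."""
    return a + b


# Hypothetical hand-built ToolCall for illustration; in practice the LLM
# generates these as part of an assistant message.
call = llm.ToolCall(id="call_1", name="add", args='{"a": 2, "b": 3}')
output = add.execute(call)  # returns a ToolOutput
print(output.value)         # 5
```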
Class ToolCall
Tool call content for a message.
Represents a request from the assistant to call a tool. This is part of an assistant message's content.
Attributes
| Name | Type | Description |
|---|---|---|
| type | Literal['tool_call'] | - |
| id | str | A unique identifier for this tool call. |
| name | str | The name of the tool to call. |
| args | str | The arguments to pass to the tool, stored as stringified json. |
Class ToolCallChunk
Represents an incremental tool call chunk in a stream.
Attributes
| Name | Type | Description |
|---|---|---|
| type | Literal['tool_call_chunk'] | - |
| content_type | Literal['tool_call'] | The type of content reconstructed by this chunk. |
| delta | str | The incremental json args added in this chunk. |
Class ToolCallEndChunk
Represents the end of a tool call chunk stream.
Attributes
| Name | Type | Description |
|---|---|---|
| type | Literal['tool_call_end_chunk'] | - |
| content_type | Literal['tool_call'] | The type of content reconstructed by this chunk. |
Class ToolCallStartChunk
Represents the start of a tool call chunk stream.
Attributes
| Name | Type | Description |
|---|---|---|
| type | Literal['tool_call_start_chunk'] | - |
| content_type | Literal['tool_call'] | The type of content reconstructed by this chunk. |
| id | str | A unique identifier for this tool call. |
| name | str | The name of the tool to call. |
Class ToolCallStream
Synchronous tool call stream implementation.
Bases:
BaseStream[ToolCall, str]
Attributes
| Name | Type | Description |
|---|---|---|
| type | Literal['tool_call_stream'] | - |
| content_type | Literal['tool_call'] | The type of content stored in this stream. |
| tool_id | str | A unique identifier for this tool call. |
| tool_name | str | The name of the tool being called. |
| partial_args | str | The accumulated tool arguments as chunks are received. |
Function collect
Collect all chunks and return the final ToolCall content.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
Returns
| Type | Description |
|---|---|
| ToolCall | The complete tool call after consuming all chunks. |
Class ToolNotFoundError
Raised if a tool_call cannot be converted to any corresponding tool.
Bases:
MirascopeError
Class ToolOutput
Tool output content for a message.
Represents the output from a tool call. This is part of a user message's content, typically following a tool call from the assistant.
Bases:
Generic[JsonableT]
Attributes
| Name | Type | Description |
|---|---|---|
| type | Literal['tool_output'] | - |
| id | str | The ID of the tool call that this output is for. |
| name | str | The name of the tool that created this output. |
| value | JsonableT | The output value from the tool call. |
Class Toolkit
A collection of Tools, with helpers for getting and executing specific tools.
Bases:
BaseToolkit[Tool]
Function execute
Execute a Tool using the provided tool call.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
| tool_call | ToolCall | The tool call to execute. |
Returns
| Type | Description |
|---|---|
| ToolOutput[Jsonable] | The output from executing the `Tool`. |
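Example: a minimal sketch of dispatching a tool call through a `Toolkit`. How the toolkit and tool call are obtained depends on the calling API and is assumed here.
```python
from mirascope import llm


def run_tool_call(toolkit: llm.Toolkit, tool_call: llm.ToolCall) -> llm.ToolOutput:
    """Dispatch an LLM-generated ToolCall to its matching Tool (sketch)."""
    # If no tool in the toolkit corresponds to the call, the ToolNotFoundError
    # documented above is presumably the failure mode.
    return toolkit.execute(tool_call)
```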
Class URLImageSource
Image data referenced via external URL.
Attributes
| Name | Type | Description |
|---|---|---|
| type | Literal['url_image_source'] | - |
| url | str | The url of the image (e.g. https://example.com/sazed.png). |
Attribute UserContent
Type: TypeAlias
Type alias for content that can fit into a UserMessage.
Attribute UserContentPart
Type: TypeAlias
Content parts that can be included in a UserMessage.
Class UserMessage
A user message containing input from the user.
Attributes
| Name | Type | Description |
|---|---|---|
| role | Literal['user'] | The role of this message. Always "user". |
| content | Sequence[UserContentPart] | The content of the user message. |
| name | str | None | A name identifying the creator of this message. |
Function call
Returns a decorator for turning prompt template functions into generations.
This decorator creates a Call or ContextCall that can be used with prompt functions.
If the first parameter is typed as llm.Context[T], it creates a ContextCall.
Otherwise, it creates a regular Call.
Example:
Regular call:
from mirascope import llm
@llm.call(
provider="openai:completions",
model_id="gpt-4o-mini",
)
def answer_question(question: str) -> str:
return f"Answer this question: {question}"
response: llm.Response = answer_question("What is the capital of France?")
print(response)
Example:
Context call:
from dataclasses import dataclass
from mirascope import llm
@dataclass
class Personality:
vibe: str
@llm.call(
provider="openai:completions",
model_id="gpt-4o-mini",
)
def answer_question(ctx: llm.Context[Personality], question: str) -> str:
return f"Your vibe is {ctx.deps.vibe}. Answer this question: {question}"
ctx = llm.Context(deps=Personality(vibe="snarky"))
response = answer_question(ctx, "What is the capital of France?")
print(response)
Parameters
Returns
| Type | Description |
|---|---|
| CallDecorator[ToolT, FormattableT] | - |
Module calls
The llm.calls module.
Function client
Create a cached client instance for the specified provider.
Parameters
Returns
| Type | Description |
|---|---|
| AnthropicClient | GoogleClient | OpenAICompletionsClient | OpenAIResponsesClient | A cached client instance for the specified provider with the given parameters. |
Module clients
Client interfaces for LLM providers.
Module content
The llm.messages.content module.
Module exceptions
Mirascope exception hierarchy for unified error handling across providers.
Function format
Returns a Format that describes structured output for a Formattable type.
This function converts a Formattable type (e.g. Pydantic BaseModel) into a Format
object that describes how the object should be formatted. Calling llm.format
is optional, as all the APIs that expect a Format can also take the Formattable
type directly. However, calling llm.format is necessary in order to specify the
formatting mode that will be used.
The Formattable type may provide custom formatting instructions via a
formatting_instructions(cls) classmethod. If that method is present, it will be called,
and the resulting instructions will automatically be appended to the system prompt.
If no formatting instructions are present, then Mirascope may auto-generate instructions
based on the active format mode. To disable this behavior and all prompt modification,
you can add the formatting_instructions classmethod and have it return None.
Parameters
| Name | Type | Description |
|---|---|---|
| formattable | type[FormattableT] | None | - |
| mode | FormattingMode | The format mode to use, one of the following: - "strict": Use model strict structured outputs, or fail if unavailable. - "tool": Use forced tool calling with a special tool that represents a formatted response. - "json": Use provider json mode if available, or modify prompt to request json if not. |
Returns
| Type | Description |
|---|---|
| Format[FormattableT] | None | A `Format` object describing the Formattable type. |
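Example: a minimal sketch of building a `Format` for a Pydantic model, including the optional `formatting_instructions` classmethod described above. How the resulting `Format` is passed into a call is not shown here.
```python
from pydantic import BaseModel

from mirascope import llm


class Book(BaseModel):
    title: str
    author: str

    @classmethod
    def formatting_instructions(cls) -> str | None:
        # These instructions are appended to the system prompt; returning None
        # instead disables auto-generated instructions and prompt modification.
        return "Reply with JSON containing `title` and `author`."


# Build a Format that requests strict structured outputs for Book.
book_format = llm.format(Book, mode="strict")
```
Any API that accepts a `Format` can also take `Book` directly; calling `llm.format` explicitly is only needed to choose the formatting mode.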
Module formatting
Response formatting interfaces for structuring LLM outputs.
This module provides a way to define structured output formats for LLM responses.
The @format decorator can be applied to classes to specify how LLM
outputs should be structured and parsed.
Function get_client
Get a client instance for the specified provider.
Multiple calls to get_client will return the same Client rather than constructing new ones.
Parameters
| Name | Type | Description |
|---|---|---|
| provider | Provider | The provider name ("openai:completions", "anthropic", or "google"). |
Returns
| Type | Description |
|---|---|
| AnthropicClient | GoogleClient | OpenAICompletionsClient | OpenAIResponsesClient | A client instance for the specified provider. The specific client type depends on the provider. |
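Example: a minimal sketch showing that repeated calls return the cached client.
```python
from mirascope import llm

client = llm.get_client("anthropic")
# get_client caches clients, so a second call returns the same instance
# rather than constructing a new one.
assert llm.get_client("anthropic") is client
```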
Module mcp
MCP compatibility module.
Module messages
The messages module for LLM interactions.
This module defines the message types used in LLM interactions. Messages are represented
as a unified Message class with different roles (system, user, assistant) and flexible
content arrays that can include text, images, audio, documents, and tool interactions.
Function model
Set a model in context for the duration of the context manager.
This context manager sets a model that will be used by llm.use_model() calls
within the context. This allows you to override the default model at runtime.
Example:
import mirascope.llm as llm
def recommend_book(genre: str) -> llm.Response:
model = llm.use_model(provider="openai", model_id="gpt-4o-mini")
message = llm.messages.user(f"Please recommend a book in {genre}.")
return model.call(messages=[message])
# Override the default model at runtime
with llm.model(provider="anthropic", model_id="claude-sonnet-4-0"):
response = recommend_book("fantasy") # Uses Claude instead of GPT
Parameters
Returns
| Type | Description |
|---|---|
| Iterator[None] | - |
Module models
The llm.models module for implementing the Model interface and utilities.
This module provides a unified interface for interacting with different LLM models
through the Model class. The llm.model() context manager allows you to override
the model at runtime, and llm.use_model() retrieves the model from context or
creates a default one.
Function prompt
Prompt decorator for turning functions (or "Prompts") into prompts.
This decorator transforms a function into a Prompt, i.e. a function that
returns list[llm.Message]. Its behavior depends on whether it's called with a template string.
If the first parameter is named 'ctx' or typed as llm.Context[T], it creates
a ContextPrompt. Otherwise, it creates a regular Prompt.
With a template string, it returns a PromptTemplateDecorator, which uses the provided template to decorate a function with an empty body and uses the function's arguments for variable substitution in the template. The resulting PromptTemplate returns messages based on the template.
Without a template string, it returns a PromptFunctionalDecorator, which transforms a Prompt (a function returning either message content or messages) into a PromptTemplate. The resulting prompt template either promotes the content into a list containing a single user message or passes along the messages returned by the decorated function. A sketch of both forms follows the substitution rules below.
Template substitution rules:
- [USER], [ASSISTANT], [SYSTEM] demarcate the start of a new message with that role
- [MESSAGES] indicates the next variable contains a list of messages to include
- `{{ variable }}` injects the variable as a string, unless annotated
- Annotations: `{{ variable:annotation }}` where annotation is one of: image, images, audio, audios, document, documents
- Single content annotations (image, audio, document) expect a file path, URL, base64 string, or bytes, which becomes a content part with inferred mime-type
- Multiple content annotations (images, audios, documents) expect a list of strings or bytes, each becoming a content part with inferred mime-type
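Example: a minimal sketch of both decorator forms, using the `template` keyword parameter listed below; exact template whitespace handling is an assumption.
```python
from mirascope import llm


# Template form: the decorated function has an empty body; its arguments are
# substituted into the {{ ... }} placeholders.
@llm.prompt(
    template="[SYSTEM] You are a librarian.\n[USER] Recommend a {{ genre }} book."
)
def recommend_book_prompt(genre: str): ...


# Functional form: returned content is promoted into a single user message.
@llm.prompt
def answer_prompt(question: str) -> str:
    return f"Answer this question: {question}"


messages = recommend_book_prompt(genre="fantasy")  # -> list[llm.Message]
```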
Parameters
| Name | Type | Description |
|---|---|---|
| __fn= None | ContextPromptable[P, DepsT] | AsyncContextPromptable[P, DepsT] | Promptable[P] | AsyncPromptable[P] | None | - |
| template= None | str | None | A string template with placeholders using `{{ variable_name }}` and optional role markers like [SYSTEM], [USER], and [ASSISTANT]. |
Returns
| Type | Description |
|---|---|
| ContextPrompt[P, DepsT] | AsyncContextPrompt[P, DepsT] | Prompt[P] | AsyncPrompt[P] | PromptDecorator | PromptTemplateDecorator | A PromptTemplateDecorator or PromptFunctionalDecorator that converts the decorated function into a prompt. |
Module prompts
The prompt templates module for LLM interactions.
This module defines the prompt templates used in LLM interactions, which are written as Python functions.
Module responses
The Responses module for LLM responses.
Function tool
Decorator that turns a function into a tool definition.
This decorator creates a Tool or ContextTool that can be used with llm.call.
The function's name, docstring, and type hints are used to generate the
tool's metadata.
If the first parameter is named 'ctx' or typed as llm.Context[T], it creates
a ContextTool. Otherwise, it creates a regular Tool.
Examples:
Regular tool:
from mirascope import llm
@llm.tool
def available_books() -> list[str]:
"""Returns the list of available books."""
return ["The Name of the Wind"]
Context tool:
from dataclasses import dataclass
from mirascope import llm
@dataclass
class Library:
books: list[str]
library = Library(books=["Mistborn", "Gödel, Escher, Bach", "Dune"])
@llm.tool
def available_books(ctx: llm.Context[Library]) -> list[str]:
"""Returns the list of available books."""
return ctx.deps.books
Parameters
| Name | Type | Description |
|---|---|---|
| __fn= None | ContextToolFn[DepsT, P, JsonableCovariantT] | AsyncContextToolFn[DepsT, P, JsonableCovariantT] | ToolFn[P, JsonableCovariantT] | AsyncToolFn[P, JsonableCovariantT] | None | - |
| strict= False | bool | Whether the tool should use strict mode when supported by the model. |
Returns
| Type | Description |
|---|---|
| ContextTool[DepsT, JsonableCovariantT, P] | AsyncContextTool[DepsT, JsonableCovariantT, P] | Tool[P, JsonableCovariantT] | AsyncTool[P, JsonableCovariantT] | ToolDecorator | A decorator function that converts the function into a Tool or ContextTool. |
Module tools
The Tools module for LLMs.
Module types
Types for the LLM module.
Function use_model
Get the model from context if available, otherwise create a new Model.
This function checks if a model has been set in the context (via llm.model()
context manager). If a model is found in the context, it returns that model.
Otherwise, it creates and returns a new llm.Model instance with the provided
arguments as defaults.
This allows you to write functions that work with a default model but can be
overridden at runtime using the llm.model() context manager.
Example:
import mirascope.llm as llm
def recommend_book(genre: str) -> llm.Response:
model = llm.use_model(provider="openai", model_id="gpt-4o-mini")
message = llm.messages.user(f"Please recommend a book in {genre}.")
return model.call(messages=[message])
# Uses the default model (gpt-4o-mini)
response = recommend_book("fantasy")
# Override with a different model
with llm.model(provider="anthropic", model_id="claude-sonnet-4-0"):
response = recommend_book("fantasy") # Uses Claude instead
Parameters
Returns
| Type | Description |
|---|---|
| Model | An `llm.Model` instance from context or a new instance with the specified settings. |