calls
Class AsyncCall
A class for generating responses using LLMs asynchronously.
Bases: BaseCall[P, AsyncPrompt, AsyncToolkit, FormattableT], Generic[P, FormattableT]
Function call
Generates a response using the LLM asynchronously.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
| args= () | P.args | - |
| kwargs= {} | P.kwargs | - |
Returns
| Type | Description |
|---|---|
| AsyncResponse[FormattableT] | - |
Function stream
Generates a streaming response using the LLM asynchronously.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
| args= () | P.args | - |
| kwargs= {} | P.kwargs | - |
Returns
| Type | Description |
|---|---|
| AsyncStreamResponse[FormattableT] | - |
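For orientation, here is a minimal sketch of how an AsyncCall might be used, assuming (per the decorator examples at the end of this page) that applying `llm.call` to an `async def` prompt function yields an AsyncCall, and that the stream response supports `async for` iteration (both assumptions, not confirmed API):

```python
import asyncio

from mirascope import llm


@llm.call(
    provider="openai:completions",
    model_id="gpt-4o-mini",
)
async def answer_question(question: str) -> str:
    return f"Answer this question: {question}"


async def main() -> None:
    # `call` is invoked by calling the decorated function and awaiting it.
    response = await answer_question("What is the capital of France?")
    print(response)

    # `stream` returns an AsyncStreamResponse; the `async for` chunk
    # iteration shown here is an assumption.
    async for chunk in answer_question.stream("What is the capital of France?"):
        print(chunk, end="")


asyncio.run(main())
```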
Class AsyncContextCall
A class for generating responses using LLMs asynchronously with context dependencies.
Bases: BaseCall[P, AsyncContextPrompt, AsyncContextToolkit[DepsT], FormattableT], Generic[P, DepsT, FormattableT]
Function call
Generates a response using the LLM asynchronously.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
| ctx | Context[DepsT] | - |
| args= () | P.args | - |
| kwargs= {} | P.kwargs | - |
Returns
| Type | Description |
|---|---|
| AsyncContextResponse[DepsT, FormattableT] | - |
Function stream
Generates a streaming response using the LLM asynchronously.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
| ctx | Context[DepsT] | - |
| args= () | P.args | - |
| kwargs= {} | P.kwargs | - |
Returns
| Type | Description |
|---|---|
| AsyncContextStreamResponse[DepsT, FormattableT] | - |
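As a sketch, combining the context call example at the end of this page with an `async def` prompt function would presumably produce an AsyncContextCall; the shape of the call mirrors ContextCall, but the response must be awaited (this pairing is an assumption):

```python
import asyncio
from dataclasses import dataclass

from mirascope import llm


@dataclass
class Personality:
    vibe: str


@llm.call(
    provider="openai:completions",
    model_id="gpt-4o-mini",
)
async def answer_question(ctx: llm.Context[Personality], question: str) -> str:
    return f"Your vibe is {ctx.deps.vibe}. Answer this question: {question}"


async def main() -> None:
    ctx = llm.Context(deps=Personality(vibe="snarky"))
    # The context is passed as the first argument, followed by the
    # prompt arguments; the response is awaited.
    response = await answer_question(ctx, "What is the capital of France?")
    print(response)


asyncio.run(main())
```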
Class Call
A class for generating responses using LLMs.
Bases: BaseCall[P, Prompt, Toolkit, FormattableT], Generic[P, FormattableT]
Function call
Generates a response using the LLM.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
| args= () | P.args | - |
| kwargs= {} | P.kwargs | - |
Returns
| Type | Description |
|---|---|
| Response[FormattableT] | - |
Function stream
Generates a streaming response using the LLM.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
| args= () | P.args | - |
| kwargs= {} | P.kwargs | - |
Returns
| Type | Description |
|---|---|
| StreamResponse[FormattableT] | - |
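Regular `call` usage is shown in the decorator examples at the end of this page. As a hedged sketch of `stream`, assuming the StreamResponse can be consumed with a plain `for` loop over text chunks (an assumption):

```python
from mirascope import llm


@llm.call(
    provider="openai:completions",
    model_id="gpt-4o-mini",
)
def answer_question(question: str) -> str:
    return f"Answer this question: {question}"


# `stream` takes the same arguments as `call` but returns a
# StreamResponse; chunk-by-chunk iteration is assumed here.
for chunk in answer_question.stream("What is the capital of France?"):
    print(chunk, end="")
```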
Class CallDecorator
A decorator for converting prompts to calls.
Bases: Generic[ToolT, FormattableT]
Attributes
| Name | Type | Description |
|---|---|---|
| model | Model | - |
| tools | Sequence[ToolT] | None | - |
| format | type[FormattableT] | Format[FormattableT] | None | - |
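The attributes above suggest a CallDecorator can also carry tools and a response format. As a hypothetical sketch (the keyword names `tools` and `format` are inferred from the attribute table, and using a plain function as a tool and a Pydantic model as the formattable type are both assumptions):

```python
from pydantic import BaseModel

from mirascope import llm


class Answer(BaseModel):
    """Assumed formattable type for structured output."""

    city: str


def get_population(city: str) -> int:
    """Plain function assumed to be acceptable as a tool."""
    return 2_100_000 if city == "Paris" else 0  # dummy value for illustration


# Whether `llm.call` accepts `tools` and `format` keywords directly
# is an assumption based on the CallDecorator attributes above.
@llm.call(
    provider="openai:completions",
    model_id="gpt-4o-mini",
    tools=[get_population],
    format=Answer,
)
def answer_question(question: str) -> str:
    return f"Answer this question: {question}"
```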
Class ContextCall
A class for generating responses using LLMs with context dependencies.
Bases: BaseCall[P, ContextPrompt, ContextToolkit[DepsT], FormattableT], Generic[P, DepsT, FormattableT]
Function call
Generates a response using the LLM.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
| ctx | Context[DepsT] | - |
| args= () | P.args | - |
| kwargs= {} | P.kwargs | - |
Returns
| Type | Description |
|---|---|
| ContextResponse[DepsT, FormattableT] | - |
Function stream
Generates a streaming response using the LLM.
Parameters
| Name | Type | Description |
|---|---|---|
| self | Any | - |
| ctx | Context[DepsT] | - |
| args= () | P.args | - |
| kwargs= {} | P.kwargs | - |
Returns
| Type | Description |
|---|---|
| ContextStreamResponse[DepsT, FormattableT] | - |
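A sketch of streaming with context, assuming `stream` takes the context as its first argument like `call` and that the ContextStreamResponse is iterable over chunks (both assumptions):

```python
from dataclasses import dataclass

from mirascope import llm


@dataclass
class Personality:
    vibe: str


@llm.call(
    provider="openai:completions",
    model_id="gpt-4o-mini",
)
def answer_question(ctx: llm.Context[Personality], question: str) -> str:
    return f"Your vibe is {ctx.deps.vibe}. Answer this question: {question}"


ctx = llm.Context(deps=Personality(vibe="snarky"))
# Context first, then the prompt arguments; chunk iteration is assumed.
for chunk in answer_question.stream(ctx, "What is the capital of France?"):
    print(chunk, end="")
```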
Function call
Returns a decorator for turning prompt template functions into generations.
Applying the decorator to a prompt function produces a Call or a ContextCall: if the function's first parameter is typed as llm.Context[T], it creates a ContextCall; otherwise, it creates a regular Call.
Example:
Regular call:

```python
from mirascope import llm


@llm.call(
    provider="openai:completions",
    model_id="gpt-4o-mini",
)
def answer_question(question: str) -> str:
    return f"Answer this question: {question}"


response: llm.Response = answer_question("What is the capital of France?")
print(response)
```

Example:
Context call:
```python
from dataclasses import dataclass

from mirascope import llm


@dataclass
class Personality:
    vibe: str


@llm.call(
    provider="openai:completions",
    model_id="gpt-4o-mini",
)
def answer_question(ctx: llm.Context[Personality], question: str) -> str:
    return f"Your vibe is {ctx.deps.vibe}. Answer this question: {question}"


ctx = llm.Context(deps=Personality(vibe="snarky"))
response = answer_question(ctx, "What is the capital of France?")
print(response)
```

Parameters
Returns
| Type | Description |
|---|---|
| CallDecorator[ToolT, FormattableT] | - |