Mirascope v2

calls

Class AsyncCall

A class for generating responses using LLMs asynchronously.

Bases: BaseCall[P, AsyncPrompt, AsyncToolkit, FormattableT], Generic[P, FormattableT]

Function call

Generates a response using the LLM asynchronously.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| self | Any | - |
| args = () | P.args | - |
| kwargs = {} | P.kwargs | - |

Returns

Function stream

Generates a streaming response using the LLM asynchronously.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| self | Any | - |
| args = () | P.args | - |
| kwargs = {} | P.kwargs | - |
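The async variants mirror the synchronous API but must be awaited. A minimal sketch, assuming `llm.call` applied to an `async def` prompt function produces an `AsyncCall` (so the decorated function itself is awaitable and also exposes `stream`; the exact chunk type yielded by `stream` is an assumption here):

```python
import asyncio

from mirascope import llm


@llm.call(
    provider="openai:completions",
    model_id="gpt-4o-mini",
)
async def answer_question(question: str) -> str:
    return f"Answer this question: {question}"


async def main() -> None:
    # Calling the decorated function awaits the full response.
    response = await answer_question("What is the capital of France?")
    print(response)


asyncio.run(main())
```

Running this requires the `mirascope` package and provider credentials (e.g. an OpenAI API key), so it is a sketch rather than a self-checking snippet.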

Class AsyncContextCall

A class for generating responses using LLMs asynchronously.

Bases: BaseCall[P, AsyncContextPrompt, AsyncContextToolkit[DepsT], FormattableT], Generic[P, DepsT, FormattableT]

Function call

Generates a response using the LLM asynchronously.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| self | Any | - |
| ctx | Context[DepsT] | - |
| args = () | P.args | - |
| kwargs = {} | P.kwargs | - |

Returns

Function stream

Generates a streaming response using the LLM asynchronously.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| self | Any | - |
| ctx | Context[DepsT] | - |
| args = () | P.args | - |
| kwargs = {} | P.kwargs | - |

Class Call

A class for generating responses using LLMs.

Bases: BaseCall[P, Prompt, Toolkit, FormattableT], Generic[P, FormattableT]

Function call

Generates a response using the LLM.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| self | Any | - |
| args = () | P.args | - |
| kwargs = {} | P.kwargs | - |

Returns

| Type | Description |
| --- | --- |
| Response \| Response[FormattableT] | - |

Function stream

Generates a streaming response using the LLM.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| self | Any | - |
| args = () | P.args | - |
| kwargs = {} | P.kwargs | - |

Returns
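A minimal streaming sketch, assuming the decorated prompt function is a `Call` and that its `stream` method returns an iterable of response chunks (the chunk structure and how it prints are assumptions, not part of this reference):

```python
from mirascope import llm


@llm.call(
    provider="openai:completions",
    model_id="gpt-4o-mini",
)
def answer_question(question: str) -> str:
    return f"Answer this question: {question}"


# stream() is assumed to yield chunks incrementally as the
# provider generates them; the exact chunk type may vary.
for chunk in answer_question.stream("What is the capital of France?"):
    print(chunk, end="", flush=True)
```

As with the other examples on this page, this requires the `mirascope` package and provider credentials to run.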

Class CallDecorator

A decorator for converting prompts to calls.

Bases: Generic[ToolT, FormattableT]

Attributes

| Name | Type | Description |
| --- | --- | --- |
| model | Model | - |
| tools | Sequence[ToolT] \| None | - |
| format | type[FormattableT] \| Format[FormattableT] \| None | - |

Class ContextCall

A class for generating responses using LLMs.

Bases: BaseCall[P, ContextPrompt, ContextToolkit[DepsT], FormattableT], Generic[P, DepsT, FormattableT]

Function call

Generates a response using the LLM.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| self | Any | - |
| ctx | Context[DepsT] | - |
| args = () | P.args | - |
| kwargs = {} | P.kwargs | - |

Returns

| Type | Description |
| --- | --- |
| ContextResponse[DepsT, None] \| ContextResponse[DepsT, FormattableT] | - |

Function stream

Generates a streaming response using the LLM.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| self | Any | - |
| ctx | Context[DepsT] | - |
| args = () | P.args | - |
| kwargs = {} | P.kwargs | - |

Returns

Function call

Returns a decorator for turning prompt template functions into generations.

This decorator creates a Call or ContextCall that can be used with prompt functions. If the first parameter is typed as llm.Context[T], it creates a ContextCall. Otherwise, it creates a regular Call.

Example:

Regular call:

```python
from mirascope import llm

@llm.call(
    provider="openai:completions",
    model_id="gpt-4o-mini",
)
def answer_question(question: str) -> str:
    return f"Answer this question: {question}"

response: llm.Response = answer_question("What is the capital of France?")
print(response)
```

Example:

Context call:

```python
from dataclasses import dataclass
from mirascope import llm

@dataclass
class Personality:
    vibe: str

@llm.call(
    provider="openai:completions",
    model_id="gpt-4o-mini",
)
def answer_question(ctx: llm.Context[Personality], question: str) -> str:
    return f"Your vibe is {ctx.deps.vibe}. Answer this question: {question}"

ctx = llm.Context(deps=Personality(vibe="snarky"))
response = answer_question(ctx, "What is the capital of France?")
print(response)
```

Parameters

| Name | Type | Description |
| --- | --- | --- |
| provider | Provider | - |
| model_id | ModelId | - |
| tools = None | list[ToolT] \| None | - |
| format = None | type[FormattableT] \| Format[FormattableT] \| None | - |
| params = {} | Unpack[Params] | - |

Returns

| Type | Description |
| --- | --- |
| CallDecorator[ToolT, FormattableT] | - |