Mirascope v2

prompts

Class AsyncContextMessageTemplate

Protocol for an async context-aware prompt function that returns UserContent or Sequence[Message].

An async MessageTemplate with a first parameter named 'ctx' of type Context[DepsT]. Can be converted by the llm.prompt decorator into an AsyncContextPrompt, or by the llm.call decorator into an AsyncContextCall.

Bases:

Protocol[P, DepsT]

Class AsyncContextPrompt

An async context-aware prompt that can be called with a model to generate a response.

Created by decorating an async ContextMessageTemplate with llm.prompt. The decorated async function (with first parameter 'ctx' of type Context[DepsT]) becomes callable with a Model to generate LLM responses asynchronously with context dependencies.

An AsyncContextPrompt is essentially: async ContextMessageTemplate + tools + format. It can be invoked with a model: await prompt(model, ctx, *args, **kwargs).

Bases:

Generic[P, DepsT, FormattableT]

Attributes

| Name | Type | Description |
| --- | --- | --- |
| fn | `AsyncContextMessageTemplate[P, DepsT]` | The underlying async context-aware prompt function that generates message content. |
| toolkit | `AsyncContextToolkit[DepsT]` | The toolkit containing this prompt's async context-aware tools. |
| format | `type[FormattableT] \| Format[FormattableT] \| None` | The response format for the generated response. |
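The `await prompt(model, ctx, *args, **kwargs)` calling convention can be illustrated with a minimal stand-in. All names below (`Context`, `AsyncContextPrompt`, `fake_model`, `template`) are hypothetical sketches, not Mirascope's implementation:

```python
import asyncio
from dataclasses import dataclass

# Hypothetical stand-in for llm.Context carrying typed dependencies.
@dataclass
class Context:
    deps: dict

# An async context message template: first parameter is 'ctx'.
async def template(ctx: Context, genre: str) -> str:
    return f"Recommend a {genre} book for {ctx.deps['user']}."

# Simplified stand-in: render the template with ctx, then call the model.
class AsyncContextPrompt:
    def __init__(self, fn):
        self.fn = fn

    async def __call__(self, model, ctx, *args, **kwargs):
        content = await self.fn(ctx, *args, **kwargs)
        return await model(content)

# Stand-in for a real Model invocation.
async def fake_model(content: str) -> str:
    return f"LLM saw: {content}"

ctx = Context(deps={"user": "Ada"})
result = asyncio.run(AsyncContextPrompt(template)(fake_model, ctx, "horror"))
print(result)  # LLM saw: Recommend a horror book for Ada.
```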

Function call

Generates a response using the provided model asynchronously.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| self | `Any` | - |
| model | `Model` | - |
| ctx | `Context[DepsT]` | - |
| args = `()` | `P.args` | - |
| kwargs = `{}` | `P.kwargs` | - |

Returns

Function stream

Generates a streaming response using the provided model asynchronously.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| self | `Any` | - |
| model | `Model` | - |
| ctx | `Context[DepsT]` | - |
| args = `()` | `P.args` | - |
| kwargs = `{}` | `P.kwargs` | - |

Class AsyncMessageTemplate

Protocol for an async prompt function that returns UserContent or Sequence[Message].

An async MessageTemplate that can be converted by the llm.prompt decorator into an AsyncPrompt, or by the llm.call decorator into an AsyncCall.

Bases:

Protocol[P]

Class AsyncPrompt

An async prompt that can be called with a model to generate a response.

Created by decorating an async MessageTemplate with llm.prompt. The decorated async function becomes callable with a Model to generate LLM responses asynchronously.

An AsyncPrompt is essentially: async MessageTemplate + tools + format. It can be invoked with a model: await prompt(model, *args, **kwargs).

Bases:

Generic[P, FormattableT]

Attributes

| Name | Type | Description |
| --- | --- | --- |
| fn | `AsyncMessageTemplate[P]` | The underlying async prompt function that generates message content. |
| toolkit | `AsyncToolkit` | The toolkit containing this prompt's async tools. |
| format | `type[FormattableT] \| Format[FormattableT] \| None` | The response format for the generated response. |
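The awaitable `await prompt(model, *args, **kwargs)` convention can be sketched with a minimal stand-in. The classes below (`FakeModel`, `AsyncPrompt`) are hypothetical and only illustrate the shape, not Mirascope's implementation:

```python
import asyncio

# Hypothetical stand-in for a Model with an async generation method.
class FakeModel:
    async def generate(self, content: str) -> str:
        await asyncio.sleep(0)  # stand-in for a real network call
        return f"response to: {content}"

# Simplified stand-in: async template + awaitable __call__.
class AsyncPrompt:
    def __init__(self, fn):
        self.fn = fn  # the underlying async MessageTemplate

    async def __call__(self, model, *args, **kwargs):
        content = await self.fn(*args, **kwargs)
        return await model.generate(content)

async def template(genre: str) -> str:
    return f"Recommend a {genre} book."

prompt = AsyncPrompt(template)
result = asyncio.run(prompt(FakeModel(), "sci-fi"))
print(result)  # response to: Recommend a sci-fi book.
```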

Function call

Generates a response using the provided model asynchronously.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| self | `Any` | - |
| model | `Model` | - |
| args = `()` | `P.args` | - |
| kwargs = `{}` | `P.kwargs` | - |

Returns

Function stream

Generates a streaming response using the provided model asynchronously.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| self | `Any` | - |
| model | `Model` | - |
| args = `()` | `P.args` | - |
| kwargs = `{}` | `P.kwargs` | - |

Class ContextMessageTemplate

Protocol for a context-aware prompt function that returns UserContent or Sequence[Message].

A MessageTemplate with a first parameter named 'ctx' of type Context[DepsT]. Can be converted by the llm.prompt decorator into a ContextPrompt, or by the llm.call decorator into a ContextCall.

Bases:

Protocol[P, DepsT]
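Structurally, a ContextMessageTemplate is just a plain function whose first parameter is named `ctx` and typed `Context[DepsT]`. A simplified sketch (the `Context` and `LibraryDeps` classes here are hypothetical stand-ins, not Mirascope's definitions):

```python
from dataclasses import dataclass
from typing import Generic, TypeVar

DepsT = TypeVar("DepsT")

# Hypothetical stand-in for llm.Context, for illustration only.
@dataclass
class Context(Generic[DepsT]):
    deps: DepsT

# Hypothetical dependency type.
@dataclass
class LibraryDeps:
    favorite_author: str

# A context-aware template: 'ctx' first, then the prompt's own arguments.
def recommend_book(ctx: Context[LibraryDeps], genre: str) -> str:
    return f"Recommend a {genre} book by {ctx.deps.favorite_author}."

print(recommend_book(Context(LibraryDeps("Le Guin")), "fantasy"))
# Recommend a fantasy book by Le Guin.
```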

Class ContextPrompt

A context-aware prompt that can be called with a model to generate a response.

Created by decorating a ContextMessageTemplate with llm.prompt. The decorated function (with first parameter 'ctx' of type Context[DepsT]) becomes callable with a Model to generate LLM responses with context dependencies.

A ContextPrompt is essentially: ContextMessageTemplate + tools + format. It can be invoked with a model: prompt(model, ctx, *args, **kwargs).

Bases:

Generic[P, DepsT, FormattableT]

Attributes

| Name | Type | Description |
| --- | --- | --- |
| fn | `ContextMessageTemplate[P, DepsT]` | The underlying context-aware prompt function that generates message content. |
| toolkit | `ContextToolkit[DepsT]` | The toolkit containing this prompt's context-aware tools. |
| format | `type[FormattableT] \| Format[FormattableT] \| None` | The response format for the generated response. |

Function call

Generates a response using the provided model.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| self | `Any` | - |
| model | `Model` | - |
| ctx | `Context[DepsT]` | - |
| args = `()` | `P.args` | - |
| kwargs = `{}` | `P.kwargs` | - |

Returns

| Type | Description |
| --- | --- |
| `ContextResponse[DepsT, None] \| ContextResponse[DepsT, FormattableT]` | - |

Function stream

Generates a streaming response using the provided model.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| self | `Any` | - |
| model | `Model` | - |
| ctx | `Context[DepsT]` | - |
| args = `()` | `P.args` | - |
| kwargs = `{}` | `P.kwargs` | - |

Returns

Class MessageTemplate

Protocol for a prompt function that returns UserContent or Sequence[Message].

A MessageTemplate is a raw function that returns prompt content. It can be converted by the llm.prompt decorator into a Prompt (callable with a Model), or by the llm.call decorator into a Call (Prompt + Model).

Bases:

Protocol[P]

Class Prompt

A prompt that can be called with a model to generate a response.

Created by decorating a MessageTemplate with llm.prompt. The decorated function becomes callable with a Model to generate LLM responses.

A Prompt is essentially: MessageTemplate + tools + format. It can be invoked with a model: prompt(model, *args, **kwargs).

Bases:

Generic[P, FormattableT]

Attributes

| Name | Type | Description |
| --- | --- | --- |
| fn | `MessageTemplate[P]` | The underlying prompt function that generates message content. |
| toolkit | `Toolkit` | The toolkit containing this prompt's tools. |
| format | `type[FormattableT] \| Format[FormattableT] \| None` | The response format for the generated response. |

Function call

Generates a response using the provided model.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| self | `Any` | - |
| model | `Model` | - |
| args = `()` | `P.args` | - |
| kwargs = `{}` | `P.kwargs` | - |

Returns

| Type | Description |
| --- | --- |
| `Response \| Response[FormattableT]` | - |

Function stream

Generates a streaming response using the provided model.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| self | `Any` | - |
| model | `Model` | - |
| args = `()` | `P.args` | - |
| kwargs = `{}` | `P.kwargs` | - |

Returns

Class PromptDecorator

Decorator for converting a MessageTemplate into a Prompt.

Takes a raw prompt function that returns message content and wraps it with tools and format support, creating a Prompt that can be called with a model.

The decorator automatically detects whether the function is async or context-aware and creates the appropriate Prompt variant (Prompt, AsyncPrompt, ContextPrompt, or AsyncContextPrompt).

Bases:

Generic[ToolT, FormattableT]

Attributes

| Name | Type | Description |
| --- | --- | --- |
| tools | `Sequence[ToolT] \| None` | The tools that are included in the prompt, if any. |
| format | `type[FormattableT] \| Format[FormattableT] \| None` | The structured output format of the prompt, if any. |

Function prompt

Decorates a MessageTemplate to create a Prompt callable with a model.

This decorator transforms a raw prompt function (that returns message content) into a Prompt object that can be invoked with a model to generate LLM responses.

The decorator automatically detects the function type:

  • If the first parameter is named 'ctx' with type llm.Context[T], creates a ContextPrompt
  • If the function is async, creates an AsyncPrompt or AsyncContextPrompt
  • Otherwise, creates a regular Prompt
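The dispatch described above can be sketched with a small helper. This is a simplified, hypothetical illustration (`detect_variant` is not Mirascope's source; the real decorator also checks the `ctx` parameter's `llm.Context[T]` annotation, whereas this sketch checks only the parameter name):

```python
import inspect

# Hypothetical helper sketching the variant-selection logic.
def detect_variant(fn) -> str:
    params = list(inspect.signature(fn).parameters)
    has_ctx = bool(params) and params[0] == "ctx"
    is_async = inspect.iscoroutinefunction(fn)
    if has_ctx and is_async:
        return "AsyncContextPrompt"
    if has_ctx:
        return "ContextPrompt"
    if is_async:
        return "AsyncPrompt"
    return "Prompt"

def plain(genre: str) -> str: ...
async def with_ctx(ctx, genre: str) -> str: ...

print(detect_variant(plain))     # Prompt
print(detect_variant(with_ctx))  # AsyncContextPrompt
```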

Parameters

| Name | Type | Description |
| --- | --- | --- |
| __fn = `None` | `AsyncContextMessageTemplate[P, DepsT] \| ContextMessageTemplate[P, DepsT] \| AsyncMessageTemplate[P] \| MessageTemplate[P] \| None` | The prompt function to decorate (optional, for decorator syntax without parentheses). |
| tools = `None` | `Sequence[ToolT] \| None` | Optional `Sequence` of tools to make available to the LLM. |
| format = `None` | `type[FormattableT] \| Format[FormattableT] \| None` | Optional response format class (`BaseModel`) or `Format` instance. |

Returns

| Type | Description |
| --- | --- |
| `AsyncContextPrompt[P, DepsT, FormattableT] \| ContextPrompt[P, DepsT, FormattableT] \| AsyncPrompt[P, FormattableT] \| Prompt[P, FormattableT] \| PromptDecorator[ToolT, FormattableT]` | A `Prompt` variant (Prompt, AsyncPrompt, ContextPrompt, or AsyncContextPrompt), or a `PromptDecorator` when called without a function. |