# Prompts

When working programmatically with LLMs, we often want reusable functions that encapsulate constructing messages and getting an LLM response. Using just what we've already learned about [Messages](/docs/learn/llm/messages) and [Models](/docs/learn/llm/models), we might write code like this:

```python
from mirascope import llm


def recommend_book(model_id: llm.ModelId, genre: str):
    model = llm.Model(model_id)
    return model.call(f"Please recommend a book in {genre}.")


response = recommend_book("anthropic/claude-haiku-4-5", "fantasy")
print(response.pretty())
```

Since this is such a common pattern in LLM-powered development, Mirascope provides the `@llm.prompt` decorator to streamline things:

```python
from mirascope import llm


@llm.prompt
def recommend_book(genre: str):
    return f"Please recommend a book in {genre}."


response = recommend_book("anthropic/claude-haiku-4-5", "fantasy")
print(response.text())
```

You write a *prompt function* that returns the content to send to the LLM. The decorator converts it into an `llm.Prompt` that you can call with any model.
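Because the `Prompt` is decoupled from any particular model, the same function can target different providers without changes. A minimal sketch reusing `recommend_book` from above, with the model ids used elsewhere on this page:

```python
# One prompt, two providers: pass a different model id per call.
claude_response = recommend_book("anthropic/claude-haiku-4-5", "fantasy")
gpt_response = recommend_book("openai/gpt-4o", "fantasy")

print(claude_response.text())
print(gpt_response.text())
```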
## Prompt Return Types

A prompt function can return content in several forms:

- **A string** — Converted to a user message
- **A list of content parts** — `llm.Text`, `llm.Image`, `llm.Audio`, combined into a user message
- **A list of messages** — Passed directly to the model for full control

```python
from mirascope import llm


@llm.prompt
def simple_prompt(topic: str):
    # Converts to a user message with one piece of text.
    return f"Tell me about {topic}."


@llm.prompt
def multimodal_prompt(image_url: str):
    # Converts to a user message with an image and a piece of text.
    return [
        llm.Image.from_url(image_url),
        "What's in this image?",
    ]


@llm.prompt
def chat_prompt(topic: str):
    # These messages will be used directly.
    return [
        llm.messages.system("You are a helpful assistant."),
        llm.messages.user(f"Tell me about {topic}."),
    ]
```

## Calling Prompts

A decorated prompt takes a model as its first argument, followed by your function's original arguments. If you don't need custom params, you can just pass a model id string:

```python
from mirascope import llm


@llm.prompt
def recommend_book(genre: str):
    return f"Please recommend a book in {genre}."


response = recommend_book("anthropic/claude-haiku-4-5", "fantasy")
print(response.text())
```

When you need to configure parameters like `temperature`, pass an `llm.Model` instance:

```python
from mirascope import llm


@llm.prompt
def recommend_book(genre: str):
    return f"Please recommend a book in {genre}."


# Use llm.Model when you need to configure parameters
model = llm.Model("openai/gpt-4o", temperature=0.9)
response = recommend_book(model, "fantasy")
print(response.pretty())
```

<Note>
The `Prompt` doesn't reference the model context manager, so it will always use the model that you pass in as the first argument.
</Note>
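In other words, even when an ambient model is active, the explicitly passed model wins. A minimal sketch, assuming the context manager described in the [Models](/docs/learn/llm/models) guide (the `llm.model` name here is an assumption for illustration):

```python
# Hypothetical: `llm.model(...)` stands in for the model context manager
# from the Models guide; the exact name is an assumption.
with llm.model("openai/gpt-4o"):
    # The Prompt ignores the ambient model and uses its first argument.
    response = recommend_book("anthropic/claude-haiku-4-5", "fantasy")
```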

## Inspecting Messages

Use `.messages()` to see what messages a prompt generates without calling an LLM:

```python
from mirascope import llm


@llm.prompt
def recommend_book(genre: str):
    return f"Please recommend a book in {genre}."


# Get the messages without calling the LLM
messages = recommend_book.messages("fantasy")
print(messages)
# [UserMessage(content=[Text(text='Please recommend a book in fantasy.')])]
```
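The same works for prompts that return full message lists, which makes it easy to verify message construction in tests. A small sketch reusing the `chat_prompt` defined earlier; the printed output shown is illustrative:

```python
# Builds the system + user messages without making an LLM call.
messages = chat_prompt.messages("space")
print(messages)
# Illustrative output: a system message followed by a user message, e.g.
# [SystemMessage(...), UserMessage(content=[Text(text='Tell me about space.')])]
```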

## Decorator Arguments

The `@llm.prompt` decorator accepts optional arguments for tools and structured output:

| Argument | Description |
| --- | --- |
| `tools` | List of tools the LLM can call. See [Tools](/docs/learn/llm/tools). |
| `format` | Response format for structured output. See [Structured Output](/docs/learn/llm/structured-output). |

We cover these arguments in their respective guides. When no arguments are needed, the parentheses are optional: `@llm.prompt` and `@llm.prompt()` are equivalent.
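For instance, these two declarations behave the same:

```python
from mirascope import llm


@llm.prompt
def recommend_book(genre: str):
    return f"Please recommend a book in {genre}."


@llm.prompt()  # Equivalent to the bare decorator above.
def also_recommend_book(genre: str):
    return f"Please recommend a book in {genre}."
```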

## Next Steps

- [Calls](/docs/learn/llm/calls) — Bundle a model with your prompt function
- [Tools](/docs/learn/llm/tools) — Let LLMs call functions
- [Structured Output](/docs/learn/llm/structured-output) — Parse responses into structured data
