# Chaining
Chaining combines multiple LLM calls to solve complex tasks. Since Mirascope calls are just Python functions, chaining them is as simple as calling one function after another:
<TabbedSection>
<Tab value="Call">
```python
from mirascope import llm


@llm.call("openai/gpt-5-mini")
def summarize(text: str):
    return f"Summarize this text in one line: \n{text}"


@llm.call("openai/gpt-5-mini")
def translate(text: str, language: str):
    return f"Translate this text to {language}: \n{text}"


text = """
To be, or not to be, that is the question:
Whether 'tis nobler in the mind to suffer
The slings and arrows of outrageous fortune,
Or to take arms against a sea of troubles
And by opposing end them.
"""

summary = summarize(text)
print(f"Summary: {summary.text()}")

translation = translate(summary.text(), "french")
print(f"Translation: {translation.text()}")
```
</Tab>
<Tab value="Prompt">
```python
from mirascope import llm


@llm.prompt
def summarize(text: str):
    return f"Summarize this text in one line: \n{text}"


@llm.prompt
def translate(text: str, language: str):
    return f"Translate this text to {language}: \n{text}"


text = """
To be, or not to be, that is the question:
Whether 'tis nobler in the mind to suffer
The slings and arrows of outrageous fortune,
Or to take arms against a sea of troubles
And by opposing end them.
"""

summary = summarize("openai/gpt-5-mini", text)
print(f"Summary: {summary.text()}")

translation = translate("openai/gpt-5-mini", summary.text(), "french")
print(f"Translation: {translation.text()}")
```
</Tab>
<Tab value="Model">
```python
from mirascope import llm

model = llm.model("openai/gpt-5-mini")


def summarize(text: str):
    return model.call(f"Summarize this text in one line: \n{text}")


def translate(text: str, language: str):
    return model.call(f"Translate this text to {language}: \n{text}")


text = """
To be, or not to be, that is the question:
Whether 'tis nobler in the mind to suffer
The slings and arrows of outrageous fortune,
Or to take arms against a sea of troubles
And by opposing end them.
"""

summary = summarize(text)
print(f"Summary: {summary.text()}")

translation = translate(summary.text(), "french")
print(f"Translation: {translation.text()}")
```
</Tab>
</TabbedSection>
This approach lets you:
1. **Decompose problems** — Break complex tasks into smaller, focused steps
2. **Process sequentially** — Pass the output of one step as input to the next
3. **Mix and match** — Combine calls with different models, tools, or formats
## Nested Chains
You can encapsulate a chain by calling your prompt functions from inside a wrapper function. This also lets you use different models for different steps, such as a more capable model for nuanced summarization and a faster one for straightforward translation:
<TabbedSection>
<Tab value="Call">
```python
from mirascope import llm

# Use a more capable model for nuanced summarization
summarizer = "openai/gpt-5"
# Use a faster model for straightforward translation
translator = "openai/gpt-5-mini"


@llm.call(summarizer)
def summarize(text: str):
    return f"Summarize this text: {text}"


@llm.call(translator)
def translate(text: str, language: str):
    return f"Translate this text to {language}: {text}"


def summarize_and_translate(text: str, language: str) -> str:
    summary = summarize(text)
    return translate(summary.text(), language).text()


text = """
What a piece of work is a man! how noble in reason!
how infinite in faculty! in form and moving how
express and admirable! in action how like an angel!
in apprehension how like a god! the beauty of the
world! the paragon of animals! And yet, to me,
what is this quintessence of dust?
"""

translation = summarize_and_translate(text, "french")
print(translation)
```
</Tab>
<Tab value="Prompt">
```python
from mirascope import llm

# Use a more capable model for nuanced summarization
summarizer = "openai/gpt-5"
# Use a faster model for straightforward translation
translator = "openai/gpt-5-mini"


@llm.prompt
def summarize(text: str):
    return f"Summarize this text: {text}"


@llm.prompt
def translate(text: str, language: str):
    return f"Translate this text to {language}: {text}"


def summarize_and_translate(text: str, language: str) -> str:
    summary = summarize(summarizer, text)
    return translate(translator, summary.text(), language).text()


text = """
What a piece of work is a man! how noble in reason!
how infinite in faculty! in form and moving how
express and admirable! in action how like an angel!
in apprehension how like a god! the beauty of the
world! the paragon of animals! And yet, to me,
what is this quintessence of dust?
"""

translation = summarize_and_translate(text, "french")
print(translation)
```
</Tab>
<Tab value="Model">
```python
from mirascope import llm

# Use a more capable model for nuanced summarization
summarizer = llm.model("openai/gpt-5")
# Use a faster model for straightforward translation
translator = llm.model("openai/gpt-5-mini")


def summarize(text: str):
    return summarizer.call(f"Summarize this text: {text}")


def translate(text: str, language: str):
    return translator.call(f"Translate this text to {language}: {text}")


def summarize_and_translate(text: str, language: str) -> str:
    summary = summarize(text)
    return translate(summary.text(), language).text()


text = """
What a piece of work is a man! how noble in reason!
how infinite in faculty! in form and moving how
express and admirable! in action how like an angel!
in apprehension how like a god! the beauty of the
world! the paragon of animals! And yet, to me,
what is this quintessence of dust?
"""

translation = summarize_and_translate(text, "french")
print(translation)
```
</Tab>
</TabbedSection>
Inside the wrapper, the summarization call runs first, and its result flows into the translation call's prompt.
## Conditional Chains
Route to different prompts based on prior outputs. Here, we classify sentiment first, then generate an appropriate response:
<TabbedSection>
<Tab value="Call">
```python
from enum import Enum

from mirascope import llm


class Sentiment(str, Enum):
    POSITIVE = "positive"
    NEGATIVE = "negative"


@llm.call("openai/gpt-5-mini", format=Sentiment)
def classify_sentiment(review: str):
    return f"Is the following review positive or negative? {review}"


@llm.call("openai/gpt-5-mini")
def respond_to_review(review: str):
    sentiment = classify_sentiment(review).parse()
    if sentiment == Sentiment.POSITIVE:
        instruction = "Write a thank you response for the review."
    else:
        instruction = "Write a response addressing the concerns in the review."
    return f"""
The review has been identified as {sentiment.value}.
{instruction}
Review: {review}
"""


positive_review = "This tool is awesome because it's so flexible!"
response = respond_to_review(positive_review)
print(response.text())
```
</Tab>
<Tab value="Prompt">
```python
from enum import Enum

from mirascope import llm


class Sentiment(str, Enum):
    POSITIVE = "positive"
    NEGATIVE = "negative"


@llm.prompt(format=Sentiment)
def classify_sentiment(review: str):
    return f"Is the following review positive or negative? {review}"


@llm.prompt
def respond_to_review(review: str, sentiment: Sentiment):
    if sentiment == Sentiment.POSITIVE:
        instruction = "Write a thank you response for the review."
    else:
        instruction = "Write a response addressing the concerns in the review."
    return f"""
The review has been identified as {sentiment.value}.
{instruction}
Review: {review}
"""


positive_review = "This tool is awesome because it's so flexible!"
sentiment = classify_sentiment("openai/gpt-5-mini", positive_review).parse()
response = respond_to_review("openai/gpt-5-mini", positive_review, sentiment)
print(response.text())
```
</Tab>
<Tab value="Model">
```python
from enum import Enum

from mirascope import llm


class Sentiment(str, Enum):
    POSITIVE = "positive"
    NEGATIVE = "negative"


model = llm.model("openai/gpt-5-mini")


def classify_sentiment(review: str):
    return model.call(
        f"Is the following review positive or negative? {review}",
        format=Sentiment,
    )


def respond_to_review(review: str):
    sentiment = classify_sentiment(review).parse()
    if sentiment == Sentiment.POSITIVE:
        instruction = "Write a thank you response for the review."
    else:
        instruction = "Write a response addressing the concerns in the review."
    return model.call(f"""
The review has been identified as {sentiment.value}.
{instruction}
Review: {review}
""")


positive_review = "This tool is awesome because it's so flexible!"
response = respond_to_review(positive_review)
print(response.text())
```
</Tab>
</TabbedSection>
Using `format=` with an `Enum` ensures the classifier returns a valid value we can branch on.
## Parallel Chains
When steps are independent, run them concurrently with `asyncio.gather()`:
<TabbedSection>
<Tab value="Call">
```python
import asyncio

from mirascope import llm

model = "openai/gpt-5-mini"


@llm.call(model)
async def chef_selector(ingredient: str):
    return (
        f"Identify a chef known for cooking with {ingredient}. Return only their name."
    )


@llm.call(model, format=list[str])
async def ingredients_identifier(ingredient: str):
    return f"List 5 ingredients that complement {ingredient}."


@llm.call(model)
async def recommend(chef: str, ingredients: list[str]):
    return f"As chef {chef}, recommend a recipe using: {ingredients}"


async def recipe_recommender(ingredient: str) -> str:
    chef_response, ingredients_response = await asyncio.gather(
        chef_selector(ingredient),
        ingredients_identifier(ingredient),
    )
    response = await recommend(chef_response.text(), ingredients_response.parse())
    return response.text()


async def main():
    recipe = await recipe_recommender("apples")
    print(recipe)


asyncio.run(main())
```
</Tab>
<Tab value="Prompt">
```python
import asyncio

from mirascope import llm

model = "openai/gpt-5-mini"


@llm.prompt
async def chef_selector(ingredient: str):
    return (
        f"Identify a chef known for cooking with {ingredient}. Return only their name."
    )


@llm.prompt(format=list[str])
async def ingredients_identifier(ingredient: str):
    return f"List 5 ingredients that complement {ingredient}."


@llm.prompt
async def recommend(chef: str, ingredients: list[str]):
    return f"As chef {chef}, recommend a recipe using: {ingredients}"


async def recipe_recommender(ingredient: str) -> str:
    chef_response, ingredients_response = await asyncio.gather(
        chef_selector(model, ingredient),
        ingredients_identifier(model, ingredient),
    )
    response = await recommend(
        model, chef_response.text(), ingredients_response.parse()
    )
    return response.text()


async def main():
    recipe = await recipe_recommender("apples")
    print(recipe)


asyncio.run(main())
```
</Tab>
<Tab value="Model">
```python
import asyncio

from mirascope import llm

model = llm.model("openai/gpt-5-mini")


async def chef_selector(ingredient: str):
    return await model.call_async(
        f"Identify a chef known for cooking with {ingredient}. Return only their name."
    )


async def ingredients_identifier(ingredient: str):
    return await model.call_async(
        f"List 5 ingredients that complement {ingredient}.",
        format=list[str],
    )


async def recommend(chef: str, ingredients: list[str]):
    return await model.call_async(
        f"As chef {chef}, recommend a recipe using: {ingredients}"
    )


async def recipe_recommender(ingredient: str) -> str:
    chef_response, ingredients_response = await asyncio.gather(
        chef_selector(ingredient),
        ingredients_identifier(ingredient),
    )
    response = await recommend(chef_response.text(), ingredients_response.parse())
    return response.text()


async def main():
    recipe = await recipe_recommender("apples")
    print(recipe)


asyncio.run(main())
```
</Tab>
</TabbedSection>
Both `chef_selector` and `ingredients_identifier` run concurrently, so the total time is roughly that of the slower call rather than the sum of both.
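To see the effect yourself, here is a minimal timing sketch that reuses `asyncio` and the `chef_selector` and `ingredients_identifier` functions from the Call tab above (the helper name `timed_fan_out` is just for illustration):
```python
import time


async def timed_fan_out(ingredient: str) -> None:
    start = time.perf_counter()
    # Fan out the two independent calls concurrently.
    await asyncio.gather(
        chef_selector(ingredient),
        ingredients_identifier(ingredient),
    )
    # Elapsed time is close to the slower of the two calls, not their sum.
    print(f"Fan-out took {time.perf_counter() - start:.2f}s")
```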
<Note>
Parallel chains require async functions. See [Async](/docs/learn/llm/async) for more on async patterns.
</Note>
## Best Practices
- **Keep steps focused** — Each call should do one thing well
- **Use structured output** — `format=` makes it easier to extract and pass data between steps
- **Consider async for I/O** — Parallel calls can significantly reduce latency
- **Handle errors at boundaries** — Validate outputs before passing them to the next step (see the sketch below)
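As an example, here is a minimal sketch of validating an intermediate result at a step boundary, assuming the `summarize` and `translate` functions from the first example on this page (the helper name `summarize_and_translate_safely` is just for illustration):
```python
def summarize_and_translate_safely(text: str, language: str) -> str:
    # Check the intermediate output before sending it downstream.
    summary_text = summarize(text).text().strip()
    if not summary_text:
        # Fail fast (or retry) rather than passing an empty prompt to the next step.
        raise ValueError("Summarization returned no text; aborting the chain.")
    return translate(summary_text, language).text()
```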
## Next Steps
- [Structured Output](/docs/learn/llm/structured-output) — Parse responses into typed objects
- [Async](/docs/learn/llm/async) — Concurrent execution patterns
- [Agents](/docs/learn/llm/agents) — Autonomous systems that chain tools and calls