# LLM Quickstart
## Installation
<TabbedSection>
<Tab value="uv">
```bash
uv add "mirascope[all]"
```
</Tab>
<Tab value="pip">
```bash
pip install "mirascope[all]"
```
</Tab>
</TabbedSection>
<Info title="Setting Up API Keys" collapsible={true} defaultOpen={false}>
To run any of the LLM-powered examples, you'll need to set up API keys. The easiest way is to sign up for [Mirascope Router](/cloud/settings/api-keys) and get a Mirascope API key. This single key works for multiple providers, including Anthropic, Google, and OpenAI.
Alternatively, set the API key for your chosen provider:
| Provider | Environment Variable |
| --- | --- |
| Anthropic | `ANTHROPIC_API_KEY` |
| Google | `GOOGLE_API_KEY` |
| Mirascope | `MIRASCOPE_API_KEY` |
| OpenAI | `OPENAI_API_KEY` |
| Together | `TOGETHER_API_KEY` |
Export your key in your shell so all examples run as written:
```bash
export OPENAI_API_KEY="your-api-key"
```
Or use a `.env` file with `python-dotenv`:
```python
from dotenv import load_dotenv
load_dotenv() # Loads variables from .env file
```
See [Providers](/docs/learn/llm/providers) for the full list of providers and configuration options.
</Info>
## Calling Models
The simplest way to call an LLM is with `llm.Model`. Specify a model ID in the format `"provider/model-name"` and call it with your content:
```python
from mirascope import llm
model = llm.Model("openai/gpt-4o")
response = model.call("What is the capital of France?")
print(response.text())
```
See [Models](/docs/learn/llm/models) for parameters, model overrides, and more.
## Prompts
The `@llm.prompt` decorator creates reusable prompt functions. You write a function that returns the content to send, and the decorated function takes the model to call (a model ID string or `llm.Model` instance) as its first argument:
```python
from mirascope import llm
@llm.prompt
def recommend_book(genre: str):
    return f"Please recommend a book in {genre}."
response = recommend_book("anthropic/claude-haiku-4-5", "fantasy")
print(response.text())
```
See [Prompts](/docs/learn/llm/prompts) for return types, multimodal content, and more.
## Calls
The `@llm.call` decorator bundles a model with your prompt function, so you call it with just your own arguments:
```python
from mirascope import llm
@llm.call("openai/gpt-5-mini")
def recommend_book(genre: str):
    return f"Please recommend a book in {genre}."
response = recommend_book("fantasy")
print(response.text())
```
See [Calls](/docs/learn/llm/calls) for runtime overrides, parameters, and more.
## Continuing Conversations
Every response contains the full message history. Use `response.resume()` to continue the conversation:
```python
from mirascope import llm
model = llm.Model("openai/gpt-4o")
response = model.call("What's the capital of France?")
print(response.text())
# Continue the conversation with the same model and message history
followup = response.resume("What's the population of that city?")
print(followup.text())
# Chain multiple turns
another = followup.resume("What famous landmarks are there?")
print(another.text())
```
See [Responses](/docs/learn/llm/responses) for content access, metadata, and more.
## Tools
Tools let LLMs request function calls. Define tools with `@llm.tool`, pass them to your call, then execute the requested tools and resume with their outputs in a loop until the response has no more tool calls:
<TabbedSection>
<Tab value="Call">
```python
import math
from mirascope import llm
@llm.tool
def sqrt_tool(number: float) -> float:
    """Computes the square root of a number"""
    return math.sqrt(number)
@llm.call("openai/gpt-5-mini", tools=[sqrt_tool])
def math_assistant(query: str):
    return query
response = math_assistant("What's the square root of 4242?")
while response.tool_calls:
    tool_outputs = response.execute_tools()
    response = response.resume(tool_outputs)
print(response.text())
```
</Tab>
<Tab value="Prompt">
```python
import math
from mirascope import llm
@llm.tool
def sqrt_tool(number: float) -> float:
    """Computes the square root of a number"""
    return math.sqrt(number)
@llm.prompt(tools=[sqrt_tool])
def math_assistant(query: str):
    return query
model = llm.Model("openai/gpt-5-mini")
response = math_assistant(model, "What's the square root of 4242?")
while response.tool_calls:
    tool_outputs = response.execute_tools()
    response = response.resume(tool_outputs)
print(response.text())
```
</Tab>
<Tab value="Model">
```python
import math
from mirascope import llm
@llm.tool
def sqrt_tool(number: float) -> float:
    """Computes the square root of a number"""
    return math.sqrt(number)
model = llm.Model("openai/gpt-5-mini")
response = model.call("What's the square root of 4242?", tools=[sqrt_tool])
while response.tool_calls:
    tool_outputs = response.execute_tools()
    response = response.resume(tool_outputs)
print(response.text())
```
</Tab>
</TabbedSection>
See [Tools](/docs/learn/llm/tools) for async tools, parallel execution, and more.
## Structured Output
Use the `format` parameter to request typed output, then call `.parse()` on the response to get it. Mirascope supports primitives, enums, and Pydantic models:
<TabbedSection>
<Tab value="Call">
```python
from mirascope import llm
@llm.call("openai/gpt-4o-mini", format=list[str])
def list_books(genre: str):
    return f"List 3 {genre} books."
books = list_books("fantasy").parse()
print(books)
# ['The Name of the Wind', 'Mistborn', 'The Way of Kings']
```
</Tab>
<Tab value="Prompt">
```python
from mirascope import llm
@llm.prompt(format=list[str])
def list_books(genre: str):
    return f"List 3 {genre} books."
books = list_books("openai/gpt-4o-mini", "fantasy").parse()
print(books)
# ['The Name of the Wind', 'Mistborn', 'The Way of Kings']
```
</Tab>
<Tab value="Model">
```python
from mirascope import llm
model = llm.Model("openai/gpt-4o-mini")
response = model.call("List 3 fantasy books.", format=list[str])
books = response.parse()
print(books)
# ['The Name of the Wind', 'Mistborn', 'The Way of Kings']
```
</Tab>
</TabbedSection>
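The same `format` and `.parse()` pattern extends to Pydantic models. Here is a minimal sketch, assuming `format` accepts a Pydantic model class just like the primitive types above (the `Book` model, prompt text, and model ID are illustrative):
```python
from pydantic import BaseModel
from mirascope import llm
class Book(BaseModel):
    title: str
    author: str
# Assumes `format` accepts a Pydantic model class, per the supported types above
@llm.call("openai/gpt-4o-mini", format=Book)
def extract_book(text: str):
    return f"Extract the book from: {text}"
book = extract_book("The Name of the Wind by Patrick Rothfuss").parse()
print(book.title, "by", book.author)
```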
See [Structured Output](/docs/learn/llm/structured-output) for formatting modes, validation, and more.
## Streaming
Call `.stream()` to get responses as they're generated:
<TabbedSection>
<Tab value="Call">
```python
from mirascope import llm
@llm.call("openai/gpt-5-mini")
def recommend_book(genre: str):
    return f"Recommend a {genre} book."
response: llm.StreamResponse = recommend_book.stream("fantasy")
for chunk in response.text_stream():
    print(chunk, end="", flush=True)
```
</Tab>
<Tab value="Prompt">
```python
from mirascope import llm
@llm.prompt
def recommend_book(genre: str):
    return f"Recommend a {genre} book."
response: llm.StreamResponse = recommend_book.stream("openai/gpt-5-mini", "fantasy")
for chunk in response.text_stream():
    print(chunk, end="", flush=True)
```
</Tab>
<Tab value="Model">
```python
from mirascope import llm
model = llm.Model("openai/gpt-5-mini")
response: llm.StreamResponse = model.stream("Recommend a fantasy book.")
for chunk in response.text_stream():
    print(chunk, end="", flush=True)
```
</Tab>
</TabbedSection>
See [Streaming](/docs/learn/llm/streaming) for stream iterators, accumulated content, and more.
## Async
Define any prompt function with `async def` to await your calls and run them concurrently:
<TabbedSection>
<Tab value="Call">
```python
import asyncio
from mirascope import llm
@llm.call("openai/gpt-5-mini")
async def recommend_book(genre: str):
    return f"Recommend a {genre} book."
async def main():
    response = await recommend_book("fantasy")
    print(response.text())
asyncio.run(main())
```
</Tab>
<Tab value="Prompt">
```python
import asyncio
from mirascope import llm
@llm.prompt
async def recommend_book(genre: str):
    return f"Recommend a {genre} book."
async def main():
    response: llm.AsyncResponse = await recommend_book("openai/gpt-5-mini", "fantasy")
    print(response.text())
asyncio.run(main())
```
</Tab>
<Tab value="Model">
```python
import asyncio
from mirascope import llm
model = llm.Model("openai/gpt-5-mini")
async def main():
    response: llm.AsyncResponse = await model.call_async("Recommend a fantasy book.")
    print(response.text())
asyncio.run(main())
```
</Tab>
</TabbedSection>
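Because async prompt functions return awaitables, you can fan out several calls concurrently with standard `asyncio` tools. A minimal sketch, reusing the call form from the tabs above (the genres are just examples):
```python
import asyncio
from mirascope import llm
@llm.call("openai/gpt-5-mini")
async def recommend_book(genre: str):
    return f"Recommend a {genre} book."
async def main():
    # Run the calls concurrently; gather returns responses in input order
    responses = await asyncio.gather(
        recommend_book("fantasy"),
        recommend_book("mystery"),
        recommend_book("sci-fi"),
    )
    for response in responses:
        print(response.text())
asyncio.run(main())
```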
See [Async](/docs/learn/llm/async) for parallel calls, async tools, and more.
## Learning More
If you'd like to learn more about Mirascope, consider the following resources:
- Read all of the focused topic guides in these docs. They are organized so you can read them top to bottom, starting with [messages](/docs/learn/llm/messages).
- Our extensive [end-to-end snapshot tests](https://github.com/Mirascope/mirascope/tree/v2/python/tests/e2e) consist of real, runnable Mirascope code and snapshots that serialize the expected output. For example, see the [end-to-end tests for cross-provider thinking support](https://github.com/Mirascope/mirascope/blob/v2/python/tests/e2e/output/test_call_with_thinking_true.py) and [the corresponding snapshots](https://github.com/Mirascope/mirascope/tree/v2/python/tests/e2e/output/snapshots/test_call_with_thinking_true).
- The [API reference](/docs/api) documents all of the public functionality in Mirascope.
- You can hop on our [Discord](https://mirascope.com/discord-invite) and ask us questions directly!
We welcome your feedback, questions, and bug reports.