# JSON Mode
<Note>
If you haven't already, we recommend first reading the section on [Calls](/docs/v1/learn/calls).
</Note>
JSON Mode is a feature in Mirascope that allows you to request structured JSON output from Large Language Models (LLMs). This mode is particularly useful when you need to extract structured information from the model's responses, making it easier to parse and use the data in your applications.
<Warning title="Not all providers have an official JSON Mode" collapsible={true} defaultOpen={false}>
For providers with explicit support, Mirascope uses the native JSON Mode feature of the API. For providers without explicit support (e.g. Anthropic), Mirascope implements a pseudo JSON Mode by instructing the model in the prompt to output JSON.
| Provider | Support Type | Implementation |
|-----------|--------------|---------------------|
| Anthropic | Pseudo | Prompt engineering |
| Azure | Explicit | Native API feature |
| Bedrock | Pseudo | Prompt engineering |
| Cohere | Pseudo | Prompt engineering |
| Google | Explicit | Native API feature |
| Groq | Explicit | Native API feature |
| LiteLLM | Explicit | Native API feature |
| Mistral | Explicit | Native API feature |
| OpenAI | Explicit | Native API feature |
If you'd prefer that Mirascope not modify your prompt, you can always set JSON mode yourself through `call_params` rather than using the `json_mode` argument, as sketched below. The `json_mode` argument provides provider-agnostic support, but it is not required to use JSON Mode.
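For example, with the OpenAI provider you could forward the native `response_format` parameter through `call_params` yourself. This is a minimal sketch assuming the OpenAI provider; since Mirascope leaves your prompt untouched in this case, you must request JSON in the prompt yourself:
```python
import json

from mirascope import llm


# A sketch assuming the OpenAI provider: `response_format` is OpenAI's
# native JSON Mode parameter, forwarded as-is through `call_params`.
@llm.call(
    provider="openai",
    model="gpt-4o-mini",
    call_params={"response_format": {"type": "json_object"}},
)
def get_book_info(book_title: str) -> str:
    # With manual JSON mode, Mirascope does not update the prompt, so we
    # explicitly ask for JSON ourselves (OpenAI also requires the word
    # "JSON" to appear in the messages when using `json_object`).
    return f"Provide the author and genre of {book_title} as JSON"


response = get_book_info("The Name of the Wind")
print(json.loads(response.content))
```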
</Warning>
## Basic Usage and Syntax
Let's take a look at a basic example using JSON Mode:
<TabbedSection>
<Tab value="Shorthand">
```python
import json

from mirascope import llm


@llm.call(provider="$PROVIDER", model="$MODEL", json_mode=True) # [!code highlight]
def get_book_info(book_title: str) -> str: # [!code highlight]
    return f"Provide the author and genre of {book_title}"


response = get_book_info("The Name of the Wind")
print(json.loads(response.content))
# Output: {'author': 'Patrick Rothfuss', 'genre': 'Fantasy'} # [!code highlight]
```
</Tab>
<Tab value="Template">
```python
import json

from mirascope import llm, prompt_template


@llm.call(provider="$PROVIDER", model="$MODEL", json_mode=True) # [!code highlight]
@prompt_template("Provide the author and genre of {book_title}") # [!code highlight]
def get_book_info(book_title: str): ...


response = get_book_info("The Name of the Wind")
print(json.loads(response.content))
# Output: {'author': 'Patrick Rothfuss', 'genre': 'Fantasy'} # [!code highlight]
```
</Tab>
</TabbedSection>
In this example, we:
1. Enable JSON Mode with `json_mode=True` in the `call` decorator
2. Instruct the model what fields to include in our prompt
3. Load the JSON string response into a Python object and print it
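Since `json.loads` returns a plain Python `dict`, you can access individual fields directly once parsing succeeds (continuing the example above; the keys assume the model returned exactly the fields we requested):
```python
book = json.loads(response.content)
print(book["author"])  # Patrick Rothfuss
print(book["genre"])   # Fantasy
```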
## Error Handling and Validation
While JSON Mode can significantly improve the structure of model outputs, it is not infallible. LLMs can still produce invalid JSON or deviate from the expected structure, so it's crucial to implement proper error handling and validation in your code:
<TabbedSection>
<Tab value="Shorthand">
```python
import json

from mirascope import llm


@llm.call(provider="$PROVIDER", model="$MODEL", json_mode=True)
def get_book_info(book_title: str) -> str:
    return f"Provide the author and genre of {book_title}"


try: # [!code highlight]
    response = get_book_info("The Name of the Wind")
    print(json.loads(response.content))
except json.JSONDecodeError: # [!code highlight]
    print("The model produced invalid JSON")
```
</Tab>
<Tab value="Template">
```python
import json

from mirascope import llm, prompt_template


@llm.call(provider="$PROVIDER", model="$MODEL", json_mode=True)
@prompt_template("Provide the author and genre of {book_title}")
def get_book_info(book_title: str): ...


try: # [!code highlight]
    response = get_book_info("The Name of the Wind")
    print(json.loads(response.content))
except json.JSONDecodeError: # [!code highlight]
    print("The model produced invalid JSON")
```
</Tab>
</TabbedSection>
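A simple mitigation is to retry the call when parsing fails. The sketch below shows one way to do this; the retry count is arbitrary, and a dedicated retry library such as `tenacity` may serve you better in production:
```python
import json

from mirascope import llm


@llm.call(provider="$PROVIDER", model="$MODEL", json_mode=True)
def get_book_info(book_title: str) -> str:
    return f"Provide the author and genre of {book_title}"


def get_book_info_with_retries(book_title: str, max_attempts: int = 3) -> dict:
    # Re-invoke the call until the model produces parseable JSON or we give up.
    for attempt in range(1, max_attempts + 1):
        response = get_book_info(book_title)
        try:
            return json.loads(response.content)
        except json.JSONDecodeError:
            print(f"Attempt {attempt} produced invalid JSON, retrying...")
    raise ValueError(f"No valid JSON after {max_attempts} attempts")


print(get_book_info_with_retries("The Name of the Wind"))
```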
<Warning title="Beyond JSON Validation">
While this example catches errors for invalid JSON, there's always a chance that the LLM returns valid JSON that doesn't conform to your expected schema (such as missing fields or incorrect types).
For more robust validation, we recommend using [Response Models](/docs/v1/learn/response_models) for easier structuring and validation of LLM outputs.
</Warning>
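If you want schema validation before adopting Response Models, you can validate the parsed JSON yourself with Pydantic. This sketch uses a hypothetical `BookInfo` model and the `response` from the example above; Response Models handle this validation for you:
```python
from pydantic import BaseModel, ValidationError


class BookInfo(BaseModel):  # hypothetical schema for this example
    author: str
    genre: str


try:
    # `model_validate_json` parses and validates in one step, raising a
    # `ValidationError` for invalid JSON and for schema mismatches alike.
    book = BookInfo.model_validate_json(response.content)
    print(book.author, book.genre)
except ValidationError:
    print("The response did not match the expected schema")
```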
## Next Steps
By leveraging JSON Mode, you can build more robust, data-driven applications that efficiently process and utilize LLM outputs, making it easy to integrate structured results with databases, APIs, or user interfaces.
Next, we recommend reading the section on [Output Parsers](/docs/v1/learn/output_parsers) to see how to engineer prompts with specific output structures and parse the outputs automatically on every call.