Thinking & Reasoning
Recent LLMs support "extended thinking", or "reasoning": before generating a final output, the model produces internal reasoning about the task it has been given.
We're currently working on Mirascope v2, which will support thinking in a generic, cross-provider way. As of Mirascope v1, however, ad-hoc thinking support is available for the following providers:
| Provider | Can Use Thinking | Can View Thinking Summaries |
| --- | --- | --- |
| Anthropic | ✓ | ✓ |
| Google Gemini | ✓ | ✓ |
Provider Examples
Anthropic thinking is supported for Claude Opus 4, Claude Sonnet 4, and Claude Sonnet 3.7. It may be invoked using the `@anthropic.call` provider-specific decorator, as in the example below. For more, read the Anthropic reasoning docs.
```python
from mirascope.core import anthropic, prompt_template


@anthropic.call(
    model="claude-3-7-sonnet-latest",
    call_params=anthropic.AnthropicCallParams(
        # max_tokens must exceed the thinking budget so there is room
        # left over for the final response.
        max_tokens=2048,
        # Enable extended thinking and cap the tokens spent on reasoning.
        thinking={"type": "enabled", "budget_tokens": 1024},
    ),
)
@prompt_template(
    """
    Suppose a rocket is launched from a surface, pointing straight up.
    For the first ten seconds, the rocket engine is providing upwards thrust of
    50m/s^2, after which it shuts off.
    There is constant downwards acceleration of 10m/s^2 due to gravity.
    What is the highest height it will achieve?
    Your final response should be ONLY a number in meters, with no additional text.
    """
)
def answer(): ...


response = answer()
print("---- Thinking ----")
print(response.thinking)  # the model's thinking summary
print("---- Response ----")
print(response.content)  # the final answer only
```
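For reference, the kinematics in the prompt work out to 10,000 m (a net upward acceleration of 40 m/s² for ten seconds gives 400 m/s at 2,000 m, and the coast phase adds 400²/(2·10) = 8,000 m), so you can sanity-check both the thinking summary and the final response.

Google Gemini thinking is available on the Gemini 2.5 model family and can be enabled in much the same way through the provider-specific `@google.call` decorator. The sketch below is illustrative rather than canonical: it assumes the `google-genai` SDK's `ThinkingConfig` can be passed through `google.GoogleCallParams(config=...)` and that the thought summary is surfaced on `response.thinking`, mirroring the Anthropic example above.

```python
from google.genai.types import GenerateContentConfig, ThinkingConfig
from mirascope.core import google, prompt_template


@google.call(
    model="gemini-2.5-flash",
    # Assumption: GoogleCallParams forwards a google-genai GenerateContentConfig,
    # including its ThinkingConfig, to the underlying client.
    call_params=google.GoogleCallParams(
        config=GenerateContentConfig(
            thinking_config=ThinkingConfig(
                include_thoughts=True,  # return thought summaries with the response
                thinking_budget=1024,  # cap the tokens spent on reasoning
            )
        )
    ),
)
@prompt_template(
    """
    Suppose a rocket is launched from a surface, pointing straight up.
    For the first ten seconds, the rocket engine is providing upwards thrust of
    50m/s^2, after which it shuts off.
    There is constant downwards acceleration of 10m/s^2 due to gravity.
    What is the highest height it will achieve?
    Your final response should be ONLY a number in meters, with no additional text.
    """
)
def answer(): ...


response = answer()
print("---- Thinking ----")
print(response.thinking)  # assumed to hold the Gemini thought summary
print("---- Response ----")
print(response.content)
```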