# Mixture of Reasoning: Enhancing LLM Performance with Multiple Techniques

Mixture of Reasoning is a prompt engineering technique where you set up multiple calls, each utilizing a different prompt engineering technique. This approach works best when you need to handle a wide variety of queries, or when you have several techniques that have proven successful for a given type of prompt.

<Info title="Mirascope Concepts Used" collapsible={true} defaultOpen={false}>
  <ul>
    <li><a href="/docs/v1/learn/prompts/">Prompts</a></li>
    <li><a href="/docs/v1/learn/calls/">Calls</a></li>
    <li><a href="/docs/v1/learn/response_models/">Response Models</a></li>
  </ul>
</Info>

<Note title="Background">
In the <a href="https://users.umiacs.umd.edu/~jbg/docs/2023_findings_more.pdf">original paper</a>, a trained classifier is used to determine the best answer. Since we do not have access to that classifier, we will use an LLM to evaluate the best answer instead. To get a clean separation between the reasoning and the actual output, we'll use <code>response_model</code> in our final evaluation call.
</Note>

## Implementation

Let's implement the Mixture of Reasoning technique using Mirascope:

```python
from mirascope.core import openai, prompt_template
from pydantic import BaseModel, Field


@openai.call(model="gpt-4o-mini")
@prompt_template(
    """
    Answer this question, thinking step by step.
    {query}
    """
)
def cot_call(query: str): ...


@openai.call(model="gpt-4o-mini")
@prompt_template(
    """
    {query}
    It's very important to my career.
    """
)
def emotion_prompting_call(query: str): ...


@openai.call(model="gpt-4o-mini")
@prompt_template(
    """
    {query}
    Rephrase and expand the question, and respond.
    """
)
def rar_call(query: str): ...


class BestResponse(BaseModel):
    best_response: str = Field(
        ..., description="The best response of the options given, verbatim"
    )
    reasoning: str = Field(
        ...,
        description="""A short description of why this is the best response
        to the query, along with reasons why the other answers were worse.""",
    )


@openai.call(model="gpt-4o-mini", response_model=BestResponse)
@prompt_template(
    """
    Here is a query: {query}

    Evaluate the following responses from LLMs and decide which one is the best
    based on correctness, fulfillment of the query, and clarity:

    Response 1: {cot_response}
    Response 2: {emotion_prompting_response}
    Response 3: {rar_response}
    """
)
def mixture_of_reasoning(query: str) -> openai.OpenAIDynamicConfig:
    cot_response = cot_call(query=query)
    emotion_prompting_response = emotion_prompting_call(query=query)
    rar_response = rar_call(query=query)
    return {
        "computed_fields": {
            "cot_response": cot_response,
            "emotion_prompting_response": emotion_prompting_response,
            "rar_response": rar_response,
        }
    }


prompt = "What are the side lengths of a rectangle with area 8 and perimeter 12?"

best_response = mixture_of_reasoning(prompt)
print(best_response.best_response)
print(best_response.reasoning)
```

This implementation consists of several key components:

1. Three different prompt engineering techniques:
   - `cot_call`: [Chain of Thought reasoning](/docs/v1/guides/prompt-engineering/text-based/chain-of-thought)
   - `emotion_prompting_call`: [Emotion prompting](/docs/v1/guides/prompt-engineering/text-based/emotion-prompting)
   - `rar_call`: [Rephrase and Respond](/docs/v1/guides/prompt-engineering/text-based/rephrase-and-respond)
2. A `BestResponse` model to structure the output of the final evaluation.
3. The `mixture_of_reasoning` function, which:
   - Calls each of the three prompt engineering techniques
   - Uses dynamic configuration to pass the responses to the final evaluation
   - Returns the best response and reasoning using the `BestResponse` model
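Because each technique is just a decorated call, extending the mixture is a matter of adding one more call and wiring it into the evaluation. The sketch below illustrates this with a role-prompting variant; the `role_prompting_call` name and prompt wording are our own illustration, not part of the original recipe. To use it, you would add a `role_prompting_response` computed field in `mixture_of_reasoning` and a `Response 4` line to the evaluation prompt.

```python
# Hypothetical fourth technique (role prompting); illustrative only.
# Follows the same decorator pattern as the three calls above.
@openai.call(model="gpt-4o-mini")
@prompt_template(
    """
    You are a meticulous expert who double-checks every step of your work.
    {query}
    """
)
def role_prompting_call(query: str): ...
```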
## Benefits and Considerations

The Mixture of Reasoning implementation offers several advantages:

1. Improved ability to handle a wide variety of queries by leveraging multiple prompt engineering techniques.
2. Enhanced performance by selecting the best response from multiple approaches.
3. Flexibility to add or modify prompt engineering techniques as needed.

When implementing this technique, consider:

- Carefully selecting the prompt engineering techniques to include based on your specific use case.
- Balancing the number of techniques with computational cost and time constraints.
- Fine-tuning the evaluation criteria in the final step to best suit your needs.

<Info title="Additional Real-World Applications">
  <ul>
    <li><b>Complex Problem Solving</b>: Use Mixture of Reasoning for multi-step problems in fields like mathematics, physics, or engineering.</li>
    <li><b>Customer Support</b>: Implement different response strategies to handle various types of customer queries effectively.</li>
    <li><b>Content Generation</b>: Generate diverse content ideas by applying multiple creative thinking techniques.</li>
    <li><b>Decision Making</b>: Analyze complex scenarios from different perspectives to make more informed decisions.</li>
    <li><b>Educational Tutoring</b>: Provide explanations using various teaching methods to cater to different learning styles.</li>
  </ul>
</Info>

When adapting this recipe to your specific use case, consider:

- Experimenting with different combinations of prompt engineering techniques.
- Implementing a feedback loop to continuously improve the selection of the best response.
- Tailoring the evaluation criteria to your specific domain or task requirements.
- Combining Mixture of Reasoning with other techniques like Self-Consistency for even more robust reasoning capabilities (see the sketch below).

By leveraging Mirascope's `call` decorator, response models, and dynamic configuration, you can easily implement and customize the Mixture of Reasoning technique to enhance your LLM's performance across a wide range of applications.
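To make the Self-Consistency combination mentioned above concrete, here is a minimal sketch that reuses the `mixture_of_reasoning` function and `prompt` defined earlier. It assumes a non-zero sampling temperature so repeated runs can differ, and it majority-votes on the verbatim winning responses, which is a simplification: real Self-Consistency typically votes on an extracted final answer rather than full response text.

```python
from collections import Counter


def mixture_with_self_consistency(query: str, n_runs: int = 3) -> str:
    """Illustrative sketch: run the mixture n_runs times and majority-vote.

    Voting on verbatim response strings is a simplification; in practice you
    would usually extract and vote on the final answer instead.
    """
    winners = [mixture_of_reasoning(query).best_response for _ in range(n_runs)]
    return Counter(winners).most_common(1)[0][0]


print(mixture_with_self_consistency(prompt))
```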
