{ "cells": [ { "cell_type": "markdown", "id": "b8146326125747e5", "metadata": {}, "source": [ "# Self-Consistency: Enhancing LLM Reasoning with Multiple Outputs\n", "\n", "This recipe demonstrates how to implement the Self-Consistency technique using Large Language Models (LLMs) with Mirascope. Self-Consistency is a prompt engineering method that enhances an LLM's reasoning capabilities by generating multiple Chain of Thought (CoT) responses and selecting the most common answer. We'll explore both a basic implementation and an enhanced version with automated answer extraction.\n", "\n", "
## Mirascope Concepts Used
\n", "## Background
\n", "\n", "Self-consistency is a prompt engineering technique in which multiple calls are made with Chain of Thought prompting, producing a range of answers, and the most common answer is selected. Self-consistency has been shown to be highly effective on mathematical and symbolic reasoning tasks, and it can also help in niche scenarios where CoT prompting actually reduces the quality of LLM output.\n", "
\n", "\n", "In the original paper, users manually pick the most frequent response, but we have integrated response models to automate that process once all responses have been generated.\n", "
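The sample-then-vote loop described above can be sketched in plain Python. This is a minimal sketch, not Mirascope's API: `fake_ask` is a hypothetical stub standing in for a CoT-prompted LLM call, and a real implementation would make independent model calls sampled at temperature > 0 so the reasoning paths differ.

```python
from collections import Counter


def majority_vote(answers):
    # Tally the final answers and return the most common one.
    counts = Counter(answers)
    return counts.most_common(1)[0][0]


def self_consistency(ask, question, n=5):
    # Sample n independent Chain-of-Thought responses and vote.
    answers = [ask(question) for _ in range(n)]
    return majority_vote(answers)


# Hypothetical stub standing in for a CoT-prompted LLM call;
# each string is the final answer extracted from one sampled response.
_samples = iter(['18', '26', '18', '18', '9'])


def fake_ask(question):
    return next(_samples)


print(self_consistency(fake_ask, 'How old is the sister?'))  # prints 18
```

In a real recipe, `ask` would issue one LLM call per sample and extract the final answer from each response (the step this recipe automates with a response model) before voting.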
\n", "## Additional Real-World Applications
\n", "