{ "cells": [ { "cell_type": "markdown", "id": "4c2cb6bf3e30bf22", "metadata": {}, "source": [ "# Chain of Verification: Enhancing LLM Accuracy through Self-Verification\n", "\n", "This recipe demonstrates how to implement the Chain of Verification technique using Large Language Models (LLMs) with Mirascope. Chain of Verification is a prompt engineering method that enhances an LLM's accuracy by generating and answering verification questions based on its initial response.\n", "\n", "## Background\n", "\n", "Chain of Verification is a prompt engineering technique in which one takes a prompt and its initial LLM response, then generates a checklist of questions that can be used to verify the initial answer. Each of these questions is then answered individually with a separate LLM call, and the answers to the verification questions are used to edit the final response. LLMs are often more truthful when asked to verify a particular fact than when asked to produce that fact themselves, so this technique is effective at reducing hallucinations.\n",
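"\n", "To make the pipeline concrete, here is a minimal sketch of its four stages in Mirascope. The prompts, model choice, and helper names below are illustrative assumptions rather than the recipe's final implementation:\n", "\n", "```python\n", "from mirascope.core import openai, prompt_template\n", "from pydantic import BaseModel\n", "\n", "\n", "class VerificationQuestions(BaseModel):\n", "    questions: list[str]\n", "\n", "\n", "# Stage 1: draft an initial answer to the query.\n", "@openai.call(\"gpt-4o-mini\")\n", "@prompt_template(\"Answer this question: {query}\")\n", "def answer(query: str): ...\n", "\n", "\n", "# Stage 2: generate a checklist of verification questions as a structured response.\n", "@openai.call(\"gpt-4o-mini\", response_model=VerificationQuestions)\n", "@prompt_template(\n", "    \"\"\"\n", "    Query: {query}\n", "    Initial answer: {initial_answer}\n", "    Generate questions that verify each fact stated in the initial answer.\n", "    \"\"\"\n", ")\n", "def get_verification_questions(query: str, initial_answer: str): ...\n", "\n", "\n", "# Stage 3: answer each verification question with its own standalone call.\n", "@openai.call(\"gpt-4o-mini\")\n", "@prompt_template(\"Answer concisely: {question}\")\n", "def answer_verification_question(question: str): ...\n", "\n", "\n", "# Stage 4: revise the initial answer using the verification results.\n", "@openai.call(\"gpt-4o-mini\")\n", "@prompt_template(\n", "    \"\"\"\n", "    Query: {query}\n", "    Initial answer: {initial_answer}\n", "    Verification Q&A pairs:\n", "    {verification_qa}\n", "    Rewrite the initial answer, correcting anything the Q&A pairs contradict.\n", "    \"\"\"\n", ")\n", "def revise_answer(query: str, initial_answer: str, verification_qa: str): ...\n", "\n", "\n", "def chain_of_verification(query: str) -> str:\n", "    initial = answer(query).content\n", "    questions = get_verification_questions(query, initial).questions\n", "    qa = \"\\n\".join(\n", "        f\"Q: {q}\\nA: {answer_verification_question(q).content}\" for q in questions\n", "    )\n", "    return revise_answer(query, initial, qa).content\n", "```\n", "\n", "Because each fact is checked in isolation, a hallucination in the draft answer does not bias the call that verifies it.\n", "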