{ "cells": [ { "cell_type": "markdown", "id": "88a48c9dd8ed7eba", "metadata": {}, "source": [ "# Documentation Agent\n", "\n", "In this recipe, we will be building a `DocumentationAgent` that has access to some documentation. We will be using Mirascope documentation in this example, but this should work on all types of documents. This is implemented using `OpenAI`, see [Local Chat with Codebase](../../agents/local_chat_with_codebase) for the Llama3.1 implementation.\n", "\n", "
\n", "

Mirascope Concepts Used

\n", "\n", "
" ] }, { "cell_type": "markdown", "id": "e0204984", "metadata": {}, "source": [ "## Setup\n", "\n", "To set up our environment, first let's install all of the packages we will use:" ] }, { "cell_type": "code", "execution_count": null, "id": "ea239b91", "metadata": {}, "outputs": [], "source": [ "!pip install \"mirascope[openai]\"\n", "# LLamaIndex for embedding and retrieving embeddings from a vectorstore\n", "!pip install llama-index" ] }, { "cell_type": "code", "execution_count": null, "id": "666cc429", "metadata": {}, "outputs": [], "source": [ "import os\n", "\n", "os.environ[\"OPENAI_API_KEY\"] = \"YOUR_API_KEY\"\n", "# Set the appropriate API key for the provider you're using" ] }, { "cell_type": "markdown", "id": "76242e05", "metadata": {}, "source": [ "## Store Embeddings\n", "\n", "The first step is to grab our docs and embed them into a vectorstore. In this recipe, we will be storing our vectorstore locally, but using Pinecone or other cloud vectorstore providers will also work. We adjusted the `chunk_size` and `chunk_overlap` to get the best results for Mirascope docs, but these values may not necessarily be good for other types of documents." ] }, { "cell_type": "code", "execution_count": null, "id": "26e160ae6829b9bf", "metadata": {}, "outputs": [], "source": [ "from llama_index.core import (\n", " SimpleDirectoryReader,\n", " VectorStoreIndex,\n", ")\n", "from llama_index.core.extractors import TitleExtractor\n", "from llama_index.core.ingestion import IngestionPipeline\n", "from llama_index.core.node_parser import SentenceSplitter\n", "from llama_index.core.storage import StorageContext\n", "from llama_index.core.vector_stores import SimpleVectorStore\n", "from llama_index.embeddings.openai import OpenAIEmbedding\n", "\n", "documents = SimpleDirectoryReader(\"../../../docs/learn\").load_data()\n", "vector_store = SimpleVectorStore()\n", "storage_context = StorageContext.from_defaults(vector_store=vector_store)\n", "\n", "pipeline = IngestionPipeline(\n", " transformations=[\n", " SentenceSplitter(chunk_size=512, chunk_overlap=128),\n", " TitleExtractor(),\n", " OpenAIEmbedding(),\n", " ],\n", " vector_store=vector_store,\n", ")\n", "\n", "nodes = pipeline.run(documents=documents)\n", "index = VectorStoreIndex(\n", " nodes,\n", " storage_context=storage_context,\n", ")\n", "\n", "index.storage_context.persist()" ] }, { "cell_type": "markdown", "id": "788a3ae490bf7bba", "metadata": {}, "source": [ "\n", "## Load Embeddings\n", "\n", "After we saved our embeddings, we can use the below code to retrieve it and load in memory:\n", "\n" ] }, { "cell_type": "code", "execution_count": null, "id": "e1c87a3660b910b0", "metadata": {}, "outputs": [], "source": [ "from llama_index.core import (\n", " load_index_from_storage,\n", ")\n", "\n", "storage_context = StorageContext.from_defaults(persist_dir=\"storage\")\n", "loaded_index = load_index_from_storage(storage_context)\n", "query_engine = loaded_index.as_query_engine()" ] }, { "cell_type": "markdown", "id": "c5ae9163ffdf2459", "metadata": {}, "source": [ "\n", "## LLM Reranker\n", "\n", "Vectorstore retrieval relies on semantic similarity search but lacks contextual understanding. 
By employing an LLM to rerank results based on relevance, we can achieve more accurate and robust answers.\n" ] }, { "cell_type": "code", "execution_count": null, "id": "1e878ae24ce7bafb", "metadata": {}, "outputs": [], "source": [ "from mirascope.core import openai, prompt_template\n", "from pydantic import BaseModel, Field\n", "\n", "\n", "class Relevance(BaseModel):\n", " id: int = Field(..., description=\"The document ID\")\n", " score: int = Field(..., description=\"The relevance score (1-10)\")\n", " document: str = Field(..., description=\"The document text\")\n", " reason: str = Field(..., description=\"A brief explanation for the assigned score\")\n", "\n", "\n", "@openai.call(\n", " \"gpt-4o-mini\",\n", " response_model=list[Relevance],\n", " json_mode=True,\n", ")\n", "@prompt_template(\n", " \"\"\"\n", " SYSTEM:\n", " Document Relevance Assessment\n", " Given a list of documents and a question, determine the relevance of each document to answering the question.\n", "\n", " Input\n", " - A question\n", " - A list of documents, each with an ID and content summary\n", "\n", " Task\n", " - Analyze each document for its relevance to the question.\n", " - Assign a relevance score from 1-10 for each document.\n", " - Provide a reason for each score.\n", "\n", " Scoring Guidelines\n", " - Consider both direct and indirect relevance to the question.\n", " - Prioritize positive, affirmative information over negative statements.\n", " - Assess the informativeness of the content, not just keyword matches.\n", " - Consider the potential for a document to contribute to a complete answer.\n", "\n", " Important Notes\n", " - Exclude documents with no relevance less than 5 to the question.\n", " - Be cautious with negative statements - they may be relevant but are often less informative than positive ones.\n", " - Consider how multiple documents might work together to answer the question.\n", " - Use the document title and content summary to make your assessment.\n", "\n", " Documents:\n", " {documents}\n", "\n", " USER: \n", " {query}\n", " \"\"\"\n", ")\n", "def llm_query_rerank(documents: list[dict], query: str): ..." ] }, { "cell_type": "markdown", "id": "24ee51ddfa8efbe8", "metadata": {}, "source": [ "\n", "We get back a list of `Relevance`s which we will be using for our `get_documents` function.\n", "\n", "## Getting our documents\n", "\n", "With our LLM Reranker configured, we can now retrieve documents for our query. The process involves three steps:\n", "\n", "1. Fetch the top 10 (`top_k`) semantic search results from our vectorstore.\n", "2. Process these results through our LLM Reranker in batches of 5 (`choice_batch_size`).\n", "3. 
Return the top 2 (`top_n`) most relevant documents.\n" ] }, { "cell_type": "code", "execution_count": null, "id": "90ca5de1ae60175e", "metadata": {}, "outputs": [], "source": [ "from typing import cast\n", "\n", "from llama_index.core import QueryBundle\n", "from llama_index.core.indices.vector_store import VectorIndexRetriever\n", "\n", "\n", "def get_documents(query: str) -> list[str]:\n", " \"\"\"The get_documents tool that retrieves Mirascope documentation based on the\n", " relevance of the query\"\"\"\n", " query_bundle = QueryBundle(query)\n", " retriever = VectorIndexRetriever(\n", " index=cast(VectorStoreIndex, loaded_index),\n", " similarity_top_k=10,\n", " )\n", " retrieved_nodes = retriever.retrieve(query_bundle)\n", " choice_batch_size = 5\n", " top_n = 2\n", " results: list[Relevance] = []\n", " for idx in range(0, len(retrieved_nodes), choice_batch_size):\n", " nodes_batch = [\n", " {\n", " \"id\": idx + id,\n", " \"text\": node.node.get_text(), # pyright: ignore[reportAttributeAccessIssue]\n", " \"document_title\": node.metadata[\"document_title\"],\n", " \"semantic_score\": node.score,\n", " }\n", " for id, node in enumerate(retrieved_nodes[idx : idx + choice_batch_size])\n", " ]\n", " results += llm_query_rerank(nodes_batch, query)\n", " results = sorted(results, key=lambda x: x.score or 0, reverse=True)[:top_n]\n", "\n", " return [result.document for result in results]" ] }, { "cell_type": "markdown", "id": "b65c29fb7b625cfc", "metadata": {}, "source": [ "\n", "Now that we can retrieve relevant documents for our user query, we can create our Agent.\n", "\n", "## Creating `DocumentationAgent`\n", "\n", "Our `get_documents` method retrieves relevant documents, which we pass to the `context` for our call. The LLM then categorizes the question as either `code` or `general`. Based on this classification:\n", "\n", "- For code questions, the LLM generates an executable code snippet.\n", "- For general questions, the LLM summarizes the content of the retrieved documents.\n" ] }, { "cell_type": "code", "execution_count": null, "id": "58c34447d14044cd", "metadata": {}, "outputs": [], "source": [ "from typing import Literal\n", "\n", "\n", "class Response(BaseModel):\n", " classification: Literal[\"code\", \"general\"] = Field(\n", " ..., description=\"The classification of the question\"\n", " )\n", " content: str = Field(..., description=\"The response content\")\n", "\n", "\n", "class DocumentationAgent(BaseModel):\n", " @openai.call(\"gpt-4o-mini\", response_model=Response, json_mode=True)\n", " @prompt_template(\n", " \"\"\"\n", " SYSTEM:\n", " You are an AI Assistant that is an expert at answering questions about Mirascope.\n", " Here is the relevant documentation to answer the question.\n", "\n", " First classify the question into one of two types:\n", " - General Information: Questions about the system or its components.\n", " - Code Examples: Questions that require code snippets or examples.\n", "\n", " For General Information, provide a summary of the relevant documents if the question is too broad ask for more details. 
\n", " If the context does not answer the question, say that the information is not available or you could not find it.\n", "\n", " For Code Examples, output ONLY code without any markdown, with comments if necessary.\n", " If the context does not answer the question, say that the information is not available.\n", "\n", " Examples:\n", " Question: \"What is Mirascope?\"\n", " Answer:\n", " A toolkit for building AI-powered applications with Large Language Models (LLMs).\n", " Explanation: This is a General Information question, so a summary is provided.\n", "\n", " Question: \"How do I make a basic OpenAI call using Mirascope?\"\n", " Answer:\n", " from mirascope.core import openai, prompt_template\n", "\n", "\n", " @openai.call(\"gpt-4o-mini\")\n", " def recommend_book(genre: str) -> str:\n", " return f'Recommend a {genre} book'\n", "\n", " response = recommend_book(\"fantasy\")\n", " print(response.content)\n", " Explanation: This is a Code Examples question, so only a code snippet is provided.\n", "\n", " Context:\n", " {context:list}\n", "\n", " USER:\n", " {question}\n", " \"\"\"\n", " )\n", " def _call(self, question: str) -> openai.OpenAIDynamicConfig:\n", " documents = get_documents(question)\n", " return {\"computed_fields\": {\"context\": documents}}\n", "\n", " def _step(self, question: str):\n", " answer = self._call(question)\n", " print(\"(Assistant):\", answer.content)\n", "\n", " def run(self):\n", " while True:\n", " question = input(\"(User): \")\n", " if question == \"exit\":\n", " break\n", " self._step(question)\n", "\n", "\n", "if __name__ == \"__main__\":\n", " DocumentationAgent().run()\n", " # Output:\n", " \"\"\"\n", " (User): How do I make an LLM call using Mirascope?\n", " (Assistant): from mirascope.core import openai\n", " \n", " @openai.call('gpt-4o-mini')\n", " def recommend_book(genre: str) -> str:\n", " return f'Recommend a {genre} book'\n", " \n", " response = recommend_book('fantasy')\n", " print(response.content)\n", " \"\"\"" ] }, { "cell_type": "markdown", "id": "a1ab1312d36e7ce7", "metadata": {}, "source": [ "\n", "
\n", "

Additional Real-World Applications

\n", "\n", "
\n", "\n", "\n", "When adapting this recipe, consider:\n", "\n", "- Experiment with different model providers and version for quality.\n", "- Add evaluations to the agent, and feed the errors back to the LLM for refinement.\n", "- Add history to the Agent so that the LLM can generate context-aware queries to retrieve more semantically similar embeddings.\n" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 2 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython2", "version": "2.7.6" } }, "nbformat": 4, "nbformat_minor": 5 }