{ "cells": [ { "cell_type": "markdown", "id": "f2ca422f1caa894d", "metadata": {}, "source": [ "# Local Chat with Codebase\n", "\n", "In this recipe, we will be using all Open Source Software to build a local ChatBot that has access to some documentation. We will be using Mirascope documentation in this example, but this should work on all types of documents. Also note that we will be using a smaller Llama 3.1 8B so the results will not be as impressive as larger models. Later, we will take a look at how OpenAI's GPT compares with Llama.\n", "\n", "
\n", "

Mirascope Concepts Used

\n", "\n", "
" ] }, { "cell_type": "markdown", "id": "391703b0", "metadata": {}, "source": [ "## Setup\n", "\n", "To set up our environment, first let's install all of the packages we will use:" ] }, { "cell_type": "code", "execution_count": null, "id": "79ffb0afbd78a71a", "metadata": {}, "outputs": [], "source": [ "!pip install \"mirascope[openai]\"\n", "!pip install llama-index llama-index-llms-ollama llama-index-embeddings-huggingface huggingface" ] }, { "cell_type": "markdown", "id": "a804032d4d19c961", "metadata": {}, "source": [ "## Configuration\n", "\n", "For this setup, we are using [Ollama](https://github.com/ollama/ollama), but vLLM would also work." ] }, { "cell_type": "code", "execution_count": null, "id": "592e677f9b71afca", "metadata": {}, "outputs": [], "source": [ "from llama_index.core import (\n", " Settings,\n", " SimpleDirectoryReader,\n", " VectorStoreIndex,\n", ")\n", "from llama_index.legacy.embeddings import HuggingFaceEmbedding\n", "from llama_index.legacy.llms import Ollama\n", "\n", "Settings.llm = Ollama(model=\"llama3.1\")\n", "Settings.embed_model = HuggingFaceEmbedding(model_name=\"BAAI/bge-small-en-v1.5\")" ] }, { "cell_type": "markdown", "id": "54a6561ff39ca350", "metadata": {}, "source": [ "\n", "We will be using LlamaIndex for RAG, and setting up the proper models we will be using for Re-ranking and the Embedding model.\n", "\n", "## Store Embeddings\n", "\n", "The first step is to grab our docs and embed them into a vectorstore. In this recipe, we will be storing our vectorstore locally, but using Pinecone or other cloud vectorstore providers will also work.\n" ] }, { "cell_type": "code", "execution_count": null, "id": "8e2b78f59c166a58", "metadata": {}, "outputs": [], "source": [ "from llama_index.core.storage import StorageContext\n", "from llama_index.core.vector_stores import SimpleVectorStore\n", "\n", "documents = SimpleDirectoryReader(\"PATH/TO/YOUR/DOCS\").load_data()\n", "vector_store = SimpleVectorStore()\n", "storage_context = StorageContext.from_defaults(vector_store=vector_store)\n", "index = VectorStoreIndex.from_documents(documents, storage_context=storage_context)\n", "index.storage_context.persist()" ] }, { "cell_type": "markdown", "id": "81eb217df593f1c0", "metadata": {}, "source": [ "\n", "## Load Embeddings\n", "\n", "After we saved our embeddings, we can use the below code to retrieve it and load in memory:\n", "\n" ] }, { "cell_type": "code", "execution_count": null, "id": "b28e9cdd0178f0a5", "metadata": {}, "outputs": [], "source": [ "from llama_index.core import load_index_from_storage\n", "\n", "storage_context = StorageContext.from_defaults(persist_dir=\"storage\")\n", "loaded_index = load_index_from_storage(storage_context)\n", "query_engine = loaded_index.as_query_engine()" ] }, { "cell_type": "markdown", "id": "a5ae2e44b614cfef", "metadata": {}, "source": [ "\n", "## Code\n", "\n", "We need to update LlamaIndex `default_parse_choice_select_answer_fn` for Llama 3.1. You may need to update the `custom_parse_choice_select_answer_fn` depending on which model you are using. 
"import re\n",
"\n",
"from llama_index.core.base.response.schema import Response\n",
"from llama_index.core.postprocessor import LLMRerank\n",
"from llama_index.core.storage import StorageContext\n",
"from llama_index.core.vector_stores import SimpleVectorStore\n",
"from llama_index.embeddings.huggingface import HuggingFaceEmbedding\n",
"from llama_index.llms.ollama import Ollama\n",
"from mirascope.core import openai, prompt_template\n",
"from openai import OpenAI\n",
"from pydantic import BaseModel\n",
"\n",
"\n",
"def custom_parse_choice_select_answer_fn(\n",
"    answer: str, num_choices: int, raise_error: bool = False\n",
") -> tuple[list[int], list[float]]:\n",
"    \"\"\"Custom parse choice select answer function.\"\"\"\n",
"    answer_lines = answer.split(\"\\n\")\n",
"    answer_nums = []\n",
"    answer_relevances = []\n",
"    for answer_line in answer_lines:\n",
"        line_tokens = answer_line.split(\",\")\n",
"        if len(line_tokens) != 2:\n",
"            if not raise_error:\n",
"                continue\n",
"            else:\n",
"                raise ValueError(\n",
"                    f\"Invalid answer line: {answer_line}. \"\n",
"                    \"Answer line must be of the form: \"\n",
"                    \"answer_num: <int>, answer_relevance: <float>\"\n",
"                )\n",
"        split_tokens = line_tokens[0].split(\":\")\n",
"        if (\n",
"            len(split_tokens) != 2\n",
"            or split_tokens[1] is None\n",
"            or not split_tokens[1].strip().isdigit()\n",
"        ):\n",
"            continue\n",
"        answer_num = int(line_tokens[0].split(\":\")[1].strip())\n",
"        if answer_num > num_choices:\n",
"            continue\n",
"        answer_nums.append(answer_num)\n",
"        # extract just the first digits after the colon.\n",
"        _answer_relevance = re.findall(r\"\\d+\", line_tokens[1].split(\":\")[1].strip())[0]\n",
"        answer_relevances.append(float(_answer_relevance))\n",
"    return answer_nums, answer_relevances\n",
"\n",
"\n",
"def get_documents(query: str) -> str:\n",
"    \"\"\"The get_documents tool that retrieves Mirascope documentation based on the\n",
"    relevance of the query\"\"\"\n",
"    query_engine = loaded_index.as_query_engine(\n",
"        similarity_top_k=10,\n",
"        node_postprocessors=[\n",
"            LLMRerank(\n",
"                choice_batch_size=5,\n",
"                top_n=2,\n",
"                parse_choice_select_answer_fn=custom_parse_choice_select_answer_fn,\n",
"            )\n",
"        ],\n",
"        response_mode=\"tree_summarize\",\n",
"    )\n",
"\n",
"    response = query_engine.query(query)\n",
"    if isinstance(response, Response):\n",
"        return response.response or \"No documents found.\"\n",
"    return \"No documents found.\"\n",
"\n",
"\n",
"class MirascopeBot(BaseModel):\n",
"    @openai.call(\n",
"        model=\"llama3.1\",\n",
"        client=OpenAI(base_url=\"http://localhost:11434/v1\", api_key=\"ollama\"),\n",
"    )\n",
"    @prompt_template(\n",
"        \"\"\"\n",
"        SYSTEM:\n",
"        You are an AI Assistant that is an expert at answering questions about Mirascope.\n",
"        Here is the relevant documentation to answer the question.\n",
"\n",
"        Context:\n",
"        {context}\n",
"\n",
"        USER:\n",
"        {question}\n",
"        \"\"\"\n",
"    )\n",
"    def _step(self, context: str, question: str): ...\n",
"\n",
"    def _get_response(self, question: str):\n",
"        context = get_documents(question)\n",
"        answer = self._step(context, question)\n",
"        print(\"(Assistant):\", answer.content)\n",
"        return\n",
"\n", " def run(self):\n", " while True:\n", " question = input(\"(User): \")\n", " if question == \"exit\":\n", " break\n", " self._get_response(question)\n", "\n", "\n", "MirascopeBot().run()\n", "# Output:\n", "\"\"\"\n", "(User): How do I make an LLM call using Mirascope?\n", "(Assistant): To make an LLM (Large Language Model) call using Mirascope, you can use the `call` decorator provided by Mirascope.\n", "\n", "Here are the basic steps:\n", "\n", "1. Import the `call` decorator from Mirascope.\n", "2. Define a function that takes any number of arguments and keyword arguments. This will be the function that makes the LLM call.\n", "3. Prepend this function definition with the `@call` decorator, specifying the name of the model you want to use (e.g., \"gpt-4o\").\n", "4. Optionally, pass additional keyword arguments to customize the behavior of the LLM call.\n", "\n", "For example:\n", "```python\n", "from mirascope import call\n", "\n", "@click('gpt-4o')\n", "def greet(name: str) -> dict:\n", " return {'greeting': f\"Hello, {name}!\"}\n", "```\n", "In this example, `greet` is the function that makes an LLM call to a GPT-4o model. The `@call('gpt-4o')` decorator turns this function into an LLM call.\n", "\"\"\"" ] }, { "cell_type": "markdown", "id": "40deed51ef8bdf6d", "metadata": {}, "source": [ "
\n", "

Check out OpenAI Implementation

\n", "

\n", "While we demonstrated an open source version of chatting with our codebase, there are several improvements we can make to get better results. Refer to Documentation Agent Cookbook for a detailed walkthrough on the improvements made.\n", "

\n", "
\n", "\n", "
\n", "

Additional Real-World Applications

\n", "
    \n", "
  • Improved Chat Application: Maintain the most current documentation by storing it in a vector database or using a tool to retrieve up-to-date information in your chat application
  • \n", "
  • Code Autocomplete: Integrate the vector database with the LLM to generate accurate, context-aware code suggestions.
  • \n", "
  • Interactive Internal Knowledge Base: Index company handbooks and internal documentation to enable instant, AI-powered Q&A access.
  • \n", "
\n", "
\n", "\n", "\n", "When adapting this recipe, consider:\n", "\n", "- Experiment with different model providers and version for quality.\n", "- Use a different Reranking prompt as that impacts the quality of retrieval\n", "- Implement a feedback loop so the LLM hallucinates less frequently.\n" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 2 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython2", "version": "2.7.6" } }, "nbformat": 4, "nbformat_minor": 5 }