{
"cells": [
{
"metadata": {},
"cell_type": "markdown",
"source": [
"# System 2 Attention (S2A): Enhancing LLM Focus with Query Filtering\n",
"\n",
"This recipe demonstrates how to implement the System 2 Attention (S2A) technique with Large Language Models (LLMs) using Mirascope. S2A is a prompt engineering method that enhances an LLM's ability to focus on relevant information by filtering irrelevant context out of the initial query.\n",
"\n",
"## Mirascope Concepts Used\n",
"\n",
"## Background\n",
"\n",
"System 2 Attention (S2A) is a prompt engineering technique in which the query is first filtered to remove all irrelevant information. This helps the LLM focus on the most pertinent information, potentially improving the accuracy and relevance of its responses, especially for queries containing extraneous or potentially biasing details.\n",
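"\n",
"The two-stage S2A flow can be sketched in plain Python (a minimal sketch with a stubbed model call standing in for a real LLM; the names `stub_llm` and `s2a_answer` are illustrative, not Mirascope APIs):\n",
"\n",
"```python\n",
"def stub_llm(prompt: str) -> str:\n",
"    # Stand-in for a real LLM call; returns canned output for the demo.\n",
"    if prompt.startswith('Rewrite'):\n",
"        return 'What is the capital of France?'\n",
"    return 'Paris'\n",
"\n",
"\n",
"def s2a_answer(query: str, llm=stub_llm) -> str:\n",
"    # Stage 1: regenerate the query with irrelevant or biasing context removed.\n",
"    filtered = llm('Rewrite the text, keeping only what is needed to answer the question: ' + query)\n",
"    # Stage 2: answer the filtered query instead of the original one.\n",
"    return llm('Answer the question: ' + filtered)\n",
"\n",
"\n",
"biased = 'I think the capital of France is Lyon. What is the capital of France?'\n",
"print(s2a_answer(biased))  # Paris\n",
"```\n",
"\n",
"The key design point is that the model never sees the biased framing in the answering step; it only sees its own filtered restatement of the question.\n",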
"\n",
"## Additional Real-World Applications\n",
"\n",
"