{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "e64fd8485eae1ce8",
   "metadata": {},
   "source": [
    "# Text Summarization\n",
    "\n",
    "In this recipe, we show some techniques to improve an LLM’s ability to summarize a long text, from simple prompts (e.g. `\"Summarize this text: {text}...\"`) to more complex prompting and chaining techniques. We will use OpenAI’s GPT-4o-mini model (128k input token limit), but you can use any model you’d like to implement these summarization techniques, as long as it has a large context window.\n",
    "\n",
    "## Mirascope Concepts Used\n",
    "\n",
    "## Background\n",
    "\n",
    "Large Language Models (LLMs) have revolutionized text summarization by enabling more coherent and contextually aware abstractive summaries. Unlike earlier models that primarily extracted or rearranged existing sentences, LLMs can generate novel text that captures the essence of longer documents while maintaining readability and factual accuracy.\n",
    "\n",
    "## Additional Real-World Applications\n",
    "\n",
    "
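A common prerequisite for the chaining techniques above is splitting a document that exceeds the model's input limit into chunks that each fit the context window, summarizing each, and then combining the results. The sketch below shows only the chunking step; it is a minimal illustration, not Mirascope code — `chunk_text` and `CHARS_PER_TOKEN` are hypothetical names, and the ~4-characters-per-token ratio is a rough approximation (a real pipeline would count tokens with an actual tokenizer).

```python
# Rough sketch: greedily pack paragraphs into chunks that each fit a
# token budget. Token counts are approximated as ~4 characters per
# token (an assumption); use a real tokenizer for production code.
CHARS_PER_TOKEN = 4


def chunk_text(text: str, max_tokens: int) -> list[str]:
    """Split text on paragraph boundaries into chunks under max_tokens each.

    A single paragraph longer than the budget is kept whole rather
    than split mid-paragraph (acceptable for a sketch).
    """
    max_chars = max_tokens * CHARS_PER_TOKEN
    chunks: list[str] = []
    current: list[str] = []
    current_len = 0
    for paragraph in text.split("\n\n"):
        # +2 accounts for the "\n\n" separator restored on join.
        added = len(paragraph) + (2 if current else 0)
        if current and current_len + added > max_chars:
            chunks.append("\n\n".join(current))
            current, current_len = [], 0
            added = len(paragraph)
        current.append(paragraph)
        current_len += added
    if current:
        chunks.append("\n\n".join(current))
    return chunks
```

Each chunk can then be summarized independently, and the per-chunk summaries passed to a final summarization call — the simplest form of the chaining approach this recipe builds toward.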