

Prompt Engineering Examples and Techniques

If there’s one certainty when it comes to prompt engineering, it’s this: the more you put into your prompts in terms of clarity, richness of context, and specificity of detail, the better the model outputs you’ll receive.

Prompt engineering is the process of structuring and refining the inputs you send to an LLM in order to get the best outputs. If you build LLM applications, getting the best possible results from the model is essential to providing a good user experience.

This is why adherence to best practices is key when it comes to prompting. But to bridge the gap between best practice in theory and actual practice, we thought it useful to present a number of prompt engineering examples. These examples not only provide useful snippets for your own use cases, but also illustrate how best practices are actually applied.

Understanding LangChain Runnables

A LangChain runnable is a unit of work built on the Runnable protocol, which allows you to create and invoke custom chains. It’s designed to sequence tasks, taking the output of one call and feeding it as input to the next, making it suitable for straightforward, linear tasks where each step builds directly on the previous one.

Runnables simplify the process of building, managing, and modifying complex workflows by providing a standardized way for different components to interact. With a single function call, you can execute a chain of operations — which is useful for scenarios where the same series of steps need to be applied multiple times.
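The runnable idea can be sketched in plain Python. This is a conceptual sketch of the protocol, not LangChain’s actual implementation; the class and function names below are illustrative:

```python
# Conceptual sketch of the runnable protocol: each step exposes invoke(),
# and steps compose so that one step's output feeds the next.
class Runnable:
    def __init__(self, fn):
        self.fn = fn

    def invoke(self, value):
        return self.fn(value)

    def __or__(self, other):
        # Composing with | yields a new Runnable that runs self, then other.
        return Runnable(lambda value: other.invoke(self.invoke(value)))

# Two simple steps chained with a single | expression.
to_upper = Runnable(str.upper)
add_exclaim = Runnable(lambda s: s + "!")
chain = to_upper | add_exclaim

print(chain.invoke("hello"))  # prints "HELLO!"
```

In LangChain itself, this composition style is available through the `|` operator on runnables such as `RunnableLambda`, and the resulting chain is executed with a single `invoke` call.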

Prompt Chaining in AI Development

Prompt chaining is a way to sequence LLM calls (and their prompts), using the output of one call as input to the next, in order to guide an LLM toward more useful answers than a single prompt would produce.

By treating the entire chain of calls and prompts as part of a larger request to arrive at an ultimate response, you’re able to refine and steer the intermediate calls and responses at each step to achieve a better result.

Prompt chaining allows you to manage what may start out as a large, unwieldy prompt, whose implicitly defined subtasks and details can throw off language models and result in unsatisfying responses. This is because LLMs lose focus when asked to process different ideas thrown together. They can misread relationships between different instructions and incompletely execute them.
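To make this concrete, here is a minimal sketch of a two-step chain. `call_llm` is a hypothetical stand-in for a real model API; it simply echoes its prompt so the example runs without a model:

```python
def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real LLM API call; it echoes the
    # prompt so the chaining structure is visible and runnable.
    return f"<response to: {prompt}>"

def summarize_then_critique(document: str) -> str:
    # Step 1: a focused prompt asking only for a summary.
    summary = call_llm(f"Summarize the following text:\n{document}")
    # Step 2: the first call's output becomes input to the next prompt.
    critique = call_llm(f"List any weaknesses in this summary:\n{summary}")
    return critique

result = summarize_then_critique("Quarterly sales rose 12% on strong demand.")
```

Each call handles one clearly scoped subtask, rather than a single large prompt covering both, which is exactly the focus problem described above.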

8 Prompt Engineering Best Practices and Techniques

Eliciting the best answers from Large Language Models (LLMs) begins with giving them clear instructions. That’s really the goal of prompt engineering: to guide the model to respond in kind to your precise and specific inputs. Prompt engineering done right introduces predictability into the model’s outputs and saves you the effort of iterating excessively on your prompts.

In our experience, there are two key aspects to prompting an LLM effectively:

  1. The language employed should be unambiguous and contextually rich. The more an LLM understands exactly what you want, the better it’ll respond.

  2. Beyond the language used, good software developer practices such as version control, ensuring the quality of prompt inputs, writing clean code, and others, help maintain a structured and dependable approach.

LlamaIndex vs LangChain vs Mirascope: An In-Depth Comparison

In the context of building Large Language Model (LLM) applications—and notably Retrieval Augmented Generation (RAG) applications—the consensus seems to be that:

  • LlamaIndex excels in scenarios requiring robust data ingestion and management.
  • LangChain is suitable for chaining LLM calls and for designing autonomous agents.

In truth, the functionalities of both frameworks often overlap. For instance, LangChain offers document loader classes for data ingestion, while LlamaIndex lets you build autonomous agents.

Comparing Prompt Flow vs LangChain vs Mirascope

Currently, LangChain is one of the most popular frameworks among developers of Large Language Model (LLM) applications, and for good reason: its library is rather expansive and covers many use cases.

But teams using LangChain also report that it:

  • Takes a while to catch up with new features of the underlying language models, which means its users must wait too.
  • Is an opinionated framework, and as such, encourages developers to implement solutions in its way.
  • Requires developers to learn its unique abstractions for doing tasks that might be easier to accomplish in native Python or JavaScript. This is in contrast to other frameworks that may offer abstractions, but don’t require that users learn or use them.
  • Sometimes uses a large number of dependencies, even for comparatively simple tasks.

A Guide to Prompt Templates in LangChain

A LangChain prompt template is a class containing elements you typically need for a Large Language Model (LLM) prompt. At a minimum, these are:

  • A natural language string that will serve as the prompt: this can be a simple text string or, for prompts with dynamic content, a format string containing placeholders that represent variables.
  • Formatting instructions (optional) that specify how dynamic content should appear in the prompt, e.g., whether it should be italicized, capitalized, etc.
  • Input parameters (optional) that you pass into the prompt class to provide instructions or context for generating prompts. These parameters influence the content, structure, or formatting of the prompt. Often they’re values for the placeholders in the string, which resolve to produce the final string sent to the LLM as the prompt via an API call.
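The placeholder-resolution step can be sketched in plain Python, independent of LangChain’s own `PromptTemplate` class. The template text here is illustrative:

```python
# A template string with {placeholders}, plus the input parameters
# that resolve them into the final prompt.
TEMPLATE = "Translate the following text to {language}:\n{text}"

def format_prompt(template: str, **params: str) -> str:
    # str.format substitutes each {placeholder} from the keyword arguments.
    return template.format(**params)

prompt = format_prompt(TEMPLATE, language="French", text="Good morning!")
print(prompt)
```

LangChain’s `PromptTemplate.from_template` provides essentially this behavior, inferring the input variables from the placeholders in the string.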

8 of the Best Prompt Engineering Tools in 2025

While anyone can develop LLM applications using just the OpenAI SDK (we used to do exactly that, since we didn’t find the helper functions available at the time to be particularly helpful), prompt engineering tools that simplify LLM interactions and enhance productivity are emerging as key players.

We’ve tried several libraries and have built our own, and in our experience, if you’re developing robust, production-grade LLM applications, you should look for six capabilities in a good prompt engineering tool:

Engineers Should Handle Prompting LLMs (and Prompts Should Live in Your Codebase)

We’ve seen many discussions around Large Language Model (LLM) software development allude to a workflow where prompts live apart from LLM calls and are managed by multiple stakeholders, including non-engineers. In fact, many popular LLM development frameworks and libraries are built in a way that requires prompts to be managed separately from their calls.

We think this is an unnecessarily cumbersome approach that’s not scalable for complex, production-grade LLM software development.

Here’s why: for anyone developing production-grade LLM apps, prompts that include code will necessarily be a part of your engineering workflow. Therefore, separating prompts from the rest of your codebase, especially from their API calls, means you’re splitting that workflow into different, independent parts.

Separating concerns and assigning different roles to manage each may seem to bring certain efficiencies, such as easing collaboration between technical and non-technical roles. But it introduces fundamental complexity that can disrupt the engineering process. For instance, a change in one place, like adding a new key-value pair to an input for an LLM call, means manually hunting down every spot that change affects, and even then you’ll likely miss some errors.
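One way to keep prompts in the codebase is to colocate the template with the function that calls the model, so both are versioned and changed together. A minimal sketch, where `call_llm` is a hypothetical stand-in for a real model client:

```python
def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real model client; it echoes the
    # prompt so the example runs without an API key.
    return f"<response to: {prompt}>"

# The prompt lives right next to the call that uses it, so adding a new
# input (say, a reading_level parameter) is one change in one file.
RECOMMEND_PROMPT = "Recommend a {genre} book for a reader who liked {title}."

def recommend_book(genre: str, title: str) -> str:
    return call_llm(RECOMMEND_PROMPT.format(genre=genre, title=title))

answer = recommend_book(genre="fantasy", title="The Hobbit")
```

With prompt and call in one place, a change to the template’s inputs surfaces immediately at the call site instead of in a separately managed prompt store.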