Prompt Engineering
The practice of crafting effective text inputs to guide LLMs toward desired outputs.
Definition
Prompt engineering is the art and science of designing inputs to AI language models to elicit high-quality, relevant, and accurate outputs. Since LLMs are sensitive to how queries are framed, small changes in wording, structure, or context can dramatically affect output quality.
Key techniques include: zero-shot prompting (direct instruction with no examples), few-shot prompting (including examples in the prompt), chain-of-thought prompting (asking the model to reason step-by-step), role prompting (assigning a persona), and structured output prompting (requesting JSON, tables, or other formats).
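The techniques listed above can be sketched as plain prompt-string builders. This is a minimal illustration, not any particular library's API; the helper names and template wording are made up for this example.

```python
# Illustrative prompt builders for the techniques named above.
# No model is called; each function just returns the prompt text.

def zero_shot(task: str) -> str:
    """Zero-shot: direct instruction with no examples."""
    return f"Task: {task}\nAnswer:"

def few_shot(task: str, examples: list[tuple[str, str]]) -> str:
    """Few-shot: prepend worked input/output pairs before the real query."""
    shots = "\n".join(f"Input: {x}\nOutput: {y}" for x, y in examples)
    return f"{shots}\nInput: {task}\nOutput:"

def chain_of_thought(task: str) -> str:
    """Chain-of-thought: ask the model to reason step by step."""
    return f"{task}\nLet's think step by step."

def role_prompt(persona: str, task: str) -> str:
    """Role prompting: assign a persona before the task."""
    return f"You are {persona}.\n\n{task}"

def structured_output(task: str, schema: str) -> str:
    """Structured output: request a machine-readable format."""
    return f"{task}\nRespond only with JSON matching this schema: {schema}"

print(few_shot("It was fine, I guess",
               [("I loved it", "positive"), ("Terrible service", "negative")]))
```

Swapping between these builders on the same task is a cheap way to see how much framing alone changes a model's output.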
As LLMs become more capable, prompt engineering is evolving into more systematic approaches like automatic prompt optimisation and meta-prompting. However, the field remains essential for getting reliable, safe, and useful outputs from commercial LLM APIs.
Examples
- Chain-of-thought prompting
- Few-shot examples
- System prompt design
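The three examples above map naturally onto the chat-message format most LLM APIs accept. The sketch below only builds the message lists locally; the system-prompt wording and the classification task are invented for illustration.

```python
# System prompt design plus few-shot and chain-of-thought prompting,
# expressed as role-tagged chat messages (built locally, no API call).

system_prompt = (
    "You are a precise technical assistant. "
    "Answer concisely and show your reasoning only when asked."
)

# Few-shot: demonstrations appear as prior user/assistant turns.
few_shot_messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": "Classify sentiment: 'I loved it'"},
    {"role": "assistant", "content": "positive"},
    {"role": "user", "content": "Classify sentiment: 'Terrible service'"},
    {"role": "assistant", "content": "negative"},
    {"role": "user", "content": "Classify sentiment: 'It was fine, I guess'"},
]

# Chain-of-thought: an explicit instruction to reason before answering.
cot_messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": (
        "A train leaves at 3:40 pm and the trip takes 95 minutes. "
        "When does it arrive? Think step by step before answering."
    )},
]

print([m["role"] for m in few_shot_messages])
```

Keeping demonstrations as real conversation turns, rather than pasting them into one user message, lets the model treat them as established behaviour it should continue.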
Related Terms
Large Language Model (LLM)
A transformer-based AI system trained on billions of tokens of text, capable of generating, reasoning about, and transforming language.
RAG (Retrieval-Augmented Generation)
Grounding LLM responses by first retrieving relevant documents from a knowledge base before generating an answer.
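The generation half of RAG is itself a prompt-engineering step: retrieved text is folded into the prompt with an instruction to rely on it. A toy sketch, with retrieval faked by a word-overlap match over an in-memory document list (all names and documents here are made up):

```python
# Minimal RAG-style prompt assembly. The "retriever" is a toy
# word-overlap ranking over a hard-coded document list.

DOCS = [
    "The context window is the maximum number of tokens a model can process.",
    "Fine-tuning continues training a pre-trained model on domain-specific data.",
]

def retrieve(query: str) -> str:
    """Return the document sharing the most words with the query."""
    words = set(query.lower().split())
    return max(DOCS, key=lambda d: len(words & set(d.lower().split())))

def rag_prompt(query: str) -> str:
    """Fold the retrieved passage into a grounded prompt."""
    context = retrieve(query)
    return (
        "Use only the context below to answer.\n"
        f"Context: {context}\n"
        f"Question: {query}\n"
        "Answer:"
    )

print(rag_prompt("What is a context window?"))
```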
Context Window
The maximum number of tokens an LLM can process in a single request, determining how much text it can "see" at once.
Fine-tuning
Continuing training of a pre-trained model on domain-specific data to specialise it for a particular task.