Techniques

Prompt Engineering

The practice of crafting effective text inputs to guide LLMs toward desired outputs.

Definition

Prompt engineering is the art and science of designing inputs to AI language models to elicit high-quality, relevant, and accurate outputs. Since LLMs are sensitive to how queries are framed, small changes in wording, structure, or context can dramatically affect output quality.

Key techniques include:

  • Zero-shot prompting: direct instruction with no examples
  • Few-shot prompting: including worked examples in the prompt
  • Chain-of-thought prompting: asking the model to reason step by step
  • Role prompting: assigning the model a persona
  • Structured output prompting: requesting JSON, tables, or other formats
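The first three techniques can be sketched as plain prompt-building functions. This is a minimal illustration; the sentiment-classification task, the wording of each template, and the function names are assumptions for the example, not part of any particular API.

```python
def zero_shot(review: str) -> str:
    # Zero-shot: a direct instruction, no examples included.
    return (
        "Classify the sentiment of this review as positive or negative.\n"
        f"Review: {review}\nSentiment:"
    )

def few_shot(review: str, examples: list[tuple[str, str]]) -> str:
    # Few-shot: prepend labelled examples so the model can infer the pattern.
    shots = "\n\n".join(f"Review: {text}\nSentiment: {label}"
                        for text, label in examples)
    return f"{shots}\n\nReview: {review}\nSentiment:"

def chain_of_thought(question: str) -> str:
    # Chain-of-thought: nudge the model to reason before answering.
    return f"{question}\nLet's think step by step."

examples = [("Great battery life!", "positive"),
            ("Screen cracked in a week.", "negative")]
prompt = few_shot("Fast shipping, works as described.", examples)
```

Ending the prompt with "Sentiment:" is a common completion-style cue: it constrains the model to answer with a label rather than free-form prose.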

As LLMs become more capable, prompt engineering is evolving into more systematic approaches such as automatic prompt optimisation and meta-prompting. Even so, the practice remains essential for getting reliable, safe, and useful outputs from commercial LLM APIs.
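Automatic prompt optimisation, in its simplest form, scores candidate prompt templates against a small labelled set and keeps the best one. The sketch below assumes a stubbed `fake_model` in place of a real LLM call; the dataset, templates, and function names are all illustrative.

```python
def fake_model(prompt: str) -> str:
    # Stub standing in for an LLM call: crudely "classifies" sentiment
    # from surface features of the prompt text.
    return "positive" if "!" in prompt else "negative"

def score(template: str, dataset: list[tuple[str, str]]) -> float:
    # Fraction of labelled examples answered correctly with this template.
    hits = sum(fake_model(template.format(text=x)) == y for x, y in dataset)
    return hits / len(dataset)

def best_prompt(candidates: list[str], dataset: list[tuple[str, str]]) -> str:
    # Select the candidate template with the highest dev-set accuracy.
    return max(candidates, key=lambda t: score(t, dataset))

dataset = [("Love it!", "positive"), ("Broke on arrival.", "negative")]
candidates = [
    "Sentiment of: {text}",
    "Is this review positive or negative? {text}",
]
winner = best_prompt(candidates, dataset)
```

Real systems replace the stub with actual model calls and often use an LLM to propose new candidate templates, but the evaluate-and-select loop is the same.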

Examples

  • Chain-of-thought prompting
  • Few-shot examples
  • System prompt design
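Structured output prompting, mentioned above, pairs naturally with validation on the caller's side: the prompt pins down an exact JSON shape, and the reply is checked before use. In this sketch the reply string is a hard-coded assumption standing in for a real API response.

```python
import json

PROMPT = (
    "Extract the product name and star rating from the review below.\n"
    'Respond with JSON only, exactly: {"product": <string>, "rating": <integer>}\n'
    "Review: The AcmePhone 5 is solid overall. 4/5 stars."
)

reply = '{"product": "AcmePhone 5", "rating": 4}'  # assumed model output

# Parse and validate the shape before trusting the result downstream.
data = json.loads(reply)
if set(data) != {"product", "rating"} or not isinstance(data["rating"], int):
    raise ValueError("model reply did not match the requested schema")
```

If parsing or validation fails, a common pattern is to retry with the error message appended to the prompt.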