Prompt templates – reusable prompt structures with placeholders for inputs (see the sketch after this list).
Retrieval-augmented prompting – injecting external information into the prompt. ⟶ https://arxiv.org/abs/2512.04106
P.S. These are only the main prompt engineering techniques; there are many more. Here's a good survey where you can find them: https://arxiv.org/pdf/2402.07927
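A minimal prompt-template sketch in plain Python: the structure stays fixed and the inputs fill named placeholders. The template text, placeholder names, and `render_prompt` helper are all illustrative, not taken from any particular library.

```python
# A reusable prompt template: fixed structure, placeholders for the varying inputs.
SUMMARY_TEMPLATE = (
    "You are a concise technical writer.\n"
    "Summarize the following {doc_type} in {n_sentences} sentences "
    "for an audience of {audience}.\n\n"
    "Text:\n{text}"
)

def render_prompt(doc_type: str, n_sentences: int, audience: str, text: str) -> str:
    """Fill the placeholders and return a ready-to-send prompt string."""
    return SUMMARY_TEMPLATE.format(
        doc_type=doc_type,
        n_sentences=n_sentences,
        audience=audience,
        text=text,
    )

prompt = render_prompt("research abstract", 3, "ML engineers", "…paste the abstract here…")
print(prompt)  # send this string to your model of choice
```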
▪️ Context Engineering (designing the information environment the model operates in)
Retrieval-Augmented Generation (RAG) – dynamically injecting external knowledge retrieved from databases, search, or vector stores (see the RAG sketch after this list).
Tool calling / function calling – enabling the model to invoke external tools (APIs, calculators, code execution, search) whose results are fed back into the context so the model can complete the task (sketch below).
Structured context – providing schemas, JSON, tables, or graphs instead of free-form text.
System prompts / policies – persistent high-level instructions that govern behavior across interactions. ⟶ https://arxiv.org/abs/2212.08073
Short-term memory – passing recent interaction history or intermediate state; summarizing information from the ongoing conversation (see the memory sketches below). ⟶ https://arxiv.org/pdf/2512.13564
Long-term memory – storing and retrieving user profiles, facts, or past decisions and conversations over time (sketch below). ⟶ https://arxiv.org/abs/2503.08026
Environment state – exposing the current world, task, or agent state (files, variables, observations).
Multi-agent context – sharing state or messages between multiple LLM-based agents. ⟶ https://arxiv.org/abs/2505.21471
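A minimal RAG sketch for the item above. The hashed bag-of-words `embed()` is a toy stand-in for a real embedding model, the in-memory document list stands in for a vector store, and the returned prompt would be sent to whatever LLM you use:

```python
import numpy as np

def embed(text: str, dim: int = 256) -> np.ndarray:
    """Toy stand-in for a real embedding model: hashed bag-of-words, L2-normalised."""
    v = np.zeros(dim)
    for tok in text.lower().split():
        v[hash(tok) % dim] += 1.0
    n = np.linalg.norm(v)
    return v / n if n else v

DOCS = [
    "Refunds are issued within 14 days of purchase.",
    "Standard shipping takes 3-5 business days.",
    "The warranty covers manufacturing defects for 2 years.",
]
DOC_VECS = [embed(d) for d in DOCS]

def build_rag_prompt(question: str, k: int = 2) -> str:
    """Retrieve the k most similar documents and inject them into the prompt."""
    q = embed(question)
    scores = [float(q @ v) for v in DOC_VECS]  # cosine similarity (vectors are normalised)
    top = sorted(range(len(DOCS)), key=scores.__getitem__, reverse=True)[:k]
    context = "\n".join(f"- {DOCS[i]}" for i in top)
    return (
        "Answer using only the context below. If the answer is not there, say so.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )

print(build_rag_prompt("How long does delivery take?"))  # pass this to your model
```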
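A provider-agnostic sketch of the tool-calling loop. Real APIs (e.g. OpenAI or Anthropic function calling) have their own request formats; here the model's tool request is simulated as a JSON string, and `get_weather` is a made-up tool:

```python
import json

def get_weather(city: str) -> dict:
    """Pretend weather lookup; a real tool would call an external API."""
    return {"city": city, "temp_c": 21, "conditions": "clear"}

TOOLS = {"get_weather": get_weather}  # tools the model is allowed to call

def run_tool_call(model_output: str) -> str:
    """Parse a tool request emitted by the model, execute it, and return the result
    as extra context for the next model turn."""
    request = json.loads(model_output)          # e.g. {"tool": "get_weather", "args": {...}}
    fn = TOOLS[request["tool"]]
    result = fn(**request["args"])
    return f"Tool {request['tool']} returned: {json.dumps(result)}"

# Simulated model turn asking for a tool; in practice this comes from the LLM's response.
model_output = '{"tool": "get_weather", "args": {"city": "Berlin"}}'
print(run_tool_call(model_output))  # append this to the conversation and call the model again
```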
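A sketch of short-term memory as a sliding window over recent turns plus a running summary of everything older; the role/field names and turn limit are just assumptions:

```python
MAX_RECENT = 4  # how many turns to keep verbatim

def build_short_term_context(summary: str, history: list[dict], user_msg: str) -> str:
    """Keep the last few turns verbatim; older turns are assumed to be folded into `summary`."""
    recent = "\n".join(f"{m['role']}: {m['content']}" for m in history[-MAX_RECENT:])
    return (
        f"Conversation summary so far: {summary or '(none yet)'}\n\n"
        f"{recent}\nuser: {user_msg}\nassistant:"
    )

history = [
    {"role": "user", "content": "I'm planning a trip to Lisbon in May."},
    {"role": "assistant", "content": "Great choice - May is warm and not too crowded."},
]
print(build_short_term_context("", history, "What should I pack?"))
# When history grows past MAX_RECENT, ask the model to compress the dropped turns into `summary`.
```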
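And a sketch of long-term memory as a tiny persistent store of user facts that is queried and injected into later prompts. The `memory.json` file and keyword-overlap retrieval are placeholders; in practice you would use a database or vector store with embedding search:

```python
import json
import pathlib

MEMORY_FILE = pathlib.Path("memory.json")  # hypothetical location for persisted facts

def remember(fact: str) -> None:
    """Append a fact about the user to persistent storage."""
    facts = json.loads(MEMORY_FILE.read_text()) if MEMORY_FILE.exists() else []
    facts.append(fact)
    MEMORY_FILE.write_text(json.dumps(facts))

def recall(query: str, k: int = 3) -> list[str]:
    """Return the k stored facts sharing the most words with the query."""
    facts = json.loads(MEMORY_FILE.read_text()) if MEMORY_FILE.exists() else []
    q = set(query.lower().split())
    return sorted(facts, key=lambda f: len(q & set(f.lower().split())), reverse=True)[:k]

remember("User prefers vegetarian recipes.")
remember("User's timezone is CET.")
relevant = recall("Suggest a dinner recipe")
prompt = (
    "Known user facts:\n"
    + "\n".join(f"- {f}" for f in relevant)
    + "\n\nUser: Suggest a dinner recipe"
)
print(prompt)
```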