The agent harness performance optimization system. Skills, instincts, memory, security, and research-first development for Claude Code, Codex, Opencode, Cursor and beyond.
Context Management: Handling context windows, memory, and state across interactions
Context Compression: Reducing token usage while preserving essential information
Context Isolation: Separating concerns across different context spaces
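The compression idea above can be sketched in a few lines: once the running history exceeds a token budget, older turns are collapsed into a single summary entry while recent turns are kept verbatim. This is a minimal illustration, not any framework's API; `count_tokens` and its 4-characters-per-token heuristic are stand-ins for a real tokenizer, and in a real system the summary would come from an LLM call.

```python
# Minimal sketch of context compression: collapse old turns into a summary
# once a token budget is exceeded, preserving the most recent turns verbatim.

def count_tokens(text: str) -> int:
    # Rough heuristic (~4 chars per token); a real system would use a tokenizer.
    return max(1, len(text) // 4)

def compress_history(history: list[str], budget: int, keep_recent: int = 2) -> list[str]:
    """Collapse older turns into one summary line once the budget is exceeded."""
    total = sum(count_tokens(turn) for turn in history)
    if total <= budget or len(history) <= keep_recent:
        return history
    old, recent = history[:-keep_recent], history[-keep_recent:]
    # In a real system this summary would be produced by an LLM call; here we
    # just record what was dropped so the model knows a gap exists.
    summary = f"[summary of {len(old)} earlier turns omitted]"
    return [summary] + recent

history = [f"turn {i}: " + "x" * 400 for i in range(10)]
compressed = compress_history(history, budget=300)
print(len(compressed))  # 3 entries: one summary plus the 2 most recent turns
```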
📖 Featured Articles
Context Rot: How Increasing Input Tokens Impacts LLM Performance
https://research.trychroma.com/context-rot
Manus Context Engineering
Context Engineering for AI Agents: Lessons from Building Manus
🎯 Establishes formal taxonomy of context engineering components
The performance of Large Language Models (LLMs) is fundamentally determined by the contextual information provided during inference. This survey introduces Context Engineering, a formal discipline that transcends simple prompt design to encompass the systematic optimization of information payloads for LLMs.
+1 for "context engineering" over "prompt engineering".
People associate prompts with the short task descriptions you'd give an LLM in day-to-day use. In every industrial-strength LLM app, however, context engineering is the delicate art and science of filling the context window with just the right information for the next step. Science, because doing this right involves task descriptions and explanations, few-shot examples, RAG, related (possibly multimodal) data, tools, state and history, and compaction. Too little, or of the wrong form, and the LLM lacks the context for optimal performance; too much, or too irrelevant, and costs go up while performance may come down. Doing this well is highly non-trivial. And art, because of the guiding intuition around LLM psychology.
On top of context engineering itself, an LLM app has to:
- break up problems just right into control flows
- pack the context windows just right
- dispatch calls to LLMs of the right kind and capability
- handle generation-verification UI/UX flows
- a lot more - guardrails, security, evals, parallelism, prefetching, ...
So context engineering is just one small piece of an emerging thick layer of non-trivial software that coordinates individual LLM calls (and a lot more) into full LLM apps. The term "ChatGPT wrapper" is tired and really, really wrong.
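"Packing the context window just right" can be sketched as a budgeted assembly step: candidate parts (system prompt, task, few-shot examples, RAG chunks) are ranked by priority and included greedily until the token budget is exhausted, then re-emitted in their natural order. The names (`Part`, `pack_context`) and the 4-characters-per-token estimate are illustrative assumptions under this sketch, not any particular framework's API.

```python
# Hedged sketch of context packing: include parts by priority under a token
# budget, then restore the original ordering so the prompt reads naturally.
from dataclasses import dataclass

@dataclass
class Part:
    name: str      # e.g. "system", "task", "few_shot", "rag_chunk"
    text: str
    priority: int  # lower number = more important, considered first

def estimate_tokens(text: str) -> int:
    return max(1, len(text) // 4)  # rough heuristic, not a real tokenizer

def pack_context(parts: list[Part], budget: int) -> str:
    chosen, used = [], 0
    for part in sorted(parts, key=lambda p: p.priority):
        cost = estimate_tokens(part.text)
        if used + cost <= budget:  # skip parts that don't fit; keep scanning
            chosen.append(part)
            used += cost
    # Re-emit in the original declaration order.
    order = {id(p): i for i, p in enumerate(parts)}
    chosen.sort(key=lambda p: order[id(p)])
    return "\n\n".join(p.text for p in chosen)

parts = [
    Part("system", "You are a careful coding assistant.", priority=0),
    Part("rag", "Relevant doc chunk: ..." * 50, priority=2),
    Part("task", "Fix the failing test in utils.py.", priority=1),
]
# With a tight budget, the low-priority RAG chunk is dropped.
print(pack_context(parts, budget=40))
```

The greedy-by-priority choice is the simplest reasonable policy; production systems often add per-part truncation or re-ranking instead of dropping parts whole.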
Key Principles
KV-Cache Optimization: Design for cache hit rates to reduce latency and cost
Append-Only Context: Avoid modifying previous context to maintain cache validity
External Memory: Use file systems and databases as extended context storage
Error Preservation: Keep failure traces for model learning and adaptation
Diversity Over Uniformity: Avoid repetitive patterns that lead to model drift
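The first two principles above are linked: inference servers can reuse cached attention keys/values only for the longest unchanged prefix of the context, so editing anything early in the prompt invalidates the cache from that point onward, while appending preserves it entirely. The toy model below illustrates that prefix-matching behavior; it is a sketch of the principle, not a real inference engine.

```python
# Toy illustration of KV-cache reuse: only the longest common prefix between
# the cached context and the new context can be served from cache.

def common_prefix_len(a: list[str], b: list[str]) -> int:
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n

cached = ["system prompt", "tool defs", "turn 1", "turn 2"]

# Append-only update: the full cached prefix remains reusable.
appended = cached + ["turn 3"]
print(common_prefix_len(cached, appended))  # 4 -> all cached entries reused

# In-place edit of an early entry (e.g. rewriting tool definitions):
# the cache is invalidated from that point onward.
edited = ["system prompt", "tool defs v2", "turn 1", "turn 2", "turn 3"]
print(common_prefix_len(cached, edited))  # 1 -> only the system prompt reused
```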
🔗 Model Context Protocol (MCP)
Context7 MCP Server
Up-to-date code documentation for LLMs and AI code editors