by GMaN1911
Working memory for Claude Code - persistent context and multi-instance coordination
# Add to your Claude Code skills
git clone https://github.com/GMaN1911/claude-cognitive
Claude Code is powerful but stateless. Every new instance starts from scratch, with no memory of previous sessions and no awareness of other instances working in the same codebase.
With large codebases (50k+ lines), this becomes painful fast.
Claude Cognitive gives Claude Code working memory through two complementary systems:
Attention-based file injection with cognitive dynamics:
Files decay when not mentioned, activate on keywords, and co-activate with related files.
Multi-instance state sharing for long-running sessions:
pool blocks for critical coordination
Token Savings:
Average: 64-95% depending on codebase size and work pattern.
Developer Experience:
Validated on:
# Clone to your home directory
cd ~
git clone https://github.com/GMaN1911/claude-cognitive.git .claude-cognitive
# Copy scripts
cp -r .claude-cognitive/scripts ~/.claude/scripts/
# Set up hooks: merge hooks-config.json into ~/.claude/settings.json
# (merge the JSON objects by hand or with a JSON-aware tool; blindly
# appending with `cat >>` would leave settings.json as invalid JSON)
Note: The repo contains a .claude-dev/ directory for development/dogfooding purposes only. Do not copy this to your projects; it's not part of the user-facing installation. Use your own project-local .claude/ directory instead (see step 2).
cd /path/to/your/project
# Create .claude directory
mkdir -p .claude/{systems,modules,integrations,pool}
# Copy templates
cp -r ~/.claude-cognitive/templates/* .claude/
# Edit .claude/CLAUDE.md with your project info
# Edit .claude/systems/*.md to describe your architecture
# Add to ~/.bashrc for persistence:
export CLAUDE_INSTANCE=A
# Or per-terminal:
export CLAUDE_INSTANCE=B
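The scripts appear to fall back to a "default" instance label when the variable is unset (the stats output later shows a 'default' bucket). A minimal sketch of that resolution, assuming an `instance_id` helper that the real scripts may name differently:

```python
import os

def instance_id() -> str:
    """Resolve this terminal's instance label, assuming a "default"
    fallback when CLAUDE_INSTANCE is unset (as the stats output suggests)."""
    return os.environ.get("CLAUDE_INSTANCE", "default")
```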
# Start Claude Code
claude
# First message - check for context injection:
# Should see: "ATTENTION STATE [Turn 1]" with HOT/WARM/COLD counts
# Query pool activity:
python3 ~/.claude/scripts/pool-query.py --since 1h
Create .claude/keywords.json in your project root:
cp ~/.claude-cognitive/templates/keywords.json.example .claude/keywords.json
Edit to match your project's documentation files and relevant keywords.
Full setup guide: SETUP.md
Customization guide: CUSTOMIZATION.md
Create .claude/keywords.json in your project root to define project-specific keywords:
{
  "keywords": {
    "path/to/doc.md": ["keyword1", "keyword2", "phrase to match"]
  },
  "co_activation": {
    "path/to/doc.md": ["related/doc.md"]
  },
  "pinned": ["always/warm/file.md"]
}
Keywords: Map documentation files to trigger words. When any keyword appears in your prompt (case-insensitive), the file becomes HOT.
Co-activation: When a file activates, related files get a score boost.
Pinned: Files that should always be at least WARM.
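The three rules above can be sketched roughly as follows. The +0.35 co-activation boost comes from the example later in this document; the 0.5 WARM floor for pinned files is an assumption, and the real context-router-v2.py may implement all of this differently:

```python
def apply_prompt(prompt: str, scores: dict, config: dict,
                 boost: float = 0.35) -> dict:
    """One turn of keyword activation against a keywords.json-style config:
    direct mention -> HOT, co-activation boost, pinned floor."""
    text = prompt.lower()
    new = dict(scores)
    for path, words in config.get("keywords", {}).items():
        if any(w.lower() in text for w in words):  # case-insensitive match
            new[path] = 1.0  # direct mention -> HOT
            for related in config.get("co_activation", {}).get(path, []):
                new[related] = new.get(related, 0.0) + boost
    for path in config.get("pinned", []):
        new[path] = max(new.get(path, 0.0), 0.5)  # assumed WARM floor
    return new
```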
The router checks for config in this order:
1. .claude/keywords.json (project-local)
2. ~/.claude/keywords.json (global fallback)
Attention Dynamics:
User mentions "orin" in message
↓
systems/orin.md → score = 1.0 (HOT)
↓
Co-activation:
integrations/pipe-to-orin.md → +0.35 (WARM)
modules/t3-telos.md → +0.35 (WARM)
↓
Next turn (no mention):
systems/orin.md → 1.0 × 0.85 decay = 0.85 (still HOT)
↓
3 turns later (no mention):
systems/orin.md → 0.85 × 0.85 × 0.85 = 0.61 (now WARM)
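The arithmetic in the trace above reduces to exponential decay plus tier thresholds. The 0.85 decay factor is taken from the trace; the HOT/WARM cutoffs below (0.7 and 0.3) are assumptions chosen to be consistent with it (0.85 stays HOT, 0.61 drops to WARM) — the router's actual cutoffs may differ:

```python
DECAY = 0.85  # per-turn decay factor, from the trace above

def decay_turns(score: float, turns: int) -> float:
    """Score after `turns` turns with no mention of the file."""
    return score * DECAY ** turns

def tier(score: float, hot: float = 0.7, warm: float = 0.3) -> str:
    """Map a score onto HOT/WARM/COLD (threshold values assumed)."""
    if score >= hot:
        return "HOT"
    if score >= warm:
        return "WARM"
    return "COLD"
```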
Injection:
Automatic Mode:
Instance A completes task
↓
Auto-detector finds: "Successfully deployed PPE to Orin"
↓
Writes pool entry:
action: completed
topic: PPE deployment to Orin
affects: orin_sensory_cortex/
↓
Instance B starts session
↓
Pool loader shows:
"[A] completed: PPE deployment to Orin"
↓
Instance B avoids duplicate work
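The handoff above boils down to an append-only JSONL pool that one instance writes and another reads at session start. A sketch under stated assumptions — the field names follow the flow above and an epoch `ts` field is used for filtering, but the real pool-auto-update.py / pool-loader.py schema may differ:

```python
import json
import time
from pathlib import Path

def write_pool_entry(pool: Path, instance: str, action: str,
                     topic: str, affects: list) -> None:
    """Append one coordination entry to the shared pool file."""
    pool.parent.mkdir(parents=True, exist_ok=True)
    entry = {"ts": time.time(), "instance": instance,
             "action": action, "topic": topic, "affects": affects}
    with pool.open("a") as f:
        f.write(json.dumps(entry) + "\n")

def read_pool_since(pool: Path, seconds: float) -> list:
    """Return entries newer than the cutoff, oldest first."""
    if not pool.exists():
        return []
    cutoff = time.time() - seconds
    with pool.open() as f:
        return [e for e in map(json.loads, f) if e["ts"] >= cutoff]
```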
Manual Mode:
```pool
INSTANCE: A
ACTION: completed
TOPIC: Fixed authentication bug
SUMMARY: Resolved race condition in token refresh. Added mutex.
AFFECTS: auth.py, session_handler.py
BLOCKS: Session management refactor can proceed
```
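A Stop-hook extractor for these blocks could be as simple as pulling the first pool-fenced region and splitting its KEY: value lines. This is a hypothetical sketch of that parsing, not the actual pool-extractor.py implementation:

```python
import re

FENCE = chr(96) * 3  # a literal triple-backtick marker

def parse_pool_block(transcript: str) -> dict:
    """Extract KEY: value pairs from the first pool-fenced block,
    lower-casing keys so INSTANCE/ACTION/TOPIC become dict fields."""
    m = re.search(FENCE + r"pool\n(.*?)" + FENCE, transcript, re.DOTALL)
    if not m:
        return {}
    fields = {}
    for line in m.group(1).splitlines():
        key, sep, value = line.partition(":")
        if sep:
            fields[key.strip().lower()] = value.strip()
    return fields
```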
Claude Cognitive now remembers its own attention. Every turn is logged with structured data showing which files were HOT/WARM/COLD and how they transitioned between tiers.
The router always computed attention scores. Now they persist as queryable history:
# Last 20 turns
python3 ~/.claude/scripts/history.py
# Last 2 hours
python3 ~/.claude/scripts/history.py --since 2h
# Filter by file pattern
python3 ~/.claude/scripts/history.py --file ppe
# Show only tier transitions
python3 ~/.claude/scripts/history.py --transitions
# Summary statistics
python3 ~/.claude/scripts/history.py --stats
# Filter by instance
python3 ~/.claude/scripts/history.py --instance A
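The `--since` specs above ("1h", "2h", "7d") suggest a simple unit-suffix duration format. A sketch of how such a spec might be parsed; the units here (m/h/d) are guessed from the examples and history.py may accept more:

```python
def parse_since(spec: str) -> float:
    """Convert a duration spec like "2h" or "7d" into seconds."""
    units = {"m": 60, "h": 3600, "d": 86400}
    return float(spec[:-1]) * units[spec[-1]]
```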
============================================================
2025-12-31
============================================================
[18:43:21] Instance A | Turn 47
Query: refactor ppe routing tier collapse
🔥 HOT: ppe-anticipatory-coherence.md, t3-telos.md
🌡️ WARM: orin.md, pipeline.md
⬆️ Promoted to HOT: ppe-anticipatory-coherence.md
⬇️ Decayed to COLD: img-to-asus.md
[19:22:35] Instance A | Turn 48
Query: what divergence dynamics?
🔥 HOT: divergent.md, t3-telos.md, cvmp-transformer.md
🌡️ WARM: pipeline.md, orin.md (+3 more)
⬆️ Promoted to HOT: divergent.md
python3 ~/.claude/scripts/history.py --stats --since 7d
╔══════════════════════════════════════════════════════════════╗
║ ATTENTION STATISTICS ║
╚══════════════════════════════════════════════════════════════╝
Total turns: 342
Time range: 2025-12-24 to 2025-12-31
Instances: {'A': 156, 'B': 98, 'default': 88}
Most frequently HOT:
87 turns: pipeline.md
65 turns: t3-telos.md
43 turns: orin.md
38 turns: ppe-anticipatory-coherence.md
22 turns: divergent.md
Most promoted to HOT:
23 times: ppe-anticipatory-coherence.md
18 times: divergent.md
12 times: convergent.md
Busiest days:
2025-12-30: 156 turns
2025-12-29: 98 turns
2025-12-28: 88 turns
Average context size: 18,420 chars
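The "Most frequently HOT" ranking above is a straightforward count over the per-turn records. A sketch of that aggregation (the real history.py --stats may compute it differently):

```python
from collections import Counter

def most_frequently_hot(entries: list, top: int = 5) -> list:
    """Count how many turns each file spent in the HOT tier,
    given per-turn records with a "hot" list field."""
    counts = Counter()
    for entry in entries:
        counts.update(entry.get("hot", []))
    return counts.most_common(top)
```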
Each turn logs:
{
  "turn": 47,
  "timestamp": "2025-12-31T18:43:21Z",
  "instance_id": "A",
  "prompt_keywords": ["refactor", "ppe", "routing", "tier"],
  "activated": ["ppe-anticipatory-coherence.md"],
  "hot": ["ppe-anticipatory-coherence.md", "t3-telos.md"],
  "warm": ["orin.md", "pipeline.md"],
  "cold_count": 12,
  "transitions": {
    "to_hot": ["ppe-anticipatory-coherence.md"],
    "to_warm": ["orin.md"],
    "to_cold": ["img-to-asus.md"]
  },
  "total_chars": 18420
}
File: ~/.claude/attention_history.jsonl (append-only, one entry per turn)
Retention: 30 days (configurable in context-router-v2.py)
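Enforcing the 30-day retention on an append-only JSONL file amounts to rewriting it without the stale lines. A sketch under stated assumptions — it keys on an epoch "ts" field for brevity, whereas the real log stores ISO-8601 timestamps, so context-router-v2.py's actual pruning will differ:

```python
import json
import time
from pathlib import Path

def prune_history(path: Path, days: int = 30) -> int:
    """Drop entries older than the retention window; return kept count."""
    if not path.exists():
        return 0
    cutoff = time.time() - days * 86400
    kept = []
    with path.open() as f:
        for line in f:
            entry = json.loads(line)
            if entry.get("ts", 0) >= cutoff:
                kept.append(line)
    path.write_text("".join(kept))
    return len(kept)
```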
claude-cognitive/
├── scripts/
│ ├── context-router-v2.py # Attention dynamics + history logging
│ ├── history.py # History viewer CLI (v1.1+)
│ ├── pool-auto-update.py # Continuous pool updates
│ ├── pool-loader.py # SessionStart injection
│ ├── pool-extractor.py # Stop hook extraction
│ └── pool-query.py # CLI query tool
│
├── templates/
│ ├── CLAUDE.md # Project context template
│ ├── systems/ # Hardware/deployment
│ ├── modules/ # Core systems
│ └── integrations/ # Cross-system communication
│
└── examples/
├── small-project/ # Simple example
├── monorepo/ # Complex structure
└── mirrorbot-sanitized/ # Real-world 50k+ line example
Hooks:
UserPromptSubmit: Context router + pool auto-update
SessionStart: Pool loader
Stop: Pool extractor (manual blocks)
State Files:
.claude/attn_state.json - Context router scores
.claude/pool/instance_state.jsonl - Pool entries
Strategy: Project-local first, ~/.claude/ fallback (monorepo-friendly)