by CodeAbra
The best-benchmarked open-source memory system for AI coding assistants
# Add to your Claude Code skills
git clone https://github.com/CodeAbra/iai-mcp

Independent Autistic Intelligence — a local memory layer for Claude (and other MCP-compatible assistants).
A local server that speaks the MCP protocol and gives Claude, and any other MCP-compatible assistant, a long-term memory. It captures every turn of every session verbatim, organizes those captures over time into a personal map of who you are, and serves a small slice of relevant memory back at the start of each new conversation. You never have to say "remember this" or "what did we say last time?".
I built this for myself. It worked. I've been running it daily for months, and now I'm sharing it. The benchmarks were mostly for my own curiosity. I wanted to know if it actually works or if I'd just gotten used to it.
Windows and Linux are not supported yet, but I'm working on it.
git clone https://github.com/CodeAbra/iai-mcp.git
cd iai-mcp
bash scripts/install.sh
The installer creates a Python venv, installs dependencies (LanceDB, sentence-transformers, torch-hd, NetworkX, igraph), builds the TypeScript MCP wrapper, pre-downloads the default embedding model (~130 MB), symlinks the CLI to ~/.local/bin/iai-mcp, and on macOS registers the daemon with launchd.
Make sure ~/.local/bin is on your PATH:
export PATH="$HOME/.local/bin:$PATH" # add to ~/.zshrc or ~/.bashrc
iai-mcp --version
The capture hook is what makes capture ambient. Without it you'd have to save memories by hand.
Claude Code is the default target:
iai-mcp capture-hooks install
For Codex:
iai-mcp capture-hooks install --target codex
To install both:
iai-mcp capture-hooks install --target all
Check status with:
iai-mcp capture-hooks status --target all
Manual Claude Code setup is equivalent to:
mkdir -p ~/.claude/hooks
cp deploy/hooks/iai-mcp-session-capture.sh ~/.claude/hooks/
chmod +x ~/.claude/hooks/iai-mcp-session-capture.sh
Register in ~/.claude/settings.json:
{
  "hooks": {
    "Stop": [
      {
        "matcher": "*",
        "hooks": [
          {
            "type": "command",
            "command": "$HOME/.claude/hooks/iai-mcp-session-capture.sh"
          }
        ]
      }
    ]
  }
}
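If you'd rather script that registration, the merge can be sketched in Python. The settings schema and hook command come from the snippet above; the helper itself is hypothetical, not part of iai-mcp:

```python
import json
from pathlib import Path

def register_stop_hook(settings_path: Path, command: str) -> dict:
    """Merge a Stop hook entry into a Claude Code settings.json file."""
    settings = {}
    if settings_path.exists():
        settings = json.loads(settings_path.read_text())
    entry = {
        "matcher": "*",
        "hooks": [{"type": "command", "command": command}],
    }
    stops = settings.setdefault("hooks", {}).setdefault("Stop", [])
    # Skip if already registered, so repeated runs stay idempotent.
    if entry not in stops:
        stops.append(entry)
    settings_path.write_text(json.dumps(settings, indent=2))
    return settings
```

Running it twice leaves exactly one Stop entry, which is what you want if the installer and a manual run both touch the file.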
Claude Code:
claude mcp add iai-mcp -- node "$(pwd)/mcp-wrapper/dist/index.js"
Or edit ~/.claude.json directly:
{
  "mcpServers": {
    "iai-mcp": {
      "command": "node",
      "args": ["/absolute/path/to/iai-mcp/mcp-wrapper/dist/index.js"]
    }
  }
}
Use the absolute path. ~ and $HOME won't expand here.
For Claude Desktop (untested), edit ~/Library/Application Support/Claude/claude_desktop_config.json.
Codex CLI:
[mcp_servers.iai-mcp]
command = "node"
args = ["/absolute/path/to/iai-mcp/mcp-wrapper/dist/index.js"]
[mcp_servers.iai-mcp.env]
IAI_MCP_PYTHON = "/absolute/path/to/iai-mcp/.venv/bin/python"
IAI_MCP_STORE = "/Users/you/.iai-mcp"
TRANSFORMERS_VERBOSITY = "error"
TOKENIZERS_PARALLELISM = "false"
Codex hooks are stable in current Codex CLI builds. If hooks are disabled by
local policy or an older install, enable [features].hooks = true in
~/.codex/config.toml.
iai-mcp doctor
iai-mcp daemon status
Restart Claude Code. Start a session, do some work, exit. Then:
tail ~/.iai-mcp/logs/capture-$(date -u +%Y-%m-%d).log
You should see an rc=0 line. That's your first memory.
You do not call iai-mcp directly during a session. Once it's connected:
Capture is automatic. Every turn, yours and the assistant's, is recorded verbatim with timestamps and session metadata. You don't say "remember this."
Recall is automatic. When a new session starts, the daemon assembles a small relevant slice of your history and injects it into the conversation prefix. You don't say "what did we say."
Consolidation runs idle. Between sessions, the daemon merges duplicates, strengthens recall pathways for things retrieved often, and prunes weak edges. The system gets quietly better at remembering you over time.
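To make the consolidation pass concrete, here is a minimal sketch of the reinforce/decay/prune loop it describes. The half-life, boost, and prune threshold are illustrative numbers I picked, not the daemon's actual schedule:

```python
# Illustrative parameters only; the real decay schedule isn't documented.
HALF_LIFE_DAYS = 30.0
REINFORCE_BOOST = 0.2
PRUNE_THRESHOLD = 0.05

def decayed_weight(weight: float, days_since_retrieval: float) -> float:
    """Exponentially decay an edge weight; unretrieved links fade."""
    return weight * 0.5 ** (days_since_retrieval / HALF_LIFE_DAYS)

def consolidate(edges: dict[tuple, float], retrieved: set[tuple],
                days_idle: float) -> dict[tuple, float]:
    """One idle-time pass: reinforce co-retrieved edges, decay and prune the rest."""
    out = {}
    for edge, w in edges.items():
        if edge in retrieved:
            w = min(1.0, w + REINFORCE_BOOST)  # strengthen recall pathway
        else:
            w = decayed_weight(w, days_idle)   # natural forgetting
        if w >= PRUNE_THRESHOLD:               # weak edges are dropped
            out[edge] = w
    return out
```

The effect over repeated passes is the one described above: things you revisit get easier to recall, things you never revisit quietly disappear.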
After a few weeks of regular use the difference becomes noticeable. The assistant stops asking the same orientation questions, references things you mentioned in passing, and adapts to your style without being told.
The daemon is a Python process that runs in the background. Your MCP client connects to it via a Unix socket. No network exposure.
Memory is stored in three tiers:
- Episodic: verbatim, timestamped fragments of what was said. Write-once, never overwritten or rewritten.
- Semantic: summaries induced from clusters of related episodes during idle-time consolidation.
- Procedural: a small set of stable parameters about you, learned over time (preferences, style cues, recurring patterns). Eleven sealed knobs that shift based on what works.
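As a rough mental model of the three tiers, here are hypothetical record shapes (the field names are mine, not the actual store schema). The point of the frozen dataclass is the write-once property of episodic memory:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)              # episodic records are write-once
class Episode:
    ts: float                        # capture timestamp
    session_id: str
    role: str                        # "user" or "assistant"
    text: str                        # verbatim, never rewritten

@dataclass
class SemanticSummary:
    summary: str                     # induced from a cluster of episodes
    source_episodes: list[str] = field(default_factory=list)

@dataclass
class ProceduralKnob:
    name: str                        # a preference, style cue, or pattern
    value: float                     # shifts slowly based on what works
```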
A background pass runs periodically (sleep cycles): it clusters episodes, builds semantic summaries, decays old unreinforced connections, and reinforces frequently co-retrieved paths. Things you haven't revisited fade naturally. There's an optional "insight of the day" step that makes one Anthropic API call, but it's off by default.
Recall combines three signals: semantic similarity, graph-link strength, and recency. All ranked together.
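A sketch of that blended ranking, with made-up weights and a one-week recency half-life (the daemon's real formula isn't published):

```python
# Illustrative weights; the actual ranking formula is internal to the daemon.
W_SIM, W_LINK, W_RECENCY = 0.5, 0.3, 0.2
RECENCY_HALF_LIFE_S = 7 * 24 * 3600.0

def recency(ts: float, now: float) -> float:
    """Map memory age to (0, 1]: newer memories score higher."""
    return 0.5 ** ((now - ts) / RECENCY_HALF_LIFE_S)

def score(similarity: float, link_strength: float, ts: float, now: float) -> float:
    """Blend semantic similarity, graph-link strength, and recency."""
    return W_SIM * similarity + W_LINK * link_strength + W_RECENCY * recency(ts, now)

def rank(candidates: list[dict], now: float, k: int = 5) -> list[dict]:
    """Rank candidate memories by blended score and keep the top k."""
    return sorted(candidates,
                  key=lambda c: score(c["sim"], c["link"], c["ts"], now),
                  reverse=True)[:k]
```

Note what the blend buys you: an old but strongly linked, highly similar memory can still beat a fresh but irrelevant one.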
All records are encrypted at rest with AES-256-GCM. The key lives in ~/.iai-mcp/.key (mode 0600). Back it up. Lose the key, lose the memories.
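A quick way to sanity-check that the key file kept its 0600 mode, for example after restoring a backup. The path comes from above; the helper itself is illustrative, not part of the CLI:

```python
import stat
from pathlib import Path

def key_mode_ok(key_path: Path) -> bool:
    """True if the key file exists and is readable only by its owner (0600)."""
    if not key_path.exists():
        return False
    mode = stat.S_IMODE(key_path.stat().st_mode)
    return mode == 0o600

# Example: key_mode_ok(Path.home() / ".iai-mcp" / ".key")
```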
Everything lives at ~/.iai-mcp/. Embeddings are computed locally with bge-small-en-v1.5. The only data that leaves the machine is your normal conversation with whatever LLM API your client uses.
Claude Code <--MCP-stdio--> TypeScript wrapper <--UNIX socket--> Python daemon <--> LanceDB
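The wrapper-to-daemon wire format isn't documented; to illustrate the Unix-socket hop in that diagram, here is a minimal newline-delimited JSON round trip. The framing, socket path, and "ping" operation are all assumptions for the demo, not iai-mcp's actual protocol:

```python
import json
import socket
import threading

def serve_once(path: str) -> threading.Thread:
    """Tiny stand-in daemon: read one JSON line, reply with an 'ok' envelope."""
    srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    srv.bind(path)
    srv.listen(1)
    def handle():
        conn, _ = srv.accept()
        with conn:
            req = json.loads(conn.makefile("r").readline())
            conn.sendall((json.dumps({"ok": True, "echo": req}) + "\n").encode())
        srv.close()
    t = threading.Thread(target=handle, daemon=True)
    t.start()
    return t

def request(path: str, payload: dict) -> dict:
    """Send one JSON line over the Unix socket and read one JSON line back."""
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as c:
        c.connect(path)
        c.sendall((json.dumps(payload) + "\n").encode())
        return json.loads(c.makefile("r").readline())
```

Because everything rides a filesystem socket, access control reduces to file permissions and there is no port to expose.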
I made these because I wanted honest numbers. Every harness ships in bench/. Run them on your machine, get your own results.
| Metric | Target | Measured |
|---|---|---|
| Verbatim recall (byte-exact) | >=99% | >=99% at N=10k |
| Recall p95 latency | <100 ms | <100 ms at N=10k |
| RAM at steady state | <=300 MB | ~150-300 MB |
| Session-start tokens (warm cache) | <=3,000 | <=3,000 |
| Session-start tokens (cold) | <=8,000 | <=8,000 |
python -m bench.verbatim # verbatim fidelity
python -m bench.neural_map # recall latency
python -m bench.memory_footprint # RAM usage
python -m bench.tokens # session-start cost
python -m bench.total_session_cost # full 10-turn cost
python -m bench.trajectory # 30-session corpus
python -m bench.contradiction_longitudinal # falsifiability
python -m bench.longmemeval_blind # LongMemEval-S blind run
The LongMemEval-S run is blind on purpose. No dataset-specific tuning, no hyperparameter sweep. The numbers are what they are.
| Variable | Default | What it does |
|---|---|---|
| IAI_MCP_STORE | ~/.iai-mcp/ | Data directory |
| IAI_MCP_EMBED_MODEL | bge-small-en-v1.5 | Embedding model. bge-m3 for multilingual at ~3x size. |
Switching embedders requires re-embedding the store: iai-mcp migrate reembed.
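Presumably the daemon resolves these variables once at startup; a sketch of that resolution, using the defaults from the table above (the function name is mine, not the daemon's):

```python
import os
from pathlib import Path

def resolve_config(env=os.environ) -> dict:
    """Resolve iai-mcp settings from the environment, falling back to defaults."""
    return {
        "store": Path(env.get("IAI_MCP_STORE", str(Path.home() / ".iai-mcp"))),
        "embed_model": env.get("IAI_MCP_EMBED_MODEL", "bge-small-en-v1.5"),
    }
```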
iai-mcp doctor runs 14 checks against the daemon, the store, and the runtime state. Output is one line per check: PASS, WARN, or FAIL.
iai-mcp doctor
What it checks:
| # | Check | What it means |
|---|---|---|
| a | Daemon alive | Is the daemon process running? |
| b | Socket fresh | Can the UNIX socket accept a connection? |
| c | Lock healthy | Is the process lock held correctly? |
| d | No orphan core | No leftover stdio core process without a daemon |
| e | State file valid | .daemon-state.json parses and has expected fields |
| f | LanceDB readable | Can the records table be opened and queried? |
| g | No duplic