by JasonDocton
Memory for AI that works like yours—local, instant, persistent. 13x faster than Pinecone, 5x leaner than RAG. Finds what RAG misses. Zero cloud, zero cost.
```bash
# Add to your Claude Code skills
git clone https://github.com/JasonDocton/lucid-memory
```

2.7ms retrieval. 743,000 memories/second. $0/query.
```bash
curl -fsSL https://lucidmemory.dev/install | bash
```
New in 0.6.0: Memory Consolidation — Lucid Memory is now self-maintaining. Background consolidation strengthens recent memories, decays stale ones, prunes weak associations, and manages visual memory lifecycle. New memories are checked against existing traces — similar content reinforces or updates rather than duplicating. 307 tests, 0 tsc errors.
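The reinforce-or-update check described above can be pictured with a small sketch. All names, the threshold, and the cosine check are illustrative assumptions, not Lucid Memory's actual internals:

```typescript
// Sketch of consolidation's duplicate check: before storing a new memory,
// compare it against existing traces; a near-duplicate reinforces the
// existing trace instead of creating a new one.
interface Trace {
  embedding: number[];
  strength: number; // decays over time, reinforced by similar input
}

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

const DUPLICATE_THRESHOLD = 0.95; // assumed cutoff, not Lucid's real value

function consolidate(store: Trace[], incoming: number[]): void {
  const match = store.find((t) => cosine(t.embedding, incoming) >= DUPLICATE_THRESHOLD);
  if (match) {
    match.strength += 1; // similar content reinforces rather than duplicating
  } else {
    store.push({ embedding: incoming, strength: 1 });
  }
}

const store: Trace[] = [];
consolidate(store, [1, 0, 0]);
consolidate(store, [0.99, 0.01, 0]); // near-duplicate: reinforces the first trace
consolidate(store, [0, 1, 0]);       // distinct: stored as a new trace
console.log(store.length);           // 2
```

The same strength field is what background consolidation would decay for stale traces and prune when it falls below a floor.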
New in 0.5.0: Episodic Memory — Claude remembers not just what happened, but how it unfolded — reconstructing the story of your debugging session, not just the fix. "What was I working on before the auth refactor?" now has an answer.
We're not a vector database. We're the retrieval layer that makes vector databases obsolete for AI memory.
Pinecone stores vectors. We understand context.
Benchmarked on realistic developer workflows (50-200 memories). Full methodology: `bun run bench:realistic && bun run bench:tokens`
| | Lucid Memory | Pinecone |
|---|---|---|
| Token efficiency | 5x | 2.5x |
| Recall | 82.5% | 55.3% |
| Latency | 2.7ms | 10-50ms |
| Monthly cost | $0 | $70+ |
| Your data | Stays on your machine | Sent to cloud |
| Recency awareness | Yes (multiplicative) | No |
| Associative retrieval | Yes (spreading activation) | No |
Pinecone is a great vector database. But vector search isn't memory.
Lucid Memory retrieves more (82.5% vs 55.3% recall), runs locally instead of in the cloud, costs nothing per query, and keeps your code on your machine, all while understanding that what you accessed yesterday matters more than what you accessed last year.
5x more relevant context per token than claude-mem. 2x more than Pinecone. Same budget, 5x more useful memories.
82.5% recall vs 28.9% (claude-mem) and 55.3% (Pinecone) at equivalent token budgets. More of what you need surfaces.
100% on adversarial recency tests. Recent-but-irrelevant never beats old-but-relevant—unlike systems where recency overwhelms similarity.
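Multiplicative recency means recency scales similarity rather than being added to it, so a fresh but irrelevant memory can never outrank a relevant one. A minimal sketch, assuming an exponential decay with a made-up half-life (not Lucid's actual constant):

```typescript
// Sketch of multiplicative recency weighting: score = similarity * recency.
// A near-zero similarity stays near zero no matter how fresh the memory is.
interface Memory {
  text: string;
  similarity: number; // cosine similarity to the query, 0..1
  ageDays: number;    // days since last access
}

const HALF_LIFE_DAYS = 30; // assumed decay constant for illustration

function recency(ageDays: number): number {
  return Math.pow(0.5, ageDays / HALF_LIFE_DAYS);
}

function score(m: Memory): number {
  return m.similarity * recency(m.ageDays);
}

const oldRelevant = { text: "auth race condition fix", similarity: 0.9, ageDays: 90 };
const newIrrelevant = { text: "unrelated lunch notes", similarity: 0.05, ageDays: 0 };

// 0.9 * 0.125 = 0.1125 vs 0.05: old-but-relevant still wins
console.log(score(oldRelevant) > score(newIrrelevant));
```

An additive scheme (similarity + recency bonus) would let the fresh memory's full recency bonus overwhelm its tiny similarity; multiplication is what makes the adversarial recency trap unwinnable for irrelevant content.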
Realistic Developer Workflow Benchmarks:
| Scenario | Lucid Memory | RAG Baseline | Delta |
| -------- | ------------ | ------------ | ----- |
| Morning context restoration | 93.3% | 78.3% | +15.0% |
| Needle in haystack (200 memories) | 100% | 100% | — |
| Recency vs similarity tradeoff | 100% | 100% | — |
| Co-edited files (spreading activation) | 100% | 75% | +25.0% |
| Cold start (no history) | 100% | 100% | — |
| Adversarial recency trap | 100% | 100% | — |
| Long-term decay | 100% | 100% | — |
| Episode retrieval (0.5.0) | 80% | 0% | +80.0% |
| Weak encoding retrieval | 100% | 60% | +40.0% |
| Overall | 87.3% | 71.3% | +16.0% |
Note: RAG ties on adversarial tests because it ignores recency entirely. The test validates Lucid's recency handling doesn't break relevance—and it doesn't.
Token Efficiency (at 300 token budget):
| Metric | Lucid Memory | Claude-mem | Pinecone RAG |
| ------ | ------------ | ---------- | ------------ |
| Memories retrieved | 10-21 | 0-5 | 1-6 |
| Relevant memories found | 5-10 | 0-3 | 1-3 |
| Relative efficiency | 5x | 1x | 2.5x |
Speed (M-series Mac, 1024-dim embeddings):
| Memories | Retrieval Time | Throughput |
| -------- | -------------- | ---------- |
| 100 | 0.13ms | 769k mem/s |
| 1,000 | 1.35ms | 741k mem/s |
| 2,000 | 2.69ms | 743k mem/s |
| 10,000 | ~13ms | ~740k mem/s |
Spreading activation (depth 3) adds <0.1ms overhead.
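Spreading activation can be sketched roughly as a bounded graph walk: directly matched memories pass a decayed fraction of their activation to associated memories, up to a fixed depth. The decay factor and graph here are illustrative assumptions; Lucid's actual association graph is built from usage:

```typescript
// Sketch of spreading activation over an association graph.
type Graph = Map<string, string[]>; // memory id -> associated memory ids

const DECAY = 0.5;   // assumed fraction of activation passed per hop
const MAX_DEPTH = 3; // matches the depth-3 figure above

function spread(graph: Graph, seeds: Map<string, number>): Map<string, number> {
  const activation = new Map(seeds);
  let frontier = seeds;
  for (let depth = 0; depth < MAX_DEPTH; depth++) {
    const next = new Map<string, number>();
    for (const [id, act] of frontier) {
      for (const neighbor of graph.get(id) ?? []) {
        const passed = act * DECAY;
        if (passed > (activation.get(neighbor) ?? 0)) {
          activation.set(neighbor, passed);
          next.set(neighbor, passed);
        }
      }
    }
    frontier = next;
  }
  return activation;
}

// auth.ts matched the query directly; session.ts is co-edited with it and
// surfaces even though the query never mentioned it.
const graph: Graph = new Map([
  ["auth.ts", ["session.ts"]],
  ["session.ts", ["token.ts"]],
]);
const result = spread(graph, new Map([["auth.ts", 1.0]]));
console.log(result.get("session.ts")); // 0.5
console.log(result.get("token.ts"));   // 0.25
```

This is how the co-edited-files scenario in the benchmark table can score 100% where a pure similarity search scores 75%: association, not text similarity, is what links the files.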
Cubing similarity scores (sim³) suppresses weak matches before budgeting.

Without Lucid:
```
User: "Remember that bug we fixed in the auth module?"
Claude: "I don't have context from previous conversations..."
```
With Lucid:
```
User: "Remember that bug we fixed in the auth module?"
Claude: "Yes - the race condition in the session refresh. We fixed it
by adding a mutex around the token update. That was three weeks ago
when we were refactoring the middleware."
```
macOS / Linux:
```bash
curl -fsSL https://lucidmemory.dev/install | bash
```
Windows (PowerShell):

```powershell
irm https://lucidmemory.dev/install.ps1 | iex
```
Older PowerShell versions (5.1) can't follow 308 redirects. Use the direct URL instead:
```powershell
irm https://raw.githubusercontent.com/JasonDocton/lucid-memory/main/install.ps1 | iex
```
That's it. Your AI coding assistant now remembers across sessions.
Requirements: 5GB free disk space, at least one supported client installed (Claude Code, Codex, or OpenCode)
Lucid Memory supports Claude Code, OpenAI Codex, and OpenCode. During installation, you can choose any combination.
Database modes:
- Shared (default): all clients use a single database
- Per-client: each client gets its own database (memory-claude.db, memory-codex.db, memory-opencode.db)

Managing configuration:
```bash
lucid config show                       # View current configuration
lucid config set-mode per-client        # Switch to per-client databases
lucid config create-profile work        # Create a new profile
lucid config set-profile codex work     # Assign Codex to use the work profile
lucid config set-profile opencode work  # Assign OpenCode to use the work profile
```
Environment variable:
The LUCID_CLIENT environment variable determines which client is active. This is set automatically in the MCP config for each client.
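As a rough illustration, a client's MCP entry might resemble the following. The `command`, `args`, and server name here are assumptions for the sketch; the installer writes the actual config for each client:

```json
{
  "mcpServers": {
    "lucid-memory": {
      "command": "lucid",
      "args": ["serve"],
      "env": { "LUCID_CLIENT": "claude" }
    }
  }
}
```

Because each client's config pins its own `LUCID_CLIENT` value, per-client database mode can route Claude Code, Codex, and OpenCode to their respective databases without any manual switching.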
Most AI memory is just vector search—embed query, find similar docs, paste into context.
Lucid implements how humans actually remember:
| Aspect | Traditional RAG | Lucid Memory |
| ------ | --------------- | ------------ |
| Model | Database lookup | Cognitive simulation |
| Memory | Static records | Living, evolving traces |
| Retrieval | Similarity search | Activation competition |
| Context | Ignored | Shapes what surfaces |
| Time | Flat | Recent/frequent = stronger |
| Associations | None | Memories activate each other |
Want the full picture? See How It Works for a deep dive into the cognitive architecture, retrieval algorithms, and neuroscience behind Lucid Memory.
New in 0.3: Claude now sees and remembers images and videos you share.
When you share media in your conversation, Claude automatically processes and remembers it—not by storing the file, but by understanding and describing what it sees and hears. Later, when you mention something related, those visual memories surface naturally.
| Without Visual Memory | With Visual Memory |
| --------------------- | ------------------ |
| "What was in that screenshot?" | "That screenshot showed the error in the auth module—the stack trace pointed to line 47." |
| Claude forgets media between sessions | Visual memories persist and surface when relevant |
| Videos are just files | Claude remembers both what it saw AND what was said |
How it works: