by varun29ankuS
Cognitive memory for AI agents — learns from use, forgets what's irrelevant, strengthens what matters. Single binary, fully offline.
# Add to your Claude Code skills
git clone https://github.com/varun29ankuS/shodh-memory

AI agents forget everything between sessions. They repeat mistakes, lose context, and treat every conversation like the first one.
Shodh-Memory fixes this. It's persistent memory that actually learns — memories you use often become easier to find, old irrelevant context fades automatically, and recalling one thing brings back related things. No API keys. No cloud. No external databases. One binary.
| | Shodh | mem0 | Cognee | Zep |
|---|---|---|---|---|
| LLM calls to store a memory | 0 | 2+ per add | 3+ per cognify | 2+ per episode |
| External services needed | None | OpenAI + vector DB | OpenAI + Neo4j + vector DB | OpenAI + Neo4j |
| Time to store a memory | 55ms | ~20 seconds | seconds | seconds |
| Learns from usage | Yes (Hebbian) | No | No | No |
| Forgets irrelevant data | Yes (decay) | No | No | Temporal only |
| Runs fully offline | Yes | No | No | No |
| Binary size | ~17MB | pip install + API keys | pip install + API keys + Neo4j | Cloud only |
Every other memory system delegates intelligence to LLM API calls — that's why they're slow, expensive, and can't work offline. Shodh uses algorithmic intelligence: local embeddings, mathematical decay, learned associations. No LLM in the loop.
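The combination of local embeddings and mathematical decay can be sketched in a few lines. This is an illustrative toy, not Shodh's internals: the function names, the cosine-times-decay formula, and the one-week half-life are all invented for the example.

```python
import math
import time

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def recall_score(query_vec, memory_vec, last_access_ts, half_life_s=7 * 86400):
    """Similarity from a local embedding, scaled by exponential decay:
    a memory untouched for one half-life scores half as high."""
    age = time.time() - last_access_ts
    decay = 0.5 ** (age / half_life_s)
    return cosine(query_vec, memory_vec) * decay
```

The point of the sketch: ranking is pure arithmetic over locally computed vectors, so no LLM call is needed at store or recall time.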
# Download from GitHub Releases (or brew tap varun29ankuS/shodh-memory && brew install shodh-memory)
shodh init # First-time setup — creates config, generates API key, downloads AI model
shodh server # Start the memory server on :3030
shodh tui # Launch the TUI dashboard
shodh status # Check server health
shodh doctor # Diagnose issues
One binary, all functionality. No Docker, no API keys, no external dependencies.
claude mcp add shodh-memory -- npx -y @shodh/memory-mcp
That's it. The MCP server auto-downloads the backend binary and starts it. No Docker, no API keys, no configuration. Claude now has persistent memory across sessions.
# 1. Start the server
docker run -d -p 3030:3030 -v shodh-data:/data varunshodh/shodh-memory
# 2. Add to Claude Code
claude mcp add shodh-memory -- npx -y @shodh/memory-mcp
{
"mcpServers": {
"shodh-memory": {
"command": "npx",
"args": ["-y", "@shodh/memory-mcp"]
}
}
}
For local use, no API key is needed — one is generated automatically. For remote servers, add "env": { "SHODH_API_KEY": "your-key" }.
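For the remote case, the env block slots into the same config shown above ("your-key" is a placeholder for your actual key):

```json
{
  "mcpServers": {
    "shodh-memory": {
      "command": "npx",
      "args": ["-y", "@shodh/memory-mcp"],
      "env": { "SHODH_API_KEY": "your-key" }
    }
  }
}
```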
pip install shodh-memory
from shodh_memory import Memory
memory = Memory(storage_path="./my_data")
memory.remember("User prefers dark mode", memory_type="Decision")  # store
results = memory.recall("user preferences", limit=5)  # semantic search
[dependencies]
shodh-memory = "0.1"
use shodh_memory::{MemorySystem, MemoryConfig, MemoryType};
let memory = MemorySystem::new(MemoryConfig::default())?;
memory.remember("user-1", "User prefers dark mode", MemoryType::Decision, vec![])?;
let results = memory.recall("user-1", "user preferences", 5)?;
docker run -d -p 3030:3030 -v shodh-data:/data varunshodh/shodh-memory
You use a memory often → it becomes easier to find (Hebbian learning)
You stop using a memory → it fades over time (activation decay)
You recall one memory → related memories surface too (spreading activation)
A connection is used → it becomes permanent (long-term potentiation)
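A toy model makes the four rules concrete. Everything here is invented for illustration (class names, constants, the five-use potentiation threshold); it shows the shape of the mechanism, not Shodh's implementation.

```python
# Toy model of the four learning rules. All names and constants are
# invented for illustration; they are not Shodh's actual internals.

class Edge:
    """A weighted association between two memories."""
    def __init__(self):
        self.strength = 0.1
        self.uses = 0
        self.permanent = False

    def reinforce(self):
        """Hebbian learning: each use strengthens the connection."""
        self.uses += 1
        self.strength = min(1.0, self.strength + 0.1)
        if self.uses >= 5:            # long-term potentiation:
            self.permanent = True     # heavily used edges stop decaying

    def decay(self, factor=0.95):
        """Activation decay: unused, non-permanent connections fade."""
        if not self.permanent:
            self.strength *= factor

def spread(graph, start, threshold=0.2):
    """Spreading activation: recalling one node surfaces the
    neighbors whose connection is strong enough."""
    return [b for (a, b), e in graph.items()
            if a == start and e.strength >= threshold]
```

Usage: reinforce an edge five times and it becomes permanent, so `decay()` no longer touches it, while a fresh edge (strength 0.1) stays below the spreading threshold until it is used.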
Under the hood, memories flow through three tiers:
Working Memory ──overflow──▶ Session Memory ──importance──▶ Long-Term Memory
  (100 items)                   (500 MB)                       (RocksDB)
This is based on Cowan's working memory model and Wixted's memory decay research. The neuroscience isn't a gimmick — it's why the system gets better with use instead of just accumulating data.
| Operation | Latency |
|-----------|---------|
| Store memory (API response) | <200ms |
| Store memory (core) | 55-60ms |
| Semantic search | 34-58ms |
| Tag search | ~1ms |
| Entity lookup | 763ns |
| Graph traversal (3-hop) | 30µs |
Single binary. No GPU required. Content-hash dedup ensures identical memories are never stored twice.
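Content-hash dedup is straightforward to sketch: hash the content, use the hash as the storage key, and a second write of identical content is a no-op. A minimal sketch (SHA-256 is an assumption; the actual hash function isn't stated):

```python
import hashlib

# Sketch of content-hash dedup: identical content maps to the same
# key, so storing the same memory twice changes nothing.
store = {}

def remember(content: str) -> str:
    key = hashlib.sha256(content.encode("utf-8")).hexdigest()
    store.setdefault(key, content)  # second identical write is a no-op
    return key

k1 = remember("User prefers dark mode")
k2 = remember("User prefers dark mode")
```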
shodh tui
Full list of tools available to Claude, Cursor, and other MCP clients:
remember · recall · proactive_context · context_summary · list_memories · read_memory · forget · reinforce
add_todo · list_todos · update_todo · complete_todo · delete_todo · reorder_todo · list_subtasks · add_todo_comment · list_todo_comments · update_todo_comment · delete_todo_comment · todo_stats
add_project · list_projects · archive_project · delete_project
set_reminder · list_reminders · dismiss_reminder
memory_stats · verify_index · repair_index · token_status · reset_token_session · consolidation_report · backup_create · backup_list · backup_verify · backup_restore · backup_purge
160+ endpoints on http://localhost:3030. All /api/* endpoints require X-API-Key header.
# Store a memory
curl -X POST http://localhost:3030/api/remember \
-H "Content-Type: application/json" \
-H "X-API-Key: your-key" \
-d '{"user_id": "user-1", "content": "User prefers dark mode", "memory_type": "Decision"}'
# Search memories
curl -X POST http://localhost:3030/api/recall \
-H "Content-Type: application/json" \
-H "X-API-Key: your-key" \
-d '{"user_id": "user-1", "query": "user preferences", "limit": 5}'
Linux x86_64 · Linux ARM64 · macOS Apple Silicon · macOS Intel · Windows x86_64
SHODH_ENV=production # Production mode
SHODH_API_KEYS=key1,key2,key3 # Comma-separated API keys
SHODH_HOST=127.0.0.1 # Bind address (default: localhost)
SHODH_PORT=3030 # Port (default: 3030)
SHODH_MEMORY_PATH=/var/lib/shodh # Data directory
SHODH_REQUEST_TIMEOUT=60 # Request timeout in seconds
SHODH_MAX_CONCURRENT=200 # Max concurrent requests
SHODH_CORS_ORIGINS=https://app.example.com
services:
shodh-memory:
image: varunshodh/shodh-memory:latest
environment:
- SHODH_ENV=production
- SHODH_HOST=0.0.0.0
- SHODH_API_KEYS=${SHODH_API_KEYS}
volumes:
- shodh-data:/data
networks:
- internal
caddy:
image: caddy:latest
ports:
- "443:443"
volumes:
- ./Caddyfile:/etc/caddy/Caddyfile
networks:
- internal
volumes:
shodh-data:
networks:
internal:
The server binds to 127.0.0.1 by default. For network deployments, place behind a reverse proxy:
memory.example.com {
reverse_proxy localhost:3030
}
| Project | Description | Author |
|---------|-------------|--------|
| [SHODH