by doobidoo
Open-source persistent memory for AI agent pipelines (LangGraph, CrewAI, AutoGen) and Claude. REST API + knowledge graph + autonomous consolidation.
```shell
# Add to your Claude Code skills
git clone https://github.com/doobidoo/mcp-memory-service
```

Open-source memory backend for multi-agent systems. Agents store decisions, share causal knowledge graphs, and retrieve context in 5ms, with no cloud lock-in or API costs.
Works with LangGraph · CrewAI · AutoGen · any HTTP client · Claude Desktop
| Without mcp-memory-service | With mcp-memory-service |
|---|---|
| Each agent run starts from zero | Agents retrieve prior decisions in 5ms |
| Memory is local to one graph/run | Memory is shared across all agents and runs |
| You manage Redis + Pinecone + glue code | One self-hosted service, zero cloud cost |
| No causal relationships between facts | Knowledge graph with typed edges (causes, fixes, contradicts) |
| Context window limits create amnesia | Autonomous consolidation compresses old memories |
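To make the "typed edges" row concrete, here is a minimal in-memory sketch of a knowledge graph whose edges carry the relation types named in the table (causes, fixes, contradicts). This illustrates the concept only; it is not the service's internal data model.

```python
# Illustrative typed-edge knowledge graph; not mcp-memory-service internals.
from collections import defaultdict

RELATIONS = {"causes", "fixes", "contradicts"}

class KnowledgeGraph:
    def __init__(self):
        # fact -> list of (relation, related fact)
        self.edges = defaultdict(list)

    def link(self, src, relation, dst):
        assert relation in RELATIONS, f"unknown relation: {relation}"
        self.edges[src].append((relation, dst))

    def related(self, src, relation):
        # All facts linked to `src` by the given relation type
        return [d for r, d in self.edges[src] if r == relation]

g = KnowledgeGraph()
g.link("rate limit exceeded", "causes", "429 responses")
g.link("exponential backoff", "fixes", "429 responses")
print(g.related("rate limit exceeded", "causes"))  # ['429 responses']
```

Typed edges let an agent ask directed questions ("what fixes X?") instead of relying on plain similarity search.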
Key capabilities for agent pipelines:

- `X-Agent-ID` header: auto-tag memories by agent identity for scoped retrieval
- `conversation_id`: bypass deduplication for incremental conversation storage

```shell
pip install mcp-memory-service
MCP_ALLOW_ANONYMOUS_ACCESS=true memory server --http
# REST API running at http://localhost:8000
```
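Since `conversation_id` bypasses deduplication, each turn of a conversation can be stored as its own memory. A minimal sketch of building such incremental payloads, reusing the `content`/`tags` body shape from the `/api/memories` example in this README (the exact placement of `conversation_id` in the body is an assumption):

```python
# Sketch: incremental conversation memories sharing one conversation_id,
# so each turn is stored rather than deduplicated. Field placement of
# conversation_id is an assumption, not confirmed API shape.
import uuid

def conversation_payloads(turns, tags=None):
    conv_id = str(uuid.uuid4())  # one id shared by every turn
    return [
        {
            "content": turn,
            "tags": tags or [],
            "conversation_id": conv_id,
        }
        for turn in turns
    ]

payloads = conversation_payloads(
    ["User asked about rate limits", "Agent replied: 100 req/min"],
    tags=["conversation"],
)
```

Each payload could then be POSTed to `/api/memories` as shown below; the shared id tells the service the turns belong to one conversation.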
```python
import asyncio
import httpx

BASE_URL = "http://localhost:8000"

# Store a memory; the X-Agent-ID header auto-tags it by agent identity
# ("planner" here is an example agent name).
async def store_memory():
    async with httpx.AsyncClient() as client:
        await client.post(f"{BASE_URL}/api/memories", json={
            "content": "API rate limit is 100 req/min",
            "tags": ["api", "limits"],
        }, headers={"X-Agent-ID": "planner"})

asyncio.run(store_memory())
```
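Because memories are auto-tagged by `X-Agent-ID`, retrieval can be scoped per agent. The service's actual query parameters are not shown in this excerpt, so here is a minimal client-side sketch of what agent-scoped filtering means (the `agent_id` field name is an assumption):

```python
# Sketch of agent-scoped retrieval: filter shared memories down to one
# agent's entries. Illustrative only; the agent_id field name and the
# service's real query API are assumptions.
def scoped(memories, agent_id):
    return [m for m in memories if m.get("agent_id") == agent_id]

memories = [
    {"content": "API rate limit is 100 req/min", "agent_id": "planner"},
    {"content": "Deploy failed on staging", "agent_id": "executor"},
]
assert scoped(memories, "planner") == [memories[0]]
```

Scoping keeps each agent's context small while the underlying store stays shared across all agents and runs.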