by aayoawoyemi
Local-first persistent agentic memory powered by Recursive Memory Harness (RMH). Open source must win.
```shell
# Add to your Claude Code skills
git clone https://github.com/aayoawoyemi/Ori-Mnemos
```

Open-source persistent memory infrastructure for AI agents.
Ori implements human cognition as mathematical models on a knowledge graph. Activation decay from ACT-R. Spreading activation along wiki-link edges. Hebbian co-occurrence from retrieval patterns. Reinforcement learning on retrieval itself. Recursive graph traversal with sub-question decomposition. The system learns what matters, forgets what doesn't, and optimizes its own retrieval pipeline.
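The two core mechanisms above can be sketched in a few lines. This is an illustrative TypeScript sketch of ACT-R base-level activation decay and one hop of spreading activation along link edges; the type and function names (`Note`, `baseActivation`, `spread`) and the decay and spread parameters are assumptions for exposition, not Ori's actual API.

```typescript
// A note in the knowledge graph: past retrieval timestamps drive decay,
// wiki-links drive spreading activation.
type Note = { id: string; retrievals: number[]; links: string[] };

// ACT-R base-level learning: A_i = ln( sum_j (now - t_j)^-d ), d ~ 0.5.
// Frequently and recently retrieved notes stay "hot"; others decay.
function baseActivation(note: Note, now: number, d = 0.5): number {
  const sum = note.retrievals
    .map((t) => Math.pow(now - t, -d))
    .reduce((a, b) => a + b, 0);
  return Math.log(sum);
}

// One hop of spreading activation: each note passes a fraction w of its
// activation to its wiki-linked neighbours, split evenly across links.
function spread(
  notes: Map<string, Note>,
  act: Map<string, number>,
  w = 0.3
): Map<string, number> {
  const out = new Map(act);
  for (const [id, a] of act) {
    const links = notes.get(id)?.links ?? [];
    for (const target of links) {
      out.set(target, (out.get(target) ?? 0) + (w * a) / links.length);
    }
  }
  return out;
}
```

In this model, a note retrieved often and recently has high base activation, and that activation bleeds into its neighbours, which is what lets multi-hop questions surface notes that were never retrieved directly.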
Persistent memory across sessions, clients, and machines. Zero-infrastructure retrieval that matches, and in several cases strongly outperforms, incumbents on benchmarks, while you own every byte of your data. Markdown on disk. Wiki-links as graph edges. Git as version control. No database lock-in, no cloud dependency, no vendor capture.
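"Wiki-links as graph edges" means the graph structure lives in the markdown itself. A minimal sketch of extracting `[[wiki-link]]` edges from a note, assuming a hypothetical `extractEdges` helper rather than Ori's actual parser:

```typescript
// Scan a markdown note for [[wiki-links]] and emit (source, target) edges.
// Illustrative only: real link syntax may also carry aliases or headings.
function extractEdges(noteId: string, markdown: string): Array<[string, string]> {
  const edges: Array<[string, string]> = [];
  const re = /\[\[([^\]]+)\]\]/g;
  let m: RegExpExecArray | null;
  while ((m = re.exec(markdown)) !== null) {
    edges.push([noteId, m[1]]);
  }
  return edges;
}
```

Because the edges are plain text, the whole graph diffs, merges, and versions under git like any other markdown repo.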
v0.5.0 · npm · Paper · Apache-2.0
Head-to-head against Mem0, the most widely adopted agent memory system. HotpotQA tests multi-hop reasoning — questions that require connecting information across multiple documents to answer.
| Metric | Ori Mnemos | Mem0 | Δ |
|--------|:----------:|:----:|:-:|
| Recall@5 | 90% | 29% | 3.1× |
| F1 Score | 0.68 | 0.33 | 2.1× |
| Latency (avg) | 120 ms | 1,140 ms | 9.5× faster |
| Infrastructure | Markdown + SQLite | Redis + Qdrant + cloud | — |
Ori retrieves the right information 3× more often, scores 2× higher on answer quality, and does it 9.5× faster — on markdown files with a SQLite index. No cloud services. No API keys. Full evaluation code in .
Evaluated on LoCoMo (Maharana et al., 2024) — the standard benchmark for long-term conversational memory. 10 conversations, 695 questions across single-hop, multi-hop, and temporal reasoning.
| System | Single-hop | Multi-hop | Infrastructure |
|--------|:----------:|:---------:|----------------|
| MemoryBank | 5.00 | — | Custom server |
| ReadAgent | 9.15 | — | LLM-based |
| A-Mem | 20.76 | — | Cloud APIs |
| MemGPT / Letta | 26.65 | — | PostgreSQL + cloud |
| LangMem | 35.51 | 26.04 | Cloud APIs |
| OpenAI Memory | 34.30 | — | OpenAI proprietary |
| Zep | 35.74 | 19.37 | PostgreSQL + cloud |
| Mem0 | 38.72 | 28.64 | Redis + Qdrant + cloud |
| Ori Mnemos | 37.69 | 29.31 | Markdown on disk |
Baseline numbers from Mem0 paper (Table 1). Ori evaluated with GPT-4.1-mini for answer generation, BM25 + embedding + PageRank fusion for retrieval.
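The fusion step can be sketched simply. The document states that BM25, embedding similarity, and PageRank are fused for retrieval; reciprocal-rank fusion below is an illustrative choice of combiner, not necessarily Ori's exact formula, and `fuseRankings` is a hypothetical name.

```typescript
// Reciprocal-rank fusion: each signal contributes 1 / (k + rank) per
// document, so a document ranked well by several signals rises to the top.
function fuseRankings(rankings: string[][], k = 60): string[] {
  const scores = new Map<string, number>();
  for (const ranking of rankings) {
    ranking.forEach((doc, rank) => {
      scores.set(doc, (scores.get(doc) ?? 0) + 1 / (k + rank + 1));
    });
  }
  return [...scores.entries()]
    .sort((a, b) => b[1] - a[1])
    .map(([doc]) => doc);
}
```

Rank-based fusion like this needs no score normalisation across signals, which matters when combining a lexical score (BM25), a cosine similarity, and a graph centrality on very different scales.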
More benchmarks coming — including LoCoMo-Plus (Level-2 cognitive memory) and adversarial refusal evaluation.
```shell
npm install -g ori-memory
ori init my-agent
cd my-agent
```
Connect to your a...