by aayoawoyemi
Local-first persistent agentic memory powered by Recursive Memory Harness (RMH). Open source must win.
```bash
# Add to your Claude Code skills
git clone https://github.com/aayoawoyemi/Ori-Mnemos
```

Open-source persistent memory infrastructure for AI agents.
Ori implements human cognition as mathematical models on a knowledge graph. Activation decay from ACT-R. Spreading activation along wiki-link edges. Hebbian co-occurrence from retrieval patterns. Reinforcement learning on retrieval itself. Recursive graph traversal with sub-question decomposition. The system learns what matters, forgets what doesn't, and optimizes its own retrieval pipeline.
Persistent memory across sessions, clients, and machines. Zero-infrastructure retrieval that matches and in several cases strongly outperforms incumbents on benchmarks — and you own every byte of your data. Markdown on disk. Wiki-links as graph edges. Git as version control. No database lock-in, no cloud dependency, no vendor capture.
v0.5.0 · npm · Paper · Apache-2.0
Head-to-head against Mem0, the most widely adopted agent memory system. HotpotQA tests multi-hop reasoning — questions that require connecting information across multiple documents to answer.
| Metric | Ori Mnemos | Mem0 | Δ |
|--------|:----------:|:----:|:-:|
| Recall@5 | 90% | 29% | 3.1× |
| F1 Score | 0.68 | 0.33 | 2.1× |
| Latency (avg) | 120ms | 1,140ms | 9.5× faster |
| Infrastructure | Markdown + SQLite | Redis + Qdrant + cloud | — |
Ori retrieves the right information 3× more often, scores 2× higher on answer quality, and does it 9.5× faster — on markdown files with a SQLite index. No cloud services. No API keys. Full evaluation code in .
Evaluated on LoCoMo (Maharana et al., 2024) — the standard benchmark for long-term conversational memory. 10 conversations, 695 questions across single-hop, multi-hop, and temporal reasoning.
| System | Single-hop | Multi-hop | Infrastructure |
|--------|:----------:|:---------:|----------------|
| MemoryBank | 5.00 | — | Custom server |
| ReadAgent | 9.15 | — | LLM-based |
| A-Mem | 20.76 | — | Cloud APIs |
| MemGPT / Letta | 26.65 | — | PostgreSQL + cloud |
| LangMem | 35.51 | 26.04 | Cloud APIs |
| OpenAI Memory | 34.30 | — | OpenAI proprietary |
| Zep | 35.74 | 19.37 | PostgreSQL + cloud |
| Mem0 | 38.72 | 28.64 | Redis + Qdrant + cloud |
| Ori Mnemos | 37.69 | 29.31 | Markdown on disk |
Baseline numbers from Mem0 paper (Table 1). Ori evaluated with GPT-4.1-mini for answer generation, BM25 + embedding + PageRank fusion for retrieval.
More benchmarks coming — including LoCoMo-Plus (Level-2 cognitive memory) and adversarial refusal evaluation.
```bash
npm install -g ori-memory
ori init my-agent
cd my-agent
```
Connect to your agent:
```bash
# Full adapters — auto-orient at session start, capture at session end
ori bridge claude-code --vault ~/brain   # hooks + MCP + CLAUDE.md
ori bridge hermes --vault ~/brain        # native plugin + MCP + HERMES.md

# MCP-only adapters — tools available, no lifecycle automation
ori bridge cursor --vault ~/brain        # .cursor/mcp.json
ori bridge codex --vault ~/brain         # ~/.codex/config.toml

# Any MCP client
ori bridge generic --vault ~/brain       # prints config for manual setup
```
Claude Code and Hermes Agent get full lifecycle integration — the agent orients at session start, captures insights at session end, and validates notes on write. Cursor, Codex, and other MCP clients get access to all 16 tools but manage their own session lifecycle.
Manual MCP config (works with any client that speaks MCP):
```json
{
  "mcpServers": {
    "ori": {
      "command": "ori",
      "args": ["serve", "--mcp", "--vault", "/path/to/brain"],
      "env": { "ORI_VAULT": "/path/to/brain" }
    }
  }
}
```
Start a session. The agent receives its identity automatically and begins onboarding on first run.
Ori is the first implementation of the Recursive Memory Harness (RMH) framework — a set of constraints on how persistent memory should behave for AI agents.
The core insight comes from Recursive Language Models (Zhang, Krassa & Khattab, 2026). RLM treats context not as input to be stuffed into a window, but as an environment to be navigated. The model doesn't get a bigger desk — it gets legs and walks into the library. RMH applies the same principle to persistent memory.
Three constraints define the framework:
Retrieval must follow the graph. Memory is not a flat vector store. Notes are nodes, wiki-links are edges. Retrieval walks the structure — Personalized PageRank at α=0.45, spreading activation along edges, community-aware traversal. The topology of the graph shapes what gets found.
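The graph walk can be sketched as a personalized PageRank power iteration. This is an illustrative toy, not Ori's implementation: the adjacency dict, iteration count, and the reading of α=0.45 as the edge-following probability are all assumptions.

```python
def personalized_pagerank(edges, seeds, alpha=0.45, iters=50):
    """Power iteration for personalized PageRank on a wiki-link graph.

    edges: note -> list of outgoing wiki-links
    seeds: query-matched notes; teleportation is biased toward them
    alpha: probability of following an edge rather than teleporting
    """
    nodes = set(edges) | {t for outs in edges.values() for t in outs}
    seed_mass = {n: (1.0 / len(seeds) if n in seeds else 0.0) for n in nodes}
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        nxt = {n: (1 - alpha) * seed_mass[n] for n in nodes}
        for src, outs in edges.items():
            if outs:
                share = alpha * rank[src] / len(outs)
                for dst in outs:
                    nxt[dst] += share
            else:
                # dangling node: hand its mass back to the seeds
                for n in nodes:
                    nxt[n] += alpha * rank[src] * seed_mass[n]
        rank = nxt
    return rank
```

Seeding at the query's entry notes means authority flows outward along wiki-links, so well-connected neighbors outrank isolated notes even when raw similarity is comparable.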
Unresolved queries must recurse. When a single retrieval pass is insufficient, the system decomposes the question into sub-questions, retrieves against each, and synthesizes. Convergence detection stops recursion when new passes stop surfacing new information. This is what ori explore does.
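A minimal sketch of the recurse-and-converge loop, assuming caller-supplied `retrieve` and `decompose` functions (both hypothetical stand-ins for Ori's internals):

```python
def explore(question, retrieve, decompose, max_depth=3):
    """Recursive retrieval with convergence detection.

    retrieve(q)  -> set of note ids for one retrieval pass
    decompose(q) -> list of sub-questions for the next pass
    Stops when a pass surfaces nothing new, or decomposition bottoms out.
    """
    seen, frontier = set(), [question]
    for _ in range(max_depth):
        found = set()
        for q in frontier:
            found |= retrieve(q)
        if not (found - seen):  # convergence: no new information surfaced
            break
        seen |= found
        frontier = [sq for q in frontier for sq in decompose(q)]
        if not frontier:
            break
    return seen
```

The convergence test is what keeps recursion cheap: depth grows only while each pass is still paying for itself in new notes.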
Every retrieval must reshape the graph. Retrieval is not read-only. Co-occurrence edges grow between notes retrieved together (Hebbian learning). Q-values update based on whether retrieved notes were actually useful. The graph learns from how it is used — every query makes the next query better.
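The Hebbian half of this can be sketched as simple co-retrieval bookkeeping; the real system additionally normalizes and decays these counts (see Retrieval Intelligence below).

```python
from collections import Counter
from itertools import combinations

def record_retrieval(cooc, counts, retrieved):
    """Hebbian bookkeeping: every pair retrieved together gains edge strength.

    cooc: Counter of frozenset({a, b}) -> co-retrieval count
    counts: Counter of note -> total retrievals (the base rate)
    """
    for note in retrieved:
        counts[note] += 1
    for a, b in combinations(sorted(retrieved), 2):
        cooc[frozenset((a, b))] += 1
```

Because the write happens on read, the graph a session leaves behind is already biased toward the associations that session actually exercised.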
Most memory systems treat retrieval as search. RMH treats retrieval as navigation, recursion, and learning — on a graph that evolves with every session.
Read the full paper: Introducing Recursive Memory Harness
Persistent identity. Agent state — name, personality, goals, methodology — is stored in plain markdown and auto-injected at session start via MCP instructions. Identity survives client switches, machine migrations, and model changes without reconfiguration.
Knowledge graph. Every [[wiki-link]] is a directed edge. PageRank authority, Louvain community detection, betweenness centrality, bridge detection, orphan and dangling link analysis. Structure is queryable through MCP tools and CLI.
Three memory spaces. Identity (self/) decays at 0.1x — barely fades. Knowledge (notes/) decays at 1.0x — lives and dies by relevance. Operations (ops/) decays at 3.0x — burns hot and clears itself. The separation is architectural, not cosmetic.
Cognitive forgetting. Notes decay using ACT-R base-level learning equations, not arbitrary TTLs. Used notes stay alive. Their neighbors stay warm through spreading activation along wiki-link edges. Structurally critical nodes are protected by Tarjan's algorithm. ori prune analyzes the full activation topology before archiving anything.
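The base-level learning equation is B = ln(Σ_j t_j^(−d)) over the times since each past access. A toy sketch, assuming the per-space rates above act as multipliers on the decay exponent d (that mapping is an assumption):

```python
import math

def base_level_activation(access_times, now, d=0.5, space_multiplier=1.0):
    """ACT-R base-level learning: B = ln(sum_j (now - t_j) ** -d).

    d=0.5 is the classic ACT-R decay exponent; space_multiplier stands in
    for the 0.1x (self/), 1.0x (notes/), 3.0x (ops/) rates described above.
    """
    effective_d = d * space_multiplier
    return math.log(sum((now - t) ** -effective_d for t in access_times))
```

Each access adds a term, so a frequently used note accumulates activation that a single old access can never match, and the same history fades far faster in ops/ than in self/.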
Four-signal fusion. Semantic embeddings, BM25 keyword matching, personalized PageRank, and associative warmth fused through score-weighted Reciprocal Rank Fusion. Intent classification (episodic, procedural, semantic, decision) shifts signal weights automatically.
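Score-weighted RRF can be sketched as follows; the signal names, weights, and k=60 (a common RRF default) are illustrative, not Ori's exact constants:

```python
def weighted_rrf(rankings, weights, k=60):
    """Score-weighted Reciprocal Rank Fusion across retrieval signals.

    rankings: signal name -> note ids ordered best-first
    weights: signal name -> weight (e.g. shifted by intent classification)
    """
    fused = {}
    for signal, ranked in rankings.items():
        w = weights.get(signal, 1.0)
        for pos, note in enumerate(ranked, start=1):
            fused[note] = fused.get(note, 0.0) + w / (k + pos)
    return sorted(fused, key=fused.get, reverse=True)
```

Rank-based fusion sidesteps the problem that cosine similarity, BM25, and PageRank live on incomparable scales: only positions matter, and the intent-driven weights decide which signal's positions count most.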
Dampening pipeline. Three post-fusion stages validated by ablation testing: gravity dampening halves cosine-similarity ghosts with zero query-term overlap, hub dampening applies a P90 degree penalty to prevent map notes from dominating results, and resolution boost surfaces actionable knowledge (decisions, learnings) over passive observation.
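A toy sketch of the three stages. Only the 0.5 gravity factor comes from the description above; the hub penalty and boost multipliers are made-up placeholders:

```python
def dampen(candidates, query_terms, degrees, boost_types=("decision", "learning")):
    """Three post-fusion stages: gravity, hub penalty, resolution boost.

    candidates: dicts with 'id', 'score', 'terms', 'type'
    degrees: note id -> wiki-link degree (for the P90 hub cutoff)
    """
    ranked = sorted(degrees.values())
    p90 = ranked[int(0.9 * (len(ranked) - 1))]
    out = []
    for c in candidates:
        score = c["score"]
        if not (query_terms & c["terms"]):
            score *= 0.5            # gravity: similarity ghost, halve it
        if degrees.get(c["id"], 0) > p90:
            score *= 0.7            # hub penalty (placeholder factor)
        if c["type"] in boost_types:
            score *= 1.2            # resolution boost (placeholder factor)
        out.append({**c, "score": score})
    return sorted(out, key=lambda c: c["score"], reverse=True)
```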
Learning retrieval (v0.4.0). Three intelligence layers improve retrieval quality from session to session, synthesized from 63 research sources. See Retrieval Intelligence below.
Capture-promote pipeline. ori add captures to inbox. ori promote classifies (idea, decision, learning, insight, blocker, opportunity), detects links, suggests areas. 50+ heuristic patterns. Optional LLM enhancement.
Zero cloud dependencies. Local embeddings via all-MiniLM-L6-v2 running in-process. SQLite for vectors and intelligence state. Everything on your filesystem. Zero API keys required for core functionality.
Three learning layers that improve retrieval quality over time without manual tuning. Synthesized from 63 research sources across reinforcement learning, information retrieval, cognitive science, and bandit theory.
Notes earn Q-values from session outcomes via exponential moving average updates. Over time, genuinely useful notes rise and noise sinks.
| Signal | Reward | What triggers it |
|--------|--------|-----------------|
| Forward citation | +1.0 | You [[link]] a retrieved note in new content |
| Update after retrieval | +0.5 | You edit a note you just retrieved |
| Downstream creation | +0.6 | You create a new note after retrieving |
| Within-session re-recall | +0.4 | Same note surfaces across different queries |
| Dead end (top-3, no follow-up) | −0.15 | Retrieved in top 3 but nothing follows |
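The EMA update itself is one line; the reward constants mirror the table above, while α is an illustrative learning rate, not Ori's actual value:

```python
REWARDS = {  # mirrors the signal table above
    "forward_citation": 1.0,
    "update_after_retrieval": 0.5,
    "downstream_creation": 0.6,
    "re_recall": 0.4,
    "dead_end": -0.15,
}

def update_q(q, reward, alpha=0.2):
    """Exponential moving average toward the observed reward."""
    return (1 - alpha) * q + alpha * reward
```

A note that keeps earning citations drifts up toward 1.0; a note that keeps dead-ending drifts negative, and the EMA makes recent sessions count more than ancient ones.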
After RRF fusion, Phase B reranks the candidate set with a lambda blend of similarity score and learned Q-value, plus a UCB-Tuned exploration bonus that ensures under-retrieved notes still get discovered. Exposure-aware correction prevents the same notes from dominating every session. A cumulative bias cap (MAX=3.0, compression=0.3) prevents runaway score inflation.
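One way such a rerank score might be assembled, using the constants quoted above; the lambda blend, the UCB-Tuned form, and reading the cap as soft compression above MAX are all assumptions:

```python
import math

def rerank_score(sim, q_value, n_i, var_i, total_n,
                 lam=0.7, max_bias=3.0, comp=0.3):
    """Phase-B rerank sketch: lambda blend + UCB-Tuned exploration + bias cap.

    sim: fused similarity; q_value: learned Q; n_i: times this note was
    retrieved; var_i: reward variance; total_n: total retrievals overall.
    lam is an illustrative blend weight; max_bias/comp follow the quoted
    MAX=3.0, compression=0.3 constants.
    """
    if n_i == 0:
        bonus = 1.0  # never-retrieved notes get a full exploration bonus
    else:
        v = var_i + math.sqrt(2 * math.log(total_n) / n_i)
        bonus = math.sqrt(math.log(total_n) / n_i * min(0.25, v))
    bias = q_value + bonus
    if bias > max_bias:
        bias = max_bias + comp * (bias - max_bias)  # compress runaway bias
    return lam * sim + (1 - lam) * bias
```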
Notes that are retrieved together grow edges between them — Hebbian learning on the knowledge graph. Edge weights are computed using NPMI normalization (genuine association beyond base rate), GloVe power-law frequency scaling, and Ebbinghaus decay with strength accumulation (frequently co-retrieved pairs decay slower).
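NPMI itself is compact: it rescales pointwise mutual information into [−1, 1], so 0 means the pair co-occurs exactly at chance and 1 means they always appear together. A sketch with the decay and frequency-scaling terms omitted:

```python
import math

def npmi(cooc_ab, count_a, count_b, total):
    """Normalized PMI: genuine association beyond base rate.

    cooc_ab: sessions retrieving a and b together; count_a / count_b:
    sessions retrieving each note; total: all sessions observed.
    """
    p_ab = cooc_ab / total
    p_a, p_b = count_a / total, count_b / total
    pmi = math.log(p_ab / (p_a * p_b))
    return pmi / -math.log(p_ab)
```

Normalizing by base rate is what stops hub notes, which co-occur with everything, from accumulating spuriously strong edges.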
Per-node Turrigiano homeostasis