by yantrikos
Cognitive memory database for AI agents — consolidates duplicates, detects contradictions, fades stale memories via temporal decay. Rust, AGPL, ships as library / MCP server / HTTP cluster.
# Add to your Claude Code skills
git clone https://github.com/yantrikos/yantrikdb-server

A memory database that forgets, consolidates, and detects contradictions.
Vector databases store memories. They don't manage them. After 10,000 memories, recall quality degrades because there's no consolidation, no forgetting, no conflict resolution. Your AI agent just gets noisier.
YantrikDB is different. It's a cognitive memory engine — embed it, run it as a server, or connect via MCP. It thinks about what it stores.
The bigger picture: YantrikDB is the memory layer being built on the road to YantrikOS — an AI-native operating system where agents are first-class primitives, not apps on top. Memory was the bottleneck, so we're shipping it first.

| Memories | File-Based (CLAUDE.md) | YantrikDB | Token Savings | Recall Precision |
|---|---|---|---|---|
| 100 | 1,770 tokens | 69 tokens | 96% | 66% |
| 500 | 9,807 tokens | 72 tokens | 99.3% | 77% |
| 1,000 | 19,988 tokens | 72 tokens | 99.6% | 84% |
| 5,000 | 101,739 tokens | 53 tokens | 99.9% | 88% |
At 500 memories, file-based memory exceeds 32K context. At 5,000, it doesn't fit in any model — not even 200K. YantrikDB stays at ~70 tokens per query. Precision improves with more data — the opposite of context stuffing.
Reproduce: python benchmarks/bench_token_savings.py
db.record("read the SLA doc by Friday", importance=0.4, half_life=86400) # 1 day
# 24 hours later, this memory's relevance score has decayed
# 7 days later, recall stops surfacing it unless explicitly queried
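The fade behavior above follows from the half-life parameter. As a minimal sketch (the engine's real scoring blends decay with other signals, so this is an illustration, not YantrikDB's exact internal math):

```python
def decayed_score(importance: float, elapsed_s: float, half_life_s: float) -> float:
    # standard exponential half-life decay: the score halves every half_life_s seconds
    return importance * 0.5 ** (elapsed_s / half_life_s)

DAY = 86_400
print(decayed_score(0.4, 1 * DAY, DAY))  # 0.2 — half the recorded importance
print(decayed_score(0.4, 7 * DAY, DAY))  # ~0.003 — effectively invisible to recall
```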
# 20 similar memories about the same meeting
for note in meeting_notes:
    db.record(note, namespace="standup-2026-04-12")
db.think()
# → {"consolidation_count": 5} # collapsed 20 fragments into 5 canonical memories
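The collapse step can be pictured as greedy near-duplicate clustering. Below is a toy sketch using token overlap (Jaccard similarity); YantrikDB's actual consolidation presumably operates on embeddings, so treat this as shape, not implementation:

```python
def jaccard(a: str, b: str) -> float:
    # token-overlap similarity in [0, 1]; a stand-in for embedding similarity
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb)

def consolidate(notes, threshold=0.5):
    canonical = []
    for note in notes:
        # keep a note only if it isn't close to an existing canonical memory
        if not any(jaccard(note, c) >= threshold for c in canonical):
            canonical.append(note)
    return canonical

notes = [
    "standup: auth bug blocking release",
    "standup: the auth bug is blocking release",
    "deploy moved to thursday",
]
print(consolidate(notes))  # 3 fragments collapse to 2 canonical memories
```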
db.record("CEO is Alice")
db.record("CEO is Bob") # added later in another conversation
db.think()
# → {"conflicts_found": 1, "conflicts": [{"memory_a": "CEO is Alice",
# "memory_b": "CEO is Bob",
# "type": "factual_contradiction"}]}
Plus: temporal decay with configurable half-life, entity graph with relationship edges, personality derivation from memory patterns, session-aware context surfacing, multi-signal scoring (recency × importance × similarity × graph proximity).
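The multiplicative form of the multi-signal score is worth spelling out, because it explains why any one weak signal suppresses a memory. A hedged sketch, assuming all signals are normalized to [0, 1] (the engine's actual weighting is internal):

```python
def memory_score(recency: float, importance: float,
                 similarity: float, graph_proximity: float) -> float:
    # product scoring: a near-zero value on ANY axis (stale, trivial,
    # irrelevant, or disconnected) pushes the whole score toward zero
    return recency * importance * similarity * graph_proximity

fresh = memory_score(0.9, 0.8, 0.95, 0.7)  # recent, important, relevant
stale = memory_score(0.1, 0.8, 0.95, 0.7)  # same memory, barely touched lately
print(fresh, stale)
```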
YantrikDB isn't just storage with operations. The engine has a layer that makes agents feel less reactive:
- `stale` surfaces important memories that haven't been touched recently.
- `upcoming` surfaces memories with approaching deadlines.

The full cognitive architecture lives in the standalone engine repo. This server repo focuses on deployment, the HTTP API, and cluster operations.
docker run -p 7438:7438 ghcr.io/yantrikos/yantrikdb:latest
curl -X POST http://localhost:7438/v1/remember -d '{"text":"hello"}'
Single Rust binary. HTTP + binary wire protocol. 2-voter + 1-witness HA cluster via Docker Compose or Kubernetes. Per-tenant quotas, Prometheus metrics, AES-256-GCM at-rest encryption, runtime deadlock detection. See docker-compose.cluster.yml and k8s manifests.
pip install yantrikdb-mcp
Add to your MCP client config — the agent auto-recalls context, auto-remembers decisions, auto-detects contradictions. No prompting needed. See yantrikdb-mcp.
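A config entry might look like the following. This is a sketch following the common `mcpServers` convention used by MCP clients such as Claude Desktop; the `yantrikdb-mcp` command name is assumed from the pip package, so check the yantrikdb-mcp docs for the exact keys your client expects:

```json
{
  "mcpServers": {
    "yantrikdb": {
      "command": "yantrikdb-mcp"
    }
  }
}
```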
pip install yantrikdb
# or
cargo add yantrikdb
import yantrikdb
from sentence_transformers import SentenceTransformer

db = yantrikdb.YantrikDB("memory.db", embedding_dim=384)
db.set_embedder(SentenceTransformer("all-MiniLM-L6-v2"))
db.record("Alice leads engineering", importance=0.8)
db.recall("who leads the team?", top_k=3)
db.think() # consolidate, detect conflicts, derive personality
Live numbers from a 2-core LXC cluster with 1,689 memories:

| Operation | Measured |
|---|---|
| Recall p50 | 112 ms (mostly query embedding, ~100 ms) |
| Recall p99 | 190 ms |
| Batch write | 76 writes/sec |
| Engine lock acquire | <0.1 ms |
| Deep health probe | <1 ms |
For pre-computed embeddings (skip query-time embedding), recall p50 drops to ~5ms.
v0.5.13 — hardened alpha + RFC 006 Phase 0 observability telemetry shipped. The embeddable engine has been used in production by the YantrikOS ecosystem since early 2026. The network server runs live on a 3-node Proxmox cluster with multiple tenants.
A 42-task hardening sprint just completed across 8 epics:
- parking_lot mutexes everywhere, with runtime deadlock detection (caught a self-deadlock that would have taken hours to find with std::sync)

Read the maturity notes: https://yantrikdb.com/server/quickstart/#maturity
Current AI memory is:
Store everything → Embed → Retrieve top-k → Inject into context → Hope it helps.
That's not memory. That's a search engine with extra steps.
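To make that concrete, the entire naive pipeline fits in a few lines. A toy sketch with hand-made 3-d vectors standing in for real embeddings:

```python
import math

# the "store -> embed -> top-k -> inject" pattern: pure nearest-neighbor
# lookup, with no decay, consolidation, or conflict handling
store = {
    "Alice leads engineering": [1.0, 0.0, 0.0],
    "Standup moved to 9am":    [0.0, 1.0, 0.0],
    "CEO is Bob":              [0.0, 0.0, 1.0],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def naive_recall(query_vec, top_k=2):
    ranked = sorted(store, key=lambda t: cosine(store[t], query_vec), reverse=True)
    return ranked[:top_k]  # injected into context verbatim, relevant or not

print(naive_recall([0.9, 0.1, 0.0]))
# → ['Alice leads engineering', 'Standup moved to 9am']
```

Nothing here ever merges the second-best hit away, down-weights it with age, or notices it contradicts another memory; that gap is what the rest of this document is about.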
Real memory is hierarchical, compressed, contextual, self-updating, emotionally weighted, time-aware, and predictive. YantrikDB is built for that.
| Solution | What it does | What it lacks |
|----------|-------------|---------------|
| Vector DBs (Pinecone, Weaviate) | Nearest-neighbor lookup | No decay, no causality, no self-organization |
| Knowledge Graphs (Neo4j) | Structured relations | Poor for fuzzy memory, not adaptive |
| Memory Frameworks (LangChain, Mem0) | Retrieval wrappers | Not a memory architecture, just middleware |
| File-based (CLAUDE.md, memory files) | Dump everything into context | O(n) token cost, no relevance filtering |
- `record()`, `recall()`, `relate()`, not SELECT
- `Send + Sync` with internal `Mutex`/`RwLock`, safe for concurrent access

```
┌──────────────────────────────────────────────────────┐
│                   YantrikDB Engine                   │
│                                                      │
│  ┌──────────┬──────────┬──────────┬──────────┐       │
│  │  Vector  │  Graph   │ Temporal │  Decay   │       │
│  │  (HNSW)  │(Entities)│ (Events) │  (Heap)  │       │
│  └──────────┴──────────┴──────────┴──────────┘       │
│  ┌──────────┐                                        │
│  │ Key-Value│   WAL + Replication Log (CRDT)         │
│  └──────────┘                                        │
└──────────────────────────────────────────────────────┘
```
| Type | What it stores | Example |
|------|---------------|---------|
| Semantic | Facts, knowledge | "User is a software engineer at Meta" |
| Episodic | Events with context | "Had a rough day at work on Feb 20" |
| Procedural | Strategies, what worked | "Deploy with blue-green, not ro