by syncable-dev
The missing memory layer for coding agents
# Add to your Claude Code skills
```bash
git clone https://github.com/syncable-dev/memtrace-public
```

Waitlist & Early Access — Memtrace is currently in a private beta. We are slowly rolling out access to ensure stability. You must join the waitlist at memtrace.io to use the product right now.
Join the discussion, ask questions, and follow beta updates in Discord.
Core indexing and structural search are stable. Temporal features (evolution scoring, timeline replay) are functional but may have rough edges. Report issues here.
🔒 Privacy — Memtrace runs entirely on your machine. Your source code never leaves it. All parsing, graph construction, embedding generation, and querying happens locally. The only network traffic is license validation, aggregate usage counts (total nodes/edges — no code, no file paths, no symbol names), and opt-out telemetry for crashes / errors / app-start events (sanitised — no source, no file contents, no symbol names). See PRIVACY.md and TELEMETRY.md for the full breakdowns. Disable telemetry with `MEMTRACE_TELEMETRY=off`.
Memtrace gives coding agents something they've never had: structural memory. Not vector similarity. Not semantic chunking. A real knowledge graph compiled from your codebase's AST — where every function, class, interface, and API endpoint exists as a node with deterministic, typed relationships.
Index once. Every agent query after that resolves through graph traversal — callers, callees, implementations, imports, blast radius, temporal evolution — in milliseconds, with zero token waste.
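To make that concrete, here is a minimal Python sketch of the node-and-typed-edge model described above, with a callers-of lookup resolved by graph traversal. The names (`Node`, `Edge`, `CodeGraph.callers_of`) are illustrative only, not Memtrace's actual schema or API:

```python
from collections import defaultdict
from dataclasses import dataclass

# Illustrative node and edge model; NOT Memtrace's actual schema.
@dataclass(frozen=True)
class Node:
    id: str     # e.g. "src/models.py::Model.delete"
    kind: str   # "function" | "class" | "interface" | "endpoint"

@dataclass(frozen=True)
class Edge:
    src: str    # source node id (the caller, importer, container, ...)
    dst: str    # target node id
    kind: str   # "CALLS" | "IMPLEMENTS" | "IMPORTS" | "EXPORTS" | "CONTAINS"

class CodeGraph:
    def __init__(self, edges: list[Edge]):
        # Reverse index: (edge kind, target) -> list of source node ids.
        self._incoming: dict[tuple[str, str], list[str]] = defaultdict(list)
        for e in edges:
            self._incoming[(e.kind, e.dst)].append(e.src)

    def callers_of(self, node_id: str) -> list[str]:
        # A deterministic index hit, not a similarity search.
        return self._incoming[("CALLS", node_id)]

edges = [
    Edge("src/views.py::DeleteView.post", "src/models.py::Model.delete", "CALLS"),
    Edge("tests/test_models.py::test_delete", "src/models.py::Model.delete", "CALLS"),
]
print(CodeGraph(edges).callers_of("src/models.py::Model.delete"))
```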
Local machine requirements — Memtrace indexes and embeds your code locally, so the first run is CPU/RAM intensive. Minimum: 4 CPU cores, 8 GB RAM, 5 GB free disk, Node.js 18+, and Git. Recommended for large monorepos: 8+ CPU cores, 16–32 GB RAM, and 10–20 GB free disk. No GPU required.
```bash
npm install -g memtrace   # binary + 12 skills + MCP server — one command
memtrace start            # launches the graph database and auto-indexes the current project
```
That's it. Run `memtrace start` from your project root — it spins up the graph database and kicks off indexing. Claude and Cursor (v2.4+) pick up the skills and MCP tools automatically.
Demo video: https://github.com/user-attachments/assets/e7d6a1e9-c912-4e65-a421-bd0256dffa5a
Built-in UI at `localhost:3030` — explore your graph, trace dependencies, spot dead code, and visualize architecture at a glance.
Good code intelligence tools already exist. GitNexus and CodeGrapherContext build AST-based graphs with symbol relationships, and they work well for understanding what's in your codebase right now.
Memtrace is a bi-temporal episodic structural knowledge graph. It builds on that same AST foundation and adds two temporal dimensions, the "bi-temporal" in the name: when a relationship actually held in the codebase (valid time) and when Memtrace recorded it (transaction time).

On top of that, the structural layer is comprehensive: `CALLS`, `IMPLEMENTS`, `IMPORTS`, `EXPORTS`, `CONTAINS`. The agent doesn't just search your code. It remembers it.
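To illustrate what bi-temporal means in standard temporal-data modeling (a sketch only, not Memtrace's actual storage format; `valid_from`, `valid_to`, and `recorded_at` are hypothetical field names), each edge carries both time axes, which is what makes timeline replay possible:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical bi-temporal edge record, for illustration only.
@dataclass(frozen=True)
class TemporalEdge:
    src: str
    dst: str
    kind: str                  # CALLS / IMPLEMENTS / IMPORTS / EXPORTS / CONTAINS
    valid_from: datetime       # commit time the relationship appeared in the code
    valid_to: datetime | None  # commit time it disappeared (None = still true)
    recorded_at: datetime      # when the indexer learned about it

def edges_as_of(edges: list[TemporalEdge], t: datetime) -> list[TemporalEdge]:
    """Replay the graph as it existed in the codebase at time t."""
    return [e for e in edges
            if e.valid_from <= t and (e.valid_to is None or t < e.valid_to)]

e = TemporalEdge(
    src="src/views.py::DeleteView.post",
    dst="src/models.py::Model.delete",
    kind="CALLS",
    valid_from=datetime(2024, 1, 1, tzinfo=timezone.utc),
    valid_to=None,
    recorded_at=datetime(2024, 1, 2, tzinfo=timezone.utc),
)
print(edges_as_of([e], datetime(2024, 6, 1, tzinfo=timezone.utc)))
```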
Five sub-benches across three corpora (mempalace, Django, a 21-file scratch fixture). Every system runs on the same machine, against the same ground truth, using the same adapter contract. Ground truth comes from Python's stdlib ast, the pyright LSP, or deterministic edit scripts — never from any tool's own index — so no system gets a home-field advantage in the dataset itself.
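For a sense of what an ast-derived ground truth looks like, here is a minimal sketch that collects (caller, callee) pairs from a module using only the standard library; the real harness under benchmarks/ is more involved:

```python
import ast

def call_edges(source: str) -> set[tuple[str, str]]:
    """Extract (caller, callee) name pairs from one module via the stdlib AST."""
    tree = ast.parse(source)
    edges = set()
    for fn in ast.walk(tree):
        if isinstance(fn, (ast.FunctionDef, ast.AsyncFunctionDef)):
            # Calls inside nested defs are attributed to the enclosing
            # function too; fine for a sketch.
            for node in ast.walk(fn):
                if isinstance(node, ast.Call):
                    callee = node.func
                    if isinstance(callee, ast.Name):
                        edges.add((fn.name, callee.id))
                    elif isinstance(callee, ast.Attribute):
                        edges.add((fn.name, callee.attr))
    return edges

print(call_edges("def a():\n    b()\n\ndef b():\n    pass\n"))
# {('a', 'b')}
```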
Full reproduction instructions and per-bench numbers: benchmarks/README.md. The frozen exact-symbol harness is benchmarks/fair/; the extended harness covering all five benches is benchmarks/suite/.
Summary across the five benches (🟢 = Memtrace wins declared primary axis, 🟡 = Memtrace trails):
| # | Bench | Primary axis | Memtrace | Runner-up | Δ |
|:-:|:------|:-------------|---------:|:----------|---:|
| 0 | Exact-symbol lookup (1,000 queries, mempalace) | acc_at_1_pct | 96.7% 🟢 | ChromaDB 62.3% | 1.55× |
| 1 | Token economy (same 1,000) | acc_at_1_per_kilo_token | 495.52 🟢 | GitNexus 126.90 | 3.90× |
| 2 | Intent retrieval (100 NL PR titles, Django) | recall_at_10 | 58.6% 🟡 | ChromaDB 66.8% | −8.2 pp |
| 3 | Graph queries (mempalace, pyright GT) | callers_of.recall | 0.851 🟢 | CGC 0.584 | 1.46× |
| 3 | Graph queries (Django, pyright GT) | callers_of.recall | 0.816 🟢 | GitNexus 0.053 | 15.4× |
| 4 | Incremental freshness (50 edits) | time_to_queryable_p95 | 42.5 ms 🟢 | CGC 613.7 ms | 14.4× faster |
Memtrace wins 5 of 6 and trails on 1 (Bench #2 — ChromaDB is the expected winner on semantic NL queries). Bench #5 (agent-level) is skeleton-only and gated behind `RUN_AGENT_BENCH=1`.
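To make the Bench #4 metric concrete, here is a minimal sketch of how a time-to-queryable measurement could work, assuming hypothetical `apply_edit` and `is_fresh` hooks rather than the actual harness in benchmarks/suite/:

```python
import time
import statistics

def time_to_queryable(apply_edit, is_fresh, timeout_s=10.0) -> float:
    """Apply one edit, poll until a query reflects it, return latency in ms."""
    apply_edit()
    start = time.perf_counter()
    while not is_fresh():
        if time.perf_counter() - start > timeout_s:
            raise TimeoutError("index never caught up")
        time.sleep(0.001)
    return (time.perf_counter() - start) * 1000

# p95 over 50 edits, as in Bench #4 (the edit and freshness hooks are stand-ins).
latencies = [time_to_queryable(lambda: None, lambda: True) for _ in range(50)]
p95 = statistics.quantiles(latencies, n=20)[-1]   # 95th percentile cut point
print(f"time_to_queryable_p95 = {p95:.2f} ms")
```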
Numbers from isolated per-adapter processes — full methodology in BENCHMARKS-v0.3.22.md.
| Tool | Coverage | Acc@1 | Acc@10 | Prec@10 | Avg lat | RSS | Tokens |
|:-----|---------:|------:|-------:|--------:|--------:|----:|-------:|
| Memtrace (MemDB) | 100.0% | 96.6% | 99.7% | 0.967 | 0.07 ms | 26.2 MB | 383 |
| GitNexus (eval-server) | 100.0% | 97.0% | 100% | 0.702 | 8.95 ms | 31.0 MB | 90 |
| ChromaDB (all-MiniLM-L6-v2) | 100.0% | 62.4% | 87.8% | 0.188 | 54.6 ms | 1,060 MB | 1,937 |
| CodeGrapherContext (CLI) | 100.0% | 7.9% | 99.9% | 0.521 | 2,020 ms | ~150 MB | 217 |
What the numbers say, read fairly:
Memtrace ranks callers by `direct_callers_count`, so `Model.delete` precedes `tests.fake_delete`. Right tradeoff for agents, small benchmark penalty.
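A minimal sketch of that ranking choice, with hypothetical result records and field names:

```python
# Hypothetical search results; direct_callers_count is the ranking signal.
results = [
    {"symbol": "tests.fake_delete", "direct_callers_count": 1},
    {"symbol": "Model.delete", "direct_callers_count": 14},
]

# Heavily-called production symbols outrank test fakes with few callers.
ranked = sorted(results, key=lambda r: r["direct_callers_count"], reverse=True)
print([r["symbol"] for r in ranked])   # ['Model.delete', 'tests.fake_delete']
```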