# Signet

Local-first identity, memory, and secrets for AI agents. Portable agent state across sessions, models, and harnesses.

Website · Docs · Vision · Discussions · Discord · Contributing · AI Policy

Add to your Claude Code skills:

```sh
git clone https://github.com/Signet-AI/signetai
```
## TL;DR
Signet keeps an agent's identity, memory, secrets, and skills outside any single model or harness. The harness can change. The model can change. The agent keeps its state.
Memory is ambient. Signet extracts and injects relevant context automatically, between sessions, before the next prompt starts. Your agent wakes up with the continuity it needs instead of asking you to rebuild the room by hand.
Structured memory, graph traversal, and hybrid retrieval are the substrate. The larger job is behavioral context portability: keeping the agent's accumulated understanding under the user's control.
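The extract-then-inject loop described above can be sketched in a few lines. This is an illustrative in-memory model, not Signet's actual API: a real pipeline would distill facts with an LLM pass and persist them in SQLite, but the shape is the same — distill at session end, inject before the next prompt.

```typescript
// Minimal sketch of the ambient memory loop (illustrative names, not Signet's API).
type Memory = { text: string; createdAt: number };

const store: Memory[] = [];

// "Extraction": after a session ends, distill durable facts from the transcript.
// Here we simply keep lines tagged as facts; a real system would use an LLM pass.
function extractMemories(transcript: string[]): void {
  for (const line of transcript) {
    if (line.startsWith("fact:")) {
      store.push({ text: line.slice("fact:".length).trim(), createdAt: Date.now() });
    }
  }
}

// "Injection": before the next prompt runs, prepend relevant stored context.
function injectContext(prompt: string): string {
  const relevant = store.filter((m) =>
    m.text.split(/\s+/).some((w) => prompt.toLowerCase().includes(w.toLowerCase()))
  );
  if (relevant.length === 0) return prompt;
  const header = relevant.map((m) => `- ${m.text}`).join("\n");
  return `Known context:\n${header}\n\n${prompt}`;
}

extractMemories(["hello", "fact: primary stack is bun + typescript + sqlite"]);
const augmented = injectContext("what stack am i using?");
```

The key property is that neither the user nor the agent ever issues an explicit "save memory" call; extraction and injection happen around the session boundary.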
Why teams adopt it:
Benchmark note: early LoCoMo results show 87.5% answer accuracy and 100% Hit@10 retrieval on an . Larger evaluation runs are in progress.
```sh
bun add -g signetai   # or: npm install -g signetai
signet setup          # interactive setup wizard
signet status         # confirm daemon + pipeline health
signet dashboard      # open memory + retrieval inspector
```
If you already use Claude Code, OpenCode, OpenClaw, Codex, or Hermes Agent, keep your existing harness. Signet installs under it.
Run Signet as a containerized daemon with first-party Compose assets:

```sh
cd deploy/docker
cp .env.example .env
docker compose up -d --build
```

See docs/SELF-HOSTING.md for token bootstrap, backup, and upgrade runbook details.
Run this once:

```sh
signet remember "my primary stack is bun + typescript + sqlite"
```

Then in your next session, ask your agent:

> what stack am i using for this project?

You should see continuity without manually reconstructing context. If not, inspect recall and provenance in the dashboard or run:

```sh
signet recall "primary stack"
```
Want the deeper architecture view? Jump to How it works or Architecture.
These are the product surface areas Signet is optimized around:
| Core | What it does |
|---|---|
| 🧠 Ambient memory extraction | Sessions are distilled automatically, no memory tool calls required |
| 🎯 Structured context selection | Graph traversal, hybrid search, provenance, and scoped ranking surface useful context without flooding the window |
| 💾 Session continuity | Checkpoint and transcript-backed context carried across sessions |
| 🏠 Local-first storage | Data lives on your machine in SQLite and markdown, portable by default |
| 🤝 Cross-harness runtime | Claude Code, OpenCode, OpenClaw, Codex, Pi, one shared memory substrate |
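Hybrid search, as described in the table above, merges lexical matches with vector similarity into one bounded candidate list. Signet's real pipeline runs on SQLite FTS5 and a vector index; this self-contained sketch only shows the shape of the blend and the top-k cutoff that keeps the context window from flooding.

```typescript
// Hedged sketch of hybrid retrieval: keyword score + cosine similarity,
// bounded to the top-k candidates. In-memory stand-in for FTS5 + vectors.
type Doc = { id: string; text: string; embedding: number[] };

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb) || 1);
}

// Fraction of query terms that appear in the document text (FTS stand-in).
function keywordScore(query: string, text: string): number {
  const terms = query.toLowerCase().split(/\s+/);
  const body = text.toLowerCase();
  return terms.filter((t) => body.includes(t)).length / terms.length;
}

// Blend both signals, then cap the candidate list at k entries.
function hybridSearch(docs: Doc[], query: string, queryVec: number[], k = 2) {
  return docs
    .map((d) => ({
      id: d.id,
      score: 0.5 * keywordScore(query, d.text) + 0.5 * cosine(queryVec, d.embedding),
    }))
    .sort((a, b) => b.score - a.score)
    .slice(0, k);
}

const docs: Doc[] = [
  { id: "a", text: "primary stack is bun and sqlite", embedding: [1, 0] },
  { id: "b", text: "weekend hiking notes", embedding: [0, 1] },
];
const hits = hybridSearch(docs, "bun sqlite stack", [1, 0], 1);
```

The 0.5/0.5 weighting is arbitrary here; the point is that neither signal alone decides what reaches the window.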
Use Signet if you want:
Signet may be overkill if you only need short-lived chat memory inside a single hosted assistant.
These systems improve quality and reliability of the core memory loop:
| Supporting | What it does |
|---|---|
| 📜 Lossless transcripts | Raw session history preserved alongside extracted memories |
| 🕸️ Structured retrieval substrate | Graph traversal + FTS5 + vector search produce bounded candidate context |
| 🎯 Feedback-aware ranking | Recency, provenance, importance, and dampening signals help separate useful context from repeated noise |
| 🔬 Noise filtering | Hub and similarity controls reduce low-signal memory surfacing |
| 📄 Document ingestion | Pull PDFs, markdown, and URLs into the same retrieval pipeline |
| 🖥️ CLI + Dashboard | Operate and inspect the system from terminal or web UI |
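Feedback-aware ranking combines the signals named above: fresh, important memories rise, while memories that keep surfacing without proving useful get dampened. The weights and field names below are illustrative, not Signet's actual schema.

```typescript
// Sketch of feedback-aware ranking: recency decay, importance, and a
// dampening penalty for repeatedly surfaced low-signal memories.
type Candidate = {
  id: string;
  ageDays: number;       // time since the memory was last updated
  importance: number;    // 0..1, assigned at extraction time
  surfacedCount: number; // times shown without positive feedback
};

function rankScore(c: Candidate): number {
  const recency = Math.exp(-c.ageDays / 30);          // ~monthly decay
  const dampening = 1 / (1 + 0.2 * c.surfacedCount);  // repeated noise sinks
  return (0.5 * recency + 0.5 * c.importance) * dampening;
}

function rank(cands: Candidate[]): Candidate[] {
  return [...cands].sort((a, b) => rankScore(b) - rankScore(a));
}

const ranked = rank([
  { id: "stale-noise", ageDays: 90, importance: 0.3, surfacedCount: 10 },
  { id: "fresh-fact", ageDays: 1, importance: 0.8, surfacedCount: 0 },
]);
```

A multiplicative dampening term (rather than a subtractive one) guarantees a noisy memory can never outrank itself at zero surfacings, whatever its base score.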
These extend Signet for larger deployments and custom integrations:
| Advanced | What it does |
|---|---|
| 🔐 Agent-blind secrets | Encrypted secret storage, injected at execution time, not exposed to agent text |
| 👯 Multi-agent policies | Isolated/shared/group memory visibility for multiple named agents |
| 🔄 Git sync | Identity and memory can be versioned in your own remote |
| 📦 SDK + middleware | Typed client, React hooks, and Vercel AI SDK middleware |
| 🔌 MCP aggregation | Register MCP servers once, expose across connected harnesses |
| 👥 Team controls | RBAC, token policy, and rate limits for shared deployments |
| 🏪 Ecosystem installs | Install skills and MCP servers from skills.sh and ClawHub |
| ⚖️ Apache 2.0 | Fully open source, forkable, and self-hostable |
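The agent-blind secrets row deserves a concrete illustration: the agent only ever emits a placeholder, and the runtime substitutes the real value just before execution, so plaintext secrets never enter the model's context. The placeholder syntax and vault here are hypothetical, shown only to make the flow tangible.

```typescript
// Sketch of agent-blind secret injection (placeholder syntax is hypothetical).
const vault = new Map<string, string>([["GITHUB_TOKEN", "ghp_example123"]]);

// The agent sees and emits only the placeholder form — never the value.
const agentCommand =
  "curl -H 'Authorization: Bearer {{secret:GITHUB_TOKEN}}' https://api.github.com/user";

// Resolution happens inside the runtime, immediately before execution.
function resolveSecrets(command: string): string {
  return command.replace(/\{\{secret:([A-Z0-9_]+)\}\}/g, (_, name) => {
    const value = vault.get(name);
    if (value === undefined) throw new Error(`unknown secret: ${name}`);
    return value;
  });
}

const executable = resolveSecrets(agentCommand);
```

Because substitution happens after the model's turn ends, a leaked transcript or prompt-injection attack can at worst exfiltrate the placeholder, not the credential.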
Memory quality is not just recall quality. It is governance quality.
Signet is built to support:
Signet is not a harness. It doesn't replace Claude Code, OpenClaw, OpenCode, Pi, or Hermes Agent — it runs alongside them as an enhancement. Bring the harness you already use. Signet handles the memory layer underneath it.
| Harness | Status | Integration |
|---|---|---|
| Claude Code | Supported | Hooks |
| Forge | First-party | Native runtime / reference harness |
| OpenCode | Supported | Plugin + Hooks |
| OpenClaw | Supported | Runtime plugin + NemoClaw compatible |
| Codex | Supported | Hooks + MCP server |
| Hermes Agent | Supported | Memory provider plugin |
| Pi | Supported | Extension + Hooks |
| Gemini CLI | Planned | — |
Don't see your favorite harness? File an issue and request that it be added!
LoCoMo is the standard benchmark for conversational memory systems. No standardized leaderboard exists — each system uses different judge models, question subsets, and evaluation prompts. These numbers are collected from published papers and repos.
| Rank | System | Score | Metric | Open Source | Local? | LLM at Search? |
|------|--------|-------|--------|-------------|--------|----------------|
| 1 | Kumiho | 97.5% adv, 0.565 F1 | Official F1 + adv subset | SDK open | No | Yes |
| 2 | EverMemOS | 93.05% | Judge (self-reported) | No | No | Yes |
| 3 | MemU | 92.09% | Judge | Yes | No | Yes |
| 4 | MemMachine | 91.7% | Judge | No | No | Yes |
| 5 | Hindsight | 89.6% | Judge | Yes (MIT) | No | Yes |
| 6 | SLM V3 Mode C | 87.7% | Judge | Yes (MIT) | Partial | Yes |
| 7 | Signet | 87.5% | Judge (GPT-4o) | Yes (Apache) | Yes | No |
| 8 | Zep/Graphiti | ~85% | Judge (third-party est) | Partial | No | Yes |
| 9 | Letta/MemGPT | ~83% | Judge | Yes (Apache) | No | Yes |
| 10 | Engram | 80% | Judge | Yes | No | Yes |
| 11 | SLM V3 Mode A | 74.8% | Judge | Yes (MIT) | Yes | No |
| 12 | [Mem0+Graph](https://arxiv.org/abs/25