by adoresever
graph-memory — Knowledge Graph + Memory plugin for OpenClaw. A Knowledge Graph Context Engine that extracts structured triples from conversations, compresses context by 75%, and enables cross-session experience reuse.
```shell
# Add to your Claude Code skills
git clone https://github.com/adoresever/graph-memory
```

When conversations grow long, agents lose track of what happened. graph-memory solves three problems at once. It feels like talking to an agent that learns from experience. Because it does.
58 nodes, 40 edges, 3 communities — automatically extracted from conversations. Right panel shows the knowledge graph with community clusters (GitHub ops, Bilibili MCP, session management). Left panel shows the agent using the `gm_stats` and `gm_search` tools.
Recall now runs two parallel paths that merge results:
Community summaries are generated immediately after each community detection cycle (every 7 turns), so the generalized path is available from the first maintenance window.
The top 3 PPR-ranked nodes now pull their episodic traces into the context. The agent sees not just structured triples, but the actual dialogue that produced them — improving accuracy when reapplying past solutions.
The embedding module now uses raw fetch instead of the openai SDK, making it compatible with any OpenAI-compatible endpoint out of the box:
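A minimal sketch of what such a fetch-based call can look like. Function names and the response-parsing shape here are assumptions based on the standard OpenAI-compatible response format, not graph-memory's actual source:

```typescript
// Sketch: call an OpenAI-compatible POST {baseURL}/embeddings endpoint
// with plain fetch — no SDK dependency. All names are illustrative.
type EmbeddingConfig = {
  apiKey: string;
  baseURL: string;
  model: string;
  dimensions?: number; // some providers ignore this; omit if unset
};

// Build the URL and request init separately so the request shape is testable.
function buildEmbeddingRequest(cfg: EmbeddingConfig, input: string[]) {
  return {
    url: `${cfg.baseURL.replace(/\/$/, "")}/embeddings`,
    init: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${cfg.apiKey}`,
      },
      body: JSON.stringify({
        model: cfg.model,
        input,
        ...(cfg.dimensions ? { dimensions: cfg.dimensions } : {}),
      }),
    },
  };
}

async function embed(cfg: EmbeddingConfig, input: string[]): Promise<number[][]> {
  const { url, init } = buildEmbeddingRequest(cfg, input);
  const res = await fetch(url, init);
  if (!res.ok) throw new Error(`embeddings request failed: ${res.status}`);
  const json = await res.json();
  // OpenAI-compatible responses return { data: [{ embedding: number[] }, ...] }
  return json.data.map((d: { embedding: number[] }) => d.embedding);
}
```

Because the request is just JSON over HTTP, any provider that accepts this shape works without code changes — only `baseURL` and `model` differ.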
Any model name works — e.g. `text-embedding-v4` or `embo-01` — as long as the endpoint accepts a standard `POST /embeddings` request.

v2.0 ships a Windows installer (.exe). Download `graph-memory-installer-win-x64.exe` from Releases. The installer sets `plugins.slots.contextEngine`, adds the plugin entry, and restarts the gateway.

Benchmark: a 7-round conversation installing bilibili-mcp + login + query:
| Round | Without graph-memory (tokens) | With graph-memory (tokens) |
|-------|-------------------------------|----------------------------|
| R1    | 14,957                        | 14,957                     |
| R4    | 81,632                        | 29,175                     |
| R7    | 95,187                        | 23,977                     |
75% compression at R7 — (95,187 − 23,977) / 95,187 ≈ 75%. Red = linear growth without graph-memory. Blue = stabilized with graph-memory.
graph-memory builds a typed property graph from conversations:
- Node types: `TASK` (what was done), `SKILL` (how to do it), `EVENT` (what went wrong)
- Edge types: `USED_SKILL`, `SOLVED_BY`, `REQUIRES`, `PATCHES`, `CONFLICTS_WITH`

```
User query
│
├─ Precise path (entity-level)
│    vector/FTS5 search → seed nodes
│    → community peer expansion
│    → graph walk (N hops)
│    → Personalized PageRank ranking
│
├─ Generalized path (community-level)
│    query embedding vs community summary embeddings
│    → matched community members
│    → graph walk (1 hop)
│    → Personalized PageRank ranking
│
└─ Merge & deduplicate → final context
```
Both paths run in parallel. Precise results take priority; generalized results fill gaps from uncovered knowledge domains.
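The merge step might look like this sketch. Names like `RankedNode` and `mergeRecall` are illustrative, not the plugin's actual API; the only behavior taken from the text is "precise results take priority, generalized results fill gaps":

```typescript
// Sketch: merge two PPR-ranked result lists, deduplicating by node id.
// Precise-path results come first, so on a duplicate id the precise
// entry (and its score) wins and sets the final ordering.
interface RankedNode {
  id: string;
  score: number; // Personalized PageRank score from its own path
}

function mergeRecall(precise: RankedNode[], generalized: RankedNode[]): RankedNode[] {
  const seen = new Set<string>();
  const merged: RankedNode[] = [];
  for (const node of [...precise, ...generalized]) {
    if (!seen.has(node.id)) {
      seen.add(node.id);
      merged.push(node);
    }
  }
  return merged;
}
```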
```
Message in → ingest (zero LLM)
├─ All messages saved to gm_messages
└─ turn_index continues from DB max (survives gateway restart)

assemble (zero LLM)
├─ Graph nodes → XML with community grouping (systemPromptAddition)
├─ PPR ranking decides injection priority
├─ Episodic traces for top 3 nodes
├─ Content normalization (prevents OpenClaw content.filter crash)
└─ Keep last turn raw messages

afterTurn (async, non-blocking)
├─ LLM extracts triples → gm_nodes + gm_edges
├─ Every 7 turns: PageRank + community detection + community summaries
└─ User sends new message → extract auto-interrupted

session_end
├─ finalize (LLM): EVENT → SKILL promotion
└─ maintenance: dedup → PageRank → community detection

Next session → before_prompt_build
├─ Dual-path recall (precise + generalized)
└─ Personalized PageRank ranking → inject into context
```
Unlike global PageRank, Personalized PageRank (PPR) ranks nodes relative to your current query: the teleport step jumps back to the query's seed nodes instead of jumping uniformly across the graph, so scores measure proximity to what you just asked rather than global importance.
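The idea can be sketched with a small power-iteration implementation. This is a hedged illustration — the function, its parameters, and the dangling-node handling are assumptions, not graph-memory's actual code:

```typescript
// Sketch: Personalized PageRank via power iteration.
// `adj` must contain every node as a key (neighbors are keys too).
function personalizedPageRank(
  adj: Map<string, string[]>, // node id -> outgoing neighbor ids
  seeds: string[],            // seed nodes matched from the query
  damping = 0.85,
  iterations = 50,
): Map<string, number> {
  const nodes = [...adj.keys()];
  const seedSet = new Set(seeds);
  // Teleport mass goes only to seed nodes — this is the "personalized" part.
  const teleport = (n: string): number => (seedSet.has(n) ? 1 / seedSet.size : 0);
  let rank = new Map<string, number>(nodes.map((n) => [n, teleport(n)]));

  for (let i = 0; i < iterations; i++) {
    const next = new Map<string, number>(
      nodes.map((n) => [n, (1 - damping) * teleport(n)]),
    );
    for (const n of nodes) {
      const outs = adj.get(n) ?? [];
      const mass = damping * (rank.get(n) ?? 0);
      if (outs.length === 0) {
        // Dangling node: return its mass to the seeds.
        for (const s of seedSet) next.set(s, (next.get(s) ?? 0) + mass / seedSet.size);
      } else {
        for (const m of outs) next.set(m, (next.get(m) ?? 0) + mass / outs.length);
      }
    }
    rank = next;
  }
  return rank;
}
```

With global PageRank the teleport would spread over all nodes; concentrating it on the seeds is the only change, yet it makes the ranking query-relative.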
Download the installer from Releases:
`graph-memory-installer-win-x64.exe`
The installer handles everything: plugin installation, context engine activation, and gateway restart. After running, skip to Step 3: Configure LLM and Embedding.
Choose one of three methods:
Option A — From npm registry (recommended):
```shell
pnpm openclaw plugins install graph-memory
```
No node-gyp, no manual compilation. The SQLite driver (@photostructure/sqlite) ships prebuilt binaries — works with OpenClaw's --ignore-scripts install.
Option B — From GitHub:
```shell
pnpm openclaw plugins install github:adoresever/graph-memory
```
Option C — From source (for development or custom modifications):
```shell
git clone https://github.com/adoresever/graph-memory.git
cd graph-memory
npm install
npx vitest run   # verify 80 tests pass
pnpm openclaw plugins install .
```
This is the critical step most people miss. graph-memory must be registered as the context engine; otherwise OpenClaw will only use it for recall and won't ingest messages or extract knowledge.
Edit ~/.openclaw/openclaw.json and add plugins.slots:
```json
{
  "plugins": {
    "slots": {
      "contextEngine": "graph-memory"
    },
    "entries": {
      "graph-memory": {
        "enabled": true
      }
    }
  }
}
```
Without "contextEngine": "graph-memory" in plugins.slots, the plugin registers but the ingest / assemble / compact pipeline never fires — you'll see recall in logs but zero data in the database.
Add your API credentials inside plugins.entries.graph-memory.config:
```json
{
  "plugins": {
    "slots": {
      "contextEngine": "graph-memory"
    },
    "entries": {
      "graph-memory": {
        "enabled": true,
        "config": {
          "llm": {
            "apiKey": "your-llm-api-key",
            "baseURL": "https://api.openai.com/v1",
            "model": "gpt-4o-mini"
          },
          "embedding": {
            "apiKey": "your-embedding-api-key",
            "baseURL": "https://api.openai.com/v1",
            "model": "text-embedding-3-small",
            "dimensions": 512
          }
        }
      }
    }
  }
}
```
LLM (config.llm) — Required. Used for knowledge extraction and community summaries. Any OpenAI-compatible endpoint works. Use a cheap/fast model.
Embedding (config.embedding) — Optional but recommended. Enables semantic vector search, community-level recall, and vector dedup. Without it, falls back to FTS5 full-text search (still works, just keyword-based).
⚠️ Important: `pnpm openclaw plugins install` may reset your config. Always verify `config.llm` and `config.embedding` are present after reinstalling.
If config.llm is not set, graph-memory falls back to the ANTHROPIC_API_KEY environment variable + Anthropic API.
| Provider | baseURL | Model | dimensions |
|----------|---------|-------|------------|
| OpenAI | https://api.openai.com/v1 | text-embedding-3-small | 512 |
| Alibaba DashScope | https://dashscope.aliyuncs.com/compatible-mode/v1 | text-embedding-v4 | 1024 |
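As an illustration, the Alibaba DashScope row from the table maps onto the `config.embedding` block like this (the API key is a placeholder):

```json
"embedding": {
  "apiKey": "your-dashscope-api-key",
  "baseURL": "https://dashscope.aliyuncs.com/compatible-mode/v1",
  "model": "text-embedding-v4",
  "dimensions": 1024
}
```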