by openmemind
Self-evolving cognitive memory and context engine for AI agents in Java. Empowering 24/7 proactive agents like OpenClaw with understanding and SOTA performance.
```bash
# Add to your Claude Code skills
git clone https://github.com/openmemind/memind
```
Memind achieves state-of-the-art results across all three benchmarks: LoCoMo, LongMemEval, and PersonaMem.
Memind is a hierarchical cognitive memory and context engine for AI agents, built natively in Java.
Instead of treating memory as a flat collection of isolated facts, Memind continuously extracts, organizes, and evolves knowledge from conversations into a structured Insight Tree.
It tackles two core problems of agent memory: flat, unstructured storage (memories remain disconnected facts with no higher-level organization) and no knowledge evolution (memories accumulate, but never consolidate into deeper understanding).
The result is a long-term memory and context layer that helps agents retain context, build structured understanding over time, and recall knowledge at multiple levels of abstraction.
The Insight Tree is Memind's core innovation. Unlike traditional memory systems that store isolated facts, Memind progressively distills knowledge through three tiers — each tier sees patterns the previous one cannot:
| Tier | Input | What it produces |
|------|-------|------------------|
| 🍃 Leaf | Grouped memory items | Insights within a single semantic group |
| 🌿 Branch | Multiple leaves | Cross-group patterns within one dimension |
| 🌳 Root | Multiple branches | Cross-dimensional insights invisible at lower levels |
Example — understanding a user named Li Wei through conversations:
🍃 Leaf (from career_background group): "Li Wei has 8 years of backend experience — 3 years at Alibaba, then led an 8-person team at a fintech company, designing a core trading system with Java 17 + Spring Cloud + Kafka."
🌿 Branch (integrating career + education + certifications): "Li Wei is a senior backend architect with deep distributed systems expertise, combining Zhejiang University CS training, large-scale Alibaba experience, and hands-on fintech system design — a well-rounded technical profile with both depth and breadth."
🌳 Root (cross-dimensional — identity × preferences × behavior): "Li Wei's preference for functional programming and high code quality (80% test coverage), combined with conservative tech adoption (requires 2+ years production validation), reveals a personality oriented toward long-term code maintainability over rapid innovation — suggesting recommendations should emphasize stability and proven patterns over cutting-edge tools."
Each tier reveals something the previous one couldn't see. Leaves know facts. Branches see patterns. Roots understand the person.
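The tier relationship above can be sketched as a plain tree: each node carries a summary and aggregates the nodes below it. This is an illustrative model only — `Insight`, `Tier`, and the methods here are hypothetical names, not Memind's actual API:

```java
import java.util.List;

public class InsightTreeSketch {
    enum Tier { LEAF, BRANCH, ROOT }

    // A node summarizes its children: leaves integrate raw memory items,
    // branches integrate leaves, roots integrate branches.
    record Insight(Tier tier, String summary, List<Insight> children) {
        static Insight leaf(String summary) {
            return new Insight(Tier.LEAF, summary, List.of());
        }
        static Insight merge(Tier tier, String summary, List<Insight> children) {
            return new Insight(tier, summary, children);
        }
        // Count nodes of a given tier in this subtree.
        long count(Tier t) {
            long self = (tier == t) ? 1 : 0;
            return self + children.stream().mapToLong(c -> c.count(t)).sum();
        }
    }

    public static void main(String[] args) {
        Insight career = Insight.leaf("8 years backend; core trading system in Java 17");
        Insight education = Insight.leaf("Zhejiang University CS");
        Insight branch = Insight.merge(Tier.BRANCH,
                "Senior backend architect with distributed-systems depth",
                List.of(career, education));
        Insight root = Insight.merge(Tier.ROOT,
                "Values long-term maintainability over rapid innovation",
                List.of(branch));
        System.out.println(root.count(Tier.LEAF)); // 2
    }
}
```

The point of the shape is that a query can be answered at any depth: raw facts from leaves, patterns from branches, or cross-dimensional conclusions from the root.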
Memind maintains separate memory scopes for comprehensive agent cognition:
| Scope | Categories | Purpose |
|-------|------------|---------|
| USER | Profile, Behavior, Event | User identity, preferences, relationships, experiences |
| AGENT | Tool, Directive, Playbook, Resolution | Tool usage experience, durable instructions, reusable workflows, resolved problem knowledge |
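As a rough mental model, the scope/category split maps naturally onto Java enums. The names below mirror the table but are illustrative, not Memind's actual types:

```java
import java.util.EnumSet;

public class MemoryScopes {
    enum Scope { USER, AGENT }

    // Each category belongs to exactly one scope, as in the table above.
    enum Category {
        PROFILE(Scope.USER), BEHAVIOR(Scope.USER), EVENT(Scope.USER),
        TOOL(Scope.AGENT), DIRECTIVE(Scope.AGENT),
        PLAYBOOK(Scope.AGENT), RESOLUTION(Scope.AGENT);

        final Scope scope;
        Category(Scope scope) { this.scope = scope; }
    }

    // All categories that live in a given scope.
    static EnumSet<Category> of(Scope s) {
        EnumSet<Category> out = EnumSet.noneOf(Category.class);
        for (Category c : Category.values()) if (c.scope == s) out.add(c);
        return out;
    }

    public static void main(String[] args) {
        System.out.println(of(Scope.AGENT)); // [TOOL, DIRECTIVE, PLAYBOOK, RESOLUTION]
    }
}
```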
| Strategy | How it works | Best for |
|----------|--------------|----------|
| Simple | Vector search + BM25 keyword matching, merged via RRF (Reciprocal Rank Fusion), with adaptive truncation | Low-latency, cost-sensitive scenarios |
| Deep | LLM-assisted query expansion, sufficiency checking, and reranking | Complex queries requiring reasoning |
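The Simple strategy's merge step, Reciprocal Rank Fusion, is easy to sketch: each retriever's ranked list contributes `1 / (k + rank)` per document, and documents ranked highly by several retrievers accumulate the most score. This is a generic RRF implementation using the common `k = 60` constant, not Memind's exact code:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class RrfFusion {
    // Fuse several ranked lists of document ids into one ranking.
    public static List<String> fuse(int k, List<List<String>> rankings) {
        Map<String, Double> scores = new HashMap<>();
        for (List<String> ranking : rankings) {
            for (int rank = 0; rank < ranking.size(); rank++) {
                // 1-based rank; each list contributes 1 / (k + rank).
                scores.merge(ranking.get(rank), 1.0 / (k + rank + 1), Double::sum);
            }
        }
        return scores.entrySet().stream()
                .sorted(Map.Entry.<String, Double>comparingByValue().reversed())
                .map(Map.Entry::getKey)
                .toList();
    }

    public static void main(String[] args) {
        List<String> vector = List.of("m1", "m2", "m3"); // vector-search order
        List<String> bm25   = List.of("m3", "m1", "m4"); // BM25 order
        System.out.println(fuse(60, List.of(vector, bm25)));
        // [m1, m3, m2, m4] — m1 wins by ranking high in both lists
    }
}
```

RRF needs no score calibration between the two retrievers, which is why it is a common choice for merging vector and keyword results.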
Retrieval admission is always enabled: blank queries, pure punctuation/symbol inputs, and pure emoji inputs return empty results before any search runs. In the standard Memory.builder() path, oversized queries are condensed by the LLM; if condensation fails or the condensed query is still invalid, retrieval returns an empty result.
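The admission gate described above amounts to rejecting queries that contain no retrievable content. A minimal sketch, assuming a simple "contains at least one letter or digit" rule — Memind's actual checks may differ:

```java
public class QueryAdmission {
    // Admit a query only if it has at least one letter or digit code point.
    // Pure whitespace, punctuation/symbols, and emoji all fail this test.
    public static boolean admits(String query) {
        if (query == null || query.isBlank()) return false;
        return query.codePoints().anyMatch(Character::isLetterOrDigit);
    }

    public static void main(String[] args) {
        System.out.println(admits("what does Li Wei prefer?")); // true
        System.out.println(admits("   "));                      // false
        System.out.println(admits("?!..."));                    // false
        System.out.println(admits("🙂🙂"));                     // false
    }
}
```

Checking code points (rather than `char`s) matters here: emoji are surrogate pairs in Java strings, and a per-`char` check would misclassify them.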
| Category | Capability | Description |
|----------|-----------|-------------|
| Extraction | Conversation Segmentation | Automatic boundary detection and segmentation for streaming messages |
| | Memory Item Extraction | Extract structured facts with deduplication across 5 categories |
| | Insight Tree Construction | Hierarchical knowledge building: Leaf → Branch → Root |
| | Foresight Prediction | Predict future user needs based on conversation patterns |
| | Tool Call Statistics | Track tool usage patterns and success rates |
| Retrieval | Simple Strategy | Vector + BM25 hybrid search with RRF fusion and adaptive truncation |
| | Deep Strategy | LLM-assisted query expansion, sufficiency checking, and reranking |
| | Intent Routing | Automatically determine whether retrieval is needed |
| | Multi-granularity | Retrieve from any Insight Tree tier based on query needs |
| Integration | Pure Java Runtime | memind-core plus plugins assembled through Memory.builder() |
| | Spring Boot Infrastructure Starters | Optional infrastructure wiring with memind-plugin-ai-spring-ai-starter and memind-plugin-jdbc-starter |
| | Plugin Architecture | Pluggable store (SQLite, MySQL) and tracing (OpenTelemetry) |
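The pluggable-store idea can be illustrated with a minimal registry pattern: store implementations register under a key (e.g. `sqlite`, `mysql`) and are resolved when the runtime is assembled. This is a generic sketch of the pattern, not Memind's actual SPI:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

public class PluginRegistrySketch {
    // Stand-in for a store plugin contract; the real interface is richer.
    interface MemoryStore { String name(); }

    private static final Map<String, Supplier<MemoryStore>> STORES = new HashMap<>();

    static void register(String key, Supplier<MemoryStore> factory) {
        STORES.put(key, factory);
    }

    // Resolve lazily, so unused backends are never constructed.
    static MemoryStore resolve(String key) {
        Supplier<MemoryStore> factory = STORES.get(key);
        if (factory == null) throw new IllegalArgumentException("no store plugin: " + key);
        return factory.get();
    }

    public static void main(String[] args) {
        register("sqlite", () -> () -> "sqlite");
        register("mysql",  () -> () -> "mysql");
        System.out.println(resolve("sqlite").name()); // sqlite
    }
}
```

The same shape extends to tracing: an OpenTelemetry plugin would register a tracer factory instead of a store factory.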
The fastest way to try Memind is Docker Compose. It starts the local
memind-server API together with the React admin UI without requiring Java,
Maven, Node.js, or pnpm on the host.
Create a local .env file in the repository root. docker-compose.yml reads these values
automatically:
```bash
# Required.
OPENAI_API_KEY=your-key

# Optional provider and model overrides.
OPENAI_BASE_URL=https://openrouter.ai/api
OPENAI_CHAT_MODEL=openai/gpt-4o-mini
OPENAI_EMBEDDING_MODEL=openai/text-embedding-3-small

# Optional. Required only when you want an external rerank provider for deep retrieval.
MEMIND_RERANK_BASE_URL=https://aihubmix.com
MEMIND_RERANK_API_KEY=
MEMIND_RERANK_MODEL=jina-reranker-v3

# Optional host ports.
MEMIND_SERVER_PORT=8366
MEMIND_UI_PORT=8080
```
OPENAI_BASE_URL, OPENAI_CHAT_MODEL, and OPENAI_EMBEDDING_MODEL are optional.
The chat and embedding model choices directly affect memory extraction, insight quality,
and retrieval quality. If your embedding provider uses a different endpoint or key from chat,
also set EMBEDDING_BASE_URL and EMBEDDING_API_KEY.
```bash
docker compose up -d --build
```
After the images are built and the containers start:
- Admin UI: http://localhost:8080
- Health check: http://localhost:8366/open/v1/health
- Open API: http://localhost:8366/open/v1
- Admin API: http://localhost:8366/admin/v1
- MCP endpoint: http://localhost:8366/mcp

The UI container proxies /open/* and /admin/* to memind-server, so the browser can use
the UI as a same-origin local admin console.
memind-server includes a stateless HTTP MCP server at /mcp, enabled by default. It exposes
Memind memory tools for MCP-compatible agents and uses the same runtime, database, configuration,
and logs as the REST APIs.
Claude Code can connect to the local server with:
```bash
claude mcp add --transport http memind http://localhost:8366/mcp
```
Available MCP tools:
- memind_retrieve: retrieve memory for a userId and agentId with a natural-language query; strategy can be SIMPLE or DEEP, and defaults to SIMPLE.
- memind_extract_text: immediately extract memory from standalone text, such as pasted notes, document excerpts, or summaries.
- memind_add_message: add one user or assistant conversation message to Memind's pending conversation buffer.
- memind_commit: commit pending conversation messages for the same userId and agentId