by agentic-box
Give your AI agents persistent memory.
# Add to your Claude Code skills
git clone https://github.com/agentic-box/memora

Tool categories: Core Storage, Search & Intelligence, Document Storage, Tools & Visualization.
pip install git+https://github.com/agentic-box/memora.git
Includes cloud storage (S3/R2) and OpenAI embeddings out of the box.
# Optional: local embeddings (offline, ~2GB for PyTorch)
pip install "memora[local]" @ git+https://github.com/agentic-box/memora.git
The server runs automatically when configured in Claude Code. Manual invocation:
# Default (stdio mode for MCP)
memora-server
# With graph visualization server
memora-server --graph-port 8765
# HTTP transport (alternative to stdio)
memora-server --transport streamable-http --host 127.0.0.1 --port 8080
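
With the HTTP transport you can smoke-test the server with a raw MCP initialize request. This is a sketch: it assumes the endpoint is mounted at /mcp (a common default for streamable HTTP MCP servers), which the docs above don't specify.

# Streamable HTTP requires both Accept types on POST
curl -s -X POST http://127.0.0.1:8080/mcp \
  -H "Content-Type: application/json" \
  -H "Accept: application/json, text/event-stream" \
  -d '{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2025-03-26","capabilities":{},"clientInfo":{"name":"smoke-test","version":"0.1"}}}'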
Add to .mcp.json in your project root:
Local DB:
{
  "mcpServers": {
    "memora": {
      "command": "memora-server",
      "args": [],
      "env": {
        "MEMORA_DB_PATH": "~/.local/share/memora/memories.db",
        "MEMORA_ALLOW_ANY_TAG": "1",
        "MEMORA_GRAPH_PORT": "8765"
      }
    }
  }
}
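
After your first session writes a memory, the SQLite file should exist at the path set in MEMORA_DB_PATH:

# Confirm the database was created (path from the config above)
ls -lh ~/.local/share/memora/memories.db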
Cloud DB (Cloudflare D1) - Recommended:
{
  "mcpServers": {
    "memora": {
      "command": "memora-server",
      "args": ["--no-graph"],
      "env": {
        "MEMORA_STORAGE_URI": "d1://<account-id>/<database-id>",
        "CLOUDFLARE_API_TOKEN": "<your-api-token>",
        "MEMORA_ALLOW_ANY_TAG": "1"
      }
    }
  }
}
With D1, use --no-graph to disable the local visualization server. Instead, use the hosted graph at your Cloudflare Pages URL (see Cloud Graph).
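
If you don't have a D1 database yet, one way to create one is Cloudflare's wrangler CLI (the database name here is arbitrary; the command prints the database_id to plug into the URI above):

# Create a D1 database; note the database_id in the output
npx wrangler d1 create memora-memories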
Cloud DB (S3/R2) - Sync mode:
{
  "mcpServers": {
    "memora": {
      "command": "memora-server",
      "args": [],
      "env": {
        "AWS_PROFILE": "memora",
        "AWS_ENDPOINT_URL": "https://<account-id>.r2.cloudflarestorage.com",
        "MEMORA_STORAGE_URI": "s3://memories/memories.db",
        "MEMORA_CLOUD_ENCRYPT": "true",
        "MEMORA_ALLOW_ANY_TAG": "1",
        "MEMORA_GRAPH_PORT": "8765"
      }
    }
  }
}
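
The AWS_PROFILE above refers to a profile in ~/.aws/credentials (see the variable table below). For R2, the profile holds the access keys from your R2 API token:

# ~/.aws/credentials
[memora]
aws_access_key_id = <r2-access-key-id>
aws_secret_access_key = <r2-secret-access-key>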
Add to ~/.codex/config.toml:
[mcp_servers.memora]
command = "memora-server" # or full path: /path/to/bin/memora-server
args = ["--no-graph"]

# TOML inline tables can't span multiple lines, so env gets its own table
[mcp_servers.memora.env]
AWS_PROFILE = "memora"
AWS_ENDPOINT_URL = "https://<account-id>.r2.cloudflarestorage.com"
MEMORA_STORAGE_URI = "s3://memories/memories.db"
MEMORA_CLOUD_ENCRYPT = "true"
MEMORA_ALLOW_ANY_TAG = "1"
| Variable | Description |
|------------------------|-----------------------------------------------------------------------------|
| MEMORA_DB_PATH | Local SQLite database path (default: ~/.local/share/memora/memories.db) |
| MEMORA_STORAGE_URI | Storage URI: d1://<account>/<db-id> (D1) or s3://bucket/memories.db (S3/R2) |
| CLOUDFLARE_API_TOKEN | API token for D1 database access (required for d1:// URI) |
| MEMORA_CLOUD_ENCRYPT | Encrypt database before uploading to cloud (true/false) |
| MEMORA_CLOUD_COMPRESS | Compress database before uploading to cloud (true/false) |
| MEMORA_CACHE_DIR | Local cache directory for cloud-synced database |
| MEMORA_ALLOW_ANY_TAG | Allow any tag without validation against allowlist (1 to enable) |
| MEMORA_TAG_FILE | Path to file containing allowed tags (one per line) |
| MEMORA_TAGS | Comma-separated list of allowed tags |
| MEMORA_GRAPH_PORT | Port for the knowledge graph visualization server (default: 8765) |
| MEMORA_EMBEDDING_MODEL | Embedding backend: openai (default), sentence-transformers, or tfidf |
| SENTENCE_TRANSFORMERS_MODEL | Model for sentence-transformers (default: all-MiniLM-L6-v2) |
| OPENAI_API_KEY | API key for OpenAI embeddings and LLM deduplication |
| OPENAI_BASE_URL | Base URL for OpenAI-compatible APIs (OpenRouter, Azure, etc.) |
| OPENAI_EMBEDDING_MODEL | OpenAI embedding model (default: text-embedding-3-small) |
| MEMORA_LLM_ENABLED | Enable LLM-powered deduplication comparison (true/false, default: true) |
| MEMORA_LLM_MODEL | Model for deduplication comparison (default: gpt-4o-mini) |
| CHAT_MODEL | Model for the chat panel (default: deepseek/deepseek-chat, falls back to MEMORA_LLM_MODEL) |
| AWS_PROFILE | AWS credentials profile from ~/.aws/credentials (useful for R2) |
| AWS_ENDPOINT_URL | S3-compatible endpoint for R2/MinIO |
| R2_PUBLIC_DOMAIN | Public domain for R2 image URLs |
Memora supports three embedding backends:
| Backend | Install | Quality | Speed |
|---------|---------|---------|-------|
| openai (default) | Included | High quality | API latency |
| sentence-transformers | pip install memora[local] | Good, runs offline | Medium |
| tfidf | Included | Basic keyword matching | Fast |
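
For example, a fully offline setup with local embeddings might look like this (requires the [local] extra from the install section; setting MEMORA_LLM_ENABLED=false also disables the LLM-powered deduplication listed in the variable table above):

# Fully offline: local embeddings, no API calls
export MEMORA_EMBEDDING_MODEL=sentence-transformers
export SENTENCE_TRANSFORMERS_MODEL=all-MiniLM-L6-v2
export MEMORA_LLM_ENABLED=false
memora-server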
Automatic: Embeddings and cross-references are computed automatically when you call memory_create, memory_update, or memory_create_batch.
Manual rebuild is required when you change MEMORA_EMBEDDING_MODEL after memories already exist:
# After changing the embedding model, rebuild all embeddings
memory_rebuild_embeddings
# Then rebuild cross-references to update the knowledge graph
memory_rebuild_crossrefs
A built-in HTTP server starts automatically with the MCP server, serving an interactive knowledge graph visualization.
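
With the default MEMORA_GRAPH_PORT of 8765, the visualization is served locally; assuming it is mounted at the root path, you can open it directly:

# View the knowledge graph (macOS; use xdg-open on Linux)
open http://localhost:8765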