by roampal-ai
Memory that learns what works.
# Add to your Claude Code skills
git clone https://github.com/roampal-ai/roampal
Say it worked. Say it didn't. The AI remembers.
Stop re-explaining yourself every conversation. Roampal remembers outcomes, learns from feedback, and gets smarter over time—all 100% private and local.
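The "learns from feedback" idea can be sketched in a few lines: each memory carries success/failure counts from user feedback, and retrieval weights similarity by the learned outcome score. This is an illustrative toy (keyword overlap stands in for real embedding similarity; none of these names are Roampal's actual internals):

```python
from dataclasses import dataclass

@dataclass
class Memory:
    text: str
    successes: int = 0  # times the user said "it worked"
    failures: int = 0   # times the user said "it didn't"

    def success_rate(self) -> float:
        # Laplace-smoothed so unrated memories start neutral at 0.5
        return (self.successes + 1) / (self.successes + self.failures + 2)

def keyword_similarity(query: str, text: str) -> float:
    # Toy stand-in for real embedding similarity
    q, t = set(query.lower().split()), set(text.lower().split())
    return len(q & t) / len(q | t) if q | t else 0.0

def retrieve(memories: list[Memory], query: str) -> list[Memory]:
    # Rank by similarity weighted by the learned outcome score
    return sorted(memories,
                  key=lambda m: keyword_similarity(query, m.text) * m.success_rate(),
                  reverse=True)

store = [Memory("use black for python formatting", successes=3),
         Memory("use tabs for python formatting", failures=3)]
best = retrieve(store, "python formatting preference")[0]
```

With equal similarity, the memory the user confirmed three times outranks the one that failed three times.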
85.8% non-adversarial on LoCoMo (1,986 questions). +23 pts over raw ingestion. Absorbs 1,135 poison memories while losing only 4 pts. (Paper)
LoCoMo dataset (1,986 questions, 5 categories, corrected ground truths). Evaluated with roampal-labs. Dual-graded by local 20B + MiniMax M2.7.
| Metric | Result |
|--------|--------|
| Non-adversarial accuracy (MiniMax-regraded) | 85.8% |
| Overall (all 5 categories) | 76.6% |
| vs raw ingestion baseline | +23 pts (76.6% vs 53.0%, p<0.0001) |
| Poison resilience | -4.2 pts after 1,135 adversarial memories |
| No-memory baseline | 6.0% (model has zero LoCoMo knowledge) |

Architecture accounts for the +23 pts; swapping the model (GPT-4o-mini) shifts results by only 1.5-2.5 pts.
| Config | Hit@1 Clean | Hit@1 Poison | p-value |
|--------|-------------|--------------|---------|
| TagCascade + cosine | 27.3% | 29.0% | baseline |
| Overlap + cosine | 25.8% | 28.0% | p=0.0003 |
| Pure CE | 25.4% | 28.4% | — |
| TagCascade + Wilson | 23.0% | 25.0% | p<0.0001 |
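Assuming the "Wilson" variant refers to ranking by the lower bound of the Wilson score interval (a standard way to penalize scores backed by little evidence), it can be computed like this. This is a sketch of the general technique, not Roampal's actual scoring code:

```python
import math

def wilson_lower_bound(successes: int, trials: int, z: float = 1.96) -> float:
    """Lower bound of the Wilson score interval (95% confidence by default)."""
    if trials == 0:
        return 0.0
    p = successes / trials
    denom = 1 + z * z / trials
    centre = p + z * z / (2 * trials)
    margin = z * math.sqrt((p * (1 - p) + z * z / (4 * trials)) / trials)
    return (centre - margin) / denom

# Same 80% success rate, but less evidence yields a lower (more cautious) score
assert wilson_lower_bound(4, 5) < wilson_lower_bound(40, 50)
```

The bound rewards memories with both a high success rate and enough trials to trust it.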
Full methodology in roampal-labs
Roampal.exe → Run as administrator
Your AI starts learning about you immediately.
Memory That Learns
Your Knowledge Base
Privacy First
Connect Roampal to Claude Desktop, Cursor, and other MCP-compatible tools.
Settings → Integrations → Connect → Restart your tool
7 tools available: search_memory, add_to_memory_bank, update_memory, archive_memory, get_context_insights, record_response, score_memories
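A sketch of how a client session might chain these tools (the tool names come from the list above; the call signatures and argument names are illustrative assumptions, not Roampal's actual MCP schema):

```python
def call_tool(name: str, **args) -> dict:
    # Stand-in for a real MCP tool call; a real client would send
    # this request over the MCP transport to the Roampal server.
    return {"tool": name, "args": args}

# 1. Retrieve relevant memories before answering
hits = call_tool("search_memory", query="user's preferred test framework")

# 2. Store a new fact the user just stated
call_tool("add_to_memory_bank", content="User prefers pytest over unittest")

# 3. After the user confirms the answer worked, record the outcome
call_tool("record_response", outcome="worked")
```

The record_response step is what closes the feedback loop so future search_memory calls rank confirmed memories higher.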
```
┌─────────────────────────────────────────────────────────┐
│                      5-TIER MEMORY                      │
├─────────────┬─────────────┬─────────────┬──────────────┤
│    Books    │   Working   │   History   │   Patterns   │
│ (permanent) │    (24h)    │  (30 days)  │  (permanent) │
├─────────────┴─────────────┴─────────────┴──────────────┤
│                       Memory Bank                       │
│             (permanent user identity/prefs)             │
└─────────────────────────────────────────────────────────┘
```
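The tier retention rules in the diagram can be modelled as TTLs. Tier names and lifetimes are taken from the diagram; the code itself is an illustrative sketch, not Roampal's implementation:

```python
from datetime import datetime, timedelta
from typing import Optional

# Retention per tier, as described in the diagram above
TIER_TTL: dict[str, Optional[timedelta]] = {
    "books": None,                    # permanent
    "working": timedelta(hours=24),
    "history": timedelta(days=30),
    "patterns": None,                 # permanent
    "memory_bank": None,              # permanent user identity/prefs
}

def is_live(tier: str, created: datetime, now: datetime) -> bool:
    # A memory survives if its tier is permanent or its TTL has not elapsed
    ttl = TIER_TTL[tier]
    return ttl is None or now - created <= ttl

now = datetime(2025, 1, 31)
assert is_live("working", now - timedelta(hours=2), now)
assert not is_live("working", now - timedelta(days=2), now)
assert is_live("books", now - timedelta(days=365), now)
```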
Core Technology:
Works with any tool-calling model via Ollama or LM Studio:
| Model | Provider | Parameters |
|-------|----------|------------|
| Llama 3.x | Meta | 3B - 70B |
| Qwen 2.5 | Alibaba | 3B - 72B |
| Mistral/Mixtral | Mistral AI | 7B - 8x22B |
| GPT-OSS | OpenAI (Apache 2.0) | 20B - 120B |
| Document | Description |
|----------|-------------|
| Architecture | 5-tier memory, knowledge graphs, technical deep-dive |
| Benchmarks | LoCoMo evaluation, TagCascade results |
| Release Notes | Latest: TagCascade Retrieval, Sidecar LLM, ONNX CE, Two-Lane Injection |
AI Safety: LLMs may generate incorrect information. Always verify critical outputs, and don't rely on AI for medical, legal, or financial advice.
Model Licenses: Downloaded models (Llama, Qwen, etc.) have their own licenses. Review before commercial use.
Free & open-source (Apache 2.0 License)
Made with love for people who want AI that actually remembers