by pi22by7
Persistent Intelligence Infrastructure for AI Agents
```bash
# Add to your Claude Code skills
git clone https://github.com/pi22by7/In-Memoria
```

Giving AI coding assistants a memory that actually persists.
Watch In Memoria in action: learning a codebase, providing instant context, and routing features to files.
You know the drill. You fire up Claude, Copilot, or Cursor to help with your codebase. You explain your architecture. You describe your patterns. You outline your conventions. The AI gets it, helps you out, and everything's great.
Then you close the window.
Next session? Complete amnesia. You're explaining the same architectural decisions again. The same naming conventions. The same "no, we don't use classes here, we use functional composition" for the fifteenth time.
Every AI coding session starts from scratch.
This isn't just annoying; it's inefficient. These tools re-analyze your codebase on every interaction, burning tokens and time. They give generic suggestions that don't match your style. They have no memory of what worked last time, what you rejected, or why.
In Memoria is an MCP server that learns from your actual codebase and remembers across sessions. It builds persistent intelligence about your code (patterns, architecture, conventions, decisions) that AI assistants can query through the Model Context Protocol.
Think of it as giving your AI pair programmer a notepad that doesn't get wiped clean every time you restart the session.
Current version: 0.6.0 - See what's changed
```bash
# First time: Learn your codebase
npx in-memoria learn ./my-project

# Start the MCP server
npx in-memoria server
```
```text
# Now in Claude/Copilot:
You: "Add password reset functionality"
AI: *queries In Memoria*
    "Based on your auth patterns in src/auth/login.ts, I'll use your
    established JWT middleware pattern and follow your Result<T>
    error handling convention..."

# Next session (days later):
You: "Where did we put the password reset code?"
AI: *queries In Memoria*
    "In src/auth/password-reset.ts, following the pattern we
    established in our last session..."
```
No re-explaining. No generic suggestions. Just continuous, context-aware assistance.
```bash
# Install globally
npm install -g in-memoria

# Or use directly with npx
npx in-memoria --help
```
**Claude Desktop** - Add to your config (`~/Library/Application Support/Claude/claude_desktop_config.json`):
```json
{
  "mcpServers": {
    "in-memoria": {
      "command": "npx",
      "args": ["in-memoria", "server"]
    }
  }
}
```
**Claude Code CLI:**

```bash
claude mcp add in-memoria -- npx in-memoria server
```
**GitHub Copilot** - See the Copilot Integration section below.
```bash
# Analyze and learn from your project
npx in-memoria learn ./my-project

# Or let AI agents trigger learning automatically
# (Just start the server and let auto_learn_if_needed handle it)
npx in-memoria server
```
In Memoria is built on Rust + TypeScript, using the Model Context Protocol to connect AI tools to persistent codebase intelligence.
```text
┌─────────────────────┐    MCP     ┌──────────────────────┐    napi-rs    ┌─────────────────────┐
│  AI Tool (Claude)   │◄──────────►│  TypeScript Server   │◄─────────────►│  Rust Core          │
└─────────────────────┘            └──────────┬───────────┘               │  • AST Parser       │
                                              │                           │  • Pattern Learner  │
                                              │                           │  • Semantic Engine  │
                                              ▼                           │  • Blueprint System │
                                   ┌──────────────────────┐               └─────────────────────┘
                                   │ SQLite (persistent)  │
                                   │ SurrealDB (in-mem)   │
                                   └──────────────────────┘
```
**Rust Layer** - Fast, native processing: AST parsing, pattern learning, the semantic engine, and the blueprint system.

**TypeScript Layer** - MCP server and orchestration.

**Storage** - Local-first: SQLite for persistent intelligence, SurrealDB in-memory.
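Every query an AI tool sends across that MCP boundary is a JSON-RPC 2.0 message. A minimal sketch of the `tools/call` envelope a client would build to invoke one of In Memoria's tools (the `path` argument name is an illustrative assumption, not the tool's documented schema):

```typescript
// Sketch of an MCP tools/call request envelope (JSON-RPC 2.0).
type ToolCallRequest = {
  jsonrpc: "2.0";
  id: number;
  method: "tools/call";
  params: { name: string; arguments: Record<string, unknown> };
};

// Build a request to call a named tool with the given arguments.
function makeToolCall(
  id: number,
  name: string,
  args: Record<string, unknown>
): ToolCallRequest {
  return { jsonrpc: "2.0", id, method: "tools/call", params: { name, arguments: args } };
}

// Example: ask In Memoria for a project blueprint.
const req = makeToolCall(1, "get_project_blueprint", { path: "./my-project" });
console.log(JSON.stringify(req));
```

In practice the MCP SDK builds and transports these envelopes for you; the sketch just shows what travels over the wire between the AI tool and the TypeScript server.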
This isn't just another RAG system or static rules engine:
We recently completed Phases 1-4 of the implementation roadmap:
- Instant project context without full learning. Ask about a codebase and get the tech stack, entry points, key directories, and architecture, all in under 200 tokens.
- AI agents can now track work sessions, maintain task lists, and record architectural decisions. Resume work exactly where you left off.
- Feature-to-file mapping across 10 categories (auth, API, database, UI, etc.). Vague requests like "add password reset" get routed to specific files automatically.
- No more janky console spam. Progress bars update in-place at a consistent 500ms refresh rate.
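The feature-routing idea can be sketched as a keyword-to-category lookup followed by a category-to-files lookup. The categories, keywords, and file paths below are illustrative stand-ins; In Memoria learns the real mapping from your codebase rather than hardcoding it:

```typescript
// Illustrative keyword lists per feature category (assumed, not In Memoria's actual data).
const CATEGORY_KEYWORDS: Record<string, string[]> = {
  auth: ["password", "login", "token", "session"],
  api: ["endpoint", "route", "request"],
  database: ["schema", "migration", "query"],
};

// Illustrative learned file mapping per category.
const CATEGORY_FILES: Record<string, string[]> = {
  auth: ["src/auth/login.ts", "src/auth/middleware.ts"],
  api: ["src/api/router.ts"],
  database: ["src/db/schema.ts"],
};

// Route a vague feature request to candidate files via its category.
function routeFeature(request: string): string[] {
  const words = request.toLowerCase().split(/\W+/);
  for (const [category, keywords] of Object.entries(CATEGORY_KEYWORDS)) {
    if (keywords.some((k) => words.includes(k))) return CATEGORY_FILES[category];
  }
  return []; // no category matched
}

console.log(routeFeature("add password reset")); // matches the auth category's files
```

The real system works from learned patterns rather than a static keyword table, but the shape of the lookup is the same: request, category, candidate files.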
In Memoria provides 13 specialized tools that AI assistants can call via MCP. They're organized into 4 categories (down from 16 after Phase 4 consolidation merged redundant tools):
- `analyze_codebase` - Analyze files/directories with concepts, patterns, complexity (Phase 4: now handles both files and directories)
- `search_codebase` - Multi-mode search (semantic/text/pattern)
- `learn_codebase_intelligence` - Deep learning to extract patterns and architecture
- `get_project_blueprint` - Instant project context with tech stack and entry points ⭐ (Phase 4: includes learning status)
- `get_semantic_insights` - Query learned concepts and relationships
- `get_pattern_recommendations` - Get patterns with related files for consistency
- `predict_coding_approach` - Implementation guidance with file routing ⭐
- `get_developer_profile` - Access coding style and work context
- `contribute_insights` - Record architectural decisions
- `auto_learn_if_needed` - Smart auto-learning with staleness detection ⭐ (Phase 4: includes quick setup functionality)
- `get_system_status` - Health check
- `get_intelligence_metrics` - Analytics on learned patterns
- `get_performance_status` - Performance diagnostics

Phase 4 Consolidation: Three tools were merged into existing tools for better AX (agent experience, haha): `analyze_codebase`, `get_project_blueprint`, and `auto_learn_if_needed` absorbed the redundant functionality.

For AI agents: See `AGENT.md` for the complete tool reference with usage patterns and decision trees.
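The staleness detection behind `auto_learn_if_needed` comes down to comparing timestamps: has any source file changed since the last learning pass? A hedged sketch, where the function name, signature, and grace-period parameter are assumptions rather than In Memoria's actual implementation:

```typescript
// Return true if any file was modified after the last learning pass.
// lastLearnedAt and fileMtimes are epoch milliseconds; graceMs lets a
// caller tolerate very recent edits without retriggering learning.
function isIntelligenceStale(
  lastLearnedAt: number,
  fileMtimes: number[],
  graceMs = 0
): boolean {
  const newest = Math.max(...fileMtimes);
  return newest > lastLearnedAt + graceMs;
}

console.log(isIntelligenceStale(100, [50, 200])); // true: a file changed after the pass
console.log(isIntelligenceStale(300, [50, 200])); // false: intelligence is current
```

When the check fires, the tool re-runs learning before answering; otherwise it serves the stored intelligence immediately, which is what keeps repeat sessions cheap.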
In Memoria works with GitHub Copilot through custom instructions and chat modes.
This repository includes:
- `.github/copilot-instructions.md` - Automatic guidance for Copilot
- `.github/chatmodes/` - Three specialized chat modes:
In Memoria integrates with **GitH