by pi22by7
Persistent Intelligence Infrastructure for AI Agents
```bash
# Add to your Claude Code skills
git clone https://github.com/pi22by7/In-Memoria
```

Giving AI coding assistants a memory that actually persists.
Watch In Memoria in action: learning a codebase, providing instant context, and routing features to files.
You know the drill. You fire up Claude, Copilot, or Cursor to help with your codebase. You explain your architecture. You describe your patterns. You outline your conventions. The AI gets it, helps you out, and everything's great.
Then you close the window.
Next session? Complete amnesia. You're explaining the same architectural decisions again. The same naming conventions. The same "no, we don't use classes here, we use functional composition" for the fifteenth time.
Every AI coding session starts from scratch.
This isn't just annoying; it's inefficient. These tools re-analyze your codebase on every interaction, burning tokens and time. They give generic suggestions that don't match your style. They have no memory of what worked last time, what you rejected, or why.
In Memoria is an MCP server that learns from your actual codebase and remembers across sessions. It builds persistent intelligence about your code (patterns, architecture, conventions, decisions) that AI assistants can query through the Model Context Protocol.
Think of it as giving your AI pair programmer a notepad that doesn't get wiped clean every time you restart the session.
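To make "query through the Model Context Protocol" concrete, here's a minimal sketch of a standalone MCP client launching the server over stdio and listing the tools it exposes. It assumes the official MCP TypeScript SDK (`@modelcontextprotocol/sdk`); the client name is made up, and the actual tool names depend on whatever In Memoria registers. An assistant like Claude or Copilot does the equivalent of this handshake for you once the server is configured.

```typescript
// Sketch only: assumes the MCP TypeScript SDK (@modelcontextprotocol/sdk).
// Tool names returned here are whatever In Memoria actually registers.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Spawn the In Memoria MCP server over stdio, the same way an
// AI assistant's MCP client would.
const transport = new StdioClientTransport({
  command: "npx",
  args: ["in-memoria", "server"],
});

// "example-inspector" is a placeholder client name for this sketch.
const client = new Client(
  { name: "example-inspector", version: "0.1.0" },
  { capabilities: {} }
);
await client.connect(transport);

// Ask the server which tools it exposes (pattern, architecture,
// and convention queries live behind these).
const { tools } = await client.listTools();
console.log(tools.map((t) => t.name));

await client.close();
```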
Current version: 0.6.0 - See what's changed
```bash
# First time: Learn your codebase
npx in-memoria learn ./my-project

# Start the MCP server
npx in-memoria server

# Now in Claude/Copilot:
You: "Add password reset functionality"
AI: *queries In Memoria*
    "Based on your auth patterns ...
```