Your First LLM-Wiki Conversation Knowledge Base
by Beever-AI
# Add to your Claude Code skills
git clone https://github.com/Beever-AI/beever-atlas

Beever Atlas pulls the conversations your team already has on Slack, Discord, Microsoft Teams, and Mattermost, extracts atomic facts, deduplicates them, and clusters them into topic pages with citations. A graph store links the people, decisions, and projects mentioned across channels. Ask questions in natural language and get answers cited back to the source messages — through the dashboard, or through MCP into Claude Code and Cursor.
If you want a knowledge base that grows on its own from the chats your team already has, this is it.
Six short clips — connect a workspace, sync history, watch memory build, browse the auto-generated wiki, ask questions, plug external AI agents in via MCP.
Conversations from any supported platform flow into a unified ingestion pipeline that produces two complementary memory systems — a 3-tier semantic store (channel / topic / atomic fact) for fast hybrid search, and a graph store that extracts entities and their relationships. Those memories fuel two consumer surfaces: the LLM Wiki (distilled, auto-maintained) and QA Agents (served through the dashboard directly, or through MCP into Claude Code / Cursor).
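As a rough mental model, the 3-tier semantic store can be pictured as a channel → topic → atomic-fact mapping with deduplication on a normalized key. The sketch below is illustrative only — the function names and the simple normalization are assumptions, not Atlas's actual schema, and real fact extraction happens in an upstream LLM pass:

```python
from collections import defaultdict

def normalize(fact: str) -> str:
    """Canonical form used as the deduplication key (whitespace/case folded)."""
    return " ".join(fact.lower().split())

def build_store(extracted):
    """extracted: iterable of (channel, topic, fact) triples, as produced by
    an upstream extraction pass. Returns channel -> topic -> facts, with
    near-verbatim duplicates dropped."""
    store = defaultdict(lambda: defaultdict(list))
    seen = set()
    for channel, topic, fact in extracted:
        key = (channel, topic, normalize(fact))
        if key in seen:  # duplicate phrasing of an already-stored fact
            continue
        seen.add(key)
        store[channel][topic].append(fact)
    return store
```

The point of the tiered layout is that hybrid search can narrow by channel and topic before scoring individual facts, instead of scanning raw messages.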
Under the hood, three services (backend, bot, frontend) are backed by four data stores (Weaviate, Neo4j, MongoDB, Redis). See the architecture overview on the documentation site for the full design — component responsibilities, dual-memory internals, and the smart query router.
Most RAG systems answer questions by retrieving raw message snippets and feeding them straight to an LLM. Beever Atlas takes a different approach: it continuously distils conversations into a structured, auto-maintained wiki — with topic pages, entity graphs, decisions, and citations — before any query is issued. When you ask a question, the retrieval layer works against clean, deduplicated knowledge rather than noisy chat history. This means answers are more consistent, citations are traceable to source messages, and the wiki itself becomes a useful artifact your team can browse independently of the Q&A interface. The dual-memory architecture (semantic + graph) lets the query router pick the right retrieval strategy per question, keeping latency low and context precise.
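A toy version of that per-question routing decision might look like the following. The cue list and function name are assumptions for illustration — the real router's logic is described in the architecture docs:

```python
# Relationship-style questions go to the graph store; everything else goes to
# hybrid semantic search. Purely illustrative heuristics, not Atlas's router.
GRAPH_CUES = ("who works", "reports to", "connected", "relationship", "who decided")

def route(question: str) -> str:
    q = question.lower()
    return "graph" if any(cue in q for cue in GRAPH_CUES) else "semantic"
```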
The per-channel wiki concept is directly inspired by Andrej Karpathy's observation that LLMs are far better at reasoning over curated, encyclopedic content (books, docs, wikis) than over raw conversational transcripts. Chat history is noisy, redundant, temporally scattered, and full of implicit context that only humans resolve. A wiki, by contrast, is the already-distilled form of that knowledge — deduplicated, structured, citation-bearing, and organised by topic rather than by timestamp.
Beever Atlas operationalises this insight: every synced channel gets its own auto-generated, continuously-updated wiki — sections for topics, entities, decisions, open questions, and timelines — rebuilt incrementally as new messages arrive. The QA agent retrieves against this wiki first, falling back to raw messages only when a fact hasn't been distilled yet.
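The wiki-first-with-fallback behaviour can be sketched in miniature as below. This is a sketch under assumed names, with a trivial keyword matcher standing in for Atlas's hybrid vector + keyword retrieval:

```python
def keyword_search(corpus, question):
    """Trivial stand-in for hybrid retrieval: return entries sharing any word."""
    terms = set(question.lower().split())
    return [text for text in corpus if terms & set(text.lower().split())]

def answer_sources(question, wiki_pages, raw_messages):
    """Prefer distilled wiki pages; fall back to raw chat only on a miss."""
    hits = keyword_search(wiki_pages, question)
    if hits:
        return hits, "wiki"
    return keyword_search(raw_messages, question), "raw"
```

The fallback path matters early in a channel's life, before the incremental wiki rebuild has distilled recent messages into facts.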
For a detailed comparison with other LLM knowledge tools, see the comparison page on the documentation site.
Beever Atlas ships as a Docker Compose stack (backend + bot + web + 4 datastores). You can try a seeded demo in 30 seconds with zero keys, then pick one of three deployment options to install it for real.
git clone https://github.com/beever-ai/beever-atlas.git
cd beever-atlas
make demo
make demo brings up the full stack pre-loaded with a public Wikipedia corpus (Ada Lovelace + Python history). Seeding uses pre-computed fixtures — no API keys required. Asking questions via /api/ask needs a free-tier GOOGLE_API_KEY because the QA agent calls Gemini. See demo/README.md for curl examples.
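For reference, the shape of an /api/ask call can be sketched as follows. The port and the payload field name are assumptions here — demo/README.md has the authoritative curl examples:

```python
import json
import urllib.request

def make_ask_request(question: str, base_url: str = "http://localhost:8000"):
    """Build (but do not send) a POST to the demo's /api/ask endpoint.
    The default port and the {"question": ...} payload shape are assumptions."""
    body = json.dumps({"question": question}).encode()
    return urllib.request.Request(
        f"{base_url}/api/ask",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
```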
Skip this step if you're ready to install for real.
Two free keys are required before installing. Both offer generous free tiers — enough to sync a small team's channels for testing.
| Key | Purpose | Whe