by winstonkoh87
The Linux OS for AI Agents — Persistent memory, autonomy, and time-awareness for any LLM. Own the state. Rent the intelligence.
```shell
# Add to your Claude Code skills
git clone https://github.com/winstonkoh87/Athena-Public
```
Your memory. Your machine. Any model.
Open-source cognitive augmentation layer that gives you persistent memory, structured reasoning, and full data ownership — across ChatGPT, Claude, Gemini, and any model you switch to next.
Platforms forget. Athena doesn't.
Quickstart · How It Works · Docs · FAQ · Safety · Contributing
Last updated: 25 April 2026
You've spent months training ChatGPT to understand you. Then a model update resets the personality. Your custom instructions stop working. You can't find that conversation from last Tuesday. And if you switch to Claude or Gemini? You start from zero.
Platform memory is unreliable, opaque, and locked to one provider. You don't own it, you can't inspect it, and you can't take it with you.
Athena moves the memory layer to your machine. Plain Markdown files that you own, version-control, and point at any model.
/start (~10K) → /ultrastart (~20K). 80–98% of your context window stays free, even after 10,000 sessions.

A generic LLM is a brilliant amnesiac. Athena is the hippocampus — the memory that makes intelligence useful.
Or in engineering terms: The LLM is the engine. Athena is the chassis, the memory, and the rules of the road. Swap the engine anytime — the car remembers every road you've driven.
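The context-headroom claim above is simple arithmetic. A minimal sketch: the boot costs (~10K and ~20K tokens) come from the text, while the model window sizes are illustrative assumptions chosen to span the quoted 80–98% range, not figures from the Athena docs.

```python
# Fraction of the context window left free after Athena's boot sequence.
# Boot costs are from the text above; window sizes are assumptions.
BOOT_COSTS = {"/start": 10_000, "/ultrastart": 20_000}   # approx tokens
WINDOWS = {"100K window": 100_000, "500K window": 500_000}  # assumed sizes

for cmd, cost in BOOT_COSTS.items():
    for label, window in WINDOWS.items():
        free = 1 - cost / window
        print(f"{cmd} on a {label}: {free:.0%} free")
```

On these assumed windows, /ultrastart on 100K leaves 80% free and /start on 500K leaves 98% free, matching the quoted range.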
The design philosophy: augment the human, not replace them. After 1,700+ sessions, the bottleneck shifted — optimising the operator is now higher-leverage than optimising the AI.
The more context you give Athena, the sharper its answers become — not by merely remembering your preferences, but by reasoning differently because of what it knows about you.
A generic LLM gives the internet's statistically average answer — correct on average, across all humans. Athena gives answers calibrated to your specific situation, because statistical correctness and personal correctness are completely different things:
Generic LLMs solve the question. Athena solves the person. The same question, asked by different people with different lives, demands fundamentally different answers. A generic LLM can’t differentiate because it has no context. Athena can’t give the same answer twice — because the context files are different. The memory is the product.
Not all problems are solvable. Athena classifies and responds accordingly:
| Problem Type | What Athena Does | Example |
|:-------------|:-----------------|:--------|
| Solvable | Solves it | "What's the Kelly fraction for this bet?" → calculates, answers |
| Optimisable | Optimises within your chosen path | "I've decided to freelance — help me price it" → constraint optimization |
| Unsolvable | Maps every option, prices every trade-off, hands the choice back to you | A closeted husband with children weighing whether to stay married or come out — no clean answer exists. Children, shared assets, identity, cultural context, and personal wellbeing all pull in different directions. Athena ensures you choose with full information, not comfortable illusions |
| Ruin-path | Vetoes before you walk off the cliff | "This bet risks everything" → Law #1 override, regardless of your preference |
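The routing in the classification above can be sketched as a simple dispatch. Everything here is hypothetical illustration — the type names and response strings are mine, not Athena's actual internals — but it shows the key property: the ruin-path check runs first and overrides all other handling.

```python
from enum import Enum, auto

class ProblemType(Enum):
    SOLVABLE = auto()
    OPTIMISABLE = auto()
    UNSOLVABLE = auto()
    RUIN_PATH = auto()

def respond(problem_type: ProblemType) -> str:
    """Illustrative dispatch mirroring the problem-type table."""
    # Ruin paths are checked first: Law #1 overrides everything else.
    if problem_type is ProblemType.RUIN_PATH:
        return "VETO (Law #1): this path risks irreversible ruin"
    if problem_type is ProblemType.SOLVABLE:
        return "Solve it directly and answer"
    if problem_type is ProblemType.OPTIMISABLE:
        return "Optimise within the user's chosen constraints"
    # Unsolvable: map options, price trade-offs, return the choice.
    return "Map every option, price every trade-off, hand the choice back"

print(respond(ProblemType.RUIN_PATH))
```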
The uncertainty of the domain changes Athena's conviction level:
| Domain Type | Athena's Posture | Example |
|:------------|:-----------------|:--------|
| Deterministic | High conviction — single correct answer exists | Code bugs, math proofs, tax calculations |
| Semi-deterministic | Moderate conviction — answer depends on assumptions you control | Pricing strategy, system architecture, career path analysis |
| Semi-stochastic | Low conviction — structural edge exists but randomness dominates | Trading setups, relationship dynamics, market timing |
| Stochastic | Minimal conviction — no model outperforms randomness reliably | Startup outcomes, life events, long-term predictions |
As uncertainty increases, Athena shifts from "here's the answer" to "here's the valid structural zone" to "here are your options — you choose." This is deliberate: false confidence in stochastic domains is more dangerous than honest uncertainty. Athena's conviction is proportional to domain determinism and context completeness.
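The proportionality in that last sentence can be sketched as a product of two factors. The weights below are invented for illustration only — they are not calibrated values from Athena.

```python
# Sketch: conviction proportional to domain determinism and context
# completeness. All numeric weights are illustrative assumptions.
DETERMINISM = {
    "deterministic": 1.0,
    "semi-deterministic": 0.7,
    "semi-stochastic": 0.4,
    "stochastic": 0.1,
}

def conviction(domain: str, context_completeness: float) -> float:
    """0..1 score: scale domain determinism by context completeness."""
    return DETERMINISM[domain] * context_completeness

# A well-documented code bug warrants near-certainty; a sparsely
# contextualised trading setup does not.
print(conviction("deterministic", 0.95))    # high
print(conviction("semi-stochastic", 0.5))   # low
```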
Crucially, conviction and decisiveness are independent axes. Low certainty about outcomes doesn't require vague output. A surgeon operates with high decisiveness and low conviction about outcomes. In semi-stochastic domains, Athena delivers precise, operational setups — then explicitly defers the probability judgment to you. "Setup: Long 1.0850 / SL 1.0800 / TP1 1.0920. Your calibration: structural tell present Y/N?" — not "you might want to consider..." — Protocol 524 →
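The Kelly-fraction query from the problem-type table is one of the fully solvable cases. A minimal sketch of the standard Kelly criterion (the function and parameter names are mine, not Athena's):

```python
def kelly_fraction(p: float, b: float) -> float:
    """Standard Kelly criterion: optimal bankroll fraction to stake,
    given win probability p and net odds b (profit per unit staked).
    A negative result means the bet has no edge: stake nothing."""
    return p - (1 - p) / b

# Example: 55% win probability at even odds (b = 1).
print(round(kelly_fraction(0.55, 1.0), 4))  # 0.1 → stake 10% of bankroll
```

This is the kind of deterministic sub-problem Athena answers with high conviction; whether the 55% estimate itself is right stays with you.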
Law #0 (Sovereignty): Your life, your weights, your choice. Law #1 (No Irreversible Ruin): …unless the choice ends the game permanently. Law #1 overrides Law #0. Always.
Athena doesn't tell you what you should do. It shows you what you can do, what each option costs, and hands the choice back. The only exception: paths that end the game permanently.
Architecture, not oracle. This domain classification is a replicable architecture — each Athena instance calibrates independently over time through bilateral use. Session 1 treats most problems conservatively. Session 500 has accumulated enough frameworks, case studies, and corrected assumptions to tighten confidence bands and solve more sub-problems autonomously. The calibration compounds; the model is interchangeable. — [Protocol 525 (Cross-Domain Weighting) →](examples/protocols/)