by fajarhide
A smart context filter that removes noise, improves responses, and reduces token usage by up to 90%
```bash
# Add to your Claude Code skills
git clone https://github.com/fajarhide/omni
```

Less noise. More signal. Cut your AI token consumption by up to 90%.
OMNI is a smart terminal layer that intelligently filters and prioritizes command output before it reaches your AI agent. By preventing your AI from getting confused by noisy output, you get accurate answers faster while saving massive amounts of token costs.
Fully transparent. You're always in control.
When you use autonomous AI agents (like Claude Code) in your terminal, they read everything. A simple git diff, npm install, or cargo test command can easily dump 10,000 to 25,000 tokens of useless terminal noise into your AI's context.
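For a rough sense of scale, the common ~4-characters-per-token heuristic (an assumption for illustration, not how any particular model actually tokenizes) puts a single noisy install log well into that range:

```python
def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token for English/log text.
    return len(text) // 4

# ~58 KB of repeated dependency noise, the kind `npm install` can emit:
noisy_log = "npm WARN deprecated some-package@1.0.0\n" * 1500
print(estimate_tokens(noisy_log))  # on the order of 15,000 tokens
```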
This causes three huge problems:

1. **Wasted money**: thousands of junk tokens are billed on every single command.
2. **Diluted reasoning**: the AI has to sift through garbage to find the actual problem, which degrades its answers.
3. **Wasted context window**: noise crowds out the code and conversation that actually matter.
I built Omni because I wanted to run AI agents efficiently and cheaply every single day in my own workflow.
Omni acts as the perfect filter between your terminal and your AI.
The result? You can run your AI agent on an advanced framework and feed it zero noise. Because the AI only receives highly focused, straight-to-the-point context, even affordable or ordinary models perform on par with expensive flagship models: they are never distracted by junk data.
My ultimate goal isn't to monetize this; it's to build the ultimate open-source toolbelt for the Agentic AI era. By aggressively saving token costs, I can develop software robustly and cost-effectively today, and you can too.
OMNI wasn't built just to "cut context" or "save tokens"—those are simply the happy side effects. The true philosophy behind OMNI is Context Quality.
AI agents like Claude are only as smart as the context you feed them. When you flood them with megabytes of dependency logs or loading bars, you force them to sift through garbage to find the actual problem. This dilutes their reasoning and leads to degraded or unhelpful responses.
OMNI's goal is to feed your AI pure, highly dense signal: it keeps only the context that is actually important and meaningful for Claude, and strips out the noise the AI doesn't need.
Try it for a week. Feel the difference in the quality and speed of your AI's reasoning when it's fed on a diet of pure signal instead of raw terminal noise.
- **Nothing is lost**: raw output is preserved in the persistence store (`RewindStore`). If the AI actually needs the full log, it can simply ask for it.
- **Savings report** (`omni stats`): run `omni stats` to see how much money and space you've saved.
- **Full transparency** (`omni diff`): see exactly how much money and space you are saving. Just run `omni diff` to compare the bulky raw output side-by-side with Omni's sleek, filtered version.

```mermaid
flowchart TB
    Agent["Claude Code / OpenClaw / MCP Agent"]

    subgraph Hooks["Native Hook Layer (Transparent)"]
        Pre["Pre-Hook\n(Rewriter)"]
        Post["Post-Hook\n(Distiller)"]
        Sess["Session-Start\n(Context)"]
        Comp["Pre-Compact\n(Summary)"]
    end

    Agent --> Pre
    Pre -->|"omni exec"| Output["Raw Stream"]
    Output --> Post
    Post --> Agent

    subgraph OMNI_Engine["OMNI — Semantic Signal Engine"]
        direction LR
        R["Registry\n(Filters)"]
        S["Scorer\n(Context Boost)"]
        D["Distiller\n(Semantic Magic)"]
        R --> S --> D
    end

    Post --> OMNI_Engine
    Pre --> OMNI_Engine

    subgraph Persistence["Persistence Store (SQLite)"]
        ST["SessionState"]
        RW["RewindStore"]
    end

    OMNI_Engine <--> Persistence
    Sess --> ST
    Comp --> ST
```
Omni is incredibly easy to set up. It natively integrates into your terminal.
macOS / Linux:

```bash
# 1. Install via Homebrew
brew install fajarhide/tap/omni

# 2. Setup Omni (Hooks + MCP Server)
omni init --all

# 3. Verify it's working
omni doctor

# 4. Or auto-fix any issues
omni doctor --fix

# 5. Check Current Status
omni init --status
```

Universal Installer (macOS / Linux / WSL):

```bash
curl -fsSL omni.weekndlabs.com/install | bash
```

Windows (PowerShell):

```powershell
irm omni.weekndlabs.com/install.ps1 | iex
```
Once installed via omni init --all, OMNI works invisibly in the background. Whether your AI Agent runs a terminal command via MCP or you manually pipe output (ls | omni), OMNI automatically jumps in as a transparent layer. It intelligently filters terminal output, removes the noisy logs, and hands the clean signal back to the AI.
To review how many tokens (and how much money) you've saved today, just type:

```bash
omni stats
```
Need to see the filters in action, or want to add your own custom rules? You can easily define your own rules in simple TOML files.
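A custom rule file might look something like this. This is a hypothetical sketch: the path, table names, and fields are illustrative assumptions, not Omni's documented schema, so check the project docs for the real format.

```toml
# ~/.config/omni/filters/npm.toml  (hypothetical path and schema)
[filter]
name = "npm-noise"
command = "npm"                 # apply when the wrapped command is npm

[[filter.drop]]                 # lines to discard entirely
pattern = "^npm (WARN deprecated|timing)"

[[filter.boost]]                # lines to prioritize in the distilled output
pattern = "^npm ERR!"
weight = 10
```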
For Users:

- `omni learn`, Custom TOML Filters, and CLI Commands.
- OpenClaw plugin: `openclaw plugins install clawhub:@fajarhide/omni-signal-engine`

For Developers & System Integrators:
Omni is part of my personal AI toolbelt. If you use claude-code, I highly recommend pairing Omni with my other project: Heimsense.
Heimsense unlocks restricted environments like claude-code to run with any free or OpenAI-compatible model, rather than forcing you to use expensive Anthropic ones.
Omni + Heimsense = Run world-class agent frameworks using affordable models with zero noise and pinpoint accuracy.
This is a passion project built for the era of Agentic AI. Whether you're here to save money on tokens, test out free models, or help build the ultimate agentic toolbelt, contributions are always welcome!
Run `make ci` and `cargo build` before submitting. Read our CONTRIBUTING.md for details.

Built with ❤️ by Fajar Hidayat