by boshu2
The missing DevOps layer for coding agents. Flow, feedback, and memory that compounds between sessions.
# Add to your Claude Code skills
```bash
git clone https://github.com/boshu2/agentops
```

How It Works · Install · See It Work · Skills · CLI · FAQ
<p align="center">
  <img src="docs/assets/swarm-6-rpi.png" alt="Agents running full development cycles in parallel with validation gates and a coordinating team leader" width="800">
  <br>
  <i>From goal to shipped code — agents research, plan, and implement in parallel. Councils validate before and after. Every learning feeds the next session.</i>
</p>

Coding agents get a blank context window every session. AgentOps is a toolbox of skills you compose however you want — use one, chain several, or run the full pipeline. Knowledge compounds between sessions automatically.
| Pattern | Chain | When |
|---------|-------|------|
| Quick fix | /implement | One issue, clear scope |
| Validated fix | /implement → /vibe | One issue, want confidence |
| Planned epic | /plan → /pre-mortem → /crank → /post-mortem | Multi-issue, structured |
| Full pipeline | /rpi (chains all above) | End-to-end, autonomous |
| Evolve loop | /evolve (chains /rpi repeatedly) | Fitness-scored improvement |
| | /pr-research → /pr-plan → /pr-implement → /pr-validate → /pr-prep | External repo |
| | /knowledge → /research (if gaps) | Understanding before building |
| | /council validate <target> | Ad-hoc multi-judge review |

Every skill maps to one of DevOps' Three Ways, applied to the agent loop:
- **Flow** (/research, /plan, /crank, /swarm, /rpi): move work through the system. Swarm parallelizes any skill; crank runs dependency-ordered waves; rpi chains the full pipeline.
- **Feedback** (/council, /vibe, /pre-mortem): shorten the feedback loop until defects can't survive it. Independent judges catch issues before code ships.
- **Memory** (.agents/, ao CLI, /retro, /knowledge): stop rediscovering what you already know. Every session extracts learnings, scores them by freshness, and re-injects the best ones next time. Session 50 knows what session 1 learned the hard way.

The learning part is what makes it compound. Your agent validates a PR, and the decisions and patterns are written to .agents/. Three weeks later, on a different task, your agent already knows:
```
> /research "retry backoff strategies"

[inject] 3 prior learnings loaded (freshness-weighted):
- Token bucket with Redis (established, high confidence)
- Rate limit at middlew...
```