# agentic-flow
Easily switch between low-cost alternative AI models in Claude Code and the Agent SDK. If you are comfortable with Claude agents and commands, agentic-flow lets you take what you have built and deploy it as fully hosted agents for real business purposes: get the agent working in Claude Code, then deploy it in your favorite cloud.
# Add to your Claude Code skills

```bash
git clone https://github.com/ruvnet/agentic-flow
```

# Agentic-Flow v2
Production-ready AI agent orchestration with 66 self-learning agents, 213 MCP tools, and autonomous multi-agent swarms.
## Quick Start (60 seconds)

```bash
# 1. Initialize your project
npx agentic-flow init

# 2. Bootstrap intelligence from your codebase
npx agentic-flow hooks pretrain

# 3. Start Claude Code with self-learning hooks
claude
```

That's it! Your project now has:
- Self-learning hooks that improve agent routing over time
- 66 specialized agents (coder, tester, reviewer, architect, etc.)
- Background workers triggered by keywords (ultralearn, optimize, audit)
- 213 MCP tools for swarm coordination
## Common Commands

```bash
# Route a task to the optimal agent
npx agentic-flow hooks route "implement user authentication"

# View learning metrics
npx agentic-flow hooks metrics

# Dispatch background workers
npx agentic-flow workers dispatch "ultralearn how caching works"

# Run MCP server for Claude Code
npx agentic-flow mcp start
```
## Use in Code

```javascript
import { AgenticFlow } from 'agentic-flow';

const flow = new AgenticFlow();
await flow.initialize();

// Route the task to the best agent
const result = await flow.route('Fix the login bug');
console.log(`Best agent: ${result.agent} (${result.confidence}% confidence)`);
```
## What's New in v2
### SONA: Self-Optimizing Neural Architecture

Agentic-Flow v2 now includes SONA (@ruvector/sona) for sub-millisecond adaptive learning:

- +55% quality improvement: Research profile with LoRA fine-tuning
- <1ms learning overhead: sub-millisecond pattern learning and retrieval
- Continual learning: EWC++ prevents catastrophic forgetting
- Pattern discovery: 300x faster pattern retrieval (150ms → 0.5ms)
- 60% cost savings: LLM router with intelligent model selection
- 2211 ops/sec: production throughput with SIMD optimization
### Complete AgentDB@alpha Integration

Agentic-Flow v2 now includes all advanced vector/graph, GNN, and attention capabilities from AgentDB@alpha v2.0.0-alpha.2.11:

- Flash Attention: 2.49x-7.47x speedup, 50-75% memory reduction
- GNN query refinement: +12.4% recall improvement
- 5 attention mechanisms: Flash, Multi-Head, Linear, Hyperbolic, MoE
- GraphRoPE: topology-aware position embeddings
- Attention-based coordination: smarter multi-agent consensus

Performance grade: A+ (100% pass rate)
## Key Features
### SONA: Self-Optimizing Neural Architecture

**Adaptive Learning (<1ms overhead)**
- Sub-millisecond pattern learning and retrieval
- 300x faster than traditional approaches (150ms → 0.5ms)
- Real-time adaptation during task execution
- No performance degradation

**LoRA Fine-Tuning (99% parameter reduction)**
- Rank-2 Micro-LoRA: 2211 ops/sec
- Rank-16 Base-LoRA: +55% quality improvement
- 10-100x faster training than full fine-tuning
- Minimal memory footprint (<5MB for edge devices)

**Continual Learning (EWC++)**
- No catastrophic forgetting
- Learns new tasks while preserving old knowledge
- EWC lambda 2000-2500 for optimal memory preservation
- Cross-agent pattern sharing

**LLM Router (60% cost savings)**
- Intelligent model selection (Sonnet vs Haiku)
- Quality-aware routing (0.8-0.95 quality scores)
- Budget constraints and fallback handling
- $720/month → $288/month savings
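The router idea above, picking the cheapest model whose expected quality clears the task's bar, can be sketched in a few lines. This is an illustration only, not agentic-flow's actual routing code; the prices and quality scores are hypothetical placeholders:

```javascript
// Quality-aware model selection sketch (hypothetical costs and scores).
const MODELS = [
  { name: 'haiku', costPerMTok: 0.25, quality: 0.8 },
  { name: 'sonnet', costPerMTok: 3.0, quality: 0.95 },
];

function routeModel(minQuality) {
  // Keep only models that clear the quality bar, cheapest first.
  const ok = MODELS
    .filter((m) => m.quality >= minQuality)
    .sort((a, b) => a.costPerMTok - b.costPerMTok);
  // Fallback handling: if nothing qualifies, use the best model available.
  return ok[0] ?? MODELS.reduce((a, b) => (a.quality >= b.quality ? a : b));
}

console.log(routeModel(0.8).name);  // haiku (cheapest model that qualifies)
console.log(routeModel(0.9).name);  // sonnet (haiku's quality is too low)
```

Routing easy tasks to the cheaper model while reserving the stronger one for high-quality-threshold work is where the cost savings come from.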
**Quality improvements by domain**
- Code tasks: +5.0%
- Creative writing: +4.3%
- Reasoning: +3.6%
- Chat: +2.1%
- Math: +1.2%

**5 configuration profiles**
- Real-Time: 2200 ops/sec, <0.5ms latency
- Batch: balances throughput and adaptation
- Research: +55% quality (maximum)
- Edge: <5MB memory footprint
- Balanced: default (18ms, +25% quality)
### Advanced Attention Mechanisms

**Flash Attention (production-ready)**
- 2.49x speedup in the JavaScript runtime
- 7.47x speedup with the NAPI runtime
- 50-75% memory reduction
- <0.1ms latency for all operations

**Multi-Head Attention (standard transformer)**
- 8-head configuration
- Compatible with existing systems
- <0.1ms latency

**Linear Attention (scalable)**
- O(n) complexity
- Well suited to long sequences (>2048 tokens)
- <0.1ms latency

**Hyperbolic Attention (hierarchical)**
- Models hierarchical structures
- Queen-worker swarm coordination
- <0.1ms latency

**MoE Attention (expert routing)**
- Sparse expert activation
- Multi-agent routing
- <0.1ms latency

**GraphRoPE (topology-aware)**
- Graph structure awareness
- Swarm coordination
- <0.1ms latency
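To make the O(n) claim concrete, here is a toy linear-attention pass over plain arrays. It is a sketch only, assuming a simple ReLU+1 feature map; it has nothing to do with agentic-flow's SIMD/WASM kernels:

```javascript
// With a positive feature map phi, attention becomes phi(Q) * (phi(K)^T V),
// so the key/value sums are accumulated once and reused by every query:
// linear in sequence length, versus O(n^2) for softmax attention.
const phi = (x) => x.map((v) => Math.max(v, 0) + 1); // assumed feature map

function linearAttention(Q, K, V) {
  const d = K[0].length, dv = V[0].length;
  // S = sum_i phi(k_i) v_i^T and z = sum_i phi(k_i), built in one O(n) pass.
  const S = Array.from({ length: d }, () => new Array(dv).fill(0));
  const z = new Array(d).fill(0);
  for (let i = 0; i < K.length; i++) {
    const pk = phi(K[i]);
    for (let a = 0; a < d; a++) {
      z[a] += pk[a];
      for (let b = 0; b < dv; b++) S[a][b] += pk[a] * V[i][b];
    }
  }
  // Every query reuses S and z, so the whole pass stays linear in n.
  return Q.map((q) => {
    const pq = phi(q);
    const norm = pq.reduce((acc, v, a) => acc + v * z[a], 0);
    return Array.from({ length: dv }, (_, b) =>
      pq.reduce((acc, v, a) => acc + v * S[a][b], 0) / norm
    );
  });
}
```

Each output row is a normalized, similarity-weighted mix of the values, which is why long sequences (>2048 tokens) benefit most.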
### GNN Query Refinement

- +12.4% recall improvement target
- 3-layer GNN network
- Graph context integration
- Automatic query optimization
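The intuition behind GNN refinement can be shown with a toy message-passing loop: each layer blends the query embedding with the mean of its graph neighbours, so the query absorbs graph context before retrieval. This is a simplified stand-in, not AgentDB's actual 3-layer network; the `mix` ratio is an assumed parameter:

```javascript
// Toy GNN-style query refinement: 'layers' rounds of averaging the query
// vector with its neighbours' embeddings (one round = one message pass).
function refine(query, neighbors, layers = 3, mix = 0.5) {
  let q = query.slice();
  for (let l = 0; l < layers; l++) {
    const mean = q.map((_, i) =>
      neighbors.reduce((s, n) => s + n[i], 0) / neighbors.length
    );
    // Blend the current query with the neighbourhood mean.
    q = q.map((v, i) => (1 - mix) * v + mix * mean[i]);
  }
  return q;
}

console.log(refine([1, 0], [[0, 1], [0, 1]])); // → [ 0.125, 0.875 ]
```

After three layers the query has moved most of the way toward its neighbourhood, which is the mechanism behind the recall gains: the refined query matches documents its raw form would have missed.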
### 66 Self-Learning Specialized Agents

All agents now feature v2.0.0-alpha self-learning capabilities:

- ReasoningBank integration: learn from past successes and failures
- GNN-enhanced context: +12.4% better accuracy in finding relevant information
- Flash Attention: 2.49x-7.47x faster processing
- Attention coordination: smarter multi-agent consensus
**Core Development (self-learning enabled)**
- coder: learns code patterns, implements faster with GNN context
- reviewer: pattern-based issue detection, attention-consensus reviews
- tester: learns from test failures, generates comprehensive tests
- planner: MoE routing for optimal agent assignment
- researcher: GNN-enhanced pattern recognition, attention synthesis

**Swarm Coordination (advanced attention mechanisms)**
- hierarchical-coordinator: hyperbolic attention for queen-worker models
- mesh-coordinator: multi-head attention for peer consensus
- adaptive-coordinator: dynamic mechanism selection (flash/multi-head/linear/hyperbolic/moe)
- collective-intelligence-coordinator: distributed memory coordination
- swarm-memory-manager: cross-agent learning patterns
- byzantine-coordinator, raft-manager, gossip-coordinator
- crdt-synchronizer, quorum-manager, security-manager

**Performance & Optimization**
- perf-analyzer, performance-benchmarker, task-orchestrator
- memory-coordinator, smart-agent

**GitHub & Repository (intelligent code analysis)**
- pr-manager: smart merge strategies, attention-based conflict resolution
- code-review-swarm: pattern-based issue detection, GNN code search
- issue-tracker: smart classification, attention-based priority ranking
- release-manager: deployment strategy selection, risk assessment
- workflow-automation: pattern-based workflow generation

**SPARC Methodology (continuous improvement)**
- specification: learns from past specs, GNN requirement analysis
- pseudocode: algorithm pattern library, MoE optimization
- architecture: flash attention for large docs, pattern-based design
- refinement: learns from test failures, pattern-based refactoring

And 40+ more specialized agents, all with self-learning!
### 213 MCP Tools

- Swarm & agents: swarm_init, agent_spawn, task_orchestrate
- Memory & neural: memory_usage, neural_train, neural_patterns
- GitHub integration: github_repo_analyze, github_pr_manage
- Performance: benchmark_run, bottleneck_analyze, token_usage
- And 200+ more tools!
### Advanced Capabilities

**ReasoningBank learning memory**: all 66 agents learn from every task execution
- Store successful patterns with reward scores
- Learn from failures to avoid repeating mistakes
- Cross-agent knowledge sharing
- Continuous improvement over time (+10% accuracy improvement per 10 iterations)

**Self-learning agents**: every agent improves autonomously
- Pre-task: search for similar past solutions
- During: use GNN-enhanced context (+12.4% better accuracy)
- Post-task: store learning patterns for future use
- Track performance metrics and optimize strategies
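The pre-task/post-task loop above can be sketched as a reward-scored pattern store. This is a hypothetical stand-in for ReasoningBank, with naive keyword overlap replacing the GNN-enhanced search:

```javascript
// Reward-weighted pattern memory: post-task stores an approach with its
// reward score; pre-task retrieves the best match weighted by past reward.
class PatternBank {
  constructor() { this.patterns = []; }

  // Post-task: record what was tried and how well it worked (0..1 reward).
  store(task, approach, reward) {
    this.patterns.push({ task, approach, reward });
  }

  // Pre-task: crude keyword overlap stands in for semantic/GNN search here.
  retrieve(task) {
    const words = new Set(task.toLowerCase().split(/\s+/));
    let best = null, bestScore = 0;
    for (const p of this.patterns) {
      const overlap = p.task.toLowerCase().split(/\s+/)
        .filter((w) => words.has(w)).length;
      const score = overlap * p.reward; // weight matches by past reward
      if (score > bestScore) { bestScore = score; best = p; }
    }
    return best;
  }
}

const bank = new PatternBank();
bank.store('fix login bug', 'reproduce, bisect, patch session handling', 0.9);
bank.store('fix login bug', 'rewrite auth from scratch', 0.2);
console.log(bank.retrieve('login bug in session').approach);
// prints the high-reward approach, not the low-reward one
```

Weighting retrieval by reward is what lets failures steer future tasks away from approaches that did not work.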
**Flash Attention processing**: 2.49x-7.47x faster execution
- Automatic runtime detection (NAPI → WASM → JS)
- 50% memory reduction for long contexts
- <0.1ms latency for all operations
- Graceful degradation across runtimes
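The NAPI → WASM → JS degradation chain amounts to trying each runtime in order of speed and keeping the first one that loads. A minimal sketch, with hypothetical loader functions rather than agentic-flow's real module loading:

```javascript
// Try runtimes fastest-first; a failed load falls through to the next one.
function pickRuntime(loaders) {
  for (const { name, load } of loaders) {
    try {
      return { name, impl: load() }; // first runtime that loads wins
    } catch {
      // graceful degradation: fall through to the next, slower runtime
    }
  }
  throw new Error('no attention runtime available');
}

const runtime = pickRuntime([
  { name: 'napi', load: () => { throw new Error('native addon not built'); } },
  { name: 'wasm', load: () => ({ kind: 'wasm' }) },
  { name: 'js',   load: () => ({ kind: 'js' }) },
]);
console.log(runtime.name); // wasm (NAPI failed, so we degraded one step)
```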
**Intelligent coordination**: better than simple voting
- Attention-based multi-agent consensus
- Hierarchical coordination with hyperbolic attention
- MoE routing for expert agent selection
- Topology-aware coordination with GraphRoPE
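Why attention-based consensus beats simple voting can be shown with a toy confidence-weighted vote. The weighting scheme here is an illustrative stand-in, not agentic-flow's actual attention math:

```javascript
// Confidence-weighted consensus: confident agents count for more than
// unsure ones, unlike one-agent-one-vote majority voting.
function weightedConsensus(proposals) {
  const totals = new Map();
  for (const p of proposals) {
    const w = Math.exp(p.confidence); // softmax-style weight
    totals.set(p.answer, (totals.get(p.answer) ?? 0) + w);
  }
  // Return the answer with the highest total weight.
  return [...totals.entries()].sort((a, b) => b[1] - a[1])[0][0];
}

const proposals = [
  { agent: 'coder',    answer: 'A', confidence: 0.95 },
  { agent: 'tester',   answer: 'B', confidence: 0.10 },
  { agent: 'reviewer', answer: 'B', confidence: 0.10 },
];
// Simple majority would pick B (two votes to one); weighting by confidence
// picks A, because one highly confident agent outweighs two unsure ones.
console.log(weightedConsensus(proposals)); // A
```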
**Quantum-resistant Jujutsu VCS**: secure version control with Ed25519 signatures

**Agent Booster**: 352x faster code editing with a local WASM engine

**Distributed consensus**: Byzantine, Raft, Gossip, CRDT protocols

**Neural networks**: 27+ ONNX models, WASM SIMD acceleration

**QUIC transport**: low-latency, secure agent communication
## Benefits

### For Developers

- Pre-built agents for common tasks
- Auto-spawning based on file types
- Smart code completion and editing
- 352x faster local code edits with Agent Booster
- 2.49x-7.47x speedup with Flash Attention
- 150x-12,500x faster vector