by jmagly
Cognitive architecture for AI-augmented software development. Specialized agents, structured workflows, and multi-platform deployment. Claude Code · Codex · Copilot · Cursor · Factory · Warp · Windsurf.
```shell
# Add to your Claude Code skills
git clone https://github.com/jmagly/aiwg
```

Multi-agent AI framework for Claude Code, Copilot, Cursor, Warp, and 4 more platforms.
188 agents, 50 CLI commands, 128 skills, 5 core frameworks + training marketplace plugin, 23 addons. SDLC workflows, digital forensics, research management, marketing operations, media curation, ops infrastructure, and fine-tuning dataset curation — all deployable with one command.
```shell
npm i -g aiwg   # install globally
aiwg use sdlc   # deploy SDLC framework
```
Get Started · Features · Agents · CLI Reference · Documentation · Community
AIWG is a deployment tool and support utility for AI context. At its core, aiwg use copies markdown and YAML source files into the specific paths each AI platform looks in — .claude/agents/, ~/.codex/skills/, .cursor/rules/, .github/prompts/, and six more — so one source of truth works across 10 platforms.
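The copy step can be pictured as a loop over platform target paths. This is a minimal Python sketch of the idea, not AIWG's actual implementation — the function name and logic are illustrative, and only a subset of the target directories named above is shown:

```python
import shutil
from pathlib import Path

# Illustrative subset of the platform context directories named above.
PLATFORM_DIRS = [
    ".claude/agents",
    ".cursor/rules",
    ".github/prompts",
]

def deploy(source_dir: str, project_root: str) -> list[Path]:
    """Copy every .md source file into each platform's context directory."""
    deployed = []
    for target in PLATFORM_DIRS:
        dest = Path(project_root) / target
        dest.mkdir(parents=True, exist_ok=True)
        for src in Path(source_dir).glob("*.md"):
            copy = dest / src.name
            shutil.copy(src, copy)       # plain file copy: no runtime, no daemon
            deployed.append(copy)
    return deployed
```

One source of truth, many destinations; each platform then reads its own copy natively.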
Around that core, AIWG ships utilities for things the base platforms do not handle on their own: persistent artifact memory (.aiwg/), background orchestration (aiwg mc), autonomous loops (aiwg ralph), artifact indexing (aiwg index), cost telemetry, health diagnostics, and more. Most are opt-in. The deployment layer works standalone as plain text files the platform reads natively.
AIWG ships five primitive artifact types. All are plain text:
Each is a single .md file with YAML frontmatter. Nothing executes until an AI platform reads it.
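To make the "YAML frontmatter plus markdown body" shape concrete, here is a hypothetical artifact and a minimal split routine — the field names are invented for illustration and real definitions vary by platform:

```python
def split_frontmatter(text: str) -> tuple[str, str]:
    """Split a '---'-delimited YAML frontmatter block from the markdown body."""
    if text.startswith("---\n"):
        header, _, body = text[4:].partition("\n---\n")
        return header, body.lstrip("\n")
    return "", text

# A hypothetical agent definition; real field names vary by platform.
artifact = """---
name: security-auditor
description: Reviews code changes for security issues
---
# Security Auditor

Review the diff for injection, authz, and secrets-handling problems.
"""

frontmatter, body = split_frontmatter(artifact)
```

The frontmatter tells the platform what the artifact is; the body is the prompt it loads.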
Because the primitives are text, they compose without runtime coordination:
The .aiwg/ directory gives those agents a shared memory — artifacts from Monday's requirements session are read by Thursday's test design.

The leverage is not in any one file. It is that hundreds of small files — each independently readable and editable — snap together into workflows that would otherwise take a bespoke agent platform to build.
This is also where the research background lives. AIWG implements patterns from cognitive science (Miller 1956, Sweller 1988), multi-agent systems (Jacobs et al. 1991, MetaGPT, AutoGen), and software engineering (Cooper's stage-gate, FAIR Principles, W3C PROV) — applied as file conventions and deployment rules, not as a runtime you depend on.
These are CLI tools and services on top of the text-file substrate. The substrate works without them:
- `aiwg ralph` — autonomous iterate-until-done loops
- `aiwg mc` — background mission-control for parallel tasks
- `aiwg daemon` — persistent session manager
- `aiwg index` — searchable artifact index
- `aiwg mcp` — MCP server for runtime tool access

Turn any of these on when you want persistence, parallelism, or automation. Turn them off and your deployed agents, skills, and rules still work — they are still text files the platform reads natively.
AIWG's footprint in your repository is the .aiwg/ directory (artifacts) and a few provider-specific context dirs (deployed copies). Delete them and your app is unchanged.

If you have used AI coding assistants and thought "this is amazing for small tasks but falls apart on anything complex," AIWG is the missing infrastructure layer that scales AI assistance to multi-week projects.
Base AI assistants (Claude, GPT-4, Copilot without frameworks) have three fundamental limitations:
Each conversation starts fresh. The assistant has no idea what happened yesterday, what requirements you documented, or what decisions you made last week. You re-explain context every morning.
Without AIWG: Projects stall as context rebuilding eats time. A three-month project requires continuity, not fresh starts every session.
With AIWG: The .aiwg/ directory maintains 50-100+ interconnected artifacts across days, weeks, and months. Later phases build on earlier ones automatically because memory persists. Agents read prior work via @-mentions instead of regenerating from scratch.
The segmented structure also makes large projects tractable. As code files grow, the project doesn't become harder to reason about — agents load only the slice of memory relevant to the current task (@requirements/UC-001.md, @architecture/sad.md, @testing/test-plan.md) rather than the entire codebase. Each subdirectory is a focused knowledge domain that fits comfortably in context, while cross-references keep everything connected.
The artifact index (aiwg index) takes this further. Without any tooling, agents often need to browse 3-6 documents before finding what they need. AIWG's structured artifacts reduce this to 2-3. With the index enabled, agents resolve artifact lookups in one query more often than not — a direct hit on the right requirement, architecture decision, or test case without browsing.
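The single-query lookup can be sketched as a keyword-to-path map. This toy index is purely illustrative — it is not the `aiwg index` format, and the artifact titles are invented:

```python
from collections import defaultdict

def build_index(artifacts: dict[str, str]) -> dict[str, set[str]]:
    """Map each lowercase word in an artifact's title line to its path."""
    index = defaultdict(set)
    for path, text in artifacts.items():
        title = text.splitlines()[0].lstrip("# ")
        for word in title.lower().split():
            index[word].add(path)
    return index

# Hypothetical artifacts keyed by their .aiwg/ path.
artifacts = {
    ".aiwg/requirements/UC-001.md": "# Login use case",
    ".aiwg/architecture/sad.md": "# Software architecture document",
}
index = build_index(artifacts)
hits = index["login"]   # one query instead of browsing every document
```

The point is the access pattern: a direct hit on the right artifact replaces a browse through the directory tree.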
When AI generates broken code or flawed designs, you manually intervene, explain the problem, and hope the next attempt works. There is no systematic learning from failures, no structured retry, no checkpoint-and-resume.
Without AIWG: Research shows 47% of AI workflows produce inconsistent outputs without reproducibility constraints (R-LAM, Sureshkumar et al. 2026). Debugging is trial-and-error.
With AIWG: The agent loop implements closed-loop self-correction — execute, verify, learn from failure, adapt strategy, retry. External Ralph survives crashes and runs for 6-8+ hours autonomously. Debug memory accumulates failure patterns so the agent doesn't repeat mistakes.
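A minimal sketch of that execute-verify-retry cycle with accumulated failure memory — the function names are illustrative, and AIWG's actual loop adds checkpointing and crash recovery on top:

```python
from typing import Callable, Optional

def agent_loop(attempt: Callable[[list], str],
               verify: Callable[[str], Optional[str]],
               max_tries: int = 5) -> Optional[str]:
    """Execute, verify, record the failure, and retry with that memory."""
    failures: list = []               # debug memory: past failure patterns
    for _ in range(max_tries):
        result = attempt(failures)    # strategy adapts to what failed before
        error = verify(result)
        if error is None:
            return result             # verified: done
        failures.append(error)        # learn from the failure, then retry
    return None

# Toy demo: the second attempt uses the recorded failure to succeed.
def attempt(failures): return "fixed" if failures else "broken"
def verify(result): return None if result == "fixed" else "test failed"
outcome = agent_loop(attempt, verify)
```

The key difference from a bare retry is the `failures` list: each attempt sees what already went wrong instead of starting blind.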
Base assistants optimize for "sounds plausible" not "actually works." A general assistant critiques security, performance, and maintainability simultaneously — poorly. No domain specialization, no multi-perspective review, no human approval checkpoints.
Without AIWG: Production code ships without architectural review, security validation, or operational feasibility assessment.
With AIWG: 162 specialized agents provide domain expertise — Security Auditor reviews security, Test Architect reviews testability, Performance Engineer reviews scalability. Multi-agent review panels with synthesis. Human-in-the-loop gates at every phase transition. Research shows 84% cost reduction keeping humans on high-stakes decisions versus fully autonomous systems (Agent Laboratory, Schmidgall et al. 2025).
The .aiwg/ directory is a persistent artifact repository storing requirements, architecture decisions, test strategies, risk registers, and deployment plans across sessions. This implements Retrieval-Augmented Generation patterns (Lewis et al., 2020) — agents retrieve from an evolving knowledge base rather than regenerating from scratch.
Each artifact is discoverable via @-mentions (e.g., @.aiwg/requirements/UC-001-login.md). Context sharing between agents happens through artifacts: the requirements analyst writes use cases, the architecture designer reads them.
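Resolving an @-mention is, at bottom, a path lookup. A minimal sketch — the regex and the in-place expansion are assumptions for illustration, not the platforms' actual mention rules:

```python
import re

def expand_mentions(prompt: str, read_file) -> str:
    """Replace each @path token with the referenced artifact's contents."""
    def repl(match: re.Match) -> str:
        return read_file(match.group(1))
    return re.sub(r"@(\S+\.md)", repl, prompt)

# Hypothetical in-memory artifact store standing in for the filesystem.
store = {".aiwg/requirements/UC-001-login.md": "UC-001: user logs in with email"}
expanded = expand_mentions("Design against @.aiwg/requirements/UC-001-login.md",
                           store.__getitem__)
```

This is how the architecture designer "reads" the analyst's use cases: the mention pulls the artifact's text into context.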
Instead of a single general-purpose assistant, AIWG provides 162 specialized agents organized by domain. Complex artifacts go through multi-agent review panels.