by growthxai
The open-source TypeScript framework for building AI workflows and agents. Designed for Claude Code: describe what you want, and Claude builds it with all the best practices already in place.
# Add to your Claude Code skills
git clone https://github.com/growthxai/output
One framework. Prompts, evals, tracing, cost tracking, orchestration, credentials. No SaaS fragmentation. No vendor lock-in. Everything in your codebase, everything your AI coding agent can reach.
Every piece of the AI stack is becoming a separate subscription. Prompts in one tool. Traces in another. Evals in a third. Cost tracking across five dashboards. None of them talk to each other. Half of them will get acquired or shut down before your product ships.
Output brings everything together. One TypeScript framework, extracted from thousands of production AI workflows. Best practices baked in so beginners ship professional code from day one, and experienced AI engineers stop rebuilding the same infrastructure.
Output is the first framework designed for AI coding agents. The entire codebase is structured so Claude Code can scaffold, plan, generate, test, and iterate on your workflows. Every workflow is a folder — code, prompts, tests, evals, traces, all together. Your agent reads one folder and has full context.
.prompt files with YAML frontmatter and Liquid templating. Version-controlled, reviewable in PRs, deployed with your code. Switch providers by changing one line. No subscription needed to manage your own prompts.
Every LLM call, HTTP request, and step traced automatically. Token counts, costs, latency, full prompt/response pairs. JSON in logs/runs/. Zero config. Claude Code analyzes your traces and fixes issues — because the data is in your file system.
LLM-as-judge evaluators with confidence scores. Inline evaluators for production retry loops. Offline evaluators for dataset testing. Deterministic assertions and subjective quality judges.
Anthropic, OpenAI, Azure, Vertex AI, Bedrock. One API. Structured outputs, streaming, tool calling — all work the same regardless of provider.
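One way to picture a single API over many providers is a registry keyed by provider name, so that switching providers is a one-string change for the caller. The sketch below uses stubbed providers and a hypothetical `generate` helper; it is not Output's internals, only an illustration of the dispatch pattern.

```typescript
// Hypothetical provider registry behind one call surface.
// Providers are stubbed; Output's real implementation differs.
type CompleteFn = (prompt: string) => Promise<string>;

const providers: Record<string, CompleteFn> = {
  anthropic: async (prompt) => `[anthropic] ${prompt}`,
  openai: async (prompt) => `[openai] ${prompt}`,
};

async function generate(provider: string, prompt: string): Promise<string> {
  const complete = providers[provider];
  if (!complete) throw new Error(`Unknown provider: ${provider}`);
  // Structured outputs, streaming, and tool calling would hang off this
  // same surface, so callers never touch provider-specific SDKs.
  return complete(prompt);
}

// Switching providers is a one-line change at the call site.
console.log(await generate('anthropic', 'hello')); // "[anthropic] hello"
console.log(await generate('openai', 'hello'));    // "[openai] hello"
```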
Temporal under the hood. Automatic retries with exponential backoff. Workflow history. Replay on failure. Child workflows. Parallel execution with concurrency control. You don't think about Temporal until you need it — then it's already there.
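The retry policy Temporal applies to failed steps can be pictured as a plain function. This is a minimal sketch of retries with exponential backoff, not the framework's actual implementation:

```typescript
// Minimal sketch of retry-with-exponential-backoff, the policy Temporal
// applies to failed steps. Not Output's actual implementation.
async function withRetries<T>(
  fn: () => Promise<T>,
  maxAttempts = 4,
  baseDelayMs = 100
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      // Delay doubles each attempt: 100ms, 200ms, 400ms, ...
      const delay = baseDelayMs * 2 ** attempt;
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
  throw lastError;
}

// Usage: a flaky call that succeeds on its third attempt.
let calls = 0;
const result = await withRetries(async () => {
  calls++;
  if (calls < 3) throw new Error('transient failure');
  return 'ok';
});
console.log(result, calls); // "ok 3"
```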
AI apps need a lot of API keys. Sharing .env files is risky, and coding agents shouldn't see your secrets. Output encrypts credentials with AES-256-GCM, scoped per environment and workflow, managed through the CLI. No external vault subscription needed.
npx @outputai/cli init
cd <project-name>
Add your API key to .env:
ANTHROPIC_API_KEY=sk-ant-...
npx output dev
This starts the full development environment.

Run an example workflow:
npx output workflow run blog_evaluator paulgraham_hwh
Inspect the execution:
npx output workflow debug <workflow-id>
For the full getting started guide, see the documentation.
Orchestration layer — deterministic coordination logic, no I/O.
// src/workflows/research/workflow.ts
workflow({
  name: 'research',
  fn: async (input) => {
    const data = await gatherSources(input);
    const analysis = await analyzeContent(data);
    const quality = await checkQuality(analysis);
    return quality.passed ? analysis : await reviseContent(analysis, quality);
  }
});
Where I/O happens — API calls, LLM requests, database queries. Each step runs once and its result is cached for replay.
// src/workflows/research/steps.ts
step({
  name: 'gatherSources',
  fn: async (input) => {
    const results = await searchApi(input.topic);
    return { sources: results };
  }
});
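"Each step runs once and its result is cached for replay" can be pictured with a hypothetical in-memory cache keyed by step name. In Output this role is played by Temporal's workflow history; the map below is only illustrative.

```typescript
// Hypothetical sketch of step-result caching for replay. In Output,
// Temporal's workflow history plays this role; this map is illustrative.
type StepFn<I, O> = (input: I) => Promise<O>;

const stepCache = new Map<string, unknown>();
let executions = 0;

async function runStep<I, O>(name: string, fn: StepFn<I, O>, input: I): Promise<O> {
  // On replay, a step that already ran returns its recorded result
  // instead of re-executing its side effects.
  if (stepCache.has(name)) return stepCache.get(name) as O;
  executions++;
  const result = await fn(input);
  stepCache.set(name, result);
  return result;
}

const gatherSources = async (input: { topic: string }) => ({
  sources: [`result for ${input.topic}`],
});

// The first run executes the step; the "replay" returns the cached value.
const first = await runStep('gatherSources', gatherSources, { topic: 'ai' });
const replayed = await runStep('gatherSources', gatherSources, { topic: 'ai' });
console.log(executions); // 1
console.log(first === replayed); // true
```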
.prompt files with YAML configuration and Liquid templating.
---
provider: anthropic
model: claude-sonnet-4-20250514
temperature: 0
---
<system>You are a research analyst.</system>
<user>Analyze the following sources about {{ topic }}: {{ sources }}</user>
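Variable interpolation in these templates follows Liquid's `{{ var }}` syntax. Output uses LiquidJS for this; the regex below is a deliberately simplified stand-in that only handles plain variable lookup, to show what rendering does:

```typescript
// Simplified stand-in for Liquid variable interpolation ({{ var }}).
// Output uses LiquidJS; this sketch only handles plain variable lookup.
function renderTemplate(template: string, variables: Record<string, string>): string {
  return template.replace(/\{\{\s*(\w+)\s*\}\}/g, (_, name) =>
    name in variables ? variables[name] : `{{ ${name} }}`
  );
}

const userPrompt = 'Analyze the following sources about {{ topic }}: {{ sources }}';
const rendered = renderTemplate(userPrompt, {
  topic: 'compilers',
  sources: 'three blog posts',
});
console.log(rendered);
// "Analyze the following sources about compilers: three blog posts"
```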
LLM-as-judge evaluation with confidence scores and reasoning.
// src/workflows/research/evaluators.ts
evaluator({
  name: 'checkQuality',
  fn: async (content) => {
    const { output } = await generateText({
      prompt: 'evaluate_quality',
      variables: { content },
      output: Output.object({
        schema: z.object({
          isQuality: z.boolean(),
          confidence: z.number().describe('0-100'),
          reasoning: z.string()
        })
      })
    });
    return new EvaluationBooleanResult({
      value: output.isQuality,
      confidence: output.confidence,
      reasoning: output.reasoning
    });
  }
});
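An evaluator like this feeds the inline retry loops mentioned earlier: generate, judge, and regenerate until the judge passes or attempts run out. The sketch below uses stubbed generate/judge functions, not the framework API; the shape of the `Judgment` result mirrors the evaluator above.

```typescript
// Hypothetical generate-then-judge retry loop with stubbed functions.
// In Output, the judge would be an evaluator like checkQuality above.
interface Judgment {
  passed: boolean;
  confidence: number; // 0-100
  reasoning: string;
}

let drafts = 0;
const generateDraft = async (): Promise<string> => {
  drafts++;
  return drafts < 2 ? 'thin draft' : 'thorough, well-sourced draft';
};

const judge = async (content: string): Promise<Judgment> => ({
  passed: content.includes('well-sourced'),
  confidence: 90,
  reasoning: 'checked for sourcing',
});

async function generateWithQualityGate(maxAttempts = 3): Promise<string> {
  let content = '';
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    content = await generateDraft();
    const verdict = await judge(content);
    // Accept only confident passes; otherwise regenerate.
    if (verdict.passed && verdict.confidence >= 80) return content;
  }
  // Fall through with the last draft if no attempt passed the gate.
  return content;
}

const finalContent = await generateWithQualityGate();
console.log(drafts); // 2
```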
| Package | Description |
|---------|-------------|
| @outputai/core | Workflow, step, and evaluator primitives |
| @outputai/llm | Multi-provider LLM with prompt management |
| @outputai/http | HTTP client with tracing |
| @outputai/cli | CLI for project init, dev environment, and workflow management |
See test_workflows/ for complete examples.

Set provider API keys in .env:
ANTHROPIC_API_KEY=sk-ant-...
OPENAI_API_KEY=sk-...
AZURE_OPENAI_API_KEY=...
AZURE_OPENAI_ENDPOINT=...
AWS_ACCESS_KEY_ID=... # For Amazon Bedrock
AWS_SECRET_ACCESS_KEY=...
AWS_REGION=us-east-1
For local development, output dev handles everything. For production, use Temporal Cloud or self-hosted Temporal:
TEMPORAL_ADDRESS=your-namespace.tmprl.cloud:7233
TEMPORAL_NAMESPACE=your-namespace
TEMPORAL_API_KEY=your-api-key
# Local tracing (writes JSON to disk; default under "logs/runs/")
OUTPUT_TRACE_LOCAL_ON=true
# Remote tracing (upload to S3 on run completion)
OUTPUT_TRACE_REMOTE_ON=true
OUTPUT_REDIS_URL=redis://localhost:6379
OUTPUT_TRACE_REMOTE_S3_BUCKET=my-traces
We welcome contributions! Please read CONTRIBUTING.md first — all contributions require an approved issue and maintainer assignment before work begins.
git clone https://github.com/growthxai/output.git
cd output
pnpm install && npm run build:packages
npm run dev # Start dev environment
npm test # Run tests
npm run lint # Lint code
./run.sh validate # Validate everything
Project structure:
sdk/ — SDK packages (core, llm, http, cli)
api/ — API server for workflow execution
test_workflows/ — Example workflows

Apache 2.0 — see LICENSE file.
Built with Temporal, Vercel AI SDK, Zod, LiquidJS.