by vinkius-labs
The Express.js for MCP Servers. Type-safe tools · Presenters that control what the LLM sees · Built-in PII redaction · Deploy once — every AI assistant connects.
# Add to your Claude Code skills
git clone https://github.com/vinkius-labs/vurb.ts
Documentation · Quick Start · API Reference · llms.txt
vurb create my-server
Open it in Cursor, Claude Code, or GitHub Copilot and prompt:
💬 Tell your AI agent:
"Build an MCP server for patient records with Prisma. Redact SSN and diagnosis from LLM output. Add an FSM that gates discharge tools until attending physician signs off."
The agent reads the SKILL.md (or the llms.txt) and writes the entire server. First pass — no corrections.
One command. Your MCP server is live on Vinkius Edge, Vercel Functions, or Cloudflare Workers.
vurb deploy
A production-ready MCP server with file-based routing, Presenters, middleware, tests, and pre-configured connections for Cursor, Claude Desktop, Claude Code, Windsurf, Cline, and VS Code + GitHub Copilot.
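To make the Presenter idea concrete, here is a standalone sketch of what field-level redaction does conceptually. This is plain TypeScript, not the Vurb.ts internals; the `redactPII` helper, the field names, and the `'[REDACTED]'` placeholder are all illustrative assumptions.

```typescript
// Illustrative sketch only — not the Vurb.ts implementation.
type Redactable = Record<string, unknown>;

// Returns a copy of `record` with the listed fields masked. The original
// object is untouched, so a UI layer can still render the real values
// while the masked copy is what reaches the LLM.
function redactPII<T extends Redactable>(record: T, fields: (keyof T)[]): T {
  const copy: Redactable = { ...record };
  for (const field of fields) {
    if (field in copy) copy[field as string] = '[REDACTED]';
  }
  return copy as T;
}

const patient = { id: 'p1', name: 'Ada', ssn: '123-45-6789', diagnosis: 'flu' };
const forLLM = redactPII(patient, ['ssn', 'diagnosis']);
// forLLM.ssn === '[REDACTED]', forLLM.diagnosis === '[REDACTED]'
// patient.ssn is still '123-45-6789' for the UI block
```

The key design point is that redaction is a projection applied at the output boundary, not a mutation of the stored record: the same data can be fully visible in UI blocks and masked in LLM output.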
Every framework you've adopted followed the same loop: read the docs, study the conventions, hit an edge case, search GitHub issues, re-read the docs. Weeks before your first production PR. Your AI coding agent does the same — it hallucinates Express patterns into your Hono project because it has no formal contract to work from.
Vurb.ts ships a SKILL.md — a machine-readable architectural contract that your AI agent ingests before generating a single line. Not a tutorial. Not a "getting started guide" the LLM will paraphrase loosely. A typed specification: every Fluent API method, every builder chain, every Presenter composition rule, every middleware signature, every file-based routing convention. The agent doesn't approximate — it compiles against the spec.
The agent reads SKILL.md and produces:
```ts
// src/tools/patients/discharge.ts — generated by your AI agent
const PatientPresenter = createPresenter('Patient')
  .schema({ id: t.string, name: t.string, ssn: t.string, diagnosis: t.string })
  .redactPII(['ssn', 'diagnosis'])
  .rules(['HIPAA: diagnosis visible in UI blocks but REDACTED in LLM output']);

const gate = f.fsm({
  id: 'discharge',
  initial: 'admitted',
  states: {
    admitted: { on: { SIGN_OFF: 'cleared' } },
    cleared: { on: { DISCHARGE: 'discharged' } },
    discharged: { type: 'final' },
  },
});

export default f.mutation('patients.discharge')
  .describe('Discharge a patient')
  .bindState('cleared', 'DISCHARGE')
  .returns(PatientPresenter)
  .handle(async (input, ctx) =>
    ctx.db.patients.update({
      where: { id: input.id },
      data: { status: 'discharged' },
    })
  );
```
Correct Presenter with .redactPII(). FSM gating that makes patients.discharge invisible until sign-off. File-based routing. Typed handler. First pass — no corrections.
This works on Cursor, Claude Code, GitHub Copilot, Windsurf, Cline — any agent that can read a file. The SKILL.md is the single source of truth: the agent doesn't need to have been trained on Vurb.ts, it just needs to read the spec.
You don't learn Vurb.ts. You don't teach your agent Vurb.ts. You hand it a 400-line contract. It writes the server. You review the PR.
Click one of these links. The AI will read the Vurb.ts architecture and generate production-ready code in seconds: