by vargHQ
AI video generation SDK — JSX for videos. One API for Kling, Flux, ElevenLabs, Sora. Built on Vercel AI SDK.
# Add to your Claude Code skills
git clone https://github.com/vargHQ/sdk

varg is an open-source TypeScript SDK for AI video generation. One API key, one gateway — generate images, video, speech, music, lipsync, and captions through varg.* providers. Write videos as JSX components (like React), then render locally or in the cloud.
Install the varg skill into Claude Code, Cursor, Windsurf, or any agent that supports skills. Zero code — just prompt.
# 1. Install the varg skill
npx -y skills add vargHQ/skills --all --copy -y
# 2. Set your API key (get one at app.varg.ai)
export VARG_API_KEY=varg_live_xxx
# 3. Create your first video
claude "create a 10-second product video for white sneakers, 9:16, UGC style, with captions and background music"
The agent writes declarative JSX; varg handles AI generation, caching, and rendering.
# Install with bun (recommended)
bun install vargai ai
# Or with npm
npm install vargai ai
# Set up project (auth, skills, hello.tsx, cache dirs)
bunx vargai init
vargai init handles everything: signs you in, installs the agent skill, creates a starter template, and sets up your project structure.
Then render the starter template:
bunx vargai render hello.tsx
Or ask your AI agent to create something from scratch.
Your prompt / JSX code
          |
varg gateway (api.varg.ai)
   /      |        \       \
Kling   Flux   ElevenLabs   Wan ... (AI providers)
   \      |        /       /
  varg render engine
          |
      output.mp4
- One API key (VARG_API_KEY) routes to all providers through the varg gateway
- Declarative JSX components: <Clip>, <Video>, <Music>, <Captions>
- Render locally with bunx vargai render, or submit via the Cloud Render API

import { Render, Clip, Image, Video } from "vargai/react";
import { varg } from "vargai/ai";
const character = Image({
prompt: "cute kawaii orange cat, round body, big eyes, Pixar style",
model: varg.imageModel("nano-banana-pro"),
aspectRatio: "9:16",
});
export default (
<Render width={1080} height={1920}>
<Clip duration={5}>
<Video
prompt={{ text: "cat waves hello, bounces happily", images: [character] }}
model={varg.videoModel("kling-v3")}
/>
</Clip>
</Render>
);
bunx vargai render hello.tsx
import { Render, Clip, Image, Video, Speech, Captions, Music } from "vargai/react";
import { varg } from "vargai/ai";
const character = Image({
model: varg.imageModel("nano-banana-pro"),
prompt: "friendly robot, blue metallic, expressive eyes",
aspectRatio: "9:16",
});
const voiceover = Speech({
model: varg.speechModel("eleven_v3"),
voice: "adam",
children: "Hello! I'm your AI assistant. Let me show you something cool!",
});
export default (
<Render width={1080} height={1920}>
<Music prompt="upbeat electronic, cheerful" model={varg.musicModel()} volume={0.15} />
<Clip duration={5}>
<Video
prompt={{ text: "robot talking, subtle head movements", images: [character] }}
model={varg.videoModel("kling-v3")}
/>
</Clip>
<Captions src={voiceover} style="tiktok" color="#ffffff" withAudio />
</Render>
);
import { Render, Clip, Image, Video, Speech, Captions, Music } from "vargai/react";
import { varg } from "vargai/ai";
const voiceover = Speech({
model: varg.speechModel("eleven_v3"),
voice: "josh",
children: "With varg, you can create any video at scale!",
});
const baseCharacter = Image({
prompt: "woman, sleek black bob hair, fitted black t-shirt, natural look",
model: varg.imageModel("nano-banana-pro"),
aspectRatio: "9:16",
});
const animatedCharacter = Video({
prompt: {
text: "woman speaking naturally, subtle head movements, friendly expression",
images: [baseCharacter],
},
model: varg.videoModel("kling-v3"),
});
export default (
<Render width={1080} height={1920}>
<Music prompt="modern tech ambient, subtle electronic" model={varg.musicModel()} volume={0.1} />
<Clip duration={5}>
<Video
prompt={{ video: animatedCharacter, audio: voiceover }}
model={varg.videoModel("sync-v2-pro")}
/>
</Clip>
<Captions src={voiceover} style="tiktok" color="#ffffff" withAudio />
</Render>
);
| Component | Purpose | Key props |
|-----------|---------|-----------|
| <Render> | Root container | width, height, fps |
| <Clip> | Time segment | duration, transition, cutFrom, cutTo |
| <Image> | AI or static image | prompt, src, model, zoom, aspectRatio, resize |
| <Video> | AI or source video | prompt, src, model, volume, cutFrom, cutTo |
| <Speech> | Text-to-speech | voice, model, volume, children |
| <Music> | Background music | prompt, src, model, volume, loop, ducking |
| <Title> | Text overlay | position, color, start, end |
| <Subtitle> | Subtitle text | backgroundColor |
| <Captions> | Auto-generated subs | src, srt, style, color, activeColor, withAudio |
| <Overlay> | Positioned layer | left, top, width, height, keepAudio |
| <Split> | Side-by-side | direction |
| <Slider> | Before/after reveal | direction |
| <Swipe> | Tinder-style cards | direction, interval |
| <TalkingHead> | Animated character | character, src, voice, model, lipsyncModel |
| <Packshot> | End card with CTA | background, logo, cta, blinkCta |
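As a sketch of how these components compose — prop names are taken from the table above, while the source file paths and the text child of <Title> are illustrative assumptions, not documented API:

```tsx
import { Render, Clip, Split, Video, Title } from "vargai/react";

export default (
  <Render width={1080} height={1920} fps={30}>
    <Clip duration={4}>
      {/* Side-by-side comparison of two source videos */}
      <Split direction="horizontal">
        <Video src="before.mp4" volume={0} />
        <Video src="after.mp4" volume={0} />
      </Split>
      {/* Text overlay shown for the full clip */}
      <Title position="top" color="#ffffff" start={0} end={4}>
        Before / After
      </Title>
    </Clip>
  </Render>
);
```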
<Captions src={voiceover} style="tiktok" /> // word-by-word highlight
<Captions src={voiceover} style="karaoke" /> // fill left-to-right
<Captions src={voiceover} style="bounce" /> // words bounce in
<Captions src={voiceover} style="typewriter" /> // typing effect
67 GL transitions available:
<Clip transition={{ name: "fade", duration: 0.5 }}>
<Clip transition={{ name: "crossfade", duration: 0.5 }}>
<Clip transition={{ name: "wipeleft", duration: 0.5 }}>
<Clip transition={{ name: "cube", duration: 0.8 }}>
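Following the snippets above, a transition is declared on a <Clip>. A minimal two-clip sketch — the source file names are placeholders, and it assumes the transition plays between the preceding clip and the clip that declares it:

```tsx
import { Render, Clip, Video } from "vargai/react";

export default (
  <Render width={1080} height={1920}>
    <Clip duration={3}>
      <Video src="intro.mp4" />
    </Clip>
    {/* 0.8s "cube" transition into this second clip */}
    <Clip duration={3} transition={{ name: "cube", duration: 0.8 }}>
      <Video src="outro.mp4" />
    </Clip>
  </Render>
);
```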
All models are accessed through varg.* — one API key, one provider.
import { varg } from "vargai/ai";
| Model | Use case | Credits (5s) |
|-------|----------|-------------|
| varg.videoModel("kling-v3") | Best quality, latest | 150 |
| varg.videoModel("kling-v3-standard") | Good quality, cheaper | 50 |
| varg.videoModel("kling-v2.5") | Previous gen, reliable | 50 |
| varg.videoModel("wan-2.5") | Good for characters | 50 |
| varg.videoModel("minimax") | Alternative | 50 |
| varg.videoModel("sync-v2-pro") | Lipsync (video + audio) | 50 |
| Model | Use case | Credits |
|-------|----------|---------|
| varg.imageModel("nano-banana-pro") | Versatile, fast | 5 |
| varg.imageModel("nano-banana-pro/edit") | Image-to-image editing | 5 |
| varg.imageModel("flux-schnell") | Fast generation | 5 |
| varg.imageModel("flux-pro") | High quality | 25 |
| varg.imageModel("recraft-v3") | Alternative | 10 |
| Model | Use case | Credits |
|-------|----------|---------|
| varg.speechModel("eleven_v3") | Text-to-speech | 25 |
| varg.speechModel("eleven_multilingual_v2") | Multilingual TTS | 25 |
| varg.musicModel() | Music generation | 25 |
| varg.transcriptionModel("whisper") | Speech-to-text | 5 |
1 credit = $0.01. Cache hits are always free.
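Costs are simple to estimate up front. A small, hypothetical helper (not part of the SDK) that totals the credit figures from the tables above and converts them to USD at 1 credit = $0.01:

```typescript
// Hypothetical helper for budgeting — not a vargai API.
// Credit prices come from the model tables above; 1 credit = $0.01.
const CREDIT_USD = 0.01;

function estimateCost(credits: number[]): number {
  const total = credits.reduce((sum, c) => sum + c, 0);
  return total * CREDIT_USD;
}

// One 5s kling-v3 clip (150) + a nano-banana-pro image (5)
// + eleven_v3 speech (25) + music (25) = 205 credits:
const usd = estimateCost([150, 5, 25, 25]);
console.log(usd.toFixed(2)); // → "2.05"
```

That estimate assumes no cache hits; cached generations cost nothing, so iterating on an existing composition is typically much cheaper than the first render.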
bunx vargai login # sign in (email OTP or API key)
bunx vargai init # set up project (auth + skills + template)
bunx vargai render video.tsx # render a video
bunx vargai render video.tsx --preview # free preview with placeholders
bunx vargai render video.tsx --verbose # render with detailed output
bunx vargai balance # check credit balance
bunx vargai topup # add credits
bunx vargai run image --prompt "sunset" # generate a single image
bunx vargai run video --prompt "ocean waves" # generate a single video
bunx vargai list # list available models and actions
bunx vargai studio # open visual editor
# Required — one key for everything
VARG_API_KEY=varg_live_xxx
Get your API key at app.varg.ai. Bun auto-loads .env files.
You can use provider keys directly if you prefer:
FAL_API_KEY=fal_xxx # fal.ai direct
ELEVENLABS_API_KEY=xxx # ElevenLabs direct