Give Claude the ability to watch and understand videos — Claude Code plugin with frame extraction and multimodal audio analysis
```sh
# Add to your Claude Code skills
git clone https://github.com/jordanrendric/claude-video-vision
```
A Claude Code plugin that extracts frames via ffmpeg and processes audio via multiple backends (Gemini API, local Whisper, or OpenAI API). Claude receives frames as images and audio transcription with timestamps — the plugin is a perception layer, not an interpretation layer.
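The frame-extraction half is ordinary ffmpeg usage. As a rough sketch, the plugin's argument list might be assembled like this (the exact flags, filter chain, and output naming are assumptions for illustration, not the plugin's actual code):

```typescript
// Hypothetical sketch: build an ffmpeg argument list for frame extraction.
// Flag choices (fps filter, scale, frame cap) are illustrative assumptions.
function ffmpegFrameArgs(
  input: string,
  fps: number,
  maxFrames: number,
  height: number,
): string[] {
  return [
    "-i", input,                             // source video
    "-vf", `fps=${fps},scale=-2:${height}`,  // sample rate + resize, keep aspect
    "-frames:v", String(maxFrames),          // cap total frames extracted
    "frame_%04d.png",                        // numbered output images
  ];
}

console.log(ffmpegFrameArgs("demo.mp4", 1, 100, 512).join(" "));
```

The `scale=-2:512` form keeps the aspect ratio while forcing an even width, which is the usual idiom for ffmpeg's scale filter.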
Inside Claude Code, run these commands one at a time:

```
/plugin marketplace add https://github.com/jordanrendric/claude-video-vision
```

Then:

```
/plugin install claude-video-vision
```
The MCP server will auto-install via npx from npm on first use — no build step required.
Alternative: local development

```sh
git clone https://github.com/jordanrendric/claude-video-vision.git
claude --plugin-dir /path/to/claude-video-vision
```
Inside Claude Code, run the interactive wizard:
```
/setup-video-vision
```
It walks you through backend selection, Whisper configuration (if local), frame options, and dependency verification.
```
/watch-video path/to/video.mp4
/watch-video tutorial.mp4 "what language is used in this tutorial?"
```
Just mention a video file — Claude will detect it:
"analyze this video for me: ~/Downloads/demo.mp4"
"take a look at the first second of ~/videos/bug-report.mov"
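Detecting a mentioned video presumably comes down to spotting video-file paths in the message text. A toy sketch (the extension list and regex are assumptions; the plugin's real detection may differ):

```typescript
// Toy sketch: find video file paths mentioned in a chat message.
// The extension list is an assumption, not the plugin's actual set.
const VIDEO_PATH = /[\w~][\w./-]*\.(?:mp4|mov|mkv|webm|avi)\b/gi;

function findVideoPaths(message: string): string[] {
  return message.match(VIDEO_PATH) ?? [];
}
```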
Claude adapts parameters automatically: "the first second", for example, becomes the frame range 00:00:00 to 00:00:01.

| Backend | Audio processing | Cost | Setup |
|---------|------------------|------|-------|
| Gemini API | Native (speech + non-speech events) | Free tier: 1500 req/day | `GEMINI_API_KEY` env var |
| Local (Whisper) | whisper.cpp or Python openai-whisper | Free, fully offline | `brew install whisper-cpp` + auto model download |
| OpenAI API | OpenAI Whisper API | Paid per usage | `OPENAI_API_KEY` env var |
All backends extract video frames via ffmpeg — Claude always has direct visual access.
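Backend selection can be pictured as a simple precedence over available credentials. This is a sketch only; in practice the backend is set explicitly in config.json, and the ordering below is an assumption, not documented plugin behavior:

```typescript
type Backend = "gemini" | "local" | "openai";

// Hypothetical sketch: pick an audio backend from available credentials.
// Real selection is driven by config.json; this ordering is illustrative.
function pickBackend(env: Record<string, string | undefined>): Backend {
  if (env.GEMINI_API_KEY) return "gemini"; // free tier: 1500 req/day
  if (env.OPENAI_API_KEY) return "openai"; // paid per usage
  return "local";                          // whisper.cpp, fully offline
}
```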
```
┌───────────────────────────────────────────────┐
│ Claude Code (your session)                    │
│                                               │
│   /watch-video ──→ Skill: video-perception    │
│                         │                     │
│                         ▼                     │
│                MCP tool: video_watch          │
│                         │                     │
└─────────────────────────┼─────────────────────┘
                          │
                          ▼
          ┌────────────────────────────────────┐
          │        MCP Server (Node.js)        │
          │                                    │
          │  ┌──────────┐    ┌──────────────┐  │
          │  │  ffmpeg  │    │ Audio backend│  │
          │  │  frames  │    │  (parallel)  │  │
          │  └──────────┘    └──────────────┘  │
          │        │                │          │
          └────────┼────────────────┼──────────┘
                   ▼                ▼
            base64 images      transcription
            + timestamps       + audio events
                   │                │
                   └───────┬────────┘
                           ▼
                  Claude receives both
```
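The two streams the diagram ends with could be represented as a single tool result. A sketch of that shape, with field names that are assumptions rather than the plugin's actual schema:

```typescript
// Sketch of what video_watch might hand back to Claude.
// Field names are illustrative assumptions, not the plugin's schema.
interface Frame {
  timestamp: string;    // e.g. "00:00:01"
  imageBase64: string;  // PNG frame, base64-encoded
}

interface AudioSegment {
  start: string;
  end: string;
  text: string;         // transcription or non-speech event description
}

interface VideoWatchResult {
  frames: Frame[];
  audio: AudioSegment[];
}

const sample: VideoWatchResult = {
  frames: [{ timestamp: "00:00:00", imageBase64: "" }],
  audio: [{ start: "00:00:00", end: "00:00:01", text: "(door closes)" }],
};
```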
For the local backend, install whisper.cpp first:

```sh
brew install whisper-cpp   # macOS; use your platform's equivalent elsewhere
```

The plugin exposes 4 MCP tools:

- `video_watch` — Extract frames + process audio (main tool)
- `video_info` — Get video metadata without processing
- `video_configure` — Change settings
- `video_setup` — Check and guide dependency installation

And two slash commands:

- `/watch-video <path> [question]` — Analyze a video
- `/setup-video-vision` — Interactive configuration wizard

Settings are stored in `~/.claude-video-vision/config.json`:
```json
{
  "backend": "local",
  "whisper_engine": "cpp",
  "whisper_model": "auto",
  "whisper_at": false,
  "frame_mode": "images",
  "frame_resolution": 512,
  "default_fps": "auto",
  "max_frames": 100,
  "frame_describer_model": "sonnet"
}
```
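Loading this file with fallbacks might look like the following sketch, where the defaults mirror the sample above and the merge logic is an assumption about how the plugin behaves:

```typescript
// Sketch: merge a partial config.json over defaults (subset of keys shown).
interface VideoVisionConfig {
  backend: string;
  whisper_engine: string;
  whisper_model: string;
  frame_resolution: number;
  max_frames: number;
}

const DEFAULTS: VideoVisionConfig = {
  backend: "local",
  whisper_engine: "cpp",
  whisper_model: "auto",
  frame_resolution: 512,
  max_frames: 100,
};

function loadConfig(raw: string): VideoVisionConfig {
  // Keys present in the file override the defaults; the rest fall through.
  return { ...DEFAULTS, ...JSON.parse(raw) };
}
```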
Whisper models auto-download to `~/.claude-video-vision/models/` on first use. Available: `tiny`, `base`, `small`, `medium`, `large-v3-turbo`, `large-v3`, `auto` (picks best for your RAM).
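The `auto` choice could look something like this sketch; the RAM thresholds here are invented for illustration, since the plugin's actual cutoffs are not documented above:

```typescript
// Hypothetical RAM-based model picker for whisper "auto" mode.
// Threshold values are assumptions, not the plugin's real cutoffs.
function pickWhisperModel(ramGiB: number): string {
  if (ramGiB >= 16) return "large-v3-turbo";
  if (ramGiB >= 8) return "medium";
  if (ramGiB >= 4) return "small";
  return "base";
}
```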
v1.0.0 — Initial release. Tested on macOS (Apple Silicon) with Local backend (whisper.cpp).
MIT — see LICENSE.
Jordan Vasconcelos