Comprehensive analytics dashboard for AI coding agents — Cursor, Windsurf, Claude Code, VS Code Copilot, Zed, Antigravity, OpenCode, Command Code
```bash
# Add to your Claude Code skills
git clone https://github.com/f/agentlytics
```

You switch between Cursor, Windsurf, Claude Code, VS Code Copilot, and more — each with its own siloed conversation history.

One command. Full picture. All local.
```bash
npx agentlytics
# or
pnpm dlx agentlytics
# or
yarn dlx agentlytics
# or
bunx agentlytics
```
Opens at http://localhost:4637. Requires Node.js ≥ 20.19 or ≥ 22.12; macOS only. No data ever leaves your machine.
Run a lightweight, zero-dependency analytics scan with Deno's permission sandbox — directly from a URL, no install needed:
```bash
deno run --allow-read --allow-env https://raw.githubusercontent.com/f/agentlytics/master/mod.ts
```
Only `--allow-read` and `--allow-env` are required. No network access, no file writes, no subprocess execution — it just reads your local editor data and prints a summary.
```
(● ●) [● ●]  Agentlytics — Deno Sandboxed Edition
{● ●} <● ●>  Lightweight CLI analytics for AI coding agents

✓ Claude Code        8 sessions
✓ VS Code           23 sessions
✓ VS Code Insiders  66 sessions
● Cursor            detected
✓ Codex CLI          3 sessions
...

Summary
  Sessions    109
  Messages    459
  Projects     18
  Editors       7 of 15 checked
  Date range  2025-04-02 → 2026-03-09
```
Add `--json` for machine-readable output:

```bash
deno run --allow-read --allow-env mod.ts --json
```
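The `--json` output can be consumed from a script. As a minimal sketch — the field names below (`sessions`, `messages`, `projects`) are assumptions inferred from the human-readable summary, not a documented schema:

```javascript
// Sketch: summarize the scan's --json output.
// Field names are assumptions based on the CLI summary, not a documented schema.
function summarize(json) {
  const report = JSON.parse(json);
  return `${report.sessions} sessions, ${report.messages} messages, ${report.projects} projects`;
}

// Hand-written payload mirroring the summary shown above:
console.log(summarize('{"sessions":109,"messages":459,"projects":18}'));
```

In practice you would pipe the scan's stdout into such a script rather than hard-coding the payload.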
If you've cloned the repo, you can also use Deno tasks for the full dashboard:
```bash
deno task start      # Full dashboard (all permissions)
deno task scan       # Lightweight CLI scan
deno task scan:json  # JSON output
```
```
$ npx agentlytics

(● ●) [● ●]  Agentlytics
{● ●} <● ●>  Unified analytics for your AI coding agents

Looking for AI coding agents...

✓ Cursor         498 sessions
✓ Windsurf        20 sessions
✓ Windsurf Next   56 sessions
✓ Claude Code      6 sessions
✓ VS Code         23 sessions
✓ Zed              1 session
✓ Codex            3 sessions
✓ Gemini CLI       2 sessions
  ...and 6 more

(● ●) [● ●] {● ●} <● ●>  ✓ 691 analyzed, 360 cached (27.1s)

✓ Dashboard ready at http://localhost:4637
```
To only build the cache without starting the server:
```bash
npx agentlytics --collect
# or: pnpm dlx agentlytics --collect
```
| Editor | Msgs | Tools | Models | Tokens |
|--------|:----:|:-----:|:------:|:------:|
| Cursor | ✅ | ✅ | ✅ | ✅ |
| Windsurf | ✅ | ✅ | ✅ | ✅ |
| Windsurf Next | ✅ | ✅ | ✅ | ✅ |
| Antigravity | ✅ | ✅ | ✅ | ✅ |
| Claude Code | ✅ | ✅ | ✅ | ✅ |
| VS Code | ✅ | ✅ | ✅ | ✅ |
| VS Code Insiders | ✅ | ✅ | ✅ | ✅ |
| Zed | ✅ | ✅ | ✅ | ❌ |
| OpenCode | ✅ | ✅ | ✅ | ✅ |
| Codex | ✅ | ✅ | ✅ | ✅ |
| Gemini CLI | ✅ | ✅ | ✅ | ✅ |
| Copilot CLI | ✅ | ✅ | ✅ | ✅ |
| Cursor Agent | ✅ | ❌ | ❌ | ❌ |
| Command Code | ✅ | ✅ | ❌ | ❌ |
| Goose | ✅ | ✅ | ✅ | ❌ |
| Kiro | ✅ | ✅ | ✅ | ❌ |
Windsurf, Windsurf Next, and Antigravity must be running during the scan.
Relay enables multi-user context sharing across a team. One person starts a relay server, others join and share selected project sessions. An MCP server is exposed so AI clients can query across everyone's coding history.
```bash
npx agentlytics --relay
# or: pnpm dlx agentlytics --relay
```
Optionally protect it with a password:

```bash
RELAY_PASSWORD=secret npx agentlytics --relay
```
This starts a relay server on port 4638 and prints the join command and MCP endpoint:
```
⚡ Agentlytics Relay

Share this command with your team:

  cd /path/to/project
  npx agentlytics --join 192.168.1.16:4638

MCP server endpoint (add to your AI client):

  http://192.168.1.16:4638/mcp
```
```bash
cd /path/to/your-project
npx agentlytics --join <host:port>
# or: pnpm dlx agentlytics --join <host:port>
```
If the relay is password-protected:

```bash
RELAY_PASSWORD=secret npx agentlytics --join <host:port>
```
Username is auto-detected from `git config user.email`. You can override it with `--username <name>`.
You'll be prompted to select which projects to share. The client then syncs session data to the relay every 30 seconds.
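The sync step described above can be pictured as a periodic POST to the relay. A minimal sketch — the payload shape and headers are assumptions, not the actual wire format:

```javascript
// Sketch of the join client's periodic sync: every 30 seconds it POSTs
// the selected sessions to the relay. Payload shape is an assumption.
function buildSyncRequest(host, sessions) {
  return {
    url: `http://${host}/relay/sync`,
    options: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ sessions }),
    },
  };
}

const SYNC_INTERVAL_MS = 30_000;
// setInterval(async () => {
//   const { url, options } = buildSyncRequest("192.168.1.16:4638", collectSessions());
//   await fetch(url, options);
// }, SYNC_INTERVAL_MS);

console.log(buildSyncRequest("192.168.1.16:4638", []).url);
```

The actual client also handles project selection and the optional relay password; this only illustrates the sync cadence.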
Connect your AI client to the relay's MCP endpoint (http://<host>:4638/mcp) to access these tools:
| Tool | Description |
|------|-------------|
| list_users | List all connected users and their shared projects |
| search_sessions | Full-text search across all users' chat messages |
| get_user_activity | Get recent sessions for a specific user |
| get_session_detail | Get full conversation messages for a session |
Example query to your AI: "What did alice do in auth.js?"
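The exact registration format depends on your MCP client. As a purely hypothetical example, a client that accepts HTTP-transport MCP servers might take an entry like this (the `agentlytics-relay` name is arbitrary):

```json
{
  "mcpServers": {
    "agentlytics-relay": {
      "url": "http://192.168.1.16:4638/mcp"
    }
  }
}
```

Check your AI client's documentation for its actual MCP server config keys.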
The relay also exposes its data over plain REST:

| Endpoint | Description |
|----------|-------------|
| GET /relay/health | Health check and user count |
| GET /relay/users | List connected users |
| GET /relay/search?q=<query> | Search messages across all users |
| GET /relay/activity/:username | User's recent sessions |
| GET /relay/session/:chatId | Full session detail |
| POST /relay/sync | Receives data from join clients |
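The read-only endpoints above can be scripted directly. A minimal sketch — the response shapes are not documented here, so only the URL construction is shown as certain; parsing is left to the caller:

```javascript
// Sketch: build and query relay REST URLs. Response shapes are
// undocumented here, so parsing is left to the caller.
function relayUrl(host, path, query) {
  const url = new URL(`http://${host}${path}`);
  for (const [key, value] of Object.entries(query ?? {})) {
    url.searchParams.set(key, value);
  }
  return url.toString();
}

console.log(relayUrl("192.168.1.16:4638", "/relay/search", { q: "auth.js" }));

// Live usage (requires a running relay):
// const hits = await fetch(relayUrl(host, "/relay/search", { q: "auth.js" }))
//   .then((r) => r.json());
```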
Relay is designed for trusted local networks. Set the `RELAY_PASSWORD` environment variable on both server and clients to enable password protection.
```
Editor files/APIs → editors/*.js → cache.js (SQLite) → server.js (REST) → React SPA
Relay: join clients → POST /relay/sync → relay.db (SQLite) → MCP server → AI clients
Deno:  Editor files → mod.ts (zero deps) → stdout (CLI/JSON)
```
All data is normalized into a local SQLite cache at `~/.agentlytics/cache.db`. The Express server exposes read-only REST endpoints consumed by the React frontend. Relay data is stored separately in `~/.agentlytics/relay.db`. The Deno sandboxed edition (`mod.ts`) bypasses SQLite entirely and reads editor files directly for a lightweight, permission-minimal CLI report.
| Endpoint | Description |
|----------|-------------|
| GET /api/overview | Dashboard KPIs, editors, modes, trends |
| GET /api/daily-activity | Daily counts for heatmap |
| GET /api/dashboard-stats | Hourly, weekday, streaks, tokens, velocity |
| GET /api/chats | Paginated session list |
| GET /api/chats/:id | Full chat with messages |
| GET /api/projects | Project-level aggregations |
| GET /api/deep-analytics | Tool/model/token breakdowns |
| GET /api/tool-calls | Individual tool call instances |
| GET /api/refetch | SSE: wipe cache and rescan |
All endpoints accept an optional editor filter. See API.md for full request/response documentation.
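As a sketch of scripting the local API — the query-parameter name `editor` is an assumption from "optional editor filter" above; consult API.md for the exact contract:

```javascript
// Sketch: build dashboard API URLs. The `editor` parameter name is an
// assumption; see API.md for the documented query contract.
const BASE = "http://localhost:4637";

function apiUrl(path, editor) {
  const url = new URL(path, BASE);
  if (editor) url.searchParams.set("editor", editor);
  return url.toString();
}

console.log(apiUrl("/api/overview", "Cursor"));

// Live usage (requires a running dashboard):
// const overview = await fetch(apiUrl("/api/overview", "Cursor")).then((r) => r.json());
```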
Windsurf / Windsurf Next / Antigravity offline reading — these editors currently require the app to be running because data is fetched via ConnectRPC from the language-server process. Unlike Cursor or Claude Code, there is no known local file structure to read Cascade history from.