# CatchMe: Make Your AI Agents Truly Personal
by HKUDS
# Add to your Claude Code skills

git clone https://github.com/HKUDS/CatchMe

🦞 CatchMe makes your agents truly personal. It ships as an agent-compatible skill for CLI agents (OpenClaw, NanoBot, Claude, Cursor, etc.). CatchMe runs independently; your agents query memories via CLI commands only.
CatchMe transforms raw digital activity into structured, searchable memory through three concurrent stages:
Capture. Six background recorders silently track your activity. They monitor window focus, keystrokes, mouse movement, screenshots, clipboard, and notifications.
Index. Raw events auto-organize into a Hierarchical Activity Tree: Day → Session → App → Location → Action. Each node gets LLM-generated summaries. Fast, meaningful recall without vector embeddings.
Retrieve. You ask a question. The LLM traverses your memory tree top-down, selects relevant nodes, inspects raw data such as screenshots or keystrokes, and synthesizes a precise answer.
The Activity Tree is CatchMe's memory core. It provides structured, multi-level views of your digital life. Browse high-level summaries or dive into granular details.
CatchMe skips traditional vector search. Instead, the LLM directly navigates your Activity Tree, enabling complex, cross-day reasoning and precise evidence gathering from your raw activity history.
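To make the tree-navigation idea concrete, here is a minimal sketch of a hierarchical activity tree and a top-down retrieval pass. All names (`ActivityNode`, `retrieve`, the field layout) are illustrative assumptions, not CatchMe's actual schema; the relevance judgment that the LLM makes is stubbed out as a plain predicate.

```python
from dataclasses import dataclass, field

@dataclass
class ActivityNode:
    """One node in an Activity Tree (Day -> Session -> App -> Location -> Action).
    Hypothetical structure for illustration; not CatchMe's real schema."""
    level: str                                     # e.g. "day", "session", "action"
    summary: str                                   # LLM-generated summary of this node
    raw_refs: list = field(default_factory=list)   # pointers to screenshots/keystroke logs
    children: list = field(default_factory=list)   # finer-grained child nodes

def retrieve(node, is_relevant, collect):
    """Top-down traversal: descend only into subtrees judged relevant,
    collecting nodes (and their raw evidence) along the way."""
    if not is_relevant(node.summary):
        return                                     # prune irrelevant subtrees early
    collect(node)
    for child in node.children:
        retrieve(child, is_relevant, collect)

# Tiny usage example with made-up data:
tree = ActivityNode("day", "Worked on CatchMe docs", children=[
    ActivityNode("session", "Morning writing session", raw_refs=["shot1.png"]),
    ActivityNode("session", "Unrelated browsing"),
])
hits = []
retrieve(tree, lambda s: "browsing" not in s, hits.append)
# hits now holds the day node and the writing session, but not the pruned subtree
```

In the real system the predicate would be an LLM call over node summaries, which is why summaries at every level matter: they let the model prune whole subtrees without loading raw data.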
📖 Learn More: Detailed design insights and technical deep-dive available in our blog.
• 100% Local Storage: All raw data (screenshots, keystrokes, activity trees) stays in ~/data/ and never leaves your machine.
• Offline-First Options: Local LLMs (Ollama, vLLM, LM Studio) enable fully offline operation without any cloud dependency.
• ⚠️ Cloud Provider Caution: If configured, cloud APIs receive your daily activity data for summarization. Untrusted endpoints may expose private data; review your provider's data policies carefully.
• Multimodal support: Your model should be able to handle text + images.
• Context window: Make sure your model's context window exceeds the max_tokens limits in config.json.
• Cost control: To enforce cost limits, cap calls via llm.max_calls or increase filter.mouse_cluster_gap to reduce summarization frequency.
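The cost-control knobs above might sit in config.json roughly like this; the exact field placement and sensible values are assumptions based on the option names mentioned, so check your generated config:

```json
{
  "llm": {
    "max_calls": 200
  },
  "filter": {
    "mouse_cluster_gap": 5
  }
}
```

A larger mouse_cluster_gap groups more raw mouse events into one cluster, so fewer summarization calls are triggered per session.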
CatchMe requires an LLM for background summarization and intelligent retrieval. Use catchme init (see Get Started) for guided setup or follow the manual configuration steps below.
For cloud API services:
{
"llm": {
"provider": "openrouter",
"api_key": "sk-or-...",
"api_url": null,
"model": "google/gemini-3-flash-preview"
}
}
For local/offline operation:
{
"llm": {
"provider": "ollama",
"api_key": null,
"api_url": null,
"model": "gemma3:4b"
}
}
| Provider | Config name | Default API URL | Get Key |
| ------------------------- | ------------------------ | ------------------------------------------------------- | -------------------------------------------------------------------- |
| OpenRouter (gateway) | openrouter | https://openrouter.ai/api/v1 | openrouter.ai/keys |
| AiHubMix (gateway) | aihubmix | https://aihubmix.com/v1 | aihubmix.com |
| SiliconFlow (gateway) | siliconflow | https://api.siliconflow.cn/v1 | cloud.siliconflow.cn |
| OpenAI | openai | https://api.openai.com/v1 | platform.openai.com |
| Anthropic | anthropic | https://api.anthropic.com/v1 | console.anthropic.com |