<div align="center">

# MetaClaw

by aiming-lab

*Just talk to your agent — it learns and EVOLVES.*

🇨🇳 中文 • 🇯🇵 日本語 • 🇰🇷 한국어 • 🇫🇷 Français • 🇩🇪 Deutsch • 🇪🇸 Español

<br/>
</div>

```shell
# Add to your Claude Code skills
git clone https://github.com/aiming-lab/MetaClaw

metaclaw setup            # one-time config wizard
metaclaw start            # skills on, OpenClaw wired — ready to chat
metaclaw start --mode rl  # optional: + live RL training via Tinker
```
<div align="center">
<img src="assets/metaclaw.gif" alt="MetaClaw demo" width="700">
</div>
*metaclaw CLI. Skills enabled by default, RL is now opt-in.*

https://github.com/user-attachments/assets/d86a41a8-4181-4e3a-af0e-dc453a6b8594
MetaClaw turns live conversations into continuous training data — automatically. Just talk to your agent as usual, and MetaClaw handles the learning loop behind the scenes.
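Conceptually, each finished chat turn can be packed into a training example. The sketch below is a hypothetical illustration of that step; the field names and reward source are assumptions, not MetaClaw's actual schema:

```python
# Hypothetical sketch: turning a finished chat turn into a training record.
# The record layout here is illustrative, not MetaClaw's real format.

def to_training_record(messages, reward):
    """Pack an OpenAI-style message list plus a scalar reward
    into one RL fine-tuning example."""
    assert messages and messages[-1]["role"] == "assistant"
    return {
        "prompt": messages[:-1],                 # everything the model saw
        "completion": messages[-1]["content"],   # what it answered
        "reward": reward,                        # e.g. derived from user feedback
    }

record = to_training_record(
    [
        {"role": "user", "content": "Summarize this log file."},
        {"role": "assistant", "content": "The service restarted twice overnight."},
    ],
    reward=1.0,
)
```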
It places your model behind an OpenAI-compatible proxy that intercepts interactions from OpenClaw, injects relevant skills at each step, and can optionally perform continuous fine-tuning through Tinker Cloud RL. Updated weights are hot-swapped seamlessly without interrupting the service.
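The skill-injection step of such a proxy might look roughly like this. This is a minimal sketch under assumptions: `load_relevant_skills` is a hypothetical helper standing in for MetaClaw's actual skill retrieval, and the toy keyword matching is only for illustration:

```python
# Minimal sketch of skill injection in an OpenAI-compatible proxy.
# load_relevant_skills() is a hypothetical stand-in for MetaClaw's
# real skill retrieval; the actual implementation differs.

def load_relevant_skills(messages):
    # Toy retrieval: match skills by keyword in the last message.
    skills = {"git": "Skill: prefer `git status` before destructive commands."}
    last = messages[-1]["content"].lower()
    return [text for key, text in skills.items() if key in last]

def inject_skills(request):
    """Prepend retrieved skills as a system message; the modified
    request would then be forwarded to the upstream model."""
    messages = request["messages"]
    skills = load_relevant_skills(messages)
    if skills:
        messages = [{"role": "system", "content": "\n".join(skills)}] + messages
    return {**request, "messages": messages}

req = {
    "model": "kimi-k2.5",
    "messages": [{"role": "user", "content": "help me with git rebase"}],
}
out = inject_skills(req)
```

Because the proxy speaks the same OpenAI-compatible protocol on both sides, OpenClaw needs no changes; it simply points at the proxy's base URL.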
There is no need to maintain a dedicated GPU cluster. MetaClaw works with any OpenAI-compatible LLM API out of the box, and optionally integrates Kimi-K2.5 (1T MoE) via Tinker for cloud-based LoRA training.
Configure once with `metaclaw setup`; after that, `metaclaw start` brings up the proxy, injects skills, and wires up OpenClaw automatically. No manual shell scripts are needed.
| Mode | Default | What it does |
|------|---------|--------------|
| `ski...