Comprehensive open-source library of AI research and engineering skills for any AI model. Install the skills and your Claude Code, Codex, or Gemini agent becomes an AI research agent running at full horsepower. Maintained by Orchestra Research.
# Add to your Claude Code skills

```shell
git clone https://github.com/Orchestra-Research/AI-Research-SKILLs
```

# Skills Library

The most comprehensive open-source skills library enabling AI agents to autonomously conduct AI research, from idea to paper.
| | | | |:---:|:---:|:---:| | Autoresearch (1) | Ideation (2) | ML Paper Writing (2) | | Model Architecture (5) | Fine-Tuning (4) | Post-Training (8) | | Distributed Training (6) | Optimization (6) | Inference (4) | | Tokenization (2) | Data Processing (2) | Evaluation (3) | | Safety & Alignment (4) | Agents (4) | RAG (5) | | Multimodal (7) | Prompt Engineering (4) | MLOps (3) | | Observability (2) | Infrastructure (3) | Mech Interp (4) | | Emerging Techniques (6) | | |
We enable AI agents to autonomously conduct AI research — from literature survey and idea generation through experiment execution to paper writing. The library provides both the research orchestration layer (autoresearch, ideation, paper writing) and the engineering skills (training, evaluation, deployment) needed at each stage.
Modern AI research requires mastering dozens of specialized tools and frameworks. AI researchers spend more time debugging infrastructure than testing hypotheses, slowing the pace of scientific discovery. We provide a comprehensive skills library that enables AI agents to autonomously conduct the full research lifecycle, from brainstorming ideas to writing the paper.
Quality over quantity: Each skill provides comprehensive, expert-level guidance with real code examples, troubleshooting guides, and production-ready workflows.
For humans, an interactive installer with one command:

```shell
npx @orchestra-research/ai-research-skills
```
For AI agents, point your agent to the welcome doc and it handles the rest:

```
Read https://www.orchestra-research.com/ai-research-skills/welcome.md and follow the instructions to install and use AI Research Skills.
```
This installs all 87 skills, loads the autoresearch orchestration layer, and starts autonomous research.
Skills are installed to `~/.orchestra/skills/` with symlinks to each agent (falls back to a copy on Windows).

```shell
# Interactive installer (recommended)
npx @orchestra-research/ai-research-skills

# Direct commands
npx @orchestra-research/ai-research-skills list    # View installed skills
npx @orchestra-research/ai-research-skills update  # Update installed skills
```
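The shared-store-plus-symlink layout described above can be sketched in a few lines of portable shell. This is an illustrative mock-up, not the installer's actual code: the paths and skill name are throwaway demo values, and the `cp -r` fallback mirrors the copy behavior used where symlinks are unavailable (e.g. on Windows):

```shell
# Demo of the install layout: one shared skills store, each agent
# config dir pointing at it via a symlink (or a copy as a fallback).
DEMO=$(mktemp -d)
mkdir -p "$DEMO/skills/fine-tuning"     # shared skills store (demo path)
mkdir -p "$DEMO/agents/claude-code"     # one agent's config dir (demo path)

# Prefer a symlink; fall back to a full copy if linking fails.
ln -s "$DEMO/skills" "$DEMO/agents/claude-code/skills" 2>/dev/null \
  || cp -r "$DEMO/skills" "$DEMO/agents/claude-code/skills"

ls -l "$DEMO/agents/claude-code/"
```

Because every agent resolves to the same store, updating a skill once updates it for all agents at the same time.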
Install skill categories directly using the Claude Code CLI:
```shell
# Add the marketplace
/plugin marketplace add orchestra-research/AI-research-SKILLs

# Install by category (22 categories available)
/plugin install fine-tuning@ai-research-skills           # Axolotl, LLaMA-Factory, PEFT, Unsloth
/plugin install post-training@ai-research-skills         # TRL, GRPO, OpenRLHF, SimPO, verl, slime, miles, torchforge
/plugin install inference-serving@ai-research-skills     # vLLM, TensorRT-LLM, llama.cpp, SGLang
/plugin install distributed-training@ai-research-skills  # DeepSpeed, FSDP, Accelerate, Megatron-Core, Lightning, Ray Train
/plugin install optimization@ai-research-skills          # Flash Attention, bitsandbytes, GPTQ, AWQ, HQQ, GGUF
```
| Category | Skills | Included |
|----------|--------|----------|
| Autoresearch | 1 | Autonomous research orchestration: central layer that manages the full lifecycle and routes to all other skills |
| Ideation | 2 | Research Brainstorming, Creative Thinking |
| ML Paper Writing | 2 | ML Paper Writing (LaTeX templates, citation verification), Academic Plotting |
| Model Architecture | 5 | LitGPT, Mamba, NanoGPT, RWKV, TorchTitan |
| Tokenization | 2 | HuggingFace Tokenizers, SentencePiece |
| Fine-Tuning | 4 | Axolotl, LLaMA-Factory, PEFT, Unsloth |
| Mech Interp | 4 | TransformerLens, SAELens, pyvene, nnsight |
| Data Processing | 2 | NeMo Curator, Ray Data |
| Post-Training | 8 | TRL, GRPO, OpenRLHF, SimPO, verl, slime, miles, torchforge |
| Safety | 4 | Constitutional AI, LlamaGuard, NeMo Guardrails, Prompt Guard |
| Distributed | 6 | DeepSpeed, FSDP, Accelerate, Megatron-Core, Lightning, Ray Train |
| Infrastructure | 3 | Modal, Lambda Labs, SkyPilot |
| Optimization | 6 | Flash Attention, bitsandbytes, GPTQ, AWQ, HQQ, GGUF |
| Evaluation | 3 | lm-eval-harness, BigCode, NeMo Evaluator |
| Inference | 4 | vLLM, TensorRT-LLM, llama.cpp, SGLang |
| MLOps | 3 | W&B, MLflow, TensorBoard |
| Agents | 4 | LangChain, LlamaIndex, CrewAI, AutoGPT |
| RAG | 5 | Chroma, FAISS, Pinecone, Qdrant, Sentence Transformers |
| Prompt Eng | 4 | DSPy, Instructor, Guidance, Outlines |
| Observability | 2 | LangSmith, Phoenix |
| Multimodal | 7 | CLIP, Whisper, LLaVA, BLIP-2, SAM, Stable Diffusion, AudioCraft |
| Emerging | 6 | MoE, Model Merging, Long Context, Speculative Decoding, Distillation, Pruning |