An open-source library of evidence-based Claude skills for educators — designed for teacher use and agent orchestration.
```
# Add to your Claude Code skills
git clone https://github.com/GarethManning/claude-education-skills
```

An open-source library of 108 evidence-based pedagogical skills for curriculum design, lesson planning, and assessment — usable today by any educator with access to Claude, and engineered for AI agent orchestration.
- **I'm an educator — start here.** No setup required. Install the plugin and start teaching.
- **I'm a developer or AI builder — start here.** YAML schemas, typed inputs and outputs, chaining metadata, live MCP server.
AI is arriving in education fast. Whether it improves learning outcomes or simply scales mediocre practice depends almost entirely on what it is built on.
Most AI education tools are built on convention, habit, and assumption — on what educators have always done, rather than on what the research says actually works. Learning styles. Rigid lesson structures. Wellbeing programmes disconnected from learning theory. As AI expands in education, so does the risk of scaling ineffective practice.
This library exists to build something different: a credible, rigorous foundation for AI in education. One that is anchored in named research, honest about its limitations, and designed especially for the educators working at the frontier — building the next generation of schools, not optimising existing ones.
The potential is real. Personalised, evidence-grounded learning support at a scale that was never previously possible. But only if what is powering it is the actual evidence.
The benefit is not only personalised learning. It is teaching quality and workload. An educator who would otherwise spend hours researching, designing, and second-guessing gets structured, evidence-grounded support in minutes — which means more time for the parts of teaching that only a human can do.
That is one use case. The same library can power school-wide curriculum audits, personalised professional development pathways for teachers, or orchestrated end-of-term assessment reviews. The skills are the foundation. The architecture below describes the layers that make this possible.
Three ways to use the library, depending on your setup:
Connect via the MCP server to access all 108 skills in any Claude conversation — add https://mcp-server-sigma-sooty.vercel.app/mcp under Settings → Connectors. Skills activate when your conversation matches their topic. A dedicated Skills Directory listing is in progress.
In Claude Code, install the plugin:

```
/plugin install GarethManning/claude-education-skills
```
Skills load with progressive disclosure — metadata only until a skill is actually needed.
Connect to the live MCP server:
https://mcp-server-sigma-sooty.vercel.app/mcp
Use this when you're building tools or agents that need to call skills programmatically. Four meta-tools provide discovery: list_skills, find_skills, suggest_skills, and get_skill_details.
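Under the hood these meta-tools are ordinary MCP tools, so a client POSTs a JSON-RPC `tools/call` request to the endpoint. A sketch of what a `find_skills` call might look like (the `arguments` shape here is an assumption; inspect the server's tool schemas for the actual parameters):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "find_skills",
    "arguments": { "query": "retrieval practice for mixed-ability classes" }
  }
}
```

The response lists matching skills, and `get_skill_details` can then fetch a specific skill's full definition.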
The library is now compliant with the Agent Skills 1.0 open standard. What this means in practice:
registry.json indexes all skills with descriptions, tags, chaining metadata, and domain grouping for programmatic consumption.

Install the plugin, then tell Claude what you need in plain language. The skills activate automatically.
Example: Say "I'm planning a Year 9 science unit on cells — 6 weeks, 3 lessons a week."
Claude runs the Backwards Design Unit Planner, the Spaced Practice Scheduler, and the Retrieval Practice Generator in parallel. In under 90 seconds you get a complete lesson-by-lesson plan with spaced retrieval built in, evidence-grounded sequencing, and ready-to-use formative assessment activities — all calibrated to the timeline and topic list you provided.
No API key. No technical setup. No dependencies.
Example: Open skills/memory-learning-science/spaced-practice-scheduler/SKILL.md and provide the inputs it asks for.
Claude returns a complete week-by-week schedule showing when to teach new content and when to revisit previous topics at expanding intervals — with specific retrieval activities for each review slot. The schedule follows Cepeda et al.'s (2006) meta-analysis on optimal spacing intervals, includes interleaving across topics, and comes with practical guidance on what to do when review reveals gaps.
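The expanding-interval idea the scheduler follows can be sketched in a few lines of Python. This is an illustration of the general spacing principle only, not the skill's actual algorithm; the function name and the doubling rule are assumptions:

```python
def expanding_review_weeks(intro_week: int, horizon_week: int, first_gap: int = 1) -> list[int]:
    """Illustrative expanding-interval schedule: each review gap doubles.

    A topic introduced in week 1 of a 10-week term is revisited in
    weeks 2, 4, and 8 (gaps of 1, 2, then 4 weeks).
    """
    reviews, gap = [], first_gap
    week = intro_week + gap
    while week <= horizon_week:
        reviews.append(week)
        gap *= 2          # expand the interval after each review
        week += gap
    return reviews

print(expanding_review_weeks(1, 10))  # [2, 4, 8]
```

In the real skill, each of those review slots would also carry a specific retrieval activity and interleaving across topics, as described above.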
Evidence is the filter — including knowing what to exclude. Every skill is grounded in named research: specific authors, specific studies, specific findings. Frameworks that lack empirical support — including learning styles, VAK, and other widely circulated but poorly evidenced approaches — are not included. The library documents exactly what was excluded and why in EXCLUSIONS.md. For any school or faculty trying to separate evidence from convention, that document is worth reading on its own.
Evidence strength is rated transparently.
| Rating | What it means |
|--------|---------------|
| Strong | Multiple meta-analyses or systematic reviews with consistent findings |
| Moderate | Solid experimental evidence with some contextual variation |
| Emerging | Promising research base with limited replication or practitioner translation |
| Original | Practitioner framework; clearly labelled, not claimed as research-backed |
Where original frameworks are included (Domain 14), they are labelled honestly. One important limitation: the skills encode research-grounded prompts, but the prompts themselves have not been empirically validated as AI interventions. That work is ongoing.
Built by an educator with 20 years of international school experience. The pedagogical judgements embedded in every prompt, every output structure, and every known-limitations section reflect real classroom and curriculum design practice, not merely a reading of the literature.
Designed for orchestration from day one. YAML schema headers, typed input and output fields, chaining metadata, and composable outputs are built into every skill. This is not a prompt collection with metadata bolted on. It is a skill library engineered for programmatic use.
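As a sketch of what such a header might look like (field names here are hypothetical, not the library's actual schema; see any SKILL.md file in the repository for the real format):

```yaml
# Hypothetical SKILL.md frontmatter; field names are illustrative only.
name: spaced-practice-scheduler
domain: memory-learning-science
evidence_rating: strong
inputs:
  - name: topic_list
    type: array
    description: Topics to teach, in intended order
  - name: term_length_weeks
    type: integer
outputs:
  - name: schedule
    type: table
    description: Week-by-week teaching and review plan
chains_with:
  - retrieval-practice-generator
  - backwards-design-unit-planner
```

Typed inputs and outputs plus a `chains_with` list are what let an orchestrator pipe one skill's output into another's input without human glue.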
| # | Domain | Skills | Focus |
|---|--------|--------|-------|
| 1 | Memory & Learning Science | 8 | Retrieval practice, spacing, interleaving, cognitive load, dual coding, elaborative interrogation, feedback |
| 2 | Self-Regulated Learning & Metacognition | 5 | Self-regulation scaffolds, metacognitive prompts, goal-setting, study strategy selection, error analysis |
| 3 | Explicit & Direct Instruction | 5 | Gradual release sequences, checking for understanding, lesson openings, think-alouds, practice design |
| 4 | Questioning, Discussion & Dialogue | 4 | Socratic questioning, discussion protocols, dialogic teaching moves, hinge questions |
| 5 | Literacy, Writing & Critical Thinking | 7 | Argument structure, disciplinary writing, reading comprehension, source evaluation, text complexity, media literacy, critical thinking |
| 6 | EAL/D & Language Development | 5 | Language demand analysis, vocabulary tiering, scaffolded task modification, sentence frames, sheltered instruction |
| 7 | Curriculum Design & Assessment | 15 | Backwards design, competency unpacking, rubric generation, assessment validity, formative assessment, differentiation, gap analysis, learning progressions, PBL, threshold concept translation |
| 8 | Wellbeing, Motivation & Student Agency | 12 | Motivation diagnostics, self-efficacy, wellbeing-learning connections, age