by LeoYeAI
Distill a teammate into an AI Skill. Auto-collect Slack/Teams/GitHub data, generate Work Skill + 5-layer Persona, with continuous evolution. Powered by MyClaw.ai. Works with Claude Code, OpenClaw, and any AgentSkills-compatible agent.
# Add to your Claude Code skills
git clone https://github.com/LeoYeAI/teammate-skill

Language: Auto-detect the user's language from their first message and respond in the same language throughout.
Activate when the user says any of:
/create-teammate or /create-teammate alex-chen

If the user provides a name as an argument (e.g. /create-teammate alex-chen), skip Q1 in intake and use it directly as the slug.
Enter evolution mode when:
/update-teammate {slug}

List teammates: /list-teammates
If the user provides everything in one message (e.g. "Create a teammate: Alex Chen, Google L5 backend engineer, INTJ, perfectionist"), skip the 3-question intake entirely:
This makes single-message creation possible — zero back-and-forth when the user already knows what they want.
Detect the runtime environment and use the correct tools:
| Action | Claude Code | OpenClaw | Other AgentSkills |
|--------|------------|----------|-------------------|
| Read files | Read tool | read tool | Read tool |
| Write files | Write tool | write tool | Write tool |
| Edit files | Edit tool | edit tool | Edit tool |
| Run scripts | Bash tool | exec tool | Bash tool |
| Fetch URLs | Bash → curl | web_fetch tool | Bash → curl |
Your teammate left. Their context didn't have to.
English | 简体中文 | Français | Deutsch | Русский | 日本語 | Italiano | Español
Your teammate quit and three years of tribal knowledge walked out the door. Your senior engineer left — no handoff doc, no runbook, just silence. Your co-founder pivoted, taking every unwritten decision with them.
Feed in their Slack messages, GitHub PRs, emails, docs, and your own description — get an AI Skill that actually works like them.
Writes code in their style. Reviews PRs with their standards. Answers questions in their voice.
How It Works · Install · Data Sources · Demo · Detailed Setup
You describe your teammate (3 questions)
↓
Provide source materials (Slack, GitHub, email, docs — or skip)
↓
Dual-track AI analysis
├── Track A: Work Skill (systems, standards, workflows, review style)
└── Track B: Persona (5-layer personality model)
↓
Generated SKILL.md — invoke anytime with /{slug}
↓
Evolve over time (append new data, correct mistakes, auto-version)
All script/prompt paths use {baseDir} — the skill's own directory, auto-resolved by the platform:

- Claude Code: {baseDir} = ${CLAUDE_SKILL_DIR} (set by the AgentSkills runtime)
- OpenClaw / other: {baseDir} = the skill directory (auto-resolved from the SKILL.md location)

Generated teammate files go to teammates/{slug}/ under the agent's workspace:
| Platform | Default output path |
|----------|-------------------|
| Claude Code | ./teammates/{slug}/ (project-local) |
| OpenClaw | ./teammates/{slug}/ (workspace-local, ~/.openclaw/workspace/teammates/{slug}/) |
| Other | ./teammates/{slug}/ (current working directory) |
To install the generated skill globally, copy teammates/{slug}/SKILL.md to the platform's skill directory.
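The {baseDir} resolution described above can be sketched in Python. This is illustrative only: each runtime performs the equivalent internally, and the only assumption taken from this document is the ${CLAUDE_SKILL_DIR} environment variable.

```python
import os

def resolve_base_dir():
    """Resolve {baseDir} the way the platforms above do (illustrative sketch)."""
    # Claude Code / AgentSkills runtimes export CLAUDE_SKILL_DIR
    env_dir = os.environ.get("CLAUDE_SKILL_DIR")
    if env_dir:
        return env_dir
    # Otherwise fall back to the directory containing SKILL.md,
    # assumed here to sit next to this script
    return os.path.dirname(os.path.abspath(__file__))
```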
| Task | Command |
|------|---------|
| Parse Slack export | python3 {baseDir}/tools/slack_parser.py --file {path} --target "{name}" --output /tmp/slack_out.txt |
| Slack auto-collect | python3 {baseDir}/tools/slack_collector.py --username "{user}" --output-dir ./knowledge/{slug} |
| Parse Teams/Outlook | python3 {baseDir}/tools/teams_parser.py --file {path} --target "{name}" --output /tmp/teams_out.txt |
| Parse Gmail .mbox | python3 {baseDir}/tools/email_parser.py --file {path} --target "{name}" --output /tmp/email_out.txt |
| Parse Notion export | python3 {baseDir}/tools/notion_parser.py --dir {path} --target "{name}" --output /tmp/notion_out.txt |
| GitHub auto-collect | python3 {baseDir}/tools/github_collector.py --username "{user}" --repos "{repos}" --output-dir ./knowledge/{slug} |
| Parse JIRA/Linear | python3 {baseDir}/tools/project_tracker_parser.py --file {path} --target "{name}" --output /tmp/tracker_out.txt |
| Parse Confluence | python3 {baseDir}/tools/confluence_parser.py --file {path} --target "{name}" --output /tmp/confluence_out.txt |
| Version backup | python3 {baseDir}/tools/version_manager.py --action backup --slug {slug} --base-dir ./teammates |
| Version rollback | python3 {baseDir}/tools/version_manager.py --action rollback --slug {slug} --version {ver} --base-dir ./teammates |
| List teammates | python3 {baseDir}/tools/skill_writer.py --action list --base-dir ./teammates |
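As an illustration of the versioning flow, the backup step amounts to archiving the current files under versions/. This is a sketch of what version_manager.py --action backup plausibly does; the real script's behavior and stored metadata may differ.

```python
import json
import shutil
from pathlib import Path

def backup(slug, base_dir="./teammates"):
    """Archive the teammate's current files under versions/{version}/ (sketch)."""
    root = Path(base_dir) / slug
    # Read the current version label, e.g. "v1", from meta.json
    version = json.loads((root / "meta.json").read_text())["version"]
    dest = root / "versions" / version
    dest.mkdir(parents=True, exist_ok=True)
    for name in ("SKILL.md", "work.md", "persona.md"):
        src = root / name
        if src.exists():
            shutil.copy2(src, dest / name)
    return dest
```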
Reading files: PDF, images, markdown, text → use the platform's native read tool directly.
Read {baseDir}/prompts/intake.md for the full question sequence. Only ask 3 questions:
1. Name, e.g. alex-chen or Big Mike
2. Role, e.g. Google L5 backend engineer
3. Personality and style tags, e.g. INTJ, perfectionist, Google-style, brutal CR feedback

Everything except name can be skipped. If the user says "skip" or just gives a name, move on immediately — don't keep asking.
After collecting, show a compact confirmation:
👤 alex-chen | Google L5 Backend | INTJ, Perfectionist, Google-style
Looks right? (y / change something)
One line, not a multi-line summary. Get confirmation fast.
Present data source options — but keep it conversational, not a wall of text:
Now, do you have any of their work artifacts? (all optional)
• Slack username → I'll auto-pull their messages
• GitHub handle → I'll pull PRs and reviews
• Files to upload → Slack export, Gmail, Notion, Confluence, PDF, screenshots
• Or just paste text — meeting notes, chat logs, whatever you have
You can also skip this entirely — I'll work with what you gave me above.
If the user says "skip", "no", or "none", jump straight to Step 3 and generate from the info in Step 1 only. Don't ask again.
First-time setup:
python3 {baseDir}/tools/slack_collector.py --setup
Collect data:
python3 {baseDir}/tools/slack_collector.py \
--username "{slack_username}" \
--output-dir ./knowledge/{slug} \
--msg-limit 1000 \
--channel-limit 20
Then read the output files: knowledge/{slug}/messages.txt, threads.txt, collection_summary.json.
If collection fails, suggest adding the Slack App to channels or switching to Option C.
python3 {baseDir}/tools/github_collector.py \
--username "{github_handle}" \
--repos "{repo1,repo2}" \
--output-dir ./knowledge/{slug} \
--pr-limit 50 \
--review-limit 100
Then read: knowledge/{slug}/prs.txt, reviews.txt, issues.txt.
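Under the hood, a collector like this presumably queries GitHub's issue-search REST API. A minimal sketch, assuming a GITHUB_TOKEN environment variable as noted in the setup section; the real github_collector.py may paginate, filter, and format output differently.

```python
import json
import os
import urllib.request

GITHUB_API = "https://api.github.com/search/issues"

def pr_search_url(username, repo, limit=50):
    """Build a search query for PRs authored by `username` in `repo`.
    Uses GitHub's documented issue-search syntax (type:pr, author:, repo:)."""
    return f"{GITHUB_API}?q=type:pr+author:{username}+repo:{repo}&per_page={limit}"

def fetch_prs(username, repo, limit=50):
    """Fetch matching PRs; requires GITHUB_TOKEN to be set."""
    req = urllib.request.Request(pr_search_url(username, repo, limit), headers={
        "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
        "Accept": "application/vnd.github+json",
    })
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["items"]
```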
Use the tool reference table above. For each file type, run the appropriate parser. PDF/images/markdown → read directly with platform read tool.
Use pasted content directly as source material. No tools needed.
For URLs, use the platform's fetch capability to retrieve page content: the web_fetch tool on OpenClaw, or Bash → curl (or a browser tool) elsewhere.

If the user says "skip", generate from Step 1 info only.
Run dual-track analysis on all collected materials:
Track A (Work Skill): Read {baseDir}/prompts/work_analyzer.md for extraction dimensions. Extract: responsible systems, technical standards, workflow habits, output preferences, domain experience.
Track B (Persona): Read {baseDir}/prompts/persona_analyzer.md for extraction dimensions. Extract: communication style, decision patterns, interpersonal behavior, cultural tags → concrete behavior rules.
Read {baseDir}/prompts/work_builder.md to generate Work Skill content.
Read {baseDir}/prompts/persona_builder.md to generate Persona content (5-layer structure).
Quality Gate (mandatory — run before showing preview):
After generating, self-check against these criteria. Fix any failures before showing the preview:
| Check | Pass Criteria | Auto-fix |
|-------|---------------|----------|
| Layer 0 concreteness | Every rule must be an "in X situation, they do Y" statement. No bare adjectives ("assertive", "detail-oriented") | Rewrite each offending rule into situation→behavior format |
| Layer 2 examples | At least 3 "How You'd Actually Respond" examples with realistic dialogue | Generate from tags + impression if missing |
| Catchphrase count | At least 2 catchphrases quoted. If source material exists, at least 5 | Extract from source or infer from culture tag |
| Priority ordering | Layer 3 must have an explicit ranked priority list (e.g. "Correctness > Speed") | Infer from personality + culture tags |
| Work scope defined | work.md must list at least 1 system/domain owned, even if inferred | Generate from role + level |
| No generic filler | Scan for phrases: "they tend to", "generally speaking", "in most cases" | Replace with specific behavioral descriptions |
| Tag→Rule translation | Every personality/culture tag from intake must appear as a concrete rule in Layer 0 | Add missing translations |
If source material was skipped, lower the bar: Layer 2 examples and catchphrases can be tag-inferred, but must be marked (inferred).
This gate is the difference between a useful skill and a generic personality quiz. Never skip it.
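One of the gate's checks, the generic-filler scan, reduces to a plain phrase search over the generated files. A minimal sketch of that single rule; the actual gate runs as part of the agent's self-check, not as a separate script.

```python
# Phrases the quality gate flags as generic filler (from the table above)
GENERIC_FILLER = ("they tend to", "generally speaking", "in most cases")

def find_filler(text):
    """Return the filler phrases present in generated work.md/persona.md text."""
    lowered = text.lower()
    return [phrase for phrase in GENERIC_FILLER if phrase in lowered]
```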
Show a concise preview card (not full content — just the highlights):
━━━ Preview: alex-chen ━━━
💼 Work Skill:
• Owns: Payments Core, webhook pipeline, idempotency layer
• Stack: Ruby (Sorbet), Go, PostgreSQL, Kafka
• CR focus: idempotency, error handling, naming, financial precision
🧠 Persona:
• Style: Short & direct, conclusion-first, zero emoji
• Decision: Correctness > Clarity > Simplicity > Speed
• Signature: "What problem are we actually solving?"
━━━━━━━━━━━━━━━━━━━━━━━
Looks right? Or want to tweak something before I write the files?
Keep to 10–12 lines max. If user says "yes" / "good" / "ok" / "👍", proceed to write immediately.
After confirmation, create the teammate:
1. Create directories:
mkdir -p teammates/{slug}/versions
mkdir -p teammates/{slug}/knowledge/docs
mkdir -p teammates/{slug}/knowledge/messages
mkdir -p teammates/{slug}/knowledge/emails
2. Write teammates/{slug}/work.md — full work skill content
3. Write teammates/{slug}/persona.md — full persona content (5-layer)
4. Write teammates/{slug}/meta.json:
{
"name": "{name}",
"slug": "{slug}",
"created_at": "{ISO_timestamp}",
"updated_at": "{ISO_timestamp}",
"version": "v1",
"profile": { "company": "", "level": "", "role": "", "mbti": "" },
"tags": { "personality": [], "culture": [] },
"impression": "",
"knowledge_sources": [],
"corrections_count": 0
}
5. Write teammates/{slug}/SKILL.md (the generated teammate skill):
Size guard: If work.md + persona.md combined exceed 8000 words, split the generated SKILL.md into modular files instead of one monolith:
teammates/{slug}/
├── SKILL.md # Entry point — loads modules on demand
├── work.md # Full work skill (standalone)
├── persona.md # Full persona (standalone)
├── meta.json
└── versions/
The SKILL.md in this case uses a lazy-load pattern:
---
name: teammate-{slug}
description: "{name} — {identity}. Full persona + work skill."
user-invocable: true
---
# {name}
{identity}
## Loading
This teammate has extensive documentation. Load on demand:
- For work questions: read `work.md` in this directory
- For persona/style questions: read `persona.md` in this directory
- For full context: read both
## Quick Reference
{10-line summary: top 5 work skills + top 5 persona traits}
## Execution Rules
1. Read persona.md first for attitude and communication style
2. Read work.md for domain knowledge and technical standards
3. Always maintain persona.md Layer 2 communication style
4. Layer 0 rules have highest priority — never violate
5. Correction Log entries override earlier rules
6. Never break character into generic AI
7. Keep response length realistic for this person
For skills under 8000 words, use the single-file format (inline everything) as before:
---
name: teammate-{slug}
description: "{name} — {company} {level} {role}. Invoke to get responses in their voice and style."
user-invocable: true
---
# {name}
{company} {level} {role}
---
## PART A: Work Capabilities
{full work.md content}
---
## PART B: Persona
{full persona.md content}
---
## Execution Rules
1. PART B decides first: what attitude to take on this task?
2. PART A executes: use technical skills to complete the task
3. Always maintain PART B's communication style in output
4. PART B Layer 0 rules have highest priority — never violate
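The 8000-word threshold that decides between the modular and single-file formats is a plain word count over the two source files. A minimal sketch:

```python
from pathlib import Path

SPLIT_THRESHOLD = 8000  # words; per the size guard above

def needs_split(work_path, persona_path):
    """True if work.md + persona.md together exceed the modular-split threshold."""
    total = sum(len(Path(p).read_text(encoding="utf-8").split())
                for p in (work_path, persona_path))
    return total > SPLIT_THRESHOLD
```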
5b. Auto-install the generated skill:
After writing the files, automatically copy the generated SKILL.md to the platform's skill directory so the user can invoke /{slug} immediately without manual setup:
# OpenClaw
mkdir -p ~/.openclaw/workspace/skills/teammate-{slug}
cp teammates/{slug}/SKILL.md ~/.openclaw/workspace/skills/teammate-{slug}/SKILL.md
# Claude Code (global)
mkdir -p ~/.claude/skills/teammate-{slug}
cp teammates/{slug}/SKILL.md ~/.claude/skills/teammate-{slug}/SKILL.md
Detect platform and run the appropriate command. If auto-install fails, show manual instructions instead.
6. Confirm to user with a live test:
✅ alex-chen created!
📁 Location: teammates/alex-chen/
🗣️ Commands: /alex-chen (full) | /alex-chen-work | /alex-chen-persona
Let me give you a quick demo — ask alex-chen anything:
6b. Run Smoke Test (mandatory):
Read {baseDir}/prompts/smoke_test.md for the full test protocol.
Internally run 3 test prompts against the generated skill:
Show a compact scorecard to the user:
🧪 Smoke Test: ✅ Domain ✅ Pushback ✅ Out-of-scope — 3/3 passed
If any test fails (❌): auto-fix the underlying issue, re-test, and tell the user what was adjusted.
6c. Privacy Scan (before sharing/exporting):
If the user intends to share or export the teammate, run:
python3 {baseDir}/tools/privacy_guard.py --scan teammates/{slug}/
If PII is found, warn the user and offer to auto-redact:
python3 {baseDir}/tools/privacy_guard.py --scan teammates/{slug}/ --redact
Knowledge directories (knowledge/{slug}/) contain raw personal data and should never be shared.
The .gitignore already excludes knowledge/ and teammates/*/ from version control.
Then immediately switch into the generated skill's persona and respond to whatever the user says next as the teammate. This makes the skill feel real from second one — no "go try it yourself" dead end.
If the user doesn't ask anything, prompt with a sample:
Try it: "Alex, should we use MongoDB for this new service?"
When user provides new materials:
- Read the current teammates/{slug}/work.md and persona.md
- Read {baseDir}/prompts/merger.md for incremental analysis rules
- Back up first: python3 {baseDir}/tools/version_manager.py --action backup --slug {slug} --base-dir ./teammates
- Regenerate teammates/{slug}/SKILL.md and bump the meta.json version and timestamp

When user says "that's wrong" / "they wouldn't do that":

- Read {baseDir}/prompts/correction_handler.md
- Append the correction to the ## Correction Log section
- Regenerate teammates/{slug}/SKILL.md

| Command | Action |
|---------|--------|
| /list-teammates | python3 {baseDir}/tools/skill_writer.py --action list --base-dir ./teammates |
| /compare {slug1} vs {slug2} | Read {baseDir}/prompts/compare.md, then load both teammates' work.md + persona.md and generate side-by-side comparison |
| /export-teammate {slug} | python3 {baseDir}/tools/export.py --slug {slug} --base-dir ./teammates — creates portable package |
| /teammate-rollback {slug} {ver} | python3 {baseDir}/tools/version_manager.py --action rollback --slug {slug} --version {ver} --base-dir ./teammates |
| /delete-teammate {slug} | Confirm, then rm -rf teammates/{slug} |
Tool/script fails: Don't dump the traceback to the user. Summarize in one line + suggest a fix:
⚠️ Slack collector failed (token expired). Run: python3 tools/slack_collector.py --setup
User goes off-script: If the user says something unrelated mid-creation, handle it gracefully and offer to resume:
No problem — want to continue creating {slug}, or do something else?
Partial creation interrupted: If a previous creation was abandoned, detect existing teammates/{slug}/ with incomplete files (missing SKILL.md) and offer to resume or restart:
Found an incomplete teammate "alex-chen" from earlier. Resume where we left off, or start fresh?
The generated skill has two parts that work together:
| Part | What it captures |
|------|-----------------|
| Part A — Work Skill | Systems owned · tech standards · code review focus · workflows · tribal knowledge |
| Part B — Persona | 5-layer model: hard rules → identity → expression → decision patterns → interpersonal style |
When invoked: Persona decides the attitude → Work Skill executes → output in their voice.
**OpenClaw** — Open-source personal AI assistant by @steipete. Runs on your own hardware, answers on 25+ channels (WhatsApp, Telegram, Slack, Discord, Teams, Signal, iMessage…). Local-first, persistent memory, voice, canvas, cron jobs, and a growing skills ecosystem.

**MyClaw.ai** — Managed hosting for OpenClaw — skip Docker, servers, and configs. One-click deploy, always-on, automatic updates, daily backups. Your OpenClaw instance live in minutes. Perfect if you want teammate.skill running 24/7 without self-hosting.

**Claude Code** — Anthropic's official agentic coding CLI. Install this skill into .claude/skills/ and invoke with /create-teammate.
Option A — ClawHub (recommended):
openclaw skills install create-teammate
Option B — Git:
git clone https://github.com/LeoYeAI/teammate-skill ~/.openclaw/workspace/skills/create-teammate
Then start a new session (/new) and type /create-teammate.
MyClaw.ai users: SSH into your instance or use the web terminal. Same commands.
# Per-project
mkdir -p .claude/skills
git clone https://github.com/LeoYeAI/teammate-skill .claude/skills/create-teammate
# Global (all projects)
git clone https://github.com/LeoYeAI/teammate-skill ~/.claude/skills/create-teammate
Then type /create-teammate in Claude Code.
Clone into your agent's skill directory. Any agent that reads AgentSkills SKILL.md frontmatter will auto-detect it.
pip3 install -r requirements.txt # Only needed for Slack/GitHub auto-collectors
Slack auto-collection needs a Bot Token. GitHub collection uses
GITHUB_TOKEN. See INSTALL.md for setup guides.
| Source | Type | Method |
|--------|------|--------|
| Slack | Messages, threads, reactions | ✅ Auto-collect (enter username) or upload JSON export |
| GitHub | PRs, reviews, commits, issues | ✅ Auto-collect (enter handle) |
| Gmail | Emails | Upload .mbox / .eml from Google Takeout |
| Teams / Outlook | Chat, emails | Upload export JSON |
| Notion | Wiki, docs | Upload Markdown/HTML export |
| Confluence | Wiki, docs | Upload HTML export or zip |
| JIRA / Linear | Issues, tickets | Upload CSV / JSON export |
| PDF / Images | Any document | Upload directly |
| Markdown / Text | Anything | Upload or paste |
Mix and match any combination. Or skip entirely — even a one-sentence description generates a useful skill.
You ❯ /create-teammate Alex Chen, Stripe L3 backend, INTJ perfectionist,
brutal code reviewer but usually right
Agent ❯ 👤 alex-chen | Stripe L3 Backend | INTJ, Perfectionist, Stripe-style
Looks right? (y / change something)
You ❯ y
Agent ❯ Any work artifacts? Slack username, GitHub handle, files, or paste text.
Skip is fine — I'll work with what you gave me.
You ❯ skip
Agent ❯ ━━━ Preview: alex-chen ━━━
💼 Work: Payments Core · Ruby/Go/Postgres · CR: idempotency, naming
🧠 Persona: Direct, conclusion-first · Correctness > Speed
━━━━━━━━━━━━━━━━━━━━━━━
Looks right?
You ❯ 👍
Agent ❯ ✅ alex-chen created!
📁 teammates/alex-chen/ | 🗣️ /alex-chen
Try it — ask Alex anything:
You ❯ Alex, should we use MongoDB for this new service?
Alex ❯ No. We need ACID for payment flows. Stick with Postgres.
If the concern is schema flexibility, use JSONB columns.
You ❯ Review this payment endpoint?
Alex ❯ What problem does this solve? PR description doesn't say.
Blocking: you're using float for amount. Use Decimal.
Also — not idempotent. What happens on retry?
You ❯ Skip tests, ship by Friday?
Alex ❯ No. Tests right or we don't ship. Cut scope, not quality.
You ❯ /update-teammate alex-chen
[uploads 3 months of Slack export]
Agent ❯ Analyzed 847 messages. Found:
• 12 new catchphrases added to Persona
• Webhook reliability expertise added to Work Skill
• v1 → v2 (v1 archived)
This skill is built to run natively on both platforms — no adapter layer, no workarounds.
| Feature | Claude Code | OpenClaw |
|---------|:-----------:|:--------:|
| Install | .claude/skills/ | openclaw skills install or ~/.openclaw/workspace/skills/ |
| Trigger | /create-teammate | /create-teammate |
| Path resolution | ${CLAUDE_SKILL_DIR} | {baseDir} (auto-resolved) |
| File operations | Read / Write / Edit | read / write / edit |
| Run scripts | Bash tool | exec tool |
| Fetch URLs | Bash → curl | web_fetch tool |
| Generated output | ./teammates/{slug}/ | ./teammates/{slug}/ |
The SKILL.md uses {baseDir} for all paths — both platforms resolve it automatically.
Teammate skills aren't static — they improve over time:
| Method | How |
|--------|-----|
| Append data | "I found more Slack logs" → auto-analyze delta, merge without overwriting |
| Correct mistakes | "They wouldn't say that, they'd say..." → instant correction |
| Version control | Every update auto-archives → rollback to any version |
Every teammate goes through a 3-layer quality pipeline before you get it:
Validates generated content against 7 hard rules: Layer 0 concreteness, example count, catchphrase density, priority ordering, scope definition, no generic filler, tag→rule completeness. Failures auto-fix before you see the preview.
Three automated test prompts run against every generated skill:
🧪 Smoke Test: ✅ Domain ✅ Pushback ✅ Out-of-scope — 3/3 passed
Automatic detection of emails, phone numbers, API tokens, SSNs, and other PII:
python3 tools/privacy_guard.py --scan teammates/alex-chen/ # detect
python3 tools/privacy_guard.py --scan teammates/alex-chen/ --redact # auto-fix
Raw knowledge files (knowledge/) are excluded from git and exports by default.
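The patterns such a scan relies on can be sketched with a few regexes. These are illustrative only; the actual rules in privacy_guard.py may be broader and more precise.

```python
import re

# Illustrative patterns; the real privacy_guard.py rules may differ
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_token": re.compile(r"\b(?:ghp|xoxb)_[A-Za-z0-9]{10,}\b"),
}

def scan_text(text):
    """Return {kind: matches} for every PII pattern that fires on `text`."""
    hits = {kind: pat.findall(text) for kind, pat in PII_PATTERNS.items()}
    return {kind: found for kind, found in hits.items() if found}
```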
Side-by-side comparison with scenario simulation:
You ❯ /compare alex-chen vs bob-smith
Agent ❯ ━━━ alex-chen vs bob-smith ━━━
alex-chen bob-smith
Priority: Correctness > Speed Ship fast > Perfect
CR Style: Blocking on naming Suggestions only
Under Pressure: Gets quieter Gets louder
Says "No" by: Direct refusal Asking questions
You ❯ Who should review the payments API redesign?
Agent ❯ alex-chen: "Send me the design doc. I want to check
idempotency and error contracts."