by meodai
Agent skill for color science expertise. 113 references covering color spaces, accessibility (APCA, WCAG), palette generation, pigment mixing, and historical color theory. Works with Claude Code, Codex, Cursor, Copilot & others.
# Add to your Claude Code skills
```shell
git clone https://github.com/meodai/skill.color-expert
```

A comprehensive knowledge base for color-related work. See references/INDEX.md for 100+ detailed reference files; this skill file contains the essential knowledge to answer most questions directly.
| Task | Use | Why |
| ------------------------------- | -------------------------------------- | ------------------------------------------------------------------------- |
| Perceptual color manipulation | OKLCH | Best uniformity for lightness, chroma, hue. Fixes CIELAB's blue problem. |
| CSS gradients & palettes | OKLCH or color-mix(in oklab) | No mid-gradient darkening like RGB/HSL |
| Gamut-aware color picking | OKHSL / OKHSV | Ottosson's picker spaces — cylindrical like HSL but perceptually grounded |
| Normalized saturation (0-100%) | HSLuv | CIELUV chroma normalized per hue/lightness. HPLuv for pastels. |
| Print workflows | CIELAB D50 | ICC standard illuminant |
| Screen workflows | CIELAB D65 or OKLAB | D65 = screen standard |
| Cross-media appearance matching | CAM16 / CIECAM02 | Accounts for surround, adaptation, luminance, and viewing conditions |
| HDR | Jzazbz / ICtCp | Designed for extended dynamic range |
| Pigment/paint mixing simulation | Kubelka-Munk (Spectral.js, Mixbox) | Spectral reflectance mixing, not RGB averaging |
| Color difference (precision) | CIEDE2000 (ΔE00) | Gold standard perceptual distance |
| Color difference (fast) | Euclidean ΔE in OKLAB | Good enough for most applications |
| Video/image compression | YCbCr | Luma+chroma separation enables chroma subsampling |
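Several of the table's recommendations reduce to one short conversion. A minimal sRGB → OKLAB sketch, using the matrices published in Björn Ottosson's OKLAB article (function names are mine):

```javascript
// sRGB (0..1 per channel) → OKLAB, per Ottosson's published matrices.
function srgbToLinear(c) {
  // Undo the sRGB transfer function
  return c <= 0.04045 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
}

function srgbToOklab(r, g, b) {
  const lr = srgbToLinear(r), lg = srgbToLinear(g), lb = srgbToLinear(b);

  // Linear sRGB → approximate cone (LMS) response
  const l = 0.4122214708 * lr + 0.5363325363 * lg + 0.0514459929 * lb;
  const m = 0.2119034982 * lr + 0.6806995451 * lg + 0.1073969566 * lb;
  const s = 0.0883024619 * lr + 0.2817188376 * lg + 0.6299787005 * lb;

  // Cube-root nonlinearity, then the final OKLAB matrix
  const l_ = Math.cbrt(l), m_ = Math.cbrt(m), s_ = Math.cbrt(s);
  return {
    L: 0.2104542553 * l_ + 0.7936177850 * m_ - 0.0040720468 * s_,
    a: 1.9779984951 * l_ - 2.4285922050 * m_ + 0.4505937099 * s_,
    b: 0.0259040371 * l_ + 0.7827717662 * m_ - 0.8086757660 * s_,
  };
}

console.log(srgbToOklab(1, 1, 1)); // white → L ≈ 1, a ≈ 0, b ≈ 0
```

A "fast" color difference is then just the Euclidean distance between two such `{L, a, b}` triples.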
An agent skill that turns your coding agent into a color science expert. Built from resources I keep looking up, returning to, and sharing with others.
This started as a simple skill file with some color theory notes. Over time it grew into a comprehensive knowledge base as I kept pasting videos, articles, tools, and papers that I find myself referencing again and again — both for my own work building color tools and for explaining color concepts to others.
The skill has three layers:
- **SKILL.md** (~150 lines) — The "greatest hits" that your agent loads immediately. Key facts, corrections, tool recommendations, and guidelines that answer most color questions without needing to dig deeper.
- **references/INDEX.md** (~220 lines) — A structured lookup table your agent can scan to find the right reference file for a specific topic.
- **references/** (120 markdown files, ~286K words) — Deep reference material: full video transcripts, article summaries, library documentation, scraped websites, and research notes.
There is also a lightweight evals/ folder for realistic trigger and task prompts so the skill can be reviewed against actual usage instead of only edited by intuition.
The collection process is simple: when I come across a color resource worth keeping — a YouTube video, a GitHub repo, a research paper, an article — I paste the URL and the skill's workflow captures it:
YouTube videos: transcript fetched via yt-dlp, summarized, and key concepts extracted.
HSL isn't "bad" — it's a simple, fast geometric rearrangement of RGB into a cylinder. It's fine for quick color picking and basic UI work. But its three channels don't correspond to human perception:
Fully saturated yellow (`hsl(60,100%,50%)`) and fully saturated blue (`hsl(240,100%,50%)`) have the same L=50% but vastly different perceived brightness; L is a mathematical average, not a perceptual measurement.

When HSL is fine: simple color pickers, quick CSS tweaks, situations where perceptual accuracy doesn't matter.
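The yellow-vs-blue claim is easy to check numerically with the WCAG relative-luminance formula (a sketch; helper names are mine):

```javascript
// WCAG 2.x relative luminance: linearize sRGB channels, then weight them.
function channelToLinear(c) {
  return c <= 0.04045 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
}

function relativeLuminance(r, g, b) {
  // Coefficients from the WCAG 2.x definition of relative luminance
  return 0.2126 * channelToLinear(r) +
         0.7152 * channelToLinear(g) +
         0.0722 * channelToLinear(b);
}

const yellow = relativeLuminance(1, 1, 0); // hsl(60,100%,50%)  → ≈ 0.928
const blue   = relativeLuminance(0, 0, 1); // hsl(240,100%,50%) → ≈ 0.072
console.log(yellow, blue, yellow / blue);  // ~13× brightness difference at the same HSL "L"
```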
When to use something better:
Gradients and mixing: `color-mix(in oklab)` (no mid-gradient darkening).

When using colors in a program or CSS, add a semantic layer between raw color values and UI roles.
The examples below are pseudocode, not literal CSS requirements. They express the decision structure an agent should preserve even if the target stack uses different syntax.
Across CSS, JS/TS, Swift, design-token JSON, templates, or pseudocode, default to the same structure:
Raw color literals should usually appear only in palette/reference definitions, conversions, diagnostics, or deliberately one-off examples.
```css
--c-red: #f00;
--c-warning: var(--c-red);
```

Map `var(...)` to the target system's equivalent alias/reference mechanism. Pseudocode examples:
```
ref.red            := closest('red', generatedPalette)
semantic.warning   := ref.red
semantic.onSurface := mostReadableOn(surface)
```

Good pattern: palette/reference tokens define available colors; semantic tokens map those colors to roles like surface, text, accent, success, warning, and danger.
If a system can derive a decision from constraints, encode that derivation. Examples: nearest named hue in a generated palette, foreground chosen by APCA/WCAG target, hover state computed from the base token in OKLCH instead of hand-picking a second unrelated hex.
For larger systems, prefer a token graph over a flat token dump: references, semantic roles, derived functions, and scope inheritance. This makes theme changes, accessibility guarantees, and multi-platform export auditable and easier to maintain.
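As a concrete sketch of a derived token, here is a foreground picker that chooses black or white text by WCAG contrast ratio rather than storing a second, unrelated hex value (the function names are illustrative, not a real library API):

```javascript
// Derive the foreground from the surface: "encode the derivation".
function linear(c) {
  return c <= 0.04045 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
}
function luminance([r, g, b]) {
  // WCAG 2.x relative luminance
  return 0.2126 * linear(r) + 0.7152 * linear(g) + 0.0722 * linear(b);
}
function contrastRatio(a, b) {
  const [hi, lo] = [luminance(a), luminance(b)].sort((x, y) => y - x);
  return (hi + 0.05) / (lo + 0.05); // WCAG contrast ratio, 1..21
}

// Derived semantic token: white or black text, whichever contrasts more
function mostReadableOn(surface) {
  const white = [1, 1, 1], black = [0, 0, 0];
  return contrastRatio(surface, white) >= contrastRatio(surface, black)
    ? white
    : black;
}

console.log(mostReadableOn([0.1, 0.2, 0.6])); // dark blue surface → white text
```

The same pattern extends to hover states (shift the base token's OKLCH lightness) or to targeting a specific APCA score instead of a WCAG ratio.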
Of ~281 trillion hex color pairs (research by @mrmrs_, computed via a Rust brute-force run):
| Threshold | % passing | Odds |
| ------------------------- | --------- | --------------- |
| WCAG 3:1 (large text) | 26.49% | ~1 in 4 |
| WCAG 4.5:1 (AA body text) | 11.98% | ~1 in 8 |
| WCAG 7:1 (AAA) | 3.64% | ~1 in 27 |
| APCA 60 | 7.33% | ~1 in 14 |
| APCA 75 (fluent reading) | 1.57% | ~1 in 64 |
| APCA 90 (preferred body) | 0.08% | ~1 in 1,250 |
APCA is far more restrictive than WCAG at comparable readability. At APCA 90, only 239 billion of 281 trillion pairs work.

JPEG compression exploits the same biology: because human vision resolves brightness at higher resolution than color, chroma subsampling (4× less color data) is invisible.
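The WCAG rows above can be roughly sanity-checked with a Monte Carlo sample instead of a full enumeration. This is a sketch, not the cited brute-force methodology (which enumerated every hex pair exactly):

```javascript
// Estimate what fraction of random sRGB color pairs pass a WCAG contrast threshold.
function linear(c) {
  return c <= 0.04045 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
}
function luminance(r, g, b) {
  return 0.2126 * linear(r) + 0.7152 * linear(g) + 0.0722 * linear(b);
}
function contrast(c1, c2) {
  const [hi, lo] = [luminance(...c1), luminance(...c2)].sort((a, b) => b - a);
  return (hi + 0.05) / (lo + 0.05);
}

function passRate(threshold, samples = 50000) {
  let pass = 0;
  for (let i = 0; i < samples; i++) {
    const c1 = [Math.random(), Math.random(), Math.random()];
    const c2 = [Math.random(), Math.random(), Math.random()];
    if (contrast(c1, c2) >= threshold) pass++;
  }
  return pass / samples;
}

console.log((passRate(4.5) * 100).toFixed(1) + "%"); // lands near the ~12% figure
```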
Complementary, triadic, tetradic intervals are weak predictors of mood, legibility, or accessibility on their own. Every hue plane has a different shape in perceptual space, so geometric hue intervals do not guarantee perceptual balance.
Organize by character (pale/muted/deep/vivid/dark), not hue. Hue is usually a weaker predictor of emotional response than chroma and lightness: a muted palette often reads as calm across many hues, and relaxed vs. intense is driven more by chroma and lightness than by hue alone.
Grayscale is a quick sanity check for lightness separation, not an accessibility proof. You still need to verify contrast with WCAG/APCA and consider text size, weight, polarity, and CVD. Same character + varied lightness is often more readable. Same lightness regardless of hue is usually illegible.
60% dominant color, 30% secondary, 10% accent. One color dominates to prevent "three equally-sized gorillas fighting."
| System | Register | Example |
| --------------------- | -------------------------- | ---------------------------------- |
| ISCC-NBS | Scientific precision | "vivid yellowish green" |
| Munsell | Systematic notation | "5GY 7/10" |
| XKCD | Common perception | "ugly yellow", "hospital green" |
| Traditional Japanese | Cultural/poetic | "wasurenagusa-iro" (forget-me-not) |
| RAL | Industrial reproducibility | RAL 5002 |
| Ridgway (1912) | Ornithological | 1,115 named colors, public domain |
| CSS Named Colors | Web standard | 147 named colors |
| color-description lib | Emotional adjectives | "pale, delicate, glistening" |
Use the `color-name-lists` npm package for 18 naming systems in one import.
Note: coolors.co does not generate palettes — it picks randomly from 7,821 pre-made palettes hardcoded in its JS bundle.
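One generation trick worth keeping at hand is Inigo Quilez's cosine palette: `color(t) = a + b*cos(2π(c*t + d))`, where four RGB triples (12 floats) define an endless family of palettes. A sketch, with one of IQ's published presets:

```javascript
// Each RGB channel is a[i] + b[i] * cos(2π * (c[i] * t + d[i])) for t in 0..1.
function cosinePalette(t, a, b, c, d) {
  return a.map((_, i) => a[i] + b[i] * Math.cos(2 * Math.PI * (c[i] * t + d[i])));
}

// IQ's rainbow-ish preset: offset, amplitude, frequency, phase per channel
const A = [0.5, 0.5, 0.5];
const B = [0.5, 0.5, 0.5];
const C = [1.0, 1.0, 1.0];
const D = [0.00, 0.33, 0.67];

console.log(cosinePalette(0.0, A, B, C, D)); // t = 0 → [1, ≈0.26, ≈0.26]
```

Varying `C` changes how fast each channel cycles over the ramp; varying `D` phase-shifts the channels against each other, which is what produces distinct palettes from the same formula.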
`color(t) = a + b*cos(2π(c*t+d))` — 12 floats = infinite palette (the IQ cosine presets).

See references/INDEX.md for the detailed files organized as:
- `historical/` — Ostwald, Helmholtz, Bezold, Ridgway 1912, ISCC-NBS, Munsell, Albers, Caravaggio's pigments, Moses Harris, Lewis/Ladd-Franklin
- `contemporary/` — Ottosson's OKLAB articles, Briggs lectures, Fairchild, Hunt, CIECAM02, MacAdam ellipses, Pointer's gamut, CIE 1931/standard observer, Pixar Color Science, Acerola, Juxtopposed, Computerphile, bird tetrachromacy, OLO, GenColor paper. Full scrapes: huevaluechroma.com and colorandcontrast.com
- `techniques/` — All tools above documented in detail, plus: CSS Color 4/5, ICC workflows, Tyler Hobbs generative color, Harvey Rayner Fontana approach, Goethe edge colors as design hack, mattdesl workshop + K-M simplex, CSS-native generation, IQ cosine presets, Erika Mulvenna interview, Bruce Lindbloom math reference, image extraction tools, Aladdin color analysis

(Conversion to markdown: markitdown by Microsoft.) Everything goes into one of three folders and gets indexed.
```
SKILL.md              # The skill definition (loaded on activation)
CLAUDE.md             # Claude Code repo instructions
references/
  INDEX.md            # Master lookup table
  historical/         # Pre-digital color science
    *.md              # Ostwald, Helmholtz, Bezold, Ridgway, ISCC-NBS,
                      # Moses Harris, Amy Sawyer, Lewis/Ladd-Franklin,
                      # Caravaggio's pigments, Itten critique...
    pdfs/             # Source books from Archive.org (gitignored, ~236MB)
  contemporary/       # Modern color science & theory
    *.md              # OKLAB articles, Briggs lectures, CSA webinar...
```