by Calix-L
Prompt packs that make any AI agent a LaTeX expert — fix errors, polish writing, format for venues, read papers, recover source
# Add to your Claude Code skills
git clone https://github.com/Calix-L/awesome-latex-skills
47 compilation errors at 2 AM. A reviewer who writes "English needs improvement." A CVPR reject that needs to become an ICML submission by Friday.
awesome-latex-skills turns any AI agent into a LaTeX expert — structured workflows, curated knowledge, and guardrails that raw prompts can't replicate.
47 errors → 0 · Chinglish → publication-ready · CVPR → NeurIPS · 50 pages → structured notes
10-Second Pitch · Skills · Demos · Workflows · Quick Start · Compatibility
You describe the problem. The skill produces the fix.
- latex-rescue auto-corrects typos, fixes mismatched environments, resolves package conflicts. 80+ patterns, zero manual edits.
- latex-polish fixes 18 categories of Chinglish, applies 100+ academic phrasebank templates, adds proper hedging. 3 intensity levels.
- latex-fmt switches \documentclass, removes banned packages, anonymizes, checks page limits. 15 venues covered.
- paper-read produces a 5-bullet skim in 30 seconds, a structured analysis in 5 minutes, or a full critical review in 15.
- pdf2tex rebuilds LaTeX from any compiled PDF. 97+ math glyph mappings, table reconstruction, 7-phase pipeline.

No LaTeX expertise required. The skills handle the backslashes.
| | | | |
|---|---|---|---|
| 80+ error patterns | 14 package conflicts | 18 Chinglish categories | 100+ phrasebank templates |
| 15 venue rules | 50+ appraisal items | 97+ glyph mappings | 15 reference files |
You've tried asking ChatGPT to fix your LaTeX. It guesses. It misses things. It changes your math.
| You say... | What the raw LLM does | What the skill pack does |
|---|---|---|
| \beginn{table} | "That's an interesting typo" | Auto-corrects to \begin{table} |
| "According to the experiment" | Accepts it | Flags overuse, suggests alternatives |
| "Format for NeurIPS" | Forgets Broader Impact | Flags missing required section |
| "Convert this PDF to LaTeX" | Produces broken markup | 7-phase pipeline with verification |
| \citep{} without natbib | Silently ignores | Detects missing package, adds it |
| "Polish my paper" | Rewrites everything | Minimal edits, preserves math & commands |
Skills inject hundreds of domain-specific rules that LLMs can't reliably recall from memory. Each skill = structured workflow + reference knowledge + guardrails. Same input, same expert output, every time.
- \textbff{bold} → Undefined control sequence
+ \textbf{bold} → auto-fixed
- x_i is important → Missing $ inserted
+ $x_i$ is important → auto-fixed
- \begin{figure}...\end{table}
+ \begin{figure}...\end{figure} → mismatch fixed
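The auto-fixes above can be sketched as a pattern table. This is a hypothetical miniature of latex-rescue's 80+ pattern catalog (the real rules live in prose inside references/error-catalog.md, not in Python); `auto_fix` and the two patterns shown are illustrative only.

```python
import re

# Hypothetical two-entry slice of the error catalog: (pattern, replacement).
TYPO_FIXES = [
    (re.compile(r"\\beginn\b"), r"\\begin"),    # doubled letter in \begin
    (re.compile(r"\\textbff\b"), r"\\textbf"),  # doubled letter in \textbf
]

def auto_fix(source: str) -> str:
    """Apply each known typo pattern; the real skill also checks
    environment pairing and package conflicts."""
    for pattern, repl in TYPO_FIXES:
        source = pattern.sub(repl, source)
    return source

print(auto_fix(r"\textbff{bold} and \beginn{table}"))
# \textbf{bold} and \begin{table}
```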
- The model can achieves good performance on the dataset.
+ The model achieves strong performance on the benchmark.
- According to the experiment, it makes the accuracy improved by 3.2%.
+ Experiments show that the method improves accuracy by 3.2%.
- Most of methods in this research field can not achieve the same result.
+ Most methods in this field fail to match this result.
- \documentclass{article}
+ \documentclass{neurips_2025}
- \author{Zhang et al.}
+ \author{Anonymous}
- (no Broader Impact section)
+ ⚠ Broader Impact required by NeurIPS — flagged
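The flag above comes from a venue rule table. Here is a hypothetical one-venue fragment of what latex-fmt checks; the real skill encodes rules for 15 venues in its references, and `check_venue` is a sketch, not its actual implementation.

```python
# Hypothetical fragment of the venue rule table.
VENUE_RULES = {
    "NeurIPS": {"required_sections": ["Broader Impact"], "anonymous": True},
}

def check_venue(tex: str, venue: str) -> list[str]:
    """Return warnings for venue requirements the source does not meet."""
    rules = VENUE_RULES[venue]
    warnings = [
        f"missing required section: {name}"
        for name in rules["required_sections"]
        if f"\\section{{{name}}}" not in tex
    ]
    if rules["anonymous"] and "\\author{Anonymous}" not in tex:
        warnings.append("submission must be anonymized")
    return warnings

print(check_venue(r"\author{Zhang et al.}", "NeurIPS"))
# ['missing required section: Broader Impact', 'submission must be anonymized']
```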
- "This paper proposes a novel transformer-based approach for..."
+ [skim] Object detection · Wang et al., CVPR 2024
+ Novelty: sparse attention for real-time. Verdict: worth deep read.
- (reading every paper front-to-back)
+ [deep] Key eq: sparse attention. Delta: 10x faster.
+ Gap: only tested on COCO. Overclaim: "SOTA" (margin 0.3%).
- (staring at a compiled PDF, no source files)
+ \documentclass{article}
+ \usepackage{amsmath,amssymb}
+ \section{Introduction}
+ The model achieves $F_1 = 92.3$ on the benchmark.
+ % [UNCERTAIN: math notation — verify subscripts]
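The glyph step of the reconstruction above can be sketched as a lookup table. This is a hypothetical handful of the 97+ mappings pdf2tex applies; glyphs it cannot resolve are what the real pipeline marks with `% [UNCERTAIN: ...]` comments.

```python
# Hypothetical sample of the glyph mapping table.
GLYPHS = {"≤": r"\le", "≥": r"\ge", "α": r"\alpha", "∑": r"\sum"}

def remap_glyphs(text: str) -> str:
    """Replace known Unicode math glyphs with their LaTeX macros."""
    for glyph, macro in GLYPHS.items():
        text = text.replace(glyph, macro)
    return text

print(remap_glyphs("α ≤ 1"))  # \alpha \le 1
```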
Skills compose into pipelines for real academic scenarios:
| Scenario | What you type | What happens |
|---|---|---|
| Deadline crunch | /latex-rescue | Crash → rescue → compile |
| Review turnaround | /latex-polish → /latex-fmt | Draft → polish → format → submit |
| Rebuttal reformat | /latex-polish → /latex-fmt | CVPR reject → polish → reformat for ICML |
| Lost source | /pdf2tex → /latex-rescue | PDF → reconstruct → fix → compile |
| New paper | /paper-read → /latex-polish → /latex-fmt | Read papers → polish → format for venue |
| Overleaf | /latex-rescue | Paste error log → get fixes |
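The pipelines in the table behave like function composition: each skill is a text-to-text transform, and a scenario is a chain of them. The two stand-in transforms below are toy approximations (and `icml2025` is a hypothetical class name), not the skills' actual behavior.

```python
def compose(*steps):
    """Chain text -> text steps into one pipeline."""
    def pipeline(doc: str) -> str:
        for step in steps:
            doc = step(doc)
        return doc
    return pipeline

def polish(tex: str) -> str:
    # Toy stand-in for /latex-polish.
    return tex.replace("can achieves", "achieves")

def fmt_for_icml(tex: str) -> str:
    # Toy stand-in for /latex-fmt; class name is hypothetical.
    return tex.replace(r"\documentclass{article}", r"\documentclass{icml2025}")

rebuttal_reformat = compose(polish, fmt_for_icml)
print(rebuttal_reformat(r"\documentclass{article} The model can achieves 91%."))
# \documentclass{icml2025} The model achieves 91%.
```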
One command to install all 5 skills:
git clone https://github.com/Calix-L/awesome-latex-skills.git && \
cp -r awesome-latex-skills/{latex-rescue,latex-polish,latex-fmt,paper-read,pdf2tex} ~/.claude/skills/
Then just type /latex-rescue, /latex-polish, etc. in Claude Code.
One skill only:
cp -r awesome-latex-skills/latex-rescue ~/.claude/skills/
Not using Claude Code? Just point your agent to the SKILL.md:
Read awesome-latex-skills/latex-rescue/SKILL.md and follow the workflow.
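In message-API terms, "point your agent to the SKILL.md" just means prepending the file as the system turn. A minimal sketch, assuming a chat-style API; `build_prompt` and the stand-in skill text are illustrative, not part of the repo.

```python
def build_prompt(skill_md: str, user_request: str) -> list[dict]:
    """The skill text becomes the system message; the task follows as the user turn."""
    return [
        {"role": "system", "content": skill_md},
        {"role": "user", "content": user_request},
    ]

# In practice, read the real file:
#   skill = open("awesome-latex-skills/latex-rescue/SKILL.md").read()
skill = "Classify each error in the log, then apply the matching fix."  # stand-in
messages = build_prompt(skill, "Fix my LaTeX errors: ! Undefined control sequence.")
print(messages[0]["role"])  # system
```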
No install needed for latex-polish, latex-fmt, or paper-read. latex-rescue needs LaTeX. pdf2tex needs pip install pymupdf.
| Platform | How to use |
|---|---|
| Claude Code | Copy to ~/.claude/skills/, invoke with /latex-rescue |
| ChatGPT / GPT-4 | Paste SKILL.md as custom instruction or system prompt |
| Cursor | Add SKILL.md content to .cursor/rules/ |
| Copilot | Add SKILL.md content to .github/copilot-instructions.md |
| Any LLM | Send SKILL.md as context, then ask your question |
Each skill is a self-contained directory:
latex-rescue/
├── SKILL.md # the prompt — role, triggers, workflow, guardrails
├── references/ # domain knowledge the agent reads at each phase
│ ├── error-catalog.md
│ ├── package-conflicts.md
│ └── debug-workflow.md
└── agents/
└── config.yaml # auto-activation triggers and platform settings
- You type /latex-rescue or say "fix my LaTeX errors"
- The agent reads SKILL.md — now it has a structured workflow + guardrails
- It pulls references/ for precise domain rules at each phase

License: MIT