jialong@columbia:~/site$ cat ./lab/resume-tailor.md

resume-tailor

repo:
Waaangjl/resume-tailor
lang:
Python · LaTeX
year:
2026 — active

A small Python tool that takes a LaTeX résumé and a job description, and rewrites the bullets — and only the bullets — to read like they were written for that role. The hard line it tries not to cross: don't make things up. If a bullet says I built a forty-tab cash-flow model in March 2025, the tool can re-tense it, re-emphasize it, swap the verb, surface a more relevant outcome — but it can't claim I built fifty tabs, or did it in February.

Why I wrote it

SIPA's on-cycle market is a treadmill. A real one — fifteen banks, ten consultancies, twelve climate funds, all wanting a tailored résumé and a tailored cover letter, all due the same Sunday. The tailoring itself isn't intellectually hard. It's just a lot of careful copy-paste and a lot of judgment calls about which bullet leads.

I did the first round by hand, the way one is supposed to. By the seventh résumé I noticed I was making the same five edits over and over: tense changes, lead-bullet reorderings, swapping out "Python" for "Python (pandas, statsmodels)" when the JD asked for it. That's the kind of thing a model is good at — short, local rewrites with a clear constraint.

So I wrote it.

How it works

A .tex file goes in. The tool parses out each \item bullet, builds a context from (your full résumé, the JD, the role's seniority, any guardrails you set), and asks a model to rewrite each bullet under three rules:

  1. Don't add facts. If a bullet doesn't have a number, the rewrite doesn't get to invent one.
  2. Keep proper nouns. Wood Mackenzie stays Wood Mackenzie; "a research firm" is not allowed as a substitute.
  3. Match the JD's vocabulary. If the JD says "valuation," the bullet doesn't say "modeling."
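The first two rules are cheap to check mechanically after the model responds. Here's a minimal sketch of what that post-check could look like — the function names and regexes are mine, not necessarily the repo's, and the proper-noun heuristic is deliberately crude:

```python
import re

def extract_bullets(tex: str) -> list[str]:
    # Pull the text after each \item, up to the next \item or the end of the list.
    pattern = re.compile(r"\\item\s+(.*?)(?=\\item|\\end\{itemize\})", re.DOTALL)
    return [b.strip() for b in pattern.findall(tex)]

def rule_violations(original: str, rewrite: str) -> list[str]:
    """Post-hoc checks for rules 1 and 2 (rule 3 needs the JD, so it's not here)."""
    problems = []
    # Rule 1: every number in the rewrite must already exist in the original.
    allowed = set(re.findall(r"\d[\d,.]*", original))
    for num in re.findall(r"\d[\d,.]*", rewrite):
        if num not in allowed:
            problems.append(f"invented number: {num}")
    # Rule 2: proper nouns, crudely approximated as mid-sentence capitalized
    # words, must survive the rewrite.
    nouns = set(re.findall(r"(?<=\s)[A-Z][a-zA-Z]+", original))
    kept = set(re.findall(r"[A-Z][a-zA-Z]+", rewrite))
    problems += [f"dropped proper noun: {n}" for n in sorted(nouns - kept)]
    return problems
```

A check like this catches the obvious failure modes ("40-tab" becoming "50-tab", "Wood Mackenzie" becoming "a research firm") without trusting the model to police itself.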

Output is a new .tex file plus a colored diff so you can scan what changed and decline anything that drifted. There's a flag for cover-letter generation that runs the same constraints over a five-paragraph template.
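The diff step needs nothing exotic — `difflib` from the standard library plus ANSI color codes covers it. A sketch under my own naming:

```python
import difflib

GREEN, RED, RESET = "\033[32m", "\033[31m", "\033[0m"

def colored_diff(old_bullets: list[str], new_bullets: list[str]) -> str:
    # Unified diff of old vs. new bullets; additions green, deletions red.
    lines = difflib.unified_diff(old_bullets, new_bullets,
                                 fromfile="original", tofile="tailored",
                                 lineterm="")
    out = []
    for line in lines:
        if line.startswith("+") and not line.startswith("+++"):
            out.append(GREEN + line + RESET)
        elif line.startswith("-") and not line.startswith("---"):
            out.append(RED + line + RESET)
        else:
            out.append(line)
    return "\n".join(out)
```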

It works with Claude Code (claude -p as the inference backend) or any LiteLLM-compatible endpoint. I run it most often against Claude Sonnet for tailoring — Opus when something feels like it needs a second pass.
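Shelling out to `claude -p` is a few lines with `subprocess`; making the backend an injectable callable is what keeps a LiteLLM endpoint (or a test stub) pluggable. A sketch — the function names and prompt wording are hypothetical, not lifted from the repo:

```python
import subprocess

def claude_backend(prompt: str) -> str:
    # `claude -p <prompt>` runs Claude Code non-interactively:
    # it prints the model's reply to stdout and exits.
    result = subprocess.run(["claude", "-p", prompt],
                            capture_output=True, text=True, check=True)
    return result.stdout.strip()

def rewrite_bullet(bullet: str, jd: str, complete=claude_backend) -> str:
    prompt = (
        "Rewrite this resume bullet for the job description below.\n"
        "Rules: add no facts, keep every proper noun, prefer the JD's vocabulary.\n"
        f"Bullet: {bullet}\n"
        f"Job description: {jd}\n"
        "Reply with the rewritten bullet only."
    )
    return complete(prompt)
```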

What it doesn't do

  • It doesn't typeset. You still run pdflatex.
  • It doesn't track outcomes. There's no "résumé v3 got me an interview at firm Y" loop. I tried wiring one up and decided it was creepy.
  • It doesn't tailor to culture — "this firm is collegial, write warmer." The model will do that if you ask, but the results were mediocre and I cut the feature.

Caveat

There is a real ethical line at the edge of this kind of tool. Tailoring is acceptable; fabricating is not. The whole point of the three rules above is to keep the tool on the right side of that line — but a user can absolutely override the rules and ask the model to lie. I'd rather you didn't.

Clone

git clone https://github.com/Waaangjl/resume-tailor
cd resume-tailor
pip install -r requirements.txt
python tailor.py --resume me.tex --jd job.txt --out me-tailored.tex

Repository: github.com/Waaangjl/resume-tailor ↗