💡 Using Claude for Year-End Code Reviews — Prompts That Work
The holiday slowdown is the best time to tackle the code-review backlog that accumulated all year. Claude's large context window (200,000 tokens on current models) makes it practical to feed in entire modules, pull-request diffs, or even whole microservices for a thorough audit in a single conversation. The key is structuring your prompts to separate what to look for from how to report it — giving Claude a clear rubric produces far more actionable output than a generic "review this code" request.
Prompt patterns that work
- Role + rubric pattern: "You are a senior TypeScript engineer conducting a year-end security review. For each function, flag: (1) any input not validated at the boundary, (2) async error paths that swallow exceptions, (3) hardcoded secrets or config values. Output as a markdown table with columns: Location | Severity | Description | Fix."
- Diff-first pattern: Paste the git diff, then ask Claude to identify regressions, style drift, or test-coverage gaps relative to the code that was removed.
- Documentation audit: Feed in a module plus its README and ask Claude to list every public function that is either undocumented or whose documentation no longer matches the implementation.
- Dependency triage: Paste the output of `npm outdated` or `pip list --outdated` and ask Claude to summarise which updates are safe to apply now vs. which require breaking-change review.
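The role + rubric pattern above can be packaged as a small prompt builder so the same rubric is applied consistently across reviews. This is a minimal sketch; the function name, rubric list, and structure are illustrative, not part of any SDK.

```python
# Sketch of the role + rubric pattern as a reusable prompt builder.
# The rubric items and table columns mirror the example above.

RUBRIC = [
    "any input not validated at the boundary",
    "async error paths that swallow exceptions",
    "hardcoded secrets or config values",
]

def build_review_prompt(code: str, language: str = "TypeScript") -> str:
    """Assemble a role + rubric review prompt for a single code payload."""
    checks = "\n".join(f"({i}) {item}" for i, item in enumerate(RUBRIC, 1))
    return (
        f"You are a senior {language} engineer conducting a year-end security review.\n"
        f"For each function, flag:\n{checks}\n"
        "Output as a markdown table with columns: Location | Severity | Description | Fix.\n\n"
        f"Code under review:\n```{language.lower()}\n{code}\n```"
    )

prompt = build_review_prompt("function login(user) { /* ... */ }")
```

Send the resulting string as the user message; because the rubric lives in one place, tightening a check later updates every future review.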
Context window tip
For very large codebases, use Claude's system prompt to pre-load the architecture description and tech stack. This anchors the review to your project's conventions and reduces hallucinated suggestions about patterns you don't use.
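Pre-loading context works through the Messages API's `system` parameter. A sketch under stated assumptions: the architecture summary below is a hypothetical example, and the API call is shown commented out since it requires an API key.

```python
# Hypothetical architecture summary — substitute your own stack and conventions.
SYSTEM_PROMPT = """\
You are reviewing code for a Python 3.12 monorepo.
Architecture: FastAPI services behind an nginx gateway, PostgreSQL via SQLAlchemy 2.0.
Conventions: no raw SQL, structured logging with structlog, pytest for all tests.
Flag only deviations from these conventions; do not suggest frameworks we don't use."""

def review_kwargs(code: str) -> dict:
    """Build the keyword arguments for client.messages.create()."""
    return {
        "model": "claude-sonnet-4-6",
        "max_tokens": 4096,
        "system": SYSTEM_PROMPT,  # anchors every turn to your conventions
        "messages": [{"role": "user", "content": f"Review this module:\n\n{code}"}],
    }

# import anthropic
# client = anthropic.Anthropic()
# response = client.messages.create(**review_kwargs(open("billing.py").read()))
```

Because the system prompt persists across the conversation, follow-up questions inherit the same architectural grounding without re-pasting it.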
Tags: code review · prompting · developer tips · year-end · retrospective
💡 Extended Thinking — Your Secret Weapon for Year-End Planning Documents
If you haven't yet put Claude's extended thinking capability to work on a complex planning document, the quiet between Christmas and New Year is the perfect moment. Extended thinking gives Claude a private reasoning scratchpad before it writes its final answer — the model explores alternatives, checks its own assumptions, and catches contradictions that standard responses would miss. For producing annual retrospectives, risk registers, or 2026 technical roadmaps, the quality uplift is substantial.
When to use extended thinking
- Multi-constraint planning: "Given these 12 engineering initiatives, rank them by expected ROI while keeping total team capacity under 80% and maintaining our Q1 launch commitment." Extended thinking handles the trade-off matrix better than chain-of-thought prompting.
- Risk analysis: Feed in an incident log and ask for a root-cause taxonomy — extended thinking finds clusters and second-order causes that a non-thinking response tends to flatten.
- Gap analysis: Compare a 2025 OKR document against your actual output list. Extended thinking spots discrepancies that require contextual reading rather than keyword matching.
```python
import anthropic

client = anthropic.Anthropic()

response = client.messages.create(
    model="claude-sonnet-4-6",
    max_tokens=16000,
    thinking={
        "type": "enabled",
        "budget_tokens": 10000,  # reasoning budget; must be less than max_tokens
    },
    messages=[{
        "role": "user",
        "content": "Review our 2025 engineering OKRs against the delivery log below and produce a gap analysis...",
    }],
)
```
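With thinking enabled, the response contains both thinking blocks and the final text. A sketch of separating the two — the dicts below are a simplified stand-in for the SDK's response blocks (real blocks are objects with a `.type` attribute), used here only to illustrate the shape.

```python
# Separate reasoning from the final answer in a list of content blocks.
# The dict shape below is an illustration of the response structure, not
# the SDK's actual object types.

def split_blocks(blocks: list[dict]) -> tuple[str, str]:
    """Return (reasoning, answer) text from a list of content blocks."""
    reasoning = "".join(b["thinking"] for b in blocks if b["type"] == "thinking")
    answer = "".join(b["text"] for b in blocks if b["type"] == "text")
    return reasoning, answer

sample = [
    {"type": "thinking", "thinking": "Compare each OKR target to the delivery log..."},
    {"type": "text", "text": "## Gap analysis\n1. OKR 3 missed: ..."},
]
reasoning, answer = split_blocks(sample)
```

Logging the reasoning separately from the answer is handy for retrospectives: you can audit how Claude weighed the trade-offs without cluttering the final document.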
Tags: extended thinking · planning · year-end · developer tips · retrospective