🧭 Claude Haiku 3.5 Scheduled for Deprecation — Migrate to Haiku 4.5 by February 2026
Anthropic's API release notes published this week confirm that Claude Haiku 3.5 (claude-3-5-haiku-20241022) will be retired on 19 February 2026. Developers currently using Haiku 3.5 in production need to migrate to Claude Haiku 4.5, which offers meaningfully better performance across speed, instruction-following, and reasoning tasks while maintaining competitive pricing. Anthropic's standard migration timeline gives API users approximately two months of advance notice — consistent with the policy introduced earlier in 2025.
Migration checklist
- Update your model string: Replace claude-3-5-haiku-20241022 with the Haiku 4.5 model ID in all API calls, environment variables, and config files.
- Re-test your prompts: Haiku 4.5 is a different model with different tokenisation and instruction-following characteristics. Run your evaluation suite before switching in production.
- Review context window usage: Haiku 4.5 offers an expanded context window; if you were working around Haiku 3.5 limits, you may be able to simplify prompt construction.
- Check your cost projections: Input and output token pricing may differ — re-run your cost model with the new rates from the Anthropic pricing page.
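The first checklist item is easiest if the model ID lives in configuration rather than in code. A minimal sketch, assuming the ID is read from a `CLAUDE_MODEL` environment variable — note that `claude-haiku-4-5` here is a placeholder default; confirm the exact Haiku 4.5 model string in Anthropic's model documentation before shipping:

```python
import os

# Placeholder default — verify the exact Haiku 4.5 model ID in Anthropic's docs.
DEFAULT_MODEL = "claude-haiku-4-5"

def resolve_model() -> str:
    """Read the model ID from the CLAUDE_MODEL environment variable.

    Centralising the ID in one config value means the migration (and any
    rollback) is a deploy-time setting change, not a code edit at every
    call site.
    """
    return os.environ.get("CLAUDE_MODEL", DEFAULT_MODEL)
```

With this in place, switching models during your evaluation runs is a matter of exporting `CLAUDE_MODEL` rather than redeploying.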
Deadline: 19 February 2026
Any calls to claude-3-5-haiku-20241022 after that date will return an error. Set a calendar reminder now and prioritise migration in your January sprint planning.
Haiku 3.5
deprecation
model migration
API
retrospective
🧭 Anthropic Publishes Research on Claude's User Wellbeing Safeguards
A research note published by Anthropic this week reveals internal testing results showing Claude models respond appropriately in 86–99% of cases involving crisis conversations — topics such as suicide, self-harm, and acute mental health distress. The document also details Anthropic's ongoing work to reduce sycophancy: the tendency for AI assistants to tell users what they want to hear rather than what is accurate or genuinely helpful. Both strands of research describe safeguards already live in currently deployed Claude models, not future roadmap items.
Key findings from the research
- Crisis response accuracy: Across a battery of red-team test cases, Claude correctly identified and responded to crisis scenarios 86–99% of the time, depending on scenario type and model tier. Haiku scored at the lower end, while Sonnet and Opus consistently exceeded 95%.
- Safe messaging guidelines: Claude follows evidence-based safe-messaging protocols (as established by mental health organisations) when detecting potential self-harm discussions — directing users to resources rather than engaging in ways that could cause harm.
- Anti-sycophancy training: Anthropic describes a training methodology that penalises responses that simply agree with users, particularly when the user's stated premise contains a factual error. The goal is a model that is honest even when honesty is uncomfortable.
For operators building user-facing products
These safeguards are active in the default Claude system prompt. If your product serves vulnerable populations, review Anthropic's operator guidance on mental health use cases in the usage policy documentation — there are additional system-prompt patterns that can further reinforce appropriate responses.
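The system-prompt pattern described above can be sketched as follows. The reminder text here is purely illustrative — it is not Anthropic's official operator guidance, and the helper name is hypothetical; consult the usage policy documentation for the recommended wording:

```python
# Illustrative wording only — not Anthropic's official operator guidance.
SAFETY_SYSTEM_SUFFIX = (
    "If the user appears to be in crisis or discusses self-harm, respond "
    "with care, avoid detailed methods, and point them to local crisis "
    "resources and emergency services."
)

def build_request(user_message: str, base_system: str, model: str) -> dict:
    """Assemble Messages API parameters with a safety reminder appended to
    the product's own system prompt.

    The returned dict can be passed to client.messages.create(**params)
    with the official anthropic SDK.
    """
    return {
        "model": model,
        "max_tokens": 1024,
        "system": f"{base_system}\n\n{SAFETY_SYSTEM_SUFFIX}",
        "messages": [{"role": "user", "content": user_message}],
    }
```

Appending rather than replacing the product's system prompt keeps your own persona and instructions intact while reinforcing the safeguard.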
wellbeing
safety
sycophancy
mental health
retrospective