🧭 Claude API Crosses One Million Active Developers
Anthropic has announced that the Claude API now has more than one million active developers — individuals or organisations that have made at least one API call in the past 30 days. The milestone, shared in a brief developer blog post, marks a ten-fold increase from the approximately 100,000 active developers recorded at the start of 2025. The growth reflects the expanding availability of Claude across cloud platforms (AWS Bedrock, Google Vertex AI, Azure AI Foundry), as well as the broader rise in developer adoption of foundation model APIs as AI-powered features become a standard expectation in software products.
Developer community by segment
- Independent developers and hobbyists — approximately 620,000 active developers; the largest cohort by count, though smallest by token consumption
- Startups and small businesses — approximately 290,000; fastest-growing segment over the past six months
- Enterprises — approximately 90,000 individual developer accounts within enterprise contracts; highest token consumption per account
Anthropic notes that the one-million figure counts unique authenticated accounts, not API keys (some developers maintain multiple keys). The company frames the milestone as a recognition of its developer community and announces a developer appreciation programme offering additional free-tier credits to active developers.
API
milestones
developer community
retrospective
🧭 Enterprise Token Efficiency — What's Actually Working in Production
Anthropic has published a brief research note drawing on anonymised data from enterprise customers to document which token efficiency techniques have delivered the largest cost reductions in real production deployments. The study complements the token efficiency guide published on January 7 by grounding recommendations in observed production data rather than theoretical best-case scenarios.
Top techniques by measured impact
- Prompt caching (median 34% input cost reduction) — the highest-impact technique, most effective for applications where the same long system prompt is sent with every request; customers using Bedrock or the direct API see the largest gains
- Haiku tier routing (median 28% cost reduction) — routing classification, tagging, and short Q&A tasks from Sonnet to Haiku saves roughly half the per-token cost with minimal quality loss for well-defined tasks
- Batch API adoption (median 19% cost reduction) — customers who moved overnight processing jobs from real-time requests to the Batch API at 50% pricing realised this saving, typically after one to two days of engineering work
- System prompt trimming (median 12% input cost reduction) — reviewing and removing redundant instructions from system prompts accumulated over product iterations
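As a rough illustration of the first two techniques, the sketch below routes well-defined tasks to a Haiku-tier model and marks a long, stable system prompt for prompt caching. The task taxonomy, model names, and helper functions are illustrative assumptions, not from the research note; the `cache_control` field follows the Anthropic Messages API's prompt-caching request format.

```python
# Sketch of Haiku-tier routing plus a cache-marked system prompt.
# Task labels, model names, and both helpers are illustrative assumptions.

HAIKU_TASKS = {"classification", "tagging", "short_qa"}

def pick_model(task_type: str) -> str:
    """Route well-defined, low-complexity tasks to the cheaper Haiku tier;
    everything else stays on the Sonnet tier."""
    return "claude-haiku" if task_type in HAIKU_TASKS else "claude-sonnet"

def build_request(task_type: str, system_prompt: str, user_msg: str) -> dict:
    """Assemble a Messages API payload. The cache_control marker flags the
    long, unchanging system prompt so repeated requests reuse the cached
    prefix instead of paying full input-token price each time."""
    return {
        "model": pick_model(task_type),
        "system": [
            {
                "type": "text",
                "text": system_prompt,
                "cache_control": {"type": "ephemeral"},
            }
        ],
        "messages": [{"role": "user", "content": user_msg}],
        "max_tokens": 256,
    }

req = build_request(
    "tagging",
    "You label incoming support tickets with one category each.",
    "Ticket: app crashes on login",
)
```

Caching pays off precisely in the pattern the note describes: the same long system prompt sent with every request, so only the short user message varies between calls.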
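The Batch API saving above mostly amounts to packaging non-urgent jobs into a single asynchronous submission instead of issuing real-time requests. A minimal sketch of assembling such a batch follows; the job list, model name, and helper are illustrative, while the `custom_id`/`params` entry shape follows the Anthropic Message Batches API.

```python
# Sketch: collect overnight jobs into a Message Batches submission.
# Job data and the helper are illustrative; batched requests are billed
# at 50% of real-time pricing, per the research note.

def to_batch_requests(jobs: list[dict]) -> list[dict]:
    """Turn each job into one batch entry keyed by a stable custom_id,
    so results can be matched back to jobs when the batch completes."""
    return [
        {
            "custom_id": job["id"],
            "params": {
                "model": "claude-haiku",  # illustrative model name
                "max_tokens": 256,
                "messages": [{"role": "user", "content": job["prompt"]}],
            },
        }
        for job in jobs
    ]

jobs = [
    {"id": "job-1", "prompt": "Summarise this ticket thread."},
    {"id": "job-2", "prompt": "Classify this feedback as bug or feature."},
]
batch = to_batch_requests(jobs)
# The assembled list would then be submitted asynchronously, e.g. via
# client.messages.batches.create(requests=batch) in the Python SDK.
```

The one-to-two-day engineering effort the note cites is largely this kind of plumbing: queueing jobs, assigning stable IDs, and polling for batch completion rather than awaiting each response inline.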
token efficiency
enterprise
cost optimisation
analytics
retrospective