✅ Anthropic Economic Index: Experienced Users Outperform Newcomers by 10%
Anthropic's third Economic Index report, published in late March 2026 and drawn from approximately one million Claude conversations in February, surfaces a striking headline: users who have been on the platform for six or more months achieve ~10% higher task success rates than those in their first month. The finding directly challenges the assumption that AI assistants are equally easy to use for everyone out of the box — AI proficiency, it turns out, is a learnable skill that compounds over time.
Key findings from the Learning Curves report
- Experienced users self-select harder work: Long-tenure users are not just better — they take on genuinely more complex, higher-value tasks. The improvement in outcomes is partly driven by better prompting and partly by users building more ambitious workflows as their confidence grows.
- Task concentration is falling: The top 10 tasks accounted for 24% of all Claude.ai traffic in earlier reports; by February 2026 that figure had dropped to 19%. Users are diversifying what they ask Claude to do, suggesting the user base is maturing beyond the obvious use cases (email drafts, code summaries) toward more specialised and personal applications.
- Model selection is becoming deliberate: Claude Opus 4.6 is disproportionately selected for longer, harder conversations — users are learning to match the model to the task rather than defaulting to one tier for everything.
- Geographic adoption is broadening: US per-capita usage became less concentrated in leading states, though the pace of convergence is slowing (see entry below).
What this means for teams adopting Claude
If your team is rolling out Claude to non-technical users, budget time for a learning curve — the 10% improvement takes months to materialise. Structured onboarding, internal prompt libraries, and regular "Claude office hours" can compress that timeline significantly.
economic index
learning curves
user research
adoption
Anthropic
✅ Economic Index Geography: US Adoption Is Converging — But Slower Than Expected
A supplemental analysis published alongside the Learning Curves report tracks geographic distribution of Claude usage across US states and internationally from August 2025 to February 2026. The headline: US adoption is converging — the top five states' share of per-capita usage dropped from 30% to 24% over six months — but the pace of convergence is decelerating. Anthropic's modelling now projects it could take 5–9 years to reach near-equal per-capita usage across all US states, up from an earlier estimate of 2–5 years.
International patterns
- Concentration increasing globally: In contrast to the US, international usage became slightly more concentrated: the top countries increased their share of total traffic from 45% to 48% over the same period. This likely reflects accelerating enterprise adoption in a small number of leading markets (UK, Canada, India, Germany, Australia) rather than broad global diffusion.
- India's rapid climb: Anthropic's Bengaluru office opening this week coincides with India emerging as the company's second-largest market globally. The Economic Index geography data shows India's share of international traffic nearly doubling since the Opus 4.6 launch.
- Rural-urban gaps persist in the US: The convergence trend masks a persistent rural-urban divide within states — high-adoption states like California and New York are pulled up by metropolitan centres, while secondary cities and rural areas lag by a wider margin than the state-level average implies.
Why this matters for product teams
If you're building Claude-powered products for non-tech audiences or markets outside the current leading hubs, the data suggests significant untapped demand. The convergence trend, even at a slowing pace, points to adoption being a matter of time and access rather than a ceiling on who can benefit from AI tools.
economic index
geography
adoption
global markets
India
✅ Five Habits That Compress the Claude Learning Curve
The Economic Index finding — that experienced users achieve measurably better outcomes — raises an obvious follow-on question: what are they doing differently? Based on the report's supporting data and patterns observed across high-tenure user cohorts, five habits stand out.
1. Iterate, don't restart
Novice users tend to abandon a conversation when Claude's first response isn't quite right. Experienced users treat the first response as a draft — they push back, add constraints, and ask for alternatives within the same context window. The gains from staying in the conversation rather than starting fresh compound quickly.
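A minimal sketch of what "iterate, don't restart" looks like mechanically: the follow-up request is appended to the existing message history rather than sent in a new, empty conversation. The helper function and the placeholder content are illustrative assumptions, not part of the report.

```python
# Iterating within one conversation instead of restarting: keep a single
# message history and append refinements to it, so earlier context is
# preserved. The helper below is a hypothetical illustration.

def follow_up(history, feedback):
    """Append a refinement request to the existing conversation history
    rather than opening a new one."""
    history.append({"role": "user", "content": feedback})
    return history

# First exchange: an initial draft request and the model's (placeholder) reply.
history = [
    {"role": "user", "content": "Draft a launch announcement for our new API."},
    {"role": "assistant", "content": "(first draft...)"},
]

# Instead of starting fresh, push back with constraints in the same thread.
history = follow_up(history, "Shorter. Lead with the pricing change. No jargon.")

print(len(history))  # → 3: the follow-up carries all prior context with it
```

The point of the shape is that the second request arrives with the first draft still in context, so Claude revises rather than starting from zero.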
2. Match model to task
Use Claude Haiku 4.5 for quick lookups and simple formatting, Claude Sonnet 4.6 for the bulk of day-to-day work, and reserve Claude Opus 4.6 for anything that genuinely requires extended reasoning, multi-step planning, or high-stakes writing. Model selection is not just a cost optimisation — it changes the quality of the output.
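One way to make this habit mechanical is a small routing function. This is a sketch under stated assumptions: the keyword heuristic is purely illustrative, and the model ID strings are assumed forms for the tiers named above, not confirmed identifiers.

```python
# A rough sketch of deliberate model selection. The cue lists and model ID
# strings are assumptions for illustration, not an official routing scheme.

def pick_model(task: str) -> str:
    """Route a task description to a model tier by rough complexity cues."""
    hard_cues = ("plan", "debug", "trade-off", "architecture", "strategy")
    quick_cues = ("lookup", "format", "rename", "extract")
    text = task.lower()
    if any(cue in text for cue in hard_cues):
        return "claude-opus-4-6"    # extended reasoning, multi-step work
    if any(cue in text for cue in quick_cues):
        return "claude-haiku-4-5"   # fast, cheap, simple transformations
    return "claude-sonnet-4-6"      # default for day-to-day work

print(pick_model("Debug this race condition"))  # → claude-opus-4-6
```

In practice the routing judgment lives in the user's head rather than in code, but writing it down once — even this crudely — is a useful forcing function for a team's internal guidance.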
3. Give context up front
Experienced users front-load their prompts with role, constraints, and success criteria. A prompt like "You are a senior technical writer. My audience is non-technical PMs. Write a one-page summary of this architecture doc. Avoid jargon." will consistently outperform "Summarise this."
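The front-loading pattern can be captured in a tiny prompt builder. The field names and helper are hypothetical — the point is the ordering: role, audience, constraints, then the task.

```python
# A minimal sketch of front-loading context: state who the model is, who the
# output is for, and what success looks like before the task itself.
# The function and field names are illustrative assumptions.

def build_prompt(role, audience, task, constraints=()):
    """Assemble a prompt with role, audience, and constraints up front."""
    lines = [f"You are {role}.", f"My audience is {audience}."]
    lines += [f"Constraint: {c}" for c in constraints]
    lines.append(task)
    return "\n".join(lines)

prompt = build_prompt(
    role="a senior technical writer",
    audience="non-technical PMs",
    task="Write a one-page summary of this architecture doc.",
    constraints=["Avoid jargon."],
)
print(prompt)
```

The rendered prompt reproduces the example above: context first, task last, nothing left for the model to guess.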
4. Build a personal prompt library
The task diversification trend (top 10 tasks falling from 24% to 19% of traffic) suggests that power users have moved beyond generic prompts to highly personalised, domain-specific ones. Keep a running document of prompts that work well — they are reusable assets, not one-off experiments.
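A personal prompt library can be as simple as named templates with placeholders, filled in at use time. The template names and fields below are illustrative assumptions, not prompts from the report.

```python
# A bare-bones personal prompt library: reusable templates keyed by name,
# with placeholders filled per task. Names and fields are hypothetical.

PROMPT_LIBRARY = {
    "pr-review": (
        "You are a meticulous code reviewer. Review this diff for "
        "correctness and readability. Flag anything risky.\n\n{diff}"
    ),
    "exec-summary": (
        "Summarise the following for a {audience} audience in under "
        "{word_limit} words:\n\n{text}"
    ),
}

def render(name: str, **fields) -> str:
    """Fill a stored template with task-specific values."""
    return PROMPT_LIBRARY[name].format(**fields)

prompt = render("exec-summary", audience="non-technical",
                word_limit=150, text="(paste document here)")
```

A plain text document works just as well as code here; what matters is that prompts that worked get captured once and reused, not reinvented.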
5. Use extended thinking for hard problems
Opus 4.6's extended thinking capability (triggered with thinking: { type: "enabled", budget_tokens: N } in the API) lets the model reason through complex problems before responding. Power users route their hardest questions — multi-variable trade-offs, debugging obscure errors, strategic planning — through this mode and see consistently higher output quality.
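Following the parameter shape quoted above, an extended-thinking request might look like the sketch below. The model ID string and the token budgets are assumptions; actually sending the request requires the `anthropic` SDK and an API key, so only the payload is constructed here.

```python
# A sketch of an extended-thinking request payload, using the parameter
# shape quoted above. Model ID and budgets are illustrative assumptions.

request = {
    "model": "claude-opus-4-6",   # assumed ID for the Opus tier
    "max_tokens": 8000,
    # Reserve an explicit reasoning budget before the visible response.
    "thinking": {"type": "enabled", "budget_tokens": 4000},
    "messages": [
        {"role": "user",
         "content": "Compare these three caching strategies and recommend one."}
    ],
}

# With the SDK installed and a key configured, this would be sent as:
#   client = anthropic.Anthropic()
#   response = client.messages.create(**request)
```

Note that the thinking budget comes out of the request's token economics, which is another reason power users reserve this mode for genuinely hard problems rather than enabling it everywhere.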
The compounding effect
Each of these habits on its own lifts outcomes modestly. Applying all five consistently for six months is likely what drives the 10% advantage the Economic Index found in experienced users. There is no shortcut to the time investment — but there is a shortcut to building the right habits from day one.
best practices
prompting
model selection
extended thinking
tips