🧭 ChatGPT Uninstalls Up 295% as Claude Gains Ground in Consumer Market
Mobile app analytics platform data.ai has published February 2026 figures showing ChatGPT uninstall rates up approximately 295% month-over-month in the US and UK markets, while Claude app installs are up roughly 180% over the same period. The data, cited by TechCrunch, captures only the mobile app layer of a much larger shift — a significant share of both platforms' usage occurs via web browser — but analysts describe the direction of movement as unambiguous.
The spike is attributed to a combination of factors: user frustration with a series of high-profile ChatGPT reliability incidents in late January and early February; Anthropic's Super Bowl advertising campaign, which introduced Claude to a large new consumer audience; and the launch of Cowork, which gave existing Claude users a compelling reason to consolidate their AI tooling on a single platform. The data does not indicate whether users who uninstalled ChatGPT are replacing it with Claude or with other alternatives.
Context: Mobile app install and uninstall data captures consumer behaviour but understates enterprise usage, where both platforms operate primarily via API and web interfaces. These figures reflect consumer-segment trends and should not be extrapolated directly to overall platform health.
Tags: market share, consumer, Claude.ai, competitive landscape, retrospective
🧭 The Safety-First Debate — Advantage or Constraint?
As February closes with Anthropic posting what analysts describe as its strongest month yet on both product and safety dimensions, MIT Technology Review has published an analysis of whether the company's safety-first posture is proving to be a competitive advantage or a binding constraint. The piece draws on interviews with enterprise customers, AI researchers, and policy experts to present multiple perspectives on the question.
The case for advantage: enterprise procurement teams increasingly cite Anthropic's RSP, model cards, and Transparency Hub as differentiating factors in competitive evaluations, a narrative reinforced this week by the February 2026 Risk Report and the publication of RSP v3.0. The case for constraint: the ARARA evaluation gate and the new Deployment Review Board mean future model releases will reach the market more slowly than those from labs with less process overhead, opening a potential capability-gap window. Analysts quoted in the piece are divided. The emerging consensus is that the answer depends on whether AI adoption continues to be driven primarily by performance benchmarks or by enterprise trust and governance requirements, and that February's events suggest the latter is gaining weight.
Tags: safety, strategy, enterprise, AI governance, retrospective