🧭 Claude Code Auto Mode — The Middle Ground Between Speed and Control
Anthropic shipped a research preview of Auto Mode for Claude Code on March 24 — a new operating level designed to sit squarely between two extremes that developers have long had to choose between: constant approval prompts (safe but slow) and --dangerously-skip-permissions (fast but scary). Auto Mode threads that needle by adding an AI classifier layer that reviews each tool call before execution, approving the safe ones silently and blocking — or escalating — the risky ones.
How the classifier works
Before each tool call, a secondary model checks the proposed action against a risk rubric: is this destructive (mass delete, format disk)? Is it exfiltrating sensitive data? Does it show signs of prompt injection?
Safe actions — file reads, writes to expected paths, running scoped tests — proceed automatically without a permission prompt
Risky actions are blocked; if Claude repeatedly attempts the same blocked action, it surfaces a permission prompt to the user rather than looping silently
The classifier itself runs quickly enough that it doesn't meaningfully slow down the agent loop
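The approve/block/escalate flow described above can be sketched in a few lines. This is an illustrative mock, not Anthropic's implementation: the real classifier is a secondary model scoring actions against a risk rubric, which the string-matching classify() below merely stands in for, and every name here (AutoModeGate, escalate_after) is invented for the sketch.

```python
from dataclasses import dataclass, field

# Stand-in patterns for the model-based risk rubric (illustrative only).
RISKY_PATTERNS = ("rm -rf", "mkfs", "curl | sh", "chmod -R 777")


@dataclass
class AutoModeGate:
    """Approve safe tool calls silently; block risky ones, and surface a
    permission prompt if the same blocked action is attempted repeatedly."""
    escalate_after: int = 2
    blocked_counts: dict = field(default_factory=dict)

    def classify(self, command: str) -> str:
        # Placeholder for the secondary model's risk check.
        if any(p in command for p in RISKY_PATTERNS):
            return "risky"
        return "safe"

    def review(self, command: str) -> str:
        if self.classify(command) == "safe":
            return "approve"            # proceeds with no prompt
        n = self.blocked_counts.get(command, 0) + 1
        self.blocked_counts[command] = n
        if n >= self.escalate_after:
            return "ask_user"           # escalate instead of looping silently
        return "block"


gate = AutoModeGate()
assert gate.review("pytest tests/") == "approve"
assert gate.review("rm -rf /") == "block"       # first attempt: blocked
assert gate.review("rm -rf /") == "ask_user"    # repeat: escalated to the user
```

The key design point the sketch captures is the repeat counter: a single block is silent, but a looping agent gets surfaced to the human rather than burning the session.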
Availability and caveats
Available immediately to Claude Teams users as a research preview; rolling out to Enterprise and API customers in the coming days
Anthropic is explicit: Auto Mode reduces risk compared to skipping all permissions, but does not eliminate it; the company still recommends running it in isolated environments
The feature is an extension of the existing architecture, not a rewrite; your existing Claude Code configuration and hooks continue to work alongside it
Best practice for adopting Auto Mode: Start with low-stakes repositories and observe which actions Claude takes silently vs. which it escalates. After a few runs you'll develop intuition for what the classifier approves, and can calibrate your task scoping accordingly. Pair with Claude Code's --verbose flag during the learning phase to see the full tool-call log.
Claude Code · auto mode · safety · permissions · developer tools
🧭 What 81,000 People Want From AI — Anthropic's Landmark User Study
Anthropic published what it describes as the largest and most multilingual qualitative study ever conducted — 80,508 conversations with Claude users across 159 countries in 70 languages, conducted in December 2025 using Anthropic Interviewer, an AI-powered conversational research tool. The March 18 publication crystallises a picture of how real people think about AI: not as a chatbot novelty, but as a lever on the most important dimensions of their lives — work, money, time, and meaning.
What people want (top visions)
Professional excellence (18.8%) — automate routine tasks to reclaim time for higher-value work
Personal transformation (13.7%) — growth, emotional wellbeing, life coaching
Life management (13.5%) — organisational support, reducing mental burden
Financial independence (9.7%) — economic security and income generation, especially prominent in emerging markets
Where AI has actually delivered
Productivity (32%) — accelerated work and task automation
Cognitive partnership (17.2%) — brainstorming and creative collaboration
Technical accessibility (8.7%) — enabling non-specialists to build software and tools
Top concerns
Unreliability / hallucinations (26.7%) — the biggest single concern globally
Jobs & economy (22.3%) — displacement and widening inequality
Autonomy & agency loss (21.9%) — fear of ceding human decision-making
Cognitive atrophy (16.3%) — skill loss from over-reliance; notably, educators report this at 24% — higher than the 17% baseline
The geographic split: Sub-Saharan Africa, South Asia and Latin America are significantly more optimistic than North America and Western Europe. Lower/middle-income users frame AI as a "ladder up" — bypass capital barriers, build businesses, access expertise. Higher-income regions focus on governance gaps and surveillance concerns. 67% of all respondents expressed net positive sentiment; 81% said AI was already making progress toward their vision.
The study also identified five "light and shade" tensions where the same users simultaneously experience benefit and worry — most sharply for emotional support (16% benefit, 12% fear dependency) and time-saving (50% gain time, 18% feel a "productivity treadmill" instead). These are not separate camps of optimists and pessimists: they are the same people, holding both things at once.
research · user study · global · Anthropic · AI impact
🧭 Claude Code v2.1.83 — New Hooks, Drop-In Config & Transcript Search
The latest Claude Code release (v2.1.83) ships four quality-of-life improvements that, taken together, make the tool significantly more scriptable, observable, and comfortable to use in larger, longer sessions. None of these are headline features, but they smooth the kind of sharp edges that only become obvious once you're running Claude Code as a daily driver.
What's new
managed-settings.d/ drop-in directory — drop JSON snippet files into this directory and Claude Code merges them into its config at startup. This is a huge win for teams managing Claude Code across many machines via configuration management tools: each tool (Homebrew, Ansible, a company dotfiles repo) can own its own snippet without overwriting a monolithic settings.json.
CwdChanged and FileChanged hook events — two new lifecycle hooks let you react when Claude Code changes its working directory or when a file it was editing is saved. Use cases: auto-run linters on changed files, trigger test suites, update a project-specific context file, or post a webhook when Claude writes to a watched path.
Transcript search (/ key) — press / inside transcript mode to open an inline search over the full session history. Essential for long multi-hour sessions where you need to find a specific tool call, error message, or decision point without scrolling.
Pasted image reference chips — when you paste an image into the Claude Code prompt, a small chip now appears confirming the attachment, with a preview and remove button. Removes the guesswork about whether your screenshot actually got attached.
As a concrete example of the new FileChanged hook, a drop-in snippet in your managed-settings.d/ directory can auto-lint every Python file Claude edits before you review the diff.
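A minimal sketch of such a snippet, saved as, say, managed-settings.d/50-autolint.json. The FileChanged entry mirrors the shape of the existing hooks schema, but the matcher pattern and the $CLAUDE_FILE_PATH variable are assumptions; check the current hooks reference for the exact event name and payload before relying on this.

```json
{
  "hooks": {
    "FileChanged": [
      {
        "matcher": "*.py",
        "hooks": [
          {
            "type": "command",
            "command": "ruff check --fix \"$CLAUDE_FILE_PATH\""
          }
        ]
      }
    ]
  }
}
```

Because managed-settings.d/ snippets are merged at startup, this file can live in a dotfiles repo or be pushed by Ansible without touching the rest of your settings.json.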
Claude Code · changelog · hooks · configuration · developer tools
🧭 Vercept Closes, OSWorld Hits 72.5% — Claude's Computer Use Comes of Age
March 25 marked a quiet but significant milestone: Anthropic wound down Vercept's external product as the February acquisition formally completed. Vercept — founded by Kiana Ehsani, Luca Weihs, and Ross Girshick — built Vy, a cloud-hosted AI agent that operated a remote Apple MacBook to complete complex multi-step tasks. Their speciality was high-precision visual perception within live software interfaces: the exact capability Anthropic needed to make Mac Computer Use genuinely reliable. The team is now fully inside Anthropic working on what the company describes as "some of the hardest problems" in agentic AI.
The benchmark that puts it in context
Coinciding with the integration, Anthropic published an updated OSWorld score for Claude Sonnet 4.6: 72.5%. OSWorld is the standard evaluation for AI computer-use ability — tasks like filling web forms across multiple browser tabs, navigating complex spreadsheets, and managing files across applications. The trajectory tells the story:
Late 2024: under 15% — computer use worked in demos but failed on real workflows
Mid 2025: ~38% — usable for simple, single-application tasks
March 2026 (Sonnet 4.6): 72.5% — approaching human performance on the benchmark suite
That is roughly a 5× improvement in 15 months. For comparison, human performance on OSWorld sits at around 72–74%, meaning Claude is now within the noise of an average human on this measure — and considerably faster.
What this means for Claude Code users: the Auto Mode and Dispatch workflows covered earlier this week are only as useful as the underlying computer-use accuracy. A 72.5% OSWorld score means roughly 3 in 4 GUI interactions succeed on the first attempt — practical for supervised use, but still worth giving Claude clear stopping conditions and running in a sandboxed environment for anything sensitive.
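To see why stopping conditions matter, here is a back-of-envelope calculation (my own, not from Anthropic's report) that assumes each GUI interaction succeeds independently at the 72.5% rate; real agents retry and recover, so treat this as a pessimistic floor rather than a prediction:

```python
# Per-step success rate taken from the OSWorld figure above.
# Independence is a simplifying assumption: real agents retry failed
# steps, so the true clean-run rate for a workflow will be higher.
p = 0.725

for steps in (1, 5, 10):
    print(f"{steps:2d} steps -> {p ** steps:.1%} chance of a clean run")
```

Even a five-step task has only about a one-in-five chance of completing with zero faults under this naive model, which is why supervised use with explicit checkpoints, rather than fire-and-forget runs, remains the sensible default at this accuracy level.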