Anthropic Hands Claude Code More Control, but Keeps It on a Leash

Anthropic today shipped "auto mode" for Claude Code in research preview — and it's the most consequential permission change since the tool launched. Until now, developers faced an uncomfortable binary: babysit Claude through every tool call, or flip --dangerously-skip-permissions and hope for the best. Auto mode finally splits that difference. An AI classifier reviews each proposed action before it runs, automatically greenlighting safe operations while blocking anything that looks risky or that might have been injected by a compromised prompt chain.
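The gating pattern described above can be sketched in a few lines. This is purely illustrative: Anthropic's auto mode uses an AI classifier, while the toy policy below is a rule-based stand-in, and every name here (`ToolCall`, `classify`, `run_gated`, the token list) is hypothetical rather than part of Claude Code's actual API.

```python
# Illustrative sketch of an auto-mode-style permission gate.
# A toy rule-based policy stands in for the real AI classifier;
# all names and rules here are hypothetical.
from dataclasses import dataclass
from enum import Enum


class Verdict(Enum):
    ALLOW = "allow"
    BLOCK = "block"


@dataclass
class ToolCall:
    tool: str       # e.g. "read_file" or "bash"
    argument: str   # file path or shell command


# Stand-in policy: permit read-only operations, block commands that
# delete files, escalate privileges, or reach the network.
RISKY_TOKENS = ("rm ", "sudo ", "curl ", "wget ", "chmod ")


def classify(call: ToolCall) -> Verdict:
    """Decide whether a proposed tool call may run unattended."""
    if call.tool == "read_file":
        return Verdict.ALLOW
    if call.tool == "bash" and not any(t in call.argument for t in RISKY_TOKENS):
        return Verdict.ALLOW
    return Verdict.BLOCK


def run_gated(call: ToolCall) -> str:
    """Route every action through the classifier before execution."""
    if classify(call) is Verdict.BLOCK:
        return f"blocked: {call.tool}({call.argument!r})"
    return f"executed: {call.tool}({call.argument!r})"
```

The design point is the seam: the classifier sits between the model's proposed action and the filesystem, so "unattended" never means "unreviewed."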

The rollout is deliberately cautious: auto mode currently works only with Claude Sonnet 4.6 and Opus 4.6, and Anthropic strongly recommends running it inside sandboxed or containerized environments rather than directly on a production machine. Enterprise and API users will get access within days, with broader availability to follow. For teams already running Claude Code in CI pipelines or overnight agentic workflows, this is the feature they've been waiting for — a native safety layer that lets Claude run truly unattended without requiring humans to choose between oversight and autonomy.

The timing matters too. With Claude Code crossing $2.5B ARR and autonomous agents moving from demos to production deployments, the industry's biggest open question has been trust: can you actually leave Claude Code running unsupervised without something going sideways? Auto mode is Anthropic's answer — not a philosophical commitment, but a shipped classifier sitting between Claude's intent and your filesystem.

Read the full article at TechCrunch →