Anthropic Hands Claude Code More Control — But Keeps It on a Leash
TechCrunch has a solid breakdown of Anthropic's new auto mode for Claude Code, currently in research preview on the Team plan with Enterprise and API access rolling out in the coming days. The framing is apt: auto mode occupies the space between Claude's default conservative stance — asking permission for almost everything — and the blunt instrument of --dangerously-skip-permissions. An AI classifier sits in front of every tool call and decides in real time whether it's safe to proceed. Mass file deletion, sensitive data exfiltration, suspicious code execution, and prompt injection attempts all get flagged; everything else runs freely. When Claude keeps hitting blocks, it escalates to a human. When the path is clear, it keeps moving.
The competitive context matters here. Auto mode puts Anthropic directly in the ring with GitHub Copilot Workspace and OpenAI Codex on the question of how much autonomy to hand an AI coding agent. What's different is that Anthropic is being explicit about the safety architecture rather than hand-waving it. That said, TechCrunch notes the company hasn't published the classifier's exact criteria — a gap likely to frustrate developers, since knowing where the lines are is exactly what you need to build reliable agentic pipelines around the tool.