What's Going On with Claude Code? A Documented Investigation into the Degradation
Something has been wrong with Claude Code, and for the past two weeks a growing number of developers have been documenting exactly what is failing. Yan Gao at alphaguru.ai has published the most thorough public investigation yet: a cluster of reproducible failure modes including Claude claiming edits are complete when they aren't, ignoring explicit instructions in CLAUDE.md, falling into 20-plus retry loops instead of reasoning through a problem, and executing destructive bulk replacements without warning. These aren't isolated complaints: they're corroborated across GitHub issues, Reddit threads, and Hacker News comment sections, with timestamps that consistently point to the same two-week window.
The investigation traces the failures to Anthropic's own infrastructure. The company's status page logged seven or more major incidents during March alone, and there are signs of inconsistent routing across serving pools, meaning that whether a request lands on a stable or unstable model instance has been partly a matter of luck. The boundary between the Opus and Sonnet routing tiers appears to be a particular point of instability, with some requests silently downgraded in ways that degrade output quality without surfacing any error to the user.
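One practical consequence of that routing behavior: because an API response echoes the id of the model that actually served the request (the Anthropic Messages API returns a `model` field on responses), a client can at least compare served against requested. Below is a minimal sketch of such a check; the tier ranking and model ids are illustrative assumptions, not Anthropic's documented routing rules.

```python
# Hypothetical client-side check for silent model downgrades.
# Assumption: the API response includes the id of the model that actually
# served the request. The tier ranking below is an illustrative mapping of
# model-family names to a coarse capability order, not an official one.

TIER_RANK = {"opus": 3, "sonnet": 2, "haiku": 1}

def tier_of(model_id: str) -> int:
    """Extract a coarse capability tier from an id like 'claude-3-opus-...'."""
    for family, rank in TIER_RANK.items():
        if family in model_id:
            return rank
    return 0  # unknown model family

def is_silent_downgrade(requested: str, served: str) -> bool:
    """True if the model that served the request is a lower tier than requested."""
    return tier_of(served) < tier_of(requested)

# Example: an Opus-tier request answered by a Sonnet-tier instance
print(is_silent_downgrade("claude-3-opus-20240229",
                          "claude-3-5-sonnet-20240620"))  # True
```

A check like this can't restore degraded output, but logging it per request would make the "matter of luck" visible instead of silent.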
For anyone who has noticed Claude Code feeling less reliable lately, this piece is essential context. It names the specific incident dates, explains the infrastructure dynamics behind them, and cuts through the frustration with a clear-eyed diagnosis. Understanding what's been happening — and why — is the first step toward knowing when to trust your agent and when to reach for a workaround.