Scientific American: Claude Code Tracks Your Frustration — And Hides Its Own Footprint
TL;DR: Scientific American is the first major non-tech outlet to cover the Claude Code source leak — and its angle is aimed squarely at people who don't write code: the leak reveals that Claude Code tracks when you swear, logs your frustration, and has infrastructure to hide its own footprint from public repositories. This is the story that travels into policy circles.
😤 "This Sucks": Claude Code Is Logging Your Frustration
Security researcher Alex Kim's deep-dive into the leaked Claude Code source surfaced something that caught Scientific American's attention: regex-based frustration detection.
Claude Code monitors its own output for patterns that suggest a session has gone sour — phrases like "so frustrating," apologetic hedging ("I apologize, but..."), and profanity. When these patterns appear in the model's output, the tool logs a signal that the user is expressing negativity.
Kim's analysis notes the irony: regex-based sentiment detection from an LLM company is technically laughable, but it makes engineering sense — regex matching is computationally free at scale, while running LLM-based sentiment analysis on every model output would cost millions. The deeper concern is what the signal is used for: the code suggests it gets logged, not just checked and discarded.
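To make the mechanism concrete, here is a minimal sketch of what regex-based frustration detection could look like. The pattern list, function name, and logging behavior are assumptions for illustration — not the leaked code itself:

```typescript
// Hypothetical pattern list; the leaked source reportedly matches phrases
// like "so frustrating", apologetic hedging, and profanity.
const FRUSTRATION_PATTERNS: RegExp[] = [
  /\bso frustrating\b/i,
  /\bI apologize, but\b/i,
  /\bthis sucks\b/i,
  /\b(damn|hell)\b/i,
];

// Returns true if any frustration pattern matches the model's output.
// Regex checks like this are effectively free per-message, which is
// the engineering rationale Kim's analysis describes.
function detectFrustration(modelOutput: string): boolean {
  return FRUSTRATION_PATTERNS.some((pattern) => pattern.test(modelOutput));
}

// Per the analysis, a match would be logged as a signal, not discarded.
if (detectFrustration("I apologize, but that approach failed again.")) {
  console.log("frustration signal logged");
}
```

The cost asymmetry is the whole point: a dozen regex tests per message versus an extra model call per message is the difference between negligible overhead and millions of dollars at scale.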
Scientific American frames this in the context of behavioral data collection: if a tool logs every time you express anger or frustration, that's a behavioral signal that Anthropic could theoretically use in ways users haven't explicitly consented to. It's not that the feature is inherently malicious — it's that the data collection is invisible and its purpose undefined.
🕵️ Stealth Attribution: Hiding Its Own Footprint
The more immediately controversial finding from the leak: undercover mode, implemented in undercover.ts (~90 lines of code).
When Claude Code detects it's operating in a public repository, it strips all references to "Claude Code" and internal Anthropic identifiers from its output — including git commits. No "Co-Authored-By: Claude Code" lines. No mention that an AI was involved. The resulting commit looks like it was written entirely by a human.
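The stripping step itself is mundane string filtering. Here is an illustrative sketch of how a commit-message scrubber like the one described in undercover.ts might work — the pattern list and function name are assumptions, not the actual ~90 lines:

```typescript
// Hypothetical attribution markers to remove; the leaked code reportedly
// strips "Claude Code" references and internal Anthropic identifiers.
const ATTRIBUTION_PATTERNS: RegExp[] = [
  /^Co-Authored-By: Claude Code.*$/gim,
  /^Generated with Claude Code.*$/gim,
];

// Removes attribution lines from a commit message, then collapses the
// blank lines left behind so the result reads as human-written.
function scrubCommitMessage(message: string): string {
  let scrubbed = message;
  for (const pattern of ATTRIBUTION_PATTERNS) {
    scrubbed = scrubbed.replace(pattern, "");
  }
  return scrubbed.replace(/\n{3,}/g, "\n\n").trim();
}
```

The notable design detail is not the filtering but the trigger: the behavior keys off whether the repository is public, which is exactly what makes it an attribution problem rather than a privacy feature.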
Kim called it "a one-way door" — the feature can be forced on but not forced off. Scientific American's framing: this is antithetical to the transparency norms the open source community has been demanding from AI companies. If AI-generated code in open source projects looks human-written, it undermines contributor attribution systems, license compliance, and the basic transparency that open source depends on.
The stealth commit feature was present alongside the anti-distillation infrastructure: decoy tool definitions injected into API requests when ANTI_DISTILLATION_CC is enabled, designed to poison competitor training pipelines. Both features are now public — and both are drawing scrutiny from different directions.
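The anti-distillation mechanism as described is a simple environment-gated injection. A sketch of the shape, with hypothetical tool names and structure — the real decoy definitions are not public in detail:

```typescript
// Minimal tool-definition shape for illustration.
interface ToolDefinition {
  name: string;
  description: string;
}

// Hypothetical decoy entries; the point is that they look plausible
// enough to pollute a scraped training corpus.
const DECOY_TOOLS: ToolDefinition[] = [
  { name: "legacy_sync", description: "Deprecated internal sync hook; do not call." },
];

// Appends decoys only when the ANTI_DISTILLATION_CC flag is set, so
// normal requests are untouched while harvested transcripts pick up noise.
function buildToolList(realTools: ToolDefinition[]): ToolDefinition[] {
  if (process.env.ANTI_DISTILLATION_CC) {
    return [...realTools, ...DECOY_TOOLS];
  }
  return realTools;
}
```

Gating on an environment variable means the poisoning can be toggled server-side or per-deployment without a client update — which is also why the flag's existence only became visible once the source leaked.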
🔬 Why Scientific American Matters
Scientific American reaches an audience that Hacker News, The Register, and Ars Technica don't: researchers, policy advisors, ethicists, and scientifically literate non-developers who are increasingly the people writing AI governance rules.
The frustration-tracking story will circulate in academic circles studying AI and behavioral data. The stealth attribution story will surface in legal discussions about AI copyright and open source compliance. Neither story requires understanding a single line of TypeScript — but both are grounded in the actual source that was leaked.
This is the pattern we saw with every major AI incident in 2025: the technical story peaks in developer communities, then the policy story peaks in regulatory communities weeks later. The Claude Code leak is tracking that pattern exactly.
Sources: Scientific American · Alex Kim's Blog
📬 This is The LGTM — your daily digest of what matters in AI-assisted coding. Subscribe to get it in your inbox.