Exclusive: Anthropic's 'Mythos' AI Model, a 'Step Change' in Power, Revealed in Data Leak
Anthropic's next flagship model has a name — and it leaked before the company was ready to announce it. Fortune broke the story after a researcher discovered a publicly accessible data cache containing roughly 3,000 unpublished Anthropic blog assets, including a draft post describing a model called Claude Mythos. The document characterized Mythos as Anthropic's "most capable model to date" and flagged something unusual: the company itself believes the model poses "unprecedented cybersecurity risks," specifically citing its dual-use capability to surface previously unknown vulnerabilities in production codebases. Mythos is already in limited early-access trials. Anthropic confirmed the leak to Fortune, attributing it to human error in its CMS configuration, and promptly locked down the data store.
The "step change" framing matters. Anthropic has been deliberate about not overpromising on incremental Sonnet and Opus refreshes — describing a new model as a generational leap, even in an accidentally public draft, signals that something meaningfully different is coming. For Claude Code users specifically, a more capable underlying model translates directly into better multi-file reasoning, fewer hallucinations in complex refactors, and stronger agentic decisions mid-session. The cybersecurity dual-use positioning also hints at a serious enterprise security product angle: vulnerability detection at scale in production codebases is a capability CISOs will pay for. A separate document in the same leaked cache detailed a planned invite-only CEO summit in Europe as part of Anthropic's enterprise sales push — suggesting the Mythos launch is being coordinated with a broader enterprise go-to-market move.
The data breach itself is a separate story worth watching. Three thousand unpublished assets sitting in an unsecured, publicly searchable store represents a serious operational failure for a company whose core enterprise pitch is trustworthy, safe AI. Two independent security researchers — one from LayerX Security and one from Cambridge — helped verify and assess the leaked material before Fortune published.