Most AI labs race to ship their best model to the widest possible audience. Anthropic just did the opposite — and it might be the most important product decision anyone has made in AI this year.
On April 7, Anthropic announced Claude Mythos Preview, its most powerful model ever built. Then it said something the industry almost never hears: we are not making this available to the public.
What Mythos Actually Is
Mythos is not an incremental improvement. It is a generational leap.
On SWE-bench Verified, it scores 93.9% compared to 80.8% for Opus 4.6. On CyberGym, the cybersecurity benchmark, it hits 83.1% versus 66.6% for Opus 4.6. On Terminal-Bench 2.0, it reaches 82.0% against 65.4%. These are not marketing numbers — they represent a model that can autonomously find, chain, and exploit software vulnerabilities at a level that surpasses all but the most elite human security researchers.
In just a few weeks of testing, Mythos found thousands of zero-day vulnerabilities across every major operating system and every major web browser. It discovered a 27-year-old bug in OpenBSD — one of the most security-hardened operating systems on the planet. It found a 16-year-old flaw in FFmpeg in a line of code that automated testing tools had hit five million times without catching. It chained together multiple Linux kernel vulnerabilities to escalate from ordinary user access to full root control, entirely on its own.
That is not a chatbot upgrade. That is a weapon-grade capability.
Why Anthropic Chose Not to Ship It
Instead of releasing Mythos through the API or the consumer product, Anthropic launched Project Glasswing, an initiative that puts the model in the hands of 12 major technology partners, including AWS, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, the Linux Foundation, Microsoft, NVIDIA, and Palo Alto Networks. Another 40 organisations that build or maintain critical software infrastructure also received access.
Anthropic committed $100 million in usage credits and $4 million in direct donations to open-source security organisations. The goal is simple — use the model to find and fix vulnerabilities before adversaries develop similar capabilities.
The logic here is sound. If Mythos can find a 27-year-old bug that survived decades of human review, it can also find bugs that no one else has patched yet. Releasing that capability to anyone with an API key, including nation-state actors and criminal groups, would be reckless. Anthropic made the hard call to lock it down and deploy it defensively first.
The Strategic Layer Nobody Is Talking About
There is a second dimension to this decision that deserves attention.
TechCrunch reported that frontier labs are increasingly concerned about model distillation — the practice of using outputs from large, expensive models to cheaply train smaller competitors. Chinese AI firms have been particularly aggressive in this space, and Anthropic, Google, and OpenAI are reportedly collaborating to identify and block distillers.
By gating Mythos behind enterprise partnerships rather than an open API, Anthropic makes distillation nearly impossible. The model only runs in controlled environments with vetted partners. As software engineer David Crawshaw pointed out, “This is marketing cover for the fact that top-end models are now gated by enterprise agreements and no longer available to small labs to distill.”
I think both things can be true simultaneously. The cybersecurity rationale is real. The anti-distillation benefit is also real. And the enterprise revenue flywheel that comes from exclusive access to the industry’s most capable model is very, very real.
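To make the distillation mechanic concrete, here is a deliberately tiny, self-contained sketch in Python. Nothing in it comes from Anthropic or the Glasswing announcement: the "teacher" is a stand-in function, the "student" is a cubic polynomial, and every number is an arbitrary illustration. What it shows is the part that matters strategically: the student is trained only on the teacher's outputs, so anyone with open query access to a capable model can cheaply approximate its behaviour, and removing that access is what breaks the pipeline.

```python
# Toy illustration of model distillation: a cheap "student" is fitted purely
# to the outputs of an expensive "teacher", never to the teacher's own
# training data. The teacher function, sample points, and polynomial student
# are illustrative assumptions chosen so the script runs anywhere; real
# distillation uses harvested LLM API completions and a smaller neural
# network, but the mechanic is the same.

import math
import random

def teacher(x: float) -> float:
    """Stand-in for the expensive frontier model: callers only see outputs."""
    return math.tanh(2.0 * x) + 0.1 * math.sin(5.0 * x)

# Step 1: "harvest" teacher outputs by querying it on chosen inputs.
random.seed(0)
inputs = [random.uniform(-1, 1) for _ in range(200)]
labels = [teacher(x) for x in inputs]          # the only knowledge extracted

# Step 2: fit a much simpler student (a cubic polynomial, trained by
# gradient descent on squared error) to imitate the teacher's behaviour.
weights = [0.0, 0.0, 0.0, 0.0]                 # coefficients of 1, x, x^2, x^3
lr = 0.1
for _ in range(2000):
    grads = [0.0] * 4
    for x, y in zip(inputs, labels):
        pred = sum(w * x**k for k, w in enumerate(weights))
        err = pred - y
        for k in range(4):
            grads[k] += 2 * err * x**k / len(inputs)
    weights = [w - lr * g for w, g in zip(weights, grads)]

# The student now approximates the teacher without ever seeing how the
# teacher was built, which is exactly the concern with unrestricted access.
for x in (-0.8, 0.0, 0.8):
    student = sum(w * x**k for k, w in enumerate(weights))
    print(f"x={x:+.1f}  teacher={teacher(x):+.3f}  student={student:+.3f}")
```

Scaled up, the same pattern runs on millions of API completions and fine-tunes a smaller neural network rather than a polynomial, which is why gating the teacher behind controlled enterprise environments is such a direct counter.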
What This Means for Enterprise AI Strategy
If you are a CIO or IT Director watching this, Mythos changes the equation in several ways.
First, the security implications are immediate. If AI models can now find zero-days at this scale, every organisation needs to assume that adversaries will have access to similar capabilities within 12 to 18 months. The window between vulnerability discovery and exploitation has collapsed. Patching cadence, vulnerability management, and incident response all need to accelerate dramatically.
Second, the access model matters. Mythos is only available to organisations that Anthropic deems critical infrastructure partners. If your organisation runs infrastructure that underpins major platforms, this is a signal to start conversations with your AI vendors about defensive security programs. If you do not qualify, then the security posture of the platforms you depend on is about to improve significantly — but your own internal tooling may still be exposed.
Third, this is the beginning of a tiered model market. The most capable models will increasingly be available only to enterprise customers under controlled agreements. The consumer API will get the second-best model. If your AI strategy assumes equal access to frontier capabilities through a standard API subscription, that assumption is now outdated.
A Precedent Worth Watching
There is a counterargument. Aisle, an AI cybersecurity startup, claims it replicated much of what Mythos accomplished using smaller, open-weight models. If that is true, the gating strategy only delays proliferation rather than preventing it. The capabilities are emerging across the ecosystem regardless of any single company’s release decisions.
That may be right. But “delay” is exactly what defenders need. Every week that critical infrastructure gets patched before attackers develop equivalent tooling is a week that matters. In cybersecurity, time advantage is everything.
What Anthropic has done with Mythos is establish a new norm: some AI capabilities are too consequential to release broadly without first building defensive infrastructure around them. You can debate whether its motives are purely altruistic; I do not think they are, and that is fine. The outcome is what matters.
The Bigger Picture
We have spent the last three years watching AI labs compete on who can ship the most impressive demo the fastest. Anthropic just competed on restraint. It built the most powerful model in its history and chose not to release it.
In a market obsessed with speed, that kind of discipline is rare. Whether it holds up as the competitive landscape intensifies is an open question. But for now, the decision to put cybersecurity ahead of market share — while simultaneously protecting against distillation and building enterprise moats — might be the smartest strategic play any AI company has made this year.
The real test will come 90 days from now, by which point Anthropic has committed to publishing what it has learned from Project Glasswing. If the initiative produces material improvements to open-source security and critical infrastructure, it will validate the entire approach. If it turns out to be a dressed-up enterprise sales motion with minimal public benefit, the industry will notice.
Either way, the era of “ship everything to everyone” in AI is over. And honestly, it needed to be.