
The Model That Found a 17-Year-Old Bug in FreeBSD

Anthropic's Mythos Is Too Dangerous to Release. That Tells You Everything.

Published
3 min read

There's a certain kind of AI announcement that doesn't generate the usual buzz. No launch party. No waitlist. No marketing. Just a careful, sober blog post explaining why you've built something remarkable and decided not to let anyone use it.

That's what Anthropic did on April 7th.

The company quietly disclosed that its next-generation frontier model - Claude Mythos Preview - has, in recent weeks, identified thousands of previously unknown zero-day vulnerabilities across every major operating system, every major web browser, and a range of other widely deployed software. Not theoretical vulnerabilities. Real, exploitable ones, including a 17-year-old remote code execution flaw in FreeBSD that lets a remote attacker gain root access to any machine running NFS. Anthropic found it. Autonomously. The flaw is now logged as CVE-2026-4747.

The company's conclusion: Mythos Preview is too dangerous to release to the public. So instead of a general rollout, they built Project Glasswing.

Glasswing is a controlled-access security collaboration that gives a small group of heavyweight partners - Amazon Web Services, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorgan Chase, the Linux Foundation, Microsoft, NVIDIA, and Palo Alto Networks - restricted access to Mythos specifically for patching critical software. The goal is to use the model's superhuman vulnerability-finding abilities defensively: let it find the holes before malicious actors do, and get them fixed before anyone else knows to look.

It's a genuinely interesting approach to a genuinely hard problem, and I find myself of two minds about it.

On one hand, this is responsible disclosure at a civilizational scale. Anthropic found a model that could be a hacker's dream weapon, and rather than release it and deal with the fallout, they built a framework to use it carefully. The partners list reads like a who's who of the internet's critical infrastructure. That's not nothing.

On the other hand, we're now in a world where a private company has decided, unilaterally, which organizations get access to a capability that could reshape the security landscape. Project Glasswing was not voted on. There was no regulatory consultation. Amazon and Apple get in; your regional bank, your city's water utility, your local hospital network do not.

Anthropic says the capabilities "emerged surprisingly quickly" during development — which is perhaps the most unsettling sentence in the entire announcement. They didn't set out to build the world's most dangerous vulnerability scanner. It just became one.

That's the thing about frontier AI development in 2026: the capabilities keep surprising even the people doing the building. Glasswing feels like a reasonable response to an unreasonable situation. Whether it scales — whether a controlled-access model can patch a world's worth of software before attackers figure out what's happening — is a different question entirely.



Decoding AI: From Theory to Real-World Applications

Part 3 of 19

Artificial Intelligence is reshaping our world, but how does it actually work? In this series, we’ll break down AI and Machine Learning fundamentals, explore cutting-edge advancements, and apply practical techniques to real-world problems.
