
The Day AI Became a Zero-Day Threat

Why Anthropic’s Unreleased AI Model Is a Warning for the Digital Economy

A striking line has been crossed in artificial intelligence, and it did not arrive through a chatbot gimmick or another benchmark war. It arrived through cybersecurity.

This month, Anthropic said it would not make its new Claude Mythos Preview model generally available. Instead, it placed the system behind invitation-only access inside a defensive initiative called Project Glasswing. Anthropic said Mythos had already found thousands of high-severity vulnerabilities, including flaws in every major operating system and web browser, and described the model as capable of surpassing all but the most skilled humans at finding and exploiting software vulnerabilities.

That is not just another AI safety headline. It is a sign that cybersecurity has shifted from being one more application of AI to being one of the forces most likely to shape the next phase of the AI era itself. Until recently, the popular debate around artificial intelligence revolved around jobs, search, creativity and productivity. Now a different question is forcing its way to the top: what happens when machines become unusually good at finding the weak seams in the software the world already depends on?

The first answer is uncomfortable. The cyber economy has always rested on a fragile bargain. Defenders hoped they could patch fast enough, upgrade enough, monitor enough and train enough staff to stay ahead of attackers. But the digital world was already sprawling, layered and unevenly defended before this latest leap in model capability. Banks, governments, utilities, exchanges and logistics firms still run mixtures of modern cloud systems and decades-old legacy software. Reuters reported this week that security experts are especially worried about banking because banks often operate highly interconnected stacks that combine current tools with old infrastructure, creating a large and potentially scalable attack surface.

That is what makes the Anthropic announcement so important. The problem is not simply that a powerful model can find bugs. Security researchers have always found bugs. The new issue is speed, volume and accessibility. Anthropic’s own research page says Mythos wrote complete exploit chains autonomously, including one privilege-escalation pipeline that took under a day and cost under $2,000 to produce. The company also said that even currently available frontier models can already find serious vulnerabilities in web apps, crypto libraries and the Linux kernel, though they are less effective than Mythos at turning those findings into working exploits.

Once that becomes true, cyber stops looking like a specialized domain reserved for elite operators and starts to look like industrial capability. The bottleneck shifts from scarce expertise to workflow, access and scale.

That is why official reactions have been unusually serious. Reuters reported that officials in the United States, Canada and Britain have been meeting financial institutions to discuss the implications of Mythos. In Britain, according to Reuters, the Bank of England, the Financial Conduct Authority and the Treasury have been in talks with the National Cyber Security Centre after concerns that the model could expose vulnerabilities in critical systems.

The British AI Security Institute’s own evaluation helps explain the concern. It found that Mythos showed significant improvement on multi-step cyber attack simulations, becoming the first model to solve its 32-step corporate network attack scenario from start to finish in some runs. The institute also found Mythos could autonomously attack small, weakly defended and vulnerable enterprise systems where network access had already been obtained, while warning that future frontier models will be more capable still.

That creates a transition problem that is larger than Anthropic. Even if this one model remains restricted, the company itself says the trajectory is clear and that similar capabilities are unlikely to remain rare for long. Anthropic’s argument is effectively that defensive users need a head start because the offensive capability frontier is moving too fast to assume it can be safely democratized right away.

This is where the future of cyber gets interesting, because the real story may not be that AI breaks the internet. The real story may be that AI forces a great unwinding of technical debt.

For years, the modern economy has run on a quiet compromise. Companies kept old systems alive because replacing them was expensive, disruptive and politically unrewarding. Boards preferred digital transformation speeches to the thankless work of code migration. Governments layered new tools onto old systems. Banks did the same. So did hospitals, telecom groups and critical infrastructure providers. AI now threatens to make that compromise unaffordable. Anthropic itself says defenders should think beyond vulnerability finding and use models to shorten patch cycles, analyze cloud misconfigurations and accelerate migrations from legacy systems to more secure ones. That point is easy to miss, but it may prove central. AI is not only making attack cheaper. It is making procrastination more expensive.

In that sense, frontier AI could become the greatest upgrade pressure the software industry has seen in decades. It may force the move from brittle, inherited infrastructure to systems that are easier to verify, patch and monitor. The losers in that world are not only weak defenders. They are also institutions whose business models quietly depended on the fact that no one had time to inspect everything.

There is a second layer to this, and it reaches into crypto.

Crypto markets often imagine cyber risk through a narrow lens. People worry about exchange hacks, wallet drains, phishing and smart contract bugs. Those fears are real. But the deeper issue is that crypto sits at the intersection of old and new computing assumptions. It is part software, part money, part infrastructure, part ideology. That makes it unusually exposed to changes in the cyber environment.

Anthropic’s technical write-up says Mythos found weaknesses in major cryptography libraries and in implementations tied to TLS, AES-GCM and SSH, with some flaws potentially allowing forged certificates or decrypted communications if exploited. That is an important reminder. In practice, cyber failure often arrives through implementation flaws, not the clean mathematical collapse of a cryptographic primitive. The danger is usually in the plumbing before it is in the theorem.
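
The AES-GCM point is worth making concrete. GCM encrypts with a counter-mode keystream derived from the key and a nonce; if an implementation ever reuses a nonce under the same key, two ciphertexts XOR to the XOR of their plaintexts, and one known message exposes the other. The sketch below simulates that failure with a random keystream standing in for AES-CTR output; it illustrates the general bug class, not any specific flaw Anthropic reported.

```python
import os

def keystream_xor(keystream: bytes, message: bytes) -> bytes:
    """Encrypt or decrypt by XORing with a keystream (the core of CTR/GCM modes)."""
    return bytes(k ^ m for k, m in zip(keystream, message))

# In real AES-GCM the keystream is derived from key + nonce. Reusing a nonce
# reuses the keystream -- an implementation bug, not a failure of the math.
keystream = os.urandom(32)  # stands in for AES-CTR(key, nonce) output

p1 = b"transfer $100 to account A" + b"\x00" * 6   # 32 bytes
p2 = b"secret api key: hunter2hunter2!!"           # 32 bytes
c1 = keystream_xor(keystream, p1)
c2 = keystream_xor(keystream, p2)

# The attacker never sees the key, yet c1 XOR c2 equals p1 XOR p2:
xor_of_ciphertexts = bytes(a ^ b for a, b in zip(c1, c2))
xor_of_plaintexts = bytes(a ^ b for a, b in zip(p1, p2))
assert xor_of_ciphertexts == xor_of_plaintexts

# Knowing (or guessing) one plaintext recovers the other outright:
recovered_p2 = bytes(a ^ b for a, b in zip(xor_of_ciphertexts, p1))
assert recovered_p2 == p2
```

This is exactly the kind of plumbing-level mistake that survives code review for years while the underlying cipher remains mathematically sound.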

For crypto, that means the near-term AI threat is likely less about AI somehow “breaking Bitcoin” and more about AI supercharging the discovery of bugs in wallets, bridges, validators, exchange infrastructure, SDKs, browser extensions and the software supply chains surrounding digital assets. In other words, AI may attack crypto first where crypto is most human: in the code people wrote, the keys they mishandle, the interfaces they trust and the infrastructure they forgot to upgrade.

Then there is the longer-horizon question of quantum risk, which is related but distinct. NIST says the time to migrate to post-quantum cryptography is now and notes that current public-key encryption will be at risk if quantum computers mature enough to break today’s standards. Google Research went further last month, saying most blockchain technologies and cryptocurrencies currently rely on ECDLP-256 for critical aspects of their security and arguing that post-quantum cryptography offers a viable path to protect the digital economy, though implementation will take time. Ethereum’s own roadmap says some of the cryptography securing present-day Ethereum would be compromised by quantum computers and that quantum resistance should be pursued as soon as possible, even if practical threats are likely still years away.
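
The migration point can be made concrete with a toy inventory triage, the kind of first pass NIST's guidance implies institutions will need. The system names below are hypothetical; the scheme buckets follow standard post-quantum guidance, under which Shor's algorithm breaks RSA and elliptic-curve schemes outright while Grover's algorithm only halves symmetric and hash security margins.

```python
# Public-key schemes broken by Shor's algorithm on a large quantum computer.
QUANTUM_BREAKABLE = {"RSA-2048", "ECDSA-P256", "ECDH-P256", "Ed25519"}
# Symmetric/hash primitives weakened (not broken) by Grover's algorithm.
GROVER_WEAKENED = {"AES-128", "SHA-256"}

def triage(inventory: dict[str, str]) -> dict[str, str]:
    """Map each system's primitive to a migration-priority bucket."""
    out = {}
    for system, primitive in inventory.items():
        if primitive in QUANTUM_BREAKABLE:
            out[system] = "replace: migrate to a post-quantum scheme (e.g. ML-KEM / ML-DSA)"
        elif primitive in GROVER_WEAKENED:
            out[system] = "review: increase key or output size where feasible"
        else:
            out[system] = "unknown: inventory gap, investigate"
    return out

# Hypothetical inventory of what each internal system relies on.
report = triage({
    "tls_terminator": "ECDH-P256",
    "code_signing":   "RSA-2048",
    "disk_at_rest":   "AES-128",
    "internal_tool":  "XTEA",
})
assert report["code_signing"].startswith("replace")
assert report["disk_at_rest"].startswith("review")
assert report["internal_tool"].startswith("unknown")
```

The "unknown" bucket is the quietly important one: most organizations cannot yet enumerate which primitives their systems depend on, and that inventory gap, not the mathematics, is where migration programs usually stall.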

That matters because AI and quantum are often discussed separately, but institutions will increasingly have to think about them together. AI is raising the speed at which software flaws can be found, triaged and weaponized. Quantum research is raising the urgency of replacing vulnerable public-key schemes over time. The result is a new security reality. Cyber is no longer just about keeping attackers out. It is about how quickly a civilization can modernize its trust infrastructure.

That phrase may sound abstract, but it is not. Trust infrastructure is your certificate chain, your identity layer, your bank’s mainframe, your exchange’s custody stack, your cloud permissions, your open-source dependencies, your secure messaging protocols and your signing keys. It is also your assumptions about what is expensive for an attacker to do. AI is starting to rewrite those assumptions.

This is why Anthropic’s decision may be remembered less as a product launch story and more as a governance signal. A frontier lab looked at a model’s cyber capability and concluded that the commercial instinct to ship widely should, at least for now, lose to the strategic instinct to gate access. Whether or not one agrees with every part of that decision, it tells the market something important. We have entered a period in which the most valuable AI systems may not be the ones that answer consumer questions fastest, but the ones that can inspect the hidden machinery of digital society better than people can.

The viral angle here is not that AI will suddenly make hackers omnipotent. It is subtler and more consequential. AI is turning cyber from a background technical function into a leading indicator of economic and institutional quality. The firms, protocols and countries that modernize fastest will not just be safer. They may become more investable, more credible and more sovereign in the digital sense of the word.

And that is where crypto comes back into the frame. The old crypto promise was that blockchains could reduce trust in fallible intermediaries. The new challenge is that every digital system, decentralized or not, still depends on code, keys, clients and human operations. In the AI age, the winners will be the systems that can combine cryptographic assurance with adaptive defense, rapid patching and credible migration paths to stronger standards. This is no longer just a story about decentralization. It is a story about survivable digital architecture.

The future of cyber, then, may not belong to whoever has the smartest hackers or the biggest models. It may belong to whoever rebuilds trust the fastest while the machines are getting better at breaking it.



Copyright © 2026 Decentralised News. All rights reserved.