
AI cybersecurity is now a formal competitive front between OpenAI and Anthropic, with OpenAI finalizing an advanced security product for a limited partner release and Anthropic leading a tightly controlled effort called Project Glasswing aimed at finding critical software vulnerabilities before attackers do.
Summary
- OpenAI is finalizing an AI cybersecurity product that will first be released to a limited set of partners.
- Anthropic’s Project Glasswing is a controlled initiative focused on proactively finding critical software vulnerabilities.
- Both efforts raise fundamental questions about who controls AI attack and defense tools and who is responsible if something goes wrong.
Artificial intelligence has evolved from a tool that helps defenders understand threats to a tool that can independently find and exploit vulnerabilities. OpenAI and Anthropic are now tapping directly into this space, with implications for governments, businesses, and the millions of software systems that support the world’s financial infrastructure.
OpenAI is finalizing an AI cybersecurity product with advanced capabilities and plans to initially offer it to a limited group of partners, according to Tech Startups. Anthropic is leading a parallel effort internally called Project Glasswing, a tightly controlled initiative designed to hunt down critical software vulnerabilities before malicious actors find them first.
These two announcements mark a shift in how the two leading AI labs position themselves. Both are moving from general-purpose AI toward security-specific products with direct offensive and defensive capabilities. The question is no longer what AI can do in cybersecurity; it is who controls these tools and who is responsible if something goes wrong.
What Anthropic’s track record shows
Anthropic has already demonstrated the scale of what AI security tools can achieve. As crypto.news reported, the company limited access to its Claude Mythos Preview model after initial tests revealed it could uncover thousands of critical vulnerabilities in widely used software environments, including a 27-year-old bug in OpenBSD and a 16-year-old remote execution flaw in FreeBSD. Anthropic said: “Given the pace of advances in AI, it will not be long before such capabilities proliferate, potentially beyond those actors committed to deploying them safely.”
Industry data cited by Anthropic shows a 72% year-over-year increase in AI-based cyberattacks, with 87% of global organizations reporting exposure to AI-based incidents in 2025. Project Glasswing is positioned as Anthropic’s controlled effort to stay ahead of this curve.
The risk of dual-use AI security tools
The deeper problem for regulators and industry is that the same AI tool that detects a vulnerability defensively can find it offensively. As crypto.news noted, a joint study by Anthropic and MATS Fellows found that Claude Sonnet and GPT-5 could produce simulated exploits against Ethereum smart contracts worth $4.6 million in testing, and discovered two new zero-day vulnerabilities in nearly 3,000 recently deployed contracts.
This dual-use reality makes the controlled deployment strategies both companies are pursuing essential. But whether limited access is enough to prevent proliferation is a question no lab has fully answered.


