Leaked documents expose problems with Anthropic’s new AI model
According to a Fortune report, internal Anthropic documents have been leaked, revealing details about a new generation of AI models called “Claude Mythos.” The leak was caused by human error in the configuration of Anthropic’s content management system. Security researchers discovered that the company had left nearly 3,000 unpublished assets in a publicly accessible data lake, including images, PDFs, audio files, and draft blog posts.
Perhaps more worrying is that these documents suggest Claude Mythos is already in the testing phase. The documents state that Anthropic believes the new model “poses unprecedented cybersecurity risks.” In a draft blog post, the company reportedly said the system was “currently well ahead of any other AI model in terms of cyber capabilities,” warning that this could let models exploit vulnerabilities much faster than defenders can react.
Anthropic’s cautious approach and past incidents
Due to these potential risks, Anthropic has planned a cautious deployment strategy: early access would first go to cybersecurity defense organizations, giving defenders a head start in hardening their systems against AI-based attacks. The concern is not unfounded, either. Anthropic previously reported that a Chinese state-sponsored group used Claude Code to infiltrate around 30 organizations, including technology companies and government agencies.
The leaked documents also mentioned an invitation-only CEO summit planned at an 18th-century mansion in the English countryside. Anthropic CEO Dario Amodei was scheduled to host European business leaders to discuss the adoption of AI and showcase the novel capabilities of the Claude model.
Industry Reactions and Competitive Landscape
Once the news spread on the social network X, Elon Musk did not hesitate to comment. He wrote, “Seriously disturbing,” and the post quickly gained tens of thousands of views and likes. Musk frequently weighs in on negative news about competitors, particularly those he has publicly clashed with. Anthropic was founded by former OpenAI employees, and Musk has openly criticized OpenAI and the AI industry’s broader approach to security.
Meanwhile, Musk’s AI company xAI recently launched a new paid subscription tier called “SuperGrok Lite” for $10 per month, and imposed tighter limits on free Grok users to push them towards paid plans. The company offers several subscription options, including SuperGrok for $30 per month, SuperGrok Heavy for $300 per month, and Grok Business for $30 per month.
I think this situation highlights the tension in the AI industry between rapid development and security concerns. As companies strive to develop more powerful models, security considerations sometimes take a back seat. The fact that these documents were accidentally made public shows how even the most basic security practices can be overlooked.
Human error in CMS configuration is a reminder that technical systems are only as secure as the people who manage them. Having thousands of unpublished assets publicly available seems like a fundamental oversight. One wonders what other security vulnerabilities might exist in these rapidly evolving AI companies.
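Misconfigurations like this are exactly what automated policy checks exist to catch. As a minimal sketch (the config format, field names, and bucket names below are hypothetical illustrations, not Anthropic's actual CMS settings), a pre-deployment audit over storage bucket configurations might look like:

```python
# Hypothetical sketch: audit storage bucket configs before deployment.
# Field names ("public_read", "block_public_acls", etc.) are illustrative,
# not drawn from any real CMS or cloud provider API.

def audit_bucket(config: dict) -> list[str]:
    """Return a list of policy violations for one bucket configuration."""
    violations = []
    name = config.get("name", "<unnamed>")
    if config.get("public_read", False):
        violations.append(f"{name}: public read access enabled")
    if not config.get("block_public_acls", True):
        violations.append(f"{name}: public ACLs not blocked")
    # Unpublished assets in a publicly readable bucket are the worst case.
    if config.get("public_read", False) and config.get("unpublished_assets", 0) > 0:
        violations.append(
            f"{name}: {config['unpublished_assets']} unpublished assets exposed"
        )
    return violations

buckets = [
    {"name": "press-assets", "public_read": True,
     "block_public_acls": False, "unpublished_assets": 2900},
    {"name": "internal-drafts", "public_read": False},
]

for bucket in buckets:
    for violation in audit_bucket(bucket):
        print(violation)
```

Running a check like this in CI, so that no bucket ships with public reads enabled by accident, is a far cheaper safeguard than discovering the exposure after security researchers do.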
What interests me is how different companies approach these challenges. Anthropic appears to be taking a more cautious path with its new model, while others may move forward more aggressively. But then again, the leak itself suggests that their internal processes may not be as strict as they would have us believe.
Industry reactions are also telling. Musk’s quick comment shows how competitive this space has become. Everyone is looking at everyone else, ready to point out flaws or missteps. This creates an environment in which transparency can suffer, as companies fear giving ammunition to their competitors.
I wonder how much we ignore the capabilities and risks of these AI models. If this information was leaked accidentally, what other important details remain hidden? And how can we balance innovation with appropriate safeguards when technology is evolving so quickly?