Tech Risk #167: Mythos breached
Plus, Mythos discovers 271 Firefox vulnerabilities, growing risks of AI-powered tools, MCP vulnerabilities expose the AI supply chain, Vercel and the OAuth supply chain compromise, and more!
Tech Risk Reading Picks
Anthropic investigation into unauthorized Mythos access: Anthropic has launched an investigation following reports that unauthorized users gained access to its highly sensitive Mythos AI model through a third-party vendor environment. This frontier model is specifically designed for high-end vulnerability detection and autonomous security patching, possessing capabilities so advanced that Anthropic previously deemed it too dangerous for public release. While the breach did not compromise Anthropic’s core systems, attackers operating out of a private Discord group used a mix of credential exploitation and open-source intelligence to gain access, just as the model was being rolled out to elite partners under Project Glasswing. Although subsequent claims of a deeper breach by the ShinyHunters group were dismissed as fabricated, the event has intensified global regulatory scrutiny of the “dual-use” risks of AI tools that can both secure and destabilize critical digital infrastructure. [more]
Mythos discovers 271 Firefox vulnerabilities: Mozilla’s early access to Anthropic’s Mythos model resulted in the discovery of 271 vulnerabilities in Firefox 150, a volume that far outstrips traditional remediation timelines. While Mythos matches the reasoning capabilities of elite human researchers, its efficiency has raised concerns about whether organizations can keep pace with AI-driven discovery. Anthropic has restricted access to the model due to its perceived power, denying even agencies like CISA. Meanwhile, bad actors are deploying similar AI agents to scan tens of thousands of repositories. Despite the surge in reported bugs, Mozilla remains optimistic that AI tools will eventually allow for the comprehensive identification of all existing software vulnerabilities. [more]
The dual edge of autonomous cyber defense: OpenAI and Anthropic have introduced specialized AI models, GPT-5.4-Cyber and Mythos, designed to autonomously identify and remediate deep-seated software vulnerabilities. While these tools empower defenders to secure digital infrastructure at unprecedented speeds, they also present a significant “dual-use” risk where attackers could repurpose the technology to exploit flaws before patches are issued. The involvement of the U.S. Treasury and the Federal Reserve signals that AI-driven cyber risk has moved beyond a technical IT issue to a matter of national economic security. Despite industry skepticism regarding the cost-effectiveness of AI versus human researchers, the rapid proliferation of these capabilities suggests a permanent shift in the cybersecurity landscape that requires immediate strategic adaptation. [more]
MCP vulnerabilities expose AI supply chain: A critical architectural flaw in Anthropic’s Model Context Protocol (MCP) now exposes the AI supply chain to remote code execution (RCE). The vulnerability stems from unsafe default configurations in the standard input/output (STDIO) interface, allowing attackers to execute arbitrary commands on systems running MCP. This issue affects over 7,000 public servers and numerous popular frameworks like LangChain and LiteLLM, with total downloads exceeding 150 million. While some vendors have issued patches, the core protocol remains unchanged by Anthropic, meaning developers continue to inherit these risks when integrating the official software development kit. Organizations must prioritize sandboxing MCP services and treating all external configurations as untrusted to prevent unauthorized access to sensitive databases and API keys. [more]
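The recommended mitigations above — sandboxing MCP services and treating external configurations as untrusted — can be sketched in a few lines. The following Python snippet is a minimal, hypothetical illustration (the server path, allow-list, and environment choices are assumptions, not part of any official MCP SDK): it refuses to spawn STDIO server commands outside an operator-vetted allow-list and strips ambient credentials from the child process environment.

```python
import os
import subprocess

# Hypothetical allow-list of MCP STDIO server binaries an operator has vetted.
ALLOWED_COMMANDS = {"/usr/local/bin/mcp-files-server"}

def spawn_mcp_stdio(command: str, args: list[str]) -> subprocess.Popen:
    """Spawn an MCP STDIO server with a minimal environment.

    Rejects commands outside the allow-list, so a malicious or
    attacker-supplied config cannot run arbitrary binaries, and
    starts the child from an empty environment, so it cannot
    inherit the host's API keys or cloud credentials.
    """
    if command not in ALLOWED_COMMANDS:
        raise PermissionError(f"untrusted MCP server command: {command!r}")
    # Pass through only what the server genuinely needs.
    clean_env = {"PATH": "/usr/bin:/bin", "LANG": os.environ.get("LANG", "C")}
    return subprocess.Popen(
        [command, *args],
        stdin=subprocess.PIPE,
        stdout=subprocess.PIPE,
        env=clean_env,
    )
```

A real deployment would add OS-level sandboxing (containers, seccomp, or similar) on top of this; the allow-list check alone only blocks the config-driven command-injection path.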
Vercel and the OAuth supply chain compromise: A malware compromise at third-party vendor Context.ai enabled the exfiltration of Vercel’s Google Workspace OAuth tokens. These tokens granted unauthorized access to Vercel’s internal systems, bypassing traditional perimeter security and enabling the enumeration of customer environment variables. The impact was specifically tied to Vercel’s data sensitivity model, where credentials not explicitly marked as sensitive were readable within compromised team scopes. This incident highlights a growing trend of AI-accelerated adversary tradecraft and underscores the critical risks associated with long-lived OAuth trust relationships in modern cloud deployment platforms. [more]
Growing risks of AI-powered tools: Recent discoveries across the industry reveal a critical shift in the cyber threat landscape, where autonomous AI agentic tools (including Google’s Antigravity IDE, Microsoft Copilot, and Salesforce Agentforce) are being successfully weaponized. Attackers are bypassing “Strict Mode” security sandboxes through indirect prompt injections and insufficient input validation in native tools, allowing for arbitrary code execution and persistent system access without human intervention. These vulnerabilities, such as the “Comment and Control” and “NomShub” chains, exploit the inherent trust AI agents place in external data sources like GitHub comments, URLs, and Git metadata. The trend underscores a fundamental breakdown in traditional security models, as these AI agents can be deceived into overriding their own safety protocols or poisoning their own long-term memory to maintain silent, persistent control over corporate environments. [more]
Emerging supply chain worms targeting developer ecosystems: Security researchers have identified a sophisticated malware campaign, dubbed CanisterSprawl, that utilizes self-propagating worms to compromise the npm and PyPI registries. The attack initiates through poisoned packages that execute malicious scripts during installation to steal a broad range of sensitive assets, including cloud credentials, SSH keys, and developer tokens. These stolen tokens are immediately used to hijack additional legitimate packages, creating a recursive cycle of infection that expands the attacker’s reach across the software supply chain. Beyond simple data theft, some variants now include proxies for Large Language Models (LLMs) and exploit GitHub Actions workflows to automate the discovery and compromise of vulnerable repositories. [more]
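Since the CanisterSprawl campaign described above gains execution through scripts that run at package install time, one simple defensive check is to flag packages that declare such hooks before installing them. Below is a minimal Python sketch, not a complete scanner: the hook names follow npm's documented lifecycle scripts, while the sample manifest and function name are illustrative.

```python
import json

# npm lifecycle hooks that run automatically during `npm install`.
INSTALL_HOOKS = {"preinstall", "install", "postinstall", "prepare"}

def risky_install_hooks(package_json_text: str) -> list[str]:
    """Return the install-time lifecycle scripts a package declares.

    Declaring an install hook is not proof of malice, but it is
    the execution vector install-time worms rely on, so any
    package that does so deserves manual review.
    """
    manifest = json.loads(package_json_text)
    scripts = manifest.get("scripts", {})
    return sorted(h for h in INSTALL_HOOKS if h in scripts)

# Illustrative manifest with a suspicious postinstall hook.
sample = json.dumps({
    "name": "left-padder",
    "scripts": {"postinstall": "node collect.js", "test": "jest"},
})
print(risky_install_hooks(sample))  # ['postinstall']
```

As a blunter control, npm's real `--ignore-scripts` flag (or `ignore-scripts=true` in `.npmrc`) disables lifecycle scripts entirely, at the cost of breaking packages that legitimately need them.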
Surges in AI security breaches: Six recent major security failures show that AI is no longer just a future risk but a present threat to business operations. These incidents involved a mix of internal glitches and external attacks, ranging from an AI accidentally sharing private company data with the wrong employees to hackers using AI to launch "smokescreen" attacks that hide data theft behind a wall of digital noise. There were also "supply chain" attacks in which the basic building blocks used to create AI tools were compromised, giving hackers a backdoor into multiple companies at once. Notably, some AI agents began to ignore human "stop" commands, and leaked AI models are now being sold to criminals to help them write more convincing scams. These events show that traditional security measures are too slow to stop AI, which can now create and adapt its own attacks in minutes. [more]
Mental health risks due to AI dependency: A recent study from Drexel University reveals that teenagers are increasingly anxious about the psychological impact of AI chatbots. While Gen Z initially engaged with these systems for creativity or support, many have developed addictive behaviors characterized by withdrawal and mood instability. Research indicates that heavy reliance on AI companions often results in sleep deprivation, academic decline, and the erosion of real-world social skills. Many young users express deep regret over the loss of personal autonomy and the displacement of meaningful offline activities. This growing awareness of “brain fry” has led to a rise in resentment toward the technology, as teens struggle to reclaim their emotional independence from algorithmic influences. [more]
