TechRisk #126: WormGPT is back and bigger
Plus, Bug in MCP exposed sensitive data, concerns over AI driven attacks and poorly managed APIsquantum posts risk to Bitcoin as well as other institutions, and more!
Tech Risk Reading Picks
WormGPT is back: WormGPT, a previously shut down uncensored AI tool used for cybercrime, is making a comeback by leveraging powerful large language models (LLMs) like xAI’s Grok and Mistral AI’s Mixtral through jailbreaking techniques that bypass safety measures. Originally launched in 2023 and based on GPT-J, WormGPT resurfaced despite its takedown after the doxxing of its creator, now evolving into a broader brand of malicious AI tools. Criminals are repurposing existing LLMs by altering hidden prompts or training them with illicit data, enabling them to generate unethical or illegal content like phishing emails. New variants, such as “keanu-WormGPT” and “xzin0vich-WormGPT,” are distributed via Telegram on a subscription basis, indicating a growing underground market for weaponized AI. Experts warn that this trend reflects both increasing criminal sophistication and the urgent need for more behavior-focused AI security measures. [more]
Bug in MCP exposed sensitive data: Asana has disclosed a flaw in its Model Context Protocol (MCP), an AI-powered feature introduced on 1 May 2025, that potentially caused data exposure between users from different organizations due to a logic bug, not a hack. The exposure, which lasted over a month before discovery on 4 June 2025, was limited to users’ access scopes but could include sensitive project data like task details, comments, files, and metadata. Though not a full workspace breach, the incident could lead to privacy or regulatory issues. Asana has notified around 1,000 affected customers, taken MCP offline, and advises admins to audit logs and restrict LLM access until the issue is fully resolved. [more]
Malicious AI agent intercepting data: Cybersecurity firm Noma Security uncovered a high-severity vulnerability (CVSS 8.8), dubbed AgentSmith, in LangChain's LangSmith platform, specifically within its public Prompt Hub. This flaw allowed malicious AI agents, configured with hidden proxy servers, to intercept sensitive user data like OpenAI API keys and manipulate LLM responses. When users unknowingly adopted these agents, their data was routed through attacker-controlled servers, risking account breaches, data theft, and financial loss. The vulnerability was disclosed responsibly to LangChain on 29 October 2024, and promptly patched by 6 November 2024. No active exploitation was found, and only public agents were affected. The incident underscores the need for strong AI governance, including runtime protections and centralized agent tracking via an AI Bill of Materials. [more]
Concerns over AI driven attacks and poorly managed APIs: Radware's 2025 Cyber Survey reveals widespread vulnerabilities in global application security, with mounting concerns over AI-driven cyberattacks, poorly managed APIs, and undertrained security staff. Despite 70% of organisations expressing high concern about malicious AI use, only 8% currently use AI-based defences. Nevertheless, adoption is expected to surge. API usage has jumped 42% since 2023, but only 6% of organisations fully document their APIs, increasing exposure to data breaches and business logic attacks, which remain underprotected and poorly understood by security teams. Additionally, regulatory compliance pressures and the high financial toll of attacks (averaging $6,100 per minute during DDoS incidents) underscore the urgent need for stronger application security strategies. [more]
Securing the AI supply chain: Embedding AI across supply chains offers powerful benefits in risk management, including enhanced visibility, regulatory compliance, and operational resilience. However, this integration also introduces significant vulnerabilities, especially when AI is used without robust security. Enterprises face emerging threats from data poisoning, model corruption, and prompt injection attacks, which can compromise systems and expose sensitive information. As hackers increasingly exploit AI’s capabilities, securing the AI supply chain (from input to model to output) is critical. To remain competitive and resilient, organizations must adopt unified AI platforms with integrated security, conduct thorough third-party risk assessments, and collaborate across ecosystems to ensure both digital and AI-driven supply chains are protected. [more]
Risk of using AI-assisted tools when performing development work: Developers usually face ongoing internal struggle between the creative urge to code and deploy quickly, and the cautious drive to rigorously test and secure code. While experienced devs can balance these forces, inexperienced or pressured ones often let speed override security. The will lead to the accidental exposure of sensitive developer secrets like API keys and tokens. These secrets are prime targets for cyber attackers and increasingly leaked due to practices like hardcoding credentials, improper sharing, and careless use of AI code assistants (which accelerate productivity but also amplify risk). With a surge in leaked secrets, especially from AI-assisted repositories, there is a need for both technical safeguards (like automated secrets detection and secure credential management) and cultural shifts (like fostering transparency and senior mentorship). Ultimately, securing the future of software development means empowering developers to innovate responsibly by strengthening both their AI-fueled id and their security-conscious superego. [more]
Legacy tech could be weakness to AI data centers: The rise of AI data centers marks a major shift from traditional facilities to high-density "AI factories" packed with tens of thousands of GPUs and massive power consumption, making them critical and valuable digital assets but also prime targets for attackers. Despite their high security, these centers face growing wireless vulnerabilities, as overlooked legacy and multi-protocol wireless devices create exploitable entry points for cyber threats, with wireless-related vulnerabilities rising sharply in 2024. Experts recommend a multi-layered defense including zero-trust models, network segregation, advanced wireless intrusion detection, and leveraging AI-driven tools to detect and mitigate risks, emphasizing that proactive, continuous monitoring of wireless ecosystems is essential as AI data centers grow more complex and valuable. [more]
Web3 Cryptospace Spotlight
Quantum posts risk to Bitcoin as well as other institutions: Michael Saylor, executive chairman of Strategy (formerly MicroStrategy) and a prominent Bitcoin advocate, downplays concerns about quantum computers threatening Bitcoin. In a CNBC interview, he argued that if quantum technology ever became powerful enough to break encryption, it would first devastate centralized institutions like banks, tech giants, and government systems long before it could affect Bitcoin. Despite the theoretical threat quantum computing poses to cryptographic security, Saylor believes Bitcoin’s decentralized design and the possibility of future upgrades make it less vulnerable than traditional systems, which present more centralized and lucrative targets for potential quantum hackers. [more]
Separately, Project Eleven’s first product, Yellowpages, is a cryptographic registry that enables users to create quantum-resistant proofs linking their current Bitcoin addresses to new secure ones without onchain activity, serving as a safeguard if quantum computers threaten existing Bitcoin keys. The registry has been audited by Cure 53, with results forthcoming, and Project Eleven is engaging with Bitcoin Core developers on possible upgrades. While the quantum threat to Bitcoin remains debated, significant institutions like the US NSA and NIST plan to adopt quantum-resistant systems by 2035, with expert estimates placing the emergence of cryptography-breaking quantum computers around 2033. For now, despite advances in quantum computing, classical computers still outperform quantum machines in factoring large keys. As such, the threat is seen as a matter of when, not if, quantum capabilities become practical. [more]
Digital assets holders - when you are your greatest risk: A Kraken survey of nearly 800 U.S. crypto users reveals that while 20% have fallen victim to hacks or scams, a far greater fear among holders is making their own costly mistakes, such as, sending funds to the wrong address or forgetting key credentials. This made up of more than two-thirds admit to having done. This fear of human error, exacerbated by clunky wallet interfaces and the irreversible nature of blockchain transactions, is a major barrier to wider adoption. Half of respondents saying it has stopped them from investing more. Experts argue that better wallet design, including featuring clearer interfaces and undo options, could prevent such self-sabotage. This will extend the focus from just guarding against external threats to protecting users from themselves. [more]
$81M hack of Nobitex: Iran-based crypto exchange Nobitex was hacked, losing over $81 million in digital assets across Tron and EVM-compatible blockchains, as revealed by onchain investigator ZachXBT. The attackers exploited the platform using “vanity addresses,” with a pro-Israel hacker group, Gonjeshke Darande, claiming responsibility and framing the hack as a political statement against Iran amid escalating Israel-Iran conflict. Nobitex confirmed unauthorized access to hot wallets, suspending them immediately and assuring users that assets in cold storage remain safe, promising compensation via insurance funds. Experts suggest the breach stemmed from critical access control failures, but notably, stolen funds remain unmoved. This hack adds to 2025’s growing crypto theft tally, exceeding $2.1 billion, mostly from wallet compromises and social engineering scams rather than protocol-level flaws. [more]