TechRisk #141: Plug-and-play cybercrime toolkits
Plus, risk of racing into Agentic AI, first malicious Model-Context-Prompt (MCP) server, and more!
Tech Risk Reading Picks
Plug and play cybercrime toolkits: Cybersecurity researchers at Varonis have uncovered two new AI-driven plug-and-play cybercrime toolkits (i.e. MatrixPDF and SpamGPT). They have significantly lower the barrier for launching sophisticated attacks. MatrixPDF enables attackers to weaponize PDF files by embedding malicious scripts and deceptive prompts that can steal sensitive data or install malware while bypassing standard email security checks. SpamGPT, on the other hand, functions as a spam-as-a-service platform powered by an AI assistant, “KaliGPT,” allowing even novice criminals to run large-scale phishing campaigns. This includes professional-looking dashboards, inbox testing, and abuse of trusted services like AWS for improved deliverability. Together, these tools have suggest a dangerous shift toward accessible, AI-powered cybercrime. [more]
Risk of racing into Agentic AI: Agentic AI is spreading rapidly across enterprises, promising automation, efficiency, and innovation, but also creating unprecedented risks. Unlike humans, rogue agents can replicate, escalate access, and wreak havoc across systems in seconds, exposing vulnerabilities in identity management, APIs, as well as governance. Many organizations mistake tool adoption for readiness, but true maturity requires governance, discoverable APIs, event-driven architecture, and proactive controls. Without centralized agent management, observability, and standards like the Agent-to-Agent protocol, companies face agent sprawl, exploding costs, and catastrophic breaches. Success depends not on speed of deployment but on building secure, scalable, and trusted infrastructures that balance autonomy with control. [more]
First malicious Model-Context-Prompt (MCP) server: The discovery of postmark-mcp, the first malicious Model-Context-Prompt (MCP) server in the wild, reveals a serious supply-chain threat in AI-driven software. Masquerading as a legitimate npm package for integrating AI assistants with the Postmark email service, it gained trust over 15 benign versions before a backdoored update began silently exfiltrating emails by bcc’ing them to an attacker-controlled domain. With roughly 1,500 weekly downloads and use across hundreds of workflows, the package likely leaked thousands of sensitive emails daily, including passwords, invoices, and internal communications. The attacker posed as a credible developer, copying code from Postmark’s real GitHub repo, and relied on the open-source community’s trust rather than sophisticated exploits to execute this attack. [more]
Gemini AI suite exploited: Cybersecurity researchers uncovered and disclosed three now-fixed flaws in Google’s Gemini AI suite, dubbed the “Gemini Trifecta,” which could have enabled attackers to steal sensitive user data and compromise cloud resources. The vulnerabilities included (a) a prompt injection flaw in Gemini Cloud Assist that allowed hidden commands within log data to exploit cloud services, (b) a search-injection flaw in the Gemini Search Personalization model that let attackers manipulate Chrome search history to exfiltrate private information, and (c) an indirect prompt injection flaw in Gemini’s Browsing Tool that could send user data to malicious servers. Google has since patched the issues by removing hyperlink rendering in log summarization and strengthening defenses against prompt injections. [more]
Salesforce’s AI-powered Agentforce system has critical vulnerability: A critical vulnerability called ForcedLeak was discovered in Salesforce’s AI-powered Agentforce system, allowing attackers to steal sensitive CRM data via an indirect prompt injection attack. Identified by Noma Security and rated CVSS 9.4, the flaw exploited the Web-to-Lead feature, where malicious instructions hidden in lead data tricked the AI into exfiltrating customer details, sales strategies, and internal records. Researchers also found that Salesforce’s outdated security rules trusted an expired domain, which attackers could exploit to send stolen data externally. Salesforce has rolled out patches to address the vulnerability. [more]
OpenShift AI privilege escalation: A severe privilege-escalation flaw (CVE-2025-10725, CVSS 9.9) has been disclosed in Red Hat OpenShift AI, a platform for managing predictive and generative AI models at scale, that could allow authenticated low-privileged users (e.g., data scientists using Jupyter notebooks) to escalate privileges and gain full cluster administrator control. The issue stems from an overly permissive ClusterRole that lets any authenticated account create jobs in any namespace, which attackers could exploit to run malicious jobs, steal service account tokens, and ultimately compromise cluster master nodes. This leads to a complete takeover of infrastructure, services, and hosted applications. [more]
GenAI tools accessing sensitive data at scale: A new Concentric AI report warns that the explosion of unstructured, duplicate, and stale data is amplifying security risks. This is especially a key concern as generative AI tools like Microsoft Copilot interact with vast amounts of sensitive information. On average, organizations saw Copilot access nearly three million sensitive records in early 2025, with thousands of user interactions raising the chance of misuse, while shadow GenAI use further obscures data exposure. Excessive internal and external sharing, including risky “Anyone links,” continues to spread sensitive files widely, particularly in financial services and healthcare. [more]
Growing number of advanced LLMs phishing attack: Cybercriminals are now using AI-powered tools, particularly Large Language Models (LLMs), to craft advanced phishing scams that bypass traditional detection methods. Microsoft recently intercepted a campaign targeting US organizations, where attackers used a compromised business email to send a fraudulent file-sharing message with an SVG file disguised as a PDF. The file mimicked a business dashboard and hid malicious code encoded with common business terms, redirecting victims to fake sign-in pages. Microsoft’s AI-based Security Copilot confirmed the complexity suggested machine-generation, not human authorship. [more]
Cyber threats growing: AI is dramatically escalating cyber threats in the accounting industry, making firms and finance professionals prime targets for sophisticated attacks like AI-driven phishing, business email compromise, and ransomware. Hackers exploit AI to craft highly convincing, personalized scams, bypass multifactor authentication, and probe corporate systems with unprecedented speed and precision. Risks now extend beyond internal networks to third- and fourth-party cloud vendors, while insider threats further complicate security. Experts emphasize a “post-breach mentality” that combines layered defenses, rapid detection, employee training, zero-trust architecture, and cyber insurance, along with leveraging AI for defense. [more][more_IBM_Cost_of_a_Data_Breach]
Web3 Cryptospace Spotlight
Hyperdrive lost over $700K : Hyperdrive, a DeFi project on the Hyperliquid blockchain, recently suffered a security breach in two of its thBILL markets, resulting in the theft of approximately $773,000 in crypto, including 288.37 BNB and 123.6 ETH, which were later split and bridged to other chains. The thBILL token and HYPED liquid staking token were unaffected. In response, Hyperdrive paused all money markets, identified and fixed the vulnerability, and is developing a compensation plan for affected accounts. [more]
AI auditor detected a critical vulnerability in its smart contract: Weeks before a major decentralized lending protocol launched, an AI auditor detected a critical vulnerability in its smart contract that would have allowed attackers to drain nearly $2 million by exploiting a rounding error in withdrawals. The flaw, simple but catastrophic, was patched before reaching mainnet. This highlights how AI tools like Sherlock can complement traditional human audits by continuously scanning code for subtle logic errors at scale. The growing role of AI in DeFi security offers an extra layer of defense that can catch high-stakes bugs human reviewers might miss, and signals a potential shift toward combining AI oversight with conventional audits. [more]