TechRisk #132: Surge in AI security breaches
Plus, vulnerable vibe coding platform, stealth commands execution, agentic AI security considerations, resurrecting zombie dApps, and more!
Tech Risk Reading Picks
Surge in AI security breaches: A new report reveals an alarming surge in AI security breaches, with prompt injection attacks now the leading cause of failures and significant financial losses. The findings, crucial for AI developers, cybersecurity experts, and business leaders alike, underscore a dangerous gap between rapid AI adoption and robust security measures. These incidents, which have more than doubled since 2024, are costing businesses over $100,000 in some cases and affecting every sector, not just tech. While Generative AI accounts for the majority of current incidents, the report warns that Agentic AI systems, with their real-world action capabilities, pose the most irreversible threat. This escalating crisis demands immediate and sophisticated defenses beyond basic input validation. [more]
Vulnerable vibe coding platform: Cybersecurity researchers from Wiz have disclosed a now-patched critical vulnerability in Base44, an AI-powered “vibe coding” platform owned by Wix, that allowed unauthorized access to private applications by exploiting exposed registration and verification endpoints using only a publicly visible “app_id.” This flaw bypassed all authentication measures, including SSO, effectively granting attackers full access to private data. Though the issue was fixed within 24 hours of responsible disclosure on July 9, 2025, it underscores the broader risks associated with AI development platforms, which are increasingly vulnerable to novel threats like prompt injection, jailbreaks, and toxic flows. As generative AI becomes integral to enterprise environments, researchers emphasize the urgent need for foundational security measures in AI systems to protect sensitive data and prevent misuse. [more]
Stealth commands execution: A critical flaw in Google’s Gemini CLI, a command-line AI coding assistant, allowed attackers to stealthily execute malicious commands and exfiltrate data by exploiting how it handled context files like 'README.md'. Discovered by security firm Tracebit shortly after Gemini CLI's June 25, 2025 release, the vulnerability enabled prompt injection and bypassed user prompts by abusing the tool’s allow-listed commands, such as
grep
. Attackers could hide harmful instructions within seemingly benign commands, which Gemini would auto-execute without user confirmation. Google patched the issue in version 0.1.14 released on July 25, and users are advised to update immediately and avoid scanning untrusted codebases. [more]Cost of Shadow AI: IBM’s 2025 Cost of a Data Breach Report exposes a rising threat known as Shadow AI, where employees use unauthorized AI tools, costing organizations an average of $670,000 per breach. These breaches often result from poor governance, with 63% of affected companies lacking AI access controls and oversight, and nearly 60% of incidents stemming from compromised supply chains. Weaponized AI is also proliferating, with attackers leveraging generative tools for phishing and deepfakes. However, organizations that implement strong AI governance, adopt AI-driven security tools, and integrate DevSecOps practices save up to $2 million per breach and reduce resolution time by 80 days. The report urges immediate action: establish AI governance, audit shadow AI use, and deploy AI defensively because in a world where AI powers both attacks and defenses, survival depends on managing the risks as much as seizing the benefits. [more]
Developer’s debt from AI assistants: Stack Overflow’s 2025 Developer Survey reveals a growing paradox in AI adoption: while 84% of developers now use or plan to use AI tools, trust in their accuracy has dropped sharply, with only 33% expressing confidence in their outputs. The core issue lies in AI-generated code that’s “almost right,” which often introduces technical debt and disrupts workflows, as developers spend excessive time debugging and correcting rather than gaining productivity. Despite these challenges, developers continue using AI tools alongside community resources like Stack Overflow, underlining the enduring value of human expertise. The report urges enterprises to prioritize AI literacy, stronger debugging frameworks, and cautious integration strategies to fully realize AI’s benefits without compromising code quality or security. [more]
LLM-powered vulnerability assessment: Vulnhuntr is an open-source tool designed to identify remotely exploitable vulnerabilities using a combination of large language models (LLMs) and static code analysis. Unlike traditional tools, Vulnhuntr traces the flow of user input (e.g., GET or POST data) through an application and across multiple files to build complete call chains. Its custom engine incrementally feeds only relevant code to the LLM, reducing hallucinations and enabling accurate detection of complex vulnerabilities such as XSS, SQL injection, and local file inclusion. This agent-like system, developed before LLM agents became mainstream, has uncovered numerous zero-day vulnerabilities in major open-source projects. [more]
Agentic AI security considerations: The security of agentic AI systems is crucial, given their complex, multi-layered architecture. These systems face various threats across their Data, Orchestration, Agent, and System layers, ranging from data poisoning and model exfiltration to recursive agent invocation abuse and tool subversion. Due to the interconnected nature of these layers, risks can propagate throughout the system. Therefore, securing agentic AI requires a comprehensive approach that combines robust design principles with ad-hoc solutions, addressing specific vulnerabilities at each layer to prevent manipulation, data breaches, and service disruptions. [more]
Gap in AI protective controls: While 79% of organizations are actively deploying AI, only 6% have implemented dedicated AI security strategies, leaving most enterprises vulnerable to evolving threats like model manipulation, data leakage, and adversarial attacks, according to the SandboxAQ AI Security Benchmark Report. Despite rising concerns—especially around non-human identities and AI-enhanced cyberattacks—just 28% have conducted AI-specific security assessments, and only 10% have dedicated AI security teams. Most still rely on outdated, rule-based tools ill-suited for AI systems. However, awareness is growing, with 85% planning to boost AI security budgets, focusing on protecting training data, securing autonomous agents, and improving incident response for AI-driven risks. [more]
Web3 Cryptospace Spotlight
$3.1B lost with AI attacks over 1000%: The Hacken 2025 Half-Year Web3 Security Report reveals that Web3 platforms suffered $3.1 billion in losses from exploits and scams in H1 2025. This amount has already surpassed total 2024 losses and marking the worst year for Web3 security to date. Ethereum bore the brunt (61.4%), followed by BNB Chain and Arbitrum, while access control failures were the top cause, accounting for $1.83 billion which drove 59% of total losses. Notably, AI-related attack vectors surged 1,025% due to vulnerabilities in inference layers and API design. Major incidents included the Munchables ($290M) and Pike Finance ($136M) breaches. The report underscores the urgent need to shift cybersecurity from reactive to core strategy amid rising AI-Web3 integration and growing threats from both criminal and geopolitical actors, calling for automated defenses, continuous monitoring, and stronger regulatory coordination. [more][more-2]
Resurrecting zombie dApps: Hackers are hijacking expired domains of defunct DeFi (decentralized finance) projects, dubbed “zombie dApps”, to trick users into connecting their wallets and signing malicious transactions that drain their crypto. According to crypto security firm Coinspect, attackers reuse the original branding of these abandoned projects, many of which are still linked by reputable data aggregators like DeFiLlama and DappRadar, to appear legitimate. Once users visit these reactivated sites, they’re lured into signing harmful onchain transactions. Coinspect has identified over 100 such incidents and warned that with 475 dead domains and rising DeFi activity, the problem is escalating. The firm urges former project teams to renew domains, post shutdown notices, and notify crypto platforms to prevent exploitation. While current scams are relatively crude, Coinspect warns future attacks may be more sophisticated and harder to detect. [more]
Polygon outage: Polygon's Heimdall V2 mainnet experienced a one-hour outage on Wednesday due to a suspected validator exit, though the Bor layer continued processing transactions normally. The recent Heimdall V2 upgrade, launched in July, reduced finality times to five seconds but introduced increased system complexity and potential failure points, marking it as Polygon's most technically complex hard fork since 2020. Post-outage, block explorers were re-synced, though some RPC providers faced data inconsistencies. A similar issue during the Heimdall V1 upgrade in March 2022 led to several hours of downtime. [more]