TechRisk #140: Agentic AI identity problem
Plus, ShadowLeak ChatGPT flaw, urgent need to prepare for the risks posed by quantum computing to current cryptographic systems, AI-fueled crypto scams, and more!
Tech Risk Reading Picks
Agentic AI identity problem: Businesses are rapidly deploying autonomous AI agents across functions such as sales, finance, research, and customer service to enhance efficiency, but this surge introduces significant cybersecurity risks. These agents, expected to exceed 45 billion non-human identities by year’s end, require unique identity management and access controls, yet most organizations lack mature strategies to govern them. Unlike humans, AI agents rely on API tokens or certificates, have dynamic lifecycles, and often need sensitive data access, making them attractive targets for threats like prompt injection, phishing, and deepfakes. To mitigate their risks (such as data leaks, unauthorized actions, and system compromises), companies must embed security, interoperability, and visibility into AI agents from the outset, adopting granular identity controls, and comprehensive monitoring. [more]
Critical flaw in ChatGPT Deep Research: Radware security researchers Zvika Babo and Gabi Nakibly uncovered a critical flaw in OpenAI’s ChatGPT Deep Research agent, ShadowLeak, which allowed attackers to steal private user data through a “zero-click” attack hidden in normal-looking emails. By embedding invisible malicious instructions inside emails, attackers could trick the agent into exfiltrating sensitive information, such as Gmail data, directly from OpenAI’s cloud servers without user awareness or visible traces. This method is called indirect prompt injection. Unlike earlier browser-based exploits, this server-side attack made detection much harder. The researchers demonstrated how social engineering tactics and Base64 encoding enabled them to bypass safety checks with 100% success, and noted the method could target services beyond Gmail, including Google Drive, Microsoft Teams, and GitHub. OpenAI patched the vulnerability in August 2025 after Radware’s responsible disclosure, and the researchers recommend sanitizing emails before AI processing and monitoring agent actions closely to prevent similar risks. [more]
LLM enabled malware with ransomware and reverse shells: Cybersecurity teams are warning that generative AI is increasingly being weaponized: SentinelOne researchers revealed a probable proof-of-concept called MalTerminal. It is a Windows binary (and matching Python scripts) that embeds GPT-4 via an OpenAI API and enables the runtime generation of ransomware or reverse shells. Separately, cybersecurity company, StrongestLayer, documented phishing emails that hide prompt-injection instructions in HTML to fool AI security scanners and trigger a Follina (CVE-2022-30190) HTML Application (HTA) payload that disables Defender and fetches more malware, using “LLM poisoning” to bypass analysis. Trend Micro also found attackers abusing AI-powered hosting (Lovable, Netlify, Vercel) to deploy fake CAPTCHA pages that redirect to credential-harvesting sites, showing how easy, cheap, and scalable AI tools have become for adversaries. [more]
29% of organizations suffered attacks on enterprise GenAI infrastructure over the past year: A recent Gartner survey of 302 cybersecurity leaders across North America, EMEA, and Asia/Pacific found that 62% of organizations experienced deepfake attacks using social engineering or automated processes, while 32% faced attacks on AI applications, including prompt-based manipulations of generative AI (GenAI) systems. The survey also revealed that 29% of organizations suffered attacks on enterprise GenAI infrastructure over the past year. Gartner analysts highlighted that as GenAI adoption grows, threats like phishing, deepfakes, and prompt-based attacks are becoming mainstream, urging organizations to strengthen core cybersecurity controls and apply targeted measures for emerging risks rather than making sweeping changes. [more]
NIST on the urgent need to prepare for the risks posed by quantum computing to current cryptographic systems: The NIST National Cybersecurity Center of Excellence (NCCoE) has released a draft of Cybersecurity White Paper 48, which addresses the urgent need to prepare for the risks posed by quantum computing to current cryptographic systems. Since quantum computers could eventually break today’s widely used public-key algorithms, attackers may already be collecting encrypted data for future decryption. To mitigate this threat, organizations must begin transitioning to quantum-resistant cryptography. CSWP 48 supports this process by mapping migration capabilities to established NIST resources (e.g. Cybersecurity Framework 2.0 and SP 800-53 security controls) helping organizations align their PQC strategies with recognized cybersecurity objectives and best practices. [more]
EU airports disrupted after cyberattack: A cyberattack on Collins Aerospace’s MUSE check-in and boarding system disrupted operations at major European airports including Brussels, Berlin’s Brandenburg, and London Heathrow, forcing manual check-ins, baggage handling, and delays on Saturday. While the overall impact on flights was limited, Brussels saw some cancellations and delays, and Heathrow reported minimal disruption. The incident highlighted the aviation industry’s vulnerability to attacks on shared third-party systems. Experts described the breach as a sophisticated intrusion targeting core infrastructure used by multiple airlines, raising concerns about supply chain security. Collins, a subsidiary of RTX Corp., said it was working to restore services. Passengers, meanwhile, faced long waits and frustration as airports managed the fallout. [more]
Web3 Cryptospace Spotlight
Quantum to disrupt Bitcoin within 5 years: At the All-In Summit, Yakovenko warned that Bitcoin faces a 50/50 chance of quantum disruption within five years if Shor’s algorithm becomes practical, potentially breaking its current signature schemes, and urged preparation for a migration to quantum-resistant cryptography. While praising Bitcoin’s simplicity and resilience, he stressed that safeguarding its settlement guarantees may require coordinated upgrades before 2030 if breakthroughs materialize. [more]
AI-fueled crypto scams: In the first half of 2025, crypto scams reached alarming levels with over $3 billion stolen. Much of it fueled by AI, which has made phishing, impersonation, and deepfake-driven fraud far more convincing and accessible to even low-skill criminals. Attackers now leverage AI to analyze social and blockchain data, generate realistic fake websites, bots, and support agents, and even bypass KYC with synthetic identities. Deepfakes of public figures on TikTok and YouTube are amplifying these schemes, and cybercriminals openly sell AI tools and services to aid them. While authorities have disrupted some operations, the scale and sophistication of AI-assisted scams are eroding trust in crypto exchanges. [more]
$2M exploit: A $2 million exploit hit the New Gold Protocol (NGP) on BNB Chain after attackers exploited a vulnerability in its getPrice() function, which relied solely on Uniswap V2 pool reserves, making it susceptible to flash loan manipulation. By distorting the pool’s token reserves, the attacker artificially lowered NGP’s token price, bypassed transaction limits, drained funds, and funneled them through Tornado Cash, leading to an 88% price crash. [more]
$6.28M phishing attack: 18 Sep 2025 - a $6.28M phishing attack drained staked Ethereum (stETH) and Aave-wrapped Bitcoin (aEthWBTC) by exploiting flaws in DeFi Permit signatures, where victims unknowingly approved malicious wallet pop-ups that enabled zero-gas-fee transfers. Blockchain security firm Scam Sniffer linked the exploit to rising phishing schemes leveraging EIP-7702 batch-signature vulnerabilities, which fueled a 72% spike in such attacks in August 2025 alone. [more]
The attack exploited a vulnerability in “Permit” signature mechanisms, a feature designed to streamline token transfers by allowing users to
sign off-chain messages authorizing transactions without incurring
gas fees. According to Yu Xian, founder of SlowMist, the victim unknowingly approved malicious permits through routine wallet pop-ups, enabling hackers to drain the account without triggering immediate red flags. Scam Sniffer noted that the attacker combined Permit and TransferFrom functions to execute the theft, a method that bypasses traditional on-chain approval processes and obscures activity until funds are redirected.