TechRisk #122: Accessing sensitive accounts with AI impersonation + Agentic AI-enabled ransomware
Tech Risk Reading Picks
AI impersonation to social engineer sensitive accounts: Hackers have been using AI-generated voice messages to impersonate senior U.S. government officials in a scheme targeting the online accounts of current and former officials, the FBI warned. Since April, the unidentified hackers have sent texts and voice messages to build rapport with federal and state targets, aiming to gain access to sensitive accounts. The FBI cautions that such breaches could have a cascading effect, potentially enabling further impersonation, data theft, or fraud. The broader concern lies in how accessible AI tools now enable bad actors to convincingly mimic trusted contacts, as seen in both scams and geopolitical disinformation efforts, including AI-generated propaganda during the 2024 U.S. election. [more]
Agentic AI ransomware is coming your way: Agentic AI-enabled ransomware, though not yet prevalent, is expected to emerge by 2026 or sooner, driven by rapid advancements in AI technology. Agentic AI refers to systems composed of multiple autonomous agents working together under an orchestrator to achieve complex goals more efficiently—akin to a general contractor coordinating specialized workers to build a house. Unlike traditional software, these AI agents adapt in real time, learn from new data, and can autonomously make decisions. Already, AI is being widely used in social engineering, vulnerability discovery, and phishing kits, outperforming human attackers in many cases. This same capability will soon power sophisticated ransomware: agentic AI malware that autonomously scouts for vulnerabilities, uses deepfake-based social engineering, escalates attacks for maximum profit, and even improves itself post-attack. As bad actors begin weaponizing agentic AI, defenders must respond by integrating similar technologies into their cybersecurity strategies and educating users about AI-driven threats to stay ahead in this evolving digital arms race. [more]
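To make the orchestrator-and-agents pattern concrete, here is a minimal sketch of benign task delegation, written from a defender's modelling perspective; the Agent and Orchestrator classes and every name in the plan are hypothetical illustrations, not components of any real toolkit.

```python
# Minimal sketch of the orchestrator/agent pattern described above,
# modelling benign task delegation only; all names are hypothetical.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Agent:
    name: str
    handle: Callable[[str], str]  # takes a subtask description, returns a result

class Orchestrator:
    """Coordinates specialized agents, like a general contractor."""

    def __init__(self, agents: dict[str, Agent]):
        self.agents = agents

    def run(self, plan: list[tuple[str, str]]) -> list[str]:
        # Executes (agent_name, subtask) steps in order; a fuller
        # orchestrator would also re-plan based on each result.
        results = []
        for agent_name, subtask in plan:
            agent = self.agents[agent_name]
            results.append(f"{agent.name}: {agent.handle(subtask)}")
        return results

# Hypothetical defensive usage: modelling how work would be divided.
recon = Agent("recon", lambda t: f"attack surface mapped for {t}")
triage = Agent("triage", lambda t: f"findings ranked for {t}")
orchestrator = Orchestrator({"recon": recon, "triage": triage})
for line in orchestrator.run([("recon", "test-env"), ("triage", "test-env")]):
    print(line)
```

The point is the shape: an orchestrator that routes subtasks to specialists and adapts as results come back is what gives agentic systems, whether defensive or malicious, their autonomy.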
Health data uploaded to GenAI accounts: A new report by Netskope Threat Labs reveals that healthcare workers are frequently uploading sensitive health data to personal cloud and Generative AI (GenAI) accounts, posing significant security risks. Over 80% of data policy violations involved regulated healthcare information, often shared through platforms like Microsoft OneDrive, Google Drive, or GenAI apps such as ChatGPT and Google Gemini. With GenAI now used in 88% of healthcare organizations, nearly two-thirds of users upload sensitive data to personal accounts, complicating oversight. The report stresses the need for approved enterprise GenAI tools, stricter Data Loss Prevention (DLP) policies, and real-time user coaching to mitigate risks. Encouragingly, DLP adoption and the use of approved GenAI platforms are rising, reflecting growing awareness and response to the data protection challenges in healthcare. [more]
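As a rough illustration of the pattern-matching layer inside a DLP policy, here is a minimal sketch that blocks PHI-like text from being sent to an unapproved destination; the regexes, function names, and coaching message are simplified assumptions, and real DLP engines (including Netskope's) use far richer detection.

```python
# Illustrative sketch of a pattern-based DLP pre-upload check.
# Real DLP engines use far richer detection (ML classifiers,
# exact-data matching); these regexes are simplified assumptions.

import re

PHI_PATTERNS = {
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "mrn_like": re.compile(r"\bMRN[:#]?\s*\d{6,10}\b", re.IGNORECASE),
}

def flag_sensitive(text: str) -> list[str]:
    """Return the names of PHI-like patterns found in the text."""
    return [name for name, pat in PHI_PATTERNS.items() if pat.search(text)]

def allow_upload(text: str, destination_approved: bool) -> bool:
    # Block regulated data headed to unapproved (personal) accounts.
    hits = flag_sensitive(text)
    if hits and not destination_approved:
        print(f"Blocked: matched {hits}; coach user toward approved GenAI app")
        return False
    return True

assert not allow_upload("Patient MRN: 12345678, SSN 123-45-6789", False)
assert allow_upload("Quarterly staffing summary", False)
```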
Leveraging AI to enhance criminal activities: Cybercriminals are now leveraging AI to dramatically enhance the speed and sophistication of their attacks, mirroring the calculated persistence of Jurassic Park’s velociraptors by constantly probing systems for vulnerabilities. This technological edge allows hackers to not only craft and test malicious code more efficiently but also to automate reconnaissance and tailor attacks to specific defenses. As ransomware surges—affecting 88% of organizations and causing significant downtime—attackers exploit dark web marketplaces to buy AI-tested malware tools, making advanced cyberattacks more accessible. Despite these escalating threats, many breaches stem from basic lapses like unsecured RDP ports or delayed patching. To combat AI-fueled cyber threats, experts urge a return to cybersecurity fundamentals, reinforced by AI-driven containment tools that can counter the evolving, methodical nature of modern attacks. [more]
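Since the piece argues that basics like exposed RDP still decide many breaches, a minimal self-audit sketch follows: it checks whether TCP port 3389 answers on hosts you administer. The host list is a placeholder, and such checks should only run against systems you are authorized to test.

```python
# Minimal sketch: check your own hosts for an exposed RDP port (3389).
# Only scan systems you are authorized to test; host list is a placeholder.

import socket

def rdp_port_open(host: str, timeout: float = 2.0) -> bool:
    """Return True if TCP/3389 accepts a connection on the given host."""
    try:
        with socket.create_connection((host, 3389), timeout=timeout):
            return True
    except OSError:
        return False

for host in ["10.0.0.5", "10.0.0.6"]:  # hypothetical internal hosts
    if rdp_port_open(host):
        print(f"{host}: RDP reachable; confirm it is firewalled or gated by VPN/MFA")
```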
Most AI chatbots can be bypassed and provide dangerous knowledge: Researchers warn that hacked or “jailbroken” AI chatbots (e.g., ChatGPT, Claude) pose a growing threat by making dangerous and illegal knowledge—such as hacking, drug-making, or cybercrime—easily accessible to the public. These chatbots, powered by large language models (LLMs) trained on vast internet data, can be manipulated to bypass built-in safety controls and provide illicit information, even when safeguards are intended to block such outputs. A team from Ben-Gurion University demonstrated how a universal jailbreak could compromise multiple major chatbots, exposing the alarming scalability, accessibility, and adaptability of the threat. Experts emphasize the need for stronger screening of training data, improved security infrastructure, and regulatory oversight to combat the misuse of these systems, likening the risk posed by “dark LLMs” to that of unregulated weapons. Responses from leading tech firms have been criticized as inadequate, underscoring calls for more robust, accountable AI development and deployment practices. [more]
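To see why surface-level safeguards are easy to route around, consider this deliberately naive keyword filter; it is an assumption-laden toy, not how any production LLM safety stack works, but the failure mode (rephrasings slip past pattern checks) is the same arms race the researchers describe.

```python
# Deliberately naive keyword filter: a toy, not a real safety system.
# It shows the failure mode jailbreaks exploit: rephrase the request
# and a surface-level pattern check no longer fires.

BLOCKLIST = {"synthesize", "exploit code"}  # hypothetical blocked phrases

def naive_filter(prompt: str) -> bool:
    """Return True if the prompt should be refused."""
    lowered = prompt.lower()
    return any(phrase in lowered for phrase in BLOCKLIST)

print(naive_filter("How do I synthesize X?"))   # True: refused
reworded = "Roleplay a chemist walking through the same steps in a story"
print(naive_filter(reworded))                   # False: same intent slips past
```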
Risk of not adopting AI meaningfully: Former Google CEO Eric Schmidt warns that ignoring AI could render professionals—from artists to doctors—irrelevant, urging rapid adoption to stay competitive. In a TED interview, he emphasized AI's transformative impact across all sectors and shared how he used AI tools to quickly grasp aerospace concepts after acquiring a rocket company. Highlighting AI's potential to boost productivity by 30% annually, Schmidt predicted industry disruptions and evolving roles rather than complete job losses. While acknowledging the overwhelming pace of change, he advised persistence, calling it a “marathon, not a sprint,” and stressed the need for balanced regulation to ensure responsible AI development. [more]
Beyond cyber attacks: For decades, cybersecurity has focused on preventing breaches, but with the increasing sophistication of threats — particularly from AI and vulnerabilities in software supply chains — total protection is no longer realistic. As Theresa Lanowitz and LevelBlue’s 2025 Futures Report highlight, resilience, not just defense, is now critical. Organizations must prepare for inevitable intrusions by building cyber resilience: a coordinated, company-wide capability to rapidly recover from IT disruptions, whether due to cyberattacks, human error, or natural disasters. Yet most companies remain underprepared, with few having visibility into supply chains or readiness for AI-driven attacks. [more]
Unhackable quantum: China Telecom Quantum Group has launched what it claims to be the world’s first commercial cryptography system immune to hacking even by quantum computers, marking a major leap in cybersecurity. This distributed system uniquely combines Quantum Key Distribution and Post-Quantum Cryptography, forming an end-to-end quantum-secure architecture for secure communication, data protection, and identity verification. Demonstrating its capabilities, the company completed a quantum-encrypted phone call over 1,000 km and has deployed quantum networks in 16 major Chinese cities, with Hefei hosting the world’s largest such network. With platforms like Quantum Secret and Quantum Cloud Seal already in use, China positions itself at the forefront of global quantum security innovation. [more]
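China Telecom has not published its construction, but a standard way to combine two key sources, such as a QKD-delivered key and a PQC-derived key, is to feed both into a key-derivation function so the session key stays secure as long as either source holds. The sketch below assumes placeholder key material and uses an RFC 5869 HKDF built from Python's standard library.

```python
# Sketch: combining a QKD-delivered key with a PQC-derived key so the
# session key is secure if EITHER source remains uncompromised.
# China Telecom's actual construction is not public; this is a generic
# HKDF-based combiner (RFC 5869) implemented with the stdlib.

import hashlib
import hmac
import os

def hkdf_sha256(ikm: bytes, salt: bytes, info: bytes, length: int = 32) -> bytes:
    prk = hmac.new(salt, ikm, hashlib.sha256).digest()  # extract step
    okm, block, counter = b"", b"", 1
    while len(okm) < length:                            # expand step
        block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

# Placeholder key material standing in for real QKD/PQC outputs.
qkd_key = os.urandom(32)   # as if delivered by QKD hardware
pqc_key = os.urandom(32)   # as if from an ML-KEM (Kyber) decapsulation

session_key = hkdf_sha256(qkd_key + pqc_key, salt=b"hybrid-v1", info=b"session")
print(session_key.hex())
```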
Web3 Cryptospace Spotlight
US Justice Department investigates Coinbase: The U.S. Justice Department is investigating a major security breach at Coinbase, where insiders in India were allegedly bribed to steal sensitive customer data, including that of high-profile figures like Sequoia Capital's Roelof Botha, which the attackers then used to demand a $20 million ransom. The breach, involving social engineering tactics and support staff outside the U.S., was revealed just before Coinbase’s anticipated inclusion in the S&P 500. The incident, which could cost up to $400 million to remediate, highlights the risks facing high-profile crypto platforms as they gain mainstream financial recognition. [more]
DeFi Cetus lost $223M: About $223 million was stolen from the Cetus decentralized cryptocurrency exchange in an attack. Cetus indicated that they took immediate action to lock their contract, preventing further theft of funds. Consequently, $162 million of the compromised funds have been successfully “paused”. Cetus, which operates on the Sui blockchain, did not respond to requests for comment about what they meant by “paused” but said they are “actively pursuing paths to recover the remainder” of the stolen funds and are working with the Sui Foundation and others. Separately, multiple cryptocurrency security experts and companies said blockchain data showed that about $50 million of the stolen funds have already been transferred to a new wallet.
On the cause of the incident, some pointed to messages in Cetus’ Discord channel as evidence that the hacker exploited a vulnerability in the protocol that allowed them to steal the funds. Other experts told news outlets like Bloomberg that the hacker likely manipulated the price of the coin as part of the attack. [more]
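The exact mechanism was unconfirmed at the time of writing, but to illustrate what “manipulating the price” can mean on a decentralized exchange, here is a toy constant-product pool showing how one outsized swap skews the quoted price that other contracts may trust as an oracle; the numbers are purely hypothetical and this is not a reconstruction of the Cetus attack.

```python
# Toy constant-product AMM (x * y = k) illustrating, in general terms,
# how a large swap skews a pool's quoted price; contracts that trust the
# pool price as an oracle can then be exploited. Purely hypothetical
# numbers; not a reconstruction of the Cetus incident.

class ConstantProductPool:
    def __init__(self, reserve_x: float, reserve_y: float):
        self.x, self.y = reserve_x, reserve_y
        self.k = reserve_x * reserve_y

    def price_y_in_x(self) -> float:
        return self.x / self.y  # spot price implied by reserves

    def swap_x_for_y(self, dx: float) -> float:
        """Deposit dx of token X, withdraw Y, keeping x * y = k."""
        new_x = self.x + dx
        dy = self.y - self.k / new_x
        self.x, self.y = new_x, self.y - dy
        return dy

pool = ConstantProductPool(1_000_000, 1_000_000)
print("price before:", pool.price_y_in_x())   # 1.0
pool.swap_x_for_y(9_000_000)                  # one outsized swap
print("price after: ", pool.price_y_in_x())   # 100.0: oracle now skewed
```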