TechRisk #153: 91,000 attacks on AI infrastructure
Plus, strategic risks and governance implications of AI-enabled cyber threats, learning from AI threats in 2025, a new class of stealth cloud malware targeting Linux infrastructure, and more!
Tech Risk Reading Picks
Over 91,000 coordinated attacks on AI infrastructure: Security researchers have documented more than 91,000 coordinated attacks against AI infrastructure over a three-month period, highlighting material technology risks for organizations scaling AI adoption: first, server-side request forgery (SSRF) exploits are being used to coerce AI and communications platforms into making unauthorized outbound connections, raising concerns about data leakage, regulatory exposure, and abuse of trusted integrations; second, systematic reconnaissance of large language model (LLM) endpoints is probing for misconfigured proxies that could expose access to paid or proprietary AI services, signaling potential revenue loss, intellectual property theft, and downstream breaches; third, the professional, globally distributed nature of the activity (e.g. VPS-based tooling and quiet “low-noise” queries) suggests attackers are building pipelines for future exploitation rather than running one-off tests, increasing long-term risk. A notable controversy is the apparent use of security-research tooling (such as OAST callback infrastructure) at scale, blurring the line between legitimate testing and grey-hat activity, which complicates attribution, response decisions, and legal positioning for affected enterprises. [more]
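For readers wanting to ground the SSRF pattern above: the standard mitigation is to validate server-side fetches before an AI platform follows a user-supplied URL (a webhook, document fetch, or plugin callback), refusing destinations that resolve to internal address space. A minimal sketch in Python; the function name and exact policy are illustrative assumptions, not taken from the research:

```python
import ipaddress
import socket
from urllib.parse import urlparse

def is_safe_outbound_url(url: str) -> bool:
    """Reject URLs that would let a server-side fetch reach internal
    infrastructure (a common SSRF mitigation). Illustrative sketch only."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https") or not parsed.hostname:
        return False
    try:
        # Resolve every address the hostname maps to; attackers often
        # register DNS names that point at internal ranges.
        infos = socket.getaddrinfo(parsed.hostname, None)
    except socket.gaierror:
        return False
    for info in infos:
        addr = ipaddress.ip_address(info[4][0])
        # Block private, loopback, link-local (including the cloud
        # metadata endpoint at 169.254.169.254) and reserved ranges.
        if (addr.is_private or addr.is_loopback
                or addr.is_link_local or addr.is_reserved):
            return False
    return True

# Example: the cloud metadata endpoint is rejected.
assert not is_safe_outbound_url("http://169.254.169.254/latest/meta-data/")
```

Note that a resolve-then-fetch check like this is still exposed to DNS rebinding between the check and the actual request; production controls typically pin the resolved address for the fetch itself.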
AI, geopolitics and supply chains are top 2026 cyber risks: The World Economic Forum’s Global Cybersecurity Outlook highlights three interconnected technology risks that demand executive attention: first, rapid AI deployment is expanding attack surfaces and governance exposure, as organisations integrate AI into core operations faster than controls around data leakage, model misuse, accountability and regulatory readiness can mature; second, geopolitical fragmentation is undermining traditional cyber and compliance frameworks, with data sovereignty, diverging regulations and cross-border tensions increasing uncertainty and limiting organisations’ ability to manage risk consistently across jurisdictions; and third, increasingly complex and globally dispersed technology supply chains are amplifying systemic vulnerability, as breaches or disruptions at third parties can cascade into significant operational and reputational harm. Major economies remain divided between prioritising innovation and imposing safeguards, resulting in fragmented, case-by-case regulation that raises compliance burdens for multinational firms and weakens collective cyber defence. [more][more-2]
Learning from AI threats in 2025: Despite headlines about AI and next-generation security, the most material technology risks facing organisations in 2025 remain stubbornly familiar: software supply-chain compromise, phishing-driven credential theft, and malware slipping through trusted platforms. Supply-chain attacks are of growing concern because a single compromised component can rapidly cascade across thousands of downstream systems, amplifying business, operational, and reputational impact at unprecedented scale; such attacks are now achievable even by small groups or lone attackers leveraging AI-enabled efficiency. Phishing remains highly effective because it targets human behaviour rather than systems; one successful click can trigger enterprise-wide exposure, as seen when developer credentials were abused to poison widely used software packages before remediation could take effect. Official marketplaces and platforms also continue to present risk, as automated and human reviews lag attacker sophistication, allowing malicious extensions or apps to gain broad access under overly permissive models. The key controversy is the industry’s continued emphasis on “shiny” new security concepts while basic controls (including granular permissions, stronger supply-chain verification, and phishing-resistant authentication) remain inconsistently implemented. This misalignment persists not for lack of technology, but because of prioritisation and governance gaps at platform and organisational levels. [more]
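On the “basic controls” point: one of the simplest supply-chain verification measures is pinning and checking artifact hashes before a dependency is installed or executed. A minimal sketch; the file name and digest in the usage comment are placeholders, not real values:

```python
import hashlib

def verify_artifact(path: str, expected_sha256: str) -> None:
    """Fail closed if a downloaded dependency does not match its pinned
    hash -- a basic supply-chain verification control."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    if h.hexdigest() != expected_sha256:
        raise RuntimeError(f"hash mismatch for {path}: refusing to install")

# Hypothetical usage (path and digest are placeholders):
# verify_artifact("vendor/some-package-1.2.3.tar.gz", "<pinned sha256 hex digest>")
```

Package managers offer the same control natively (for example, pip’s --require-hashes mode); the governance gap the article describes is that such checks are available but inconsistently enforced.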
Strategic risks and governance implications of AI-enabled cyber threats: Artificial intelligence is now being embedded directly into malware and attack workflows, creating several material technology risks for organizations: first, adaptive malware that rewrites its own code in real time can evade traditional, signature-based defenses, increasing the likelihood of undetected breaches and prolonged dwell time; second, AI-driven social engineering enables highly personalized and linguistically polished phishing and fraud, raising the probability of executive-level compromise and financial or reputational loss; and third, the industrialization of AI tools in criminal marketplaces lowers the barrier to entry for sophisticated attacks, expanding the threat surface for mid-size enterprises and supply chains. A key controversy is the dual-use nature of generative AI platforms, where the same models that drive productivity and innovation can be manipulated or socially engineered by attackers, raising unresolved questions for regulators and boards around accountability, acceptable use, and the responsibility of AI providers in preventing misuse without stifling innovation. [more]
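To make the signature-evasion point concrete: signature-based detection typically matches a fixed fingerprint such as a file hash, so even a one-byte mutation produces a miss. A toy illustration (the “payload” here is harmless bytes, not real malware):

```python
import hashlib

# A known-bad fingerprint, stored the way a signature database would.
payload = b"example payload v1"
signature_db = {hashlib.sha256(payload).hexdigest()}

def flagged(sample: bytes) -> bool:
    """Return True if the sample matches a known-bad signature."""
    return hashlib.sha256(sample).hexdigest() in signature_db

# The original sample matches its signature...
assert flagged(payload)
# ...but a single-byte mutation -- the step self-rewriting malware
# automates at scale -- slips past the identical check.
assert not flagged(payload + b" ")
```

This is why the article’s adaptive-malware risk pushes defenders toward behavioural and runtime detection rather than static fingerprints alone.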
Hidden risks in consumer health AI: Consumer health chatbots introduce material technology risk because they can deliver advice that sounds credible yet is contextually wrong, particularly when models lack full patient data and are not calibrated to express uncertainty. This creates “verification asymmetry” where errors are hard for users to detect but can cause real harm. Standard AI safety tests often miss these risks because they reward fluency and empathy rather than identifying subtly misleading guidance, allowing high-risk outputs to pass undetected. Risk further compounds over multi-turn conversations as models prioritize being supportive and consistent over challenging earlier assumptions, while commercial pressures discourage friction such as disclaimers or forced citations that would reduce engagement. The central controversy is accountability: with no unified regulatory framework or clear liability standards for consumer health chatbots, organizations face a governance gray zone where innovation is encouraged but responsibility for harm remains unresolved. [more]
A new class of stealth cloud malware targeting Linux infrastructure: Cybersecurity researchers have identified VoidLink, a highly advanced and previously undocumented malware framework designed for persistent, stealthy control of Linux-based cloud environments. Key technology risks include its deep cloud awareness (it can detect and adapt to AWS, Azure, Google Cloud, Kubernetes, and Docker), which makes traditional perimeter defenses less effective; its modular, upgradeable design that allows attackers to evolve capabilities over time, increasing dwell time and business impact; and its strong credential-harvesting and lateral-movement features, raising the risk of large-scale data theft and supply-chain compromise through developer and CI/CD environments. Of particular concern is its ability to actively evade detection by assessing installed security controls and dynamically adjusting behavior, undermining standard monitoring and incident-response assumptions. A notable controversy is the assessment that VoidLink is linked to China-affiliated threat actors, which elevates the issue from a technical security incident to a potential geopolitical and regulatory risk, especially for organizations operating critical infrastructure, sensitive intellectual property, or cross-border cloud services. [more]
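The “cloud awareness” described above generally relies on cheap local fingerprints. A hedged sketch of the kind of checks involved, useful for defenders reasoning about what such malware can see; these probes are widely known techniques, not VoidLink’s actual implementation:

```python
import os
import urllib.error
import urllib.request

def fingerprint_environment() -> dict:
    """Illustrative environment checks of the kind cloud-aware malware
    (and legitimate agents) use to identify where they are running."""
    env = {
        # Docker typically creates this marker file inside containers.
        "docker": os.path.exists("/.dockerenv"),
        # Kubernetes injects service env vars and a service-account token.
        "kubernetes": "KUBERNETES_SERVICE_HOST" in os.environ
            or os.path.exists(
                "/var/run/secrets/kubernetes.io/serviceaccount/token"),
    }
    # Major cloud providers expose a link-local metadata endpoint; a fast
    # probe distinguishes cloud instances from bare metal.
    try:
        urllib.request.urlopen("http://169.254.169.254/", timeout=1)
        env["cloud_metadata_reachable"] = True
    except urllib.error.HTTPError:
        # An HTTP error still means something answered at the endpoint.
        env["cloud_metadata_reachable"] = True
    except OSError:
        env["cloud_metadata_reachable"] = False
    return env

print(fingerprint_environment())
```

A practical defensive takeaway: unexpected processes touching these same artifacts (the metadata endpoint, the service-account token path) are themselves a useful detection signal.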
Runtime security could be the blind spot in cloud risk: Cloud risk now concentrates at runtime (the live execution layer where identities act, workloads scale, and data moves) because this is where attackers actually operate, exploiting stolen credentials, escalating privileges, deploying malicious compute, and accessing or exfiltrating data faster than traditional controls can react. The key technology risks are threefold: first, loss of visibility, as ephemeral cloud resources disappear before incidents can be investigated, leaving gaps in accountability and regulatory exposure; second, speed and automation of attacks, where programmatic pivots across identities and services outpace human-led response and amplify business impact; and third, evidence volatility, where the lack of real-time forensic capture undermines incident response, legal defensibility, and post-breach learning. The central controversy is the industry’s continued reliance on CNAPP (cloud-native application protection platform) tooling and posture management as a primary control: while valuable for prevention, these tools focus on what could go wrong rather than what is going wrong, which can create a false sense of security at board level. [more]
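One way to address the evidence-volatility problem is to capture state the moment an ephemeral workload dies, rather than after an investigation starts. A minimal sketch using the Docker CLI; the event-driven approach is a common pattern, and the output location is an illustrative assumption:

```python
import json
import subprocess
from pathlib import Path

EVIDENCE_DIR = Path("/var/forensics")  # illustrative location

def preserve_container_evidence() -> None:
    """Watch for container exits and snapshot config and logs before the
    resource (and its evidence) disappears. Sketch, not production code."""
    proc = subprocess.Popen(
        ["docker", "events", "--filter", "event=die",
         "--format", "{{json .}}"],
        stdout=subprocess.PIPE, text=True)
    for line in proc.stdout:
        cid = json.loads(line)["id"]
        out = EVIDENCE_DIR / cid
        out.mkdir(parents=True, exist_ok=True)
        # Full container config: image, mounts, env, network settings.
        (out / "inspect.json").write_bytes(
            subprocess.run(["docker", "inspect", cid],
                           capture_output=True).stdout)
        # stdout/stderr still exist right after exit (unless run with
        # --rm); grab them before cleanup removes the container.
        (out / "console.log").write_bytes(
            subprocess.run(["docker", "logs", cid],
                           capture_output=True).stdout)

if __name__ == "__main__":
    preserve_container_evidence()
```

The design point matches the article’s argument: forensic capture has to be wired in at runtime, because by the time a posture tool or analyst looks, the workload and its evidence may already be gone.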
Third-party dependency risk at Ledger: The recent Ledger customer data breach underscores several material technology risks: first, third-party dependency risk, where secure core products are undermined by weaker external providers, expanding the attack surface beyond an organization’s direct control; second, concentration risk in centralized customer databases, which amplifies the impact of any single breach by exposing large volumes of personal data at once; third, downstream fraud and reputational risk, as exposed personal data enables highly targeted phishing that can lead to irreversible financial losses for customers and lasting brand damage; and fourth, governance and disclosure risk, illustrated by limited transparency around breach timing and scope, which complicates incident response, regulatory scrutiny, and stakeholder trust. The key controversy centers on the misalignment between blockchain companies’ decentralized security messaging and their reliance on traditional centralized e-commerce infrastructure, raising questions about whether firms promoting “best-in-class” security should be held to higher standards in selecting partners and adopting architectures that better align with their stated principles. [more]
The hidden risks of autonomy: why AI agents are the new frontier for hackers.

