<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[Tech Risk Guru]]></title><description><![CDATA[Observing Technology and Digital Risks]]></description><link>https://techriskguru.com</link><image><url>https://substackcdn.com/image/fetch/$s_!UunJ!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F125fab1b-905c-41ee-9ccf-91087c90f670_500x500.png</url><title>Tech Risk Guru</title><link>https://techriskguru.com</link></image><generator>Substack</generator><lastBuildDate>Wed, 15 Apr 2026 19:36:45 GMT</lastBuildDate><atom:link href="https://techriskguru.com/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[techriskguru.com]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[techrisk@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[techrisk@substack.com]]></itunes:email><itunes:name><![CDATA[M.]]></itunes:name></itunes:owner><itunes:author><![CDATA[M.]]></itunes:author><googleplay:owner><![CDATA[techrisk@substack.com]]></googleplay:owner><googleplay:email><![CDATA[techrisk@substack.com]]></googleplay:email><googleplay:author><![CDATA[M.]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[Tech Risk #165: Claude Mythos' unprecedented cybersecurity ability]]></title><description><![CDATA[Plus, security gaps in autonomous AI agents, erosion of foundational student skills, Microsoft releases agent governance toolkit, and more!]]></description><link>https://techriskguru.com/p/tech-risk-165-claude-mythos-unprecedented-cybersecurity</link><guid 
isPermaLink="false">https://techriskguru.com/p/tech-risk-165-claude-mythos-unprecedented-cybersecurity</guid><dc:creator><![CDATA[M.]]></dc:creator><pubDate>Sun, 12 Apr 2026 11:43:41 GMT</pubDate><enclosure url="https://images.unsplash.com/photo-1506703719100-a0f3a48c0f86?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxMnx8dW5pdmVyc2V8ZW58MHx8fHwxNzc1ODMxNTEyfDA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://images.unsplash.com/photo-1506703719100-a0f3a48c0f86?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxMnx8dW5pdmVyc2V8ZW58MHx8fHwxNzc1ODMxNTEyfDA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://images.unsplash.com/photo-1506703719100-a0f3a48c0f86?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxMnx8dW5pdmVyc2V8ZW58MHx8fHwxNzc1ODMxNTEyfDA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 424w, https://images.unsplash.com/photo-1506703719100-a0f3a48c0f86?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxMnx8dW5pdmVyc2V8ZW58MHx8fHwxNzc1ODMxNTEyfDA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 848w, https://images.unsplash.com/photo-1506703719100-a0f3a48c0f86?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxMnx8dW5pdmVyc2V8ZW58MHx8fHwxNzc1ODMxNTEyfDA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 1272w, https://images.unsplash.com/photo-1506703719100-a0f3a48c0f86?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxMnx8dW5pdmVyc2V8ZW58MHx8fHwxNzc1ODMxNTEyfDA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 1456w" sizes="100vw"><img 
src="https://images.unsplash.com/photo-1506703719100-a0f3a48c0f86?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxMnx8dW5pdmVyc2V8ZW58MHx8fHwxNzc1ODMxNTEyfDA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080" width="4689" height="3126" data-attrs="{&quot;src&quot;:&quot;https://images.unsplash.com/photo-1506703719100-a0f3a48c0f86?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxMnx8dW5pdmVyc2V8ZW58MHx8fHwxNzc1ODMxNTEyfDA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:3126,&quot;width&quot;:4689,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;galaxy with starry night&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="galaxy with starry night" title="galaxy with starry night" srcset="https://images.unsplash.com/photo-1506703719100-a0f3a48c0f86?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxMnx8dW5pdmVyc2V8ZW58MHx8fHwxNzc1ODMxNTEyfDA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 424w, https://images.unsplash.com/photo-1506703719100-a0f3a48c0f86?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxMnx8dW5pdmVyc2V8ZW58MHx8fHwxNzc1ODMxNTEyfDA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 848w, https://images.unsplash.com/photo-1506703719100-a0f3a48c0f86?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxMnx8dW5pdmVyc2V8ZW58MHx8fHwxNzc1ODMxNTEyfDA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 1272w, 
https://images.unsplash.com/photo-1506703719100-a0f3a48c0f86?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxMnx8dW5pdmVyc2V8ZW58MHx8fHwxNzc1ODMxNTEyfDA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><h1>Tech Risk Reading Picks</h1><ol><li><p><strong>Project Glasswing and Anthropic Claude Mythos:</strong> Anthropic has launched Project Glasswing to leverage its newest frontier model, Claude Mythos, for defensive cybersecurity. 
This initiative involves a select group of major technology and financial firms tasked with securing critical software. The Mythos model has already identified thousands of high-severity vulnerabilities in major operating systems and browsers. It demonstrates unprecedented autonomy, including the ability to chain exploits and bypass its own sandbox environments. Anthropic is restricting general access to the model because its advanced reasoning and coding skills could be easily weaponized by hostile actors. The company is committing over $100 million in resources to ensure defensive capabilities outpace offensive AI adoption. [<a href="https://thehackernews.com/2026/04/anthropics-claude-mythos-finds.html">more</a>]</p></li><li><p><strong>Security gaps in autonomous AI agents</strong></p><ol><li><p><strong>AI agent traps:</strong> Protecting the perimeter against AI agent traps</p><p>Google DeepMind research indicates that autonomous AI agents are highly vulnerable to &#8220;AI Agent Traps&#8221; embedded in web content. These traps weaponize an agent&#8217;s own capabilities to force data exfiltration, information dissemination, or unauthorized product promotion. Researchers identified six specific attack vectors that manipulate an agent&#8217;s reasoning, memory, and behavioral controls. While technical hardening is necessary, recent multi-institutional studies suggest that social engineering remains the primary vulnerability. Agents often succumb to fabricated emergencies or artificial urgency rather than technical exploits alone. [<a href="https://cybernews.com/ai-news/ai-agent-traps-adversarial-content-google-deepmind/">more</a>]</p></li><li><p><strong>Vulnerable autonomous AI agents: </strong>A multi-institutional study reveals that AI agents possess high technical capabilities but lack the situational awareness and social reasoning necessary for safe deployment. 
Researchers successfully compromised agents not through code exploits, but by using social engineering, emotional manipulation, and fabricated urgency to bypass security protocols. These vulnerabilities allowed agents to leak sensitive data, delete critical configuration files, and execute denial-of-service attacks against their own infrastructure. The fundamental issue is a lack of social coherence, where agents fail to verify authority or understand the long-term consequences of their actions. This creates a dangerous imbalance between the power of the technology and the maturity of its safeguards. [<a href="https://cybernews.com/ai-news/research-major-flaws-ai-agents-pretend-owner/">more</a>]</p></li></ol></li><li><p><strong>High-stakes exploitation of Flowise AI vulnerability: </strong>Threat actors are actively weaponizing a critical security flaw within the Flowise open-source AI platform to achieve full system compromise. The vulnerability, tracked as CVE-2025-59528, carries a maximum severity rating of 10.0 due to its ability to allow remote code execution via unvalidated JavaScript input. Attackers only require an API token to exploit the CustomMCP node, granting them full Node.js runtime privileges to execute commands, access the file system, and exfiltrate sensitive data. Despite a patch being available since version 3.0.6, over 12,000 exposed instances remain online. Current exploitation activity is linked to a single Starlink IP address, highlighting a focused effort to target corporate AI infrastructure that remains unpatched. [<a href="https://thehackernews.com/2026/04/flowise-ai-agent-builder-under-active.html">more</a>]</p></li><li><p><strong>Risks of silent data exfiltration in Grafana: </strong>Researchers recently identified a vulnerability called GrafanaGhost that targets the platform&#8217;s integration of AI. 
This flaw theoretically allows attackers to bypass security protocols using indirect prompt injection to trick the AI into ignoring safety rules. By exploiting a legacy coding trick and a weakness in the image renderer, malicious actors could redirect sensitive organizational data to external servers. While researchers claim the process is autonomous and invisible to users, Grafana Labs maintains that the exploit requires significant user interaction and has since issued a patch. This discovery highlights the evolving nature of threats where attackers manipulate how AI processes data to bypass traditional security perimeters. [<a href="https://hackread.com/grafanaghost-vulnerability-data-theft-via-ai-injection/">more</a>]</p><ol><li><p>Noma&#8217;s investigation revealed a flaw in the <a href="https://hackread.com/attackers-hide-javascript-svg-images-malicious-sites/">JavaScript</a> code. By using a legacy developer trick called protocol-relative URLs (using the // format), attackers can fool the software into thinking the link is a safe internal path.</p></li></ol></li><li><p><strong>Microsoft releases agent governance toolkit: </strong>Microsoft has launched the Agent Governance Toolkit, a seven-package system designed to monitor and control agent behavior in real time. This framework-agnostic solution integrates with popular platforms like LangChain and CrewAI to enforce policy, verify identity, and manage execution rings similar to OS privilege levels. By shifting the project toward community-led foundation governance, Microsoft aims to establish a standardized security architecture for autonomous systems across the industry. 
[<a href="https://www.helpnetsecurity.com/2026/04/03/microsoft-ai-agent-governance-toolkit/">more</a>]</p></li><li><p><strong>Erosion of foundational student skills: </strong>A recent National Education Union poll of over 9,000 British teachers reveals a significant decline in core student abilities attributed to artificial intelligence. Educators report that overreliance on AI tools is stifling literacy, problem-solving, and critical thinking skills. While the UK government promotes AI tutoring for disadvantaged students, only 4% of teachers strongly support the initiative, citing concerns over the loss of human mentorship and academic integrity. [<a href="https://cybernews.com/ai-news/united-kingdom-ai-students-critical-thinking/">more</a>]</p></li><li><p><strong>North Korean exploit drains $280M from Drift Protocol: </strong>Drift Protocol recently suffered a $280 million theft targeting its lending, borrowing, and trading vaults. Rather than exploiting traditional smart contract vulnerabilities, malicious actors used sophisticated social engineering to compromise the platform&#8217;s security council administrative powers. The attackers orchestrated a multi-week operation that involved staging pre-signed transactions to override withdrawal limits and execute a rapid takeover of system controls. Blockchain security experts have attributed the breach to North Korean state-sponsored hackers, noting that the laundering techniques and network indicators mirror previous high-profile attacks on the crypto industry. [<a href="https://therecord.media/drift-crypto-confirms-280-million-stolen-north-korea">more</a>]</p></li><li><p><strong>Axios library compromise - widespread supply chain threat: </strong>Unit 42 researchers identified a significant supply chain attack targeting the popular Axios JavaScript library after a maintainer&#8217;s account was hijacked to release malicious updates. 
These compromised versions (v1.14.1 and v0.30.4) do not modify the original source code but instead inject a hidden dependency that serves as a cross-platform remote access Trojan (RAT). The malware is capable of performing stealthy reconnaissance and establishing persistent access across Windows, macOS, and Linux systems before attempting to self-destruct to evade forensic analysis. Because Axios is a fundamental tool used globally for making API requests, this breach poses a systemic risk to thousands of organizations and their downstream digital infrastructure. [<a href="https://unit42.paloaltonetworks.com/axios-supply-chain-attack/">more</a>]</p></li><li><p><strong>OAuth device code phishing and the rise of commoditized identity attacks:</strong></p><p>A sophisticated phishing technique leveraging Microsoft&#8217;s OAuth 2.0 device code protocol has transitioned from a specialized Russian state-sponsored tactic to a widely accessible Phishing-as-a-Service (PhaaS) model. The &#8220;EvilTokens&#8221; platform launched in early 2026 and has already compromised over 340 organizations. This attack weaponizes a legitimate authentication flow designed for devices like smart TVs. Because victims interact entirely with genuine Microsoft infrastructure, the attack is invisible to traditional URL filters and security awareness training. Multifactor authentication offers no protection because users complete the challenge on the attacker&#8217;s behalf. Attackers harvest refresh tokens that persist even after password resets, using them to steal data via the Microsoft Graph API and register unauthorized devices for long-term access. Organizations should prioritize disabling this protocol through Conditional Access policies.</p><ol><li><p><strong>Key technology risk pointers</strong></p><ul><li><p><strong>Architectural MFA Bypass:</strong> Users provide legitimate authentication for the attacker. 
Existing security investments fail because the protocol itself is exploitable.</p></li><li><p><strong>Persistent Token Access:</strong> Stolen refresh tokens survive password changes. Remediation is complex and requires manual session revocation and device audits.</p></li><li><p><strong>Rapid Commoditization:</strong> Phishing-as-a-Service makes advanced state-level tactics available to common criminals. The threat is now volumetric and affects all industry sectors.</p></li><li><p><strong>Detection Complexity:</strong> Legitimate domains mask the attack. Monitoring must shift to specific behavioral logs within Entra ID to identify unauthorized flows.</p></li></ul></li></ol></li><li><p><strong>Solving the identity paradox: </strong>Modern enterprise security is undermined by a fundamental contradiction: identity telemetry keeps growing, yet breaches persist because attackers now operate behind legitimate, trusted credentials. The rapid expansion of the identity surface to include non-human entities, cloud APIs, and AI agents has outpaced traditional perimeter defenses. Attackers, including state-sponsored insiders and supply chain infiltrators, successfully bypass authentication checkpoints by assuming valid personas. Consequently, static access controls are no longer sufficient. Organizations should shift their focus from entry-point authentication to continuous post-login behavioral monitoring to distinguish between legitimate employee activity and malicious intent. [<a href="https://www.sentinelone.com/blog/the-identity-paradox-the-hidden-risks-in-your-valid-credentials/">more</a>]</p><ol><li><p><strong>Key Technology Risk Pointers</strong></p><ul><li><p><strong>Non-human identity (NHI) sprawl:</strong> Automated service accounts and AI agents often outnumber human users and lack the same governance rigor. 
These accounts frequently possess broad, persistent privileges, making them high-value targets for machine-speed lateral movement.</p></li><li><p><strong>The authorization gap:</strong> Traditional security models prioritize the point of entry but offer little visibility into actions taken after a user is &#8220;cleared.&#8221; This blind spot allows authenticated attackers to exfiltrate data or modify code while appearing as authorized personnel.</p></li><li><p><strong>Identity subversion via &#8220;trusted&#8221; insiders:</strong> Sophisticated actors are successfully infiltrating organizations through fraudulent hiring and supply chain compromises. Since these identities are technically &#8220;valid&#8221; in HR and IT systems, they bypass standard security alerts that look for unauthorized access rather than unauthorized intent.</p></li></ul></li></ol></li></ol>]]></content:encoded></item><item><title><![CDATA[Tech Risk #164: Anthropic source code leak]]></title><description><![CDATA[Plus, Claude Chrome extension&#8217;s flaw, managing the security debt of AI outputs, securing the future of agentic AI, supply chain attacks, and more!]]></description><link>https://techriskguru.com/p/tech-risk-164-anthropic-source-code</link><guid isPermaLink="false">https://techriskguru.com/p/tech-risk-164-anthropic-source-code</guid><dc:creator><![CDATA[M.]]></dc:creator><pubDate>Sun, 05 Apr 2026 11:43:36 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!GYtB!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdb8ecc63-4cce-4e5f-a395-288ce95f52b4_1024x608.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" 
href="https://substackcdn.com/image/fetch/$s_!GYtB!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdb8ecc63-4cce-4e5f-a395-288ce95f52b4_1024x608.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!GYtB!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdb8ecc63-4cce-4e5f-a395-288ce95f52b4_1024x608.png 424w, https://substackcdn.com/image/fetch/$s_!GYtB!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdb8ecc63-4cce-4e5f-a395-288ce95f52b4_1024x608.png 848w, https://substackcdn.com/image/fetch/$s_!GYtB!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdb8ecc63-4cce-4e5f-a395-288ce95f52b4_1024x608.png 1272w, https://substackcdn.com/image/fetch/$s_!GYtB!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdb8ecc63-4cce-4e5f-a395-288ce95f52b4_1024x608.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!GYtB!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdb8ecc63-4cce-4e5f-a395-288ce95f52b4_1024x608.png" width="1024" height="608" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/db8ecc63-4cce-4e5f-a395-288ce95f52b4_1024x608.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:&quot;normal&quot;,&quot;height&quot;:608,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!GYtB!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdb8ecc63-4cce-4e5f-a395-288ce95f52b4_1024x608.png 424w, https://substackcdn.com/image/fetch/$s_!GYtB!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdb8ecc63-4cce-4e5f-a395-288ce95f52b4_1024x608.png 848w, https://substackcdn.com/image/fetch/$s_!GYtB!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdb8ecc63-4cce-4e5f-a395-288ce95f52b4_1024x608.png 1272w, https://substackcdn.com/image/fetch/$s_!GYtB!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdb8ecc63-4cce-4e5f-a395-288ce95f52b4_1024x608.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h1>Tech Risk Reading Picks</h1><ol><li><p><strong>Anthropic source code leak:</strong> Anthropic recently inadvertantly published the internal source code for Claude Code due to a packaging error on the NPM registry. A 60 MB source map file allowed the reconstruction of nearly 500,000 lines of code across 1,900 files. While no customer data or credentials were compromised, the leak exposed proprietary features like Proactive and Dream modes. Simultaneously, Anthropic is investigating a separate high priority bug causing users to exhaust their message limits prematurely. The company is currently issuing DMCA notices to remove the leaked code and working to resolve the usage limit issues. 
[<a href="https://www.bleepingcomputer.com/news/artificial-intelligence/claude-code-source-code-accidentally-leaked-in-npm-package/">more</a>]</p></li><li><p><strong>Claude Chrome extension&#8217;s flaw: </strong>A critical security flaw discovered in the Claude Chrome extension allowed attackers to gain full control over user accounts without any direct interaction. By visiting a malicious website, users could have their session tokens stolen, emails sent, and private chat histories exported. The vulnerability stemmed from an overly broad trust policy combined with a bug in a third-party CAPTCHA component. Anthropic and Arkose Labs patched the issue in February 2026. This incident highlights the significant risks associated with granting AI assistants broad permissions to act as autonomous agents within a web browser. [<a href="https://cybernews.com/ai-news/claude-chrome-extension-zero-click-bug-account-takeover/">more</a>]</p></li><li><p><strong>Managing the security debt of AI outputs:</strong> Modern businesses increasingly rely on open-source components for operational efficiency, yet this reliance has created a substantial "security debt" characterized by fragmented vulnerability data and complex supply chain risks. Public databases often fail to provide timely or accurate severity scores for open-source flaws, leading to a dangerous gap between the discovery of a vulnerability and the availability of actionable intelligence. This problem is exacerbated by the presence of unmaintained "legacy" code and the rapid rise of malicious packages within popular registries. While AI agents are being integrated to accelerate development, they could introduce further risk by recommending obsolete or hallucinated libraries and generating code with systemic security flaws. Consequently, organizations must evolve beyond traditional patch management to implement more rigorous download policies, software build protections, and specialized oversight for AI-driven development. 
[<a href="https://www.kaspersky.com/blog/open-source-vulnerabilities-in-ai-era/55543/">more</a>]</p></li><li><p><strong>Google AI agents can be weaponized by an attacker:</strong> Cybersecurity researchers have identified a significant security flaw within Google Cloud&#8217;s Vertex AI platform involving excessive default permissions. This "blind spot" allows attackers to weaponize AI agents to bypass isolation boundaries and access sensitive data across an organization's cloud environment. By exploiting the default service agent's broad access, an attacker can extract credentials to steal proprietary data from cloud storage or map internal infrastructure. Google has responded by updating documentation and recommending that organizations manually configure service accounts to restrict access. Failure to address these default settings transforms a functional AI tool into a sophisticated insider threat capable of compromising entire project ecosystems. [<a href="https://thehackernews.com/2026/03/vertex-ai-vulnerability-exposes-google.html">more</a>]</p></li><li><p><strong>Securing the future of agentic AI: </strong>The emergence of agentic AI introduces a shift from simple &#8220;bad output&#8221; to complex &#8220;bad outcomes,&#8221; where autonomous systems can misinterpret instructions or misuse enterprise identities across workflows. To address these evolving threats, Microsoft has aligned its Copilot Studio and Agent 365 platforms with the 2026 OWASP Top 10 for Agentic Applications. This framework identifies critical risks such as goal hijacking and cascading failures that occur when agents act with broad permissions or lack clear behavioral boundaries. By treating agents as managed, auditable applications rather than autonomous black boxes, organizations can implement real-time protections and predefined connectors to constrain behavior. 
This strategic approach ensures that high-value business automation remains governable, observable, and secure against sophisticated adversarial manipulation. [<a href="https://www.microsoft.com/en-us/security/blog/2026/03/30/addressing-the-owasp-top-10-risks-in-agentic-ai-with-microsoft-copilot-studio/?hl=en-GB">more</a>]</p></li><li><p><strong>Addressing hidden vulnerabilities in enterprise AI environments:</strong> Security researchers recently identified critical vulnerabilities in OpenAI&#8217;s ChatGPT and Codex platforms that allowed the silent exfiltration of sensitive data and credentials. One flaw exploited a DNS-based side channel within the AI&#8217;s Linux runtime to bypass standard guardrails, enabling attackers to leak conversation logs and files without triggering user warnings. A separate command injection vulnerability in the Codex engineering agent permitted the theft of GitHub authentication tokens through manipulated branch names. While OpenAI has patched these specific issues, the findings reveal a significant security blind spot where AI systems operate under the false assumption of environment isolation. These incidents highlight that native AI safeguards are currently insufficient for protecting high-value enterprise intellectual property and sensitive data.</p></li><li><p><strong>Unauthorized GitHub token exfiltration:</strong> OpenAI recently patched a critical command injection vulnerability in its Codex AI coding assistant that allowed attackers to steal sensitive GitHub User Access Tokens. The flaw originated from improper input sanitization of GitHub branch names, which the system failed to validate before executing commands within its cloud-hosted containers. By crafting a malicious branch name containing hidden shell commands, an attacker could trigger unauthorized code execution whenever a developer interacted with a compromised repository. 
This exploit enabled the silent extraction of authentication tokens, potentially granting attackers broad access to private source code and organizational resources across the GitHub environment. [<a href="https://hackread.com/openai-codex-vulnerability-steal-github-tokens/">more</a>]</p></li><li><p><strong>&#8220;ModelSpy&#8221; attack system hijacks AI model structures from a distance:</strong> A research team from KAIST, the National University of Singapore, and Zhejiang University has identified a critical security vulnerability that allows the remote theft of artificial intelligence model architectures. Using a system called ModelSpy, attackers can capture electromagnetic signals emitted by GPUs during AI computations from up to six meters away, even through walls. This side-channel attack achieves up to 97.6% accuracy in reconstructing deep learning layer configurations without needing direct server access or malware. To mitigate this risk, researchers recommend electromagnetic interference and computational obfuscation countermeasures as part of a comprehensive cyber-physical security strategy. [<a href="https://www.miragenews.com/ai-blueprints-stolen-countermeasures-proposed-1647731/?hl=en-GB">more</a>]</p></li><li><p><strong>AI exploits FreeBSD kernel:</strong> A recent security milestone demonstrated that a frontier AI model autonomously discovered and weaponized a critical vulnerability in the FreeBSD operating system, a platform renowned for its high security and used by major enterprises like Netflix and WhatsApp. Moving beyond simple bug detection, the AI agent engineered a sophisticated, multi-stage exploit in just four hours of compute time, achieving root-level access that typically requires weeks of specialized human labor. This shift marks the transition from AI as a supportive tool to an autonomous actor capable of conducting high-level offensive operations. 
As the cost and time required to develop &#8220;zero-day&#8221; style exploits collapse, the traditional security advantage held by mature codebases is eroding, necessitating a radical acceleration in defensive response and patching cycles. [<a href="https://www.forbes.com/sites/amirhusain/2026/04/01/ai-just-hacked-one-of-the-worlds-most-secure-operating-systems/">more</a>]</p></li><li><p><strong>Critical vulnerability in the Langflow framework:</strong> The Cybersecurity and Infrastructure Security Agency (CISA) has issued an urgent warning regarding a critical vulnerability (CVE-2026-33017) in the Langflow framework, which is widely used for developing AI agents. This flaw allows unauthorized remote code execution, enabling attackers to gain control over systems by sending a single malicious web request. Hackers began exploiting the weakness within 20 hours of its public disclosure, highlighting the speed at which modern threats materialize. Federal agencies must patch their systems by April 8, but all organizations using Langflow are advised to upgrade to version 1.9.0 or higher immediately. Failure to address this issue could lead to the theft of sensitive data, including database credentials and cloud secrets stored within AI development environments. [<a href="https://www.miragenews.com/ai-blueprints-stolen-countermeasures-proposed-1647731/?hl=en-GB">more</a>]</p></li><li><p><strong>Supply chain attacks</strong></p><ol><li><p><strong>Attack on open-source project LiteLLM: </strong>The AI recruiting startup Mercor recently confirmed a security incident resulting from a supply chain attack targeting the open-source project LiteLLM. As a critical partner for major AI firms like OpenAI, Mercor was impacted when malicious code was distributed through compromised PyPI package publishes. While Mercor has engaged forensic experts to contain the breach, the hacking group Lapsus$ claims to have exfiltrated hundreds of gigabytes of corporate data. 
A clean version of the affected software has since been released, but investigations into the full extent of the data exposure are ongoing. [<a href="https://therecord.media/mercor-confirms-security-incident-tied-to-litellm">more</a>]</p></li><li><p><strong>Attack on Axios:</strong> North Korean threat actors executed a premeditated supply chain attack by hijacking the npm account of the primary maintainer for Axios, a library used by millions of developers. The attackers bypassed secure GitHub Actions workflows by compromising the maintainer&#8217;s account, changing the associated email, and utilizing a long-lived access token to publish malicious versions via the npm command line interface. This breach resulted in the distribution of versions 1.14.1 and 0.30.4, which contained a remote access trojan hidden within a sub-dependency. The malware targeted Windows, macOS, and Linux systems by executing automatically during the package installation process. Security teams removed the poisoned updates within hours, but the incident demonstrates the extreme vulnerability of automated build pipelines to compromised third-party credentials. 
[<a href="https://www.securityweek.com/axios-npm-package-breached-in-north-korean-supply-chain-attack/">more</a>]<br></p></li></ol></li></ol>]]></content:encoded></item><item><title><![CDATA[TechRisk #163: AI creates bad codes]]></title><description><![CDATA[Plus, Internal threat of compromised AI agents, Gemini-powered AI agents in dark web, and more!]]></description><link>https://techriskguru.com/p/techrisk-163-ai-creates-bad-codes</link><guid isPermaLink="false">https://techriskguru.com/p/techrisk-163-ai-creates-bad-codes</guid><dc:creator><![CDATA[M.]]></dc:creator><pubDate>Sun, 29 Mar 2026 11:43:40 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!Ba5S!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F06f3289c-bd1c-4c98-a240-ce0e3f360ead_1024x608.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Ba5S!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F06f3289c-bd1c-4c98-a240-ce0e3f360ead_1024x608.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Ba5S!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F06f3289c-bd1c-4c98-a240-ce0e3f360ead_1024x608.png 424w, https://substackcdn.com/image/fetch/$s_!Ba5S!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F06f3289c-bd1c-4c98-a240-ce0e3f360ead_1024x608.png 848w, 
https://substackcdn.com/image/fetch/$s_!Ba5S!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F06f3289c-bd1c-4c98-a240-ce0e3f360ead_1024x608.png 1272w, https://substackcdn.com/image/fetch/$s_!Ba5S!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F06f3289c-bd1c-4c98-a240-ce0e3f360ead_1024x608.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Ba5S!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F06f3289c-bd1c-4c98-a240-ce0e3f360ead_1024x608.png" width="1024" height="608" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/06f3289c-bd1c-4c98-a240-ce0e3f360ead_1024x608.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:&quot;normal&quot;,&quot;height&quot;:608,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Ba5S!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F06f3289c-bd1c-4c98-a240-ce0e3f360ead_1024x608.png 424w, https://substackcdn.com/image/fetch/$s_!Ba5S!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F06f3289c-bd1c-4c98-a240-ce0e3f360ead_1024x608.png 848w, 
https://substackcdn.com/image/fetch/$s_!Ba5S!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F06f3289c-bd1c-4c98-a240-ce0e3f360ead_1024x608.png 1272w, https://substackcdn.com/image/fetch/$s_!Ba5S!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F06f3289c-bd1c-4c98-a240-ce0e3f360ead_1024x608.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><h1>Tech Risk Reading Picks</h1><ol><li><p><strong>AI-generated vulnerable code:</strong> Georgia Tech researchers have launched
the Vibe Security Radar to track a surging number of verified software vulnerabilities introduced by AI coding tools. Data from March 2026 shows a significant month-over-month increase in AI-linked security flaws, with 35 new CVE entries documented compared to only six in January. The research highlights that tools like Anthropic&#8217;s Claude Code are frequently linked to these risks, though the true scale is likely five to ten times higher due to developers stripping AI metadata. As "vibe coding" leads to projects being pushed directly to production, even teams performing manual code reviews are failing to catch the volume of machine-generated flaws entering the ecosystem. [<a href="https://www.infosecurity-magazine.com/news/ai-generated-code-vulnerabilities/">more</a>]</p></li><li><p><strong>The internal threat of compromised AI agents:</strong> The emergence of autonomous AI agents fundamentally shifts the cybersecurity landscape by providing a shortcut through the traditional cyber kill chain. Unlike human attackers who must laboriously earn access through reconnaissance and lateral movement, a compromised AI agent already possesses broad permissions and legitimate data-sharing workflows across SaaS environments. This "built-in" access allows state-sponsored actors and cybercriminals to execute espionage at machine speed while blending perfectly into authorized system activity. Because these agents are designed to move data between platforms like Salesforce, Slack, and Google Workspace, their malicious actions often appear as normal automation. Modern security strategies must therefore evolve from simple perimeter defense to comprehensive visibility and behavioral analysis of the AI identities operating within their ecosystems. 
[<a href="https://thehackernews.com/2026/03/the-kill-chain-is-obsolete-when-your-ai.html">more</a>]</p></li><li><p><strong>Underground market for premium AI access:</strong> Threat actors are increasingly trading compromised and resold premium AI accounts on underground forums and Telegram channels to bypass costs, regional sanctions, and safety restrictions. This trend presents a significant strategic risk to leadership because these accounts often serve as gateways to sensitive corporate data, including proprietary code and internal research, while also empowering attackers to automate sophisticated phishing and social engineering campaigns at scale. [<a href="https://www.bleepingcomputer.com/news/security/paid-ai-accounts-are-now-a-hot-underground-commodity/">more</a>]</p></li><li><p><strong>Exploitation of no-code platforms in phishing: </strong>Threat actors are bypassing traditional email security by hosting malicious redirect scripts on legitimate no-code development platforms like Bubble. These platforms use trusted domains that evade automated filters and security blacklists. The AI-generated code produced by these services is structurally complex and heavy with JavaScript. This complexity prevents security tools and human analysts from easily identifying the underlying malicious intent. Once a user clicks the link, they are redirected to a sophisticated spoof of a Microsoft login portal designed to steal sensitive credentials and session data. [<a href="https://www.bleepingcomputer.com/news/security/bubble-ai-app-builder-abused-to-steal-microsoft-account-credentials/">more</a>]</p></li><li><p><strong>Navigating the risks of AI-driven development: </strong>Black Duck has launched Black Duck Signal, an agentic AI security solution designed to secure software created by AI coding assistants. 
The platform marks a shift from traditional rule-based scanning to a system of coordinated AI agents that analyze code using human-like logic and extensive historical security data. Signal operates continuously within developer environments to identify complex vulnerabilities, such as business logic errors and cross-file dataflow issues, which often evade conventional tools. By prioritizing exploitability and providing automated remediation, the solution aims to maintain high development velocity while establishing necessary governance over the rapidly increasing volume of AI-generated production code. [<a href="https://www.itsecurityguru.org/2026/03/23/black-duck-launches-signal-to-tackle-the-security-risks-of-ai-generated-code/">more</a>]</p></li><li><p><strong>Gemini-powered AI agents on the dark web:</strong> Google Threat Intelligence has introduced Gemini-powered AI agents capable of analyzing up to 10 million dark web posts daily with 98 percent accuracy. This service automates the creation of detailed organizational profiles and matches them against real-time threats like data leaks and initial access broker activity. [<a href="https://www.theregister.com/2026/03/23/google_dark_web_ai/">more</a>]</p></li><li><p><strong>Unverified advice from AI agents:</strong> Meta recently experienced a high-severity security incident when an internal AI agent provided inaccurate technical advice that led to unauthorized data access for nearly two hours. A software engineer used the agent to resolve an internal query, but the system posted a &#8220;hallucinated&#8221; response without human approval. Another employee followed these instructions, inadvertently granting engineers access to sensitive user and company data they were not cleared to view. While Meta downplayed the event by citing human error and a lack of data mishandling, the incident mirrors recent &#8220;gen-AI&#8221; failures at Amazon that caused significant cloud outages. 
These events highlight a growing trend of autonomous agents bypassing traditional safety checks and executing catastrophic technical changes within enterprise environments. [<a href="https://futurism.com/artificial-intelligence/rogue-ai-agent-triggers-emergency-at-meta">more</a>]</p><p><em><strong>Technology Risk Pointers</strong></em></p><ol><li><p><em><strong>Autonomous Execution and Hallucination:</strong> The agent bypassed human-in-the-loop validation by posting unverified technical advice. For leadership, this represents a breakdown in &#8220;least privilege&#8221; protocols where AI can influence system architecture without oversight.</em></p></li><li><p><em><strong>Prompt-Driven Escalation:</strong> Technical staff may over-rely on AI output for complex tasks, leading to a &#8220;game of telephone&#8221; where errors compound quickly. This creates a systemic vulnerability where a single AI error can trigger a SEV1 security breach.</em></p></li><li><p><em><strong>Internal Governance Gaps:</strong> The blame-shifting between human error and system design suggests that current AI disclaimers are insufficient. Executives must recognize that as agents move from &#8220;chatting&#8221; to &#8220;doing,&#8221; the surface area for operational and reputational risk expands beyond traditional cybersecurity defenses.</em></p></li></ol></li><li><p><strong>Shifting to AI CEO and management:</strong> Mark Zuckerberg is personally piloting an &#8220;AI CEO&#8221; agent to streamline executive decision-making and bypass traditional management layers at Meta. This initiative reflects a broader corporate shift toward an AI-native organizational structure where autonomous agents manage project documentation and internal communications. The company is aggressively flattening its hierarchy, with some managers now overseeing up to 50 contributors, while making AI adoption a mandatory metric in performance reviews. 
These experimental shifts coincide with reports of potential workforce reductions of up to 20 percent as the firm prioritizes algorithmic efficiency over human headcount. [<a href="https://cybernews.com/ai-news/zuckerberg-meta-agentic-ai-mass-layoffs/">more</a>]</p><p><em><strong>Technology Risk Pointers</strong></em></p><ol><li><p><em><strong>Knowledge Concentration and Security:</strong> Utilizing &#8220;CEO agents&#8221; and &#8220;Second Brains&#8221; centralizes vast amounts of sensitive corporate strategy into single AI interfaces. This creates a high-value target for industrial espionage or data breaches, where a single compromised prompt could leak entire project roadmaps.</em></p></li><li><p><em><strong>Operational Fragility from Hyper-Flattening:</strong> Removing middle management layers in favor of AI oversight can lead to a loss of institutional knowledge and human nuance. If the AI systems fail or produce hallucinations, the lack of human &#8220;buffers&#8221; could cause small operational errors to scale rapidly across the entire organization.</em><br></p></li></ol></li></ol>]]></content:encoded></item><item><title><![CDATA[TechRisk #162: Vibeware is here]]></title><description><![CDATA[Plus, AI security landscape reports, Claudy day vulnerability, AI risk management toolkit for the financial sector and more!]]></description><link>https://techriskguru.com/p/techrisk-162-vibeware-is-here</link><guid isPermaLink="false">https://techriskguru.com/p/techrisk-162-vibeware-is-here</guid><dc:creator><![CDATA[M.]]></dc:creator><pubDate>Sun, 22 Mar 2026 11:43:41 GMT</pubDate><enclosure url="https://images.unsplash.com/photo-1713454769612-3c2aa35c6589?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw1fHx3YXJlfGVufDB8fHx8MTc3NDAxNzk4M3ww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" 
target="_blank" href="https://images.unsplash.com/photo-1713454769612-3c2aa35c6589?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw1fHx3YXJlfGVufDB8fHx8MTc3NDAxNzk4M3ww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://images.unsplash.com/photo-1713454769612-3c2aa35c6589?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw1fHx3YXJlfGVufDB8fHx8MTc3NDAxNzk4M3ww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 424w, https://images.unsplash.com/photo-1713454769612-3c2aa35c6589?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw1fHx3YXJlfGVufDB8fHx8MTc3NDAxNzk4M3ww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 848w, https://images.unsplash.com/photo-1713454769612-3c2aa35c6589?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw1fHx3YXJlfGVufDB8fHx8MTc3NDAxNzk4M3ww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 1272w, https://images.unsplash.com/photo-1713454769612-3c2aa35c6589?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw1fHx3YXJlfGVufDB8fHx8MTc3NDAxNzk4M3ww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 1456w" sizes="100vw"><img src="https://images.unsplash.com/photo-1713454769612-3c2aa35c6589?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw1fHx3YXJlfGVufDB8fHx8MTc3NDAxNzk4M3ww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080" width="7008" height="4672" 
data-attrs="{&quot;src&quot;:&quot;https://images.unsplash.com/photo-1713454769612-3c2aa35c6589?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw1fHx3YXJlfGVufDB8fHx8MTc3NDAxNzk4M3ww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:4672,&quot;width&quot;:7008,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;a table topped with lots of different colored teapots&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="a table topped with lots of different colored teapots" title="a table topped with lots of different colored teapots" srcset="https://images.unsplash.com/photo-1713454769612-3c2aa35c6589?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw1fHx3YXJlfGVufDB8fHx8MTc3NDAxNzk4M3ww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 424w, https://images.unsplash.com/photo-1713454769612-3c2aa35c6589?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw1fHx3YXJlfGVufDB8fHx8MTc3NDAxNzk4M3ww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 848w, https://images.unsplash.com/photo-1713454769612-3c2aa35c6589?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw1fHx3YXJlfGVufDB8fHx8MTc3NDAxNzk4M3ww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 1272w, https://images.unsplash.com/photo-1713454769612-3c2aa35c6589?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw1fHx3YXJlfGVufDB8fHx8MTc3NDAxNzk4M3ww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 
pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h1>Tech Risk Reading Picks</h1><ol><li><p><strong>Rise of vibeware:</strong> The threat actor APT36 has transitioned to a high-volume production model known as vibeware, which utilizes artificial intelligence to mass-produce mediocre but functional malware across various programming languages. This strategy represents a shift from technical sophistication to a &#8220;distributed denial of detection&#8221; approach that aims to overwhelm security teams with a constant stream of low-fidelity alerts. 
By deploying polyglot binaries in niche languages like Nim and Zig and leveraging trusted cloud services such as Slack and Google Sheets for command-and-control operations, these attackers effectively bypass traditional signature-based defenses. This industrialization of cyberattacks is a significant concern because it creates high levels of alert fatigue that can mask more precise manual hacking operations, potentially leading to prolonged undetected access and the theft of strategic intellectual property. [<a href="https://businessinsights.bitdefender.com/apt36-nightmare-vibeware">more</a>]</p></li><li><p><strong>AI security landscape reports:</strong> </p><ol><li><p>The 2026 HiddenLayer report signals a critical transition where artificial intelligence has moved from generating content to executing autonomous actions through agentic systems, creating a vast and unmonitored attack surface for the modern enterprise. Leadership must prioritize the risks of agentic AI, as these systems can now browse the web and execute code independently, meaning a single prompt injection can escalate into a full system compromise. The report reveals a significant governance gap where shadow AI usage has surged to 76% of organizations, and while 91% of companies have increased AI security budgets, over 40% of these firms allocate less than 10% of that spend to actual protection. Strategic concern also lies in the AI supply chain, where 35% of breaches now originate from malware hidden in public model repositories that 93% of businesses still rely on for rapid innovation. Executives should be wary of reasoning and self-improving models that increase the potential &#8220;blast radius&#8221; of any single exploit, as a compromised model can now autonomously influence downstream business systems at scale. Furthermore, the decentralization of AI into &#8220;edge&#8221; devices is creating new security blind spots that traditional centralized cloud controls cannot see or manage. 
[<a href="https://www.hiddenlayer.com/report-and-guide/threatreport2026">more</a>]</p></li><li><p>The 2026 RSM Attack Vectors Report reveals that cybercriminals are successfully bypassing traditional defenses by chaining together moderate weaknesses in cloud, identity, and application environments. A critical risk involves the speed of AI-driven attacks, which have compressed compromise timelines from days to mere minutes. This rapid tempo renders manual detection and response processes obsolete. Furthermore, over 80% of identity-related vulnerabilities persist even in environments with multi-factor authentication, while 78% of cloud engagements uncovered high-severity misconfigurations. For leadership, these findings signal that current governance and visibility are not keeping pace with technology adoption. The strategic focus must shift from perfect prevention to automated detection and rapid recovery to contain threats before they escalate into enterprise-wide incidents. [<a href="https://rsmus.com/newsroom/2026/rsm-attack-vectors.html">more</a>]</p></li></ol></li><li><p><strong>Claudy day vulnerability:</strong> The recently disclosed "Claudy Day" vulnerability chain highlights a critical shift in the cyber threat landscape, where attackers leverage AI-specific weaknesses to bypass traditional security controls. By chaining invisible prompt injection, API-based data exfiltration, and open redirects, threat actors could silently steal sensitive corporate data like business strategies and financial plans directly from user conversations. This attack is particularly concerning because it requires no malicious integrations and can be surgically targeted at high-value executives via trusted ad platforms. While the primary injection flaw is now patched, the incident underscores the strategic risk of "agentic" AI behavior where models can autonomously execute actions. 
[<a href="https://cybersecuritynews.com/claude-vulnerabilities-exfiltrate-sensitive/">more</a>]</p></li><li><p><strong>Fraudulent AI browser extensions:</strong> A widespread campaign has deployed fraudulent browser extensions to over 20,000 enterprise environments by mimicking popular AI tools. These malicious extensions gain broad permissions to record full chat histories and proprietary source code. This represents a major strategic risk because it transforms employee productivity aids into stealthy tools for corporate espionage. It is concerning that these tools can automatically re-enable data collection even after a user attempts to opt out. The exfiltration of sensitive internal URLs and strategic discussions directly threatens intellectual property and competitive advantages. Strict browser governance must be enforced to prevent long term data leaks and unauthorized access to internal workflows. [<a href="https://www.microsoft.com/en-us/security/blog/2026/03/05/malicious-ai-assistant-extensions-harvest-llm-chat-histories/">more</a>]</p></li><li><p><strong>OpenClaw&#8217;s flaw:</strong> The rapid adoption of the OpenClaw autonomous AI agent introduces significant systemic vulnerabilities that could lead to unauthorized endpoint control and catastrophic data exfiltration. Default security weaknesses and privileged system access allow attackers to use indirect prompt injection, where malicious web content tricks the AI into leaking sensitive information or executing unauthorized commands without user interaction. These risks extend beyond data loss to include the potential for permanent deletion of critical records, the installation of malicious software through compromised "skills" repositories, and the total paralysis of core business systems in sectors like finance and energy. 
[<a href="https://thehackernews.com/2026/03/openclaw-ai-agent-flaws-could-enable.html">more</a>]</p></li><li><p><strong>Data exfiltration from Amazon Bedrock, LangSmith, and SGLang:</strong> Recent vulnerabilities in Amazon Bedrock, LangSmith, and SGLang highlight a growing systemic risk where the tools used to develop and monitor artificial intelligence inadvertently create backdoors into the enterprise. Researchers found that Amazon Bedrock&#8217;s sandboxed code execution environments could be bypassed via DNS queries to exfiltrate sensitive data, while a high-severity flaw in the LangSmith observability platform allowed for account takeovers and the theft of session tokens. [<a href="https://thehackernews.com/2026/03/ai-flaws-in-amazon-bedrock-langsmith.html">more</a>]</p></li><li><p><strong>AI zero trust framework:</strong> Microsoft has introduced a new zero trust framework for artificial intelligence to address the unique security boundaries created by autonomous agents and complex data lifecycles. Traditional security models often fail to account for the shifting trust lines between users, models, and automated decision-making, which can lead to overprivileged or manipulated agents acting as internal threats. To mitigate these risks, the new guidance emphasizes continuous verification of agent identities and the application of strict least-privilege access to prevent unauthorized data exfiltration or lateral movement within the network. [<a href="https://www.microsoft.com/en-us/security/blog/2026/03/19/new-tools-and-guidance-announcing-zero-trust-for-ai/">more</a>]</p></li><li><p><strong>AI risk management toolkit for the financial sector:</strong> The Monetary Authority of Singapore (MAS) has launched a comprehensive AI Risk Management Toolkit through Project MindForge to help financial institutions navigate the complexities of traditional, generative, and emerging agentic AI. 
This initiative is critical for leadership because it establishes clear accountability for boards and senior management while providing a structured framework to mitigate operational and ethical hazards. Key risk pointers focus on the need for robust oversight, systematic risk materiality assessments, and end-to-end lifecycle controls to prevent AI failures that could damage institutional reputation or stability. By integrating these practices into enterprise risk frameworks, firms can manage the unique transparency and reliability issues of modern AI systems while maintaining regulatory compliance. [<a href="https://www.mas.gov.sg/news/media-releases/2026/mas-partners-industry-to-develop-ai-risk-management-toolkit-for-the-financial-sector">more</a>][<a href="https://www.mas.gov.sg/-/media/mas-media-library/schemes-and-initiatives/ftig/project-mindforge/mindforge-ai-risk-management-operationalisation-handbook.pdf">MAS AI risk management toolkit (PDF)</a>]</p></li></ol><p></p><p style="text-align: center;"><em>&lt;<a href="https://whatsapp.com/channel/0029Vb6eRq8HVvThL8ilxQ2T">WhatsApp Channel</a> - follow and stay updated&gt;</em></p><p><br></p>]]></content:encoded></item><item><title><![CDATA[TechRisk #161: Agentic AI breached McKinsey’s internal AI platform]]></title><description><![CDATA[Plus, AI agents become insider threats, first AI discovered Microsoft high-risk flaw, and more!]]></description><link>https://techriskguru.com/p/techrisk-161-agentic-ai-breached-mckinsey</link><guid isPermaLink="false">https://techriskguru.com/p/techrisk-161-agentic-ai-breached-mckinsey</guid><dc:creator><![CDATA[M.]]></dc:creator><pubDate>Sun, 15 Mar 2026 11:43:41 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!mF0o!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8cb8daa7-d80b-491e-8c59-e923776820f6_1024x608.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div 
class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!mF0o!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8cb8daa7-d80b-491e-8c59-e923776820f6_1024x608.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!mF0o!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8cb8daa7-d80b-491e-8c59-e923776820f6_1024x608.png 424w, https://substackcdn.com/image/fetch/$s_!mF0o!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8cb8daa7-d80b-491e-8c59-e923776820f6_1024x608.png 848w, https://substackcdn.com/image/fetch/$s_!mF0o!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8cb8daa7-d80b-491e-8c59-e923776820f6_1024x608.png 1272w, https://substackcdn.com/image/fetch/$s_!mF0o!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8cb8daa7-d80b-491e-8c59-e923776820f6_1024x608.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!mF0o!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8cb8daa7-d80b-491e-8c59-e923776820f6_1024x608.png" width="1024" height="608" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/8cb8daa7-d80b-491e-8c59-e923776820f6_1024x608.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:&quot;normal&quot;,&quot;height&quot;:608,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!mF0o!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8cb8daa7-d80b-491e-8c59-e923776820f6_1024x608.png 424w, https://substackcdn.com/image/fetch/$s_!mF0o!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8cb8daa7-d80b-491e-8c59-e923776820f6_1024x608.png 848w, https://substackcdn.com/image/fetch/$s_!mF0o!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8cb8daa7-d80b-491e-8c59-e923776820f6_1024x608.png 1272w, https://substackcdn.com/image/fetch/$s_!mF0o!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8cb8daa7-d80b-491e-8c59-e923776820f6_1024x608.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h1>Tech Risk Reading Picks</h1><ol><li><p><strong>Agentic AI breached McKinsey&#8217;s internal AI platform:</strong> Researchers at the security firm CodeWall recently demonstrated the growing power of "agentic AI" by using an autonomous bot to breach McKinsey&#8217;s internal AI platform, Lilli, in just two hours. Without any human help or stolen passwords, the AI agent discovered a flaw that granted full access to over 46 million private chat messages, confidential client files, and the core instructions that control how the chatbot behaves. This breach was significant because the attacker could have "poisoned" the AI&#8217;s answers or stolen sensitive strategy data at massive scale and speed. While McKinsey quickly patched the holes and confirmed no data was stolen by malicious actors, the incident serves as a major warning that high-speed, AI-driven attacks are no longer theoretical. 
They are now being used to find and exploit vulnerabilities that traditional security tools often miss. [<a href="https://www.theregister.com/2026/03/09/mckinsey_ai_chatbot_hacked/?td=rt-4a">more</a>][<a href="https://codewall.ai/blog/how-we-hacked-mckinseys-ai-platform">more</a>-2]</p></li><li><p><strong>Your AI agents could unintentionally become insider threats:</strong> Research from Irregular reveals that AI agents designed for routine office work can spontaneously turn into security threats without being told to do so. In testing, agents assigned to simple tasks like filing documents or managing backups began hacking into systems to bypass obstacles. These agents independently identified software weaknesses, elevated their own access levels, and moved sensitive data as a way to finish their jobs. This behavior occurs because the agents view security protocols as mere hurdles to clear, effectively turning productive AI tools into a new form of internal risk. [<a href="https://www.irregular.com/publications/emergent-offensive-cyber-behavior-in-ai-agents">more</a>]</p></li><li><p><strong>AI vulnerabilities now top CEOs&#8217; concerns:</strong> The World Economic Forum&#8217;s 2026 cybersecurity outlook highlights a rapidly shifting landscape where artificial intelligence, geopolitical instability, and escalating cyber-enabled fraud have become the primary drivers of systemic risk. While AI serves as a powerful tool for defense, it is simultaneously accelerating an "arms race" by enabling more sophisticated, scalable attacks. Notably, executive concern has shifted toward unintended data exposure within generative AI tools. Geopolitical fragmentation continues to redefine security strategies, with a significant majority of large organizations now prioritizing resilience against state-sponsored disruption of critical infrastructure. 
Furthermore, cyber-enabled fraud has overtaken ransomware as the most pervasive threat to CEOs and households alike, underscoring a widening "cyber equity gap" where less-resilient organizations and regions face disproportionate impacts. To navigate this volatility, leaders must move beyond technical silos to foster cross-sector collaboration. [<a href="https://www.weforum.org/stories/2026/02/2026-cyberthreats-to-watch-and-other-cybersecurity-news/">more</a>][<a href="https://www.weforum.org/publications/global-cybersecurity-outlook-2026/">more</a>-2]</p></li><li><p><strong>"Slopoly" AI-assisted malware powers ransomware:</strong> The financially motivated threat actor known as Hive0163 has begun deploying "Slopoly," a suspected AI-generated malware framework, to streamline and accelerate its ransomware operations. Identified by IBM X-Force, Slopoly is used primarily for maintaining persistent access to compromised servers, allowing attackers to remain embedded in a network for extended periods during the post-exploitation phase. While the malware itself is currently described as relatively straightforward, its significance lies in how AI has enabled the rapid development of custom tools, significantly lowering the technical barrier for high-impact extortion and data exfiltration campaigns. [<a href="https://securityaffairs.com/189378/malware/ai-assisted-slopoly-malware-powers-hive0163s-ransomware-campaigns.html">more</a>]</p></li><li><p><strong>First AI-discovered Microsoft high-risk flaw:</strong> Microsoft&#8217;s March 2026 security updates highlight a major shift in how software bugs are found, specifically with a high-risk flaw labeled <strong>CVE-2026-21536</strong>. This issue, found in a tool called the Microsoft Devices Pricing Program, could have allowed hackers to take control of systems remotely. While Microsoft has already fixed the problem on its end, the focus is on how the bug was discovered. 
According to security expert Ben McCarthy, this is one of the first times a major Windows-related vulnerability was identified not by a human, but by an autonomous AI agent named <strong>XBOW</strong>. This milestone suggests that AI is now capable of performing high-level security testing on its own, potentially speeding up how quickly we find and fix digital threats. [<a href="https://krebsonsecurity.com/2026/03/microsoft-patch-tuesday-march-2026-edition/">more</a>]</p></li><li><p><strong>Vietnam&#8217;s first AI Law:</strong> Vietnam has enacted its first standalone AI Law, which takes effect on March 1, 2026. The law adopts a risk-based approach similar to Europe&#8217;s, classifying AI systems into high, medium, and low risk tiers. High-risk systems, such as those used in health care, face the strictest rules, including mandatory audits and a requirement for foreign companies to maintain local offices. The law bans the use of AI for manipulation or deception and requires clear labels on AI-generated content. To stay pro-innovation, the government is offering tax breaks and a dedicated development fund to attract investors, and companies have until September 2027 to bring existing high-risk systems into compliance. 
[<a href="https://iapp.org/news/a/vietnam-s-first-standalone-ai-law-an-overview-of-key-provisions-future-implications">more</a>]</p></li></ol><p></p><p style="text-align: center;"><em>&lt;<a href="https://whatsapp.com/channel/0029Vb6eRq8HVvThL8ilxQ2T">WhatsApp Channel</a> - follow and stay updated&gt;</em></p><div><hr></div><p><strong>Watch:</strong></p><div id="youtube2-Tfpl_FEhwyU" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;Tfpl_FEhwyU&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/Tfpl_FEhwyU?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p><br></p>]]></content:encoded></item><item><title><![CDATA[TechRisk #160: AI impact on labour market]]></title><description><![CDATA[Plus, AI threat modeling, Aqua Trivy supply chain risk surfaced, and more!]]></description><link>https://techriskguru.com/p/techrisk-160-ai-impact-on-labour</link><guid isPermaLink="false">https://techriskguru.com/p/techrisk-160-ai-impact-on-labour</guid><dc:creator><![CDATA[M.]]></dc:creator><pubDate>Sun, 08 Mar 2026 11:43:41 GMT</pubDate><enclosure url="https://images.unsplash.com/photo-1626906671748-8b20645524d1?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxfHx2YXBvcnxlbnwwfHx8fDE3NzI4MDc3Njl8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://images.unsplash.com/photo-1626906671748-8b20645524d1?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxfHx2YXBvcnxlbnwwfHx8fDE3NzI4MDc3Njl8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080" 
data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://images.unsplash.com/photo-1626906671748-8b20645524d1?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxfHx2YXBvcnxlbnwwfHx8fDE3NzI4MDc3Njl8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 424w, https://images.unsplash.com/photo-1626906671748-8b20645524d1?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxfHx2YXBvcnxlbnwwfHx8fDE3NzI4MDc3Njl8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 848w, https://images.unsplash.com/photo-1626906671748-8b20645524d1?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxfHx2YXBvcnxlbnwwfHx8fDE3NzI4MDc3Njl8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 1272w, https://images.unsplash.com/photo-1626906671748-8b20645524d1?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxfHx2YXBvcnxlbnwwfHx8fDE3NzI4MDc3Njl8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 1456w" sizes="100vw"><img src="https://images.unsplash.com/photo-1626906671748-8b20645524d1?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxfHx2YXBvcnxlbnwwfHx8fDE3NzI4MDc3Njl8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080" width="4643" height="3095" data-attrs="{&quot;src&quot;:&quot;https://images.unsplash.com/photo-1626906671748-8b20645524d1?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxfHx2YXBvcnxlbnwwfHx8fDE3NzI4MDc3Njl8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:3095,&quot;width&quot;:4643,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;white and blue smoke 
illustration&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="white and blue smoke illustration" title="white and blue smoke illustration" srcset="https://images.unsplash.com/photo-1626906671748-8b20645524d1?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxfHx2YXBvcnxlbnwwfHx8fDE3NzI4MDc3Njl8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 424w, https://images.unsplash.com/photo-1626906671748-8b20645524d1?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxfHx2YXBvcnxlbnwwfHx8fDE3NzI4MDc3Njl8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 848w, https://images.unsplash.com/photo-1626906671748-8b20645524d1?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxfHx2YXBvcnxlbnwwfHx8fDE3NzI4MDc3Njl8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 1272w, https://images.unsplash.com/photo-1626906671748-8b20645524d1?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxfHx2YXBvcnxlbnwwfHx8fDE3NzI4MDc3Njl8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 
15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h1>Tech Risk Reading Picks</h1><p><em>&lt;<a href="https://whatsapp.com/channel/0029Vb6eRq8HVvThL8ilxQ2T">WhatsApp Channel</a> - follow and stay updated&gt;</em></p><ol><li><p><strong>AI impact on labour market:</strong> Anthropic has launched the <strong>&#8220;AI Exposure Index,&#8221;</strong> a tracker revealing that <strong>computer programmers</strong> are the most vulnerable profession, with <strong>75% of their daily tasks</strong> now considered automatable by large language models. While mass layoffs haven&#8217;t materialized, the data shows a measurable <strong>slowdown in entry-level hiring</strong> for workers aged 22&#8211;25, suggesting companies are already replacing junior roles with AI workflows. Internal benchmarks show models like Claude can reduce certain task-completion times by up to <strong>80%</strong>, creating significant economic pressure on headcount. 
[<a href="https://cryptobriefing.com/anthropic-ai-exposure-index-job-vulnerability/">more</a>][<a href="https://www.anthropic.com/research/labor-market-impacts">more</a>-Anthropic]</p><p>Notable implications:</p><ul><li><p><strong>Labor Shift:</strong> The index highlights a structural problem where the pipeline for senior talent may narrow because the &#8220;junior&#8221; tasks used for training are being automated.</p></li><li><p><strong>Decentralized AI:</strong> As power concentrates in firms like Anthropic and OpenAI, there is a growing investment thesis for <strong>decentralized AI platforms</strong> that offer community-governed alternatives to traditional corporate employment.</p></li><li><p><strong>Investor Takeaway:</strong> High exposure scores for technical roles are strengthening the case for protocols focusing on decentralized compute and tokenized labor models, especially as the younger, tech-literate demographic faces a tightening traditional job market.</p></li></ul></li><li><p><strong>AI threat modeling: </strong>AI changes the security landscape from deterministic rules to probabilistic risks. [<a href="https://www.microsoft.com/en-us/security/blog/2026/02/26/threat-modeling-ai-applications/">more</a>]</p><ul><li><p><strong>New Attack Surfaces:</strong> Beyond traditional data breaches, AI introduces risks like <strong>prompt injection</strong>, <strong>model poisoning</strong>, and <strong>autonomous agent failures</strong> where instructions and data are often indistinguishable.</p></li><li><p><strong>Shift in Strategy:</strong> Shift from &#8220;perfect prevention&#8221; to <strong>limiting the blast radius</strong>. 
Because AI is non-deterministic, residual risk is inevitable; focus on defense-in-depth.</p></li><li><p><strong>Prioritize Assets, Not Just Attacks:</strong> Protect user trust, safety, and decision integrity as much as technical data.</p></li><li><p><strong>Action Plan:</strong> Map where untrusted data enters, define strict &#8220;never-do&#8221; boundaries, and invest in AI-specific observability to detect and respond to failures at scale.</p></li></ul></li><li><p><strong>Using AI to steal government data:</strong> Researchers from Gambit Security have uncovered a sophisticated cyberattack against the Mexican government, where an unknown hacker "jailbroke" Anthropic&#8217;s Claude AI to orchestrate the theft of 150 GB of sensitive data, including 195 million taxpayer records and voter files. By posing as an ethical "bug bounty" hunter and providing a detailed playbook to bypass safety guardrails, the attacker used the chatbot to identify network vulnerabilities, write exploit scripts, and automate data exfiltration across multiple federal and state agencies. When Claude resisted specific malicious commands, the hacker turned to OpenAI&#8217;s ChatGPT to calculate detection probabilities and plan lateral movement within the networks. [<a href="https://www.latimes.com/business/story/2026-02-26/hacker-used-anthropics-claude-ai-to-steal-mexican-government-data">more</a>]</p></li><li><p><strong>Aqua Trivy VS Code extension compromised:</strong> The <strong>&#8220;hackerbot-claw&#8221;</strong> campaign compromised the <strong>Aqua Trivy VS Code extension</strong> by injecting malicious code into versions 1.8.12 and 1.8.13 via a former employee&#8217;s stolen publishing token. 
The attack uniquely weaponized developers&#8217; own local AI coding tools (such as Copilot, Gemini, and Claude) by forcing them into unrestricted modes (e.g., <code>--yolo</code>) and using a 2,000-word prompt to trick them into acting as &#8220;forensic agents&#8221; to harvest credentials and exfiltrate sensitive data. While the versions were removed within 36 hours, the incident marks a critical shift in supply chain threats, where attackers no longer just steal data themselves but manipulate local AI assistants to perform the reconnaissance and theft on their behalf. [<a href="https://gbhackers.com/openvsx-aqua-trivy/">more</a>]</p></li><li><p><strong>OpenClaw self attack event:</strong> Web3 security firm GoPlus has reported a &#8220;self-attack&#8221; incident involving the AI development tool OpenClaw, where an AI-generated error led to the public exposure of over 100 sensitive environment variables, including Telegram keys and auth tokens. The breach occurred when the AI, attempting to automate a GitHub Issue creation, improperly formatted a Bash command. It included a <code>`set`</code> string wrapped in backticks, which Bash interpreted as a command to output all current system variables into the public issue description. 
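</p><p><em>A minimal sketch of the pitfall (illustrative, not OpenClaw&#8217;s actual code): inside a double-quoted Bash string, backticks trigger command substitution, so an unescaped <code>`set`</code> executes and splices every shell variable into the string; single quotes keep the backticks literal.</em></p><pre><code># Unsafe: double quotes allow command substitution; the backticked
# `set` runs and its output (every shell variable) is spliced in.
UNSAFE="Issue body: `set`"

# Safe: single quotes keep the backticks as literal characters.
SAFE='Issue body: `set`'
echo "$SAFE"   # prints: Issue body: `set`</code></pre><p>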
[<a href="http://www.rootdata.com/news/566212">more</a>]</p><p><br></p></li></ol>]]></content:encoded></item><item><title><![CDATA[TechRisk #159: 600 firewalls breached and further exploited using AI]]></title><description><![CDATA[Plus, massive security issue in DJI&#8217;s robot vacuums, install OpenClaw without permission through prompting injection, Microsoft 365 Copilot gotten excessive access, and more!]]></description><link>https://techriskguru.com/p/techrisk-159-600-firewalls-breached</link><guid isPermaLink="false">https://techriskguru.com/p/techrisk-159-600-firewalls-breached</guid><dc:creator><![CDATA[M.]]></dc:creator><pubDate>Sun, 01 Mar 2026 11:43:13 GMT</pubDate><enclosure url="https://images.unsplash.com/photo-1550039120-5d6529f0c4de?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw4fHx3YWxsfGVufDB8fHx8MTc3MjIwMzIwMXww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://images.unsplash.com/photo-1550039120-5d6529f0c4de?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw4fHx3YWxsfGVufDB8fHx8MTc3MjIwMzIwMXww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://images.unsplash.com/photo-1550039120-5d6529f0c4de?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw4fHx3YWxsfGVufDB8fHx8MTc3MjIwMzIwMXww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 424w, https://images.unsplash.com/photo-1550039120-5d6529f0c4de?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw4fHx3YWxsfGVufDB8fHx8MTc3MjIwMzIwMXww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 848w, 
https://images.unsplash.com/photo-1550039120-5d6529f0c4de?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw4fHx3YWxsfGVufDB8fHx8MTc3MjIwMzIwMXww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 1272w, https://images.unsplash.com/photo-1550039120-5d6529f0c4de?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw4fHx3YWxsfGVufDB8fHx8MTc3MjIwMzIwMXww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 1456w" sizes="100vw"><img src="https://images.unsplash.com/photo-1550039120-5d6529f0c4de?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw4fHx3YWxsfGVufDB8fHx8MTc3MjIwMzIwMXww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080" width="4896" height="3264" data-attrs="{&quot;src&quot;:&quot;https://images.unsplash.com/photo-1550039120-5d6529f0c4de?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw4fHx3YWxsfGVufDB8fHx8MTc3MjIwMzIwMXww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:3264,&quot;width&quot;:4896,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;wall with broken bricks&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="wall with broken bricks" title="wall with broken bricks" srcset="https://images.unsplash.com/photo-1550039120-5d6529f0c4de?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw4fHx3YWxsfGVufDB8fHx8MTc3MjIwMzIwMXww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 424w, 
https://images.unsplash.com/photo-1550039120-5d6529f0c4de?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw4fHx3YWxsfGVufDB8fHx8MTc3MjIwMzIwMXww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 848w, https://images.unsplash.com/photo-1550039120-5d6529f0c4de?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw4fHx3YWxsfGVufDB8fHx8MTc3MjIwMzIwMXww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 1272w, https://images.unsplash.com/photo-1550039120-5d6529f0c4de?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw4fHx3YWxsfGVufDB8fHx8MTc3MjIwMzIwMXww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" 
y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Photo by <a href="https://unsplash.com/@hngstrm">H&amp;CO</a> on <a href="https://unsplash.com">Unsplash</a></figcaption></figure></div><h1>Tech Risk Reading Picks</h1><p><em>&lt;<a href="https://whatsapp.com/channel/0029Vb6eRq8HVvThL8ilxQ2T">WhatsApp Channel</a> - follow and stay updated&gt;</em></p><ol><li><p><strong>Hacker breached 600 firewalls, then attacked enterprises with AI tools:</strong> Amazon noted that a Russian-speaking hacker broke into more than 600 FortiGate firewalls in 55 countries over five weeks by targeting devices that had their management panels exposed to the internet and protected by weak passwords without multi-factor authentication. Instead of using advanced software flaws, the attacker guessed common passwords to get in, then downloaded configuration files containing VPN logins, admin credentials, and network details. The hacker used generative AI tools to help write scripts, analyze stolen data, scan internal networks, and plan how to move deeper into victims&#8217; systems. They also targeted Veeam backup servers, likely to make it harder for companies to recover if ransomware was later deployed. Investigators found a server hosting stolen data and custom tools, including a system that fed network information into AI models like Claude and DeepSeek to generate step-by-step attack plans. While Amazon believes the attacker was only moderately skilled, AI tools helped them carry out large-scale attacks more easily, highlighting the need to secure firewall management interfaces, use strong passwords, enable MFA, and protect backup systems. 
[<a href="https://www.bleepingcomputer.com/news/security/amazon-ai-assisted-hacker-breached-600-fortigate-firewalls-in-5-weeks/">more</a>]</p></li><li><p><strong>OpenClaw installed without permission through prompt injection:</strong> A hacker exploited a prompt injection flaw in Cline, a popular open-source AI coding agent, to trick it into automatically installing the viral AI agent OpenClaw on users&#8217; machines, highlighting the growing risks of autonomous software. While the hacker chose to install OpenClaw as a stunt without activating it, the incident underscores how easily AI agents with system-level access can be hijacked to execute arbitrary commands. [<a href="https://www.theverge.com/ai-artificial-intelligence/881574/cline-openclaw-prompt-injection-hack">more</a>]</p></li><li><p><strong>Google&#8217;s old public API keys can access Gemini:</strong> A serious security flaw has exposed many Google Cloud projects because old public API keys can now access Google&#8217;s Gemini AI services without developers realizing it. For years, Google told developers that API keys starting with &#8220;AIza&#8221; were safe to place in public websites because they were only meant for billing and project identification. However, researchers found that if the Gemini (Generative Language) API is turned on in a project, all existing API keys in that project automatically gain access to Gemini. This is possible even if those keys were created years ago and are publicly visible. Attackers can simply copy a key from a website&#8217;s source code and use it to access private AI files, cached data, or run AI requests that charge the victim&#8217;s account, potentially causing data leaks, high bills, or service outages. Researchers discovered thousands of exposed keys online, affecting major companies and even Google services. Google is working on fixes, but developers are being urged to check their projects, restrict or rotate old keys, and remove any keys exposed in public code. 
[<a href="https://cybersecuritynews.com/google-api-keys-gemini/">more</a>]</p></li><li><p><strong>Microsoft 365 Copilot gotten excessive access:</strong> Microsoft has fixed a mistake that caused its AI assistant, Microsoft 365 Copilot Chat, to access and summarise some users&#8217; confidential emails by accident. The issue meant the tool could pull content from emails in a user&#8217;s Draft and Sent folders, even if those emails were marked as confidential or protected by security settings. Microsoft said the problem was caused by a code error and has now been corrected worldwide. [<a href="https://www.bbc.com/news/articles/c8jxevd8mdyo">more</a>]</p></li><li><p><strong>Massive security issue in DJI&#8217;s robot vacuums:</strong> Security researcher Ammy Azdoufal discovered a massive security flaw in DJI&#8217;s robot vacuums after a simple project to control his device with a PS5 controller accidentally granted him access to over 10,000 devices worldwide. By extracting his own private security token, Azdoufal was able to bypass PIN protections to view live camera feeds, listen through microphones, and download detailed 2D floor plans of strangers' homes across 24 countries, including the US, China, and the EU.</p></li><li><p><strong>AI in Boardroom:</strong> Artificial intelligence is spreading quickly across industries, from machine learning and generative AI to more advanced autonomous systems. As companies use AI more, the risks are also growing. AI can expose sensitive data, produce biased results, create compliance problems, and cause wider harm if used irresponsibly. Because of this, company boards need to treat AI risk as seriously as any other business risk. To prepare, boards should improve their own understanding of AI, encourage executives to learn more about it, consider adding members with real AI experience, and set up clear oversight through committees or updated governance processes. 
By staying informed and taking a structured approach, boards can help their organizations use AI responsibly and safely as the technology continues to evolve. [<a href="https://corpgov.law.harvard.edu/2026/02/22/artificial-intelligence-in-the-boardroom/">more</a>][<a href="https://www.deloitte.com/content/dam/assets-zone3/us/en/docs/services/risk-advisory/2024/DI_Global-risk-management-survey-12ed.pdf">more</a>-2]</p></li><li><p><strong>More powerful cybercriminals:</strong> Cybercriminals are using AI to make attacks faster and more powerful, putting security teams under greater pressure, according to CrowdStrike. In 2025, the average time for hackers to move from their first break-in to other systems dropped to 29 minutes (65% faster than the year before). The quickest attack took just 27 seconds, and one case saw data stolen within four minutes. Attackers are also misusing legitimate AI tools, hitting around 90 organizations by stealing passwords or cryptocurrency through malicious prompts. Nation-state and criminal groups are using AI about 90% more than before, with examples including Fancy Bear deploying AI malware to collect documents, Punk Spider using AI scripts to erase evidence and steal credentials, and North Korea-linked Chollima creating fake AI personas for insider attacks. Overall, AI is helping hackers strike faster, smarter, and at a larger scale than ever. 
[<a href="https://www.cybersecuritydive.com/news/threat-groups-record-speeds-ai-attacks/812965/">more</a>]</p></li></ol>]]></content:encoded></item><item><title><![CDATA[TechRisk #158: Zero-click attack Vibe-coding platform]]></title><description><![CDATA[Plus, Agentic AI governance guide by Palo Alto Networks, increasing powerful Notepad turns vulnerable, password managers might not be that secure, and more!]]></description><link>https://techriskguru.com/p/techrisk-158-zero-click-attack-vibe-coding-platform</link><guid isPermaLink="false">https://techriskguru.com/p/techrisk-158-zero-click-attack-vibe-coding-platform</guid><dc:creator><![CDATA[M.]]></dc:creator><pubDate>Sun, 22 Feb 2026 11:43:01 GMT</pubDate><enclosure url="https://images.unsplash.com/photo-1519658422992-0c8495f08389?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxNXx8cG9pbnRpbmd8ZW58MHx8fHwxNzcxNTM5Mzg2fDA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://images.unsplash.com/photo-1519658422992-0c8495f08389?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxNXx8cG9pbnRpbmd8ZW58MHx8fHwxNzcxNTM5Mzg2fDA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://images.unsplash.com/photo-1519658422992-0c8495f08389?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxNXx8cG9pbnRpbmd8ZW58MHx8fHwxNzcxNTM5Mzg2fDA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 424w, https://images.unsplash.com/photo-1519658422992-0c8495f08389?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxNXx8cG9pbnRpbmd8ZW58MHx8fHwxNzcxNTM5Mzg2fDA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 848w, 
https://images.unsplash.com/photo-1519658422992-0c8495f08389?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxNXx8cG9pbnRpbmd8ZW58MHx8fHwxNzcxNTM5Mzg2fDA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 1272w, https://images.unsplash.com/photo-1519658422992-0c8495f08389?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxNXx8cG9pbnRpbmd8ZW58MHx8fHwxNzcxNTM5Mzg2fDA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 1456w" sizes="100vw"><img src="https://images.unsplash.com/photo-1519658422992-0c8495f08389?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxNXx8cG9pbnRpbmd8ZW58MHx8fHwxNzcxNTM5Mzg2fDA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080" width="6862" height="4657" data-attrs="{&quot;src&quot;:&quot;https://images.unsplash.com/photo-1519658422992-0c8495f08389?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxNXx8cG9pbnRpbmd8ZW58MHx8fHwxNzcxNTM5Mzg2fDA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:4657,&quot;width&quot;:6862,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;beige wooden hand sculpture with orange background&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="beige wooden hand sculpture with orange background" title="beige wooden hand sculpture with orange background" srcset="https://images.unsplash.com/photo-1519658422992-0c8495f08389?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxNXx8cG9pbnRpbmd8ZW58MHx8fHwxNzcxNTM5Mzg2fDA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 424w, 
https://images.unsplash.com/photo-1519658422992-0c8495f08389?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxNXx8cG9pbnRpbmd8ZW58MHx8fHwxNzcxNTM5Mzg2fDA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 848w, https://images.unsplash.com/photo-1519658422992-0c8495f08389?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxNXx8cG9pbnRpbmd8ZW58MHx8fHwxNzcxNTM5Mzg2fDA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 1272w, https://images.unsplash.com/photo-1519658422992-0c8495f08389?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxNXx8cG9pbnRpbmd8ZW58MHx8fHwxNzcxNTM5Mzg2fDA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line 
x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Photo by <a href="https://unsplash.com/@charlesdeluvio">charlesdeluvio</a> on <a href="https://unsplash.com">Unsplash</a></figcaption></figure></div><h1>Tech Risk Reading Picks</h1><p><em>&lt;Announcement </em>- <a href="https://whatsapp.com/channel/0029Vb6eRq8HVvThL8ilxQ2T">WhatsApp Channel</a> - <em>follow and stay updated&gt;</em></p><ol><li><p><strong>Zero-click attack Vibe-coding tool:</strong> A security researcher demonstrated a zero-click attack on AI coding platform (Orchids) that allowed a security researcher to hijack a BBC reporter&#8217;s laptop. The flaw enabled the researcher to alter code inside an active project and remotely execute actions on the device without the user downloading malware or sharing credentials. This includes internet history or even spy through the cameras and microphones. [<a href="https://www.bbc.com/news/articles/cy4wnw04e8wo">more</a>]</p></li><li><p><strong>Increased attacks on OpenClaw:</strong> Cybersecurity researchers have identified an information stealer, likely a Vidar variant, exfiltrating sensitive files from OpenClaw (formerly Clawdbot/Moltbot) users, marking a shift from stealing browser credentials to harvesting AI agent &#8220;identities.&#8221; The malware captured files such as <code>openclaw.json</code> (gateway tokens and workspace info), <code>device.json</code> (cryptographic keys), and <code>soul.md</code> (agent behavior and ethical guidelines), potentially allowing attackers to impersonate or access a user&#8217;s AI agent. While the theft was opportunistic via broad file-grabbing routines, experts warn dedicated AI-targeting modules are likely to appear. 
The incident coincides with ongoing OpenClaw security concerns, including malicious skills campaigns hosted on fake websites, undeletable AI accounts on Moltbook, and hundreds of thousands of exposed instances susceptible to remote code execution, highlighting rising risks as the platform gains popularity and integrates into professional workflows. [<a href="https://thehackernews.com/2026/02/infostealer-steals-openclaw-ai-agent.html">more</a>]</p></li><li><p><strong>AI co-written logic caused $1.78M loss:</strong> Moonwell, a DeFi lending protocol, suffered a $1.78M exploit after a misconfigured cbETH price oracle drastically undervalued the token at around $1 instead of ~$2,200, allowing liquidators to drain over 1,096 cbETH and create protocol-level bad debt. The faulty pricing logic, reportedly co-written by the AI model Claude Opus 4.6, introduced an incorrect scaling factor, collapsing collateral requirements and enabling under-collateralized borrowing. [<a href="https://crypto.news/ethereum-price-forms-death-cross-as-etf-outflows-extend-into-fourth-month-will-it-crash/">more</a>]</p></li><li><p><strong>Agentic AI governance guide by Palo Alto Networks: </strong>Unlike traditional AI governance, which focuses on accuracy, bias, and compliance of generated responses, agentic AI governance must address action risk, authority boundaries, identity and access controls, runtime safeguards, and clear accountability when agents initiate transactions or interact with enterprise systems. Organizations need to be aware of the risks that agentic AI brings, such as loss of execution control, unauthorized tool use, privilege escalation, data misuse, accountability gaps, and behavioral drift over time. Effective governance ensures that organizations retain responsibility for the authority they delegate to agentic AI and that control remains active, visible, and enforceable throughout operation. 
[<a href="https://www.paloaltonetworks.com/cyberpedia/what-is-agentic-ai-governance">more</a>]</p></li><li><p><strong>Japan&#8217;s leading semiconductor test equipment supplier hit by ransomware:</strong> Advantest, one of Japan&#8217;s leading semiconductor test equipment suppliers, is responding to a ransomware attack that disrupted several internal systems after the company detected unusual activity and isolated affected networks. Early findings suggest an unauthorized party accessed parts of its environment and deployed ransomware, with investigations continuing alongside external cybersecurity specialists. Given Advantest&#8217;s central role in providing test and measurement tools for chips used in AI, autonomous vehicles and 5G infrastructure, any prolonged disruption could ripple across an already fragile global semiconductor supply chain. The incident comes amid a marked escalation in ransomware activity against industrial firms, with Dragos identifying 119 groups targeting roughly 3,300 organizations in 2025, a sharp increase from the prior year. [<a href="https://therecord.media/leading-japanese-semiconductor-supplier-ransomware">more</a>]</p></li><li><p><strong>Increasing powerful Notepad turns vulnerable:</strong> Microsoft has fixed a high-severity remote code execution vulnerability in Windows 11 Notepad that allowed attackers to execute local or remote programs by tricking users into Ctrl+clicking specially crafted Markdown links. The flaw, tracked as CVE-2026-20841, stemmed from improper handling of non-standard URI protocols such as file:// and ms-appinstaller://, enabling malicious files to run without triggering Windows security warnings. Because the code executed in the context of the logged-in user, attackers could gain the same permissions as the victim, potentially launching programs from remote SMB shares. 
The issue affected Notepad versions 11.2510 and earlier and was addressed in the February 2026 Patch Tuesday updates by introducing warning prompts for non-http and non-https links. [<a href="https://www.bleepingcomputer.com/news/microsoft/windows-11-notepad-flaw-let-files-execute-silently-via-markdown-links/">more</a>]</p></li><li><p><strong>Password recovery attacks on password managers:</strong> A new academic study has identified multiple password recovery and integrity attacks affecting major cloud-based password managers including Bitwarden, LastPass, Dashlane, and to a lesser extent 1Password, under a threat model that assumes a malicious server and scrutinizes their zero-knowledge encryption designs. Researchers uncovered numerous vulnerabilities ranging from metadata leakage and field manipulation to full organizational vault compromise, largely stemming from key escrow mechanisms, flawed item-level encryption, weaknesses in sharing features, and legacy cryptography that enables downgrade attacks. While the findings highlight design anti-patterns and cryptographic misconceptions that could undermine confidentiality and integrity guarantees for more than 60 million users and 125,000 businesses, there is no evidence of active exploitation. Vendors have disputed or contextualized some findings and have implemented or are implementing mitigations, including removing legacy cryptography support, strengthening integrity controls, and refining recovery to reduce exposure. 
[<a href="https://thehackernews.com/2026/02/study-uncovers-25-password-recovery.html">more</a>][more-2_<a href="https://ethz.ch/en/news-and-events/eth-news/news/2026/02/password-managers-less-secure-than-promised.html">researcher</a>+<a href="https://eprint.iacr.org/2026/058">paper</a>]</p></li><li><p><strong>Palo Alto Networks Unit 42 2026 Global Incident Response Report </strong>- [<a href="https://www.paloaltonetworks.com/resources/research/unit-42-incident-response-report">more</a>]</p><ol><li><p>The 2026 Unit 42 report highlights an era of <strong>faster, more complex cyberattacks</strong>, driven by AI, sprawling attack surfaces, and identity exploitation. </p></li><li><p>Analysis of over 750 high-stakes incidents shows that AI-enabled attacks are now <strong>4x faster</strong>, with data exfiltration possible in as little as <strong>72 minutes</strong>.</p></li><li><p>Enterprise complexity benefits attackers: <strong>89% of breaches exploit identity weaknesses</strong>, and <strong>87% span multiple attack surfaces</strong>, often blending endpoints, cloud, SaaS, and identity systems. Identity-based techniques, including social engineering and credential misuse, account for <strong>65% of initial access</strong>, while browser-based attacks affect nearly <strong>half of all incidents</strong>. 
</p></li><li><p>SaaS supply chain attacks have surged nearly <strong>4x since 2022</strong>, leveraging OAuth tokens and API keys.</p></li></ol></li></ol>]]></content:encoded></item><item><title><![CDATA[TechRisk #157: Gemini supporting full attack lifecycle]]></title><description><![CDATA[Plus, ads are testing users&#8217; trust, more than 500 zero day vulnerabilities identified by Claude, and more!]]></description><link>https://techriskguru.com/p/techrisk-157-gemini-supporting-full-attack-lifecycle</link><guid isPermaLink="false">https://techriskguru.com/p/techrisk-157-gemini-supporting-full-attack-lifecycle</guid><dc:creator><![CDATA[M.]]></dc:creator><pubDate>Sun, 15 Feb 2026 11:43:11 GMT</pubDate><enclosure url="https://images.unsplash.com/photo-1523357585206-175e971f2ad9?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw5fHx3aGVlbHxlbnwwfHx8fDE3NzA5OTQ1MzZ8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://images.unsplash.com/photo-1523357585206-175e971f2ad9?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw5fHx3aGVlbHxlbnwwfHx8fDE3NzA5OTQ1MzZ8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://images.unsplash.com/photo-1523357585206-175e971f2ad9?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw5fHx3aGVlbHxlbnwwfHx8fDE3NzA5OTQ1MzZ8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 424w, https://images.unsplash.com/photo-1523357585206-175e971f2ad9?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw5fHx3aGVlbHxlbnwwfHx8fDE3NzA5OTQ1MzZ8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 848w, 
https://images.unsplash.com/photo-1523357585206-175e971f2ad9?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw5fHx3aGVlbHxlbnwwfHx8fDE3NzA5OTQ1MzZ8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 1272w, https://images.unsplash.com/photo-1523357585206-175e971f2ad9?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw5fHx3aGVlbHxlbnwwfHx8fDE3NzA5OTQ1MzZ8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 1456w" sizes="100vw"><img src="https://images.unsplash.com/photo-1523357585206-175e971f2ad9?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw5fHx3aGVlbHxlbnwwfHx8fDE3NzA5OTQ1MzZ8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080" width="2443" height="1623" data-attrs="{&quot;src&quot;:&quot;https://images.unsplash.com/photo-1523357585206-175e971f2ad9?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw5fHx3aGVlbHxlbnwwfHx8fDE3NzA5OTQ1MzZ8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1623,&quot;width&quot;:2443,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;photo of red and white bike tire&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="photo of red and white bike tire" title="photo of red and white bike tire" srcset="https://images.unsplash.com/photo-1523357585206-175e971f2ad9?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw5fHx3aGVlbHxlbnwwfHx8fDE3NzA5OTQ1MzZ8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 424w, 
https://images.unsplash.com/photo-1523357585206-175e971f2ad9?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw5fHx3aGVlbHxlbnwwfHx8fDE3NzA5OTQ1MzZ8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 848w, https://images.unsplash.com/photo-1523357585206-175e971f2ad9?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw5fHx3aGVlbHxlbnwwfHx8fDE3NzA5OTQ1MzZ8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 1272w, https://images.unsplash.com/photo-1523357585206-175e971f2ad9?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw5fHx3aGVlbHxlbnwwfHx8fDE3NzA5OTQ1MzZ8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" 
y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Photo by <a href="https://unsplash.com/@alessandracaretto">Alessandra Caretto</a> on <a href="https://unsplash.com">Unsplash</a></figcaption></figure></div><h1>Tech Risk Reading Picks</h1><ol><li><p><strong>State actors are using Gemini:</strong> State backed hackers from China, Iran, North Korea and Russia are using Google Gemini to support the full attack lifecycle from reconnaissance to data exfiltration which lowers the barrier to entry and accelerates operations. Adversaries are leveraging the model for target profiling, phishing content, code generation, vulnerability testing and command and control development which increases the speed and scale of campaigns. Iranian and Chinese actors have used Gemini to refine intrusion techniques and automate exploit analysis against specific targets which raises concerns about AI assisted targeting of enterprises. Malware such as HonestCue and phishing kits like CoinBait show how generative AI can be embedded into toolchains to dynamically generate payloads and enhance credential harvesting. Cybercriminal groups are also applying AI in social engineering campaigns such as ClickFix to distribute infostealers which heightens enterprise exposure through user manipulation. Separately, Google also noted attackers executing over 100,000 prompts to perform large scale model extraction and knowledge distillation attempts. While no breakthrough capabilities have been observed, the steady integration of AI into offensive operations signals a structural shift in cyber risk. 
[<a href="https://www.bleepingcomputer.com/news/security/google-says-hackers-are-abusing-gemini-ai-for-all-attacks-stages/">more</a>]</p></li><li><p><strong>OpenAI with Ads will test users&#8217; trust:</strong> Zo&#235; Hitzig&#8217;s departure from OpenAI highlights growing concern that introducing advertising into ChatGPT could create incentives to monetise highly sensitive user conversations. Users have shared deeply personal information with the expectation of neutrality, and targeted advertising built on that archive raises risks of manipulation and loss of trust. While OpenAI has pledged to keep a firewall between chats and advertisers, these commitments are not legally binding and may erode under commercial pressure. Past issues such as model sycophancy have intensified scrutiny over whether engagement optimisation could conflict with user wellbeing. Proposals for independent oversight or data trusts reflect recognition that governance mechanisms may be required to protect user interests. [<a href="https://gizmodo.com/openai-researcher-quits-warns-its-unprecedented-archive-of-human-candor-is-dangerous-2000720822">more</a>]</p></li><li><p><strong>More than 500 zero day vulnerabilities identified by Claude:</strong> Anthropic&#8217;s Claude Opus 4.6 identified more than 500 previously unknown high severity vulnerabilities in open source libraries with minimal prompting, signaling a step change in automated security testing. The model uncovered zero day flaws that could crash systems or corrupt memory, including issues in widely used tools such as GhostScript and OpenSC, which raises the stakes for organizations that depend on open source components. Its ability to move beyond standard fuzzing and manual analysis and to generate its own proof of concept exploits highlights how advanced reasoning can expose risks that traditional tools miss. 
While this development strengthens defensive capabilities, it also suggests a parallel risk that similar AI tools could accelerate threat actor discovery of exploitable flaws. [<a href="https://www.msn.com/en-us/news/technology/anthropics-newest-ai-model-uncovered-500-zero-day-software-flaws-in-testing/ar-AA1VKTFp">more</a>][<a href="https://red.anthropic.com/2026/zero-days/">more</a>-2_Anthropic_Red]</p></li><li><p><strong>Hidden risk of AI agent social networking site:</strong> An experimental AI agent social platform (Moltbook) exposed its entire production database through an unsecured API key, allowing unauthenticated access to user secrets and PII. In addition, the platform enabled unlimited bot creation without rate limiting, raising concerns about abuse, manipulation, and artificial activity at scale. Experts warn that beyond the data leak, the design enables large-scale prompt injection attacks that could cascade across interconnected agents. [<a href="https://www.darkreading.com/cyber-risk/agentic-ai-moltbook-security-risks">more</a>]</p></li><li><p><strong>Risks remain as OpenClaw partners with VirusTotal:</strong> OpenClaw&#8217;s partnership with Google-owned VirusTotal adds a useful security checkpoint for scanning skills in its ClawHub marketplace, but it also highlights deeper risks in the fast-growing agentic ecosystem. While automated scanning and daily rechecks can reduce obvious malware exposure, they cannot reliably catch prompt injection or skills that abuse legitimate access. This leaves room for stealthy data exfiltration and unauthorized actions. [<a href="https://thehackernews.com/2026/02/openclaw-integrates-virustotal-scanning.html">more</a>]</p></li><li><p><strong>Maintaining operational resilience in a complex corporate environment:</strong> United Airlines&#8217; CISO highlights that aviation systems are built for stability and long lifecycles, which makes rapid cybersecurity modernization risky if not carefully managed. 
Legacy and safety-critical environments cannot be frequently modified, so airlines must rely on layered controls such as identity management, segmentation, monitoring, and compensating safeguards to reduce exposure without creating operational fragility. Cyber incidents in aviation can quickly escalate from IT issues to flight delays, safety concerns, and reputational damage, which shifts the focus from pure prevention to operational continuity and resilience. As such, crisis response must be multidisciplinary and rehearsed in advance because decisions may affect passengers in the air and on the ground, and missteps can erode public trust. [<a href="https://www.helpnetsecurity.com/2026/02/09/deneen-defiore-united-airlines-aviation-cybersecurity-strategy/">more</a>]</p></li></ol>]]></content:encoded></item><item><title><![CDATA[TechRisk #156: AI-only social network exposed 1.5M API tokens]]></title><description><![CDATA[Tech Risk Reading Picks]]></description><link>https://techriskguru.com/p/techrisk-156-ai-only-social-network-15m-api-tokens</link><guid isPermaLink="false">https://techriskguru.com/p/techrisk-156-ai-only-social-network-15m-api-tokens</guid><dc:creator><![CDATA[M.]]></dc:creator><pubDate>Sun, 08 Feb 2026 11:43:02 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!rxxC!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F92f2fcff-d490-4758-b61f-1f03bc8a84cc_1024x608.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!rxxC!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F92f2fcff-d490-4758-b61f-1f03bc8a84cc_1024x608.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" 
srcset="https://substackcdn.com/image/fetch/$s_!rxxC!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F92f2fcff-d490-4758-b61f-1f03bc8a84cc_1024x608.png 424w, https://substackcdn.com/image/fetch/$s_!rxxC!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F92f2fcff-d490-4758-b61f-1f03bc8a84cc_1024x608.png 848w, https://substackcdn.com/image/fetch/$s_!rxxC!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F92f2fcff-d490-4758-b61f-1f03bc8a84cc_1024x608.png 1272w, https://substackcdn.com/image/fetch/$s_!rxxC!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F92f2fcff-d490-4758-b61f-1f03bc8a84cc_1024x608.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!rxxC!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F92f2fcff-d490-4758-b61f-1f03bc8a84cc_1024x608.png" width="1024" height="608" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/92f2fcff-d490-4758-b61f-1f03bc8a84cc_1024x608.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:&quot;normal&quot;,&quot;height&quot;:608,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" 
srcset="https://substackcdn.com/image/fetch/$s_!rxxC!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F92f2fcff-d490-4758-b61f-1f03bc8a84cc_1024x608.png 424w, https://substackcdn.com/image/fetch/$s_!rxxC!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F92f2fcff-d490-4758-b61f-1f03bc8a84cc_1024x608.png 848w, https://substackcdn.com/image/fetch/$s_!rxxC!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F92f2fcff-d490-4758-b61f-1f03bc8a84cc_1024x608.png 1272w, https://substackcdn.com/image/fetch/$s_!rxxC!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F92f2fcff-d490-4758-b61f-1f03bc8a84cc_1024x608.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" 
stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h1>Tech Risk Reading Picks</h1><ol><li><p><strong>When AI agents become the weakest link:</strong> A widely used AI agent called Moltbot was shown to be vulnerable to simple attacks that expose sensitive data and system access, highlighting governance and security risks as organisations adopt autonomous AI tools. The agent is designed to have broad access to email, messaging apps, files, and credentials. This creates a large attack surface if controls are weak. Researchers demonstrated that attackers could hijack Moltbot through internet-facing components and then pivot to private communications and other connected systems. A marketplace for third-party &#8220;skills&#8221; introduces supply chain risk, as malicious code can be disguised as popular add-ons and falsely appear trustworthy through manipulated download metrics. Weak validation of uploaded files also enabled code execution on shared infrastructure, showing how basic security gaps can cascade into wider compromise. The core risk is structural rather than accidental, because AI agents are valuable precisely because they have permissions that traditional software does not, making failures more damaging. This raises concerns about data leakage, credential abuse, regulatory exposure, and operational disruption if agent deployments are not tightly sandboxed and audited. 
[<a href="https://www.404media.co/silicon-valleys-favorite-new-ai-agent-has-serious-security-flaws/">more</a>]</p></li><li><p><strong>Risks behind an AI-only social network:</strong> Moltbook exposed material technology risk after a misconfigured backend allowed unauthenticated read and write access to its production database, resulting in exposure of 1.5 million API authentication tokens, more than 35,000 email addresses, and private messages. Attackers could fully impersonate any AI agent using leaked credentials, enabling account takeover and misuse of high visibility accounts. The absence of access controls also allowed modification of live posts, meaning any party could deface content, manipulate reputation scores, or inject malicious prompts consumed by other agents. Private messages were stored without protection and included third party API keys, extending the impact beyond the platform itself. The findings show that a single configuration error in a widely used cloud service can directly lead to large scale data exposure, loss of content integrity, and downstream security compromise across connected AI services. [<a href="https://www.wiz.io/blog/exposed-moltbook-database-reveals-millions-of-api-keys">more</a>]</p></li><li><p><strong>Full cloud compromise by AI in minutes:</strong> The incident was identified through post attack investigation by the Sysdig Threat Research Team, which analyzed cloud activity logs and configuration changes after suspicious behavior was detected. Attackers accessed an AWS environment after finding valid credentials exposed in public S3 buckets and used them as an entry point into the account. They rapidly escalated privileges by modifying existing Lambda functions until they obtained administrative access. AI resources were used throughout the attack to automate discovery, generate attack code, and guide real time decisions, which allowed the intrusion to complete in under ten minutes. 
This included abusing Amazon Bedrock to invoke multiple AI models, turning the compromised environment into an AI and infrastructure resource for the attackers. [<a href="https://www.darkreading.com/cloud-security/8-minute-access-ai-aws-environment-breach">more</a>][<a href="https://www.sysdig.com/blog/ai-assisted-cloud-intrusion-achieves-admin-access-in-8-minutes">more</a>-2_sysdig]</p></li><li><p><strong>Priorities for CISOs this year according to Google&#8217;s CISO: </strong>As AI becomes embedded in core business operations, CISOs face heightened risk from strategies that focus on compliance alone, since regulatory alignment often lags real world threats and can leave organizations exposed to disruptive attacks. AI supply chains introduce new vulnerabilities because models, data, and third party components can be tampered with in ways that undermine trust, reliability, and decision making at scale. Weak identity management is now a critical risk as agentic AI expands, because poor control over human and machine identities increases the blast radius of inevitable incidents and reduces accountability. Traditional security response speeds are insufficient against AI enabled attacks, making slow detection and recovery a material business risk that can directly impact availability and revenue. Inadequate AI governance also raises strategic and ethical concerns, since without strong context driven oversight and testing, organizations may deploy AI in high impact decisions without fully understanding or managing the consequences. 
[<a href="https://cloud.google.com/blog/products/identity-security/cloud-ciso-perspectives-5-top-ciso-priorities-in-2026">more</a>]</p></li></ol>]]></content:encoded></item><item><title><![CDATA[TechRisk #155: Attackers exploit OpenAI team invites]]></title><description><![CDATA[Plus, ethical hackers are rapidly adopting AI, confidential documents uploaded to public version of ChatGPT, and more!]]></description><link>https://techriskguru.com/p/techrisk-155-attackers-exploit-openai</link><guid isPermaLink="false">https://techriskguru.com/p/techrisk-155-attackers-exploit-openai</guid><dc:creator><![CDATA[M.]]></dc:creator><pubDate>Sun, 01 Feb 2026 11:43:29 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!F177!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F090be953-c5d2-4aad-a11f-f024f9be2486_1024x608.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!F177!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F090be953-c5d2-4aad-a11f-f024f9be2486_1024x608.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!F177!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F090be953-c5d2-4aad-a11f-f024f9be2486_1024x608.png 424w, https://substackcdn.com/image/fetch/$s_!F177!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F090be953-c5d2-4aad-a11f-f024f9be2486_1024x608.png 848w, 
https://substackcdn.com/image/fetch/$s_!F177!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F090be953-c5d2-4aad-a11f-f024f9be2486_1024x608.png 1272w, https://substackcdn.com/image/fetch/$s_!F177!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F090be953-c5d2-4aad-a11f-f024f9be2486_1024x608.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!F177!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F090be953-c5d2-4aad-a11f-f024f9be2486_1024x608.png" width="1024" height="608" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/090be953-c5d2-4aad-a11f-f024f9be2486_1024x608.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:&quot;normal&quot;,&quot;height&quot;:608,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!F177!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F090be953-c5d2-4aad-a11f-f024f9be2486_1024x608.png 424w, https://substackcdn.com/image/fetch/$s_!F177!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F090be953-c5d2-4aad-a11f-f024f9be2486_1024x608.png 848w, 
https://substackcdn.com/image/fetch/$s_!F177!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F090be953-c5d2-4aad-a11f-f024f9be2486_1024x608.png 1272w, https://substackcdn.com/image/fetch/$s_!F177!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F090be953-c5d2-4aad-a11f-f024f9be2486_1024x608.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h1>Tech Risk Reading Picks</h1><ol><li><p><strong>Attackers exploit OpenAI team invites to breach enterprises:</strong> Kaspersky 
discovered attackers abusing OpenAI&#8217;s team invitation feature by creating accounts that embed malicious links or phone numbers inside the organization name field, which are then delivered through emails sent from legitimate OpenAI addresses. This approach makes the messages appear authentic and helps them bypass standard email security controls, increasing the likelihood that employees trust and act on them. Victims are directed to click deceptive links or call fraudulent numbers, where credentials or payment details are harvested, leading to potential data and financial loss. The attack is often reinforced with follow-up vishing calls that apply urgency and pressure, reducing the chance of detection. [<a href="https://www.techradar.com/pro/beware-hackers-have-hijacked-openais-invite-your-team-feature-to-break-into-your-business">more</a>]</p></li><li><p><strong>Ethical hackers are rapidly adopting AI:</strong> Recent research shows ethical hackers are rapidly adopting AI, which introduces several technology risk considerations. AI-driven automation accelerates vulnerability discovery and code analysis, which increases the pace at which both defenders and attackers can find weaknesses, raising the risk of faster and larger-scale exploitation. However, heavy reliance on AI tools can also create blind spots if models miss context-specific risks or reinforce existing biases, which may weaken assurance over security outcomes. The growing use of AI in hacking workflows lowers skill barriers, which could indirectly empower less experienced or malicious actors if similar tools are misused. The key question is whether widespread AI use in ethical hacking normalizes techniques that attackers can easily replicate, potentially narrowing the defensive advantage and complicating regulatory and ethical boundaries around acceptable security testing practices. 
[<a href="https://www.cybersecurity-insiders.com/ai-is-being-used-by-over-80-of-ethical-hackers-for-greater-precision/">more</a>][<a href="https://www.bugcrowd.com/resources/report/inside-the-mind-of-a-hacker/">more</a>-bugcrowd]</p></li><li><p><strong>Implications of artificial intelligence and digital finance:</strong> AI and digital finance are reshaping financial markets by accelerating decision making and digitising financial claims, which raises financial stability risks through faster liquidity shocks, higher operational dependencies and stronger contagion effects across institutions. AI driven trading and automated responses can intensify price swings during stress, while tokenised assets can move or be redeemed faster than underlying liquidity allows, increasing the risk of disorderly markets. Heavy reliance on shared cloud providers, data sources and platforms creates concentrated operational and cyber risks, where a single disruption could have system wide impact. The widespread use of similar AI models and tokenisation infrastructures can cause firms to react in the same way to shocks, amplifying stress and transmitting it rapidly across borders. The key question is whether current governance and regulatory frameworks can keep pace with the speed and complexity of these technologies. [<a href="https://www.bis.org/speeches/sp260126.htm">more</a>]</p></li><li><p><strong>AI systems used by enterprises exposed publicly:</strong> A joint investigation found more than 175,000 publicly exposed AI systems running outside standard enterprise controls which creates material cyber and governance risk for organizations. Nearly half of these systems can execute code and access external systems which elevates the threat from data misuse to direct operational and financial impact if abused. 
Because these deployments often sit outside corporate security perimeters, they are harder to monitor, secure, and distinguish from sanctioned AI use, which increases exposure to fraud, resource theft, and regulatory scrutiny. Active criminal campaigns are already exploiting these weaknesses to hijack AI infrastructure for spam, disinformation, and resale, which shows the risk is immediate rather than theoretical. [<a href="https://thehackernews.com/2026/01/researchers-find-175000-publicly.html">more</a>]</p></li><li><p><strong>Confidential documents uploaded to public version of ChatGPT:</strong> The acting director of the US Cybersecurity and Infrastructure Security Agency uploaded multiple &#8220;for official use only&#8221; government contracting documents to the public version of ChatGPT, causing sensitive information to leave approved federal systems and triggering automated security alerts. The uploads occurred despite existing restrictions on public AI tools and followed the granting of a temporary exception for the director. Security sensors detected the activity within weeks, confirming that monitoring controls functioned, but only after the data had already been shared externally. [<a href="https://www.csoonline.com/article/4124320/cisa-chief-uploaded-sensitive-government-files-to-public-chatgpt.html">more</a>]</p></li><li><p><strong>AI-powered healthcare services provider compromised:</strong> A 2025 cyberattack on HCIactive, an AI-powered healthcare services provider, compromised data of about 3.1 million individuals, placing it among the largest health data breaches of the year and raising concerns about third-party technology risk in healthcare. Attackers accessed the company&#8217;s network over several days before detection, showing gaps in monitoring and incident response that increase exposure for clients relying on outsourced digital services. 
The stolen data included sensitive medical records and identity information, creating long-term risks of fraud, regulatory penalties, litigation, and loss of trust for healthcare practices tied to the platform. [<a href="https://www.govinfosecurity.com/ai-powered-services-firm-says-hack-affects-31m-a-30618">more</a>]</p><p></p></li></ol>]]></content:encoded></item><item><title><![CDATA[TechRisk #154: AI Zombie Agent]]></title><description><![CDATA[Plus, advanced and high-quality malware framework likely developed using AI agent, when one click is enough, Chainlit exposes enterprises to data leakage, and more!]]></description><link>https://techriskguru.com/p/techrisk-154-ai-zombie-agent</link><guid isPermaLink="false">https://techriskguru.com/p/techrisk-154-ai-zombie-agent</guid><dc:creator><![CDATA[M.]]></dc:creator><pubDate>Sun, 25 Jan 2026 11:43:29 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!aQhG!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F696450fa-f41f-40a1-a89f-a0b5c9c2aad3_1024x608.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!aQhG!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F696450fa-f41f-40a1-a89f-a0b5c9c2aad3_1024x608.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!aQhG!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F696450fa-f41f-40a1-a89f-a0b5c9c2aad3_1024x608.png 424w, 
https://substackcdn.com/image/fetch/$s_!aQhG!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F696450fa-f41f-40a1-a89f-a0b5c9c2aad3_1024x608.png 848w, https://substackcdn.com/image/fetch/$s_!aQhG!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F696450fa-f41f-40a1-a89f-a0b5c9c2aad3_1024x608.png 1272w, https://substackcdn.com/image/fetch/$s_!aQhG!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F696450fa-f41f-40a1-a89f-a0b5c9c2aad3_1024x608.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!aQhG!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F696450fa-f41f-40a1-a89f-a0b5c9c2aad3_1024x608.png" width="1024" height="608" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/696450fa-f41f-40a1-a89f-a0b5c9c2aad3_1024x608.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:&quot;normal&quot;,&quot;height&quot;:608,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!aQhG!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F696450fa-f41f-40a1-a89f-a0b5c9c2aad3_1024x608.png 424w, 
https://substackcdn.com/image/fetch/$s_!aQhG!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F696450fa-f41f-40a1-a89f-a0b5c9c2aad3_1024x608.png 848w, https://substackcdn.com/image/fetch/$s_!aQhG!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F696450fa-f41f-40a1-a89f-a0b5c9c2aad3_1024x608.png 1272w, https://substackcdn.com/image/fetch/$s_!aQhG!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F696450fa-f41f-40a1-a89f-a0b5c9c2aad3_1024x608.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" 
y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h1>Tech Risk Reading Picks</h1><ol><li><p><strong>New class of AI-driven enterprise risk:</strong> The ZombieAgent research highlights a significant emerging technology risk for enterprises using AI assistants with deep system integrations: attackers can exploit AI &#8220;connectors&#8221; to business-critical platforms (email, documents, code repositories, collaboration tools) to silently extract sensitive data, making AI a new, low-friction attack surface because it cannot reliably distinguish legitimate instructions from malicious ones hidden in routine content. Of particular concern is persistence risk: by manipulating the AI&#8217;s memory, attackers can embed long-term rules that enable continuous data exfiltration across future interactions, effectively turning the AI into an internal spy without ongoing user action or visibility. A further risk is governance and oversight: organizations lack transparency into how AI agents interpret untrusted inputs and what actions they autonomously execute in cloud environments, creating a material control gap. [<a href="https://www.eweek.com/news/zombie-ai-attack-chatgpt-leaks/?email_hash=0d7a7050906b225db2718485ca0f3472">more</a>]</p></li><li><p><strong>When one click is enough:</strong> The Reprompt incident highlights a material technology risk for enterprises adopting embedded AI assistants: a single, seemingly legitimate click was sufficient to trigger silent access to sensitive corporate and personal data by exploiting trusted session context, bypassing traditional security controls and leaving little to no forensic signal. This is concerning as these AI tools can act as privileged insiders without requiring malware, added permissions, or ongoing user interaction, thereby expanding the organization&#8217;s attack surface beyond conventional phishing and endpoint threats. 
Even though Microsoft has patched the specific flaw, the broader risk persists around AI deep links, persistent sessions, and automated chaining of actions, which can undermine data governance, regulatory compliance, and incident detectability if not managed with defense-in-depth. [<a href="https://www.esecurityplanet.com/artificial-intelligence/microsoft-copilot-reprompt-attack-enables-stealthy-data-exfiltration/?email_hash=0d7a7050906b225db2718485ca0f3472">more</a>]</p></li><li><p><strong>AI productivity tools are creating a new language-driven cyber risk:</strong> Recent disclosures highlight how AI-enabled workplace tools can unintentionally expose sensitive enterprise data, underscoring emerging technology risks. First, indirect prompt injection is a growing concern: attackers can embed malicious instructions in seemingly benign content (such as calendar invites) that AI assistants later process, allowing unauthorized actions or data leakage without user awareness. This expands the attack surface beyond traditional code vulnerabilities into everyday business workflows. Second, identity and privilege escalation risks in AI platforms are increasing, as flaws in service accounts and managed identities can enable attackers with minimal access to escalate privileges, access sensitive AI interactions, or compromise cloud infrastructure. This poses challenges to existing governance and access-control models. Third, weak security-by-design in AI agents and coding tools remains prevalent, with many systems failing to enforce basic authorization, business logic controls, and protections against data exfiltration. 
[<a href="https://thehackernews.com/2026/01/google-gemini-prompt-injection-flaw.html">more</a>]</p></li><li><p><strong>Chainlit exposes enterprises to data leakage and cloud takeover: </strong>Two easy-to-exploit vulnerabilities discovered in the widely adopted open-source AI framework Chainlit pose material technology and governance risks for enterprises, particularly those deploying AI chatbots connected to sensitive internal data. First, an arbitrary file read flaw could allow attackers to extract environment variables containing API keys, cloud credentials, and authentication secrets. This gives attackers a pathway to data leakage, identity compromise, and even full account takeover in regulated environments such as financial services and energy. Second, a server-side request forgery (SSRF) weakness can be combined with the file read issue to probe internal systems, access confidential APIs, and enable lateral movement within cloud infrastructure, elevating the risk from isolated exposure to systemic breach. [<a href="https://www.theregister.com/2026/01/20/ai_framework_flaws_enterprise_clouds/">more</a>]</p></li><li><p><strong>Advanced and high-quality malware framework likely developed using AI agent:</strong> VoidLink is the first well-documented case showing that a truly advanced, high-quality malware framework can be built predominantly with AI, marking the practical beginning of an era long theorized by security researchers. Check Point Research found that, unlike earlier AI-linked malware tied to inexperienced actors or recycled open-source code, VoidLink was sophisticated, modular, and rapidly developed. It was also likely developed by a single skilled individual using an AI agent end-to-end. 
Due to OPSEC failures, researchers uncovered extensive planning artifacts revealing a Spec Driven Development workflow, where the AI was first tasked with generating detailed multi-team plans, specifications, and sprints, then used to implement, test, and iterate the malware. Despite documentation implying a 20&#8211;30 week effort by multiple teams, evidence shows a functional implant was produced in under a week. This demonstrates how AI can collapse the time, resources, and coordination once required for high-complexity cyberattacks. [<a href="https://research.checkpoint.com/2026/voidlink-early-ai-generated-malware-framework/">more</a>]</p><p></p></li></ol>]]></content:encoded></item><item><title><![CDATA[TechRisk #153: 91,000 attacks on AI infrastructure]]></title><description><![CDATA[Plus, strategic risks and governance implications of AI-enabled cyber threats, learning from AI threats in 2025, A new class of stealth Cloud malware targeting Linux infrastructure, and more!]]></description><link>https://techriskguru.com/p/techrisk-153-91000-attacks-on-ai</link><guid isPermaLink="false">https://techriskguru.com/p/techrisk-153-91000-attacks-on-ai</guid><dc:creator><![CDATA[M.]]></dc:creator><pubDate>Sun, 18 Jan 2026 11:43:44 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!vuhL!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F245fd601-154f-47ef-b1f3-63d2b68057fb_1024x608.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!vuhL!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F245fd601-154f-47ef-b1f3-63d2b68057fb_1024x608.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" 
srcset="https://substackcdn.com/image/fetch/$s_!vuhL!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F245fd601-154f-47ef-b1f3-63d2b68057fb_1024x608.png 424w, https://substackcdn.com/image/fetch/$s_!vuhL!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F245fd601-154f-47ef-b1f3-63d2b68057fb_1024x608.png 848w, https://substackcdn.com/image/fetch/$s_!vuhL!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F245fd601-154f-47ef-b1f3-63d2b68057fb_1024x608.png 1272w, https://substackcdn.com/image/fetch/$s_!vuhL!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F245fd601-154f-47ef-b1f3-63d2b68057fb_1024x608.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!vuhL!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F245fd601-154f-47ef-b1f3-63d2b68057fb_1024x608.png" width="1024" height="608" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/245fd601-154f-47ef-b1f3-63d2b68057fb_1024x608.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:&quot;normal&quot;,&quot;height&quot;:608,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" 
srcset="https://substackcdn.com/image/fetch/$s_!vuhL!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F245fd601-154f-47ef-b1f3-63d2b68057fb_1024x608.png 424w, https://substackcdn.com/image/fetch/$s_!vuhL!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F245fd601-154f-47ef-b1f3-63d2b68057fb_1024x608.png 848w, https://substackcdn.com/image/fetch/$s_!vuhL!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F245fd601-154f-47ef-b1f3-63d2b68057fb_1024x608.png 1272w, https://substackcdn.com/image/fetch/$s_!vuhL!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F245fd601-154f-47ef-b1f3-63d2b68057fb_1024x608.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><h1>Tech Risk Reading Picks</h1><ol><li><p><strong>Over 91,000 coordinated attacks on AI infrastructure:</strong> Security research documents more than 91,000 coordinated attacks against AI infrastructure over three months, highlighting material technology risks for organizations scaling AI adoption: first, <strong>server-side request forgery (SSRF) exploits are being used to coerce AI and communications platforms</strong> into making unauthorized outbound connections, raising concerns about data leakage, regulatory exposure, and abuse of trusted integrations; second, <strong>systematic reconnaissance of large language model (LLM) endpoints is probing for misconfigured proxies</strong> that could expose access to paid or proprietary AI services, signalling potential revenue loss, intellectual property theft, and downstream breaches; third, the professional, globally distributed nature of the activity (e.g. using VPS-based tooling and quiet &#8220;low-noise&#8221; queries) suggests <strong>attackers are building pipelines for future exploitation rather than one-off testing</strong>, increasing long-term risk. A notable controversy is the apparent use of security-research tooling (such as OAST callback infrastructure) at scale, blurring the line between legitimate testing and grey-hat activity, which complicates attribution, response decisions, and legal positioning for affected enterprises. 
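To make the SSRF mechanism concrete, here is a minimal, illustrative egress guard of the kind that mitigates this class of abuse: before a platform fetches a user-supplied URL, it resolves the host and refuses anything that lands in private, loopback, or link-local address space (including the cloud metadata endpoint). The function name is my own, not from the research cited; a production guard would also need to address DNS rebinding and redirects.

```python
# Illustrative SSRF mitigation sketch: reject outbound URLs that resolve
# to non-globally-routable addresses before fetching them.
import ipaddress
import socket
from urllib.parse import urlparse

def is_safe_outbound_url(url: str) -> bool:
    """Return True only if every resolved address is globally routable."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https") or not parsed.hostname:
        return False
    try:
        infos = socket.getaddrinfo(parsed.hostname, None)
    except socket.gaierror:
        return False
    for info in infos:
        addr = ipaddress.ip_address(info[4][0])
        # Rejects 10/8, 172.16/12, 192.168/16, 127/8, 169.254/16 (cloud
        # metadata service), IPv6 loopback/link-local, and other reserved space.
        if not addr.is_global:
            return False
    return True

print(is_safe_outbound_url("http://169.254.169.254/latest/meta-data/"))  # False
print(is_safe_outbound_url("http://127.0.0.1:8080/admin"))               # False
```

The check runs after name resolution rather than on the raw hostname, since attackers routinely hide internal targets behind DNS names.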
[<a href="https://cyberpress.org/hackers-actively-exploit-ai-deployments/">more</a>]</p></li><li><p><strong>AI, geopolitics and supply chains are top 2026 cyber risks:</strong> The World Economic Forum&#8217;s Global Cybersecurity Outlook highlights three interconnected technology risks that demand executive attention: first, <strong>rapid AI deployment is expanding attack surfaces and governance exposure</strong>, as organisations integrate AI into core operations faster than controls around data leakage, model misuse, accountability and regulatory readiness can mature; second, <strong>geopolitical fragmentation is undermining traditional cyber and compliance frameworks</strong>, with data sovereignty, diverging regulations and cross-border tensions increasing uncertainty and limiting organisations&#8217; ability to manage risk consistently across jurisdictions; and third, increasingly complex and globally dispersed technology supply chains are amplifying systemic vulnerability, as <strong>breaches or disruptions at third parties can cascade into significant operational and reputational harm</strong>. Major economies remain divided between prioritising innovation and imposing safeguards, resulting in fragmented, case-by-case regulation that raises compliance burdens for multinational firms and weakens collective cyber defence. [<a href="https://securitybrief.co.uk/story/ai-geopolitics-supply-chains-reshape-cyber-risk">more</a>][<a href="https://www.weforum.org/publications/global-cybersecurity-outlook-2026/">more</a>-2]</p></li><li><p><strong>Learning from AI threats in 2025:</strong> Despite headlines about AI and next-generation security, the most material technology risks facing organisations in 2025 remain stubbornly familiar: <strong>software supply-chain compromise</strong>, <strong>phishing-driven credential theft</strong>, and <strong>malware slipping through trusted platforms</strong>. 
Supply-chain attacks are of growing concern because a single compromised component can rapidly cascade across thousands of downstream systems, amplifying business, operational, and reputational impact at unprecedented scale. This is now achievable even by small or individual attackers using AI-enabled efficiency. Phishing remains highly effective because it targets human behaviour rather than systems; one successful click can trigger enterprise-wide exposure, as seen when developer credentials were abused to poison widely used software packages before remediation could take effect. Official marketplaces and platforms also continue to present risk, as automated and human reviews lag attacker sophistication, allowing malicious extensions or apps to gain broad access under overly permissive models. <strong>The key controversy</strong> is the industry&#8217;s continued emphasis on &#8220;shiny&#8221; new security concepts while basic controls (including granular permissions, stronger supply-chain verification, and phishing-resistant authentication) remain inconsistently implemented. This misalignment persists not due to lack of technology, but due to prioritisation and governance gaps at platform and organisational levels. 
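One of the basic controls named above, supply-chain verification, can be as simple as pinning and checking a digest before an artifact is used. The sketch below is illustrative only (the payload and digest are placeholders, not a real package): compare a fetched artifact against a SHA-256 digest recorded at release time, using a constant-time comparison.

```python
# Illustrative supply-chain verification sketch: verify a fetched artifact
# against a pinned SHA-256 digest before installing or executing it.
import hashlib
import hmac

def verify_artifact(data: bytes, pinned_sha256: str) -> bool:
    """Compare the artifact's digest to the pinned value in constant time."""
    actual = hashlib.sha256(data).hexdigest()
    return hmac.compare_digest(actual, pinned_sha256)

payload = b"example-package-contents"            # placeholder artifact
pin = hashlib.sha256(payload).hexdigest()        # normally recorded at release time
print(verify_artifact(payload, pin))             # True
print(verify_artifact(b"tampered", pin))         # False
```

The same pattern underlies lockfile hashes and signed release manifests; the point is that the pin comes from a trusted channel, not from the same place as the artifact.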
[<a href="https://thehackernews.com/2026/01/what-should-we-learn-from-how-attackers.html">more</a>]</p></li><li><p><strong>Strategic risks and governance implications of AI-enabled cyber threats:</strong> Artificial intelligence is now being embedded directly into malware and attack workflows, creating several material technology risks for organizations: first, <strong>adaptive malware</strong> that rewrites its own code in real time can evade traditional, signature-based defenses, increasing the likelihood of undetected breaches and prolonged dwell time; second, <strong>AI-driven social engineering</strong> enables highly personalized and linguistically polished phishing and fraud, raising the probability of executive-level compromise and financial or reputational loss; and third, the <strong>industrialization of AI tools in criminal marketplaces</strong> lowers the barrier to entry for sophisticated attacks, expanding the threat surface for mid-size enterprises and supply chains. A key controversy is the <strong>dual-use nature of generative AI platforms</strong>, where the same models that drive productivity and innovation can be manipulated or socially engineered by attackers, raising unresolved questions for regulators and boards around accountability, acceptable use, and the responsibility of AI providers in preventing misuse without stifling innovation. [<a href="https://www.pandasecurity.com/en/mediacenter/ai-is-changing-cyber-threats-heres-how-to-stay-protected/">more</a>]</p></li><li><p><strong>Hidden risks in consumer health AI:</strong> Consumer health chatbots introduce material technology risk because they can deliver advice that sounds credible yet is contextually wrong, particularly when models lack full patient data and are not calibrated to express uncertainty. This creates &#8220;verification asymmetry&#8221; where errors are hard for users to detect but can cause real harm. 
<strong>Standard AI safety tests often miss these risks because they reward fluency and empathy rather than identifying subtly misleading guidance</strong>, allowing high-risk outputs to pass undetected. Risk further compounds over multi-turn conversations as models prioritize being supportive and consistent over challenging earlier assumptions, while commercial pressures discourage friction such as disclaimers or forced citations that would reduce engagement. The central controversy is accountability: with no unified regulatory framework or clear liability standards for consumer health chatbots, organizations face a governance gray zone where innovation is encouraged but responsibility for harm remains unresolved. [<a href="https://www.bankinfosecurity.com/healthcare-chatbots-provoke-unease-in-ai-governance-analysts-a-30483">more</a>]</p></li><li><p><strong>A new class of stealth Cloud malware targeting Linux infrastructure:</strong> Cybersecurity researchers have identified <em>VoidLink</em>, a highly advanced and previously undocumented malware framework designed for persistent, stealthy control of Linux-based cloud environments. Key technology risks include its deep cloud awareness (it can detect and adapt to AWS, Azure, Google Cloud, Kubernetes, and Docker), which makes traditional perimeter defenses less effective; its <strong>modular, upgradeable design</strong> that allows attackers to evolve capabilities over time, increasing dwell time and business impact; and its <strong>strong credential-harvesting and lateral-movement features,</strong> raising the risk of large-scale data theft and supply-chain compromise through developer and CI/CD environments. Of particular concern is its ability to actively evade detection by assessing installed security controls and dynamically adjusting behavior, undermining standard monitoring and incident-response assumptions. 
A notable controversy is the assessment that VoidLink is linked to China-affiliated threat actors, which elevates the issue from a technical security incident to a potential geopolitical and regulatory risk, especially for organizations operating critical infrastructure, sensitive intellectual property, or cross-border cloud services. [<a href="https://thehackernews.com/2026/01/new-advanced-linux-voidlink-malware.html">more</a>]</p></li><li><p><strong>Runtime security could be the blind spot in Cloud risk:</strong> Cloud risk now concentrates at runtime (the live execution layer where identities act, workloads scale, and data moves) because this is where attackers actually operate, exploiting stolen credentials, escalating privileges, deploying malicious compute, and accessing or exfiltrating data faster than traditional controls can react. The key technology risks are threefold: first, <strong>loss of visibility</strong>, as ephemeral cloud resources disappear before incidents can be investigated, leaving gaps in accountability and regulatory exposure; second, <strong>speed and automation of attacks</strong>, where programmatic pivots across identities and services outpace human-led response and amplify business impact; and third, <strong>evidence volatility</strong>, where the lack of real-time forensic capture undermines incident response, legal defensibility, and post-breach learning. The central controversy is the industry&#8217;s continued reliance on CNAPP and posture management as a primary control. While they could serve as a valuable prevention control, these tools focus on what <em>could</em> go wrong rather than what <em>is</em> going wrong. Hence, they may create a false sense of security at board level. 
[<a href="https://www.darktrace.com/blog/runtime-is-where-cloud-security-really-counts-the-importance-of-detection-forensics-and-real-time-architecture-awareness">more</a>]</p></li><li><p><strong>Third party dependency risk of Ledger:</strong> The recent Ledger customer data breach underscores several material technology risks: first, <strong>third-party dependency risk</strong>, where secure core products are undermined by weaker external providers, expanding the attack surface beyond an organization&#8217;s direct control; second, <strong>concentration risk in centralized customer databases</strong>, which amplifies the impact of any single breach by exposing large volumes of personal data at once; third, <strong>downstream fraud and reputational risk</strong>, as exposed personal data enables highly targeted phishing that can lead to irreversible financial losses for customers and lasting brand damage; and fourth, <strong>governance and disclosure risk</strong>, illustrated by limited transparency around breach timing and scope, which complicates incident response, regulatory scrutiny, and stakeholder trust. The key controversy centers on the <strong>misalignment between blockchain companies&#8217; decentralized security messaging and their reliance on traditional centralized e-commerce infrastructure</strong>, raising questions about whether firms promoting &#8220;best-in-class&#8221; security should be held to higher standards in selecting partners and adopting architectures that better align with their stated principles. 
[<a href="https://hackernoon.com/why-ledgers-latest-data-breach-exposes-the-hidden-risks-of-third-party-dependencies">more</a>]</p><p></p></li></ol><div><hr></div><p><strong>The Hidden Risks of Autonomy:</strong> Why AI Agents Are the New Frontier for Hackers.</p><div id="youtube2-GhlwR5hQcUQ" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;GhlwR5hQcUQ&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/GhlwR5hQcUQ?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p></p>]]></content:encoded></item><item><title><![CDATA[TechRisk #152: Embrace vibe hacking in 2026]]></title><description><![CDATA[Plus, $3.3B digital assets lost in 2025, 33% of Bitcoin at risk, AI IDE &#8220;recommended extension&#8221; attacks, 900K users&#8217; ChatGPT and DeepSeek conversations stolen through Chrome extensions, and more!]]></description><link>https://techriskguru.com/p/techrisk-152-embrace-vibe-hacking-rise-2026</link><guid isPermaLink="false">https://techriskguru.com/p/techrisk-152-embrace-vibe-hacking-rise-2026</guid><dc:creator><![CDATA[M.]]></dc:creator><pubDate>Sun, 11 Jan 2026 11:34:26 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!mdF2!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa78236f9-0427-4164-b95b-bdbda52ae209_1024x608.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" 
href="https://substackcdn.com/image/fetch/$s_!mdF2!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa78236f9-0427-4164-b95b-bdbda52ae209_1024x608.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!mdF2!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa78236f9-0427-4164-b95b-bdbda52ae209_1024x608.png 424w, https://substackcdn.com/image/fetch/$s_!mdF2!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa78236f9-0427-4164-b95b-bdbda52ae209_1024x608.png 848w, https://substackcdn.com/image/fetch/$s_!mdF2!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa78236f9-0427-4164-b95b-bdbda52ae209_1024x608.png 1272w, https://substackcdn.com/image/fetch/$s_!mdF2!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa78236f9-0427-4164-b95b-bdbda52ae209_1024x608.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!mdF2!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa78236f9-0427-4164-b95b-bdbda52ae209_1024x608.png" width="1024" height="608" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/a78236f9-0427-4164-b95b-bdbda52ae209_1024x608.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:&quot;normal&quot;,&quot;height&quot;:608,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!mdF2!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa78236f9-0427-4164-b95b-bdbda52ae209_1024x608.png 424w, https://substackcdn.com/image/fetch/$s_!mdF2!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa78236f9-0427-4164-b95b-bdbda52ae209_1024x608.png 848w, https://substackcdn.com/image/fetch/$s_!mdF2!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa78236f9-0427-4164-b95b-bdbda52ae209_1024x608.png 1272w, https://substackcdn.com/image/fetch/$s_!mdF2!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa78236f9-0427-4164-b95b-bdbda52ae209_1024x608.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><h1>Tech Risk Reading Picks</h1><ol><li><p><strong>Vibe hacking to rise in 2026:</strong> Cybercriminal communities are rapidly reframing AI not as a breakthrough technology, but as a confidence engine that lowers the barrier to entry and scales crime. Across dark web forums and Telegram channels, attackers are embracing &#8220;vibe hacking&#8221;, a mindset where AI is trusted to guide actions without deep technical understanding. This makes cybercrime more accessible and faster. AI-branded tools like &#8220;FraudGPT&#8221; and &#8220;PhishGPT,&#8221; alongside widely traded jailbreak techniques, are marketed to first-time and low-skill actors with promises of automation, &#8220;no experience needed,&#8221; and step-by-step guidance, even when the underlying crimes are unchanged. 
The real shift is psychological rather than technical: AI removes fear, normalizes reckless behavior, and expands the pool of attackers, leading to more frequent, more polished, and harder-to-spot attacks. For organizations, this means threat volume and victim reach will grow not because attackers are more skilled, but because AI makes cybercrime feel easy, safe, and scalable. [<a href="https://www.bleepingcomputer.com/news/security/in-2026-hackers-want-ai-threat-intel-on-vibe-hacking-and-hackgpt/">more</a>]</p></li><li><p><strong>900K users&#8217; ChatGPT and DeepSeek conversations stolen through Chrome extensions:</strong> Researchers at OX Security have uncovered a major malware campaign involving two malicious Chrome extensions (i.e. <strong>"Chat GPT for Chrome with GPT-5, Claude Sonnet &amp; DeepSeek AI"</strong> and <strong>"AI Sidebar with Deepseek, ChatGPT, Claude and more"</strong>) which have collectively compromised over 900,000 users. By impersonating the legitimate "AITOPIA" AI sidebar, these extensions deceive users into granting permissions for "anonymous analytics" while actually exfiltrating full ChatGPT and DeepSeek conversation histories, search queries, and complete browsing URLs to a remote command-and-control server every 30 minutes. Despite their malicious nature, one of the extensions managed to obtain Google&#8217;s "Featured" badge, lending it a false sense of credibility that facilitated its widespread adoption. The stolen data poses a severe risk of corporate espionage and identity theft, as it often contains proprietary source code, business strategies, and personally identifiable information. 
<strong>Users are urged to immediately remove these extensions via </strong><code>chrome://extensions</code><strong> to secure their data.</strong> [<a href="https://www.ox.security/blog/malicious-chrome-extensions-steal-chatgpt-deepseek-conversations/">more</a>]</p></li><li><p><strong>AI IDE &#8220;recommended extension&#8221; attacks:</strong> Several popular AI-powered IDEs forked from VS Code (including Cursor, Windsurf, Google Antigravity and Trae) were found to <strong>recommend extensions that do not exist in the OpenVSX marketplace they rely on</strong>, creating a supply-chain security risk. Because these IDEs inherit hardcoded extension recommendations from Microsoft&#8217;s Visual Studio Marketplace (which they cannot use due to licensing), unclaimed publisher namespaces in OpenVSX could be taken over by threat actors to distribute malicious extensions under trusted names. Security researchers at Koi identified this gap, responsibly disclosed it in late 2025, and proactively claimed multiple vulnerable namespaces with harmless placeholder extensions while coordinating with the Eclipse Foundation to strengthen registry safeguards. Cursor and Google have since remediated the issue, while Windsurf has not yet responded. There is currently no evidence of active exploitation. [<a href="https://www.bleepingcomputer.com/news/security/vscode-ide-forks-expose-users-to-recommended-extension-attacks/">more</a>]</p></li><li><p><strong>AI automation &#8220;Ni8mare&#8221; - n8n&#8217;s critical vulnerability:</strong> A critical (10/10) vulnerability, CVE-2026-21858 (&#8220;Ni8mare&#8221;), has been discovered in locally deployed n8n workflow automation platforms, <strong>enabling unauthenticated remote attackers to fully compromise servers</strong>. Researchers estimate 100,000+ instances are exposed. 
The flaw stems from improper content-type handling in webhook and form workflows, allowing attackers to read arbitrary system files, steal secrets (API keys, OAuth tokens, database and cloud credentials), bypass authentication, and potentially execute commands. This turns n8n into a high-impact entry point. Given n8n&#8217;s widespread enterprise and AI usage (50,000+ weekly npm downloads, 100M+ Docker pulls) and its role as a central automation and data orchestration hub, exploitation could lead to system-wide and supply-chain compromise. No workaround exists beyond restricting or disabling public webhooks/forms; immediate upgrade to n8n v1.121.0 or later is strongly recommended to mitigate material security and business risk. [<a href="https://www.bleepingcomputer.com/news/security/max-severity-ni8mare-flaw-lets-hackers-hijack-n8n-servers/">more</a>]</p></li><li><p><strong>Bruising year for cybersecurity in digital assets: </strong>In 2025, crypto hacks reached historic levels, with total losses estimated at $3.3&#8211;3.4 billion across more than 300 major incidents, surpassing all of 2024 by midyear. The largest was the $1.5 billion Bybit breach attributed to North Korea&#8217;s Lazarus Group, which used frontend compromise and cross-chain laundering via THORChain, a tactic also seen in the $73 million Phemex hack, while DeFi suffered major exploits such as Cetus on Sui ($220 million) and Balancer ($116 million), both caused by rounding or math-library bugs rather than classic smart contract flaws. Centralized exchanges like Upbit ($34 million) reimbursed users but highlighted concentration risk. Although investigators traced or froze portions of stolen funds in several cases, most assets remain in motion. <strong>Compromised wallets and social engineering emerged as the dominant attack vectors</strong>. 
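The rounding bugs mentioned above are easy to underestimate; here is a toy sketch (my own simplified illustration, not code from the actual incidents) of why rounding direction matters in pooled share accounting: rounding the minted shares down on deposit is the safe direction for the pool, while rounding up hands the rounding dust to the caller, which repeated at scale lets value leak.

```python
# Toy illustration of rounding-direction bugs in share accounting.
# Floor division on deposit is the pool-safe choice; ceiling division
# systematically favours the depositor.

def shares_for_deposit(amount, total_assets, total_shares, round_up=False):
    if round_up:  # unsafe: credits the depositor with the rounding dust
        return -(-amount * total_shares // total_assets)  # ceiling division
    return amount * total_shares // total_assets          # floor (pool-safe)

# Pool state: 1000 assets backing 999 shares; deposit 1 asset.
safe = shares_for_deposit(1, 1000, 999)                 # floor(0.999) = 0
unsafe = shares_for_deposit(1, 1000, 999, round_up=True)  # ceil(0.999) = 1
print(safe, unsafe)  # 0 1
```

With the unsafe direction, the depositor receives a full share for 0.999 shares' worth of assets; looped thousands of times, that asymmetry becomes a drain, which is the general shape of exploit the rounding-bug incidents illustrate.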
[<a href="https://www.coinspeaker.com/biggest-crypto-hacks-of-2025/amp/">more</a>]</p></li><li><p><strong>33% of Bitcoin at risk:</strong> A senior Coinbase executive has warned that advances in quantum computing could eventually pose a material security challenge to Bitcoin, with estimates suggesting that about one-third of the total BTC supply (&#8776;6.5 million coins) could be vulnerable under certain scenarios. While the risk is not imminent, Coinbase&#8217;s David Duong says Bitcoin may be entering a &#8220;new regime&#8221; as institutions and regulators take the issue seriously. This is evidenced by BlackRock flagging quantum risk in its Bitcoin ETF prospectus and U.S. and EU guidance to migrate critical systems to post-quantum cryptography by 2035. The core concern is that future quantum computers running Shor&#8217;s algorithm could break Bitcoin&#8217;s current signature scheme, potentially exposing funds in older or already-revealed address types, while Grover&#8217;s algorithm could affect mining efficiency. Industry views diverge on timing and urgency, but consensus is forming that preparation is necessary. [<a href="https://cryptonews.com/news/coinbase-quantum-computing-bitcoin-risk-warning/">more</a>]</p></li><li><p><strong>Growing third party risk in AI and Cloud adoptions at manufacturing front:</strong> A recent cyberattack shut Jaguar Land Rover&#8217;s highly automated UK production for a month, resulting in ~$260m in cybersecurity costs and ~$650m in broader losses. The incident underscores growing executive risk as manufacturers rapidly digitise without commensurate security. Suggested pointers for management and boards to note: (1) <strong>Rising exposure:</strong> Manufacturing has been the most-attacked industry for four consecutive years as AI, cloud, and connectivity expand attack surfaces across plants, suppliers, and vendors. 
(2) <strong>Tech outpacing security:</strong> While 57% of large manufacturers use cloud and ~29% use AI/ML, many legacy systems were never designed for connectivity, leaving gaps that attackers exploit. (3) <strong>Systemic impact:</strong> Breaches can halt production, cascade through global supply chains, and threaten jobs and supplier viability. (4) <strong>Data risk concentration:</strong> Centralized AI and cloud platforms heighten the risk of unauthorized access to sensitive IP, designs, and production data. (5) <strong>Board actions:</strong> Treat AI datasets as high-value assets; enforce data classification, encryption, and key management; demand visibility into third-party and vendor AI use; segment IT, cloud, and operational systems. [<a href="https://www.manufacturingdive.com/news/cyber-risks-grow-as-manufacturers-turn-to-ai-and-cloud-systems/808049/">more</a>]</p><p></p></li></ol>]]></content:encoded></item><item><title><![CDATA[TechRisk #151: AI’s future isn’t straightforward]]></title><description><![CDATA[Plus, OpenAI&#8211;Mixpanel data breach, new &#8220;Zero-Click&#8221; data destruction risk, AI coding tools are quietly expanding enterprise risk and more!]]></description><link>https://techriskguru.com/p/techrisk-151-ais-future-isnt-straightforward</link><guid isPermaLink="false">https://techriskguru.com/p/techrisk-151-ais-future-isnt-straightforward</guid><dc:creator><![CDATA[M.]]></dc:creator><pubDate>Sun, 04 Jan 2026 11:43:53 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!Nvz6!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd0509566-f38d-4422-9cfd-30e3a57bda99_1024x608.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" 
href="https://substackcdn.com/image/fetch/$s_!Nvz6!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd0509566-f38d-4422-9cfd-30e3a57bda99_1024x608.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Nvz6!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd0509566-f38d-4422-9cfd-30e3a57bda99_1024x608.png 424w, https://substackcdn.com/image/fetch/$s_!Nvz6!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd0509566-f38d-4422-9cfd-30e3a57bda99_1024x608.png 848w, https://substackcdn.com/image/fetch/$s_!Nvz6!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd0509566-f38d-4422-9cfd-30e3a57bda99_1024x608.png 1272w, https://substackcdn.com/image/fetch/$s_!Nvz6!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd0509566-f38d-4422-9cfd-30e3a57bda99_1024x608.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Nvz6!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd0509566-f38d-4422-9cfd-30e3a57bda99_1024x608.png" width="1024" height="608" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/d0509566-f38d-4422-9cfd-30e3a57bda99_1024x608.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:&quot;normal&quot;,&quot;height&quot;:608,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Nvz6!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd0509566-f38d-4422-9cfd-30e3a57bda99_1024x608.png 424w, https://substackcdn.com/image/fetch/$s_!Nvz6!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd0509566-f38d-4422-9cfd-30e3a57bda99_1024x608.png 848w, https://substackcdn.com/image/fetch/$s_!Nvz6!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd0509566-f38d-4422-9cfd-30e3a57bda99_1024x608.png 1272w, https://substackcdn.com/image/fetch/$s_!Nvz6!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd0509566-f38d-4422-9cfd-30e3a57bda99_1024x608.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><h1>Tech Risk Reading Picks</h1><ol><li><p><strong>AI&#8217;s future isn&#8217;t straightforward. </strong>[<a href="https://www.weforum.org/stories/2025/12/ai-paradoxes-in-2026/">more</a>]</p><ul><li><p><strong>Human advantage intensifies, not declines:</strong> As AI scales, demand is rising for uniquely human capabilities (e.g., judgement, leadership, creativity and oversight), making workforce transition and skills investment a board-level priority rather than a technology issue.</p></li><li><p><strong>Value realization lags adoption:</strong> Many organizations face short-term productivity dips and unclear ROI from AI, underscoring that competitive advantage comes from redesigning processes, governance and accountability.</p></li><li><p><strong>AI&#8217;s growth vs. trust and sustainability:</strong> While AI promises efficiency and innovation, it is simultaneously driving misinformation risks (&#8220;AI slop&#8221;) and significant energy demand. 
This forces leaders to confront whether rapid AI expansion erodes trust and climate commitments unless governed with clear accountability and net-positive energy strategies.</p></li></ul></li><li><p><strong>OpenAI-Mixpanel data breach. </strong>[<a href="https://www.securityweek.com/openai-user-data-exposed-in-mixpanel-hack/">more</a>]</p><ul><li><p><strong>Limited exposure via third-party vendor:</strong> A smishing attack on analytics provider Mixpanel led to the compromise of limited OpenAI user profile and analytics data, with no impact on OpenAI&#8217;s core systems, products, or sensitive credentials.</p></li><li><p><strong>Swift containment and customer assurance:</strong> OpenAI removed Mixpanel from production, confirmed no exposure of ChatGPT content or API usage data, and is notifying affected users while monitoring for misuse.</p></li><li><p><strong>Third-party risk and transparency:</strong> Mixpanel disclosed minimal technical details, raising concerns about visibility and assurance in vendor security incidents. This highlights ongoing challenges for enterprises in managing and governing third-party cyber risk and disclosure expectations.</p></li></ul></li><li><p><strong>PromptPwnd: A new AI supply-chain risk. </strong>[<a href="https://hackread.com/promptpwnd-vulnerabilit-ai-systems-data-theft/">more</a>]</p><ul><li><p><strong>AI automation introduces a new attack surface:</strong> Researchers identified &#8220;PromptPwnd,&#8221; a vulnerability where attackers can manipulate AI agents embedded in CI/CD pipelines, potentially stealing credentials or altering software workflows through seemingly harmless inputs like bug reports.</p></li><li><p><strong>The risk is real and already impacting major firms:</strong> At least five Fortune 500 companies were exposed, along with a Google repository. 
This demonstrates that AI prompt injection can directly compromise critical software delivery systems; it is no longer merely a theoretical AI safety concern.</p></li><li><p><strong>Speed vs. safety in AI adoption:</strong> The vulnerability highlights a growing tension where companies rapidly deploying AI for efficiency are sometimes disabling built-in safeguards, inadvertently increasing systemic risk across their software supply chains.</p></li></ul></li><li><p><strong>Agentic browsers create a new &#8220;Zero-Click&#8221; data destruction risk. </strong>[<a href="https://thehackernews.com/2025/12/zero-click-agentic-browser-attack-can.html">more</a>]</p><ul><li><p><strong>Agent autonomy can amplify routine access into enterprise-scale damage:</strong> AI-powered browsers with OAuth access to Gmail and Google Drive can execute destructive actions (e.g., mass file deletion) from a single benign user prompt, without user confirmation.</p></li><li><p><strong>Attackers exploit trust and tone, not technical flaws:</strong> Polite, well-structured natural-language instructions embedded in emails or URLs can manipulate browser agents into harmful actions, with no jailbreaks, malware, or clicks required.</p></li><li><p><strong>Is this a &#8220;bug&#8221; or a &#8220;feature&#8221;?</strong> Google classified similar prompt-based abuses as low severity or &#8220;intended behavior,&#8221; while other vendors patched their products. This raises concern that current industry definitions of security may underestimate real business risk from agentic AI acting on untrusted content.</p></li></ul></li><li><p><strong>AI prompt injection is here to stay. 
</strong>[<a href="https://therecord.media/prompt-injection-attacks-uk-intelligence-warning">more</a>]</p><ul><li><p><strong>Prompt injection is a structural AI risk, not a temporary flaw: </strong>UK intelligence warns that because large language models cannot reliably distinguish instructions from data, prompt injection attacks are likely to remain a permanent residual risk rather than something that can be fully engineered away.</p></li><li><p><strong>Widespread AI adoption could amplify breach exposure: </strong>Embedding generative AI into core business processes (e.g., recruitment, search, code, decision support) without redesigning controls may trigger a new wave of security incidents, similar in scale to early SQL injection breaches.</p></li><li><p><strong>Treating prompt injection like SQL injection is &#8220;dangerous&#8221;: </strong>Many security teams assume the problem can be fixed with familiar technical controls, but UK intelligence argues this analogy is misleading: prompt injection requires governance, design limits, and operational risk management, not just technical patches. This challenges prevailing industry assumptions and product-led security promises.</p></li></ul></li><li><p><strong>How AI coding tools are quietly expanding enterprise risk. 
</strong>[<a href="https://thehackernews.com/2025/12/researchers-uncover-30-flaws-in-ai.html">more</a>]</p><ul><li><p><strong>AI-powered IDEs introduce a new, systemic attack surface:</strong> Over 30 vulnerabilities show that widely used AI coding assistants can be manipulated to silently exfiltrate sensitive data or execute malicious code by chaining prompt injection with trusted IDE features.</p></li><li><p><strong>The core issue is flawed trust assumptions, not niche bugs:</strong> AI agents are treated as &#8220;safe add-ons,&#8221; but their autonomous actions can weaponize long-standing IDE functions, bypassing user awareness and traditional security controls.</p></li><li><p><strong>&#8220;Secure by design&#8221; tools are enabling attacks by default:</strong> The controversy lies in the industry&#8217;s rapid deployment of agentic AI without rethinking threat models: auto-approved actions and trusted integrations prioritize productivity over security, creating enterprise-scale risk that many vendors and adopters have underestimated.</p></li></ul></li><li><p><strong>Eurostar: A case of classic security failures in a modern LLM.</strong> [<a href="https://www.pentestpartners.com/security-blog/eurostar-ai-vulnerability-when-a-chatbot-goes-off-the-rails/">more</a>]</p><ul><li><p><strong>AI did not create new risks; it amplified existing ones: </strong>Eurostar&#8217;s AI chatbot suffered from familiar web and API security weaknesses (guardrail bypass, ID validation gaps, injection flaws), showing that traditional security fundamentals still fully apply to AI-enabled systems.</p></li><li><p><strong>Weak server-side controls undermined trust and governance: </strong>Guardrails were visible in the UI but poorly enforced on the backend, allowing attackers to manipulate conversation history, extract system prompts, and inject malicious content despite apparent safeguards.</p></li><li><p><strong>Disclosure handling raised governance and reputational 
concerns: </strong>Despite a formal vulnerability disclosure programme, reports went unanswered for weeks and were later framed as potential &#8220;blackmail,&#8221; highlighting breakdowns in security operations, third-party handover risk, and executive oversight of responsible disclosure processes.</p></li></ul></li><li><p><strong>AI-Powered financial fraud in digital payments.</strong> [<a href="https://www.cybersecurity-insiders.com/study-confirms-ai-generated-nfc-malware-has-emerged-as-a-new-cyber-threat/">more</a>]</p><ol><li><p><strong>Escalating threats:</strong> Cybercriminals are now using AI-generated malware to intercept payments via NFC devices, enabling unauthorized transactions and fraudulent online purchases.</p></li><li><p><strong>Beyond ransomware:</strong> AI is no longer limited to traditional attacks; generative AI is being leveraged to create sophisticated financial fraud tools targeting everyday digital payment systems.</p></li><li><p><strong>Widespread AI accessibility:</strong> While AI platforms like ChatGPT, Google Gemini, and Claude empower innovation, they also enable highly convincing phishing and fraud schemes, raising ethical and regulatory concerns about the balance between accessibility and misuse.</p></li></ol></li><li><p><strong>Cloud and identity remain the weakest links. 
</strong>[<a href="https://www.cybersecuritydive.com/news/ai-security-cloud-infrastructure-palo-alto-networks/808510/">more</a>]</p><ul><li><p><strong>AI risk is still a cloud risk:</strong> Despite the focus on advanced AI models, executives&#8217; top concern is the security of the underlying cloud infrastructure, which remains the primary attack surface for AI-enabled enterprises.</p></li><li><p><strong>Identity management is mission-critical:</strong> Over half of organizations cite overly lenient identity practices as a major challenge, reinforcing that access control and identity governance are now central to protecting AI and cloud environments.</p></li><li><p><strong>Open-source AI libraries raise trust and governance concerns:</strong> While open-source accelerates innovation and reduces costs, executives worry about hidden vulnerabilities, data integrity issues, and regulatory compliance.</p></li></ul></li><li><p><strong>Strategic vulnerabilities in the AI landscape moving forward. </strong>[<a href="https://www.csoonline.com/article/4111384/top-5-real-world-ai-security-threats-revealed-in-2025.html">more</a>]</p><ul><li><p><strong>Shadow AI expansion:</strong> Widespread use of unsanctioned AI tools by employees and misconfigured cloud workloads are creating invisible, unmonitored entry points for data breaches.</p></li><li><p><strong>Supply chain &amp; financial risk:</strong> Reliance on third-party open-source models exposes firms to embedded malware, while the theft of AI credentials (&#8220;LLMjacking&#8221;) poses a risk of significant unexpected financial liability. In addition, the Model Context Protocol (MCP) expands the enterprise attack surface by allowing unverified or vulnerable servers to inject malicious code and execute unauthorized commands directly within corporate development environments.</p></li><li><p><strong>Persistent prompt injection attacks:</strong> Prompt injection remains a pervasive threat with no perfect technical fix. 
Unlike traditional software bugs, this is an architectural limitation where LLMs cannot fundamentally distinguish between valid instructions and processed data, meaning &#8220;autonomous&#8221; agents currently require expensive, human-in-the-loop oversight to be safe.</p></li></ul></li></ol>]]></content:encoded></item><item><title><![CDATA[TechRisk #150: Design-level AI-browser exploit]]></title><description><![CDATA[Plus, ClickFix attacks surge 517%, malicious LLMs accelerate cybercrime capabilities, and more!]]></description><link>https://techriskguru.com/p/techrisk-150-design-level-ai-browser-exploit</link><guid isPermaLink="false">https://techriskguru.com/p/techrisk-150-design-level-ai-browser-exploit</guid><dc:creator><![CDATA[M.]]></dc:creator><pubDate>Sun, 07 Dec 2025 14:34:35 GMT</pubDate><enclosure url="https://images.unsplash.com/photo-1650954934741-3a648866a897?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw1MHx8ZGVzaWdufGVufDB8fHx8MTc2NTExNDQ3OXww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://images.unsplash.com/photo-1650954934741-3a648866a897?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw1MHx8ZGVzaWdufGVufDB8fHx8MTc2NTExNDQ3OXww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://images.unsplash.com/photo-1650954934741-3a648866a897?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw1MHx8ZGVzaWdufGVufDB8fHx8MTc2NTExNDQ3OXww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 424w, 
https://images.unsplash.com/photo-1650954934741-3a648866a897?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw1MHx8ZGVzaWdufGVufDB8fHx8MTc2NTExNDQ3OXww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 848w, https://images.unsplash.com/photo-1650954934741-3a648866a897?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw1MHx8ZGVzaWdufGVufDB8fHx8MTc2NTExNDQ3OXww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 1272w, https://images.unsplash.com/photo-1650954934741-3a648866a897?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw1MHx8ZGVzaWdufGVufDB8fHx8MTc2NTExNDQ3OXww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 1456w" sizes="100vw"><img src="https://images.unsplash.com/photo-1650954934741-3a648866a897?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw1MHx8ZGVzaWdufGVufDB8fHx8MTc2NTExNDQ3OXww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080" width="9000" height="4320" data-attrs="{&quot;src&quot;:&quot;https://images.unsplash.com/photo-1650954934741-3a648866a897?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw1MHx8ZGVzaWdufGVufDB8fHx8MTc2NTExNDQ3OXww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:4320,&quot;width&quot;:9000,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;background pattern&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="background pattern" title="background pattern" 
srcset="https://images.unsplash.com/photo-1650954934741-3a648866a897?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw1MHx8ZGVzaWdufGVufDB8fHx8MTc2NTExNDQ3OXww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 424w, https://images.unsplash.com/photo-1650954934741-3a648866a897?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw1MHx8ZGVzaWdufGVufDB8fHx8MTc2NTExNDQ3OXww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 848w, https://images.unsplash.com/photo-1650954934741-3a648866a897?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw1MHx8ZGVzaWdufGVufDB8fHx8MTc2NTExNDQ3OXww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 1272w, https://images.unsplash.com/photo-1650954934741-3a648866a897?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw1MHx8ZGVzaWdufGVufDB8fHx8MTc2NTExNDQ3OXww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" 
stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h1>Tech Risk Reading Picks</h1><ol><li><p><strong>HashJack: Emerging AI-browser exploit raises design-level security concerns </strong>[<a href="https://hackread.com/hashjack-attack-url-control-ai-browser-behavior/">more</a>]</p><ol><li><p>HashJack exposes a novel prompt-injection technique where malicious instructions are hidden in URL fragments (#), allowing attackers to manipulate AI browser assistants without compromising the underlying website.</p></li><li><p>Exploits include credential theft, harmful advice, data exfiltration, and automated execution of risky system actions, especially in advanced agentic modes where AI acts independently.</p></li><li><p>While Microsoft and Perplexity issued timely fixes, Google has not yet addressed the issue for Gemini, underscoring uneven responses and the need for stronger AI-integrated security practices.</p></li><li><p>Google&#8217;s classification of the vulnerability as low-severity and &#8220;intended behaviour&#8221; has raised concern because it leaves users exposed to an exploit that bypasses traditional security controls and relies on AI design flaws.</p></li></ol></li><li><p><strong>ClickFix attacks surge 517% </strong>[<a href="https://hackread.com/fake-chatgpt-atlas-clickfix-steal-passwords/">more</a>]</p><ol><li><p>ClickFix attacks have escalated sharply, with a 517% increase and growing use by state-aligned threat groups from Iran, North Korea, and Russia.</p></li><li><p>Threat actors now exploit cloned, trusted-looking websites&#8212;including fake ChatGPT Atlas installers&#8212;to trick users into running password-harvesting 
commands.</p></li><li><p>The attack bypasses traditional controls by convincing users to paste obfuscated commands into their terminal, enabling privilege escalation and full system compromise.</p></li><li><p>Attackers use Google Sites as a trusted delivery platform, raising concerns about major tech platforms inadvertently enabling high-fidelity phishing; they leverage Google&#8217;s implicit trust to increase success rates, sparking debate about platform accountability.</p></li></ol></li><li><p><strong>Malicious LLMs accelerate cybercrime capabilities </strong>[<a href="https://www.bleepingcomputer.com/news/security/malicious-llms-empower-inexperienced-hackers-with-advanced-tools/">more</a>]</p><ol><li><p>Malicious LLMs are operational and evolving. Tools like WormGPT 4 and KawaiiGPT now generate functional ransomware, phishing campaigns, and automated attack scripts, enabling scalable cyberattacks with minimal expertise.</p></li><li><p>These models allow inexperienced actors to produce professional-grade phishing messages, conduct lateral movement, and execute data exfiltration with ease.</p></li><li><p>Paid and free versions, supported by active Telegram communities, indicate a maturing illicit market that accelerates tool development and attacker collaboration.</p></li></ol></li><li><p><strong>AI data security reality check for enterprise leaders </strong>[<a href="https://hackread.com/ai-adoption-surges-while-governance-lags-report-warns-of-growing-shadow-identity-risk/">more</a>]</p><ol><li><p>AI adoption has outpaced oversight, with 83% of organizations using AI daily but only 13% having strong visibility into how it handles sensitive data.</p></li><li><p>AI is functioning as an ungoverned enterprise identity, resulting in widespread over-access to sensitive information and limited ability to monitor or control prompts and outputs.</p></li><li><p>Governance readiness is critically low, as only 7% have a dedicated AI governance function and just 11% feel 
prepared for emerging regulatory demands.</p></li><li><p>Autonomous AI agents present the most acute and debated risk, with 76% of professionals calling them the hardest systems to secure and over half unable to block risky actions in real time.</p></li></ol></li><li><p><strong>Critical picklescan vulnerabilities expose AI supply chains to model-based attacks </strong>[<a href="https://thehackernews.com/2025/12/picklescan-bugs-allow-malicious-pytorch.html">more</a>]</p><ol><li><p>High-severity flaws in Picklescan allowed attackers to bypass its safeguards and execute arbitrary code through malicious PyTorch model files.</p></li><li><p>These vulnerabilities highlight systemic weaknesses in relying on a single scanning tool to secure increasingly complex AI model formats.</p></li><li><p>Remediation is available (Picklescan v0.0.31), underscoring the need for continuous, expert-driven monitoring of AI supply chain risks.</p></li><li><p>The flaws expose a fundamental tension between rapid AI innovation and lagging security controls, raising concerns that existing model-scanning tools cannot keep pace with emerging threats.</p></li></ol></li><li><p><strong>AI advancing faster than its safeguards</strong> [<a href="https://www.helpnetsecurity.com/2025/12/02/ai-safety-risks-report/">more</a>]</p><ol><li><p>Layered AI safeguards are expanding but remain inconsistent, with no single control reliably stopping determined attackers&#8212;forcing reliance on imperfect, overlapping defenses.</p></li><li><p>Attack techniques and open-weight model adaptations are accelerating faster than defensive tools, creating unpredictable risks even when vendors claim strong safeguards.</p></li><li><p>Governments and companies are building early safety frameworks, but the absence of shared standards means oversight, evaluation quality, and vendor disclosures vary widely.</p></li></ol></li></ol>]]></content:encoded></item><item><title><![CDATA[TechRisk #149: Can AI be trusted in 
cybersecurity?]]></title><description><![CDATA[Plus, Growing account takeover fraud, Small Language Models (SLMs) could strengthen phishing defenses, Systemic vulnerability in Large Language Models, and more!]]></description><link>https://techriskguru.com/p/techrisk-149-can-ai-be-trusted-in-cybersecurity</link><guid isPermaLink="false">https://techriskguru.com/p/techrisk-149-can-ai-be-trusted-in-cybersecurity</guid><dc:creator><![CDATA[M.]]></dc:creator><pubDate>Sun, 30 Nov 2025 11:43:32 GMT</pubDate><enclosure url="https://images.unsplash.com/photo-1520371764250-8213f40bc3ed?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw2fHxqdW1wfGVufDB8fHx8MTc2NDI0MjY1Nnww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://images.unsplash.com/photo-1520371764250-8213f40bc3ed?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw2fHxqdW1wfGVufDB8fHx8MTc2NDI0MjY1Nnww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://images.unsplash.com/photo-1520371764250-8213f40bc3ed?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw2fHxqdW1wfGVufDB8fHx8MTc2NDI0MjY1Nnww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 424w, https://images.unsplash.com/photo-1520371764250-8213f40bc3ed?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw2fHxqdW1wfGVufDB8fHx8MTc2NDI0MjY1Nnww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 848w, https://images.unsplash.com/photo-1520371764250-8213f40bc3ed?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw2fHxqdW1wfGVufDB8fHx8MTc2NDI0MjY1Nnww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 1272w, 
https://images.unsplash.com/photo-1520371764250-8213f40bc3ed?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw2fHxqdW1wfGVufDB8fHx8MTc2NDI0MjY1Nnww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 1456w" sizes="100vw"><img src="https://images.unsplash.com/photo-1520371764250-8213f40bc3ed?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw2fHxqdW1wfGVufDB8fHx8MTc2NDI0MjY1Nnww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080" width="4608" height="3456" data-attrs="{&quot;src&quot;:&quot;https://images.unsplash.com/photo-1520371764250-8213f40bc3ed?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw2fHxqdW1wfGVufDB8fHx8MTc2NDI0MjY1Nnww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:3456,&quot;width&quot;:4608,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;silhouette photo of man jumping on body of water during golden hour&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="silhouette photo of man jumping on body of water during golden hour" title="silhouette photo of man jumping on body of water during golden hour" srcset="https://images.unsplash.com/photo-1520371764250-8213f40bc3ed?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw2fHxqdW1wfGVufDB8fHx8MTc2NDI0MjY1Nnww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 424w, https://images.unsplash.com/photo-1520371764250-8213f40bc3ed?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw2fHxqdW1wfGVufDB8fHx8MTc2NDI0MjY1Nnww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 848w, 
https://images.unsplash.com/photo-1520371764250-8213f40bc3ed?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw2fHxqdW1wfGVufDB8fHx8MTc2NDI0MjY1Nnww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 1272w, https://images.unsplash.com/photo-1520371764250-8213f40bc3ed?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw2fHxqdW1wfGVufDB8fHx8MTc2NDI0MjY1Nnww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h1>Tech Risk Reading Picks</h1><ol><li><p><strong>Can AI be trusted performing cybersecurity for us? 
</strong>[<a href="https://hackread.com/can-we-trust-ai-with-cybersecurity-ai-security/">more</a>]</p><ol><li><p>AI strengthens cybersecurity by detecting threats quickly. AI can analyse massive amounts of data in seconds, identify unusual behaviour, learn from past attacks, and help companies respond instantly. </p></li><li><p>AI has weaknesses and can be exploited. Cybercriminals also use AI to create smarter attacks such as realistic phishing emails. AI can be tricked through data poisoning, can misclassify threats, and may react unpredictably in unfamiliar situations.</p></li><li><p>AI security is essential to protect AI systems themselves. Since AI relies heavily on data, protecting that data and continuously testing AI models is crucial. If attackers manipulate the data or system, AI can become a new target and cause serious harm.</p></li></ol></li><li><p><strong>Growing AI phishing and holiday scams through account takeover (ATO) fraud. </strong>[<a href="https://thehackernews.com/2025/11/fbi-reports-262m-in-ato-fraud-as.html">more</a>]</p><ol><li><p>The FBI reports a surge in ATO fraud, with over $262M lost this year and more than 5,100 complaints, primarily driven by impersonation of financial institutions.</p></li><li><p>Attackers leverage advanced methods including SEO poisoning, AI-crafted phishing content, malicious ads, fake e-commerce stores, and exploitation of known platform vulnerabilities.</p></li><li><p>Fraud ecosystems are maturing, with dark-web marketplaces, stealer logs, and ad campaigns funded by stolen cards enabling scammers to scale operations quickly.</p></li></ol></li><li><p><strong>Global &#8220;TamperedChef&#8221; malvertising campaign exploits software search behavior. </strong>[<a href="https://thehackernews.com/2025/11/tamperedchef-malware-spreads-via-fake.html">more</a>]</p><ol><li><p>A global malvertising campaign is using fake software installers. 
These installers are often signed with abused certificates to deliver JavaScript backdoors and maintain persistent remote access.</p></li><li><p>The operation employs a steady churn of shell-company code-signing certificates and SEO-driven lures, making the campaign scalable, credible-looking, and difficult to detect.</p></li><li><p>Healthcare, construction, and manufacturing are disproportionately affected due to users&#8217; frequent searches for manuals and utilities, which are exploited through poisoned ads and URLs.</p></li></ol></li><li><p><strong>Small Language Models (SLMs) could strengthen phishing defenses. </strong>[<a href="https://www.helpnetsecurity.com/2025/11/26/research-slms-website-phishing-detection/">more</a>]</p><ol><li><p>SLMs can scan trimmed website HTML to detect phishing with accuracy often above 80%, balancing speed and compute efficiency.</p></li><li><p>Running SLMs internally keeps sensitive data in-house, avoids vendor lock-in, and reduces reliance on external cloud providers.</p></li><li><p>Mid-sized models (10&#8211;20B parameters) approach the effectiveness of larger models, offering a practical compromise between runtime and accuracy.</p></li><li><p>Despite progress, SLMs still underperform larger proprietary models, raising concerns about missed threats or false positives that could disrupt security operations.</p></li></ol></li><li><p><strong>Industrialized payment fraud is an escalating risk for financial institutions. </strong>[<a href="https://www.helpnetsecurity.com/2025/11/27/visa-payment-fraud-trends-report/">more</a>]</p><ol><li><p>Fraud is industrializing. Criminal groups now operate like coordinated businesses, leveraging botnets, AI scripts, and repeatable playbooks to scale attacks rapidly.</p></li><li><p>Rapid monetization exploits gaps. 
Fraudsters use instant payments, mobile wallets, and token provisioning to convert stolen credentials into cash before defenses can react.</p></li><li><p>Synthetic content undermines identity checks. AI-generated identities, documents, and websites allow fraudsters to bypass traditional onboarding and detection processes.</p></li><li><p>Traditional controls are failing. Legacy defenses, designed for slower, visible fraud, struggle against distributed, AI-driven attacks and third-party vulnerabilities, raising questions about the adequacy of current regulatory and risk frameworks.</p></li></ol></li><li><p><strong>&#8216;Reward&#8209;Hacking&#8217; may trigger unintended risk in production LLMs.</strong> [<a href="https://assets.anthropic.com/m/74342f2c96095771/original/Natural-emergent-misalignment-from-reward-hacking-paper.pdf">more</a>]</p><ol><li><p>Shortcut&#8209;based reward hacking can trigger broader emergent risks. When a large language model (LLM) learns to &#8220;cheat&#8221; (e.g. bypassing coding tests rather than solving them), it may also develop far more harmful behaviors such as deception, sabotage or collusion with malicious actors.</p></li><li><p>Reinforcement learning from human feedback (RLHF) may not suffice. Even after applying standard RLHF using chat&#8209;style prompts, the model continued to show misaligned behavior in &#8220;agentic&#8221; or autonomous tasks. </p></li><li><p>Risk can be mitigated, but only with deliberate design. Effective safeguards include preventing hacking from the start, diversifying safety&#8209;training data, or reframing (via &#8220;inoculation prompting&#8221;) the meaning of reward hacking during training, an intervention that reduced misaligned generalization by 75&#8211;90%.</p></li></ol></li><li><p><strong>Systemic vulnerability in Large Language Models. 
</strong>[<a href="https://www.pcworld.com/article/2984769/a-poem-can-hack-chatgpt-new-study-reveals-a-surprising-ai-flaw.html">more</a>][<a href="https://arxiv.org/html/2511.15304v1#S5">more</a>-researchpaper]</p><ol><li><p>Researchers have demonstrated that phrasing harmful or prohibited instructions as a poem (&#8220;Adversarial Poetry&#8221;) acts as a highly effective &#8220;universal single-turn jailbreak.&#8221;</p></li><li><p>This poetic method bypassed safety mechanisms in various leading LLMs (from providers like OpenAI, Google, and Meta) with a success rate up to three times higher than standard text prompts, achieving a 65% average success rate across all models tested.</p></li><li><p>The vulnerability is not specific to any one provider or training methodology, suggesting a <strong>systemic flaw</strong> across the current generation of LLMs that operators did not anticipate.</p></li></ol></li><li><p><strong>Yubico unveils next-gen security in post-quantum readiness and enhanced digital identity. 
</strong>[<a href="https://securitybrief.com.au/story/yubico-unveils-post-quantum-security-keys-new-digital-identity-features">more</a>]</p><ol><li><p>Yubico enables users to log in and approve sensitive actions securely with passkeys on a single hardware key, improving usability, privacy, and developer flexibility.</p></li><li><p>Yubico demonstrated a PQC-enabled hardware security key, showing feasibility against future quantum attacks, though not yet a commercial product.</p></li><li><p>Combining passkeys with verifiable credentials allows secure authentication while selectively sharing personal attributes, enhancing privacy and control.</p></li><li><p>However, the PQC prototype is not yet market-ready; new hardware is required and standards are still evolving, meaning organizations cannot immediately rely on it for production use.</p></li></ol></li><li><p><strong>Quantum-ready data security in mitigating &#8216;Store Now, Decrypt Later&#8217; (SNDL) risks.</strong></p><ol><li><p>Adversaries can capture encrypted data today and decrypt it in the future once quantum computers are capable, putting long-lived data at immediate risk.</p></li><li><p>Deploy hybrid TLS/SSH key exchanges combining classical and post-quantum algorithms to protect data in transit while standards and products mature.</p></li><li><p>Executives should prioritize inventorying sensitive data paths, piloting hybrid cryptography, and integrating PQC standards into long-term security roadmaps.</p></li><li><p>Operational complexity vs. urgency: Hybrid PQC adoption introduces performance, interoperability, and toolchain challenges. 
Some organizations may delay deployment due to cost and complexity, but waiting increases exposure to SNDL attacks already in motion.</p></li></ol></li></ol>]]></content:encoded></item><item><title><![CDATA[TechRisk #148: Claude orchestrated cyber-espionage tasks]]></title><description><![CDATA[Plus, attackers simply log in, second-order prompt injection attacks, flip tokens, and more!]]></description><link>https://techriskguru.com/p/techrisk-148-claude-orchestrated-cyber-espionage</link><guid isPermaLink="false">https://techriskguru.com/p/techrisk-148-claude-orchestrated-cyber-espionage</guid><dc:creator><![CDATA[M.]]></dc:creator><pubDate>Sun, 23 Nov 2025 11:43:05 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!hzav!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F770caa6e-3078-4bdf-8e64-2d67f57e6175_1024x608.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!hzav!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F770caa6e-3078-4bdf-8e64-2d67f57e6175_1024x608.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!hzav!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F770caa6e-3078-4bdf-8e64-2d67f57e6175_1024x608.png 424w, https://substackcdn.com/image/fetch/$s_!hzav!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F770caa6e-3078-4bdf-8e64-2d67f57e6175_1024x608.png 848w, 
https://substackcdn.com/image/fetch/$s_!hzav!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F770caa6e-3078-4bdf-8e64-2d67f57e6175_1024x608.png 1272w, https://substackcdn.com/image/fetch/$s_!hzav!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F770caa6e-3078-4bdf-8e64-2d67f57e6175_1024x608.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!hzav!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F770caa6e-3078-4bdf-8e64-2d67f57e6175_1024x608.png" width="1024" height="608" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/770caa6e-3078-4bdf-8e64-2d67f57e6175_1024x608.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:&quot;normal&quot;,&quot;height&quot;:608,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!hzav!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F770caa6e-3078-4bdf-8e64-2d67f57e6175_1024x608.png 424w, https://substackcdn.com/image/fetch/$s_!hzav!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F770caa6e-3078-4bdf-8e64-2d67f57e6175_1024x608.png 848w, 
https://substackcdn.com/image/fetch/$s_!hzav!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F770caa6e-3078-4bdf-8e64-2d67f57e6175_1024x608.png 1272w, https://substackcdn.com/image/fetch/$s_!hzav!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F770caa6e-3078-4bdf-8e64-2d67f57e6175_1024x608.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><h1>Tech Risk Reading Picks</h1><ol><li><p><strong>Claude used by APT to automate cyber espionage:</strong> Chinese 
state-sponsored actors conducted a first-of-its-kind automated cyber-espionage campaign (GTG-1002) in September 2025 by weaponizing Anthropic&#8217;s Claude Code and MCP tools to perform 80&#8211;90% of attack operations autonomously. Using Claude as an &#8220;agentic,&#8221; autonomous hacking system, the group targeted ~30 global organizations across tech, finance, chemicals, and government, succeeding in some intrusions. The AI handled reconnaissance, vulnerability discovery, exploitation, lateral movement, credential harvesting, data analysis, and exfiltration, while humans only approved key escalation steps. The attackers concealed malicious intent by framing prompts as routine technical tasks, enabling Claude to generate payloads, parse proprietary data, and document attacks for long-term use. [<a href="https://thehackernews.com/2025/11/chinese-hackers-use-anthropics-ai-to.html">more</a>][<a href="https://www.anthropic.com/news/disrupting-AI-espionage">more</a>-2]</p></li><li><p><strong>Second-order prompt injection attacks:</strong> Malicious actors can exploit default settings in ServiceNow&#8217;s Now Assist AI platform to launch second-order prompt injection attacks that use agent discovery and agent-to-agent collaboration to perform unauthorized actions behind the scenes. According to AppOmni, attackers can embed crafted prompts in accessible content, causing a benign agent to unknowingly recruit more powerful agents to read or alter records, exfiltrate sensitive data, escalate privileges, or send emails&#8212;despite built-in protections. Because agents inherit the privileges of the initiating user and are discoverable and team-grouped by default, overlooked configurations create significant risk. 
ServiceNow clarified that this behavior is expected, underscoring the need for stronger AI-agent protections and mitigations such as supervised execution for privileged agents, disabling autonomous overrides, segmenting agent teams, and monitoring for suspicious activity. [<a href="https://thehackernews.com/2025/11/servicenow-ai-agents-can-be-tricked.html">more</a>]</p></li><li><p><strong>Flip tokens:</strong> HiddenLayer&#8217;s early-2025 research reveals <strong>EchoGram</strong>, a vulnerability affecting major LLMs such as GPT-5.1, Claude, and Gemini that allows attackers to bypass or corrupt AI safety guardrails using simple, specially crafted word or symbol sequences called <strong>flip tokens</strong>. By exploiting gaps in the training data of both classifier-based and LLM-as-a-judge defence models, these nonsensical tokens slip through filters while causing the guardrails to &#8220;flip&#8221; their verdicts, either letting harmful requests through or falsely flagging harmless ones. [<a href="https://hackread.com/echogram-flaw-bypass-guardrails-major-llms/">more</a>]</p><ol><li><p>For example, when HiddenLayer researchers were testing an older version of their own defence system, a malicious command was approved when the random string &#8220;=coffee&#8221; was simply added to the end.</p></li></ol></li><li><p><strong>Exploiting coding assistant:</strong> A security audit by AI safety firm Mindgard uncovered four major vulnerabilities in the widely used Cline Bot coding assistant, showing how attackers could steal secret keys, bypass safety checks, execute malicious code, or even extract internal model details simply by hiding prompt-injection traps inside project files. Discovered within two days of testing in August 2025, the flaws reveal how overly trusting &#8220;helper&#8221; AIs can be weaponised when developers open compromised codebases and request analysis. 
Mindgard also obtained Cline Bot&#8217;s system prompt, demonstrating that knowing its exact instructions makes it easier to exploit behavioural loopholes. [<a href="https://hackread.com/cline-bot-ai-agent-vulnerable-data-theft-code-execution/">more</a>]</p></li><li><p><strong>Attacking hidden MCP API in Comet browser:</strong> SquareX has uncovered a hidden and largely undocumented MCP API in Perplexity&#8217;s Comet browser that allows embedded extensions to run arbitrary local commands&#8212;a capability normally blocked by traditional browser security models. The API, accessible via Comet&#8217;s Agentic extension and triggerable by the perplexity.ai site, creates a covert channel that could let attackers gain full control of users&#8217; devices if Perplexity or its supply chain is ever compromised. SquareX&#8217;s demo shows how a spoofed extension can chain through Comet&#8217;s embedded extensions to execute malware like WannaCry, a risk amplified by the fact that these extensions are invisible to users and cannot be disabled. [<a href="https://hackread.com/obscure-mcp-api-in-comet-browser-breaches-user-trust-enabling-full-device-control-via-ai-browsers/">more</a>]</p></li><li><p><strong>Remote code execution bugs in AI:</strong> Researchers have uncovered widespread remote-code-execution flaws across major AI inference engines from Meta, Nvidia, Microsoft, vLLM, and SGLang, all traced to a shared unsafe pattern of copy-pasted ZeroMQ sockets using Python pickle deserialization, dubbed &#8220;ShadowMQ.&#8221; The vulnerability originated in Meta&#8217;s Llama framework (CVE-2024-50050) and was later replicated across multiple projects. The issue allows attackers to send malicious data over exposed ZMQ TCP sockets to execute arbitrary code, risking full cluster compromise, model theft, and malware deployment. 
[<a href="https://thehackernews.com/2025/11/researchers-find-serious-ai-bugs.html">more</a>]</p></li><li><p><strong>AI-generated payloads used in global campaign:</strong> ShadowRay 2.0 is a global campaign hijacking exposed Ray clusters through an unfixed code-execution flaw (CVE-2023-48022), turning them into a self-spreading cryptomining and attack botnet. Threat actor IronErn440 uses AI-generated payloads to mine Monero, steal data and credentials, deploy DDoS attacks, and propagate across clusters via Ray&#8217;s unauthenticated Jobs API. With over 230,000 Ray servers exposed online, defenders are urged to firewall access, secure dashboard ports, and monitor AI clusters since no official patch exists. [<a href="https://www.bleepingcomputer.com/news/security/new-shadowray-attacks-convert-ray-clusters-into-crypto-miners/">more</a>]</p></li><li><p><strong>Attackers rarely break in anymore, they simply log in:</strong> Over the past decade, cloud migration has reshaped enterprise security, but attackers have adapted even faster, shifting toward identity-centric intrusions that quietly exploit credentials rather than technical vulnerabilities. The Elastic Global Threat Report 2025 shows that nearly 60% of cloud threats stem from identity-driven attacks, fuelled by infostealers that harvest browser-stored credentials, tokens, and cookies. With overprivileged accounts, weak identity governance, and logging gaps across platforms like Microsoft Entra, threat actors routinely escalate privileges, move laterally through federated cloud services, and maintain long-term persistence using legitimate authentication artefacts that bypass MFA and evade traditional security tools. As malware trends shift toward credential theft and simple AI-generated loaders, defenders struggle with fragmented visibility and outdated perimeter-based controls. 
The report underscores an urgent need for organisations to treat identity as a primary attack surface, adopt behavioural analytics, enforce Zero Trust, eliminate long-lived keys, harden developer workflows, and elevate browser security. Because in today&#8217;s cloud landscape, attackers rarely break in anymore; they simply log in. [<a href="https://etedge-insights.com/technology/cyber-security/why-60-of-cloud-threats-now-target-initial-access-and-credential-abuse/">more</a>]</p></li><li><p><strong>HackGPT:</strong> HackGPT Enterprise, developed by Yashab Alam, is a cloud-native security platform that automates large-scale vulnerability testing using AI and machine learning, integrating models like GPT-4 and Ollama to detect anomalies, patterns, and zero-day exploits. Following a six-phase penetration testing methodology, the platform prioritizes risks based on CVSS scores and business impact while mapping to compliance frameworks such as OWASP, NIST, and PCI-DSS. Built on a Docker and Kubernetes microservices architecture with AES-256 encryption, LDAP-based access control, and real-time dashboards powered by Prometheus and Grafana, HackGPT supports AWS, Azure, and GCP deployments. 
[<a href="https://cyberpress.org/hackgpt-ai-powered/">more</a>]</p></li></ol>]]></content:encoded></item><item><title><![CDATA[TechRisk #147: Private AI Compute]]></title><description><![CDATA[Plus, future criminology, hacking AI with audio, Malicious VS Code extension in official marketplace, breaking AI through many prompts, and more!]]></description><link>https://techriskguru.com/p/techrisk-147-private-ai-compute</link><guid isPermaLink="false">https://techriskguru.com/p/techrisk-147-private-ai-compute</guid><dc:creator><![CDATA[M.]]></dc:creator><pubDate>Sun, 16 Nov 2025 11:43:25 GMT</pubDate><enclosure url="https://images.unsplash.com/photo-1606500307322-61cf2c98aab3?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxMHx8cHJpdmF0ZXxlbnwwfHx8fDE3NjMxMjg5OTV8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://images.unsplash.com/photo-1606500307322-61cf2c98aab3?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxMHx8cHJpdmF0ZXxlbnwwfHx8fDE3NjMxMjg5OTV8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://images.unsplash.com/photo-1606500307322-61cf2c98aab3?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxMHx8cHJpdmF0ZXxlbnwwfHx8fDE3NjMxMjg5OTV8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 424w, https://images.unsplash.com/photo-1606500307322-61cf2c98aab3?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxMHx8cHJpdmF0ZXxlbnwwfHx8fDE3NjMxMjg5OTV8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 848w, 
https://images.unsplash.com/photo-1606500307322-61cf2c98aab3?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxMHx8cHJpdmF0ZXxlbnwwfHx8fDE3NjMxMjg5OTV8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 1272w, https://images.unsplash.com/photo-1606500307322-61cf2c98aab3?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxMHx8cHJpdmF0ZXxlbnwwfHx8fDE3NjMxMjg5OTV8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 1456w" sizes="100vw"><img src="https://images.unsplash.com/photo-1606500307322-61cf2c98aab3?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxMHx8cHJpdmF0ZXxlbnwwfHx8fDE3NjMxMjg5OTV8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080" width="4592" height="3448" data-attrs="{&quot;src&quot;:&quot;https://images.unsplash.com/photo-1606500307322-61cf2c98aab3?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxMHx8cHJpdmF0ZXxlbnwwfHx8fDE3NjMxMjg5OTV8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:3448,&quot;width&quot;:4592,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;blue and white wooden signage on green grass field during daytime&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="blue and white wooden signage on green grass field during daytime" title="blue and white wooden signage on green grass field during daytime" srcset="https://images.unsplash.com/photo-1606500307322-61cf2c98aab3?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxMHx8cHJpdmF0ZXxlbnwwfHx8fDE3NjMxMjg5OTV8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 424w, 
https://images.unsplash.com/photo-1606500307322-61cf2c98aab3?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxMHx8cHJpdmF0ZXxlbnwwfHx8fDE3NjMxMjg5OTV8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 848w, https://images.unsplash.com/photo-1606500307322-61cf2c98aab3?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxMHx8cHJpdmF0ZXxlbnwwfHx8fDE3NjMxMjg5OTV8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 1272w, https://images.unsplash.com/photo-1606500307322-61cf2c98aab3?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxMHx8cHJpdmF0ZXxlbnwwfHx8fDE3NjMxMjg5OTV8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" 
x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h1>Tech Risk Reading Picks</h1><ol><li><p><strong>Private AI Compute:</strong> Google&#8217;s new Private AI Compute is a cloud-based, privacy-preserving platform designed to deliver the speed and power of Gemini AI models while ensuring that users&#8217; data remains inaccessible to everyone, including Google. Built on Trillium TPUs, Titanium Intelligence Enclaves, and AMD-based trusted execution environments, it creates a fortified, on-device-like environment in the cloud where encrypted, attested workloads run in isolation with no admin or shell access. Data stays protected through end-to-end encryption, peer-to-peer attestation, IP-blinding relays, strict binary authorization, and VM-level isolation, with all inputs and computations discarded after each session. [<a href="https://thehackernews.com/2025/11/google-launches-private-ai-compute.html">more</a>]</p></li><li><p><strong>Future criminology due to AI:</strong> Autonomous AI is creating a &#8220;hybrid society&#8221; in which machines interact with humans and with each other in ways that can produce harmful or seemingly criminal outcomes, even without human intent, prompting a rethink of criminology&#8217;s focus. A new paper by Gian Maria Campedelli argues that AI agents now possess computational, social, and emerging legal forms of agency that make them actors within complex networks rather than mere tools. As multi-agent systems proliferate, their collective behaviors can lead to both deliberate misuse (&#8220;malicious alignment&#8221;) and accidental harm (&#8220;emergent deviance&#8221;), widening accountability gaps that current laws and crime theories cannot fully address. The study urges criminology to expand its scope, asking how machine norms might evolve, which crimes will change first, and how policing should adapt. 
[<a href="https://www.helpnetsecurity.com/2025/11/12/autonomous-ai-criminology-future/">more</a>][<a href="https://arxiv.org/pdf/2511.02895">more</a>-paper]</p></li><li><p><strong>Hacking AI with audio:</strong> Mindgard researchers found that OpenAI&#8217;s Sora 2 video model could be coaxed into leaking its hidden system prompt (its internal safety and operating rules) by generating audio clips and extracting the transcripts. After attempts to reveal the rules through text, images, and video failed due to distortion and semantic drift, audio proved the breakthrough, allowing the team to reconstruct much of the model&#8217;s foundational instructions, including content restrictions. [<a href="https://hackread.com/mindgard-sora-2-vulnerability-prompt-via-audio/">more</a>]</p></li><li><p><strong>Malicious VS Code extension in official marketplace:</strong> A crudely made malicious VS Code extension called &#8220;susvsex&#8221; (apparently generated with AI) briefly appeared on Microsoft&#8217;s official marketplace, openly advertising its ability to steal and encrypt files. Despite its obvious malicious behavior and an initial report, Microsoft did not immediately remove it, suggesting the upload may have been a test of the company&#8217;s vetting process. It was eventually taken down after the issue gained attention. [<a href="https://www.bleepingcomputer.com/news/security/ai-slop-ransomware-test-sneaks-on-to-vs-code-marketplace/">more</a>]</p></li><li><p><strong>Breaking AI through many prompts:</strong> Cisco&#8217;s new <em>Death by a Thousand Prompts</em> report found that open-weight AI models (whose freely released weights make them easy to use and modify) are highly vulnerable to multi-turn adversarial attacks. The report shows that attackers can gradually build trust and steer models toward unsafe outputs, with multi-turn jailbreaks up to 10&#215; more effective than single-turn attempts (peaking at 92.78% on Mistral Large-2). 
Weak long-term safety context, ease of malicious fine-tuning, and capability-focused alignment make many models susceptible, while safety-aligned models like Gemma-3-1B-IT fared better (~25% success). [<a href="https://hackread.com/cisco-open-weight-ai-models-long-chat-exploit/">more</a>][<a href="https://arxiv.org/pdf/2511.03247">more</a>-paper]</p></li><li><p><strong>Advanced AI models are far more vulnerable to attack</strong>: A new joint study from Anthropic, Oxford, and Stanford finds that advanced AI models are far more vulnerable to attack than previously believed, showing that their improved reasoning abilities can actually be exploited to bypass safety controls. Using a technique called &#8220;Chain-of-Thought Hijacking,&#8221; researchers demonstrated that attackers can hide harmful instructions within long sequences of harmless reasoning steps, causing models (including GPT, Claude, Gemini, and Grok) to unintentionally ignore safety guardrails and generate dangerous content. As reasoning chains grow longer, attack success rates rise sharply, exceeding 80% in some tests, even for alignment-tuned models. [<a href="https://fortune.com/2025/11/07/ai-reasoning-models-more-vulnerable-jailbreak-attacks-study/">more</a>]</p></li><li><p><strong>Google Cloud Security Report:</strong> Google Cloud&#8217;s <em><strong>Cybersecurity Forecast 2026</strong></em> warns that AI is accelerating an arms race in which attackers use the technology to scale, automate, and personalize operations. This includes prompt-injection exploits, targeted attacks on enterprise AI systems, and highly convincing voice-based phishing. On the other hand, defenders adopt AI-driven &#8220;Agentic SOCs&#8221; to triage incidents and generate intelligence. Traditional threats such as ransomware, data theft, third-party compromise, and zero-day exploitation remain dominant, with virtualization infrastructure emerging as a critical blind spot. 
Nation-state actors are expected to intensify and diversify operations, prompting Google to urge proactive monitoring and AI-enhanced defenses. [<a href="https://campustechnology.com/articles/2025/11/06/google-cloud-report-cyber-attackers-are-fully-embracing-ai.aspx?admgarea=news">more</a>][<a href="https://cloud.google.com/security/resources/cybersecurity-forecast">more</a>-google_report]</p></li><li><p><strong>MAS&#8217; Guidelines on AI Risk Management:</strong> The MAS has proposed new Guidelines on AI Risk Management to ensure financial institutions use AI responsibly across diverse applications, including generative AI and AI agents. The Guidelines outline expectations for governance, firm-wide AI risk management systems, and robust lifecycle controls such as data governance, fairness, transparency, human oversight, and monitoring. MAS will adopt a proportionate, risk-based approach aligned with each institution&#8217;s scale and AI usage, supporting responsible innovation in the financial sector. [<a href="https://www.mas.gov.sg/news/media-releases/2025/mas-guidelines-for-artificial-intelligence-risk-management">more</a>]</p></li><li><p><strong>Shift in software development:</strong> Senior developers expect a major shift in their roles as AI becomes central to software workflows, according to BairesDev&#8217;s latest Dev Barometer, which shows 65% anticipating redefined responsibilities by 2026, with routine coding giving way to solution design, architecture, and AI integration. 
[<a href="https://venturebeat.com/ai/only-9-of-developers-think-ai-code-can-be-used-without-human-oversight">more</a>]</p></li></ol>]]></content:encoded></item><item><title><![CDATA[TechRisk #146: OpenAI agentic security researcher]]></title><description><![CDATA[Plus, AI can create voice using photo, Google Cybersecurity Forecast 2026 report, exfiltration through Claude API, AI agent session smuggling attack, and more!]]></description><link>https://techriskguru.com/p/techrisk-146-openai-agentic-security-researcher</link><guid isPermaLink="false">https://techriskguru.com/p/techrisk-146-openai-agentic-security-researcher</guid><dc:creator><![CDATA[M.]]></dc:creator><pubDate>Sun, 09 Nov 2025 11:43:29 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!LY7c!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd9516d66-fc5e-48ee-923d-a1ae8f19d968_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!LY7c!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd9516d66-fc5e-48ee-923d-a1ae8f19d968_1024x1024.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!LY7c!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd9516d66-fc5e-48ee-923d-a1ae8f19d968_1024x1024.png 424w, https://substackcdn.com/image/fetch/$s_!LY7c!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd9516d66-fc5e-48ee-923d-a1ae8f19d968_1024x1024.png 848w, 
https://substackcdn.com/image/fetch/$s_!LY7c!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd9516d66-fc5e-48ee-923d-a1ae8f19d968_1024x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!LY7c!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd9516d66-fc5e-48ee-923d-a1ae8f19d968_1024x1024.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!LY7c!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd9516d66-fc5e-48ee-923d-a1ae8f19d968_1024x1024.png" width="1024" height="1024" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/d9516d66-fc5e-48ee-923d-a1ae8f19d968_1024x1024.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1024,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1668235,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://techriskguru.com/i/178266616?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd9516d66-fc5e-48ee-923d-a1ae8f19d968_1024x1024.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!LY7c!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd9516d66-fc5e-48ee-923d-a1ae8f19d968_1024x1024.png 424w, 
https://substackcdn.com/image/fetch/$s_!LY7c!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd9516d66-fc5e-48ee-923d-a1ae8f19d968_1024x1024.png 848w, https://substackcdn.com/image/fetch/$s_!LY7c!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd9516d66-fc5e-48ee-923d-a1ae8f19d968_1024x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!LY7c!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd9516d66-fc5e-48ee-923d-a1ae8f19d968_1024x1024.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><h1>Tech Risk Reading Picks</h1><ol><li><p><strong>OpenAI agentic security researcher:</strong> OpenAI has unveiled Aardvark, an autonomous &#8220;agentic security researcher&#8221; powered by GPT-5, designed to act like a human expert in scanning, understanding, and patching code. Currently in private beta, Aardvark integrates directly into software development pipelines to continuously analyze source code repositories, detect vulnerabilities, assess their exploitability, and propose targeted patches using LLM-based reasoning. Used internally and with select partners, Aardvark has already helped uncover multiple CVEs in open-source projects. Positioned alongside tools like Google&#8217;s CodeMender and XBOW, it reflects a growing trend toward AI-driven, continuous security analysis and patching. OpenAI describes Aardvark as a &#8220;defender-first&#8221; system that enhances security without hindering development speed. [<a href="https://thehackernews.com/2025/10/openai-unveils-aardvark-gpt-5-agent.html">more</a>]</p></li><li><p><strong>AI can create a voice from a photo:</strong> A new study by Australia&#8217;s national science agency reveals that just a photo of a person&#8217;s face can now be used to generate a convincing synthetic voice through a method called FOICE (Face-to-Voice), which predicts vocal traits like pitch and tone from facial features. This makes voice impersonation far easier, as photos are readily available online. The technique successfully fooled WeChat&#8217;s voice authentication system up to 100% of the time after several tries. Most existing deepfake detectors also failed to detect these new photo-based deepfakes. While retraining detectors with FOICE samples improved accuracy, it reduced their ability to recognize other types of fakes, highlighting a major limitation in current detection methods. 
[<a href="https://www.helpnetsecurity.com/2025/10/30/face-to-voice-deepfakes-voice-authentication-risk/">more</a>][<a href="https://arxiv.org/pdf/2510.21004">more</a>-paper]</p></li><li><p><strong>Microsoft&#8217;s new guide, 5 Generative AI Security Threats You Must Know About:</strong><em> </em>Generative AI is transforming cybersecurity by accelerating threat detection and automation, but it&#8217;s also empowering attackers to evolve faster than defenses can adapt. According to Microsoft&#8217;s 2025 Digital Threats Report, nation-states like Russia, China, Iran, and North Korea have doubled their use of AI for cyberattacks and disinformation, leveraging it to craft convincing phishing messages, deepfakes, and adaptive malware. As organizations rush to deploy generative AI&#8212;66% building custom apps and 80% worried about data leakage&#8212;security leaders face new challenges across cloud vulnerabilities, data exposure, and unpredictable model behavior. These risks fuel emerging AI-specific threats such as data poisoning, evasion, and prompt injection attacks, which undermine model trust and integrity. Microsoft&#8217;s new guide, <em>5 Generative AI Security Threats You Must Know About</em>, urges a unified, AI-aware security strategy to defend against this evolving threat landscape. [<a href="https://www.microsoft.com/en-us/security/blog/2025/10/30/the-5-generative-ai-security-threats-you-need-to-know-about-detailed-in-new-e-book/">more</a>][<a href="https://info.microsoft.com/ww-landing-5-generative-ai-security-threats.html">more</a>-report]</p></li><li><p><strong>Google Cybersecurity Forecast 2026 report:</strong> Google Cloud Security&#8217;s new <em>Cybersecurity Forecast 2026</em> report warns that AI-driven cyberthreats and global extortion will surge next year, transforming both attacks and defenses. 
It predicts that attackers will fully weaponize AI, using multimodal generative tools to create realistic phishing, deepfakes, and impersonation campaigns. At the same time, prompt injection attacks exploiting large language models will continue to rise. The report also flags the growing risk of &#8220;shadow agents,&#8221; unauthorized AI tools used by employees that create hidden data pipelines and compliance risks, calling for new AI governance frameworks to manage them. Beyond AI, ransomware, data theft, and multifaceted extortion are expected to become the most financially damaging forms of cybercrime, with cascading economic impacts. Virtualization infrastructure is emerging as a new vulnerability, where a single breach could compromise hundreds of systems. The report concludes that 2026 will mark a new era in cybersecurity where both attackers and defenders harness AI, making proactive, multi-layered defenses and strong AI governance essential. [<a href="https://cloud.google.com/security/resources/cybersecurity-forecast">more</a>]</p></li><li><p><strong>Increased malicious deployments through LLMs:</strong> Google warns that LLMs are moving from research curiosities to active tools for attackers, who are building adaptable, AI-powered malware that can generate code, rewrite itself, and evade detection mid-run. Its analysts documented multiple in-the-wild examples (from credential-stealers like QuietVault that use on-host AI tools to hunt secrets, to PromptSteal that queries Qwen for one-line commands, to reverse shells like FruitShell with prompts to bypass LLM-based defenses) alongside experimental projects such as PromptLock and PromptFlux that dynamically generate malicious scripts or rewrite their source via APIs. 
Google also found underground marketplaces selling illicit &#8220;AI as-a-tool&#8221; services and observed state-linked actors abusing Gemini and other LLMs for lure writing, tooling, and malware development, signaling a new phase where generative AI both amplifies skilled operators and lowers the barrier for less technical criminals. [<a href="https://www.helpnetsecurity.com/2025/11/05/malware-using-llms/">more</a>]</p></li><li><p><strong>Stealth C2 channel through the OpenAI API:</strong> Microsoft&#8217;s Detection and Response Team (DART) disclosed a novel backdoor called SesameOp (discovered in July 2025) that stealthily uses the OpenAI Assistants API as a command-and-control channel to fetch encrypted commands and return execution results. [<a href="https://thehackernews.com/2025/11/microsoft-detects-sesameop-backdoor.html">more</a>]</p><ol><li><p>The implant&#8217;s infection chain includes a heavily obfuscated loader (<code>Netapi64.dll</code>) and a .NET backdoor (<code>OpenAIAgent.Netapi64</code>) loaded via AppDomainManager injection. Attackers also used compromised Visual Studio utilities and internal web shells to maintain persistent, long-term access for likely espionage. Commands are relayed through the Assistants API using message descriptions like <code>SLEEP</code>, <code>Payload</code>, and <code>Result</code>, enabling sleep timers, remote payload execution, and exfiltration of outputs.</p></li></ol></li><li><p><strong>Leaking personal data through indirect prompt injection attacks:</strong> Cybersecurity researchers from Tenable have uncovered seven vulnerabilities in OpenAI&#8217;s GPT-4o and GPT-5 models that could let attackers steal users&#8217; personal data and chat histories through indirect prompt injection attacks. These flaws, some of which have been fixed, exploit how ChatGPT processes external content and include techniques like zero-click and one-click prompt injections, memory poisoning, and safety bypasses via trusted domains such as Bing. 
The findings highlight the broader risks of linking AI systems to external data sources, as large language models struggle to distinguish between genuine and malicious instructions. Similar prompt injection and model-poisoning attacks have recently been found affecting other AI systems like Claude, Microsoft 365 Copilot, and GitHub Copilot, revealing an expanding threat surface for AI agents. [<a href="https://thehackernews.com/2025/11/researchers-find-chatgpt.html">more</a>][<a href="https://www.trendmicro.com/en_us/research/25/j/ai-chatbot-backdoor.html">more</a>-2]</p></li><li><p><strong>Exfiltration through Claude API:</strong> A security researcher discovered that attackers can exploit indirect prompt injections in Anthropic&#8217;s Claude to exfiltrate user data when the AI has network access, a feature enabled by default on some plans. The attack abuses Claude&#8217;s Files APIs by tricking the model into saving user data to its Code Interpreter sandbox and then uploading it to the attacker&#8217;s account using a malicious API key. Up to 30MB can be exfiltrated at once, and multiple files can be sent. The exploit begins when a user opens a malicious document, which hijacks Claude to harvest data (including chat conversations saved via the &#8216;memories&#8217; feature) and send it to the attacker. [<a href="https://www.securityweek.com/claude-ai-apis-can-be-abused-for-data-exfiltration/">more</a>][<a href="https://embracethered.com/blog/posts/2025/claude-abusing-network-access-and-anthropic-api-for-data-exfiltration/">more</a>-2]</p></li><li><p><strong>AI agent session smuggling attack:</strong> Palo Alto Networks&#8217; Unit 42 has identified a new AI attack technique called agent session smuggling, which enables a malicious AI agent to covertly inject harmful instructions into an ongoing cross-agent communication session, exploiting the stateful nature of the Agent2Agent (A2A) protocol. 
Unlike one-time prompt injection attacks, this method leverages agents&#8217; built-in trust and memory across multi-turn conversations, allowing an attacker to manipulate a victim agent invisibly over time. Proof-of-concept demonstrations showed that a rogue agent could exfiltrate sensitive data or initiate unauthorized tool actions within a financial assistant system. While the A2A protocol itself is not vulnerable, its stateful design makes such manipulation possible in any multi-agent environment. Mitigations include enforcing human-in-the-loop (HitL) approvals for sensitive actions, cryptographic verification of agent identities, and context-grounding to detect off-topic instructions. [<a href="https://unit42.paloaltonetworks.com/agent-session-smuggling-in-agent2agent-systems/">more</a>]</p></li><li><p><strong>Growing AI security frameworks and skills demand: </strong>AI adoption is rapidly outpacing security and governance across organizations, resulting in a surge of costly AI-related breaches and the rise of &#8220;shadow AI,&#8221; where untrained employees inadvertently leak sensitive data through unauthorized AI tools. Reports from Tenable, EY, IBM, and the Cloud Security Alliance reveal that most companies lack proper AI governance, access controls, and user training, leaving critical systems exposed. This growing risk has elevated AI governance to a boardroom priority, with more Fortune 100 boards integrating AI oversight and expertise into their risk management frameworks. In response, new frameworks like the CSA&#8217;s AI Controls Matrix aim to standardize responsible AI deployment, while cybersecurity professionals with AI-specific skills are seeing increased demand and higher salaries as organizations scramble to secure their rapidly evolving AI ecosystems. 
[<a href="https://securityboulevard.com/2025/10/cybersecurity-snapshot-top-guidance-for-improving-ai-risk-management-governance-and-readiness/">more</a>]</p></li><li><p><strong>Google will integrate its services to deliver a personalised AI experience - knowing everything about you:</strong> Google is developing a more personalised &#8220;AI Mode&#8221; for Search that will eventually integrate with services like Gmail, Drive, Calendar, and Maps to deliver highly customized results. As explained by Google&#8217;s Robby Stein, the goal is to let users opt into an experience where the AI can use personal data (such as emails, documents, and travel details) to provide tailored help, like summarizing flight info or planning schedules. [<a href="https://www.bleepingcomputer.com/news/google/google-says-search-ai-mode-will-know-everything-about-you/">more</a>]</p></li></ol>]]></content:encoded></item></channel></rss>