TechRisk #162: Vibeware is here
Plus, AI security landscape reports, Claudy day vulnerability, AI risk management toolkit for the financial sector and more!
Tech Risk Reading Picks
Rise of vibeware: The threat actor APT36 has transitioned to a high volume production model known as vibeware, which utilizes artificial intelligence to mass produce mediocre but functional malware across various programming languages. This strategy represents a shift from technical sophistication to a distributed denial of detection approach that aims to overwhelm security teams with a constant stream of low fidelity alerts. By deploying polyglot binaries in niche languages like Nim and Zig and leveraging trusted cloud services such as Slack and Google Sheets for command operations, these attackers effectively bypass traditional signature based defenses. This industrialization of cyberattacks is a significant concern because it creates high levels of alert fatigue that can mask more precise manual hacking operations, potentially leading to prolonged undetected access and the theft of strategic intellectual property. [more]
AI security landscape reports:
The 2026 HiddenLayer report signals a critical transition where artificial intelligence has moved from generating content to executing autonomous actions through agentic systems, creating a vast and unmonitored attack surface for the modern enterprise. Leadership must prioritize the risks of agentic AI, as these systems can now browse the web and execute code independently, meaning a single prompt injection can escalate into a full system compromise. The report reveals a significant governance gap where shadow AI has surged to 76% of organizations, and while 91% of companies have increased AI security budgets, over 40% of these firms allocate less than 10% of that spend to actual protection. Strategic concern also lies in the AI supply chain, where 35% of breaches now originate from malware hidden in public model repositories that 93% of businesses still rely on for rapid innovation. Executives should be wary of reasoning and self-improving models that increase the potential “blast radius” of any single exploit, as a compromised model can now autonomously influence downstream business systems at scale. Furthermore, the decentralization of AI into “edge” devices is creating new security blind spots that traditional centralized cloud controls cannot see or manage. [more]
The 2026 RSM Attack Vectors Report reveals that cybercriminals are successfully bypassing traditional defenses by chaining together moderate weaknesses in cloud, identity, and application environments. A critical risk involves the speed of AI-driven attacks, which have compressed compromise timelines from days to mere minutes. This rapid tempo renders manual detection and response processes obsolete. Furthermore, over 80% of identity-related vulnerabilities persist even in environments with multi-factor authentication, while 78% of cloud engagements uncovered high-severity misconfigurations. For leadership, these findings signal that current governance and visibility are not keeping pace with technology adoption. The strategic focus must shift from perfect prevention to automated detection and rapid recovery to contain threats before they escalate into enterprise-wide incidents. [more]
Claudy day vulnerability: The recently disclosed "Claudy Day" vulnerability chain highlights a critical shift in the cyber threat landscape, where attackers leverage AI-specific weaknesses to bypass traditional security controls. By chaining invisible prompt injection, API-based data exfiltration, and open redirects, threat actors could silently steal sensitive corporate data like business strategies and financial plans directly from user conversations. This attack is particularly concerning because it requires no malicious integrations and can be surgically targeted at high-value executives via trusted ad platforms. While the primary injection flaw is now patched, the incident underscores the strategic risk of "agentic" AI behavior where models can autonomously execute actions. [more]
Fraudulent AI browser extensions: A widespread campaign has deployed fraudulent browser extensions to over 20,000 enterprise environments by mimicking popular AI tools. These malicious extensions gain broad permissions to record full chat histories and proprietary source code. This represents a major strategic risk because it transforms employee productivity aids into stealthy tools for corporate espionage. It is concerning that these tools can automatically re-enable data collection even after a user attempts to opt out. The exfiltration of sensitive internal URLs and strategic discussions directly threatens intellectual property and competitive advantages. Strict browser governance must be enforced to prevent long term data leaks and unauthorized access to internal workflows. [more]
OpenClaw’s flaw: The rapid adoption of the OpenClaw autonomous AI agent introduces significant systemic vulnerabilities that could lead to unauthorized endpoint control and catastrophic data exfiltration. Default security weaknesses and privileged system access allow attackers to use indirect prompt injection, where malicious web content tricks the AI into leaking sensitive information or executing unauthorized commands without user interaction. These risks extend beyond data loss to include the potential for permanent deletion of critical records, the installation of malicious software through compromised "skills" repositories, and the total paralysis of core business systems in sectors like finance and energy. [more]
Data exfiltration from Amazon Bedrock, LangSmith, and SGLang: Recent vulnerabilities in Amazon Bedrock, LangSmith, and SGLang highlight a growing systemic risk where the tools used to develop and monitor artificial intelligence inadvertently create backdoors into the enterprise. Researchers found that Amazon Bedrock’s sandboxed code execution environments could be bypassed via DNS queries to exfiltrate sensitive data, while a high-severity flaw in the LangSmith observability platform allowed for account takeovers and the theft of session tokens. [more]
AI zero trust framework: Microsoft has introduced a new zero trust framework for artificial intelligence to address the unique security boundaries created by autonomous agents and complex data lifecycles. Traditional security models often fail to account for the shifting trust lines between users, models, and automated decision-making, which can lead to overprivileged or manipulated agents acting as internal threats. To mitigate these risks, the new guidance emphasizes continuous verification of agent identities and the application of strict least-privilege access to prevent unauthorized data exfiltration or lateral movement within the network. [more]
AI risk management toolkit for the financial sector: The Monetary Authority of Singapore (MAS) has launched a comprehensive AI Risk Management Toolkit through Project MindForge to help financial institutions navigate the complexities of traditional, generative, and emerging agentic AI. This initiative is critical for leadership because it establishes clear accountability for boards and senior management while providing a structured framework to mitigate operational and ethical hazards. Key risk pointers focus on the need for robust oversight, systematic risk materiality assessments, and end-to-end lifecycle controls to prevent AI failures that could damage institutional reputation or stability. By integrating these practices into enterprise risk frameworks, firms can manage the unique transparency and reliability issues of modern AI systems while maintaining regulatory compliance. [more][more-MAS_AIRM_toolkit]
<WhatsApp Channel - follow and stay updated>
