TechRisk #163: AI creates bad code
Plus: the internal threat of compromised AI agents, Gemini-powered AI agents on the dark web, and more!
Tech Risk Reading Picks
AI-generated vulnerable code: Georgia Tech researchers have launched the Vibe Security Radar to track a surging number of verified software vulnerabilities introduced by AI coding tools. Data from March 2026 shows a significant month-over-month increase in AI-linked security flaws, with 35 new CVE entries documented in March compared to only six in January. The research highlights that tools like Anthropic’s Claude Code are frequently linked to these risks, though the true scale is likely five to ten times higher because developers often strip AI metadata before publishing. As "vibe coding" pushes projects directly to production, even teams performing manual code reviews are failing to catch the volume of machine-generated flaws entering the ecosystem. [more]
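To make the category concrete, here is a minimal Python sketch of one flaw class that AI coding assistants are frequently reported to introduce: SQL built by string interpolation. The example is illustrative only and is not drawn from the Georgia Tech dataset.

```python
import sqlite3

# Vulnerable pattern often emitted by AI assistants: user input is
# interpolated directly into the SQL string, enabling injection.
def get_user_bad(conn: sqlite3.Connection, username: str):
    query = f"SELECT id, email FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()  # injectable via `username`

# Safer equivalent: a parameterized query lets the driver escape input.
def get_user_safe(conn: sqlite3.Connection, username: str):
    return conn.execute(
        "SELECT id, email FROM users WHERE name = ?", (username,)
    ).fetchall()
```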
The internal threat of compromised AI agents: The emergence of autonomous AI agents fundamentally shifts the cybersecurity landscape by providing a shortcut through the traditional cyber kill chain. Unlike human attackers, who must laboriously earn access through reconnaissance and lateral movement, a compromised AI agent already possesses broad permissions and legitimate data-sharing workflows across SaaS environments. This "built-in" access allows state-sponsored actors and cybercriminals to conduct espionage at machine speed while blending seamlessly into authorized system activity. Because these agents are designed to move data between platforms like Salesforce, Slack, and Google Workspace, their malicious actions often look like normal automation. Security strategies must therefore evolve from simple perimeter defense to comprehensive visibility and behavioral analysis of the AI identities operating within the enterprise ecosystem. [more]
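As a rough illustration of that behavioral-analysis idea, the Python sketch below baselines which resources each agent identity normally touches and flags any action outside that learned workflow. The event schema (`agent_id`, `resource`, `action`) is hypothetical, not any vendor's API.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class AgentEvent:
    agent_id: str   # identity of the AI agent (hypothetical schema)
    resource: str   # e.g. "salesforce:contacts", "slack:#finance"
    action: str     # e.g. "read", "export"

class AgentBaseline:
    """Tracks which (resource, action) pairs each agent normally performs."""
    def __init__(self):
        self.seen = defaultdict(set)

    def learn(self, event: AgentEvent):
        self.seen[event.agent_id].add((event.resource, event.action))

    def is_anomalous(self, event: AgentEvent) -> bool:
        # Flag any action outside the agent's established workflow.
        return (event.resource, event.action) not in self.seen[event.agent_id]

baseline = AgentBaseline()
baseline.learn(AgentEvent("crm-bot", "salesforce:contacts", "read"))
# A CRM agent suddenly exporting payroll files should stand out:
print(baseline.is_anomalous(AgentEvent("crm-bot", "gdrive:payroll", "export")))  # True
```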
Underground market for premium AI access: Threat actors are increasingly trading compromised and resold premium AI accounts on underground forums and Telegram channels to bypass costs, regional sanctions, and safety restrictions. This trend presents a significant strategic risk to leadership because these accounts often serve as gateways to sensitive corporate data, including proprietary code and internal research, while also empowering attackers to automate sophisticated phishing and social engineering campaigns at scale. [more]
Exploitation of no-code platforms in phishing: Threat actors are bypassing traditional email security by hosting malicious redirect scripts on legitimate no-code development platforms like Bubble. Because these platforms serve content from trusted domains, the links evade automated filters and security blacklists, and the AI-generated, JavaScript-heavy code the services produce is structurally complex enough that neither security tools nor human analysts can easily identify the underlying malicious intent. Once a user clicks the link, they are redirected to a sophisticated spoof of a Microsoft login portal designed to steal credentials and session data. [more]
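On the defensive side, one simple triage step is to flag email links that resolve to no-code hosting domains. The Python sketch below assumes an illustrative, non-exhaustive domain list; a production filter would need a maintained list plus allowlisting for legitimate business use of these platforms.

```python
import re
from urllib.parse import urlparse

# Illustrative set of no-code hosting domains abused in phishing;
# tune and maintain this for your own environment.
NO_CODE_HOSTS = {"bubbleapps.io", "glideapp.io", "softr.app"}

URL_RE = re.compile(r"https?://[^\s\"'<>]+")

def flag_suspicious_links(email_body: str) -> list[str]:
    """Return links in an email body that point at no-code hosting domains."""
    flagged = []
    for url in URL_RE.findall(email_body):
        host = (urlparse(url).hostname or "").lower()
        # Match the platform domain itself or any subdomain of it.
        if any(host == d or host.endswith("." + d) for d in NO_CODE_HOSTS):
            flagged.append(url)
    return flagged

print(flag_suspicious_links("Verify now: https://login-check.bubbleapps.io/portal"))
```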
Navigating the risks of AI-driven development: Black Duck has launched Black Duck Signal, an agentic AI security solution designed to secure software created by AI coding assistants. The launch marks a shift from traditional rule-based scanning to a system of coordinated AI agents that analyze code using human-like reasoning and extensive historical security data. Signal operates continuously within developer environments to identify complex vulnerabilities, such as business logic errors and cross-file dataflow issues, which often evade conventional tools. By prioritizing exploitability and providing automated remediation, the solution aims to maintain high development velocity while establishing governance over the rapidly growing volume of AI-generated production code. [more]
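To show why cross-file dataflow issues evade rule-based scanners, here is a minimal Python sketch. In a real codebase the taint source and the SQL sink would live in separate modules, so a file-local pattern match sees nothing wrong in either piece; the function and table names here are invented for illustration.

```python
import sqlite3

# --- imagine this handler lives in handlers.py ------------------------
# Taint source: an attacker-controlled parameter, forwarded unvalidated.
def handle_request(conn: sqlite3.Connection, params: dict):
    return run_report(conn, params.get("table", "orders"))

# --- and this sink lives in storage.py: only cross-file dataflow
# analysis connects the user input above to the dynamic SQL below.
def run_report(conn: sqlite3.Connection, table: str):
    # SQL identifiers can't be parameterized, so the f-string is
    # injectable whenever `table` ultimately comes from user input.
    return conn.execute(f"SELECT * FROM {table}").fetchall()

# One dataflow-aware fix: validate the identifier against an allowlist.
ALLOWED_TABLES = {"orders", "invoices"}

def run_report_safe(conn: sqlite3.Connection, table: str):
    if table not in ALLOWED_TABLES:
        raise ValueError(f"unknown table: {table}")
    return conn.execute(f"SELECT * FROM {table}").fetchall()
```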
Gemini-powered AI agents on the dark web: Google Threat Intelligence has introduced Gemini-powered AI agents capable of analyzing up to 10 million dark web posts daily with 98 percent accuracy. The service automates the creation of detailed organizational profiles and matches them against real-time threats such as data leaks and initial access broker activity. [more]
Unverified advice from AI agents: Meta recently experienced a high-severity security incident when an internal AI agent provided inaccurate technical advice that led to unauthorized data access for nearly two hours. A software engineer used the agent to resolve an internal query, but the system posted a “hallucinated” response without human approval. Another employee followed these instructions, inadvertently granting engineers access to sensitive user and company data they were not cleared to view. While Meta downplayed the event by citing human error and a lack of data mishandling, the incident mirrors recent “gen-AI” failures at Amazon that caused significant cloud outages. These events highlight a growing trend of autonomous agents bypassing traditional safety checks and executing catastrophic technical changes within enterprise environments. [more]
Technology Risk Pointers
Autonomous Execution and Hallucination: The agent bypassed human-in-the-loop validation by posting unverified technical advice. For leadership, this represents a breakdown in “least privilege” protocols, where AI can influence system architecture without oversight (a minimal approval-gate sketch follows this list).
Prompt-Driven Escalation: Technical staff may over-rely on AI output for complex tasks, leading to a “game of telephone” where errors compound quickly. This creates a systemic vulnerability where a single AI error can trigger a SEV1 security breach.
Internal Governance Gaps: The blame-shifting between human error and system design suggests that current AI disclaimers are insufficient. Executives must recognize that as agents move from “chatting” to “doing,” the surface area for operational and reputational risk expands beyond traditional cybersecurity defenses.
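As referenced in the first pointer above, a minimal human-in-the-loop gate can keep privileged agent actions from executing unreviewed. The Python sketch below is a toy: the `AgentAction` schema and the `approve` callback are hypothetical stand-ins for a real review workflow.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class AgentAction:
    description: str
    privileged: bool  # e.g. changes permissions or touches production data

def execute_with_gate(action: AgentAction, run: Callable[[], None],
                      approve: Callable[[AgentAction], bool]) -> bool:
    """Run non-privileged actions directly; hold privileged ones for a human."""
    if action.privileged and not approve(action):
        print(f"BLOCKED pending review: {action.description}")
        return False
    run()
    return True

# In production, `approve` would page an on-call reviewer; here it denies.
execute_with_gate(
    AgentAction("grant engineers read access to user data", privileged=True),
    run=lambda: print("access granted"),
    approve=lambda a: False,
)
```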
Shifting to an AI CEO and management: Mark Zuckerberg is personally piloting an “AI CEO” agent to streamline executive decision-making and bypass traditional management layers at Meta. This initiative reflects a broader corporate shift toward an AI-native organizational structure in which autonomous agents manage project documentation and internal communications. The company is aggressively flattening its hierarchy, with some managers now overseeing up to 50 contributors, and has made AI adoption a mandatory metric in performance reviews. These experimental shifts coincide with reports of potential workforce reductions of up to 20 percent as the firm prioritizes algorithmic efficiency over human headcount. [more]
Technology Risk Pointers
Knowledge Concentration and Security: Utilizing “CEO agents” and “Second Brains” centralizes vast amounts of sensitive corporate strategy into single AI interfaces. This creates a high-value target for industrial espionage or data breaches, where a single compromised prompt could leak entire project roadmaps.
Operational Fragility from Hyper-Flattening: Removing middle management layers in favor of AI oversight can lead to a loss of institutional knowledge and human nuance. If the AI systems fail or produce hallucinations, the lack of human “buffers” could cause small operational errors to scale rapidly across the entire organization.

