TechRisk #151: AI’s future isn’t straightforward
Plus, OpenAI–Mixpanel data breach, new “Zero-Click” data destruction risk, AI coding tools are quietly expanding enterprise risk and more!
Tech Risk Reading Picks
AI’s future isn’t straightforward. [more]
Human advantage intensifies, not declines: As AI scales, demand is rising for uniquely human capabilities (e.g., judgement, leadership, creativity and oversight) making workforce transition and skills investment a board-level priority rather than a technology issue.
Value realization lags adoption: Many organizations face short-term productivity dips and unclear ROI from AI, underscoring that competitive advantage comes from redesigning processes, governance and accountability.
AI’s growth vs. trust and sustainability: While AI promises efficiency and innovation, it is simultaneously driving misinformation risks (“AI slop”) and significant energy demand. This forces leaders to confront whether rapid AI expansion erodes trust and climate commitments unless governed with clear accountability and net-positive energy strategies.
OpenAI-Mixpanel data breach. [more]
Limited exposure via third-party vendor: A smishing attack on analytics provider Mixpanel led to the compromise of limited OpenAI user profile and analytics data, with no impact to OpenAI’s core systems, products, or sensitive credentials.
Swift containment and customer assurance: OpenAI removed Mixpanel from production, confirmed no exposure of ChatGPT content or API usage data, and is notifying affected users while monitoring for misuse.
Third-party risk and transparency: Mixpanel disclosed minimal technical details, raising concerns about visibility and assurance in vendor security incidents. This highlights ongoing challenges for enterprises in managing and governing third-party cyber risk and disclosure expectations.
PromptPwnd: A new AI supply-chain risk. [more]
AI automation introduces a new attack surface: Researchers identified “PromptPwnd,” a vulnerability where attackers can manipulate AI agents embedded in CI/CD pipelines, potentially stealing credentials or altering software workflows through seemingly harmless inputs like bug reports.
The risk is real and already impacting major firms: At least five Fortune 500 companies, including a Google repository, were exposed. This has demonstrated that AI prompt injection can directly compromise critical software delivery systems, not just theoretical AI safety.
Speed vs. safety in AI adoption: The vulnerability highlights a growing tension where companies rapidly deploying AI for efficiency are sometimes disabling built-in safeguards, inadvertently increasing systemic risk across their software supply chains.
Agentic browsers create a new “Zero-Click” data destruction risk. [more]
Agent autonomy can amplify routine access into enterprise-scale damage: AI-powered browsers with OAuth access to Gmail and Google Drive can execute destructive actions (e.g., mass file deletion) from a single benign user prompt, without user confirmation.
Attackers exploit trust and tone, not technical flaws: Polite, well-structured natural-language instructions embedded in emails or URLs can manipulate browser agents into harmful actions. These can be done without jailbreaks, malware, or clicks required.
Is this a “bug” or “feature”? Google classified similar prompt-based abuses as low severity or “intended behavior,” while others patched their product. This raises concern that current industry definitions of security may underestimate real business risk from agentic AI acting on untrusted content.
AI prompt injection is here to stay. [more]
Prompt injection is a structural AI risk, not a temporary flaw: UK intelligence warns that because large language models cannot reliably distinguish instructions from data, prompt injection attacks are likely to remain a permanent residual risk rather than something that can be fully engineered away.
Widespread AI adoption could amplify breach exposure: Embedding generative AI into core business processes (e.g., recruitment, search, code, decision support) without redesigning controls may trigger a new wave of security incidents, similar in scale to early SQL injection breaches.
Treating prompt injection like SQL injection is “dangerous”: Many security teams assume the problem can be fixed with familiar technical controls, but UK intelligence argues this analogy is misleading. This is because prompt injection requires governance, design limits, and operational risk management, not just technical patches, which challenges prevailing industry assumptions and product-led security promises.
How AI coding tools are quietly expanding enterprise risk. [more]
AI-powered IDEs introduce a new, systemic attack surface: Over 30 vulnerabilities show that widely used AI coding assistants can be manipulated to silently exfiltrate sensitive data or execute malicious code by chaining prompt injection with trusted IDE features.
The core issue is flawed trust assumptions, not niche bugs: AI agents are treated as “safe add-ons,” but their autonomous actions can weaponize long-standing IDE functions, bypassing user awareness and traditional security controls.
“Secure by design” tools are enabling attacks by default: The controversy lies in the industry’s rapid deployment of agentic AI without rethinking threat models. This includes auto-approved actions and trusted integrations prioritize productivity over security, creating enterprise-scale risk that many vendors and adopters have underestimated.
Eurostar - A case of classic security failures in a modern LLM. [more]
AI did not create new risks; it amplified existing ones: Eurostar’s AI chatbot suffered from familiar web and API security weaknesses (guardrail bypass, ID validation gaps, injection flaws), showing that traditional security fundamentals still fully apply to AI-enabled systems.
Weak server-side controls undermined trust and governance: Guardrails were visible in the UI but poorly enforced on the backend, allowing attackers to manipulate conversation history, extract system prompts, and inject malicious content despite apparent safeguards.
Disclosure handling raised governance and reputational concerns: Despite a formal vulnerability disclosure programme, reports went unanswered for weeks and were later framed as potential “blackmail,” highlighting breakdowns in security operations, third-party handover risk, and executive oversight of responsible disclosure processes.
AI-Powered financial fraud in digital payments. [more]
Escalating threats: Cybercriminals are now using AI-generated malware to intercept payments via NFC devices, enabling unauthorized transactions and fraudulent online purchases.
Beyond ransomware: AI is no longer limited to traditional attacks; generative AI is being leveraged to create sophisticated financial fraud tools targeting everyday digital payment systems.
Widespread AI accessibility: While AI platforms like ChatGPT, Google Gemini, and Claude empower innovation, they also enable highly convincing phishing and fraud schemes, raising ethical and regulatory concerns about the balance between accessibility and misuse.
Cloud and identity remain the weakest links. [more]
AI risk is still a cloud risk: Despite the focus on advanced AI models, executives’ top concern is the security of the underlying cloud infrastructure, which remains the primary attack surface for AI-enabled enterprises.
Identity management is mission-critical: Over half of organizations cite overly lenient identity practices as a major challenge, reinforcing that access control and identity governance are now central to protecting AI and cloud environments.
Open-source AI libraries raise trust and governance concerns: While open-source accelerates innovation and reduces costs, executives worry about hidden vulnerabilities, data integrity issues, and regulatory compliance.
Strategic vulnerabilities in the AI landscape moving forward. [more]
Shadow AI Expansion: Widespread use of unsanctioned AI tools by employees and misconfigured cloud workloads are creating invisible, unmonitored entry points for data breaches.
Supply Chain & Financial Risk: Reliance on third-party open-source models exposes the firm to embedded malware, while the theft of AI credentials (”LLMjacking”) poses a risk of significant unexpected financial liability. In addition, the Model Context Protocol (MCP) expands the enterprise attack surface by allowing unverified or vulnerable servers to inject malicious code and execute unauthorized commands directly within corporate development environments.
Persistent prompt injection attacks: It remains a pervasive threat with no perfect technical fix. Unlike traditional software bugs, this is an architectural limitation where LLMs cannot fundamentally distinguish between valid instructions and processed data, meaning “autonomous” agents currently require expensive, human-in-the-loop oversight to be safe.

