TechRisk #144: Hacking AI browsers
Plus, over 90 vulnerabilities found in AI powered IDEs, managing AI like an employee, auditable vibe coding, and more!
Tech Risk Reading Picks
Security risk in AI-powered browsers: Recent research highlights a significant security risk in AI-powered browsers, showing that they can be exploited through “prompt injection” attacks hidden in images or websites. Brave Software demonstrated that malicious instructions (concealed in faint text within an image) can trick browsers like Perplexity’s Comet and Fellou into actions such as accessing a user’s email or visiting hacker-controlled sites. While users can sometimes intervene during these attacks, the findings illustrate a systemic vulnerability: AI browsers can act with the user’s authenticated privileges, potentially exposing sensitive data. [more]
Over 90 vulnerabilities found in AI powered IDEs: Researchers at Ox Security disclosed that the Cursor and Windsurf IDEs (AI-powered and VS Code–forked editors) running on outdated Chromium and V8 builds inherited 94+ known, already-patched n-day vulnerabilities. Estimated 1.8 million developers were affected. Ox demonstrated a proof-of-concept exploit for the Maglev JIT integer-overflow (CVE-2025-7656, fixed July 15) that crashes Cursor via a deeplink and warned that real-world attacks could lead to arbitrary code execution (through malicious extensions, poisoned READMEs, phishing, etc.). [more]
Managing AI like an employee: As generative AI rapidly moves from experimentation to enterprise integration, many companies risk undermining its potential by skipping proper onboarding. Unlike static software, AI systems are probabilistic, adaptive, and prone to drift, hallucination, bias, or data leakage if left ungoverned. Real-world incidents (from Air Canada’s chatbot liability to biased recruiting algorithms and data leaks) show the tangible costs of unmonitored AI. Effective onboarding should treat AI like a new employee: define its role, train it with contextual knowledge via retrieval-augmented generation (RAG) or similar methods, test it in simulations before deployment, and establish continuous feedback, monitoring, and audits. [more]
51% of cybersecurity professionals view AI-driven threats as biggest concern: According to ISACA, 51% of European IT and cybersecurity professionals expect AI-driven cyber threats and deepfakes to be their biggest concern in 2026, as most organizations remain unprepared to manage AI-related risks or use generative AI securely. While AI is seen as both a major threat and opportunity, confidence in tackling issues like ransomware, regulatory complexity, and supply chain vulnerabilities remains low. Professionals identify AI-driven social engineering as the top emerging cyber threat, with resilience and responsible AI use becoming key priorities. [more]
Auditable vibe coding: Codev is an open-source platform designed to address the pitfalls of “vibe coding,” where rapid AI-driven prototyping often produces brittle, undocumented code. Built on the SP(IDE)R framework, Codev treats natural language conversations between humans and AIs as part of the actual source code, turning them into structured, versioned, and auditable assets. [more]
Dual edged sword of AI in digital forensics: A new study from the University of Cagliari explores how AI is transforming both cybercrime and digital forensics, highlighting its power and its risks. AI now aids investigators by identifying relevant evidence, filtering sensitive content, detecting attack patterns, and automating forensic workflows (from data collection to legal reporting) while also enhancing training through simulated cyberattacks. Yet, the same technologies can be exploited by criminals to hide traces, generate convincing fakes, or manipulate digital evidence, challenging authenticity and trust. [more]
AI governance maturity remain uneven: AuditBoard’s 2025 research finds that while AI adoption is rapidly spreading across enterprise risk functions, organizational confidence and governance maturity remain uneven. Many firms have implemented AI tools and trained teams in machine learning, yet few feel ready for upcoming regulatory demands. [more]
KPMG’s approach to AI security: Organizations must implement comprehensive AI governance frameworks that align with risk appetite and ethical standards, foster transparency, fairness, and explainability, and engage employees to build trust. They should address both traditional and AI-specific security threats (such as prompt injection, data poisoning, agent hijacking, and API vulnerabilities) through continuous testing, threat modeling, and monitoring. Leveraging automation and AI tools to support asset discovery, supply chain visibility, GRC integration, and audit readiness. Finally, organizations must demonstrate accountability and due diligence via transparent reporting, adherence to Trusted AI principles, and risk-based AI scorecards to ensure consistent evaluation and regulatory compliance. [more]
SANS fellow’s recommendations to keep AI system safe: AI can greatly enhance cybersecurity by reducing alert fatigue, spotting patterns faster, and scaling human effort, but it also introduces new risks that must be actively managed. Organizations need to treat AI systems (especially agentic ones that can act autonomously) as first-class identities within their security frameworks, enforcing scoped credentials, strong authentication, audit logging, and isolation to prevent misuse. Securing AI also requires protecting the underlying models, data pipelines, and integrations through access and data controls, hardened deployment strategies, inference protection, continuous monitoring, and lifecycle model integrity. Finally, teams should balance automation and human oversight: routine, low-risk tasks like log parsing or alert deduplication can be automated, while context-heavy decisions like incident response should leverage AI for assistance but remain human-controlled, ensuring trust, accountability, and effective defense. [more]
Web3 Cryptospace
Overview of EtherHiding: EtherHiding is a multi-stage attack method, first seen in 2023 and attributed by Google to North Korean-linked actors, that combines social engineering (fake job offers, recruitment scams, Discord/Telegram lures and bogus coding tests or “patch” downloads) with website compromise and malicious smart-contract code: attackers inject a Loader Script into a legitimate site which then talks to an embedded smart contract (often via read-only blockchain calls to avoid fees and detection) to trigger crypto- and data-stealing payloads; the attack typically installs second-stage JavaScript malware called JADESNOW to exfiltrate sensitive data and may add a persistent third stage for long-term access to high-value victims. [more]
