TechRisk #150: Design-level AI-browser exploit
Plus, ClickFix attacks surge 517%, malicious LLMs accelerate cybercrime capabilities, and more!
Tech Risk Reading Picks
HashJack: Emerging AI-browser exploit raises design-level security concerns [more]
HashJack exposes a novel prompt-injection technique where malicious instructions are hidden in URL fragments (#), allowing attackers to manipulate AI browser assistants without compromising the underlying website.
Exploits include credential theft, harmful advice, data exfiltration, and automated execution of risky system actions, especially in advanced agentic modes where AI acts independently.
While Microsoft and Perplexity issued timely fixes, Google has not yet addressed the issue for Gemini, underscoring uneven responses and the need for stronger AI-integrated security practices.
Google’s classification of the vulnerability as low-severity and “intended behaviour” has raised concern because it leaves users exposed to an exploit that bypasses traditional security controls and relies on AI design flaws.
ClickFix attacks surge 517% [more]
ClickFix attacks have escalated sharply, with a 517% increase and growing use by state-aligned threat groups from Iran, North Korea, and Russia.
Threat actors now exploit cloned, trusted-looking websites—including fake ChatGPT Atlas installers—to trick users into running password-harvesting commands.
The attack bypasses traditional controls by convincing users to paste obfuscated commands into their terminal, enabling privilege escalation and full system compromise.
Use of Google Sites as a trusted delivery platform, which raises concerns about major tech platforms inadvertently enabling high-fidelity phishing; attackers leverage Google’s implicit trust to increase success rates, sparking debate about platform accountability.
Malicious LLMs accelerate cybercrime capabilities [more]
Malicious LLMs are operational and evolving. Tools like WormGPT 4 and KawaiiGPT now generate functional ransomware, phishing campaigns, and automated attack scripts, enabling scalable cyberattacks with minimal expertise.
These models allow inexperienced actors to produce professional-grade phishing messages, conduct lateral movement, and execute data exfiltration with ease.
Paid and free versions, supported by active Telegram communities, indicate a maturing illicit market that accelerates tool development and attacker collaboration.
AI data security reality check for enterprise leaders [more]
AI adoption has outpaced oversight, with 83% of organizations using AI daily but only 13% having strong visibility into how it handles sensitive data.
AI is functioning as an ungoverned enterprise identity, resulting in widespread over-access to sensitive information and limited ability to monitor or control prompts and outputs.
Governance readiness is critically low, as only 7% have a dedicated AI governance function and just 11% feel prepared for emerging regulatory demands.
Autonomous AI agents present the most acute and debated risk, with 76% of professionals calling them the hardest systems to secure and over half unable to block risky actions in real time.
Critical picklescan vulnerabilities expose AI supply chains to model-based attacks [more]
High-severity flaws in Picklescan allowed attackers to bypass its safeguards and execute arbitrary code through malicious PyTorch model files.
These vulnerabilities highlight systemic weaknesses in relying on a single scanning tool to secure increasingly complex AI model formats.
Remediation is available (Picklescan v0.0.31), underscoring the need for continuous, expert-driven monitoring of AI supply chain risks.
The flaws expose a fundamental tension between rapid AI innovation and lagging security controls, raising concerns that existing model-scanning tools cannot keep pace with emerging threats.
AI advancing faster than its safeguards [more]
Layered AI safeguards are expanding but remain inconsistent, with no single control reliably stopping determined attackers—forcing reliance on imperfect, overlapping defenses.
Attack techniques and open-weight model adaptations are accelerating faster than defensive tools, creating unpredictable risks even when vendors claim strong safeguards.
Governments and companies are building early safety frameworks, but the absence of shared standards means oversight, evaluation quality, and vendor disclosures vary widely.
