TechRisk #128: The new weakest link
Plus: how cybercriminals approach LLMs, Thailand's draft AI law, cost-effective prompts, Web3 projects infiltrated by hackers, and more!
Tech Risk Reading Picks
The new weakest link: SquareX’s research reveals that Browser AI Agents, software tools that automate web-based tasks on behalf of users, have overtaken employees as the weakest link in enterprise security. While they offer significant productivity gains, these agents lack the security awareness and contextual understanding that human users possess, which makes them highly susceptible to phishing, OAuth abuse, and impersonation attacks. Unlike employees, Browser AI Agents cannot identify suspicious URLs, unusual permission requests, or deceptive page designs, and manually safeguarding every task with a hand-written secure prompt is unrealistic at scale. SquareX demonstrated these vulnerabilities using the Browser Use framework, where an agent granted full email access to a malicious app; a sketch of the secure-prompt problem follows below. Since traditional security tools and browsers cannot distinguish between user and agent actions, enterprises must urgently adopt browser-native security measures and rethink access management to protect against the threats these AI-driven tools introduce. [more]
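To make the secure-prompt burden concrete, here is a minimal sketch modeled on the Browser Use framework's published quick-start (the exact `Agent`/`ChatOpenAI` signatures are an assumption, and the task strings and domain allowlist are hypothetical):

```python
# A minimal sketch of the secure-prompt problem, modeled on Browser Use's
# published quick-start (treat exact signatures as an assumption). Nothing
# in the naive task stops the agent from approving a malicious OAuth consent
# screen; the "guarded" variant must restate its rules in every task prompt.
import asyncio

from browser_use import Agent           # the framework used in SquareX's demo
from langchain_openai import ChatOpenAI

naive_task = "Open the invoice tool linked in my latest email and connect it to my inbox."

guarded_task = (
    naive_task
    + " Never grant OAuth scopes beyond read-only access."
    + " Refuse any consent screen from a domain not in ['mail.example.com']."  # hypothetical allowlist
)

async def main() -> None:
    agent = Agent(task=guarded_task, llm=ChatOpenAI(model="gpt-4o"))
    await agent.run()

asyncio.run(main())
```

The guardrails here live only in natural language that a deceptive page can argue the model out of, and they must be restated for every task, which is why the research argues for browser-native controls instead.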
Cybercriminals’ approach to LLMs: Cisco Talos has uncovered a troubling trend of cybercriminals exploiting large language models (LLMs) to enhance their malicious activities. Although LLMs ship with safety features such as alignment and guardrails, attackers bypass these protections through three key methods: using uncensored models (e.g., OnionGPT, WhiteRabbitNeo), building custom criminal LLMs (such as WormGPT and FraudGPT), and jailbreaking legitimate models via prompt manipulation. These tools are increasingly traded on the dark web, enabling the creation of phishing emails, malware, and other illicit content. [more]
Critical vulnerability in Anthropic’s MCP ecosystem: Cybersecurity researchers have uncovered a critical vulnerability (CVE-2025-49596) in Anthropic's MCP Inspector tool that allows remote code execution (RCE), exposing developers to serious risks such as data theft, backdoor installation, and lateral movement across networks. With a CVSS score of 9.4, the flaw stems from insecure default configurations (unauthenticated proxy communication and binding to all interfaces via 0.0.0.0) combined with a CSRF exploit and a long-standing browser flaw known as 0.0.0.0 Day, enabling attackers to execute malicious code simply by tricking a developer into visiting a crafted website. Although MCP Inspector is a non-production, open-source reference tool, it has been widely adopted. The vulnerability was patched in version 0.14.1; a generic sketch of the underlying misconfiguration follows below. [more]
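As a generic illustration (not MCP Inspector's actual code), the sketch below shows a local dev proxy with the kinds of mitigations a patch along these lines would add; the port number, handler logic, and token scheme are assumptions:

```python
# A generic illustration (not MCP Inspector's actual code) of the two insecure
# defaults the advisory describes: a local dev proxy that binds to 0.0.0.0 and
# accepts unauthenticated requests is reachable by any webpage the developer
# visits, since browser JavaScript can fire requests at local ports, and the
# 0.0.0.0 Day browser flaw let pages reach localhost services that way.
import secrets
from http.server import BaseHTTPRequestHandler, HTTPServer

SESSION_TOKEN = secrets.token_urlsafe(32)  # printed once at startup; hypothetical scheme

class DevProxyHandler(BaseHTTPRequestHandler):
    def do_POST(self) -> None:
        # Mitigation 1: require a bearer token a random webpage cannot know,
        # which defeats drive-by CSRF even via the 0.0.0.0 browser quirk.
        if self.headers.get("Authorization") != f"Bearer {SESSION_TOKEN}":
            self.send_error(401, "missing or bad session token")
            return
        # Mitigation 2: reject cross-site requests outright.
        origin = self.headers.get("Origin", "")
        if origin not in ("http://127.0.0.1:6277", ""):
            self.send_error(403, "cross-origin request blocked")
            return
        self.send_response(204)
        self.end_headers()

if __name__ == "__main__":
    print(f"session token: {SESSION_TOKEN}")
    # Mitigation 3: bind to loopback, never 0.0.0.0, so the proxy is at least
    # unreachable from other hosts on the network.
    HTTPServer(("127.0.0.1", 6277), DevProxyHandler).serve_forever()
```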
Cost-effective prompts: As large language models (LLMs) grow more powerful, with extended context windows and deeper reasoning, they also become more compute- and cost-intensive, especially when prompts are inefficient or unnecessarily complex. This has led to the rise of “prompt ops,” a discipline focused on refining and orchestrating prompt interactions to optimize cost, performance, and utility. Unlike prompt engineering, which focuses on crafting prompts, prompt ops manages their lifecycle: monitoring, adjusting, and tuning them over time. Poor prompting, whether vague, overly verbose, or unstructured, wastes compute and inflates costs. Because models tend to over-generate and users often overfeed context, it is essential to design prompts that are specific, efficient, and well structured (see the sketch below). Emerging tools and best practices aim to automate and streamline this process, with the eventual goal of agents managing prompt optimization autonomously. [more]
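As a minimal prompt-ops sketch, the snippet below uses OpenAI's tiktoken tokenizer to compare what two prompt variants would cost at volume; the per-token price and call volume are placeholder assumptions, not quoted rates:

```python
# A minimal prompt-ops sketch: measure what a verbose prompt actually costs
# before shipping it. tiktoken is OpenAI's tokenizer library; the price and
# call volume below are placeholders, not quoted rates.
import tiktoken

PRICE_PER_M_INPUT_TOKENS = 2.50  # hypothetical input price, USD per 1M tokens

verbose = (
    "Hello! I was hoping you could possibly help me out. I have a CSV file and "
    "I would really like, if it's not too much trouble, a summary of it, maybe "
    "with some statistics or anything else you think could be interesting."
)
tight = "Summarize the attached CSV: row count, column types, top 3 anomalies."

enc = tiktoken.get_encoding("o200k_base")  # tokenizer family for recent GPT models

def daily_cost(prompt: str, calls_per_day: int = 50_000) -> float:
    """Estimated daily input cost if this prompt runs at the given volume."""
    tokens = len(enc.encode(prompt))
    return tokens * calls_per_day * PRICE_PER_M_INPUT_TOKENS / 1_000_000

for name, p in [("verbose", verbose), ("tight", tight)]:
    print(f"{name}: {len(enc.encode(p))} tokens, ~${daily_cost(p):.2f}/day at 50k calls")
```

The same per-prompt accounting, run continuously rather than once, is the "monitoring and tuning over time" that distinguishes prompt ops from one-off prompt engineering.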
AI risk at the inference stage: AI’s promise for enterprises is clear, but security risks at the inference stage (where AI models generate real-time outputs) are driving unexpected, significant costs that threaten ROI, compliance, and customer trust. Attackers exploit inference-time vulnerabilities such as prompt injection, data poisoning, and information leakage, causing costly breaches and regulatory penalties that inflate total cost of ownership. Many organizations underestimate these risks by concentrating security on infrastructure rather than inference, and by relying on third-party models without thorough vetting. Experts stress pairing foundational security principles with modern AI-specific controls such as runtime monitoring, zero-trust access, and behavioral analytics; a minimal runtime guard is sketched below. [more]
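As an illustrative, deliberately simple runtime guard (not any vendor's product), the sketch below screens inputs for obvious injection markers and redacts credential-shaped strings from outputs; the patterns and function names are assumptions:

```python
# An illustrative inference-time guard: screen the prompt for obvious
# injection markers before the model call, and scan the completion for
# credential-shaped strings before returning it to the caller.
import re

INJECTION_MARKERS = [
    r"ignore (all|any|previous) instructions",
    r"system prompt",
    r"developer message",
]
SECRET_PATTERN = re.compile(r"(sk-[A-Za-z0-9]{20,}|AKIA[0-9A-Z]{16})")  # common API-key shapes

def guard_input(prompt: str) -> str:
    lowered = prompt.lower()
    for marker in INJECTION_MARKERS:
        if re.search(marker, lowered):
            raise ValueError(f"possible prompt injection: matched {marker!r}")
    return prompt

def guard_output(completion: str) -> str:
    # Redact rather than block, so one false positive doesn't break the app.
    return SECRET_PATTERN.sub("[REDACTED]", completion)

# Usage with any model client:
#   response = model.generate(guard_input(user_prompt))
#   return guard_output(response)
```

Pattern matching is only a first layer; the behavioral analytics experts call for would profile model and agent activity over time rather than match strings.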
Thailand’s AI law: Thailand aims to advance its AI ecosystem through its latest draft AI law, which seeks to balance innovation with risk management. The Electronic Transactions Development Agency (ETDA) built the framework by studying global models such as the EU AI Act and adopting a four-tier governance approach covering international cooperation, sectoral oversight, corporate implementation, and public literacy. The draft law uses a risk-based classification for AI systems and delegates enforcement to existing sector-specific regulators, with oversight by a new regulator and expert committees. It also promotes legal clarity, supports innovation such as autonomous vehicles and AI sandboxes, and requires foreign AI service providers to establish local representatives. The law emphasizes accountability, human attribution, and user protection, while permitting the reuse of previously collected data under strict conditions. Industry feedback was largely positive, welcoming the law's balance and offering practical suggestions, including clearer definitions, tiered compliance for SMEs, certified AI auditors, and a phased approach to content labelling, all aimed at ensuring responsible AI deployment and public trust. [more][more-comparisons_of_AI_laws]
Aviation sector in the crosshairs: Hawaiian Airlines has reported a cybersecurity incident amid heightened warnings from U.S. authorities and cybersecurity firms about the Scattered Spider cybercrime group targeting the aviation sector. The FBI, Mandiant, and Palo Alto Networks have alerted the industry that Scattered Spider, known for its social engineering tactics, may be focusing on airlines and their vendors. While it is unclear whether the group is behind the Hawaiian Airlines breach, a similar attack on WestJet and system issues at American Airlines raise concerns about a broader campaign. Experts advise immediate system hardening and stronger identity verification protocols to counter these threats. [more]
Web3 Cryptospace Spotlight
Stablecoin lending protocol lost $9.3M: Resupply, a stablecoin lending protocol, suffered a $9.3 million exploit caused by a price manipulation bug in its pair contract. An attacker, funded through Tornado Cash, used the manipulated price to secure a $10 million loan against minimal collateral. The incident drove the protocol’s total value locked down from $135 million to $85 million and cut the RSUP token’s value to $7 million. While only the wstUSR market was affected and the impacted contract has since been paused, the incident has raised concerns over the protocol’s security. Experts suggest the breach could have been prevented with better input validation and oracle checks (sketched below), and Resupply has promised a full post-mortem soon. [more]
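To illustrate the kind of input validation and oracle checks experts had in mind, here is a simplified Python sketch (not Resupply's actual contract logic; the names, bounds, and rates are hypothetical):

```python
# A simplified, non-Solidity sketch of the checks commentators called for:
# validate the reported exchange rate before it enters collateral math.
# Names and bounds are illustrative, not Resupply's actual contract logic.

LTV_LIMIT = 0.80       # max loan-to-value ratio
MAX_DEVIATION = 0.02   # reject rates >2% away from a trusted reference feed

def safe_rate(pair_rate: float, reference_rate: float) -> float:
    # Input validation: a zero or negative rate is always a bug or an attack.
    if pair_rate <= 0:
        raise ValueError("degenerate exchange rate")
    # Oracle check: cross-check the pair's spot rate against an independent
    # feed, so a single manipulated pool cannot set the price on its own.
    if abs(pair_rate - reference_rate) / reference_rate > MAX_DEVIATION:
        raise ValueError("rate deviates from reference oracle")
    return pair_rate

def max_borrow(collateral: float, pair_rate: float, reference_rate: float) -> float:
    """Largest loan this collateral supports under the validated rate."""
    return collateral * safe_rate(pair_rate, reference_rate) * LTV_LIMIT

# With checks in place, a manipulated rate raises instead of minting a loan:
#   max_borrow(1_000, pair_rate=0.0, reference_rate=1.0)  -> ValueError
```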
Web3 projects infiltrated by hackers: Hackers impersonating IT staff have infiltrated several Web3 projects, including Favrr, Replicandy, and ChainSaw, and made off with nearly $1 million by exploiting internal security flaws and manipulating NFT minting systems tied to Pepe creator Matt Furie. The breaches, traced to suspected North Korean operatives using fake developer identities and VPNs, triggered sharp devaluations in the affected NFT markets and raised alarms over poor vetting in crypto hiring. The Replicandy breach alone caused over $310,000 in losses, with the stolen funds laundered through intermediary wallets and exchanges. The attacks fit a broader pattern in 2025, in which North Korea-linked groups have stolen over $1.6 billion in crypto. [more]