TechRisk #125: Zero-click attacking Copilot (and other AI applications)
Plus, Apple noted LRMs’ limitations, AI Red-Team playbook and guides, ChatGPT used by various threat actor groups to improve efficiency, AI agents pose threats to Web3, and more!
Tech Risk Reading Picks
Zero-click attack on Copilot: Microsoft 365 Copilot was recently found vulnerable to a critical zero-click attack called EchoLeak (CVE-2025-32711), which allowed attackers to exfiltrate sensitive user data without interaction, according to Aim Security. The exploit involved indirect prompt injection via a specially crafted email that, when referenced by a user query, triggered Copilot to execute hidden instructions and leak past chat information to an attacker’s server. Despite Copilot’s security mechanisms like XPIA classifiers and CSP, the attack bypassed these protections through deceptive email phrasing. Microsoft has since patched the issue server-side, requiring no user action, but the method could threaten other AI applications as well. [more]
Apple noted LRMs’ limitations: Recent advancements in frontier language models have led to the development of Large Reasoning Models (LRMs), which generate detailed reasoning traces before answering. While LRMs show improved performance on benchmarks, their true reasoning abilities and limitations remain unclear due to evaluation methods that emphasize final answers and risk data contamination. Apple’s study addresses these gaps using controlled puzzle environments that vary complexity while preserving logical consistency, allowing in-depth analysis of both answers and reasoning processes. Findings reveal that LRMs collapse in accuracy beyond certain complexity levels and exhibit a surprising drop in reasoning effort despite sufficient resources. Comparing LRMs with standard LLMs under equal compute, the study identifies three performance regimes and highlights LRMs’ struggles with consistent algorithmic reasoning. Overall, the work questions the depth of LRMs' reasoning capabilities, emphasizing the need for better evaluation and understanding. [more]
European Commission on GenAI risks and opportunities: The European Commission’s Joint Research Centre (JRC) Outlook report explores the transformative potential of Generative AI (GenAI) within the EU, highlighting its capacity to drive innovation, productivity, and societal change across sectors like healthcare, education, science, and the creative industries. While GenAI offers immense opportunities, it also brings critical challenges such as misinformation, bias, labour disruption, and privacy concerns, necessitating a multidisciplinary approach. The report provides a detailed overview of GenAI's technological capabilities, economic impact, and societal implications, and reviews the EU’s regulatory responses, including the AI Act and data legislation. It emphasizes the importance of strategic policy and governance to harness GenAI’s benefits responsibly, ensuring alignment with democratic values and legal frameworks. [more]
ChatGPT used by various threat actor groups to improve efficiency: OpenAI has identified and banned accounts linked to state-affiliated threat actors from countries including Russia, China, North Korea, Iran, Cambodia, and the Philippines for abusing its AI models in cybercrime and influence operations. These actors used ChatGPT to aid malware development, debug code, automate social media manipulation, and conduct fraudulent schemes such as employment scams and divisive political messaging. Notably, a Russian group used the model to refine a malware strain called ScopeCreep, while Chinese-affiliated groups leveraged it for system configuration and influence campaigns. Although no novel threats were created, OpenAI noted that its tools improved efficiency and scale for malicious activities. [more]
AI threats outpaced defenses: Check Point’s 2025 Cloud Security Report highlights the growing challenges enterprises face in securing increasingly complex multi-cloud environments amid evolving cyber threats, particularly those powered by AI. Based on insights from over 900 global cybersecurity leaders, the report reveals that 65% of organizations experienced cloud-related incidents in the past year, yet only a small fraction could detect or remediate them quickly. Widespread tool sprawl, legacy defenses, and limited visibility—especially into lateral movement within cloud networks—are major vulnerabilities, compounded by skills shortages and rapid tech evolution. Despite increased cloud adoption and rising concern over AI-driven attacks, preparedness remains low. [more]
Hacken’s AI Red-Team Playbook for Security Leaders - [more]
Cloud Security Alliance - CSA’s Agentic AI Red Teaming Guide - [more]
Web3 Cryptospace Spotlight
DeFi ALEX Protocol hacked again: 6 Jun - Bitcoin-based DeFi platform, ALEX Protocol lost $8.3 million due to a vulnerability in its self-listing verification logic that allowed attackers to drain multiple liquidity pools. This is its second major exploit. The stolen assets included STX, sBTC, USDC/USDT, and WBTC. In response, ALEX has pledged full user reimbursement via the ALEX Lab Foundation Treasury, with compensation in USDC based on token values during the attack window. However, the incident—following a $4.5 million exploit in 2024—has intensified scrutiny over the platform’s security, raising serious doubts about its ability to regain user trust without major systemic improvements. [more]
AI agents pose threats to Web3: At the Incrypted Online Marathon during Blockchain Week 2025, David Schwed, CISO at Robinhood, warned of emerging threats in Web3 from autonomous AI agents capable of executing rapid, sophisticated cyberattacks — including draining DeFi pools in under a minute. He criticized traditional cybersecurity measures as inadequate against such fast-evolving threats and emphasized the urgency for Web3 projects to adopt proactive, AI-powered defense strategies. Schwed advocated for integrating Agentic AI into development pipelines for continuous security testing and real-time response, while stressing the importance of transparency, regulatory explainability, and embedding security into the culture and maturity of Web3 organizations from the outset. [more]