TechRisk #81: Towards AI models collapse
Plus, Singapore’s S$100M push on AI and quantum, SAP AI flaws, Google Cloud saw attackers targeting both enterprises and individuals, Web3 $12M human error, and more!
Tech Risk Reading Picks
Towards AI models collapse: The article explores the phenomenon of "model collapse" in AI, where generative models trained on data produced by previous models suffer degradation over generations. This recursive training leads to a loss of information about the original data distribution, particularly affecting low-probability events. The study demonstrates that model collapse can affect large language models, variational autoencoders, and Gaussian mixture models, highlighting the importance of maintaining genuine human-generated data for training to preserve model performance. [more]
Red teaming AI systems: Microsoft’s AI Red Team, formed to enhance AI security, mimics hacker tactics to identify vulnerabilities in AI systems. They address risks like biased outputs and automated attacks. This multidisciplinary team, including experts in various fields, collaborates with Microsoft's AI Ethics and Effects team. They also release open-source tools and best practices to aid external security professionals. This integrated approach ensures AI safety by mapping, measuring, and managing potential harms before technology reaches customers. [more]
AI chat assistants vulnerability: Researchers at Ben-Gurion University have discovered a vulnerability in AI chat assistants, including ChatGPT and Microsoft Bing AI, that allows hackers to read encrypted conversations. The attack exploits a side channel called the "token-length sequence," which arises from the way these assistants transmit data. When AI responses are sent as individual tokens (small chunks of text), the size and sequence of these encrypted packets can be analyzed to infer the content of the conversation. This method allows attackers with a passive adversary-in-the-middle (AitM) position to achieve a high level of accuracy in deciphering responses—up to 29% perfect word accuracy and 55% topic inference. The vulnerability affects all major AI chat assistants except for Google Gemini. Mitigating this issue requires changes in how data is transmitted, such as sending tokens in large batches or padding them to uniform lengths to obscure their true size and sequence. This discovery highlights the need for improved security measures in the deployment of AI assistants to protect user privacy. [more]
AI missteps could unravel global peace and security: There is a need for a comprehensive overhaul of AI education and professional development to include courses on ethics, governance, and the societal impact of technology. It advocates for responsible innovation tools and continuous learning to mitigate risks. Additionally, it highlights the importance of diverse and inclusive AI research communities and the necessity for AI practitioners to engage with policymakers and the public to address potential risks effectively and ensure that AI technologies positively contribute to global security. [more]
Singapore’s S$100M push on AI and quantum: Singapore is investing S$100 million to enhance quantum and AI technologies in its financial sector through the Financial Sector Technology and Innovation Grant Scheme (FSTI 3.0). The initiative includes grants for establishing quantum technology centers, supporting industry-wide AI solutions, and improving cybersecurity with post-quantum cryptography and quantum key distribution. The Monetary Authority of Singapore (MAS) is partnering with educational institutions to develop quantum and AI talent. This effort aims to foster innovation, improve technological capabilities, and address industry challenges such as fraud detection. [more]
Gaps in securing GenAI tools: Companies are struggling to secure generative AI tools, with many implementing basic cybersecurity measures like blocking controls, data loss prevention (DLP) tools, and live coaching. Netskope's study highlights that over one-third of sensitive data shared with AI apps is regulated, necessitating robust security strategies. Despite increased use of DLP tools and block/allow policies, many organizations lack mechanisms to handle risks from AI outputs, such as misinformation and malicious content. The rapid adoption of generative AI tools has intensified the need for comprehensive security measures. [more]
Significant rise in cyber threats and the sophistication of AI-driven attacks: Google Cloud's 2024 report reveals a significant rise in cyber threats, emphasizing the sophistication of AI-driven attacks and the growth of ransomware-as-a-service. It highlights increased vulnerabilities in supply chains, particularly in the Asia Pacific region, where many organizations have been negatively impacted by breaches. The report stresses the importance of adopting zero-trust security models and proactive measures like threat intelligence sharing to mitigate these threats. [more][more-report]
Bypassing secure email gateways: Threat actors are increasingly using encoded URLs to bypass secure email gateways (SEGs), posing a significant challenge to email security. [more]
This tactic involves rewriting or encoding malicious URLs to manipulate SEGs, allowing the URLs to slip through without proper scrutiny. Security researchers have observed a notable uptick in these attacks, particularly in the second quarter of this year. The four email security gateways most commonly exploited by threat actors include VIPRE Email Security, Bitdefender LinkScan, Hornet Security Advanced Threat Protection URL Rewriting, and Barracuda Email Gateway Defense Link Protection.
The encoding process exploits vulnerabilities in SEG technologies. SEGs typically rewrite URLs in outbound emails to direct them through their own infrastructure for safety checks. However, when the encoded URLs arrive at their destination, some SEGs fail to properly scan them, assuming they are safe based on their origin. This oversight allows potentially malicious links to evade detection and reach unsuspecting recipients.
To counteract this threat, organizations should focus on user education and improving SEG technologies to better detect and handle encoded URLs.
SAP AI flaws: Cybersecurity researchers identified five vulnerabilities in SAP AI Core, dubbed SAPwned, which could allow attackers to access and modify customer data and internal artifacts. These flaws, now fixed by SAP, could enable unauthorized access to cloud environments and Kubernetes clusters, leading to potential supply chain attacks and data theft. The issues stemmed from inadequate isolation and sandboxing, highlighting the need for robust tenant isolation in AI services to prevent similar threats. [more]
Crowdstrike outage: CrowdStrike recently acknowledged that a security update led to the disruption of 8.5 million Windows PCs. The company attributed this incident to bugs encountered during testing. The update was intended to enhance security measures but instead caused widespread system failures. CrowdStrike is currently investigating the root cause and working on solutions to prevent similar occurrences in the future. The company has also released patches to address the issues and restore functionality to the affected systems. [more][more-crowdstrike]
Airport attacked after global IT outage: Split Airport was attacked by the Akira ransomware gang, causing major flight cancellations and delays due to the shutdown of IT infrastructure. The attack happened shortly after a global IT outage from a CrowdStrike update. The Akira group, active since March 2023, has targeted over 250 organizations and made $42 million in ransoms. They have attacked various sectors, including cloud services, universities, government bodies, and financial institutions. [more]
Web3 Cryptospace Spotlight
$12M human error : LI.FI, a cross-chain blockchain protocol, was exploited due to a human error during a smart contract update, leading to the loss of nearly $12 million in USDC, USDT, and DAI from 153 wallets. The error involved a vulnerability in validating transactions related to the protocol's interaction with the LibSwap code library. Upon detecting the breach, LI.FI quickly activated their incident response plan, disabling the vulnerable code and containing the threat. The protocol is now focused on recovering user funds, collaborating with law enforcement and web3 security firms. Affected users are urged to contact LI.FI for assistance. Previously, LI.FI experienced a similar exploit in 2022, resulting in a $600,000 loss. [more]
$7.6M drained through DeFi protocol: Rho Markets, a DeFi protocol on the Scroll network, was hacked for $7.6 million through an Oracle vulnerability. The stolen funds included USD Coin (USDC) and Tether USD (USDT). The platform halted operations to investigate the exploit. Both Rho Markets and Scroll are collaborating on a thorough assessment. The exploit was isolated to Rho Markets, ensuring the core Scroll network's security. Future steps will focus on enhancing Oracle infrastructure to prevent similar attacks. [more]
$230M stolen after multisig wallet breach: WazirX, India's largest crypto exchange, recently suffered a hacking incident resulting in a loss of $230 million. The breach affected one of their multisig wallets, which was under the custody of Laminal. WazirX has launched a bounty program to recover the stolen assets and is collaborating with external experts and other exchanges for tracing and recovery. [more]