TechRisk Notes #53: CRYSTALS-Kyber vulnerable + Growing AI risks
Plus, Google MFA bypassed, Identity theft through AI, CoinsPaid got hacked again, and more!
Tech Risk Reading Picks
Quantum-safe algorithm at risk: Multiple implementations of Kyber, a quantum-safe encryption algorithm, are vulnerable to timing-based attacks dubbed KyberSlash. CRYSTALS-Kyber is a key encapsulation mechanism selected by NIST for standardization in its post-quantum cryptography project, and popular projects such as Mullvad VPN and the Signal messenger use it. KyberSlash exploits timing differences in division operations during the decapsulation process, enabling attackers to gradually recover secret keys. If a service accepts repeated decapsulation requests against the same key pair, timing analysis could compromise the encryption. Researchers at Cryspen discovered the KyberSlash1 and KyberSlash2 vulnerabilities and demonstrated secret-key recovery in two out of three attempts. [more]
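The public fix for KyberSlash replaces the secret-dependent division with a multiply-and-shift whose latency does not vary with the operand. A minimal Python sketch of the idea follows; it is a toy model of the C-level patch (the constants 1665 and 80635 come from the published fix), and note that the timing property only matters in compiled code — Python here just shows the two roundings are numerically equivalent:

```python
KYBER_Q = 3329  # Kyber's prime modulus

def to_bit_division(t):
    """Vulnerable pattern: round a coefficient t in [0, Q) to one
    message bit using integer division. On many CPUs division latency
    depends on the operand values, so this can leak information about t."""
    return (((t << 1) + KYBER_Q // 2) // KYBER_Q) & 1

def to_bit_constant_time(t):
    """Patched pattern: replace the division by a multiplication with a
    precomputed reciprocal (80635 ~= 2**28 / 3329) and a shift, which
    runs in the same time regardless of t."""
    t = (t << 1) + 1665      # 1665 = (Q + 1) // 2, the rounding offset
    t = (t * 80635) >> 28    # multiply-shift stands in for "// Q"
    return t & 1

# Both roundings agree on every possible coefficient value.
assert all(to_bit_division(t) == to_bit_constant_time(t)
           for t in range(KYBER_Q))
```

The exploitable signal is gone because multiplication and shifts execute in data-independent time on common hardware, while the quotient computed is bit-for-bit identical.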
Google MFA bypassed: Hackers have found a way to gain unauthorized access to Google accounts, bypassing any multi-factor authentication (MFA) the user may have set up. To do this they steal authentication cookies and then extend their lifespan. It doesn’t even help if the owner of the account changes their password. [more]
If you think your account has been compromised, sign out of all browsers to invalidate the current session tokens, reset your password, and then sign back in to generate new tokens. Only this combination stops the unauthorized access, because it invalidates the stolen tokens.
Navigating AI risk: For high-risk use cases, companies should take the following precautions, which are drawn from the European Union Artificial Intelligence Act and the technical companion of the White House’s “Blueprint for an AI Bill of Rights”. [more]
Proposed regulations of AI: MIT experts have released four policy briefs addressing the safe deployment and regulation of artificial intelligence (AI). [more]
The first brief proposes a framework for AI governance in the U.S., emphasizing the need to balance maintaining U.S. AI leadership with ensuring beneficial AI deployment, prioritizing security, privacy, safety, shared prosperity, and democratic values.
The second brief focuses on regulating large language models, highlighting the challenges posed by their broad applicability and unpredictable behavior. Recommendations include weighing a model's purpose and availability, and implementing safety measures such as verifiable attribution and watermarking.
The third brief explores "pro-worker AI," discussing how AI applications can impact inequality and suggesting that certain applications, like personalized teaching tools and healthcare accessibility, hold promise for workers.
The fourth brief addresses labeling AI-generated content as a strategy to mitigate harms. The authors provide a framework for platforms and policymakers that weighs factors such as the content creation process, the prevention of misinformation, and the effectiveness of labels in various contexts.
Identity theft through AI: OpenAI is set to launch its GPT store in January 2024, allowing users to create and trade customized AI agents. This development highlights the need to consider the psychological and ethical implications of AI replicas. Instances of virtual replicas of psychotherapist Esther Perel and psychologist Martin Seligman being created without their consent have emerged. While these AI replicas aim to assist and spread healing, concerns about unauthorized "AI replica identity theft" or "AI personality theft" arise, as individuals' virtual counterparts can be developed without their permission, taking the form of interactive chatbots or digital avatars. [more]
Rising risk of AI: U.S. law enforcement and intelligence officials have raised concerns about the potential misuse of artificial intelligence (AI) in facilitating cybercrimes such as hacking, scams, and money laundering. The director of cybersecurity at the National Security Agency, Rob Joyce, mentioned that AI is being used by less capable individuals to guide hacking operations, making them more effective and dangerous. The FBI has observed an increase in cyber intrusions due to AI lowering technical barriers. Federal prosecutors also highlighted the potential for AI to aid financial crimes, such as generating convincing scam messages and using deepfake technology to trick banks' identity verification systems. [more]
AI-fueled misinformation and disinformation: The World Economic Forum's Global Risks Report for 2024 highlights concerns about AI-fueled misinformation and disinformation. As AI continues to advance and becomes more accessible, it appears to be a significant factor contributing to global risks. This is particularly relevant as the report anticipates the 2024 election year and ongoing geopolitical conflicts. The increased capability and accessibility of AI may pose challenges related to the spread of misinformation, potentially impacting political events and global stability. [more][more-wef-report]
Mandiant’s X incident: Mandiant’s investigation into the takeover of its X (formerly Twitter) account revealed that the account was likely compromised in a “brute-force password attack”, and that the incident impacted only that single account. Mandiant also noted there was no evidence that Mandiant or Google Cloud systems were compromised in relation to this event. [more]
Web3 Cryptospace Spotlight
$3.4M drained: Gamma Strategies suffered a security breach that resulted in a loss of approximately $3.4 million. The vulnerability arose from a discrepancy in the accounting mechanisms for depositing and withdrawing funds, causing a mismatch between liquidity and shares. The exploit allowed the attacker to withdraw an excessive number of tokens, as explained by BlockSec founder Yajin Zhou. [more][more-2]
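The class of bug described — deposits and withdrawals priced by inconsistent accounting, so minted shares stop matching actual liquidity — can be illustrated with a toy vault. This is a hypothetical Python sketch of the general vulnerability pattern, not Gamma's actual contract logic:

```python
class ToyVault:
    """Toy vault whose deposit side mints shares from a manipulable
    spot price, while the withdrawal side redeems pro rata against the
    real pooled balance. The mismatch lets an attacker mint shares
    cheaply and cash them out at the honest depositors' expense."""

    def __init__(self):
        self.assets = 0.0        # real tokens held by the vault
        self.shares = 0.0        # total shares outstanding
        self.spot_price = 1.0    # deposit-side price (attacker-manipulable)

    def deposit(self, amount):
        # BUG: shares are minted from a spot price instead of the
        # pooled assets/shares ratio used on withdrawal.
        minted = amount / self.spot_price
        self.assets += amount
        self.shares += minted
        return minted

    def withdraw(self, shares):
        # Withdrawal pays out pro rata against the real balance.
        amount = shares * self.assets / self.shares
        self.assets -= amount
        self.shares -= shares
        return amount


vault = ToyVault()
vault.deposit(100)                    # honest user: 100 tokens -> 100 shares
vault.spot_price = 0.5                # attacker pushes the deposit price down
stolen_shares = vault.deposit(100)    # attacker: 100 tokens -> 200 shares
payout = vault.withdraw(stolen_shares)
# attacker deposited 100 tokens but withdraws ~133, drained from others
```

Keeping a single source of truth — minting and redeeming against the same assets/shares ratio — removes the discrepancy that this kind of exploit relies on.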
CoinsPaid hacked again: Crypto payment gateway CoinsPaid suffered its second security breach in six months, with Cyvers reporting unauthorized transactions totaling nearly $7.5 million. Cyvers' AI system identified irregular transactions, leading to the withdrawal of $6.1 million in Tether (USDT), Ether (ETH), USD Coin (USDC), and CoinsPaid's native token CPD. CoinsPaid, an Estonian payment processor, has not yet commented on the incident. [more]
In July 2023, CoinsPaid suffered an earlier breach in which over $37 million was stolen. The attackers, allegedly from the North Korean state-backed Lazarus Group, employed advanced social engineering tactics: they tricked an employee through a fake job interview into downloading malicious code, granting unauthorized access to CoinsPaid's infrastructure.
Blockchain security firm breached: The Twitter/X account of CertiK, a blockchain security firm, was hijacked in a social engineering attack. The threat actor, using a hacked account linked to a well-known media entity, redirected CertiK's 343,000 followers to a malicious website promoting a cryptocurrency wallet drainer. [more]