TechRisk Notes #63: HackerGPT aids defence strategy + AI-based deception attacks set to rise
Plus, Microsoft to release its AI cyber tool, IBM and DARPA collaborated on an AI defense project, an old smart contract was exploited for nearly $2M, a double-spending bug led to a nearly $5M loss, and more.
Tech Risk Reading Picks
HackerGPT: HackerGPT, launched in 2023 and now in beta version 2.0, is a cybersecurity tool powered by ChatGPT. It offers a vast array of hacking tools and techniques to help users manage cybersecurity strategies effectively. Using advanced natural language processing, it provides actionable insights for offensive and defensive cyber activities. A collaboration with WhiteRabbitNeo enhances its capabilities, offering access to the WhiteRabbitNeo 33B-parameter model. The free subscription allows 50 uses per day. [more]
Microsoft AI Cyber: Microsoft is set to launch artificial intelligence tools on April 1 aimed at aiding cybersecurity workers in summarizing suspicious incidents and uncovering the tactics hackers use to conceal their actions. Named Copilot for Security, it was introduced a year ago and has been undergoing trials with companies like BP Plc and Dow Chemical Co. [more]
AI defense project: Over the past four years, IBM has collaborated with DARPA and other entities to address adversarial AI challenges through a project called Guaranteeing AI Robustness Against Deception (GARD). The key focus was on creating defenses against emerging threats, establishing theoretical foundations for robust systems, and developing tools for evaluating algorithm defenses. [more]
AI deception attacks to rise in 2024: Recorded Future conducted experiments showcasing potential malicious uses of AI, including deepfakes for social engineering, AI-assisted influence operations, AI-powered malware development, and reconnaissance. These threats are expected to increase due to the accessibility and decreasing costs of AI tools. Limitations currently revolve around open-source model performance and bypass techniques for security measures. Organizations must expand their understanding of potential attack surfaces to include AI-generated content and prepare for advanced AI-driven threats by adopting more sophisticated detection methods. [more][more-2]
Hijacking GenAI conversations: Researchers at Ben-Gurion University's Offensive AI Research Lab revealed a vulnerability in Large Language Model (LLM) assistants, including ChatGPT. The flaw allows attackers to conduct side-channel attacks, intercepting and reading private chats without detection. Despite encryption, flaws in OpenAI's implementation meant that responses were streamed as small, individually encrypted tokens, so ciphertext sizes revealed token lengths and let attackers infer the content of messages. To address this, the researchers recommend either batching tokens before transmission or padding them to make traffic analysis difficult. OpenAI and Cloudflare have since adopted the padding technique to mitigate the vulnerability. [more]
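A minimal sketch of the padding mitigation, assuming the side channel works as described: each streamed response chunk is encrypted and sent separately, so ciphertext sizes track token lengths. The block size and helper below are illustrative assumptions, not OpenAI's or Cloudflare's actual implementation.

```python
# Illustrative only: pad each streamed token to a fixed size before encryption
# so that packet lengths no longer leak individual token lengths.
BLOCK_SIZE = 32  # assumed fixed chunk size in bytes (not a real protocol value)


def pad_token(token: str, block_size: int = BLOCK_SIZE) -> bytes:
    """Pad a token to a fixed length so every ciphertext has a uniform size."""
    raw = token.encode("utf-8")
    if len(raw) >= block_size:
        # A real scheme would split or batch long tokens; truncation keeps the sketch short.
        return raw[:block_size]
    return raw + b"\x00" * (block_size - len(raw))


# Without padding, len(encrypt(token)) tracks len(token); that is the side channel.
for tok in ["The", " quick", " brown", " fox"]:
    print(repr(tok), len(pad_token(tok)))  # always BLOCK_SIZE
```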
AI SDLC risk: As developers and data scientists rush to build and deploy AI products, it's crucial not to overlook security and the risk of supply-chain attacks. While there's an abundance of tools and models to work with, they can contain hidden threats like backdoors or malware. Machine learning experts may not prioritize security, but it's essential to ensure that code, libraries, and datasets are safe. Supply-chain attacks in AI can lead to compromised workstations, network intrusions, misclassification of data, and potential harm to users. [more]
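One concrete hygiene step implied here is pinning and verifying any third-party artifact before it enters the pipeline. A minimal sketch, assuming a SHA-256 checksum published by a trusted source; the file name and digest below are placeholders, not real values.

```python
# Verify a downloaded model artifact against a pinned checksum before loading it.
import hashlib
from pathlib import Path

EXPECTED_SHA256 = "0000000000000000000000000000000000000000000000000000000000000000"  # placeholder


def sha256_of(path: Path) -> str:
    """Stream the file through SHA-256 to avoid reading it all into memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()


model_path = Path("model.safetensors")  # placeholder artifact name
if model_path.exists() and sha256_of(model_path) != EXPECTED_SHA256:
    raise RuntimeError("Model artifact does not match the pinned checksum; refusing to load")
```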
Heightened AI risk: Chief Audit Executives anticipate a surge in audit focus on AI-related risks due to the rapid adoption of generative AI. Gartner's survey of 102 CAEs found that as organizations embrace new AI tech, internal auditors are seeking to broaden their coverage to address risks including control failures, unreliable outputs, and cyber threats linked to AI. Of the top six risks expected to see heightened audit attention, half are AI-related. [more]
Post-quantum cryptography solution: The RESQUE consortium, comprising six French entities (Thales, TheGreenBow, CryptoExperts, CryptoNext Security, ANSSI, and Inria), aims to develop post-quantum cryptography solutions to safeguard communications, infrastructure, and networks against potential quantum computer threats. The project, funded by the French government and the EU, focuses on creating a hybrid post-quantum VPN and a high-performance post-quantum hardware security module. [more]
Web3 Cryptospace Spotlight
Private key compromised: 15 Mar - Mozaic Finance, a yield farming protocol, was exploited on the Arbitrum network. The protocol's development team confirmed the attack, stating that the stolen funds were deposited on the MEXC exchange, and expressed confidence in recovering them. The exploit involved a call to the "bridgeViaLifi" function, suggesting a compromise of a developer wallet's private key, according to blockchain security firm CertiK. [more]
Old smart contract exploited: 20 Mar - Dolomite crypto exchange's old contract was exploited, leading to a loss of around $1.8 million. Users who had authorized approvals to the contract were affected. The team advises revoking approvals to the Dolomite address starting with 0xe2466. Users who only used the current version on Arbitrum should be safe. The team has disabled the faulty contract to prevent further attacks but recommends revoking approvals as a precautionary measure. [more]
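Revoking an ERC-20 approval amounts to calling approve(spender, 0) on the token contract, usually through a wallet or a revocation tool. A minimal sketch of the calldata such a call encodes, using a placeholder spender address rather than the truncated Dolomite address above.

```python
# Illustrative only: build the calldata for an ERC-20 approve(spender, 0) call,
# which sets the allowance back to zero and so revokes the approval.
from eth_utils import keccak


def revoke_calldata(spender: str) -> str:
    """Return hex calldata for approve(spender, 0) on an ERC-20 token."""
    selector = keccak(text="approve(address,uint256)")[:4]        # 0x095ea7b3
    spender_word = bytes.fromhex(spender[2:]).rjust(32, b"\x00")   # left-padded address
    amount_word = (0).to_bytes(32, "big")                          # allowance = 0 revokes
    return "0x" + (selector + spender_word + amount_word).hex()


# Placeholder 20-byte address for demonstration:
print(revoke_calldata("0x" + "11" * 20))
```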
Domain hijack: 20 Mar - Layerswap.io, a platform connecting centralized crypto exchanges and layer-2 blockchains, suffered a domain hijack that redirected users to a phishing site. The hacker also attempted to reset Layerswap's X account, locking the team out of its social media. GoDaddy's slow response prolonged the hacker's control of the domain. Hours later, Layerswap regained control and reversed the changes. About $100,000 of user funds were drained during the incident. [more]
Double spending bug: 21 Mar - Super Sushi Samurai (SSS), a GameFi project running on Coinbase's Base layer-2 blockchain and Telegram, lost approximately $4.8 million due to a double-spending glitch. CertiK, a blockchain analytics firm, traced the vulnerability to the SSS contract's update() function, which doubled a user's balance when they transferred tokens to themselves. During the incident, one user exploited this flaw, acquiring 11.5 trillion SSS tokens, which were sold for approximately 1,310 ETH. [more]
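A minimal sketch of this bug class, assuming the reported behaviour: if a transfer reads both balances before writing either back, a self-transfer credits the stale pre-debit balance and the amount is doubled. This is an illustration in Python, not the actual SSS contract code.

```python
# Illustrative only: a self-transfer doubling bug caused by stale balance reads.
balances = {"alice": 100}


def buggy_transfer(sender: str, recipient: str, amount: int) -> None:
    # Both balances are read up front...
    sender_balance = balances.get(sender, 0)
    recipient_balance = balances.get(recipient, 0)
    assert sender_balance >= amount, "insufficient balance"
    balances[sender] = sender_balance - amount
    # ...so when sender == recipient, this write overwrites the debit with
    # (stale balance + amount), inflating the supply on every self-transfer.
    balances[recipient] = recipient_balance + amount


buggy_transfer("alice", "alice", 100)
print(balances["alice"])  # 200, not 100: the balance has doubled
```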