TechRisk #95: Zero-day discovery by AI
Plus, Anthropic's call for targeted AI regulation, Ollama's critical flaws, a surge in cloud-based attacks, a $2.8M malicious-contract heist in Web3, and more!
Tech Risk Reading Picks
Cyber threat discovery by AI: Cybersecurity firm GreyNoise Intelligence used AI-powered threat detection to uncover two critical zero-day vulnerabilities, CVE-2024-8956 and CVE-2024-8957, in widely deployed IoT cameras in healthcare, industrial, and government settings. Their Sift (AI) system flagged an exploit attempt targeting these cameras, allowing GreyNoise to identify the vulnerabilities before they were broadly exploited. The more severe flaw (rated CVSS 9.1) permits unauthorized access to sensitive data, while the second enables command execution when combined with the first, potentially giving attackers full camera control. GreyNoise's founder highlighted AI’s crucial role in catching and addressing the threat quickly. [more]
Google’s AI agent found 0-day exploit: Google’s Project Zero and DeepMind have unveiled Big Sleep, an AI-assisted vulnerability-finding agent that identified a zero-day vulnerability in SQLite, a widely used database engine. This marks the first publicly known instance of an AI agent discovering a memory-safety vulnerability that conventional methods had missed. The vulnerability, a stack buffer underflow, was fixed promptly by the SQLite team before it could affect users. Big Sleep’s success demonstrates AI’s potential to surpass traditional “fuzzing” techniques by detecting complex, elusive bugs early in development. Though still experimental, Big Sleep is expected to offer defenders significant advantages in security research. [more]
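For context, the traditional “fuzzing” that Big Sleep is compared against can be sketched in a few lines: mutate a seed input at random, feed each candidate to a target, and watch for crashes. The `parse` function below is a made-up stand-in for a real target like SQLite, not actual SQLite code; this is a minimal illustration, not how Project Zero's tooling works.

```python
import random
from typing import Optional

def parse(data: bytes) -> None:
    """Toy parser standing in for a real target.
    It 'crashes' (raises) on one specific malformed input pattern,
    simulating a latent memory-safety bug."""
    if len(data) >= 4 and data[0] == 0x7F and data[3] == 0x00:
        raise RuntimeError("simulated memory-safety bug")

def mutate(seed: bytes) -> bytes:
    """Flip one to three random bytes of the seed input."""
    out = bytearray(seed)
    for _ in range(random.randint(1, 3)):
        out[random.randrange(len(out))] = random.randrange(256)
    return bytes(out)

def fuzz(seed: bytes, iterations: int = 100_000) -> Optional[bytes]:
    """Return the first mutated input that crashes the target, or None."""
    for _ in range(iterations):
        candidate = mutate(seed)
        try:
            parse(candidate)
        except Exception:
            return candidate  # crash found: a candidate bug to triage
    return None
```

Blind mutation like this only stumbles onto bugs reachable by shallow byte flips; the article's point is that an AI agent can reason about code paths that random mutation rarely reaches.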
Urgent need for AI regulation: Anthropic released recommendations urging governments to adopt "targeted regulation" to address the escalating risks posed by advanced AI. They highlighted AI's rapid progress in tasks like coding and cybersecurity, noting models have advanced significantly in solving real-world problems, including those related to cyber offense and scientific understanding. An internal test even showed models can match PhD-level expertise in certain fields, signaling imminent risks. [more]
The company proposed a Responsible Scaling Policy (RSP) as a regulatory framework, recommending transparency, security incentives, and simplicity to balance innovation and safety. They advised that governments require AI companies to disclose risk policies and verify adherence.
Anthropic also encouraged AI companies to prioritize security in development and engage in proactive threat modeling. They emphasized the need for collaborative efforts among policymakers, industry leaders, and civil society to establish effective regulation.
Bypassing Azure AI guardrails: Mindgard, a UK cybersecurity startup, identified critical vulnerabilities in Microsoft’s Azure AI Content Safety service that allow attackers to bypass safeguards and introduce harmful content into AI-generated outputs. Discovered in February 2024 and reported to Microsoft in March, the vulnerabilities led Microsoft to ship stronger mitigations by October. The issues involved two main techniques: Character Injection and Adversarial Machine Learning (AML). Character Injection exploits character manipulations, such as diacritics or zero-width spaces, to mislead AI models, while AML uses subtle data alterations to manipulate the model’s responses. These techniques reduced detection accuracy by up to 100% and 58.49%, respectively, making it possible for attackers to bypass moderation and inject harmful content. [more]
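The Character Injection idea is easy to demonstrate against a naive filter. The blocklist check below is a hypothetical stand-in, not Azure's actual moderation logic: inserting zero-width characters leaves text visually unchanged to a human reader but defeats simple substring matching.

```python
ZWSP = "\u200b"  # zero-width space: invisible when rendered

def naive_filter(text: str, banned: list) -> bool:
    """Hypothetical moderation check: flag text containing a banned term."""
    return any(term in text for term in banned)

def inject_zero_width(word: str) -> str:
    """Interleave zero-width spaces between the characters of a word."""
    return ZWSP.join(word)

banned = ["attack"]
plain = "launch the attack now"
evaded = "launch the " + inject_zero_width("attack") + " now"

naive_filter(plain, banned)   # True  - caught by the substring check
naive_filter(evaded, banned)  # False - visually identical, but bypasses it
```

Robust moderation pipelines therefore normalize input (e.g. stripping zero-width code points and decomposing diacritics) before classification; Mindgard's finding was that the model-based safeguards could still be misled.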
Ollama’s critical vulnerabilities: Cybersecurity researchers have identified six vulnerabilities in the Ollama AI framework that could allow attackers to mount denial-of-service, model-poisoning, and model-theft attacks via crafted HTTP requests. Four of these vulnerabilities (tracked with CVEs) have been patched, while two remain unpatched, leaving model poisoning and theft possible if endpoints aren’t properly secured. Ollama users are advised to limit endpoint exposure with proxies or firewalls, as many may unknowingly leave endpoints open to the internet. Around 25% of Ollama’s nearly 10,000 internet-facing instances are deemed vulnerable, with significant deployments in countries including China, the U.S., and Germany. [more]
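As a quick self-check on the “limit endpoint exposure” advice, you can probe whether an Ollama instance answers unauthenticated requests on its default port (11434). A minimal sketch using Ollama's standard `/api/tags` model-listing endpoint; the host value is an assumption you would replace with your own deployment's address:

```python
import json
import urllib.error
import urllib.request

def ollama_is_exposed(host: str = "127.0.0.1", port: int = 11434,
                      timeout: float = 3.0):
    """Return the list of model names if the Ollama API answers
    unauthenticated requests, else None. /api/tags is Ollama's
    standard endpoint for listing locally available models."""
    url = f"http://{host}:{port}/api/tags"
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            payload = json.load(resp)
    except (urllib.error.URLError, OSError, ValueError):
        return None  # unreachable, refused, or non-JSON response
    return [m.get("name") for m in payload.get("models", [])]

# Anything other than None means the endpoint is reachable without
# authentication and should sit behind a reverse proxy or firewall.
models = ollama_is_exposed("127.0.0.1")
```

Run it against your server's public address, not just localhost: a response from the public side is exactly the exposure the report warns about.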
Defamation risks of AI generated content: Meta and Google’s use of AI to summarize user comments and reviews could lead to new defamation risks, as Australian law might hold these platforms liable for publishing defamatory content. Following a 2021 high court ruling that platforms can be accountable for user-generated content, legal experts warn that Meta and Google’s AI-generated outputs may be considered defamatory if they summarize harmful statements. [more]
Surge in cloud-based attacks: Sysdig's 2024 Global Threat Year-in-Review highlights the surge in cloud-based attacks, with attackers using automation, AI resource jacking, and open-source tools for credential theft. Major threats include AI resource jacking costing victims up to $100,000 per day, rapid cryptomining deployments, and long-term resource siphoning by groups like RUBYCARP. The report underscores a rising financial toll, with public cloud breaches averaging over $5 million and a projected global cyberattack cost exceeding $100 billion by 2025. Sysdig’s team advises companies to prioritize resilience, as cloud attacks grow faster, more sophisticated, and more costly each year. [more][more-sysdig]
Web3 Cryptospace Spotlight
Infection via animation library: 30 Oct - a "massive supply chain attack" compromised the Lottie Player animation library, affecting several crypto apps, including 1inch and TEN Finance. Attackers accessed a LottieFiles engineer's GitHub account and pushed malicious updates that injected fake wallet-connection prompts for a crypto drainer tool, “Ace Drainer,” into sites using the library. Users of affected apps saw deceptive popups across popular websites. LottieFiles removed the compromised versions and urged sites to upgrade to safe versions (2.0.4 or 2.0.8), as apps still pinned to affected versions may remain vulnerable. [more]
Malicious contract heist of $2.8M: Sunray Finance's SUN token was compromised by a malicious smart contract on the Arbitrum network, leading to a loss of $2.8M in liquidity through swaps to USDT and WETH and rendering SUN tokens worthless. Following a smart contract upgrade, the attacker deployed a contract that minted SUN tokens outside the normal schedule, then quickly swapped the 200 trillion newly created tokens, crashing the price to zero. The attacker funded their wallet via the Across bridge and conducted transactions visible on the SUN token page, with one swap reaching 2.1M USDT and another causing a $750K WETH loss. [more]
Web3 casino hacked: 3 Nov - MetaWin, an online crypto casino, was hacked, resulting in a $4 million loss from its hot wallets. CEO Richard Skelhorn confirmed that MetaWin has "topped off" funds after the breach and temporarily paused withdrawals, which are now resumed for 95% of customers. The attack exploited MetaWin's frictionless withdrawal system, enabling access to Ethereum and Solana wallets. Blockchain analyst ZachXBT tracked the hacker’s movement of funds to KuCoin and HitBTC, identifying 115 associated wallet addresses. Authorities are investigating, and Skelhorn reassured users by personally covering part of the losses. This breach follows several recent attacks in the crypto industry, including Radiant Capital’s $58 million and M2 exchange’s $13 million losses, with October 2024 alone seeing $88.47 million lost in crypto exploits. [more]