TechRisk #109: Microsoft's first quantum chip
Plus, AI red teaming is bu\*\*s\*\*t, China’s post-quantum cryptographic (PQC) algorithms, web3 developers targeted with “undetectable” malware, a malicious web3 game on Steam, and more!
Tech Risk Reading Picks
Microsoft’s first quantum chip: Microsoft has introduced Majorana 1, the first quantum chip using its new Topological Core architecture, which aims to bring practical, industrial-scale quantum computing within years. This breakthrough is made possible by topoconductors, novel materials that stabilize qubits through Majorana particles, enabling more reliable and scalable quantum computation. By leveraging this technology, Microsoft envisions million-qubit quantum computers capable of solving complex scientific and industrial problems, such as breaking down microplastics, designing self-healing materials, and advancing healthcare. Unlike traditional qubits, which require extensive error correction, Microsoft's topological qubits offer greater stability and efficiency, making large-scale quantum computing more feasible. This innovation, validated in a recent Nature paper, has also earned Microsoft a place in DARPA’s elite program for utility-scale quantum computing. With its quantum computing ecosystem integrating AI and cloud computing through Azure Quantum, Microsoft believes this advancement will revolutionize industries by enabling near-instant problem-solving at an unprecedented scale. [more]
China’s post-quantum cryptographic (PQC) algorithms: China has launched a global initiative to develop post-quantum cryptographic (PQC) algorithms, diverging from US-led encryption efforts driven by the National Institute of Standards and Technology (NIST). The Institute of Commercial Cryptography Standards (ICCS) is seeking international proposals for encryption methods resilient to quantum attacks, evaluating them for security, performance, and feasibility. Experts suggest this move reflects China's concerns over potential US intelligence “back doors” in encryption standards and its broader push for technological self-reliance. While NIST has been developing PQC standards since 2012, China’s approach to cryptographic standardization remains more opaque. ICCS’s initiative aligns with China’s strategy of exerting greater control over its technology infrastructure while potentially integrating its own security mechanisms into encryption protocols. [more]
AI red teaming is bu\*\*s\*\*t: Leading cybersecurity researchers at DEF CON have warned that current AI security methods are fundamentally flawed and require a complete overhaul, as highlighted in the inaugural Hackers' Almanack report. The report, developed with the University of Chicago's Cyber Policy Initiative, critiques "red teaming" as insufficient for safeguarding AI systems due to fragmented documentation and inadequate evaluations. Nearly 500 participants, including newcomers, successfully found AI vulnerabilities at the conference, underscoring the need for a more systematic approach. Researchers advocate for a framework similar to the Common Vulnerabilities and Exposures (CVE) system used in traditional cybersecurity to standardize AI vulnerability documentation and mitigation. [more-Hackers_Almanack_report]
Alleged OmniGPT breach: A hacker named "Gloomer" has allegedly put data from a massive OmniGPT breach up for sale on the dark web, exposing personal information of over 30,000 users, including emails, phone numbers, API and crypto keys, credentials, and billing details. The breach, first hinted at in late January, reportedly includes 34 million chatbot messages and uploaded files, some containing sensitive data. While the method of attack remains unclear, the leak poses serious security risks such as identity theft, account takeovers, and phishing attacks. OmniGPT, an AI aggregator providing access to models like ChatGPT-4 and Midjourney, has not yet responded to the allegations. If confirmed, the company could face compliance issues under regulations like GDPR, as the stolen data reportedly includes users from multiple countries. [more]
Rise of malicious AI models: The growing threat of malicious AI models on open-source repositories like Hugging Face highlights the need for stronger security measures in AI development. Attackers are exploiting vulnerabilities, such as the insecure Pickle format, to insert malicious code while bypassing security checks. A recent example, the "NullifAI" attack, demonstrated how repositories' defenses can be evaded, emphasizing that companies should not rely solely on these platforms for security. Instead, businesses using open-source AI must implement robust supply chain security, scrutinize licensing complexities, and ensure proper model alignment to mitigate risks. Researchers advocate for safer alternatives, like Safetensors, to replace risky formats and urge AI teams to manage AI dependencies with the same diligence as other software components. [more]
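The Pickle risk mentioned above comes down to one property of the format: unpickling is not just data loading, it can invoke arbitrary callables chosen by whoever authored the file. A minimal, self-contained sketch (the `Payload` class is illustrative, not from any reported attack):

```python
import pickle

# Why Pickle is unsafe for sharing model files: any object that defines
# __reduce__ tells the unpickler which callable to run on load. Here the
# callable is harmless (str.upper); a real attack would substitute
# something like os.system.
class Payload:
    def __reduce__(self):
        # Unpickling will execute str.upper("pwned") -- code runs at
        # load time, before the consumer ever inspects the "model".
        return (str.upper, ("pwned",))

blob = pickle.dumps(Payload())
result = pickle.loads(blob)  # the attacker-chosen callable runs here
print(result)  # → PWNED
```

This is why formats like Safetensors, which store only raw tensor data and metadata with no executable payloads, are the recommended replacement for Pickle-based model files.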
AI-powered Siri faces engineering challenges and software bugs: Apple's planned AI-powered Siri overhaul is facing delays due to engineering challenges and software bugs tied to its integration with Apple Intelligence, potentially pushing the launch to May or later instead of the originally planned April release. The update, first teased at WWDC 2024, aims to make Siri more conversational and capable, but internal testing has revealed inconsistent performance. The delays may also impact future Apple Intelligence updates, including features intended for iOS 19, possibly pushing them to 2026. With rivals like OpenAI and Google advancing their AI assistants, Apple faces pressure to perfect its rollout, as a flawed release could hurt its reputation and market position. Additionally, Apple is working on adapting its AI for the Chinese market, ensuring compliance with government regulations. [more]
Vulnerabilities in NVIDIA's CUDA Toolkit utilities: Nine vulnerabilities were discovered in NVIDIA's CUDA Toolkit utilities, cuobjdump and nvdisasm, which are used for inspecting and analyzing CUDA binary files. These vulnerabilities, tracked under nine CVEs, primarily involve integer overflow and out-of-bounds read issues, potentially leading to limited denial of service and information disclosure. The vulnerabilities were identified through extensive fuzz testing and affect older versions of these tools. NVIDIA released an update in February 2025 to mitigate these risks, and developers are advised to upgrade to the latest CUDA Toolkit version. Palo Alto Networks provides additional protection against potential exploitation through its security services. [more]
40% of AI-related data breaches by 2027: Gartner predicts that by 2027, over 40% of AI-related data breaches will result from cross-border misuse of generative AI (GenAI) due to inadequate global AI governance and security standards. The rapid adoption of GenAI has outpaced regulatory measures, leading to unintended data transfers and security risks, especially when AI tools operate across different jurisdictions. The lack of standardized AI policies forces enterprises to implement region-specific strategies, limiting scalability and operational efficiency. Gartner advises organizations to enhance AI governance by strengthening data security, establishing oversight committees, and investing in trust, risk, and security management (TRiSM) solutions. As AI governance becomes a global regulatory mandate, enterprises that fail to integrate necessary controls may face compliance challenges and competitive disadvantages. [more]
Web3 Cryptospace Spotlight
ZkLend lost $9.5M: ZkLend, a decentralized lending protocol on Starknet, suffered a $9.5 million exploit and has offered the hacker a 10% bounty for the return of the remaining funds. The platform urged the attacker to return 3,300 ETH by 14 Feb 2025, assuring no legal action if the funds were returned but vowing to pursue legal measures otherwise. In response to the breach, ZkLend suspended withdrawals, warned users against deposits and loan repayments, and is investigating the incident alongside blockchain security experts and law enforcement. [more]
Web3 developers targeted with an “undetectable” malware: Lazarus Group, a North Korean state-sponsored cybercriminal organization, is targeting software and Web3 developers with an “undetectable” malware campaign dubbed Marstech Mayhem. According to STRIKE researchers from SecurityScorecard, the group is embedding malicious JavaScript implants into GitHub repositories and NPM packages, often disguising them within legitimate code. The malware, Marstech1, specifically scans for cryptocurrency wallets like MetaMask, Exodus, and Atomic, modifying browser configurations to intercept transactions—aligning with Lazarus’ ongoing mission to steal cryptocurrency for North Korea’s state funding, including its nuclear weapons program. With at least 233 confirmed victims across the US, Europe, and Asia, experts warn developers to adopt stronger security measures and monitor their supply chains to defend against such sophisticated threats. [more]
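The report's advice to "monitor their supply chains" can start with very simple checks. One common implant vector in npm packages is a lifecycle hook (`preinstall`/`postinstall`) that runs attacker code at install time; a minimal, illustrative Python sketch of flagging such hooks in a `package.json` (not a method described in the STRIKE research, and no substitute for a real supply-chain scanner):

```python
import json

# npm lifecycle hooks that execute shell commands automatically during
# "npm install" -- a frequent hiding place for malicious loaders.
SUSPICIOUS_HOOKS = {"preinstall", "install", "postinstall"}

def flag_lifecycle_hooks(package_json_text: str) -> list[str]:
    """Return any install-time script hooks declared in a package.json."""
    manifest = json.loads(package_json_text)
    scripts = manifest.get("scripts", {})
    return sorted(h for h in scripts if h in SUSPICIOUS_HOOKS)

# Hypothetical manifest for illustration:
example = '{"name": "demo-pkg", "scripts": {"postinstall": "node setup.js"}}'
print(flag_lifecycle_hooks(example))  # → ['postinstall']
```

A flagged hook is not proof of malice (many legitimate packages use them), but it narrows down which dependencies deserve manual review before installation.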
Web3 inherits Web2 vulnerabilities: Web3, despite its promise of decentralization and security, has inherited Web2’s vulnerabilities, exacerbating cyber threats like malware, phishing, ransomware, and DDoS attacks. A Naoris Protocol study highlights that 95% of Web3 developers have observed rising malware attacks, with Web3’s reliance on centralized cloud services (AWS, Google Cloud, Azure) making it susceptible to disruptions. The traditional Web2 security model, built on centralized control, clashes with Web3’s decentralized structure, amplifying risks. To address these challenges, experts advocate for Decentralized Physical Infrastructure Networks (DePIN), which distribute security across a trustless network, eliminating single points of failure. Naoris Protocol’s research shows strong developer support for DePIN, with 40% considering it crucial for Web3 security, as it extends blockchain security principles to devices, enhancing cyber resilience. [more]
Malicious web3 game on Steam: PirateFi, a free-to-play web3 survival game, launched on Steam but was quickly removed after Valve identified "malicious files" in its builds, warning players their systems could be compromised. Reports suggest the game falsely claimed to have 7,000 players, while analytics estimate only 800–1,500 downloads. Suspicious Steam reviews showed early positive feedback from older accounts, but fresh accounts later accused the game of data theft and fraud. A Telegram group allegedly recruited moderators for $17 an hour as part of a scam to increase downloads. After six days, Valve removed the game on February 12, raising concerns about Steam’s security measures. [more]