TechRisk #104: Mistakes of AI
Plus, updated OWASP Top 10 reflects emerging threats, quantum impact might be elusive for now, seedless and secure with Multi-Party Computation, Mango Markets full shutdown
Tech Risk Reading Picks
Unique Mistakes of AI Systems: Humans frequently make mistakes, which are often predictable based on factors like knowledge gaps, fatigue, or distraction, and society has developed various systems to mitigate their impact. Similarly, AI systems, such as large language models (LLMs), also make errors, but these are fundamentally different—often bizarre, inconsistent, and unaccompanied by ignorance or lack of confidence. While humans tend to err in clustered and predictable ways, AI mistakes can occur randomly and across diverse contexts, making them harder to anticipate and trust in complex scenarios. Addressing this challenge requires two approaches: engineering AI systems to make more human-like, intelligible mistakes and developing new error-correcting mechanisms tailored to AI's unique failure modes. While some human mistake-correcting methods, like double-checking, are partially effective with AI, others exploit machine-specific traits, such as repeated questioning. Researchers are also exploring AI's behaviors, some of which mimic human tendencies, like sensitivity to phrasing or bias towards familiarity. Ultimately, understanding and mitigating AI errors demands confining their applications to contexts that align with their capabilities while carefully managing the risks posed by their mistakes. [more]
AI makes hacking easier: Artificial intelligence (AI) is revolutionizing the landscape of cybercrime, dismantling the stereotype of solitary, hooded hackers and revealing a younger, more dynamic demographic. A study by Experian highlights that the average age of jailed cybercriminals in the U.S. has dropped to 19, with teens increasingly leveraging AI to hack, create deepfakes, and execute fraudulent attacks, often recruited via online chat rooms, the dark web, and gaming platforms. Compounding this threat, 57% of fraud originates from company insiders, and the growing use of AI by employees with elevated credentials raises serious security concerns. To counter these risks, dynamic identification technologies, which use continuously refreshed barcodes and codes to prevent unauthorized duplication, are gaining traction, with potential applications for social security numbers and driver’s licenses. While dynamic identification promises robust security, the rise of AI-driven fraud and hackers targeting each other for financial or political gain underscores the evolving complexities of cybersecurity. [more]
Azure AI service abused: Microsoft has filed a lawsuit against a group of 10 unnamed defendants accused of using stolen credentials and custom software to bypass safety measures in its Azure OpenAI Service. The defendants allegedly orchestrated a "hacking-as-a-service" operation using a tool called de3u, which exploited stolen API keys to generate content, including images via OpenAI’s DALL-E model, while evading Microsoft’s content filtering mechanisms. According to the complaint, the defendants resold access to these services to other malicious actors, providing detailed instructions on how to use the custom tools to generate harmful and illicit content. Microsoft claims the group systematically stole API keys from customers to gain unauthorized access, violating multiple federal laws, including the Computer Fraud and Abuse Act. The company discovered the misuse in July 2024 and has since implemented countermeasures, secured court authorization to seize a related website, and taken steps to investigate and disrupt the scheme. Microsoft seeks injunctive relief and damages for the alleged actions. [more][more_microsoft]
Addressing concerns over AI risk: Artificial intelligence (AI) has evolved from its mid-20th-century origins into a transformative force in business, driving operational efficiency, customer experience, and innovation in Europe. However, its adoption raises challenges such as data privacy, ethical concerns, and regulatory compliance within the EU's strict legal framework. To navigate these complexities, organisations should establish clear accountability for AI risks, assess potential issues like algorithmic bias and regulatory non-compliance, and align risk management strategies with evolving AI applications. Comprehensive employee training and communication programs can enhance AI integration, while performance monitoring ensures alignment with business goals and legal requirements. By addressing these aspects, businesses can harness AI's potential while mitigating associated risks. [more]
Updated OWASP Top 10 reflects emerging threats: The integration of AI coding tools into software development marks a transformative era, with 63% of organizations already adopting these technologies. However, their use introduces significant risks, highlighted in the updated OWASP Top 10 for Large Language Model (LLM) Applications. Key threats include Prompt Injection, supply chain vulnerabilities, and Sensitive Information Disclosure, which expose businesses to risks like malware, data breaches, and exploitable logic flaws. Emerging concerns like Vector and Embedding Weaknesses, especially in Retrieval-Augmented Generation (RAG) systems, further underscore the need for stringent security practices. Addressing these challenges requires robust developer training, critical thinking, and adherence to secure coding principles to mitigate risks and ensure AI-driven applications are both innovative and secure. [more]
Quantum impact might be elusive for now: Quantum computing is emerging as a transformative technology with the potential to revolutionize industries like finance, logistics, and pharmaceuticals by solving complex problems far faster than traditional supercomputers. Using qubits that can exist in multiple states simultaneously, quantum computers have made significant strides, evidenced by breakthroughs such as Google’s Willow quantum chip and the expanding Quantum-as-a-Service (QaaS) sector. However, challenges like scalability, error rates, and a lack of demonstrated quantum advantage remain. While leaders like IBM and Google continue advancing hybrid quantum-classical systems, skepticism persists about the technology's near-term readiness, with industry figures offering conflicting views on its timeline for mainstream adoption. Despite a predicted slowdown in investment, companies are encouraged to prepare for quantum integration to stay competitive as incremental advancements continue. [more]
Fake PoC patch targeting cybersecurity professionals: A sophisticated attack known as "LDAPNightmare" targets security researchers by disguising malware as a fake Proof-of-Concept (PoC) exploit for the patched CVE-2024-49113 Windows LDAP vulnerability. Hosted in a malicious repository that mimics a legitimate one, the malware steals sensitive system and network data, transmitting it to attackers’ servers. The repository conceals a malicious executable that triggers scripts to collect and exfiltrate data via external servers. Trend Micro's research highlights the risk of this approach, as it exploits high-profile vulnerabilities and preys on cybersecurity professionals, potentially compromising critical systems. To mitigate risks, researchers should verify repository authenticity, prioritize official sources, and be vigilant for suspicious activity. [more]
Web3 Cryptospace Spotlight
Seedless and secure with MPC: The phrase "Not your keys, not your bitcoin" underscores the importance of self-custody wallets, which offer secure ownership but require users to manage cumbersome seed phrases—a method prone to risks like loss or theft. Alternatives to seed phrase wallets are emerging, incorporating advanced cryptographic technologies such as Multi-Party Computation (MPC), Two-Party Computation with MPC (2PC-MPC), and Account Abstraction (AA), enabling features like biometrics and PIN-based access. Innovations like Ika Network’s dWallet provide decentralized, programmable asset management without seed phrases, while Holonym’s Human Keys leverage user-friendly inputs and zero-knowledge proofs for secure, scalable wallet access. These advancements enhance usability without compromising security, broadening crypto adoption while maintaining decentralization. [more]
New Web3 transaction simulations attack: Threat actors are leveraging a sophisticated tactic called "transaction simulation spoofing" to exploit a vulnerability in Web3 wallets, allowing them to steal cryptocurrency. This was demonstrated by an attack that stole 143.45 ETH (worth $460,000). This method exploits the time delay between transaction simulations—which preview the expected outcome of blockchain transactions and execution. Attackers lure victims to malicious websites mimicking legitimate platforms, triggering simulations that falsely indicate small gains, while altering the on-chain contract state before execution to drain the victim's wallet. ScamSniffer, which identified this attack, emphasizes the need for Web3 wallets to reduce simulation refresh rates, force refresh before critical actions, and implement expiration warnings. Users are advised to approach unverified "free claim" offers with extreme caution and rely only on trusted dApps. This attack underscores the evolving sophistication of phishing techniques targeting trusted security mechanisms in Web3. [more]
New Banshee malware variant: A new variant of the Banshee malware has been discovered, threatening the online security of 100 million macOS users by targeting browser credentials, cryptocurrency wallets, passwords, and sensitive files. First identified in 2024 by Check Point Research, Banshee operates as a "stealer-as-a-service," advertised on underground forums and distributed via phishing websites and malicious GitHub repositories disguised as popular software. Its latest variant, unveiled in late 2024, incorporated techniques to evade detection by mimicking Apple's XProtect anti-virus engine. The malware remained undetected for months, highlighting its stealth and advanced capabilities, until a source code leak in November 2024 enabled antivirus developers to enhance detection and countermeasures, raising awareness of its evolving threat. [more]
DeFi exploited for nearly $200K: UniLend Finance, a decentralized finance protocol, suffered an exploit on Ethereum resulting in losses of approximately $197,000. On January 12, TenArmorAlert reported that an attacker manipulated a flaw in UniLend’s share price calculation during the redeem process, enabling them to inflate collateral value and drain funds. The attacker deposited USDC and Lido Staked Ether (stETH) as collateral, borrowed the pool’s stETH, and redeemed their initial deposits without repaying, effectively depleting the pool. The exploit, executed at 11:19:59 AM UTC, was initially estimated to cause $196.2K in losses, later updated to $197.6K by SlowMist. [more]
Mango Market full shutdown: Mango Markets, a Solana-based DeFi platform exploited for $117 million in 2022, has announced its full shutdown following a unanimous governance vote. The closure, effective January 13, 2025, at 8 PM UTC, will involve changes to lending parameters, including reducing the target lending ratio to 0.1% of deposits and significant interest rate hikes across major cryptocurrencies. The shutdown reflects the community’s acknowledgment of the platform’s untenable state. [more]