TechRisk #119: Heighten concerns over near-term AI harms
Plus, ChatGPT Action Figures privacy risks, cybercriminals tried to exploit GenAI, API key to xAI private models leaked, near-term AI harm, $137M stolen in a day, and more!
Tech Risk Reading Picks
Significant concerns over near term AI harms: A University of Zurich study involving over 10,000 participants in the US and UK found that people are significantly more concerned about the immediate harms of AI—such as bias, misinformation, and job loss—than about speculative future threats like AI-driven human extinction. While existential narratives do raise concern, they do not reduce public awareness of current issues, countering the belief that focusing on long-term risks distracts from present challenges. Instead, the study reveals that the public can hold nuanced views, recognizing both immediate and future AI risks, and supports a balanced, inclusive discussion on the subject. [more]
Environmental and human impacts of generative AI remain poorly understood: A recent report by the U.S. Government Accountability Office (GAO) warns that the environmental and human impacts of generative AI remain poorly understood due to a lack of transparency from developers and rapid technological evolution. The GAO outlines potential societal risks including job displacement, misinformation, national security threats, and biased or unaccountable systems, while also highlighting the significant but underexplored environmental costs such as energy use, emissions, and water consumption. Despite these concerns, current policies may be inadequate to manage AI’s long-term effects, especially under the Trump administration, which has pushed aggressive AI adoption while rolling back oversight and climate-related initiatives. [more]
Action figures privacy implications: A viral trend in April saw users creating hyper-personalized action figures and Studio Ghibli-style portraits using OpenAI’s GPT-4o-powered image generator, highlighting both the fun and privacy implications of AI image tools. While the technology allows easy creation of stylized images with just a photo and a ChatGPT account, it also collects significant user data—including metadata, device info, behavioral patterns, and potentially identifiable background details—which can be used to train OpenAI’s models. Though OpenAI asserts its commitment to user privacy and offers opt-out options and data controls, experts caution users to be aware of what they're sharing, as uploaded images and prompts may carry long-term privacy risks, especially in jurisdictions with less robust data protection laws. [more]
Cybercriminals attempting to exploit GenAI: Cybercriminals are increasingly experimenting with generative AI (GenAI), but according to Verizon's latest Data Breach Investigations Report, their attempts to exploit it have not yet yielded significant success. While AI-assisted phishing and productivity-enhancing uses are rising—such as using LLMs for content creation and debugging—there’s little evidence of successful AI-specific attacks. Despite this, AI still poses a growing security risk, particularly in telecom, where GenAI integration in mobile devices introduces new vulnerabilities. AI-driven malicious emails have doubled in two years, and DDoS attacks are becoming more automated through AI, placing telcos—especially wireline operators—at heightened risk. [more][more-Verizon_Data_Breach_Investigations_Report]
Red teaming AI models: Organizations are increasingly adopting the practice of "AI red-teaming," inspired by cybersecurity tactics, to test the robustness and ethical boundaries of artificial intelligence systems by simulating attacks or probing for vulnerabilities. Experts from Microsoft, MITRE, and the Center for Security and Emerging Technology (CSET) describe it as an iterative process that explores an AI model’s flaws, unintended behaviors, and potential misuse. While different organizations take varied approaches—ranging from evaluating operational AI systems to eliciting hidden capabilities—the field still lacks standardization, making cross-product comparisons difficult. Initiatives like MITRE’s ATLAS aim to foster shared learning and build a collective knowledge base, but as the field rapidly evolves, experts emphasize the need for a common framework and shared language to guide future development and regulation. [more]
API key to xAI private models leaked: An xAI employee accidentally leaked an API key on GitHub that granted public access for nearly two months to over 60 private and unreleased AI models developed by Elon Musk’s company, potentially including data from SpaceX, Tesla, and Twitter/X. Security researcher Philippe Caturegli and GitGuardian uncovered the leak and found the key still active despite early warnings. The models, some customized with internal company data, could have enabled malicious use or data extraction, raising serious concerns about xAI’s security practices. The exposure also adds to broader worries over the Musk-led DOGE initiative using AI to process sensitive government data. [more]
Microsoft AI-focused bug bounty: As AI-powered cyber threats grow more advanced, Microsoft has launched a new AI-focused bug bounty program to proactively address vulnerabilities in its platforms, offering rewards up to $30,000 for critical disclosures. This initiative, part of its broader vulnerability disclosure program—which paid out $16.6 million in 2024 alone—targets AI security flaws specifically in Dynamics 365 and the Power Platform. These platforms, which support business intelligence and automation tools like PowerApps and Copilot-assisted features, are now under scrutiny as Microsoft invites individuals and organizations to identify high-severity vulnerabilities using its proprietary AI-focused severity classification system. [more]
Leveraging AI in third-party management: Amidst a rapidly evolving risk landscape characterized by growing operational and cybersecurity threats, an increasing number and complexity of third-party relationships, and mounting business pressures for efficiency, third-party risk management (TPRM) is undergoing a fundamental transformation. Risk leaders are increasingly leveraging AI and centralization to move beyond traditional, intermittent risk assessments towards a more resilient, efficient, and strategic approach. While centralization offers benefits like streamlined processes and a holistic risk view, AI presents unprecedented opportunities, from automating manual tasks and enabling real-time monitoring using diverse data streams to deploying predictive analytics and even agentic AI for proactive risk mitigation and remediation. Despite current challenges like fragmented structures and low AI adoption, the synergistic potential of AI and centralization, coupled with the need to address today's complex, interconnected risks, is driving organizations towards a future where TPRM is proactive, data-driven, and deeply integrated with enterprise-wide strategic objectives. [more]
Web3 Cryptospace Spotlight
$137M in a day: Multiple North Korea-linked cyber threat groups have increasingly targeted the Web3 and cryptocurrency sectors to generate revenue and circumvent international sanctions, with profits reportedly funding the country's weapons programs. Google’s Mandiant identified several threat clusters—UNC1069, UNC4899, UNC5342, and UNC4736—using sophisticated malware, social engineering, and job-themed lures to compromise developers and steal digital assets. Another group, UNC3782, has conducted large-scale phishing campaigns, stealing over $137 million in one day. Meanwhile, UNC5267 orchestrates a global scheme in which thousands of DPRK IT workers use fake identities and deepfakes to infiltrate foreign companies, funnel salaries to Pyongyang, and maintain insider access for espionage and extortion purposes. [more]
$5.8M exploit: Solana-based DeFi protocol Loopscale temporarily suspended its lending markets following a $5.8 million exploit on April 26, in which a hacker drained around 5.7 million USDC and 1,200 SOL via undercollateralized loans. The attack, affecting only the USDC and SOL vaults, represents about 12% of Loopscale’s total $40 million in TVL. While some app functions like loan repayments and top-ups have been restored, withdrawals remain restricted as investigations continue. Launched on April 10, Loopscale sets itself apart with a direct lender-borrower matching model and supports niche lending markets, having already attracted over 7,000 users. [more]
Recovered nearly half of the stolen fund: DeFi protocol Loopscale has successfully recovered nearly half of the $5.7 million in USDC and Solana stolen in a recent exploit, with approximately $2.88 million worth of Wrapped SOL returned following white hat negotiations with the attacker. Loopscale had offered a 10% bounty and release of liability for the return of 90% of the funds, which led to a response from the exploiter indicating a willingness to negotiate; this recovery follows a trend of increasing successful fund returns in decentralized finance, as seen with Term Finance's recent partial recovery after a $1.6 million loss. [more]
$650K lost through misconfiguration: Ethereum-based lending protocol Term Finance lost roughly $1.6 million in ETH due to a misconfigured oracle causing faulty liquidations, though it clarified this was not a hack or smart contract exploit. The protocol successfully recovered over $1 million through internal captures and negotiations, leaving an outstanding loss of about $650,000 which the team plans to cover from its treasury and detail in a post-mortem report. Separately, another DeFi protocol, Impermax Finance, also suffered a loss over the weekend, losing approximately $150,000 to a flash loan attack, and similarly plans to release a post-mortem analysis. [more]