TechRisk Notes #50: First-ever conviction for a smart contract hack
Plus: beware of model collapse, OpenAI's board gains the power to veto the CEO, $3 million worth of NFTs stolen and returned, the risk of orphan cloud identities, and more!
Tech Risk Reading Picks
OT APT campaign: In 2022, the Iranian advanced persistent threat group OilRig targeted Israeli organizations in a series of cyberattacks. Notably, the group deployed custom downloaders developed over the preceding year, including SampleCheck5000, ODAgent, OilCheck, and OilBooster, which abused legitimate Microsoft cloud services for command-and-control communication and data exfiltration. OilRig, also tracked as APT34, Helix Kitten, Cobalt Gypsy, Lyceum, Crambus, or Siamesekitten, maintains an extensive arsenal of custom malware. ESET researchers reported these findings on December 14, 2023. [more][more-2]
AI risk - model collapse: Model collapse in generative AI occurs when models degrade or fail because they are trained on synthetic (model-generated) data rather than human-generated data. The consequences can be serious, including job losses, financial harm, amplified bias, and data breaches, and the lack of transparency about the origin of training data compounds the risk. Even popular models like ChatGPT face challenges, such as reportedly diminishing code-writing ability, possibly influenced by biased or recursive data sources like Stack Overflow. Proposed mitigations include building a strong AI data infrastructure, promoting openness about data sources, researching the impact of removing specific data, developing specialized and ethically assessed models, fostering data literacy, and ensuring transparency about the origin of machine-generated content. These measures aim to create more robust, trustworthy, and diverse AI systems as our reliance on them continues to grow. [more]
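The mechanism is easy to demonstrate in miniature. The following toy sketch (our illustration, not from the article) repeatedly fits a Gaussian to samples drawn from the previous generation's fitted model; with no fresh human data entering the loop, the estimated spread drifts and the distribution's tails are progressively lost, which is the statistical core of model collapse:

```python
import numpy as np

rng = np.random.default_rng(0)

# Generation 0: "human" data from a standard normal distribution.
data = rng.normal(loc=0.0, scale=1.0, size=100)

for generation in range(1, 201):
    # Fit a simple model (Gaussian maximum likelihood) to the current data...
    mu, sigma = data.mean(), data.std()
    # ...then train the next generation only on synthetic samples from that fit.
    data = rng.normal(loc=mu, scale=sigma, size=100)
    if generation % 40 == 0:
        print(f"generation {generation:3d}: mu={mu:+.3f}, sigma={sigma:.3f}")
```

Because each generation re-estimates the distribution from a finite synthetic sample, the estimated sigma follows a downward-biased random walk and tends toward zero over many generations. Real LLM training is far more complex, but the loss of rare, tail content follows the same logic.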
AI risk governance: OpenAI has announced a new safety plan to address risks from the development of advanced AI models. The plan gives the board of directors the power to veto the CEO's decisions if it believes the risks posed by a model under development are too high. OpenAI acknowledges that the study of AI risks has fallen short of what is needed and is adopting a Preparedness Framework to systematize its safety thinking. Under the framework, safety systems teams oversee current AI models, a Preparedness Team assesses frontier models, and a Superalignment Team monitors the development of superintelligent models. [more]
The teams will report to the board, which will receive detailed scorecards assessing risks in categories such as cybersecurity, persuasion, model autonomy, and CBRN (chemical, biological, radiological, and nuclear) threats. If the risks are deemed medium or below, deployment is allowed; if high, development can proceed with caution; if critical, all development stops. OpenAI also emphasizes collaboration with external parties and independent third-party audits for accountability. The company aims to anticipate future risks and is committed to pioneering research on how risks evolve as models scale.
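Reduced to code, the gating rule described above is simple. This is an illustrative sketch only; the categories mirror those named in the article, while the function and threshold names are hypothetical, not OpenAI's:

```python
from enum import IntEnum

class Risk(IntEnum):
    LOW = 0
    MEDIUM = 1
    HIGH = 2
    CRITICAL = 3

def gate(scorecard: dict[str, Risk]) -> str:
    """Apply the deploy/develop/halt thresholds to the worst category score."""
    worst = max(scorecard.values())
    if worst <= Risk.MEDIUM:
        return "deploy"                # medium or below: deployment allowed
    if worst == Risk.HIGH:
        return "develop-with-caution"  # high: development may continue
    return "halt"                      # critical: all development stops

# Hypothetical scorecard for a model under assessment.
scorecard = {
    "cybersecurity": Risk.MEDIUM,
    "persuasion": Risk.LOW,
    "model_autonomy": Risk.HIGH,
    "cbrn": Risk.LOW,
}
print(gate(scorecard))  # -> develop-with-caution
```

The published framework also distinguishes pre- and post-mitigation scores, with the thresholds applying to the scores after mitigations.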
Cloud risk - orphan identities: Jeff Moncrief, field CTO at Sonrai Security, emphasizes the importance of addressing the threats posed by unused identities in cloud environments. He suggests a comprehensive approach to tightening cloud security: remove both the unused human identities left behind by staff turnover and the machine identities created during application testing. These idle identities are often overlooked and can pose significant risks. [more]
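As a concrete starting point, here is a minimal sketch of how one might surface stale human identities in AWS using boto3. It assumes configured AWS credentials and a hypothetical 90-day staleness threshold, and it only covers IAM users; a real cleanup would also need to address roles, service accounts, and other machine identities:

```python
from datetime import datetime, timedelta, timezone
import boto3

STALE_AFTER = timedelta(days=90)   # assumption: 90 days idle counts as "unused"
now = datetime.now(timezone.utc)
iam = boto3.client("iam")

def last_activity(user):
    """Most recent console login or access-key use for an IAM user, or None."""
    times = []
    if "PasswordLastUsed" in user:
        times.append(user["PasswordLastUsed"])
    keys = iam.list_access_keys(UserName=user["UserName"])["AccessKeyMetadata"]
    for key in keys:
        info = iam.get_access_key_last_used(AccessKeyId=key["AccessKeyId"])
        used = info.get("AccessKeyLastUsed", {}).get("LastUsedDate")
        if used:
            times.append(used)
    return max(times) if times else None

for page in iam.get_paginator("list_users").paginate():
    for user in page["Users"]:
        seen = last_activity(user)
        if seen is None or now - seen > STALE_AFTER:
            print(f"candidate orphan identity: {user['UserName']} (last seen: {seen})")
```

Flagged identities should be reviewed, and ideally disabled before deletion, rather than removed automatically.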
Quantum breakthrough: Researchers from Harvard, MIT, Caltech, NIST, QuEra Computing, and Princeton have achieved a breakthrough in quantum computing: a quantum processor with the largest number of logical qubits demonstrated to date. Logical qubits are encoded across multiple physical qubits, making them more controllable and error-correctable. The team's approach uses neutral atoms manipulated with lasers, pointing to advances in the scalability of logical-qubit devices and paving the way for further exploration of quantum computing capabilities. [more]
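To see why logical qubits matter, a classical analogue helps. The sketch below is not the paper's method (quantum codes cannot simply copy states and instead rely on stabilizer measurements), but a three-bit repetition code shows the core idea: encode one logical bit redundantly across several physical bits, then correct errors by majority vote.

```python
import random

def encode(bit: int) -> list[int]:
    """Encode one logical bit into three physical bits (repetition code)."""
    return [bit] * 3

def noisy(bits: list[int], flip_prob: float) -> list[int]:
    """Flip each physical bit independently with probability flip_prob."""
    return [b ^ (random.random() < flip_prob) for b in bits]

def decode(bits: list[int]) -> int:
    """Majority vote corrects any single bit-flip."""
    return int(sum(bits) >= 2)

# Estimate the logical error rate for a given physical error rate.
p, trials = 0.05, 100_000
errors = sum(decode(noisy(encode(0), p)) != 0 for _ in range(trials))
print(f"physical error rate: {p}, logical error rate: {errors / trials:.4f}")
```

With a 5% physical error rate, the simulated logical error rate lands near 3p^2 - 2p^3, roughly 0.7%: the same kind of leverage that makes logical qubits more reliable than the physical qubits they are built from.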
Web3 Cryptospace Spotlight
Stolen NFTs returned after bounty payment: 16 Dec - 36 Bored Ape Yacht Club (BAYC) and 18 Mutant Ape Yacht Club (MAYC) NFTs worth nearly $3 million were stolen from NFT Trader after an attacker exploited the platform. The NFTs were returned after the attacker was paid a 'bounty' of 120 Ether (about $267,000), raised through a community effort led by the non-profit Boring Security and funded by Greg Solano, co-founder of BAYC creator Yuga Labs. [more]
The hack stemmed from a platform upgrade that enabled unauthorized NFT transfers via previously granted trading permissions. The community advised revoking approvals for the old NFT Trader contracts to prevent follow-on attacks.
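For ERC-721 collections, that revocation is a single contract call. Below is a hedged web3.py sketch; the RPC endpoint and every address are placeholders, not the actual contracts involved in this incident:

```python
from web3 import Web3

# All values below are placeholders for illustration only.
RPC_URL  = "https://example-rpc.invalid"                        # hypothetical node
NFT      = "0x0000000000000000000000000000000000000001"         # NFT collection
OPERATOR = "0x0000000000000000000000000000000000000002"         # old marketplace contract
OWNER    = "0x0000000000000000000000000000000000000003"         # your wallet

# Minimal ERC-721 ABI fragment covering approval management.
ABI = [
    {"name": "isApprovedForAll", "type": "function", "stateMutability": "view",
     "inputs": [{"name": "owner", "type": "address"},
                {"name": "operator", "type": "address"}],
     "outputs": [{"name": "", "type": "bool"}]},
    {"name": "setApprovalForAll", "type": "function", "stateMutability": "nonpayable",
     "inputs": [{"name": "operator", "type": "address"},
                {"name": "approved", "type": "bool"}],
     "outputs": []},
]

w3 = Web3(Web3.HTTPProvider(RPC_URL))
nft = w3.eth.contract(address=NFT, abi=ABI)

# If the old marketplace contract is still approved, build a revocation tx.
if nft.functions.isApprovedForAll(OWNER, OPERATOR).call():
    tx = nft.functions.setApprovalForAll(OPERATOR, False).build_transaction({
        "from": OWNER,
        "nonce": w3.eth.get_transaction_count(OWNER),
    })
    # The tx must then be signed with the owner's key and broadcast, e.g. via
    # w3.eth.account.sign_transaction(...) and w3.eth.send_raw_transaction(...).
```

Tools such as Etherscan's token approval checker or revoke.cash wrap these same isApprovedForAll/setApprovalForAll calls behind a UI.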
First-ever conviction for smart contract hack: Shakeeb Ahmed, 34, has pleaded guilty in a groundbreaking case to exploiting vulnerabilities in smart contracts on two Solana decentralized finance (DeFi) exchanges, stealing $12.6 million. This marks the first conviction for such a hack, setting a crucial precedent for holding individuals accountable in the rapidly growing DeFi space. Ahmed was apprehended in July and has agreed to forfeit the stolen funds. [more]
Ledger to compensate hacked victims: Ledger, a crypto wallet-maker, will compensate victims for approximately $600,000 in losses resulting from a recent hack. The breach involved the Ledger Connect Kit software, compromised through a phishing attack on a former employee, leading to a widespread exploit affecting various wallets and decentralized applications (dapps). Ledger plans to reimburse all affected users, even non-customers, and is updating its hardware wallets. By June 2024, the company will eliminate blind signing in favor of clear signing, so users can review and verify transaction details before approving them. [more]