TechRisk #112: AI refused to write code for user
Plus, Web3 "code is not de-facto law", AI is still being weaponised, the road ahead with Quantum, a web3 $5M event of hack and return, what Board need to do about AI, and more!
Tech Risk Reading Picks
AI refused request to write code: As businesses increasingly turn to AI "agents," coding assistant Cursor recently sparked attention by refusing to write code for a user, advising them to write it themselves to better understand and maintain the system. The user, "janswist," shared the interaction in a bug report, which went viral on Hacker News and was covered by Ars Technica. Some speculated that this response was due to a coding limit, while others suggested using Cursor's "agent" feature for larger projects. The incident highlighted how Cursor's refusal mirrored the tough-love responses often given to beginner coders on forums like Stack Overflow, raising questions about AI's evolving role in the workplace.
Google Cloud AI security solution: Google Cloud has launched AI Protection, a comprehensive security solution designed to mitigate risks and threats associated with generative AI. It offers three key capabilities: discovering AI inventory, securing AI assets, and managing AI threats, integrating with Google’s Security Command Center (SCC) for centralized risk management. AI Protection automates AI asset discovery, enhances security with Model Armor to prevent attacks like prompt injection, and leverages Google and Mandiant’s intelligence for threat detection. It also supports compliance with regulatory requirements through integrations with Sensitive Data Protection, Assured Workloads, and Confidential Computing. Positioned as part of a broader AI security platform rather than a standalone product, AI Protection aims to provide a holistic, streamlined approach to securing AI environments. [more]
Board’s action on AI: AI is rapidly reshaping industries, and board members must take an active role in navigating its opportunities and risks. In a discussion with 50 global company directors, it was clear that AI is not just an incremental change but a fundamental shift in business intelligence, redefining competition. While some industries are already experiencing massive disruption, others will soon follow as AI-driven efficiency and innovation take hold. However, most companies lag in scaling AI effectively, with only a minority leading the way by treating AI as a business transformation rather than a mere technology deployment. Boards must step up by setting a clear AI strategy, ensuring leadership prioritizes AI, and strengthening governance and risk management. The pace of AI advancement demands immediate action, making it imperative for boards to drive strategic, responsible, and competitive adoption. [more]
When building your AI governance program: At the Privacy & Technology Law Forum (PTLF) on March 4, 2025, Amanda Witt, Ami Rodrigues, and Christina McCoy outlined a roadmap for building an effective AI governance program. Their key takeaways included distinguishing between AI governance (focused on ethics, compliance, and risk management) and AI strategy (focused on business objectives and innovation), choosing between embedded or standalone governance structures based on organizational needs, and crafting comprehensive policies for generative AI that address ethical and legal concerns. They also emphasized the importance of vendor due diligence to mitigate compliance risks and stressed that employee training and strategic communication are essential for fostering a culture of accountability. Ultimately, organizations must balance innovation with ethical oversight to ensure responsible AI implementation.[more]
AI is still being weaponised: North Korean cybercriminals are increasingly leveraging AI to conduct financial fraud, secure foreign jobs under false identities, and support the regime’s illicit income streams, despite efforts by OpenAI, Google, and other companies to block their access. These hackers bypass restrictions using VPNs, shell companies, and alternative AI platforms, including those from China, which may enforce fewer safeguards. AI-driven schemes include deepfake resumes, phishing scams, and financial fraud, making detection harder. With North Korea investing heavily in IT training, experts warn that AI's role in cybercrime will expand, posing a growing challenge to global cybersecurity efforts. [more]
Quantum ahead: Recent advancements in quantum computing by Microsoft, Amazon, and Google have significantly accelerated the race between quantum computers that could easily break current encryption and the development of post-quantum cryptography (PQC) to secure data against these advancements. With quantum computing expected to surpass today’s computational power, encryption algorithms used for privacy could soon be easily cracked. While the National Institute of Standards and Technology (NIST) is working on PQC standards, broad deployment of these new encryption methods will take time, and companies must act quickly to adapt. The future of secure internet privacy hinges on how quickly these new cryptographic solutions are implemented. [more]
Web3 Cryptospace Spotlight
"code is not de-facto law": The Icon Foundation secured a decisive legal victory against Mark Shin, who exploited a bug to create 14 million ICX tokens, inflating supply by 2.5% and causing a 40% price drop. Shin, who used the bug 557 times before detection in 11 hours, rejected a $200,000 bug bounty and faced felony theft and money laundering charges, though his trial ended in a hung jury. Judge William Orrick ruled against Shin, ordering him to cover Icon’s $3.5 million legal fees and for all remaining illicit tokens to be burned, reinforcing that "code is not de-facto law" and setting a precedent for decentralized networks. [more]
Private keys exposed through LastPass: Ripple co-founder Chris Larsen’s $150 million crypto hack resulted from private keys stored in LastPass, which was compromised in 2022, as revealed in a forfeiture complaint shared by blockchain detective ZachXBT. The breach stemmed from stolen encrypted vault data containing Larsen’s private keys, which he had stored exclusively in LastPass after destroying physical records. Attackers exploited this data to access his XRP wallets, stealing 213 million XRP (worth $112.5 million at the time) and laundering it through various crypto exchanges. The FBI is investigating the breach, and ZachXBT criticized Larsen for not disclosing the cause earlier or pursuing legal action against LastPass. [more]
Phishing attack targets Coinbase users: A large-scale phishing attack is targeting Coinbase users by posing as a mandatory wallet migration, tricking them into setting up a new wallet using a recovery phrase pre-generated by attackers. The phishing emails, appearing legitimate and passing security checks, claim Coinbase is transitioning to self-custodial wallets due to legal issues. Unlike traditional scams that steal users' recovery phrases, this attack provides victims with a compromised phrase, allowing attackers to control any funds deposited into the wallet. Coinbase has warned users never to use a recovery phrase given to them and urges vigilance against such scams. [more]
DeFi audits matter: DeFi (Decentralized Finance) has attracted many investors with its promise of bypassing banks and middlemen, but it has also faced significant risks, including hacks and regulatory challenges. DeFi audits, which involve comprehensive security reviews of smart contracts and overall project safety, have become crucial for institutional investors who require security, transparency, and compliance to protect their large portfolios. These audits check for bugs, security flaws, regulatory adherence, and system performance, ensuring the project is secure and trustworthy. The importance of audits is underscored by high-profile security breaches like the Bybit hack, highlighting the need for thorough assessments to prevent financial losses and legal risks in the volatile DeFi space. [more]
Importance of private key security and smart contract audits: In DeFi, both private key security and smart contract audits are critical for protecting users' assets and ensuring platform safety. Private keys, which grant access to digital assets, must be securely stored to prevent theft, as loss of control leads to irreversible fund loss. Meanwhile, smart contract audits are essential for identifying vulnerabilities in decentralized protocols, as flaws can lead to large-scale exploits. Education on security best practices, such as using hardware wallets and conducting regular contract audits, is vital for reducing risks. As DeFi grows, innovations like multi-signature wallets and formal verification continue to strengthen security, highlighting the need for both users and developers to maintain vigilant protection measures. [more]
Hack and return: Decentralized exchange aggregator 1inch successfully recovered most of the $5 million stolen in a recent exploit after negotiating a bug bounty agreement with the attacker. The hack, discovered on March 5, targeted outdated Fusion v1 resolvers but did not affect end-user funds. Blockchain security firm SlowMist identified the stolen assets as 2.4 million USDC and 1,276 WETH. Following negotiations, the hacker agreed to return most of the funds in exchange for a bounty, a growing trend in crypto security. 1inch confirmed the recovery and urged resolvers to update their contracts to prevent future exploits. [more]