TechRisk #105: Cyber-Offensive GhostGPT
Plus: NIST's revised AI safety guidelines, the dual-edged nature of technological advancement, the risks of using AI-generated code, Web3 developers targeted, and more!
Tech Risk Reading Picks
GhostGPT: Abnormal Security has uncovered GhostGPT, an uncensored AI chatbot tailored for cybercrime, highlighting the growing misuse of artificial intelligence by threat actors. Unlike mainstream AI models such as ChatGPT, GhostGPT bypasses safety guardrails, enabling cybercriminals to generate phishing emails, malware, and other malicious content with ease. Likely built on a jailbroken version of ChatGPT or an open-source LLM, GhostGPT provides unfiltered responses to harmful queries, lowering the technical barrier to cybercrime and enabling even inexperienced actors to execute sophisticated attacks. The tool's rising popularity escalates the threat of AI-driven cybercrime and underscores the urgent need for innovative cybersecurity strategies. [more]
NIST revised AI safety guidelines: The U.S. AI Safety Institute (US AISI) at NIST released the second public draft of its Managing Misuse Risk for Dual-Use Foundation Models guidelines (NIST AI 800-1), focusing on voluntary best practices for addressing risks to public safety and national security throughout the AI lifecycle. Building on feedback from over 70 experts, the updated draft includes expanded guidance, such as detailed best practices for model evaluation, domain-specific guidelines for chemical, biological, and cybersecurity risks, and a clarified “marginal risk” framework for assessing model impacts. The revisions also address open model developers and extend risk management strategies across the AI supply chain. These updates aim to make the guidelines more actionable while fostering collaboration to promote safe and trustworthy AI innovation in the U.S. [more]
WEF on technology risks: The World Economic Forum Global Risks Report 2025 highlights the dual-edged nature of technological advancement, emphasizing its capacity to both empower humanity and create profound risks. The report identifies AI and frontier technologies as central to this dynamic, with risks like misinformation, algorithmic bias, and cyber vulnerabilities reshaping industries and societal trust. It warns that the rapid pace of innovation has outstripped humanity’s ability to govern or ethically manage its consequences, particularly in addressing polarization, inequality, and environmental crises. With insights from over 900 experts, the report underscores the interconnectedness of environmental, societal, economic, geopolitical, and technological risks, urging global collaboration to establish ethical frameworks, enhance digital resilience, and foster responsible innovation. At this critical juncture, the report calls for immediate action to ensure technology is a tool for progress rather than division, shaping a future defined by equitable and transformative change. [more][more-WEF_report]
Risks of using AI-generated code: As AI coding tools like GitHub Copilot and Claude become increasingly widespread, security leaders across major countries are raising concerns about potential vulnerabilities, bugs, and misuse of these technologies. Research highlights that while 83% of firms already use AI for coding, with 57% integrating it as standard practice, 92% of security leaders worry about the lack of oversight and the questionable integrity of AI-generated code. Significant risks include security flaws, malware, and developer disengagement from the codebase, alongside privacy issues such as exposing sensitive data to AI tools. To mitigate these challenges, organizations must implement oversight policies, conduct thorough code reviews, train developers on the limitations of AI tools, and carefully vet and monitor the AI tools in use. Security leaders emphasize keeping AI as a tool rather than over-relying on it, ensuring safeguards like private models, usage guidelines, and best practices are in place to secure AI-generated code effectively. [more]
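To make the code-review point concrete, here is a minimal, hypothetical sketch (not drawn from the cited research; function and table names are invented) of a flaw reviewers frequently flag in assistant-generated code: SQL built by string interpolation, alongside the parameterized form a review would require.

```python
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # Pattern code assistants sometimes emit: SQL built via string
    # interpolation. Input like "x' OR '1'='1" rewrites the query
    # (classic SQL injection).
    cursor = conn.execute(
        f"SELECT id, email FROM users WHERE name = '{username}'"
    )
    return cursor.fetchone()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # Reviewed version: a parameterized query. The driver binds the value,
    # so attacker-controlled input cannot alter the SQL structure.
    cursor = conn.execute(
        "SELECT id, email FROM users WHERE name = ?",
        (username,),
    )
    return cursor.fetchone()
```

Static analyzers catch some of these patterns automatically, but the survey findings above suggest human review of AI output remains essential.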
Class-action lawsuit on using private messages without consent: LinkedIn is facing a class-action lawsuit alleging it shared Premium users' private messages with third parties, without consent, to train AI models. The suit claims LinkedIn introduced a privacy setting in August 2023 that enabled data sharing by default without notifying users via its terms of service or privacy policy. After press coverage and user backlash, LinkedIn updated its privacy policy but buried key disclosures about using personal messages for AI training in an FAQ. The lawsuit alleges LinkedIn knowingly concealed these practices, which may also involve other Microsoft AI models, and that training already performed on the data cannot be undone. LinkedIn denies the claims, calling them meritless. [more]
Web3 Cryptospace Spotlight
Web3 developers targeted: North Korea's Lazarus Group has launched "Operation 99," a sophisticated cyberattack campaign targeting Web3 and cryptocurrency developers. Fake LinkedIn recruitment schemes lure victims into cloning malicious GitLab repositories, which deploy malware that steals sensitive data, cryptocurrency, and intellectual property across multiple operating systems globally. [more]
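One practical habit this campaign argues for: inspecting a cloned repository's install-time hooks before running anything. The sketch below is a hypothetical defensive check, not tied to Operation 99's actual payloads and assuming a JavaScript project layout; it flags npm lifecycle scripts such as postinstall, a common vehicle for code that executes automatically during `npm install`.

```python
import json
import sys
from pathlib import Path

# npm lifecycle scripts that run automatically during `npm install`;
# malicious repositories often hide a first-stage payload here.
AUTO_RUN_HOOKS = {"preinstall", "install", "postinstall", "prepare"}

def flag_install_hooks(repo_dir: str) -> list[str]:
    """List auto-running install scripts declared in any package.json."""
    findings = []
    for manifest in Path(repo_dir).rglob("package.json"):
        try:
            data = json.loads(manifest.read_text())
        except (json.JSONDecodeError, OSError):
            continue  # unreadable or malformed manifest; skip it
        scripts = data.get("scripts", {}) if isinstance(data, dict) else {}
        for hook in sorted(AUTO_RUN_HOOKS.intersection(scripts)):
            findings.append(f"{manifest}: {hook} -> {scripts[hook]}")
    return findings

if __name__ == "__main__":
    repo = sys.argv[1] if len(sys.argv) > 1 else "."
    for finding in flag_install_hooks(repo):
        print("review before installing:", finding)
```

A clean scan is not proof of safety, since malware can live in the project code itself, but surfacing auto-run hooks removes one easy ambush point before a developer builds an unvetted repository.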
Cyvers’ Web3 security report 2024: In 2024, Web3 security suffered a significant setback, with over $6 billion lost to cyberattacks, an alarming 40% increase from 2023. The majority of losses stemmed from access control failures (81%) and smart contract exploits (19%), as hackers exploited weak authentication protocols and poorly written code. Ethereum bore the brunt, accounting for 51% of stolen funds, while notable platforms like DMM Bitcoin and PlayDapp faced multi-million-dollar heists. Although $1.3 billion was recovered in the first half of the year, recoveries dwindled in the latter half, underscoring the need for faster intervention. Key lessons include prioritizing continuous smart contract audits, adopting AI-powered threat detection, and strengthening access controls. As Web3 enters 2025, security must become a core focus to prevent escalating losses and ensure a safer decentralized ecosystem. [more][more_Cyvers]
Data breach concerns: The 2022 OpenSea data breach, which exposed over seven million email addresses, highlights the vulnerabilities of centralized data management in the Web3 ecosystem. The incident, involving an employee from Customer.io, underscores the risks of relying on centralized infrastructures despite the availability of decentralized alternatives like DePIN (Decentralized Physical Infrastructure Networks). Industry experts argue that fully adopting decentralized storage solutions is essential to safeguarding user data and preventing future breaches. As data generation accelerates, fueled by AI advancements, transitioning to decentralized frameworks is seen as crucial for strengthening Web3 security, preserving user trust, and ensuring the sustainability of decentralized ecosystems. [more]