TechRisk Notes#68: First AI-focused self propagate malware
Plus, AI powered APT and scams, Cloud security landscape, Free ISF cybersecurity event, Cryptospace hacks analysis on Lazarus!
Tech Risk Reading Picks
AI worm - self propagate and infiltrate: researchers have developed a novel malware called the "Morris II" worm, named after the notorious 1988 Morris computer worm. This new malware employs AI services to propagate, infiltrate systems, and pilfer data, highlighting the looming threat of AI-based security breaches. It underscores the critical need to fortify AI models against such vulnerabilities. [more]
AI powered APT: Microsoft reports that North Korean state-sponsored cyber actors, particularly a group named Emerald Sleet, are incorporating artificial intelligence (AI) into their operations, using large language models (LLMs) to enhance spear-phishing efforts and conduct reconnaissance. Additionally, North Korean hacking groups continue to target cryptocurrency heists and supply chain attacks. For example, the Lazarus Group employs sophisticated methods like Windows Phantom DLL Hijacking and macOS TCC database manipulation. Also, the Konni group uses Windows shortcut files to deliver malicious payloads. [more]
AI scam calls: Artificial intelligence is reshaping business operations but also empowering scammers with new tactics. Despite a decrease in scam robocalls, Americans lost a staggering $10 billion to fraud last year. AI-driven voice cloning enables scammers to impersonate anyone from celebrities to loved ones, using convincing replicas to extort money or deceive victims. Even innocuous recordings like voicemail greetings can be exploited. [more]
Key risks of AI: While AI promises transformative benefits, it also brings significant risks that need urgent attention. Managing AI's risks requires proactive measures from all stakeholders to ensure that its benefits can be realized safely. [more]
Firstly, the malicious use of AI is a growing concern. Accessible tools like ChatGPT empower both productivity and malicious actors, leading to AI-generated email attacks and deepfakes. Additionally, AI model poisoning poses a threat by manipulating AI outputs through tampered training data.
Secondly, the lack of transparency and data privacy in AI models raises ethical issues, especially in critical sectors like finance and healthcare. Users have little control over their data privacy, and AI's opaque decision-making can result in biased or unsafe outcomes.
Thirdly, the rise of AI could lead to job losses, particularly in repetitive tasks. While AI may create new job opportunities, they will largely be technical roles, necessitating workforce upskilling.
Cloud security landscape: In Anomali's Cybersecurity Priorities 2024 Report, it highlights the pressing need for enhanced visibility into malicious activity, with 47% of security analysts lacking adequate insight. Respondents believe that 57% of their daily tasks could be automated, emphasizing the importance of AI and automation in modern security operations. In additional, 76% anticipate that AI technology will enable faster threat detection and increase personal productivity. The report, based on a survey of 150 senior industry professionals including CISOs, underscores the desire for consolidated platforms, with 87% expressing interest in integrating multiple technologies into a single system. Concerns about single point solutions failing (61%) are driving leaders to prioritize tech stack consolidation. They emphasize the need for a more strategic approach to security investments, focusing on tangible outcomes and functionality. [more][more-2]
Quantum readiness approach: Boards shouldn't wait for quantum computers to be in commercial use before planning network and system upgrades, as implementation could take a decade. In addition, Boards should understand the issue and ask the right questions, rather than delve into technical details. One of the potential approach is having a cryptographic agility approach, this enables organizations to swiftly update and rewrite firmware by automatically rotating algorithms with a simple button press. This agility could eliminate the necessity for hardware replacements in the event of firmware compromise, showcasing the foresight and intelligence of a proactive board. [more]
Tech and Cyber Risk Event
Web3 Cryptospace Spotlight
Cryptospace hacks analysis: According to an analysis of data from the United Nations Security Council (UNSC) and DeFiLlama, over 70% of the cryptocurrency lost to North Korea-linked hacks since 2020 was due to private key exploits. These hacks, totaling around $2.4 billion, with $1.69 billion attributed to compromised private keys, are often associated with the Lazarus Group, allegedly backed by the North Korean state. The UNSC's report from last month identified 58 crypto heists with North Korean involvement since 2017, amounting to around $3 billion, with $700 million in 2023 alone. However, the true scale of these hacks may be underestimated, as not all victims report their losses. Chainalysis estimates a higher figure of $1 billion attributed to North Korea-linked hacks out of a total of $1.7 billion stolen in the previous year. [more][more-report-S/2024/215]
DeFi security: Fireblocks has launched new security features, dApp Protection and Transaction Simulation, in response to the rapid growth of the DeFi sector and increasing threats from attackers. These features provide institutional firms with real-time threat detection, visibility into contract calls, and measures to prevent malicious activities, addressing the need for proactive security measures in decentralized finance. [more]