TechRisk #82: AI safety and security tasks completed
Plus, Cloud Security Alliance’s AI model risk framework is out, NIST AI testing tool, CISOs’ challenges over AI development, Cyber Risks Associated with GenAI for Financial Sector and more!
Tech Risk Reading Picks
White House executive tasks on AI completed: The Biden-Harris Administration announced new AI actions and commitments, including Apple's agreement to voluntary AI standards. Federal agencies completed all 270-day tasks from the Executive Order aimed at AI safety, security, privacy, equity, and innovation. Actions include new guidelines, expanded AI testbeds, pilot results, and talent recruitment. Significant investments in responsible AI development, privacy technologies, and global leadership initiatives were also detailed. The comprehensive plan emphasizes advancing U.S. AI leadership globally. [more]
Bill to get Airlines to prepare for future IT disruption: In response to recent flight delays and cargo backups due to a global IT outage, a new bill in Congress, the "Ensuring Airline Resiliency to Reduce Delays and Cancellations Act," aims to require airlines to develop operational resiliency plans. These plans should address severe weather, IT failures, cybersecurity risks, and other potential disruptions. Introduced by Representatives Rick Larsen and Steve Cohen, the bill seeks to ensure airlines are better prepared for future large-scale disruptions. [more][more-bill]
Cloud Security Alliance’s AI Model Risk Framework: The AI Model Risk Management Framework by the Cloud Security Alliance outlines a proactive approach to managing risks associated with machine learning models. It emphasizes the importance of responsible AI development and deployment. Key components of the framework include AI model cards, data sheets, risk cards, and scenario planning. These elements aim to enhance transparency, inform decision-making, and ensure robust model validation. The framework addresses potential financial, regulatory, and reputational risks, promoting a continuous feedback loop for improvement. [more]
NIST’s AI testing tool: The National Institute of Standards and Technology (NIST) has re-released a tool named Dioptra, designed to help test the risks associated with AI models. This modular, open-source web-based tool, originally released in 2022, aims to assess, analyze, and track AI risks by simulating adversarial attacks, such as data poisoning, to determine how these malicious actions can degrade AI model performance. Dioptra provides a common platform for AI researchers and developers to benchmark models and conduct "red-teaming" exercises, where models are exposed to various simulated threats to test their resilience. The initiative also involves the U.K.'s AI Safety Institute, reflecting an international collaboration to advance AI model testing standards. [more][more-NIST][more-Dioptra]
AI-based cybercriminal tools: Generative AI, such as FraudGPT, is being harnessed by cybercriminals to enhance hacking operations. FraudGPT aids in writing malicious code, crafting phishing pages, and creating undetectable malware. It’s promoted on the dark web and Telegram, showcasing its capabilities in various cybercrimes, from credit card fraud to digital impersonation. Alongside FraudGPT, WormGPT is also being used to launch sophisticated phishing attacks. This rise in malicious AI tools underscores the need for advanced cybersecurity measures and regulatory interventions to combat evolving threats. [more]
CISOs’ challenges over AI development: CISOs are facing significant challenges as they navigate the dual priorities of innovation and security in the era of AI. A global study by Checkmarx reveals that while nearly all development teams use AI for code generation, 80% of them are concerned about the associated security risks. The lack of AI governance, with only 29% of organizations having measures in place, exacerbates these concerns. As AI-generated code becomes more prevalent, security teams are overwhelmed by new vulnerabilities, underscoring the need for robust governance and advanced security tools. [more][more-Checkmarx]
Protect AI system on Cloud: Enhancing threat detection for GenAI workloads requires innovative strategies, especially in cloud environments. Traditional threat detection systems, which use algorithms to flag suspicious log events, face challenges with false positives and the unique complexities of cloud systems. GenAI workloads introduce additional difficulties, such as asset management and the lack of specialized detection logic. Aligning with frameworks like MITRE ATLAS and employing cloud attack emulation can significantly improve detection accuracy and reduce alert fatigue, offering a proactive approach to safeguarding against sophisticated threats in cloud-based GenAI applications. [more]
Web3 Cryptospace Spotlight
$25M governance attack: Compound Finance DAO recently approved a controversial proposal, granting $25 million in COMP tokens to the "Golden Boys" group's goldCOMP vault. Despite past rejections, this third attempt passed with a narrow 52% majority. Critics label it a "governance attack," highlighting low weekend voting participation. Security concerns include the group's strategy and timing. The proposal’s approval led to a 6% drop in COMP's value. The lack of robust opposition and insufficient alerts to COMP holders have sparked concerns about the DAO's vigilance. [more]
Singapore AI Security Guides and Threats
Guide to Cyber Risks Associated with Generative Artificial Intelligence for Financial Sector: The Monetary Authority of Singapore (MAS) issued a circular addressing cyber risks associated with generative artificial intelligence (AI). It highlights potential threats such as data poisoning, model evasion attacks, and the misuse of AI-generated content for malicious purposes. MAS emphasizes the importance of robust cybersecurity measures, including regular AI model monitoring, validation, and secure development practices. The circular advises financial institutions to implement strong governance frameworks, conduct risk assessments, and ensure staff are adequately trained to handle AI-related risks [more][more-circular]
Draft Securing AI Systems: The Cyber Security Agency of Singapore (CSA) warns that AI systems are vulnerable to adversarial attacks and other cybersecurity risks that could lead to data breaches and harmful outcomes. To mitigate these risks, CSA has issued Guidelines on Securing AI Systems, advocating for AI to be secure by design and default. Additionally, a community-driven Companion Guide for Securing AI Systems, developed with input from AI and cybersecurity experts, offers practical measures and best practices. CSA is now seeking public feedback on these documents from global partners, industry professionals, and the public to ensure their effectiveness in protecting AI systems as their adoption in Singapore grows. [more]
Singapore Threat Landscape Report: The "Singapore Cyber Landscape 2023" report by the Cyber Security Agency (CSA) highlights key cybersecurity challenges and initiatives in Singapore. Major threats include phishing and ransomware, with a slight decrease in ransomware cases but persistent phishing scams. Emerging trends to watch include ransomware tactics focusing on data exfiltration and the increasing use of AI in cyber threats and defenses. [more]