TechRisk #86: Prompt Hacking AI
Plus, AI eroding employees' skills, Polygon’s security breach before major upgrade, Polygon’s security breach before major upgrade, and more!
Tech Risk Reading Picks
Prompt hacking: "Prompt hacking," where hackers manipulate input to trick large language models (LLMs) into providing unintended responses, is becoming a significant security concern. Even without extensive hacking knowledge, individuals can outsmart LLMs, which is problematic as more organizations adopt AI technologies. A survey found that while many organizations are using or exploring LLMs, only a small percentage are confident in their security frameworks. Despite this, most respondents are not highly concerned about these vulnerabilities. Prompt hacking can lead to various risks, from benign tricks to severe security breaches. Organizations are trying to combat this with measures like access control, data encryption, and user education, but many are still unsure about their security practices. [more]
Eroding employees’ skills: A key concern on using AI is that it could erode employees' skills by automating tasks they once handled, potentially leading to skill atrophy. Leaders are encouraged to actively address this issue by ensuring that AI is used to complement human abilities rather than replace them entirely. This involves promoting continuous learning and adaptability among employees to keep their skills sharp in an AI-driven environment. [more]
Concerns and managing AI risks: AI In the modern world, the unprecedented power of AI represents a significant threat due to its ability to make decisions autonomously. The rise of AI could undermine democracy, lead to global conflicts, and potentially result in catastrophic outcomes if not properly regulated. The challenge is not just technological but also deeply political, as the world faces the possibility of being divided into rival digital empires, each with its own rules and control over information. There is a need for global cooperation to manage the power of AI and protect humanity's future. [more]
Strategy to handle AI risk: The rapid adoption of AI, particularly generative AI (GenAI) and large language models (LLMs), has transformed businesses but also introduced significant security risks. These risks include data breaches, proprietary information loss, and vulnerabilities within AI supply chains. Despite AI's long-standing presence, its mass adoption has heightened the stakes, with attackers increasingly targeting AI systems. However, many organizations struggle to identify and manage these risks due to the opacity of AI ecosystems.
To address these challenges, organizations should implement a comprehensive AI security framework like MLSecOps, which ensures visibility, traceability, and accountability across AI/ML ecosystems. [more]
Key strategies include:
Risk Management: Establish policies to address security, bias, and fairness across the AI stack.
Vulnerability Identification: Use advanced scanning tools to detect and fix AI supply chain vulnerabilities.
AI Bill of Materials (AIBOM): Track and catalog all AI components to enhance transparency.
Open Source Tools: Utilize free security tools designed for AI/ML to protect against vulnerabilities.
Collaboration: Encourage AI bug bounty programs to proactively address emerging threats.
Cloud threat actor: Bling Libra, the threat actor group behind the ShinyHunters ransomware, has shifted from selling stolen data to extortion - specifically targeting cloud environments like Amazon Web Services (AWS). They gain initial access using legitimate credentials found in public repositories, perform reconnaissance using tools like S3 Browser and WinSCP, and then exfiltrate or delete data from S3 buckets. The article highlights the importance of strong cloud security practices to defend against such evolving threats. [more-Unit42]
Web3 Cryptospace Spotlight
Polygon’s security breach before major upgrade: Polygon recently regained control of its Discord server after it was compromised in a security breach. The incident, which lasted approximately three hours, allowed hackers to take over the server, during which they posted malicious links and impersonated support agents. The breach led to significant losses, with one user reportedly losing $150,000 worth of Ethereum after interacting with a fake announcement on the server. This attack coincides with Polygon's ongoing preparations for a major network upgrade, including the transition from its native MATIC token to the new POL token scheduled for 4 Sep. [more]
Rogue employee: CertiK, a crypto security firm, blamed a rogue employee for unauthorized transactions involving Tornado Cash during a $3 million exploit of Kraken in June. The employee allegedly used his own funds and did not act maliciously. Despite this, the incident raised concerns about CertiK's adherence to industry standards. CertiK has apologized, taken disciplinary actions, and updated its policies. However, questions remain about the firm's decision to exploit the vulnerability further instead of reporting it immediately. [more]
Aviation Technology Risk
Federal Aviation Administration (FAA) cybersecurity regulations: The FAA has proposed new cybersecurity regulations for airplanes, engines, and propellers due to the increasing connectivity of these systems, which introduces potential cyber threats. The rules aim to standardize existing temporary regulations, requiring manufacturers to identify cybersecurity risks, develop protective measures, and create procedures for pilots in case of cyber incidents. The move responds to the growing threat landscape and seeks to reduce certification complexity and time, while harmonizing with international standards. [more]