TechRisk #156: AI-only social network exposed 1.5M API tokens
Tech Risk Reading Picks
When AI agents become the weakest link: A widely used AI agent called Moltbot was shown to be vulnerable to simple attacks that expose sensitive data and grant system access, highlighting governance and security risks as organisations adopt autonomous AI tools. The agent is designed with broad access to email, messaging apps, files, and credentials, which creates a large attack surface if controls are weak. Researchers demonstrated that attackers could hijack Moltbot through internet-facing components and then pivot to private communications and other connected systems. A marketplace for third-party “skills” introduces supply chain risk, as malicious code can be disguised as popular add-ons and made to appear trustworthy through manipulated download metrics. Weak validation of uploaded files also enabled code execution on shared infrastructure, showing how basic security gaps can cascade into wider compromise. The core risk is structural rather than accidental: AI agents are valuable precisely because they hold permissions that traditional software does not, which makes failures more damaging. This raises concerns about data leakage, credential abuse, regulatory exposure, and operational disruption if agent deployments are not tightly sandboxed and audited. [more]
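A minimal sketch of the kind of validation whose absence the researchers describe: vetting an uploaded “skill” package before it is installed. The package format, digest allowlist, size limit, and blocked suffixes below are illustrative assumptions, not Moltbot's actual mechanism.

```python
# Hedged sketch: vet an uploaded agent "skill" package before installing it.
# All constants here are hypothetical examples, not the platform's real policy.
import hashlib
import zipfile
from pathlib import Path

# SHA-256 digests of packages reviewed out of band (placeholder values).
REVIEWED_PACKAGE_DIGESTS = {
    "0" * 64: "example-publisher",
}
MAX_PACKAGE_BYTES = 5 * 1024 * 1024
BLOCKED_SUFFIXES = {".exe", ".dll", ".so", ".sh", ".bat"}


def vet_skill_package(path: Path) -> bool:
    """Return True only if the package passes basic integrity and content checks."""
    data = path.read_bytes()
    if len(data) > MAX_PACKAGE_BYTES:
        return False  # oversized uploads are rejected outright

    if hashlib.sha256(data).hexdigest() not in REVIEWED_PACKAGE_DIGESTS:
        return False  # unknown package: download counts alone are not a trust signal

    # Inspect the archive without extracting it; reject native executables and
    # path-traversal entries ("../") that could escape the install directory.
    with zipfile.ZipFile(path) as zf:
        for name in zf.namelist():
            if ".." in name or Path(name).suffix.lower() in BLOCKED_SUFFIXES:
                return False
    return True


if __name__ == "__main__":
    candidate = Path("uploaded_skill.zip")
    if candidate.exists():
        print("accepted" if vet_skill_package(candidate) else "rejected")
```

The point of the sketch is that marketplace popularity signals are ignored entirely; only packages reviewed out of band and free of executable payloads would be installed.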
Risks behind an AI-only social network: Moltbook exposed material technology risk after a misconfigured backend allowed unauthenticated read and write access to its production database, resulting in exposure of 1.5 million API authentication tokens, more than 35,000 email addresses, and private messages. Attackers could fully impersonate any AI agent using leaked credentials, enabling account takeover and misuse of high-visibility accounts. The absence of access controls also allowed modification of live posts, meaning any party could deface content, manipulate reputation scores, or inject malicious prompts consumed by other agents. Private messages were stored without protection and included third-party API keys, extending the impact beyond the platform itself. The findings show that a single configuration error in a widely used cloud service can directly lead to large-scale data exposure, loss of content integrity, and downstream security compromise across connected AI services. [more]
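Because the root cause was a backend that answered read and write requests with no credentials at all, here is a hedged sketch of how a defender might test their own deployment for the same gap. The endpoint URL and collection names are placeholders, not Moltbook's real API.

```python
# Hedged sketch: probe your own backend for unauthenticated read/write access.
# BASE_URL and the collection names are placeholders for illustration only.
import requests
from requests.exceptions import RequestException

BASE_URL = "https://backend.example.com/v1"  # replace with your own deployment


def check_unauthenticated_access(collection: str) -> None:
    """Issue read and write requests with no credentials and report what succeeds."""
    try:
        read = requests.get(f"{BASE_URL}/{collection}", timeout=10)
        print(f"GET  /{collection} without auth -> HTTP {read.status_code}")
        if read.ok:
            print("  !! records are readable without a token")

        write = requests.post(
            f"{BASE_URL}/{collection}",
            json={"probe": "security-test"},
            timeout=10,
        )
        print(f"POST /{collection} without auth -> HTTP {write.status_code}")
        if write.ok:
            print("  !! records are writable without a token")
    except RequestException as exc:
        print(f"  request to /{collection} failed: {exc}")


if __name__ == "__main__":
    for name in ("posts", "messages", "tokens"):
        check_unauthenticated_access(name)
```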
Full cloud compromise by AI in minutes: The incident was identified through post-attack investigation by the Sysdig Threat Research Team, which analyzed cloud activity logs and configuration changes after suspicious behavior was detected. Attackers accessed an AWS environment after finding valid credentials exposed in public S3 buckets and used them as an entry point into the account. They rapidly escalated privileges by modifying existing Lambda functions until they obtained administrative access. AI resources were used throughout the attack to automate discovery, generate attack code, and guide real-time decisions, which allowed the intrusion to complete in under ten minutes. This included abusing Amazon Bedrock to invoke multiple AI models, turning the compromised environment into an AI and infrastructure resource for the attackers. [more][more-2_sysdig]
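The initial foothold was valid credentials sitting in publicly accessible S3 buckets, so a short audit sketch using boto3 follows. It flags buckets whose public access block is missing or disabled, or whose ACL grants access to everyone; it assumes read-only S3 permissions are already configured and is a starting point, not a complete exposure check (it does not scan object contents for secrets).

```python
# Hedged sketch: flag S3 buckets that may be publicly accessible, the kind of
# exposure that allowed credential harvesting in this incident.
import boto3
from botocore.exceptions import ClientError

PUBLIC_GRANTEE_URIS = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}


def audit_buckets() -> None:
    s3 = boto3.client("s3")
    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]

        # 1. Is the bucket-level public access block missing or not fully enabled?
        try:
            cfg = s3.get_public_access_block(Bucket=name)["PublicAccessBlockConfiguration"]
            fully_blocked = all(cfg.values())
        except ClientError:
            fully_blocked = False  # no public access block configured at all

        # 2. Does the bucket ACL grant permissions to all users?
        acl = s3.get_bucket_acl(Bucket=name)
        public_grants = [
            g["Permission"]
            for g in acl["Grants"]
            if g["Grantee"].get("URI") in PUBLIC_GRANTEE_URIS
        ]

        if not fully_blocked or public_grants:
            print(f"[!] {name}: public access block fully enabled={fully_blocked}, "
                  f"public ACL grants={public_grants or 'none'}")


if __name__ == "__main__":
    audit_buckets()
```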
Priorities for CISOs this year according to Google's CISO: As AI becomes embedded in core business operations, CISOs face heightened risk from strategies that focus on compliance alone, since regulatory alignment often lags real-world threats and can leave organizations exposed to disruptive attacks. AI supply chains introduce new vulnerabilities because models, data, and third-party components can be tampered with in ways that undermine trust, reliability, and decision making at scale. Weak identity management is now a critical risk as agentic AI expands, because poor control over human and machine identities increases the blast radius of inevitable incidents and reduces accountability. Traditional security response speeds are insufficient against AI-enabled attacks, making slow detection and recovery a material business risk that can directly impact availability and revenue. Inadequate AI governance also raises strategic and ethical concerns, since without strong context-driven oversight and testing, organizations may deploy AI in high-impact decisions without fully understanding or managing the consequences. [more]

