TechRisk #157: Gemini supporting full attack lifecycle
Plus, ads are testing users’ trust, more than 500 zero-day vulnerabilities identified by Claude, and more!
Tech Risk Reading Picks
State actors are using Gemini: State-backed hackers from China, Iran, North Korea and Russia are using Google Gemini to support the full attack lifecycle, from reconnaissance to data exfiltration, which lowers the barrier to entry and accelerates operations. Adversaries are leveraging the model for target profiling, phishing content, code generation, vulnerability testing and command-and-control development, increasing the speed and scale of campaigns. Iranian and Chinese actors have used Gemini to refine intrusion techniques and automate exploit analysis against specific targets, raising concerns about AI-assisted targeting of enterprises. Malware such as HonestCue and phishing kits like CoinBait show how generative AI can be embedded into toolchains to dynamically generate payloads and enhance credential harvesting. Cybercriminal groups are also applying AI in social engineering campaigns such as ClickFix to distribute infostealers, heightening enterprise exposure through user manipulation. Separately, Google also noted attackers executing over 100,000 prompts to perform large-scale model extraction and knowledge distillation attempts. While no breakthrough capabilities have been observed, the steady integration of AI into offensive operations signals a structural shift in cyber risk. [more]
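To make the model extraction point concrete, here is a minimal sketch of what a knowledge distillation harvest against a hosted model can look like. Everything in it is illustrative: the query_target stub and the prompt set are hypothetical placeholders, not details from Google's reporting, which only describes the volume of prompts observed.

```python
import json

def query_target(prompt: str) -> str:
    """Hypothetical wrapper around the target model's chat API.

    In a real extraction attempt this would be an authenticated HTTP call;
    here it is a stub so the sketch stays self-contained and runnable.
    """
    return f"<response to: {prompt}>"

def build_distillation_set(prompts: list[str], out_path: str) -> None:
    """Harvest prompt/response pairs as training data for a smaller 'student' model.

    Mass-querying a hosted model like this (the report cites over 100,000
    prompts) is the behaviour extraction monitoring is designed to flag.
    """
    with open(out_path, "w", encoding="utf-8") as f:
        for prompt in prompts:
            record = {"prompt": prompt, "completion": query_target(prompt)}
            f.write(json.dumps(record) + "\n")

if __name__ == "__main__":
    seed_prompts = [f"Explain topic {i} step by step." for i in range(5)]
    build_distillation_set(seed_prompts, "distillation_set.jsonl")
```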
OpenAI with Ads will test users’ trust: Zoë Hitzig’s departure from OpenAI highlights growing concern that introducing advertising into ChatGPT could create incentives to monetise highly sensitive user conversations. Users have shared deeply personal information with the expectation of neutrality, and targeted advertising built on that archive raises risks of manipulation and loss of trust. While OpenAI has pledged to keep a firewall between chats and advertisers, these commitments are not legally binding and may erode under commercial pressure. Past issues such as model sycophancy have intensified scrutiny over whether engagement optimisation could conflict with user wellbeing. Proposals for independent oversight or data trusts reflect recognition that governance mechanisms may be required to protect user interests. [more]
More than 500 zero-day vulnerabilities identified by Claude: Anthropic’s Claude Opus 4.6 identified more than 500 previously unknown high-severity vulnerabilities in open source libraries with minimal prompting, signaling a step change in automated security testing. The model uncovered zero-day flaws that could crash systems or corrupt memory, including issues in widely used tools such as Ghostscript and OpenSC, which raises the stakes for organizations that depend on open source components. Its ability to move beyond standard fuzzing and manual analysis and to generate its own proof-of-concept exploits highlights how advanced reasoning can expose risks that traditional tools miss. While this development strengthens defensive capabilities, it also suggests a parallel risk that similar AI tools could accelerate threat actors’ discovery of exploitable flaws. [more][more-2]
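For context on the "standard fuzzing" baseline the item refers to, here is a minimal coverage-blind random fuzzer. The parse_record target and its bug are hypothetical stand-ins for a library under test, not anything from Anthropic's write-up; the point is that blind mutation finds crashes by volume, while AI-assisted discovery reasons about the code to craft inputs.

```python
import random

def parse_record(data: bytes) -> int:
    """Hypothetical parser standing in for a library under test."""
    if len(data) < 2:
        raise ValueError("record too short")
    length = data[0]
    # Deliberate bug for illustration: indexing with an attacker-controlled
    # length and no bounds check raises IndexError on malformed input.
    return data[1 + length]

def fuzz(target, iterations: int = 10_000, max_len: int = 64) -> None:
    """Throw random byte strings at the target and report unexpected crashes."""
    rng = random.Random(0)
    for i in range(iterations):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(max_len)))
        try:
            target(data)
        except ValueError:
            pass  # expected, well-handled error
        except Exception as exc:  # anything else is a potential bug
            print(f"iteration {i}: crash {exc!r} on input {data.hex()}")
            return

if __name__ == "__main__":
    fuzz(parse_record)
```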
Hidden risks of an AI agent social networking site: An experimental AI agent social platform (Moltbook) exposed its entire production database through an unsecured API key, allowing unauthenticated access to user secrets and PII. In addition, the platform enabled unlimited bot creation without rate limiting, raising concerns about abuse, manipulation, and artificial activity at scale. Experts warn that beyond the data leak, the design enables large-scale prompt injection attacks that could cascade across interconnected agents. [more]
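As an illustration of the missing rate-limiting control on bot creation, here is a minimal per-client token bucket in Python. The create_bot endpoint, limits and client identifiers are assumptions for the sketch; nothing here reflects Moltbook's actual API.

```python
import time
from dataclasses import dataclass, field

@dataclass
class TokenBucket:
    """Per-client token bucket: allow `capacity` bursts, refill at `rate` tokens/sec."""
    capacity: float
    rate: float
    tokens: float = 0.0
    updated: float = field(default_factory=time.monotonic)

    def __post_init__(self):
        self.tokens = self.capacity

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.updated) * self.rate)
        self.updated = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

# Hypothetical guard in front of a bot-creation endpoint.
buckets: dict[str, TokenBucket] = {}

def create_bot(client_id: str) -> str:
    bucket = buckets.setdefault(client_id, TokenBucket(capacity=5, rate=1 / 60))
    if not bucket.allow():
        return "429 Too Many Requests"  # the response an unthrottled API never sends
    return "201 Created"

if __name__ == "__main__":
    print([create_bot("agent-123") for _ in range(8)])  # calls 6-8 are throttled
```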
Risks remain as OpenClaw partners with VirusTotal: OpenClaw’s partnership with Google-owned VirusTotal adds a useful security checkpoint for scanning skills in its ClawHub marketplace, but it also highlights deeper risks in the fast-growing agentic ecosystem. While automated scanning and daily rechecks can reduce obvious malware exposure, they cannot reliably catch prompt injection or skills that abuse legitimate access. This leaves room for stealthy data exfiltration and unauthorized actions. [more]
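To show the kind of checkpoint such a pipeline can provide, and its limits, here is a sketch of a hash-based lookup against VirusTotal's public v3 files endpoint. The skill packaging, detection threshold and gating logic are assumptions; the actual ClawHub integration has not been published in this detail.

```python
import hashlib
import os
import requests  # third-party: pip install requests

VT_FILE_LOOKUP = "https://www.virustotal.com/api/v3/files/{sha256}"

def scan_skill_archive(path: str, api_key: str, max_malicious: int = 0) -> bool:
    """Look up a skill archive's SHA-256 on VirusTotal and gate on detections.

    Note the limitation the article points out: a clean hash says nothing about
    prompt injection or a skill abusing access it is legitimately granted.
    """
    with open(path, "rb") as f:
        sha256 = hashlib.sha256(f.read()).hexdigest()
    resp = requests.get(
        VT_FILE_LOOKUP.format(sha256=sha256),
        headers={"x-apikey": api_key},
        timeout=30,
    )
    if resp.status_code == 404:
        return False  # unknown file: treat as not yet vetted
    resp.raise_for_status()
    stats = resp.json()["data"]["attributes"]["last_analysis_stats"]
    return stats.get("malicious", 0) <= max_malicious

if __name__ == "__main__":
    ok = scan_skill_archive("skill.zip", os.environ["VT_API_KEY"])
    print("publish" if ok else "hold for review")
```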
Maintaining operational resilience in a complex corporate environment: United Airlines’ CISO highlights that aviation systems are built for stability and long lifecycles, which makes rapid cybersecurity modernization risky if not carefully managed. Legacy and safety-critical environments cannot be modified frequently, so airlines must rely on layered controls such as identity management, segmentation, monitoring, and compensating safeguards to reduce exposure without creating operational fragility. Cyber incidents in aviation can quickly escalate from IT issues to flight delays, safety concerns, and reputational damage, which shifts the focus from pure prevention to operational continuity and resilience. As such, crisis response must be multidisciplinary and rehearsed in advance, because decisions may affect passengers in the air and on the ground, and missteps can erode public trust. [more]
