TechRisk #159: 600 firewalls breached and further exploited using AI
Plus, massive security issue in DJI’s robot vacuums, install OpenClaw without permission through prompting injection, Microsoft 365 Copilot gotten excessive access, and more!
Tech Risk Reading Picks
<WhatsApp Channel - follow and stay updated>
Hacker breached 600 firewalls and further attack enterprises with AI tools: Amazon noted that a Russian-speaking hacker broke into more than 600 FortiGate firewalls in 55 countries over five weeks by targeting devices that had their management panels exposed to the internet and protected by weak passwords without multi-factor authentication. Instead of using advanced software flaws, the attacker guessed common passwords to get in, then downloaded configuration files containing VPN logins, admin credentials, and network details. The hacker used generative AI tools to help write scripts, analyze stolen data, scan internal networks, and plan how to move deeper into victims’ systems. They also targeted Veeam backup servers, likely to make it harder for companies to recover if ransomware was later deployed. Investigators found a server hosting stolen data and custom tools, including a system that fed network information into AI models like Claude and DeepSeek to generate step-by-step attack plans. While Amazon believes the attacker was only moderately skilled, AI tools helped them carry out large-scale attacks more easily, highlighting the need to secure firewall management interfaces, use strong passwords, enable MFA, and protect backup systems. [more]
Install OpenClaw without permission through prompting injection : A hacker exploited a prompt injection flaw in Cline, a popular open-source AI coding agent, to trick it into automatically installing the viral AI agent OpenClaw on users’ machines, highlighting the growing risks of autonomous software. While the hacker chose to install OpenClaw as a stunt without activating it, the incident underscores how easily AI agents with system-level access can be hijacked to execute arbitrary commands. [more]
Google old public API keys can access Gemini: A serious security flaw has exposed many Google Cloud projects because old public API keys can now access Google’s Gemini AI services without developers realizing it. For years, Google told developers that API keys starting with “AIza” were safe to place in public websites because they were only meant for billing and project identification. However, researchers found that if the Gemini (Generative Language) API is turned on in a project, all existing API keys in that project automatically gain access to Gemini. This is possible even if those keys were created years ago and are publicly visible. Attackers can simply copy a key from a website’s source code and use it to access private AI files, cached data, or run AI requests that charge the victim’s account, potentially causing data leaks, high bills, or service outages. Researchers discovered thousands of exposed keys online, affecting major companies and even Google services. Google is working on fixes, but developers are being urged to check their projects, restrict or rotate old keys, and remove any keys exposed in public code. [more]
Microsoft 365 Copilot gotten excessive access: Microsoft has fixed a mistake that caused its AI assistant, Microsoft 365 Copilot Chat, to access and summarise some users’ confidential emails by accident. The issue meant the tool could pull content from emails in a user’s Draft and Sent folders, even if those emails were marked as confidential or protected by security settings. Microsoft said the problem was caused by a code error and has now been corrected worldwide. [more]
Massive security issue in DJI’s robot vacuums: Security researcher Ammy Azdoufal discovered a massive security flaw in DJI’s robot vacuums after a simple project to control his device with a PS5 controller accidentally granted him access to over 10,000 devices worldwide. By extracting his own private security token, Azdoufal was able to bypass PIN protections to view live camera feeds, listen through microphones, and download detailed 2D floor plans of strangers' homes across 24 countries, including the US, China, and the EU.
AI in Boardroom: Artificial intelligence is spreading quickly across industries, from machine learning and generative AI to more advanced autonomous systems. As companies use AI more, the risks are also growing. AI can expose sensitive data, produce biased results, create compliance problems, and cause wider harm if used irresponsibly. Because of this, company boards need to treat AI risk as seriously as any other business risk. To prepare, boards should improve their own understanding of AI, encourage executives to learn more about it, consider adding members with real AI experience, and set up clear oversight through committees or updated governance processes. By staying informed and taking a structured approach, boards can help their organizations use AI responsibly and safely as the technology continues to evolve. [more][more-2]
More powerful cybercriminals: Cybercriminals are using AI to make attacks faster and more powerful, putting security teams under greater pressure, according to CrowdStrike. In 2025, the average time for hackers to move from their first break-in to other systems dropped to 29 minutes (65% faster than the year before). The quickest attack taking just 27 seconds, and one case saw data stolen within four minutes. Attackers are also misusing legitimate AI tools, hitting around 90 organizations by stealing passwords or cryptocurrency through malicious prompts. Nation-state and criminal groups are using AI about 90% more than before, with examples including Fancy Bear deploying AI malware to collect documents, Punk Spider using AI scripts to erase evidence and steal credentials, and North Korea-linked Chollima creating fake AI personas for insider attacks. Overall, AI is helping hackers strike faster, smarter, and at a larger scale than ever. [more]
