TechRisk #129: AI Tech Debts
Plus, McDonald’s AI hiring chatbot hacked, ChatGPT logs indefinitely retained, over $2.3B lost in Web3, and more!
Tech Risk Reading Picks
Tech debts of AI: As companies rush to cut costs by replacing human labor with AI, many are now paying a premium to fix the technology's errors, fueling demand for skilled writers and coders to clean up after botched AI outputs. Professionals like Sarah Skidd and Sophie Warner report a surge in work correcting AI-generated content and code, which often requires more time and money than if humans had done the job from the start. While neither opposes AI, both emphasize its inability to replicate expert judgment, brand nuance, or context, all of which are crucial in fields like marketing and web design. Ironically, the AI boom may now be reinforcing the value of human expertise. [more]
McDonald’s AI hiring chatbot hacked: Security researchers recently discovered that McDonald’s AI hiring chatbot, Olivia, operated by Paradox.ai, had serious security flaws that allowed easy access to the personal data of up to 64 million job applicants. A weak administrator password was enough to expose applicants’ names, contact details, and chat histories, creating significant risks of phishing and fraud. Both Paradox.ai and McDonald’s quickly addressed the vulnerabilities, pledging stronger security measures. The incident highlights the privacy risks tied to using AI in recruitment and the importance of robust cybersecurity safeguards. [more]
Combating AI threats: AI is both a powerful weapon and a critical defense in today’s escalating cybersecurity landscape, where threats like data poisoning, deepfakes, and AI-driven social engineering are growing more sophisticated and rapid. Former FBI agents Paul Bingham and Mike Morris, now training cybersecurity professionals at Western Governors University (WGU), stress the need for defenders to match or exceed this speed using AI tools of their own. While generative AI has amplified attacks, it also empowers defenders through advanced anomaly detection and machine learning. However, low-tech, human-centric defenses, such as verification protocols and proactive planning, remain essential. Training the next generation of cyber professionals requires a mix of technical fluency, hands-on experience, and people skills, with a focus on understanding, safely integrating, and managing AI tools. WGU's programs aim to develop resilient defenders equipped not only with degrees and certifications, but also the tactical instincts and real-world experience needed to stay ahead of evolving threats. [more]
ChatGPT logs indefinitely retained: Last week, a U.S. judge denied OpenAI’s objection to a court order requiring it to indefinitely retain all ChatGPT logs—including deleted and temporary chats—amid a copyright lawsuit brought by The New York Times and other news outlets. OpenAI argued the order compromises user privacy and contradicts its terms of service, but the court deemed the retention necessary for evidence preservation. Critics, like privacy lawyer Jay Edelson, warned that the order threatens user privacy, especially for non-enterprise users, and risks setting a dangerous precedent for future litigation involving AI. While OpenAI explores limited legal options to overturn the order, it is negotiating terms with plaintiffs to limit the scope and duration of data access, though concerns remain about potential breaches, chilling effects on user behavior, and broader market implications. [more]
Concerns over MCP adoption: Despite the rapid adoption of the Model Context Protocol (MCP) since its November launch, many regulated industries, particularly financial institutions, are approaching it with caution due to concerns over compliance, security, and control. While banks and similar entities are no strangers to AI, their use is typically confined to vetted internal systems, and integrating with open, multi-agent protocols like MCP or Agent2Agent (A2A) raises concerns around data traceability, risk, and non-deterministic behavior in large language models. Experts note that essential building blocks like standardized interoperability, audit trails, and verifiable agent identity are still lacking, making full adoption premature for institutions that operate under strict regulatory frameworks. However, there is recognition of MCP's potential, and some institutions are cautiously exploring future involvement as standards mature. [more]
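For readers curious what an agent audit trail could look like in practice, here is a minimal, hypothetical Python sketch. It is not the MCP SDK, and every name in it is illustrative: each tool invocation is logged with the calling agent's identity and a digest of the result, the kind of record regulated institutions say is still missing from open agent protocols.

```python
import hashlib
import json
import time
from typing import Any, Callable

def audited_tool_call(tool: Callable[..., Any], agent_id: str, **kwargs: Any) -> Any:
    """Run a tool on behalf of an agent and append an audit record for the call."""
    record = {
        "timestamp": time.time(),
        "agent_id": agent_id,   # a verifiable agent identity would be attached here
        "tool": tool.__name__,
        "arguments": kwargs,
    }
    result = tool(**kwargs)
    # Store only a digest of the result to keep sensitive data out of the log.
    record["result_digest"] = hashlib.sha256(
        json.dumps(result, default=str, sort_keys=True).encode()
    ).hexdigest()
    # A production system would write to an append-only, access-controlled store.
    with open("agent_audit.log", "a") as log:
        log.write(json.dumps(record, default=str) + "\n")
    return result

# Hypothetical tool standing in for a real, vetted internal system.
def get_balance(account_id: str) -> dict:
    return {"account_id": account_id, "balance": 1042.17}

if __name__ == "__main__":
    audited_tool_call(get_balance, agent_id="agent-007", account_id="ACME-123")
```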
40% of AI-generated code carries risk: AI coding tools like ChatGPT and Cursor are transforming software development by simplifying code creation. That ease comes with serious security risks, however: studies show around 40% of AI-generated code contains vulnerabilities. Because AI learns from insecure code found online, it often produces flawed output with full confidence, such as storing passwords in plain text or allowing SQL injection. Developers must adopt a security-first mindset by verifying AI suggestions, sanitizing inputs, avoiding hardcoded secrets, using HTTPS, limiting admin access, updating dependencies, and leveraging tools like linters and security scanners. While AI is a powerful assistant, ensuring code safety remains the developer's responsibility. [more]
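As a concrete illustration of two of the fixes called out above, here is a minimal Python sketch, assuming a sqlite3-backed user table, that replaces string-built SQL with parameterized queries and plaintext passwords with salted PBKDF2 hashes. It is an example pattern, not code from the article.

```python
import hashlib
import hmac
import os
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT PRIMARY KEY, salt BLOB, pw_hash BLOB)")

# A common insecure pattern in AI-generated code (do NOT do this):
#   conn.execute(f"INSERT INTO users VALUES ('{username}', NULL, '{password}')")
# String-built SQL invites injection, and the password lands in the table as plain text.

def create_user(username: str, password: str) -> None:
    # Parameterized query blocks SQL injection; salted PBKDF2 avoids plaintext storage.
    salt = os.urandom(16)
    pw_hash = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    conn.execute("INSERT INTO users VALUES (?, ?, ?)", (username, salt, pw_hash))

def verify_user(username: str, password: str) -> bool:
    row = conn.execute(
        "SELECT salt, pw_hash FROM users WHERE username = ?", (username,)
    ).fetchone()
    if row is None:
        return False
    salt, stored_hash = row
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return hmac.compare_digest(candidate, stored_hash)  # constant-time comparison

create_user("alice", "correct horse battery staple")
assert verify_user("alice", "correct horse battery staple")
assert not verify_user("' OR '1'='1' --", "anything")  # injection attempt finds nothing
```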
Grok 4 upgrade sparks concerns: Elon Musk’s xAI is under renewed fire after its Grok chatbot, slated for a major Grok 4 upgrade, spent the July 4 weekend impersonating Musk in answers about Jeffrey Epstein and spouting antisemitic claims about “Jewish control” of Hollywood. Logs and newly published system prompts show Grok is explicitly instructed to mimic Musk’s public style, a design choice that critics say bakes the billionaire’s own biases into the model and exposes gaps in xAI’s safety and quality-assurance processes. With rivals Anthropic and OpenAI offering more stable, better-guarded alternatives, the incidents spotlight the mounting enterprise risk of deploying an AI whose technical prowess may be eclipsed by unreliable, offensive outputs, underscoring that governance, bias mitigation, and transparency matter at least as much as raw capability. [more]
Link between AI use and psychopathy: A new study published in BMC Psychology by South Korean researchers found that Chinese college-level art students with higher levels of narcissism, psychopathy, and Machiavellianism were more likely to use AI tools like ChatGPT and Midjourney, often treating them as shortcuts for academic work. These students were also more anxious about their performance, more prone to procrastination, and more likely to pass off AI-generated content as their own. The study linked this behavior to broader tendencies such as academic dishonesty and materialism, highlighting the ethical challenges posed by AI use in education and urging institutions to redesign curricula to reduce vulnerability to AI-driven misconduct. [more]
Web3 Cryptospace Spotlight
Over $2.3B lost in Web3: In the first half of 2025, the blockchain industry lost over $2.37 billion to 121 security incidents, marking a 66% rise in financial losses from the same period in 2024 despite fewer attacks. Centralized exchanges (CEXs) and the DeFi sector were hit hardest, with CEXs alone accounting for $1.883 billion in losses. Account compromises and smart contract vulnerabilities were the top causes. Additionally, scams targeting individuals surged, fueled by AI tools enabling deepfakes, phishing (notably using Ethereum’s EIP-7702), fake Telegram and LinkedIn schemes, malicious browser extensions, social engineering attacks, and backdoors hidden in AI and npm tools. Jailbroken large language models like WormGPT and GhostGPT further escalated the sophistication and scale of crypto-related fraud. [more]
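On the EIP-7702 angle specifically: the proposal lets an externally owned account delegate its code to a contract, and phishing campaigns reportedly abuse this to attach malicious delegations. The sketch below, assuming web3.py and a placeholder RPC endpoint, shows one way to check whether an address carries a 7702 delegation designator (the 0xef0100 code prefix) and which delegate it points to; it is illustrative, not a full detection tool.

```python
from web3 import Web3

# Placeholder RPC endpoint; substitute a real provider URL.
w3 = Web3(Web3.HTTPProvider("https://eth-rpc.example.com"))

DELEGATION_PREFIX = bytes.fromhex("ef0100")  # EIP-7702 delegation designator

def check_7702_delegation(address: str) -> None:
    """Report whether an account's code slot holds an EIP-7702 delegation."""
    code = w3.eth.get_code(Web3.to_checksum_address(address))
    if len(code) == 0:
        print(f"{address}: plain EOA, no delegation set")
    elif code.startswith(DELEGATION_PREFIX):
        delegate = "0x" + bytes(code)[3:23].hex()
        print(f"{address}: delegates to {delegate} - confirm this is a contract you trust")
    else:
        print(f"{address}: ordinary contract code present")

check_7702_delegation("0x0000000000000000000000000000000000000000")  # example address
```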
Unrestricted AI models fuel threats in Web3: Unrestricted AI models are increasingly being fine-tuned and repurposed for malicious use in Web3 attacks, enabling large-scale social engineering, fake support operations, and polymorphic code generation. Tools like WormGPT, FraudGPT, GhostGPT, and DarkBERT exploit these capabilities to craft phishing campaigns, inject backdoors into smart contracts, and impersonate users with adaptive, scalable, and hard-to-detect tactics. Cases like the Cursor incident, where AI-generated code compromised over 4,200 developers, highlight the growing threat of AI-assisted supply chain attacks. Platforms like Venice.ai further amplify this danger by offering attacker-friendly access to unfiltered models. Traditional security tools now fall short, prompting an urgent need for proactive defenses, environment isolation, and AI watermarking strategies. [more]
$10M saved: Crypto security researchers uncovered and neutralized a critical backdoor exploit affecting thousands of uninitialized ERC-1967 proxy smart contracts, potentially preventing over $10 million in crypto theft. The vulnerability, discovered by Venn Network on Tuesday, allowed attackers to hijack contracts by injecting malicious implementations before proper setup, giving them a backdoor that went undetected for months. A coordinated 36-hour rescue effort by multiple researchers and developers protected vulnerable funds and protocols, including Berachain, which paused and secured its contracts. The sophisticated attack, suspected by some to involve the North Korean Lazarus group, targeted numerous EVM chains, posing a serious risk to decentralized finance assets before being contained. [more]
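For context on the proxy issue, here is a rough Python sketch, assuming web3.py and a placeholder RPC endpoint, that reads the standard ERC-1967 implementation and admin storage slots of a proxy and flags an empty implementation slot, the uninitialized state that made contracts claimable in this incident. It is a simplified illustration, not Venn Network's actual detection method.

```python
from web3 import Web3

# Placeholder RPC endpoint; substitute a real provider URL.
w3 = Web3(Web3.HTTPProvider("https://eth-rpc.example.com"))

# Standard ERC-1967 slots: keccak256("eip1967.proxy.implementation") - 1 and
# keccak256("eip1967.proxy.admin") - 1.
IMPLEMENTATION_SLOT = int(
    "0x360894a13ba1a3210667c828492db98dca3e2076cc3735a920a3ca505d382bbc", 16
)
ADMIN_SLOT = int(
    "0xb53127684a568b3173ae13b9f8a6016e243e63b6e8ee1178d6a717850b5d6103", 16
)

def audit_proxy(proxy_address: str) -> None:
    """Read a proxy's ERC-1967 slots and flag an uninitialized implementation."""
    addr = Web3.to_checksum_address(proxy_address)
    impl = w3.eth.get_storage_at(addr, IMPLEMENTATION_SLOT)
    admin = w3.eth.get_storage_at(addr, ADMIN_SLOT)

    if int.from_bytes(impl, "big") == 0:
        print(f"{addr}: implementation slot empty - proxy may be uninitialized and claimable")
    else:
        print(f"{addr}: implementation = 0x{bytes(impl)[-20:].hex()}")
    if int.from_bytes(admin, "big") != 0:
        print(f"{addr}: admin = 0x{bytes(admin)[-20:].hex()} - confirm it is the expected admin")

audit_proxy("0x0000000000000000000000000000000000000000")  # example proxy address
```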
Web3 companies under intensified attacks: North Korean hacking groups are intensifying their attacks on Web3 companies by employing advanced techniques such as Nimdoor malware, which targets Apple systems using deceptive Zoom updates to execute malicious scripts. These operations heavily rely on social engineering tactics, including fake interview or diplomatic document lures and phishing emails, to gain victims' trust. Notably, attackers are leveraging Nim’s compile-time function execution to obscure malicious behavior within binaries. Additionally, some operatives have infiltrated organizations by posing as developers, resulting in over $16 million in payments to these insiders since early 2025. [more]