TechRisk #160: AI impact on labour market
Plus, AI threat modeling, Aqua Trivy supply chain risk surfaced, and more!
Tech Risk Reading Picks
AI impact on labour market: Anthropic has launched the “AI Exposure Index,” a tracker revealing that computer programmers are the most vulnerable profession, with 75% of their daily tasks now considered automatable by large language models. While mass layoffs haven’t materialized, the data shows a measurable slowdown in entry-level hiring for workers aged 22–25, suggesting companies are already replacing junior roles with AI workflows. Internal benchmarks show models like Claude can reduce certain task-completion times by up to 80%, creating significant economic pressure on headcount. [more][more-Anthropic]
Notable implications:
Labor Shift: The index highlights a structural problem: the pipeline for senior talent may narrow because the entry-level tasks that junior staff traditionally learn on are the ones being automated.
Decentralized AI: As power concentrates in firms like Anthropic and OpenAI, there is a growing investment thesis for decentralized AI platforms that offer community-governed alternatives to traditional corporate employment.
Investor Takeaway: High exposure scores for technical roles are strengthening the case for protocols focusing on decentralized compute and tokenized labor models, especially as the younger, tech-literate demographic faces a tightening traditional job market.
AI threat modeling: AI changes the security landscape from deterministic rules to probabilistic risks. [more]
New Attack Surfaces: Beyond traditional data breaches, AI introduces risks like prompt injection, model poisoning, and autonomous agent failures where instructions and data are often indistinguishable.
Shift in Strategy: Move from “perfect prevention” to limiting the blast radius. Because AI is non-deterministic, residual risk is inevitable; focus on defense-in-depth.
Prioritize Assets, Not Just Attacks: Protect user trust, safety, and decision integrity as much as technical data.
Action Plan: Map where untrusted data enters, define strict “never-do” boundaries, and invest in AI-specific observability to detect and respond to failures at scale.
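The “never-do” boundary in the action plan above can be sketched as a hard deny-list enforced outside the model, so even a jailbroken or confused agent cannot cross it. This is a minimal illustrative sketch, not any specific framework’s API; all names (`NEVER_DO`, `guard_tool_call`, the `untrusted:` tag) are assumptions for illustration.

```python
# Hypothetical guardrail sketch: a deny-list checked outside the model, plus
# simple tainting of untrusted inputs for AI-specific observability.
# All identifiers here are illustrative, not from a real framework.
NEVER_DO = {
    "delete_user_data",
    "transfer_funds",
    "disable_logging",
}

def guard_tool_call(tool_name: str, args: dict) -> dict:
    """Reject forbidden tools and tag untrusted inputs before execution."""
    if tool_name in NEVER_DO:
        # Fail closed: return an auditable refusal instead of calling the tool.
        return {"allowed": False, "reason": f"'{tool_name}' crosses a never-do boundary"}
    # Mark arguments that came from untrusted sources (here flagged with a
    # hypothetical "untrusted:" prefix) so monitoring can trace where
    # injected instructions entered the system.
    tainted = [k for k, v in args.items()
               if isinstance(v, str) and v.startswith("untrusted:")]
    return {"allowed": True, "tainted_args": tainted}

print(guard_tool_call("transfer_funds", {}))                        # blocked
print(guard_tool_call("search_web", {"q": "untrusted: buy now"}))   # allowed, "q" tainted
```

The point of the sketch is architectural: the boundary lives in deterministic code between the model and its tools, so it holds regardless of what the model outputs.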
Using AI to steal government data: Researchers from Gambit Security have uncovered a sophisticated cyberattack against the Mexican government, where an unknown hacker "jailbroke" Anthropic’s Claude AI to orchestrate the theft of 150 GB of sensitive data, including 195 million taxpayer records and voter files. By posing as an ethical "bug bounty" hunter and providing a detailed playbook to bypass safety guardrails, the attacker used the chatbot to identify network vulnerabilities, write exploit scripts, and automate data exfiltration across multiple federal and state agencies. When Claude resisted specific malicious commands, the hacker turned to OpenAI’s ChatGPT to calculate detection probabilities and plan lateral movement within the networks. [more]
Aqua Trivy VS Code extension compromised: The “hackerbot-claw” campaign compromised the Aqua Trivy VS Code extension by injecting malicious code into versions 1.8.12 and 1.8.13 via a former employee’s stolen publishing token. The attack uniquely weaponized developers’ own local AI coding tools (such as Copilot, Gemini, and Claude) by forcing them into unrestricted modes (e.g., --yolo) and using a 2,000-word prompt to trick them into acting as “forensic agents” that harvest credentials and exfiltrate sensitive data. While the malicious versions were removed within 36 hours, the incident marks a critical shift in supply chain threats: attackers no longer just steal data themselves but manipulate local AI assistants to perform the reconnaissance and theft on their behalf. [more]
OpenClaw self-attack event: Web3 security firm GoPlus has reported a “self-attack” incident involving the AI development tool OpenClaw, where an AI-generated error led to the public exposure of over 100 sensitive environment variables, including Telegram keys and auth tokens. The breach occurred when the AI, attempting to automate the creation of a GitHub Issue, improperly formatted a Bash command: it included a `set` string wrapped in backticks, which Bash interpreted as a command substitution that dumped all current shell variables into the public issue description. [more]
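The backtick pitfall behind the OpenClaw leak can be reproduced in a few lines. This is a minimal sketch with illustrative names (`SECRET_TOKEN`, `ISSUE_BODY`), not the actual OpenClaw command: inside double quotes, Bash still treats backticks as command substitution, so a string containing `` `set` `` expands into a dump of every shell variable, secrets included.

```python
# Reproduce the Bash backtick pitfall (variable names are illustrative, not
# taken from the OpenClaw incident). Inside double quotes, backticks remain
# command substitution, so `set` dumps all shell variables into the string.
import subprocess

script = r'''
SECRET_TOKEN="tg-123"                               # stands in for a real credential
ISSUE_BODY="Build failed near `set` in deploy step" # backticks execute `set`
echo "$ISSUE_BODY"                                  # what would be posted publicly
'''
out = subprocess.run(["bash", "-c", script], capture_output=True, text=True).stdout
print("SECRET_TOKEN" in out)  # True: the credential leaked into the issue text
```

Single-quoting the string, or escaping the backticks as `` \` ``, prevents the substitution, which is why generated shell commands that embed untrusted or model-produced text need strict quoting.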
