TechRisk #117: 1-bit AI model
Plus, a Python framework flaw affects AI services, vibe coding produces hallucinated packages over 5% of the time, a DeFi trading platform lost $7M, and more!
Tech Risk Reading Picks
Efficient 1-bit AI model: Microsoft researchers have introduced BitNet b1.58 2B4T, a highly efficient 1-bit AI model (or "bitnet") with 2 billion parameters that can run on CPUs, including Apple’s M2, without relying on GPUs. Bitnets use extreme weight quantization — reducing values to -1, 0, or 1 — enabling them to operate faster and more memory-efficiently than traditional models. Trained on 4 trillion tokens, BitNet reportedly outperforms similar-sized models from Meta, Google, and Alibaba on key benchmarks, and runs up to twice as fast while consuming significantly less memory. However, it depends on Microsoft’s custom bitnet.cpp framework, which currently limits hardware compatibility and excludes GPU support, posing challenges for broader adoption. [more]
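The ternary trick above can be sketched in a few lines. This is an illustrative approximation of "absmean" weight quantization in the spirit of BitNet b1.58, not Microsoft's actual bitnet.cpp code: weights are scaled by their mean absolute value, rounded, and clipped to {-1, 0, 1}, so most multiplications in a matmul become additions and subtractions.

```python
import numpy as np

def ternary_quantize(w: np.ndarray, eps: float = 1e-8):
    """Quantize a weight tensor to {-1, 0, 1} with a per-tensor absmean scale.
    Illustrative sketch only, not the BitNet reference implementation."""
    scale = np.abs(w).mean() + eps              # absmean scaling factor
    w_q = np.clip(np.round(w / scale), -1, 1)   # ternary weights
    return w_q.astype(np.int8), scale

def ternary_matmul(x: np.ndarray, w_q: np.ndarray, scale: float) -> np.ndarray:
    """Matmul against ternary weights; the scale is re-applied afterwards."""
    return (x @ w_q) * scale

w = np.random.randn(4, 4)
w_q, s = ternary_quantize(w)
y = ternary_matmul(np.random.randn(2, 4), w_q, s)
```

Because every stored weight is one of three values, the model fits in far less memory than 16-bit weights, which is what enables CPU-only inference.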
BentoML flaw affecting AI services: A critical security flaw (CVE-2025-27520) has been discovered in BentoML, a popular Python framework for deploying AI services, which allows unauthenticated attackers to execute remote code and potentially take over servers. The vulnerability, rated 9.8 in severity and found in versions 1.3.8 through 1.4.2, stems from improper input validation in the deserialize_value() function within serde.py, allowing malicious data disguised as serialized input to be executed. Notably, this flaw is a reappearance of a previously fixed issue (CVE-2024-2912) that resurfaced in version 1.3.8. The exploit involves Python Pickle files containing harmful instructions, posing risks such as data theft or server compromise. A fix is available in version 1.4.3, and immediate upgrading is recommended; if that is not possible, a Web Application Firewall may help mitigate the risk, though it will not eliminate it completely. [more]
Legal status of AI: The central concern in today’s AI discourse isn’t runaway machines but the legal frameworks that might quietly grant AI systems economic and legal rights—like owning property or entering contracts—without human accountability. As AI grows more autonomous, it risks embedding itself deeply into economic systems, distorting foundational institutions of ownership and responsibility. Drawing parallels to historical legal frameworks like the Civil Rights Act of 1871, the essay argues for clearly defined legal boundaries that deny AI personhood and associated rights, warning that without such limits, humans risk becoming subordinate to systems they no longer control—permanently. [more]
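Returning to the BentoML flaw above: the root cause is a long-known property of Python's pickle format, where deserializing attacker-controlled bytes can invoke arbitrary callables. A minimal, harmless sketch (the record function stands in for something dangerous like os.system):

```python
import pickle

log = []

def record(msg):
    """Benign stand-in for an attacker's payload (e.g. os.system)."""
    log.append(msg)
    return msg

class Payload:
    # pickle consults __reduce__ when serializing; whatever callable it
    # returns is invoked with the given arguments at *load* time.
    def __reduce__(self):
        return (record, ("attacker code ran during unpickling",))

blob = pickle.dumps(Payload())
pickle.loads(blob)   # deserialization alone is enough to run record()
```

This is why unpickling untrusted input is unsafe by design; services that must accept serialized data from clients should prefer formats like JSON that carry only data, never code.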
Vibe coding produces 5% false code: Large language models (LLMs) are both praised and criticized for their coding abilities, but researchers like Joe Spracklen at the University of Texas at San Antonio (UTSA) warn that their flaws could be exploited through a practice called "vibe coding." LLMs often hallucinate plausible-sounding but incorrect code, which, in environments using package managers (like npm or PyPI), might result in references to non-existent packages. Malicious actors could anticipate these hallucinations and upload fake packages as attack vectors. Even advanced models like GPT-4 generate such false packages over 5% of the time. While researchers propose mitigation strategies, this highlights the critical need for human oversight in maintaining code security. [more]
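One practical mitigation, sketched below under assumptions of my own (the package names and the idea of a vetted allowlist are illustrative, not from the research): screen LLM-suggested dependencies against a list you trust, such as an export from a private index mirror, before anything reaches `pip install`.

```python
# Hedged sketch: gate LLM-suggested dependencies behind a vetted allowlist
# rather than installing them blindly. VETTED would come from your private
# mirror or security team; the names here are made up for illustration.

VETTED = {"requests", "numpy", "flask"}

def filter_suggested(packages):
    """Split suggested package names into vetted and suspect lists."""
    ok = [p for p in packages if p.lower() in VETTED]
    suspect = [p for p in packages if p.lower() not in VETTED]
    return ok, suspect

ok, suspect = filter_suggested(["requests", "reqeusts-toolbelt2"])
# the misspelled, hallucinated-looking name ends up in `suspect`
```

A suspect name is exactly what an attacker pre-registering hallucinated packages is counting on, so a human review of that list is the cheap line of defense.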
AI-generated audio phishing: Hackers are increasingly using AI-generated audio and deepfake technology to impersonate tax professionals and IRS agents during tax season, enabling more convincing phishing scams that trick individuals into handing over sensitive financial data. These AI-powered schemes include voice phishing calls, fake video messages, and highly realistic emails that mimic legitimate IRS communication, often exploiting stolen personal data to enhance credibility. Experts warn of a surge in such attacks, along with mobile scams, spoofed websites, and malware-laden phishing emails exploiting tax-related search terms and unpatched software. To stay safe, individuals are urged to verify identities, be wary of urgent demands, and use tools to detect manipulated content. [more]
Challenges in cloud identity management: Sysdig’s 2025 Cloud-Native Security & Usage Report highlights both progress and challenges in enterprise cloud security, noting strides in identity and vulnerability management, AI security, and threat detection across global regions. The rapid rise in AI workload adoption, with 500% growth and doubled use of generative AI packages, has fortunately been accompanied by a 38% drop in public exposure, reflecting a shift toward more secure implementations. Security teams are now detecting threats in under five seconds and responding within minutes—faster than the critical 10-minute attack window. However, emerging risks from machine identities (which outnumber human ones 40,000 to 1), container image bloat, and attacker automation remain significant concerns. The report also stresses the growing importance of open-source tools like Kubernetes and Falco, while warning that cybercriminals increasingly exploit open-source software for attacks. Overall, organizations are focusing more on in-use vulnerabilities—now under 6%—signaling a smarter, more targeted security approach. [more][more-report]
Aging IT infrastructure hits aviation sector: The aviation industry is grappling with escalating cyber threats due to outdated systems, aging infrastructure, and increasingly sophisticated attacks, according to a new report by the Foundation for Defense of Democracies. The report urges the FAA to overhaul the air traffic control system with a focus on cyber resilience and recommends joint cyber vulnerability assessments by the TSA, FAA, and CISA at major civilian-military airports. While steps have been taken to improve cybersecurity, growing travel demands are straining fragile systems, risking major disruptions even without direct attacks—evidenced by recent incidents involving Southwest Airlines, CrowdStrike, and Boeing. The FAA maintains it has a comprehensive cybersecurity approach, though concerns over supply chain vulnerabilities and ransomware attacks continue to mount. [more]
Device code flow authentication MFA bypass: Microsoft announced in February 2025 a managed Conditional Access policy to block the device code flow authentication method in Microsoft Entra ID (formerly Azure AD), particularly for organizations not actively using it. Device code flow, used on input-limited devices like smart TVs or command-line tools, is vulnerable to phishing attacks, as highlighted by a recent campaign from threat actor STORM-2372. This group exploited the flow to hijack sessions and bypass MFA protections. To mitigate this risk, organizations are advised to disable device code flow unless it's essential, implement Conditional Access policies to block it, and educate users on associated phishing risks. This action aligns with Microsoft’s Secure Future Initiative to enforce secure-by-default configurations and reduce exposure to legacy authentication methods. [more]
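For teams wanting to block the flow themselves, a Conditional Access policy body along these lines can be created via Microsoft Graph. This is a hedged sketch, not a verbatim Microsoft sample; the field names (in particular the authenticationFlows condition with transferMethods) should be checked against current Graph documentation before use:

```json
{
  "displayName": "Block device code flow",
  "state": "enabled",
  "conditions": {
    "users": { "includeUsers": ["All"] },
    "applications": { "includeApplications": ["All"] },
    "authenticationFlows": { "transferMethods": "deviceCodeFlow" }
  },
  "grantControls": { "operator": "OR", "builtInControls": ["block"] }
}
```

Scoping to all users with a block grant mirrors Microsoft's guidance of disabling the flow outright unless a specific device scenario requires it.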
Web3 Cryptospace Spotlight
Whitehat saved $2.6M: A white hat MEV operator known as c0ffeebabe.eth intercepted approximately $2.6 million in stolen crypto assets after a vulnerability was introduced to Morpho Labs’ Morpho Blue DeFi protocol via a front-end update on April 10. The exploit occurred the next day, leading to a loss reported by blockchain security firm PeckShield. However, c0ffeebabe.eth, recognized for similar white hat interventions in past DeFi exploits, front-ran the malicious transaction, securing the funds before the hacker could. Morpho Labs promptly rolled back the faulty update, assured users that core protocol funds remained safe, and began investigating the issue, promising a detailed report. It's still unclear if the intercepted funds have been returned to the original owner. [more]
$7M lost due to price oracle flaw: KiloEx, a new DeFi perpetual trading platform backed by YZi Labs, suffered a devastating $7 million multi-chain exploit due to a vulnerability in its price oracle, shaking investor confidence and slashing its KILO token market cap by 30%. The attack, executed via a wallet funded through Tornado Cash, affected multiple blockchains including BNB Smart Chain, Base, and Taiko, forcing KiloEx to halt operations and launch a bug bounty to investigate the breach. This incident underscores the urgent need for stronger security protocols in cross-chain DeFi platforms and raises critical questions about the long-term stability of the ecosystem. [more]
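Price-oracle exploits of this kind typically hinge on a platform trusting a single manipulable price source. As an illustrative sketch only (KiloEx's actual oracle design is not described here), one common hardening is to aggregate several independent feeds and halt when any source strays too far from the median:

```python
from statistics import median

def robust_price(sources, max_rel_dev=0.02):
    """Aggregate independent price feeds; refuse to trade if any feed
    deviates more than max_rel_dev from the median, a common sign of
    manipulation. Illustrative sketch, not KiloEx's implementation."""
    m = median(sources)
    for p in sources:
        if abs(p - m) / m > max_rel_dev:
            raise ValueError("price deviation exceeds threshold; halt trading")
    return m

# Normal market: feeds agree within tolerance, median is used.
price = robust_price([100.0, 100.1, 99.9])
```

A feed pushed to, say, 150.0 while the others sit near 100.0 would trip the deviation check instead of silently mispricing every open position.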
Stealth campaign targeting crypto wallets: Security researchers from ReversingLabs have discovered a stealthy malware campaign targeting crypto wallets through malicious packages uploaded to trusted open-source repositories like npm. Disguised as a legitimate tool for converting PDFs to Office documents, the malware-laden npm package covertly infects popular crypto wallets like Atomic and Exodus by overwriting their files to redirect outgoing transactions to wallets controlled by cybercriminals. Simply deleting the package isn’t enough to stop the attack; the only effective remedy is to completely remove and reinstall the affected wallet software. [more]
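Because the attack works by silently overwriting wallet application files, one generic detection (a sketch under my own assumptions; the file name and manifest idea are hypothetical, not from the ReversingLabs report) is to hash installed files against a known-good manifest captured at install time:

```python
import hashlib
import tempfile
from pathlib import Path

def sha256_file(path: Path) -> str:
    """Hash one file; a real deployment would use a vendor-signed manifest."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def tampered_files(root: Path, manifest: dict) -> list:
    """Return relative paths whose current hash differs from the manifest."""
    return [rel for rel, digest in manifest.items()
            if sha256_file(root / rel) != digest]

# Demo: simulate the malware's overwrite and detect it.
with tempfile.TemporaryDirectory() as d:
    app = Path(d) / "app.asar.js"          # hypothetical wallet bundle file
    app.write_text("legitimate wallet code")
    manifest = {"app.asar.js": sha256_file(app)}
    app.write_text("attacker transaction-redirect code")  # the overwrite
    flagged = tampered_files(Path(d), manifest)
```

A mismatch on any bundled file is a strong signal to reinstall from the official source rather than just removing the malicious package, matching the remediation advice above.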