TechRisk #79: Fighting deepfakes with AI and Blockchain
Plus, export controls on quantum computing systems, a list of 5G technology risks, and more!
Tech Risk Reading Picks
AI and Blockchain to promote technology safety: The integration of AI and blockchain is proving a formidable defense against the rising threat of deepfakes in KYC (Know Your Customer) processes. AI's advanced capabilities in detecting and analyzing deepfakes, combined with blockchain's immutable and transparent data storage, create a robust system for verifying identities and preventing fraud. This pairing hardens KYC checks and keeps verification data tamper-proof, making it far harder for sophisticated forgeries to bypass security controls. Industries such as finance and government stand to benefit most. By continuously advancing detection algorithms and leveraging blockchain's decentralized ledger, organizations can promptly identify and respond to suspicious activity, reducing fraud risk and improving the overall integrity of KYC processes. [more]
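The tamper-proofing idea above can be sketched in a few lines: each KYC verification outcome is hashed together with the previous entry's hash, so editing any earlier record invalidates everything after it. This is a minimal toy illustration of the hash-chaining principle behind blockchain-backed audit trails, not a production design; the `VerificationLedger` class and field names (`deepfake_score`, `passed`) are invented for the example.

```python
import hashlib
import json


def record_hash(record: dict, prev_hash: str) -> str:
    """Hash a verification record together with the previous entry's hash,
    so any later edit to an earlier record breaks the whole chain."""
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()


class VerificationLedger:
    """Toy append-only ledger for KYC verification outcomes (illustrative only)."""

    def __init__(self):
        self.entries = []  # list of (record, hash) pairs

    def append(self, record: dict) -> str:
        prev = self.entries[-1][1] if self.entries else "0" * 64
        h = record_hash(record, prev)
        self.entries.append((record, h))
        return h

    def verify(self) -> bool:
        """Recompute every hash; any tampering surfaces as a mismatch."""
        prev = "0" * 64
        for record, h in self.entries:
            if record_hash(record, prev) != h:
                return False
            prev = h
        return True
```

In a real deployment the chain would live on a distributed ledger rather than in one process, which is what makes the record immutable even against the operator itself.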
AI threat report: A recent report by Bugcrowd highlights that 91% of security leaders feel AI threats are surpassing their teams' capabilities. With 87% actively hiring but 56% understaffed, there's concern that AI could replace human roles within five years. While 90% believe AI might outperform human security, 58% worry its risks outweigh benefits. Ethical hacking and crowdsourced security are becoming critical, with 70% of leaders using these methods to enhance defenses. The report underscores the evolving role of CISOs amid these growing challenges. [more]
OpenAI hack incident in 2023: In 2023, a hacker infiltrated OpenAI's internal messaging systems and stole details about the company's AI technologies from employee discussions in an internal online forum. The breach did not reach the systems where OpenAI's AI models are developed and housed; it was disclosed to staff and the board but kept from the public, since no customer or partner data was compromised. OpenAI executives did not consider the incident a national security threat and did not report it to federal law enforcement. The event has nonetheless raised concerns about AI safety and security, prompting increased scrutiny and calls for regulatory measures to protect advanced AI technologies from misuse. [more]
AI safety programme by Anthropic: Anthropic has launched a funding program to enhance AI evaluation methods, aiming to boost commercial AI adoption. This initiative addresses a gap in assessing AI capabilities and risks, creating more robust benchmarks for complex applications. Key focus areas include cybersecurity and CBRN threat assessments. Improved evaluations could alleviate current challenges in AI adoption, such as safety and reliability concerns. The success of this program depends on the quality of the evaluations, which must be rigorous and relevant to real-world scenarios. [more]
Top AI security risks by Trend Micro: Trend Micro's article highlights the top AI security risks for 2024, categorized into access, data, and reputational risks. These include insecure plugin design, poisoned training data, model theft, and excessive reliance on AI, which can lead to unauthorized actions, misinformation, and legal issues. Organizations are urged to adopt a zero-trust security model, use sandboxing, and verify AI outputs to mitigate these risks. [more]
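The "verify AI outputs" advice above amounts to treating model output as untrusted input. A minimal sketch of one way to do that, under the assumption that an AI agent proposes actions by name: only actions on an explicit allow-list ever execute (the `ALLOWED_ACTIONS` set and `execute_model_action` helper are illustrative, not from any particular framework).

```python
# Zero-trust handling of model output: validate before acting on it.
ALLOWED_ACTIONS = {"summarize", "translate", "classify"}


def execute_model_action(action: str, payload: str) -> str:
    """Run only actions on an explicit allow-list; reject everything else.

    This guards against a model (or a poisoned plugin) suggesting an
    unauthorized action such as deleting files or calling external APIs.
    """
    if action not in ALLOWED_ACTIONS:
        raise ValueError(f"Rejected model-suggested action: {action!r}")
    return f"ran {action} on {len(payload)} chars"
```

The same pattern extends to sandboxing: even allowed actions run in a restricted environment, so a bad output cannot escalate into unauthorized side effects.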
Leading technology companies recognize AI-related risks in their SEC filings: Major tech firms like Microsoft, Google, Meta, and NVIDIA have disclosed AI risks in recent SEC filings. These risks include reputational harm, legal liabilities, regulatory scrutiny, and ethical challenges. Microsoft and Google highlighted concerns about flawed algorithms, biased datasets, and harmful content, while Meta and NVIDIA pointed to potential issues like misinformation, cybersecurity threats, and geopolitical tensions. These companies acknowledge the complexity of managing AI risks and the potential impact of current and proposed regulations, such as the EU’s AI Act and the US's AI Executive Order. [more]
Quantum computing export control: The quantum computing export bans issued by France, Spain, the UK, the Netherlands, and Canada restrict the sale of quantum computers with more than 34 qubits and above a certain error threshold. These bans, identical across the countries involved, stem from the Wassenaar Arrangement, which regulates dual-use technologies. The rationale behind the specific 34-qubit threshold remains undisclosed on national security grounds, and experts are puzzled about its origin. While quantum computers hold potential for significant advances and risks, such as breaking advanced cryptographic encryption, their current high error rates and cooling requirements mean they are not yet a practical threat. The immediate effect of these restrictions is likely increased national isolation in quantum computing research. [more][more-2]
Risk of using weak or outdated encryption: Amid escalating cyber threats, the cybersecurity community is urging organizations to treat weak or outdated encryption as a serious vulnerability. With sophisticated hackers increasingly able to crack antiquated encryption methods, the risks of data breaches, financial loss, and reputational damage have never been higher. Experts emphasize the need for regular audits, updates, and the adoption of modern encryption techniques to safeguard sensitive information, particularly in industries like finance and healthcare. Additionally, stringent regulations such as GDPR and CCPA underscore the necessity of robust encryption practices to avoid severe legal and financial repercussions. [more]
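One concrete case of "outdated encryption" is credential storage: a fast, unsalted hash like MD5 can be cracked at billions of guesses per second, while a modern, deliberately slow key-derivation function resists both rainbow tables and brute force. A minimal stdlib sketch of the modern approach (function names and the iteration count are illustrative choices, not a compliance recommendation):

```python
import hashlib
import hmac
import os

# Outdated and breakable: unsalted, fast, rainbow-table friendly.
#   weak = hashlib.md5(b"s3cret").hexdigest()

def hash_password_pbkdf2(password: str, salt=None, iterations: int = 600_000):
    """Modern approach: salted, deliberately slow PBKDF2-HMAC-SHA256."""
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, iterations, digest


def verify_password(password: str, salt: bytes, iterations: int,
                    expected: bytes) -> bool:
    _, _, digest = hash_password_pbkdf2(password, salt, iterations)
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(digest, expected)
```

Storing the salt and iteration count alongside the digest lets the cost factor be raised over time, which is exactly the kind of periodic update the experts above are calling for.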
5G technology risk: In the rapidly evolving landscape of 5G technology, security poses significant challenges and necessitates innovative solutions. The proliferation of IoT devices expands the attack surface, while network slicing and supply chain vulnerabilities present additional risks. To combat these threats, experts recommend robust encryption, secure network slicing, enhanced supply chain security, advanced threat detection, and privacy-preserving technologies. Emphasizing global collaboration, adherence to data protection laws, and the adoption of a zero-trust architecture are crucial steps to fortify the 5G ecosystem. [more]
Increased Attack Surface: More IoT devices mean more entry points for cybercriminals.
Network Slicing Vulnerabilities: A breach in one virtual network can affect others.
Supply Chain Security: Multiple vendors complicate securing hardware and software.
Software Dependency: Reliance on software-defined networking increases vulnerability risks.
Privacy Concerns: Protecting the vast amounts of data transmitted over 5G is crucial for user trust and regulatory compliance.
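The zero-trust recommendation above means authenticating and authorizing every request, including the slice it targets, rather than trusting anything by network position. A minimal sketch under assumed simplifications (a single shared HMAC key per service and a static device-to-slice authorization table; real 5G deployments use SIM-based credentials and per-session keys):

```python
import hashlib
import hmac

# Assumption for the sketch: devices share a per-service key with the verifier.
SECRET = b"demo-shared-secret"


def sign_request(device_id: str, slice_id: str) -> str:
    """Device attaches an HMAC tag to every request it sends."""
    msg = f"{device_id}:{slice_id}".encode()
    return hmac.new(SECRET, msg, hashlib.sha256).hexdigest()


def authorize(device_id: str, slice_id: str, tag: str,
              allowed_slices: dict) -> bool:
    """Zero-trust check: verify identity AND slice authorization on every
    request, so a compromised device cannot hop into another slice."""
    expected = sign_request(device_id, slice_id)
    return (hmac.compare_digest(tag, expected)
            and slice_id in allowed_slices.get(device_id, set()))
```

Binding the slice ID into the signed message is what contains a breach: a valid tag for the IoT slice is useless against any other slice.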
Web3 Cryptospace Spotlight
Kraken’s 47 minutes behind the ‘ethical’ hack: When Kraken’s Chief Security Officer, Nick Percoco, learned that the exchange had been drained of $3 million, he was on a flight to Japan. Holiday plans or not, he had to address the crisis immediately. The flaw was found by an independent researcher through Kraken’s bug bounty program, which rewards reported vulnerabilities. It allowed accounts to be credited without a genuine deposit, and CertiK, a security firm, used it to withdraw $3 million in cryptocurrencies over five days. That was unusual: security firms typically report vulnerabilities without exploiting them. The funds were eventually returned, and Kraken fixed the vulnerability within 47 minutes of the report. The incident highlighted a severe breach and raised concerns over industry standards and practices in bug bounty programs. [more]
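The bug class described, crediting a balance before a deposit actually settles, is easy to state in code. The sketch below is a purely hypothetical illustration of that class, not Kraken's actual code or fix; the `Exchange` class and method names are invented for the example.

```python
class Exchange:
    """Hypothetical sketch of a credit-before-settlement bug (illustrative only)."""

    def __init__(self):
        self.balances = {}

    def deposit_buggy(self, user: str, amount: int, settled: bool) -> None:
        # BUG: credits the account whether or not the deposit settled,
        # letting an attacker mint a phantom balance and withdraw it.
        self.balances[user] = self.balances.get(user, 0) + amount

    def deposit_fixed(self, user: str, amount: int, settled: bool) -> None:
        # FIX: refuse to credit until settlement is confirmed.
        if not settled:
            raise ValueError("deposit not settled; nothing credited")
        self.balances[user] = self.balances.get(user, 0) + amount
```

The one-line nature of such a fix is consistent with a 47-minute turnaround: the hard part is discovering the flaw, not patching it.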
Negotiate with hackers: A white hat hacker known as Ogle has helped recover some $450 million from DeFi hacks by negotiating directly with black hat hackers. Traditional recovery methods often fail in DeFi because police are unfamiliar with crypto and developers lack negotiation skills. Ogle’s strategy is to persuade attackers to return 90% of the stolen funds. His notable successes include recovering $240 million from Euler Finance and key roles in other significant recoveries, underscoring the importance of negotiation in DeFi security. [more]