TechRisk Notes #24: Atomic Wallet attack methods.
Also: EV charging infrastructure was found insecure, Biden wants AI risks addressed, AI is not ending humanity (yet), Bitcoin Lightning Network nodes faced a buggy upgrade, and more.
EmergingTech Spotlight
Electric vehicle and smart grid risk: In recent security research, analysts identified significant risks in electric vehicle charging infrastructure. [more]
The analysts highlighted that one of the key risks applied exclusively to the grid side of the charging process, rather than to the vehicle. For some of the smart chargers investigated, the security research team was able to take full remote control of every charger on that manufacturer's platform.
From a consumer perspective, such an attack could have switched all the chargers off, so owners would wake up to a car that had not charged overnight. The more worrying finding was that the team could also switch all the chargers on and off at the same time, which could destabilize the grid, especially at peak times or when the grid is already under pressure.
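To see why synchronized switching matters, here is a minimal back-of-the-envelope sketch. The fleet size and charger rating are illustrative assumptions, not figures from the research:

```python
# Illustrative sketch, not figures from the research: the load step an
# attacker imposes by switching a compromised charger fleet in unison.

NUM_CHARGERS = 50_000   # hypothetical fleet on one manufacturer's platform
CHARGER_KW = 7.2        # typical single-phase home charger rating

def fleet_load_mw(fraction_on: float) -> float:
    """Aggregate charger demand in MW for a given fraction switched on."""
    return NUM_CHARGERS * CHARGER_KW * fraction_on / 1000.0

# Attacker flips the entire fleet off and on once a minute.
for minute, fraction_on in enumerate([1.0, 0.0, 1.0, 0.0]):
    print(f"t={minute} min: fleet demand {fleet_load_mw(fraction_on):6.1f} MW")

# Output alternates between 360.0 MW and 0.0 MW: a repeated step change
# of this size at peak time is the kind of swing that threatens stability.
```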
United States’ take on AI. U.S. President Joe Biden said the risks of artificial intelligence to national security and the economy need to be addressed, and that he would seek expert advice on the subject. He mentioned the need to safeguard Americans' rights and safety while protecting privacy, to address bias and misinformation, and to make sure AI systems are safe before they are released.
EU AI Act (draft): Once approved, the EU AI Act will apply to anyone who develops or deploys AI systems in the EU, including companies located outside the bloc.
The extent of regulation depends on the risks created by a particular application, from minimal to “unacceptable.” [more]
Systems that fall into the “unacceptable” category are banned outright. These include real-time facial recognition systems in public spaces, predictive policing tools and social scoring systems, such as those in China, which assign people a “health score” based on their behavior.
The legislation also sets tight restrictions on “high-risk” AI applications, which are those that threaten “significant harm to people’s health, safety, fundamental rights or the environment.” These include systems used to influence voters in an election, as well as social media platforms with more than 45 million users that recommend content to their users — a list that would include Facebook, Twitter and Instagram.
AI systems with minimal or no risk, such as spam filters, fall largely outside of the rules.
The Act also outlines transparency requirements for AI systems. For instance, systems such as ChatGPT would have to disclose that their content was AI-generated, distinguish deep-fake images from real ones and provide safeguards against the generation of illegal content. Detailed summaries of the copyrighted data used to train these AI systems would also have to be published.
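As a quick reference, the tiered structure above can be sketched as a simple lookup. This is a hypothetical illustration: the tier names follow the draft, but the mapping simply restates the examples cited here, not an official taxonomy:

```python
# Hypothetical illustration of the draft's risk tiers as a lookup table.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned outright"
    HIGH = "tight restrictions"
    LIMITED = "transparency requirements"
    MINIMAL = "largely outside the rules"

EXAMPLES = {
    "real-time facial recognition in public spaces": RiskTier.UNACCEPTABLE,
    "predictive policing tools": RiskTier.UNACCEPTABLE,
    "social scoring systems": RiskTier.UNACCEPTABLE,
    "systems used to influence voters": RiskTier.HIGH,
    "recommenders on platforms with 45M+ users": RiskTier.HIGH,
    "generative systems such as ChatGPT": RiskTier.LIMITED,
    "spam filters": RiskTier.MINIMAL,
}

for application, tier in EXAMPLES.items():
    print(f"{application}: {tier.name} ({tier.value})")
```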
AI is not ending humanity. Meta’s chief AI scientist, Yann LeCun, a Turing Award winner, called the idea that a superintelligent AI system will take over the world “preposterously ridiculous.” Joelle Pineau, Meta’s vice president of AI research, noted that the extreme focus on future risks leaves little bandwidth to talk about current AI harms. [more]
Risk of data in the cloud. Thales’ Global Cloud Security Study for 2022 found that 45% of businesses had experienced a cloud data breach or failed to perform audits in the preceding 12 months, up 5% from the previous year. Securing data in the cloud, especially shadow data, has become a daunting task for organizations. [more]
Cloud misconfigurations are another prime reason why ensuring cloud security is challenging. Attackers are always looking for misconfigurations because they are an entry point to cloud assets. One recent example is Toyota, where the data of over two million customers was exposed due to misconfigurations in its cloud storage systems.
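As an illustration of the kind of check that catches such misconfigurations, here is a minimal sketch using boto3, assuming AWS credentials are configured. It flags S3 buckets whose ACLs grant access to AWS's standard public groups:

```python
# Minimal sketch, assuming boto3 is installed and AWS credentials are
# configured: flag S3 buckets whose ACLs grant access to the public
# "AllUsers" or "AuthenticatedUsers" groups.

import boto3

PUBLIC_GRANTEES = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    acl = s3.get_bucket_acl(Bucket=name)
    public_grants = [
        grant["Permission"]
        for grant in acl["Grants"]
        if grant["Grantee"].get("URI") in PUBLIC_GRANTEES
    ]
    if public_grants:
        print(f"[!] {name} is publicly accessible: {public_grants}")
```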
APIs are the weakest link in the security chain and may leave cloud environments vulnerable to cyber-attacks: 52% of cybersecurity experts regard insecure APIs as a significant cloud security threat. Many cloud applications and services rely on APIs for functionality such as authentication and access, yet these interfaces can carry security flaws, such as misconfigurations, that hackers can exploit to access sensitive data.
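The gap is easy to picture in code. The following hypothetical Flask sketch, with endpoint paths and token value invented purely for illustration, contrasts an unauthenticated endpoint with one that checks a bearer token:

```python
# Hypothetical Flask sketch; paths and token are invented for illustration.
from flask import Flask, abort, jsonify, request

app = Flask(__name__)
EXPECTED_TOKEN = "replace-with-a-real-secret"  # placeholder, never hard-code

@app.route("/customer/insecure")
def insecure():
    # Misconfigured: no authentication check at all, so anyone who can
    # reach the endpoint can read the record.
    return jsonify({"name": "A. Customer", "card": "4111 ..."})

@app.route("/customer/secure")
def secure():
    # Requests must carry a valid bearer token before data is returned.
    auth = request.headers.get("Authorization", "")
    if auth != f"Bearer {EXPECTED_TOKEN}":
        abort(401)
    return jsonify({"name": "A. Customer"})
```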
Web3 Cryptospace Spotlight
18 Jun - DeFi lending platform Midas Capital was exploited for more than $600K worth of MATIC tokens, the second time it had been exploited in 2023. Security analysts noted that the attacker abused an unexpected external call and a flaw in token price calculation to exploit the DeFi platform. [more][more-2][more-securityanalysis][more-securityanalysis2]
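As a hedged illustration of the general flaw class, not Midas Capital's actual code, the sketch below shows why a lending contract that reads a token price straight from live pool reserves can be fooled within a single transaction:

```python
# Hedged illustration of the flaw class, not Midas Capital's actual code.

def spot_price(reserve_token: float, reserve_usd: float) -> float:
    """Naive on-chain oracle: price = USD reserve / token reserve."""
    return reserve_usd / reserve_token

# Constant-product pool (x * y = k): 1,000,000 MATIC vs 600,000 USD.
r_token, r_usd = 1_000_000.0, 600_000.0
k = r_token * r_usd
print(f"price before swap: ${spot_price(r_token, r_usd):.2f}")   # $0.60

# Attacker (e.g. via a flash loan) swaps 400,000 USD into the pool.
r_usd += 400_000.0
r_token = k / r_usd   # invariant x * y = k holds after the swap
print(f"price after swap:  ${spot_price(r_token, r_usd):.2f}")   # $1.67

# While the spot price is inflated, over-valued collateral can back an
# outsized loan; the pool is then restored and the profit kept.
```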
21 Jun - Atomic Wallet maintained that there have been no new confirmed cases following the initial reports of the hack, and it reiterated that less than 0.1% of app users were affected. However, the wallet provider did not shed light on the exact nature of the exploit. Instead, it outlined four “probable” causes: a virus on user devices, an infrastructure breach, a man-in-the-middle attack, or malware code injection. It emphasized that none of these scenarios has been confirmed as the root cause of the breach. It did mention, however, that its security infrastructure has been updated. Meanwhile, Ouriel Ohayon, CEO of rival wallet provider ZenGo, questioned why Atomic Wallet needed to update its security infrastructure and what prompted such a measure. [more]
Zero-knowledge proof upgrade. Polygon co-founder Mihailo Bjelic suggested upgrading the Polygon proof-of-stake (PoS) network to a “zkEVM validium” version, adding zero-knowledge proof technology to bolster the network’s security while still keeping fees low. [more]
Ponzi scheme. A South Korean company lured investors with what it claimed was new technology: a blockchain app that could identify dogs by their nose wrinkles. Investigators found that the company’s promoted dog nose wrinkle reader was fake. South Korean police said investors lost more than $100 million in what they described as a “typical Ponzi.” The project came with a cryptocurrency and offered high returns on investment. [more]
Bitcoin Lightning Network buggy node. Lightning Labs flagged a bug in the recent Lightning Network Daemon (LND) version 0.16.3, urging operators not to upgrade, and to roll back to the previous version if they had already done so. The bug causes a memory leak: the amount of memory the program uses grows steadily until the node exhausts available memory and crashes. [more]
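For operators who have already upgraded, a simple watchdog along these lines can surface the leak before the node crashes. This is a sketch assuming the psutil package and an LND binary running under the process name "lnd"; the threshold is arbitrary:

```python
# Operator-side sketch: poll the LND process's resident memory and warn
# when it climbs past a (tunable) ceiling, the symptom of the leak above.

import time
import psutil

THRESHOLD_MB = 4096   # example alert ceiling; tune for your node
POLL_SECONDS = 60

lnd = next(
    (p for p in psutil.process_iter(["name"]) if p.info["name"] == "lnd"),
    None,
)

while lnd is not None and lnd.is_running():
    rss_mb = lnd.memory_info().rss / (1024 * 1024)
    print(f"lnd resident memory: {rss_mb:.0f} MB")
    if rss_mb > THRESHOLD_MB:
        print("[!] memory above threshold; restart and follow Lightning Labs' rollback guidance")
        break
    time.sleep(POLL_SECONDS)
```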
Near-miss security event. Sui Foundation awarded $500,000 to smart-contract audit firm CertiK for discovering a potential attack vector on the Sui network. The vulnerability was an infinite-loop bug in the Sui code that could be triggered by a malicious smart contract, causing the blockchain’s nodes to spin in an endless loop and essentially paralyzing the network. [more]
“Differing from traditional attacks that shut down chains by crashing nodes, the HamsterWheel attack traps all nodes in a state of ceaseless operation without processing new transactions, as if they were running on a hamster wheel. This strategy can cripple entire networks, effectively rendering them inoperable,” CertiK said in a press release.
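The underlying bug class is easy to demonstrate. The sketch below, in conceptual Python rather than Sui's Move code, contrasts an execution loop with no step budget, which a self-referencing jump pins forever, with a metered one that aborts and keeps the node live:

```python
# Conceptual sketch, not Sui's actual code: a "program" is a list of
# jump targets; execution halts when the program counter leaves the list.

def run_unmetered(program: list[int]) -> None:
    pc = 0
    while 0 <= pc < len(program):
        pc = program[pc]   # a self-referencing jump spins here forever,
                           # so the node never processes new transactions

def run_metered(program: list[int], max_steps: int = 10_000) -> int:
    pc, steps = 0, 0
    while 0 <= pc < len(program):
        if steps >= max_steps:
            raise RuntimeError("step budget exhausted; execution aborted")
        pc = program[pc]
        steps += 1
    return steps   # program halted normally within budget

malicious = [0]   # instruction 0 jumps back to instruction 0
try:
    run_metered(malicious)   # aborts instead of spinning forever
except RuntimeError as err:
    print(err)   # the node stays free to process other transactions
```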