TechRisk #155: Attackers exploit OpenAI team invites
Plus: ethical hackers are rapidly adopting AI, confidential documents were uploaded to the public version of ChatGPT, and more!
Tech Risk Reading Picks
Attackers exploit OpenAI team invites to breach enterprises: Kaspersky discovered attackers abusing OpenAI’s team invitation feature by creating accounts that embed malicious links or phone numbers in the organization name field, which are then delivered through emails sent from legitimate OpenAI addresses. Because the messages come from a genuine OpenAI sender, they appear authentic and slip past standard email security controls, increasing the likelihood that employees trust and act on them. Victims are directed to click deceptive links or call fraudulent numbers where credentials or payment details are harvested, leading to potential data and financial loss. The attack is often reinforced with follow-up vishing calls that apply urgency and pressure, reducing the chance of detection. [more]
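A minimal detection sketch for this lure, assuming the abused organization name surfaces in the invite subject or body; the subject template and patterns below are illustrative assumptions, not OpenAI's actual email format:

```python
import re

# Hypothetical detector for the lure described above: an OpenAI team
# invite whose organization name smuggles in a URL or phone number.
# The subject template is an assumption; match it to what your mail
# gateway actually records.
INVITE_SUBJECT = re.compile(r"invited (you )?to .+ on OpenAI", re.IGNORECASE)
URL_PATTERN = re.compile(r"https?://\S+|\bwww\.\S+", re.IGNORECASE)
PHONE_PATTERN = re.compile(r"\+?\d[\d\s().-]{7,}\d")  # loose; tune to reduce false positives

def is_suspicious_invite(subject: str, body: str) -> bool:
    """Flag invite-style emails that embed a link or callback number."""
    if not INVITE_SUBJECT.search(subject):
        return False
    text = subject + "\n" + body
    return bool(URL_PATTERN.search(text) or PHONE_PATTERN.search(text))

# Example lure: a callback number planted in the organization name.
print(is_suspicious_invite(
    "You have been invited to 'Billing Support +1 555 013 2847' on OpenAI",
    "Accept your invitation to join the team.",
))  # True
```

Because the sender really is OpenAI, sender-authentication checks such as SPF and DKIM pass, so content-level heuristics like this, paired with user awareness, are the realistic controls.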
Ethical hackers are rapidly adopting AI: Recent research shows ethical hackers are rapidly adopting AI, which introduces several technology risk considerations. AI-driven automation accelerates vulnerability discovery and code analysis, increasing the pace at which both defenders and attackers can find weaknesses and raising the risk of faster, larger-scale exploitation. However, heavy reliance on AI tools can create blind spots if models miss context-specific risks or reinforce existing biases, weakening assurance over security outcomes. AI in hacking workflows also lowers skill barriers, which could empower less experienced or malicious actors if similar tools are misused. The key question is whether widespread AI use in ethical hacking normalizes techniques that attackers can easily replicate, potentially narrowing the defensive advantage and complicating regulatory and ethical boundaries around acceptable security testing practices. [more][more-bugcrowd]
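To make the automation point concrete, here is a minimal sketch of AI-assisted vulnerability triage using the OpenAI Python SDK; the model name and prompt are illustrative assumptions, not the methodology from the cited research:

```python
from openai import OpenAI  # assumed dependency: pip install openai

# Sketch of the kind of AI-assisted triage the research describes:
# send a snippet or diff to a model and ask for likely weakness classes.
client = OpenAI()  # reads OPENAI_API_KEY from the environment

def triage_snippet(code: str) -> str:
    """Ask a model to flag likely weaknesses in a code snippet."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model choice
        messages=[
            {"role": "system",
             "content": "You are a security reviewer. List likely CWE-class "
                        "weaknesses in the code, one per line, or 'none'."},
            {"role": "user", "content": code},
        ],
    )
    return response.choices[0].message.content

print(triage_snippet('query = "SELECT * FROM users WHERE id=" + user_input'))
```

The same loop scales to thousands of files with trivial effort, which is exactly why the skill barrier drops for attackers as well as defenders.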
Implications of artificial intelligence and digital finance: AI and digital finance are reshaping financial markets by accelerating decision making and digitizing financial claims, which raises financial stability risks through faster liquidity shocks, deeper operational dependencies, and stronger contagion effects across institutions. AI-driven trading and automated responses can intensify price swings during stress, while tokenized assets can move or be redeemed faster than underlying liquidity allows, increasing the risk of disorderly markets. Heavy reliance on shared cloud providers, data sources, and platforms creates concentrated operational and cyber risks, where a single disruption could have system-wide impact. Widespread use of similar AI models and tokenization infrastructures can cause firms to react to shocks in the same way, amplifying stress and transmitting it rapidly across borders. The key question is whether current governance and regulatory frameworks can keep pace with the speed and complexity of these technologies. [more]
AI systems used by enterprises exposed publicly: A joint investigation found more than 175,000 publicly exposed AI systems running outside standard enterprise controls, creating material cyber and governance risk for organizations. Nearly half of these systems can execute code and access external systems, which elevates the threat from data misuse to direct operational and financial impact if abused. Because these deployments often sit outside corporate security perimeters, they are harder to monitor, secure, and distinguish from sanctioned AI use, which increases exposure to fraud, resource theft, and regulatory scrutiny. Active criminal campaigns are already exploiting these weaknesses to hijack AI infrastructure for spam, disinformation, and resale, which shows the risk is immediate rather than theoretical. [more]
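A minimal sketch of the inventory check this finding implies, probing for self-hosted LLM APIs that answer without credentials; the paths and default port reflect common servers (Ollama's 11434 and OpenAI-compatible gateways) and should be treated as assumptions to adapt:

```python
import urllib.request

# Hypothetical audit: probe a host for common self-hosted LLM API paths
# that respond without authentication. Extend the list for your estate.
PROBE_PATHS = ["/api/tags", "/v1/models"]

def find_open_llm_apis(host: str, port: int = 11434, timeout: float = 3.0) -> list[str]:
    """Return probe paths on host:port that respond 200 with no credentials."""
    exposed = []
    for path in PROBE_PATHS:
        url = f"http://{host}:{port}{path}"
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                if resp.status == 200:
                    exposed.append(path)
        except Exception:
            continue  # closed port, auth required, or not an LLM API
    return exposed

# Example: audit hosts you own before an attacker inventories them for you.
for host in ["10.0.0.12", "llm.internal.example.com"]:  # hypothetical hosts
    open_paths = find_open_llm_apis(host)
    if open_paths:
        print(f"{host}: unauthenticated LLM API at {open_paths}")
```

Run this only against assets you own; the investigation suggests attackers are already running the equivalent scan at internet scale.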
Confidential documents uploaded to public version of ChatGPT: The acting director of the US Cybersecurity and Infrastructure Security Agency uploaded multiple “for official use only” government contracting documents to the public version of ChatGPT, causing sensitive information to leave approved federal systems and triggering automated security alerts. The uploads occurred despite existing restrictions on public AI tools and came after the director was granted a temporary exception. Security sensors detected the activity within weeks, confirming that monitoring controls functioned, but only after the data had already been shared externally. [more]
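A minimal sketch of the after-the-fact detection described here, scanning web-proxy logs for large uploads to public AI chat services; the log format, hostnames, and threshold are assumptions to map onto your proxy's real schema:

```python
import re

# Hypothetical proxy-log fields: user, method, bytes sent, URL
# (space-separated). Adjust parsing to your gateway's actual format.
PUBLIC_AI_HOSTS = re.compile(r"https?://(chatgpt\.com|chat\.openai\.com)/", re.IGNORECASE)
UPLOAD_THRESHOLD_BYTES = 1_000_000  # assumed policy threshold

def flag_ai_uploads(log_lines):
    """Yield (user, url, bytes_sent) for large POSTs to public AI tools."""
    for line in log_lines:
        user, method, sent, url = line.split()[:4]
        if method == "POST" and PUBLIC_AI_HOSTS.match(url) and int(sent) >= UPLOAD_THRESHOLD_BYTES:
            yield user, url, int(sent)

# Hypothetical log line showing a large upload to ChatGPT.
sample = ["jdoe POST 2400000 https://chatgpt.com/backend-api/conversation"]
for user, url, size in flag_ai_uploads(sample):
    print(f"ALERT: {user} sent {size} bytes to {url}")
```

As the incident shows, log review catches the leak only after the data has left; a blocking rule at the same control point is the stronger complement.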
AI-powered healthcare services provider compromised: A 2025 cyberattack on HCIactive, an AI-powered healthcare services provider, compromised the data of about 3.1 million individuals, placing it among the largest health data breaches of the year and raising concerns about third-party technology risk in healthcare. Attackers had access to the company’s network for several days before detection, revealing gaps in monitoring and incident response that increase exposure for clients relying on outsourced digital services. The stolen data included sensitive medical records and identity information, creating long-term risks of fraud, regulatory penalties, litigation, and loss of trust for healthcare practices tied to the platform. [more]

