Google has thwarted a cyberattack by a criminal group that used artificial intelligence to exploit a previously unknown security vulnerability. The incident raises concerns about the growing use of AI in cybercrime and the challenges it poses for defenders.

Google reported that it disrupted a cyberattack in which a criminal group used AI to exploit a zero-day vulnerability, allowing the attackers to bypass two-factor authentication on an online system-administration tool.
The company notified the affected organization and law enforcement, preventing potential damage. Evidence suggested the attackers used a large language model to discover the vulnerability, though Google did not disclose which model was used or name the group behind the attack.
John Hultquist, a chief analyst at Google, emphasized how quickly criminal hackers can exploit vulnerabilities with AI, contrasting that speed with the slower, more deliberate methods typically used by government spies.
The incident coincides with heightened debate over AI regulation, particularly following the announcement of Anthropic's Mythos model, which has raised concerns about its potential for misuse in hacking and other cyberattacks.
The Trump administration has signaled a mixed approach to AI oversight: it has struck new agreements to evaluate powerful AI models before their release, while also facing internal conflicts over regulation. Experts warn that the proliferation of AI tools could heighten cybersecurity risks in the near term.