“The criminal threat actor planned to use it in a mass exploitation event but our proactive counter discovery may have prevented its use,” Google wrote in the report.
Google did not identify the threat actor involved and said it does not believe its Gemini models were used in the operation. Instead, the company pointed to the growing use of publicly available AI tools and models for offensive cyber activity.
The report highlights how quickly AI is becoming embedded on both sides of the cybersecurity landscape. Security firms and major tech companies increasingly use AI systems to detect threats, analyze vulnerabilities, and automate defense workflows. At the same time, attackers are experimenting with many of the same techniques to accelerate malware development, vulnerability discovery, and exploitation.
Google specifically cited OpenClaw as one example of the AI tooling threat actors are adopting. The company said groups connected to China and North Korea have already shown “significant interest in capitalizing on AI for vulnerability discovery.”
The findings arrive as AI companies debate how widely access to advanced cybersecurity-focused models should be granted. In April, Anthropic delayed the broader release of its Mythos model over concerns that it could help attackers identify and exploit software flaws. The company later restricted access to a smaller testing group that included Apple, CrowdStrike, Microsoft, and Palo Alto Networks.
Last week, OpenAI also announced GPT-5.5-Cyber, a specialized version of its latest model designed for vetted cybersecurity teams working on defensive workflows such as vulnerability research, malware analysis, and penetration testing.
Google’s report suggests the concern is no longer theoretical. Rather than waiting for frontier AI models to become publicly available, attackers are already assembling their own offensive tooling using accessible models and open-source techniques.
The company outlined several ways AI is now being incorporated into cyber operations, including automated vulnerability research, malware development, and attack planning. While Google said it disrupted this particular operation before exploitation occurred at scale, the report underscores how AI-driven cybersecurity threats are moving from experimentation to active deployment.
The incident also illustrates the shrinking window between vulnerability discovery and exploitation. AI-assisted systems can automate parts of software analysis that previously required large research teams and long investigative cycles, allowing attackers to probe systems and identify weaknesses far more quickly.
For enterprise security teams, the report signals growing pressure to adapt defenses to an environment where both attackers and defenders increasingly rely on AI-driven automation.
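To make the defensive side of that shift concrete, here is a minimal sketch of one pattern security teams are experimenting with: routing static-analysis findings through a language model for first-pass triage before a human reviews them. The endpoint URL, model name, and response shape below are hypothetical placeholders for illustration only, not any vendor's actual API, and nothing here is drawn from Google's report.

```python
import requests

# Hypothetical internal LLM endpoint and model name -- placeholders,
# not a real vendor API.
TRIAGE_ENDPOINT = "https://llm.internal.example/v1/complete"
MODEL = "sec-triage-model"

def triage_finding(tool_name: str, snippet: str, rule_id: str) -> str:
    """Ask an LLM to rate a static-analysis finding as 'likely exploitable',
    'needs review', or 'likely false positive'. A human still reviews the result."""
    prompt = (
        f"A static analyzer ({tool_name}) flagged rule {rule_id} in this code:\n"
        f"{snippet}\n"
        "Classify the finding as one of: likely exploitable, needs review, "
        "likely false positive. Answer with the label only."
    )
    resp = requests.post(
        TRIAGE_ENDPOINT,
        json={"model": MODEL, "prompt": prompt, "max_tokens": 8},
        timeout=30,
    )
    resp.raise_for_status()
    # Assumed response shape: {"text": "<label>"} -- adjust for the API in use.
    return resp.json()["text"].strip()

if __name__ == "__main__":
    label = triage_finding(
        tool_name="cppcheck",
        snippet="strcpy(dst, src);  // no bounds check",
        rule_id="bufferAccessOutOfBounds",
    )
    print(f"Triage label: {label}")
```

The design point is the workflow, not the model: the LLM compresses a queue of raw analyzer findings into a prioritized list, which is the same force multiplier the report suggests attackers are applying to vulnerability discovery in reverse.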
This analysis is based on reporting from CNBC.
Image courtesy of Avast.
This article was generated with AI assistance and reviewed for accuracy and quality.