The Evolving AI Threat Landscape: How Adversaries Are Using Generative AI for Cyberattacks
Introduction
Since our last update in February 2026, Google Threat Intelligence Group (GTIG) has closely monitored a significant shift in how adversaries integrate artificial intelligence into their operations. What was once experimental has matured into industrial-scale practice. Drawing on insights from Mandiant incident response, Gemini, and GTIG's proactive research, this article highlights the dual role of AI as both a powerful engine for attacks and a prime target. Below, we examine six key developments shaping the current threat environment.

Vulnerability Discovery and Exploit Generation
For the first time, GTIG observed a threat actor using a zero-day exploit that was likely developed with AI assistance. The criminal group behind it appeared to be planning a mass exploitation event, though our proactive countermeasures may have prevented the exploit's deployment. State-linked actors from the People's Republic of China (PRC) and the Democratic People's Republic of Korea (DPRK) have also shown strong interest in applying AI to vulnerability research. By automating flaw discovery, adversaries can compress their timelines and target previously overlooked weaknesses.
AI-Augmented Development for Defense Evasion
AI-assisted coding is enabling adversaries to rapidly build infrastructure suites and polymorphic malware. These tools help evade defenses by generating obfuscation layers and integrating decoy logic. For example, suspected Russia-nexus threat actors have used AI-generated code to create malware that adapts its signature and behavior between deployments. The speed of AI-assisted development makes it far harder for security teams to keep pace.
Autonomous Malware Operations
The emergence of AI-enabled malware like PROMPTSPY marks a turning point toward autonomous attack orchestration. This malware interprets system states and dynamically generates commands, allowing it to manipulate victim environments without direct human intervention. Our analysis reveals previously unreported capabilities, including the ability to offload complex operational tasks to AI models. This approach enables scaled, adaptive attacks that can respond to countermeasures in real time.
AI-Augmented Research and Information Operations
Adversaries continue to use AI as a high-speed research assistant throughout the attack lifecycle. More concerning is their move toward agentic workflows: autonomous frameworks that plan and execute attack stages with minimal human oversight. In information operations (IO), AI is used to fabricate the appearance of consensus by generating synthetic media and deepfakes at scale. A key example is the pro-Russia campaign “Operation Overload,” which leveraged AI to flood platforms with misleading content.

Obfuscated LLM Access
Threat actors are now pursuing premium-tier access to large language models through professionalized middleware and automated registration pipelines. This allows them to bypass usage limits and maintain anonymity. These infrastructures enable large-scale misuse while subsidizing operations via trial abuse and programmatic account cycling. The result is a shadow ecosystem of illicit AI access that challenges enforcement efforts.
Supply Chain Attacks Targeting AI Environments
Groups like “TeamPCP” (aka UNC6780) have begun targeting AI software dependencies and environments as an initial access vector. These supply chain attacks can lead to multiple types of breaches, from data theft to lateral movement into critical infrastructure. By compromising the tools that organizations rely on for AI development, adversaries can achieve broad and stealthy access.
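One common mitigation for dependency-level compromise of this kind is pinning packages to known-good cryptographic hashes, so a tampered artifact fails verification before installation (pip's hash-checking mode, enabled via `--require-hashes` and `--hash` entries in a requirements file, works on this principle). The following is a minimal, illustrative sketch of the underlying check; the function names `sha256_of` and `verify_artifact` are our own, not part of any tool mentioned in this article.

```python
import hashlib


def sha256_of(path: str) -> str:
    """Compute the SHA-256 digest of a downloaded artifact, streaming in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()


def verify_artifact(path: str, pinned_hash: str) -> bool:
    """Return True only if the artifact matches the hash pinned at review time."""
    return sha256_of(path) == pinned_hash.lower()
```

In practice the pinned hashes would be recorded when a dependency is first vetted and checked on every subsequent fetch; any mismatch indicates the artifact changed after review and should block the build.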
In summary, the threat landscape is rapidly evolving as AI becomes both a weapon and a target. Organizations must adapt their defenses to address these new capabilities, from zero-day exploits driven by AI to autonomous malware and supply chain compromises. GTIG continues to monitor these developments to provide timely intelligence.