Stratos Ally

The Growing Threat of AI-Powered Cyberattacks, Google Reveals Gemini AI Abuse by APTs 

Picture of StratosAlly

StratosAlly

The Growing Threat of AI-Powered Cyberattacks, Google Reveals Gemini AI Abuse by APTs 

Google’s Threat Intelligence Group (GTIG) has revealed that they have observed state-sponsored hacking groups from over 20 countries, Iran and China being among the top, exploiting its AI assistant, Gemini, to enhance their cyberattacks. While these actors are not creating any new AI-driven attacks, the threat actors/ groups are leveraging Gemini for productivity gains across different stages of their operations. The most common uses include using the AI tool for assistance in coding malicious tools and scripts, searching for vulnerabilities, target reconnaissance, and exploring evasion techniques. 

Iranian actors are at the top to utilize Gemini for such activity. They have heavily utilized Gemini to perform reconnaissance on various defense organizations and international experts, develop phishing campaigns, and create content for influence operations. They also utilize it for translation and technical explanations related to cybersecurity and military technologies. The second most users were from China. These groups primarily used the platform to perform reconnaissance of U.S. military and government organizations, research vulnerabilities, codes for lateral movement and privilege escalation, and post-compromise operations. North Korean actors use Gemini across the attack lifecycle, from researching hosting providers to assisting with malware development and supporting their clandestine IT worker scheme. Russian groups were the ones observed to have shown limited engagement. Russian actors mainly use Gemini for scripting, translation, and payload crafting, possibly preferring domestic AI models for security reasons. 

Google has observed unsuccessful attempts to jailbreak Gemini and bypass its security measures. This report confirms the increasing misuse of generative AI models by confirming similar findings from OpenAI regarding ChatGPT. The vulnerabilities discovered in DeepSeek R1 and Alibaba’s Qwen 2.5 recently highlight how the growing availability of AI models with weak security protections that are easily susceptible to prompt injection attacks pose a significant concern. This trend underscores the urgent need for robust security measures in all AI models to prevent their exploitation for malicious purposes. 

more Related articles