Even a year after its discovery, the server-side request forgery (SSRF) vulnerability in ChatGPT, tracked as CVE-2024-27564, remains a notable threat. Although it is classified as medium severity (CVSS 6.5), the flaw continues to be actively exploited: researchers from Veriti have recorded more than 10,479 attack attempts originating from a single malicious IP address. The campaign demonstrates how even lower-severity flaws can be weaponized when security misconfigurations create openings for cybercriminals.
The exploit allows adversaries to bypass ChatGPT's safety mechanisms, enabling them to generate malware, phishing scripts, and reconnaissance tools that can aid in cyber espionage. The flaw is an SSRF vulnerability in the pictureproxy.php file: by injecting specially crafted URLs into the application's parameters, attackers can make it issue unauthorized requests, bypassing the built-in safeguards meant to block harmful content generation. Using prompt engineering and contextual manipulation, attackers then force ChatGPT to return restricted information that its security protocols would otherwise block, enabling malware development, phishing template creation, and detailed reconnaissance.
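The underlying weakness, a proxy endpoint that fetches whatever URL it is handed, can be illustrated with a minimal sketch. The function below is an illustrative assumption of how such an endpoint could validate its targets; it is not the actual pictureproxy.php code.

```python
import ipaddress
import socket
from urllib.parse import urlparse

# Illustrative SSRF guard for an image-proxy endpoint. Names and logic
# here are assumptions for demonstration, not the real implementation.

ALLOWED_SCHEMES = {"http", "https"}

def is_safe_proxy_target(url: str) -> bool:
    """Reject URLs that would let the proxy reach internal resources."""
    parsed = urlparse(url)
    if parsed.scheme not in ALLOWED_SCHEMES or not parsed.hostname:
        return False
    try:
        # Resolve the hostname and check every address it maps to.
        infos = socket.getaddrinfo(parsed.hostname, None)
    except socket.gaierror:
        return False
    for info in infos:
        addr = ipaddress.ip_address(info[4][0])
        # Block loopback, RFC 1918, link-local, and other non-global
        # ranges that an attacker might target from inside the proxy.
        if not addr.is_global:
            return False
    return True
```

A proxy that skips a check like this will happily fetch internal targets such as a cloud metadata service, which is exactly the class of unauthorized request SSRF enables.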
Veriti's security researchers found that 35% of the analyzed organizations remain exposed to this vulnerability because of improperly configured security measures, including intrusion prevention systems (IPS), web application firewalls (WAFs), and firewalls. Financial institutions are among the primary targets, and the U.S. tops the list of targeted geographies, followed by Germany and Thailand. These gaps enable attackers to bypass conventional defenses and manipulate ChatGPT without triggering typical detection protocols.
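A properly tuned IPS or WAF can catch these probes because exploit attempts leave a recognizable fingerprint: a request to the vulnerable endpoint whose URL parameter points at private or loopback address space. The rule below is a simplified illustrative sketch, not a production signature.

```python
import re

# Illustrative detection sketch: flag requests to a picture-proxy
# endpoint whose url parameter targets private, loopback, or link-local
# address space. The pattern is an assumption for demonstration only.

PRIVATE_TARGET = re.compile(
    r"pictureproxy\.php\?.*url=https?%3A%2F%2F"          # URL-encoded http(s)://
    r"(?:127\.|10\.|192\.168\.|169\.254\.|"              # loopback, RFC 1918, link-local
    r"172\.(?:1[6-9]|2\d|3[01])\.)"                      # 172.16.0.0/12
)

def looks_like_ssrf_probe(request_line: str) -> bool:
    """Return True if a logged request line matches the probe pattern."""
    return bool(PRIVATE_TARGET.search(request_line))
```

Organizations whose inspection devices lack any equivalent rule, or that exempt AI-related traffic from inspection, are the ones Veriti found still exposed.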
Earlier this year, the Cybersecurity and Infrastructure Security Agency (CISA) acknowledged the increasing threat of AI-driven cyberattacks. In response, it published the "AI Cybersecurity Collaboration Playbook," which emphasizes the importance of proactive information sharing among government agencies, private-sector organizations, and AI developers to combat emerging threats effectively. The agency highlights that AI-powered attacks are evolving rapidly, with cybercriminals exploiting vulnerabilities in AI-integrated systems, including those used by government entities handling sensitive data.
The ongoing exploitation serves as a warning about the growing risks tied to vulnerabilities in artificial intelligence (AI) systems. As generative AI models advance, attackers will keep searching for ways to manipulate these technologies for harmful purposes. This evolving threat landscape underscores the need for ongoing collaboration among AI developers, government agencies, private-sector organizations, and cybersecurity professionals to protect against emerging threats.