In the ever-evolving landscape of cybercrime, one tool has emerged as a particularly insidious player: Evil-GPT. Marketed on hacker forums on the dark web as the “ultimate enemy of ChatGPT” and the “best alternative to WormGPT,” this malicious artificial intelligence (AI) chatbot has quickly gained notoriety among cybercriminals. Evil-GPT is designed to help attackers execute a range of nefarious activities, from crafting malware to generating phishing attacks.
One dark web post touts Evil-GPT as a Python-built chatbot for only $10. The example prompt in the ad asks for Python malware that steals a user's credentials and browser cookies, then exfiltrates them via a Discord webhook — exactly the kind of task ChatGPT's safety guardrails would block.
Capabilities of Evil-GPT
Evil-GPT is essentially an uncensored AI code assistant: it specializes in malicious output and lacks the safety filters typically found in legitimate AI models. According to cybersecurity analyses, it can generate malware scripts (for data theft, system information gathering and more) and craft phishing lures. For instance, in one documented case Evil-GPT produced a Python script that grabbed Chrome cookies and system data and sent them to an attacker's server. This functionality positions Evil-GPT as a cheap "malware factory" that is particularly appealing to low-skilled hackers.
Notably, Evil-GPT was reportedly built entirely in Python and was marketed as a lightweight alternative to more resource-intensive AI models. Its design focuses on stealth and theft — for example, stealing browser data or credentials — suggesting it's meant to aid infostealer development, remote access Trojan (RAT) development and other malicious coding tasks.
Promotion and branding
Evil-GPT first surfaced in August 2023 on the popular dark web forum BreachForums. The seller, using the alias “AMLO,” explicitly positioned it as “the best alternative to WormGPT” for would-be hackers. By branding it the “enemy of ChatGPT,” the seller emphasized its lack of restrictions compared to more mainstream AI tools. The low price point (only $10 per copy) and the public forum advertisement indicate a strategy to mass-market this tool to cybercriminals looking for affordable AI assistance.
Threat intel firm FalconFeeds even captured the forum screenshots and noted the seller had only joined that forum in August 2023, implying Evil-GPT was a newly launched product at the time. Unlike some pricier subscription-based AI tools, Evil-GPT’s one-time sale model and low cost suggest it was aimed at widespread adoption (or potentially a quick cash-grab by the seller).
Real-world use and updates
How much Evil-GPT has been used in actual attacks remains an open question. By late 2023, researchers warned that it could lower the barrier to generating malware and phishing at scale. A Trend Micro investigation, however, found that Evil-GPT might not even be a wholly independent AI model — it appeared to function as a wrapper around the OpenAI API, requiring an API key to operate. In other words, Evil-GPT may have been invoking ChatGPT "behind the scenes," using clever prompt engineering to bypass OpenAI's filters.
This finding suggests some criminal toolkits are more hype than reality, essentially repackaging legitimate AI in a malicious way. Nonetheless, even a simple wrapper can be valuable to bad actors if it provides anonymity (using stolen API keys) and a library of working jailbreak prompts.
By 2024, security reports and news articles continued to cite Evil-GPT as part of the growing roster of malicious AI tools. Its name recognition in the underground community indicates that, at minimum, Evil-GPT succeeded in entering the discourse as a viable crimeware tool.
Given how accessible Evil-GPT became, enterprises should assume that phishing emails or malware code they encounter could have been auto-generated by such tools. Its low cost and public availability mean incident responders may increasingly find the fingerprints of Evil-GPT (or similar AI tools) in cybercrime investigations going forward.
Conclusion
Evil-GPT exemplifies the dark potential of generative AI when placed in the hands of cybercriminals. As we continue this series, we will explore other malicious AI tools, including WolfGPT, DarkBard and PoisonGPT, each contributing to the evolving threat landscape. Understanding these tools and their implications is crucial for organizations seeking to bolster their defenses against the rising tide of AI-driven cybercrime. Stay tuned for our next post, where we will delve into WolfGPT and its capabilities as an upgraded dark AI for malware creation.
This article was originally published at Barracuda Blog.