
Generative artificial intelligence (Gen AI) has unleashed a new threat landscape for cybercrime. Threat actors now weaponize large language models (LLMs), which once served primarily legitimate purposes. Underground forums and dark web marketplaces are buzzing with “dark LLMs” — custom or jailbroken AI chatbots designed explicitly for malicious activities. In fact, mentions of these malicious AI tools on cybercriminal forums skyrocketed by over 219% in 2024, signaling a significant shift in cybercriminal tactics.

Tools like WormGPT (an uncensored ChatGPT clone) saw rapid adoption, helping criminals automate phishing and business email compromise (BEC) attacks. By late 2024, WormGPT’s Telegram bot had reportedly reached nearly 3,000 users, with hundreds of paying subscribers, highlighting the growing demand for AI-driven crimeware. These malicious generative AI tools can write convincing phishing lures, generate evasive malware code, create fake personas, and even produce disinformation at scale.

In this blog series, I will delve deeper into the dark side of generative AI, examining four prominent examples of malicious AI tools: Evil-GPT, WolfGPT, DarkBard, and PoisonGPT. I’ll provide an overview of their capabilities (from phishing generation and malware automation to propaganda creation), their promotional channels (from dark web forums to Telegram groups), and documented evidence of their use over the past year. Finally, I’ll discuss strategic implications for enterprise defense and offer recommendations for chief information security officers (CISOs) and security teams to counter this emerging threat.

Dual-use nature of Gen AI

Generative AI is a double-edged sword: The same technology that can draft your emails or write software code can also be exploited for nefarious purposes. Modern LLMs, like OpenAI’s GPT-3.5/4 or Google’s Bard, are powerful assistants with built-in ethical guardrails. However, criminals have found ways around these safeguards — either by “jailbreaking” public models or by using open-source models to create their own unrestricted AI. The result is a wave of malicious GPT systems purpose-built for cybercrime. These rogue AIs can generate malware code, exploit scripts, phishing content, and more on demand, with none of the usual content filters or limitations.

Several factors make this possible. First, the open-source LLM boom has put freely available models, such as GPT-J and LLaMA, in the hands of anyone with enough expertise to fine-tune them on malicious data. Second, threat actors actively share jailbreak techniques for manipulating legitimate AI chatbots into producing harmful outputs.

A thriving underground market has also emerged, selling illicit AI-as-a-service. Here, enterprising cybercriminals offer subscription-based “evil AI” bots that promise “no boundaries” — AI that will happily produce phishing emails, malware, fake news, or any illegal output a buyer wants. This commodification of generative AI lowers the barrier to entry for cybercrime: Even attackers with limited skills can leverage AI to vastly increase the scale and sophistication of their campaigns. In essence, generative AI has a dual-use problem, and the security community is now grappling with the fallout of its malicious misuse.

What’s next

In the upcoming posts in this blog series, I will explore each of the four malicious AI tools in detail:

  1. Evil-GPT: I’ll examine the capabilities, promotional strategies, and real-world cybercrime applications of this tool, dubbed the “enemy of ChatGPT.”
  2. WolfGPT: This “upgraded” dark AI focuses on malware creation. I’ll analyze its features and the risks it poses to cybersecurity.
  3. DarkBard: I’ll investigate the unique capabilities of this tool, known as the “evil twin” of Google Bard, and how it can be used for real-time misinformation.
  4. PoisonGPT: This tool exemplifies the darker application of generative AI for disinformation. I’ll discuss its implications and the risks associated with AI supply-chain attacks.

Finally, I’ll discuss the strategic implications for enterprise defense and provide actionable recommendations for CISOs and security teams to counter these emerging threats.

Stay tuned as I navigate the complexities of the AI threat landscape and uncover the challenges and strategies that lie ahead in the fight against malicious generative AI.

This article was originally published at Barracuda Blog.

Photo: alexugalek / Shutterstock



Posted by Adam Khan

Adam Khan is the VP, Global Security Operations at Barracuda. He currently leads a global security team consisting of highly skilled Blue, Purple, and Red Team members. He previously worked for more than 20 years at companies such as Priceline.com, BarnesandNoble.com, and Scholastic. His experience focuses on application and infrastructure automation and security. He is passionate about protecting SMBs, the heart of American innovation, from cyberattacks.
