

It seems that each time a new technology emerges, criminals cut to the front of the line in dreaming up new applications for it. That has certainly been the case with generative artificial intelligence (AI), whose mindshare and adoption have expanded rapidly since the launch of ChatGPT.

Generative AI can create original content (including text, video, audio, and more) using deep learning. With ChatGPT and similar tools, users have leveraged the technology to do everything from creating marketing content and improving software performance to writing college term papers and producing deepfake images, videos, and audio, including a robocall that impersonated President Joe Biden.

Cybercriminals have also found ways to use generative AI. A recent SlashNext report found a 1,265 percent increase in malicious phishing emails and a 967 percent increase in credential phishing since the fourth quarter of 2022, growth that coincides with the launch of ChatGPT. According to a report from Barracuda and the Ponemon Institute, 50 percent of IT professionals expect cyberattacks to increase because of AI.

A new ebook from Barracuda looks at how generative AI is helping criminals rapidly increase the scope, frequency, and complexity of their cyberattacks by automatically generating code, automating attack activities, creating more believable phishing email content, and gathering intelligence to improve spoofing and personalized attacks.

Here are five ways cybercriminals are leveraging generative AI:

1. More effective phishing and spoofing

With AI, criminals can automate the creation of convincing phishing emails, highly personalized messages, and spoofed emails, websites, and login pages. They can closely mimic the writing style of trusted senders, without the grammatical errors and misspellings that typically give these fraudulent communications away. Tools like ChatGPT include guardrails intended to prevent users from creating malicious messages, but hackers have found ways around them, and some new chatbot services are explicitly built to generate malicious content.

2. Automated malware generation

Tools are now available that help even novice cybercriminals write malicious code. These tools can also automatically discover vulnerabilities that can be exploited in zero-day attacks and create adaptive malware.

3. Believable deepfakes

Deepfake videos and audio are being leveraged to improve phishing and spoofing success rates. Deepfake voice calls can be used to trick victims into initiating fraudulent wire transfers or sharing credentials over the phone, much as phishing emails do in writing. Cybercriminals can also generate fake video conferences, false information that can damage a company’s reputation, or even videos that can be used for extortion.

4. Improved content localization

Phishing and other attacks that originate in non-English-speaking countries are often easy to spot because the English-language messages are rife with misspellings and grammatical errors. AI can significantly improve these translations, enabling attackers to broaden their scope to other languages and geographies. Attackers can also make better use of industry-specific jargon, local news, and other contextual content to make malicious emails more believable.

5. More effective access and credential theft

Using AI, criminals can create more convincing spoofed login pages and accelerate credential-stuffing attacks. They can also use the technology to quickly generate password lists, analyze stolen data for credential information, improve the effectiveness of password cracking efforts, and help defeat CAPTCHA tools. 

AI in and of itself isn’t a threat, but it can make cyberattacks faster, more effective, and easier to launch, even for novices. For managed service providers (MSPs) protecting their clients and end users protecting themselves against this new breed of advanced cyberattack, investing in security solutions that themselves leverage AI and machine learning will be critical. This is a topic we will address in a follow-up article on how AI can be used to improve security.
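To make that last point concrete, here is a minimal, illustrative sketch of the kind of machine learning that underpins many email-security products: a toy text classifier that scores messages for phishing likelihood. This is not Barracuda’s implementation; the library choice (scikit-learn) and the sample messages are assumptions for illustration only, and production systems combine many more signals.

```python
# Toy sketch (not any vendor's actual product logic) of ML-based phishing
# detection. Assumes scikit-learn is installed; training data is invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny, invented training set: 1 = phishing, 0 = legitimate.
emails = [
    "Urgent: verify your account now or it will be suspended",
    "Your invoice is attached, please remit payment to the new account",
    "Reminder: team meeting moved to 3pm in conference room B",
    "Here are the quarterly numbers we discussed yesterday",
]
labels = [1, 1, 0, 0]

# TF-IDF text features feed a simple logistic regression classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(emails, labels)

# Score a new message. Real systems would also weigh sender reputation,
# link analysis, attachment behavior, and many other signals.
incoming = ["Please confirm your password to avoid account suspension"]
print(model.predict_proba(incoming))  # columns: [P(legitimate), P(phishing)]
```

Even this toy example shows the design idea: the model learns statistical patterns from labeled examples rather than relying on hand-written rules, which is what allows AI-based defenses to adapt as attackers change their wording.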

Note: This was originally published at ChannelBuzz

Photo: Max Acronym / Shutterstock



Posted by Olesia Klevchuk

Olesia Klevchuk is a Senior Product Marketing Manager for email security at Barracuda Networks. In her role, she focuses on defining how organizations can protect themselves against advanced email threats, spear phishing, and account takeover. Prior to joining Barracuda, Olesia worked in email security, brand protection, and IT research.
