Artificial intelligence (AI) has been an IT buzzword lately, and it’s no surprise. AI is advancing our technology, improving business operations, and much more. Adoption of AI has more than doubled since 2017, and one recent AI model has captured the world’s attention: ChatGPT. Users are in awe of the technology, and even investors are flocking to stocks with AI in their technology stack. So, is this AI chatbot good or bad? Let’s start with what it is.
What is ChatGPT?
Unless you’ve been living under a rock (and if so, no judgment here), you’ve surely heard about ChatGPT, an AI-based chatbot system that holds conversations using natural language processing (NLP).
The system was launched by the AI research company OpenAI. While it was designed to mimic real human conversations, it’s capable of much more. For example, when it makes a mistake, it will apologize to the user. It can elaborate, provide detailed explanations, and remember everything said earlier in the conversation. To put it simply, ChatGPT can seem to have a mind of its own.
Some of the most common uses of the platform include:
- Brainstorming new content ideas and topics
- Recapping long documents to generate a condensed summary
- Translating text from one language to another
- Creating blogs, social media captions, email responses, and other forms of marketing content
Many people have been shocked by just how human-like the platform really is. That’s because its responses were trained on immense amounts of human-written text. Unfortunately, just like the marketing content commonly produced with ChatGPT, phishing emails and spam also need to sound human-like and personalized, and this is where the concern comes in.
Wait, ChatGPT can be used maliciously?
That’s right. It could be used to craft a million slightly different variations of the same phishing email (to avoid detectors) in any language in the world. Any detection system that relies solely on recognizing the text of known attacks (e.g., traditional signature- or keyword-based spam filters) will be extremely vulnerable to this.
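To see why paraphrasing is such a problem for pattern-based filtering, here is a minimal sketch. The blocked-phrase list and sample emails are hypothetical, and a real spam filter is far more sophisticated, but the weakness it illustrates is the same: change the wording, keep the intent, and the match fails.

```python
# Hypothetical phrase list standing in for a signature-based filter's rules.
BLOCKED_PHRASES = [
    "verify your account",
    "click the link below",
    "your password has expired",
]

def naive_filter(email_body: str) -> bool:
    """Return True if the email matches a known phishing phrase."""
    body = email_body.lower()
    return any(phrase in body for phrase in BLOCKED_PHRASES)

original = "Your password has expired. Click the link below to verify your account."
paraphrase = "Your sign-in credentials are out of date. Use the button here to confirm your details."

print(naive_filter(original))    # True  -- caught by the phrase list
print(naive_filter(paraphrase))  # False -- same intent, different words, slips through
```

An attacker who can generate a million such paraphrases on demand never has to reuse a sentence the filter has seen before.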
ChatGPT can also be used to easily replicate an existing legitimate website (e.g., the login pages of Gmail, Outlook, or your bank) with slight variations to evade detection. While these possibilities are alarming, let’s look at an example of a more serious threat that could take place.
Attackers often need to interact with their targets, such as in a business email compromise (BEC) attack, where the attacker impersonates the CEO and asks the CFO to initiate a wire transfer.
What happens if the CFO responds with a question? Well, it turns out ChatGPT can provide a very plausible answer and carry on an entire conversation in a professional manner, convincing the CFO that she is indeed speaking with the actual CEO and not with a bot. Unless the CFO talks to the CEO in person or calls them, she has no way to verify their identity with 100 percent certainty.
Not only that, but an attacker can also train the model on a large amount of text written by a specific person (e.g., their emails) and then write emails, contracts, or letters in that person’s style, asking for sensitive information, a wire transfer, and so on. You can imagine how difficult this will be to detect: the model will be familiar with the context of the relationship and will generate very plausible-sounding emails or text messages.
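If the wording itself can no longer be trusted, one defensive response is to stop judging wording at all for high-stakes requests. Here is a minimal sketch of such a content-agnostic safeguard; the intent list, the dollar figures, and the sender_verified flag (which might come from DMARC/DKIM results or an internal directory check) are all illustrative assumptions, not a production rule set.

```python
# Hypothetical list of request types that should never be approved on the
# strength of an email alone, no matter how authentic it sounds.
HIGH_RISK_INTENTS = [
    "wire transfer",
    "gift card",
    "change of bank details",
    "payroll update",
]

def requires_callback(email_body: str, sender_verified: bool) -> bool:
    """Flag risky requests from unverified senders for out-of-band review.

    sender_verified is a stand-in for an authentication signal such as a
    DMARC/DKIM pass combined with an internal directory check.
    """
    body = email_body.lower()
    risky = any(intent in body for intent in HIGH_RISK_INTENTS)
    return risky and not sender_verified

email = "Hi, it's urgent. Please process a wire transfer of $48,000 to the account attached."
if requires_callback(email, sender_verified=False):
    print("Hold the request: confirm by phone with the requester before acting.")
```

The point of a policy like this is exactly the one above: when the text can perfectly imitate the CEO, the only reliable check is one that does not depend on the text.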
A few final thoughts
OpenAI claims to have implemented controls and monitoring systems to prevent malicious use of ChatGPT. However, we could soon see similar open-source models that attackers can run on their own. For better or worse, this platform and its competitors are going to be around for a while.
Just a thought: if an AI platform like ChatGPT can be used maliciously to create phishing emails, why can’t it also be used as part of a spam filter or anti-phishing solution to detect those same attack emails?
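It can, at least in principle. Here is a minimal sketch of the idea using the OpenAI Python SDK (openai>=1.0); the model name, prompt, and sample email are illustrative assumptions, and a production filter would combine a classifier like this with traditional signals rather than rely on it alone.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are an email security assistant. Classify the following email as "
    "PHISHING or LEGITIMATE and give a one-sentence reason."
)

def classify_email(body: str) -> str:
    """Ask the model whether an email looks like phishing."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; substitute whatever is available
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": body},
        ],
        temperature=0,  # keep classification output stable
    )
    return response.choices[0].message.content

print(classify_email(
    "Dear user, your mailbox storage is full. Log in at http://mail-secure-login.example "
    "within 24 hours to avoid losing your messages."
))
```

Because a model like this judges intent rather than exact wording, the paraphrased variants that slip past the keyword filter sketched earlier would still stand a good chance of being flagged.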