Artificial intelligence is making its presence felt in everything from Amazon’s Alexa to factory floor industrial applications.
AI-infused software is ramping up its predictive capabilities and refining its algorithms to become more accurate. Of course, AI can also be abused.
Old tricks, new technologies
One way AI is increasingly abused is through vishing. Vishing — extracting someone’s personal information through a phone call — has been around for as long as phones. However, with the emergence of AI-assisted voice technology, vishing attacks are increasing.
A 2018 survey of InfoSec professionals found that 45 percent of respondents had experienced phishing via phone calls (vishing) and SMS/text messaging (smishing) in 2017.
The problem has gotten bad enough that the University of Kentucky issued a campus-wide vishing alert earlier this year. The alert warns that:
“Common vishing scams also involve trying to obtain PINs, Social Security numbers, credit card security codes, passwords, and other personal details. All this information can be used for identity fraud or to steal money directly from bank accounts.”
This could become a huge problem
Vishing goes beyond campuses and into the boardroom, as AI-assisted vishing campaigns have successfully breached businesses. Independent, Nairobi-based AI and cybersecurity expert Kange Ken warns the problem will get worse.
“Use of pre-trained AI models that can mimic an individual’s voice by hackers could become a huge problem in the coming years,” cautions Ken.
For instance, all a skilled hacker has to do is find a video of a CEO speaking at an event or leading a seminar, grab it off of social media, and then use the voice snippet as material to mold into a “key” that will work across voice-activated platforms.
“This could be a potent spear phishing tool. A hacker just needs to send a voice message to their prospective target mimicking the company’s CEO,” details Ken. He adds that most chat applications download voice messages automatically, giving an attacker an easy way into an organization’s network and a channel for spreading disinformation within it.
Even if there are no recordings available on social media, a determined hacker could still find a way to get a snippet of someone’s voice by recording a quick phone call. A hacker does not need to harvest that much voice material to do significant damage.
“For a hacker to mimic your voice, all they need is just a few seconds or minutes of any audio of the prospective target. They then run the recording through an available AI model and fine-tune the content of the voice message,” describes Ken.
Vishing scams, while not as common as phishing, are among the most difficult to deter, because technology offers little defense once the deception succeeds. A phone call in which a cybercriminal persuades someone to hand over an access code, for instance, is virtually impossible to prevent.
“Individuals should be careful with what they post online and also enable multi-factor authentication on any service that they use,” advises Ken.
Education is the best weapon
MSPs can enable detection measures to sniff out any unusual access activity originating from previously unknown ISPs. Ken notes that vishing attacks can cripple voice-based authentication. Because of that, he recommends multi-factor authentication, such as an SMS-based service that automatically requests a confirmation code.
“This is much better at deterring would-be attackers,” Ken says.
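The confirmation-code flow Ken describes can be sketched in a few lines. Below is a minimal, illustrative Python example; the `issue_code`/`verify_code` names and the in-memory store are assumptions for the sketch, not a production design. A real service would deliver the code through an SMS gateway and persist pending codes server-side.

```python
import hmac
import secrets
import time

CODE_TTL_SECONDS = 300  # codes expire after five minutes

# In-memory store of pending codes: user -> (code, issued_at).
# A real service would persist this and deliver the code over SMS.
_pending: dict[str, tuple[str, float]] = {}

def issue_code(user: str) -> str:
    """Generate a six-digit one-time confirmation code for a user."""
    code = f"{secrets.randbelow(1_000_000):06d}"
    _pending[user] = (code, time.monotonic())
    return code  # in practice, sent to the user's phone, not returned

def verify_code(user: str, submitted: str) -> bool:
    """Check a submitted code: single use, expiring, constant-time compare."""
    entry = _pending.pop(user, None)  # pop so a code can't be replayed
    if entry is None:
        return False
    code, issued_at = entry
    if time.monotonic() - issued_at > CODE_TTL_SECONDS:
        return False
    return hmac.compare_digest(code, submitted)
```

The key properties for deterring an attacker who has only a cloned voice are that the code travels over a separate channel (the phone number on file), is single use, and expires quickly.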
In addition to multi-factor authentication, the biggest weapon MSPs wield is education. Employees have become so conditioned to watch for suspicious emails, avoid downloading attachments, and the like, that they forget about old-fashioned phone scams.
Part of the problem is that hackers weaponize people’s personal information and social media accounts. A resume innocently posted online provides all the social engineering fodder a persuasive bad actor needs to do some severe damage.
The Derbyshire Police recently issued an alert warning PC users that cybercriminals have successfully initiated phone conversations by causing pop-up messages to appear on computer screens. The pop-up asks users to call a number to speak with a technician. Once the phone call is in motion, persuasive smooth talkers can coax out all sorts of information. The Derbyshire Police say that most of the vishing attempts have used Microsoft’s name to add a veneer of believability.
“Derbyshire Constabulary has been made aware of calls of this nature from people claiming to be from Microsoft. In most cases, Microsoft will never contact end-users, so please be vigilant if you receive any calls of this nature, not just limited to Microsoft.”
In addition to routine and regular security checks of your clients’ networks, an employee refresher on avoiding posting personal information online and on never giving out personal information or credentials over the phone is in order. When in doubt, urge employees to ask for a phone number so they can call the person back; that simple step would go a long way toward defeating vishing attempts. As AI use continues to increase, MSPs need to educate users to think beyond phishing emails and build security into their daily routine.
Photo: Gajus / Shutterstock