Malware and hackers do not recognize international boundaries. Human-made viruses quickly sweep across the globe like sneeze-passed cyber versions of their biological namesakes. The good news is that security precautions don’t stop at the borders either. Some of the most effective cybersecurity weapons that MSPs have in their toolkits originated from the research being done at universities, both in the U.S. and abroad.
“We have stepped up our behavioral cybersecurity and consultancy,” says Alana Maurushat, a professor of cybersecurity and behavior at the University of Western Sydney in Australia. Cybercriminals, however, are changing their tactics rapidly: artificial intelligence emerged on the scene in 2018, and 2019 brought a drastic increase in AI-assisted cybercrime.
“It moved from emergence to prevalence,” Maurushat shares.
Australian National University recently released a public report on an advanced persistent attack that was orchestrated over several years. The attack stole account details, personal information, and high-value intellectual property. The issue today is a live attack spanning every Australian university, Maurushat explains, with many media reports pointing to Chinese infiltration. Other experts we have spoken to warn that 2020 will bring a rise in state-sponsored attacks.
Another event that will continue to shape 2020 is the public disclosure that trusted, encrypted communication systems such as WhatsApp had been compromised. The issue arose in the context of the Hong Kong situation, but we can assume that governments other than China also have the capability of decrypting these apps.
Is 2020 the year of deepfakes?
Perhaps the most alarming of her predictions is that AI will not only continue to be a threat, but that cybercriminals will take it one step further by blending it with automated voice mimicking. Maurushat calls this development a “potential game-changer for criminals and espionage.”
“Organizations are slowly getting on top of phishing emails, and are slowly putting in controls that make diversion fraud more difficult. But AI-enabled voice fraud will have us once again back to social engineering over voice calls,” Maurushat explains.
Voice calls are often overlooked as a security threat because so much of the focus is on cyber threats, but the lines are becoming increasingly blurred.
“Companies no longer view the telephone as a threat vector, and if they do, they are not training for voice over phishing attacks and information gathering practices,” Maurushat says.
In the past, hackers might socially engineer their way into a system by calling up a company and fishing for information, but payments have rarely been diverted to criminals via telephone alone, Maurushat explains. That is the danger AI mixed with voice creates. A criminal can obtain a voice sample, and with AI-infused programs analyze the target’s mannerisms and dialect to produce an authentic-sounding copy of their voice.
“Imagine you’re a lawyer closing a merger deal or the company accountant sending and processing hundreds of payments per day. If you receive a phone call from someone whose voice is indistinguishable from your supervisor requesting that you divert a payment, you’re going to comply,” Maurushat states. And this presents a huge danger.
“Companies don’t know about this new vector. I predict that this is going to play a huge part in cybercrime in the next couple of years, and organizations are entirely unprepared for this scenario,” Maurushat says.
MSPs could also be targets: the top tech person at one of your clients’ offices could be called by someone mimicking you – the MSP owner – and asked to change a password or download a file.
Five years ago, these threats would have seemed unthinkable. While these attacks are still rare, Maurushat thinks that this will be the year you start to see them.
Safeguards to implement
A few deepfake safeguards to consider implementing in the near future are:
Face-to-face meetings: Ironically, the technology that makes the world seamless is the very thing that may revive the old-fashioned in-person meeting. However helpful these advancements are, mission-critical commands and tasks are safest relayed face-to-face. (That is, until hackers develop three-dimensional hologram versions of people, right down to mimicking a favorite cologne or perfume.)
Avoid personalized voicemail greetings: Yes, the opportunity to infuse the workplace with a little personality in your voicemail greeting is nice, but it may also serve as a “buffet” for hackers to grab helpings of your voice, which could then be repurposed into a convincing deepfake. Instead, consider using an automated greeting.
Watch social media postings from the CEO: This is especially troubling because there is deep, inherent value in a CEO engaging and interacting with customers and potential clients. Social media videos showing a CEO in a dunking booth or a boardroom can humanize them. But it can also expose them. Hackers can grab snippets of these videos to repurpose into convincing deepfakes.
Expect the battle between good guys and bad guys to continue on the deepfake front, but until the good guys get the upper hand, you’ll want to be extra careful.
We are reluctant to espouse eliminating valuable tools for public interaction, but MSPs should also be making clients aware of these dangers. The mission for MSPs is to educate customers and stay ahead of these threats. Provide your clients with advice and guidance on issues like social media exposure and let them take the lead.
Photo: Nicole Kwiatkowski/ Shutterstock.