
Fending off ransomware and other malware attacks has already become part of the daily MSP landscape. But in the ever-evolving world of threats, a new one will emerge with greater frequency in the months and years ahead: weaponized artificial intelligence.

Imagine an attack that doesn’t feel like an attack: an outside AI-driven intruder gets into your client’s system, mimics its users, and blends in seamlessly and strategically. Meanwhile, unbeknownst to you, your client’s files are scanned, proprietary data is stolen, funds may vanish, and malware is unleashed from within, all the while hollowing out your defenses. Sound scary? It is. And MSPs are going to have to find a way to stay on top of the threat.

A recent survey by Webroot found that 91 percent of security experts fear they’ll soon be battling AI cyberattacks. The Webroot report, which surveyed more than 400 IT professionals, lays out an evolving cat-and-mouse game where the “good guys” deploy AI technology to thwart the “bad guys” and an AI arms race commences.

A senior artificial intelligence research engineer with Intel told Smarter MSP: “Weaponization of AI is a riveting topic, one which will continue to grow in visibility and importance.”

Artificial intelligence technology is already ubiquitous. But asking Alexa to turn on your living room lights is a pretty benign application. As with any technology, once it lands in the hands of bad actors, it’s hard to tell where it might go.

Artificial intelligence in battle

Tim Baldwin, associate professor of computing and information systems at the University of Melbourne in Australia, has been warning about the dark side of AI. He shared some thoughts with Smarter MSP.

His first concern is physical weaponry powered by AI. Think landmines that pick and choose whom to bump off. Fortunately, most MSPs won’t have to navigate the fog of an actual battlefield. Still, Baldwin’s prognostications are chilling. Through computer vision, weapons will be able to identify and attack particular target types and ignore others. A target type, Baldwin says, could be a particular individual (using facial recognition in a sniper’s smart weapon). The technology also exists to use sound to target a particular individual based on their speech pattern, or a particular demographic group based on the language they speak or their accent.

“For example, think personalised bombs and mines which are detonated only when a certain individual/class of person is detected. All of this is very much within research of current AI,” Baldwin says.

Smarter cyberattacks

What MSPs need to be most concerned about is AI being used in cyberattacks. And the question is not if, but when.

“Cyberattacks themselves could be imbued with AI, of course, in not just hacking into systems in various ways, but subsequently identifying data of particular types,” Baldwin says. For instance, this could mean mining reputation-damaging content for a high-profile public figure or IP-rich content for an organization.
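To make that concrete, here is a minimal sketch in Python of the kind of triage an intruder’s tooling might run over stolen files to flag “IP-rich” content. Every keyword, weight, and filename below is invented for illustration; a real attacker would presumably use a trained model rather than a keyword list:

```python
import re

# Hypothetical keyword weights a triage model might use to flag
# "IP-rich" documents. All terms and weights are invented.
IP_SIGNALS = {
    "patent": 3.0,
    "proprietary": 2.5,
    "confidential": 2.0,
    "schematic": 2.0,
    "trade secret": 3.5,
}

def ip_score(text: str) -> float:
    """Crude relevance score: weighted count of IP-related terms."""
    lowered = text.lower()
    return sum(
        weight * len(re.findall(re.escape(term), lowered))
        for term, weight in IP_SIGNALS.items()
    )

docs = {
    "meeting_notes.txt": "Confidential review of the patent schematic.",
    "newsletter.txt": "Company picnic is Friday!",
}

# Rank exfiltration candidates, highest score first.
for name, text in sorted(docs.items(), key=lambda kv: -ip_score(kv[1])):
    print(f"{ip_score(text):5.1f}  {name}")
```

The point is less the scoring itself than the automation: once inside, this kind of filter lets an attacker sift thousands of files without a human ever reading them.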

Right now, the tools that MSPs have to combat AI once a system is breached are limited, Baldwin says.

“Once an attacker is successful, it’s not clear what can be done, other than subtle things like, if you know what models are likely to be used to detect IP-rich content, etc., then you could devise ‘adversarial decoys’ that are likely to fool the model into misclassifying valuable content as worthless, and worthless content as valuable,” Baldwin says. But even this type of approach is scattershot.

“This tends to require knowledge of the model to determine the adversarial examples. I suspect there’s also a lot that can be done to detect ‘smart’ cyberattacks via the signature of the attack, and launch adversarial counter-attacks/decoys, similar to the sort of thing I describe above. But again, hard to do without some access to the model,” Baldwin says.
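Baldwin’s “adversarial decoy” idea can be sketched the same way. Assuming, as he notes you must, white-box knowledge of the attacker’s model, the sketch below reduces that model to a toy logistic classifier and applies a standard FGSM-style step against its gradient. The weights and features are random stand-ins, not any real model:

```python
import numpy as np

# Toy stand-in for the attacker's "IP-value" classifier:
# score = sigmoid(w @ x + b). The weights are assumed known to the
# defender (the white-box knowledge Baldwin mentions); everything
# here is randomly generated for illustration.
rng = np.random.default_rng(0)
w = rng.normal(size=20)      # model weights (assumed known)
b = 0.0
x = rng.normal(size=20)      # feature vector of a valuable document

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def score(features):
    """Probability the model labels the document 'valuable'."""
    return sigmoid(w @ features + b)

# FGSM-style decoy: step each feature against the score's gradient
# so valuable content is misclassified as worthless.
eps = 0.5                              # perturbation budget
s = score(x)
grad = s * (1.0 - s) * w               # d(score)/dx for the sigmoid
x_decoy = x - eps * np.sign(grad)

print(f"original score: {score(x):.3f}")
print(f"decoy score:    {score(x_decoy):.3f}")
```

Flipping the sign of that step would push worthless content toward a “valuable” label, which is the other half of the decoy scheme. The catch, as Baldwin says, is that none of this works without some access to the attacker’s model.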

Baldwin was one of 60 AI researchers worldwide to sign a letter urging a boycott of the Korea Advanced Institute of Science and Technology over its partnership with Hanwha Systems in developing AI-powered weaponry. Baldwin advocates the use of AI for good, not weaponry. MSPs, meanwhile, will have to watch out for virtual landmines. They’ll appear; it’s just a matter of time.

Photo: spartakas/Shutterstock.com



Posted by Kevin Williams

Kevin Williams is a journalist based in Ohio. He has written for a variety of publications, including the Washington Post, New York Times, USA Today, Wall Street Journal, National Geographic, and others. He first wrote about the online world in its nascent stages for the now-defunct Online Access magazine in the mid-’90s.
