
With warnings coming fast and furious from tech luminaries as diverse as Bill Gates, Elon Musk, and the late Stephen Hawking, most of us are conditioned to think of the potential dangers of artificial intelligence in the hands of bad actors. While armies of AI-powered humanoid bots have not yet materialized, AI is being harnessed by hackers to create “smart malware” that uses predictive technology to overcome traditional defenses.

But if this is true, can’t the opposite also be true? Can AI be harnessed and turned against malware?

Experts emphatically say yes, and some, like Barracuda and Cylance, are already doing it.

Artificial intelligence as a tool to stop the bad guys is a clear example of how the technology can be used in good ways, something that some experts think isn’t emphasized enough.

AI’s bad reputation

Thomas Davenport, a professor of cognitive technologies at Babson College, blames the media for a lot of AI’s bad rap.

“There’s been so much hype in the media about it, and this is just journalists trying to extend the hype by talking about the negative side,” Davenport told CIO recently.

Jim Furstenberg is an assistant professor of information security and intelligence at Ferris State University and has built an extensive 30-year career in the field. He prefers to focus on the sunny side of AI.

“I like to use the positive side of AI,” Furstenberg told Smarter MSP. He says that AI can help protect users and confirm or refute suspected malicious behavior.

“It is quite common for attackers to gain legitimate credentials on the system or network, so how does one know if malware is using legitimate, hacked, or stolen credentials on the system? Hiding in plain sight…. AI can help that situation,” Furstenberg says.

The traditional methods of intercepting malware are “too static,” Furstenberg says, and they will “soon lose out to the dynamic nature (if not already) of malware,” he explains.

Artificial intelligence will also be able to help MSPs streamline their anti-malware offerings.

“Tools from an analyst perspective are simple interfaces with drill-down interconnectedness tying disparate databases, which can provide a holistic view of behavior and activities,” Furstenberg says. “Many organizations have too many silos, and AI can help with that.”

Biological intelligence versus artificial intelligence

As AI takes hold in more and more malware, it will take AI to fight AI.

“With the exponential increase in the amount of data that flows through enterprises today, manual/human methods of cybersecurity are becoming less and less viable,” says Niranjan Mayya, founder and president of Toronto cybersecurity startup RANK Software.

The evolving nature of AI threats also makes an AI approach more and more preferable because, Mayya says, traditional approaches that look for specific patterns or signatures are by definition limited to detecting known threats, leaving unknown threat actors or rogue insiders free rein to create mayhem.

“AI/ML [machine learning] is increasingly becoming the technique of choice for modern-day cybersecurity solutions. AI-based approaches will measure the normal behaviour of users and machines in an enterprise, and then monitor deviations from this baseline to detect anomalous behaviour. This allows for the detection of unknown threats without the use of rules and signatures,” Mayya explains.
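To make the baseline-and-deviation idea concrete, here is a minimal sketch in Python (not any vendor’s actual product code); the single login-count feature, the synthetic event data, and the threshold are all invented for illustration.

```python
# Minimal sketch of baseline behaviour modelling: learn what "normal" looks
# like for one user, then flag sharp deviations. All data here is synthetic.
import numpy as np

rng = np.random.default_rng(42)

# Pretend we logged hourly login counts for one user over 30 days.
baseline_counts = rng.poisson(lam=3.0, size=30 * 24)

mean = baseline_counts.mean()
std = baseline_counts.std()

def is_anomalous(observed_count: float, z_threshold: float = 4.0) -> bool:
    """Flag behaviour that deviates sharply from the learned baseline."""
    if std == 0:
        return observed_count != mean
    z_score = (observed_count - mean) / std
    return abs(z_score) > z_threshold

print(is_anomalous(2))   # typical activity -> False
print(is_anomalous(40))  # burst of logins  -> True, worth a closer look
```

Real products model far richer behaviour than a single count, but the principle is the same: learn normal, then alert on deviation, with no signature required.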

“The nature of modern cyber threats burdens businesses with sifting through tens of millions of security events, wasting time and money and increasing the chance that a credible threat will slip through the cracks,” Mayya says.

Global threats, global solutions

Researchers across the globe are working on ways to incorporate AI into malware interception.  

Moses Dlamini is a lecturer in cybersecurity at the University of KwaZulu-Natal in South Africa who has studied and written about the application of artificial intelligence to malware interception.

He tells Smarter MSP that the biggest impediment so far to AI-infused security systems is that the bad guys have been faster than the good guys.

“The cybersecurity threat landscape is forever changing, and the changes come very rapidly, especially from those who are developing malware to breach systems. The cybersecurity defence community is just too slow, giving more advantage to the cyber criminals,” Dlamini says. Dlamini has been studying security for more than 20 years, and over that time malware has kept changing.

“The one constant thing in all malware that we have seen is the speed and agility to change form in order to avoid detection,” he says.

This puts anti-virus developers, and by extension MSPs, on the defensive.

“The developers of malware detection systems have been working in a reactive manner all along, using signature-based systems to detect malware. The major problem with such systems is that they can only detect malware that has already compromised systems and already has a signature in the anti-virus/malware database,” Dlamini explains.
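A stripped-down example makes that limitation plain. The sketch below checks a file’s hash against a list of known-bad hashes; the entry in the list is a made-up placeholder, not a real signature.

```python
# Sketch of signature-based detection: it can only match binaries whose exact
# hash is already catalogued. The "known" hash below is a made-up placeholder.
import hashlib
from pathlib import Path

KNOWN_MALWARE_SHA256 = {
    "0" * 64,  # placeholder entry standing in for a catalogued sample
}

def sha256_of(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def is_known_malware(path: Path) -> bool:
    """True only if this exact binary has been seen and catalogued before."""
    return sha256_of(path) in KNOWN_MALWARE_SHA256
```

Change a single byte of the malware and the hash no longer matches, which is exactly the reactive gap Dlamini describes.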

And this creates a huge opening for AI-infused antivirus tools.

Reactive versus proactive AV

“This is where we actually see a massive role of AI in combating malware and closing the gap created by reactive AV to provide proactive AVs that are able to anticipate new variants of malware before their authors can release them to wreck systems,” Dlamini says.

Dlamini and his fellow researchers call this more common malware “derived viruses,” because the authors simply change the signature of old variants while the general methodology of infection stays the same.

Dlamini says AI can help malware detection by classifying and learning the patterns of malware over time, putting defenders in a better position to anticipate and predict new variants before they hit their targets.

In this way, AV would act as AI-infused driverless vehicles someday may, anticipating the moves of other drivers based on past patterns.

“AI has the capability to provide predictive analytics to help combat future malware using the powerful capability of neural networks to do proper classification and accurate prediction of new variants,” Dlamini says.
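As a rough illustration of that idea, the sketch below trains a small neural network on synthetic behavioural features; real systems would extract features from binaries or sandbox traces, and nothing here reflects Dlamini’s actual models.

```python
# Hedged sketch of behaviour-based classification with a small neural network.
# Features and labels are synthetic stand-ins for real malware telemetry.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# 500 samples, 10 behavioural features (e.g. API-call frequencies), all synthetic.
benign = rng.normal(loc=0.0, scale=1.0, size=(250, 10))
malicious = rng.normal(loc=1.5, scale=1.0, size=(250, 10))
X = np.vstack([benign, malicious])
y = np.array([0] * 250 + [1] * 250)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0)
clf.fit(X_train, y_train)

# A "derived virus" keeps the old behaviour under a new signature, so a model
# trained on behaviour can still score an unseen variant.
print("held-out accuracy:", clf.score(X_test, y_test))
```

The point is not the toy accuracy number but the workflow: learn the behavioural patterns of past malware, then use the trained model to score samples that have no signature yet.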

AI arms race

Dlamini, though, is not optimistic about the long-term capability of AI to thwart malware.

“It will help for now, or up to about two or three years from now, or until such a time that the cyber-criminals stop being lazy, stop just using what has always been there, and start thinking about new TTPs [tactics, techniques, and procedures],” Dlamini says.

Cybercriminals, he predicts, will start using the same AI capabilities to develop malware that avoids even AI-inspired detection products.

“This would mean developing malware that learns the detection techniques of all AV, i.e. even those that have predictive analytics, and shows a 100-percent success rate of detection avoidance,” Dlamini says.

This opens up the chilling prospect of an AI arms race, with the good guys and bad guys constantly trying to outdo one another.

“Unsupervised learning is key to achieving such malware: malware that learns on the fly and changes form before it can be detected. This is the future of malware, malware that would spoof attacks to gather intelligence on what triggers the alarms and build its evasion around those triggers, so that even up-to-date anti-malware becomes useless,” Dlamini explains.

MSPs and AI

MSPs will play their usual roles as gatekeepers and guards, remembering that prevention is always preferable to the cure. When it comes to AI, though, the world will descend into the fog of war because it’s hard to know the proper prevention when you don’t know what you’re preventing.

Dlamini says nimble threat intelligence tools, predictive analytics, and innovative malware classifiers will all be part of an MSP’s antivirus toolkit. Plus, new methodologies need to be developed.

“New algorithms must be developed to improve the accuracy of most of the proposals,” he says.

Dlamini says dedicated malware analysis/test labs would help MSPs create sandboxed or air-gapped environments where intelligent AV could learn more about the behavior of malware and incorporate learned data into new signatures.
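As one hedged illustration of how such a lab’s output could feed intelligent AV, the sketch below reduces a mock sandbox report to a small behavioural signature; the report format and field names are invented, not those of any real sandbox product.

```python
# Sketch: distil a sandbox run into a behavioural signature. The report
# structure and field names below are invented for illustration only.
from collections import Counter

# What an analysis lab might log for one detonated sample (mock data).
sandbox_report = [
    {"action": "file_write", "target": "C:\\Users\\victim\\AppData\\evil.dll"},
    {"action": "registry_set", "target": "HKCU\\Software\\Run\\updater"},
    {"action": "network_connect", "target": "203.0.113.7:443"},
    {"action": "file_write", "target": "C:\\Windows\\Temp\\drop.tmp"},
]

def behavioural_signature(report: list) -> dict:
    """Collapse a raw trace into behaviour counts rather than byte patterns."""
    counts = Counter(event["action"] for event in report)
    return {
        "file_writes": counts["file_write"],
        "sets_autorun_key": counts["registry_set"] > 0,
        "outbound_connections": counts["network_connect"],
    }

print(behavioural_signature(sandbox_report))
```

Signatures built this way describe what a sample does rather than what it looks like, which is what would let them generalize to the “derived viruses” discussed above.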

“Proactivity is key,” Dlamini says.

Photo: Tatiana Shepeleva/Shutterstock.com



Posted by Kevin Williams

Kevin Williams is a journalist based in Ohio. Williams has written for a variety of publications including the Washington Post, New York Times, USA Today, Wall Street Journal, National Geographic and others. He first wrote about the online world in its nascent stages for the now defunct “Online Access” Magazine in the mid-90s.
