
2023 has been the year of generative AI. Will it transform cybersecurity as we know it? Or will the status quo prevail as a continual cat-and-mouse game between the cyber-good and the cyber-bad? SmarterMSP is talking to AI experts around the world this summer to gather their views. Opinions are as varied as the algorithms that power AI.

This week we caught up with Dr. Shivani Shukla, an assistant professor at the University of San Francisco’s School of Management. Shukla has five years of cybersecurity experience from government research and now consults for a company building cybersecurity solutions for small and medium-sized businesses.

It’s not all doom and gloom

Shukla says it is the arrival of generative AI, and its widespread accessibility, that is creating the threat. Simply put, attackers no longer need skilled humans to write code; cheap, machine-generated code can launch an attack.

But despite the accessibility and some industry alarm, Shukla is an optimist regarding AI and its risks. “If the technology can cause harm, the same technology can be used to prevent harm,” Shukla explains, adding that AI’s power can be harnessed to launch and prevent phishing attacks.

“I am not as worried as most people are. The biggest risk is an automated attack system that can be created, and it can keep learning. If something is not working, it can change itself by modifying behaviors,” Shukla describes.

But AI, at least for now, has limits. In her experience, if an AI-driven attack doesn’t work, the system often falls into an endless loop of retrying the same strategies.

“AI is not as advanced as we think, it will get more advanced, but only some people will have access. I don’t think the threat is imminent or urgent. I am not scared,” Shukla adds, telling us that AI’s ability to decrypt and launch smaller attacks is real, but that we will be able to ward those off. “I don’t think AI is a powerful beast we should all be scared of. We are far away from that,” she says.

The one caveat that Shukla warns could change the equation is state sponsors throwing their weight behind AI.

“When countries get involved and start attacking each other, and if state actors start to harness AI, the threat becomes very real and scary,” Shukla warns, adding that a country would have the resources and funds to invest limitlessly in AI.

So how do smaller businesses and the MSPs that often provide security fend off the arrival of AI? Shukla says, “It all boils down to training and education for smaller businesses and schools.”

People must learn to distinguish the real from the fake, and to know which emails to open and which to ignore.

The generator vs. the discriminator

Shukla explains that, up until now, much of the AI industry’s innovation and investment has focused on the generative side of AI, the side that solves problems and produces answers. But the “discriminator” side, the part of a model that judges whether content is genuine or machine-generated, is catching up, and that aspect of AI will serve as a “check” on the other.

“The more you power the discriminator in your AI model, the more you will avoid attacks; investment has been on the generator, not the discriminator,” Shukla adds. There will also be more and more companies quarantining their data as they choose not to participate in AI modeling. Generative AI can only work if it has data to access.

Once the generator and discriminator sides reach parity, you’ll have an AI-balanced cybersecurity situation: even a very sophisticated AI-generated phishing attempt will be caught by an equally sophisticated AI-powered phishing-detection program.

“A machine can catch another machine; it won’t be a human being,” Shukla says.
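
To make the generator-versus-discriminator dynamic concrete, here is a minimal, hypothetical Python sketch. The "generator" stamps out phishing lines from templates, and the "discriminator" is a simple perceptron over bag-of-words features that learns to flag them. The messages, templates, and classifier below are illustrative assumptions made for this article, not Shukla’s work or any production system.

```python
# Toy sketch of the "generator vs. discriminator" dynamic described above.
# All messages, templates, and parameters here are hypothetical.
import random

random.seed(0)

LEGIT = [
    "your march invoice is attached for review",
    "meeting moved to three pm tomorrow",
    "minutes from the quarterly planning call",
    "please review the draft budget before friday",
]

TEMPLATES = [
    "urgent verify your {item} immediately or lose access",
    "your {item} has been suspended click the link now",
    "confirm your {item} password to avoid account closure",
]
ITEMS = ["bank", "email", "payroll", "cloud storage"]

def generate():
    """Generator side: cheap, automated phishing text from templates."""
    return random.choice(TEMPLATES).format(item=random.choice(ITEMS))

def features(text, vocab):
    """Bag-of-words feature vector over a fixed vocabulary."""
    words = set(text.split())
    return [1.0 if w in words else 0.0 for w in vocab]

# Build the vocabulary from both corpora.
corpus = LEGIT + [t.format(item=i) for t in TEMPLATES for i in ITEMS]
vocab = sorted({w for msg in corpus for w in msg.split()})

# Discriminator side: a perceptron trained to separate legit from generated.
weights = [0.0] * len(vocab)
bias = 0.0

def predict(text):
    score = bias + sum(w * x for w, x in zip(weights, features(text, vocab)))
    return 1 if score > 0 else 0  # 1 = flagged as phishing

for _ in range(20):
    examples = [(msg, 0) for msg in LEGIT] + [(generate(), 1) for _ in range(4)]
    random.shuffle(examples)
    for text, label in examples:
        error = label - predict(text)
        if error:  # standard perceptron update on misclassification
            for i, x in enumerate(features(text, vocab)):
                weights[i] += error * x
            bias += error

# The trained discriminator now acts as the "check" on the generator.
print(predict("urgent verify your payroll immediately or lose access"))  # likely 1
print(predict("please review the draft budget before friday"))           # likely 0
```

Real phishing filters use far richer features and models, but the dynamic is the same: as the generator gets better at imitating legitimate mail, the discriminator needs better training data and features to keep catching it.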

She also shares that universities can use training to help offset the price tag of learning to live with AI. Some universities have opened community-level cybersecurity clinics where people can learn.

“Students and professors can help. There will be a need for organizations that can leverage AI for cheap,” notes Shukla. Universities can help community organizations learn how to detect AI-driven threats aimed at smaller businesses.

Where to next?

Shukla says that trying to peer far into the future, say a decade from now, and predict where AI will be is challenging. “We have no idea where this is headed or where generative AI will go. Up until now, it was who will be first,” she replies. With that race now behind us, she adds that the availability of open-source AI will make it much harder to contain.

Still, Shukla believes that segmentation will occur, with specialized AI emerging in disciplines such as healthcare, education, and government, and that cybersecurity controls can be built in at each step. “AI is here and will not go away,” she emphasizes. “Cybersecurity will have to focus on getting better and better and use the same technology to get better.”

Photo: metamorworks / Shutterstock



Posted by Kevin Williams

Kevin Williams is a journalist based in Ohio. Williams has written for a variety of publications including the Washington Post, New York Times, USA Today, Wall Street Journal, National Geographic and others. He first wrote about the online world in its nascent stages for the now defunct “Online Access” Magazine in the mid-90s.
