
Artificial intelligence (AI) has been a slowly growing part of the cat-and-mouse game between cybercriminals and Chief Information Security Officers (CISOs)/Managed Service Providers (MSPs) since the first crude AI tools became available decades ago. But AI exploded into the public realm this year with the arrival of ChatGPT and Google Bard, and its sudden accessibility to everyone has expanded the cast of criminals who can use it for cyberattacks.

With AI generating unprecedented conversation, I’ll be chatting with industry experts over the coming weeks to get their take on AI and what MSPs can do to harness its power.

AI’s unanticipated arrival

John Virden is Assistant Vice President for Security and Compliance and CISO for Miami University, a 20,000-student state university in Oxford, Ohio. Virden has spent over a quarter century studying cybersecurity in US military and university settings. Even for him, AI’s sudden arrival seemingly everywhere has been a jolt.

“I think it is going to be disruptive,” Virden told Smarter MSP. The speed at which AI has catapulted into both the public imagination and everyday working tools has been the most surprising factor, he stated.

“This has been such a quick onslaught; I think it will be an amazing thing to observe. AI’s arrival is faster than anything we have seen,” Virden says, adding that many cybersecurity specialists had been worried about quantum computing on the horizon and how it could impact encryption. Yet AI has upended and outpaced even those worries.

Bad actors may reap the benefits of AI

Virden says that AI can give cybercriminals advantages in creating more realistic phishing and scam attacks.

“AI will help the bad actors make attacks seem more realistic and relatable and appear more friend-like or as a company they trust. AI will allow cybercriminals to achieve the next level of phishing or scamming attacks using deep fakes,” Virden noted, adding that “AI’s power for criminals is in its ability to open copious amounts of data to refine attacks. Unlocking the sheer volume of data can make the attacks more sophisticated and harder for automated defenses to detect.” Virden adds that the bad guys are now using AI to determine why attacks fail and how to improve them.

“The bad guys will figure out how to weaponize it to their advantage and for their sole purpose,” Virden says, adding that they will also use it to spread misinformation.

The good news is that the good guys have access to the same tools. But the bad guys, driven by pure profit and often without the demands of a day job, are usually a step ahead.

“But we should be able to detect new threats,” Virden adds of AI, “and to find databases stored in vulnerable spots before bad actors get to them.”

Virden further comments that it isn’t just large organizations that are vulnerable to AI-driven attacks; businesses small and large will feel the impact.

“We all have a commensurate number of threats,” says Virden, adding that phishing, ransomware, and privacy/disclosure of sensitive data are the most significant threats out there, now supercharged by AI-driven tools.

“Bad actors send out scans to find vulnerabilities; they are not looking to see whether a company is large or small,” Virden states, adding that cybersecurity specialists will have to keep pace as best they can as the bad actors’ tactics change. Virden also noted that some companies are already offering “AI protection” products that claim to watch for attacks from the AI world, but until these tools are proven, he says MSPs can help keep customers safe through heightened awareness and data protection.

MSPs to the rescue

“The MSP can help with the privacy side, educating people to not put sensitive data in places it shouldn’t be,” Virden says. He recommends running data loss prevention (DLP) scans to look for sensitive data.
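To make the idea concrete, here is a minimal, hypothetical sketch of what a rudimentary DLP-style scan could look like in Python. The patterns, the file types, and the ./shared_drive path are illustrative assumptions, not any specific product’s method; commercial DLP tools use far more robust detection (checksums, context analysis, machine-learning classifiers).

```python
import re
from pathlib import Path

# Hypothetical patterns for common sensitive-data formats.
# Illustrative only; real DLP detection is far more sophisticated.
PATTERNS = {
    "US Social Security number": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def scan_directory(root: str) -> None:
    """Walk a directory tree and flag text files that may hold sensitive data."""
    for path in Path(root).rglob("*.txt"):  # extend to other file types as needed
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue  # skip unreadable files
        for label, pattern in PATTERNS.items():
            if pattern.search(text):
                print(f"{path}: possible {label} found")

if __name__ == "__main__":
    scan_directory("./shared_drive")  # hypothetical path to a shared store
```

Even a toy scan like this shows the basic shape of the task: enumerate files, match known sensitive-data patterns, and surface the hits for a human to review.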

Virden stated that people need to be aware that proprietary information entered into ChatGPT and Bard does not stay private and can be incorporated into the underlying models. That can be a challenge on a college campus.

“We are educating staff and students to not put sensitive info into ChatGPT to write a report,” Virden says.

Everyone needs to be educated about the dangers of AI. Virden comments that college campuses are terrific learning labs for defending against AI because a large campus houses many vulnerable verticals: healthcare, finance, and education, all in one spot. There’s a lot of freedom to experiment with what works and what doesn’t. But Virden says that, over time, AI will become a part of everyone’s cyber life.

“I think in 10 years, we will be completely used to it, like we are used to the internet now. It will grow quickly as it permeates browser windows. It will be in front of everybody,” Virden states.

Photo: Andrey Suslov / Shutterstock



Posted by Kevin Williams

Kevin Williams is a journalist based in Ohio. Williams has written for a variety of publications including the Washington Post, New York Times, USA Today, Wall Street Journal, National Geographic and others. He first wrote about the online world in its nascent stages for the now defunct “Online Access” Magazine in the mid-90s.
