As the adoption of generative artificial intelligence (AI) by organizations continues to rise, so does the need to ensure that the data being shared with these platforms remains secure.

The National Security Agency (NSA), in collaboration with security and law enforcement agencies from around the world, recently published an advisory that identifies the need to secure AI models and describes the guardrails that should be in place to ensure services such as ChatGPT are used safely.

Organizations embracing AI face two challenges. The first is ensuring that the data they share via prompts with a generative AI platform is not being monitored by cybercriminals. Much of that information could easily find its way into a phishing attack, which would be all the more credible if it appeared to relate to something the recipient was actually working on. The second is ensuring that sensitive data, such as credit card numbers, is not shared at all, because it could inadvertently end up in the training data for the next iteration of an AI model.
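The NSA advisory does not prescribe specific tooling, but one common guardrail is to redact card-like numbers from prompt text on the client side before anything is sent to a generative AI service. The sketch below is illustrative only; the pattern and function names are assumptions about one simple approach (a regex plus a Luhn checksum), not a complete data loss prevention solution.

```python
import re

# Matches 13-19 digit sequences, optionally separated by spaces or
# dashes, which is the general shape of a payment card number.
CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){12,18}\d\b")

def luhn_valid(digits: str) -> bool:
    """Return True if the digit string passes the Luhn checksum."""
    total = 0
    for i, ch in enumerate(reversed(digits)):
        d = int(ch)
        if i % 2 == 1:          # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def redact_prompt(prompt: str) -> str:
    """Replace anything that looks like a valid card number with a placeholder."""
    def _replace(match: re.Match) -> str:
        digits = re.sub(r"[ -]", "", match.group())
        # Only redact sequences that actually checksum as card numbers,
        # so order IDs and similar long numbers are left alone.
        return "[REDACTED-CARD]" if luhn_valid(digits) else match.group()
    return CARD_PATTERN.sub(_replace, prompt)
```

A filter like this would typically sit in a proxy or gateway between employees and the AI platform, so redaction happens regardless of which client an employee uses.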

AI model development and deployment are risky

The cybersecurity challenges are even greater for organizations building or customizing AI models. Just like any other application, an AI model is constructed from components that might contain vulnerabilities, which could be exploited once the model is deployed in a production environment. Cybercriminals could also use stolen credentials to poison a model by exposing it to false data, inducing deliberate hallucinations, or adjusting the weights that dictate the generated output.

Finally, an AI model is arguably among the most valuable intellectual property an organization will ever have. Rather than sabotaging the AI, cybercriminals might simply opt to steal it altogether. An organization could very well wake up to the realization that its rivals now have access to the same AI models it spent millions of dollars building.

MSSPs play a critical role in protecting organizations

All these issues create demand for additional cybersecurity expertise that managed security service providers (MSSPs) are well positioned to provide. Most data science teams don't have much cybersecurity experience, so the probability of mistakes is high. A recent PwC survey found that nearly two-thirds of CEOs (64 percent) are concerned about the cybersecurity implications of AI.

It might be a while before that concern turns into meaningful action; it usually takes a few well-publicized breaches to motivate the allocation of budget dollars. But given the number of employees using generative AI platforms with or without permission, it's only a matter of time.

In the meantime, the NSA advises organizations to put governance frameworks in place to ensure their data remains secure. Organizations building AI models are also advised to secure the application programming interfaces (APIs) and the platforms where those models are deployed. MSSPs should align their services now with the best practices cited by the NSA, so they are ready to help when inevitably called upon.

Photo: 3rdtimeluckystudio / Shutterstock


Posted by Mike Vizard

Mike Vizard has covered IT for more than 25 years, and has edited or contributed to a number of tech publications including InfoWorld, eWeek, CRN, Baseline, ComputerWorld, TMCNet, and Digital Review. He currently blogs for IT Business Edge and contributes to CIOinsight, The Channel Insider, Programmableweb and Slashdot. Mike blogs about emerging cloud technology for Smarter MSP.
