Artificial intelligence (AI) is everywhere. Companies of all sizes and in every market, including MSSPs, are deploying the technology, or experimenting with it, to improve everything from call center operations to threat intelligence to marketing to quality control on the manufacturing floor. McKinsey estimates that generative AI could add up to $4.4 trillion to the global economy. That opportunity comes with risks, however. Cybercriminals are already using AI to make their attacks more effective, and AI adoption also opens companies up to other types of vulnerabilities – some of which we are just beginning to recognize and understand.
How AI is reshaping cyberattacks
McKinsey found that while the majority of companies are putting a high priority on AI implementations, more than 90% do not feel adequately prepared to carry them out. That means, in many cases, speed may be prioritized over security. However, if companies apply risk management to their AI deployments, they can reduce the likelihood of their AI solutions being used against them by bad actors or inadvertently creating new security vulnerabilities.
New AI deployments may expose companies in several ways. First, new applications may not be sufficiently secure, opening a backdoor into your network via third-party providers. Second, internal or customer-facing AI applications could be tricked into exposing or sharing sensitive data. In addition, the inherent flaws in AI (such as the potential for model collapse or hallucination) could cause AI-based solutions or automated workflows to act in unpredictable ways, leaving networks or applications exposed to breaches. Data sent to generative AI models must be handled securely and in ways that preserve privacy.
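One illustration of that last point is redacting sensitive fields before a prompt ever leaves your environment. The sketch below is a minimal example only: the `REDACTION_PATTERNS` table and the patterns themselves are illustrative assumptions, and a real deployment would rely on a dedicated PII-detection service with far broader coverage.

```python
import re

# Hypothetical patterns for a few common sensitive fields; a production
# system would use a purpose-built PII-detection service instead.
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace likely PII with labeled placeholders before the text
    is sent to an external generative AI model."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = "Summarize this ticket from jane.doe@example.com, SSN 123-45-6789."
print(redact(prompt))
# Summarize this ticket from [EMAIL REDACTED], SSN [SSN REDACTED].
```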
Companies must also guard against untrusted AI models or model-sharing that could introduce malware or result in data breaches. Access keys used for communication among different AI applications should also be managed securely.
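A basic practice here is keeping access keys out of source code entirely. The sketch below assumes keys are delivered through environment variables, for example injected by a secrets manager at deploy time; the variable names and the `load_api_key` helper are illustrative, not from any specific product.

```python
import os

def load_api_key(name: str) -> str:
    """Fetch an access key from the environment rather than from source
    code, so keys can be rotated without a code change and never land
    in version control."""
    key = os.environ.get(name)
    if not key:
        raise RuntimeError(f"Missing required secret: {name}")
    return key

# Illustrative names; in practice a secrets manager injects these values.
model_api_key = load_api_key("MODEL_PROVIDER_API_KEY")
vector_db_key = load_api_key("VECTOR_DB_API_KEY")
```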
AI also creates risks outside of cybersecurity. For example, AI chatbots might develop biases that could offend customers or damage your brand. AI algorithms can also generate unreliable outputs, creating downstream design, production, or workflow problems.
A proactive approach: Best practices for AI security
Before implementing AI technology, how can you ensure your networks and applications are sufficiently protected? There are several best practices MSSPs can adopt for greater AI security, including:
- Create a comprehensive view of the potential AI-related risks across use cases and map out options, both technical and non-technical, for managing those risks. Establish a cross-functional team to review and validate these risk assessments.
- Implement a governance structure that may include requiring sources and fact-checking for AI responses, keeping humans in the loop (a minimal sketch of such a gate follows this list), and guarding against problematic third-party data usage.
- Embed the governance structure in an operating model and provide training for end users. An AI steering group should meet regularly to evaluate risks and mitigation strategies.
- Automate data governance and information management (including archiving and deletion) to help prevent employees from oversharing or exposing sensitive information. Role-based access and the elimination of manual intervention can reduce the risk of human error (a sketch of such a filter also follows this list).
- Reassess your data backup and recovery capabilities. AI tools like Microsoft Copilot will dramatically increase the volume of data generated across every company. Ensure you have sufficient storage in multiple locations and regular backups to protect against system failures, cyberattacks, and other disasters. This will be critical for managing AI-generated data and ensuring you have sufficient data to train new AI applications.
- Conduct customer training on cybersecurity awareness and AI risk management. Establish acceptable use policies for AI and regularly train staff on proper usage and the potential risks of AI-based workflows and solutions. Set clear rules for the use of public AI tools like ChatGPT.
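For the human-in-the-loop point above, a minimal sketch of an approval gate is shown below. The confidence threshold, the `review_queue`, and the `route_response` function are illustrative assumptions, not part of any particular framework: the idea is simply that unsourced or low-confidence AI answers go to a person instead of being sent automatically.

```python
# Hypothetical human-in-the-loop gate: only well-sourced, high-confidence
# AI answers are delivered automatically; everything else is queued
# for a human reviewer.
CONFIDENCE_THRESHOLD = 0.85  # illustrative cutoff

review_queue: list[dict] = []

def route_response(answer: str, confidence: float, sources: list[str]) -> str:
    """Auto-send only sourced, high-confidence answers; escalate the rest."""
    if confidence >= CONFIDENCE_THRESHOLD and sources:
        return answer  # safe to deliver automatically
    review_queue.append(
        {"answer": answer, "confidence": confidence, "sources": sources}
    )
    return "Your request has been escalated to a specialist for review."

print(route_response("Your plan renews on March 1.", 0.97, ["billing_faq.md"]))
print(route_response("You may be owed a refund.", 0.42, []))
print(len(review_queue))  # 1 item now awaiting human review
```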
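And for the role-based access point, here is a minimal sketch of filtering data before it ever reaches an AI workflow, assuming a simple role-to-classification mapping. The role names, labels, and `fetch_for_ai` function are illustrative, not drawn from any specific product.

```python
# Hypothetical role-to-classification clearances.
ROLE_CLEARANCE = {
    "analyst": {"public", "internal"},
    "hr_admin": {"public", "internal", "confidential"},
}

def fetch_for_ai(user_role: str, documents: list[dict]) -> list[dict]:
    """Return only the documents this role is cleared to expose to an
    AI tool; everything else is filtered out automatically, leaving no
    manual step for a user to get wrong."""
    allowed = ROLE_CLEARANCE.get(user_role, set())
    return [doc for doc in documents if doc["classification"] in allowed]

docs = [
    {"id": 1, "classification": "public"},
    {"id": 2, "classification": "confidential"},
]
print(fetch_for_ai("analyst", docs))  # only the public document
```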
AI has the potential to unlock new levels of innovation and productivity. However, companies that do not fully understand the risks around AI and follow best practices for its secure use will not be able to realize that full potential, and they could leave themselves open to new and difficult-to-detect vulnerabilities.
This article was originally published at MSSP Alert.
Still not sure how to feel about the whole AI boom. It’s very useful for some cases, but can so easily be abused…
I believe this is critically important for MSPs/MSSPs to be THE prudent advisors in this fledgling space for SMBs. I am seeing far too many businesses dive in headfirst without thoroughly evaluating their true needs and the security related to AI.
It seems like it’s more likely that the bad actors will be implementing AI to attack SMBs long before it is feasible/cost-effective for most of what end users consider “AI” to be implemented at those SMBs.
This article does a great job of showing both the benefits and risks of AI. It’s exciting to see how AI is improving industries like customer service and manufacturing, but at the same time, the potential for cybercriminals to misuse it is pretty concerning. I really liked the focus on risk management—it’s so important for businesses to have strong safeguards in place while taking advantage of AI’s potential. Finding the right balance between innovation and security is going to be key moving forward!
Great article. I use AI often, and it’s a great tool to help increase what I can accomplish in a day. However, there are risks involved, and understanding that and implementing protections is a great step.
Not sure how I feel about AI. For me, there are still too many security risks.