Share This:

AI toolsThe rapid spread of generative artificial intelligence (Gen AI) tools has reached a tipping point, raising serious cybersecurity concerns, and creating opportunities for managed service providers (MSPs).

A survey of 200 North American security leaders conducted by OpinionRoute on behalf of 1Password finds nearly two-thirds (63 percent) now feel the biggest internal security threat is their employees unknowingly giving AI agents access to sensitive data. 

Additionally, half (50 percent) acknowledge that their organization has already experienced a confirmed or suspected cyber incident caused by AI or AI agents in the last six months. 

Only 21 percent of security leaders say they have full visibility into all AI tool utilizations, and nearly one third (32 percent) believe up to half (50 percent) of their employees are using unauthorized AI tools. In total, only 2.5 percent of organizations believe they have full visibility into AI applications and the level of data they can access. 

AI risks are escalating fast

Well over half (54 percent) describe their enforcement of AI governance policies as being weak. According to the survey, 56 percent of respondents estimate that 26 to 50 percent of AI agents and tools in their organization remain unmanaged compared to those that are governed.

A similar survey from ManageEngine finds that 70 percent of IT decision makers (ITDMs) have identified unauthorized AI use within their organizations and 60 percent of employees are using unapproved AI tools more than they were a year ago. A full 91 percent have implemented policies, but only 54 percent have implemented clear, enforced AI governance policies and actively monitor for unauthorized use of generative AI tools. 

A full 85 percent also report that employees are adopting AI tools faster than their IT teams can assess them, with 32 percent of employees having entered confidential client data into AI tools without confirming company approval. Well over two-thirds (37 percent) have entered private, internal company data. More than half (53 percent) said that using personal devices for work-related AI tasks creates a blind spot in their organization’s security posture. 

The risks that come from AI usage

At this point, it’s probable that it’s only a matter of time before there is a serious breach involving AI tools, applications, and services. Many end users are now routinely pasting sensitive data into chat interfaces without reading the fine print of its user licensing agreement. Organizations will use much of that data to train the next iteration of an AI model, increasing the risk that sensitive information could surface in future AI outputs—often in unpredictable ways.

Cybercriminals continue to sharpen their prompt engineering skills to access data, frequently bypassing existing guardrails. They are also targeting autonomous AI agents. If they are compromised, it will provide them with an ability to compromise an entire process. 

In fact, vulnerabilities in AI tools and platforms are now being discovered at a rapid pace. For example, a vulnerability in Microsoft 365 Copilot, dubbed EchoLeak, allows cyberattackers to exfiltrate sensitive data from Copilot’s context window when interacting with a large language model (LLM) without phishing and minimal user interaction. The attack chain, dubbed LLM Scope Violation, bypasses measures meant to thwart prompt injection attacks. 

Discovery is key to securing AI tools

As usual, organizations are rushing to adopt AI and overlooking critical cybersecurity concerns. Cybersecurity teams cannot secure what they don’t know about, so the first task frequently assigned to MSPs is simple discovery. MSPs must actively assess the scope of AI tool usage. With that insight, they can craft appropriate policies and controls to ensure responsible use of these tools and services.

Implementing those security measures won’t eliminate all AI-related incidents. However, it will significantly reduce the number of issues that are likely to occur. In that context, a managed security service dedicated to securing AI tools and platforms holds immense value.

Photo: phive / Shutterstock


Share This:
Mike Vizard

Posted by Mike Vizard

Mike Vizard has covered IT for more than 25 years, and has edited or contributed to a number of tech publications including InfoWorld, eWeek, CRN, Baseline, ComputerWorld, TMCNet, and Digital Review. He currently blogs for IT Business Edge and contributes to CIOinsight, The Channel Insider, Programmableweb and Slashdot. Mike blogs about emerging cloud technology for Smarter MSP.

Leave a reply

Your email address will not be published. Required fields are marked *