
I am willing to bet that almost every managed service provider (MSP) will deploy, or at the very least experiment with, agentic AI in 2025. Rapid advances in the ability of large language models (LLMs) to manage tasks probabilistically have led IT vendors, IT service providers, and even end-user organizations to develop a slew of AI agents.

Moving beyond the simple chatbot experience

It’s not clear just yet how reliably AI agents can execute tasks, but the opportunity to reduce costs is too great for MSPs to ignore. The challenge now is determining which use case(s) make the most sense to assign to an AI agent. Once the use cases are determined, MSPs can then figure out how to orchestrate a workflow involving multiple agents.
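To make the orchestration idea concrete, here is a minimal sketch of a workflow that routes a service request through two agents: a triage agent that classifies the ticket, and a specialist agent that handles it. Everything here is hypothetical; the agent functions are stubs standing in for LLM calls, and none of the names (`triage_agent`, `billing_agent`, `run_workflow`) correspond to any vendor's API.

```python
# Hypothetical sketch of a two-step, multi-agent workflow.
# Each "agent" is a plain function; in practice each would wrap an LLM call.

def triage_agent(ticket: dict) -> str:
    """Classify a service request. A real agent would prompt an LLM here."""
    text = ticket["description"].lower()
    if "invoice" in text or "billing" in text:
        return "billing"
    return "support"

def billing_agent(ticket: dict) -> str:
    return f"Billing agent reviewed ticket {ticket['id']}"

def support_agent(ticket: dict) -> str:
    return f"Support agent resolved ticket {ticket['id']}"

# The orchestrator maps the triage result to the agent that handles it.
AGENTS = {"billing": billing_agent, "support": support_agent}

def run_workflow(ticket: dict) -> str:
    route = AGENTS[triage_agent(ticket)]  # step 1: classify, pick an agent
    return route(ticket)                  # step 2: dispatch to that agent

print(run_workflow({"id": 42, "description": "Question about my invoice"}))
```

Even in this toy form, the design choice matters: the orchestrator, not the agents, decides the handoff, which is what makes the workflow auditable.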

Beyond personifying AI agents, as Salesforce.com has done with Einstein, the next advancement is to train agents to move past the simple chatbot experience and create business value for the company as well; for example, taking an end-user service request and guiding the conversation toward an upsell opportunity. This will require a deep understanding of processes and requirements (i.e., lots of training) before it can become a reality.

Making AI agents more accessible

Fortunately, it’s becoming much simpler to experiment with agentic AI. Amazon Web Services (AWS) has both the Amazon Q Business and Amazon Q Developer services available. The former is designed to make it simple for business executives to construct workflows using AI agents while the latter provides advanced capabilities for application developers. A hybrid cloud option is also available from koderAI, a startup that is previewing a namesake platform that enables anyone to build AI agents.

It’s not clear to what degree customers will appreciate engaging with AI agents. Some may still prefer the human approach, but over time, customers will come to see an organization that doesn’t provide some AI agent capability to resolve issues as antiquated. We all have more important things to do than engage with an IT support specialist. Of course, if the AI agent experience winds up being suboptimal, an MSP should avoid providing this type of capability until it has been better vetted.

Using caution when leveraging agentic AI for mission-critical tasks

A recent report from Anthropic and Redwood Research finds that in some cases, LLMs will ignore safety guardrails that organizations have put in place. MSPs should avoid relying heavily on AI agents for mission-critical tasks that require deterministic execution. Because LLMs are probabilistic, they are not guaranteed to produce the same output even when given the same prompt.

Despite these concerns, however, it’s important to remember that AI agents will become smarter as the underlying models leverage additional compute resources to invoke more powerful reasoning engines. No MSP should ignore the rise of AI agents. You should always exercise some level of prudence by having humans in the loop to verify the AI agents are performing the tasks as requested and nothing more.
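One simple way to put that prudence into practice is to sample a probabilistic agent more than once and escalate to a human whenever its answers disagree. The sketch below illustrates the pattern with a deliberately nondeterministic stub agent; the function names and the escalation policy are assumptions for illustration, not any product's behavior.

```python
import random

def flaky_agent(task: str, rng: random.Random) -> str:
    # Stand-in for an LLM call with temperature > 0: the same task
    # can yield different answers on different runs.
    return rng.choice(["restart service", "restart service", "delete database"])

def run_with_oversight(task: str, samples: int = 3, seed: int = 0) -> str:
    """Sample the agent several times; only proceed if all answers agree."""
    rng = random.Random(seed)
    answers = {flaky_agent(task, rng) for _ in range(samples)}
    if len(answers) == 1:            # unanimous: reasonably safe to proceed
        return answers.pop()
    return "ESCALATE_TO_HUMAN"       # disagreement: a human reviews the task

print(run_with_oversight("resolve the outage on server-7"))
```

Majority voting, confidence thresholds, or an approval queue are obvious variations; the point is that the human check sits outside the agent, where the agent cannot talk its way around it.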

Photo: TippaPatt / Shutterstock



Posted by Mike Vizard

Mike Vizard has covered IT for more than 25 years, and has edited or contributed to a number of tech publications including InfoWorld, eWeek, CRN, Baseline, ComputerWorld, TMCNet, and Digital Review. He currently blogs for IT Business Edge and contributes to CIOinsight, The Channel Insider, Programmableweb and Slashdot. Mike blogs about emerging cloud technology for Smarter MSP.
