Just about every organization will be experimenting with artificial intelligence (AI) in the coming year. Many of them will quickly discover that operationalizing AI requires getting their data management house in order. After all, no matter how advanced AI becomes, the old rule of garbage in, garbage out still applies: the more bad data goes in, the more bad output comes out.

Curate reliable data to reduce risk

Unlike a general-purpose platform such as ChatGPT, which is prone to hallucinations, any AI solution an organization adopts needs to be consistently reliable. No matter how often an AI platform guesses right, Murphy's Law dictates that when something does go wrong, it will inevitably affect a mission-critical process. The best way to minimize that risk is to carefully curate the data an AI model is exposed to.
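To make that concrete, here is a minimal sketch, in Python, of what a basic curation step might look like before data ever reaches a model. The record fields and thresholds are hypothetical, purely for illustration; a real pipeline would enforce a schema agreed upon with the business.

```python
# A minimal, hypothetical curation step: drop duplicate, incomplete,
# or implausible records before they ever reach a model. The field
# names ("account_id", "mrr") are illustrative, not a real schema.

def curate(records):
    seen = set()
    clean = []
    for rec in records:
        key = rec.get("account_id")
        if key is None or key in seen:
            continue  # skip records with no ID and exact duplicates
        if rec.get("mrr") is None or rec["mrr"] < 0:
            continue  # skip missing or implausible revenue figures
        seen.add(key)
        clean.append(rec)
    return clean

raw = [
    {"account_id": "a1", "mrr": 1200},
    {"account_id": "a1", "mrr": 1200},   # duplicate
    {"account_id": "a2", "mrr": -50},    # implausible value
    {"account_id": None, "mrr": 300},    # missing identifier
]
print(curate(raw))  # -> [{'account_id': 'a1', 'mrr': 1200}]
```

None of this is sophisticated, and that is the point: a model is only as trustworthy as the mundane checks that stand between it and raw data.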

The on-premises vs. cloud debate

In fact, one of the reasons organizations are investing in data lakes hosted in the cloud is to jump-start this process. A survey of 500 full-time enterprise IT and data professionals conducted by Dremio, a data lakehouse provider, found that 70 percent of respondents will have more than half of their analytics in a data lakehouse within the next three years. Another 56 percent estimate they are saving more than 50 percent on analytics costs by moving to a data lakehouse.

While many organizations may opt for data lakes hosted in the cloud, there are also plenty of reasons organizations will opt to manage data on-premises, with security concerns being the top factor. In most cases, organizations will default to some type of federated approach to data management. Regardless of the approach, the more data there is to manage, the more likely it is that organizations will require the expertise of a managed service provider (MSP). For example, an MSP's expertise will be required to set up a retrieval-augmented generation (RAG) platform that grounds a large language model (LLM) in an organization's own data, enabling it to add generative AI capabilities to a workflow.
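To illustrate what a RAG setup actually does, here is a minimal sketch of its retrieval half in Python. The keyword-overlap scoring is deliberately naive, and the final model call is left as a comment; a production deployment would use embeddings, a vector database, and whatever model endpoint the organization has standardized on.

```python
# A minimal, hypothetical sketch of RAG retrieval: find the documents
# most relevant to a question, then fold them into the prompt so the
# LLM answers from the organization's own data rather than guessing.

def retrieve(query, documents, k=2):
    # Naive relevance score: how many words the query and document share.
    q_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query, documents):
    context = "\n".join(retrieve(query, documents))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Backups run nightly at 02:00 UTC.",
    "Support tickets are triaged within four hours.",
    "Invoices are issued on the first of each month.",
]
prompt = build_prompt("When do backups run", docs)
print(prompt)  # this prompt would then be sent to the deployed LLM
```

Note that nothing here retrains the model: RAG works by feeding the right context in at query time, which is exactly why the quality of the underlying data matters so much.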

The AI opportunity for MSPs

MSPs stand to benefit greatly from the rise of AI. The biggest opportunity may not be AI itself, but helping organizations put their AI building blocks in place. Much like the merchants who grew rich selling equipment to miners during the California Gold Rush, MSPs that can help organizations manage their data as a critical business asset will be in high demand.

In addition to managing what could easily become petabytes of data, those organizations will need assistance navigating what promises to be a byzantine set of rules and regulations that will soon be enacted. Any organization that applies AI to a process will need to be able to explain how the underlying model was created. That starts with identifying any inherent biases that might be lurking in the data used to train a model.
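As a rough illustration of where that work starts, a bias audit can begin with something as simple as measuring how labels are distributed across groups in the training data. The group and label fields below are hypothetical; the point is that skew gets measured and documented before a model is ever trained.

```python
# A minimal, hypothetical pre-training bias audit: count label
# frequencies within each group so that obvious skew is visible
# (and documentable) before training begins.

from collections import Counter

def label_balance(rows, group_field, label_field):
    counts = {}
    for row in rows:
        group = row[group_field]
        counts.setdefault(group, Counter())[row[label_field]] += 1
    return counts

training_rows = [
    {"region": "north", "approved": True},
    {"region": "north", "approved": True},
    {"region": "south", "approved": False},
    {"region": "south", "approved": False},
]
print(label_balance(training_rows, "region", "approved"))
# -> {'north': Counter({True: 2}), 'south': Counter({False: 2})}
```

A skew this stark would be a signal to go back to the data before going forward with the model.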

Given all the data management issues organizations face, it may be a while before AI becomes pervasive, despite the current level of enthusiasm for all things AI. At this juncture, however, its arrival is all but inevitable. The main issue is making sure organizations use AI responsibly rather than haphazardly, because it can do a lot more harm than good if the data it relies on isn't of the quality required.

Photo: Yavdat / Shutterstock



Posted by Mike Vizard

Mike Vizard has covered IT for more than 25 years, and has edited or contributed to a number of tech publications including InfoWorld, eWeek, CRN, Baseline, ComputerWorld, TMCNet, and Digital Review. He currently blogs for IT Business Edge and contributes to CIOinsight, The Channel Insider, Programmableweb and Slashdot. Mike blogs about emerging cloud technology for Smarter MSP.
