Nearly half (47 percent) of the 1,000 senior technology executives surveyed in a recent study said they are seeking external assistance to implement AI. Of those, 22 percent are partnering with a technology services provider, while 25 percent are partnering with domain experts to help build and deploy AI models.

S&P Global Market Intelligence conducted the survey on behalf of Vultr. It finds that organizations have, on average, deployed 150 models in production environments, and that two-thirds (66 percent) rely on open-source AI models. Overall, 87 percent of respondents said they expect a moderate or significant increase in AI spending, while 5 percent expect to have achieved advanced usage of AI within the next two years.

If even well-resourced organizations are likely to rely on partners to implement AI, it stands to reason that small and medium-sized companies with fewer resources will follow suit. A separate survey conducted by Umpqua Bank of 1,200 owners, executives, and financial decision-makers at U.S. small and middle-market businesses finds more than half (56 percent) of the mid-sized companies polled said they are making AI investments their top priority, with nearly 8 in 10 planning to implement generative AI. Among small businesses, 45 percent similarly plan to adopt generative AI.

Change creates a growing need for AI expertise

The Vultr survey also identifies the areas where respondents most need additional expertise: optimizing insufficient CPU or graphics processing unit (GPU) resources (65 percent), managing data locality issues (53 percent), and improving storage performance (50 percent). These are followed by security (35 percent), open ecosystems (33 percent), and cost (29 percent).

A full 85 percent also said they anticipate the number of inference engines required to run AI models at the network edge will grow in the near future.

It’s already apparent that AI will soon drive a massive wave of infrastructure upgrades. While much of the training of AI models may happen in the cloud, organizations generally deploy inference engines as close as possible to the data they need to access. Many organizations store the bulk of their data in an on-premises IT environment or at the network edge, where it is created and consumed.

Shifting responsibilities in order to transform

As a result, many organizations are increasingly turning to traditional IT teams to actively manage the deployment of the inference engines needed to run AI models, which enables data science teams to focus more of their efforts on training AI models. A separate survey of 500 software developers, engineers, managers, and directors conducted by Civo, a cloud service provider, finds fewer than half (47 percent) have a designated machine learning operations (MLOps) team in place to train and deploy AI models.

Many organizations will eventually deploy a mix of generative, predictive, and causal AI models extensively. Today, however, most of those models are still confined to proof-of-concept (PoC) projects. As more organizations engage IT services partners to operationalize AI, the pace at which businesses transform is only going to accelerate.

Photo: PopTika / Shutterstock



Posted by Mike Vizard

Mike Vizard has covered IT for more than 25 years, and has edited or contributed to a number of tech publications including InfoWorld, eWeek, CRN, Baseline, ComputerWorld, TMCNet, and Digital Review. He currently blogs for IT Business Edge and contributes to CIOinsight, The Channel Insider, Programmableweb and Slashdot. Mike blogs about emerging cloud technology for Smarter MSP.
