Nearly half (47 percent) of the 1,000 senior technology executives surveyed in a recent study say they are seeking external assistance to implement AI. Of those, 22 percent are partnering with a technology services provider, while 25 percent are partnering with a domain expert to help build and deploy AI models.
If organizations with substantial resources are relying on partners to implement AI, it stands to reason that small and medium-sized companies with fewer resources will follow suit. A separate survey conducted by Umpqua Bank of 1,200 owners, executives, and financial decision-makers at U.S. small and middle-market businesses finds that more than half (56 percent) of the mid-sized companies polled are making AI investments their top priority, and nearly 8 in 10 plan to implement generative AI. Nearly half of small businesses (45 percent) similarly plan to adopt generative AI.
Change creates a growing need for AI expertise
The Vultr survey also identifies the areas where respondents most need additional expertise: optimizing insufficient CPU or graphics processing unit (GPU) resources (65 percent), managing data locality issues (53 percent), and improving storage performance (50 percent). These are followed by security (35 percent), open ecosystems (33 percent), and cost (29 percent).
A full 85 percent also said they anticipate growing the number of inference engines required to run AI models at the network edge in the near future.
It’s already apparent that AI will soon drive a massive wave of infrastructure upgrades. While much of the training of AI models may happen in the cloud, organizations generally deploy inference engines as close as possible to the data they need to access. Many organizations store the bulk of their data in an on-premises IT environment or at the network edge, where it is created and consumed.
Shifting responsibilities in order to transform
As a result, many organizations are increasingly turning to traditional IT teams to actively manage the deployment of the inference engines needed to run AI models, freeing data science teams to focus more of their efforts on training those models. A separate survey of 500 software developers, engineers, managers, and directors conducted by Civo, a cloud service provider, finds that fewer than half (47 percent) have a designated machine learning operations (MLOps) team in place to train and deploy AI models.
Many organizations will eventually deploy a mix of generative, predictive, and causal AI models at scale, but most of those models today remain confined to proof-of-concept (PoC) projects. As more organizations engage IT services partners to operationalize AI, the pace at which businesses transform is only going to accelerate.
Photo: PopTika / Shutterstock