In an ideal world, a managed service provider (MSP) would be involved in building and deploying cloud applications every step of the way. In reality, however, many organizations are both building new cloud applications and lifting and shifting legacy applications into the cloud faster than they can operationally handle.
A new survey of 338 IT professionals conducted by Dimensional Research on behalf of Ixia, a unit of Keysight Technologies that provides network testing tools, finds that 95 percent of respondents experienced an application or network performance issue due to visibility problems. A total of 38 percent named insufficient visibility as a key factor in application outages, while 31 percent cited the same factor when they experienced network outages. A full 87 percent of respondents said they find it difficult to predict application performance in the cloud.
95% of #IT professionals surveyed by @IXIAcom experienced an application or network performance issue due to visibility problems in the #cloud
Fewer than 20 percent of IT professionals had complete, timely access to data packets in public clouds. That compares to 55 percent having access to data packets in a private cloud, and 82 percent having access in an on-premises IT environment.
The top three reasons cited for wanting access to data packets are monitoring and ensuring application performance (60 percent), enabling threat identification (59 percent), and identifying indicators of security breaches (57 percent).
The next steps
The issue many of these IT organizations are about to discover is that monitoring applications in the cloud is only going to get more difficult. During the first phase of the cloud, most organizations essentially swapped out an on-premises IT environment based on, for example, VMware for an IT environment based on a derivative of an open source virtual machine. The hardest part of that transition was arguably refactoring existing applications to run on top of those open source virtual machines.
But now organizations are entering the next phase of the cloud. This phase is marked by the rise of so-called cloud-native applications based on containers such as Docker. These applications are frequently deployed on virtual machines, but large numbers of them will soon be deployed on bare-metal servers. To make matters even more challenging, many of these applications will inherently be hybrid, in the sense that they employ Kubernetes clusters to invoke serverless computing frameworks on demand to process workloads that tend to be "spiky" in nature.
To enable IT organizations to monitor those environments, the Cloud Native Computing Foundation (CNCF), which oversees development of Kubernetes, has also been driving development of the open source Prometheus monitoring project. In some cases, cloud service providers are making Prometheus available as a monitoring service on their clouds. In other cases, Prometheus is being incorporated into a variety of application performance monitoring (APM) services that are typically delivered as a cloud service.
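To make the monitoring model concrete: Prometheus works by periodically scraping a plain-text `/metrics` endpoint that each application exposes. The sketch below shows that idea using only the Python standard library; a real deployment would typically use the official prometheus_client package instead, and the metric name `http_requests_total` and port 8000 are illustrative assumptions, not anything prescribed by the survey or by Prometheus itself.

```python
# Minimal sketch of exposing application metrics in Prometheus'
# plain-text exposition format, using only the standard library.
# Assumption: the counter name and port below are illustrative;
# production code would normally use the prometheus_client package.
from http.server import BaseHTTPRequestHandler, HTTPServer
from threading import Thread

REQUEST_COUNT = {"GET": 0, "POST": 0}  # in-memory request counters


def render_metrics() -> str:
    """Render the counters in Prometheus' text exposition format."""
    lines = [
        "# HELP http_requests_total Total HTTP requests handled.",
        "# TYPE http_requests_total counter",
    ]
    for method, count in sorted(REQUEST_COUNT.items()):
        lines.append(f'http_requests_total{{method="{method}"}} {count}')
    return "\n".join(lines) + "\n"


class MetricsHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/metrics":
            # Prometheus scrapes this endpoint on a schedule.
            body = render_metrics().encode()
            self.send_response(200)
            self.send_header("Content-Type", "text/plain; version=0.0.4")
            self.end_headers()
            self.wfile.write(body)
        else:
            # Ordinary application traffic increments the counter.
            REQUEST_COUNT["GET"] += 1
            self.send_response(200)
            self.end_headers()


if __name__ == "__main__":
    server = HTTPServer(("127.0.0.1", 8000), MetricsHandler)
    Thread(target=server.serve_forever, daemon=True).start()
    # A Prometheus server would then be configured to scrape
    # http://127.0.0.1:8000/metrics
```

Every application an MSP instruments needs an endpoint like this (plus a scrape configuration on the Prometheus side), which is part of why the per-application effort described below adds up so quickly.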
54% of #IT professionals surveyed by @IXIAcom ranked cloud monitoring as a very important capability.
Configuring those APM services and then providing IT organizations with actionable intelligence clearly represents a major opportunity for MSPs. Most internal IT teams don't do this themselves because they lack the time and the skills. It takes a significant amount of effort to instrument a cloud application. Multiply that by the number of applications deployed in the cloud, and the scope of the problem becomes immediately apparent. Unsurprisingly, 54 percent of the Ixia survey respondents ranked cloud monitoring as a very important capability.
If the success of any IT service provider comes down to finding where a customer has a problem and solving it, then cloud monitoring, which is one of the biggest pain points in all of IT, represents a major opportunity for MSPs to add significant value.
Photo: Gorodenkoff / Shutterstock