Many organizations are finding it increasingly difficult to manage and maintain observability of cloud computing environments that grow more complex by the day. A survey of 400 senior IT professionals in the United States, Canada, and the United Kingdom conducted by Aptum, a provider of managed services, finds 81 percent of respondents report that visibility and control over cloud environments are a challenge.
In fact, well over half of respondents (57 percent) note they have encountered unexpected cloud costs to one degree or another. To be fair, many of the respondents probably had some unrealistic expectations.
Unrealistic expectations of cloud costs
The truth is that unexpected costs arise regardless of where an application workload is deployed. It’s also not uncommon for organizations to incur higher costs for a variety of reasons, not the least of which is that very few make a wholesale switch to the cloud overnight.
IT organizations quickly discover they need additional management and security tools, along with specialists to deploy them, to successfully operate what is really a separate IT environment. Add in the cost of migrating data to the cloud, and the transition becomes even less trivial.
On the plus side, the survey makes it clear that most organizations are attaining a return on their cloud investments. A full 80 percent of respondents report they have seen success using cloud services to unlock greater business profitability, with 72 percent reporting that increased efficiency is still a common driver of cloud computing adoption.
Maintaining efficiency with growing complexity
The challenge going forward is the degree to which organizations can maintain that efficiency as IT environments become more complex. Organizations are starting to deploy “cloud-native” applications based on microservices that take advantage of Kubernetes orchestration software, which enables infrastructure resources to be consumed more efficiently by scaling them up and down more dynamically.
In theory, that approach reduces infrastructure costs by sparing IT teams from having to dedicate specific amounts of infrastructure resources to every application. At the same time, microservices make applications more resilient. Instead of crashing outright, a microservices-based application will, in theory, degrade gracefully as calls are rerouted around any microservice that suddenly becomes unavailable, as the sketch below illustrates.
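To make that idea concrete, here is a minimal Python sketch of the graceful-degradation pattern. The service URL, endpoint, and fallback payload are hypothetical, invented purely for illustration: the point is that a caller times out quickly and falls back to a safe default rather than failing outright when a dependent microservice is unavailable.

```python
import requests

# Hypothetical endpoint for a recommendations microservice; the URL and
# fallback payload below are illustrative, not from any real deployment.
RECOMMENDATIONS_URL = "http://recommendations.internal/api/v1/suggest"
FALLBACK_RESPONSE = {"items": [], "source": "fallback"}

def get_recommendations(user_id: str) -> dict:
    """Call the recommendations service, degrading gracefully on failure.

    If the dependency is slow or unreachable, return a safe default
    instead of letting the error propagate and crash the caller.
    """
    try:
        resp = requests.get(
            RECOMMENDATIONS_URL,
            params={"user": user_id},
            timeout=0.5,  # fail fast rather than hang the whole request
        )
        resp.raise_for_status()
        return resp.json()
    except requests.RequestException:
        # The microservice is down or unreachable: degrade, don't crash.
        return FALLBACK_RESPONSE
```

In a production system this pattern is usually paired with a circuit breaker, so repeated failures stop traffic to the unhealthy service entirely until it recovers.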
The issue is that all the dependencies between microservices make it hard to determine the source of a performance issue. As a result, organizations are starting to invest in observability platforms that promise to provide more context via a single pane of glass for monitoring applications and infrastructure. Those platforms are starting to replace a plethora of monitoring tools that were each designed for a single platform, simply because IT teams don’t want to waste time correlating conflicting alerts generated by multiple tools that all have different user interfaces.
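As an illustration of what that consolidated context looks like in practice, the sketch below instruments a request handler with the open source OpenTelemetry SDK for Python, so a single trace captures the calls a service makes to its dependencies. The service, span, and attribute names are assumptions made up for this example, and a real deployment would export spans to an observability backend rather than the console.

```python
# Minimal sketch using the OpenTelemetry SDK for Python
# (pip install opentelemetry-sdk). Names here are hypothetical.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import (
    ConsoleSpanExporter,
    SimpleSpanProcessor,
)

# Wire up tracing; a real system would swap the console exporter
# for one that ships spans to its observability platform.
provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("checkout-service")  # hypothetical service name

def handle_checkout(order_id: str) -> None:
    # One parent span per request; child spans mark calls to other
    # microservices so a single trace shows the whole dependency chain.
    with tracer.start_as_current_span("handle_checkout") as span:
        span.set_attribute("order.id", order_id)
        with tracer.start_as_current_span("call_inventory_service"):
            pass  # request to the inventory microservice would go here
        with tracer.start_as_current_span("call_payment_service"):
            pass  # request to the payment microservice would go here

handle_checkout("ORDER-123")
```

Because every service emits spans in the same format, the platform can stitch them into one trace, which is what replaces the manual correlation of alerts across disparate tools.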
Increased observability, of course, comes at a cost: new platforms will be required. Rationalizing existing monitoring tools might, in theory, defray some of that expense.
However, the decision to replace legacy tools with an observability platform creates one of those seminal moments when organizations tend to reexamine the degree to which they want to be in the business of managing infrastructure, at a time when enterprise IT is becoming more extended than ever. Naturally, that creates a unique opportunity for MSPs to lead the shift toward observability.
One way or another, increased observability will soon be achieved. The only real debate at this point is over who is going to provide it.
Photo: PanyaStudio / Shutterstock