
Cloud computing has been positioned as a means of providing a highly available platform at a low price. This is true, but only to an extent. We have seen that even the big platforms, such as AWS and Google Cloud Platform, occasionally wobble, with services being knocked out, because high availability is not the same as business continuity.

Understanding that difference may help MSPs who want to differentiate themselves from the crowd. Building a business continuity offering should even be possible for those who are using third-party cloud platforms.

High availability versus business continuity

High availability is the never-ending pursuit of getting ever closer to 100 percent uptime. The gold standard in the 1990s used to be 99.9 percent uptime, which equates to 8.76 hours of downtime per year. Over time, this has improved, with most providers now offering at least ‘five nines’ (99.999 percent) uptime, or around 5.26 minutes of downtime per year.
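For anyone who wants to check those figures, the downtime allowance follows directly from the uptime percentage. A quick back-of-the-envelope calculation in Python:

```python
# Convert an uptime percentage into allowed downtime per year.
HOURS_PER_YEAR = 365 * 24  # 8,760 hours

for availability in (99.9, 99.99, 99.999):
    downtime_hours = HOURS_PER_YEAR * (1 - availability / 100)
    print(f"{availability}% uptime -> {downtime_hours:.2f} hours "
          f"({downtime_hours * 60:.1f} minutes) of downtime per year")
```

Running this shows 99.9 percent allowing 8.76 hours of downtime per year and 99.999 percent allowing roughly 5.3 minutes.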

Five minutes may not sound like much, but that is just the downtime of the platform itself. It does not include getting the application and its data back up and running. This is where things can go wrong, because it can take many hours for full service to be resumed. Meanwhile, customers may go elsewhere.

With business continuity, the aim is that a level of capability is maintained so that users can continue to run their applications and keep working with customers and suppliers, no matter what.

So, that means 100 percent availability, then? Luckily, no. What it means is creating an architecture that allows for failure.

The role of cloud platforms with multiple data centres

The idea is to create an environment which is cost-effective, yet has the required redundancy of function to deliver a business continuity capability. As previously stated, the best way to build an environment that is flexible and offers business continuity is through microservices held and provisioned in containers. The joy of small containers is that they can be spun up rapidly in a different location. With the right provisioning software in place, they can be stateless, with no dependencies on specific IP addresses or storage locations.

For most organisations, it can take as little as a minute or two to create a new instance of a microservice. With the right software in place, this can be triggered automatically. There is no need to wait for a human to notice that something has gone wrong and initiate a new instance manually.
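As a rough sketch of what that automation can look like, the Python fragment below uses the Docker SDK and the requests library to poll a service's health endpoint and start a replacement container when it stops responding. The image and health-check URL are hypothetical, and a real orchestrator would do this far more robustly, but the principle is the same.

```python
import time

import docker    # Docker SDK for Python
import requests

SERVICE_IMAGE = "registry.example.com/orders:1.4"   # hypothetical image
HEALTH_URL = "http://orders.internal:8080/health"   # hypothetical endpoint

client = docker.from_env()

def is_healthy() -> bool:
    """Return True if the microservice answers its health check."""
    try:
        return requests.get(HEALTH_URL, timeout=2).status_code == 200
    except requests.RequestException:
        return False

while True:
    if not is_healthy():
        # The microservice is stateless, so a fresh container is all that is
        # needed; no human has to notice the failure and intervene.
        client.containers.run(SERVICE_IMAGE, detach=True)
        print("Unhealthy service detected; replacement container started.")
        break
    time.sleep(10)  # poll every ten seconds
```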

The data associated with a microservice or composite application is a slightly different problem, because it is far more dynamic: whereas the actual microservice will only change when it is patched or upgraded, the data changes every time a transaction occurs.

Spinning up a new instance of the data from a stored backup copy is not the answer to business continuity. Because there is no way of knowing exactly how far the restored copy lags behind the original at the point of failure, reconciliation is nigh-on impossible without complex and costly database virtualisation approaches.

‘Hot’ data copies are needed

Mirroring data across different locations is a lot easier than it used to be, and the costs are far more tolerable. Remember, the idea of business continuity is not necessarily to provide exactly the same performance as before the issue, but to provide a level of performance that is acceptable to those using the system.

Therefore, the hot mirror can be placed on a lower-cost data storage platform. Once the original problem is solved, data reconciliation can be carried out and activity moved back to the primary data store.

Sure, there is still the possibility that some transactional data could be lost if the primary data store goes down before the latest transactions have been mirrored across to the copy. However, this should be minimal and a minor irritation, rather than a major problem across the customer base. This provides a cost-effective means of going beyond high availability into the world of business continuity, creating a saleable proposition.
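To see why that window of loss exists, consider a minimal sketch of asynchronous mirroring in Python, with hypothetical in-memory stores standing in for the primary and the mirror: each transaction is committed to the primary first and then shipped across in the background, so anything still queued when the primary fails is what could be lost.

```python
import queue
import threading

primary_store = []      # hypothetical stand-in for the primary data store
mirror_store = []       # hypothetical stand-in for the hot mirror
replication_queue = queue.Queue()

def write_transaction(txn: dict) -> None:
    """Commit to the primary first, then queue the change for the mirror."""
    primary_store.append(txn)       # synchronous write to the primary
    replication_queue.put(txn)      # asynchronous ship to the hot mirror

def replicate_forever() -> None:
    """Background worker that keeps the mirror up to date."""
    while True:
        txn = replication_queue.get()   # anything still queued when the
        mirror_store.append(txn)        # primary dies is the potential loss
        replication_queue.task_done()

threading.Thread(target=replicate_forever, daemon=True).start()

write_transaction({"order_id": 1001, "amount": 42.50})
replication_queue.join()        # wait for the mirror to catch up
print(len(primary_store), len(mirror_store))   # both now hold the transaction
```

Reconciliation after a failure then comes down to comparing what the primary had committed with what the mirror had received, which is a far smaller problem than trying to reconcile against a stale backup.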

Whether you do this across your own data centres or by using multiple data centres from a third party is neither here nor there. You are using the capabilities of a cloud platform, containers, and microservices, along with data mirroring, to move from “we are not sure when, but the part of our platform you are using may go down at any time for up to 5 minutes in any one year” to “we can provide you with a platform/service that will maintain a level of function no matter what.”

This is something that a customer can buy into at the business level, once the discussions move away from being purely technical. Your offerings as an MSP become a business investment, rather than an IT expenditure. As the arguments change, the sale should become easier. High availability becomes a moot point when you are using the platform to sell business continuity.




Posted by Clive Longbottom

Clive Longbottom is a UK-based independent commentator on the impact of technology on organisations and was a co-founder and service director at Quocirca. He has also been an IT industry analyst for more than 20 years.
