How do you deal with customers who need variable resources? The ones who, for part of the day, or on a cyclic basis of once per week, month, or quarter, show a spike in usage of CPU, network, or storage requirements?

Should you cap their usage, call them up to tell them off, or throttle their services back? Hopefully not. Hopefully you have already worked out that prescriptive service agreements are counter-productive.

It is tempting – and, in some cases, correct – to take a simple approach. The simplest approach (and the one most likely to lose you the customer) is to draw a high-water line across the customer's resource usage and charge at that rate. For example, the customer hits a peak in CPU usage at 9 a.m. every day as people log into the system. Therefore, you charge them at that peak level, no matter what. Their usage will be a fraction of that for more than 23 hours per day, but they hit that level, so they should pay that amount.

Should they? Of course not. Maybe a mid-level approach makes sense – an organisation running payroll every week will have a peak for a couple of hours during that week. Usage of 80 percent at that point and 10 percent at other times means that a charge pitched at 45 percent is a more palatable rate, right? Yes – but it is still likely to upset the customer, as it is not representative of their real usage.

A proper average starts to get closer to reality, but requires constant monitoring and feedback from the MSP to the customer. Any changes in usage can be handled rapidly, without any shocks to the customer’s finance department.
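The gap between these three models can be seen with a toy example. Here is a minimal Python sketch – the daily usage profile is invented for illustration, using the article's own figures of an 80 percent peak against a 10 percent baseline:

```python
# Toy comparison of the three charging models discussed above.
# The profile is invented: 24 hourly CPU readings (as a fraction of
# allocated capacity), with a spike at 9 a.m. as users log in.

usage = [0.10] * 24
usage[9] = 0.80  # the 9 a.m. login spike

high_water = max(usage)                     # bill at the peak, always
midpoint   = (max(usage) + min(usage)) / 2  # split the difference
average    = sum(usage) / len(usage)        # bill on true mean usage

print(f"high-water: {high_water:.2f}")  # 0.80 - peak rate all day
print(f"midpoint:   {midpoint:.2f}")    # 0.45 - still overstates use
print(f"average:    {average:.2f}")     # 0.13 - closest to reality
```

The one-hour spike drags the high-water and midpoint figures far above what the customer actually consumes, which is why only the true average comes close to reality.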

The reality is nothing like this

A modern organisation will have peaks and troughs in usage by month, week, day, hour, or even second. Remember the days of application service providers (ASPs) trying to charge via transaction costs (T-Costs)? For most ASPs, it became rapidly apparent that the cost to them of monitoring the sheer volume of transactions meant that the actual T-Cost itself was often two or three times higher than it needed to be.

What MSPs should be looking at is how to provide a customer with what they need under a cost model that makes sense to both parties. Something that is relatively predictable for the customer and makes a profit for the MSP. 

Here lies the need for flexi-flexible plans. I've written before about the need for plans that work across tiers, rather than being prescriptive. As a reminder, the idea was that tiered pricing can be used to encourage customers to use more services by making the cost per unit cheaper the more they use. There are many different versions of this approach, but a progressive MSP needs to add a further nuance to keep its customers completely happy.

Sometimes, a resource spike happens for a very short time. This can rarely be planned for – it is just an artifact of activities the customer is carrying out. For example, an action by a user may trigger a large file transfer that uses a lot of bandwidth for a few seconds or a large database query that requires a large amount of CPU.

However, if such a spike is not impacting other customers and can be accommodated by the platform, there should be no reason for the MSP to get upset about things. Sure, the MSP must continue to monitor for repeats of such spikes, and make sure that the spike doesn't become more prolonged and that it is not something more suspicious. Allowing such small transgressions of contractual resource usage should be okay, as long as the customer is made aware of what happened and both parties can come to an agreement as to why it happened and whether it will be allowed to happen again.

My recommendation is that MSPs use tier-based pricing policies, with a high degree of flexibility so that short spikes do not automatically push the customer into a higher tier. This will become the new normal, as MSPs that are seen as being too inflexible will not attract or keep customers. Those that can demonstrate high levels of flexibility and be more customer-friendly will win.
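One established way to build in that kind of spike tolerance – an assumption on my part, not something prescribed here – is percentile-based measurement, long used for burstable bandwidth billing: discard the top few percent of readings so that brief spikes never set the tier. A minimal Python sketch, with invented tier boundaries and sample data:

```python
# Sketch of tier selection that ignores brief spikes, in the style
# of 95th-percentile ("burstable") billing. The tier boundaries and
# the sample readings below are invented for illustration.

def billing_tier(samples, tiers, percentile=0.95):
    """Pick a tier from usage samples, discarding the top
    (1 - percentile) of readings so short spikes don't trigger
    a higher tier."""
    ranked = sorted(samples)
    idx = min(int(len(ranked) * percentile), len(ranked) - 1)
    sustained = ranked[idx]  # sustained usage level, spikes excluded
    for ceiling, name in tiers:
        if sustained <= ceiling:
            return name
    return tiers[-1][1]

# Hypothetical tiers: (usage ceiling, tier name)
TIERS = [(0.25, "bronze"), (0.50, "silver"), (1.00, "gold")]

# 100 readings: steady 20% usage with two brief spikes to 90%.
readings = [0.20] * 98 + [0.90, 0.90]
print(billing_tier(readings, TIERS))  # "bronze" - spikes discarded
```

The customer stays in the tier that reflects their sustained usage, while the MSP still has the raw readings to hand for the monitoring and follow-up conversations described above.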

There are some caveats: the spike may not be down to customer activity – it could be down to malicious activity from outside, as black hats attempt to break into the system or probe how distributed denial of service (DDoS) defences are configured.



Posted by Clive Longbottom

Clive Longbottom is a UK-based independent commentator on the impact of technology on organizations and was a co-founder and service director at Quocirca. He has also been an ICT industry analyst for more than 20 years.
