
MSP platforms are, by nature, ‘chatty.’ Data is created not only by the customers using the platform, but by the platform itself. Every device generates data, generally held in some form of syslog. Some devices, such as firewalls, create additional data, such as patterns of user activity. Others create data around the process activity driven by users’ actions.

Yet all this data is often used for little beyond basic analysis. In the case of firewalls, for example, it is typically used only to work out whether an observed pattern is a malware payload or a third-party attack trying to gain access to the network.

With the rise of machine learning (ML), deep learning (DL), and artificial intelligence (AI), all that should be changing.

Machine learning’s role

Already, devices — such as hard disk drives — utilise ML through self-monitoring, analysis, and reporting technology (SMART) to predict when failure could occur and raise alerts before it happens, giving administrators the ability to replace a device before it fails and maintain data fidelity. Other areas are using similar approaches — for example, the raft of sensors on motherboards can pick up on problems such as failed fans and hot-spots — and they can raise alerts when things go awry.
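The kind of SMART-based pre-failure alerting described above can be sketched very simply. The attribute names and thresholds below are illustrative assumptions only — real monitoring would read live values from a tool such as smartctl, and sensible thresholds vary by drive model.

```python
# Illustrative thresholds for a few commonly watched SMART attributes.
# These numbers are assumptions for the sketch, not vendor guidance.
WATCHED_ATTRIBUTES = {
    "Reallocated_Sector_Ct": 0,   # any reallocated sectors warrant a look
    "Current_Pending_Sector": 0,  # sectors waiting to be remapped
    "Temperature_Celsius": 55,    # hypothetical operating ceiling
}

def smart_alerts(attributes):
    """Return (attribute, value) pairs that exceed their thresholds."""
    return [
        (name, attributes[name])
        for name, ceiling in WATCHED_ATTRIBUTES.items()
        if attributes.get(name, 0) > ceiling
    ]

drive = {"Reallocated_Sector_Ct": 12, "Current_Pending_Sector": 0,
         "Temperature_Celsius": 41}
for name, value in smart_alerts(drive):
    print(f"ALERT: {name} = {value}")
```

This raises an alert for the reallocated sectors before the drive actually fails — exactly the window an administrator needs to swap the device.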

This enables an MSP to maintain a more available platform for all of its users. However, ML is often dependent on rigid data sets against which it compares what is happening, and those data sets can be out of date. For example, data built up around five-year-old motherboards is unlikely to apply to modern ones, which are better engineered for airflow and can withstand higher operating temperatures.

Deep learning can help

DL is a more real-time analytic version of ML. While it still needs those starting data sets, it can build on these in real time by adding additional data found in its own environment. For a private network, particularly for an SMB, this is of little real use — the data from the small number of available devices is unlikely to be enough to change the underlying base data sets much.

However, an MSP is working across a much larger set of devices — and should be able to tap into other data sets as well. Being able to learn how your own platform deals with temperatures around motherboards means that you can use DL either to extend the life of motherboards where the base data set predicts failure in the near future, or to pre-empt failures that are more prevalent in your own environment. By using DL, it might be possible to identify why you are getting more failures than the base data set would indicate. This enables you to rearchitect how your platform operates to gain those extra benefits.
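One minimal way to picture "building on the base data set with your own environment's data" is to shift a vendor-supplied lifetime estimate toward locally observed failures. The weighted-average approach and all the numbers below are assumptions for illustration — a real DL system would be far richer than this.

```python
# Sketch: blend a base-data-set MTBF estimate with local observations.
# The fixed weight is an assumption; in practice confidence in local
# data would grow as more failures are observed.

def blended_mtbf(base_mtbf_hours, local_failure_hours, weight=0.2):
    """Shift the base MTBF estimate toward locally observed lifetimes."""
    if not local_failure_hours:
        return base_mtbf_hours  # no local evidence yet; trust the base
    local_mtbf = sum(local_failure_hours) / len(local_failure_hours)
    return (1 - weight) * base_mtbf_hours + weight * local_mtbf

# Base data set predicts ~40,000 hours; our platform sees earlier failures.
estimate = blended_mtbf(40_000, [31_000, 29_500, 33_000])
print(round(estimate))  # prints 38233
```

The point is the direction of travel: local evidence of earlier failures pulls the estimate down, prompting earlier replacement — or an investigation into why your environment is harder on the hardware.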

AI brings extra value

Often, you will see AI positioned as a self-learning environment that does not need a starting data set. I would argue that this isn’t the case and that a base data set is still required to get you up and running. However, I would then say that you will be using ML and DL until the AI capabilities of the system can kick in and deliver their additional value.

For example, let’s assume that ML will tell you when a motherboard is likely to fail, based on its knowledge of hundreds or thousands of other similar motherboards. DL should be able to adapt this over time to reflect how long the same motherboards will last in your specific environment. AI should be able to aggregate data from multiple different sources and provide insights into how you could get the most out of that motherboard.

Is the CPU being overworked while other motherboards are hardly being used at all? Is the presence of a large spinning disk array under a particular motherboard leading to higher temperatures than should be expected? What would happen if we moved the workloads from that motherboard to another one? Could we continue to use the motherboard above the storage array for other workloads without any need for an engineer to go in and move it? Is there, in fact, a strong financial reason to move the motherboard based on customer needs?
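The questions above can be framed as a simple decision rule: flag hosts that are running hot while doing little work as candidates for workload migration. The thresholds and host names here are hypothetical — a real system would weigh many more signals, including the financial ones mentioned above.

```python
# Toy sketch of a migration-candidate check. The temperature ceiling
# and utilisation floor are illustrative assumptions.

def should_migrate(cpu_util_pct, temp_c, temp_ceiling_c=70, util_floor_pct=20):
    """Suggest moving workloads off a host that runs hot while underused."""
    return temp_c > temp_ceiling_c and cpu_util_pct < util_floor_pct

# Hypothetical hosts: (CPU utilisation %, temperature in Celsius)
hosts = {
    "board-over-storage-array": (12, 78),  # lightly loaded but hot
    "board-standard-rack": (65, 58),       # busy but within limits
}
for name, (util, temp) in hosts.items():
    if should_migrate(util, temp):
        print(f"{name}: candidate for workload migration")
```

Here only the board sitting above the storage array is flagged — workloads could move off it without an engineer physically relocating the hardware.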

This is good news for an MSP that is monitoring and gaining feedback on its own environment. However, ML, DL, and AI should also be rolled out as services to your own customers. With the additional data that their applications or their service usage are creating, it then becomes possible for them to make better-informed decisions on how they should be using your services. In some cases, this may mean that a customer lowers its overall usage of the service; in other cases, it may increase its usage.

Your job is to leverage your own data and work with the customer so that they can receive the most benefit from your environment. Such partnerships around data will breed greater trust and loyalty between both parties. It may also enable you to move from a transactional partner selling just technical services to a consultancy-led provider selling advisory services on how best to optimise workflows and business processes — all based on a deep knowledge of the customer’s activities.

Sure, you will need additional tools, but it will undoubtedly be worth it.




Posted by Clive Longbottom

Clive Longbottom is a UK-based independent commentator on the impact of technology on organizations and was a co-founder and service director at Quocirca. He has also been an ITC industry analyst for more than 20 years.
