Customer experience metrics: The key to success

It is true that a good customer experience leads to more business through better "word-of-mouth marketing." And we all know that in the era of social media, a single negative customer interaction can become a public relations nightmare.
All of the contact centre metrics we use to measure "service" are proxies for this most important score of all: the customer's experience. Service Level and Average Speed of Answer (ASA) are maintained because we believe that long wait times lead to customer dissatisfaction. Abandons are a great proxy for customer satisfaction, because a customer who hangs up is almost always, by definition, unhappy with their wait time. Agent quality scores are maintained because we want a consistently excellent interaction with our customers, and the agent quality score is the mechanism we use to ensure that consistency.

Different flavours of experience metrics

There are as many "best" customer experience metrics as there are customer experience consultants. Different types of metrics can include customer satisfaction, first call resolution, net promoter score, agent quality score, and others.

Internally, companies often focus on experience scores that vary from one business unit to the next. Even when the scores carry the same name, they are almost always calculated using different algorithms. This, of course, makes perfect sense: different customers - calling the same company - are contacting our contact centres for different purposes. The experience metric should therefore be attuned to the purpose of the contact.

How can planners use customer experience metrics?

Customer experience scores exhibit seasonality, trends, and differences across contact centres. What does this mean to us planning analysts?

Data streams that exhibit this sort of behaviour are similar to many of the other time-series data we typically work with, like contact volumes, handle times, attrition, and shrinkage. We analysts cut our teeth developing forecasts of items that look just like experience data. This means that we should be able to forecast experience data streams.
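To make the idea concrete, here is a minimal sketch of forecasting a weekly experience score. The scores, the four-week season, and the seasonal-naive method are all illustrative assumptions; a real planning tool would use a proper time-series model such as Holt-Winters.

```python
# Hypothetical sketch: seasonal-naive forecast of a weekly customer
# experience score. Each future week is forecast with the score from
# the same week one season earlier - the simplest seasonal model.

def seasonal_naive_forecast(history, season_length, horizon):
    """Forecast `horizon` future points by repeating the most
    recent full season of `history`."""
    forecast = []
    for h in range(horizon):
        # Index of the matching point in the most recent full season.
        idx = len(history) - season_length + (h % season_length)
        forecast.append(history[idx])
    return forecast

# Eight weeks of hypothetical CSAT scores with a 4-week pattern.
scores = [82, 80, 78, 84, 83, 81, 79, 85]
print(seasonal_naive_forecast(scores, season_length=4, horizon=4))
# [83, 81, 79, 85]
```

The same function works unchanged on any weekly stream - contact volumes, handle times, or shrinkage - which is exactly the point: experience data can ride through the forecasting machinery planners already have.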

This adds yet another dimension of planning. If we forecast customer experience scores by centre and staff group, we can use these new forecasts in a host of ways.

First, we can draw out the week-over-week customer experience trends, simply to see where we are heading. These forecasts then set executive-level expectations. If the trends are favourable, we can confirm that expectations are being met. If they are trending in the wrong direction, we will know that our current path needs to be adjusted. In effect, this time-series experience data acts as our early warning device.

Similarly, forecasts, and the resulting expectations, serve to soothe executives, too. If we have a traditional seasonal dip in customer experience scores, then we shouldn't be too alarmed when it comes to pass this year as well. But also, if we expect a seasonal dip in experience scores, we may be able to head it off this year by developing an agent training program in time.

Another great use of a customer experience forecast is as a point of comparison. The best companies view all of their forecasts (volumes, handle times, attrition, shrink, etc.) as a baseline for variance analyses.

As weekly performance data is tallied, it can be compared to the forecast. Any difference between forecasted and actual performance implies that something has changed. If we are forecasting and tracking customer experience scores, any deviation should be noted, explained, and potentially acted upon. For this sort of analysis to have any meaning, actuals must be compared against seasonally adjusted customer experience forecasts.
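The variance check above can be sketched in a few lines. The forecast, actuals, and tolerance threshold here are all hypothetical; the point is simply to flag weeks whose deviation from the seasonally adjusted forecast warrants explanation.

```python
# Hypothetical sketch of weekly variance analysis: compare actual
# experience scores against the forecast and flag any week whose
# deviation exceeds a tolerance.

def flag_variances(forecast, actual, tolerance=2.0):
    """Return (week, forecast, actual, variance) tuples for every
    week where the actual score deviates beyond the tolerance."""
    flagged = []
    for week, (f, a) in enumerate(zip(forecast, actual), start=1):
        variance = a - f
        if abs(variance) > tolerance:
            flagged.append((week, f, a, variance))
    return flagged

forecast = [83, 81, 79, 85]
actual = [84, 76, 80, 85]   # week 2 dipped unexpectedly
print(flag_variances(forecast, actual))
# [(2, 81, 76, -5)]
```

Because the forecast already carries the expected seasonal dip, only genuine surprises are flagged - a routine seasonal low produces no alert.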

We can use customer experience forecasts to plan better

The final, most interesting use of forecasts of customer experience metrics is as an input into the staff planning process. We have heard from several customers that customer experience scores are used to help allocate their calls amongst their competing call centre vendors. Those companies are actively attempting to improve their customer satisfaction by sending more contacts to those vendors who score best.

Who can blame them? But there is also no reason why a company couldn't increase staff levels in their centres that also score well. If improving customer satisfaction is important to your company - and your execs all think that it is - then it makes perfect sense to include the customer experience forecasts in your staff planning process and decision-making.
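One way such an allocation could work is to split contacts across centres in proportion to each centre's forecast experience score. The centre names, scores, and proportional rule below are all illustrative assumptions, not a prescribed method.

```python
# Hypothetical sketch: allocate a pool of contacts across centres in
# proportion to each centre's forecast experience score, so the
# best-scoring centres receive the largest share.

def allocate_by_score(total_contacts, centre_scores):
    """Split `total_contacts` proportionally to forecast scores.
    Simple rounding is used; a production tool would also enforce
    capacity limits and reconcile rounding drift."""
    total_score = sum(centre_scores.values())
    return {
        centre: round(total_contacts * score / total_score)
        for centre, score in centre_scores.items()
    }

forecast_scores = {"Centre A": 88, "Centre B": 74, "Centre C": 80}
print(allocate_by_score(10000, forecast_scores))
# {'Centre A': 3636, 'Centre B': 3058, 'Centre C': 3306}
```

The same logic extends to staffing: instead of contacts, distribute incremental headcount toward the centres whose forecast scores are strongest.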

It is simple. By developing customer experience time-series data, using this data to forecast expected performance, and applying this forecast to your variance analysis and staff planning, you can greatly improve your customers' experience.

About Deon Scheepers

Deon Scheepers is Regional Business Development Manager, Interactive Intelligence Africa.