Anybody who’s read much of what I have to say about Customer insights and Voice of the Customer (VoC) knows I’m quite critical of legacy KPIs like CSAT and NPS. Part of that, sure, is my inherent wiseass contrarian nature. That said, if I didn’t really believe there’s a better way to do things, I wouldn’t have invented a new top-level CX metric of my own (the Brand Alignment Score).
Sure, it’s easy to beat up on “the way we’ve always done it” and call it a day, congratulating oneself on being so much better than the rest. But it’s also important to understand the limitations not only of our own cleverness, but of the principles underpinning our own arguments. For example, it’s important to understand (and admit) that even the metric I made up isn’t a panacea.
So let’s talk about the two categories of ways metrics can be bad:
- Universal: these are properties of every top-level metric; no matter which one you use, there’ll always be problems
- Specific: these are the reasons a particular metric is bad and, ideally, why some other one is better.
No matter how great a new metric you develop or how clever you are (or think you are, or your LinkedIn circle of connections says you are), there’s no easy way (least of all by simply switching to a new type of question or measurement scale) to overcome the shortcomings that exist for all VoC metrics. This is the category one needs to be aware of before parading out a great new KPI…it’s what should keep you from thinking you’ve solved the world’s problems.
For example, every metric can be gamed or lead to unintended behaviors based on how it’s applied. Even operational measures (think: Average Handle Time, First Contact Resolution, etc.) often drive people to do things that negatively impact Customers’ experiences. It’s like squeezing a balloon…the pressure and air go somewhere and cause it to bulge out in a different place. If you emphasize scores über alles, you’ll get folks doing whatever it takes…to drive that score, not necessarily to drive good CX. This is why service professionals so often ask you to give them a 9 or 10 on the survey that’s on the way. There’s no perfect metric that’ll keep that from happening if the attitude of the organization is centered simply on hitting a target. Every KPI will get you to that bad place.
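That begging for 9s and 10s, by the way, falls straight out of the standard NPS arithmetic: only 9s and 10s count as promoters, 0 through 6 count as detractors, and the score is the percentage-point gap between the two. Here’s a minimal sketch (with purely hypothetical response data) of how a single nudged answer moves the number without moving the experience at all:

```python
def nps(scores):
    # Standard NPS arithmetic: % promoters (9-10) minus % detractors (0-6).
    # Passives (7-8) count in the denominator but not the numerator.
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# Hypothetical responses: the second list is the same Customers, except
# one agent talked a would-be 8 into a 9.
honest = [10, 9, 8, 8, 7, 6, 4]
nudged = [10, 9, 9, 8, 7, 6, 4]

print(nps(honest))  # 0.0
print(nps(nudged))  # ~14.3: the score moved; the experience didn't
```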
Similarly, CSAT, NPS, Customer Effort Score, and yes, even Brand Alignment Score are all too vague. I’ve written before about Score Data and Amplification Data, and if nothing else, that concept emphasizes that no top-level KPI will ever give you any information about what to do about your CX…i.e., how to improve it. It’s a good sign that an organization is simply paying lip service to “doing CX” if it doesn’t look any further than its top-level KPIs. If the purpose of “doing CX” isn’t centered on making your CX better, it won’t matter which metric you use, because the key to acting on what you learn is understanding more deeply why and how you’re falling short, not simply that you are (then, of course, there’s the huge matter of doing something about it). So regardless of what KPI you’re using, if that’s all you’re using, you’re missing the biggest part of the picture.
The final entry in the universal collection of top-level-metric failures is that they’re all imprecise. Negative self-selection bias (only the Customers who’re upset bother answering your surveys), errors in sampling, selectively choosing who gets surveyed (sometimes unintentionally, sometimes deliberately), and the overall reality that survey results aren’t really as “quantitative” as you think they are (after all, it’s just a collection of people’s opinions, and one person’s 7 may be another’s 8 or 9) will all skew your results and sometimes make interpreting them pretty much impossible…or at least risky.
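To make that last point concrete, here’s a toy sketch (entirely synthetic data; the “tough grader” offset is a hypothetical illustration, not a measured effect) of how the same underlying sentiment, read through two different habits of scale use, produces two very different scores:

```python
import random

def nps(scores):
    # % promoters (9-10) minus % detractors (0-6), as in the sketch above.
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

random.seed(1)

# Synthetic 'true' sentiment for 1,000 Customers on a 0-10 continuum.
sentiment = [random.gauss(8, 1.5) for _ in range(1000)]

def clamp(x):
    # Round to the nearest whole answer and keep it on the 0-10 scale.
    return max(0, min(10, round(x)))

# Identical sentiment, two hypothetical grading habits:
# tough graders shave a point off what they actually feel.
generous_graders = [clamp(s) for s in sentiment]
tough_graders    = [clamp(s - 1) for s in sentiment]

print(nps(generous_graders))  # one number...
print(nps(tough_graders))     # ...and a very different one, same Customers
```

Neither run is “wrong”; they simply encode different interpretations of the scale, which is exactly why treating these scores as hard quantitative truth is risky.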
These universal drawbacks of (let’s be honest) every single possible top-level CX metric you could choose cannot be addressed by shuffling from one to another. There’s no new metric that’ll address all, or frankly any, of these issues. Truth be known, there’s really no way to fully and successfully account for them; the only hope of overcoming them is attitudinal, not operational. Coming at your VoC program with curiosity, wanting to know not just where you are but how to get better, will open up new ways of asking questions, and even of gathering insights from other sources altogether.
But what about NPS? It sucks, right?
Well, yes. Aside from all the universal failings above (which, I’ll reiterate, Brand Alignment Score won’t fix either), it has its own fatal flaw: it makes no sense.
Ask yourself what it means to ask your Customer whether he or she would recommend you to friends, colleagues, and peers. What does that have to do with anything? I know, I know…the whole purpose of the likelihood-to-recommend formulation is that it’s an improvement on the “Hey, are you satisfied?” question. Who wants merely satisfied Customers? We want them to be so enthusiastically loyal/supportive/engaged that they’ll go so far as to act as unofficial brand advocates, recommending us to people they know. I get it, I get it. But this is what happens when marketing takes over CX.
If your marketing strategy entails word-of-mouth, grassroots reputational outreach, I can see the value from that perspective, but what does that have to do with your Customers? Your CX program’s purpose should be to improve your Customers’ experiences, not your brand’s reach. You know what’s good for your reputation? A good Customer Experience. So how about we not jump over those important steps in between, and instead measure our CX efforts for what they’re actually for: driving improved Brand Alignment.
Which is to say: if the strategy of “doing CX” in the first place is to reduce and remove the gaps between what you say you’re all about (your Brand Promise) and what your Customers experience when they interact with your brand (pretty fundamental to operating a truly values-based company!), why not ask your Customers, when you’re asking what they think of you, how you’re delivering on that? Why should your Customer care how you market? And why should you be making them do the hard work of getting your brand out there to other people?
Also consider: NPS is simply lazy. It’s become a standard, so the thinking goes: if everybody else is doing it, why shouldn’t we? After all, how can so many Elvis fans be wrong? “We have to do what the industry is doing” makes sense if you’re not looking to differentiate yourself. And if NPS makes sense for you, go for it (by the way, NPS is potentially still a pretty darned good marketing metric insofar as it indicates how well word-of-mouth may be performing for you…but that’s not the purpose of CX). Your brand is different from your competitors’, though, so why not treat it that way?
The bottom line is that CSAT and NPS won’t really get you anywhere in determining how well your overall CX strategy is performing, because they’re not tied to your strategic CX purpose in the first place. CSAT isn’t, because “making our Customers satisfied” is pretty weak sauce as a CX charter, and NPS is…well, what the hell is NPS anyway, from a CX perspective? I’m a fan of Matt Dixon and his Customer Effort Score; in fact, CES is what got me thinking about a more strategically aligned metric way back when. It may be perfect for you if ease of use is what you’re promising your Customers as a brand. But otherwise, you’ve got to make what you’re asking (and measuring) make sense within your strategic plan. If not, you’re just wasting your (and your Customers’) time.