I wrote a while back about questions raised concerning the wide ranges in top-level NPS or C-SAT scores, even for Customers who may have had the same experience. The point I called out there was mostly an indictment of the use of NPS or C-SAT in the first place. These metrics allow for way too much variation in interpretation, even in how they’re posed (i.e., what does ‘satisfaction’ mean to you as a Customer, and how likely are you to recommend anything to anybody, regardless of your experience?).
To recap, the concern was that (theoretically, although I’m sure it also happens in practice) a Customer may respond ‘0’ to either question—satisfaction or likelihood to recommend—based on the exact same experience that another Customer may rate as a ‘10’. My point at the time was that’s because both satisfaction and likelihood to recommend are in the eye of the beholder, irrespective of an experience. We all enter into any experience with our own prejudices and preferences…and if mine aren’t the same as yours, it’s unlikely we’ll see the experience (and interpret from it our satisfaction or likelihood to recommend) the same way. That’s because, since what you and I value differs, what will end up satisfying us is likely also to differ. Likewise, since you and I have different proclivities to share our experiences with others by virtue of our different personalities, even if we do value the same things and have identical experiences, our likelihood to recommend will not necessarily be the same.
I argued there that the solution was to use something more deliberate and directive: the Brand Alignment Score. Don’t leave so much open for interpretation if you’re trying to make heads or tails of the results. Draw your Customers’ attention to what you’re trying to achieve (“our Brand Promise is to deliver this experience…”), and then ask them if you’re hitting the mark (“…did we or not?”). There’ll still be variation in the responses…we all have our own prejudices, after all…but it won’t be because the question you posed was vague or completely detached from your strategy in the first place.
During a forum in which this topic came up, another shortcoming I noted about these survey questions was that folks were putting too much emphasis on the number, rather than on the insights to be gained from asking your Customers for their feedback. I’ve written many times before about the futility of putting all your VoC eggs in the basket of the top-line KPI.
Aside from the dangers of Goodhart’s Law, looking simply at a KPI doesn’t do you much good: So we’re now at 70. Last week we were at 68. So that’s better, I see…that’s a 2.94% improvement. Um, what does that mean? How much better should we get? What did we do to get there, and how can we get to whatever our goal should be?
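The arithmetic itself is trivial; a quick sketch (the scores here are made up for illustration) shows just how little a bare week-over-week delta actually tells you:

```python
def relative_change(previous: float, current: float) -> float:
    """Percent change between two KPI readings."""
    return (current - previous) / previous * 100

# Last week 68, this week 70: the math is easy...
print(round(relative_change(68, 70), 2))  # 2.94
# ...but nothing in that number says what the target is,
# why the score moved, or what to do next.
```

The computation answers “how much did it change?” and nothing else; the questions that matter (relative to what goal? driven by what?) live outside the number entirely.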
It all just kind of turns into an arithmetical shell game, chasing blindly after a number that, in fact (to get back to the original quandary) may not even represent what our Customers think about us, nor be related to what we’re trying to accomplish in the first place. How do we overcome that? By asking more thoughtful questions.
Sure, start with the Brand Alignment Score instead of NPS or C-SAT. But also, ask probing amplification questions: “Why did you give us that score?” “What would have made your experience better?” “Where, precisely, did we miss the mark?”
Notice the beauty here: by asking these follow-on, free-text questions, you don’t even need to move away from NPS or C-SAT (although you should anyway, and use the Brand Alignment Score instead). Even if you’re still asking those outmoded, non-strategic questions, adding follow-up amplification questions lets you better curate your feedback for action. If, for example, a Customer rates you as a ‘0’ on satisfaction or likelihood to recommend, but answers in the follow-on question that it was because your cost is too high, that offers insights far more robust than the ‘0’ alone. What’s more, if your Brand Promise is luxury or high quality, that feedback probably should carry less weight, because you’re not striving to be the low-cost leader in the market anyway. Sure, you’re not trying to fleece anybody, and if you start to see a lot of responses like this, you may want to investigate how to deliver a luxury experience for less. For that matter, you may come to the realization that there’s not even an addressable market for luxury in this product category in the first place! (I.e., if everybody’s criticizing your price, perhaps nobody really wants that luxury experience enough to pay for it.)

Conversely, if your Brand Promise is value or discount and your Customers are telling you you’re too expensive, that feedback should carry more weight, because low cost is exactly what you’re trying to deliver as your brand. That’s a huge problem.
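That weighting idea can be sketched in a few lines. This is a hypothetical illustration, not a prescribed method: the theme tags, the Brand Promise labels, and the weights are all assumptions you’d replace with your own.

```python
# Illustrative only: how price complaints might be weighted differently
# depending on the Brand Promise. All names and weights are made up.
PRICE_COMPLAINT_WEIGHT = {
    "luxury": 0.5,  # price gripes matter less when you aren't competing on cost
    "value": 2.0,   # price gripes are a direct miss on a value Brand Promise
}

def weighted_issue_counts(responses, brand_promise):
    """Tally feedback themes, up- or down-weighting price complaints."""
    counts = {}
    for r in responses:
        theme = r["theme"]
        if theme == "price":
            weight = PRICE_COMPLAINT_WEIGHT.get(brand_promise, 1.0)
        else:
            weight = 1.0
        counts[theme] = counts.get(theme, 0.0) + weight
    return counts

feedback = [
    {"score": 0, "theme": "price"},
    {"score": 2, "theme": "price"},
    {"score": 1, "theme": "service"},
]
print(weighted_issue_counts(feedback, "value"))
# {'price': 4.0, 'service': 1.0}
```

The same two price complaints would tally to only 1.0 under a "luxury" Brand Promise; the raw scores never change, but what the feedback tells you to do does.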
But if you’re wrapped up in wondering simply how to parse the numbers that your Customers are giving you on your KPI measurement, you may never get that far. So yes, surely you’ll want to check out that top-level KPI. But the really useful meat of your VoC program and your survey results isn’t coming from there…it’s in the amplification questions.