I recently fielded a question from someone about moving NPS ratings from 8s to 9s. I asked why that was important. It was noticeable that the question was posed in terms of numerical scores rather than in terms of moving Passives to Promoters, so I was curious. Digging a little further, I found that the source of the question was a desire among the associates to have their individual interactions move from 8s to 9s. Peeling back the onion, I discovered that they were seeking this because their bonuses were based on their own individual NPS ratings. Ah, paydirt, in more than one sense of the word.
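For readers unfamiliar with the mechanics, the standard NPS formula shows why an 8 and a 9 are categorically different: Promoters score 9–10, Passives 7–8, Detractors 0–6, and the score is the percentage of Promoters minus the percentage of Detractors. A minimal sketch (the sample scores are made up for illustration):

```python
def nps(scores):
    """Net Promoter Score: % Promoters (9-10) minus % Detractors (0-6).
    Passives (7-8) count in the denominator but in neither group."""
    if not scores:
        raise ValueError("need at least one response")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# Ten hypothetical responses: moving one Passive (8) up to a Promoter (9)
# lifts the score by ten points...
print(nps([9, 9, 8, 8, 7, 6, 10, 9, 8, 5]))  # 20.0
print(nps([9, 9, 9, 8, 7, 6, 10, 9, 8, 5]))  # 30.0
# ...while moving a 7 to an 8 changes nothing: both are Passives.
```

This is exactly why the associates fixated on 8-to-9: only crossing the Promoter threshold moves the number their bonuses were tied to.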
This is yet another case of what I like to say: universal truths are pretty universal. Predicating your employees' raises (or any other sort of incentive) on a nominal "CX metric" doesn't obviate the nearly iron-clad applicability of Goodhart's Law. That little tidbit posits that once a metric becomes a target, it ceases to be a good measure. The general concept is that, people being people, motivation causes them to aim for a target and sometimes tunnel-vision their quest for the prize, whether that prize is a simple pat on the back, official recognition, or a monetary incentive…what I call 'kibble in the bowl.'
Now, we should also apply Hanlon's Razor, or at least a corollary of it. That principle says one mustn't attribute to malice what can be adequately explained by stupidity. No, I'm not calling your frontline agents or associates stupid. I mention it only as a warning not to dive off the deep end and presume that people will dishonestly or unfairly game a system in order to get better results on their Customer surveys. The motivations aren't even the point in the first place: put down a marker, tell folks to hit that goal, and they'll go after it. But sometimes the results won't be the ones you intended.
Here’s an example (not involving NPS): I once worked with a group that was looking to shorten Average Handle Time (AHT) in their call center. I don’t remember the motivation (that, by the way, is the most important part; better understanding why this mattered could probably have prevented the issue I’m about to describe), but they went about it by posting a tote-board showing everybody’s AHT in near-real-time for the entire call center to see. If you took longer on a call, your average went up, and vice versa…right up on the wall for all to see. Management felt it would drive a “healthy” competitive environment (it should be noted that the morale and teamwork of the organization had traditionally been very positive, and although it suffered during the short time this was done, it rebounded quickly after the practice ended). The result? In order to drive down AHT, some individual associates (though not all of them) were abruptly and prematurely ending calls before resolution. People weren’t intentionally disserving their Customers, and generally they weren’t rude about cutting them off; I was told it was easy to blame a Customer’s getting cut off mid-call on “technical issues we’ve been having lately.” Yes, the “technical issue” was that their metrics and incentives were wrong! Needless to say, the CX within the group took a nosedive. Now imagine if the entire purpose of driving down AHT had been an effort to improve CSAT or NPS! It would have had the complete opposite effect.
Another example? Sure. This one involves NPS more directly, since NPS was the purpose of this particular violation of Goodhart’s Law in the first place. A different organization was trying to do the same sort of thing as the group at the top of this article: drive up NPS. So they started broadcasting NPS far and wide, and it became part of the incentive structure at all levels of the organization. That’s a great idea, right? It gets everybody on board, and that’s where you get the buy-in, no? Well, yes and no. The problem is that, since the organization’s leadership didn’t fully understand NPS and the NPS process, they treated it like any other quantitative metric: cost, time, quality, etc. So what do you suppose happened? The folks in the contact center, much like in the other example, went after it with gusto. But they treated it as if physics had something to do with it, as if they could just turn a knob and it would produce 9s and 10s. Some went about it in a ham-fisted way, simply asking Customers to rate them high (ever been to a car dealership?). Some less scrupulous team members found ways to close out cases that hadn’t gone well in the CRM with certain flags that meant those Customers wouldn’t be surveyed at all! But the most curious thing I noticed was that even the culture changed. Before the push for NPS and the enthusiasm around it, the organization was thirsty for feedback from its Customers. In fact, it’s fair to say that the most valuable feedback, negative feedback, was prized and sought out. Once people realized they were being measured and rated (or even judged!) on an NPS score, everything changed. The organization’s survey response rate held steady, but the raw number of surveys went way down, calling into question the validity of the score itself.
More importantly—and sadly—the discussions about satisfying the Customers changed into either accusations or celebrations about where the number was this month compared to last. It was a really sad thing to witness.
Keep in mind, this wasn’t necessarily dishonest manipulation (remember Hanlon); rather, a blind drive toward a goal, without understanding what was actually being measured, drove these organizations to results they didn’t want and hadn’t intended.
The moral of these stories? Make sure your folks understand why you’ve chosen the quantitative measures you use, so they understand what those numbers represent in terms of your overall strategy. You’d rather have their day-to-day work focused on your mission and vision than heads-down, blindly chasing a numeric goal.