With all due deference to Matt Dixon, sometimes “effort” is a tricky thing to define. I worked with one team that, it seemed, went around and around about it constantly. Matt’s Customer Effort Score (CES) metric basically asks the Customer to rate his or her satisfaction with the amount of effort expended to solve an issue or otherwise accomplish something.
Now, right away you can see the question begged: How do we even know the issue has been solved in the first place? This, of course, goes to an age-old conundrum of how we can ensure a Customer’s issue has been solved before we send out a survey for feedback, regardless of the survey type. After all, it’s insult added to injury if we ask, “hey, how’d we do?” while the Customer is still waiting for a solution. But let’s put that issue aside for now as it’s a common concern (NPS, C-SAT, and all the others have the same limitation).
Customer Effort Score also highlights another universal limitation of surveys: Customers have their own definition of terms, regardless of how you want to define them (see here, for example, a bit about resolution itself). There are a couple different ways to ask for CES, but for now, let’s say it’s: “From 0 to 10, how much effort did it take to solve your problem?”
You can see from the get-go how inexact this can be:
- What’s a “0”? What’s a “10”? Come to think of it, wouldn’t “0” mean that the problem hadn’t happened in the first place?
- How much is enough “effort” to move from, say, a 4 to a 5? (If we use a 1 to 5 scale, the problem doesn’t go away, it just takes a different form.)
- How am I, as a Customer, to even know what level of effort is a lot and what’s not much? I’m no expert in your product; after all, I just plugged it in and it didn’t come on.
- And which part do you mean by ‘effort’?
- I spent 45 minutes sleuthing around for your support phone number online,
- Waited on hold for another 30 minutes (after navigating your atrocious IVR…when will you ever realize that those voice-activated ones, as cool as you think they might be, are horrible?),
- But when I finally got a representative on the line, he knew exactly what my issue was and knocked my problem out in no time flat…two easy steps for him!
Was that low effort? I don’t know; the agent surely shouldn’t have his score diminished because your company’s system was so bad it made him that hard to reach (and your product failed right out of the box in the first place).
(No, I’m not speaking of a specific case.)
Anyway, all this is to say that no survey system is perfect, and what we really need, beyond the top-line CX KPI, is underlying Experience Understanding. Let’s call that “XU”. But how do you get the XU?
One useful way always makes me think of Seinfeld. (But then, what doesn’t make me think of Seinfeld?)
There’s an episode where Kramer’s phone line (remember those?) is crossed with “Movie Phone,” and people looking for showtimes get him instead of that automated system. Being Kramer, he plays the part, taking the calls (when Jerry inquires incredulously why he’d do such a thing, his response is a shrug and a mirthful “I’ve got time.”) and putting on an affected voice to ask callers, “Using your touch-tone keypad, please enter the first three letters of the movie title, now.” Of course, the resulting series of beeps means nothing to Kramer, and after a couple of incorrect random guesses, he’s reduced to pleading, “Why don’t you just tell me the name of the movie you’ve selected.” (Hey, come to think of it, was he the original voice-recognition IVR? It hasn’t gotten any better since!)
Anyway, that’s my usual long way around to saying: Why don’t you just ask your Customers what was such a hassle for them?
This is what I call Amplification Data (a topic for another article altogether, coming soon). You can ask further questions that you’d presume would be related: Did you consider the hold time to be excessive?* Did you have to make multiple calls? Was it easy, in your opinion, to find the right place to look for a solution? The list could go on. These also have the benefit of being easily quantifiable: request the response on a scale from 0 to 10 or 1 to 5; perhaps it’s a yes/no question; perhaps it’s a counting question (How many times did you have to call?). Tie these (through correlation analysis) to the overall KPI (CES, for example) to identify the reasons Customers felt dealing with you was a hassle or a pleasure.
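To make “tie these through correlation analysis” concrete, here’s a minimal sketch in Python with pandas, assuming one row per completed survey; the column names (ces, hold_excessive, call_count, easy_to_find) are hypothetical stand-ins for whatever your amplification questions actually are:

```python
# A minimal sketch of tying Amplification Data to the top-line KPI.
# All column names here are hypothetical; substitute whatever your
# survey tool actually exports.
import pandas as pd

# One row per completed survey.
responses = pd.DataFrame({
    "ces":            [2, 7, 3, 9, 5, 8],   # 0-10, "how much effort did it take?"
    "hold_excessive": [0, 1, 0, 1, 1, 1],   # yes/no answer recorded as 0/1
    "call_count":     [1, 3, 1, 4, 2, 3],   # "How many times did you have to call?"
    "easy_to_find":   [9, 3, 8, 2, 6, 4],   # 0-10, "was it easy to find a solution?"
})

# Correlate each amplification question against CES.
# Spearman is a reasonable default for ordinal survey scales.
correlations = responses.corr(method="spearman")["ces"].drop("ces")
print(correlations.sort_values(ascending=False))
```

The amplification questions that correlate most strongly with a high-effort CES response are your first candidates for root-cause work.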
The other option for Amplification Data is, a la Kramer, to simply ask: What made it a hassle for you? What could we have done, besides avoiding the issue altogether, to have made this process easier for you today? (Or… try it without that caveat: ‘What could we improve to have avoided your need to contact us today?’ Your Product team will thank you for that one!)
All these bits of Amplification Data can serve to illuminate your overall top-level Customer Effort Score question. It’s great to know that your CES is getting better (bad to know it’s not). It’s even better to know what’s ticking your Customers off so you can go fix it!
*This allows me to highlight something I learned from a CX guru: In this instance, never ask, “How much time did you spend on hold?” That’s a foul for two reasons: First, the Customer won’t give you an accurate response. They’re not lying; they just don’t know. And second, it violates a Cardinal Rule of surveying: NEVER ask a Customer a question you should already have the answer to. This goes for model number, membership number, and so on. Likewise, here, you’ve got a (likely very expensive) telephony system. You’ve got a ticket/case number. You’ve got a specific link for a survey request. Don’t make your Customer do data gathering on your behalf. Tie the metrics from the call to the response from the survey. QED. </soapbox>
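On that last point (tying the metrics from the call to the survey response), here’s a sketch of the same idea, again assuming hypothetical exports from the telephony/ticketing system and the survey tool, joined on the case number you already have:

```python
# A sketch of "don't make the Customer do data gathering on your behalf":
# join what the telephony/ticketing system already measured to the survey
# response. File and column names are hypothetical stand-ins for whatever
# your systems export.
import pandas as pd

calls = pd.read_csv("call_metrics.csv")        # case_id, hold_seconds, transfers
surveys = pd.read_csv("survey_responses.csv")  # case_id, ces, hold_excessive

# One row per surveyed case, with the measured hold time sitting next to
# the Customer's perception of it.
joined = surveys.merge(calls, on="case_id", how="left")

# For example: how long does a hold have to be before Customers call it excessive?
print(joined.groupby("hold_excessive")["hold_seconds"].median())
```

Now the measured number and the perceived number live in the same row, and nobody had to ask the Customer to do the measuring.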