Recently I fielded a question about NPS survey data: How do you process it? How do you use it? I popped off a quick response, but I’d like to go a little deeper on the topic here.
First of all, this isn't specific to NPS. It doesn't matter what sort of survey you send out or (exactly) what sort of question(s) you ask. I'm also limiting this article to a discussion about survey data per se. There are tons of other sources of data that should be part of your broader Voice of the Customer (VoC) program, but this is just about the data that comes from your surveys. From those surveys, at the end of the day, you'll likely be faced with two different types of data, each with its own purpose: Score data and Amplification data.
Score data, as the name implies, is basically a KPI-type number. Even if your top-line CX survey question is more open-ended (i.e., doesn't necessarily lend itself to straightforward quantitative analysis), it will ultimately be distilled into a score. It's what you and your organization use to signify how things are going. Keep in mind that this metric, even though it's the ultimate CX metric, isn't necessarily going to be interesting to anybody besides you as the CX practitioner, unless you make it compelling (more on that here). But at the end of the day, this Score data is what will usually end up on the slides you show or the dashboards you contribute to.
Score data will be tabulated and/or calculated to show a top-line number. With NPS as an example, you'll take the percentage of Promoters (those who answer 9 or 10 on the 0-to-10 likelihood-to-recommend question) and subtract the percentage of Detractors (those who respond in the 0 through 6 range). The difference becomes your Net Promoter Score. I'll pause here to allow statisticians to gasp in horror. I know, I'm with you.
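If it helps to see the arithmetic concretely, here's a minimal sketch in Python, assuming your responses are just a list of 0-to-10 integers:

```python
def nps(scores):
    """Net Promoter Score from 0-to-10 likelihood-to-recommend responses."""
    promoters = sum(1 for s in scores if s >= 9)   # 9s and 10s
    detractors = sum(1 for s in scores if s <= 6)  # 0 through 6
    return 100 * (promoters - detractors) / len(scores)

# 5 Promoters, 3 Passives, 2 Detractors out of 10 responses
print(nps([10, 9, 9, 10, 9, 8, 7, 8, 3, 6]))  # 30.0
```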
Anyway, the Score data needn't come in that form; it could simply be the proportion of people who rate you at (or above) a certain level on whatever you want (how satisfied they were with how long it took, with your agent's efficiency, with the level of effort they needed to accomplish what they were trying to do, with their experience overall, etc.), based on what's important to you…ideally, what you've discerned is important to them. But at the end of the day, Score data is simply the top-line CX number you're setting as your goal. It's important in its own way: it gives you a good North Star to aim for and lets you track your progress. There are ways in which Score data can actually become a distraction from improvement, but that's a topic for a different time.
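That simpler "top-box" style of score is even easier to compute. A sketch, assuming a 1-to-5 satisfaction scale and counting everyone who rated you a 4 or above:

```python
def top_box_score(ratings, threshold=4):
    """Percentage of respondents rating at or above the threshold (1-to-5 scale assumed)."""
    return 100 * sum(1 for r in ratings if r >= threshold) / len(ratings)

print(top_box_score([5, 4, 3, 5, 2, 4, 4, 1]))  # 62.5 -> 62.5% rated a 4 or 5
```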
The other type of data you’re ideally also collecting via your survey is Amplification data. The purpose of Amplification data is, as you’d expect, to provide further insights into why your Score data is where it is, and ultimately how to move it into better territory. I’ll break down Amplification data into two further subsets: Attribute data and Verbatims.
Attribute data is pretty similar to the Score data in that it's mostly quantitative and a straightforward result of some simple number-crunching. Below the top-level question ("How likely are you to recommend us?" for the NPS survey, for example), you may ask for some more specific, perhaps transactional impressions from your Customers: How satisfied are you with how long it took? How pleasant was the agent who helped you? Were you able to understand the agent's instructions? How many times did you have to contact us to solve your issue? Is your issue even solved yet? Notice that not only can the answers to these questions be readily quantified (a scale, yes/no, a cardinal number, etc., as a response), but, depending on what's important to you (i.e., what's important to your Customer), any one of them could actually be elevated to become your Score data, your top-level CX KPI. There's really no reason other than convention that your top-level CX metric needs to be the NPS likelihood-to-recommend response, nor that it need be the overall end-to-end C-SAT question. If, through analysis and investigation, you find that time is the most important thing to your Customers, maybe what would otherwise be Amplification data from the Attribute question about satisfaction with time spent actually becomes your Score metric. (Ostensibly this is what happened with Customer Effort Score, or CES, thanks to the study and work of folks like Matt Dixon and others.)
But simply put, Attribute data is the Amplification data that comes from the other quantifiable questions you ask your Customers on your survey. This data is usually useful for localized (i.e., agent-specific or team-specific) coaching and improvement efforts. You can look to this information for acute insights into what went wrong or right on a specific incident…this one, in the Customer's eyes, went too long, took too much effort, was unnecessarily complicated, etc. Agents can use this information to improve their performance on the fly, and supervisors should use it to coach and encourage exactly that. Likewise, at scale, this Attribute data can quantify and help prioritize what systemic improvements are needed in order to improve your CX (resulting in improved Score data). If you're wondering where to go to improve that Score, segment your lower-scoring respondents (i.e., your Detractors) and look for trends in how they answered these Attribute questions. For example, if a majority of your Detractors are also saying it takes too long, guess what? It's taking too long, and that's likely a source of your poorer scores. This approach allows you to prioritize your internal improvements to get the biggest impact for your effort.
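Here's a minimal sketch of that segment-and-look-for-trends step, assuming each response is a simple record holding the NPS answer plus a couple of Attribute answers (the field names are invented for this example):

```python
from collections import Counter

# Hypothetical survey responses: NPS answer plus two made-up Attribute answers
responses = [
    {"nps": 3,  "took_too_long": True,  "issue_resolved": False},
    {"nps": 9,  "took_too_long": False, "issue_resolved": True},
    {"nps": 5,  "took_too_long": True,  "issue_resolved": True},
    {"nps": 10, "took_too_long": False, "issue_resolved": True},
    {"nps": 2,  "took_too_long": True,  "issue_resolved": False},
]

# Segment the Detractors (0 through 6), then count their Attribute complaints
detractors = [r for r in responses if r["nps"] <= 6]
trends = Counter()
for r in detractors:
    if r["took_too_long"]:
        trends["took too long"] += 1
    if not r["issue_resolved"]:
        trends["issue not resolved"] += 1

print(trends.most_common())  # [('took too long', 3), ('issue not resolved', 2)]
```

The most common complaint among your Detractors is usually a good first candidate for a systemic fix.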
The other type of Amplification data is Verbatims, or the open-ended responses your survey may (and should!) offer to your Customers. These are questions that invite free-form text responses such as, 'What could we do to improve your experience?', 'Why did you respond [to an Attribute or Score question] the way you did?', or simply, 'Tell us about your experience…' These responses, as you can imagine, are much harder to analyze because they aren't simply radio-buttons on a survey or yes/no binary choices or even quantifiable numbers in an answer box. As clumsy as it may seem, the best way to do an initial analysis of these sorts of responses is to categorize and classify them from the Customer's perspective and count them up, much as you do with the quantitative Attribute responses.
Now, I add the emphasis on Customer in that previous sentence deliberately: I've seen way too many instances of (and have been pulled into far too many pointless arguments over) classifying these comments based on internal processes and systems rather than creating categories based on what the Customers' experiences are. The former is usually motivated by not much more than blame-placing: people want to know whose neck to wring, so they 'assign' the verbatims to teams based on who should 'own' this or that problem. That gets it exactly backwards, which is a topic for a whole 'nother (as we say in Colorado) post. But beyond that, the goal is to leverage these Verbatims ultimately in a similar way as you do the Attribute data: quantitatively. This doesn't mean that you should just classify them and then throw out the words…keep those to help add color to and further amplify what's going wrong (or right!). This is very powerful, specific information that will often serve you better than the quantitative information you get elsewhere.
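A minimal sketch of that categorize-and-count approach, assuming the verbatims have already been hand-tagged with Customer-perspective categories (the comments and tags here are invented for illustration):

```python
from collections import Counter

# Hypothetical verbatims, each hand-tagged from the Customer's perspective
verbatims = [
    ("Took three calls to get an answer.", "too much effort"),
    ("The agent was great, but the wait was awful.", "took too long"),
    ("I still don't know if my issue is actually fixed.", "unclear resolution"),
    ("Why do I have to repeat my account number every time?", "too much effort"),
]

# Quantify the categories, just like Attribute data...
counts = Counter(tag for _, tag in verbatims)
print(counts.most_common())  # [('too much effort', 2), ...]

# ...but keep the words to add color when reporting on a category
for text, tag in verbatims:
    if tag == "too much effort":
        print(f'- "{text}"')
```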
In the end, Score data tells you overall 'where we are' and serves as your top-line KPI; it also helps you focus on which Customers to look to (your Detractors, for example). Amplification data, meanwhile, tells you what you need to improve (or keep doing!) in order to improve your Customers' experiences, and that improvement is what you'll eventually see in the Score.