I just got off the phone with a colleague and we were having a conversation about data. He brought up some good questions and points and so I figured I’d jot a few of them down here before they went right out of my head again.
The gist of the discussion was the difference between quantitative and qualitative data. He was having some trouble explaining the difference to his boss. One line of questioning I opened with was, “Why is this a controversy? Why the heck does this even matter? How did this become a topic of conversation?” It’s always useful to get to the bottom of why someone is asking a question so as to better address it in the first place. Plus, this seems more like an academic inquiry than something that’ll actually make a huge difference in what they’re doing.
It is, however, pertinent in a discussion like this, because I’ve often used the framing that quantitative data can tell us where to look, while qualitative data can tell us what to do once we get there. More on that in a moment.
First, though, let’s acknowledge that technically speaking, just about any information can be made quantitative. And no, I’m not obligated to say that just because I’m an analyst at heart and a statistics professor. Think about it: even if it’s simply counting the number of times (quantitative) Customers use a four-letter curse word in their survey text box (clearly that’s qualitative), you can turn anything into a dot on a graph. So specificity is usually the name of the game here: “Customers are pissed off” can often be translated for better impact into, “84% of those responding last week mentioned this as a pain point.” That can help to focus the mind and the organization…people are more prone to sit up and take notice (and give you the leverage you need in order to take action) if you can put numbers to it.
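To make that concrete, here’s a minimal sketch of quantifying qualitative text: counting how many free-text survey responses mention a theme and turning it into a percentage. The responses and the keyword are invented for illustration, not from any real survey.

```python
# Invented free-text survey responses (the qualitative raw material).
responses = [
    "The checkout process is a pain, took forever",
    "Love the new design!",
    "Shipping was a pain point again this month",
    "Support never called me back -- what a pain",
    "All good, thanks",
]

# Count responses mentioning the (hypothetical) theme keyword...
mentions = sum(1 for r in responses if "pain" in r.lower())

# ...and express it as the kind of number leaders sit up for.
pct = 100 * mentions / len(responses)
print(f"{mentions} of {len(responses)} responses ({pct:.0f}%) flagged this as a pain point")
```

A real analysis would use something sturdier than a single keyword match (synonyms, tagging, text classification), but the principle is the same: the words are qualitative, the count of them is quantitative.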
But when it comes to that specificity, we can fool ourselves, too. What’s especially surprising to me is how often leaders mistake the quantification of subjective assessments for some sort of iron-clad actionable intelligence. That’s really the fatal flaw of even the most well-regarded top-line CX KPIs. The Net Promoter Score, for all its fame, is still built on individual Customers’ caprice about the difference between an 8 and a 9 on that scale. And if it doesn’t cause flashbacks to college, let’s keep in mind that those same professors who “don’t give As” probably also never give 10s. The point being, there’s actually less, not more, to that score than meets the eye. That’s mostly a topic for another time, and it shouldn’t be interpreted to suggest there’s no value in those higher-level numbers. But it’s the worship of them that inevitably leads to chasing a number rather than doing something to make the world a better place (yet another topic, for another time).
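The 8-versus-9 point is easy to see in the standard NPS arithmetic: 9–10 counts as a promoter, 7–8 as a passive, 0–6 as a detractor, and the score is the percentage of promoters minus the percentage of detractors. The ratings below are made up, but the banding is the standard definition.

```python
def nps(scores):
    """Standard Net Promoter Score: %promoters (9-10) minus %detractors (0-6)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# Ten invented 0-10 ratings from a hypothetical survey.
ratings = [9, 8, 8, 10, 6, 9, 8, 7, 10, 5]
print(nps(ratings))
```

With these ten responses, a single respondent deciding an experience was a 9 instead of an 8 swings the score by a full 10 points, which is exactly the caprice the paragraph above is pointing at.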
But that’s also why it’s important, before taking action too far based strictly on your quantitative data, to open up the free-text and verbatim transcriptions to better understand what was really happening. That’s not as easy when you’re briefing the leadership team, but some anecdotal verbatims are helpful in adding color to the story, and even better when used to help defend decisions you’ve made about what to do with the data. It’s a two-part solution: on the one hand, here’s the KPI that says things are going pear-shaped; on the other hand, here’s some amplification and insight into what’s behind that and what can be done about it.
And that’s where the difference really comes into play. You should use the quantitative data to offer high-level, broad insight into where things are going wrong. Things that can be counted up easily like that offer you a map of where to look. In a lot of organizations, this is used simply to assign blame (or, in a nicer world, responsibility). But once the finger is pointed, someone’s got to do something…after all, problems don’t solve themselves simply because they’ve been identified and assigned. For that matter, just because they’re located doesn’t even mean they’ve been identified in the first place.
That’s why I say that the quantification can lead you to where an issue exists. Some call it blame, but the enterprising among us see it as an opportunity. Once it’s been assigned, use those qualitative measures to elaborate on what went wrong and how it could have worked better. At the risk of seeming unscientific, I’d not even worry at that point about the prevalence or frequency of specific issues: take what your Customers are saying to heart and address what you can.
In the end, the difference between quantitative and qualitative data isn’t nearly as important as how you leverage both. It takes both kinds to improve your Customers’ experiences, and getting too bogged down in the definitions of them is surely a distraction. Count up how frequently broadly-defined things happen. Then, as you narrow down your search for solutions, pull out the words people are using to describe the problem and use that to assist you in ascertaining what went wrong and what to do about it.