The late actor (who also did some other things) Ronald Reagan had a saying: “If you’re explaining, you’re losing.” When it comes to popular politics, what he meant was that it’s important to keep your arguments simple: The more succinct your point, the more likely (and the more quickly) people are to agree with you. If your conclusion follows as simply as, ‘Well, here’s the fact, therefore, here’s the conclusion,’ you’re likely to win lots of people over to your way of thinking on a topic. There’s a corollary: If you can’t explain it simply, you don’t know it thoroughly enough. Conversely, the thought goes, if you have to walk through a complicated chain of logic to make your point, people may think you’re lying, or that you don’t really believe it yourself, or find any of myriad other reasons not to share your perspective. Keep it simple, stupid, is another way of saying it, at least in practice.
I’d like to suggest that, when it comes to CX, we apply a similar philosophy. And it’s most easily demonstrated in your Customer Service/Support/Care/Success organizations. Are the team members you put in Customer-facing positions in these organizations explaining your policies? Or are they solving your Customers’ issues? […]
If you read much of what I write, you’re aware that one of the best sources of Customer insights, and the one I most frequently recommend, is Walking in the Customer’s Shoes. It’s one of the best ways, if not the best, to understand what your Customers go through when they interact with your brand and your processes. It can be tricky sometimes to get out of your own head (you understand, from the back end, how your systems work, after all, so that perspective is hard to shed completely), but getting out there and experiencing things the same way your Customers do is invaluable.
It occurred to me the other day how important it can be to view things through the eyes of your competitors’ Customers as well. Check it out: […]
As a math and stats professor, I assign a lot of homework (my cadets can attest to that), but that homework is distinct from the exams I administer. That’s because the purpose of each is different.
When we give homework, the purpose is practice and learning; getting better at stuff, which is often pretty tremendous to watch from the teaching perspective, because in a lot of the topics I teach, that means going from zero knowledge to proficiency in a short amount of time. Homework, and the errors made in the process of completing it, is part of the learning process: We fail at homework so we learn how to recover and do things right. Nobody’s expected to get the homework perfect, certainly not on the first try…that’s why students take it home and work on it, bringing it back when they’ve completed it properly (oftentimes with the aid of a correct answer in the back of the book to check their work).
Conversely, when we administer exams, those are an assessment of mastery of the subject material. By the time students get to the exam, they should be familiar enough with the concepts and how to do what they’re learning to demonstrate proficiency. In fact, the homework plays a huge role in obtaining that knowledge in the first place (try, try again, and all that). Here’s the thing that this all leads up to: The last thing a cadet wants to gain from a test is knowledge…the time for that has passed, and he or she needs to have it by then! […]
At one point a while back in my career, I spent most of my days teaching Lean Six Sigma around the company for which I was working at the time. From a (much more seasoned) colleague I learned a witty thing to say at the end of each session: “If you have negative feedback, please share it with me; if you have positive feedback, please share it with my boss.” It was a clever turn of phrase that made people think a little and, at the end of the day, sent them off with a smile. But it really was a great encapsulation of what I learned is so important about feedback: I need to know where I’m falling short so I know where I can improve, and it never hurts for your boss to hear praise of your work.
Fast forward now to my life as a Fractional Chief Customer Officer and CX consultant (and constant pest when it comes to preaching about the value of feedback), and it occurs to me: Maybe it’s okay sometimes to ask for a positive review. […]
I’ll often describe CX in terms of an analogy to other operations within your company. If it’s to make an impact, it should have a charter that includes both the responsibility for driving changes and the moving parts needed to do so. Otherwise, there’s no point in having a CX department in the first place.
The conversation usually comes up as a side note to the dreaded ROI of CX discussion. Let me pull on that thread (picking on HR this time around, but it’s probably applicable to other departments as well).
My argument goes that HR doesn’t have to defend its own existence. Sure, it has a budget (sorry, we can’t afford to send the whole team to that career fair in the Bahamas!), but it’s never faced with the existential question of, Why should we “do HR?” […]
Anybody who’s read much of what I have to say about Customer insights and Voice of the Customer (VoC) knows I’m quite critical of legacy KPIs like CSAT and NPS. Part of that, naturally, is my inherent wiseass contrarian nature, sure. That said, if I didn’t really believe that there’s a better way to do things, I’d not have invented a new top-level CX metric of my own (the Brand Alignment Score).
Sure, it’s easy to beat up on “the way we’ve always done it” and call it a day, congratulating oneself on being so much better than the rest. But it’s also important to understand the limitations not only of our own cleverness, but the principles underpinning our own arguments. For example, it’s important to understand (and admit) that even the metric I made up doesn’t deliver a panacea.
So let’s talk about how metrics can be bad, in two categories:
Universal—these are properties of every top-level metric; no matter which one you use, there’ll always be problems.
Specific—these are the reasons a particular metric is bad and, ideally, why another one is better.