Why NPS Isn't a Useful Metric
Every now and then, I get an email from a brand that reads, “Stephanie, we’d love your feedback!” I open the email and inevitably it contains the following:
“On a scale from 1 to 10, how likely are you to recommend us to a friend or colleague?” (followed by a picture of a 1-10 scale).
This is pointedly not an opportunity to share feedback, but it’s also not random. It’s the Net Promoter Score (NPS), a metric marketers use to measure customer loyalty. Respondents choose a number between one and ten, which maps to one of three categories:
Scores 9-10: “Promoters”
Scores 7-8: “Passives”
Scores 6 and below: “Detractors”
It goes without saying that most companies want to see 9s and 10s; in other words, they want a higher percentage of promoters than passives and detractors. (The final score is simply the percentage of promoters minus the percentage of detractors.) NPS is seen as an attractive metric because it is widely known across industries and it simplifies goal setting: “we want to increase promoters by X and reduce detractors by Y.”
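To make the mechanics concrete, here’s a minimal sketch of the standard NPS calculation in Python (the function name and sample data are my own, purely illustrative):

```python
def nps(scores):
    """Compute a Net Promoter Score from a list of 0-10 survey responses.

    Promoters (9-10) count for the score, detractors (0-6) count against it,
    and passives (7-8) only dilute the denominator.
    """
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# Two very different customer bases can produce the same score:
print(nps([9, 9, 9, 6, 6, 6]))   # 3 promoters, 3 detractors -> 0
print(nps([7, 8, 7, 8, 7, 8]))   # all passives -> 0
```

Notice that a polarized audience and an entirely lukewarm one can land on the exact same number, which is part of why the score alone tells you so little.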
But NPS doesn’t tell businesses what they really need to know, such as:
Why did a customer choose us over a competitor?
Why did respondents give us the score they did?
How do customers/users like the product experience? What about the customer experience?
How many repeat customers do we have? Why do they keep coming back?
Have customers recommended us to others already? How?
The last question is of particular interest because humans are bad at accurately predicting their own future behavior. User experience practitioners already know this: what people say they do and what they actually do are two different things. Asking someone what they’re likely to eat tomorrow is less useful than asking what they ate that morning or what they’re eating now. In other words, NPS at best can only tell you “this person is probably a promoter, but we don’t know that for certain.” The same is true of passives and detractors; because NPS asks people not what they did but what they’re likely to do at some indeterminate point in the future, the responses reflect how people happen to be feeling at that moment.
Here are more problems with using NPS as a metric:
Every respondent weighs the scale differently: some people will never score anything a 9 or a 10, meaning an 8 from them is excellent. Others consider a 6 to mean “I really dislike this.” Without anchors explaining what each number on the scale means, you’re likely getting skewed results, because everyone interprets the values differently.
“Passives” are removed from the calculation: Oddly enough, passives are completely discounted from the final score, which is just the percentage of promoters minus the percentage of detractors. This throws away data and skews the result for reasons that are unclear, and no further action is taken to understand why someone would score a 7 or 8 rather than a 6 (detractor) or a 9 (promoter). It raises the question: why have a passives category at all if it isn’t used in the calculation?
People self-select to respond: is the NPS respondent sample representative of your entire customer population? How many people received the email asking for a score, and how many actually responded? Is the sample large enough to draw meaningful conclusions?
NPS doesn’t tell you what the company is doing successfully: how do you know your marketing campaigns are working? How are you acquiring users? What are you doing right?
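On the sample-size question above: even before self-selection bias enters the picture, a small response pool makes the score noisy. Here’s a rough back-of-the-envelope sketch, assuming a simple 95% confidence interval for a proportion (the numbers are illustrative, not from any real survey):

```python
import math

def margin_of_error(n_responses, p=0.5, z=1.96):
    """Approximate 95% margin of error for a proportion estimated from
    n_responses survey replies, using the worst case p = 0.5."""
    return z * math.sqrt(p * (1 - p) / n_responses)

# 40 replies to 2,000 emails: each percentage in the score carries
# roughly +/- 15 points of uncertainty -- and that's before accounting
# for self-selection bias, which no formula corrects for.
print(round(100 * margin_of_error(40)))   # ~15
```

Quadrupling the responses only halves the noise (400 replies still leaves about +/- 5 points), so a few points of NPS movement quarter over quarter may be statistically meaningless.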
Here’s what you should do instead:
Ask for feedback and mean it. I’m sure someone found that the word “feedback” increased the NPS email open rate, but consider what that means: it means people probably want to share feedback but aren’t being given the opportunity to do so. Qualitative feedback can give marketing teams a lot of information about their customers and their needs, some of which can be addressed by marketing efforts. Other feedback can be relayed to product teams and even customer success. So that person who would otherwise be considered a detractor in NPS? You could turn them into a promoter if you give them space to share feedback, send that feedback to the appropriate team, and respond. (If you’re concerned about scalability, design a survey and leave a few free-form fields in addition to multiple-choice questions that roll up to a research question.)
Make it easier for people to do what you want them to. If you want them to recommend your product or service, make it easier for them to share it over social media. Offer referral codes and perks. Plus, you can see how many new customers you’re acquiring via referral codes — that’s much better than asking if someone is likely to refer you.
Look at all incoming referral sources. Referral sources can show you where unique visitors are coming from and how they’re most likely to find you. While “word of mouth” is hard to quantify in web analytics, you can use proxies for it, like social media referrals. It’ll help you identify the channels that are primed for growth.
Ask your existing customers what keeps them coming back. You’ll find a wealth of information here about your biggest fans, and this information could be used to help your messaging. How about someone who canceled their subscription or only ordered something once? Send them a re-engagement email to learn why. What could your business do differently to keep them happy?
Identify the metrics that align with your business goals. What does the business really care about, and which metrics are associated with those goals or outcomes? Report on those metrics instead.
NPS can be tempting because of its pervasiveness in companies large and small, but it does nothing to get you closer to learning about your customers or why they behave the way they do. Thankfully, there are other measurements and tactics that can help you accomplish that goal with more precision and accuracy.
Looking for practical guidance around creating and promoting content? Purchase The Developer's Guide to Content Creation for content-related tips, exercises, and templates.