Net Promoter Score (NPS®) is presented as a whole number that can vary between 100 (when everyone is a Promoter) and -100 (when everyone is a Detractor). It comes from subtracting the proportion of Detractors from the proportion of Promoters and multiplying this answer by 100.
It is a survey estimate, a percentage, just like any other. That means it ought to be possible to understand how accurate it is and how much confidence you can have in the number, just like any other survey estimate. Or is it?
It’s not quite like any other estimate – it is the difference between two percentages, so it’s not itself a percentage of anything. Therefore we cannot just treat it as if it were a proportion and stick it into any confidence interval calculator we happen to have lying around.
We can do some statistics work on it by thinking outside the box a little bit. For many people (and I include myself in this), part of the problem with statistics is understanding the formulas, which can look very complex. Even when statisticians try to explain them (at least to me) it is easy to get lost after line two. This is where my own knowledge of statistics, which is ropey at best, doesn’t help anyone. I did see an explanation online that might work for you.
This visualisation of the problem asks you to imagine coloured balls in a bag (where do statisticians get these metaphors from?). Each Blue (Promoter) ball is worth +1, each Red (Detractor) ball is worth -1 and each White (Passive) ball is worth 0. If you take a random sample of balls from the bag and add up the values of the balls you have, you can work out the mean value of the balls, just as if you pulled people from a bag and measured their height or weight. This mean is, in fact, the NPS® as we shall see.
Imagine you pull 40 Blue balls, 30 White balls and 30 Red balls. The mean of what you pulled out would be (40×1)+(30×0)+(30×−1) divided by 100, which is 0.10. Looking at it in NPS® speak, forty percent of the balls were Promoters and thirty percent were Detractors. The NPS® is then 10 (which is 10% × 100).
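To make that arithmetic concrete, here is a minimal Python sketch of the draw (the variable names are my own, nothing standard):

```python
# Hypothetical counts from the draw described above.
promoters, passives, detractors = 40, 30, 30
n = promoters + passives + detractors

# Mean of the +1 / 0 / -1 ball values: this equals the proportion of
# Promoters minus the proportion of Detractors.
mean = (promoters * 1 + passives * 0 + detractors * -1) / n
nps = mean * 100  # NPS of 10
```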
It is not a proportion, it is a number, a discrete random variable, since you cannot pull out half a ball. To work out confidence intervals for discrete random variables, we first need to calculate the variance.
The variance of a discrete random variable is given by the formula:
SUM, over all the values, of (value MINUS mean) SQUARED, MULTIPLIED BY the probability of that value occurring.
And if you wonder why I write it like that, it’s because I can never remember, or work out, the ‘correct’ formula, which is:

Var(X) = Σ (xᵢ − μ)² × pᵢ

We know that the ‘mean’ (μ) is the NPS®. We have assigned ‘values’ (xᵢ) and our best estimate of the ‘probability of occurring’ (pᵢ) is what we actually observe when we draw the balls out of the bag. So we now have everything we need to calculate the variance of our mean, our NPS®:
(1 − NPS®)² × probabilityBlue PLUS (0 − NPS®)² × probabilityWhite PLUS (−1 − NPS®)² × probabilityRed

(1 − 0.1)² × 0.4 + (0 − 0.1)² × 0.3 + (−1 − 0.1)² × 0.3
Which is 0.69 (I promise!).
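A quick check of that arithmetic in Python (nothing here beyond the formula above):

```python
# Observed probabilities of each ball colour, and the NPS as a proportion.
p_blue, p_white, p_red = 0.4, 0.3, 0.3
nps = 0.10

# Variance of a discrete random variable: sum of (value - mean)^2 * probability.
variance = ((1 - nps) ** 2 * p_blue
            + (0 - nps) ** 2 * p_white
            + (-1 - nps) ** 2 * p_red)  # 0.324 + 0.003 + 0.363 = 0.69
```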
Once you have calculated the variance you can then go on to work out the Standard Deviation (by square rooting it) and the Standard Error (by dividing the Standard Deviation by the square root of the sample size). Here the Standard Error is √0.69 ÷ √100 = 0.083.
At last, you can take your standard error and work out your confidence interval.
Many people know that the ninety-five percent confidence interval is 1.96 standard errors either side of the mean, and so I can predict, with ninety-five percent confidence, that the NPS® of this particular bag of balls is 10 +/- 16.
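Putting the variance, Standard Error and interval together in one Python sketch (1.96 is the usual normal-approximation multiplier for ninety-five percent):

```python
import math

variance = 0.69  # from the 40/30/30 draw above
n = 100          # balls drawn

sd = math.sqrt(variance)        # Standard Deviation
se = sd / math.sqrt(n)          # Standard Error, about 0.083

# Half-width of the 95% interval, expressed in NPS points (x100).
half_width = 1.96 * se * 100    # about 16
```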
I’d probably do well to draw out a few more balls to be more sure before I tell my client his NPS® is somewhere between -6 and +32…
If I increase the number of balls I draw to 1000 then it comes as no surprise that the confidence interval shrinks, shrinks in fact to +/-5 (5.1 to be overly precise).
With any calculation of confidence intervals the outcome depends on multiple dimensions. As already mentioned, the confidence intervals get narrower as the sample size increases. They also get wider or narrower as you change how sure you want to be: if I wanted to be ninety-nine percent sure then the relevant confidence interval would be +/-7; if I was happy to be ninety percent sure it would be +/-4.
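The same calculation, wrapped in a small helper so you can vary the sample size and the confidence level. The z-values for ninety, ninety-five and ninety-nine percent are the standard normal ones; the function name is my own invention:

```python
import math

def nps_ci_half_width(p_blue, p_white, p_red, n, z):
    """Half-width of the NPS confidence interval, in NPS points.
    z is the standard-normal multiplier for the confidence level."""
    nps = p_blue - p_red
    variance = ((1 - nps) ** 2 * p_blue
                + (0 - nps) ** 2 * p_white
                + (-1 - nps) ** 2 * p_red)
    return z * math.sqrt(variance / n) * 100

# Same 40/30/30 mix as before, now with 1000 balls drawn.
for label, z in [("90%", 1.645), ("95%", 1.96), ("99%", 2.576)]:
    print(label, round(nps_ci_half_width(0.4, 0.3, 0.3, 1000, z), 1))
```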
The third dimension is of course the value of NPS® itself. When we are dealing with proportions we are well used to seeing general confidence intervals calculated assuming fifty percent as the answer, since this is the “worst case,” and answers closer to zero percent or one-hundred percent are more accurate. Is there a similar rule of thumb with NPS®?
Unfortunately not. The confidence we have in an NPS® does change as it changes, given the same sample size, but not in an easily predictable direction. This is because there are so many ways to get to the same NPS®.
With my bag of balls example you can imagine that the NPS® might increase from 10 to 20. There are lots of ways that this might occur. The number of Blue Promoter balls might increase to 50 whilst the number of Red Detractor balls stays at 30. This is an NPS® of 20. Alternatively the number of Red Detractor balls might decrease to 20 while the Blue Promoter balls stay at 40. In the first case the confidence interval widens (from 5.1 to 5.4) and in the second case it narrows (from 5.1 to 4.6).
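Those two routes to an NPS® of 20 can be checked with a short sketch; the helper is just the earlier formula wrapped in a hypothetical function:

```python
import math

def half_width_95(p_blue, p_white, p_red, n):
    """95% half-width of the NPS confidence interval, in NPS points."""
    nps = p_blue - p_red
    variance = ((1 - nps) ** 2 * p_blue
                + (0 - nps) ** 2 * p_white
                + (-1 - nps) ** 2 * p_red)
    return 1.96 * math.sqrt(variance / n) * 100

# Both mixes give an NPS of 20 on a sample of 1000 balls,
# but they produce different confidence intervals.
more_promoters = half_width_95(0.5, 0.2, 0.3, 1000)    # about 5.4
fewer_detractors = half_width_95(0.4, 0.4, 0.2, 1000)  # about 4.6
```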
In fact it is impossible to read off a confidence interval from any given NPS® and sample size alone; you have to know the Promoter and Detractor numbers to be sure. The variation in the interval for the same NPS® can be large.
A final consideration is that you can also use the same mathematics to work out the answer to the perennial question: “How big should my sample size be?” In NPS® studies you need the answers to three questions. The normal two – “How accurate do you want to be?” and “How sure of that accuracy do you need to be?” – plus “What do you think your NPS® is likely to be, and how will it be made up?”
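As a sketch of that sample-size question: rearranging the Standard Error formula gives n = (z × 100 × √variance ÷ margin)², where the margin is the half-width you want in NPS® points. The function below is my own illustration, assuming a guessed 40/30/30 mix:

```python
import math

def required_sample_size(p_blue, p_white, p_red, margin, z=1.96):
    """Smallest n so the CI half-width (in NPS points) is at most `margin`."""
    nps = p_blue - p_red
    variance = ((1 - nps) ** 2 * p_blue
                + (0 - nps) ** 2 * p_white
                + (-1 - nps) ** 2 * p_red)
    return math.ceil((z * 100 * math.sqrt(variance) / margin) ** 2)

# Guessed mix of 40/30/30 (an NPS of 10), targeting +/-5 at 95% confidence.
n_needed = required_sample_size(0.4, 0.3, 0.3, 5)
```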
NPS® is a registered trademark, and Net Promoter Score and Net Promoter System are service marks, of Bain & Company, Inc., Satmetrix Systems, Inc. and Fred Reichheld.