
Why Good Statistics Go Bad
Let's suppose you're comparing sales by different branches of your company. Sales targets are proportional to the population of each branch's region, and the measure of effectiveness is the percentage that actual sales are of the target (naturally, you'd like the percentage to be above 100%).
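For instance, a branch with a target of 2 million and actual sales of 2.2 million would score 110%; the figures here are invented purely for illustration.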

This year, you find that the branch which exceeded its sales target by the greatest percentage last year is still exceeding its target, but by a smaller percentage. Can you take this decline as evidence that the branch's performance has slipped?

No, you can't. It may simply be an example of a phenomenon called regression to the mean.

The idea of regression to the mean is simple. The factors affecting sales may be divided into three classes: a) sales ability, b) effort, and c) everything else. Sales ability combined with effort will have a positive effect on sales, while the factors in the third category will have a number of positive and negative effects which on the whole will cancel each other.

However, although they cancel each other on the whole, the specific balance of positive and negative factors will vary randomly from region to region. In some regions the balance will happen, by chance, to be positive; in others it will happen to be negative.

The highest performance in any year is therefore most likely to come from a region in which skill and effort have been favoured by good luck – the balance of the other factors was positive. Since that balance is randomly determined, a year of very good luck is likely to be followed by a year of less luck, so performance declines even though skill and effort remain the same. Similarly, the worst branch in any year is likely to do better the next – its luck was probably unusually bad and can be expected to improve.
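To see regression to the mean in action, here is a brief simulation (a sketch in Python; the twenty branches, the skill level of 100 and the spread of 10 for the "everything else" factors are all invented for illustration). Every branch has identical skill and effort; only the random factors differ. The branch that comes out on top in the first year nevertheless declines the following year in the great majority of runs:

    import random

    random.seed(1)

    n_branches = 20    # hypothetical number of branches
    n_trials = 1000    # repeat the two-year comparison many times
    skill = 100        # identical skill and effort for every branch

    declines = 0
    for _ in range(n_trials):
        # performance = skill + the random balance of "everything else"
        year1 = [skill + random.gauss(0, 10) for _ in range(n_branches)]
        year2 = [skill + random.gauss(0, 10) for _ in range(n_branches)]
        best = max(range(n_branches), key=lambda i: year1[i])
        if year2[best] < year1[best]:
            declines += 1

    print(f"Top branch declined the next year in {declines / n_trials:.0%} of trials")

With these settings the year-one leader falls back in roughly 95 per cent of trials, even though its skill and effort never change.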

You need better evidence before you conclude your best branch is going downhill. For example, even simple statistical testing of year-to-year changes can clarify matters greatly.
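The article leaves the choice of test open; one simple option, sketched below in Python with invented percent-of-target figures, is to express the branch's year-over-year change as a z-score against the changes at the other branches. A z-score well inside plus or minus 2 means the drop is about what ordinary year-to-year variation would produce:

    from statistics import mean, stdev

    # Hypothetical percent-of-target figures for eight branches, two years running
    last_year = [112, 104, 98, 107, 95, 101, 110, 99]
    this_year = [109, 106, 96, 109, 93, 104, 107, 101]

    changes = [t - l for l, t in zip(last_year, this_year)]
    branch = 0                                   # last year's best branch (112%)
    others = [c for i, c in enumerate(changes) if i != branch]

    z = (changes[branch] - mean(others)) / stdev(others)
    print(f"Change for branch {branch}: {changes[branch]:+}, z-score: {z:+.2f}")

In this made-up example the z-score comes out around -1.3, so the decline tells you nothing that the luck of the draw couldn't explain; a much larger negative z-score would be real grounds for concern.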

In the article on data mining I mentioned that regression to the mean is a problem you'll run into when you're mining data. For example, if you find a group that spends more money than other groups, you should expect their spending to be less spectacular when you market to them. That's one reason people do pilot tests of models rather than assuming a model is valid and applying it wholesale. In general, you'll get more effective models if you never take at face value the results that data mining software produces without human intervention. Data mining can produce valuable information, but like the ore that comes out of real mines it must be assayed and refined before it's of any use.

Why Good Statistics Go Bad © 2000, John FitzGerald