Professor of Statistics Martin Hazelton has some consumer warnings about the business of prophecy.
“Beware of prophets” is the blanket warning given by Professor Martin Hazelton – even of those who have been right before. That successful stockbroker of the moment with a record of picking winner after winner may be truly prescient, but in any population of stockbrokers there will always be a few who have had unbroken runs of good luck – and that is an equally plausible explanation. Statisticians are natural sceptics.
Does more data always mean better predictions? Not necessarily, says Hazelton. In theory, the more information the better, but only if you can winnow out the relevant information from among the irrelevant chaff; the signal from the noise.
It is human nature to want to impose patterns, even where there are none. “If you toss a coin repeatedly, you will see patterns emerging.” Patterns like these are artefacts.
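The coin-toss point is easy to check for yourself. The sketch below (an illustration, not anything from Hazelton) simulates 100 fair tosses and finds the longest run of identical outcomes; runs of five or more in a row are routine in pure noise, yet they look like a pattern.

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

# Simulate 100 fair coin tosses.
tosses = [random.choice("HT") for _ in range(100)]

# Find the longest run of identical outcomes.
longest = current = 1
for prev, nxt in zip(tosses, tosses[1:]):
    current = current + 1 if nxt == prev else 1
    longest = max(longest, current)

print("".join(tosses[:20]), "...")
print("longest run of identical tosses:", longest)
```

Running this repeatedly with different seeds shows long streaks turning up again and again, for exactly the same reason a few stockbrokers will always have unbroken runs of winners.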
Then there is the issue of black swans: events with major consequences that are so seemingly unlikely that they are completely discounted. The Global Financial Crisis was a black swan, as was the Christchurch earthquake. “I don’t think we are very good at, say, distinguishing between the one-in-a-hundred-year event and the one-in-ten-thousand-year event.”
Hazelton stresses the importance of factoring in the consequences of an event alongside its likelihood.
“Even if the percentage chance of a major earthquake is tiny, a few percent, it is still something for which you would want to prepare.”
Finally, he warns against the biases that arise from predilection and gut instinct.
In the recent US national elections, a media consensus seemed to have been reached that the race was ‘too close to call’. Every poll result showing any sort of swing – even one within the margin of error – would bring a flood of punditry: ‘experts’ offering rationalisations for things trending the way they were – the latest employment figures, or the success of a campaign ad – divining patterns where there were none, and bringing in herds of other experts on their heels.
Meanwhile, Romney was reportedly certain enough of victory to have ordered a fireworks display.
“He would have been following the pundits and listening to the voices of encouragement in his inner circle.”
Yet some more sober-minded statisticians always gave the odds to an Obama win, with one of them, Nate Silver of the FiveThirtyEight blog, pulling off a coup by picking all 50 state winners.
The secret? “He used statistical models to aggregate and weight the poll results across states and across time to filter out noise.”
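Silver’s actual models are considerably more sophisticated, but the core idea of aggregating polls, weighted by sample size and recency, can be sketched roughly as follows. The poll figures here are made up for illustration.

```python
# Hypothetical polls for one state: (share for candidate A, sample size, days old).
polls = [
    (0.52, 800, 2),
    (0.49, 400, 10),
    (0.51, 1200, 5),
]

def aggregate(polls, half_life=7.0):
    """Weighted average: bigger samples count more, stale polls decay away.

    This is a simple stand-in for the kind of aggregation Silver's
    models perform, not his actual method.
    """
    weights = [n * 0.5 ** (age / half_life) for _, n, age in polls]
    total = sum(weights)
    return sum(w * share for (share, _, _), w in zip(polls, weights)) / total

estimate = aggregate(polls)
print(f"aggregated estimate for candidate A: {estimate:.3f}")
```

Averaging across many noisy polls is what “filters out noise”: the individual swings that excited the pundits largely cancel out.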
If Hazelton were to devise a code of practice for prognosticators, central to it would be one thing: confidence intervals – those plus-or-minus margins of error we know from election polls.
Silver was lucky to get his numbers precisely right; he probably surprised himself. In many states the tipping point lay only just outside his 95 percent confidence intervals: with a 95 percent interval, about one prediction in 20 can be expected to be wrong.
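One way to see why a clean sweep involves luck: if each of the 50 state calls were treated as an independent 95 percent bet (an oversimplification, since many states were near-certainties), the chance of getting all 50 right would be under 8 percent.

```python
# Probability that 50 independent calls, each right 95% of the time,
# are ALL correct. Independence here is an illustrative assumption.
p_all_correct = 0.95 ** 50
print(f"chance of a clean sweep: {p_all_correct:.3f}")  # about 0.077
```

In reality Silver’s per-state probabilities varied widely, so his true chance of a sweep was better than this, but the arithmetic makes the point that precision and luck travelled together.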
The importance of confidence and prediction intervals is the first thing Hazelton impresses on his first-year statistics students.
“If a financial adviser recommends a stock to you on the basis that it will rise 10 percent in value in the next year, you will probably invest. If he says that the stock is likely to rise between 8 and 12 percent in value, the same applies. But if he says that the stock’s range of values is likely to be somewhere between a 20 percent loss and a 40 percent gain, you might have second thoughts.”