By Robert Hickson 04/12/2017


It’s coming up to that time of year when predictions start popping out like buskers playing Christmas jingles.

We all know a litany of bad predictions:

“This ‘telephone’ has too many shortcomings to be seriously considered as a means of communication. The device is inherently of no value to us.” Western Union internal memo, 1876

“I think there is a world market for maybe five computers.”  Thomas Watson, chairman of IBM, 1943

“It is not too much to expect that our children will enjoy in their homes electrical energy too cheap to meter, …” Atomic Energy Commission Chairman Lewis Strauss, 1954

“Before man reaches the moon, your mail will be delivered within hours from New York to Australia by guided missiles.  We stand on the threshold of rocket mail.”  Arthur Summerfield, U.S. Postmaster General, 1959

“We don’t like their sound, and guitar music is on the way out.” Decca Recording Company on declining to sign the Beatles, 1962

“There’s just not that many videos I want to watch.” Steve Chen, co-founder of YouTube, 2005

But, of course, there are predictions that worked out, though not so many:

“Robots will neither be common nor very good in 2014, but they will be in existence.” Isaac Asimov, 1964

Despite its poor record, prediction is a growth industry. It is natural to want to be able to see what is going to happen when there is rapid change and high levels of uncertainty.

However, research suggests that prediction can more often be about wish fulfillment than objective analysis. The more desirable a future event is, the more likely we may be to think that it will happen.

Prediction, like its sibling punditry, is usually art rather than science.

“Punditry’s an art form. To have a viable long-term career as a pundit, you have to become very adept at appearing to go out on a limb without actually going out on a limb. You have to be saying things that sound very emphatic about what the dreadful consequences of this or that might be, but they have to be vague consequences and they have to be linked with elastic probability terms that cover both sides of maybe.”

Philip Tetlock

Many so-called futurists also hedge their bets by predicting that the next big thing is “five or ten years away.”

There is another trick to make it seem that your predictions are accurate. Make as many as possible, hoping that at least one comes true. Then keep pointing to the one that did to establish your credibility.

Ray Kurzweil promotes himself as having great accuracy in predicting the future. Some of his predictions seem quite prescient. However, others suggest that at times he’s really only stating what is already happening. In other cases he can get creative in deciding how successful he has been.

Good predictions should be specific, with a clear timeframe, so there is no doubt about what will happen and when.

More information results in better predictions, right?

The failed predictions noted above are all based on belief rather than analysis. It can seem, then, that more information should improve accuracy. This assumption is the principle upon which companies like Quid operate: they take a “big data” approach to helping organisations spot what’s next, or to answering strategic questions.

However, more information doesn’t necessarily improve how accurate you are. The CIA has studied this in detail [PDF].

“Once an experienced analyst has the minimum information necessary to make an informed judgment, obtaining additional information generally does not improve the accuracy of his or her estimates. Additional information does, however, lead the analyst to become more confident in the judgment, to the point of overconfidence.”

Sometimes that “minimum information necessary” is known. That was the case for the experienced horse handicappers in the experiment the CIA used to produce the graph below [Chapter five of the CIA’s Psychology of Intelligence Analysis].

More information can improve confidence, not accuracy. Source: CIA 1999. Psychology of Intelligence Analysis

In other cases the information critical to the decision may not be known.

“Experienced analysts have an imperfect understanding of what information they actually use in making judgments. They are unaware of the extent to which their judgments are determined by a few dominant factors, rather than by the systematic integration of all available information. Analysts actually use much less of the available information than they think they do.” Psychology of Intelligence Analysis

What makes a good predictor?

For both predictors and pundits it is a case of “know thyself”. There are a variety of conscious and unconscious biases that influence how we use and interpret information and make decisions.

“Intelligence analysts should be self-conscious about their reasoning processes. They should think about how they make judgments and reach conclusions, not just about the judgments and conclusions themselves.” Psychology of Intelligence Analysis

Philip Tetlock and his collaborators have undertaken research to understand what makes someone a good predictor, at least for relatively short-term, very specific predictions.

In their studies they identified so-called “super forecasters”: people who were able to consistently make accurate predictions about the probability of particular events (usually political or military ones) happening. These people, they found, had several traits in common:

  • Self-awareness – knowing your limitations and foibles
  • Open-mindedness – particularly in how you deal with uncertainty
  • Taking an outside, historical perspective on the problem – looking at what’s happened previously in similar situations

Lacking expertise in the particular situation usually wasn’t a hindrance either. Curiosity and a willingness to look at the issue from a range of perspectives are more useful: the fox rather than the hedgehog mindset.
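To make “accurate predictions about probabilities” concrete: forecasting tournaments of this kind typically score forecasts with something like the Brier score, the mean squared difference between the probability given and what actually happened. The sketch below is illustrative only; the forecasts and outcomes are invented.

```python
# Minimal sketch of the Brier score, a standard way to score probabilistic
# forecasts. Each forecast is a probability (0 to 1) that an event happens;
# each outcome is 1 if it happened, 0 if not. Lower is better.
# The forecasts and outcomes below are invented for illustration.

def brier_score(forecasts, outcomes):
    """Mean squared difference between forecast probabilities and outcomes."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

outcomes = [1, 0, 1, 1, 0]            # what actually happened
sharp    = [0.9, 0.1, 0.8, 0.7, 0.2]  # specific, well-calibrated forecasts
hedger   = [0.5, 0.5, 0.5, 0.5, 0.5]  # "both sides of maybe" on everything

print(brier_score(sharp, outcomes))   # 0.038 - rewarded for being right and specific
print(brier_score(hedger, outcomes))  # 0.25  - perpetual hedging scores worse
```

Hedging everything at 50 percent guarantees a mediocre score, which is why the “elastic probability terms” Tetlock mocks above don’t survive contact with an actual scoring rule.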

The Good Judgment Project, which Tetlock helped design and run, found that predictive abilities can be improved. This involves:

  • Spotting talented individuals – identifying people with the right attributes
  • Training them to recognise and counter cognitive biases, and teaching them good techniques
  • Getting them to work in teams, since collective intelligence is usually more powerful
  • Aggregating forecasts to combine the wisdom of the crowd with the judgments of known good forecasters (a simple sketch of this step follows the list)
  • Revisiting the predictions regularly, since new information may help
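As a sketch of the aggregation step, one simple approach is to take a (possibly weighted) average of the team’s probability estimates and then “extremize” the result, pushing it away from 50:50 to offset the way averaging drags everyone toward “maybe”. This is an illustration of the general idea, not the project’s exact method; the weights, exponent and numbers are invented.

```python
# Minimal sketch of aggregating a team's probability forecasts:
# a weighted average (more weight for forecasters with a good track record),
# then an "extremizing" step that pushes the combined estimate away from 0.5.
# The weights, exponent and forecasts are invented for illustration.

def aggregate(forecasts, weights=None, exponent=2.0):
    """Weighted mean of probabilities, extremized by raising the odds to a power."""
    if weights is None:
        weights = [1.0] * len(forecasts)
    mean = sum(f * w for f, w in zip(forecasts, weights)) / sum(weights)
    odds = (mean / (1.0 - mean)) ** exponent  # amplify the crowd's lean
    return odds / (1.0 + odds)                # convert back to a probability

team = [0.6, 0.7, 0.8]                     # three forecasters' estimates
print(aggregate(team, weights=[1, 1, 2]))  # ~0.87, versus a simple average of 0.70
```

The extremizing step reflects the observation that when several independent forecasters all lean the same way, the combined evidence is stronger than their average alone suggests.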

What they don’t delve into is the forecasters’ ability to explain their reasoning. More useful than the prediction itself, I think, is an outline of the things that need to happen (or not happen) to get to that point.

Jumping straight to “We’ll all be in self-driving cars by 2040” is less insightful than setting out what needs to happen for self-driving cars to be viewed as safe, affordable, reliable, and acceptable.

I Predict

Predictions tend to be like Christmas crackers: snappy, offering a little trinket of dubious utility and quality, a flimsy tissue of logic, and, in the end, a disappointing, forgettable tale.

I can confidently predict, though, that predictions won’t go away. So when you read, watch or hear predictions, bear the following in mind:

  • Is it specific or vague?
  • Do they explain their reasoning?
  • Is what is being predicted really a case of something becoming more common, rather than something novel?
  • Do they have a record of accurately predicting similar kinds of things?
  • Do you agree or disagree with the prediction because it does or doesn’t align with your expectations or beliefs?


Featured image: Photo by Justin Clark on Unsplash

This post is part of the Sciblogs Consuming Science series, exploring the science behind everyday consumer items and services. Read more here