Who will win the November 2016 United States presidential election? Who will be elected as Senators from Geneva? Will the price of oil stay low? Will the stock market go up? Should I put money aside for my grandchildren’s education? Where should I go on vacation?
We like to think we can predict the future. We go into certain professions or make choices about which jobs to take because we think we know what will happen years later. We put our savings in stocks, bonds or simple savings accounts depending on what we think will give us the best returns. We vote for certain candidates or parties because we think they will be the best to lead us in the future. In our personal lives, we choose a partner because we believe that person is the one we want to spend the rest of our lives with. We make choices for our children because we think we know what will be best for them when they grow up.
We go to experts for advice if we don’t think we know the answer to a problem. If we don’t feel well, we go to a doctor. If we have questions about money, we go to a financial adviser. If we are having difficulties in a relationship, we go to a family therapist or psychiatrist. If our children are having trouble at school, we consult the teacher or guidance counsellor. All of the above are experts. In other words, if we think we cannot predict the future, we go to experts to help us predict. After all, one cannot be expected to be an expert in everything.
But are experts really better than we are at predicting?
A new book, Superforecasting: The Art and Science of Prediction by Philip Tetlock and Dan Gardner, is causing quite a buzz because it contradicts our implicit understanding that experts are better than the average person at making predictions. The authors analyzed 82,361 predictions made by experts in a variety of fields. The results are startling. Fifteen percent of events the experts thought would not happen did happen; 27% of events they predicted would happen didn’t happen.
Tetlock, who teaches at the University of Pennsylvania’s Wharton School of Business, had concluded after a 20-year forecasting tournament that the average expert is “roughly as accurate as a dart-throwing chimpanzee.”
The authors of Superforecasting went even further in analyzing the art of prediction. If the experts were no better than chimpanzees, were ordinary people better? Sponsored by the American government, Tetlock and Gardner looked at how well average people are able to predict political events. After all, the United States employs 20,000 people and spends over $5 billion in taxpayer money for “experts” to predict what is going to happen. Was the average person better than the expert?
The Good Judgment Project recruited 2,800 people interested in current events. Using only public information, they were asked to make predictions about what would happen over a four-year period. Some of the volunteers were surprisingly accurate, better than their peers and even better than the experts.
The question is why certain people were more accurate than others in their predictions. The book describes a contest run by the U.S. government to see what type of person made the most impressive predictions. No special, inside knowledge was required.
A small percentage of the volunteers were better than the other volunteers and the experts. They included a wide range of people. What did they have in common? They were smart, but not necessarily geniuses; they were not members of Mensa, the high Intelligence Quotient (IQ) society open to people who score at the 98th percentile or higher on a standardized, supervised IQ or other approved intelligence test. They were people without prejudices; subtle thinkers not tied to statistics or models; people able to learn from their mistakes; people who were more interested in process than in final results and naturally skeptical about their own assumptions.
So don’t ask me who is going to win the November 2016 American election. Your guess is as good as mine, if not better, assuming you fit the above description.