The Washington Times - Wednesday, December 12, 2012

By 2030, the United States will no longer stand as the world’s sole superpower. Islamist terrorism will mostly be a thing of the past; cyber warfare will be a major threat. The ranks of the world’s middle class will triple, food and water supplies will be pinched in places like the Middle East, natural gas will trump renewables as a primary energy source, and climate change will intensify extreme weather, with damp areas wetter and dry regions more arid.

Oh, and we’ll be enjoying night vision eye implants, simple replacement organs created on three-dimensional printers and cars that drive themselves.

Such are some of the forecasts found in “Global Trends 2030: Alternative Worlds,” the latest edition of a quadrennial report released Monday by the National Intelligence Council, which serves under U.S. Director of National Intelligence James Clapper. Intended to help new or returning presidential administrations make forward-thinking policy decisions, the document is the intelligence community’s collective and expert best guess at what the long-term future holds.

Which means, of course, that much of what it contains will either be too obvious to be of any particular use, or else just totally wrong.

“Imagine you were an analyst in 1900 trying to predict what the world would look like 20 years in the future,” said Michael Horowitz, a professor of political science at the University of Pennsylvania. “Think of how much of the world you would have missed, like WWI. Or imagine you were in 1930, predicting 20 years forward. You would have missed the rise of Hitler and WWII. The world is really complicated and unpredictable.”

Indeed. From astonishment over the Arab Spring to the unexpected resurrection of Apple Computer, from the spread of smartphones to the shock of 9/11, ours is a world that mocks expert judgment and confounds informed prognostication.

Research on the psychology and efficacy of prediction has found that long-term expert forecasts are about as accurate as monkeys tossing darts at a board labeled with potential future outcomes. And yet forecasting remains a growth industry, in both the intelligence community and televised political punditry.

“In the short term, we can beat the dart-throwing chimp by pretty good margins,” said Philip Tetlock, a professor at the University of Pennsylvania and author of “Expert Political Judgment: How Good Is It? How Can We Know?” “But as you get further out into the future, it gets increasingly difficult.”
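How does one measure whether an expert beats the chimp? Mr. Tetlock’s research scores probability forecasts with measures such as the Brier score — roughly, the squared gap between what a forecaster said would happen and what did. A minimal sketch in Python, using invented numbers rather than any actual study data, of how a forecaster compares to a dart-throwing 50/50 baseline:

```python
# Minimal sketch: comparing a forecaster against a "dart-throwing" chance
# baseline using the Brier score (lower is better). The probabilities and
# outcomes below are invented for illustration.

def brier_score(forecasts, outcomes):
    """Mean squared error between predicted probabilities and what happened."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Five yes/no questions: 1 = the event happened, 0 = it did not.
outcomes = [1, 0, 0, 1, 0]

expert = [0.9, 0.2, 0.4, 0.7, 0.1]   # a reasonably sharp forecaster
chimp = [0.5] * len(outcomes)        # darts: 50/50 on everything

print(f"Expert Brier score: {brier_score(expert, outcomes):.3f}")  # 0.062
print(f"Chimp  Brier score: {brier_score(chimp, outcomes):.3f}")   # 0.250
```

On short-horizon questions, the expert’s score handily beats the baseline; Mr. Tetlock’s point is that this margin shrinks as the forecast horizon stretches out.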

Future imperfect

“Seinfeld” was hot. Netscape was cool. The year was 1997, and the U.S. intelligence community released its first Global Trends report, a look ahead to 2010 that did not predict:

• The 2008 financial crisis, perhaps the biggest global event of the decade;

• The Asian financial crisis and subsequent meltdowns in Brazil and Russia that began later that same year.

Global Trends 2010, however, did predict that the “erosion in the authority of the central Russian government that has occurred will not be easily reversed.” Oops. The document also foresaw that “the next 15 years will witness the transformation of North Korea and resulting elimination of military tensions on the peninsula.” Oops again.

Still, the failures in the 1997 report were less cringe-worthy than the biggest miss in “Global Trends 2015.” Released in January of 2001, it perfunctorily noted that “terrorist groups will continue to find ways to attack U.S. military and diplomatic facilities abroad” while mentioning neither al Qaeda nor the possibility of a terror attack on American soil.

Like the experts who produced it, the report did not see the 9/11 attacks coming.

“I think what they wrote about terrorism reflects the fact that people knew the threat of non-state actors was growing in the 1990s,” said Mr. Horowitz, who co-authored a Foreign Policy article criticizing the shortcomings of the Global Trends reports. “That was a clear trend noticed by people inside and outside the intelligence community. But there wasn’t enough imagination to see that something like 9/11 could happen. What they are doing with these reports is extremely hard.”

Ironically, the National Intelligence Council’s signature forecasting product — National Intelligence Estimates, which represent the intelligence community’s best collective judgment and are roughly akin to the Global Trends reports, albeit focused on national security issues — was born from a similar predictive failure, a series of miscalculations involving China and North Korea that preceded the Korean War.

Did the establishment of NIEs lead to better predictive judgments? Not necessarily. A 1962 document erroneously concluded that the Soviet Union would not put offensive weapons in Cuba; a 1964 report mistakenly stated that Israel had “not yet decided” to build nuclear weapons; and the 2002 estimate of Iraqi weapons of mass destruction proved wrong.

Of course, the above missteps join a long, undistinguished line of confident, informed, forward-looking analysis that later ran aground on the rocky shores of unexpected reality, including: Federal Reserve Chairman Ben Bernanke’s 2005 assurance that rising housing prices were the result of “strong economic fundamentals,” the book “Dow 36,000,” and competing models among dovish and hawkish American policymakers of the future of the Soviet Union — none of which foresaw its demise.

In fact, it was the bipartisan failure to predict the relatively sudden dissolution of the Soviet empire under Mikhail Gorbachev and the end of the Cold War that prompted Mr. Tetlock to begin studying a difficult, mostly unexamined question: What, if anything, distinguishes political analysts who are more accurate with their predictions on particular issues from those who are less accurate?

Moreover, can those political analysts perform appreciably better than chance?

Foxes and hedgehogs

In 2006, Mr. Tetlock published his answers in “Expert Political Judgment: How Good Is It? How Can We Know?” The book — which includes a 20-year study of 284 experts from a variety of fields making roughly 28,000 predictions about the future — was both revelatory and much-discussed, finding that political analysts:

• Are less accurate than simple extrapolation algorithms;

• Are only slightly more accurate than chance;

• Become significantly less accurate — less likely to better the dart-throwing monkey — when their predictions project more than one year into the future;

• Are overconfident, believing they know much more about the future than they actually do — for example, when they reported themselves as 80 or 90 percent confident about a particular prediction, they often were correct only 60 or 70 percent of the time (a gap illustrated in the sketch following this list);

• Are strongly disinclined to change their minds even after being proven wrong, preferring instead to justify their failed predictions or shoehorn them into their cognitive biases and preferred ways of thinking about and understanding the world.
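The overconfidence finding is, at bottom, a simple calibration check: bucket predictions by the confidence the forecaster claimed, then count how often they came true. A minimal sketch in Python, using invented data rather than figures from the study:

```python
# Minimal sketch of a calibration check, using invented data: group
# predictions by the confidence the forecaster claimed, then compare the
# claimed confidence to the fraction that actually came true.

from collections import defaultdict

# (stated confidence, outcome) pairs; 1 = prediction came true, 0 = it did not.
predictions = [
    (0.9, 1), (0.9, 0), (0.9, 1), (0.9, 0), (0.9, 1),   # claimed 90%
    (0.6, 1), (0.6, 0), (0.6, 1), (0.6, 0), (0.6, 1),   # claimed 60%
]

buckets = defaultdict(list)
for confidence, outcome in predictions:
    buckets[confidence].append(outcome)

for confidence in sorted(buckets, reverse=True):
    outcomes = buckets[confidence]
    hit_rate = sum(outcomes) / len(outcomes)
    print(f"claimed {confidence:.0%}, actually right {hit_rate:.0%}")

# A well-calibrated forecaster's hit rate matches the claimed confidence.
# Here the 90-percent bucket lands at 60 percent — the kind of gap
# Mr. Tetlock's experts showed.
```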

Mr. Tetlock divided forecasters into two types of thinking styles: hedgehogs, who are deeply knowledgeable about and devoted to a particular subject or body of knowledge; and foxes, who have eclectic interests and know a little about a lot of things.

Fox-style thinkers, he discovered, were more successful at predicting than hedgehogs — a counterintuitive finding that cuts against the whole notion of expertise.

“You find serious overconfidence by hedgehogs,” Mr. Tetlock said. “With a rigid, self-justifying style of thinking, combined with a lot of content knowledge, combined with trying to see far in the future — well, you are more likely to go off a cliff.

“On the other hand, a key factor underlying superior fox performance was that they were more modest about what they could predict. The longer you try to see into the future, the more advantageous it is to be modest. I think they also derive some benefits by being more self-critical and aware of alternative possibilities.”

The most common problem with forecasts like the Global Trends report, Mr. Horowitz said, is that they fall into a hedgehog-style trap of treating current conventional wisdom as a blueprint for the long-term future. For example, extrapolating from the late 1990s economic boom, the 2001 Global Trends report predicted that the global economy of 2015 would “return to the high levels of growth reached in the 1960s and early 1970s.”

“The most natural thing to do intellectually is look at the present,” Mr. Horowitz said. “What is going on now is going on for a reason, and therefore it is most likely to continue. There’s nothing wrong with that. To do better is difficult. It requires both being more aware of your own biases and being forced out of your intellectual comfort zone.”

The future of forecasting?

Mr. Horowitz and Mr. Tetlock are involved in an ongoing, multi-year project funded by the Intelligence Advanced Research Projects Activity (IARPA) that is attempting to create better metrics for evaluating analyst accuracy.

Encompassing studies at a half-dozen universities and involving thousands of forecasters making predictions on hundreds of questions — Who will win the upcoming election in Ghana? Will Iran test a nuclear weapon by the end of the year? — the research aims to help participants recognize biases, avoid common errors and make better predictions in the future, lessons that can then be applied to American intelligence gathering.

One of the most important lessons, Mr. Tetlock said, is learning to avoid something he calls “accountability ping-pong” — that is, overreacting to an errant prediction by abstaining from making predictions going forward.

“We’re making progress in that direction,” he said. “Our forecasters are gradually becoming better calibrated. How far can they go? Nobody knows for sure.”

Spoken like a fox.

• Patrick Hruby can be reached at phruby@washingtontimes.com.
