Guest essay by J. Scott Armstrong
In his 2007 book, The Assault on Reason, former U.S. Vice President Albert Gore claimed that “many scientists are now warning that we are moving closer to several ‘tipping points’ that could, within as little as 10 years, make it impossible for us to avoid irretrievable damage to the planet’s habitability for human civilization.” Ten years later, the results are now in, or so UN Climate Change Executive Secretary Patricia Espinosa would have us believe. In the first week of May 2018, the UN released its annual report with the conclusion that “Climate Change is the single biggest threat to life, security and prosperity on Earth.” And, on May 23 this year, an editorial in Nature published projections of severe economic damage from dangerous global warming. So is the Earth really becoming dangerously warmer?
In 2007, Kesten Green, from the University of South Australia, and I published “Global Warming: Forecasting by Scientists versus Scientific Forecasting.” In it, we evaluated the forecasting methods the 2001 U.N. Intergovernmental Panel on Climate Change (IPCC) used to derive their forecasts of dangerous warming. We concluded that their methods violated 72 of the 89 relevant forecasting principles in the Principles of Forecasting handbook. Even a single violation could render a forecast useless. The IPCC report included nothing to suggest that the authors were aware of the enormous improvements in forecasting methods and principles over the past century. (See here for the most recent description of methods, principles, and evidence).
The IPCC used selected expert opinions about the effects of carbon dioxide levels and other variables to create computer models that provide the numerical and graphical basis for their “scenarios”—detailed stories of “possible” futures. Scenarios have no validity for forecasting, as they are based only on experts’ ideas about what might happen. In practice, the use of scenarios encourages extreme forecasts and gives decision-makers unwarranted confidence in the forecasts. (See a review of the evidence.) Other experts consider the IPCC’s scenarios to be implausible; see, for example, the Global Warming Petition Project, which was signed publicly by more than 30,000 scientists and engineers.
In 1980, I published a review of research on expert forecasts. The review led to the conclusion that experts’ forecasts about complex uncertain situations are no more accurate than those of non-experts. Many resisted that conclusion, which led me to propose the Seer-Sucker Theory: “No matter how much evidence exists that seers do not exist, suckers will continue to pay for the existence of seers.”
This general conclusion about experts applies to environmental issues. For example, at the first Earth Day in 1970—when global cooling was a major concern—a widely quoted expert, Paul Ehrlich, predicted that between 1980 and 1989, four billion people, including 65 million Americans, would perish due to lack of food and other resources.
The above failings prompted me to challenge Mr. Gore to a ten-year $10,000 bet on the most accurate way to forecast global temperatures. The bet was proposed as an objective effort to focus attention on assessing the accuracy of alternative forecasting methods. I forecasted that there would be no long-term trend. Interestingly, this is consistent with the IPCC report’s conclusion in their technical section on forecasting, which stated that due to the complexity of the problem: “the long-term prediction of future climate states is not possible.” (That observation was ignored in the IPCC’s administrative summary.) My forecast is also consistent with the Golden Rule of Forecasting, which is to be conservative when forecasting in an uncertain situation. Nevertheless, I was not highly confident that I would win the bet due to the common range of natural variation over a ten-year period.
Mr. Gore declined my challenge. So, I used the 2001 IPCC projection of 3°C-per-century warming as a relatively conservative forecast to stand for his undefined but rather alarming-sounding “tipping point.”
Kesten Green monitored the bet from 2008 through 2017 by using monthly satellite data from the University of Alabama at Huntsville. Satellite data are more reliable than land-based stations, which are often contaminated by poor maintenance, elimination of stations, urban areas, and unexplained adjustments to historical data. Furthermore, in line with the IPCC report’s concern—“reverse the decline of observational networks in many parts of the world. Unless networks are significantly improved, it may be difficult or impossible to detect climate change over large parts of the globe”—the satellite data cover the whole Earth. Thus, we refer to the satellite data graph as the “Whole Earth Thermometer.”
Which method provided forecasts with the smallest errors? Over the 120 months of the original “bet,” ending on December 31, 2017, the absolute errors of the no-trend forecast were 12% smaller than those of the IPCC’s “dangerous warming” projection.
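The error comparison described above can be sketched in a few lines of code. The anomaly series below is synthetic and the base-year value is hypothetical; the point is only to show the mechanics of scoring a no-trend forecast against a 3°C-per-century projection over 120 months, not to reproduce the actual satellite record or the 12% result.

```python
# Sketch of the bet's scoring method, on synthetic data (not the UAH record).
import random

random.seed(42)
base = 0.2                    # anomaly (degrees C) in the base year; hypothetical
months = 120                  # the ten-year bet period
trend_per_month = 3.0 / 1200  # 3 C per century = 0.0025 C per month

# Synthetic "observed" anomalies: the base value plus natural variation only.
observed = [base + random.gauss(0.0, 0.15) for _ in range(months)]

# No-trend forecast: every month equals the base-year anomaly.
no_trend_forecast = [base] * months
# Warming-trend forecast: the base anomaly plus 3 C/century of drift.
trend_forecast = [base + trend_per_month * (m + 1) for m in range(months)]

mae_no_trend = sum(abs(o - f) for o, f in zip(observed, no_trend_forecast)) / months
mae_trend = sum(abs(o - f) for o, f in zip(observed, trend_forecast)) / months
print(f"no-trend MAE: {mae_no_trend:.3f}  trend MAE: {mae_trend:.3f}")
```

On a trendless series like this one, the no-trend benchmark necessarily wins on average; the bet's substance lies in whether the real data behave that way.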
I am extending the bet by ten years. My bet is that the no-change model will be more accurate over the 20-year period beginning in 2008 and running through 2027, retaining 2007 as the base year.
I am confident about winning the extended, 20-year bet. Why? In 2009, Kesten Green, Willie Soon and I published a paper in the International Journal of Forecasting in which we compared the accuracy of forecasts based on the IPCC projection with the accuracy of no-trend forecasts. The IPCC authors had explained that, “global atmospheric concentrations [of carbon dioxide, etc.]…have increased markedly as a result of human activities since 1750.” So, we used the U.K. Met Office Hadley Centre’s complete land-temperature data from 1850—when the industrial revolution was in full swing—through 2007. We made forecasts every year from 1850 through 2007 (the complete series) to forecast one-year-ahead, two-ahead, and so on, up to 100-years-ahead. As the forecast horizon increased, the IPCC forecast errors increased consistently and rapidly relative to the no-trend forecast. For horizons of 91 to 100 years, the errors were, on average, 12.6 times larger than those from the no-trend forecast. To our knowledge, that paper is the only peer-reviewed article published in a scientific journal that provides long-range forecasts of global mean temperatures that comply with scientific forecasting methods and principles.
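The rolling-origin procedure described above can be illustrated as follows. From every year in the series, both models issue forecasts for horizons 1 through H, and absolute errors are accumulated by horizon. The series here is synthetic and trendless, and the horizon is shortened to 50 years for brevity; the error ratios it produces are illustrative only, not the published 12.6 figure.

```python
# Sketch of a rolling-origin evaluation: no-change vs. fixed-trend forecasts,
# scored by horizon, on a synthetic trendless series (not the Hadley data).
import random

random.seed(0)
years = 158                 # e.g., annual values spanning 1850-2007
H = 50                      # longest horizon evaluated in this sketch
trend = 0.03                # 3 C per century = 0.03 C per year

# Synthetic trendless annual anomalies (degrees C).
series = [random.gauss(0.0, 0.2) for _ in range(years)]

err_no_change = [0.0] * (H + 1)
err_trend = [0.0] * (H + 1)
counts = [0] * (H + 1)

for origin in range(years - 1):
    level = series[origin]  # last observed value at the forecast origin
    for h in range(1, min(H, years - 1 - origin) + 1):
        actual = series[origin + h]
        err_no_change[h] += abs(actual - level)             # no-change forecast
        err_trend[h] += abs(actual - (level + trend * h))   # trend forecast
        counts[h] += 1

# Ratio of trend-model error to no-change error at each horizon.
ratios = [err_trend[h] / err_no_change[h] for h in range(1, H + 1) if counts[h]]
print(f"error ratio at h=1: {ratios[0]:.2f}, at h={H}: {ratios[-1]:.2f}")
```

On a series with no underlying trend, the ratio grows with the horizon because the trend model's drift compounds while the no-change model's error stays bounded by natural variation.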
In 2014, we used the same procedures to analyze temperatures covering the 1,820 years from 116 through 1935 (the complete time series), again finding no evidence of warming. Instead, the forecast errors of the global warming hypothesis were twice those of the global cooling hypothesis, which, in turn, were twice as large as the forecast errors from the no-trend model.
Finally, Kesten Green and I contacted experts for examples of situations analogous to global warming, and also consulted the literature. Out of 71 analogous situations, 26 met our criteria that the alarm was: (1) based on forecasts of human catastrophe arising from effects of human activity on the physical environment, (2) endorsed by experts, politicians and the media, and (3) accompanied by calls for government action. The government acted in 23 of these 26 situations (e.g., DDT, acid rain, global cooling, and mercury in fish). All of those alarming forecasts were based on experts’ opinions, rather than valid forecasting methods. None of the forecasts came true. The government policies were found to be harmful in 20 of the situations, and beneficial in none. (Green and Armstrong, 2011.)
Lacking scientific evidence to support their case, advocates of the dangerous manmade global warming hypothesis have turned to the “Precautionary Principle,” which argues that uncertainty—ignorance about the situation—requires immediate action. That is a political principle, not a scientific principle, and the appropriate citation for it is George Orwell’s Nineteen Eighty-Four.
In contrast, I suggest that you ignore expert opinions about climate change. Instead, follow the Golden Rule of Forecasting by being conservative. In addition, before considering actions, monitor temperatures using the objective Whole Earth Thermometer against a predetermined benchmark year (whether Gore’s revelation or the first year of satellite data) to determine whether there is indeed a very long-term warming (or cooling) trend that goes dangerously beyond normal variations.
Dr. J. Scott Armstrong is a Professor at the University of Pennsylvania. In the interest of full disclosure, he states that he is biased toward more warming because, in his opinion, the Earth is below its optimum temperature level.