Guest essay by Michael R. Smith, C.C.M.
Forbes, “Absolute Return” column, April 21, 2008, page 246:
Here’s another name you should own, Freddie Mac ($29 per share)…Freddie is cheap at 1.1 times book [value].
Less than five months later, Freddie Mac’s stock was worth 25¢ per share, a loss of 99%. It has since recovered to 70¢ per share, so the loss is “only” 97.6%.
Forecasting a single company's stock price five months into the future seems easy. The company had government backing (it was a federally sponsored corporation). What could go wrong?
Yet the forecast published by Forbes could hardly have been more inaccurate short of an outright bankruptcy. It is worth examining how a situation that seemed rock solid (government-backed securities!) turned catastrophic, to see whether there are lessons that might apply to the atmospheric sciences.
The assumption that Freddie Mac (and other financial stocks) was low risk rested primarily on computer models. As one expert, writing under a pseudonym, stated (http://blogs.zdnet.com/Murphy/?p=1265 ),
The problem is inherently complex – imagine being asked to value a portfolio of 10,000 residential mortgages issued to a total of something like 17,652 individuals. Each mortgage balances some issue amount against some payment stream; each has had zero or more payments recorded against it, each has an initial interest rate; an interest computation method; zero or more early payment opportunities; some mention of late or missed payment penalties and conditions, and an expiry, renegotiation, or call date.
While I do not doubt that is “complex,” the level of complexity is minuscule when compared to the complexity of the earth-atmosphere-ocean system and its interactions. Yet, faith in these model valuations led to a prediction that Freddie Mac stock was “cheap” when a meltdown of the financial system, largely due to the incorrect valuations and risk estimates by computer models, was less than 180 days away.
After the meltdown occurred, a second Forbes article stated, “All existing models for calculating risk, he [Nassim Taleb] says, should be thrown out because they underestimate extreme price swings. ‘The track record of economists in predicting events is monstrously bad,’ he says.” (February 2, 2009, p. 21) Of course, we learn this after our home values and the values of our 401(k)s are wrecked.
Given the failure of these models to predict an implosion only six months away, would you invest the remainder of your 401(k) based on what the same model predicts for the next six years or, if you are in your 20s, for the next sixty? I don’t know what your answer might be, but common sense would indicate applying these models’ forecasts to your portfolio with extreme caution.
On June 1, 2009, we learned from The New York Times that “Models’ Projections for Flu Miss Mark by Wide Margin.” The model predicted, according to the Times, “by the end of May, there would only be 2,000 to 2,500 cases in the United States… On May 15, the Centers for Disease Control and Prevention estimated there were upwards of 100,000 cases in the country…”
Just six months earlier, the models’ predictive capability was touted because of real-time input from Google (www.cidrap.umn.edu/cidrap/content/influenza/panflu/news/nov1308google-jw.html ). Now the flu has been declared a “pandemic” by the World Health Organization (www.pandemicflu.gov ) in spite of the models’ modest projections of cases in existence by June 2009. Another critical short-term modeling failure.
Question: If the model predicts low risk for the next six months, would you decide to forego a flu shot? Again, your answer might be different, but common sense would dictate getting the shot.
How do these examples relate to climate modeling and policy?
We currently have climate models that have missed the fact that atmospheric temperatures peaked 11 years ago and that oceanic heat content has, at best, failed to increase. See: http://climatesci.org/2009/03/04/large-uncertainty-in-the-simulation-of-the-global-average-surface-temperature-by-the-ipcc-models-a-study-reported-on-the-weblog-the-blackboard/ , http://climatesci.org/2009/02/09/update-on-a-comparison-of-upper-ocean-heat-content-changes-with-the-giss-model-predictions/ , among many others.
Given the inadequate performance of these models over the last 5 to 10 years, why do we believe we can make accurate, highly specific forecasts 50 to 100 years in the future? Is it because we are so close to the problem we are blinded to the dangers like the economists who did not see the meltdown coming?
Almost no one familiar with meteorology or climate models would disagree that they are more complex than the mortgage valuation or influenza prediction models. The basic processes of the earth-ocean-atmosphere are incompletely understood and we barely understand many of their interactions.
We also know that forecasting the weather beyond five days is dicey at best. Then why are we making 29,000-day weather forecasts? Don’t think we are doing that? Consider the following:
“By the period 2080-2099, devastating heat waves of the kind that killed more than 700 people in Chicago in 1995 will occur three times per year.” (USCCP, p. 119, citation below)
That is a weather forecast – a forecast of specific meteorological conditions at a specific time and place. The document is filled with similar predictions, along with recommendations based on those predictions.
We are sometimes told that climate forecasts can be made because the “weather” errors will be cancelled out because they are “random.” Here is what was said about the mortgage computer models,
Now, because you can predict roughly the probable range for most of these assumptions but not the actual values the variables involved will have for each of the time periods you have to consider, what you do is write a monte carlo simulation in which you try tens of thousands of value combinations and plot the results to see what, on average expectations, the portfolio might be worth.
Notice, that at this point even something as large as 0.0005% error in the outcome would be completely insignificant – so randomization error should have no effect, right? (op. cit.)
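The quoted procedure is simple enough to sketch. Below is a toy Monte Carlo valuation of a mortgage portfolio in Python; every parameter (loan balance, default probability, recovery rate, portfolio size) is invented purely for illustration and bears no relation to the actual models Freddie Mac or its raters used.

```python
import random

def simulate_portfolio_value(n_mortgages=1000, n_trials=5000,
                             balance=200_000, default_prob=0.02,
                             recovery=0.5, seed=42):
    """Monte Carlo estimate of a toy portfolio's expected value,
    assuming every loan defaults independently (hypothetical numbers)."""
    rng = random.Random(seed)
    total_face = n_mortgages * balance
    values = []
    for _ in range(n_trials):
        # Draw a random default outcome for each loan in this trial.
        defaults = sum(rng.random() < default_prob
                       for _ in range(n_mortgages))
        # A defaulted loan returns only the recovery fraction of its balance.
        values.append(total_face - defaults * balance * (1 - recovery))
    # Average across trials: "what, on average expectations,
    # the portfolio might be worth."
    return sum(values) / n_trials

avg = simulate_portfolio_value()
```

With these made-up inputs the expected value works out to roughly $198 million on a $200 million face amount. The danger the essay describes is not in this arithmetic, which is straightforward, but in the assumptions fed into it.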
It was believed by most that the mortgage instruments were safe because the errors (i.e., a higher default rate among subprime borrowers) would cancel out (because the risks were spread) and because, if desired, default insurance could be purchased from institutions like AIG. Of course, AIG used similar models to determine its risk. We just learned how well that worked.
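The “errors cancel” assumption holds only if defaults are independent. A toy simulation (all parameters hypothetical) shows what happens to tail risk when a common shock, such as a nationwide fall in house prices, pushes every loan’s default probability up at the same time:

```python
import random

def tail_loss(correlated, n_loans=1000, n_trials=2000,
              base_p=0.02, stress_p=0.25, stress_chance=0.05, seed=1):
    """Return the 99th-percentile number of defaults across trials.
    Hypothetical parameters: with correlation, a rare common shock
    raises every loan's default probability at once, so individual
    errors no longer average out."""
    rng = random.Random(seed)
    losses = []
    for _ in range(n_trials):
        if correlated and rng.random() < stress_chance:
            p = stress_p   # common shock hits every loan together
        else:
            p = base_p     # loans default independently at the base rate
        losses.append(sum(rng.random() < p for _ in range(n_loans)))
    losses.sort()
    return losses[int(0.99 * n_trials)]

indep = tail_loss(correlated=False)
corr = tail_loss(correlated=True)
```

With independent defaults the 99th-percentile loss sits close to the average; with the common shock it is many times larger, which is precisely the kind of extreme swing Taleb says the models underestimated.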
In spite of these spectacular failures of less complex computer modeling in economics and public health, the atmospheric sciences seem to be making similar miscalculations. If your common sense would lead you to disregard these models’ forecasts when planning your portfolio and deciding whether to get a flu shot, I would suggest we adopt a much more modest approach to the use of climate models. While they are useful research tools, the numerous uncertainties (cloud feedback, particulates, volcanic ash, the current quiet sun, etc.) are so great we cannot claim to have forecast skill decades into the future.
Otherwise, when I read, during a period of falling temperatures and ocean heat content, “Global warming is unequivocal,”* I hear, “Freddie Mac is cheap.”
* U.S. Climate Change Science Program, Key Finding #1, January 2009, http://downloads.climatescience.gov/sap/usp/prd2/usp-prd-executive-summary.pdf
Michael R. Smith is CEO of WeatherData Services, Inc., An AccuWeather Company, and a Fellow of the American Meteorological Society. This weblog represents his personal opinion. AccuWeather’s Global Warming Blog can be accessed at: http://global-warming.accuweather.com/ .