A climate science milestone: a successful 10-year forecast!

From the Fabius Maximus Blog.  Reposted here.

By Larry Kummer. From the Fabius Maximus website, 21 Sept 2017.

Summary: The gridlock might be breaking in the public policy response to climate change. Let’s hope so, for the gridlock has left us unprepared for even the inevitable repeat of past extreme weather — let alone what new challenges the future will hold for us.

The below graph was tweeted yesterday by Gavin Schmidt, Director of NASA’s Goddard Institute for Space Studies (click to enlarge). (Yesterday Zeke Hausfather at Carbon Brief posted a similar graph.) It shows another step forward in the public policy debate about climate change, in two ways.

[Graph: Gavin Schmidt’s comparison of CMIP3 model projections with observed global temperatures.]

(1) This graph shows a climate model’s demonstration of predictive skill over a short time horizon of roughly ten years. CMIP3 was prepared in 2006-7 for the IPCC’s AR4 report. That’s progress, a milestone — a successful decade-long forecast!

(2) The graph uses basic statistics, something too rarely seen today in meteorology and climate science. For example, the descriptions of Hurricanes Harvey and Irma were very 19th C, as if modern statistics had not been invented. Compare Schmidt’s graph with Climate Lab Book’s updated version of the signature “spaghetti” graph — Figure 11.25a — from the IPCC’s AR5 Working Group I report (click to enlarge). Edward Tufte (The Visual Display of Quantitative Information) weeps in Heaven every time someone posts a spaghetti graph.

Note how the graphs differ in the display of the difference between observations and CMIP3 model output during 2005-2010. Schmidt’s graph shows that observations are near the ensemble mean. The updated Figure 11.25a shows observations near the bottom of the range of CMIP5 model outputs (Schmidt also provides his graph using CMIP5 model outputs).

Clearing away the underbrush so we can see the big issues.

This is one in a series of recent incremental steps forward in the climate change policy debate. Here are two more examples of clearing away relatively minor issues. Even baby steps add up.

(1) Ocean heat content (OHC) as the best metric of warming.

This was controversial when Roger Pielke Sr. first said it in 2003 (despite his eminent record, Skeptical Science called him a “climate misinformer” – for bogus reasons). Now many climate scientists consider OHC to be the best measure of global warming. Some point to changes in the ocean’s heat content as an explanation for the pause.

Graphs of OHC should convert any remaining deniers of global warming (there are some out there). This shows the increasing OHC of the top 700 meters of the oceans, from NOAA’s OHC page. See here for more information about the increase in OHC.

[Graph: NOAA data showing increasing ocean heat content in the top 700 meters of the oceans.]

(2) The end of the “pause” or “hiatus”.

Global atmospheric temperatures paused during the period roughly between the 1998 and 2016 El Niños, especially according to the contemporaneous records (later adjustments slightly changed the picture). Activists said that the pause was an invention of deniers. To do so they had to conceal the scores of peer-reviewed papers identifying the pause, exploring its causes (there is still no consensus on this), and forecasting when it would end. They were quite successful at this, with the help of their journalist-accomplices.

Now that is behind us. As the below graph shows, atmospheric temperatures appear to have resumed their increase, or taken a new stair step up — as described in “Reconciling the signal and noise of atmospheric warming on decadal timescales”, Roger N. Jones and James H. Ricketts, Earth System Dynamics, 8 (1), 2017. Click to enlarge the graph.

[Graph: global atmospheric temperature anomalies showing a stair-step increase (Jones and Ricketts 2017).]

What next in the public policy debate about climate change?

Perhaps now we can focus on the important issues. Here are my nominees for the two most important open issues.

(1) Validating climate models as providers of skillful long-term projections.

The key question has always been about future climate change. How will different aspects of weather change, and at what rate? Climate models provide these answers. But the standards of accuracy and reliability acceptable for scientists’ research differ from those needed for policy decisions that affect billions of people and the course of the global economy. We have limited resources; the list of threats is long (e.g., the oceans are dying). We need hard answers.

There has been astonishingly little work addressing this vital question. See major scientists discussing the need to do so. We have the tools, and a multidisciplinary team of experts (e.g., software engineers, statisticians, chemists), adequately funded, could do it in a year. Here is one way: Climate scientists can restart the climate policy debate & win: test the models! That post also lists (with links) the major papers in the absurdly small — and laughably inadequate — literature on validation of climate models.
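To make the idea concrete, here is a minimal sketch of one piece of such a test: score a model’s ensemble-mean forecast against observations relative to a naive baseline. All numbers below are synthetic stand-ins (no real CMIP output or observational data), and the method shown is a generic skill score, not any particular published validation protocol.

```python
# A minimal sketch of a forecast-skill check, using synthetic numbers in
# place of real model output and observations. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(2007, 2017)                                    # a ten-year test window
obs = 0.02 * (years - 2007) + rng.normal(0, 0.08, years.size)    # synthetic "observations" (K)
model_mean = 0.021 * (years - 2007) + rng.normal(0, 0.03, years.size)  # synthetic ensemble mean
baseline = np.full(years.size, obs[0])                           # naive "no change" forecast

def rmse(pred, truth):
    return np.sqrt(np.mean((pred - truth) ** 2))

# Skill score: 1 = perfect, 0 = no better than the baseline, < 0 = worse.
skill = 1 - rmse(model_mean, obs) / rmse(baseline, obs)
print(f"RMSE model:    {rmse(model_mean, obs):.3f} K")
print(f"RMSE baseline: {rmse(baseline, obs):.3f} K")
print(f"Skill score:   {skill:.2f}")
```

A real validation would of course use out-of-sample model runs, actual observational datasets, and uncertainty estimates; the point here is only that the basic machinery is simple and well understood.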

There is a strong literature to draw on about how to test theories. Let’s use it.

  1. Thomas Kuhn tells us what we need to know about climate science.
  2. Daniel Davies’ insights about predictions can unlock the climate change debate.
  3. Karl Popper explains how to open the deadlocked climate policy debate.
  4. Milton Friedman’s advice about restarting the climate policy debate.
  5. Paul Krugman talks about economics. Climate scientists can learn from his insights.
  6. We must rely on forecasts by computer models. Are they reliable? (Many citations.)
  7. Paul Krugman explains how to break the climate policy deadlock.


(2) Modeling forcers of climate change (greenhouse gases, land use).

Climate models forecast climate based on the input of scenarios describing the world, including factors such as the amounts of the major greenhouse gases in the atmosphere. These scenarios have improved in detail and sophistication with each IPCC report, but they remain an inadequate basis for making public policy.

The obvious missing element is a “business as usual” or baseline scenario. AR5 used four scenarios — Representative Concentration Pathways (RCPs). The worst was RCP8.5 — an ugly scenario of technological stagnation and rapid population growth, in which coal becomes the dominant fuel of the late 21st century (as it was in the late 19th C). Unfortunately, “despite not being explicitly designed as business as usual or mitigation scenarios,” RCP8.5 has often been misrepresented as the “business as usual” scenario — becoming the basis for hundreds of predictions about our certain doom from climate change. Only recently have scientists begun shifting their attention to more realistic scenarios.

A basecase scenario would provide a useful basis for public policy. Also useful would be a scenario with likely continued progress in energy technology and continued declines in world fertility (e.g., we will get a contraceptive pill for men, eventually). That would show policy-makers and the public the possible rewards for policies that encourage these trends.

Conclusions

Science and public policy both usually advance by baby steps, incremental changes that can accomplish great things over time. But we can do better. Since 2009 my recommendations have been the same about our public policy response to climate change.

  1. Boost funding for climate sciences. Many key aspects (e.g., global temperature data collection and analysis) are grossly underfunded.
  2. Run government-funded climate research with tighter standards (e.g., posting of data and methods, review by unaffiliated experts), as we do for biomedical research.
  3. Do a review of the climate forecasting models by a multidisciplinary team of relevant experts who have not been central players in this debate. Include a broader pool than those who have dominated the field, such as geologists, chemists, statisticians and software engineers.
  4. Begin a well-funded conversion to non-carbon-based energy sources, for completion by the second half of the 21st century — justified by both environmental and economic reasons (see these posts for details).
  5. Begin more aggressive efforts to prepare for extreme climate. We’re not prepared for a repeat of past extreme weather (e.g., a real hurricane hitting NYC), let alone predictable climate change (e.g., sea levels climbing, as they have for thousands of years).
  6. The most important one: break the public policy gridlock by running a fair test of the climate models.

For More Information

For more about the close agreement of short-term climate model temperature forecasts with observations, see “Factcheck: Climate models have not ‘exaggerated’ global warming” by Zeke Hausfather at Carbon Brief. To learn more about the state of climate change see The Rightful Place of Science: Disasters and Climate Change by Roger Pielke Jr. (Prof of Environmental Studies at U of CO-Boulder).

For more information, see all posts about the IPCC, the keys to understanding climate change, and these posts about the politics of climate change…

512 Comments
September 23, 2017 11:52 am

I think that the spaghetti graph is informative: it shows clearly that the model errors are serially correlated, meaning a model that is “high” some of the time is almost always “high” (and similarly for “low”). It also shows clearly that almost all of the forecasts have been “high” almost all of the time, enhancing the likelihood that the recent “nearer” approximation is a transient.
Missing from both graphs is a display of a 95% confidence interval on the mean trajectory: Schmidt’s graph shows the 95% interval of the sample paths, but the interval on the mean is much narrower. The data are outside the 95% CI of the mean trajectory most of the time. On the evidence of these two graphs, the prediction for the next 10 years is likely consistently high.
Save the prediction. If the model mean stays close to the data mean for the next 10 continuous years (2018-2027), then you might acquire confidence in its prediction for the subsequent 10 years (2028 – 2037).
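To see the commenter’s statistical point in numbers, here is a minimal sketch using synthetic model runs in place of real CMIP output: each fake model gets its own persistent bias (so its errors are serially correlated), and the 95% envelope of individual runs comes out far wider than a 95% confidence interval on the ensemble mean.

```python
# Synthetic illustration: ensemble-run envelope vs. confidence interval
# on the ensemble mean. Not real CMIP data.
import numpy as np

rng = np.random.default_rng(1)
n_models, n_years = 30, 20
years = np.arange(1998, 1998 + n_years)

# Each synthetic "model" has its own bias (so a high model stays high)
# plus year-to-year noise around a common warming trend.
bias = rng.normal(0, 0.10, size=(n_models, 1))
trend = 0.02 * (years - years[0])
runs = trend + bias + rng.normal(0, 0.05, size=(n_models, n_years))

ens_mean = runs.mean(axis=0)
spread_lo, spread_hi = np.percentile(runs, [2.5, 97.5], axis=0)   # envelope of the runs
sem = runs.std(axis=0, ddof=1) / np.sqrt(n_models)                # standard error of the mean
ci_lo, ci_hi = ens_mean - 1.96 * sem, ens_mean + 1.96 * sem       # ~95% CI on the mean

print("Mean width of run envelope:", (spread_hi - spread_lo).mean().round(3))
print("Mean width of CI on mean:  ", (ci_hi - ci_lo).mean().round(3))
```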

September 23, 2017 12:03 pm

Models!
If a hindcast is not prefect, why bother running a simulation past today's date?
A hindcast, by definition, has available to it all of the necessary global measurements.
Are the modelers and their supporters suggesting that we either do not have enough initial data or that the model formulas are incorrect?
How can we expect to run a model for 10/20/50/100 years without starting with a perfect hindcast?

Tom Dayton
Reply to  Steve Richards
September 23, 2017 12:18 pm

Your typing is not “prefect” so why should anyone bother reading it?

Reply to  Tom Dayton
September 23, 2017 12:41 pm

If only the CACA Cabal would dismiss what another member of “The Cause” says when he/she/it makes an insignificant error!
They’ve made HUGE errors but are still welcome (as long as they get a headline).

Tom Dayton
Reply to  Steve Richards
September 23, 2017 12:50 pm

Steve, in case you are genuinely interested in an answer despite your flippant question: Hindcasts differ from forecasts only in that hindcasts use the actual forcings that happened, but forecasts necessarily use estimates of the forcings that will happen. Climate models do not try to predict forcings; forcings are inputs to the models. In the absence of infinite computer time, climatologists enter a limited set of estimated forcings into the models, in an attempt to span the range of reasonable possibilities. For CMIP5 those were called RCPs, which you can find explained thoroughly here: https://skepticalscience.com/rcp.php
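To make the hindcast/forecast distinction above concrete, here is a toy sketch: the same model is integrated twice, once with stand-in “historical” forcings and once with an assumed future pathway appended. The model is a zero-dimensional energy-balance toy with made-up parameter values, nothing like a real GCM; it only illustrates that forcings are the inputs and temperatures the outputs.

```python
# Toy zero-dimensional energy-balance model driven by a forcing series.
# All parameters and forcings are invented for illustration.
import numpy as np

def run_model(forcing_w_m2, heat_capacity=8.0, feedback=1.2, dt=1.0):
    """Integrate dT/dt = (F - lambda*T) / C for a forcing time series."""
    temps = np.zeros(forcing_w_m2.size)
    for i in range(1, forcing_w_m2.size):
        dT = (forcing_w_m2[i] - feedback * temps[i - 1]) / heat_capacity
        temps[i] = temps[i - 1] + dT * dt
    return temps

hist_forcing = np.linspace(0.0, 2.0, 50)        # stand-in for observed past forcings
scenario_forcing = np.linspace(2.0, 4.5, 50)    # stand-in for an assumed future pathway

hindcast = run_model(hist_forcing)                                            # uses what happened
forecast = run_model(np.concatenate([hist_forcing, scenario_forcing]))[50:]   # uses estimates

print("Final hindcast warming (toy units):", round(hindcast[-1], 2))
print("Final forecast warming (toy units):", round(forecast[-1], 2))
```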

September 23, 2017 12:34 pm

Look at the scale of all of the graphs. Imagine what they would look like if the scale were reasonable – matching with the accuracy range of the instruments. Then imagine them using figures that weren’t created out of whole cloth by averaging a non-representative set of temperature measurements to find an “anomaly” figure using a statistically insignificant fraction of the accuracy of the instruments. Then imagine them including error bars for all of their misleadingly averaged figures, showing the statistical significance of the “anomalies.” Then imagine them using real temperatures instead of anomalies based on an adjusted baseline calculated using the same statistical games they use to come up with the average surface temperature. Then imagine them using only the actual temperature readings from instrumental measurement and not fake temperatures fabricated by averaging (again) the closest averaged fake temperatures in the grid that eventually are based on “real” adjusted readings somewhere halfway around the globe.
You can’t propagandize radical political policymaking using a nearly flat line with minor squiggles, can you? Not scary enough.

Uncle Gus
September 23, 2017 12:59 pm

“(1) Ocean heat content (OHC) as the best metric of warming.
This was controversial when Roger Pielke Sr. first said it in 2003 (despite his eminent record, Skeptical Science called him a “climate misinformer” – for bogus reasons). Now many climate scientists consider OHC to be the best measure of global warming. Some point to changes in the ocean’s heat content as an explanation for the pause.
Graphs of OHC should convert any remaining deniers of global warming (there are some out there).”
Oh dear. Oh deary deary me…
“It’s not air temperature, coz there ain’t no rise in air temperature, so it must be all going into the oceans, so OCEAN WARMING IS GOING TO KILL US ALL!!! (Give us your money…)”

Reply to  Uncle Gus
September 24, 2017 9:46 am

Uncle Gus,
That’s quite an odd comment. Evidence of global warming does not mean “OCEAN WARMING IS GOING TO KILL US ALL!!! (Give us your money…)”.
That’s hysteria just like that of alarmists. As the climate wars conclude their third decade, the fringes on both sides have come to resemble each other in tone and nature. Sad.

September 23, 2017 1:10 pm

” Hindcasts differ from forecasts only in that hindcasts use the actual forcings that happened,”
After the data had been archived and then retrieved?

Tom Dayton
Reply to  Gunga Din
September 23, 2017 1:13 pm

Which data are you talking about?

Reply to  Tom Dayton
September 23, 2017 1:18 pm

Good question!
How many of the actual values still exist?

Tom Dayton
Reply to  Tom Dayton
September 23, 2017 1:24 pm

Actual values of what? Forcing data used in hindcasts are actual values observed over the past–by definition. So of course they are “stored”–they are written down, stored in files, databases, and so on. Do you expect climatologists to memorize everything? How could data not be stored? I don’t know what you are asking about.

Sixto
Reply to  Tom Dayton
September 23, 2017 1:33 pm

Possibly the “data” lost by Hadley CRU? It supposedly still exists somewhere, but Phil Jones doesn’t know where, so can’t say which stations he used, for instance, to determine that urbanization had no effect on temperatures in China.
Without a valid historical record, you can’t hindcast. Without a valid recent record, you can’t say how well models match reality, since who knows what actually is reality? Certainly not GIGO “climate scientists”, who aren’t climatologists or any kind of real scientist, but computer gamers.

Reply to  Tom Dayton
September 23, 2017 1:46 pm

Tom “How could data not be stored? I don’t know what you are asking about.”
It could be archived with the “tests” of the actual data set up to skew which numbers are actually stored.
https://wattsupwiththat.com/2017/09/22/a-climate-science-milestone-a-successful-10-year-forecast/#comment-2617789

Tom Dayton
Reply to  Tom Dayton
September 23, 2017 1:49 pm

Sixto: You are talking about temperature observation data. Climate models do not take temperature observations as inputs. They are not statistical models. They are physical models that take in forcing data as inputs, and produce temperature data as outputs. The only “historical record” needed for hindcasting is the history of the forcings–solar irradiance, greenhouse gas amounts in the atmosphere, volcanic aerosols, and so on (not ENSO, because that is internal variability). Here, for example, are the CMIP5 forcing data: http://cmip-pcmdi.llnl.gov/cmip5/forcing.html. Here is an introduction to climate models: https://arstechnica.com/science/2013/09/why-trust-climate-models-its-a-matter-of-simple-science/

Tom Dayton
Reply to  Tom Dayton
September 23, 2017 1:56 pm

Gunga Din: Your comments are unclear. Now I’m going to guess you are talking about the forcing data that are input to the climate models. Yes, those are stored and publicly available. For example, here are the forcings used by the CMIP5 models: http://cmip-pcmdi.llnl.gov/cmip5/forcing.html

Sixto
Reply to  Tom Dayton
September 23, 2017 2:06 pm

Tom Dayton September 23, 2017 at 1:49 pm
The discussion was about hindcasting. You can’t hindcast models without supposed temperature data. If the “data” are bogus, man-made artifacts, as the “surface” sets indubitably are, then what good is the hindcasting?
And as noted, what good is comparing model outputs to phony recent “data”?

Michael Jankowski
Reply to  Tom Dayton
September 23, 2017 2:17 pm

“…Climate models do not take temperature observations as inputs…”
So climate models are independent of temperature? They produce the same results if the average global temp is 0K vs 273K vs 300K? Sounds pretty ridiculous since so many physical processes are temperature-dependent.

Sixto
Reply to  Tom Dayton
September 23, 2017 2:24 pm

Michael,
GCMs put in latent heat. As you observe, they have to start from some approximation of conditions as of run initiation.

Tom Dayton
Reply to  Tom Dayton
September 23, 2017 2:31 pm

Sixto: “They put in latent heat.” Not exactly. You should learn about climate models: https://arstechnica.com/science/2013/09/why-trust-climate-models-its-a-matter-of-simple-science/
Michael Jankowski: Of course models start with temperatures as inputs. But those temperatures are for times very long before the times that are to be projected–decades or even hundreds of years before. The models are initialized far enough in the past for their weather variations to stabilize within the boundary conditions. The temperature observations that Gunga Din and Sixto were referring to (in accusatory, conspiratorial ways) are temperatures from the times that the models are being used to project. Those temperatures are not inputs to the models, they are outputs. Learn about climate models at the link I provided above.

Sixto
Reply to  Tom Dayton
September 23, 2017 2:41 pm

Tom,
I know enough about them to know that they are for sh!t. They’re worse than worthless GIGO exercises in computer gaming to game the system. They’re not fit for purpose and have cost the planet millions of lives and trillions of dollars. Their perpetrators are criminals.

Michael Jankowski
Reply to  Tom Dayton
September 23, 2017 2:45 pm

“…Climate models do not take temperature observations as inputs…”
“…Of course models start with temperatures as inputs…”
LOL.
“…The temperature observations that Gunga Din and Sixto were referring to (in accusatory, conspiratorial ways) are temperatures from the times that the models are being used to project…”
That’s not how I read the comments I saw from them. And common sense would dictate that the models would match the observations if that were the case, instead of being wrong.

Sixto
Reply to  Tom Dayton
September 23, 2017 2:54 pm

I should add that the Father of NOAA’s GCMs, Syukuro Manabe, derived an ECS of 2.0 °C from his early models. IMO any run with an implied ECS higher than that should be tossed as unphysical.
Guess who in the 1970s came up with the preposterous ECS of 4.0 °C? If you guessed Jim “Venus Express” Hansen, you’re right.
A committee on anthropogenic global warming, convened in 1979 by the National Academy of Sciences and chaired by meteorologist Jule Charney, estimated climate sensitivity to be 3 °C, plus or minus 1.5 °C. Only two sets of models were available. Syukuro Manabe’s exhibited a climate sensitivity of 2 °C, while James Hansen’s showed a climate sensitivity of 4 °C.
Manabe says that Charney chose 0.5 °C as a not unreasonable margin of error, subtracted it from his own number, and added it to Hansen’s figure. Hence the 1.5 °C to 4.5 °C range of likely climate sensitivity that has appeared in every greenhouse assessment since. No actual physical basis required. Just WAGs from primitive GCMs. In Hansen’s case, designed upon special pleading rather than science.

Sixto
Reply to  Tom Dayton
September 23, 2017 3:05 pm

But of course if GCMs implied an ECS with a physical basis, ie in the range of 0.0 to 2.0 °C per doubling of CO2, then the output wouldn’t show the desired scary projections out to AD 2100.
Using models manufacturing phony, unphysical, evidence-free higher ECSes has led to the mass murder and global robbery that is CACA.

Reply to  Gunga Din
September 23, 2017 2:42 pm

*sigh*
An archive can be set up honestly or “skewed” to only record/save the data that moves in the desired direction.
What program and what “tests” were used to choose which data points were actually recorded? What “tests” were used in retrieving the “data”?
Query an archive for a particular date and time, for example, and a value would be returned. That value may not be an actual record but rather one interpolated from the preceding and following actual values — that is, the actual values that passed the “skewable” tests to actually be archived.
What program and “tests” does Gavin use to archive current data? What did Hansen use to archive (and maybe re-archive) past data?

Reply to  Gunga Din
September 23, 2017 2:46 pm

Another *sigh*.
Meant as a response to Tom Dayton here:
https://wattsupwiththat.com/2017/09/22/a-climate-science-milestone-a-successful-10-year-forecast/comment-page-1/#comment-2618486
(Think I’ll take my nap now. 😎)

Tom Dayton
Reply to  Gunga Din
September 23, 2017 2:48 pm

I gave you a pointer to the CMIP site, where you can find model code, forcing input data, and model output data. And documentation about all that. And there are peer reviewed publications describing much of that. I’m not going to do more of your homework, nor cater to your batshit crazy conspiracy theories.

Reply to  Gunga Din
September 24, 2017 9:40 am

Tom,
+1

September 24, 2017 9:41 am

Newspapers love weather porn, filling the space between ads with easy-to-write exciting nonsense like this (explaining climate science is more difficult to do).
https://www.washingtonpost.com/news/capital-weather-gang/wp/2017/09/23/harvey-irma-maria-why-is-this-hurricane-season-so-bad/

Wight Mann
September 24, 2017 2:45 pm

Show me the proof that this guy wasn’t just accidentally right. As the Climateers said when confronted with the 20 year long pause…come back when it has been fifty years and we will discuss it.

Reply to  Wight Mann
September 24, 2017 6:37 pm

Wight,
There are several powerful statistical tests for forecasts. From memory, fifteen years is significant at the 90% confidence level (hence it is a milestone); twenty years is significant at the 95% level.
Science, like almost everything in the real world, advances incrementally. Step by step.
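A rough sketch of the general idea behind such tests: the longer the record, the tighter the confidence interval on an estimated trend, so a decade or more of agreement is much harder to achieve by chance than a few years. The numbers below are synthetic and do not reproduce the specific 90%/95% figures cited above.

```python
# Synthetic illustration: trend uncertainty shrinks as the record lengthens.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
true_trend = 0.02      # deg C per year, assumed for the synthetic series
noise_sd = 0.12        # interannual noise level, assumed

for n_years in (10, 15, 20, 30):
    t = np.arange(n_years)
    series = true_trend * t + rng.normal(0, noise_sd, n_years)
    fit = stats.linregress(t, series)
    # Half-width of the 95% confidence interval on the fitted trend.
    ci_half_width = stats.t.ppf(0.975, n_years - 2) * fit.stderr
    print(f"{n_years:2d} yr: trend = {fit.slope:+.3f} ± {ci_half_width:.3f} °C/yr")
```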

Editor
September 25, 2017 4:31 am

The personal attacks on Larry Kummer are totally uncalled for.
While I strongly disagree with his characterization of “Gavin’s Twitter Trick” as a demonstration of predictive skill in a climate model and even more strongly disagree with half of his conclusions (1, 4 & 5), this was a very thoughtful essay.

JohnKnight
Reply to  David Middleton
September 25, 2017 3:05 pm

“The personal attacks on Larry Kummer are totally uncalled for.”
Make your case(s), or tone down the bulk judgmentalism, I suggest . . human ; )

September 26, 2017 1:15 pm

“A CLIMATE SCIENCE MILESTONE: A SUCCESSFUL 10-YEAR FORECAST!”
Kummer appears to have little understanding of the relationships between climate policies and climate science. Including Krugman in a list of references on how to test climate models has to be a mistake. Continuing to march arm-in-arm in the wrong direction singing kumbaya on climate change is not going to happen. Until we get the science right, we will continue to be clueless about which policies are right.
“THIS GRAPH [the one from CMIP3] SHOWS A CLIMATE MODEL’S DEMONSTRATION OF PREDICTIVE SKILL OVER A SHORT TIME HORIZON OF ROUGHLY TEN YEARS. …………. THAT’S PROGRESS, A MILESTONE — A SUCCESSFUL DECADE-LONG FORECAST!”
The CMIP3 graph is out-of-date and misleading. Since the El Nino peak, HadCRUT4 monthly temperatures from March 2016 to July 2017 have declined nearly 40 percent. The various temperature curves that zigzag through the so-called 95% certainty range are meaningless. Those values are not best estimates. The most that can be said is a future prediction might lie somewhere between the estimated 95% extreme values. No single prediction is more likely than another value in the range, and the likely error is very large for a long-term prediction. The “new” CMIP3 is no better than the “spaghetti” graph, and neither has any long-term predictive value. Kummer declared victory far too soon.
“GRAPHS OF OHC SHOULD CONVERT ANY REMAINING DENIERS OF GLOBAL WARMING (THERE ARE SOME OUT THERE). THIS SHOWS THE INCREASING OHC OF THE TOP 700 METERS OF THE OCEANS.”
Even if there were an adequate OHC database, unless there has been a Second Coming, no one would know what to do with it. Many physicists posit that natural processes can only be modeled with particle physics, which current models barely touch. Application of particle physics in the CERN CLOUD experiments suggests the possibility of a century of non-warming in which CO2 does not play a significant role. CERN concludes IPCC estimates of future temperatures are too high and the models should be redone. IPCC reports are not credible sources for anything. Denigrating those with opposing viewpoints by labeling them “deniers” on climate change does nothing to advance the cause of those represented by Kummer.
“AS THE BELOW GRAPH SHOWS [Global and Land Temperature Anomalies, 1950-2017], ATMOSPHERIC TEMPERATURES APPEAR TO HAVE RESUMED THEIR INCREASE, OR TAKEN A NEW STAIR STEP UP.”
Kummer’s cited bar graph showing two straight-line trends does not support Kummer’s conclusion that the “pause” is behind us. In the following graph, a simple numerical analysis of the HadCRUT4 time-temperature series shows the rate of increase of the global mean temperature trendline has been constant or steadily decreasing since October 2000. The temperature anomaly decrease from March 2016, the El Nino peak, to July 2017 has been nearly 40 percent. The rate of increase will likely become negative within the next 20 years, reaching the lowest global mean trendline temperature in almost 40 years. Stock up on cold-weather gear.
https://imgur.com/a/p7Hcx
Kummer’s six conclusions are hardly worthy of comment. His “plan” is to develop and fund (more funding being the core objective of the “plan”) an expanded laundry list of climate research activities, to add more government employees to develop policies for extreme weather events that have yet to be forecast, and to begin the conversion to non-carbon-based energy resources that are not economically feasible and not needed.
A simple thought experiment suggests to me that the, thus far, fruitless attempts to model the earth’s climate system should be put on the back burner for the time being. Think of the earth’s climate system as a black box and the earth’s temperature as the output from the black box. Assuming the black box contains an aggregation of spinning, zigging and zagging, oscillating particles, photons and assorted waves that may be mathematically represented by periodic functions (this assumption could be a stretch), it would follow that the output, the temperature, can also be represented by a periodic function, which can be decomposed into various oscillatory components by Fourier analysis.

The focus of climate research should be on analyzing the output of the black box rather than spinning wheels trying to analyze the countless interactions within the black box. Ultimately, the results might lead to a better understanding of how the climate system works. The U.S. is on the verge of running off a cliff if we cannot make a midcourse correction to the current direction of climate change research and policies.
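As an illustration of the output-only approach suggested above, here is a minimal sketch that decomposes a temperature-like series into oscillatory components with an FFT. The series is synthetic — a trend plus two assumed oscillations plus noise — and no claim is made that real climate records contain these particular periods.

```python
# Synthetic illustration of Fourier analysis of a "black box" output series.
import numpy as np

rng = np.random.default_rng(3)
years = np.arange(1880, 2018)
series = (0.007 * (years - years[0])                  # slow trend
          + 0.10 * np.sin(2 * np.pi * years / 60)     # ~60-year oscillation (assumed)
          + 0.05 * np.sin(2 * np.pi * years / 11)     # ~11-year oscillation (assumed)
          + rng.normal(0, 0.08, years.size))          # noise

# Remove the linear trend, then look at the spectrum of what remains.
detrended = series - np.polyval(np.polyfit(years, series, 1), years)
spectrum = np.abs(np.fft.rfft(detrended))
freqs = np.fft.rfftfreq(years.size, d=1.0)            # cycles per year

# Report the strongest periods, skipping the zero-frequency term.
top = np.argsort(spectrum[1:])[::-1][:3] + 1
for k in top:
    print(f"period ≈ {1 / freqs[k]:5.1f} years, spectral amplitude {spectrum[k]:.2f}")
```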

Reply to  Tom Bjorklund
September 26, 2017 6:52 pm

Figure for above post.


September 27, 2017 2:06 pm

Kummer is an enigma. Stopped reading his blog a while ago. A waste of time, at best.
But this article is Kummer at his worst. The “genius” he references for this article is Paul Krugman. Krugman, in his own area, economics, is a pitiful failure. He is wrong regularly and woefully.
Why would anyone think that applying Krugman’s “genius” to another area would be useful? Something is badly wrong in the whole article above.
A great overview of Krugman’s failures, from Quora:
https://www.quora.com/What-things-has-Paul-Krugman-been-very-wrong-about
1) The survival of the Euro:
Krugman was unable to fathom how the peripheral countries of Europe could possibly stay on in the Eurozone. He wrote a number of blog posts effectively saying that the Euro was doomed and that Greece would leave any day now, with Spain and possibly Italy following suit. Not only did that not happen, the Euro club is in fact slated to grow further.
Sources:
Another Bank Bailout
Crash of the Bumblebee
Apocalypse Fairly Soon
Those Revolting Europeans
Europe’s Economic Suicide
What Greece Means
Legends of the Fail
The Hole in Europe’s Bucket
An Impeccable Disaster
Op-Ed Columnist – A Money Too Far – NYTimes.com
Op-Ed Columnist – The Euro Trap – NYTimes.com
2) The mechanism of the housing bust:
A number of people, including Krugman, saw the housing bubble and predicted its demise, but Krugman was wrong about the details of how the bursting of the bubble would play out. He thought that it would involve a crisis in junk bonds and a fall of the dollar (neither of which happened).
He did say that subprime mortgages would go bust, but he underestimated the effect of that. He did not understand the risks posed by securitization and therefore was not predicting an outright recession until well into 2008, a pretty big miss when dealing with the biggest worldwide slowdown since the Great Depression.
Krugman predicting fall of the dollar accompanying the housing bust:
Debt And Denial
3) Deflation:
Krugman was confidently predicting deflation starting in early 2010. It never materialized. Inflation remained stubbornly positive.
Source:
Core Logic
4) Relative performance of the worst-hit European countries:
For a very long time, Krugman kept praising Iceland for implementing capital controls and predicted that it would do better than others which kept their capital markets free (like Estonia, Latvia, Lithuania and Ireland). Did not pan out…
Krugman on Iceland vs Baltics and Ireland in 2010:
The Icelandic Post-crisis Miracle
The council of foreign relations questions Krugman’s claim:
Geo-Graphics » Post-Crisis Iceland: Miracle or Illusion?
Geo-Graphics » “Iceland’s Post-Crisis Miracle” Revisited
Krugman, as classy as ever, calls the people at CFR stupid:
Peaks, Troughs, and Crisis
CFR pwns:
Geo-Graphics » Paul Krugman’s Baltic Bust—Part III
5) The US under Bush would be attacked by bond vigilantes:
Long before Krugman started publicly ridiculing people who are worried that interest rates on US Government debt could spike suddenly as “Very Serious People who are spooked by Invisible Bond Vigilantes”, Krugman was one of them… He wrote a number of columns and blog posts arguing that the reckless policies of the Bush administration were certain to cause a loss of confidence in the credit worthiness of the US Government.
Source:
Mistakes
6) The sequester of 2013 would cause a slowdown in the US and the stimulus of 2009 would reduce unemployment:
Krugman issued dire warnings about the sequester, predicting that it would cause a slowdown in the US pointing to papers that predicted 2.9% growth without the sequester and 1.1% with it. In reality, the sequester was passed and growth was 4.1%.
Sources:
Krugman (as usual) name-calling people who proposed the sequester:
Sequester of Fools
Keynesian models showing reduced growth because of the sequester (linked in above article)
MA’s Alternative Scenario: March 1 Sequestration
Krugman gloating when he thought things would go his way, calling it a test of the market-monetarist view:
Monetarism Falls Short (Somewhat Wonkish)
Final reality check:
Mike Konczal: “We rarely get to see a major, nationwide economic experiment at work,”
This mirrored the experience of 2009 (but in reverse), when Keynesian models championed by Krugman predicted that US unemployment would top out at 9% without the stimulus and at 8% with it. The stimulus was passed and unemployment went up to 10%.
7) The recession would be over soon:
Krugman and Greg Mankiw had a spat in early 2009 on something known as the unit root hypothesis. The discussion is technical but it essentially boiled down to this: Team Obama had predicted that the economy would bounce back strongly from the great recession and their models predicted that real GDP would be 15.6% higher in 2013 than it was in 2008.
Mankiw disputed this on the basis of the unit root hypothesis and said that recessions sometimes tend to linger and therefore predictions should give some positive probability weight to that outcome. Krugman described Mankiw as “evil” for disputing the administration’s forecast on the basis of what Krugman believed to be flawed economics, implicitly supporting the administration’s forecast. Mankiw invited him to take a bet on the issue, which Krugman ignored.
In reality, it was not even close. Mankiw won by a landslide. Real GDP in 2013 was in fact only 6% higher than in 2008.
Sources:
Team Obama on the Unit Root Hypothesis
Krugman harshly criticizing Mankiw for the above:
Roots of evil (wonkish)
Mankiw responds by asking Krugman to take a bet:
Wanna bet some of that Nobel money?
The final reality check showing that Mankiw would have won handily:
The forces of evil easily triumph over Krugman and DeLong
