Oh this is hilarious. In a “Back To The Future” sort of moment, this press release from the National Center for Atmospheric Research claims they could have forecast “the pause”, if only they had the right tools back then.
Yes, having tools of the future would have made a big difference in these inconvenient moments of history:
“We could have forecast the Challenger Explosion if only we knew O-rings became brittle and shrank in the cold, and we had Richard Feynman working for us to warn us.”
“We could have learned the Japanese were going to bomb Pearl Harbor if only we had the electronic wiretapping intelligence gathering capability the NSA has today.”
“We could have predicted the Tacoma Narrows Bridge would collapse back then if only we had the sophisticated computer models of today to model wind loading.”
Yes, saying that having the tools of the future would have fixed the problem is always a big help when you want to do a post-facto CYA for things you didn’t actually do back then.
UPDATE: WUWT commenter Louis delivers one of those “I wish I’d said that” moments:
Even if they could have forecast the pause, they wouldn’t have. That would have undercut their dire message that we had to act now because global warming was accelerating and would soon reach a point where it would become irreversible.
Here’s the CYA from NCAR:
Progress on decadal climate prediction
Today’s tools would have foreseen warming slowdown
If today’s tools for multiyear climate forecasting had been available in the 1990s, they would have revealed that a slowdown in global warming was likely on the way, according to new research.
The analysis, led by NCAR’s Gerald Meehl, appears in the journal Nature Climate Change. It highlights the progress being made in decadal climate prediction, in which global models use the observed state of the world’s oceans and their influence on the atmosphere to predict how global climate will evolve over the next few years.
Such decadal forecasts, while still subject to large uncertainties, have emerged as a new area of climate science. This has been facilitated by the rapid growth in computing power available to climate scientists, along with the increased sophistication of global models and the availability of higher-quality observations of the climate system, particularly the ocean.

Although global temperatures remain close to record highs, they have shown little warming trend over the last 15 years, a phenomenon sometimes referred to as the “early-2000s hiatus”. Almost all of the heat trapped by additional greenhouse gases during this period has been shown to be going into the deeper layers of the world’s oceans.
The hiatus was not predicted by the average conditions simulated by earlier climate models because they were not configured to predict decade-by-decade variations.
However, to challenge the assumption that no climate model could have foreseen the hiatus, Meehl posed this question: “If we could be transported back to the 1990s with this new decadal prediction capability, a set of current models, and a modern-day supercomputer, could we simulate the hiatus?”
Looking at yesterday’s future with today’s tools
To answer this question, Meehl and colleagues applied contemporary models in a “hindcast” experiment using the new methods for decadal climate prediction. The models were started, or “initialized,” with particular past observed conditions in the climate system. The models then simulated the climate over previous time periods where the outcome is known.
The researchers drew on 16 models from research centers around the world that were assessed in the most recent report by the Intergovernmental Panel on Climate Change (IPCC). For each year from 1960 through 2005, these models simulated the state of the climate system over the subsequent 3-to-7-year period, including whether the global temperature would be warmer or cooler than it was in the preceding 15-year period.
Starting in the late 1990s, the 3-to-7-year forecasts (averaged across each year’s set of models) consistently simulated the leveling of global temperature that was observed after the year 2000. (See image at bottom.) The models also produced the observed pattern of stronger trade winds and cooler-than-normal sea surface temperatures over the tropical Pacific. A previous study by Meehl and colleagues related the observed hiatus of globally averaged surface air temperature to this pattern, which is associated with enhanced heat storage in the subsurface Pacific and other parts of the deeper global oceans.
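The procedure described above — initialize the models at each start year, run them forward, and compare the ensemble-average 3-to-7-year forecast with the observed mean of the preceding 15 years — can be illustrated with a toy verification loop. The sketch below is only a hedged illustration with made-up data: the names (obs, hindcast_signal, the stand-in "models") are hypothetical, and it is not NCAR's code or the actual CMIP model output used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_models = 16                             # ensemble size cited in the release

# Hypothetical observed global-mean temperature anomalies, 1945..2012 (deg C)
obs_years = np.arange(1945, 2013)
obs = 0.01 * (obs_years - 1945) + rng.normal(0, 0.1, obs_years.size)

def hindcast_signal(start_year, ensemble):
    """Ensemble-mean anomaly over the 3-to-7-year forecast window, minus the
    observed mean of the 15 years preceding the start year."""
    window = np.arange(start_year + 3, start_year + 8)         # forecast years 3..7
    forecast_mean = np.mean([m(window) for m in ensemble])     # average across models
    baseline_mask = np.isin(obs_years, np.arange(start_year - 15, start_year))
    return forecast_mean - obs[baseline_mask].mean()

# Stand-in "models": each returns a slightly perturbed version of the same trend
ensemble = [lambda w, b=rng.normal(0, 0.02): 0.01 * (np.mean(w) - 1945) + b
            for _ in range(n_models)]

# In the paper this loop would run over every start year 1960..2005;
# three start years are shown here purely for illustration.
for y in (1998, 1999, 2000):
    print(y, round(hindcast_signal(y, ensemble), 3))
```

A positive value from such a loop would mean the ensemble expects the coming 3-to-7-year window to be warmer than the preceding 15-year mean, a negative value cooler — the warmer/cooler question the release describes.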
Letting natural variability play out

Although scientists are continuing to analyze all the factors that might be driving the hiatus, the new study suggests that natural decade-to-decade climate variability is largely responsible.
As part of the same study, Meehl and colleagues analyzed a total of 262 model simulations, each starting in the 1800s and continuing to 2100, that were also assessed in the recent IPCC report. Unlike the short-term predictions that were regularly initialized with observations, these long-term “free-running” simulations did not begin with any particular observed climate conditions.
Such free-running simulations are typically averaged together to remove the influence of internal variability that occurs randomly in the models and in the observations. What remains is the climate system’s response to changing conditions such as increasing carbon dioxide.
However, the naturally occurring variability in 10 of those simulations happened, by chance, to line up with the internal variability that actually occurred in the observations. These 10 simulations each showed a hiatus much like what was observed from 2000 to 2013, even down to the details of the unusual state of the Pacific Ocean.
Meehl pointed out that there is no short-term predictive value in these simulations, since one could not have anticipated beforehand which simulations’ internal variability would happen to match the observations.
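For readers wondering how "averaging out internal variability" and "finding the 10 runs that happened to match" work in practice, here is a minimal sketch under assumed toy data. The ensemble mean of the free-running runs stands in for the forced response; what each run does relative to that mean is its internal variability, and runs can then be ranked by how well that leftover variability lines up with the observed 2000–2013 behaviour. The array names, synthetic series, and correlation ranking are illustrative assumptions, not the paper's exact method.

```python
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1900, 2101)
n_runs = 262                                     # free-running simulations cited above

forced = 0.008 * (years - 1900)                  # toy forced warming trend (deg C)
runs = forced + rng.normal(0, 0.15, (n_runs, years.size))   # trend + internal noise

ensemble_mean = runs.mean(axis=0)                # averaging removes internal variability
internal = runs - ensemble_mean                  # per-run internal (unforced) variability

# Hypothetical "observed" internal variability over 2000-2013: a flat/cooling wiggle
mask = (years >= 2000) & (years <= 2013)
obs_internal = -0.004 * (years[mask] - 2000) + rng.normal(0, 0.03, mask.sum())

# Rank runs by how well their chance variability lines up with the observed hiatus
corr = np.array([np.corrcoef(r[mask], obs_internal)[0, 1] for r in internal])
best_matches = np.argsort(corr)[-10:]
print("runs whose internal variability most resembles the observed hiatus:", best_matches)
```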
“If we don’t incorporate current conditions, the models can’t tell us how natural variability will evolve over the next few years. However, when we do take into account the observed state of the ocean and atmosphere at the start of a model run, we can get a better idea of what to expect. This is why the new decadal climate predictions show promise,” said Meehl.
Decadal climate prediction could thus be applied to estimate when the hiatus in atmospheric warming may end. For example, the UK Met Office now issues a global forecast at the start of each year that extends out for a decade.
“There are indications from some of the most recent model simulations that the hiatus could end in the next few years,” Meehl added, “though we need to better quantify the reliability of the forecasts produced with this new technique.”
The paper:
Meehl, Gerald A., Haiyan Teng, and Julie M. Arblaster, “Climate model simulations of the observed early-2000s hiatus of global warming,” Nature Climate Change (2014), doi:10.1038/nclimate2357
They use hindcasting to try to validate their forecasting of a climate system that has more variables than we have yet identified, let alone measured empirically. We do not have sufficient duration, quantity, quality, or precision of data on the known, let alone unknown, primary climate variables to construct climate models that can pass validation tests. The results are climate fortune telling, with the climate models serving the modern-day function of the crystal ball.
I see warming… CO2-induced warming, in your future. You must stop exhaling!
I do not believe the climate modelers could find their ‘hind c’ass’ with both hands, while using a well lit bidet….
A wordy response that could’ve been simplified to “We underestimated (or ignored) natural variations”.
Do they still do so?
Not to worry about not having the tools back then. With the tools they have now for restating the recorded temperature data to make it homogeneous, in a few years they will have eliminated the pause from the historical records and all those old models will be accurate again.
I’ve read this article several times and it is still not clear to me how they came up with the results for the ‘decadal’ models. They used 16 contemporary models and ran them for each year from 1960 to 2005. Each model was initialized at the start with the particular conditions of that year and then allowed to run for ‘3 to 7’ years. The claim is that, starting in the late 1990s, the models showed a temperature plateau similar to what has been measured.
So, my questions are:
1) If a decadal analysis was desired, why not run the models for 10 years? (The model result deviated from the desired result long before 10 years??)
2) If a shorter time period was desired, why not, for the sake of easier analysis, pick a single interval of 3, 5, or 7 years? What is the reason for a variable run time? ( perhaps the models deviated after only 3-7 years. Picking a fixed interval of only 3 years is too short for a decadal analysis so the models were allowed to run until they deviated and the 3-7 year reporting interval is the result??)
3) Why stop at 2005 when the temperature plateau continues to this day? ( perhaps the models do not work after 2005??)
4) Why start at 1960? ( perhaps the models do not work before 1960??)
5) How well did the models predict the temperature before the late 1990’s? (This is not mentioned. Who knows??)
6) How were the results determined? Specifically, for any given year, results would be available for that year from the 16 runs that were initiated that year as well as from the previous 3-7 years. Does this imply that the results for any given year are derived from 64-128 runs? If so, are you using the average or mean of the results? Or, are you picking a single run or a few runs that randomly match the plateau and disregarding the rest? ( similar to the 10 out of 262 long term runs that randomly match the temperature plateau. By the way, the idea of these 10 ‘randomly’ matching is from the study author, not me??)
The effect of cold on O-rings was well known before the tragic Challenger launch. The knowledge had not been passed on, or was ignored. Even today, new aircraft are subjected to extreme testing to prove systems will not fail at extremes of temperature.
More twaddle.
Seems there’s a lot of it about.
re: “Even if they could have forecast the pause, they wouldn’t have. That would have undercut their dire message that we had to act now because global warming was accelerating and would soon reach a point where it would become irreversible.”
There hasn’t been a pause in global warming (as sea level rise uncontroversially shows).
There has been a slowing of the warming of the near-surface air. It’s the most measured of the places where the excess heat goes, but it accounts for an unimportant proportion of the energy.
There’s been plenty of warming.
Seth
Please don’t be silly.
Global warming is and always was an increase in the global average surface temperature anomaly (GASTA). Global warming is not and never was anything else.
You are trying to change the definition of global warming because global warming has stopped. But your attempt to ‘move the goalposts’ is too late.
There are now 52 published excuses for global warming having stopped.
Richard
And now we have more excuses for the models’ failures: Last decade’s slowdown in global warming enhanced by an unusual climate anomaly. (Application of the Singular Spectrum Analysis Technique to Study the Recent Hiatus on the Global Surface Temperature Record. PLoS ONE, 2014; 9 (9) http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0107222). A funny one. When the man-made climatic anomaly seems to disappear and the models fail, one just needs to invoke an extra, non-human climatic anomaly that cancels it, and can then insist that the man-made anomaly still exists but has merely disappeared from sight.
So much for the 2007 IPCC 4th Assessment Report… the so-called “gold standard in climate science”… “the settled science”… “incontrovertible”!!!!!!
Well, I assume that the “Tools” they talk about needing would be a real physical model of this planet and all of its physical interactions that affect weather/climate.
Such a tool, if it existed, would of course be able to replicate the past, since we already know what that was.
So, for all you modelers out there who create real models of this physical planet, perhaps using a set of data numbers that gets added to maybe once every day:
You already know what the most recent day’s numbers are, maybe today’s, maybe yesterday’s. Using those numbers and your best model, why don’t you predict whether tomorrow’s new number will be less than today’s number, will equal today’s number, or will be greater than today’s number.
That is a one choice out of three possibilities test of your model. No need to predict / project / whatever out 100 years; just to tomorrow will do.
Now, having used your model to get tomorrow’s direction, bet your entire net worth on the accuracy of your selection, to be determined once tomorrow’s number is known.
I predict / project / whatever that you will likely lose everything you have.
Your model cannot even answer that simple question about the future.
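Purely as an illustration of the kind of directional test this commenter describes, here is a minimal sketch that scores a naive stand-in "model" on predicting whether the next value of a series will be lower than, equal to, or higher than the current one. The series, the persistence-style predictor, and all names are hypothetical; with continuous data, exact ties are rare, so in practice this is effectively a two-way call against a coin-flip baseline.

```python
import numpy as np

rng = np.random.default_rng(2)
series = np.cumsum(rng.normal(0, 1, 1000))        # toy daily series (random walk)

def direction(a, b, tol=1e-9):
    """-1 if b < a, 0 if equal (within tol), +1 if b > a."""
    if abs(b - a) <= tol:
        return 0
    return 1 if b > a else -1

hits = 0
trials = 0
for t in range(1, series.size - 1):
    predicted = direction(series[t - 1], series[t])   # naive model: yesterday's move repeats
    actual = direction(series[t], series[t + 1])
    hits += int(predicted == actual)
    trials += 1

print(f"directional hit rate: {hits / trials:.2f}")
```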