Oh this is hilarious. In a “Back To The Future” sort of moment, this press release from the National Center for Atmospheric Research claims they could have forecast “the pause”, if only they had the right tools back then.
Yes, having tools of the future would have made a big difference in these inconvenient moments of history:
“We could have forecast the Challenger Explosion if only we knew O-rings became brittle and shrank in the cold, and we had Richard Feynman working for us to warn us.”
“We could have learned the Japanese were going to bomb Pearl Harbor if only we had the electronic wiretapping intelligence gathering capability the NSA has today.”
“We could have predicted the Tacoma Narrows Bridge would collapse back then if only we had the sophisticated computer models of today to model wind loading.”
Yes, saying that having the tools of the future back then would have fixed the problem is always a big help when you want to do a post-facto CYA for stuff you didn’t actually do back then.
UPDATE: WUWT commenter Louis delivers one of those “I wish I’d said that” moments:
Even if they could have forecast the pause, they wouldn’t have. That would have undercut their dire message that we had to act now because global warming was accelerating and would soon reach a point where it would become irreversible.
Here’s the CYA from NCAR:
Progress on decadal climate prediction
Today’s tools would have foreseen warming slowdown
If today’s tools for multiyear climate forecasting had been available in the 1990s, they would have revealed that a slowdown in global warming was likely on the way, according to new research.
The analysis, led by NCAR’s Gerald Meehl, appears in the journal Nature Climate Change. It highlights the progress being made in decadal climate prediction, in which global models use the observed state of the world’s oceans and their influence on the atmosphere to predict how global climate will evolve over the next few years.
Such decadal forecasts, while still subject to large uncertainties, have emerged as a new area of climate science. This has been facilitated by the rapid growth in computing power available to climate scientists, along with the increased sophistication of global models and the availability of higher-quality observations of the climate system, particularly the ocean.

Although global temperatures remain close to record highs, they have shown little warming trend over the last 15 years, a phenomenon sometimes referred to as the “early-2000s hiatus”. Almost all of the heat trapped by additional greenhouse gases during this period has been shown to be going into the deeper layers of the world’s oceans.
The hiatus was not predicted by the average conditions simulated by earlier climate models because they were not configured to predict decade-by-decade variations.
However, to challenge the assumption that no climate model could have foreseen the hiatus, Meehl posed this question: “If we could be transported back to the 1990s with this new decadal prediction capability, a set of current models, and a modern-day supercomputer, could we simulate the hiatus?”
Looking at yesterday’s future with today’s tools
To answer this question, Meehl and colleagues applied contemporary models in a “hindcast” experiment using the new methods for decadal climate prediction. The models were started, or “initialized,” with particular past observed conditions in the climate system. The models then simulated the climate over previous time periods where the outcome is known.
The researchers drew on 16 models from research centers around the world that were assessed in the most recent report by the Intergovernmental Panel on Climate Change (IPCC). For each year from 1960 through 2005, these models simulated the state of the climate system over the subsequent 3-to-7-year period, including whether the global temperature would be warmer or cooler than it was in the preceding 15-year period.
Starting in the late 1990s, the 3-to-7-year forecasts (averaged across each year’s set of models) consistently simulated the leveling of global temperature that was observed after the year 2000. (See image at bottom.) The models also produced the observed pattern of stronger trade winds and cooler-than-normal sea surface temperatures over the tropical Pacific. A previous study by Meehl and colleagues related the observed hiatus of globally averaged surface air temperature to this pattern, which is associated with enhanced heat storage in the subsurface Pacific and other parts of the deeper global oceans.
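The hindcast setup described above — initialize from an observed state, run forward, average the 3-to-7-year window, and compare against the mean of the preceding 15 years — can be caricatured in a few lines of Python. This is a toy sketch under stated assumptions (the one-line `toy_model` and its coefficients are invented stand-ins), not the actual NCAR/CMIP workflow:

```python
import numpy as np

def toy_model(initial_temp, years, rng):
    """Stand-in for a climate model run: persistence of the
    initialized state, a small warming drift, and random
    internal variability. (Invented for illustration.)"""
    drift = 0.01 * np.arange(1, years + 1)
    noise = 0.1 * rng.standard_normal(years)
    return initial_temp + drift + noise

def hindcast(obs, start_idx, n_models=16, horizon=(3, 7), baseline=15):
    """Initialize each 'model' from the observed state at start_idx,
    average years 3-7 of each forecast, and compare the ensemble
    mean against the mean of the preceding 15 observed years."""
    rng = np.random.default_rng(0)
    forecasts = []
    for _ in range(n_models):
        run = toy_model(obs[start_idx], horizon[1], rng)
        # Average the 3rd through 7th forecast years.
        forecasts.append(run[horizon[0] - 1:horizon[1]].mean())
    baseline_mean = obs[start_idx - baseline:start_idx].mean()
    ensemble_mean = float(np.mean(forecasts))
    # Positive anomaly => warmer than the preceding 15-year period.
    return ensemble_mean, ensemble_mean - baseline_mean
```

The point of the structure, not the numbers: unlike a free-running projection, each forecast starts from a real observed state, so the comparison "warmer or cooler than the prior 15 years" is a genuine out-of-sample test at every start year.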
Letting natural variability play out

Although scientists are continuing to analyze all the factors that might be driving the hiatus, the new study suggests that natural decade-to-decade climate variability is largely responsible.
As part of the same study, Meehl and colleagues analyzed a total of 262 model simulations, each starting in the 1800s and continuing to 2100, that were also assessed in the recent IPCC report. Unlike the short-term predictions that were regularly initialized with observations, these long-term “free-running” simulations did not begin with any particular observed climate conditions.
Such free-running simulations are typically averaged together to remove the influence of internal variability that occurs randomly in the models and in the observations. What remains is the climate system’s response to changing conditions such as increasing carbon dioxide.
However, the naturally occurring variability in 10 of those simulations happened, by chance, to line up with the internal variability that actually occurred in the observations. These 10 simulations each showed a hiatus much like what was observed from 2000 to 2013, even down to the details of the unusual state of the Pacific Ocean.
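The averaging-versus-selection idea in the last two paragraphs can be illustrated with synthetic data. Everything below is invented for illustration (the trend, noise level, window length, and matching threshold are arbitrary); it only mimics the logic of averaging a free-running ensemble to recover the forced response, then picking out the few runs whose internal variability happens to line up with a hiatus-like observation:

```python
import numpy as np

rng = np.random.default_rng(1)
n_runs, n_years, window = 262, 50, 14

# Synthetic 'free-running' ensemble: a shared forced warming trend
# plus independent internal variability in every run.
forced = 0.02 * np.arange(n_years)
runs = forced + 0.15 * rng.standard_normal((n_runs, n_years))

# Averaging across runs cancels the internal variability,
# leaving (approximately) the forced response.
ensemble_mean = runs.mean(axis=0)

# Synthetic 'observations': forced trend, but flat over the last
# 14 years -- a hiatus-like realization of internal variability.
obs = forced.copy()
obs[-window:] = obs[-window]

# Select the runs whose recent trend happens, by chance, to match
# the observed near-zero trend (a crude stand-in for matching the
# observed Pacific state).
t = np.arange(window)
obs_trend = np.polyfit(t, obs[-window:], 1)[0]
run_trends = np.array([np.polyfit(t, r[-window:], 1)[0] for r in runs])
matching = np.where(np.abs(run_trends - obs_trend) < 0.005)[0]
```

With these made-up numbers, only a small minority of the 262 runs fall inside the threshold — which is the study's point: the subset exists, but nothing identifies it in advance.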
Meehl pointed out that there is no short-term predictive value in these simulations, since one could not have anticipated beforehand which of the simulations’ internal variability would match the observations.
“If we don’t incorporate current conditions, the models can’t tell us how natural variability will evolve over the next few years. However, when we do take into account the observed state of the ocean and atmosphere at the start of a model run, we can get a better idea of what to expect. This is why the new decadal climate predictions show promise,” said Meehl.
Decadal climate prediction could thus be applied to estimate when the hiatus in atmospheric warming may end. For example, the UK Met Office now issues a global forecast at the start of each year that extends out for a decade.
“There are indications from some of the most recent model simulations that the hiatus could end in the next few years,” Meehl added, “though we need to better quantify the reliability of the forecasts produced with this new technique.”
The paper:
Meehl, Gerald A., Haiyan Teng, and Julie M. Arblaster, “Climate model simulations of the observed early-2000s hiatus of global warming,” Nature Climate Change (2014), doi:10.1038/nclimate2357
“Even if they could have forecast the pause, they wouldn’t have.”
Classic line. Absolutely true.
2 decades of screaming “climate/weather wolf!!”
Turns out that CO2 is actually wolf repellent (:
No, but they may have shown wider uncertainty ranges, with a continued upward trend — more like the variability we’ve seen in the last century.
They don’t show or explain the uncertainty and error bars now. What makes you think they’d have done anything differently, other than starting with the propaganda phase instead of the science?
They’d never let the skeptics look through a crack in the door much less enter the conversation.
Weren’t many of the recent explanations of the pause available many years ago, and couldn’t they have been factored into the models?
How right you are. The forecast of a “pause” would have been so politically incorrect that they would have tied themselves in scientific knots to “hide the decline”.
Man, they just don’t give up. Shameless crap. The oceans ate our heat. Why now? How tiresome it all is.
“particular past observed conditions”? If you let me pick and choose the starting conditions, I can model any dadgum thing I want. What a bunch of hogwash!
“Meehl pointed out that there is no short-term predictive value in these simulations, since one could not have anticipated beforehand which of the simulations’ internal variability would match the observations.”
Translation
We don’t know what is really going on and are making wild guesses, hoping one will be right.
“If today’s tools for multiyear climate forecasting had been available in the 1990s, they would have revealed that a slowdown in global warming was likely on the way”
Bull$hit. They can’t even explain the pause now after the fact.
Obligatory Niels Bohr quote:
“Prediction is very difficult, especially about the future”
So they figured it all out, yet still do not have the testicular fortitude to make a 15-year prediction. Words like ‘could’ and ‘though’ in a prediction mean you do not know and are taking a guess.
Oh, they must mean THAT settled science.
Indeed. Was not the pause just screamed out of existence when Ridley brought it up in his article in the WSJ?
>>The dreams are all right enough, but the art of interpreting is lost. 1500 yr ago they were getting to do it so badly it was considered better to depend on chicken-guts & other naturally intelligent sources of prophecy, recognizing that when guts can’t prophecy, it is no use for Ezekiel to go into the business. Prophecy went out with the chicken guts.
– working notes for No. 44, The Mysterious Stranger, published in The Mysterious Stranger Manuscripts, pp. 464-463.<< Mark Twain
Replace "chicken guts" with "models"….
I have been predicting last week’s lottery numbers for years. Why won’t anyone buy my software??!
The great circles of climate science.
1. Ignore an almost two decade long halt in any global warming.
2. For each year therein, insist it was warmer than the previous one.
3. Go back, retweak the parameters of your models to predict the pause.
4. Announce you’re still infallible.
http://thepointman.wordpress.com/2011/01/21/the-seductiveness-of-models/
Pointman
Hilarious – essentially Meehl is advocating broadening the error bands – a simulation for every occasion.
“If we don’t incorporate current conditions, the models can’t tell us how natural variability will evolve over the next few years.”
Now all we need to do is accurately parameterise each and every one of these underlying natural events and their interrelationships, and Bob’s your mother’s brother. Or sister.
They haven’t been merely predicting on a decadal time scale. They have been making projections on a multi-decadal time scale. Does it seem to anyone else that they are avoiding the implications of their sudden discovery of natural variability as it relates to the accuracy of previous very-long-range projections? Of course Dr. R. Pielke, Sr., has repeatedly pointed to the folly of assigning any real value to multi-decadal climate model projections, anyway, as he did here:
http://wattsupwiththat.com/2014/02/07/the-overselling-of-climate-modeling-predictability-on-multi-decadal-time-scales-in-the-2013-ipcc-wg1-report-annex-1-is-not-scientifically-robust/#more-102804
The future has returned to what it was before it got changed. Or something like that. It’s science.
Always reminds me of the old saying:
“If your Aunt had balls, she’d be your Uncle”.
What’s this “plateaued at high levels” nonsense in the caption to the first figure? “High” compared to what? There’s ample indication in reconstructions of significantly higher temps for long periods of time. We’re still just barely back to “normal” after the little ice age.
Shouldn’t the logic be:
“If only we had these tools at our disposal earlier we would have been able to predict that global warming would not be as serious as we first thought?”
TBH, while in the context of the debate surrounding CAGW this looks like post hoc rationalization, in reality it is the sort of thing that routinely goes on in science, which is usually and generally self-correcting. Models need to be able to capture the range of variability in order to say something useful about what our climate might do, regardless of our impact on it. This can be seen as an attempt to refine and improve modelling, which is most definitely needed, since the models have to date clearly done a rather poor job of characterizing our climate accurately, especially in terms of being informative for policy-making.
So, IMHO, the scorn should not be directed so much on the post-hoc rationalization (deserved though it might be) but on the consequences to policy that this supposedly more accurate modelling might have had, had we had it sooner.
But we really don’t know that the tweaked models are any more realistic than the old models. Tweaking to improve the hindcasting does not assure that the models will not immediately go off the rails in the future.
Hansen et al in 1988 (arguably the seminal paper which gave impetus to found the UN FCCC and IPCC) had access to temperature records from the 19th Century to the present. They knew there was a pause (slight decline) starting in the early- to mid-1940s and lasting approx. 30 y, which had followed a temperature rise from approx. 1910 to 1940, itself preceded by a pause (slight decline) from approx. 1880-1910.
http://woodfortrees.org/plot/hadcrut3gl/from:1850/to:1988
So a natural quasi-cycle of approx. 60 y was the most salient part of the record (apart from a mild temperature increase of far less than one deg. C – more like 0.5 deg C). Their Business as Usual Scenario A called for human GHG emissions to increase by 1.5% per year; instead, human GHG emissions have increased at 2.1% per year. Scenario A’s five-year-smoothed projection called for a temperature anomaly rise rate of about 0.5 deg C PER DECADE by the 2010s. Yet the actual atmospheric temperature anomaly record has been BENEATH their Scenario C, which corresponded to a utopian world which “… drastically reduces trace gas growth between 1990 and 2000 such that the greenhouse climate forcing ceases to increase after 2000.”
I KID YOU NOT.
This is a major fail.
I am not aware of Hansen et al having done any second-guessing of their own work, however.
How can this be possible?
What other endeavour with the audaciousness and pretentiousness to include itself amongst the “Sciences” can endure such a massive (and growing) cleft between data and models without being called to task?
These silly hand-waving, after-the-fact model adjustments, searches for hidden heat, etc. are merely efforts to prolong the belief that these fellows somehow had it right 2.5 decades ago. They didn’t. It of course represents the vested interests in maintaining research grants and massive cash flows, all of which are based on the premise of Climate Armageddon – which the lowly masses must rightly fear (and submit their tithes to the clergy accordingly).
Kurt in Switzerland
errata (text below was lost in formatting):
From 2nd paragraph, this should have read: “So a natural quasi-cycle of approx. 60 y was the most salient part of the record (apart from a mild temperature increase of far less than one deg. C – more like 0.5 deg C.).
Their Business as Usual Scenario A called for human GHG emissions to increase by 1.5 % per year; instead, human GHG emissions have increased at 2.1% per year. Scenario A five-year-smoothed projection called for a temperature anomaly rise rate of about 0.5 deg C PER DECADE by the 2010s.”
Kurt in Switzerland
Dr. James Hansen – NASA GISS – 15 January 2013
“The 5-year mean global temperature has been flat for a decade, which we interpret as a combination of natural variability and a slowdown in the growth rate of the net climate forcing.”
@Mary
Hansen claimed that there was 0.2C/decade “locked in” for the first two decades of the 21st century. NO MATTER WHAT WE DID, including halting all CO2 emissions, the first 20 years would see an increase of 0.4C. A real man would say “Sorry, I got it wrong,” not the science equivalent of “The dog ate my homework.”
So now they should be able to predict how long the hiatus is going to last, or does that only happen when it is over?
One cannot call the lack of warming a pause or a hiatus unless one knows the future. All one can say is that warming has stopped.
And now that they’re so much better, when does the next hiatus start, and how long does it last?
But their models simulated the past pretty accurately; it’s only when they had to predict the future that they went wrong. Was different maths involved in predicting the future?
Surely they weren’t rigged to make it look like they knew, that the science was settled? Oh yeah, so they were, as proved by Climategate.
If this crock wasn’t costing us so much money, it would be funny.
“Tools” meaning “Data”…
in Aus a “tool” has another meaning..
and these “tools” sure are!!
the grunions’ usual cheer squad were soooo all over this as proof that they’re still right
cognitive dissonance to the max 🙂
Friends:
Please note what the above article actually says: after the event they altered the model so that its possible behaviour could be similar to what was observed. They then ran the model 262 times, and of those 262 runs there were 10 which matched what happened in reality.
And, on the basis of that, Meehl asserts that “new decadal climate predictions show promise”.
Can anybody please explain how having amended a computer program so it can sometimes agree with past climate behaviour is evidence that the amended program is more likely to indicate future climate behaviour?
Richard
No. Not with a straight face.
This is climate science. No one is asking you to keep a straight face.
As long as their grant pockets are wide open, who’s even looking at their faces?
Prior to the “tweaking”, 0 out of 262 were able to somewhat simulate past behavior. Getting to 10 out of 262 is a big improvement for them.
MarkW
Yes, I can see the “big improvement” in being able to emulate the past but – with my limited understanding of the magic called ‘climate science’ – I fail to understand how that improves ability to predict the future.
Richard
Can anybody please explain how having amended a computer program so it can sometimes agree with past climate behaviour is evidence that the amended program is more likely to indicate future climate behaviour?
===============
I have a pair of dice that sometimes agrees with the past, and surprisingly, when I look back, it also agrees with the future.
What the computer modellers are really saying is that their computers 20 years ago could not ever get the future right. They could not even perform as well as a pair of dice. 20 years ago none of the models predicted “THE PAUSE”.
However, after years of research and millions of dollars, they have now got to the point where their models are almost able to equal the performance of a pair of dice. Another 20 years and, who knows, maybe they will someday finally achieve the holy grail of climate science and ultimately match the accuracy of a pair of dice.
ferdberple
Thank you for that.
I understand you to be saying that the modellers have reduced their certainty that their indications of future climate behaviour are wrong.
The development seems less than helpful.
If their ‘projections’ were certainly wrong then those projected climate behaviours could be removed from the list of possible future climate behaviours. So, as a result of the development we now have an increased number of possible future climate behaviours.
At this rate of model development they will never match the accuracy of a pair of dice.
Richard
If they were in Vegas they would have “crapped out” a long time ago and would be waiting at their gate at the airport with those long, sad, loser faces. Luckily for them, they are playing with other people’s money.
Over-fitting anyone?
“…little warming trend over the last 15 years…”. Surely they mean “no warming trend”?
And “Almost all of the heat trapped by additional greenhouse gases during this period has been shown to be going into the deeper layers of the world’s oceans”. I thought that this was just one of 20+ attempted explanations? I didn’t realise it “has been shown”. A guess isn’t proof!
What this seems to be saying is that they couldn’t really forecast the climate or weather 10 yrs ago, but now they can. So Trenberth’s heat-in-the-oceans stuff is crap, and the EPA stuff is crap, and the IPCC stuff is crap, because it was all predicated on the models that they are now saying were crap.
Captain Hindsight.
http://www.youtube.com/watch?v=Q2WfMO0UKB4
From the famed Betts computer model at the UK MO.
Important
Long-range forecasts are unlike weather forecasts for the next few days
Forecasts show the likelihood of a range of possible outcomes
The most likely outcome in the forecast will not always happen
Forecasts are for average conditions over a wide region and time period
For more details on interpretation, see How to use our long-range predictions.
Averaged over the five-year period 2014-2018, forecast patterns suggest enhanced warming over land, and at high northern latitudes. There is some indication of continued cool conditions in the Southern Ocean, and of a developing cooling in the north Atlantic sub-polar gyre. The latter is potentially important for climate impacts over Europe, America and Africa.
Averaged over the five-year period 2014-2018, global average temperature is expected to remain high and is likely to be between 0.17°C and 0.43°C above the long-term (1981-2010) average. This compares with an anomaly of +0.26°C observed in 2010, the warmest year on record.
For this forecast the baseline period has been updated to be 1981-2010 (compared to 1971-2000 used previously). This provides a more recent context and is consistent with our seasonal forecasts.
Joe Bastardi has been giving the base of this forecast for several years without a £130 m / yr office and £60m computer.