Great moments in climate science: "we could have forecast 'the pause' – if we had the tools of the future back then"

Oh this is hilarious. In a “Back To The Future” sort of moment, this press release from the National Center for Atmospheric Research claims they could have forecast “the pause”, if only they had the right tools back then.

Yes, having tools of the future would have made a big difference in these inconvenient moments of history:

“We could have forecast the Challenger Explosion if only we knew O-rings became brittle and shrank in the cold, and we had Richard Feynman working for us to warn us.”

“We could have learned the Japanese were going to bomb Pearl Harbor if only we had the electronic wiretapping intelligence gathering capability the NSA has today.”

“We could have predicted the Tacoma Narrows Bridge would collapse back then if only we had the sophisticated computer models of today to model wind loading.”

Yes, saying that having the tools of the future back then would have fixed the problem is always a big help when you want to do a post-facto CYA for stuff you didn’t actually do back then.

UPDATE: WUWT commenter Louis delivers one of those “I wish I’d said that” moments:

Even if they could have forecast the pause, they wouldn’t have. That would have undercut their dire message that we had to act now because global warming was accelerating and would soon reach a point where it would become irreversible.

Here’s the CYA from NCAR:

Progress on decadal climate prediction

Today’s tools would have foreseen warming slowdown

If today’s tools for multiyear climate forecasting had been available in the 1990s, they would have revealed that a slowdown in global warming was likely on the way, according to new research.

The analysis, led by NCAR’s Gerald Meehl, appears in the journal Nature Climate Change. It highlights the progress being made in decadal climate prediction, in which global models use the observed state of the world’s oceans and their influence on the atmosphere to predict how global climate will evolve over the next few years.

Such decadal forecasts, while still subject to large uncertainties, have emerged as a new area of climate science. This has been facilitated by the rapid growth in computing power available to climate scientists, along with the increased sophistication of global models and the availability of higher-quality observations of the climate system, particularly the ocean.

Global temperature anomalies, 1880-2013, from NOAA/NCDC
After rising rapidly in the 1980s and 1990s, global surface air temperature has plateaued at high levels since around 2000. (Image courtesy NOAA National Climatic Data Center.)

Although global temperatures remain close to record highs, they have shown little warming trend over the last 15 years, a phenomenon sometimes referred to as the “early-2000s hiatus”. Almost all of the heat trapped by additional greenhouse gases during this period has been shown to be going into the deeper layers of the world’s oceans.

The hiatus was not predicted by the average conditions simulated by earlier climate models because they were not configured to predict decade-by-decade variations.

However, to challenge the assumption that no climate model could have foreseen the hiatus, Meehl posed this question: “If we could be transported back to the 1990s with this new decadal prediction capability, a set of current models, and a modern-day supercomputer, could we simulate the hiatus?”

Looking at yesterday’s future with today’s tools

To answer this question, Meehl and colleagues applied contemporary models in a “hindcast” experiment using the new methods for decadal climate prediction. The models were started, or “initialized,” with particular past observed conditions in the climate system. The models then simulated the climate over previous time periods where the outcome is known.

The researchers drew on 16 models from research centers around the world that were assessed in the most recent report by the Intergovernmental Panel on Climate Change (IPCC). For each year from 1960 through 2005, these models simulated the state of the climate system over the subsequent 3-to-7-year period, including whether the global temperature would be warmer or cooler than it was in the preceding 15-year period.
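For the mechanically minded, the comparison described above boils down to averaging each forecast over years 3 through 7 after initialization and asking whether that mean sits above or below the observed mean of the 15 years preceding the start date. A minimal Python sketch of that scoring, with assumed array names and data layout (an illustration only, not the paper’s actual code):

```python
import numpy as np

def hindcast_score(obs, forecasts, init_years, first_year=1960):
    """Score initialized hindcasts the way the release describes.

    obs        -- 1-D array of observed annual global-mean anomalies,
                  with obs[0] corresponding to `first_year`
    forecasts  -- dict mapping init year -> array of forecast anomalies
                  for years 1..N after initialization
    init_years -- initialization years to evaluate (e.g. 1960..2005)
    """
    scores = {}
    for y in init_years:
        i = y - first_year                 # index of year y in obs
        if i < 15:                         # need a full 15-year baseline
            continue
        baseline = obs[i - 15:i].mean()    # observed mean of preceding 15 yr
        fcst = np.asarray(forecasts[y])[2:7].mean()  # forecast years 3..7
        scores[y] = {"baseline": baseline,
                     "forecast_3to7": fcst,
                     "warmer": bool(fcst > baseline)}
    return scores
```

Averaging such scores across each year’s set of 16 models would then give the multi-model comparison the release refers to.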

Starting in the late 1990s, the 3-to-7-year forecasts (averaged across each year’s set of models) consistently simulated the leveling of global temperature that was observed after the year 2000. (See image at bottom.) The models also produced the observed pattern of stronger trade winds and cooler-than-normal sea surface temperatures over the tropical Pacific. A previous study by Meehl and colleagues related the observed hiatus of globally averaged surface air temperature to this pattern, which is associated with enhanced heat storage in the subsurface Pacific and other parts of the deeper global oceans.

Letting natural variability play out

Depiction of global temperature during hiatus as captured by ten ensemble members
A set of 262 model simulations for the last century that were assessed in the most recent IPCC report show the long-term warming trend produced by greenhouse gases, along with short-term trends produced by natural variability. A total of 10 simulations randomly produced variations for the period 2000–2013 that were similar to those actually observed during this period. Above is a map showing trends in sea surface temperature for those 10 model runs, with the characteristic cooling evident across the tropical Pacific. (Image courtesy Nature Climate Change.)

Although scientists are continuing to analyze all the factors that might be driving the hiatus, the new study suggests that natural decade-to-decade climate variability is largely responsible.

As part of the same study, Meehl and colleagues analyzed a total of 262 model simulations, each starting in the 1800s and continuing to 2100, that were also assessed in the recent IPCC report. Unlike the short-term predictions that were regularly initialized with observations, these long-term “free-running” simulations did not begin with any particular observed climate conditions.

Such free-running simulations are typically averaged together to remove the influence of internal variability that occurs randomly in the models and in the observations. What remains is the climate system’s response to changing conditions such as increasing carbon dioxide.

However, the naturally occurring variability in 10 of those simulations happened, by chance, to line up with the internal variability that actually occurred in the observations. These 10 simulations each showed a hiatus much like what was observed from 2000 to 2013, even down to the details of the unusual state of the Pacific Ocean.
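The release doesn’t spell out the matching criterion, but the underlying idea is simple: average the free-running ensemble to isolate the forced response, then flag the few runs whose random internal variability happens to track the observed 2000–2013 trend. A rough sketch, with an assumed array layout and an assumed trend tolerance (both guesses for illustration, not the paper’s method):

```python
import numpy as np

def forced_response(ensemble):
    """Ensemble mean over free-running runs (shape: n_runs x n_years);
    averaging cancels each run's random internal variability, leaving
    the forced response to e.g. rising CO2."""
    return ensemble.mean(axis=0)

def runs_matching_hiatus(ensemble, obs, years, lo=2000, hi=2013, tol=0.005):
    """Return indices of runs whose global-mean trend over lo..hi falls
    within `tol` (deg C per year; an assumed threshold) of the observed
    trend over the same window."""
    sel = (years >= lo) & (years <= hi)
    t = years[sel]
    obs_trend = np.polyfit(t, obs[sel], 1)[0]          # observed slope
    return [k for k, run in enumerate(ensemble)
            if abs(np.polyfit(t, run[sel], 1)[0] - obs_trend) < tol]
```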

Meehl pointed out that there is no short-term predictive value in these simulations, since one could not have anticipated beforehand which of the simulations’ internal variability would match the observations.

“If we don’t incorporate current conditions, the models can’t tell us how natural variability will evolve over the next few years. However, when we do take into account the observed state of the ocean and atmosphere at the start of a model run, we can get a better idea of what to expect. This is why the new decadal climate predictions show promise,” said Meehl.

Decadal climate prediction could thus be applied to estimate when the hiatus in atmospheric warming may end. For example, the UK Met Office now issues a global forecast at the start of each year that extends out for a decade.

“There are indications from some of the most recent model simulations that the hiatus could end in the next few years,” Meehl added, “though we need to better quantify the reliability of the forecasts produced with this new technique.”

The paper:

Meehl, Gerald A., Haiyan Teng, and Julie M. Arblaster, “Climate model simulations of the observed early-2000s hiatus of global warming,” Nature Climate Change (2014), doi:10.1038/nclimate2357

 

224 Comments
Louis
September 10, 2014 12:08 am

Even if they could have forecast the pause, they wouldn’t have. That would have undercut their dire message that we had to act now because global warming was accelerating and would soon reach a point where it would become irreversible.

Mary Brown
Reply to  Louis
September 10, 2014 5:44 am

“Even if they could have forecast the pause, they wouldn’t have.”
Classic line. Absolutely true.

Reply to  Mary Brown
September 10, 2014 9:45 am

Two decades of screaming “climate/weather wolf!!”
Turns out that CO2 is actually wolf repellent (:

Ben M.
Reply to  Louis
September 10, 2014 5:58 am

No, but they may have shown wider uncertainty ranges — more like the variability we’ve seen in the last century, with a continued upward trend.

Reply to  Ben M.
September 10, 2014 6:51 am

They don’t show or explain the uncertainty and error bars now. What makes you think they’d change anything, except to start with the propaganda phase instead of the science?
They’d never let the skeptics look through a crack in the door, much less enter the conversation.

Don E
Reply to  Louis
September 10, 2014 10:03 am

Weren’t many of the recent explanations of the pause available many years ago, and couldn’t they have been factored into the models?

Robert Austin
Reply to  Louis
September 10, 2014 10:21 am

How right you are. The forecast of a “pause” would have been so politically incorrect that they would have tied themselves in scientific knots to “hide the decline”.

Mike Bromley the Kurd
September 10, 2014 12:09 am

Man, they just don’t give up. Shameless crap. The oceans ate our heat. Why now? How tiresome it all is.

Ken
Reply to  Mike Bromley the Kurd
September 10, 2014 9:18 am

“particular past observed conditions”? If you let me pick and choose the starting conditions, I can model any dadgum thing I want. What a bunch of hogwash!

Dr Paul mackey
September 10, 2014 12:11 am

“Meehl pointed out that there is no short-term predictive value in these simulations, since one could not have anticipated beforehand which of the simulations’ internal variability would match the observations.”
Translation
We don’t know what really is going on and are making wild guesses hoping one will be right.

Ian Schumacher
September 10, 2014 12:13 am

“If today’s tools for multiyear climate forecasting had been available in the 1990s, they would have revealed that a slowdown in global warming was likely on the way”
Bull$hit. They can’t even explain the pause now after the fact.
Obligatory Niels Bohr quote:
“Prediction is very difficult, especially about the future”

Jared
September 10, 2014 12:15 am

So they figured it all out, yet still do not have the testicular fortitude to make a 15 year prediction. Words like ‘could’ and ‘though’ in a prediction means you do not know and are taking a guess.

Kurt in Switzerland
September 10, 2014 12:15 am

Oh, they must mean THAT settled science.

Brute
Reply to  Kurt in Switzerland
September 10, 2014 4:13 am

Indeed. Was not the pause just screamed out of existence when Ridley brought it up in his article in the WSJ?

Non Nomen
September 10, 2014 12:21 am

“The dreams are all right enough, but the art of interpreting is lost. 1500 yr ago they were getting to do it so badly it was considered better to depend on chicken-guts & other naturally intelligent sources of prophecy, recognizing that when guts can’t prophecy, it is no use for Ezekiel to go into the business. Prophecy went out with the chicken guts.” – Mark Twain, working notes for No. 44, The Mysterious Stranger, published in The Mysterious Stranger Manuscripts, pp. 463–464.
Replace “chicken guts” with “models”….

LewSkannen
September 10, 2014 12:24 am

I have been predicting last week’s lottery numbers for years. Why won’t anyone buy my software??!

September 10, 2014 12:30 am

The great circles of climate science.
1. Ignore an almost two decade long halt in any global warming.
2. For each year therein, insist it was warmer than the previous one.
3. Go back, retweak the parameters of your models to predict the pause.
4. Announce you’re still infallible.
http://thepointman.wordpress.com/2011/01/21/the-seductiveness-of-models/
Pointman

Admin
September 10, 2014 12:32 am

Hilarious – essentially Meehl is advocating broadening the error bands – a simulation for every occasion.

lee
September 10, 2014 12:35 am

“If we don’t incorporate current conditions, the models can’t tell us how natural variability will evolve over the next few years.”
Now all we need to do is accurately parameterise each and every one of these underlying natural events and their interrelationships, and Bob’s your mother’s brother. Or sister.

Ken L.
September 10, 2014 12:35 am

They haven’t been merely predicting on a decadal time scale. They have been making projections on a multi-decadal time scale. Does it seem to anyone else that they are avoiding the implications of their sudden discovery of natural variability as it relates to the accuracy of previous very-long-range projections? Of course, Dr. R. Pielke, Sr., has repeatedly pointed to the folly of assigning any real value to multi-decadal climate model projections anyway, as he did here:
http://wattsupwiththat.com/2014/02/07/the-overselling-of-climate-modeling-predictability-on-multi-decadal-time-scales-in-the-2013-ipcc-wg1-report-annex-1-is-not-scientifically-robust/#more-102804

Gary in Erko
September 10, 2014 12:36 am

The future has returned to what it was before it got changed. Or something like that. It’s science.

Richard
September 10, 2014 12:37 am

Always reminds me of the old saying:
“If your Aunt had balls, she’d be your Uncle”.

Toby Nixon
September 10, 2014 12:37 am

What’s this “plateaued at high levels” nonsense in the caption to the first figure? “High” compared to what? There’s ample indication in reconstructions of significantly higher temps for long periods of time. We’re still just barely back to “normal” after the little ice age.

Agnostic
September 10, 2014 12:44 am

Shouldn’t the logic be:
“If only we had these tools at our disposal earlier we would have been able to predict that global warming would not be as serious as we first thought?”
TBH, while in the context of the debate surrounding CAGW this looks like post hoc rationalization, in reality it is the sort of thing that routinely goes on in science, which is usually and generally self-correcting. Models need to be able to capture the range of variability in order to say something useful about what our climate might do, regardless of our impact on it. This can be seen as an attempt to refine and improve modelling, which is most definitely needed, since the models have clearly done a rather poor job to date of characterizing our climate accurately, especially in terms of being informative for policy-making.
So, IMHO, the scorn should be directed not so much at the post-hoc rationalization (deserved though it might be) as at the consequences for policy that this supposedly more accurate modelling might have had, had we had it sooner.

Robert Austin
Reply to  Agnostic
September 10, 2014 10:28 am

But we really don’t know that the tweaked models are any more realistic than the old models. Tweaking to improve the hindcasting does not assure that the models will not immediately go off the rails in the future.

Kurt in Switzerland
September 10, 2014 12:55 am

Hansen et al in 1988 (arguably the seminal paper which gave impetus to found the UN FCCC and IPCC) had access to temperature records from the 19th Century to the present. They knew there was a pause (slight decline) starting in the early- to mid-1940s and lasting approx. 30 y, which had followed a temperature rise from approx. 1910 to 1940, itself preceded by a pause (slight decline) from approx. 1880-1910.
http://woodfortrees.org/plot/hadcrut3gl/from:1850/to:1988
So a natural quasi-cycle of approx. 60 y was the most salient part of the record (apart from a mild temperature increase of far less than one deg C – more like 0.5 deg C). Their Business as Usual Scenario A called for human GHG emissions to increase by 1.5% per year; instead, human GHG emissions have increased at 2.1% per year. The Scenario A five-year-smoothed projection called for a temperature anomaly rise rate of about 0.5 deg C PER DECADE by the 2010s. Yet the actual atmospheric temperature anomaly record has been BENEATH their Scenario C, which corresponded to a utopian world which “… drastically reduces trace gas growth between 1990 and 2000 such that the greenhouse climate forcing ceases to increase after 2000.”
I KID YOU NOT.
This is a major fail.
I am not aware of Hansen et al having done any second-guessing of their own work, however.
How can this be possible?
What other endeavour with the audaciousness and pretentiousness to include itself amongst the “Sciences” can endure such a massive (and growing) cleft between data and models without being called to task?
These silly hand-waving, after-the-fact model adjustments, searches for hidden heat, etc. are merely efforts to prolong the belief that these fellows somehow had it right 2.5 decades ago. They didn’t. It of course represents the vested interests in maintaining research grants and massive cash flows, all of which are based on the premise of Climate Armageddon – which the lowly masses must rightly fear (and submit their tithes to the clergy accordingly).
Kurt in Switzerland


Mary Brown
Reply to  Kurt in Switzerland
September 10, 2014 5:51 am

Dr. James Hansen – NASA GISS – 15 January 2013
“The 5-year mean global temperature has been flat for a decade, which we interpret as a combination of natural variability and a slowdown in the growth rate of the net climate forcing.”

D.J. Hawkins
Reply to  Kurt in Switzerland
September 10, 2014 10:38 am

@Mary
Hansen claimed that there was 0.2C/decade “locked in” for the first two decades of the 21st century. NO MATTER WHAT WE DID, including halting all CO2 emissions, the first 20 years would see an increase of 0.4C. A real man would say “Sorry, I got it wrong,” not the science equivalent of “The dog ate my homework.”

Richard
September 10, 2014 12:55 am

So now they should be able to predict how long the hiatus is going to last, or does that only happen when it is over?

Paul Marko
Reply to  Richard
September 10, 2014 8:23 am

One cannot call the lack of warming a pause, or a hiatus unless you know the future. All one can say is warming has stopped.

arnriewe
Reply to  Richard
September 10, 2014 8:07 pm

And now that they’re so much better, when does the next hiatus start, and how long does it last?

andydaines
September 10, 2014 12:57 am

But their models simulated the past pretty accurately; it’s only when they had to predict the future that they went wrong. Was different maths involved in predicting the future?
Surely they weren’t rigged to make it look like they knew, that the science was settled? Oh yeah, so they were, as proved by Climategate.
If this crock wasn’t costing us so much money it’d be funny.

Nylo
September 10, 2014 12:59 am

“Tools” meaning “Data”…

ozspeaksup
Reply to  Nylo
September 10, 2014 3:23 am

in Aus a “tool” has another meaning..
and these “tools” sure are!!
the grunions’ usual cheersquad were soooo all over this as proof they’re still right
cognitive dissonance to the max 🙂

September 10, 2014 1:04 am

Friends:
Please note that the above article says

Meehl pointed out that there is no short-term predictive value in these simulations, since one could not have anticipated beforehand which of the simulations’ internal variability would match the observations.

“If we don’t incorporate current conditions, the models can’t tell us how natural variability will evolve over the next few years. However, when we do take into account the observed state of the ocean and atmosphere at the start of a model run, we can get a better idea of what to expect. This is why the new decadal climate predictions show promise,”

said Meehl.

In other words, after the event they altered the model so its possible behaviour could be similar to what was observed. They then ran the model 262 times and of those 262 runs there were 10 which matched what happened in reality.
And, on the basis of that, Meehl asserts that “new decadal climate predictions show promise”.
Can anybody please explain how having amended a computer program so it can sometimes agree with past climate behaviour is evidence that the amended program is more likely to indicate future climate behaviour?
Richard

Robert B
Reply to  richardscourtney
September 10, 2014 1:26 am

No. Not with a straight face.

Reply to  Robert B
September 10, 2014 2:12 am

This is climate science. No one is asking you to keep a straight face.

RockyRoad
Reply to  Robert B
September 10, 2014 8:48 am

As long as their grant pockets are wide open, who’s even looking at their faces?

MarkW
Reply to  richardscourtney
September 10, 2014 5:21 am

Prior to the “tweaking”, 0 out of 262 were able to somewhat simulate past behavior. Getting to 10 out of 262 is a big improvement for them.

Reply to  MarkW
September 10, 2014 5:26 am

MarkW
Yes, I can see the “big improvement” in being able to emulate the past but – with my limited understanding of the magic called ‘climate science’ – I fail to understand how that improves ability to predict the future.
Richard

ferdberple
Reply to  richardscourtney
September 10, 2014 6:07 am

Can anybody please explain how having amended a computer program so it can sometimes agree with past climate behaviour is evidence that the amended program is more likely to indicate future climate behaviour?
===============
I have a pair of dice that sometimes agrees with the past, and surprisingly, when I look back, it also agrees with the future.
What the computer modellers are really saying is that their computers 20 years ago could not ever get the future right. They could not even perform as well as a pair of dice. 20 years ago none of the models predicted “THE PAUSE”.
However, after years of research and millions of dollars, they have now got to the point where their models are almost able to equal the performance of a pair of dice. Another 20 years and, who knows, maybe they will someday finally achieve the holy grail of climate science and ultimately match the accuracy of a pair of dice.

Reply to  ferdberple
September 10, 2014 6:59 am

ferdberple
Thank you for that.
I understand you to be saying that the modellers have reduced their certainty that their indications of future climate behaviour are wrong.
The development seems less than helpful.
If their ‘projections’ were certainly wrong then those projected climate behaviours could be removed from the list of possible future climate behaviours. So, as a result of the development we now have an increased number of possible future climate behaviours.
At this rate of model development they will never match the accuracy of a pair of dice.
Richard

daveandrews723
Reply to  ferdberple
September 10, 2014 12:18 pm

If they were in Vegas they would have “crapped out” a long time ago and would be waiting at their gate at the airport with those long, sad, loser faces. Luckily for them, they are playing with other people’s money.

Reply to  richardscourtney
September 10, 2014 8:46 am

Over-fitting anyone?

September 10, 2014 1:13 am

“…little warming trend over the last 15 years…”. Surely they mean “no warming trend”?
And “Almost all of the heat trapped by additional greenhouse gases during this period has been shown to be going into the deeper layers of the world’s oceans”. I thought that this was just one of 20+ attempted explanations? I didn’t realise it “has been shown”. A guess isn’t proof!

Stephen Richards
September 10, 2014 1:22 am

What this seems to be saying is that they couldn’t really forecast the climate or weather 10 yrs ago but now they can. So Trenberth’s heat-in-the-oceans is crap, the EPA stuff is crap, and the IPCC stuff is crap, because it was all predicated on the models that they are now saying were crap.

Stephen Richards
September 10, 2014 1:26 am

From the famed Betts computer model at the UK MO.
Important
Long-range forecasts are unlike weather forecasts for the next few days
Forecasts show the likelihood of a range of possible outcomes
The most likely outcome in the forecast will not always happen
Forecasts are for average conditions over a wide region and time period
For more details on interpretation, see How to use our long-range predictions.
Averaged over the five-year period 2014-2018, forecast patterns suggest enhanced warming over land, and at high northern latitudes. There is some indication of continued cool conditions in the Southern Ocean, and of a developing cooling in the north Atlantic sub-polar gyre. The latter is potentially important for climate impacts over Europe, America and Africa.
Averaged over the five-year period 2014-2018, global average temperature is expected to remain high and is likely to be between 0.17°C and 0.43°C above the long-term (1981-2010) average. This compares with an anomaly of +0.26°C observed in 2010, the warmest year on record.
For this forecast the baseline period has been updated to be 1981-2010 (compared to 1971-2000 used previously). This provides a more recent context and is consistent with our seasonal forecasts.
Joe Bastardi has been making essentially this forecast for several years without a £130m/yr office and a £60m computer.

Stephen Richards
September 10, 2014 1:27 am

There is some indication of continued cool conditions in the Southern Ocean, and of a developing cooling in the north Atlantic sub-polar gyre. The latter is potentially important for climate impacts over Europe, America and Africa
How about that then. AMO and PDO are “potentially important”. WOW!!

September 10, 2014 1:33 am

Anthony, for some reason the link to the paper is a mailto: link. Just need to correct that.

DirkH
September 10, 2014 1:39 am

“We could have learned the Japanese were going to bomb Pearl Harbor if only we had the electronic wiretapping intelligence gathering capability the NSA has today.”
Well some say the USA did crack the code before Pearl Harbour, and that was the reason the carrier left harbour a day before, leaving only obsolete battleships for destruction and delivering a convenient reason to sway public opinion for entry into the war.
The USA does admit that they cracked the Japanese code a while later, enabling them to intercept and shoot down a plane in which General Yamamoto was travelling; killing him and paralysing the Japanese navy for months – which led to the delay of their super U Boat project, of which Yamamoto was the driving force.

Konrad
Reply to  DirkH
September 10, 2014 3:22 am

Almost correct, it was the Brits who cracked the code as it was a variant of the four wheel German army enigma code (not the five wheel navy). They informed only trusted members of the US admin.

Keith Willshaw
Reply to  Konrad
September 10, 2014 4:53 am

Incorrect. The Japanese JN25 naval code had nothing to do with Enigma; it was based on code books that were replaced from time to time. The last issue prior to Pearl Harbor was Dec 4 1944. Even had the USN been able to read the codes it would have been of no use. Admiral Yamamoto had ordered that no operational information was to be transferred by radio, and the fleet maintained radio silence until the attack was under way.

Reply to  Keith Willshaw
September 10, 2014 12:54 pm

– Don’t you mean Dec 4, 1941?

Reply to  Konrad
September 10, 2014 7:28 am

Actually, wasn’t it the Poles who sent the original code-work (and machines) to the Brits – lest they fall into the hands of the encroaching Nazis?

Reply to  Konrad
September 10, 2014 7:30 am

Oops. That would be the German code stuff. My bad.

Earl Smith
Reply to  Konrad
September 10, 2014 8:57 am

So what if Yamamoto ordered radio silence. The commander of the fleet did NOT maintain silence. Following the storm the type commanders began a long series of radio messages to round up the scattered ships. The message level was enough to wake up anyone even if it was in code. Except that the specific stations listening only reported to DC (even if one was located on Oahu). Nothing was to be sent to Pearl. Then we had the interesting fact that the Headquarters of the Red Cross (DC) sent a vast stockpile of disaster supplies to Pearl but did not reveal the secret to the local chapter until after the attack. We even managed to redirect civilian shipping (to Soviet Union) so that they would not come across the IJN.
Remember that only a year earlier the Commander of the Pacific Fleet resigned in protest over the Roosevelt order to move the Pacific Fleet from San Diego to the poorly prepared and provocative position at Pearl.
As to negligence, remember that a review board (still lacking the real data on the lead up) exonerated Kimmel and Short. To put it bluntly they were patsies. Then we have the message that MacArthur sent that the Japanese Fleet was spotted passing the Philippines on the way to Indochina. (why did Roosevelt give the MoH to Mac??? It could hardly be the heroic command to leave their prepared defenses and attack the Japanese in the field that led to Bataan Death March. Or his Heroic hiding in the caves of Corregidor. Or his graft and corruption looting the Defense spending of the country. ) That message was enough for anyone to let down their guard.
The honest evaluation of the data is that Roosevelt did everything he could to force Japan to attack and then hid the evidence of the pending attack so as to create the national outrage. The Army and Navy came to just such a conclusion during the war. (and the post war revelations of the secret actions put the icing on the cake)
We have always gone to war as a result of “erroneous data” Recent examples are Tonkin Gulf and Saddam’s nukes but the practice goes back to before the Mexican war. It is just a matter of convincing the masses that we are innocent parties and have been attacked through no fault of our own.

Pho
Reply to  Konrad
September 11, 2014 6:06 am

Sorry, but you are all wrong.
The code was broken by Michael Mann. He cracked it with his Hockey stick and was later awarded a Nobel Prize for his efforts.

Geoff
September 10, 2014 2:03 am

And I can develop a model which infallibly generates the winning lotto numbers for the past 10 weeks. It’s just getting it to generate next week’s numbers that’s proving a challenge.

Reply to  Geoff
September 10, 2014 2:16 am

Why? All you have to do is wait another week.

kadaka (KD Knoebel)
September 10, 2014 2:03 am

Although global temperatures remain close to record highs, they have shown little warming trend over the last 15 years, a phenomenon sometimes referred to as the “early-2000s hiatus”.

Also over the last 20 years. That’s a great “sometimes referred” name when they’re sitting around spitballing a better name than “the stopping” or “the halting”. I suggest “Warming Sensation Vacation”.

Almost all of the heat trapped by additional greenhouse gases during this period has been shown to be going into the deeper layers of the world’s oceans.

Note the strange careful wording, “…heat trapped by additional greenhouse gases…”
ftp://aftp.cmdl.noaa.gov/products/trends/co2/co2_annmean_mlo.txt
Over 15 years, from 1998 to 2013, the increase was 30ppm. With the evidence that the logarithmic greenhouse effect of CO2 is pretty much saturated, I don’t see how the extra heat trapped by the additional 30ppm could have been accurately measured, being such a tiny amount, let alone tracked and verified as going into the deeper ocean layers.
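For scale, the widely used simplified forcing expression from Myhre et al. (1998) (standard physics, not anything from this press release), applied to approximate Mauna Loa annual means of 367 ppm in 1998 and 397 ppm in 2013, gives

$$\Delta F = 5.35 \ln\!\left(\frac{C}{C_0}\right) = 5.35 \ln\!\left(\frac{397}{367}\right) \approx 0.42\ \mathrm{W\,m^{-2}}$$

of extra forcing accumulated over the whole 15 years, which gives a sense of how small the signal supposedly being tracked into the deep ocean actually is.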
Wait, let me guess. It was shown as happening by the models therefore it was shown as happening in reality, right?

David A
Reply to  kadaka (KD Knoebel)
September 10, 2014 2:25 am

Well yes, the LWIR back radiation (losing none of that energy to evaporation) somehow bypassed the first 700 meters of ocean and, at less than 1/2 of the models’ predicted rate, added (poorly measured, with large error bars) heat to the deep oceans, where it will hide for some short time, keeping separate from the rest of the deep oceans, and soon it will, or maybe could, come screaming out of the oceans and cause catastrophic disaster worldwide.
(Climate science 2014 in a nut shell)

Varco
September 10, 2014 2:17 am

Anthony,
congratulations to you and the WUWT team on the milestone that looks like it will be achieved today!

Keith Willshaw
September 10, 2014 2:29 am

DirkH, I don’t know what your knowledge of climate science is like, but you are flat wrong regarding the run-up to Pearl Harbor.
The commanders at Pearl Harbor had been sent a flash message on November 27th 1941 that began with the words
“*THIS DESPATCH IS TO BE CONSIDERED A WAR WARNING*. NEGOTIATIONS WITH
JAPAN LOOKING TOWARD STABILIZATION OF CONDITIONS IN THE PACIFIC *HAVE
CEASED* AND AN AGGRESSIVE MOVE BY JAPAN IS EXPECTED WITHIN THE NEXT FEW
DAYS. ”
In the face of this stark warning Kimmel and Short had their own meeting at Pearl and decided that despite being specifically warned that war was coming AND the Japanese Fleet was at sea AND they knew the IJN had the capability to mount an attack there was no real risk and so took no action at all. The base remained on a peacetime footing, the emergency room was only manned 9 to 5 and no air patrols were flown. The army anti-aircraft guns which were supposed to protect the fleet failed to fire a single shot as no ammunition had been issued to them.
Yamamoto was of course an Admiral; the IJN was not paralyzed by his death, and the I-400 program had been initiated well before his death by Yamamoto himself in January 1942.

DirkH
Reply to  Keith Willshaw
September 10, 2014 2:53 am

Initiated yes.
The warning you cite is not very specific.

hunter
Reply to  DirkH
September 10, 2014 3:24 am

The commanders of Pearl Harbor were negligent. Leaders are supposed to think in terms of how apparently vague threats can be prudently prepared for. “War Warning” would be prudently and reasonably interpreted as “prepare for war”. The warning from reality that the models are failed is a prudent person’s warning to reconsider the claims of those promoted the models.

Keith Willshaw
Reply to  DirkH
September 10, 2014 4:57 am

A message that an attack is to be expected is about as explicit as any military leader can hope for. Did you expect a complete breakdown by aircraft and target?

Mary Brown
Reply to  Keith Willshaw
September 10, 2014 6:01 am

Like climate science, with WWII, we can’t even agree on the past.

Stacey
September 10, 2014 2:33 am

What tools do they have now that they didn’t in the ’90s?

MikeB
Reply to  Stacey
September 10, 2014 2:39 am

Hindsight

James Strom
Reply to  Stacey
September 10, 2014 2:42 am

Twenty years of climate records?

Steve Keohane
Reply to  Stacey
September 10, 2014 3:55 am

Better turd polishing apparatus.

Bernd Palmer
Reply to  Stacey
September 10, 2014 4:01 am

They better first get the data right on which the models are based.
HadCRUT4, a joint production of the UK Met Office Hadley Centre and the Climatic Research Unit of the University of East Anglia, is the world’s “official” global surface temperature time series.

[HadCRUT4 a combination of] CRUTEM4 and HadSST3 have been “corrected”.
It’s not possible to say exactly what corrections have been applied to CRUTEM4 because no one publishes a land surface air temperature time series that uses only raw records.
http://euanmearns.com/hadcrut4-strikes-out/

Greg
September 10, 2014 2:36 am

If we’d known what the right answer was, we could have arranged for our models to produce it. LOL
Basically, yes. They have so many free parameters they can tweak the models to fit just about anything. But if they tune even the current models to fit only 1960-1990 they will still produce spurious warming.

September 10, 2014 2:47 am

The bad news is NCAR has determined empirically that extending the trend line based on 20 years of data does not produce accurate results. The good news is extending the trend line from only 5 years of data works much better. The conclusion is NCAR’s reputation should only be dependent on their last two predictions.

Greg
September 10, 2014 2:56 am

NCAR: “The hiatus was not predicted by the average conditions simulated by earlier climate models because they were not configured to predict decade-by-decade variations.”
They were configured to match decade-by-decade variations from 1970 to 1995, but not decade-by-decade variations from 1915 to 1935 or from 1945 to 1965.
That is why they did not predict decade-by-decade variations from 1997 onwards.
Even the current models do not correctly reproduce the early 20th-century warming or variability.
They start from a false assumption that 1960-1990 variability was not “decade-by-decade” variability but long term climate change. This is nothing more than an assumption and is still without physical proof from observational data.
They wilfully ignore the fact that the models do not fit the majority of the climate record, even after it has been “corrected” to better fit the hypothesis.
So they start with an assumption that did not fit the evidence even in 1997, before the “pause”. It was purely dogma.
It is disingenuous to pretend this is just a question of decade-by-decade variations when the models were tuned to fit a restricted period of the data on exactly that scale.

MikeB
September 10, 2014 2:58 am

What makes you mistrust supposedly independent scientific organisations like this is their continual attempt to put some ‘spin’ on each statement. For example, the caption on the graph says “After rising rapidly in the 1980s and 1990s, global surface air temperature has plateaued at high levels…”
High levels? What level should the Earth’s temperature be? This after all is a cool period in the Earth’s history. It is technically an ice age (literally).
When we compare current temperatures to those of pre-industrial levels, remember that period was the Little Ice Age. Does anyone prefer to go back to those conditions? Was the Little Ice Age the optimum temperature? Of course not. Perhaps people in Florida or California may say yes but, personally, I would welcome the world being a couple of degrees warmer.

Ken L.
Reply to  MikeB
September 10, 2014 3:33 am

I’ve often wondered how many Climate Alarmists own ocean front property ;).

rogerknights
Reply to  MikeB
September 10, 2014 5:51 am

Well, at least they’re saying “plateaued” instead of “paused.” That’s progress.

September 10, 2014 3:00 am

Words fail me, almost as much as their models do…
…Ignore the chap behind the curtain tweaking the dials; move along, move along!

hunter
September 10, 2014 3:20 am

What they are admitting is that these multi-billion dollar models that climate policy, treaties, laws and taxes are based on are worthless.
We deserve better. Fire the climate hypesters.

Claude Harvey
September 10, 2014 3:21 am

If I’d known what future temperatures were going to be, I could have jockeyed the climate model inputs so they would show that. Eureka! I’ve just invented “hind-casting”. But wait! That’s what I’ve been doing all along (except for the Medieval Warm Period – just couldn’t get the models to do that, so I made the MWP disappear).

cedarhill
September 10, 2014 3:25 am

Yogi Berra is still alive. The climate folks should award him, before he takes his last strike, a PhD (not an honorary one) in Climate Global Warming. After all, he discovered “Forecasting is hard, especially about the future.”
At least a Nobel? Maybe he could share it with the hockey stick – or broken baseball bat modeled on the hockey stick?

AlexS
September 10, 2014 3:26 am

Proper scientists must be ashamed of these clowns.
It is enough to read how they write to see that the whole thing was “managed” to have a “pause”.
Of course, what they say about the future is all “could” and nothing more. We would have more luck in a casino.

parochial old windbag
September 10, 2014 3:27 am

I knew in advance that this would happen.

September 10, 2014 3:29 am

This looks like a very interesting paper. If decadal GCM forecasting is getting as good as they claim, you’ll be hearing a lot more about it. The thing is, it does what people wrongly thought GCM’s were doing before – actually forecasting decadal weather (eg pause). GCM’s had previously been generating random weather subject to forcing, and could only be expected to generate long term climate averages. Now it seems they can synchronize with Earth weather, to some extent.
Of course, when such capability has come to exist, it will be first applied to hindcasting recent decades. What would be a better check?
Unfortunately, the paper seems effectively paywalled. However, there is another more discursive 2014 paper here by Meehl. Well worth reading.

Bernd Palmer
Reply to  Nick Stokes
September 10, 2014 6:05 am

“GCM’s had previously been generating random weather subject to forcing, and could only be expected to generate long term climate averages.”
If weather is random, how can you base models on it? How do you quantify ‘weather’ if it is random? Anyway, GCM’s are about temperature, not weather.
“Now it seems they can synchronize with Earth weather, to some extent”
Synchronize with random events? After the fact (hindcast) or before the fact (forecast)?

ferdberple
Reply to  Nick Stokes
September 10, 2014 6:26 am

10 out of 262 is not the sort of track record I would trust. yes, it is a big improvement from 0 out of 262, but nothing to write home about. unless of course you are paid by the government.
a pair of dice predicts there is 87.3 out of 262 chance temps will plateau, 87.3 out of 262 they will increase, and 87.3 out of 262 they will decrease. And this forecast was available 20 years ago!
So, on that basis the pair of dice is outperforming the billion dollar climate models, and outperforming them by a very wide margin.

ferdberple
Reply to  Nick Stokes
September 10, 2014 6:31 am

so now you know why your local weather forecast calls for a 1/3 chance of sun, 1/3 chance of rain, and 1/3 chance of mixed. All the money invested, and the most reliable forecast is still “today’s weather will be much like yesterday”.
Yup, that one forecast, “today’s weather will be much like yesterday”, is likely true in your location, at least as correct as your local weather forecast.
So there you have it. I’m predicting the weather for the entire planet. The first global weather forecast. Odds are, I will outperform the computers.

te53
Reply to  ferdberple
September 10, 2014 9:18 am

Your method works for Hawaii.

Admad
September 10, 2014 3:52 am

“Almost all of the heat trapped by additional greenhouse gases during this period has been shown to be going into the deeper layers of the world’s oceans.” Shown to be going? Shown? Could somebody, anybody please advise me where there is definitive evidence of this alleged effect, and a plausible hypothesis of method?

Harry Passfield
September 10, 2014 3:56 am

The hiatus was not predicted by the average conditions simulated by earlier climate models because they were not configured to predict decade-by-decade variations.

So when and where was this climate function predicted?

Almost all of the heat trapped by additional greenhouse gases during this period has been shown to be going into the deeper layers of the world’s oceans.

And how was it ‘shown’?

Scott
Reply to  Harry Passfield
September 10, 2014 4:19 am

“How was it shown”
By telling a lie big enough and often enough, it becomes the truth; that’s how. The model re-run showed that if they had known to start telling the “all the heat is now going into the ocean” lie before the hiatus, many years ago, the lie would have been accepted and everything would be hunky-dory today.

nicholas tesdorf
September 10, 2014 4:00 am

If it wasn’t so serious, it would be hilarious. If they spent less time falsifying climate data to show warming, they might have noticed what was happening in the climate.

September 10, 2014 4:02 am

In order for the tools to work correctly, you have to have accurate data. Instead, they still rely on Hokey sticks and undocumented and unwarranted adjustments. GIGO was around back then. And it still works the same way today.

jaffa
September 10, 2014 4:03 am

It’s silly & disingenuous to expect future predictions to be 100% perfect. Climate scientists understand temperature prediction, they always predict warmer weather in summer than in winter and they are always right. They also understand the climate well enough to know that rainy days will be wet and non-rainy days will be dryer. They know there’s ice at the poles (just not exactly how much and where). And every year they accurately predict that there will be tornadoes in tornado alley during the tornado season proving global warming is getting worse due to CO2. So they’re getting a lot of very important stuff right.
Such knowledge isn’t gained overnight; climate scientists are amongst the most intelligent people on the planet, requiring vast intellect way beyond a scientist who deals with one basic science, like a physicist or chemist does. Often their intellect is doubted because they’re too engaged in the thought process to be able to understand trivialities like how to calculate a trend in Excel or which way up to use the data.
The problem is, future predictions can be taken out of context and that can get in the way of “the science”. If any future predictions are given that turn out not to be right, it’s usually because the scientist had to withhold some information because of confidentiality agreements.
If only climate scientists could travel to the future – what a future it would be.

ferdberple
Reply to  jaffa
September 10, 2014 6:35 am

to be accurate, not all climate scientists were able to correctly predict that rain is wet.

ImranCan
September 10, 2014 4:04 am

So the translation is: we were wrong before because we did not know what we were talking about… but trust us, because we do now.
Utter bullshit.

September 10, 2014 4:11 am

“Although scientists are continuing to analyze all the factors that might be driving the hiatus, the new study suggests that natural decade-to-decade climate variability is largely responsible.”
Ummmm…yeah. DOH! Isn’t that what climate realists (us) have been saying all along? Climate change is NATURAL!

John S
September 10, 2014 4:22 am

The last paragraph says “the hiatus could end in the next few years”. Seems like their models go right back to predicting warming once the new adjustments run through them. GIGO as usual.

September 10, 2014 4:30 am

“There are indications from some of the most recent model simulations that the hiatus could end in the next few years,”
Talk about wiggle words – they clearly have no more confidence (and rightly so) in their models now than they did 2 decades ago. Just putting out puff pieces like this to try to keep the whole CAGW thing from dying completely.

Michael Babbitt
September 10, 2014 4:32 am

“We could have predicted the pause in global warming if only we knew how wrong we were back then.”

September 10, 2014 4:41 am

Today’s tools would have foreseen warming slowdown
Ri-i-i-i-i-ght…
OK then, predict when global warming will ‘resume’.

RockyRoad
Reply to  dbstealey
September 10, 2014 8:56 am

Or will it be a cool-down?
They haven’t a clue what’s going to happen even though their beloved CO2 continues to go up without a pause.

DonK31
September 10, 2014 4:46 am

10 out of 262 model runs came up with the solution that actually occurred. By my calculations, 96.2% of model runs are junk.

ferdberple
Reply to  DonK31
September 10, 2014 6:36 am

there you are, the 97% consensus confirmed. 97% wrong.

September 10, 2014 4:46 am

Reblogged this on gottadobetterthanthis and commented:
“Such decadal forecasts, while still subject to large uncertainties, have emerged as a new area…” In other words, they still haven’t a clue.

stevefitzpatrick
September 10, 2014 4:59 am

So 10 of 262 modeled simulations “lined up” with measurements? That is reassuring…. especially since the modelers already know the answer! The rubbish would be funny if it were not so potentially damaging. This is the kind of ‘science’ which merits nothing but cat calls and laughter.
Ridiculous is too kind a description. More accurate is that climate modeling is as intellectually corrupt a ‘scientific’ enterprise as I have ever encountered; can modelers honestly believe the public should be convinced by such post hoc tripe? Public defunding of UCAR is the only way to focus modelers’ minds, and is desperately needed.

Steve from Rockwood
September 10, 2014 5:00 am

When you’re always right, the most likely reason is you don’t know right from wrong.

tadchem
September 10, 2014 5:03 am

“We could have forecast the Challenger Explosion if only we knew O-rings became brittle and shrank in the cold” – but that was well known and dismissed by managers with more urgent priorities.
“We could have learned the Japanese were going to bomb Pearl Harbor if only we had the electronic wiretapping intelligence gathering capability the NSA has today.” – except the Japanese, having read Sun Tzu (500 BCE), understood that it was never a good idea to let your adversaries know your intentions.
“We could have predicted the Tacoma Narrows Bridge would collapse back then if only we had the sophisticated computer models of today” – but the Roman Legions well understood the difficulties (ignored by the Tacoma Narrows Bridge architects) of resonance with bridges.
“Of all sad words of tongue or pen, the saddest are these, ‘It might have been.'” – John Greenleaf Whittier
The fact is that ‘climatologists’ are still using the tools they used a third of a century ago, with disastrously wrong results. These tools – computer modelling – are essentially DRAFTING tools – and their ‘theories’ are still on the drawing board.

PiperPaul
Reply to  tadchem
September 10, 2014 5:27 am

Computer-aided design works VERY well, as long as the fabrication/construction crews actually follow the drawings. Engineering and design mistakes are just as likely now as they were in the drafting table days although these days there are fewer eyes actually checking the details and more eyes marvelling at the eye candy.

Ben M.
Reply to  tadchem
September 10, 2014 5:56 am

The Roman Legions understood the problems of marching on bridges, but I don’t think they understood vortex shedding.

MarkW
September 10, 2014 5:16 am

Looking at this another way, they are admitting just how inadequate their models were back when they started this scam.

mark, phd michigan state 93
September 10, 2014 5:17 am

“the rapid growth of computer power available to climate” clowns has made it possible to run 100’s of millions of oddly specified models, then conveniently choose the ones that will influence ignorant politicians to take the bait … if I had an i7 with 192 gig of ram back in 1988 I could have created Shrek and Despicable Me, and been a billionaire by now.

September 10, 2014 5:52 am

10 out of 262 lined up? It’s more statistically significant to say that m&m candies cause acne
http://xkcd.com/882/

ferdberple
September 10, 2014 5:56 am

If today’s tools for multiyear climate forecasting had been available in the 1990s, they would have revealed that a slowdown in global warming was likely on the way, according to new research. The analysis, led by NCAR’s Gerald Meehl, appears in the journal Nature Climate Change.
==============
OK, here is the Berple Challenge to Gerald Meehl. Tell us the date on which THE PAUSE WILL END.
If your models could reveal the slowdown, then for sure they can tell us when the slowdown will end. So will it be today, tomorrow, 1 year, 5 years, 10 years, 20 years?
And by the way, are you willing to put money on your prediction? Because I’m extremely confident that you cannot predict the end of the pause any better than a pair of dice can. And a pair of dice tells us the odds are very much against being able to predict when the pause will end.

knr
September 10, 2014 6:00 am

‘Almost all of the heat trapped by additional greenhouse gases during this period has been shown to be going into the deeper layers of the world’s oceans.’
Really? Can they show us the empirical data, not models, which prove this has happened? Otherwise, can they stop talking BS to cover the fact that the models “failed” to work, hence the need for “missing heat” in the first place.

Reply to  knr
September 10, 2014 6:19 am

The ARGO floats give us some ocean temperature data but only down to about 2000 meters.
NOAA, here: http://oceanservice.noaa.gov/facts/oceandepth.html
says the average ocean depth is about 14,000 feet. (4267 meters).
We aren’t even getting good coverage of the upper 2000 meters of the ocean let alone either the average depth or the “deeper layers of the world’s oceans”.
The only thing being shown here is our lack of information.

knr
Reply to  JohnWho
September 10, 2014 8:08 am

True, but notice they are trying to make their claims a “fact” by omitting the reality of how little is actually known, and by claiming that “models” produced empirically valid data for this, which they do not.
Dishonesty and massive egos seem to be career requirements for those working in climate “science”; these authors carry on that “fine tradition”.

Pamela Gray
September 10, 2014 6:06 am

Getting closer to 7-day forecasts that predict anthropogenic global warming. Yep. Sounds just about right in terms of the ingredients for AGW Kool-Aid. Weather is the new AGW. Climate is so in the past.

Richard Case
September 10, 2014 6:08 am

“There are indications from some of the most recent model simulations that the hiatus could end in the next few years,” Meehl added, “though we need to better quantify the reliability of the forecasts produced with this new technique.”
So, despite all the hoopla, he’s basically saying that he still doesn’t have much of a clue. Geez, what a bunch of crap.

ferdberple
Reply to  Richard Case
September 10, 2014 6:40 am

they have already quantified the reliability – 10 out of 262 – 97% chance of error.

nielszoo
September 10, 2014 6:16 am

They were only off a little bit. 96.2% of the runs predicted Mann made warming instead of the 97% consensus of scientific warming that we’ve been told is real. That other 3.8% is skeptical noise and probably should have been dumped as extraneous error… but someone noticed it was historically correct. Oops…

Harry Passfield
Reply to  nielszoo
September 10, 2014 8:15 am

And there, Anthony, nielszoo (and others) have delivered the best response that should be shouted from the rooftops whenever the Cooked 97% of scientists etc, etc paper is quoted. Something along the lines of:
When talking of 97%, remember, 97% of climate computer models FAILED to predict that AGW has peaked for the last 17 years! The very same models that are currently predicting thermageddon! Stick that in your POTUS!

thisisnotgoodtogo
September 10, 2014 6:21 am

soooooo….
They’re back to calling them predictions?

Hawkward
September 10, 2014 6:22 am

It’s strange; I don’t recall hearing any of the prominent “Climate Scientists” back in the 1990s saying something to the effect of, “we don’t really have the necessary tools yet to say with certainty, but we believe it’s likely that unless we reduce our CO2 output, there will be accelerated warming that could prove to be dangerous to mankind”. I remember them being quite certain of the predictions from their models and, as today, ridiculing anyone who dared to express any doubt about whether we could really forecast the climate. But of course now they have even more tools and technology, so this time they’re really really certain that catastrophe awaits unless we heed their advice.

Bernd Palmer
Reply to  Hawkward
September 10, 2014 6:28 am

… but 97% of the scientists seem to be convinced that they already had the right tools in the past and that they are sure that the warming is all man-made.

ezeerfrm
September 10, 2014 6:24 am

They will always be able to tell you two things:
1. What is going to happen 50+ yrs into the future
2. What they really meant to say would happen 5 minutes into the past

Matthew R Marler
Reply to  ezeerfrm
September 10, 2014 1:14 pm

That’s astute.

Tim
September 10, 2014 6:33 am

See, that proves it! Our climate models are so good now compared to what they were just 10 years ago, that now we really do know what will happen from now on with the global warming climate change thing. The hiatus will end really soon and the warming is coming back with a vengeance. The permafrost will belch out CH4, the oceans will turn to vinegar, and the poley bears will be dead. No one will be laughing at Big Al then, no siree!

JJ
September 10, 2014 6:36 am

If today’s tools for multiyear climate forecasting had been available in the 1990s, they would have revealed that a slowdown in global warming was likely on the way, according to new research.

Oh bullshit.
If today’s tools for multiyear climate forecasting could have predicted the current halt in warming 25 years ago, then those same tools could have confirmed, 5 years ago, that warming had been stopped for more than a decade, and they could right now be used in hindcast to demonstrate the precise cause of the halt. Yet those “climate scientists” had to be dragged kicking and screaming to the realization that their beloved warming has not been present for almost two decades, and many of them still won’t admit it even today.
And instead of newer, “more omniscient” tools with greater understanding providing a definitive explanation for why the warming stopped, we have a parade of ad hoc bullshit rationalizations that now total, what, some 40 or so contradictory excuses for why they were incredibly wrong but are still completely right about global warming? Uh-huh.
The problem here is not the capability of the tools for multiyear climate forecasting. It’s the attitude of the tools running them.

herkimer
September 10, 2014 6:39 am

I think we should clarify or redefine the “pause”. Is it the time between when global temperatures stopped rising and when they resume? What if temperatures do not remain flat as they are now, but start dipping, reach a trough, and only resume warming and regain the point where they first stopped warming some time much later? The latter case is the real pause, as happened between 1880 and 1930 and again from 1945 to 1980. These past cases are the real historical pauses, which we may face again, not the one-to-two-decade pauses that the alarmists now accept.

ferdberple
Reply to  herkimer
September 10, 2014 7:03 am

5 years ago the 97% consensus among climate scientists was that the Pause was not happening. Now the 97% consensus is that the Pause is real and has been going on for 15 years, but will end soon.
Question: the consensus was clearly wrong 5 years ago. Why would anyone trust it to be right now?
Fool me once, shame on you. Fool me twice, shame on me.

ferdberple
September 10, 2014 6:52 am

If today’s tools for multiyear climate forecasting had been available in the 1990s, they would have revealed that a slowdown in global warming was likely on the way, according to new research.
===============
BS. Only 10 out of 262 runs showed a pause. This means the models are saying there was only about a 3% chance “that a slowdown in global warming was likely on the way”.
This is consistent with what the models showed in the past: that a slowdown in warming was not likely. Yet the facts have turned out differently.
This is the inherent problem in prediction. There is a near-infinite number of possible futures reachable from our present situation. Some are long odds, some more likely, but no computer model can tell us which one we will actually arrive at.
Toss a coin 10 times. The odds of getting 10 heads in a row are identical to the odds of getting alternating heads and tails, yet most people believe the second sequence is more likely. These sorts of errors get transferred from people’s beliefs to computer models. Since the models are never validated, the errors are replicated from model to model, like a genetic defect.
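A minimal Python sketch of this coin-toss point (illustrative only, not from the comment; the trial count and sequence labels are arbitrary). Any one exact 10-flip sequence has probability (1/2)^10, however “patterned” it looks:

import random

TRIALS = 1_000_000
target_a = "HHHHHHHHHH"  # ten heads in a row
target_b = "HTHTHTHTHT"  # strictly alternating heads and tails
hits_a = hits_b = 0

for _ in range(TRIALS):
    seq = "".join(random.choice("HT") for _ in range(10))
    hits_a += (seq == target_a)
    hits_b += (seq == target_b)

# Both exact sequences occur with the same probability, (1/2)**10 ~= 0.000977.
print(f"P(ten heads)   ~ {hits_a / TRIALS:.6f}")
print(f"P(alternating) ~ {hits_b / TRIALS:.6f}")
print(f"exact value      {0.5 ** 10:.6f}")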

Joel O'Bryan
Reply to  ferdberple
September 10, 2014 7:11 am

Failure of the 97%

PiperPaul
Reply to  ferdberple
September 10, 2014 7:41 am

There’s that magical 97% again!

wws
September 10, 2014 6:59 am

I could have made a fortune on Wall Street last month if I had had today’s WSJ back then.

ferdberple
September 10, 2014 7:00 am

Although scientists are continuing to analyze all the factors that might be driving the hiatus, the new study suggests that natural decade-to-decade climate variability is largely responsible.
==============
if natural variability is largely responsible:
1. Why did the IPCC insist that natural variability was low?
2. Why is natural variability not responsible for the warming as well as the hiatus?
3. How can it be that natural variability only works in one direction, to stop warming?

Tim
Reply to  ferdberple
September 10, 2014 7:32 am

I still say we are doomed!

Gaadolin
September 10, 2014 7:00 am

“With four parameters I can fit an elephant, and with five I can make him wiggle his trunk.” (John von Neumann)
So true.

more soylent green!
September 10, 2014 7:06 am

Sounds like an admission that what they did use for forecasting was crap. (BTW, crap is a technical term used by programmers to describe junk.) How can we trust any of their past work?

MikeN
September 10, 2014 7:10 am

So the 10 simulations out of 262 is less than 5% (about 3.8%), while they report 95% confidence in high levels of warming.
Now what did those 10 simulations have to say about warming in the future?

ferdberple
September 10, 2014 7:19 am

Here are my climate predictions:
1. Unadjusted rural temperatures will remain the same or decrease for the next 15 years.
2. Adjusted temperatures for the past will continue to decrease, showing an alarming trend in current temperatures.
3. The difference between unadjusted and adjusted temperatures will show a high level of correlation with global warming, a much better correlation than with CO2.
4. Climate science will be found to be technically wrong but politically correct.

herkimer
September 10, 2014 7:21 am

If one believes that the major ocean cycles are the key natural climate drivers, then the 60-70 year historical Pacific and Atlantic ocean SST anomaly cycle, pole to pole, suggests that the SSTs of these oceans have peaked and may now trend toward cooling until about 2030/2040, not peaking again at the 2000/2010 level until 2060/2075. They previously peaked about 1880 and again in 1940/1945, and troughed in 1910 and again in 1975. The current pause hence may last for many more decades, not just a decade or two as some now claim. Even if one accepts that the solar cycle is the main driver and not the oceans, a further two-solar-cycle cooling period is indicated by its cycle.
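A toy Python sketch of the timing described here, under the loud assumption of a pure 65-year sinusoidal SST cycle with its most recent peak in 2010 (both numbers are illustrative assumptions, not from the comment or any dataset):

import math

PERIOD, LAST_PEAK = 65.0, 2010.0  # assumed cycle length (years) and most recent peak

for year in range(1880, 2081, 5):
    phase = 2.0 * math.pi * (year - LAST_PEAK) / PERIOD
    bar = "+" * max(0, round(10 * math.cos(phase)))  # crude text plot of the warm phase
    print(f"{year}: {math.cos(phase):+.2f} {bar}")

# A pure 65-year cosine peaking in 2010 also peaks near 1880 and 1945,
# troughs near 1912 and 1977, bottoms out again around 2042, and does not
# return to a peak until about 2075, roughly the dates in the comment.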

looncraz
September 10, 2014 7:34 am

Hindcasting is of only limited use for complex, open, chaotic systems, IMHO. Particularly when the model was built around the very events it was designed to model.
We could construct a model that adequately hindcasts winning lottery numbers with 90% accuracy or better – but still has only a 1% success rate for future numbers – maybe better than a random guess, but not enough so to make solid predictions (though, in this particular case, it would be well worth the gamble – unlike with climate models’ long-term predictions).

Reply to  looncraz
September 10, 2014 7:41 am

If a list of past winning lottery numbers counts as a model, I can guarantee 100% accuracy.

Sciguy54
September 10, 2014 7:47 am

For followers who are not from the US, Yogi Berra was a baseball catcher for the New York Yankees in their golden era. Although reviled by the local press because he was not handsome and elegant like Joe DiMaggio, Berra was upbeat, was one of the best-hitting and most effective catchers of all time, was married for a lifetime to a beautiful and faithful wife, and generally got the last laugh on his detractors by living life fully and well.
Although he was ridiculed by the MSM for being mentally slow and inarticulate, many of his utterances are now recognized for their uncanny kernel of truth, whether due to brilliance or random luck. A few that apply here:
“If you don’t know where you’re going, you might end up some place else.”
“It’s like deja vu all over again.”
“The future ain’t what it used to be.”
“We made too many wrong mistakes.”
“You can observe a lot just by watching.”
“All pitchers are liars or crybabies.” (substitute climatologists for pitchers)
and of course “It ain’t over till it’s over.”

Steve P
Reply to  Sciguy54
September 10, 2014 2:50 pm

10 World Series, 14 pennants, 3-time AL MVP.
The greatest catcher in the history of baseball – #8 – Yogi Berra
http://en.wikipedia.org/wiki/Yogi_Berra#mediaviewer/File:Yogi_Berra_1956.png
Baseball Digest, Sept. 1956
public domain, via:
http://en.wikipedia.org/wiki/Yogi_Berra

Sciguy54
Reply to  Sciguy54
September 10, 2014 7:49 pm

A few more stats for the baseball inclined…..As a catcher, on a team with the likes of Mantle and DiMaggio, he led the team in RBIs for seven consecutive years. During that streak in 1950 he struck out only 12 times in 597 official at bats.
For 5 seasons he had more home runs than strikeouts. Since 1901 this has only occurred 45 times (with 20 HR or more). Berra did it 5 times. And always remained a humble, upbeat, and positive guy, more interested in winning each daily challenge and enjoying life than worrying about the sometimes vicious personal attacks of his critics.

Johan
Reply to  Sciguy54
September 10, 2014 7:59 am

You forgot “I never said most of the things I said”

RobertC
Reply to  Johan
September 10, 2014 3:01 pm

I love Yogi’s comment about a famous New York restaurant: “No one goes there anymore, it’s always too crowded.” (Paraphrased.)

herkimer
September 10, 2014 7:48 am

If you want a reliable decadal climate forecast, “who you gonna call”? Certainly not the climate alarmists. The authors of the paper seem to claim that tools did not exist in the 1990s to predict climate pauses. That is like saying we could not tell time then because modern electronic watches did not exist. Yet all the records of the natural variability factors existed decades, if not half a century, before. In their scientific ignorance they belittled the impact of all the natural variability factors that they are now embracing, because their own global warming science failed to predict pauses…

Steven Mosher
September 10, 2014 7:55 am

Wow, lots of snark here. Drive-by snarking. I did find this, which is a good post:
“TBH, while in the context of the debate surrounding CAGW this looks like post hoc rationalization, in reality it is the sort of thing that routinely goes on in science, which is usually and generally self-correcting. Models need to be able to capture the range of variability in order to say something useful about what our climate might do, regardless of our impact on it. This can be seen as an attempt to refine and improve modelling, which is most definitely needed, since the models have clearly to date done a rather poor job of characterizing our climate accurately, especially in terms of being informative for policy-making.”
1. Yes, this sort of thing is routine; in fact it is part of the scientific method.
2. Yes, they are damned if the models didn’t work and damned if they try to improve them.

richardscourtney
Reply to  Steven Mosher
September 10, 2014 8:15 am

Steven Mosher
You assert

1. Yes, this sort of thing is routine; in fact it is part of the scientific method.

Please explain the “part of the scientific method” which is a post hoc excuse for model failure.
Richard

Reply to  richardscourtney
September 10, 2014 8:24 am

Exactly. To paraphrase Feynman: “Climate scientists don’t make predictions, they make excuses!”

more soylent green!
Reply to  Steven Mosher
September 10, 2014 8:40 am

Another great example of “consensus science.”

David A
Reply to  Steven Mosher
September 10, 2014 8:52 am

Please tell me what parameters they improved; and why do these ten models now have better hindcasting, and what do they mean for climate sensitivity in the future?
You see, I could have made 100% of the models better just by lowering the climate sensitivity to CO2 and adding a factor for the PDO and AMO ocean cycles, which were well known at the time. But common sense is not the goal of the climate science community, is it, Mr. Mosher.

more soylent green!
Reply to  Steven Mosher
September 10, 2014 9:22 am

Learning from past mistakes is a part of the scientific method. But that first requires an admission of being wrong.

KNR
Reply to  Steven Mosher
September 10, 2014 9:39 am

Yes, this sort of thing is routine; in fact it is part of the scientific method.
Oddly, you’re right, but you left out that for YEARS we were told these models worked and that anyone who questioned them was a fool at best. And they have not proved that the models can predict better than flipping a coin, yet they still think that massive changes to people’s lives, and the spending of huge amounts of money, can be based on them. The ‘missing heat’ is a product of model failure, and of a very anti-science view that their unproven theories can never be wrong.

Sciguy54
Reply to  KNR
September 10, 2014 9:53 am

sorry, KNR, I see I largely repeated your observation…. shoulda refreshed!

Sciguy54
Reply to  Steven Mosher
September 10, 2014 9:51 am

Re-evaluation and synthesis is part of the scientific method. But it is incongruous behavior from a community which has allowed (or encouraged) its proponents to argue that anyone who at any time doubts their methodologies and/or conclusions can only be motivated by some combination of mental illness, toothless ignorance, greed, or criminal malice.

Todd
Reply to  Steven Mosher
September 10, 2014 12:43 pm

No, they are simply damned for boldly claiming that their unverified models are sufficiently accurate to justify programs that adversely impact the future of hundreds of millions of people and the way of life of every person on the planet.

Patrick B
Reply to  Steven Mosher
September 10, 2014 12:50 pm

This is true only if the scientist says of the initial model, “This is my initial model; I think it is correct, but since it is unproved, we must wait until proper testing and real-life data prove it works before we trust it.”
On the other hand, when scientists either say, or refuse to correct politicians who say, “This is my model and it is absolutely correct and you should change the world’s economy because of it”, then I get to call them frauds rather than scientists, and never give them another chance to “fix” the model.

Matthew R Marler
Reply to  Steven Mosher
September 10, 2014 1:07 pm

Steven Mosher: 1. Yes, this sort of thing is routine; in fact it is part of the scientific method.
2. Yes, they are damned if the models didn’t work and damned if they try to improve them.

Nobody has been damned. It has simply been pointed out that there is no good reason to believe that the current models constitute an actual improvement in predictive ability. The authors seem unaware of the need for continuous testing against out-of-sample data until “forecasting skill” has been demonstrated.
The snark is in response to a couple of decades of scientists claiming that there was nothing important missing from their models. Now they believe that there is nothing important missing from the current models. Nothing has been demonstrated to have been learned! They merely chose a few of the large number of ways to tweak their models to fit an extant data set after they learned that tweaking was necessary and what the results of the tweaking had to look like. Had they chosen, they could have eliminated CO2 from the models entirely and tweaked to the same result (more properly, somewhere in the midst of the range of results.)

Reply to  Steven Mosher
September 11, 2014 5:26 am

“Yes, they are damned if the models didn’t work and damned if they try to improve them.”

Actually, way off point! They have never admitted the models do not work. If your position is that the models work, why are you fiddling with them?
And that is the problem. They want to have their cake and eat it too: fiddle with the models without admitting they have not worked.

hunter
September 10, 2014 7:57 am

So today’s tools would have, could have, should have predicted the pause, but today’s tools cannot predict the end of the pause.
So this paper is just more arm-waving post hoc BS from the climate kooks.

the pause expert
September 10, 2014 8:17 am

The AMO and PDO have been known for 200+ years (granted, we weren’t sure what they were or what was causing them, but they were still known). The renowned scientists finally recognized them in the late 1990s and/or figured out what they were.
Our esteemed high priests, the “we are smarter than everyone else” climate scientists, chose to ignore science and are now seeking an excuse for missing what was obvious common scientific sense to us dumb laymen.

Randy
September 10, 2014 8:20 am

So we still can’t model the PDO and AMO, nor do we have the data to even claim with any assurance that the deep ocean is warming (or cooling), nor can we explain why the heat went from collecting in the atmosphere to hiding in the ocean… BUT if we had today’s tools back in the 90s we’d totally be able to predict a pause!!! PFFFT. This is NOT science…

Joseph Bastardi
September 10, 2014 8:24 am

Just when you thought the excuses could not get any more absurd, they do. Laughable in the real world. Imagine me telling a client, 5 years after I cost him money with a bad forecast, “I could have forecast that now.” What arrogance. It’s astounding.

thisisnotgoodtogo
Reply to  Joseph Bastardi
September 10, 2014 10:40 am

Gerald Meehl, in “On the Waterfront”

Joseph Bastardi
September 10, 2014 8:25 am

And by the way, what is their forecast, not for 2050, but for 2020, 2025, 2030? I am out with mine and have been out with it since 2007 (cooling back to where it was in 1978, via satellite).

Steve Oregon
September 10, 2014 8:35 am

The schizophrenia, hypocrisy and mendacity of these climate clowns is astounding. The entire adventure has been, and is today, peddled with the false assertion that natural causes cannot have caused, and do not explain, the prior warming; that there is high certainty that most or all of the warming is human-caused.
Now, while they continue that chant, they are simultaneously attempting to redefine the natural causes which overpowered the supposed AGW as also being AGW.
“Almost all of the heat trapped by additional greenhouse gases during this period has been shown to be going into the deeper layers of the world’s oceans.”
In science, what does “has been shown” mean? Is that the same as proven, scientifically measured, data demonstrated or anything like evidence?
Or is “has been shown” merely made up crap?
Also, is this claim of robust modern tools and fresh hindcasting intended to bolster the convenient claim that the pause may last just long enough for no one to be held accountable?
If modern climate model tools are trained to support both AGW and the pause, life is good?
This way, as the length of the pause grows to greatly exceed the relatively short (late-70s to late-90s) warming period, our climate friends will be saying it was entirely expected, while continuing the alarmist mission.

The budgeting expert
September 10, 2014 8:35 am

My corporate budgets always match the final numbers – especially when I do the budget 6 months after the year end!

lemiere jacques
September 10, 2014 8:53 am

It is so hilarious. The point is not the pause, of course; what surprises me most is that they don’t question the meaning of having hindcast past temperatures “well” while ignoring such factors.

PeteLJ
September 10, 2014 9:07 am

“However, the naturally occurring variability in 10 of those simulations happened, by chance, to line up with the internal variability that actually occurred in the observations. These 10 simulations each showed a hiatus much like what was observed from 2000 to 2013, even down to the details of the unusual state of the Pacific Ocean.”
“Meehl pointed out that there is no short-term predictive value in these simulations, since one could not have anticipated beforehand which of the simulations’ internal variability would match the observations.”
Why don’t they present the results of the 10 “recreations”, show what these predict for the future, and show how they lined up prior to 1950? Or do they not show catastrophic warming ahead, and thus no “predictive value”?
Waste public money much?

Matthew R Marler
September 10, 2014 9:42 am

“There are indications from some of the most recent model simulations that the hiatus could end in the next few years,” Meehl added, “though we need to better quantify the reliability of the forecasts produced with this new technique.”
Do they happen to present summaries of all the “indications” from all of the most recent simulations?
The highlighted “link” appears to me to be an email address.
Just think how much better they would have been able to forecast the record of 2015-2025 based on the knowledge that they gain by 2030! It’s a part of the self-correcting nature of science. It also gives hope that by 2030 a good plan of action for 2015-2025 might be available.
Meanwhile, with flooding again in Kashmir and Pakistan, could someone take an interest in improving the flood control and irrigation infrastructure up there? Phoenix might think along those lines as well. California is having its worst drought in about 100+ years; we shouldn’t have to wait long for the worst flooding since, oh, sometime between 1840 and 1900. Is California preparing for it? Would the line of the “bullet train to nowhere” survive the flooding? Might Californians come up with a plan to recover/replace all those dead and dying fruit and nut orchards?

hunter
September 10, 2014 9:48 am

So the authors are stating:
“Our modern tools are good enough to predict in hindsight, but not good enough to predict in the future.”

LogosWrench
September 10, 2014 9:55 am

They can’t forecast the length of the current “pause” even with their “future tools”.
By the way, with everyone calling it “the pause”, they have already won the P.R. battle.

more soylent green!
September 10, 2014 10:14 am

If I had a time machine, I coulda killed Hitler!

Tim Obrien
September 10, 2014 10:14 am

Considering their prophecies keep coming out wrong, as Hillary would say, “What difference does it make?”

September 10, 2014 10:19 am

Too bad they don’t have global climate models from the future – the ones that won’t bother to include the trivial impact on climate caused by a small increase in atmospheric CO2.

David S
September 10, 2014 10:27 am

Part of the craziness about claiming to predict the climate future is that there is no agreement about the climate past. The gathering of temperature data is perhaps 150 to 200 years old, and the warmists, in a state of confirmation bias, homogenise the data to suit their theory. Then the skeptics and warmists look at historical information and indicators over thousands of years to model temperatures before then. The net result is that we use incomplete historical information to make incomplete future predictions.
Who cares what the climate does in the future? As if we can really change it anyway. Why not adapt, like every other species has had to do over the history of time?
The only logical impact that climate change policy will have on the future is to send the world into a regressive decline in global living standards. What warmist policies will do is the exact opposite of what they claim: destroying the future for future generations.

Tom D
September 10, 2014 10:34 am

Attention “Climate Scientists”:
For Sale: State of the Art (and shiny) DeLorean Automobile, Mr. Fusion Included . . . Get her up to 88 mph and your climate forecasting capabilities go through the roof. . . . well at least you’ll get to see some serious sh*t!
Sorry I’m keeping the Hoverboard.
– Marty McFly

herkimer
September 10, 2014 10:48 am

“There are indications from some of the most recent model simulations that the hiatus could end in the next few years,” Meehl added.
This statement will become famous in the annals of climate science as the pause stretches decades into the future, in my opinion (like the one about the disappearing snow).
The problem is not that we did not have the tools in the 1990s to potentially predict pauses (there were many such predictions), but that the mainstream climate science community and the technical-papers community seemed not to allow anyone who predicted pauses, slower warming, or any cooling to publish their findings. Even the media seemed to have been asked not to publish this information. Only unprecedented global warming predictions seemed to be allowed. Those who did release their findings by other means seemed to be blackballed, fired or ostracized by other mainstream climate scientists, their university or their employer. It still seems to be happening even today. Attempts seem to have been made even to hide the fact that the past 17-year pause had actually taken place, as we saw with the last IPCC report. So having more modern scientific tools is no panacea if the basic system seems flawed to start with and any predictions of possible pauses, or future extended pauses, are withheld from the public.

John G.
September 10, 2014 11:10 am

If the climate is a complex non-linear or chaotic system (and it almost surely is one of those), you can’t forecast the pause even if you have a climate model that is complete and perfect in every part (which is impossible as a practical matter). The state of any such system at a given future time is extremely sensitive to small perturbations that might occur (like a butterfly flapping its wings) and, in a chaotic system, to perfect knowledge of the initial conditions. You might be able to demonstrate that a pause can happen, but you can’t forecast it.
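A minimal Python sketch of this point about sensitivity to initial conditions, using the logistic map as a stand-in for a chaotic system (the map, the parameter and the perturbation size are illustrative assumptions, not from the comment or the paper):

# Logistic map x -> r*x*(1-x) in its chaotic regime (r = 4).
r = 4.0
x_a = 0.4           # one initial state
x_b = 0.4 + 1e-10   # a second state, perturbed by one part in 10^10

for step in range(1, 61):
    x_a = r * x_a * (1.0 - x_a)
    x_b = r * x_b * (1.0 - x_b)
    if step % 10 == 0:
        print(f"step {step:2d}: |x_a - x_b| = {abs(x_a - x_b):.3e}")

# The gap roughly doubles each step; by about step 40 the two runs differ
# by order one, so the tiny initial error has erased all pointwise forecast skill.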

Sun Spot
September 10, 2014 11:11 am

If the future had been known back then, we could have written better curve-fitting climate software to predict the future climate.

September 10, 2014 11:17 am

“We could have forecast the Challenger Explosion if only we knew O-rings became brittle and shrank in the cold, and we had Richard Feynman working for us to warn us.”
Or they could have listened to the engineers who said (before the launch) don’t launch, because the air temperature was below the design limits. Plus the warnings about the design beforehand.

more soylent green!
September 10, 2014 11:38 am

So it’s official now — there is a pause? Is there a consensus on that?

September 10, 2014 11:38 am

Short translation: Our models were wrong.

James Evans
September 10, 2014 11:46 am

So, when do the models say that the pause will end? I’m all ears.
Predict that now, with your incredible new high-powered models. If you get it right, I’ll be impressed. Go!

rgbatduke
September 10, 2014 11:55 am

I actually think that it isn’t entirely unlikely that their assertion is correct. Even the earlier models followed the existing trend for a decade or so before diverging. However this doesn’t really resolve the problem with the models. They have, and likely will continue to have, two basic problems.
a) The result above directly puts the lie to the assertion that we know that more than half of the warming of the last half of the 20th century was manmade. Actually, it is pretty much an explicit proof that it probably wasn’t.
b) The point isn’t whether or not one can build models that are initialized in “the late 1990s” and that can be made to work through the first decade of the 2000’s. It is whether or not models can be initialized in 1950 and run to the present. It is whether or not models can be initialized in 1980 and run to the present (much lower hanging fruit!). It is whether or not models can be initialized in 1850 and run to the present, correctly tracking the rise and fall of HADCRUT4. When they can do those things, come talk to me. In fact, when they can actually predict the future with skill come talk to me, because all they are saying is that now, using different training data, they can “predict” a particular holdout set of trial data if they work pretty hard and know the answer before they start so that they can tweak things until they get it.
Baaad modellers, baaad. Convince me that your build process was double blind so that you didn’t know what you wanted to get. Oh, wait, it wasn’t, any more than the first round of model building was blind. It got what the modellers wanted to get. Only time will tell whether or not the “super-model” (model of models) thus built has any predictive skill, because it’s a lot easier to predict the future when you know what it is going to be beforehand and doesn’t really mean the model has any real skill. If they were totally honest in its construction it might be. But then, EVEN if they were honest and really did get the next decade right without cheating, they still have to see if the model works outside of the training+trial data to predict the definitely truly unknown future.
Is it “likely” that the models are now good enough to predict the future where it matters, thirty to fifty years out? Not terribly. Consider:
The top article doesn’t even say on which side of the major climate event of the late 1990s (the 1997-1998 Super El Nino) they initialize on — I have to guess on the FAR side of it since without putting the ENSO event in no model is going to get the right answer because most of the warming observed was rather obviously directly driven by that discrete event, not by anything gradual associated with CO_2.
The top article asserts that heat going into the deep ocean is responsible for the pause, but that doesn’t explain why the heat did not go into the deep ocean in the 15 year stretch from 1983 to 1998, or why it did before that, or why it is now. Sure, they built a (set of models) in which this could happen and it worked better, but now do those models still work on the other events in the non-uniform climate record it needs to explain?
The top article asserts that they are still forming a MME mean, and it is this mean that is predictive. Why? A single working model ought to give the right answer. The climate isn’t voting to accept the average of many PPE runs, or the average of the average of many PPE runs from many “independent” (not!) models. It has a unique dynamical evolution that isn’t the outcome of any sort of “vote”. To the extent that the envelope of PPE runs of any given model includes the actual climate one cannot, perhaps, reject the model (one model at a time) but neither can one use the PPE mean of the model as a particularly good predictor of the climate unless the PPE mean itself is in good agreement (one model at a time). To assert anything else is to assert that somehow, the “differences” between climate models that share substantial parentage and initialization are normally distributed without skew across a space of random deviations from a perfect model, and while that is, of course, accidentally possible one does not have any good reason or argument for thinking that it is true.
The average of many bad models does not, in general, in physics, make a good model. Indeed, it nearly always makes a worse model than the best model in the ensemble. One doesn’t gain by mixing five ab initio Hartree models in with one semi-phenomenological density functional model, one loses. Don’t “improve” it by adding two more DF models — pitch the Hartree models. This alone would substantially improve the silly CMIP5 MME as presented in AR5. Let me put it in bold so nobody can miss it:
Get rid of the broken models. Subject all models to a multidimensional hypothesis test and stop using models that egregiously fail, or even merely do poorly, on it. Use models that actually do, empirically, turn out to have maximal skill.
Or some skill at all. At predicting the future, not the short-time evolution from a carefully chosen initial condition after making changes that are guaranteed to reduce the warming observed for just the right interval to reproduce the trial set.
Still, this is good news. This is basically a formal announcement of what everybody knows by now anyway — the models of CMIP5 have now officially failed. They do not work out to 20 years, let alone 30 or 40. A selected, much smaller, set of revised, possibly improved models have been created that once again appear to work, when initialized in such a way as to avoid a point where they would instantly fail (and a cynic has to believe probably DID fail, motivating their choice of starting point) over a decade plus reference period (and don’t you just know that they were tweaked until they did — I very, very much doubt that this was a double blind experiment or blind to the start date!). Perhaps this smaller set, improved, and re-initialized, will work better and make it out to twenty whole years before egregiously diverging when nature does something else unexpected and dynamically invisible at the model resolution.
One does, however, end up with many questions.
First: If these new, improved models are run with the new ocean-heat-sucking dynamics from year 2000 initial conditions so that they remain flat for 14 years in spite of all the new CO_2 and “committed” warming from past CO_2 (whatever that is) out to 50 and 100 years, do they still produce 5 C warming by 2100? Certainly not, one would think. The interesting question now is how much do they end up with? It pretty much has to be less than the central estimate of AR5, because that estimate was made without any consideration of a heat-sucking multilayered ocean, which can eat the “missing heat” pretty much for centuries without warming the atmosphere a whole degree — or not — depending on nonlinear switches we have yet to discover. So what is it? 2C? 1.5C? 1.0?
Note that if they assert — sorry, if their honest and well-intentioned models now predict — errr, I mean project (non-falsifiable version of predict) 1.5 C of warming by 2100 (a third of which we’ve already seen) then the models are basically agreeing with what has been said by lukewarmists and rationalists on WUWT for some time now. Hooray! The crisis is over! Perhaps now we can try to cure world poverty, end the pointless deaths of children who live in energy-poor squalor, invest in universal literacy, work for World Peace ™ — that sort of thing — with the share of our gross product that is currently going to solve a non-problem leading to a probable non-catastrophic, fairly gentle, warming that might well prove to be beneficial more than harmful. Much like the fairly gentle warming that has persisted since the LIA.
Second: According to these new, improved models what fraction of the warming of the last half of the 20th century was natural? Again, it is difficult to imagine that that fraction will not have to be substantially downgraded, because the new models permit the ocean both to eat the heat (so to speak) and to cough it up again (or at least, to stop eating it). I repeat, it would be lovely to understand what precisely triggers one mode vs another, because honestly, I have a hard time imagining what it could be. They are attributing the pause to natural variation, so clearly natural variation can be greater than any anthropogenic warming for at least the length of the pause.
Again, this is no surprise to anyone on WUWT, but this will directly contradict statements made repeatedly, with ever greater completely unfounded “confidence”, in the SPM of the ARs. Dare we hope to discover that according to the new models more than half of the warming observed from 1950 could be natural, since they obviously have tied their models to something like the PDO, some persistent alteration in circulation that can modulate the ability of the ocean to take up heat and buffer climate change?
Third: I’m certain that the models have no room for Mr. Sun to play any significant role, but one thing that the paleoclimatological record clearly indicates is that at certain points the Earth’s climate system is naturally not only sensitive, but enormously sensitive, to small changes in the drivers. Some of the climate transitions of 5 to 15 C appeared to have occurred over times as short as a single decade! Mostly to the cold/glacial phase, it has to be acknowledged, but coming out of glaciation could be quite rapid as well. Also, the Eemian — without CO_2 — was much warmer than the Holocene now or even the Holocene Optimum, and we don’t know why. We haven’t any clear idea of what can create the natural conditions for rapid warming during an interglacial to temperatures much warmer than today but we know that such conditions have existed in the past.
Could Mr. Sun have any nonlinear impact on the Earth’s climate? The late 20th century was a time of high solar activity (not grand maximum high, but high). There were some alterations in climate chemistry and possibly planetary albedo that were at least interestingly coincident with the reduction of solar activity in the 21st century. There isn’t any really good or compelling correlation between solar state and climate over the time we have pretty good records of solar state (which can be measured anywhere, and is) and terrible records of global climate (which has to be measured everywhere, but isn’t), but that isn’t surprising given the uncertainties. One of the great virtues of our era is the existence of far, far better data sources — in particular satellites that can actually make systematic global measurements over long periods of time — that might enable us to address this as one of many mechanisms that might be the nonlinear “switch” controlling the ocean’s role in buffering warming, or the nonlinear “switch” that can cause rapid changes in average albedo, or something else neither I nor anybody else has thought of yet that might be the mechanism responsible for the rapid, catastrophic as in mathematical catastrophe theory catastrophic climate changes in the past, transitions between two (or more) locally stable climate configurations/phases potentiated by comparatively tiny shifts in the system.
So I personally welcome this paper. It is what science is all about. It is a step in the right direction. I expect that it will have a substantial impact — primarily on the excessive credibility assigned to earlier model-based conclusions, and hopefully to the credibility assigned to the new models and their conclusions. Good words to use frequently in bleeding edge science: “We really don’t know that yet”. “I’m not sure”. “Future cloudy, try again later”. (Oh, wait, that’s the 8-ball…:-)
In the meantime, don’t worry if it oversteps its bounds and overextends its conclusions. Science is, in its own ponderous way, eventually self-correcting, if only after Nature reaches out and bitch-slaps you with a direct contradiction of your pet theory. If “the pause” continues for a few more years, this is only the first of many papers that will be produced to try to understand it, and every one of them will at the same time refute earlier work that pretty much excluded any such event. Some of the models built might prove in the future to have some actual skill. Or not.
Interesting times.
rgb

Matthew R Marler
Reply to  rgbatduke
September 10, 2014 12:55 pm

rgbatduke: Note that if they assert — sorry, if their honest and well-intentioned models now predict — errr, I mean project (non-falsifiable version of predict) 1.5 C of warming by 2100 (a third of which we’ve already seen) then the models are basically agreeing with what has been said by lukewarmists and rationalists on WUWT for some time now.
Good post. I had not wanted to read or write a long post on this topic, but that was worth the read.
It is a step forward, but there is no reason to believe that the models they have now will do a better job at actual *prediction* than the models that they admit have failed.

john robertson
September 10, 2014 12:08 pm

Once again proving Climatology is near impossible to satirize.
Collecting tar and feathers.
Feathers are easy; tar is a little harder to find. Perhaps I should substitute honey. Much more environmentally correct.
At least the bears will find these charlatans attractive.
Possibilities for a new reality/survival TV show, cocooned in an insulating layer of feathers our stalwart Climate Shaman is released into wild bear habitat.
Will he save the bear from starvation?
Will the bear spurn this tainted bait?
Film at…
After all, now that it is normal to post the barbarity of beheading prisoners all over the internet, what’s a little wildlife/charlatan interaction?

JJ
September 10, 2014 12:18 pm

Steven Mosher unscientifically pontificates:
1. Yes, this sort of thing is routine; in fact it is part of the scientific method.
2. Yes, they are damned if the models didn’t work and damned if they try to improve them.

1. No. Ad hoc reasoning is a logical fallacy, not a component of the scientific method.
2. No. They are damned when they didn’t bother to check to determine whether or not the models worked before they attempted to use them for a political takeover, and they are doubly damned for now claiming to have improved those models without first checking that bald assertion, either. In sum, it amounts to lying.

Robert of Ottawa
September 10, 2014 12:24 pm

Did they go to the John Kerry school of excuses:
We were against the pause before we were for it

Billy Liar
September 10, 2014 12:37 pm

“If only we’d had the right monkeys in the 1990’s we’d have produced Hamlet by now.”

jarthuroriginal
September 10, 2014 12:48 pm

You only need one simulation, the correct one.

DrTorch
September 10, 2014 12:56 pm

So some models predicted the pause, but it’s of no value b/c “scientists” don’t have any way to figure out a priori which prediction is right and which ones are not.
Same thing happens to astrologers.

rgbatduke
Reply to  DrTorch
September 10, 2014 1:55 pm

Well, no, they didn’t predict the pause, as that is a statement about the future, and the models in question have not yet been exposed to the pitiless gaze of the future. They hindcast the pause, after it already happened, when initialized “in the late 90’s” or right before the pause occurred. It is difficult — or even impossible — to say how much the modellers tweaked their models so that this fortuitous occurrence occurred. It is difficult — or even impossible — to say whether or not they would have had the same success if they’d started the models off in 1995 or 1990 or 1980 or 1950 and tried to hindcast all of the temperature record after any of these dates with the same success (or how long the models ran, on average, before deviating significantly) because AFAICT from the top article, this simply hasn’t (yet) been done.
But even so, getting a consensus pause out of a collection of models at all is pretty impressive. The CMIP5 models don’t allow for any real possibility for that.
Still, I have to say in boldface to the authors of the study: Beware Data Dredging!
If one takes the models of CMIP5 and runs them, starting in (say) 2000, some of them are going to run hotter than others. Some of them will exhibit a trend closer to the pause and others will exhibit a trend farther from the pause, when started with these particular initial conditions at this particular time. Overall, rejecting the obvious losers is a good idea, but really if we did that almost all of the models would already be rejected on the basis of performance post the CMIP5 reference period. And it is a simple fact that if we do reject the losers, the winners will by construction end up closer to the actual data. That does not necessarily mean that the winners are better models and are going to be more likely to predict the future.
Let me ‘splain. No, there’s too much. I will sum up.
Suppose I had a timeseries curve I wanted to “predict” and twenty models to use to predict it. The models, however, are nothing but random number generators (all different) geared up to produce a random walk in the curve’s variable. I initialize them all from common data (the same seed), run them, and reject the worst ten after computing some measure of their goodness of fit to the curve.
Then it is a simple fact that the mean of the remaining ten will be a much better fit to the curve than the original mean of all twenty — probably more accurate, certainly less variance.
I might be tempted, then, to assert that this selected set of random number generators, initialized with any common seed, generate a good fit to the timeseries, and that its predictions/projections/prophecies should be “trusted”.
Anyone here think that this is a good bet?
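A minimal Python sketch of this thought experiment (the target series, the walk parameters and the 50/50 train/test split are illustrative assumptions, not from the comment):

import random

random.seed(1)
N_FIT, N_TEST, N_MODELS = 50, 50, 20
target = [0.02 * t for t in range(N_FIT + N_TEST)]  # hypothetical "observed" series

def random_walk(n, step=0.1):
    x, out = 0.0, []
    for _ in range(n):
        x += random.gauss(0.0, step)
        out.append(x)
    return out

models = [random_walk(N_FIT + N_TEST) for _ in range(N_MODELS)]

def rmse(series, ref, lo, hi):
    return (sum((series[t] - ref[t]) ** 2 for t in range(lo, hi)) / (hi - lo)) ** 0.5

# Keep the ten walks that best fit the training window, as in the comment.
best = sorted(models, key=lambda m: rmse(m, target, 0, N_FIT))[:10]
mean_all = [sum(m[t] for m in models) / N_MODELS for t in range(N_FIT + N_TEST)]
mean_best = [sum(m[t] for m in best) / len(best) for t in range(N_FIT + N_TEST)]

print("in-sample RMSE, mean of all 20:     ", round(rmse(mean_all, target, 0, N_FIT), 3))
print("in-sample RMSE, mean of best 10:    ", round(rmse(mean_best, target, 0, N_FIT), 3))
print("out-of-sample RMSE, mean of best 10:", round(rmse(mean_best, target, N_FIT, N_FIT + N_TEST), 3))
# Selecting on fit makes the culled mean look better over the training window
# by construction, but the walks contain no physics, so the apparent skill
# says nothing about the unseen half of the series.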
That’s one of many reasons that while it is good that they winnowed out the worst performers in the CMIP5 collection, used “improved” versions of the rest (and I have no reason to doubt that this is true and that they are in fact improved in e.g. spatiotemporal resolution or are supported by more runs and get better statistics or have fixes to previously poor implementations of some of the physics) and found an initialization such that they could come much closer to the actual climate, this is creating a new super-model with a new training set, not validating either the collective super-model or the individual models contributing to it. At best one can say that its average managed to reproduce a single trial set from a special start. It remains to be seen whether or not it can predict/project/prophesy the future with any skill. It might — if all work was done honestly there is some reason to hope that it might. But sadly, it might not. One might well ask why we shouldn’t just take the best model and use that as the model that is most likely to predict the future. One might ask (since we want to use terms such as “most likely”) what the quantitative basis is for assigning any “likelihood” at all (as, for example, some actual estimate of probability) that any given model will be predictive in the future, or the collective mean of all of the models, or the prognostications of the local bookie.
That’s really the one question nobody ever asks, isn’t it? Statistics is all about being able to quantitatively compute probabilities. We don’t usually use statistics to say that “A is more likely than B”, we try very hard to say “The probability of A, according to the following (possibly data based) computation is P(A), the probability of B is P(B), and the number P(A) > P(B)”. They don’t ask, because that question is essentially meaningless in this context. We have no way to axiomatically assign any particular probability that the MME mean of any set of models (especially models of this complexity that share a substantial code and assumption base) will in any traditional sense “converge” to the true behavior or deviate from it by some sort of computable standard error. We literally cannot say how likely it is that the models will have skill in the future by doing any defensible computation.
Amazingly, that stops absolutely nobody from making all sorts of pseudo-statistical nonsense assertions of “confidence” concerning all kinds of predictions/projections/prophecies regarding the climate. The SPM for AR5 reasserts high confidence that over half the warming observed in the latter 20th century to the present is anthropogenic. How, exactly, do they compute that confidence? What constitutes “high”? If I make a statement about the confidence I have that a random number generator I’m testing is or isn’t a “good” one, at least I completely understand how I compute p, and know exactly how to interpret the p-value I compute and how much to trust it as a sole predictor of “confidence”. How is this done in climate science? To take models that are actively failing and base the predictions on them? That makes the models Bayesian priors to the computation of probability, and I would bet roughly a zillion dollars that even this was never actually done; and if it was, it was probably done without the Bayesian correction that would render the posterior probability of the truth of the statement basically unknown: garbage in to a garbage model, garbage out.
Otherwise, where exactly in AR5 are these confidence levels quantitatively computed, and from what assumptions and data? All one learns in chapter 9 is that one cannot have much confidence in any of the predictions of the models in CMIP5 — as the top article finally de facto acknowledges by doing some of what should have been done before AR5 was written, even though it would have devastated its conclusions by rendering them almost completely uncertain, with computable “confidence” so low that they aren’t worth the paper they are printed on.
We’re still in that state for the new, improved, smaller set of better performing models. They arguably have better performance, at least for one interesting trial set. Let’s see how they do in the future. Maybe one day, one can assign actual confidence intervals with some quantitative backing instead of using a word that sounds all professional and statistical when what one means is “In my own, possibly biased, professional and well-informed opinion…”
Statistics was invented partly to get away from all of that, as those are the weasel words of the racetrack tout. “Hey, mate, bet on Cloud Nine in the seventh race. I have the greatest confidence (based on my erudition and many years of experience at the tracks) that he will win!”
rgb

Reply to  rgbatduke
September 10, 2014 1:59 pm

If they claim they could have predicted the “pause”, then let’s see them predict when the “pause” will end, and when global warming will resume.

David A
Reply to  rgbatduke
September 10, 2014 3:28 pm

RGB, thank you for your two posts. I decided to ask an earlier poster, who was defending the models in general, this…
“Please tell me what physics parameters they improved; and why do these ten models now have better hindcasting, and what do they mean for climate sensitivity in the future?
You see, I could have made 100% of the models better just by lowering the climate sensitivity to CO2 and adding a factor for the PDO and AMO ocean cycles, which were well known at the time. But common sense is not the goal of the climate science community, is it, Mr. Mosher.” (end quote)
I really do not understand climate science at all, I guess. Regarding “the ocean ate my homework”: does this summarize the claim?
CO2 increased LWIR back-radiation, which (losing very little of that energy to evaporation) somehow bypassed the first 700 meters of ocean and, at less than half the ocean-warming rate the models predicted, added (poorly measured, with large error bars) heat to the deep oceans, where it will hide for some short time, keeping separate from the rest of the deep oceans, and soon it will, or maybe could, come screaming out of the oceans and cause catastrophic disaster worldwide.
Did I get it right?

herkimer
September 10, 2014 1:14 pm

If one sets oneself up as an expert in predicting global temperatures 100 years ahead, and one then issues a climate report showing unprecedented warming with an almost straight-line temperature curve to 2100, with no pauses or clarifying notes to that effect, one can expect people to ask questions. If over the last 130 years of climate history there have been at least two major pauses (no additional temperature-anomaly increases during these periods), such as the periods from 1880 to 1930 and again 1945 to 1980, you had better have some solid, indisputable scientific evidence why at least two such pauses will not happen again during the next 100 years. The least one should present is a risk analysis of what future global temperatures might be should the greenhouse gas theory prove wrong, or less significant than the natural variability of the 60-70 year ocean cycle. Presenting only worst-case scenarios of rising global temperature is not a true or complete risk analysis. You might not be able to predict the exact timing or duration of such pauses, but you should comment on their possibility, and the risk thereof, should they happen and should they prove somewhat similar to the past ones (and not just a pause of a decade or two).

September 10, 2014 1:45 pm

Uhhhh… so now they admit the models we mortgaged our future on were wrong, but now they almost have it down, so it’s time to take out a second mortgage?

Tom in Florida
September 10, 2014 2:15 pm

You must experience current climate before you can say what your prediction would have been.

mikewaite
September 10, 2014 2:39 pm

This is a very frustrating report. All that those of us without access to a library with a subscription to Nature Climate Change know is that 10 models (is that 10 different teams, or 1 team with 10 different sets of parameters?) have successfully tracked global data from the 1800s to 2010, including therefore the tricky 0.3 C step at 2000/1.
This should be cause for general celebration. Assuming that these are US institutes that have succeeded, the US Govt should be justly proud that its funding in this area has at last borne fruit.
From now on the world knows what to expect, by running the models forward with the set of parameters that accurately modelled past behaviour.
Forget the 252 failed models: why are the successful models not headline news, and what exactly were the critical parameters that made these 10 models work so well?
Is there an academic on this site who can find his/her way to the university library, read the paper and report back? (The doi link leads to the paper, but the figures in the abstract are too small to read.)
Could it be that the 10 models that work do not award CO2 and its radiative forcing the assumed predominant role that it has previously enjoyed?

rgbatduke
Reply to  mikewaite
September 10, 2014 7:23 pm

Don’t get carried away. These models are far from “proven”. The good news is that they apparently threw away a whole stack of bad models instead of continuing to average them in as if they were good models. That alone can do nothing but improve the agreement of the models with reality, as one carves an elephant by cutting away everything that doesn’t look like an elephant. But that does not mean that the model (or models) they have created will track the elephant of the past as it evolves into a giraffe, or a mouse, or a T Rex. The parts they cut away probably wouldn’t have done the job, but the parts they have left may not either.
I wrote a couple of fairly detailed posts up above on what we can hope for, what we can expect, and what I’d like to know in terms of omitted (from the top article, anyway) details. What I hope for is that this publication ends up being a tacit acknowledgement that the models upon which AR1-AR5 were based are, for the most part and being very polite, “in error” and are not useful or to be relied on in any way. What I also hope for is that the new/rebuilt/culled models, as you suggest, at the very least downgrade the direct effect of CO_2, the feedbacks (which have been egregious) and upgrade significantly the component of past and present warming that is probably natural to more than half “with confidence” (hey, I can use the term in an unsupportable way too — this is politics or at best a pure guess on anybody’s part until the day they can present a quantitative basis for any apportioning that doesn’t depend on a small mountain of debatable Bayesian assumptions). Finally, I hope, and rather expect, that the rebuilds significantly drop the overall climate sensitivity, by around a factor of 2 relative to AR5 but I’d be happy to get a factor of 1.5, down to solidly under 2 C by 2100.
What I expect is that this result will be initially heavily downplayed and quite possibly even bashed as some sort of betrayal of a political party line, or that calculations will quickly be run with the models that show that the climate rapidly turns around and catches up and in the end there is just as much warming, but it all happens (safely) later, as otherwise if they predict “warming will start up again by year 2017” they run a pretty serious risk of being proven wrong while the metaphorical ink is still dry on the result. But I don’t think that they will be able to avoid a substantial (and, may I say, enormously well-deserved) weakening of public and political confidence in the overall CMIP5 models and the often and loudly overstated conclusions of AR1-AR5.
rgb

Geo
September 10, 2014 3:05 pm

Wow, that’s freakin’ brilliant. They figured out how to have their cake (1990s alarmism) and eat it too (but *now* we could have predicted it).
So what do their predictions show for the next 10 years? Or are they only going to tell us 10 years from now that they had it right all along, no matter the result?

Neil Jordan
September 10, 2014 3:44 pm

“I could have been a contender.”

TYoke
September 10, 2014 4:47 pm

We shouldn’t take our eye off the ball.
The 2007 version of the “settled science” is now acknowledged to be WRONG, FALSIFIED, INCORRECT.
This is so despite thousands of assurances at that time that they knew what they were doing.
Al Gore refused to debate because “the science is settled”. Instead, his idea was to see that “climate change denial” was treated as a sin like racism, and that those who were skeptical of the 2007 settled science were ruled outside of polite society.

thingadonta
September 10, 2014 7:11 pm

I wouldn’t have lost all that money in the GFC either.

Mac the Knife
September 10, 2014 8:13 pm

“We could have forecast ‘the pause’ – if we had the tools of the future back then”
They use hindcasting to try to validate their forecasting of a climate system that has more variables than we have yet identified, let alone measured empirically. We do not have sufficient duration, quantity, quality, or precision of data on the known, let alone unknown, primary climate variables to construct climate models that can pass validation tests. The result is climate fortune telling, with the climate models serving the modern-day function of the crystal ball.
I see warming…. CO2 induced warming, in your future. You must stop exhaling!
I do not believe the climate modelers could find their ‘hind c’ass’ with both hands, while using a well lit bidet….

beng
September 11, 2014 5:32 am

A wordy response that could’ve been simplified to “We underestimated (or ignored) natural variations”.
Do they still do so?

Pat
September 11, 2014 8:05 am

Not to worry about not having the tools back then. With the tools they have now for restating the recorded temperature data to make it homogeneous, in a few years they will have eliminated the pause from the historical records and all those old models will be accurate again.

Ray H.
September 11, 2014 11:52 am

I’ve read this article several times, and it is still not clear to me how they came up with the results for the ‘decadal’ models. They used 16 contemporary models and ran them for each year from 1960 to 2005. Each model was initialized with the observed conditions of that particular year and then allowed to run for ‘3 to 7’ years. The claim is that, starting in the late 1990s, the models showed a temperature plateau similar to what has been measured.
So, my questions are:
1) If a decadal analysis was desired, why not run the models for 10 years? (The model results deviated from the desired result long before 10 years??)
2) If a shorter time period was desired, why not, for the sake of easier analysis, pick a single interval of 3, 5, or 7 years? What is the reason for a variable run time? (Perhaps the models deviated after only 3–7 years. Picking a fixed interval of only 3 years is too short for a decadal analysis, so the models were allowed to run until they deviated, and the 3–7 year reporting interval is the result??)
3) Why stop at 2005 when the temperature plateau continues to this day? (Perhaps the models do not work after 2005??)
4) Why start at 1960? (Perhaps the models do not work before 1960??)
5) How well did the models predict the temperature before the late 1990s? (This is not mentioned. Who knows??)
6) How were the results determined? Specifically, for any given year, results would be available from the 16 runs initiated that year as well as from runs initiated in the previous 3–7 years. Does this imply that the results for any given year are derived from 64–128 runs? If so, are you using the average (mean) of the results? Or are you picking a single run, or a few runs, that randomly match the plateau and disregarding the rest? (Similar to the 10 out of 262 long-term runs that randomly match the temperature plateau. By the way, the idea of these 10 ‘randomly’ matching is from the study author, not me??) One possible averaging scheme is sketched below.
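For illustration of question 6 only, here is one hypothetical way (in Python) that overlapping initialized runs could be pooled into a single value per verification year; whether the paper actually averages this way is exactly what is unclear.

import numpy as np

def verify_year_means(runs):
    # runs: list of (start_year, forecasts) pairs, where forecasts is a
    # 1-D array of annual values beginning at start_year.
    # Returns {year: mean over every run window that covers that year},
    # so a single year can pool values from many overlapping runs.
    pooled = {}
    for start_year, forecasts in runs:
        for lead, value in enumerate(forecasts):
            pooled.setdefault(start_year + lead, []).append(value)
    return {year: float(np.mean(vals)) for year, vals in sorted(pooled.items())}

With 16 models initialized every year and run for 3–7 years, a given verification year would indeed pool on the order of 16 × 4 to 16 × 8 values, which is where the 64–128 figure above comes from.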

Ian
September 11, 2014 12:57 pm

The effect of cold on O-rings was well known before the tragic Challenger launch; the knowledge had either not been passed on or was ignored. Even today, new aircraft are subjected to extreme testing to prove systems will not fail at extremes of temperature.

catweazle666
September 11, 2014 1:52 pm

More twaddle.
Seems there’s a lot of it about.

Seth
September 11, 2014 11:59 pm

re: “Even if they could have forecast the pause, they wouldn’t have. That would have undercut their dire message that we had to act now because global warming was accelerating and would soon reach a point where it would become irreversible.”
There hasn’t been a pause in global warming (as sea level rise uncontroversially shows).
There has been a slowing of the warming of the near-surface air. It’s the most-measured of the places where the excess heat goes, but it holds an unimportant proportion of the energy.
There’s been plenty of warming.

Reply to  Seth
September 14, 2014 2:03 pm

Seth
Please don’t be silly.
Global warming is, and always was, an increase in the global average surface temperature anomaly (GASTA). Global warming is not, and never was, anything else.
You are trying to change the definition of global warming because global warming has stopped. But your attempt to ‘move the goalposts’ is too late.
There are now 52 published excuses for global warming having stopped.
Richard

Emerson
September 14, 2014 1:30 pm

And now we have more excuses for the models’ failures: last decade’s slowdown in global warming was enhanced by an unusual climate anomaly. (Application of the Singular Spectrum Analysis Technique to Study the Recent Hiatus on the Global Surface Temperature Record. PLoS ONE, 2014; 9 (9) http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0107222). A funny one. When the man-made climate anomaly seems to disappear and the models fail, one just needs to invoke an extra, non-human climate anomaly that cancels it; then the man-made anomaly can still be said to exist while disappearing from view.

Mervyn
September 15, 2014 1:03 am

So much for the 2007 IPCC 4th Assessment Report… the so-called “gold standard in climate science”… “the settled science”… “incontrovertible”!

george e. smith
September 15, 2014 11:43 am

Well, I assume that the “tools” they talk about needing would be a real physical model of this planet and all the physical interactions that affect weather/climate.
Such a tool, if it existed, would of course be able to replicate the past, since we already know what that was.
So, for all you modelers out there who create real models of this physical planet, perhaps using a set of data numbers that gets added to maybe once every day:
Since you already know what the most recent day’s number is (maybe today’s, maybe yesterday’s), use those numbers and your best model to predict whether tomorrow’s new number will be less than today’s number, equal to today’s number, or greater than today’s number.
That is a one-choice-out-of-three-possibilities test of your model. No need to predict / project / whatever out 100 years; just to tomorrow will do.
Now, having used your model to get tomorrow’s direction, bet your entire net worth on the accuracy of your selection, to be determined once tomorrow’s number is known.
I predict / project / whatever that you will likely lose everything you have.
Your model cannot even answer that simple question about the future.
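That one-in-three test is easy to state precisely. A toy sketch in Python, with made-up random data standing in for any real series:

import random

def direction(today, tomorrow):
    # +1 if tomorrow's number is greater, -1 if less, 0 if equal.
    return (tomorrow > today) - (tomorrow < today)

def score(predictions, series):
    # Fraction of days the next-day direction was called correctly;
    # predictions[i] is the call for the move from series[i] to series[i+1].
    hits = sum(p == direction(series[i], series[i + 1])
               for i, p in enumerate(predictions))
    return hits / len(predictions)

random.seed(0)
series = [random.gauss(0, 1) for _ in range(1001)]
guesses = [random.choice([-1, 0, 1]) for _ in range(1000)]
print(score(guesses, series))  # close to 1/3 for blind guessing

Blind guessing among three choices scores about one in three; that is the bar any model would have to beat, day after day, before anyone should bet on its century-scale projections.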