Reader Markx writes:
The title says it all here: “…Retrospective prediction…” indeed. How could a researcher keep a straight face and write such a title? (Maybe a subversive element at work?)
Retrospective prediction of the global warming slowdown in the past decade
Virginie Guemas, Francisco J. Doblas-Reyes, Isabel Andreu-Burillo & Muhammad Asif
The Abstract:
Despite a sustained production of anthropogenic greenhouse gases, the Earth's mean near-surface temperature paused its rise during the 2000–2010 period [1]. To explain such a pause, an increase in ocean heat uptake below the superficial ocean layer [2,3] has been proposed to overcompensate for the Earth's heat storage. Contributions have also been suggested from the deep prolonged solar minimum [4], the stratospheric water vapour [5], the stratospheric [6] and tropospheric aerosols [7]. However, a robust attribution of this warming slowdown has not been achievable up to now.
Here we show successful retrospective predictions of this warming slowdown up to 5 years ahead, the analysis of which allows us to attribute the onset of this slowdown to an increase in ocean heat uptake. Sensitivity experiments accounting only for the external radiative forcings do not reproduce the slowdown. The top-of-atmosphere net energy input remained in the [0.5–1] W m−2 interval during the past decade, which is successfully captured by our predictions.
Most of this excess energy was absorbed in the top 700 m of the ocean at the onset of the warming pause, 65% of it in the tropical Pacific and Atlantic oceans. Our results hence point at the key role of the ocean heat uptake in the recent warming slowdown. The ability to predict retrospectively this slowdown not only strengthens our confidence in the robustness of our climate models, but also enhances the socio-economic relevance of operational decadal climate predictions.
http://www.nature.com/nclimate/journal/vaop/ncurrent/full/nclimate1863.html
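For scale, a quick back-of-envelope on the abstract's numbers (round textbook values for the areas and seawater properties; this is not the paper's own calculation):

```python
# Rough check: how much would a sustained top-of-atmosphere imbalance in
# the abstract's [0.5-1] W/m^2 range warm the top 700 m of ocean over a
# decade, if essentially all of it ended up there?
EARTH_AREA = 5.1e14        # Earth's surface area, m^2
OCEAN_AREA = 3.6e14        # ocean surface area, m^2
DEPTH      = 700.0         # m, the layer the abstract highlights
RHO_SW     = 1025.0        # seawater density, kg/m^3
CP_SW      = 3990.0        # seawater specific heat, J/(kg K)
SEC_DECADE = 10 * 365.25 * 86400

layer_heat_capacity = OCEAN_AREA * DEPTH * RHO_SW * CP_SW  # J/K

for f in (0.5, 1.0):  # W/m^2 applied over the whole Earth
    joules = f * EARTH_AREA * SEC_DECADE
    print(f"{f:.1f} W/m^2 -> {joules:.1e} J/decade "
          f"-> {joules / layer_heat_capacity:.2f} K in the 0-700 m layer")
```

So the quoted imbalance works out to very roughly 0.08–0.16 K per decade of 0–700 m ocean warming, if essentially all of it went into that layer: large in joules, small in upper-ocean degrees.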
Meanwhile, reality continues to be a bitch:

'Socio-economic relevance': those words tell us it's serious BS-alert time.
But a quick question: how many actual valid temperature measurements do we have for the deep ocean, given its vast size?
John
"Now that 'ocean heat uptake' has been corrected, what is the new outcome for 2100?"
They would have to run the model to 2100. That is a months-long job, but clearly it is something you would want to do. Maybe AR6. Also, this is only ONE MODEL; there are 20+ models that would all have to go through the same process. It's years of work.
“Has it changed the uncertainty?”
What do you mean by "the uncertainty"? Uncertainty in what? This is one model. What they showed was that they could improve that one model. Now you will have to see whether other modelling groups can duplicate what they did.
“Do we know (and can predict) the variations in “ocean heat uptake”? ”
We know that 2 + 2 = 4; everything else is an informed judgement. The issue here is whether they are doing something "unscientific" by fixing a model. The answer is no. This is exactly how you fix high-complexity physics models. Does every improvement suddenly make a model valid? No. But I don't think it's rational to criticize men who are working to improve their tools. You might argue that these tools will never be good enough. But to be fair, these guys are doing nothing different from anybody else who works with complex models.
"What is the long term destination of this energy? Will it continue transferring towards the abyss, where for all practical purposes it would be lost forever without noticeable effect? Is this a feedback to warming or some cyclic behavior? Etc. etc. etc."
Those are good questions, but they have little to do with my point. My point is simple: these guys are doing what everybody else in the field does to improve a model. Build a model, make predictions, find your mistakes, change the model, re-run; find the problems, fix the model, re-run; find the problems, fix the model…
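If it helps, here is that loop in miniature; the one-parameter model and the three data points below are invented for illustration, nothing from the paper:

```python
# Toy version of the build/predict/compare/fix/re-run loop. The "model" is
# y = k * x^2 with one tunable parameter k; the "observations" are made up.
observations = [(10, 4.8), (20, 20.1), (30, 44.7)]  # (input x, observed y)

k = 0.1  # first guess
for _ in range(5):  # each pass: predict, find the mistakes, adjust, re-run
    errors = [y - k * x**2 for x, y in observations]
    # least-squares nudge of the parameter toward smaller error
    num = sum(e * x**2 for e, (x, _) in zip(errors, observations))
    den = sum(x**4 for x, _ in observations)
    k += num / den
print(f"calibrated k = {k:.3f}")  # ~0.050 for these made-up data
```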
Dirk
"The problem is that these people might have trained a hundred models, then tested the hundred models on the validation period, and eliminated 99 of them. Meaning that de facto the validation period has been used for selection, i.e. training as well."
Looks like you have never looked at GCM code or at tuning exercises. One issue that isn't what you claim is the use of hundreds of models. Quite the opposite: when it comes to ocean models, there are only a few code bases that 20+ GCMs use.
Also, the tuning process is different from what you think.
“The ability to predict retrospectively this slowdown not only strengthens our confidence in the robustness of our climate models, but also enhances the socio-economic relevance of operational decadal climate predictions.”
Ha ha ha ha, that's sooooo funny. Is this an April Fools' joke? Seriously, retrospective predictions somehow strengthen confidence in the models that didn't forecast it happening? It somehow enhances socio-economic relevance when a model can only get it right AFTER the fact? LMFAO!
The stupid… it burns!!!
DavidL
"Scientists rework the model, claim their model predicted 50 feet all along, and conclude the car is even more dangerous now because it's 'worse than we thought.'"
Hmm, I don't see anyone claiming they predicted this all along.
I see two groups of people looking at ambiguous data and drawing conclusions that confirm their biases: skeptics screeching that the models are falsified, and the climerati claiming that they are validated. The truth: all models are wrong, some are useful. So define your purpose and you'll have a better handle on model evaluation, one free from subjective bias confirmation.
I guess I’m missing the part where ocean heat uptake increased.
It hasn’t. It has dropped to a tiny/nearly flat 0.46 W/m2.
Now if a 0.5 W/m2 ocean heat uptake / top-of-atmosphere net energy imbalance is supposed to cause some large warming, someone had better tell Mr. Stefan and Mr. Boltzmann that they got their formula wrong. It will result in next to no warming, which just happens to be what is happening.
Mystery solved.
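For anyone who wants the arithmetic behind that: linearizing F = σT⁴ gives ΔT ≈ ΔF/(4σT³), the no-feedback blackbody response invoked above (feedbacks, which the models add on top, are deliberately left out of this sketch):

```python
# No-feedback blackbody response: F = sigma * T^4, so dT = dF / (4*sigma*T^3).
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W/(m^2 K^4)
T_EFF = 255.0     # Earth's effective emission temperature, K

lam = 4 * SIGMA * T_EFF**3     # ~3.8 W/m^2 per K of warming
for flux in (0.46, 0.5, 1.0):  # W/m^2
    print(f"{flux} W/m^2 -> {flux / lam:.2f} K of no-feedback warming")
```

About a tenth of a degree with no feedbacks included; whether it stays that small once feedbacks are added is, of course, exactly what the argument is about.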
So, when skeptics predicted BEFORE it happened, they were wrong. But when alarmists predicted it AFTER it happened, they were right?
Steven Mosher says:
"These guys are doing what everybody else in the field does to improve a model. Build a model, make predictions, find your mistakes, change the model, re-run; find the problems, fix the model, re-run; find the problems, fix the model…"
Agreed, that's what they should be doing, but without telling the politicians and the public how sure they are that this or that is going to happen and how terrible it will be, and without demonizing anyone who doubts either their "calculated" results or their prophesied consequences. I can remember being absolutely ridiculed in the late 1990s for merely suggesting that the ocean, being a huge heat sink, might absorb a portion of any excess heat in the system, lessening the consequences of an "Enhanced Greenhouse Effect". And now here it is being touted as the saving grace of the whole scaremongering campaign. Give me a break.
So when they told us over and over again that global warming was “accelerating, irreversible, and catastrophic,” they were actually telling us that global warming would “slow” for a prolonged period of time. Who knew?
So if the ocean is currently absorbing the "missing" energy, only to release it at a later time, shouldn't this mean, since the physics are the physics, that the warming observed in 1980–2000 was from energy absorbed by the oceans at an earlier time? Or is the physics new now: did the oceans just NOW start absorbing energy to be released later, while all the warming in 1980–2000 was from CO2 emissions?
Steven Mosher says:
May 14, 2013 at 12:31 pm
"Looks like you have never looked at GCM code or at tuning exercises. One issue that isn't what you claim is the use of hundreds of models. Quite the opposite: when it comes to ocean models, there are only a few code bases that 20+ GCMs use.
Also, the tuning process is different from what you think."
First, why should I look at the mess? I've seen more big Fortran codebases than I wanted to in my life. Second, you know the term parametrization. I don't care HOW they arrive at the parameter set that gives them the best fit to their training data; I called it training because that is what a variation-and-selection process is, whether you do it with a genetic algorithm or with the cerebral activity of a containerload of grant grabbers.
And I know that there are only a few actual codebases, and they all get parametrized.
In other words, a hundred different models are the same codebase, each one parametrized differently.
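That selection effect is easy to demonstrate with a toy example; everything below is made up (pure-noise "models" scored against a random "validation period"), not anything from an actual GCM exercise:

```python
# Score 100 skill-free "models" (pure noise) on a short validation period,
# keep the best one, and its validation score looks impressive anyway. The
# same model's score on fresh data shows the skill was a selection artifact.
import random
random.seed(1)

N_MODELS, N_PTS = 100, 10
truth_val  = [random.gauss(0, 1) for _ in range(N_PTS)]  # "validation period"
truth_test = [random.gauss(0, 1) for _ in range(N_PTS)]  # fresh data

def corr(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

# each "model" is just noise for both periods
models = [([random.gauss(0, 1) for _ in range(N_PTS)],
           [random.gauss(0, 1) for _ in range(N_PTS)]) for _ in range(N_MODELS)]

best_val, best_test = max(models, key=lambda m: corr(m[0], truth_val))
print(f"selected model, validation corr: {corr(best_val, truth_val):.2f}")   # typically high
print(f"same model, out-of-sample corr:  {corr(best_test, truth_test):.2f}")  # typically ~0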
“Muhammad Asif helped out with this important work.” [Rick at 0610, 5/14/13]
Should do a lot to "make [Muslims] feel good about their … contribution to science…"*. LOL.
*Charles Bolden, head of NASA, has stated that the "foremost" task President Obama gave him is [quoting Obama] "to find a way to reach out to the Muslim world and engage much more with predominantly Muslim nations to help them feel good about their historic contribution to science, math, and engineering." [See: http://www.powerlineblog.com/archives/2010/07/026682.php]
Steven Mosher says:
May 14, 2013 at 8:02 am
…
Suppose you have a model of car braking. Your model predicts that if you slam on the brakes
at 72 MPH your stopping distance will be 102 ft. You go to the test, you record the data, and when you look at the data you find that the stopping distance was 105 feet. You examine the actual vehicle speed and find out that the driver hit the brakes at 74 mph. So you re-run the model using 74 mph instead of 72; your answer comes out at 105.
….
————
m'kay, I agree with most of your larger points, but I'd like to point out the obvious and say that a simple linear or near-linear relationship like that is a lot more amenable to the 'successive approximation' approach than climate models are.
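To put numbers on that braking example, assuming constant deceleration so that stopping distance scales with the square of speed (an assumption the quoted version never states):

```python
# One-parameter braking model: d = v^2 / (2a), with constant deceleration a.
MPH_TO_FPS = 5280 / 3600  # mph -> feet per second

def stopping_distance_ft(v_mph, a_fps2):
    v = v_mph * MPH_TO_FPS
    return v ** 2 / (2 * a_fps2)

# calibrate the single parameter to the first test: 102 ft from 72 mph
a = (72 * MPH_TO_FPS) ** 2 / (2 * 102.0)
print(f"re-run at 74 mph: {stopping_distance_ft(74, a):.1f} ft")  # ~107.7 ft
```

Strictly quadratic scaling gives about 108 ft; the 105 ft in the quoted example actually matches simple linear scaling, and over a 2 mph change the two are close, which is the 'near linear' point.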
Speaking for myself, it's just irritating that they decided to use that language, Steven.
*So… we were completely correct in our predictions because our models were wrong, but this only emphasizes how accurate our errors were as they relate to the confirmed mistaken projections… what matters, though, is why our beliefs were invalidated; it gives us reasons to be incorrect in the future in our effort to save the planet… OR SOMETHING…*
(*) interpretation based on ridicule and delivered with mocking sarcasm.
I have a headache now… maybe I can trade some carbon credits for an eco-friendly analgesic… or something…
Steven Mosher asks:
"What do you mean by 'the uncertainty'? Uncertainty in what?"
Simply that if we don't know why the ocean heat uptake rate changed, or whether the increase is usual or unusual, or whether it's episodic or not, or how long this rate will last versus another rate, then we have less (not more) confidence in the modelled outcomes for 2100.
As others have already pointed out, at this point we don't know that it wasn't a reduction in the ocean heat uptake rate that caused the 20th-century warming.
At any rate, even if we assume we're in an unusually high ocean-heat-uptake episode, we can't foresee how long it will last or how many of these episodes we'll see between now and 2100.
The whole idea that we know enough about the climate to make any sort of century-scale predictions or projections is ludicrous, has been ludicrous for the last 20 years of doomsaying, and will remain ludicrous until climate scientists stop looking at everything through CO2-colored glasses. JMHO.
"… the tuning process is different from what you think." [Mosher at 1231, 5/14/13, addressed to Dirk]
No doubt it is. What is clear, nevertheless, is that the modellers are perpetually re-tuning their machine (a.k.a. "model") to fight yesterday's war. An exercise in futility, pointless except for their goal of promoting AGW.
They will NEVER win the battle for TRUTH.
(and, apparently, that is not their goal, either)
I think they are actually getting closer to the truth. Now all they need to do is realize the warming from 1975-2005 was due to oceans releasing heat to the atmosphere. Program that into the models while eliminating most of the CO2 forcing and they will not only be able to predict the past better, but they will now have a chance of predicting the future.
Affirmative action in science.
This really started happening in the sixties, when we began giving awards to the losers. We started passing students who couldn't pass, and now we are goners. Dishonesty in academics is now a fully entrenched post-modern habit, with the implication that there's no such thing as being wrong. Don't underestimate these folks. So long as scientists toe the progressive line, they will continue to receive affirmative action no matter the quality of the work.
They are going to get funding; you won't. That's that, Jack.
Now they are unimpeachable since they approve themselves. No matter how far off, they may simply absolve themselves.
“I meant to do that!”
James Hansen is speaking at a public lecture at the LSE in London on Thursday this week. Perhaps he should be asked what new predictions he’ll now make in retrospect.
"These guys are doing what everybody else in the field does to improve a model. Build a model, make predictions, find your mistakes, change the model, re-run; find the problems, fix the model, re-run; find the problems, fix the model…"
What postulates of a chaotic system allow you to do this?
Here’s my analogy:
We have a diver jump off platforms of various heights into a swimming pool. We measure the velocity when the diver strikes the water. Next we go to Acapulco and have the diver jump off cliffs of various heights and again measure the velocity.
Based on our observations, we come to the conclusion that the speed of a falling object increases with height. The science is settled.
Al Gore makes a documentary called "The Inconvenient Weight," warning about how we will all be crushed if we don't give up fossil fuels, CARBON-ated beverages and snacky treats. Anyone who questions us is branded a denier and accused of receiving funding from Big Blimps.
We build a model and confidently predict (project) the velocity of the diver jumping out of a plane at 3,000 feet. But our model is wrong. The diver only achieves a maximum velocity of about 120 MPH. We take the plane up to 5,000 feet and try again. We take the skydiver to South America and try again.
We make adjustments to our data for TOB (time-of-observation bias). Still not right. We lament that the diver is not falling faster. We study fallen meteorites and use them as proxy data to homogenize (remove) the MGP (Medieval Gravity Period – a local gravity event).
We search and search for the “missing gravity”, finally blaming the Moon for stealing the Earth’s gravity. Our model was correct after all. Whew!
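(For completeness: the physics the satirical model is missing is air drag, not stolen gravity. At terminal velocity drag balances weight; the body parameters below are rough textbook numbers for a belly-down skydiver.)

```python
# Terminal velocity: drag balances weight, m*g = 0.5 * rho * Cd * A * v^2,
# so v = sqrt(2*m*g / (rho * Cd * A)).
G, RHO = 9.81, 1.2   # gravity (m/s^2), near-surface air density (kg/m^3)
M, CDA = 80.0, 0.43  # mass incl. gear (kg), drag coeff. times frontal area (m^2)

v_t = (2 * M * G / (RHO * CDA)) ** 0.5
print(f"terminal velocity ~ {v_t:.0f} m/s ~ {v_t * 2.237:.0f} mph")  # ~120 MPH
```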
Do you think a business plan based on retrospective prediction of stock prices could hold water? I would especially enjoy having a huge government loan guarantee on developing that idea, along with juicy funds collected from environmental NGOs.
Surely never before has science been so corrupted. These people are a disgrace to a noble profession, and should be exposed as charlatans.
Is it my imagination, or does historical "warming" seem to occur whenever science goes from one level of technical detail to another? I.e., old mercury thermometers read manually versus digital ones read automatically; temperatures taken from intake valves on ships or buckets thrown overboard versus floating Argo buoys. What I'm saying is that I don't think changes to oceanic heat uptake have really been demonstrated.