Oh this is hilarious. In a “Back To The Future” sort of moment, this press release from the National Center for Atmospheric Research claims they could have forecast “the pause”, if only they had the right tools back then.
Yes, having tools of the future would have made a big difference in these inconvenient moments of history:
“We could have forecast the Challenger Explosion if only we knew O-rings became brittle and shrank in the cold, and we had Richard Feynman working for us to warn us.”
“We could have learned the Japanese were going to bomb Pearl Harbor if only we had the electronic wiretapping intelligence gathering capability the NSA has today.”
“We could have predicted the Tacoma Narrows Bridge would collapse back then if only we had the sophisticated computer models of today to model wind loading.”
Yes, saying that having the tools of the future back then would have fixed the problem is always a big help when you want to do a post-facto CYA for stuff you didn’t actually do back then.
UPDATE: WUWT commenter Louis delivers one of those “I wish I’d said that” moments:
Even if they could have forecast the pause, they wouldn’t have. That would have undercut their dire message that we had to act now because global warming was accelerating and would soon reach a point where it would become irreversible.
Here’s the CYA from NCAR:
Progress on decadal climate prediction
Today’s tools would have foreseen warming slowdown
If today’s tools for multiyear climate forecasting had been available in the 1990s, they would have revealed that a slowdown in global warming was likely on the way, according to new research.
The analysis, led by NCAR’s Gerald Meehl, appears in the journal Nature Climate Change. It highlights the progress being made in decadal climate prediction, in which global models use the observed state of the world’s oceans and their influence on the atmosphere to predict how global climate will evolve over the next few years.
Such decadal forecasts, while still subject to large uncertainties, have emerged as a new area of climate science. This has been facilitated by the rapid growth in computing power available to climate scientists, along with the increased sophistication of global models and the availability of higher-quality observations of the climate system, particularly the ocean.

Although global temperatures remain close to record highs, they have shown little warming trend over the last 15 years, a phenomenon sometimes referred to as the “early-2000s hiatus”. Almost all of the heat trapped by additional greenhouse gases during this period has been shown to be going into the deeper layers of the world’s oceans.
The hiatus was not predicted by the average conditions simulated by earlier climate models because they were not configured to predict decade-by-decade variations.
However, to challenge the assumption that no climate model could have foreseen the hiatus, Meehl posed this question: “If we could be transported back to the 1990s with this new decadal prediction capability, a set of current models, and a modern-day supercomputer, could we simulate the hiatus?”
Looking at yesterday’s future with today’s tools
To answer this question, Meehl and colleagues applied contemporary models in a “hindcast” experiment using the new methods for decadal climate prediction. The models were started, or “initialized,” with particular past observed conditions in the climate system. The models then simulated the climate over previous time periods where the outcome is known.
The researchers drew on 16 models from research centers around the world that were assessed in the most recent report by the Intergovernmental Panel on Climate Change (IPCC). For each year from 1960 through 2005, these models simulated the state of the climate system over the subsequent 3-to-7-year period, including whether the global temperature would be warmer or cooler than it was in the preceding 15-year period.
Starting in the late 1990s, the 3-to-7-year forecasts (averaged across each year’s set of models) consistently simulated the leveling of global temperature that was observed after the year 2000. (See image at bottom.) The models also produced the observed pattern of stronger trade winds and cooler-than-normal sea surface temperatures over the tropical Pacific. A previous study by Meehl and colleagues related the observed hiatus of globally averaged surface air temperature to this pattern, which is associated with enhanced heat storage in the subsurface Pacific and other parts of the deeper global oceans.
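For readers curious what that comparison amounts to in practice, here is a minimal sketch (hypothetical toy code, not NCAR’s) of scoring initialized decadal hindcasts the way the press release describes: the years-3-to-7 multi-model mean is compared against the observed mean of the 15 years preceding each start date. The array names and toy data are placeholders.

```python
# Minimal sketch (hypothetical, not NCAR's code) of scoring initialized decadal
# hindcasts: the 3-to-7-year multi-model mean forecast is compared against the
# observed mean of the 15 years before the start date.
import numpy as np

rng = np.random.default_rng(0)
n_models = 16                                   # models in the multi-model ensemble

# Toy "observations": obs[year] = global-mean temperature anomaly (placeholder data)
obs = {y: 0.01 * (y - 1960) + rng.normal(0, 0.1) for y in range(1945, 2013)}

def hindcast_signal(start_year, forecasts):
    """Return (3-to-7-year multi-model mean) minus (mean of the preceding 15
    observed years); positive means the hindcast calls for 'warmer'."""
    baseline = np.mean([obs[y] for y in range(start_year - 15, start_year)])
    return forecasts.mean() - baseline          # forecasts: shape (n_models, 5)

# Stand-in forecasts for a few start years (real hindcasts would come from the models)
for start in (1960, 1970, 1980, 1990, 2000):
    fake_forecasts = rng.normal(0.01 * (start - 1960), 0.05, size=(n_models, 5))
    print(start, "warmer" if hindcast_signal(start, fake_forecasts) > 0 else "cooler")
```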
Letting natural variability play out

Although scientists are continuing to analyze all the factors that might be driving the hiatus, the new study suggests that natural decade-to-decade climate variability is largely responsible.
As part of the same study, Meehl and colleagues analyzed a total of 262 model simulations, each starting in the 1800s and continuing to 2100, that were also assessed in the recent IPCC report. Unlike the short-term predictions that were regularly initialized with observations, these long-term “free-running” simulations did not begin with any particular observed climate conditions.
Such free-running simulations are typically averaged together to remove the influence of internal variability that occurs randomly in the models and in the observations. What remains is the climate system’s response to changing conditions such as increasing carbon dioxide.
However, the naturally occurring variability in 10 of those simulations happened, by chance, to line up with the internal variability that actually occurred in the observations. These 10 simulations each showed a hiatus much like what was observed from 2000 to 2013, even down to the details of the unusual state of the Pacific Ocean.
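As an illustration of what “lining up with the observed internal variability” could mean operationally, here is a toy sketch (my own, not the paper’s method) that screens a large set of free-running simulations for the handful whose trend over a hiatus window happens to match a near-zero observed trend. All numbers are illustrative.

```python
# Toy sketch (not the paper's method) of screening free-running simulations for
# the few whose internal variability happens to produce a near-zero trend over
# an observed hiatus window. All numbers below are illustrative.
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(2000, 2014)
n_runs = 262                              # free-running simulations to screen

obs_trend = 0.00                          # observed hiatus trend, deg C / yr (illustrative)
forced_trend = 0.02                       # forced (ensemble-mean) trend, deg C / yr (illustrative)

# Each toy run = forced warming plus its own random internal variability
runs = forced_trend * (years - 2000) + rng.normal(0, 0.12, size=(n_runs, years.size))

def trend(series):
    return np.polyfit(years, series, 1)[0]    # least-squares slope, deg C / yr

trends = np.array([trend(r) for r in runs])
hiatus_like = np.argsort(np.abs(trends - obs_trend))[:10]   # the 10 closest to observed
print("hiatus-like runs:", hiatus_like, trends[hiatus_like].round(3))
```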
Meehl pointed out that there is no short-term predictive value in these simulations, since one could not have anticipated beforehand which of the simulations’ internal variability would match the observations.
“If we don’t incorporate current conditions, the models can’t tell us how natural variability will evolve over the next few years. However, when we do take into account the observed state of the ocean and atmosphere at the start of a model run, we can get a better idea of what to expect. This is why the new decadal climate predictions show promise,” said Meehl.
Decadal climate prediction could thus be applied to estimate when the hiatus in atmospheric warming may end. For example, the UK Met Office now issues a global forecast at the start of each year that extends out for a decade.
“There are indications from some of the most recent model simulations that the hiatus could end in the next few years,” Meehl added, “though we need to better quantify the reliability of the forecasts produced with this new technique.”
The paper:
Meehl, Gerald A., Haiyan Teng, and Julie M. Arblaster, “Climate model simulations of the observed early-2000s hiatus of global warming,” Nature Climate Change (2014), doi:10.1038/nclimate2357
Too bad they don’t have global climate models from the future – the ones that won’t bother to include the trivial impact on climate caused by a small increase in atmospheric CO2.
Part of the craziness about claiming to predict the climate future is there is no agreement about the climate past. The gathering of temperature data is perhaps 150 to 200 years old, and the warmists, in a state of confirmation bias, homogenise the data to suit their theory. Then the skeptics and warmists look at historical information and indicators over thousands of years to model temperatures before then. The net result is we use incomplete historical information to make incomplete future predictions.
Who cares what the climate does in the future? As if we can really change it anyway. Why not adapt like every other species has had to do over the history of time.
The only logical impact that climate change policy will have on the future is to send the world into a regressive decline in global living standards. What warmist policies will do is the exact opposite of what they claim they are doing: destroying the future for future generations.
Attention “Climate Scientists”:
For Sale: State of the Art (and shiny) DeLorean Automobile, Mr. Fusion Included . . . Get her up to 88 mph and your climate forecasting capabilities go through the roof. . . . well at least you’ll get to see some serious sh*t!
Sorry I’m keeping the Hoverboard.
– Marty McFly
“There are indications from some of the most recent model simulations that the hiatus could end in the next few years,” Meehl added,
This statement will become famous in the annals of climate science, in my opinion, as the pause stretches decades into the future (like the one about the disappearing snow).
The problem is not that we did not have the tools in the 1990s to potentially predict pauses (there were many such predictions) but that the mainstream climate science community and the technical papers community seemed not to allow anyone who predicted pauses, slower warming, or any cooling to publish their findings. Even the media seemed to have been requested not to publish this information. Only unprecedented global warming predictions seemed to be allowed. Those who did release their findings by other means seemed to be blackballed, fired, or ostracized by other mainstream climate scientists, their university, or their employer. It still seems to be happening even today. Attempts seem to have been made even to hide the fact that the past 17-year pause had actually taken place, as we saw with the last IPCC report. So having more modern scientific tools is no panacea if the basic system seems flawed to start with and any predictions of possible pauses or future extended pauses are withheld from the public.
If the climate is a complex non-linear or chaotic system (and it almost surely is one of those) you can’t forecast the pause even if you have a climate model that is complete and perfect in every part (which is impossible as a practical matter). The state of any such system at a given future time is extremely sensitive to small perturbations that might occur (like a butterfly flapping its wings) and, in a chaotic system, to how perfectly the initial conditions are known. You might be able to demonstrate that a pause can happen, but you can’t forecast it.
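A tiny numerical illustration of the sensitivity this comment describes (toy code, not a climate model): two trajectories of the chaotic logistic map started 1e-10 apart diverge to order-one differences within a few dozen iterations.

```python
# Two logistic-map trajectories (r = 3.9, a chaotic regime) started 1e-10 apart:
# the tiny initial difference grows to order one within a few dozen steps.
def logistic(x, r=3.9):
    return r * x * (1 - x)

a, b = 0.4, 0.4 + 1e-10
for step in range(1, 61):
    a, b = logistic(a), logistic(b)
    if step % 10 == 0:
        print(f"step {step:2d}: |difference| = {abs(a - b):.2e}")
```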
If the future had been known back then we could have written better curve fitting climate software predicting the future climate.
“We could have forecast the Challenger Explosion if only we knew O-rings became brittle and shrank in the cold, and we had Richard Feynman working for us to warn us.”
Or they could have listened to the engineers (before the launch) who said don’t launch because the air temperature was below the design limits. Plus the warnings about the design beforehand.
So it’s official now — there is a pause? Is there a consensus on that?
Short translation: Our models were wrong.
So, when do the models say that the pause will end? I’m all ears.
Predict that now, with your incredible new high powered models. If you get it right I’ll be impressed. Go!
I actually think that it isn’t entirely unlikely that their assertion is correct. Even the earlier models followed the existing trend for a decade or so before diverging. However this doesn’t really resolve the problem with the models. They have, and likely will continue to have, two basic problems.
a) The result above directly puts the lie to the assertion that we know that more than half of the warming of the last half of the 20th century was manmade. Actually, it is pretty much an explicit proof that it probably wasn’t.
b) The point isn’t whether or not one can build models that are initialized in “the late 1990s” and that can be made to work through the first decade of the 2000’s. It is whether or not models can be initialized in 1950 and run to the present. It is whether or not models can be initialized in 1980 and run to the present (much lower hanging fruit!). It is whether or not models can be initialized in 1850 and run to the present, correctly tracking the rise and fall of HADCRUT4. When they can do those things, come talk to me. In fact, when they can actually predict the future with skill come talk to me, because all they are saying is that now, using different training data, they can “predict” a particular holdout set of trial data if they work pretty hard and know the answer before they start so that they can tweak things until they get it.
Baaad modellers, baaad. Convince me that your build process was double blind so that you didn’t know what you wanted to get. Oh, wait, it wasn’t, any more than the first round of model building was blind. It got what the modellers wanted to get. Only time will tell whether or not the “super-model” (model of models) thus built has any predictive skill, because it’s a lot easier to predict the future when you know what it is going to be beforehand, and doing so doesn’t really mean the model has any real skill. If they were totally honest in its construction, it might. But then, EVEN if they were honest and really did get the next decade right without cheating, they still have to see if the model works outside of the training+trial data to predict the definitely truly unknown future.
Is it “likely” that the models are now good enough to predict the future where it matters, thirty to fifty years out? Not terribly. Consider:
The top article doesn’t even say on which side of the major climate event of the late 1990s (the 1997-1998 Super El Nino) they initialize on — I have to guess on the FAR side of it since without putting the ENSO event in no model is going to get the right answer because most of the warming observed was rather obviously directly driven by that discrete event, not by anything gradual associated with CO_2.
The top article asserts that heat going into the deep ocean is responsible for the pause, but that doesn’t explain why the heat did not go into the deep ocean in the 15 year stretch from 1983 to 1998, or why it did before that, or why it is now. Sure, they built a (set of models) in which this could happen and it worked better, but now do those models still work on the other events in the non-uniform climate record it needs to explain?
The top article asserts that they are still forming a MME mean, and it is this mean that is predictive. Why? A single working model ought to give the right answer. The climate isn’t voting to accept the average of many PPE runs, or the average of the average of many PPE runs from many “independent” (not!) models. It has a unique dynamical evolution that isn’t the outcome of any sort of “vote”. To the extent that the envelope of PPE runs of any given model includes the actual climate one cannot, perhaps, reject the model (one model at a time) but neither can one use the PPE mean of the model as a particularly good predictor of the climate unless the PPE mean itself is in good agreement (one model at a time). To assert anything else is to assert that somehow, the “differences” between climate models that share substantial parentage and initialization are normally distributed without skew across a space of random deviations from a perfect model, and while that is, of course, accidentally possible one does not have any good reason or argument for thinking that it is true.
The average of many bad models does not, in general, in physics, make a good model. Indeed, it nearly always makes a worse model than the best model in the ensemble. One doesn’t gain by mixing five ab initio Hartree models in with one semi-phenomenological density functional model; one loses. Don’t “improve” it by adding two more DF models — pitch the Hartree models. This alone would substantially improve the silly CMIP5 MME as presented in AR5. Let me put it in bold so nobody can miss it:
Get rid of the broken models. Subject all models to a multidimensional hypothesis test and stop using models that egregiously fail, or even merely do poorly, on them. Use models that actually do, empirically, turn out to have maximal skill.
Or some skill at all. At predicting the future, not the short-time evolution from a carefully chosen initial condition after making changes that are guaranteed to reduce the warming observed for just the right interval to reproduce the trial set.
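By way of illustration, a rough sketch (my own toy code, with hypothetical thresholds and placeholder data) of the kind of multi-metric screen the comment above calls for: score each model on bias, RMSE, and trend error over a held-out validation period and keep only those that pass every test.

```python
# Toy sketch of a multi-metric model screen: keep only models that pass every
# skill test against a validation record. Thresholds and data are hypothetical.
import numpy as np

rng = np.random.default_rng(2)
years = np.arange(1980, 2014)
obs = 0.015 * (years - 1980) + rng.normal(0, 0.08, years.size)   # toy observations

def skill_metrics(model_series):
    bias = np.mean(model_series - obs)
    rmse = np.sqrt(np.mean((model_series - obs) ** 2))
    trend_err = abs(np.polyfit(years, model_series, 1)[0]
                    - np.polyfit(years, obs, 1)[0])
    return bias, rmse, trend_err

def passes(model_series, max_bias=0.1, max_rmse=0.15, max_trend_err=0.01):
    bias, rmse, trend_err = skill_metrics(model_series)
    return abs(bias) <= max_bias and rmse <= max_rmse and trend_err <= max_trend_err

# Toy "models" with assorted biases and trend errors
models = {f"model_{i}": 0.015 * (1 + rng.normal(0, 0.5)) * (years - 1980)
                        + rng.normal(0, 0.1, years.size) + rng.normal(0, 0.05)
          for i in range(16)}

kept = [name for name, series in models.items() if passes(series)]
print("models surviving the screen:", kept)
```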
Still, this is good news. This is basically a formal announcement of what everybody knows by now anyway — the models of CMIP5 have now officially failed. They do not work out to 20 years, let alone 30 or 40. A selected, much smaller set of revised, possibly improved models has been created that once again appears to work, when initialized in such a way as to avoid a point where they would instantly fail (and a cynic has to believe probably DID fail, motivating their choice of starting point) over a decade-plus reference period (and don’t you just know that they were tweaked until they did — I very, very much doubt that this was a double blind experiment or blind to the start date!). Perhaps this smaller set, improved, and re-initialized, will work better and make it out to twenty whole years before egregiously diverging when nature does something else unexpected and dynamically invisible at the model resolution.
One does, however, end up with many questions.
First: If these new, improved models are run with the new ocean-heat-sucking dynamics from year 2000 initial conditions so that they remain flat for 14 years in spite of all the new CO_2 and “committed” warming from past CO_2 (whatever that is) out to 50 and 100 years, do they still produce 5 C warming by 2100? Certainly not, one would think. The interesting question now is how much do they end up with? It pretty much has to be less than the central estimate of AR5, because that estimate was made without any consideration of a heat-sucking multilayered ocean, which can eat the “missing heat” pretty much for centuries without warming the atmosphere a whole degree — or not — depending on nonlinear switches we have yet to discover. So what is it? 2C? 1.5C? 1.0?
Note that if they assert — sorry, if their honest and well-intentioned models now predict — errr, I mean project (non-falsifiable version of predict) 1.5 C of warming by 2100 (a third of which we’ve already seen) then the models are basically agreeing with what has been said by lukewarmists and rationalists on WUWT for some time now. Hooray! The crisis is over! Perhaps now we can try to cure world poverty, end the pointless deaths of children who live in energy-poor squalor, invest in universal literacy, work for World Peace ™ — that sort of thing — with the share of our gross product that is currently going to solve a non-problem leading to a probable non-catastrophic, fairly gentle, warming that might well prove to be beneficial more than harmful. Much like the fairly gentle warming that has persisted since the LIA.
Second: According to these new, improved models what fraction of the warming of the last half of the 20th century was natural? Again, it is difficult to imagine that that fraction will not have to be substantially downgraded, because the new models permit the ocean both to eat the heat (so to speak) and to cough it up again (or at least, to stop eating it). I repeat, it would be lovely to understand what precisely triggers one mode vs another, because honestly, I have a hard time imagining what it could be. They are attributing the pause to natural variation, so clearly natural variation can be greater than any anthropogenic warming for at least the length of the pause.
Again, this is no surprise to anyone on WUWT, but this will directly contradict statements made repeatedly, with ever greater completely unfounded “confidence”, in the SPM of the ARs. Dare we hope to discover that according to the new models more than half of the warming observed from 1950 could be natural, since they obviously have tied their models to something like the PDO, some persistent alteration in circulation that can modulate the ability of the ocean to take up heat and buffer climate change?
Third: I’m certain that the models have no room for Mr. Sun to play any significant role, but one thing that the paleoclimatological record clearly indicates is that at certain points the Earth’s climate system is naturally not only sensitive, but enormously sensitive, to small changes in the drivers. Some of the climate transitions of 5 to 15 C appeared to have occurred over times as short as a single decade! Mostly to the cold/glacial phase, it has to be acknowledged, but coming out of glaciation could be quite rapid as well. Also, the Eemian — without CO_2 — was much warmer than the Holocene now or even the Holocene Optimum, and we don’t know why. We haven’t any clear idea of what can create the natural conditions for rapid warming during an interglacial to temperatures much warmer than today but we know that such conditions have existed in the past.
Could Mr. Sun have any nonlinear impact on the Earth’s climate? The late 20th century was a time of high solar activity (not grand maximum high, but high). There were some alterations in climate chemistry and possibly planetary albedo that were at least interestingly coincident with the reduction of solar activity in the 21st century. There isn’t any really good or compelling correlation between solar state and climate over the time we have pretty good records of solar state (which can be measured anywhere, and is) and terrible records of global climate (which has to be measured everywhere, but isn’t), but that isn’t surprising given the uncertainties. One of the great virtues of our era is the existence of far, far better data sources — in particular satellites that can actually make systematic global measurements over long periods of time — that might enable us to address this as one of many mechanisms that might be the nonlinear “switch” controlling the ocean’s role in buffering warming, or the nonlinear “switch” that can cause rapid changes in average albedo, or something else neither I nor anybody else has thought of yet that might be the mechanism responsible for the rapid, catastrophic (as in mathematical catastrophe theory) climate changes in the past, transitions between two (or more) locally stable climate configurations/phases potentiated by comparatively tiny shifts in the system.
So I personally welcome this paper. It is what science is all about. It is a step in the right direction. I expect that it will have a substantial impact — primarily on the excessive credibility assigned to earlier model-based conclusions, and hopefully to the credibility assigned to the new models and their conclusions. Good words to use frequently in bleeding edge science: “We really don’t know that yet”. “I’m not sure”. “Future cloudy, try again later”. (Oh, wait, that’s the 8-ball…:-)
In the meantime, don’t worry if it oversteps its bounds and overextends its conclusions. Science is, in its own ponderous way, eventually self-correcting, if only after Nature reaches out and bitch-slaps you with a direct contradiction of your pet theory. If “the pause” continues for a few more years, this is only the first of many papers that will be produced to try to understand it, and every one of them will at the same time refute earlier work that pretty much excluded any such event. Some of the models built might prove in the future to have some actual skill. Or not.
Interesting times.
rgb
rgbatduke: Note that if they assert — sorry, if their honest and well-intentioned models now predict — errr, I mean project (non-falsifiable version of predict) 1.5 C of warming by 2100 (a third of which we’ve already seen) then the models are basically agreeing with what has been said by lukewarmists and rationalists on WUWT for some time now.
Good post. I had not wanted to read or write a long post on this topic, but that was worth the read.
It is a step forward, but there is no reason to believe that the models they have now will do a better job at actual *prediction* than the models that they admit have failed.
Once again proving Climatology is nearly impossible to satirize.
Collecting tar and feathers.
Feathers are easy, tar is a little harder to find, perhaps I should substitute honey. Much more environmentally correct.
At least the bears will find these charlatans attractive.
Possibilities for a new reality/survival TV show: cocooned in an insulating layer of feathers, our stalwart Climate Shaman is released into wild bear habitat.
Will he save the bear from starvation?
Will the bear spurn this tainted bait?
Film at…
After all, now that it is normal to post the barbarity of beheading prisoners all over the internet, what’s a little wildlife/charlatan interaction?
1. No. Ad hoc reasoning is a logical fallacy, not a component of the scientific method.
2. No. They are damned when they didn’t bother to check to determine whether or not the models worked before they attempted to use them for a political takeover, and they are doubly damned for now claiming to have improved those models without first checking that bald assertion, either. In sum, it amounts to lying.
Did they go to the John Kerry school of excuses:
We were against the pause before we were for it
“If only we’d had the right monkeys in the 1990’s we’d have produced Hamlet by now.”
You only need one simulation, the correct one.
So some models predicted the pause, but it’s of no value b/c “scientists” don’t have any way to figure out a priori which prediction is right and which ones are not.
Same thing happens to astrologers.
Well, no, they didn’t predict the pause, as that is a statement about the future, and the models in question have not yet been exposed to the pitiless gaze of the future. They hindcast the pause, after it already happened, when initialized “in the late 90’s” or right before the pause occurred. It is difficult — or even impossible — to say how much the modellers tweaked their models so that this fortuitous occurrence occurred. It is difficult — or even impossible — to say whether or not they would have had the same success if they’d started the models off in 1995 or 1990 or 1980 or 1950 and tried to hindcast all of the temperature record after any of these dates with the same success (or how long the models ran, on average, before deviating significantly) because AFAICT from the top article, this simply hasn’t (yet) been done.
But even so, getting a consensus pause out of a collection of models at all is pretty impressive. The CMIP5 models don’t allow for any real possibility for that.
Still, I have to say in boldface to the authors of the study: Beware Data Dredging!
If one takes the models of CMIP5 and runs them, starting in (say) 2000, some of them are going to run hotter than others. Some of them will exhibit a trend closer to the pause and others will exhibit a trend farther from the pause, when started with these particular initial conditions at this particular time. Overall, rejecting the obvious losers is a good idea, but really if we did that almost all of the models would already be rejected on the basis of performance after the CMIP5 reference period. And it is a simple fact that if we do reject the losers, the winners will by construction end up closer to the actual data. That does not necessarily mean that the winners are better models and are going to be more likely to predict the future.
Let me ‘splain. No, there’s too much. I will sum up.
Suppose I had a timeseries curve I wanted to “predict” and twenty models to use to predict it. The models, however, are nothing but random number generators (all different) geared up to produce a random walk in the curve’s variable. I initialize them all from common data (the same seed), run them, and reject the worst ten of them after computing some measure of their goodness of fit to the curve.
Then it is a simple fact that the mean of the remaining ten will be a much better fit to the curve than the original mean of all twenty — probably more accurate, certainly less variance.
I might be tempted, then, to assert that this selected set of random number generators, initialized with any common seed, generate a good fit to the timeseries, and that its predictions/projections/prophecies should be “trusted”.
Anyone here think that this is a good bet?
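Here is a quick numerical version of that thought experiment (toy code, nothing more): twenty random walks “predict” a target series, the ten best in-sample fits are kept, and the in-sample versus out-of-sample errors of the two ensemble means are compared. The in-sample improvement is guaranteed by construction; whether any of it carries over to the held-out segment is the question.

```python
# Data-dredging demo: selecting the best-fitting random walks improves the
# in-sample match by construction; the out-of-sample comparison shows how
# little genuine skill that selection confers.
import numpy as np

rng = np.random.default_rng(42)
n_steps, n_models = 200, 20
target = np.cumsum(rng.normal(0, 1, n_steps))                # series to "predict"
walks = np.cumsum(rng.normal(0, 1, (n_models, n_steps)), axis=1)

train, test = slice(0, 150), slice(150, 200)                 # training vs held-out future

def rmse(a, b):
    return np.sqrt(np.mean((a - b) ** 2))

fit = np.array([rmse(w[train], target[train]) for w in walks])
best10 = np.argsort(fit)[:10]                                # keep the ten best fits

print("all-20 mean, in-sample RMSE :", rmse(walks.mean(axis=0)[train], target[train]).round(2))
print("best-10 mean, in-sample RMSE:", rmse(walks[best10].mean(axis=0)[train], target[train]).round(2))
print("all-20 mean, out-of-sample  :", rmse(walks.mean(axis=0)[test], target[test]).round(2))
print("best-10 mean, out-of-sample :", rmse(walks[best10].mean(axis=0)[test], target[test]).round(2))
```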
That’s one of many reasons that while it is good that they winnowed out the worst performers in the CMIP5 collection, used “improved” versions of the rest (and I have no reason to doubt that this is true and that they are in fact improved in e.g. spatiotemporal resolution, or are supported by more runs and get better statistics, or have fixes to previously poor implementations of some of the physics) and found an initialization such that they could come much closer to the actual climate, this is creating a new super-model with a new training set, not validating either the collective super-model or the individual models contributing to it. At best one can say that its average managed to reproduce a single trial set from a special start. It remains to be seen whether or not it can predict/project/prophesy the future with any skill. It might — if all work was done honestly there is some reason to hope that it might. But sadly, it might not. One might well ask why we shouldn’t just take the best model and use that as the model that is most likely to predict the future. One might ask (since we want to use terms such as “most likely”) what the quantitative basis is for assigning any “likelihood” at all (as, for example, some actual estimate of probability) that any given model will be predictive in the future, or the collective mean of all of the models, or the prognostications of the local bookie.
That’s really the one question nobody ever asks, isn’t it? Statistics is all about being able to quantitatively compute probabilities. We don’t usually use statistics to say that “A is more likely than B”, we try very hard to say “The probability of A, according to the following (possibly data based) computation is P(A), the probability of B is P(B), and the number P(A) > P(B)”. They don’t ask, because that question is essentially meaningless in this context. We have no way to axiomatically assign any particular probability that the MME mean of any set of models (especially models of this complexity that share a substantial code and assumption base) will in any traditional sense “converge” to the true behavior or deviate from it by some sort of computable standard error. We literally cannot say how likely it is that the models will have skill in the future by doing any defensible computation.
Amazingly, that stops absolutely nobody from making all sorts of pseudo-statistical nonsense assertions of “confidence” concerning all kinds of predictions/projections/prophecies regarding the climate. The SPM for AR5 reasserts high confidence that over half the warming observed in the latter 20th century to the present is anthropogenic. How, exactly, do they compute that confidence? What constitutes “high”? If I make a statement about the confidence I have that a random number generator I’m testing is or isn’t a “good” one, at least I completely understand how I compute p, and know exactly how to interpret the p-value I compute and how much to trust it as a sole predictor of “confidence”. How is this done in climate science? By taking models that are actively failing and basing the predictions on them? That makes the models Bayesian priors to the computation of probability, and I would bet roughly a zillion dollars that even this was never actually done, probably not even without the Bayesian correction that would render the posterior probability of the truth of the statement basically unknown: garbage into a garbage model, garbage out.
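For contrast, here is the kind of fully specified confidence computation the comment refers to for a random number generator (a sketch using SciPy’s chi-square test; the bin count and sample size are arbitrary choices of mine). Under the null hypothesis of uniform output, the p-value has an exact, agreed-upon operational meaning.

```python
# A p-value whose meaning is completely specified: chi-square test that a
# random number generator's output is uniform across bins (toy example).
import numpy as np
from scipy.stats import chisquare

rng = np.random.default_rng(7)
samples = rng.random(100_000)                      # generator under test
counts, _ = np.histogram(samples, bins=50, range=(0.0, 1.0))
stat, p = chisquare(counts)                        # H0: equal expected counts per bin
print(f"chi-square = {stat:.1f}, p-value = {p:.3f}")
```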
Otherwise, where exactly in AR5 are these confidence levels quantitatively computed, and from what assumptions and data? All one learns in chapter 9 is that one cannot have much confidence in any of the predictions of the models in CMIP5 — as the top article finally de facto acknowledges by doing some of what should have been done before AR5 was written, even though it would have devastated its conclusions by rendering them almost completely uncertain, with such low computable “confidence” that they aren’t worth the paper they are printed on.
We’re still in that state for the new, improved, smaller set of better performing models. They arguably have better performance, at least for one interesting trial set. Let’s see how they do in the future. Maybe one day, one can assign actual confidence intervals with some quantitative backing instead of using a word that sounds all professional and statistical when what one means is “In my own, possibly biased, professional and well-informed opinion…”
Statistics was invented partly to get away from all of that, as those are the weasel words of the racetrack tout. “Hey, mate, bet on Cloud Nine in the seventh race. I have the greatest confidence (based on my erudition and many years of experience at the tracks) that he will win!”
rgb
If they claim they could have predicted the “pause”, then let’s see them predict when the “pause” will end, and when global warming will resume.
RGB, thank you for your two posts, I decided to ask an earlier poster defending the models in general this…
“Please tell me what physics parameters they improved; or why do these ten models now have better hindcasting, and what do they mean for climate sensitivity in the future?
You see, I could have made 100% of the models better just by lowering the climate sensitivity to CO2 and adding in the then well-known PDO and AMO ocean-cycle factors. But common sense is not the goal of the climate science community, is it, Mr. Mosher.” (end quote)
I really do not understand climate science at all, I guess. Regarding “the ocean ate my homework”: does this summarize the claim?…
CO2 increased LWIR back radiation, which (losing very little of that energy to evaporation) somehow bypassed the first 700 meters of ocean and, at less than 1/2 of the models’ predicted ocean warming rate, added (poorly measured, with large error bars) heat to the deep oceans, where it will hide for some short time, keeping separate from the rest of the deep oceans, and soon it will, or maybe could, come screaming out of the oceans and cause global catastrophic disaster worldwide.
Did I get it right?
If one sets oneself up as an expert in predicting global temperatures 100 years ahead and then issues a climate report showing unprecedented warming with an almost straight-line temperature curve to 2100, with no pauses or clarifying notes to that effect, one can expect people to naturally ask questions. If over the last 130 years of climate history there have been at least two major pauses (no additional temperature anomaly increases during these periods), like the periods from 1880 to 1930 and again 1945 to 1980, you had better have some solid, undisputable scientific evidence why at least two such pauses will not happen again during the next 100 years. The least that one should present is a risk analysis of what future global temperatures might be should the greenhouse gas theory prove to be wrong, or not as significant compared to the natural variability influence of the 60-to-70-year ocean cycle. Presenting only worst-case scenarios of rising global temperature is not a true or complete risk analysis. You might not be able to predict the exact timing or duration, but you should comment on the possibility of such pauses and the risk thereof should they happen and should they prove to be somewhat similar to the past ones (and not just a pause of a decade or two).
Uhhhh… so now they admit the models we mortgaged our future on were wrong, but now they almost have it down, so it’s time to take out a second mortgage?
You must experience current climate before you can say what your prediction would have been.
This is a very frustrating report. All that those of us with no access to a library with a subscription to Nature Climate Change know is that 10 models (is that 10 different teams, or 1 team with 10 different sets of parameters?) have successfully tracked global data from the 1800s to 2010, including therefore the tricky 0.3C step at 2000/1.
This should be cause for general celebration. Assuming that these are US institutes that have succeeded, the US Govt should be justly proud that its funding in this area has at last borne fruit.
From now on the world knows what to expect, by running the models forward with the set of parameters that accurately modelled past behaviour.
Forget the failed 252 models: why are the successful models not headline news, and what exactly were the critical parameters that made these 10 models work so well?
Is there an academic on this site who can find his/her way to the University library, read the paper and report back? (The doi link leads to the paper but the figures in the abstract are too small to read.)
Could it be that the 10 models that work do not award CO2 and its radiative forcing the assumed predominant role that it has previously enjoyed?
Don’t get carried away. These models are far from “proven”. The good news is that they apparently threw away a whole stack of bad models instead of continuing to average them in as if they were good models. That alone can do nothing but improve the agreement of the models with reality, as one carves an elephant by cutting away everything that doesn’t look like an elephant. But that does not mean that the model (or models) they have created will track the elephant of the past as it evolves into a giraffe, or a mouse, or a T Rex. The parts they cut away probably wouldn’t have done the job, but the parts they have left may not either.
I wrote a couple of fairly detailed posts up above on what we can hope for, what we can expect, and what I’d like to know in terms of omitted (from the top article, anyway) details. What I hope for is that this publication ends up being a tacit acknowledgement that the models upon which AR1-AR5 were based are, for the most part and being very polite, “in error” and are not useful or to be relied on in any way. What I also hope for is that the new/rebuilt/culled models, as you suggest, at the very least downgrade the direct effect of CO_2, the feedbacks (which have been egregious) and upgrade significantly the component of past and present warming that is probably natural to more than half “with confidence” (hey, I can use the term in an unsupportable way too — this is politics or at best a pure guess on anybody’s part until the day they can present a quantitative basis for any apportioning that doesn’t depend on a small mountain of debatable Bayesian assumptions). Finally, I hope, and rather expect, that the rebuilds significantly drop the overall climate sensitivity, by around a factor of 2 relative to AR5 but I’d be happy to get a factor of 1.5, down to solidly under 2 C by 2100.
What I expect is that this result will be initially heavily downplayed and quite possibly even bashed as some sort of betrayal of a political party line, or that calculations will quickly be run with the models that show that the climate rapidly turns around and catches up and in the end there is just as much warming, but it all happens (safely) later, as otherwise if they predict “warming will start up again by year 2017” they run a pretty serious risk of being proven wrong while the metaphorical ink is still dry on the result. But I don’t think that they will be able to avoid a substantial (and, may I say, enormously well-deserved) weakening of public and political confidence in the overall CMIP5 models and the often and loudly overstated conclusions of AR1-AR5.
rgb
Wow, that’s freakin’ brilliant. They figured out how to have their cake (1990s alarmism) and eat it too (but *now* we could have predicted it).
So what do their predictions show for the next 10 years? Or are they only going to tell us 10 years from now that they had it right all along, no matter the result?
“I could have been a contender.”
We shouldn’t take our eye off the ball.
The 2007 version of the “settled science” is now acknowledged to be WRONG, FALSIFIED, INCORRECT.
This is so despite thousands of assurances at that time that they knew what they were doing.
Al Gore was refusing to debate because the “science is settled”. Instead, his idea was to see that “climate change denial” was treated as a sin like racism, and those who were skeptics towards the 2007 settled science were ruled outside of polite society.
I wouldn’t have lost all that money on the GFC either.