Earlier this week I reported on some of the poster sessions at the American Geophysical Union meeting, but was told the next day that I’m not allowed to photograph such posters to report on them. However, when the authors send me the original, for which they own the copyright, there’s nothing AGU can complain about with respect to my violating their photography policy.
This poster from Pat Michaels and Chip Knappenberger builds on their previous work in examining climate sensitivity differences between models and reality.

Recent climate change literature has been dominated by studies which show that the equilibrium climate sensitivity is better constrained than the latest estimates from the Intergovernmental Panel on Climate Change (IPCC) and the U.S. National Climate Assessment (NCA) and that the best estimate of the climate sensitivity is considerably lower than the climate model ensemble average.
From the recent literature, the central estimate of the equilibrium climate sensitivity is ~2°C, while the climate model average is ~3.2°C, or an equilibrium climate sensitivity that is some 40% lower than the model average.
To the extent that the recent literature produces a more accurate estimate of the equilibrium climate sensitivity than does the climate model average, it means that the projections of future climate change given by both the IPCC and NCA are, by default, some 40% too large (too rapid) and the associated (and described) impacts are gross overestimates.
A quantitative test of climate model performance can be made by comparing the range of model projections against observations of the evolution of the global average surface temperature since the mid-20th century.
Here, we perform such a comparison on a collection of 108 model runs comprising the ensemble used in the IPCC’s Fifth Scientific Assessment and find that the observed global average temperature evolution for trend lengths since 1980 (with a few exceptions) lies below 97.5% of the model distribution, meaning that the observed trends are significantly different from the average trend simulated by climate models.
For periods approaching 40 years in length, the observed trend lies outside of (below) the range that includes 95% of all climate model simulations.
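The comparison described above can be sketched numerically. The numbers below are made-up stand-ins, not the actual CMIP5 trends or the observed value; only the test logic (where does the observed trend fall in the distribution of model trends?) follows the poster:

```python
import random

random.seed(42)

# Hypothetical stand-ins for the 108 model-run trends (deg C/decade);
# the real CMIP5 values are in the poster, not reproduced here.
model_trends = [random.gauss(0.21, 0.06) for _ in range(108)]
observed_trend = 0.11  # made-up observed trend, for illustration only

def percentile_rank(value, sample):
    """Fraction of the sample lying at or below `value`."""
    return sum(x <= value for x in sample) / len(sample)

rank = percentile_rank(observed_trend, model_trends)
print(f"Observed trend sits at the {rank:.1%} point of the model distribution")

# The poster's criterion: an observation below the 2.5th percentile of the
# model distribution differs significantly from the model-average trend.
if rank < 0.025:
    print("Observed trend falls outside the 95% model envelope")
```

With real data the same rank is computed once per trend length, giving one point per year on the poster's figure.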
The abstract of this paper states:
“We conclude that at the global scale, this suite of climate models has failed. Treating them as mathematical hypotheses, which they are, means that it is the duty of scientists to, unfortunately, reject their predictions in lieu of those with a lower climate sensitivity.
Unless (or until) the collection of climate models can be demonstrated to accurately capture observed characteristics of known climate changes, policymakers should avoid basing any decisions upon projections made from them. Further, those policies which have already been established using projections from these climate models should be revisited.”
Section 1 of my post at
http://climatesense-norpag.blogspot.com/2014/07/climate-forecasting-methods-and-cooling.html
concerns the inutility of the IPCC climate models for forecasting purposes. It concludes in complete agreement with Michaels and Knappenberger:
“In summary the temperature projections of the IPCC – Met office models and all the impact studies which derive from them have no solid foundation in empirical science being derived from inherently useless and specifically structurally flawed models. They provide no basis for the discussion of future climate trends and represent an enormous waste of time and money. As a foundation for Governmental climate and energy policy their forecasts are already seen to be grossly in error and are therefore worse than useless. A new forecasting paradigm needs to be adopted.”
Using a new forecasting paradigm, the same post contains estimates of the timing and amplitude of the possible coming cooling, based on the natural 60-year and important 1000-year quasi-periodicities seen in the temperature data and using the 10Be and neutron data as the best proxy for solar “activity”.
So the models don’t actually work? They just stand there looking pretty? Shame we can’t have any photos to make our own observations of them.
So the models don’t actually work? They just stand there looking pretty?
That is work, for models…
http://upload.wikimedia.org/wikipedia/en/f/f4/PlNTM1.jpg
Thank you for that. I shall have to make very careful observations. It may take some time.
The baker’s dozen model ensemble presented so colorfully by Leo Smith contains a number of figures that I think should be thoroughly validated. I hereby nominate myself to take on this challenging task. Any other volunteers?
Patrick Michaels: Cato’s Climate Expert has a History of Getting It Wrong! http://www.skepticalscience.com/patrick-michaels-history-getting-climate-wrong.html
If people put up a poster they are openly publicising it. I don’t understand why you can’t photograph them.
They’re more concerned with unflattering photos of attendees snoozing or picking their noses.
Because it might embarrass the conference organizers?
Not really as the AGU conference is not a free event open to the general public.
Not exactly true – you have to pay to get in. If you photograph & distribute, then the AGU is losing potential revenue from those who otherwise might have paid to see. This is pretty standard for most technical society conferences
They are not scientists – they are typical politicians. Real scientists actually seek to be proved wrong so that they can change and improve their theories (ref. Richard Feynman). Politicians are just the opposite. Try making a politician admit he/she is wrong.
Events which charge admission frequently have contract clauses for the participants which give exclusivity or copyright to the hosting organisation. They also penalise stand and poster owners who do not appear or who depart early. The reason is they advertise “x-many stands and poster sessions” and “y-many classroom sessions”. When people pay and enter and see half the place empty or the stand staff heading off for beer at 3 PM they demand their money back. So there are clauses that keep the content unique and the stands attended for specified hours. The organiser is selling what you are paying to display or talk about. Nice work if you can get it.
Sending Anthony the poster before the end is probably a technical foul.
The main source of money (I know from the American Math Society, but I assume it’s similar here) is an annual membership fee of about $100-$200, from which they provide some professional services, organize events such as this one at which they pay only the main speakers, and publish a few journals and book series.
The fellows who happen to walk in front of the Moscone center, are curious to see the posters and pay the entrance fee are NOT a significant source of income…
The evidence points to the ‘Meth Society’ running the AGU 😉
The system is incompletely or insufficiently characterized and unwieldy, which ensures our perception will remain limited to correlation between cause and effect. It’s like the scientific consensus has lost touch with the scientific domain. They really should resist urges to dabble in predictions.
I think if the King came into the room naked whilst they all admired his clothes then Anthony’s photographs would reveal all 🙂
Denying the world’s leading climate blogger the opportunity to use photos of their oh-so-important climate models doesn’t sound much like transparency to me. I wonder why they would take such a position…?
Perhaps they are trying to reduce the dominance (hijacking) of the climate change issue, and moving climate scientists to the most remote presentation venues and limiting photography are part of that effort.
Simple copyright. It works both ways.
I don’t mind copyright protection, although it is generous beyond belief compared to patent protection. And inventive ideas have to be useful, as well as novel, and non-obvious to one “of ORDINARY skill in the art”.
And if you continue to pay fees, you can get 17 years or so (maybe 20) of protection.
With copyright, even if it is total rubbish; totally fictional, of no commercial value to anyone, you get lifetime plus, I think, 50 years, and evidently extendability.
Such a deal.
But if taxpayer funded grant money is used to produce such materials, the public should be the copyright owner.
If I invent something while on the job, my employer, who pays the bills, retains ownership of any patents, and I’m happy for him.
But just remember; there is NO requirement that copyrighted materials be factual, or accurate, or even useful for any purpose.
And as we can see in the “science” literature, much of it is worthless rubbish.
Two thirds of all US Physics PhD “graduates” wrote a thesis on something so totally useless (but original) that nobody is willing to pay them to come and work on their “speciality” for them. So they end up as post-doc fellows somewhere, where they can try to interest some new naïve students in their arcane trivia.
If there was some expectation that PhD thesis results had to be of some redeeming value, rather than just ‘nobody thought of doing this before’, there would be a lot fewer doctors, and a lot more actual working physicists doing useful things for a living.
But I’m happy that they can choose what to put their name to for posterity.
If my poster has nothing but a drawn circle on it, though you might not be able to photograph it and use the image elsewhere due to my copyright, there is nothing stopping you from re-creating the circle yourself and widely distributing its image.
Other types of copyrighted work, such as an image of Mickey Mouse, cannot be reproduced unless the use falls under the fair use provisions of the law.
And then there are trademarks…
“Black circles – multi-model mean trend” Huh? Which models ever showed a sharp dip like that?
The models now have Pinatubo tuned into them. That’s why there’s the dip.
What I notice more and more is that no one wants to admit they are wrong when it comes to forecasting the climate, while the reality is that everyone thus far has been wrong. No one can stand being held accountable. Every time I try to take that path it is met in a hostile way. If you can’t stand the heat, get out of the kitchen, which is what they should all do. There’s always an excuse, or “this did not happen,” or that.
I might add the solar forecasts have been equally bad.
This is why I do not subscribe to any one particular explanation when it comes to why and how the climate may change going forward. I have my own thoughts, which I have expressed many times, but my thoughts are tied to solar activity, which still remains much stronger at present than I ever imagined at this point in time. If solar activity should reach my low-value parameters and the climate does not respond the way I think it should, I will say I am wrong. No excuses.
I will not know, however, unless solar activity becomes very minimal and lasts that way for quite some time. My confidence in this is not as high as it was some two years ago. I have been fooled by this cycle and really have no clue what lies ahead for solar activity. I think/guess it will be on the decline soon and stay quite weak for some time.
I guess monitoring is the best way. Time will tell.
I printed off the predictions for the last solar cycle. Nobody predicted what happened. It was supposed to be a lot like the one before it. Since it seems that the climate has gone into a hiatus, as far as temps, it will be interesting to see if the climate tracks the solar cycles if solar activity continues to decrease.
Salvatore – Don’t know if you saw this, but I think this point answers your first paragraph: “Once government takes up an issue it will expand and never be resolved. There is nothing ironic about the fact that, as always, the people will pay the price and the politicians and deceivers will not be held accountable.” (http://wattsupwiththat.com/2014/12/18/ironically-change-catches-up-with-climate-change-alarmists-in-lima/ , next-to-last paragraph)
I understand your frustration…
What happened about 1992 to change the modeled trend so abruptly?
Mt Pinatubo.
and it changed the trends of a lot of models forever after?
Governor Jay “I’ll Pass a Carbon Tax on my Watch” Inslee of the State of Washington in the good ole’ US of A really needs to be read aloud the results of this study, and the point must be made – preferably in a public forum, with cameras rolling – repeatedly, until he cannot just use the “but it’s for the children” semi-truth as he proposes costly “carbon (sic) taxes” on “large carbon polluters” that purvey fuels based upon the carbon atom, in the (former) Great State of Washington. After all, it would only raise an additional (estimated) $947,000,000.00 in the first year of 2017 (equating to approximately $147.50 in new tax for every man, woman, and child in the State of +/- 6.5 million inhabitants).
http://www.king5.com/story/news/politics/2014/12/18/inslee-capital-gains-tax/20593487/
http://www.washingtonpost.com/blogs/govbeat/wp/2014/12/18/washington-governor-proposes-billion-dollar-carbon-emissions-cap-and-trade-plan/
I doubt actual facts would be able to pierce through his watermelon rind of a cranium to actually influence his steamroller of a plan. The best scenario would be that the green taxing of the proletariat through secondary means, such as proposed, would slow the immigration of Americans/foreign immigrants to the overtaxed State of Washington – there are places to move to that do not tax the air!!!
Ramble ended, thanks for reading.
Michael C. Roberts
http://spaceweatherlive.com/en/solar-activity/solar-cycle-progression
See how far off this is becoming, especially the solar flux.
Methane, it is said, is way over 20 times more powerful than CO2 as a greenhouse gas. I’ve found that the biggest reason for that claim is that at 1 or 2 ppm or so it doesn’t take much to double its concentration. That implies that the logarithmic nature of a greenhouse gas is in operation at 1 or 2 ppm. In the world of geese and ganders, that should apply to CO2 as well.
Dr. James Hansen tells us in Chapter 8 of the IPCC’s AR4 Report
http://www.ipcc.ch/publications_and_data/ar4/wg1/en/ch8s8-6-2-3.html
… the climate response to a doubling of … CO2 … with no feedbacks operating … the global warming from GCMs would be around 1.2°C.
If you double 1 or 2 ppm 7 or 8 times you get around 400 ppm and it follows that the warming would be around 9°C.
We commonly hear that greenhouse gases keep us 33°C warmer than we otherwise would be. And we also hear that CO2’s contribution is anywhere from 9% to 26% of that. As can be seen from the above, no-feedback warming of 9°C is 27% of the 33 degrees and exceeds those limits.
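The arithmetic in this comment can be checked directly. A minimal sketch, assuming the quoted 1.2°C per doubling (no feedbacks) and taking ~1.5 ppm as the starting concentration (the “1 or 2 ppm” where methane-like logarithmic behaviour is assumed to hold):

```python
import math

# Check the comment's arithmetic: ~1.2 deg C per CO2 doubling with no
# feedbacks (the AR4 figure quoted above), counted from a ~1.5 ppm start.
per_doubling = 1.2                # deg C per doubling, no feedbacks (quoted)
doublings = math.log2(400 / 1.5)  # doublings from 1.5 ppm up to 400 ppm
warming = per_doubling * doublings

print(f"{doublings:.1f} doublings to reach 400 ppm")
print(f"implied no-feedback warming: ~{warming:.1f} deg C")
print(f"share of the 33 deg C greenhouse effect: {warming / 33:.0%}")
```

That reproduces the comment’s “7 or 8 doublings, around 9°C” figure and its conclusion that the implied share exceeds the quoted 9–26% range.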
Our friends at SkepticalScience provide this rebuttal:
http://www.skepticalscience.com/argument.php?p=2&t=318&&a=115
How sensitive is our planet?
#53 Glenn Tamblyn at 18:15 PM on 21 September, 2010
I remain unconvinced, considering the limits of the logarithmic nature of a greenhouse gas, that CO2’s climate sensitivity is much different with or without feedbacks. If anything, it’s less.
Do you not simply shake your head when you see information saying CO2 represents between 9% and 26% of the 33°C of supposed greenhouse warming? Not exactly a solid foundation of physics and mathematics to build a computer model from, is it?
The interesting thing about the two extremes of what CO2’s greenhousyness is supposed to be is how each one can be used for the narrative of global warming.

The 26% figure fits nicely into the logarithmic graph you mentioned, allowing proponents to blame the ice ages on low CO2, with zero CO2 being some 8.55°C colder. Sadly, that narrative doesn’t correspond well to the doomsday catastrophic warming scenario: if the last 120 ppm rise between 1860 and today only caused 0.9°C of temperature increase, while the previous 280 ppm is responsible for over 7.6°C, then CO2 is already spent.

So bring in the 9% narrative. That fits well with the “2°C rise above pre-industrial levels with a doubling of CO2 since 1860”. If you draw that one on a graph you get a beautiful straight line from zero through 280 ppm to the present that corresponds perfectly to the 0.9°C rise in temperature and predicts the total 2°C of post-industrial warming by 560 ppm. The only issue is that it puts the total greenhousyness of CO2 today at 2.97°C and leaves the temperature of the ice ages only 1.34°C lower than today. Then there’s Al Gore and Michael Mann’s graphs! Oh dear!

The problem with trying to reconcile the last 0.9°C of warming and 120 ppm of CO2 rise with a compounding effect is twofold. First, it takes the total greenhousyness of CO2 well below the 9% parameter, since if it compounds going forward it is logarithmic going backwards. Then there is the pure problem of compounding itself: if you’re going to predict that the next 120 ppm of CO2 will produce double the temperature rise of the last 120 ppm, then you have to admit you’re advocating that Earth’s surface temperature will surpass that of the planet Venus before CO2 levels reach those commonly experienced in a room full of people as they exhale! I wouldn’t put it past Al Gore to argue for it, but even for him, it’s a hard sell!!
The rebuttal I linked to says that climate sensitivity changes with climate. And still if you try to make a sensitivity of 3.2°C fit you have to say that the green house effect isn’t logarithmic below 20 ppm. That’s a figure I’ve heard bandied about, and now I know why.
Claiming a sensitivity of 3.2°C is like trying to put ten gallons of gas in a five gallon can.
Thanks for the reply.
I give up
I think most scientists and people would agree the climate models have failed. But I also think the politicians and activists will continue to use them.
The real takeaway from this study is that even if temperatures begin to fall back within even very generous error bounds, the models have spent so long so consistently out of range that they are wrong. That’s actually good news, at least good news once the climate modellers admit they are wrong.
“I have not failed. I’ve just found 10,000 ways that won’t work.”
~ Thomas A. Edison
Now the job begins trying to find a model that works.
Rob, try
http://climatesense-norpag.blogspot.com/2014/07/climate-forecasting-methods-and-cooling.html
See 8:54 comment above
Why? It’s silly and arrogant to think we can accurately model a chaotic system … it’s a waste of time and money …
It is silly to assume that the climate system is chaotic when it clearly isn’t. There are obvious periodicities in the Milankovitch cycles which have been stable for hundreds of millions of years. Similar quasi-periodicities are seen in the solar activity and temperature data. I agree that numerical reductionist models are useless. My approach is quite different – you should actually read the linked post before commenting.
It is silly to think that models of chaotic systems cannot provide useful information, or that a chaotic system has no stable or even periodic trajectory.
A model that works?
dT/d(CO2) ~= 0?
A much closer sensitivity equation than any other figures being bandied about. How can any “scientist” believe the 33-degree figure, after you research where it came from (flat earths, etc.)? Astounding.
Patrick J. Michaels and Paul C. Knappenberger,
It is a very well-conceived, debate-stimulating poster. Thanks; debate is what we need to have more of in climate-focused science if that science is to regain some trust in the critical public’s eye.
My understanding of your poster is that the models are too warm to be statistically credible when compared to observations. I would add my view that your poster’s finding is all the more significant when one considers that the observed temperature dataset used in your poster can reasonably be considered to have significant biases in the warm direction.
Happy holiday season to you guys.
John
Thanks Anthony.
Very interesting information.
To me, the summary of the introduction above reads something like this:
“The test of the models, when compared to reality, shows that model performance is not good enough.
The range of possible AGW is not correct.
AGW is not as strong as projected by the range of the models, but nevertheless it is AGW.
When compared to reality, AGW seems to be less strong and possibly not that catastrophic, but again, nevertheless AGW.”
The main problem is that the models these guys complain about are much, much better at estimating the possible range of AGW than they are.
That range means there is no possibility, under any circumstances, of AGW above the upper bound… and below the lower bound there is no chance of AGW at all. The models perform very well at estimating that.
The reality simply shows that the certainty of AGW drops from 97% to nearly nothing, because the models project a very clear picture of what AGW should look like, and reality seems nowhere near it.
These guys seem to claim that it is still AGW, just below the range the models projected as possible.
My bet is that the models are far better than these two at estimating what AGW should look like.
If it does not look like AGW, most probably it is not. There is no need to keep up the 97% certainty of AGW by moving it to a lower range of impact.
A little tip:
Climate sensitivity (CS), when considered as ECS (equilibrium climate sensitivity) at any value above 2°C, amounts to a metric for estimating AGW.
CS becomes, and is treated as, ECS – effectively the same thing – only under AGW, as a requirement for estimating the possible range of AGW and measuring it at any given point in time.
The ECS has no value for estimating or measuring variation in climate with respect to temp/CO2 variation before the anthropogenic era; in principle it is just an exaggeration of the CS as far as natural climate is concerned. (It actually makes sense, in principle, only as a possible change of the metric to a different range due to the climate moving towards a new equilibrium, aka ACC-AGW.)
So if one imagines someone contemplating, with a GCM, that the ECS is ~2°C and requiring the GCM to take that into account and adjust the AGW projections, the most probable response from the GCM would be something like:
“Please do grow up, and don’t ask for such silliness.”
cheers
That graph is really stretching it – 2.5th and 97.5th percentiles between 0.55 and MINUS 0.1 °C/decade.
Robert – that’s because at the far right the trend is taken over a period of only 10 years; the shorter the time period of the trendline, the noisier the signal. The wide error bars at the right threw me too until I gave it some thought; then it made sense.
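The effect this reply describes is easy to demonstrate on pure noise: fitting trends to many short random series gives a far wider spread of slopes than fitting the same noise over longer windows. A minimal sketch with made-up noise levels:

```python
import random
import statistics

random.seed(0)

def ols_slope(y):
    """Ordinary least-squares slope of y against x = 0..n-1."""
    n = len(y)
    xbar = (n - 1) / 2
    ybar = statistics.fmean(y)
    num = sum((i - xbar) * (v - ybar) for i, v in enumerate(y))
    den = sum((i - xbar) ** 2 for i in range(n))
    return num / den

def trend_spread(window, n_trials=2000):
    """Std dev of fitted trends over pure-noise series of a given length."""
    slopes = [ols_slope([random.gauss(0, 0.1) for _ in range(window)])
              for _ in range(n_trials)]
    return statistics.stdev(slopes)

short, long_ = trend_spread(10), trend_spread(40)
print(f"10-point windows: slope spread {short:.4f}")
print(f"40-point windows: slope spread {long_:.4f}")
# Shorter windows yield a much wider spread of fitted trends, which is
# why the error bars balloon at the short-trend end of the chart.
```

In theory the standard error of an OLS slope on white noise shrinks like n^(-3/2), so quadrupling the window length narrows the spread roughly eightfold.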
Make sure to visit
http://climateaudit.org/2014/12/11/unprecedented-model-discrepancy/
Great post & thread, one of McIntyre’s best, with lots of heavy hitters in the comments. Not to be missed, to stay up to date on the model-failure problem.
Cheers, Pete Tillman
Professional geologist, amateur climatologist
From the abstract, the word “unfortunately” is inappropriate for professional statistical presentations. It is either acceptance or rejection of the null, and there is no unfortunate this or that to it. Period.
By trying to condense global temperatures to a single arithmetic average, we are throwing away a huge amount of detail. We have a time series at every point. Since the effect we are looking for is global, the hypothesis we are trying to test is whether or not the actual temperature at any given location has changed over time. We can partition the data set into locations where the temperature has increased and locations where it has not. If significant global warming is taking place, the majority of locations should show significant increases.
Another question – why use the arithmetic average as opposed to T squared or T to the fourth power?
It would seem that T to the fourth power might give something which relates to some physical property. (I know that the Earth and the oceans in particular do not behave as a perfect black body).
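The T-to-the-fourth question can be illustrated with a toy calculation. Averaging T directly and averaging T^4 (the quantity relevant to Stefan-Boltzmann radiation) give different “global means”; the temperatures below are made-up station values, purely for illustration:

```python
# Hypothetical station temperatures in kelvin, chosen only to show the effect.
temps_k = [230.0, 270.0, 300.0, 310.0]

arithmetic_mean = sum(temps_k) / len(temps_k)
# Effective radiating temperature: the T whose T^4 equals the mean of T^4.
radiative_mean = (sum(t ** 4 for t in temps_k) / len(temps_k)) ** 0.25

print(f"arithmetic mean: {arithmetic_mean:.1f} K")
print(f"T^4-based mean:  {radiative_mean:.1f} K")
# By Jensen's inequality the T^4-based mean is always >= the arithmetic
# mean, and the gap grows with the spread of the temperatures.
```

So the two averages agree only for a uniform-temperature planet; the more the temperature field varies, the more they diverge, which is the physical point behind the comment.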
What strikes me:
Michaels and Knappenberger essentially show the inverse of model certainty on their graph – that is to say, the ‘error bars’ take into account a greater range of models than the primary trendlines.
When including another five percent of model runs, the aggregate accuracy gets pretty drastically worse.
This should be a learning moment for those in power… they should realize that the most extreme models are to be summarily disregarded rather than focused upon.
Sadly, I doubt there’s much money in pragmatism.
LeeHarvey,
If by the word ‘pragmatism’ one refers in any way to the American Pragmatism School of Thought (Philosophy)*** then an immense amount of American money is involved; specifically all the American money involved to date and ongoing in the myopic promotion of the observationally failed theory of significant climate change by CO2 from fossil fuel use.
The American Pragmatism Tradition in Philosophy basically says what is right (in the realm of cultural values, social structure and gov’t economic policy) is what works when constructing continuous politically implemented social/cultural/economic experiments to see if they work. As to the meaning of ‘they work’, it has always referred, in the American Pragmatism Tradition of Philosophy, to mean they work to the benefit of some collective. Pragmatism fully anticipates that the experiments, if they work, would likely only work for a limited period; thus they expect endless experimentation as a normal process. You can see in the Climate Change Cause in America the essence of American Pragmatism Traditions.
***The American Pragmatism School of Thought (Philosophy) is the well known and currently widely held philosophical tradition in both academia and in politics that was started and established by the following Americans: Charles Sanders Peirce (1839–1914), William James (1842–1910) and John Dewey (1859–1952).
John
Naïve question, but I hope someone can answer: precisely what is meant by “Multi-model Mean Trend”, shown as the big black dots? I guess this is the average trend of a certain set of models. Presumably, for each model the trend is derivable from whatever climate sensitivity that model derives. Is that right? And the Multi-model Mean Trend changes over time. So, that apparently means that the set of model trends being averaged keeps changing. So, precisely which set of models is used for an average at any point in time?
Or, am I all wet and the Multi-model Mean Trend is calculated some other way?
Thanks to anyone who can explain.
It’s the average of each of the 108 model runs in the IPCC’s 2014 ensemble. And the error bars are based upon the spread of the model results. If you download our entire poster you will see that they are normally distributed, and therefore we can do very straightforward tests on the mean output versus “reality”.
Thank you very much, michaelspj. I am still confused about how the multi-model mean trend varies with time. Are these model trends assigned to a point in time the same way the actual temperatures are? E.g., would a model value for “50” show the model trend for the period 1964 to 2014? If so, would not the model trend be close to the actual trend, since actual data was available when these models were created? Any further explanation would be appreciated.
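One plausible reading of the chart's x-axis (an assumption, not confirmed by the poster itself) is that every trend length ends in the same final year, and each black dot averages the trailing trends across all runs. A sketch of that construction, with made-up stand-in model series:

```python
import random
import statistics

random.seed(1)

def ols_slope(y):
    """Least-squares slope of y against x = 0..n-1."""
    n = len(y)
    xbar = (n - 1) / 2
    ybar = statistics.fmean(y)
    num = sum((i - xbar) * (v - ybar) for i, v in enumerate(y))
    den = sum((i - xbar) ** 2 for i in range(n))
    return num / den

# Hypothetical stand-ins for 108 model-run annual temperature series,
# each ending in the same final year; slope 0.02 deg C/yr plus noise.
n_years, n_models = 65, 108
models = [[0.02 * yr + random.gauss(0, 0.1) for yr in range(n_years)]
          for _ in range(n_models)]

# For each trend length L, fit a trend to the *last* L years of each run,
# then average across runs -- one dot per L on the poster's x-axis.
for length in (10, 25, 40):
    trends = [ols_slope(run[-length:]) for run in models]
    mean_trend = statistics.fmean(trends)
    print(f"length {length:2d} yr: multi-model mean trend "
          f"{10 * mean_trend:.3f} deg C/decade")
```

Under this reading the set of models never changes; only the window length does, which would answer the question above.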
Terrible chart. Too many lines, impossible for a non-specialist to interpret or use.
It’s a very simple chart. Read the legend and the text. We are totally straightforward and only ask that you read our words. Thanks.
Interesting, flawed, and curious. Interesting because it quantifies to some extent the observation that the climate models “collectively” fail a hypothesis test. Flawed because it once again in some sense assumes that the mean and standard deviation of an “ensemble” of non-independent climate models have some statistical meaning, and they do not. Even as a meta-analysis, it is insufficient to reject “the models of CMIP5”, only the use of the mean and variance of the models of CMIP5 as a possibly useful predictor of the climate. But we didn’t need a test for that, not really. The use of this mean as a predictor is literally indefensible in the theory of statistics without making assumptions too egregious for anyone sane to swallow.
What we (sadly) do not see here is the 105 CMIP5 model results individually compared to the data. This would reveal that the “envelope” being constructed above is a collective joke. It’s not as if 5 models of the 105 are very close to the actual data at the lower 5% boundary — it is that all of the models spend 5 percent of their time that low, but in different places. Almost none of the models would pass even the most elementary of hypothesis tests compared to the data as they have the wrong mean, the wrong variance, the wrong autocorrelation compared to the actual climate. Presenting them collectively provides one with the illusion that the real climate is inside some sort of performance envelope, but that is just nonsense.
The curiosity is they are plotting “trend”, not the data itself, that is, the derivative of the data (model or otherwise), and that the derivative of the data has a peak and complex structure over the last 20 years. Say what? Looking at figure 9.8a, AR5, this is difficult to understand. The “pause” is something that actually was neither explained nor anticipated as of 2006 in the models. So I’m a bit suspicious when CO_2 cranks up but “forcings” have somehow been found that moderate the trend, not in just one model but in the bulk of them, hindcasting the pause that wasn’t there four or five years ago. Really?
A final flaw is all of the usual nonsense about fitting a linear trend to a nonlinear timeseries in the first place. Note well that the authors say nothing about error in the fit trends themselves, they just plot out the mean fit trend and some sort of standard deviation of sample fit trends without ever talking about the probable error in each fit trend from data that itself is systematically diverging from the data being fit.
And a good thing, too — they’d be eaten alive by e.g. William Briggs:
http://wmbriggs.com/blog/?p=3266
or in more detail:
http://wmbriggs.com/blog/?p=5172
and ff. Not that they’d notice.
So, very curious. If I wish to compare two different timeseries (say) the measured global anomaly and the global anomaly predicted by a single model run of a single model, or for that matter the mean over many runs of a single model with perturbed parameters, there are straightforward ways to do it. One of them is to look at the linear trends, to be sure, because if they differ at all then the models will separate without bound over time. Better still one can use e.g. Kolmogorov-Smirnov tests, look at the symmetry of the models, look at the variance of the models, and so on — all of their statistical moments, not just one moment that is picked to make some point.
With that said, I agree well enough with the conclusion. My own fits to HadCRUT4 indicate a total climate sensitivity of around 1.8 C, just under the 2.0 C and far under the 3+ C (still) being asserted by various parties. 3 C cannot be rationally fit to HadCRUT4, period, unless one takes back the co-assertion that natural variation is irrelevant, which is (incidentally) confounded by the substantial variation in linear trend in the curves presented above given steadily increasing CO_2.
rgb
+1 . Good for you.
Moreover, even if this collection of model results were a valid statistical ensemble, where are the tests for statistical independence, or for the distribution being Gaussian, to justify the plotted limits — or were nonparametric statistics used (and if so, where is that explained)?
In AR5 (which nobody ever reads, of course) it clearly states that they are not a valid statistical ensemble, and that in particular they are not independent. Indeed, of the 36 or so models in CMIP5 portrayed there, there are only maybe 7 to 11 independent models. If you read the names off, you can see furthermore that the big players (e.g. NASA GISS) have disproportionate representation with 7 or 8 named models that are part of the “ensemble” all by themselves. But the whole idea is silly beyond compare.
My favorite analogy is to the Hartree (mean field) model in quantum mechanics. We know that the Hartree model is fundamentally flawed as a means of computing e.g. electronic energy levels for a multi-electron atom. It ignores the Pauli exclusion principle and does not allow for the powerful short range repulsion between electrons. Both of these things increase the size of atoms by pushing the electrons further apart than the Hartree model allows for, creating a systematic bias in the energy structure of a Hartree (modelled) atom compared to reality. You can get a decent idea of how quantum atoms work — get energy states out with reasonable labels, for example — from the Hartree model, but quantitatively it isn’t so good.
You can, of course, have 36 different people program in a Hartree model, and run the resulting programs on different hardware to different precision and tolerance, and get 36 different results. Those results might well be normally distributed around some sort of “mean Hartree model result”! And even if this were true, it would not ever, under any circumstances, make the “multimodel ensemble mean” of the Hartree model a good predictor or descriptor of reality!
You can average an infinite number of broken or incorrect models and still not converge to a correct model. The entire idea is silly. Under an enormously fortuitous, incredibly unlikely special state of affairs where the systematic errors attributable to different models happen to cancel, you might find an “ensemble” of models that converges in the mean to reality, but one cannot even sanely argue that this extremely special circumstance holds for climate models.
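The point about averaging biased models can be demonstrated numerically in a few lines; every number below is invented for illustration.

```python
import numpy as np

# Sketch: averaging many biased "models" converges to truth + mean bias,
# not to the truth. All numbers here are invented for illustration.
rng = np.random.default_rng(2)
truth = 1.0                          # the quantity being modeled
biases = rng.normal(0.5, 0.2, 1000)  # systematic, non-canceling biases
noise = rng.normal(0.0, 0.1, 1000)   # per-model internal noise
estimates = truth + biases + noise

ensemble_mean = estimates.mean()     # converges near 1.5, not 1.0
```

No matter how many such models are averaged, the ensemble mean locks onto truth plus the mean systematic bias; only in the special case where the biases happen to cancel does it approach the truth.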
Indeed, nobody expects climate models to work at all, let alone work in the multimodel ensemble mean. And by “nobody”, I include climate scientists. They all know perfectly well that it is unlikely that even one single climate model is working correctly, even in a mean sense, one model at a time! They know this because no two unrelated models converge to the same predictions in the mean. At most one model could be correct, and it is far more likely that none of them are. There is overwhelming evidence — some of it presented above — of systematic bias in the CMIP5 models. And yet again in AR5 this is acknowledged, right before they state that they are going to ignore this inconvenient truth and just use the CMIP5 MME mean throughout as if it were some sort of meaningful predictor, even attaching words like “high confidence” to it in the Summary for Policy Makers, where the term “confidence” at any level, high, low, or medium, hasn’t the slightest defensible meaning in any sense of statistical confidence.
This is what drives me personally bananas. It makes the entire report a “confidence” game. Confidence in science is not a statement of opinion. It isn’t even a statement of the opinion of authoritative experienced researchers in the field. It is a defensible statement of a result from statistical analysis. It is a p-value.
AR5’s use of the term is a direct violation of the very precepts of science. It has reduced it from quantitatively defensible analysis to punditry and politics. It is abhorrent. It is despicable. It is just plain wrong.
rgb
+10! Your comments are always spot on and interesting.
Hello rgb.
Considering what you have stated:
“With that said, I agree well enough with the conclusion. My own fits to HadCRUT4 indicate a total climate sensitivity of around 1.8 C, just under the 2.0 C and far under the 3+ C (still) being asserted by various parties.”
————-
Would you consider that AGW is impossible?
Estimating a CS of ~1.8C through the assessed reality of HadCRUT4 puts the possible range of ECS at ~1.2C to ~1.8C (with an average of ~1.5C), far below what is required for a possible AGW.
With the CS at ~1.8C, the possible range of CS is ~1.6C to 2C (with an average of ~1.8C, as you put it), which puts the ECS in the range given above.
The ECS in principle stands for a condition while the CS moves to a new range, as in the case of AGW... and it will always move to a lower value. Nevertheless, CONSIDERING an ECS of ~2.4C to 4.5C (with an average of ~3.4C), as given prior to AR5, most probably means an AGW.
In that range of ECS the CS would have been somewhere in between 2.8C and 4.4C (with an average of ~3.6C)... AND AT SUCH VALUES THERE IS NO REAL IMPORTANT DIFFERENCE TO BE CONSIDERED, and no problem for AGW. But if you lower the CS significantly, the value of ECS drops too low for comfort and becomes meaningless and paradoxical, and therefore so does AGW.
Under this kind of interpretation, would you really consider that AGW is impossible?
cheers.
Forgive me, but I’m having a hard time understanding what you are saying here. Let me instead clarify what I’m saying.
* IF one takes HadCRUT4 at face value (not arguing about whether or not that is justified, as that’s a distinct issue),
* AND one takes the hard result of real physics line-by-line computations, as well as slightly approximated models, that the average surface temperature ought to vary with the log of the CO_2 concentration (at absorption saturation, where we long since are),
* AND one constructs a “reasonable” model for CO_2 from 1850 to the present that almost perfectly matches Mauna Loa data from 1959 through the present (so it is pretty much dead on there) and ice core data back to 1850 (so it is at least likely to be approximately correct there and in between),
* AND one uses the latter two assumptions to fit the former data,
* THEN one obtains a very, very good fit to the data. That is, it is absolutely impossible to reject the null hypothesis that CO_2 was the proximate cause of the average warming in between. Quite the contrary, it is a hypothesis with sufficient explanatory power that there is nothing much left of the data to explain!
Do you see how that works? It is simply a matter of fact that CO_2 concentration is a sufficient explanation for HadCRUT4 if the latter is a valid and reasonably accurate statement of the surface temperature anomaly. It is as simple as that. I freely acknowledge that there could be alternative hypotheses that would also work. I freely acknowledge that HadCRUT4 might not be accurate. I freely acknowledge that my interpolatory CO_2 model might not be valid. But it is undeniably true that the three assumptions above form a very coherent and consistent result, one that directly measures a TCS of 1.8 C in the warming observed so far. That result really only depends on the beginning and end points, but the fact that it works well in between is fairly strong evidence that the hypotheses above could be mutually correct. Or that HadCRUT4 was cooked up to support a TCS of 1.8 (unlikely, since most of the IPCC seems to want it to be much higher). Or that my CO_2 model is wrong but coincidentally happens to produce a good fit by pure chance. Or…
So please, by all means, come up with or express your own explanations for the warming, but if you are going to argue with mine please understand that a) you need to be quantitative: I can defend mine with a physics-based, quantitative fit built using R from the hypotheses and the data, no handwaving necessary. b) You need to be clear about what you are asserting: I can’t figure out if you agree or disagree with HadCRUT4, or with any given model for CO_2 increase, and I can’t figure out whether or not you agree or disagree with the expected warming (and are arguing about the particular value of TCS that is reasonable) or what. c) If you are asserting that no warming has occurred, by all means say so. Then we can terminate the discussion early, because (note well) I am “assuming that HadCRUT4 is reasonably accurate”. If you change this assumption, of course you will arrive at different conclusions, but then the discussion has to be about something else, namely the problems or lack thereof with HadCRUT4. I might even AGREE that it has problems, but that doesn’t affect the value of the exercise above.
rgb
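The logarithmic fit rgb describes (he says his own was done in R against HadCRUT4) can be sketched in Python; the CO_2 curve and anomaly below are synthetic placeholders, and the point is only the algebra: if the anomaly follows a + b·ln(C/C0), the implied transient sensitivity per doubling of CO_2 is b·ln 2.

```python
import numpy as np

# Sketch of the logarithmic fit described above: anomaly ~ a + b*ln(C/C0),
# so the implied transient sensitivity per doubling of CO_2 is b*ln(2).
# The CO_2 curve and anomaly below are synthetic placeholders, NOT HadCRUT4
# or rgb's actual model (which he says was built in R).
years = np.arange(1850, 2014)
co2 = 285.0 * np.exp(0.0018 * (years - 1850))  # toy CO_2 rise, ppm-ish
assumed_tcs = 1.8                              # degC per doubling (assumed)
anomaly = assumed_tcs * np.log2(co2 / co2[0])  # noiseless toy anomaly

# Fit anomaly against ln(C/C0); the slope b recovers TCS as b*ln(2)
b, a = np.polyfit(np.log(co2 / co2[0]), anomaly, 1)
tcs = b * np.log(2)
```

On real data the fit would of course have residuals and error bars, but the recovered sensitivity depends mainly on the endpoints of the CO_2 and temperature records, exactly as described above.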
Hi rgb.
I think your comment was a reply to me.
You say:
“Forgive me, but I’m having a hard time understanding what you are saying here. Let me instead clarify what I’m saying.”
———————–
Forgive me too, but I have to put my understanding of this little debate of ours as clearly as I can, no offence intended, honestly.
Reading your reply to me, I have a very hard time accepting the above statement of yours as true, as you actually managed, in a very clever way, to avoid answering my question by dodging the point at which it stood.
In your latest reply to me, you are saying that when you said “total climate sensitivity of around 1.8 C, just under the 2.0 C”, you actually did not mean the CS (as I happened to have understood it) but the TCS, and therefore you do not need to answer my question anymore, simply because any value of TCS (whatever that value may be) on its own proves or disproves nothing about AGW, and the interpretation my question was based on happens to be outside the meaning you had for your “total climate sensitivity”.
Very clever of you I must say.
About the rest of your latest reply to me, I can’t make heads or tails of most of it, to be honest... and no, I did not dispute the accuracy of HadCRUT4; it was taken at face value, as you say, because it does not really matter for the subject of the question you had to answer.
So you did not need to go to all that trouble and make such a long reply to me; you just had to say that by “total climate sensitivity” you meant the TCS, contrary to what I thought, the CS.
That would have been good enough.
You see, TCS does not actually mean “total climate sensitivity”; for what it is worth, it actually means Transient Climate Sensitivity, a kind of CS needed to explain, uphold and measure the possibility of the climate moving towards a new climate equilibrium, aka the AGW.
Forgive me for thinking that “total” in relation to CS has no actual meaning, and therefore my mistake of thinking that you simply meant the CS when you said “total climate sensitivity”, BUT YOU SEE THERE WAS NOT MUCH CHOICE THERE, IF NOT NONE.
Simply put, as far as I can see, you managed to avoid and dodge my question by very cleverly and stealthily moving the goal posts. :-)
Nevertheless you have indirectly answered my question, I assume... you can’t really consider AGW to be impossible, no matter what, under any circumstances.
But while at it, as I am not very comfortable with assumptions, allow me, and also forgive me, for asking you the same question again, now aimed at where you have moved the goal posts.
So, according to your computation that puts the TCS at present at a value of 1.8C, and considering that the same computation in a “present” 14 years ago would have produced a TCS value distinctly higher than 1.8C, and therefore that the TCS value is going downhill, contrary to what is expected in an AGW scenario moving towards a new climate equilibrium, would you consider that AGW is impossible, according to such an interpretation?
cheers
“My own fits to HadCRUT4 indicate a total climate sensitivity of around 1.8 C, just under the 2.0 C and far under the 3+ C (still) being asserted by various parties” I’m assuming that is a fit of a sin+linear function that treats the dots as just noise. Maybe a bigger envelope, but still a back of the envelope, and still the same conclusion. The lowest estimates of the models are the only ones that you can take seriously – with a pinch of salt.
I’ve played with linear in time as well, but it doesn’t work as well as the natural log of the concentration of CO_2 (which is itself not linear in time):
http://www.phy.duke.edu/~rgb/Toft-CO2-PDO-jpg
As you can see, it works really, really well — especially with the sin variation, which I have no explanation for at all; it merely seems to improve the fit empirically and is probably nothing meaningful (certainly nothing I’d gamble on into the distant future :-).
The hard question is this: This result has been obvious for decades now. Hansen could have computed the fit back in 1980 and would have gotten nearly the same thing. By 2000 it was very clear, and by then we had enough Mauna Loa data to fit back to at least the 1940s on enormously simple assumptions. Where, precisely, is there any reason to think that TCS is over 3 in this figure? Note well that by the time the anomaly increases by 3 C, CO_2 has nearly tripled. It isn’t even close. And this figure includes all feedbacks — by ignoring them, and assuming a linear response added on to the otherwise logarithmic increase.
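The arithmetic in this paragraph is easy to check: inverting ΔT = S·log2(C/C0) gives the CO_2 ratio required for a given warming at sensitivity S. A minimal sketch (the 1.8 and 3.0 values are the sensitivities discussed above):

```python
# Arithmetic behind the paragraph above: inverting dT = S * log2(C/C0)
# gives the CO_2 ratio needed for a given warming at sensitivity S.
def co2_ratio_for_warming(delta_t_c, sensitivity_c_per_doubling):
    return 2.0 ** (delta_t_c / sensitivity_c_per_doubling)

ratio_at_1p8 = co2_ratio_for_warming(3.0, 1.8)  # about 3.2x: nearly a tripling
ratio_at_3p0 = co2_ratio_for_warming(3.0, 3.0)  # exactly 2x: one doubling
```

So a 3 C rise at the empirically fitted sensitivity of 1.8 C per doubling requires CO_2 to nearly triple, which is the "isn't even close" point being made.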
IMO the single stretch from roughly 1983 to 2000 caused a widespread panic (even among climate scientists who really should have known better) because people simply could not see the overall pattern of temperature variation back to 1850. What they probably saw was a moderate warming due to CO_2 plus a strong, rapid warming due to the unexplained harmonic, capped by an unusually strong ENSO. But in perspective, even the unusually strong ENSO was just another transient modulating the climate around the dominant trend, one that very likely is CO_2 driven but which is unlikely to be catastrophic.
This situation was not improved by appointing Hansen to be the head of NASA GISS. Talk about political disaster! Appointing somebody whose mind is clearly already made up and who cannot even maintain the facade of objectivity, somebody who gets arrested at protests against nuclear power (at the same time as he is demonizing CO_2, pretty much working to bring down civilization itself) — madness!
rgb
Sorry for getting back to this so late. My tractor caught fire. The battery decided to arc with the bonnet.
The link doesn’t work, but I can guess what it is. I noticed about a year ago that there is a significant warming of about 0.2°C since 1950, as well as the 60 year period, but I’m convinced that it is bigger than reality because of the inconvenient 40s blip.
rgbatduke,
Two points:
Point #1 – Every time I see your comments here at WUWT I have this vision of being a freshman at university again. In the vision, instead of majoring in Engineering Science with a Nuclear Power focus as I did, I am majoring in both statistics and the philosophy of science.
Point #2 – You said, “ [. . .] The “pause” is something that actually was neither explained nor anticipated as of 2006 in the models. So I’m a bit suspicious when CO_2 cranks up but “forcings” have somehow been found that moderate the trend, not in just one model but in the bulk of them, hindcasting the pause that wasn’t there four or five years ago. Really?” You caught the GCMers / IPCC GCM assessors in a real gotcha there, alright! : )
Have a happy Holiday Season!
John
Pls download the entire document so you can see the normal distribution of the predictions. Thx!
I have also been disgusted by the use of the word “confidence” in AR5. Not only was it devoid of any statistical or scientific meaning, it was also devoid of any honesty, in the sense that an honest person would have noted that global temperatures had not risen for many years, a fact that should have prevented anyone from expressing their own non-scientific and non-statistical confidence in climate science.