No significant warming for 17 years 4 months

By Christopher Monckton of Brenchley

As Anthony and others have pointed out, even the New York Times has at last been constrained to admit what Dr. Pachauri of the IPCC was constrained to admit some months ago. There has been no global warming statistically distinguishable from zero for getting on for two decades.

The NYT says the absence of warming arises because skeptics cherry-pick 1998, the year of the Great El Niño, as their starting point. However, as Anthony explained yesterday, the stasis goes back farther than that. He says we shall soon be approaching Dr. Ben Santer’s 17-year test: if there is no warming for 17 years, the models are wrong.

Usefully, the latest version of the Hadley Centre/Climatic Research Unit monthly global mean surface temperature anomaly series provides not only the anomalies themselves but also the 2 σ uncertainties.

Superimposing the temperature curve and its least-squares linear-regression trend on the statistical insignificance region bounded by the means of the trends on these published uncertainties since January 1996 demonstrates that there has been no statistically-significant warming in 17 years 4 months:

[Figure: HadCRUT4 monthly global mean surface temperature anomalies, January 1996 to April 2013, with the least-squares linear-regression trend superimposed on the 2 σ statistical-insignificance region.]
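For readers who want to play with the method, here is a minimal sketch in Python (not Monckton’s own calculation). It uses a synthetic monthly series and a flat ±0.15 Cº band as stand-ins for the real HadCRUT4 anomalies and their published 2 σ uncertainties, which the sketch does not download, and it applies a cruder version of the test described above: fit a least-squares trend and ask whether the total change along the trend line stays inside the uncertainty envelope.

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for HadCRUT4: 208 months (Jan 1996 - Apr 2013) of anomalies built from
# a small trend plus noise. Substitute the real series to repeat the test properly.
months = np.arange(208)
anomaly = (0.0075 / 12) * months + rng.normal(0.0, 0.1, size=months.size)
uncertainty_2sigma = 0.15            # roughly 0.15 Cº either side of the central estimate

# Least-squares linear-regression trend
slope, intercept = np.polyfit(months, anomaly, 1)
trend_change = slope * (months[-1] - months[0])   # total change along the trend line
rate_per_century = slope * 12 * 100

print(f"trend: {rate_per_century:+.2f} Cº/century, total change: {trend_change:+.3f} Cº")

# Crude significance check: does the endpoint of the trend line stay within the
# 2-sigma band around the start of the trend?
if abs(trend_change) <= uncertainty_2sigma:
    print("trend endpoint lies within the insignificance region")
else:
    print("trend endpoint lies outside the insignificance region")
```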

On Dr. Santer’s 17-year test, then, the models may have failed. A rethink is needed.

The fact that an apparent warming rate equivalent to almost 0.9 Cº/century is statistically insignificant may seem surprising at first sight, but there are two reasons for it. First, the published uncertainties are substantial: approximately 0.15 Cº either side of the central estimate.

Secondly, one weakness of linear regression is that it is unduly influenced by outliers. Visibly, the Great El Niño of 1998 is one such outlier.

If 1998 were the only outlier, and particularly if it were the largest, going back to 1996 would be much the same as cherry-picking 1998 itself as the start date.

However, the magnitude of the 1998 positive outlier is countervailed by that of the 1996/7 La Niña. Also, there is a still more substantial positive outlier in the shape of the 2007 El Niño, against which the La Niña of 2008 countervails.
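A toy illustration of this point, with made-up numbers rather than the real anomaly record: a single large positive spike early in a trendless series shifts the least-squares slope noticeably, while a comparable negative spike shortly before it largely offsets the shift.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.arange(120)                              # ten years of monthly data
base = rng.normal(0.0, 0.05, size=x.size)       # trendless noise

def slope_per_century(y):
    return np.polyfit(x, y, 1)[0] * 12 * 100

spiked = base.copy()
spiked[30] += 0.5                               # one El Niño-like positive outlier
balanced = spiked.copy()
balanced[20] -= 0.5                             # a countervailing La Niña-like outlier

print(f"no outliers:             {slope_per_century(base):+.3f} per century")
print(f"one positive outlier:    {slope_per_century(spiked):+.3f} per century")
print(f"countervailing outliers: {slope_per_century(balanced):+.3f} per century")
```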

In passing, note that the cooling from January 2007 to January 2008 is the fastest January-to-January cooling in the HadCRUT4 record going back to 1850.

Bearing these considerations in mind, going back to January 1996 is a fair test for statistical significance. And, as the graph shows, there has been no warming that we can statistically distinguish from zero throughout that period, for even the rightmost endpoint of the regression trend-line falls (albeit barely) within the region of statistical insignificance.

Be that as it may, one should beware of focusing the debate solely on how many years and months have passed without significant global warming. Another strong el Niño could – at least temporarily – bring the long period without warming to an end. If so, the cry-babies will screech that catastrophic global warming has resumed, the models were right all along, etc., etc.

It is better to focus on the ever-widening discrepancy between predicted and observed warming rates. The IPCC’s forthcoming Fifth Assessment Report backcasts the interval of 34 models’ global warming projections to 2005, since when the world should have been warming at a rate equivalent to 2.33 Cº/century. Instead, it has been cooling at a rate equivalent to a statistically-insignificant 0.87 Cº/century:

[Figure: the 34 models’ projected warming from January 2005 (equivalent to 2.33 Cº/century) compared with the observed HadCRUT4 trend to April 2013 (equivalent to –0.87 Cº/century).]

The variance between prediction and observation over the 100 months from January 2005 to April 2013 is thus equivalent to 2.33 + 0.87 = 3.2 Cº/century.

The correlation coefficient is low, the period of record is short, and I have not yet obtained the monthly projected-anomaly data from the modelers to allow a proper p-value comparison.

Yet it is becoming difficult to suggest with a straight face that the models’ projections are healthily on track.

From now on, I propose to publish a monthly index of the variance between the IPCC’s predicted global warming and the thermometers’ measurements. That variance may well inexorably widen over time.

In any event, the index will limit the scope for false claims that the world continues to warm at an unprecedented and dangerous rate.

UPDATE: Lucia’s Blackboard has a detailed essay by SteveF analyzing the recent trend, using an improved index accounting for ENSO, volcanic aerosols, and solar cycles. He concludes that the best-estimate rate of warming from 1997 to 2012 is less than one-third of the rate of warming from 1979 to 1996. Also, the original version of this story incorrectly referred to the Washington Post, when it was actually the New York Times article by Justin Gillis. That reference has been corrected. – Anthony

429 Comments
Bob Diaz
June 13, 2013 10:37 am

I want to zero in on the most important line stated, “It is better to focus on the ever-widening discrepancy between predicted and observed warming rates.”
In one sentence Monckton has zeroed in on the total failure of the alarmist group: the models are wrong. They have overestimated the impact of increased CO2.

climatereason
Editor
June 13, 2013 10:44 am

Jai Mitchell said about a comment from Dodgy Geezer
‘The idea that we are still coming out of the last ice age is a common misperception. The end of the last ice age happened at the beginning of the current Holocene period about 12,000 years ago. Since then temperatures have actually gone down a bit and we have been very stable for the last 6000 years or so.’
DG said nothing about the ‘ice age’. He specifically referenced the ‘Little Ice Age’, meaning the period of intermittent intense cold that ended with the glacier retreat of roughly 1750–1850. That is the term you would have been better employed commenting on, if you felt like being pedantic:
“The term Little Ice Age was originally coined by F Matthes in 1939 to describe the most recent 4000 year climatic interval (the Late Holocene) associated with a particularly dramatic series of mountain glacier advances and retreats, analogous to, though considerably more moderate than, the Pleistocene glacial fluctuations. This relatively prolonged period has now become known as the Neoglacial period.” – Dr Michael Mann
http://www.meteo.psu.edu/holocene/public_html/shared/articles/littleiceage.pdf
tonyb

John Tillman
June 13, 2013 10:44 am

jai mitchell says:
June 13, 2013 at 9:43 am
@Dodgy Geezer
The idea that we are still coming out of the last ice age is a common misperception. The end of the last ice age happened at the beginning of the current Holocene period about 12,000 years ago. Since then temperatures have actually gone down a bit and we have been very stable for the last 6000 years or so.
unless you live in Greenland, of course. . .
——————————
Dodgy said Little Ice Age, not the “last ice age”.
Earth is at present headed toward the next big ice age (alarmists in the ’70s were right about the direction but wrong as to time scale). Global temperatures are headed down, long-term. The trend for at least the past 3000 years, since the Minoan Warm Period, if not 5000, since the Holocene Optimum, is decidedly down. The short-term trend, since the depths of the Little Ice Age about 300 years ago, is slightly up, of course with decadal fluctuations cyclically above & below the trend line.

jc
June 13, 2013 10:46 am

@rgbatduke says:
June 13, 2013 at 7:20 am
“Let me repeat this. It has no meaning! It is indefensible within the theory and practice of statistical analysis. You might as well use a ouija board as the basis of claims about the future climate history as the ensemble average of different computational physical models that do not differ by truly random variations and are subject to all sorts of omitted variable, selected variable, implementation, and initialization bias. The board might give you the right answer, might not, but good luck justifying the answer it gives on some sort of rational basis.”
———————————————————————————————————————
Whilst grasping the basic principles of what you say, I cannot comment on what might pass for “legitimate” contemporary interpretation of principle and methodology as actually practiced and accepted within the wide range of applications across many disciplines by those claiming an expertise and the right to do so.
I am fairly confident that these are in practice “elastic”, depending on requirements, and that, where justification is required, those promoting and defending such “desirable” formulations bring more energy and commitment to the task, and invoke “particularities” of their endeavors to which others are not privy in order to neutralize any queries. This is of course antithetical to the concept of knowledge, let alone a body of it.
This is pervasive across any field of activity in which an expertise based on specialist understanding is claimed. It cannot be viewed in isolation from the promotion of such things as observation and commentary on civic affairs into classified Disciplines like political “science”, which is in actuality just a matter of opinion and fluid interaction. Such fields actively incorporate the justifying “truth is what you make it” whilst at the same time elevating it to the level of the immutable, governed by autonomous laws, both to dignify the activity and as a mechanism by which its proponents prevail. There can be no appeal to first principles that are accepted as defining the limits of interpretation, because they don’t exist.
“Climate Science” as an orthodoxy, and as a field, as opposed to investigations into particular areas that may have relevance to climate, does not exist as science. What is most obvious and disturbing about AGW is its lack of intellectual underpinning – in fact its defiance of the basic application of intelligence which you highlight in this abuse of the specific rigor required in adhering to this manifestation of it in statistical methodology.
You are right to say: “do not engage”. It is essential to refuse to concede the legitimacy of interaction with those who claim it when such people are palpably either not sincere, not competent, or not what they claim to be. To state and restate the fundamental basis of inadequacy is what is obligatory. A lack of acknowledgement, and an unwillingness to rethink a position based on this, tells everyone who is willing and capable of listening everything they need to know about such people and the culture that is their vehicle. You do not cater to the dishonest, the deceptive, or the inadequate seeking to maintain advantage after having insinuated themselves, when it is clear what they are. You exclude them.
To be frustrated, although initially unavoidable since it derives from the assumption that others actually have a shared base in respect for the non-personal discipline of reality, is not useful. It is only when the realization occurs that what within those parameters is a “mistake” is not, and will not be, seen as a mistake by its proponents – whether through inadequacy or design – that clarity of understanding and purpose can emerge.
The evidence is constant and overwhelming that “Climate Science” and “Climate Scientists” are not what they claim to be. Whether this is by incompetence or intent is in the first instance irrelevant. They are unfit. What they are; what they represent; what they compel the world to; is degradation.
The blindingly obvious can be repeatedly pointed out to such people to no effect whatsoever.
They must be stopped. They can only be stopped by those who will defend and advance the principles which they have subverted and perverted. This demands hostility and scathing condemnation. This is not a time in history for social etiquettes, whether general or academic.

george e. smith
June 13, 2013 11:01 am

“””””……StephenP says:
June 13, 2013 at 6:28 am
Rather off-topic, but there are 4 questions that I would like the answer to:
1. We are told the concentration of CO2 in the atmosphere is 0.039%, but what is the concentration of CO2 at different heights above the earth’s surface? As CO2 is ‘heavier than air’ one would expect it to be at higher percentages near the earth’s surface.
2. Do the CO2 molecules rise as they absorb heat during the day from the sun? And how far?
3. Do the CO2 molecules fall at night when they no longer get any heat input from the sun?
4. When a CO2 molecule is heated, does it re-radiate equally in all directions, assuming the surroundings are cooler, or does it radiate heat in proportion to the difference in temperature in any particular direction?
Any comments gratefully received…….””””””
Stephen; let’s start at #4. That’s a bit of a tricky question. In an atmospheric situation, any time any molecule or atom “radiates” (they all do), there is no preferred direction for the photon to exit. Arguably, the molecule has no knowledge of direction, or of any conditions of its surroundings, including no knowledge of which direction might be the highest or lowest Temperature gradient. So a radiated photon is equally likely to go in any direction.
As to a CO2 molecule which has captured an LWIR photon, in the 15 micron wavelength region for example, one could argue, that the CO2 molecule has NOT been heated, by such a capture; but its internal energy state has changed, and it now is likely oscillating in its 15 micron “bending mode”, actually one of two identical “degenerate” bending modes.
In the lower atmosphere, it is most likely that the CO2 molecule will soon collide with an N2 molecule, or an O2 molecule, or even an Ar atom. It is most unlikely to collide with another CO2 molecule. At 400 ppm, there are 2500 air molecules for each CO2, so it is likely to be 13-14 molecular spacings to the next CO2; our example molecule doesn’t even know another like it is there.
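A quick back-of-the-envelope check on that 13-14 spacings figure (a rough estimate that treats the molecules as if they sat on a uniform grid):

```python
# At ~400 ppm there is one CO2 for every 1/400e-6 = 2500 air molecules, so on a
# uniform grid the nearest CO2 sits roughly the cube root of that many spacings away.
molecules_per_co2 = 1 / 400e-6
print(round(molecules_per_co2 ** (1 / 3), 1))   # about 13.6 molecular spacings
```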
When such a collision occurs, our CO2 molecule is likely to forget about doing the elbow bend, and it will exchange some energy with whoever it hit. Maybe the LWIR photon is re-emitted at that point; perhaps with a Doppler shift in frequency, and over a lot of such encounters, the atmospheric Temperature will change; probably an increase. The CO2 molecule itself, really doesn’t have a Temperature; that is a macro property, of a large assemblage of molecules or atoms.
But the bottom line is that an energy exchange in such an isolated event, is likely to be in any direction whatsoever.
We are told that CO2 is “well mixed” in the atmosphere. I have no idea what that means. At ML in Hawaii, the CO2 cycles about 6 ppm peak-to-peak each year; at the North Pole it is about 18 ppm, and at the South Pole about 1 ppm in the opposite phase. That’s not my idea of well mixed.
A well mixed mixture would have no statistically significant change in composition between samples taken anywhere in the mixture; well, in my view anyway.
I suspect that there is a gradient in CO2 abundance with altitude. With all the atmospheric instabilities, I doubt that it is feasible to measure it.

Luther Wu
June 13, 2013 11:16 am

It’s his Lordship this, his Lordship that, “he’d deny a blackened pot”
But there for all the world to see, he shows the MET wot’s wot

climatereason
Editor
June 13, 2013 11:29 am

Luther Wu
Byron will be turning in his grave
tonyb

climatereason
Editor
June 13, 2013 11:31 am

John Tillman
You must be as amazed as I am that it’s got warmer since the end of the LIA. Who would have thought it?
tonyb

Gary Hladik
June 13, 2013 11:35 am

rgbatduke says (June 13, 2013 at 7:20 am): [snip]
Wow. I read every word, understood about half, concur with the rest. The part I didn’t understand took me way, way back to college physics, when we solved the Schrödinger equation for the hydrogen atom. That was the closest I ever came to being a physicist. 🙂 While I enjoyed the trip down memory lane, if you expand this comment into an article, I’d suggest using an example more familiar to most readers than the physics of a carbon atom. 🙂
I looked up the xkcd comic for green jelly beans. During my “biostatistician” period, I was actually involved in a real life situation similar to that–’nuff said.
I remember a thread on WUWT in which a commenter cherry-picked an IPCC GCM that came closest to the (then) trend of the so-called global average temperature. Other commenters asked why the IPCC chose to use their “ensemble” instead of this model. Apparently the model that got the temperature “almost right” was worse than the other models at predicting regional cloud cover, precipitation, humidity, temperature patterns, etc. Green jelly beans all over again.

June 13, 2013 11:41 am

rgbatduke says at June 13, 2013 at 7:20 am
A lot of very insightful information.
Of course averaging models ignores what the models are meant to do. They are meant to represent some understanding of the climate. Muddling them up only works if they all have exactly the same understanding.
That is, either they are all known to be perfect, in which case they would all be identical, as there is only one real climate.
Or they are all known to be completely unrelated to the actual climate. That is, they are assumed to be 100% wrong in a random way. If they were systematically wrong they couldn’t be mixed up equally.
So what does the fact that this mixing has been done say about expert opinion on the worth of the climate models?
My only fault with the comment by rgbatduke is that it was a comment not a main post. It deserves to be a main post.

rgbatduke
June 13, 2013 11:42 am

As I understand it, running the same model twice in a row with the same parameters won’t even produce the same results. But somehow averaging the results together is meaningful? Riiiight. As meaningful as a “global temperature” which is not at all.
This, actually, is what MIGHT be meaningful. If the models perfectly reasonably do “Monte Carlo Simulation” by adding random noise to their starting parameters and then generate an ensemble of answers, the average is indeed meaningful within the confines of the model, as is the variance of the individual runs. Also, unless the model internally generates this sort of random noise as part of its operation, it will indeed produce the same numbers from the same exact starting point (or else the computer it runs on is broken). Computer code is deterministic even if nature is not. This isn’t what I have a problem with. What I object to is a model that predicts a warming that fails at the 2-3 sigma level for its OWN sigma to predict the current temperatures outside still being taken seriously and averaged in to “cancel” models that actually agree at the 1 sigma level as if they are both somehow equally likely to be right.
The models that produce the least average warming in the whole collection that contributes to AR5 are the only ones that have a reasonable chance of being at least approximately correct. Ones that still predict a climate sensitivity from 3 to 5 C have no place even contributing to the discussion. This is the stuff that really has been falsified (IMO).
Also, global temperature is a meaningful measure that might well be expected to be related to both radiative energy balance and the enthalpy/internal energy content of the Earth. It is not a perfect measure by any means, as temperature distribution is highly inhomogeneous and variable, and it isn’t linearly connected with local internal energy because a lot of that is tied up in latent heat, and a lot more is constantly redistributing among degrees of freedom with vastly different heat capacities, e.g. air, land, ocean, water, ice, water vapor, vegetation.
This is the basis of the search for the “missing heat” — since temperatures aren’t rising but it is believed that the Earth is in a state of constant radiative imbalance, the heat has to be going somewhere where it doesn’t raise the temperature (much). Whether or not you believe in the imbalance (I’m neutral as I haven’t looked at how they supposedly measure it on anything like a continuous basis if they’ve ever actually measured it accurately enough to get out of the noise) the search itself basically reveals that Trenberth actually agrees with you. Global temperature is not a good metric of global warming because one cannot directly and linearly connect absorbed heat with surface temperature changes — it can disappear into the deep ocean for a century or ten, it can be absorbed by water at the surface of the ocean, be turned into latent heat of vaporization, be lost high in the troposphere via radiation above the bulk of the GHE blanket to produce clouds, and increase local albedo to where it reflects 100x as much heat as was involved in the evaporation in the first place before falling as cooler rain back into the ocean, it can go into tropical land surface temperature and be radiated away at enhanced rates from the T^4 in the SB equation, or it can be uniformly distributed in the atmosphere and carried north to make surface temperatures more uniform. Only this latter process — improved mixing of temperatures — is likely to be “significantly” net warming as far as global temperatures are concerned.
rgb
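To make the distinction rgb is drawing concrete, here is a toy sketch in Python with invented numbers (nothing resembling a real GCM): repeated runs of one model with different noise realisations give an ensemble whose mean and spread are meaningful within that model, while an unweighted average over structurally different models produces a number that corresponds to none of them.

```python
import numpy as np

rng = np.random.default_rng(7)
years = np.arange(30)

def toy_model(trend_per_year, rng):
    """One 'run': a fixed structural trend plus internal variability (noise)."""
    return trend_per_year * years + rng.normal(0.0, 0.1, size=years.size)

# Monte Carlo ensemble of ONE model: same structure, different noise realisations.
runs_g = np.array([toy_model(0.010, rng) for _ in range(50)])
print("single-model ensemble: final anomaly %.2f +/- %.2f"
      % (runs_g[:, -1].mean(), runs_g[:, -1].std()))

# Unweighted "multi-model mean" over structurally different models.
trends = [0.010, 0.030, 0.032, 0.035]     # one moderate model and three hot ones
runs_mix = np.array([toy_model(t, rng) for t in trends])
print("multi-model mean final anomaly: %.2f" % runs_mix[:, -1].mean())
# The mixed mean is pulled toward the hot models, and its spread reflects
# structural disagreement rather than the internal variability of any one model.
```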

Steven Mosher
June 13, 2013 11:54 am

“The models that produce the least average warming in the whole collection that contributes to AR5 are the only ones that have a reasonable chance of being at least approximately correct. Ones that still predict a climate sensitivity from 3 to 5 C have no place even contributing to the discussion. This is the stuff that really has been falsified (IMO).”
The best estimates of ECS come from paleo data and then observational data. For ECS they range from 1C to 6C.
The climate models range from 2.1C to 4.4C for ECS and much lower for TCR.
Finally, there is no such thing as falsification. There is confirmation and disconfirmation.
Even Popper realized this in the end, as did Feynman.

June 13, 2013 12:02 pm

Remember that it is global temperature, not energy imbalance, that is the factor expected to be responsible for the feedbacks that turn the gradual changes we have barely noticed into a global catastrophe.
If the energy being absorbed doesn’t cause the global temperature changes then the proposed mechanisms for the feedbacks – like increased water vapour in the atmosphere – don’t work.
And therefore the priority given to the field of Climatology needs to be reassessed.

Eustace Cranch
June 13, 2013 12:11 pm

rgbatduke says:
June 13, 2013 at 11:42 am
“…one cannot directly and linearly connect absorbed heat with surface temperature changes — it can disappear into the deep ocean for a century or ten…”
Disappear? How? Will someone PLEASE explain the mechanism to me?

June 13, 2013 12:15 pm

One must always remember the mandate of the IPCC when reviewing the information they provide. They are not mandated to study all possible causes of climate change, only human-caused climate change:
“The Intergovernmental Panel on Climate Change (IPCC) was established by World Meteorological Organization and United Nations Environmental Programme (UNEP) in 1988 to assess scientific, technical, and socioeconomic information that is relevant in understanding human-induced climate change, its potential impacts, and options for mitigation and adaptation.”
Hence, the whole concept of open science within the IPCC is not relevant since they are working with a stated and clear agenda.

Luther Wu
June 13, 2013 12:20 pm

climatereason says:
June 13, 2013 at 11:29 am
Luther Wu
Byron will be turning in his grave
tonyb
________________
I’m sure you meant Kipling…

u.k(us)
June 13, 2013 12:38 pm

The secret to our success, such as it is, is the ability to adapt to changing conditions.
If conditions were unchanging, what would be the point of random mutations in DNA?

Gary Hladik
June 13, 2013 12:42 pm

Steven Mosher says (June 13, 2013 at 11:54 am): “Finally, there is no such thing as falsification. There is confirmation and disconfirmation. even Popper realized this in the end as did Feynman.”
Perhaps you could explain the difference between “falsification” and “disconfirmation”, or link a reference that does. Preferably at kindergarten level. 🙂

Snotrocket
June 13, 2013 12:49 pm

rgbatduke says: “One cannot generate an ensemble of independent and identically distributed models that have different code.”
Yep. I guess that must be like having an ‘average car’ and then telling children that that’s what all cars really look like…. Now that would be something to see, an average car. (Bearing in mind, an Edsel might well be in the mix somewhere).

Lars P.
June 13, 2013 12:49 pm

rgbatduke says:
June 13, 2013 at 7:20 am
Saying that we need to wait for a certain interval in order to conclude that “the models are wrong” is dangerous and incorrect for two reasons.
Thank you for your post, it is brilliant and should be elevated to a blog post itself. The idea you present is only logical, and indeed it is a shame it has not already been done.
It makes no sense to continue to use models which are so far away from reality. Only models which have been validated by real data should continue to be used.
That is what scientists do all the time… in science. They scrap models that have been invalidated and focus on those which give the best results; they do not continue to use an ensemble of models of which 95% go off into Nirvana and then draw a line somewhere between 95% Nirvana and 5% real.
Then real scientists might contemplate sitting down with those five winners and meditate upon what makes them winners — what makes them come out the closest to reality — and see if they could figure out ways of making them work even better. For example, if they are egregiously high and diverging from the empirical data, one might consider adding previously omitted physics, semi-empirical or heuristic corrections, or adjusting input parameters to improve the fit.
Thank you again!

RCSaumarez
June 13, 2013 1:00 pm

@rgbatduke
Brilliant comment (essay). Of course forming an ensemble of model outputs and saying that its mean is “significant” is arrant nonsense – it isn’t a proper sample or a hypothesis test and it certainly isn’t a prediction. All one can say, given the disparity of results, is that something is wrong with the models, as you point out. The thing that is so depressing is that people who should know better seem to believe it – probably because they don’t understand it.
On the subject of Monte-Carlo, some non-linear systems can give a very wide range of results which reflect the distribution of inputs that invoke the non-linearity. In my field, cardiac electrophysiology, this is particularly important and small changes in assumptions in a model will lead to unrealistic behaviour. Even forming simple statistics with these results is wrong for the reasons you so eloquently state. Widely diverging results should force attention on the non-linear behaviour that cause this divergence and a basic questioning of the assumptions.
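A deliberately minimal illustration of the sensitivity RCSaumarez describes, using the logistic map in its chaotic regime rather than a cardiac or climate model: two runs whose starting points differ in the ninth decimal place end up nowhere near each other, so simple statistics over an ensemble of such runs say little about any individual trajectory.

```python
def logistic_run(x0, r=3.9, steps=60):
    """Iterate the chaotic logistic map x -> r*x*(1-x) starting from x0."""
    x = x0
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

a = logistic_run(0.400000000)
b = logistic_run(0.400000001)           # perturbed in the ninth decimal place
print(a, b, abs(a - b))                 # the two runs have completely decorrelated

ensemble = [logistic_run(0.4 + 1e-9 * k) for k in range(100)]
print(sum(ensemble) / len(ensemble))    # an "ensemble mean" matching no single run
```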

Adam
June 13, 2013 1:11 pm

You can have hours of fun trying to estimate m with statistical significance when given a data set generated by
y = m*x + c
where
(y[n]-y[n-1]) ~ F(0,a)
x[n] – x[n-1] = L
where F is a non-stationary non-normal distribution. It is even more fun if you assume that F is normal and stationary even though it is not. But fun does not pay the bills.
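A simpler cousin of the game Adam describes, sketched with ordinary normal increments just to keep it short (his F is non-stationary and non-normal, which only makes matters worse): fit y = m*x + c to a pure random walk and compute the textbook slope standard error that assumes independent residuals. The naive test tends to declare a "significant" slope even though the generating process contains no deterministic trend at all.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 240
x = np.arange(n, dtype=float)
y = np.cumsum(rng.normal(0.0, 0.1, size=n))   # random walk: increments ~ N(0, 0.1), no true trend

# Ordinary least-squares fit of y = m*x + c
m, c = np.polyfit(x, y, 1)
resid = y - (m * x + c)

# Textbook standard error of the slope, which assumes iid residuals (here they are
# strongly autocorrelated, so this standard error is far too small)
se_m = np.sqrt(resid.var(ddof=2) / ((x - x.mean()) ** 2).sum())
print(f"m = {m:+.5f}, naive t-statistic = {m / se_m:.1f}")
# For most seeds |t| comes out far above 2, i.e. spuriously "significant".
```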

rgbatduke
June 13, 2013 1:17 pm

rgbduke. I took the liberty of sending an email to Judy Curry asking that she take a look at your comment and consider asking you to write a tightened up version to be used as a discussion topic at ClimateEtc. Please give this some thought and ping her at her home institution to the Southwest of you. (Okay, West Southwest.)
Thank you,
RayG

Sounds like work. Which is fine, but I’m actually up to my ears in work that I’m getting paid for at the moment. To do a “tightened up version” I would — properly speaking — need to read and understand the basic structure of each GCM as it is distinguished from all of the rest. This is not because I think there is anything in what I wrote above that is incorrect, but because due diligence for an actual publication is different from due diligence for a blog post, especially when one is getting ready to call 40 or 50 GCMs crap and the rest merely not yet correct while not quite making it to the level of being crap. Also, since I’m a computational physicist and moderately expert in Bayesian reasoning, statistics, and hypothesis testing I’d very likely want to grab the sources for some of the GCMs and run them myself to get a feel for their range of individual variance (likely to increase their crap rating still further).
That’s not only not a blog post, that’s a full time research job for a couple of years, supported by a grant big enough to fund access to supercomputing resources adequate to do the study properly. Otherwise it is a meta-study (like the blog post above) and a pain in the ass to defend properly, e.g. to the point where it might get past referees. In climate science, anyway — it might actually make it past the referees of a stats journal with only a bit of tweaking as the fundamental point is beyond contention — the average and variance badly violate the axioms of statistics, hence they always call it a “projection” (a meaningless term) instead of a prediction predicated upon sound statistical analysis where the variance could be used as the basis of falsification.
The amusing thing is just how easy it is to manipulate this snarl of models to obtain any “average” prediction you like. Suppose we have only two models — G and B. G predicts moderate to low warming, gets things like cloud cover and so on crudely right, it is “good” in the sense that it doesn’t obviously fail to agree with empirical data within some reasonable estimate of method error/data error combined. B predicts very high warming, melting of the ice pack in five years, 5 meter SLR in fifty years, and generally fails to come close to agreeing with contemporary observations, it is “bad” in the specific sense that it is already clearly falsified by any reasonable comparison with empirical data.
I, however, am a nefarious individual who has invested my life savings in carbon futures, wind generation, and banks that help third world countries launder the money they get from carbon taxes on first world countries while ensuring that those countries aren’t permitted to use the money to actually build power plants because the only ones that could meet their needs burn things like coal and oil.
So, I take model B, and I add a new dynamical term to it, one that averages out close to zero. I now have model B1 — son of B, gives slightly variant predictions (so they aren’t embarrassingly identical) but still, it predicts very high warming. I generate model B2 — brother to B1, it adds a different term, or computes the same general quantities (same physics) on a different grid. Again, different numbers for this “new” model, but nothing has really changed.
Initially, we had two models, and when we stupidly averaged their predictions we got a prediction that was much worse than G, much better than B, and where G was well within the plausible range, at the absolute edge of plausible. But now there are three bad models, B, B1, and B2, and G. Since all four models are equally weighted, independent of how good a job they do predicting the actual temperature and other climate features I have successfully shifted the mean over to strongly favor model B so that G is starting to look like an absolute outlier. Obviously, there is no real reason I have to start with only two “original” GCMs, and no reason I have to stop with only 3 irrelevant clones of B.
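The arithmetic of that shift, as a minimal sketch with purely hypothetical sensitivity numbers attached to the labels above (G, B, B1 and B2 are not real models):

```python
# Hypothetical equilibrium sensitivities, only to show how equal weighting behaves.
G = 1.8                          # the "good" moderate model
B = 4.5                          # the "bad" hot model
print((G + B) / 2)               # 3.15: the two-model mean

clones = [B - 0.1, B, B + 0.1]   # B, B1, B2: trivially perturbed copies of B
print((G + sum(clones)) / 4)     # 3.825: the unweighted mean drifts toward B
```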
Because I am truly nefarious and heavily invested in convincing the world that the dire predictions are true so that they buy more carbon futures, subsidize more windmills, and transfer still more money to third world money launderers, all I have to do is sell it. But that is easy! All of the models, G and B+ (and C+ and D+ if needed) are defensible in the sense that they are all based on the equations of physics at some point plus some dynamical (e.g. Markov) process. The simple majority of them favor extreme warming and SLR. There are always extreme weather events happening somewhere, and some of them are always “disastrous”. So I establish it as a well-known “fact” that physics itself — the one science that people generally trust — unambiguously predicts warming because a simple majority of all of these different GCMs agree, and point to any and all anecdotal evidence to support my claim. Since humans live only a pitiful 30 or 40 adult years where they might give a rat’s ass about things like this (and have memories consisting of nothing but anecdotes) it is easy to convince 80% of the population, including a lot of scientists who ought to know better, that it really, truly is going to warm due to our own production of CO_2 unless we all implement a huge number of inconvenient and expensive measures that — not at all coincidentally — line my personal pocket.
Did I mention that I’m (imaginarily) an oil company executive? Well, turns out that I am. After all, who makes the most money from the CAGW/CACC scare? Anything and everything that makes oil look “scarce” bumps the price of oil. Anything and everything that adds to the cost of oil, including special taxes and things that are supposed to decrease the utilization of oil, makes me my margin on an ever improving price basis in a market that not only isn’t elastic, it is inelastic and growing rapidly as the third world (tries to) develop. I can always sell all of my oil — I have to artificially limit supply as it is to maintain high profits and prolong the expected lifetime of my resources. Greenpeace can burn me in friggin’ effigy for all I care — the more they drive up oil costs the more money I make, which is all that matters. Besides, they all drive SUVs themselves to get out into the wilderness and burn lots of oil flying around lobbying “against” me. I make sure that I donate generously to groups that promote the entire climate research industry and lobby for aggressive action on climate change — after all, who actually gets grants to build biofuel plants, solar foundries, wind farms, and so on? Shell Oil. Exxon. BP. Of course. They/we advertise it on TV so people will know how pious the oil/energy industry is regarding global warming.
Not that I’m asserting that this is why there are so many GCMs and they are all equally weighted in the AR5 average — that’s the sort of thing that I’d literally have to go into not only the internals of but the lineage of across all the contributing GCMs to get a feel for whether or not it is conceivably true. It seems odd that there are so many — one would think that there is just one set of correct physics, after all, and one sign of a correctly done computation based on correct physics is that one gets the same answer within a meaningful range. I would think that four GCMs would be plenty — if GCMs worked at all. Or five. Not twenty, thirty, fifty (most run as ensembles themselves, presenting ensemble averages with huge variances in the first place). But then, Anthony just posted a link to a Science article that suggests that four distinct GCMs don’t agree within spitting distance in a toy problem, the sort of thing one would ordinarily do first to validate a new model and ensure that all of the models are indeed incorporating the right physics.
These four didn’t. Which means that at least three out of four GCMs tested are wrong! Significantly wrong. And who really doubts that the correct count is 4/4?
I’m actually not a conspiracy theorist. I think it is entirely possible to explain the proliferation of models on the fishtank evolutionary theory of government funded research. The entire science community is effectively a closed fishtank that produces no actual fish food. The government comes along and periodically sprinkles fish food on the surface, food tailored for various specific kinds of fish. One decade they just love guppies, so the tank is chock full of guppies (and the ubiquitous bottom feeders) but neons and swordtails suffer and starve. Another year betas (fighting fish) are favored — there’s a war on and we all need to be patriotic. Then guppies fall out of fashion and neons are fed and coddled while the guppies starve to death and are eaten by the betas and bottom dwellers. Suddenly there is a tankful of neons and even the algae-eaters and sharks are feeling the burn.
Well, we’ve been sprinkling climate research fish food grants on the tank for just about as long as there has been little to no warming. Generations of grad students have babysat early generation GCMs, gone out and gotten tenured positions and government research positions where in order to get tenure they have had to write their own GCMs. So they started with the GCMs they worked with in grad school (the only ones whose source code they had absolutely handy), looked over the physics, made what I have no doubt was a very sincere attempt to improve the model in some way, renamed it, got funding to run it, and voila — B1 was born of B, every four or five years, and then B1′ born of B1 as the first graduated students in turn graduated students of their own (who went on to get jobs) etc — compound “interest” growth without any need for conspiracy. And no doubt there is some movement along the G lines as well.
In a sane universe, this is half of the desired genetic optimization algorithm that leads to ever-improving theories and models. The other half is eliminating the culls on some sort of objective basis. This can only happen by fiat — grant officers that defund losers, period — or by limiting the food supply so that the only way to get continued grant support is to actually do better in competition for scarce grant resources.
This ecology has many exemplars in all of the sciences, but especially in medical research (the deepest, richest, least critical pockets the world has ever known) and certain branches of physics. In physics you see it when (for a decade) e.g. string theory is favored and graduate programs produce a generation of string theorists, but then string theory fails in its promise (for the moment) and supersymmetry picks up steam, and so on. This isn’t a bad ecology, as long as there is some measure of culling. In climate science, however, there has been anti-culling — the deliberate elimination of those that disagree with the party line of catastrophic warming, the preservation of GCMs that have failed and their inclusion on an equal basis in meaningless mass averages over whole families of tightly linked descendants where whole branches probably need to go away.
Who has time to mess with this? Who can afford it? I’m writing this instead of grading papers, but that happy time-out has to come to an end because I have to FINISH grading, meet with students for hours, and prepare and administer a final exam in introductory physics all before noon tomorrow. While doing six other things in my copious free moments. Ain’t got no grant money, boss, gotta work for a living…
rgb

Nick Stokes
June 13, 2013 1:24 pm

rgbatduke says: June 13, 2013 at 7:20 am
“One simply wishes to bitch-slap whoever it was that assembled the graph and ensure that they never work or publish in the field of science or statistics ever again.”

Well, who did assemble it? It says at the top “lordmoncktonfoundation.com”.

rgbatduke
June 13, 2013 1:33 pm

On the subject of Monte-Carlo, some non-linear systems can give a very wide range of results which reflect the distribution of inputs that invoke the non-linearity. In my field, cardiac electrophysiology, this is particularly important and small changes in assumptions in a model will lead to unrealistic behaviour. Even forming simple statistics with these results is wrong for the reasons you so eloquently state. Widely diverging results should force attention on the non-linear behaviour that cause this divergence and a basic questioning of the assumptions.
Eloquently said right back at you. Computational statistics in nonlinear modeling is a field where angels fear to tread. Indeed, nonlinear regression itself is one of the most difficult of statistical endeavors because there really aren’t any intrinsic limits on the complexity of nonlinear multivariate functions. In the example I gave before, the correct many electron wavefunction is a function that vanishes when any two electron coordinates (all of which can independently vary over all space) are the same, that vanishes systematically when any single electron coordinate becomes large compared to the size of the atom, that is integrable at the origin in the vicinity of the nucleus (in all coordinates separately or together), that satisfies a nonlinear partial differential equation in the electron-electron and electron nucleus interaction, that is fully antisymmetric, and that obeys the Pauli exclusion principle. One cannot realize this as the product of single electron wavefunctions, but that is pretty much all we know how to build or sanely represent as any sort of numerical or analytic function.
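In symbols (standard quantum mechanics, added here only to make the first of those constraints explicit): antisymmetry under exchange of any two electrons forces the wavefunction to vanish wherever their combined space-spin coordinates coincide,

$$\Psi(\dots, x_i, \dots, x_j, \dots) \;=\; -\,\Psi(\dots, x_j, \dots, x_i, \dots) \quad\Longrightarrow\quad \Psi = 0 \ \text{ whenever } x_i = x_j,$$

where each $x_k$ collects the spatial and spin coordinates of electron $k$.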
And it is still simple compared to climate science. At least one can prove the solutions exist — which one cannot do in the general case for Navier-Stokes equations.
Does climate science truly stand alone in failing to recognize unrealistic behavior when it bites it in the ass? Widely diverging results should indeed force attention on the non-linear behavior that causes the divergence and a basic questioning of the assumptions. Which is, still fairly quietly, actually happening, I think. The climate research community is starting to face up to the proposition that no matter how invested they are in GCM predictions, they aren’t working and the fiction that the AR collective reports are somehow “projective” let alone predictive is increasingly untenable.
Personally, I think that if they want to avoid pitchforks and torches or worse, congressional hearings, the community needs to work a bit harder and faster to fix this in AR5 and needs to swallow their pride and be the ones to announce to the media that perhaps the “catastrophe” they predicted ten years ago was a wee bit exaggerated. Yes, their credibility will take a well-deserved hit! Yes, this will elevate the lukewarmers to the status of well-earned greatness (it’s tough to hold out in the face of extensive peer disapproval and claims that you are a “denier” for doubting a scientific claim and suggesting that public policy is being ill advised by those with a vested interest in the outcome). Tough. But if they wait much longer they won’t even be able to pretend objectivity — it will smack of a cover-up, and given the amount of money that has been pissed away on the predicted/projected catastrophe, there will be hell to pay if congress decides it may have been actually lied to.
rgb