No significant warming for 17 years 4 months

By Christopher Monckton of Brenchley

As Anthony and others have pointed out, even the New York Times has at last been constrained to admit what Dr. Pachauri of the IPCC conceded some months ago: there has been no global warming statistically distinguishable from zero for getting on for two decades.

The NYT says the absence of warming arises because skeptics cherry-pick 1998, the year of the Great el Niño, as their starting point. However, as Anthony explained yesterday, the stasis goes back farther than that. He says we shall soon be approaching Dr. Ben Santer’s 17-year test: if there is no warming for 17 years, the models are wrong.

Usefully, the latest version of the Hadley Centre/Climatic Research Unit monthly global mean surface temperature anomaly series provides not only the anomalies themselves but also the 2 σ uncertainties.

Superimposing the temperature curve and its least-squares linear-regression trend on the region of statistical insignificance bounded by the means of the trends on the published uncertainties shows that there has been no statistically significant warming in the 17 years 4 months since January 1996:

[Figure: HadCRUT4 monthly global mean anomalies and least-squares linear trend, January 1996 to April 2013, superimposed on the 2 σ statistical-insignificance region]

On Dr. Santer’s 17-year test, then, the models may have failed. A rethink is needed.

The fact that an apparent warming rate equivalent to almost 0.9 Cº/century is statistically insignificant may seem surprising at first sight, but there are two reasons for it. First, the published uncertainties are substantial: approximately 0.15 Cº either side of the central estimate.

Secondly, one weakness of linear regression is that it is unduly influenced by outliers. Visibly, the Great el Niño of 1998 is one such outlier.

If 1998 were the only outlier, and particularly if it were the largest, going back to 1996 would be much the same as cherry-picking 1998 itself as the start date.

However, the magnitude of the 1998 positive outlier is countervailed by that of the 1996/7 la Niña. Also, there is a still more substantial positive outlier in the shape of the 2007 el Niño, against which the la Niña of 2008 countervails.

In passing, note that the cooling from January 2007 to January 2008 is the fastest January-to-January cooling in the HadCRUT4 record going back to 1850.

Bearing these considerations in mind, going back to January 1996 is a fair test for statistical significance. And, as the graph shows, there has been no warming that we can statistically distinguish from zero throughout that period, for even the rightmost endpoint of the regression trend-line falls (albeit barely) within the region of statistical insignificance.
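
As a rough cross-check of the sort of calculation shown in the graph, here is a minimal sketch in Python. It assumes a hypothetical two-column file of decimal years and monthly anomalies (the file name and layout are illustrative, not the official HadCRUT4 format), fits an ordinary least-squares trend from January 1996, and reports a naive 2 σ interval on the slope; no allowance is made for autocorrelation, which would widen the interval further.

```python
# Minimal sketch: least-squares trend on monthly anomalies since Jan 1996,
# with a naive 2-sigma interval on the slope. File name and column layout
# are assumptions for illustration only.
import numpy as np

data = np.loadtxt("hadcrut4_monthly.csv", delimiter=",")  # col 0: decimal year, col 1: anomaly (C)
year, anom = data[:, 0], data[:, 1]
mask = year >= 1996.0
t, y = year[mask], anom[mask]

# Ordinary least squares: y = a + b*t
A = np.vstack([np.ones_like(t), t]).T
(a, b), *_ = np.linalg.lstsq(A, y, rcond=None)

# Standard error of the slope (ignores autocorrelation, which would
# make the interval wider still)
n = len(t)
resid = y - (a + b * t)
s2 = np.sum(resid**2) / (n - 2)
se_b = np.sqrt(s2 / np.sum((t - t.mean())**2))

print(f"Trend: {b*100:.2f} +/- {2*se_b*100:.2f} C/century (2 sigma)")
print("Statistically indistinguishable from zero" if abs(b) < 2 * se_b
      else "Statistically distinguishable from zero")
```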

Be that as it may, one should beware of focusing the debate solely on how many years and months have passed without significant global warming. Another strong el Niño could – at least temporarily – bring the long period without warming to an end. If so, the cry-babies will screech that catastrophic global warming has resumed, the models were right all along, etc., etc.

It is better to focus on the ever-widening discrepancy between predicted and observed warming rates. The IPCC’s forthcoming Fifth Assessment Report backcasts the interval of 34 models’ global warming projections to 2005, since when the world should have been warming at a rate equivalent to 2.33 Cº/century. Instead, it has been cooling at a rate equivalent to a statistically-insignificant 0.87 Cº/century:

[Figure: 34 models' projected warming trend since January 2005 (equivalent to 2.33 Cº/century) compared with the observed HadCRUT4 trend (equivalent to −0.87 Cº/century)]

The variance between prediction and observation over the 100 months from January 2005 to April 2013 is thus equivalent to 3.2 Cº/century.
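
For clarity, the 3.2 figure is simply the gap between the projected rate and the observed (negative) rate quoted above; a trivial sketch, restating rather than recomputing those values:

```python
# Gap between projected and observed rates, in C/century, using the
# figures quoted in the post (values restated here, not recomputed).
projected = 2.33   # model-mean warming rate since 2005, C/century
observed = -0.87   # observed, statistically insignificant rate, C/century
print(f"Discrepancy: {projected - observed:.2f} C/century")  # -> 3.20
```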

The correlation coefficient is low, the period of record is short, and I have not yet obtained the monthly projected-anomaly data from the modelers to allow a proper p-value comparison.

Yet it is becoming difficult to suggest with a straight face that the models’ projections are healthily on track.

From now on, I propose to publish a monthly index of the variance between the IPCC’s predicted global warming and the thermometers’ measurements. That variance may well inexorably widen over time.

In any event, the index will limit the scope for false claims that the world continues to warm at an unprecedented and dangerous rate.
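
A minimal sketch of how such a monthly index might be computed is given below. The function name, the assumption that observed monthly anomalies since January 2005 are already in hand, and the use of the 2.33 Cº/century projected rate are illustrative choices, not a description of any official product.

```python
# Sketch of a monthly "prediction vs observation" index: difference between
# the projected trend rate and the observed least-squares trend rate over
# the months elapsed since Jan 2005.
import numpy as np

def discrepancy_index(observed_anomalies, projected_rate_per_century=2.33):
    """observed_anomalies: monthly anomalies (C) from Jan 2005 to the latest month."""
    months = np.arange(len(observed_anomalies))
    years = months / 12.0
    observed_rate = np.polyfit(years, observed_anomalies, 1)[0] * 100.0  # C/century
    return projected_rate_per_century - observed_rate

# Example with made-up numbers (a flat series plus noise):
rng = np.random.default_rng(0)
fake = 0.4 + 0.05 * rng.standard_normal(100)
print(f"Index: {discrepancy_index(fake):.2f} C/century")
```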

UPDATE: Lucia’s Blackboard has a detailed essay analyzing the recent trend, written by SteveF, using an improved index accounting for ENSO, volcanic aerosols, and solar cycles. He concludes the best estimate rate of warming from 1997 to 2012 is less than 1/3 the rate of warming from 1979 to 1996. Also, the original version of this story incorrectly referred to the Washington Post, when it was actually the New York Times article by Justin Gillis. That reference has been corrected. – Anthony

RDG

Thank you.

Harold Ambler

1. Time to point out again that when the warmists convinced the world to use anomaly graphs in considering the climate system, they more or less won the game. As Essex and McKitrick (and others) point out, temperature, graphed in Kelvins, has been pretty close to flat for the past thousand years or so. The system displays remarkable homeostasis, and almost no lay people are aware of this simple fact.
2. I would like to make a documentary in which man-on-the-street interviews are conducted where the interviewee gets to draw absolute temps over the last century, last millennium, etc. The exaggerated sense of what has been happening would be hilarious, and kind of sad, to see.
3. The intellectual knots that the warmists have already tied themselves into explaining away the last decade and a half of global temps have been ugly. And, as most here know, I am betting that the ugliness gets uglier for the next decade and a half — at least.
4. Don’t sell your coat.

AlecM

There can be no CO2-GW, A or otherwise. And even if there were, there could be no positive feedback. CO2 is the working fluid in the control system maintaining OLR = SW thermalised.
This is imposed by irreversible thermodynamics – the increased radiation entropy from converting 5500 K SW to 255 K LW. The clouds adapt to control atmosphere entropy production to a minimum.
Basic science was forgotten by Hansen when the first GISS modelling paper wrongly assumed CO2 blocked 7 – 14 micron OLR and LR warming was the GHE: 1981_Hansen_etal.pdf from NASA. They got funding and fame for 32 years of a scientific scam.

ImranCan

Very nice post …. I made some similar remarks in comments on a John Abrahams / Dana Nuticelli article in the Guardian yesterday – just asking how climate change effects could be “accelerating” when temperatures have not been going up ….. and had my comments repeatedly censored. I woke up this morning to find I am now banned as a commenter. Simply a very sad indictment of the inability of warmist ‘scientists’ to tolerate any form of critique or basic obvious questioning.

Thomas

Note that “No warming” and “no statistically significant warming” are not the same thing. The most reasonable interpretation of Santer’s statement is that there has to be no measured warming for 17 years, and as is clear from the diagram there has been warming, only not large enough to be statistically significant. The uncertainty is large enough that the data are also consistent with a trend of 0.2 K/decade, i.e., in line with IPCC predictions.

Jean Meeus

Yes indeed. A few days ago, the Belgian newspaper ‘Metro’, too, wrote that the temperatures are accelerating dangerously. Well heavens…

MattN

I am 100% positive I remember Gavin saying 10 years somewhere on ReallywrongClimate. No warming for 10 years, the models were wrong….

HaroldW

Correction: The essay at Lucia’s Blackboard was written by SteveF, not by Lucia.

dwr54

Re Santer et al. (2011). Is it not the case that this paper explicitly refers to lower troposphere (i.e. satellite) data and that it also explicitly refers to the “observational” data, rather than statistical significance levels?
In other words, all Santer et al. 2011 stated was that we should see a warming trend in the raw satellite data over a period of 17 years. At present that is what we do see in both UAH and RSS (much more so in UAH).
I don’t immediately see what Santer et al. 2011 has to do with statistical significance in a surface station data set such as HadCRUT4.

Steven

I keep seeing these graphs with linear progressions. Seriously. I mean seriously. Since when is weather/climate a linear behaviorist? The equations that attempt to map/predict magnetic fields of the earth are complex Fourier series. Is someone, somewhere suggesting that the magnetic field is more complex than the climate envelope about the earth? I realize this is a short timescale and things may look linear but they are not. Not even close. Like I said in the beginning, the great climate hoax is nothing more than what I just called it. I am glad someone has the tolerance to deal with these idiots. I certainly don’t.

Colin Porter

So how did the climate scientists and the news media including the NYT report the 1998 El Nino? Apocalypse now, I would suggest! So even if the start date was cherry picked, it would be fair game.

Thomas said:
“the data are also…..in line with IPCC predictions.”
Ha, ha, ha, ha!
And the sky is green and the grass is blue…..

Jostemikk

No statistically significant warming in 18 years and 5 months:
http://woodfortrees.org/plot/rss/from:1995/plot/rss/from:1995/trend
#Time series (rss) from 1979 to 2013.42
#Selected data from 1995
#Least squares trend line; slope = 0.00365171 per year
No warming in 16 years and 5 months:
http://woodfortrees.org/plot/rss/from:1997/plot/rss/from:1997/trend
#Time series (rss) from 1979 to 2013.42
#Selected data from 1997
#Least squares trend line; slope = -0.000798188 per year
Oh lord…
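
For anyone who wants to reproduce slope figures like those above without woodfortrees, here is a minimal sketch; the file name and two-column layout (decimal year, anomaly) are assumed for illustration, and the point is simply how the least-squares slope shifts with the chosen start year.

```python
# Sketch: how the least-squares slope of a monthly series depends on the
# chosen start year (the point the woodfortrees figures above illustrate).
# The file name and two-column layout (decimal year, anomaly) are assumed.
import numpy as np

year, anom = np.loadtxt("rss_monthly.txt", unpack=True)

for start in (1995, 1996, 1997, 1998):
    sel = year >= start
    slope = np.polyfit(year[sel], anom[sel], 1)[0]  # C per year
    print(f"From {start}: {slope:+.5f} C/yr ({slope*10:+.3f} C/decade)")
```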

David L. Hagen

SteveF wrote “Estimating the Underlying Trend in Recent Warming” (12 June, 2013 (20:10), posted at Lucia’s The Blackboard):

The slope since 1997 is less than 1/6 that from 1979 to 1996. . . .
Warming has not stopped, but it has slowed considerably. . . .
the influence of the ENI on global temperatures (as calculated by the global regression analysis) is just slightly more than half the influence found for the tropics alone (30S to 30N): 0.1099 +/- 0.0118 global versus 0.1959 +/- 0.016 tropics. . . .
The analysis indicates that global temperatures were significantly depressed between ~1964 and ~1999 compared to what they would have been in the absence of major volcanoes. . . .
the model does not consider the influence of (slower) heat transfer between the surface and deeper ocean. In other words, the calculated impact of solar and volcanic forcings would be larger (implying somewhat higher climate sensitivity) if a better model of heat uptake/release to/from the ocean were used.

It looks like SteveF provides a major improvement in understanding and quantifying the “shorter” term impacts of solar, volcanoes and ocean oscillations (ENSO) and their related lags. Now I hope he can get it formally published.

This post is preaching to the choir (and, with all due respect for Christopher Monckton’s energy in the climate debates, it is by a scientific dilettante, however well-informed and clearly intelligent, to an audience of laypersons–what the failure of climate science, in the present incompetent consensus, has brought us all to). (And I am not one of the many who has a pet theory, and claims to have all the answers–I merely kept my eyes and mind open for clear, definitive evidence of what is really wrong, and found it, as some portion of the readers here well know. I am a professional scientist, a physicist, in the older academic tradition, that knew how to Verify.)
ImranCan’s comment above confirms what so many should already know: The Insane Left (my term for them) only dared to alarm the world with this monumental fraud because they fervently want to believe a benevolent universe (not God, heaven forbid, but only a universe in which “you create your own reality”–one of the great lies of the modern world) has put into their hands an authoritative instrument through which their similarly-fixated political ideology could take over… the western world, at least. The “science” has ALWAYS been “settled”, period, because they NEED it to be, to hold together their fundamentally creaky coalition of peoples bitter, for any reason, against “the old order”. They want a revolution, one way or another. And this is war, one way or another. The best hope for mankind, and especially the western world, is that somehow a growing number of those who have been suborned to the Insane Left will come to their senses, let their innate intelligence come out, and declare their independence and opposition to the would-be tyrants.

Radical Rodent

Perhaps off-topic, but I am having serious thoughts about why we constantly refer to the “greenhouse effect”. To use a greenhouse is to use a pretty poor analogy; the Earth is not surrounded by a hard shell of “greenhouse gasses”, with air movements and other causes of potential cooling inside strictly regulated. It could be that we are not only barking up the wrong tree, but we are in the wrong garden, in the wrong country – and it is not even a tree!
About 99% of the Earth’s atmosphere (i.e. 20.9% oxygen and 78% nitrogen) is not composed of “greenhouse gasses.” Why not test the idea: find a greenhouse, and remove 99% of the glass, so as to leave a thin web of glass (let us assume this is possible). I doubt you will be able to measure any difference between the “inside” of the greenhouse and outside; however, to “improve” its effectiveness, add 0.05% more glass. Stand back, and watch in amazement as the temperatures soar!
You don’t think someone is trying to sell us a load of snake oil, do you?

M Courtney

ImranCan says at June 13, 2013 at 4:05 am

Very nice post …. I made some similar remarks in comments on a John Abrahams / Dana Nuticelli article in the Guardian yesterday – just asking how climate change effects could be “accelerating” when temperatures have not been going up ….. and had my comments repeatedly censored. I woke up this morning to find I am now banned as a commenter. Simply a very sad indictment of the inability of warmist ‘scientists’ to tolerate any form of critique or basic obvious questioning.

I also linked to the MET office and showed that temperature rises are not accelerating. In addition I pointed out the theoretical basis for the acceleration was challenged empirically by the lack of the Tropical Hotspot (with a link to Jo Nova).
So I also am now banned from posting at the Guardian. That is, I am subject to “pre-moderation”.
The worst impact of creating this echo-chamber is the decline in the Guardian’s readership. The number of comments on their environment blogs is declining rapidly.
It is a shame that a lively, left-wing forum has decided to commit suicide by out-sourcing moderation to alleged scientists who can’t defend their position.
How long until the advertisers realise?

John West

@ MattN
http://www.realclimate.org/index.php/archives/2007/12/a-barrier-to-understanding/
Norman Page asks:
“what year would you reconsider the CO2 – Warming paradigm if the CRU Global annual mean temperature is cooler than 2005 – 2009…?”
Schmidt answers:
“You need a greater than a decade non-trend that is significantly different from projections. [0.2 – 0.3 deg/decade]”

Frank K.

“So I also am now banned from posting at the Guardian.”
Welcome to the newspeak Orwellian media complex, Winston.
Fortunately, we are still free enough in this world to tell the Guardian (and, most importantly, their $ponsor$) to stuff it…

ConfusedPhoton

How long before the 17 year test becomes a 25 year test? – just a matter of homogenising!

Mark Hladik

If memory serves, it seems that the Meteorological community has used the ‘thirty-year’ time frame for standardizing its records, in order to classify climate and climate zones. I suspect that meteorologists might soon suggest that a ‘fifty-year’ or even a ‘sixty-year’ time frame become the standard reference frame.
That would be one way to get around Gavin’s “… seventeen year …” test.
Or, we could just adjust the data some more, to make them fit the models … … … ………

eyesonu

At first there were a few looking for the truth. Then there were more. Soon there were many. Next there was an army marching for the truth. Now the truth goes marching on!
Oh, it’s that army of ones again. They have liberated the truth.
sorry, but I don’t know how to put musical notes in a blog post 😉

Jimbo

What I want to know from any warmists is: what would falsify the climate model projections as used by the IPCC? For example, 20 years of no warming?

pyromancer76

M Courtney at 5:25 a.m. says:
“So I also am now banned from posting at the Guardian. That is, I am subject to “pre-moderation”.
The worst impact of creating this echo-chamber is the decline in the Guardian’s readership. The number of comments on their environment blogs is declining rapidly.
It is a shame that a lively, left-wing forum has decided to commit suicide by out-sourcing moderation to alleged scientists who can’t defend their position.
How long until the advertisers realise?”
Would that these former institutions of the Fourth Estate were subject to the forces of the market. Many would have failed already. However, they are being funded — and their employees (formerly investigative journalists) fully paid and supported — as the mouthpiece of elites who are acting similarly to the Robber Barons of the U.S. 19th Century. At least the Robber Barons through their greed also brought productivity. Not so much these elites. Who are they? Fabulously wealthy Islamists on our oil money; brilliant financial scam artists like financiers whether “left or right” (debt posing as equity); IT corporations who (corps are persons) destroy competition; all those corporations that also hate “the market” (immigration “reform” for cheap labor — that will take care of those independent Americans); and the secular religionists. What a motley group.
They will eventually fail. We must see that they do not take the rest of us along with them. Thank you Anthony and crew for your valiant and courageous efforts.

Scott Scarborough

It is meaningless to say that there is warming, just not statistically significant warming. Someone who says that does not know what statistical significance is.

The one time a “Cherry-picking” accusation fails is when you use the present day as an anchor & look back into the past.
The observed temperature differential just doesn’t meet any definition of “catastrophic,” “runaway,” “emergency,” “critical,” or any synonym you can pull out of the (unwarming) air to justify the multitude of draconian measures ALREADY IN PLACE that curtail world economies or subsidize failing alternative energy attempts!!!

Richard M

I like to use RSS because it is not contaminated with UHI, extrapolation and infilling. As indicated above, the trend has been perfectly flat for 16.5 years (Dec. 1996). At some point in the near future (given the current cooling, that could be later this year) the starting point could move back to the start of 1995. That would mean around 19 years with a zero trend.
I like to use the following graph because it demonstrates a change from the warming regime of the PDO to the cooling regime. It also shows how you could have many of the warmest years despite the lack of warming over the entire interval.
http://www.woodfortrees.org/plot/rss/from:1996.9/to/plot/rss/from:1996.9/to/trend/plot/rss/from:1996.9/to:2005/trend/plot/rss/from:2005/to/trend

Rob Dawg

How long before the warmists make 1998 go away like they did with the MWP? Funny how 1998 was the shot across the bow warning when it was on the right side of the graph but an inconvenient truth on the left.

DirkH

M Courtney says:
June 13, 2013 at 5:25 am
“It is a shame that a lively, left-wing forum has decided to commit suicide by out-sourcing moderation to alleged scientists who can’t defend their position.”
Guardian, Spiegel and NYT are the modern versions of the Pravda for the West. I read them to know what the 5 minute hate of the day is.

Looks like Lucia’s website is overloaded. I can get through on the main page but I can’t open SteveF’s post without getting an error message. I tried to leave him the following comment:
SteveF: As far as I can tell, your model assumes a linear relationship between your ENSO index and global surface temperatures.
Trenberth et al (2002)…
http://www.cgd.ucar.edu/cas/papers/2000JD000298.pdf
…cautioned against this. They wrote, “Although it is possible to use regression to eliminate the linear portion of the global mean temperature signal associated with ENSO, the processes that contribute regionally to the global mean differ considerably, and the linear approach likely leaves an ENSO residual.”
Compo and Sardeshmukh (2010)…
http://journals.ametsoc.org/doi/abs/10.1175/2009JCLI2735.1?journalCode=clim
…note that it should not be treated as noise that can be removed. Their abstract begins: “An important question in assessing twentieth-century climate change is to what extent have ENSO-related variations contributed to the observed trends. Isolating such contributions is challenging for several reasons, including ambiguities arising from how ENSO itself is defined. In particular, defining ENSO in terms of a single index and ENSO-related variations in terms of regressions on that index, as done in many previous studies, can lead to wrong conclusions. This paper argues that ENSO is best viewed not as a number but as an evolving dynamical process for this purpose…”
I’ve been illustrating and discussing for a couple of years that the sea surface temperatures of the East Pacific (90S-90N, 180-80W) show that it is the only portion of the global oceans that responds linearly to ENSO, but that the sea surface temperatures there haven’t warmed in 31 years:
http://oi47.tinypic.com/hv8lcx.jpg
On the other hand, the sea surface temperature anomalies of the Atlantic, Indian and West Pacific (90S-90N, 80W-180) warm in El Niño-induced steps (the result of leftover warm water from the El Niños) that cannot be accounted for with your model:
http://oi49.tinypic.com/29le06e.jpg
A more detailed, but introductory level, explanation of the processes that cause those shifts can be found here [42MB .pdf]:
http://bobtisdale.files.wordpress.com/2013/01/the-manmade-global-warming-challenge.pdf
And what fuels the El Ninos? Sunlight. Even Trenberth et al (2002), linked above, acknowledges that fact. They write, “The negative feedback between SST and surface fluxes can be interpreted as showing the importance of the discharge of heat during El Niño events and of the recharge of heat during La Niña events. Relatively clear skies in the central and eastern tropical Pacific allow solar radiation to enter the ocean, apparently offsetting the below normal SSTs, but the heat is carried away by Ekman drift, ocean currents, and adjustments through ocean Rossby and Kelvin waves, and the heat is stored in the western Pacific tropics. This is not simply a rearrangement of the ocean heat, but also a restoration of heat in the ocean.”
In other words, ENSO acts as a chaotic recharge-discharge oscillator, where the discharge events (El Niños) are occasionally capable of raising global temperatures, where they remain relatively stable for periods of a decade or longer.
In summary, you’re treating ENSO as noise, while data indicate that it is responsible for much of the warming over the past 30 years.
Regards
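
For readers who want to see concretely what the linear-regression approach being debated looks like, here is a minimal sketch. The file names, column layouts, and the fixed 3-month lag are illustrative assumptions only; it regresses global anomalies on an ENSO index, removes the linear portion, and reports the residual trend, which is exactly the quantity Trenberth et al. caution may still contain an ENSO residual.

```python
# Sketch of the linear-regression approach discussed above: regress global
# temperature anomalies on a lagged ENSO index and examine what is left.
# Series names, the lag, and the file layouts are assumed for illustration.
import numpy as np

year, temp = np.loadtxt("global_temp_monthly.txt", unpack=True)
_, enso = np.loadtxt("enso_index_monthly.txt", unpack=True)

lag = 3  # months; a common choice, not a fitted value here
t = temp[lag:]
e = enso[:-lag]
yr = year[lag:]

# Fit temp = a + b*ENSO and remove the linear ENSO portion
b, a = np.polyfit(e, t, 1)
residual = t - (a + b * e)

# Trend of the ENSO-adjusted residual -- Trenberth et al. caution that
# this linear removal likely leaves an ENSO residual behind.
trend = np.polyfit(yr, residual, 1)[0]
print(f"ENSO regression coefficient: {b:.3f} C per index unit")
print(f"Residual trend: {trend*10:+.3f} C/decade")
```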

Brian

I wonder if ACGW advocates feel a little like advocates of the Iraq invasion felt when no WMDs were discovered? Just a random thought.

I got through. There must’ve been a temporary mad rush to Lucia’s Blackboard for a few minutes.

StephenP

Rather off-topic, but there are 4 questions that I would like the answer to:
1. We are told the concentration of CO2 in the atmosphere is 0.039%, but what is the concentration of CO2 at different heights above the earth’s surface? As CO2 is ‘heavier than air’ one would expect it to be at higher percentages near the earth’s surface.
2. Do the CO2 molecules rise as they absorb heat during the day from the sun? And how far?
3. Do the CO2 molecules fall at night when they no longer get any heat input from the sun?
4. When a CO2 molecule is heated, does it re-radiate equally in all directions, assuming the surroundings are cooler, or does it radiate heat in proportion to the difference in temperature in any particular direction?
Any comments gratefully received.

JabbaTheCat

Lucia’s site not currently available…

jonny old boy

Human beings caused the largest extinction rate in the planet’s history (the Pleistocene extinctions). These extinctions came at too different a time and at too different a rate to be linked to the climate changes, and it’s clear that wild climate swings over (relatively) short periods of time did pretty much nothing to the earth’s species on any significant scale. It’s exactly the same now. We are still causing extinctions at a record rate, simply by being here, not by “altering” the climate, and even if we did (or are) altering the climate, then this effect on the planet is insignificant next to the simple fact that we are just “here”… So-called “climate scientists” are often no such thing; they do not understand the basics of pre-historic climate change and the parameters involved. They completely ignore the most important evidence. Large animals in Africa alone survived the P.E. period simply by having evolved alongside humans; as soon as humans left Africa at a very fast rate, they pretty much wiped out the megafauna everywhere else…. It is this pattern of human behaviour that is statistically significant, not fractions of a degree Celsius. I wish alarmists would actually study a bit more!

Goldie

I suppose we could always wait until 2018. By which time the World will be bankrupt and it won’t matter. Alternatively we could start applying the precautionary principle the other way round. How about: A clear lack of correlation between hypothesis and reality should preclude precipitate action beyond that which is prudent and can be shown to have a benefit.

Ken G

First of all, skeptics didn’t pick 1998, the NOAA did in the 2008 State of the Climate report.
That report says, “Near-zero and even negative trends are common for intervals of a decade or less in the simulations, due to the model’s internal climate variability. The simulations rule out (at the 95% level) zero trends for intervals of 15 yr or more, suggesting that an observed absence of warming of this duration is needed to create a discrepancy with the expected present-day warming rate.”
It does not say “The simulations rule out (at the 95% level) zero trends for intervals of 15 years or more, except intervals starting in 1998…”
Second, I don’t know why anyone is bending over backwards to try to find statistical significance (or lack thereof) in a goalpost changing 17 year trend when we already have an unambiguous test for the models straight from the NOAA. Why bother with ever changing warmist arguments? Just throw the above at them and let them argue with the NOAA over it.

The problem is that models of catastrophic climate change are being used by futurists and tech companies and rent seekers generally to argue that our written constitutions need to be jettisoned and new governance structures created that rely more on Big Data and supercomputers to deal with the global warming crisis. I wish I was making this up, but I wrote about the political and social economy, and about using education globally to get there, today. Based primarily on Marina Gorbis’ April 2013 book The Nature of The Future and Willis Harman’s 1988 Global Mind Change.
You can’t let actual temps get in the way of such a transformation. Do you have any idea how many well-connected people have decided we are all on the menu? Existing merely to finance their future plans and to do as we are told.

RichardLH

This analysis of the UAH data (and the implied future that it provides) says that the trends (short term, < 60 years anyway) may all be cyclic – not a linear trend of any form during that period.
http://s1291.photobucket.com/user/RichardLH/media/uahtrendsinflectionfuture_zps7451ccf9.png.html
That could turn in time into a 'Short Term Climate Predictor' 🙂

M Courtney

The Guardian is left-wing. That won’t be popular with people who aren’t.
But it wasn’t dumbed down. It wasn’t anti-democratic. It wasn’t just hate.
The Guardian was part of the civil society in which the political awareness that a democracy needs develops.
So was the Telegraph from the other side.
But the Guardian has abandoned debate. That is the death of the Guardian. A loss which will be a weakening of the UK’s and the entire West’s political life.

SanityP

Interesting, and by the way:
On March 13, WUWT announced that Climategate 3.0 had occurred.
What happened to it?
Everybody just ignoring it ever happened?

Because of the thermal inertia of the oceans, and the fact that we should really be measuring the enthalpy of the system, the best metric for temperature is the SST data, which varies much more closely with enthalpy than land temperatures. The NOAA data ftp://ftp.ncdc.noaa.gov/pub/data/anomalies/annual.ocean.90S.90N.df_1901-2000mean.
data shows no net warming since 1997 and also shows that the warming trend peaked in about 2003 and that the earth has been in a slight cooling trend since then. This trend will likely steepen and last for at least 20 years, and perhaps for hundreds of years beyond that if, as seems likely, the warming peak represents a peak in both the 60 and 1000 year solar cycles.
For a discussion and detailed forecast see
http://climatesense-norpag.blogspot.com/2013/04/global-cooling-methods-and-testable.html

Thomas

StephenP: The CO2 concentration is constant throughout the atmosphere. Winds ensure that the atmosphere is stirred enough that the small density difference doesn’t matter. Nor does absorption or emission of photons cause the molecules to move up or down. CO2 molecules radiate equally in all directions.
Scott, “It is meaningless to say that there is warming, just not statistically significant warming. Someone who says that does not know what statistical significance is.”
I’d say that, on the contrary, anyone who thinks a measured trend that is larger than zero but does not quite reach statistical significance is the same as no trend does not know enough about statistics. Compare these three measurements: 0.9+-1, 0+-1 and -0.9+-1. None of them is statistically different from zero, but the first one allows values as high as 1.9 while the last one allows values as low as -1.9.
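A trivial sketch of the point being made, treating the ±1 figures as the half-widths of the confidence intervals (as in the comment):

```python
# The three hypothetical measurements from the comment above, each with a
# +/-1 half-width treated as the full confidence interval.
for central, half_width in [(0.9, 1.0), (0.0, 1.0), (-0.9, 1.0)]:
    lo, hi = central - half_width, central + half_width
    significant = lo > 0 or hi < 0
    print(f"{central:+.1f} +/- {half_width}: interval [{lo:+.1f}, {hi:+.1f}],"
          f" statistically significant: {significant}")
```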

RichardLH

Thomas says:
June 13, 2013 at 6:56 am
“I’d say that, on the contrary, anyone who thinks a measured trend that is larger than zero but does not quite reach statistical significance is the same as no trend does not know enough about statistics.”
And without sufficient knowledge as to what the future actually provides (or an accurate model :-)), drawing any conclusions based on which end of any distribution the values may currently lie is just a glorified guess.
If you were to draw conclusions about the consistency with which the data has moved towards a limit, you would have a better statistical idea of what the data is really saying.

rgbatduke

Saying that we need to wait for a certain interval in order to conclude that “the models are wrong” is dangerous and incorrect for two reasons. First — and this is a point that is stunningly ignored — there are a lot of different models out there, all supposedly built on top of physics, and yet no two of them give anywhere near the same results!
This is reflected in the graphs Monckton publishes above, where the AR5 trend line is the average over all of these models and in spite of the number of contributors the variance of the models is huge. It is also clearly evident if one publishes a “spaghetti graph” of the individual model projections (as Roy Spencer recently did in another thread) — it looks like the frayed end of a rope, not like a coherent spread around some physics supported result.
Note the implicit swindle in this graph — by forming a mean and standard deviation over model projections and then using the mean as a “most likely” projection and the variance as representative of the range of the error, one is treating the differences between the models as if they are uncorrelated random variates causing deviation around a true mean!
Say what?
This is such a horrendous abuse of statistics that it is difficult to know how to begin to address it. One simply wishes to bitch-slap whoever it was that assembled the graph and ensure that they never work or publish in the field of science or statistics ever again. One cannot generate an ensemble of independent and identically distributed models that have different code. One might, possibly, generate a single model that generates an ensemble of predictions by using uniform deviates (random numbers) to seed “noise” (representing uncertainty) in the inputs.
What I’m trying to say is that the variance and mean of the “ensemble” of models is completely meaningless, statistically because the inputs do not possess the most basic properties required for a meaningful interpretation. They are not independent, their differences are not based on a random distribution of errors, there is no reason whatsoever to believe that the errors or differences are unbiased (given that the only way humans can generate unbiased anything is through the use of e.g. dice or other objectively random instruments).
So why buy into this nonsense by doing linear fits to a function — global temperature — that has never in its entire history been linear, although of course it has always been approximately smooth so one can always do a Taylor series expansion in some sufficiently small interval and get a linear term that — by the nature of Taylor series fits to nonlinear functions — is guaranteed to fail if extrapolated as higher order nonlinear terms kick in and ultimately dominate? Why even pay lip service to the notion that R^2 or p for a linear fit, or for a Kolmogorov-Smirnov comparison of the real temperature record and the extrapolated model prediction, has some meaning? It has none.
Let me repeat this. It has no meaning! It is indefensible within the theory and practice of statistical analysis. You might as well use a ouija board as the basis of claims about the future climate history as the ensemble average of different computational physical models that do not differ by truly random variations and are subject to all sorts of omitted variable, selected variable, implementation, and initialization bias. The board might give you the right answer, might not, but good luck justifying the answer it gives on some sort of rational basis.
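
To make concrete what is being criticised, here is a minimal sketch of the procedure itself, with made-up projection numbers; it shows only the mechanics of taking a mean and standard deviation across structurally different models, which is the step the comment argues has no statistical meaning.

```python
# Sketch of the practice criticised above: take N structurally different
# model projections and report their mean and standard deviation as if
# they were independent samples from one distribution. Numbers are made up.
import numpy as np

# Hypothetical end-of-period warming projections from different models (C)
projections = np.array([1.1, 1.8, 2.4, 2.9, 3.6, 4.2])

mean = projections.mean()
sd = projections.std(ddof=1)
print(f"'Ensemble' mean: {mean:.2f} C, sd: {sd:.2f} C")
# The point of the comment: these statistics describe the spread of the
# model population, not the uncertainty of a physical prediction, because
# the models are not independent, identically distributed draws.
```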
Let’s invert this process and actually apply statistical analysis to the distribution of model results Re: the claim that they all correctly implement well-known physics. For example, if I attempt to do an a priori computation of the quantum structure of, say, a carbon atom, I might begin by solving a single electron model, treating the electron-electron interaction using the probability distribution from the single electron model to generate a spherically symmetric “density” of electrons around the nucleus, and then performing a self-consistent field theory iteration (resolving the single electron model for the new potential) until it converges. (This is known as the Hartree approximation.)
Somebody else could say “Wait, this ignores the Pauli exclusion principle” and the requirement that the electron wavefunction be fully antisymmetric. One could then make the (still single electron) model more complicated and construct a Slater determinant to use as a fully antisymmetric representation of the electron wavefunctions, generate the density, perform the self-consistent field computation to convergence. (This is Hartree-Fock.)
A third party could then note that this still underestimates what is called the “correlation energy” of the system, because treating the electron cloud as a continuous distribution through when electrons move ignores the fact that individual electrons strongly repel and hence do not like to get near one another. Both of the former approaches underestimate the size of the electron hole, and hence they make the atom “too small” and “too tightly bound”. A variety of schema are proposed to overcome this problem — using a semi-empirical local density functional being probably the most successful.
A fourth party might then observe that the Universe is really relativistic, and that by ignoring relativity theory and doing a classical computation we introduce an error into all of the above (although it might be included in the semi-empirical LDF approach heuristically).
In the end, one might well have an “ensemble” of models, all of which are based on physics. In fact, the differences are also based on physics — the physics omitted from one try to another, or the means used to approximate and try to include physics we cannot include in a first-principles computation (note how I sneaked a semi-empirical note in with the LDF, although one can derive some density functionals from first principles (e.g. Thomas-Fermi approximation), they usually don’t do particularly well because they aren’t valid across the full range of densities observed in actual atoms). Note well, doing the precise computation is not an option. We cannot solve the many body atomic state problem in quantum theory exactly any more than we can solve the many body problem exactly in classical theory or the set of open, nonlinear, coupled, damped, driven chaotic Navier-Stokes equations in a non-inertial reference frame that represent the climate system.
Note well that solving for the exact, fully correlated nonlinear many electron wavefunction of the humble carbon atom — or the far more complex Uranium atom — is trivially simple (in computational terms) compared to the climate problem. We can’t compute either one, but we can come a damn sight closer to consistently approximating the solution to the former compared to the latter.
So, should we take the mean of the ensemble of “physics based” models for the quantum electronic structure of atomic carbon and treat it as the best prediction of carbon’s quantum structure? Only if we are very stupid or insane or want to sell something. If you read what I said carefully (and you may not have — eyes tend to glaze over when one reviews a year or so of graduate quantum theory applied to electronics in a few paragraphs, even though I left out perturbation theory, Feynman diagrams, and ever so much more :-) you will note that I cheated — I ran in a semi-empirical method.
Which of these is going to be the winner? LDF, of course. Why? Because the parameters are adjusted to give the best fit to the actual empirical spectrum of Carbon. All of the others are going to underestimate the correlation hole, and their errors will be systematically deviant from the correct spectrum. Their mean will be systematically deviant, and by weighting Hartree (the dumbest reasonable “physics based approach”) the same as LDF in the “ensemble” average, you guarantee that the error in this “mean” will be significant.
Suppose one did not know (as, at one time, we did not know) which of the models gave the best result. Suppose that nobody had actually measured the spectrum of Carbon, so its empirical quantum structure was unknown. Would the ensemble mean be reasonable then? Of course not. I presented the models in the way physics itself predicts improvement — adding back details that ought to be important that are omitted in Hartree. One cannot be certain that adding back these details will actually improve things, by the way, because it is always possible that the corrections are not monotonic (and eventually, at higher orders in perturbation theory, they most certainly are not!) Still, nobody would pretend that the average of a theory with an improved theory is “likely” to be better than the improved theory itself, because that would make no sense. Nor would anyone claim that diagrammatic perturbation theory results (for which there is a clear a priori derived justification) are necessarily going to beat semi-heuristic methods like LDF because in fact they often do not.
What one would do in the real world is measure the spectrum of Carbon, compare it to the predictions of the models, and then hand out the ribbons to the winners! Not the other way around. And since none of the winners is going to be exact — indeed, for decades and decades of work, none of the winners was even particularly close to observed/measured spectra in spite of using supercomputers (admittedly, supercomputers that were slower than your cell phone is today) to do the computations — one would then return to the drawing board and code entry console to try to do better.
Can we apply this sort of thoughtful reasoning to the spaghetti snarl of GCMs and their highly divergent results? You bet we can! First of all, we could stop pretending that “ensemble” mean and variance have any meaning whatsoever by not computing them. Why compute a number that has no meaning? Second, we could take the actual climate record from some “epoch starting point” — one that does not matter in the long run, and we’ll have to continue the comparison for the long run because in any short run from any starting point noise of a variety of sorts will obscure systematic errors — and we can just compare reality to the models. We can then sort out the models by putting (say) all but the top five or so into a “failed” bin and stop including them in any sort of analysis or policy decisioning whatsoever unless or until they start to actually agree with reality.
Then real scientists might contemplate sitting down with those five winners and meditate upon what makes them winners — what makes them come out the closest to reality — and see if they could figure out ways of making them work even better. For example, if they are egregiously high and diverging from the empirical data, one might consider adding previously omitted physics, semi-empirical or heuristic corrections, or adjusting input parameters to improve the fit.
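
A minimal sketch of the winnowing procedure described above, using synthetic series in place of real model output and observations; the scoring metric (RMSE) and the cutoff are illustrative choices only.

```python
# Sketch of the winnowing idea: score each model run against the observed
# record and keep only the closest few. All data here are synthetic.
import numpy as np

rng = np.random.default_rng(1)
months = 200
observed = np.cumsum(0.001 + 0.05 * rng.standard_normal(months))  # fake record

# Fake "model" runs: same shape, different drift and noise
models = {f"model_{i:02d}": np.cumsum(0.001 * (1 + i) + 0.05 * rng.standard_normal(months))
          for i in range(10)}

scores = {name: np.sqrt(np.mean((run - observed) ** 2)) for name, run in models.items()}
keep = sorted(scores, key=scores.get)[:3]   # keep the closest few (cutoff is arbitrary here)
print("Kept:", keep)
print("Binned:", [m for m in scores if m not in keep])
```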
Then comes the hard part. Waiting. The climate is not as simple as a Carbon atom. The latter’s spectrum never changes, it is a fixed target. The former is never the same. Either one’s dynamical model is never the same and mirrors the variation of reality or one has to conclude that the problem is unsolved and the implementation of the physics is wrong, however “well-known” that physics is. So one has to wait and see if one’s model, adjusted and improved to better fit the past up to the present, actually has any predictive value.
Worst of all, one cannot easily use statistics to determine when or if one’s predictions are failing, because damn, climate is nonlinear, non-Markovian, chaotic, and is apparently influenced in nontrivial ways by a world-sized bucket of competing, occasionally cancelling, poorly understood factors. Soot. Aerosols. GHGs. Clouds. Ice. Decadal oscillations. Defects spun off from the chaotic process that cause global, persistent changes in atmospheric circulation on a local basis (e.g. blocking highs that sit out on the Atlantic for half a year) that have a huge impact on annual or monthly temperatures and rainfall and so on. Orbital factors. Solar factors. Changes in the composition of the troposphere, the stratosphere, the thermosphere. Volcanoes. Land use changes. Algae blooms.
And somewhere, that damn butterfly. Somebody needs to squash the damn thing, because trying to ensemble average a small sample from a chaotic system is so stupid that I cannot begin to describe it. Everything works just fine as long as you average over an interval short enough that you are bound to a given attractor, oscillating away, things look predictable and then — damn, you change attractors. Everything changes! All the precious parameters you empirically tuned to balance out this and that for the old attractor suddenly require new values to work.
This is why it is actually wrong-headed to acquiesce in the notion that any sort of p-value or Rsquared derived from an AR5 mean has any meaning. It gives up the high ground (even though one is using it for a good purpose, trying to argue that this “ensemble” fails elementary statistical tests). But statistical testing is a shaky enough theory as it is, open to data dredging and horrendous error alike, and that’s when it really is governed by underlying IID processes (see “Green Jelly Beans Cause Acne”). One cannot naively apply a criterion like rejection if p < 0.05, and all that means under the best of circumstances is that the current observations are improbable given the null hypothesis at 19 to 1. People win and lose bets at this level all the time. One time in 20, in fact. We make a lot of bets!
So I would recommend — modestly — that skeptics try very hard not to buy into this and redirect all such discussions to questions such as why the models are in such terrible disagreement with each other, even when applied to identical toy problems that are far simpler than the actual Earth, and why we aren’t using empirical evidence (as it accumulates) to reject failing models and concentrate on the ones that come closest to working, while also not using the models that are obviously not working in any sort of “average” claim for future warming. Maybe they could hire themselves a Bayesian or two and get them to recompute the AR curves, I dunno.
It would take me, in my comparative ignorance, around five minutes to throw out all but the best 10% of the GCMs (which are still diverging from the empirical data, but arguably are well within the expected fluctuation range on the DATA side), sort the remainder into top-half models that should probably be kept around and possibly improved, and bottom half models whose continued use I would defund as a waste of time. That wouldn’t make them actually disappear, of course, only mothball them. If the future climate ever magically popped back up to agree with them, it is a matter of a few seconds to retrieve them from the archives and put them back into use.
Of course if one does this, the GCM predicted climate sensitivity plunges from the totally statistically fraudulent 2.5 C/century to a far more plausible and still possibly wrong ~1 C/century, which — surprise — more or less continues the post-LIA warming trend with a small possible anthropogenic contribution. This large a change would bring out pitchforks and torches as people realize just how badly they’ve been used by a small group of scientists and politicians, how much they are the victims of indefensible abuse of statistics to average in the terrible with the merely poor as if they are all equally likely to be true with randomly distributed differences.
rgb

Jeff Alberts

The NYT says the absence of warming arises because skeptics cherry-pick 1998, the year of the Great el Niño, as their starting point.

Going back to 1998 is small potatoes. Let’s go back 1000 years, 2000, 5000, even back to the last interglacial. The best data we have show that all of those times were warmer than now.
17 years? Piffle.

Jeff Alberts

rgbatduke says:
June 13, 2013 at 7:20 am
Saying that we need to wait for a certain interval in order to conclude that “the models are wrong” is dangerous and incorrect for two reasons. First — and this is a point that is stunningly ignored — there are a lot of different models out there, all supposedly built on top of physics, and yet no two of them give anywhere near the same results!

As I understand it, running the same model twice in a row with the same parameters won’t even produce the same results. But somehow averaging the results together is meaningful? Riiiight. As meaningful as a “global temperature” which is not at all.

Steven said:
“Since when is weather/climate a linear behaviorist?… I realize this is a short timescale and things may look linear but they are not. Not even close.”
Absolutely spot-on Steven. Drawing lines all over data that is patently non-linear in its behaviour is a key part of the CAGW hoax.

Thomas

RichardLH, the context of the discussion is Monckton’s statement that “On Dr. Santer’s 17-year test, then, the models may have failed. A rethink is needed.” This statement is based on a (IMHO probably intentional) mixing of the measured trend, which is what Santer was talking about, with the question of whether the trend is statistically significant or not. How can a model be falsified by a value of the trend that isn’t significantly different from the expected one?

Latitude

This whole argument is the most ridiculous thing I’ve ever seen…
…who in their right mind would argue with these nutters when you start out by letting them define what’s “normal”
You guys have sat back and let the enemy define where that “normal” line is drawn…
….and then you argue with them that it’s above or below “normal”
Look at any paleo temp record……and realize how stupid this argument is
http://www.foresight.org/nanodot/wp-content/uploads/2009/12/histo4.png