# No significant warming for 17 years 4 months

By Christopher Monckton of Brenchley

As Anthony and others have pointed out, even the New York Times has at last been constrained to admit what Dr. Pachauri of the IPCC conceded some months ago: there has been no global warming statistically distinguishable from zero for getting on for two decades.

The NYT says the absence of warming arises because skeptics cherry-pick 1998, the year of the Great el Niño, as their starting point. However, as Anthony explained yesterday, the stasis goes back farther than that. He says we shall soon be approaching Dr. Ben Santer’s 17-year test: if there is no warming for 17 years, the models are wrong.

Usefully, the latest version of the Hadley Centre/Climatic Research Unit monthly global mean surface temperature anomaly series provides not only the anomalies themselves but also the 2 σ uncertainties.

Superimposing the temperature curve and its least-squares linear-regression trend on the statistical insignificance region bounded by the means of the trends on these published uncertainties since January 1996 demonstrates that there has been no statistically-significant warming in 17 years 4 months:
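For readers who want to reproduce this kind of check, the test described above can be sketched in a few lines. This is an illustrative sketch only: synthetic data stand in for the HadCRUT4 anomalies, and the significance test uses the textbook regression standard error rather than the published measurement uncertainties the article relies on; the function name is my own.

```python
# Illustrative sketch: fit an OLS trend to a monthly anomaly series and ask
# whether the slope is distinguishable from zero at the 2-sigma level.
# Synthetic data stand in for HadCRUT4; the uncertainty here is the
# regression standard error, not the published measurement uncertainty.
import numpy as np

def trend_with_2sigma(anomalies):
    """Return (slope, 2-sigma slope uncertainty), both per month."""
    t = np.arange(len(anomalies), dtype=float)
    n = len(t)
    A = np.vstack([t, np.ones(n)]).T
    (slope, intercept), *_ = np.linalg.lstsq(A, anomalies, rcond=None)
    resid = anomalies - (slope * t + intercept)
    s2 = (resid @ resid) / (n - 2)                    # residual variance
    se_slope = np.sqrt(s2 / ((t - t.mean()) ** 2).sum())
    return slope, 2.0 * se_slope

rng = np.random.default_rng(0)
months = 208                                          # 17 years 4 months
series = 0.0003 * np.arange(months) + rng.normal(0.0, 0.1, months)
slope, twosig = trend_with_2sigma(series)
print(f"trend = {slope * 1200:.2f} ± {twosig * 1200:.2f} C/century (2 sigma)")
print("statistically significant:", abs(slope) > twosig)
```

With real monthly anomalies loaded into `series`, the same call applies; multiplying a per-month slope by 1200 gives the per-century equivalent.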

On Dr. Santer’s 17-year test, then, the models may have failed. A rethink is needed.

The fact that an apparent warming rate equivalent to almost 0.9 Cº/century is statistically insignificant may seem surprising at first sight, but there are two reasons for it. First, the published uncertainties are substantial: approximately 0.15 Cº either side of the central estimate.

Secondly, one weakness of linear regression is that it is unduly influenced by outliers. Visibly, the Great el Niño of 1998 is one such outlier.

If 1998 were the only outlier, and particularly if it were the largest, going back to 1996 would be much the same as cherry-picking 1998 itself as the start date.

However, the magnitude of the 1998 positive outlier is countervailed by that of the 1996/7 la Niña. Also, there is a still more substantial positive outlier in the shape of the 2007 el Niño, against which the la Niña of 2008 countervails.

In passing, note that the cooling from January 2007 to January 2008 is the fastest January-to-January cooling in the HadCRUT4 record going back to 1850.

Bearing these considerations in mind, going back to January 1996 is a fair test for statistical significance. And, as the graph shows, there has been no warming that we can statistically distinguish from zero throughout that period, for even the rightmost endpoint of the regression trend-line falls (albeit barely) within the region of statistical insignificance.

Be that as it may, one should beware of focusing the debate solely on how many years and months have passed without significant global warming. Another strong el Niño could – at least temporarily – bring the long period without warming to an end. If so, the cry-babies will screech that catastrophic global warming has resumed, the models were right all along, etc., etc.

It is better to focus on the ever-widening discrepancy between predicted and observed warming rates. The IPCC’s forthcoming Fifth Assessment Report backcasts the interval of 34 models’ global warming projections to 2005, since when the world should have been warming at a rate equivalent to 2.33 Cº/century. Instead, it has been cooling at a rate equivalent to a statistically-insignificant 0.87 Cº/century:

The variance between prediction and observation over the 100 months from January 2005 to April 2013 is thus equivalent to 3.2 Cº/century.
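The arithmetic behind that figure is simple enough to check directly; a minimal sketch, with both rates taken from the text above:

```python
# Sanity-checking the quoted gap.  Both rates are taken from the article:
# models project +2.33 C/century since January 2005, while the observed
# trend over the same 100 months is -0.87 C/century.
projected = 2.33    # C/century, AR5 model mean backcast to 2005
observed = -0.87    # C/century, observed (statistically insignificant)
gap = projected - observed
print(f"prediction-minus-observation gap: {gap:.1f} C/century")
```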

The correlation coefficient is low, the period of record is short, and I have not yet obtained the monthly projected-anomaly data from the modelers to allow a proper p-value comparison.

Yet it is becoming difficult to suggest with a straight face that the models’ projections are healthily on track.

From now on, I propose to publish a monthly index of the variance between the IPCC’s predicted global warming and the thermometers’ measurements. That variance may well inexorably widen over time.

In any event, the index will limit the scope for false claims that the world continues to warm at an unprecedented and dangerous rate.

UPDATE: Lucia’s Blackboard has a detailed essay analyzing the recent trend, written by SteveF, using an improved index accounting for ENSO, volcanic aerosols, and solar cycles. He concludes that the best-estimate rate of warming from 1997 to 2012 is less than 1/3 the rate of warming from 1979 to 1996. Also, the original version of this story incorrectly referred to the Washington Post, when it was actually the New York Times article by Justin Gillis. That reference has been corrected. – Anthony

RDG
June 13, 2013 3:38 am

Thank you.

Harold Ambler
June 13, 2013 3:39 am

1. Time to point out again that when the warmists convinced the world to use anomaly graphs in considering the climate system they more or less won the game. As Essex and McKitrick (and others) point out, temperature, graphed in Kelvins, has been pretty close to flat for the past thousand years or so. The system displays remarkable homeostasis, and almost no lay people are aware of this simple fact.
2. I would like to make a documentary in which man-on-the-street interviews are conducted where the interviewee gets to draw absolute temps over the last century, last millennium, etc. The exaggerated sense of what has been happening would be hilarious, and kind of sad, to see.
3. The intellectual knots that the warmists have already tied themselves into explaining away the last decade and a half of global temps have been ugly. And, as most here know, I am betting that the ugliness gets uglier for the next decade and a half — at least.

AlecM
June 13, 2013 3:57 am

There can be no CO2-GW, A or otherwise. And even if there were, there could be no positive feedback. CO2 is the working fluid in the control system maintaining OLR = SW thermalised.
This is imposed by irreversible thermodynamics – the increased radiation entropy from converting 5500 K SW to 255 K LW. The clouds adapt to control atmosphere entropy production to a minimum.
Basic science was forgotten by Hansen when the first GISS modelling paper wrongly assumed CO2 blocked 7 – 14 micron OLR and LR warming was the GHE: 1981_Hansen_etal.pdf from NASA. They got funding and fame for 32 years of a scientific scam.

ImranCan
June 13, 2013 4:05 am

Very nice post …. I made some similar remarks in comments on a John Abrahams / Dana Nuticelli article in the Guardian yesterday – just asking how climate change effects could be “accelerating” when temperatures have not been going up ….. and had my comments repeatedly censored. I woke up this morning to find I am now banned as a commenter. Simply a very sad indictment of the inability of warmist ‘scientists’ to tolerate any form of critique or basic obvious questioning.

Thomas
June 13, 2013 4:17 am

Note that “no warming” and “no statistically significant warming” are not the same thing. The most reasonable interpretation of Santer’s statement is that there has to be no measured warming for 17 years, and as is clear from the diagram there has been warming, only not large enough to be statistically significant. The uncertainty is large enough that the data are also consistent with a trend of 0.2 K/decade, i.e., in line with IPCC predictions.

Jean Meeus
June 13, 2013 4:20 am

Yes indeed. A few days ago, the Belgian newspaper ‘Metro’, too, wrote that the temperatures are accelerating dangerously. Well heavens…

MattN
June 13, 2013 4:22 am

I am 100% positive I remember Gavin saying 10 years somewhere on ReallywrongClimate. No warming for 10 years, the models were wrong….

HaroldW
June 13, 2013 4:28 am

Correction: The essay at Lucia’s Blackboard was written by SteveF, not by Lucia.

dwr54
June 13, 2013 4:30 am

Re Santer et al. (2011). Is it not the case that this paper explicitly refers to lower troposphere (i.e. satellite) data and that it also explicitly refers to the “observational” data, rather than statistical significance levels?
In other words, all Santer et al. 2011 stated was that we should see a warming trend in the raw satellite data over a period of 17 years. At present that is what we do see in both UAH and RSS (much more so in UAH).
I don’t immediately see what Santer et al. 2011 has to do with statistical significance in a surface station data set such as HadCRUT4.

Steven
June 13, 2013 4:36 am

I keep seeing these graphs with linear progressions. Seriously. I mean seriously. Since when does weather/climate behave linearly? The equations that attempt to map/predict the magnetic field of the earth are complex Fourier series. Is someone, somewhere suggesting that the magnetic field is more complex than the climate envelope about the earth? I realize this is a short timescale and things may look linear, but they are not. Not even close. Like I said in the beginning, the great climate hoax is nothing more than what I just called it. I am glad someone has the tolerance to deal with these idiots. I certainly don’t.

Colin Porter
June 13, 2013 4:39 am

So how did the climate scientists and the news media including the NYT report the 1998 El Nino? Apocalypse now, I would suggest! So even if the start date was cherry picked, it would be fair game.

June 13, 2013 4:40 am

Thomas said:
“the data are also…..in line with IPCC predictions.”
Ha, ha, ha, ha!
And the sky is green and the grass is blue…..

Jostemikk
June 13, 2013 4:42 am

No statistically significant warming in 18 years and 5 months:
#Time series (rss) from 1979 to 2013.42
#Selected data from 1995
#Least squares trend line; slope = 0.00365171 per year
No warming in 16 years and 5 months:
#Time series (rss) from 1979 to 2013.42
#Selected data from 1997
#Least squares trend line; slope = -0.000798188 per year
Oh lord…

David L. Hagen(@hagendl)
June 13, 2013 4:50 am

SteveF wrote Estimating the Underlying Trend in Recent Warming
(“12 June, 2013 (20:10) Written by: SteveF” posted at Lucia’s The Blackboard

The slope since 1997 is less than 1/6 that from 1979 to 1996. . . .
Warming has not stopped, but it has slowed considerably. . . .
the influence of the ENI on global temperatures (as calculated by the global regression analysis) is just slightly more than half the influence found for the tropics alone (30S to 30N): 0.1099+/-0.0118 global versus 0.1959+/-0.016 tropics. . . .
The analysis indicates that global temperatures were significantly depressed between ~1964 and ~1999 compared to what they would have been in the absence of major volcanoes. . . .
the model does not consider the influence of (slower) heat transfer between the surface and deeper ocean. In other words, the calculated impact of solar and volcanic forcings would be larger (implying somewhat higher climate sensitivity) if a better model of heat uptake/release to/from the ocean were used.

It looks like SteveF provides a major improvement in understanding and quantifying the “shorter”-term impacts of solar, volcanoes and ocean oscillations (ENSO) and their related lags. Now I hope he can get it formally published.

June 13, 2013 4:52 am

This post is preaching to the choir (and, with all due respect for Christopher Monckton’s energy in the climate debates, it is by a scientific dilettante, however well-informed and clearly intelligent, to an audience of laypersons–what the failure of climate science, in the present incompetent consensus, has brought us all to). (And I am not one of the many who has a pet theory, and claims to have all the answers–I merely kept my eyes and mind open for clear, definitive evidence of what is really wrong, and found it, as some portion of the readers here well know. I am a professional scientist, a physicist, in the older academic tradition, that knew how to Verify.)
ImranCan’s comment above confirms what so many should already know: The Insane Left (my term for them) only dared to alarm the world with this monumental fraud because they fervently want to believe a benevolent universe (not God, heaven forbid, but only a universe in which “you create your own reality”–one of the great lies of the modern world) has put into their hands an authoritative instrument through which their similarly-fixated political ideology could take over… the western world, at least. The “science” has ALWAYS been “settled”, period, because they NEED it to be, to hold together their fundamentally creaky coalition of peoples bitter, for any reason, against “the old order”. They want a revolution, one way or another. And this is war, one way or another. The best hope for mankind, and especially the western world, is that somehow a growing number of those who have been suborned to the Insane Left will come to their senses, let their innate intelligence come out, and declare their independence and opposition to the would-be tyrants.

June 13, 2013 5:14 am

Perhaps off-topic, but I am having serious thoughts about why we constantly refer to the “greenhouse effect”. To use a greenhouse is to use a pretty poor analogy; the Earth is not surrounded by a hard shell of “greenhouse gasses”, with air movements and other causes of potential cooling inside strictly regulated. It could be that we are not only barking up the wrong tree, but we are in the wrong garden, in the wrong country – and it is not even a tree!
About 99% of the Earth’s atmosphere (i.e. 20.9% oxygen and 78% nitrogen) is not composed of “greenhouse gasses.” Why not test the idea: find a greenhouse, and remove 99% of the glass, so as to leave a thin web of glass (let us assume this is possible). I doubt you will be able to measure any difference between the “inside” of the greenhouse and outside; however, to “improve” its effectiveness, add 0.05% more glass. Stand back, and watch in amazement as the temperatures soar!
You don’t think someone is trying to sell us a load of snake oil, do you?

M Courtney
June 13, 2013 5:25 am

ImranCan says at June 13, 2013 at 4:05 am

Very nice post …. I made some similar remarks in comments on a John Abrahams / Dana Nuticelli article in the Guardian yesterday – just asking how climate change effects could be “accelerating” when temperatures have not been going up ….. and had my comments repeatedly censored. I woke up this morning to find I am now banned as a commenter. Simply a very sad indictment of the inability of warmist ‘scientists’ to tolerate any form of critique or basic obvious questioning.

I also linked to the Met Office and showed that temperature rises are not accelerating. In addition I pointed out that the theoretical basis for the acceleration is challenged empirically by the lack of the Tropical Hotspot (with a link to Jo Nova).
So I also am now banned from posting at the Guardian. That is, I am subject to “pre-moderation”.
The worst impact of creating this echo-chamber is the decline in the Guardian’s readership. The number of comments on their environment blogs is declining rapidly.
It is a shame that a lively, left-wing forum has decided to commit suicide by out-sourcing moderation to alleged scientists who can’t defend their position.
How long until the advertisers realise?

John West
June 13, 2013 5:37 am

@ MattN
http://www.realclimate.org/index.php/archives/2007/12/a-barrier-to-understanding/
“what year would you reconsider the CO2 – Warming paradigm if the CRU Global annual mean temperature is cooler than 2005 – 2009…?”
“You need a greater than a decade non-trend that is significantly different from projections. [0.2 – 0.3 deg/decade]”

Frank K.
June 13, 2013 5:40 am

“So I also am now banned from posting at the Guardian.”
Welcome to the newspeak Orwellian media complex, Winston.
Fortunately, we are still free enough in this world to tell the Guardian (and, most importantly, their \$ponsor\$) to stuff it…

ConfusedPhoton
June 13, 2013 5:43 am

How long before the 17 year test becomes a 25 year test? – just a matter of homogenising!

June 13, 2013 6:02 am

If memory serves, it seems that the Meteorological community has used the ‘thirty-year’ time frame for standardizing its records, in order to classify climate and climate zones. I suspect that meteorologists might soon suggest that a ‘fifty-year’ or even a ‘sixty-year’ time frame become the standard reference frame.
That would be one way to get around Gavin’s “… seventeen year …” test.
Or, we could just adjust the data some more, to make them fit the models … … … ………

eyesonu
June 13, 2013 6:04 am

At first there were a few looking for the truth. Then there were more. Soon there were many. Next there was an army marching for the truth. Now the truth goes marching on!
Oh, it’s that army of ones again. They have liberated the truth.
sorry, but I don’t know how to put musical notes in a blog post 😉

Jimbo
June 13, 2013 6:07 am

What I want to know from any Warmists is what would falsify the climate model projections as used by the IPCC? Example 20 years of no warming?

pyromancer76
June 13, 2013 6:09 am

M Courtney at 5:25 a.m. says:
“So I also am now banned from posting at the Guardian. That is, I am subject to “pre-moderation”.
The worst impact of creating this echo-chamber is the decline in the Guardian’s readership. The number of comments on their environment blogs is declining rapidly.
It is a shame that a lively, left-wing forum has decided to commit suicide by out-sourcing moderation to alleged scientists who can’t defend their position.
How long until the advertisers realise?”
Would that these former institutions of the Fourth Estate were subject to the forces of the market. Many would have failed already. However, they are being funded — and their employees (formerly investigative journalists) fully paid and supported — as the mouthpiece of elites who are acting similarly to the Robber Barons of the U.S. 19th Century. At least the Robber Barons through their greed also brought productivity. Not so much these elites. Who are they? Fabulously wealthy Islamists on our oil money; brilliant financial scam artists like financiers whether “left or right” (debt posing as equity); IT corporations who (corps are persons) destroy competition; all those corporations that also hate “the market” (immigration “reform” for cheap labor — that will take care of those independent Americans); and the secular religionists. What a motley group.
They will eventually fail. We must see that they do not take the rest of us along with them. Thank you Anthony and crew for your valiant and courageous efforts.

Scott Scarborough
June 13, 2013 6:11 am

It is meaningless to say that there is warming, just not statistically significant warming. Someone who says that does not know what statistical significance is.

June 13, 2013 6:11 am

The one time a “Cherry-picking” accusation fails is when you use the present day as an anchor & look back into the past.
The observed temperature differential just doesn’t meet any definition of “catastrophic,” “runaway,” “emergency,” “critical,” or any synonym you can pull out of the (unwarming) air to justify the multitude of draconian measures ALREADY IN PLACE that curtail world economies or subsidize failing alternative energy attempts!!!

Richard M
June 13, 2013 6:11 am

I like to use RSS because it is not contaminated with UHI, extrapolation and infilling. As indicated above the trend has been perfectly flat for 16.5 years (Dec. 1996). At some point in the near future, given the current cooling that could be later this year, the starting point could move back to the start of 1995. That would mean around 19 years with a zero trend.
I like to use the following graph because it demonstrates a change from the warming regime of the PDO to the cooling regime. It also shows how you could have many of the warmest years despite the lack of warming over the entire interval.
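Richard M’s point about the zero-trend start date moving back can be framed as a simple search: find the earliest month from which the least-squares trend to the present is non-positive. A hedged sketch with invented data (the function name and the synthetic series are my own, not anything from RSS):

```python
# Hypothetical sketch (names and data invented): find the earliest start
# index from which the least-squares trend to the end of the series is
# non-positive, i.e. how far back the "zero trend" can be pushed.
import numpy as np

def earliest_flat_start(series):
    """Smallest start index whose trend-to-end is <= 0, or None."""
    for start in range(len(series) - 2):
        seg = series[start:]
        t = np.arange(len(seg), dtype=float)
        if np.polyfit(t, seg, 1)[0] <= 0.0:
            return start
    return None

# Synthetic anomalies: 120 months of warming, then a flat-to-cooling tail.
series = np.concatenate([np.linspace(0.0, 0.5, 120),
                         0.5 - 0.001 * np.arange(80)])
print("flat trend begins at month index:", earliest_flat_start(series))
```

As more flat or cooling months are appended, the returned index moves earlier, which is exactly the effect described in the comment.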

June 13, 2013 6:16 am

How long before the warmists make 1998 go away like they did with the MWP? Funny how 1998 was the shot across the bow warning when it was on the right side of the graph but an inconvenient truth on the left.

DirkH
June 13, 2013 6:16 am

M Courtney says:
June 13, 2013 at 5:25 am
“It is a shame that a lively, left-wing forum has decided to commit suicide by out-sourcing moderation to alleged scientists who can’t defend their position.”
Guardian, Spiegel and NYT are the modern versions of the Pravda for the West. I read them to know what the 5 minute hate of the day is.

Bob Tisdale(@bobtisdale)
Editor
June 13, 2013 6:22 am

Looks like Lucia’s website is overloaded. I can get through on the main page but I can’t open SteveF’s post without getting an error message. I tried to leave him the following comment:
SteveF: As far as I can tell, your model assumes a linear relationship between your ENSO index and global surface temperatures.
Trenberth et al (2002)…
http://www.cgd.ucar.edu/cas/papers/2000JD000298.pdf
…cautioned against this. They wrote, “Although it is possible to use regression to eliminate the linear portion of the global mean temperature signal associated with ENSO, the processes that contribute regionally to the global mean differ considerably, and the linear approach likely leaves an ENSO residual.”
Compo and Sardeshmukh (2010)…
http://journals.ametsoc.org/doi/abs/10.1175/2009JCLI2735.1?journalCode=clim
…note that it should not be treated as noise that can be removed. Their abstract begins: “An important question in assessing twentieth-century climate change is to what extent have ENSO-related variations contributed to the observed trends. Isolating such contributions is challenging for several reasons, including ambiguities arising from how ENSO itself is defined. In particular, defining ENSO in terms of a single index and ENSO-related variations in terms of regressions on that index, as done in many previous studies, can lead to wrong conclusions. This paper argues that ENSO is best viewed not as a number but as an evolving dynamical process for this purpose…”
I’ve been illustrating and discussing for a couple of years that the sea surface temperatures of the East Pacific (90S-90N, 180-80W) show that it is the only portion of the global oceans that responds linearly to ENSO, but that the sea surface temperatures there haven’t warmed in 31 years:
http://oi47.tinypic.com/hv8lcx.jpg
On the other hand, the sea surface temperature anomalies of the Atlantic, Indian and West Pacific (90S-90N, 80W-180) warm in El Niño-induced steps (the result of leftover warm water from the El Niños) that cannot be accounted for with your model:
http://oi49.tinypic.com/29le06e.jpg
A more detailed, but introductory level, explanation of the processes that cause those shifts can be found here [42MB .pdf]:
And what fuels the El Ninos? Sunlight. Even Trenberth et al (2002), linked above, acknowledges that fact. They write, “The negative feedback between SST and surface fluxes can be interpreted as showing the importance of the discharge of heat during El Niño events and of the recharge of heat during La Niña events. Relatively clear skies in the central and eastern tropical Pacific allow solar radiation to enter the ocean, apparently offsetting the below normal SSTs, but the heat is carried away by Ekman drift, ocean currents, and adjustments through ocean Rossby and Kelvin waves, and the heat is stored in the western Pacific tropics. This is not simply a rearrangement of the ocean heat, but also a restoration of heat in the ocean.”
In other words, ENSO acts as a chaotic recharge-discharge oscillator, where the discharge events (El Niños) are occasionally capable of raising global temperatures, where they remain relatively stable for periods of a decade or longer.
In summary, you’re treating ENSO as noise, while data indicate that it is responsible for much of the warming over the past 30 years.
Regards

Brian
June 13, 2013 6:27 am

I wonder if ACGW advocates feel a little like advocates of the Iraq invasion felt when no WMDs were discovered? Just a random thought.

Bob Tisdale(@bobtisdale)
Editor
June 13, 2013 6:28 am

I got through. There must’ve been a temporary mad rush to Lucia’s Blackboard for a few minutes.

StephenP
June 13, 2013 6:28 am

Rather off-topic, but there are 4 questions that I would like the answer to:
1. We are told the concentration of CO2 in the atmosphere is 0.039%, but what is the concentration of CO2 at different heights above the earth’s surface? As CO2 is ‘heavier than air’ one would expect it to be at higher percentages near the earth’s surface.
2. Do the CO2 molecules rise as they absorb heat during the day from the sun? And how far?
3. Do the CO2 molecules fall at night when they no longer get any heat input from the sun?
4. When a CO2 molecule is heated, does it re-radiate equally in all directions, assuming the surroundings are cooler, or does it radiate heat in proportion to the difference in temperature in any particular direction?

JabbaTheCat
June 13, 2013 6:32 am

Lucia’s site not currently available…

jonny old boy
June 13, 2013 6:37 am

Human beings caused the largest extinction rate in the planet’s history (the Pleistocene extinctions). These extinctions came at a different time and at a different rate than the climate changes, and it’s clear that wild climate swings in (relatively) short periods of time did pretty much nothing to the earth’s species on any significant scale. It’s exactly the same now. We are still causing extinctions at a record rate, simply by being here, not by “altering” the climate; and even if we did (or are) altering the climate, this effect on the planet is insignificant next to the simple fact that we are just “here”… So-called “climate scientists” are often no such thing; they do not understand the basics of prehistoric climate change and the parameters involved. They completely ignore the most important evidence. Large animals in Africa alone survived the P.E. period simply by having evolved alongside humans; as soon as humans left Africa, at a very fast rate they pretty much wiped out the megafauna everywhere else…. It is this pattern of human behaviour that is statistically significant, not fractions of a degree Celsius. I wish alarmists would actually study a bit more!

Goldie
June 13, 2013 6:38 am

I suppose we could always wait until 2018, by which time the world will be bankrupt and it won’t matter. Alternatively we could start applying the precautionary principle the other way round. How about: a clear lack of correlation between hypothesis and reality should preclude precipitate action beyond that which is prudent and can be shown to have a benefit.

Ken G
June 13, 2013 6:39 am

First of all, skeptics didn’t pick 1998, the NOAA did in the 2008 State of the Climate report.
That report says, “Near-zero and even negative trends are common for intervals of a decade or less in the simulations, due to the model’s internal climate variability. The simulations rule out (at the 95% level) zero trends for intervals of 15 yr or more, suggesting that an observed absence of warming of this duration is needed to create a discrepancy with the expected present-day warming rate.”
It does not say “The simulations rule out (at the 95% level) zero trends for intervals of 15 years or more, except intervals starting in 1998…”
Second, I don’t know why anyone is bending over backwards to try to find statistical significance (or lack thereof) in a goalpost changing 17 year trend when we already have an unambiguous test for the models straight from the NOAA. Why bother with ever changing warmist arguments? Just throw the above at them and let them argue with the NOAA over it.

June 13, 2013 6:40 am

The problem is that models of catastrophic climate change are being used by futurists and tech companies and rent-seekers generally to argue that our written constitutions need to be jettisoned and new governance structures created that rely more on Big Data and supercomputers, to deal with the global warming crisis. I wish I were making this up, but I wrote about the political and social economy, and about using education globally to get there, just today. Based primarily on Marina Gorbis’ April 2013 book The Nature of the Future and Willis Harman’s 1988 Global Mind Change.
You can’t let actual temps get in the way of such a transformation. Do you have any idea how many well-connected people have decided we are all on the menu? Existing merely to finance their future plans and to do as we are told.

RichardLH
June 13, 2013 6:50 am

This analysis of the UAH data (and the implied future that it provides) suggests that the short term (< 60 years, anyway) may all be cyclic – not a linear trend of any form during that period.
http://s1291.photobucket.com/user/RichardLH/media/uahtrendsinflectionfuture_zps7451ccf9.png.html
That could turn in time into a 'Short Term Climate Predictor' 🙂

M Courtney
June 13, 2013 6:50 am

The Guardian is left-wing. That won’t be popular with people who aren’t.
But it wasn’t dumbed down. It wasn’t anti-democratic. It wasn’t just hate.
The Guardian was part of the civil society in which the political awareness that a democracy needs can develop.
So was the Telegraph from the other side.
But the Guardian has abandoned debate. That is the death of the Guardian. A loss which will be a weakening of the UK’s and the entire West’s political life.

SanityP
June 13, 2013 6:53 am

Interesting, and by the way:
On March 13, WUWT announced that Climategate 3.0 had occurred.
What happened to it?
Everybody just ignoring it ever happened?

June 13, 2013 6:55 am

Because of the thermal inertia of the oceans, and the fact that we should really be measuring the enthalpy of the system, the best metric for temperature is the SST data, which varies much more closely with enthalpy than land temperatures. The NOAA data at ftp://ftp.ncdc.noaa.gov/pub/data/anomalies/annual.ocean.90S.90N.df_1901-2000mean. show no net warming since 1997, and also show that the warming trend peaked in about 2003 and that the earth has been in a slight cooling trend since then. This trend will likely steepen and last for at least 20 years, and perhaps for hundreds of years beyond that if, as seems likely, the warming peak represents a peak in both the 60-year and 1000-year solar cycles.
For a discussion and detailed forecast see
http://climatesense-norpag.blogspot.com/2013/04/global-cooling-methods-and-testable.html

Thomas
June 13, 2013 6:56 am

StephenP: The CO2 concentration is constant throughout the atmosphere. Winds stir the atmosphere enough that the small density difference doesn’t matter. Nor does absorption or emission of photons cause the molecules to move up or down. CO2 molecules radiate equally in all directions.
Scott, “It is meaningless to say that there is warming, just not statistically significant warming. Someone who says that does not know what statistical significance is.”
I’d say that, on the contrary, anyone who thinks a measured trend that is larger than zero but does not quite reach statistical significance is the same as no trend does not know enough about statistics. Compare these three measurements: 0.9±1, 0±1 and −0.9±1. None of them is statistically different from zero, but the first one allows values as high as 1.9 while the last one allows values as low as −1.9.
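Thomas’s three hypothetical measurements can be tabulated directly. A minimal sketch (the interval arithmetic is the only content; none of this comes from any climate data set):

```python
# Thomas's three hypothetical measurements, each quoted as value ± 1.
# None is statistically distinct from zero, yet the allowed ranges differ.
measurements = [0.9, 0.0, -0.9]
half_width = 1.0
intervals = [(m - half_width, m + half_width) for m in measurements]
for m, (lo, hi) in zip(measurements, intervals):
    significant = not (lo <= 0.0 <= hi)       # is zero outside the interval?
    print(f"{m:+.1f} ± 1 -> allows [{lo:+.1f}, {hi:+.1f}], "
          f"significantly nonzero: {significant}")
```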

RichardLH
June 13, 2013 7:11 am

Thomas says:
June 13, 2013 at 6:56 am
“I’d say that, on the contrary, anyone who thinks a measured trend that is larger than zero but does not quite reach statistical significance is the same as no trend does not know enough about statistics.”
And without sufficient knowledge as to what the future actually provides (or an accurate model :-)), drawing any conclusions based on which end of any distribution the values may currently lie at is just a glorified guess.
If you were to draw conclusions about the consistency with which the data have moved towards a limit, you would have a better statistical idea of what the data are really saying.

rgbatduke
June 13, 2013 7:20 am

Saying that we need to wait for a certain interval in order to conclude that “the models are wrong” is dangerous and incorrect for two reasons. First — and this is a point that is stunningly ignored — there are a lot of different models out there, all supposedly built on top of physics, and yet no two of them give anywhere near the same results!
This is reflected in the graphs Monckton publishes above, where the AR5 trend line is the average over all of these models and in spite of the number of contributors the variance of the models is huge. It is also clearly evident if one publishes a “spaghetti graph” of the individual model projections (as Roy Spencer recently did in another thread) — it looks like the frayed end of a rope, not like a coherent spread around some physics supported result.
Note the implicit swindle in this graph: by forming a mean and standard deviation over model projections and then using the mean as a “most likely” projection and the variance as representative of the range of the error, one is treating the differences between the models as if they were uncorrelated random variates causing deviation around a true mean!
Say what?
This is such a horrendous abuse of statistics that it is difficult to know how to begin to address it. One simply wishes to bitch-slap whoever it was that assembled the graph and ensure that they never work or publish in the field of science or statistics ever again. One cannot generate an ensemble of independent and identically distributed models that have different code. One might, possibly, generate a single model that generates an ensemble of predictions by using uniform deviates (random numbers) to seed
“noise” (representing uncertainty) in the inputs.
What I’m trying to say is that the variance and mean of the “ensemble” of models is completely meaningless, statistically because the inputs do not possess the most basic properties required for a meaningful interpretation. They are not independent, their differences are not based on a random distribution of errors, there is no reason whatsoever to believe that the errors or differences are unbiased (given that the only way humans can generate unbiased anything is through the use of e.g. dice or other objectively random instruments).
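The statistical point here can be made with a toy simulation. The numbers below are invented purely for illustration (no actual GCM output): an "ensemble" whose members share a common systematic bias, so that averaging them does not converge on the truth and the ensemble spread says nothing about the real error.

```python
import random

# Hypothetical sketch: 20 "models" that all share the same systematic
# bias, plus small independent noise. The ensemble mean inherits the
# bias in full, while the ensemble spread reflects only the noise.
random.seed(42)
TRUTH = 0.0
SHARED_BIAS = 1.5                     # common systematic error
runs = [TRUTH + SHARED_BIAS + random.gauss(0, 0.3) for _ in range(20)]

mean = sum(runs) / len(runs)
var = sum((r - mean) ** 2 for r in runs) / (len(runs) - 1)
spread = var ** 0.5

# The spread (~0.3) badly understates the actual error (~1.5):
# treating it as an uncertainty band is exactly the swindle described.
print(f"ensemble mean = {mean:.2f}, spread = {spread:.2f}, "
      f"true error = {abs(mean - TRUTH):.2f}")
```

The mean-of-many-runs only cancels error when the deviations are independent and unbiased; a shared bias survives averaging untouched.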
So why buy into this nonsense by doing linear fits to a function — global temperature — that has never in its entire history been linear, although of course it has always been approximately smooth so one can always do a Taylor series expansion in some sufficiently small interval and get a linear term that — by the nature of Taylor series fits to nonlinear functions — is guaranteed to fail if extrapolated as higher order nonlinear terms kick in and ultimately dominate? Why even pay lip service to the notion that $R^2$ or $p$ for a linear fit, or for a Kolmogorov-Smirnov comparison of the real temperature record and the extrapolated model prediction, has some meaning? It has none.
Let me repeat this. It has no meaning! It is indefensible within the theory and practice of statistical analysis. You might as well use a ouija board as the basis of claims about the future climate history as the ensemble average of different computational physical models that do not differ by truly random variations and are subject to all sorts of omitted variable, selected variable, implementation, and initialization bias. The board might give you the right answer, might not, but good luck justifying the answer it gives on some sort of rational basis.
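The Taylor-series point about linear fits can be sketched concretely. A minimal example (using sin(x) as a stand-in for any smooth nonlinear record, not any climate series): the fit looks fine inside the fitting window and fails badly on extrapolation.

```python
import math

# Ordinary least-squares line fitted to sin(x) on the window [0, 0.5],
# then extrapolated to x = 3, where the higher-order terms dominate.
xs = [i * 0.05 for i in range(11)]            # fitting window [0, 0.5]
ys = [math.sin(x) for x in xs]

n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
        sum((x - xbar) ** 2 for x in xs)
intercept = ybar - slope * xbar

in_window_err = abs((slope * 0.5 + intercept) - math.sin(0.5))
extrapolated_err = abs((slope * 3.0 + intercept) - math.sin(3.0))
print(f"error at x=0.5: {in_window_err:.3f}, "
      f"error at x=3.0: {extrapolated_err:.3f}")
```

Inside the window the linear term of the Taylor expansion carries the fit; outside it, the neglected nonlinear terms take over and the extrapolation is guaranteed to diverge.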
Let’s invert this process and actually apply statistical analysis to the distribution of model results Re: the claim that they all correctly implement well-known physics. For example, if I attempt to do an a priori computation of the quantum structure of, say, a carbon atom, I might begin by solving a single electron model, treating the electron-electron interaction using the probability distribution from the single electron model to generate a spherically symmetric “density” of electrons around the nucleus, and then performing a self-consistent field theory iteration (resolving the single electron model for the new potential) until it converges. (This is known as the Hartree approximation.)
Somebody else could say “Wait, this ignores the Pauli exclusion principle” and the requirement that the electron wavefunction be fully antisymmetric. One could then make the (still single electron) model more complicated and construct a Slater determinant to use as a fully antisymmetric representation of the electron wavefunctions, generate the density, perform the self-consistent field computation to convergence. (This is Hartree-Fock.)
A third party could then note that this still underestimates what is called the “correlation energy” of the system, because treating the electron cloud as a continuous distribution through which electrons move ignores the fact that individual electrons strongly repel and hence do not like to get near one another. Both of the former approaches underestimate the size of the electron hole, and hence they make the atom “too small” and “too tightly bound”. A variety of schema are proposed to overcome this problem — using a semi-empirical local density functional being probably the most successful.
A fourth party might then observe that the Universe is really relativistic, and that by ignoring relativity theory and doing a classical computation we introduce an error into all of the above (although it might be included in the semi-empirical LDF approach heuristically).
In the end, one might well have an “ensemble” of models, all of which are based on physics. In fact, the differences are also based on physics — the physics omitted from one try to another, or the means used to approximate and try to include physics we cannot include in a first-principles computation (note how I sneaked a semi-empirical note in with the LDF, although one can derive some density functionals from first principles (e.g. Thomas-Fermi approximation), they usually don’t do particularly well because they aren’t valid across the full range of densities observed in actual atoms). Note well, doing the precise computation is not an option. We cannot solve the many body atomic state problem in quantum theory exactly any more than we can solve the many body problem exactly in classical theory or the set of open, nonlinear, coupled, damped, driven chaotic Navier-Stokes equations in a non-inertial reference frame that represent the climate system.
Note well that solving for the exact, fully correlated nonlinear many electron wavefunction of the humble carbon atom — or the far more complex Uranium atom — is trivially simple (in computational terms) compared to the climate problem. We can’t compute either one, but we can come a damn sight closer to consistently approximating the solution to the former compared to the latter.
So, should we take the mean of the ensemble of “physics based” models for the quantum electronic structure of atomic carbon and treat it as the best prediction of carbon’s quantum structure? Only if we are very stupid or insane or want to sell something. If you read what I said carefully (and you may not have — eyes tend to glaze over when one reviews a year or so of graduate quantum theory applied to electronics in a few paragraphs, even though I left out perturbation theory, Feynman diagrams, and ever so much more:-) you will note that I cheated — I slipped in a semi-empirical method.
Which of these is going to be the winner? LDF, of course. Why? Because the parameters are adjusted to give the best fit to the actual empirical spectrum of Carbon. All of the others are going to underestimate the correlation hole, and their errors will be systematically deviant from the correct spectrum. Their mean will be systematically deviant, and by weighting Hartree (the dumbest reasonable “physics based approach”) the same as LDF in the “ensemble” average, you guarantee that the error in this “mean” will be significant.
Suppose one did not know (as, at one time, we did not know) which of the models gave the best result. Suppose that nobody had actually measured the spectrum of Carbon, so its empirical quantum structure was unknown. Would the ensemble mean be reasonable then? Of course not. I presented the models in the way physics itself predicts improvement — adding back details that ought to be important that are omitted in Hartree. One cannot be certain that adding back these details will actually improve things, by the way, because it is always possible that the corrections are not monotonic (and eventually, at higher orders in perturbation theory, they most certainly are not!) Still, nobody would pretend that the average of a theory with an improved theory is “likely” to be better than the improved theory itself, because that would make no sense. Nor would anyone claim that diagrammatic perturbation theory results (for which there is a clear a priori derived justification) are necessarily going to beat semi-heuristic methods like LDF because in fact they often do not.
What one would do in the real world is measure the spectrum of Carbon, compare it to the predictions of the models, and then hand out the ribbons to the winners! Not the other way around. And since none of the winners is going to be exact — indeed, for decades and decades of work, none of the winners was even particularly close to observed/measured spectra in spite of using supercomputers (admittedly, supercomputers that were slower than your cell phone is today) to do the computations — one would then return to the drawing board and code entry console to try to do better.
Can we apply this sort of thoughtful reasoning to the spaghetti snarl of GCMs and their highly divergent results? You bet we can! First of all, we could stop pretending that “ensemble” mean and variance have any meaning whatsoever by not computing them. Why compute a number that has no meaning? Second, we could take the actual climate record from some “epoch starting point” — one that does not matter in the long run, and we’ll have to continue the comparison for the long run because in any short run from any starting point noise of a variety of sorts will obscure systematic errors — and we can just compare reality to the models. We can then sort out the models by putting (say) all but the top five or so into a “failed” bin and stop including them in any sort of analysis or policy decisioning whatsoever unless or until they start to actually agree with reality.
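The sorting procedure proposed here is simple to sketch. The model names and numbers below are invented placeholders, purely to show the mechanics of scoring against observations and binning the losers:

```python
# Score each "model" against the observed record by RMSE and keep only
# the closest ones; the rest go into the "failed" bin. All values are
# hypothetical illustration data.
observations = [0.1, 0.2, 0.15, 0.25, 0.2]

models = {
    "model_a": [0.1, 0.25, 0.2, 0.3, 0.25],
    "model_b": [0.4, 0.6, 0.7, 0.9, 1.0],    # diverges badly from obs
    "model_c": [0.05, 0.15, 0.1, 0.2, 0.15],
}

def rmse(pred, obs):
    """Root-mean-square error of a prediction against observations."""
    return (sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs)) ** 0.5

ranked = sorted(models, key=lambda m: rmse(models[m], observations))
keep, failed = ranked[:2], ranked[2:]
print("keep:", keep, "| failed bin:", failed)
```

Nothing is deleted: a "failed" model is merely mothballed, and can be retrieved in seconds if reality ever swings back to agree with it.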
Then real scientists might contemplate sitting down with those five winners and meditate upon what makes them winners — what makes them come out the closest to reality — and see if they could figure out ways of making them work even better. For example, if they are egregiously high and diverging from the empirical data, one might consider adding previously omitted physics, semi-empirical or heuristic corrections, or adjusting input parameters to improve the fit.
Then comes the hard part. Waiting. The climate is not as simple as a Carbon atom. The latter’s spectrum never changes, it is a fixed target. The former is never the same. Either one’s dynamical model is never the same and mirrors the variation of reality or one has to conclude that the problem is unsolved and the implementation of the physics is wrong, however “well-known” that physics is. So one has to wait and see if one’s model, adjusted and improved to better fit the past up to the present, actually has any predictive value.
Worst of all, one cannot easily use statistics to determine when or if one’s predictions are failing, because damn, climate is nonlinear, non-Markovian, chaotic, and is apparently influenced in nontrivial ways by a world-sized bucket of competing, occasionally cancelling, poorly understood factors. Soot. Aerosols. GHGs. Clouds. Ice. Decadal oscillations. Defects spun off from the chaotic process that cause global, persistent changes in atmospheric circulation on a local basis (e.g. blocking highs that sit out on the Atlantic for half a year) that have a huge impact on annual or monthly temperatures and rainfall and so on. Orbital factors. Solar factors. Changes in the composition of the troposphere, the stratosphere, the thermosphere. Volcanoes. Land use changes. Algae blooms.
And somewhere, that damn butterfly. Somebody needs to squash the damn thing, because trying to ensemble average a small sample from a chaotic system is so stupid that I cannot begin to describe it. Everything works just fine as long as you average over an interval short enough that you are bound to a given attractor, oscillating away, things look predictable and then — damn, you change attractors. Everything changes! All the precious parameters you empirically tuned to balance out this and that for the old attractor suddenly require new values to work.
This is why it is actually wrong-headed to acquiesce in the notion that any sort of p-value or R-squared derived from an AR5 mean has any meaning. It gives up the high ground (even though one is using it for a good purpose, trying to argue that this “ensemble” fails elementary statistical tests). But statistical testing is a shaky enough theory as it is, open to data dredging and horrendous error alike, and that’s when it really is governed by underlying IID processes (see “Green Jelly Beans Cause Acne”). One cannot naively apply a criterion like rejection if p < 0.05, and all that means under the best of circumstances is that the current observations are improbable given the null hypothesis at 19 to 1. People win and lose bets at this level all the time. One time in 20, in fact. We make a lot of bets!
So I would recommend — modestly — that skeptics try very hard not to buy into this and redirect all such discussions to questions such as why the models are in such terrible disagreement with each other, even when applied to identical toy problems that are far simpler than the actual Earth, and why we aren’t using empirical evidence (as it accumulates) to reject failing models and concentrate on the ones that come closest to working, while also not using the models that are obviously not working in any sort of “average” claim for future warming. Maybe they could hire themselves a Bayesian or two and get them to recompute the AR curves, I dunno.
It would take me, in my comparative ignorance, around five minutes to throw out all but the best 10% of the GCMs (which are still diverging from the empirical data, but arguably are well within the expected fluctuation range on the DATA side), sort the remainder into top-half models that should probably be kept around and possibly improved, and bottom half models whose continued use I would defund as a waste of time. That wouldn’t make them actually disappear, of course, only mothball them. If the future climate ever magically popped back up to agree with them, it is a matter of a few seconds to retrieve them from the archives and put them back into use.
Of course if one does this, the GCM predicted climate sensitivity plunges from the totally statistically fraudulent 2.5 C/century to a far more plausible and still possibly wrong ~1 C/century, which — surprise — more or less continues the post-LIA warming trend with a small possible anthropogenic contribution. This large a change would bring out pitchforks and torches as people realize just how badly they’ve been used by a small group of scientists and politicians, how much they are the victims of indefensible abuse of statistics to average in the terrible with the merely poor as if they are all equally likely to be true with randomly distributed differences.
rgb

Jeff Alberts
June 13, 2013 7:21 am

The NYT says the absence of warming arises because skeptics cherry-pick 1998, the year of the Great el Niño, as their starting point.

Going back to 1998 is small potatoes. Let’s go back 1000 years, 2000, 5000, even back to the last interglacial. The best data we have show that all of those times were warmer than now.
17 years? Piffle.

Jeff Alberts
June 13, 2013 7:24 am

rgbatduke says:
June 13, 2013 at 7:20 am
Saying that we need to wait for a certain interval in order to conclude that “the models are wrong” is dangerous and incorrect for two reasons. First — and this is a point that is stunningly ignored — there are a lot of different models out there, all supposedly built on top of physics, and yet no two of them give anywhere near the same results!

As I understand it, running the same model twice in a row with the same parameters won’t even produce the same results. But somehow averaging the results together is meaningful? Riiiight. As meaningful as a “global temperature” which is not at all.

June 13, 2013 7:27 am

Steven said:
“Since when is weather/climate a linear behavorist?… I realize this is a short timescale and things may look linear but they are not. Not even close.”
Absolutely spot-on Steven. Drawing lines all over data that is patently non-linear in its behaviour is a key part of the CAGW hoax.

Thomas
June 13, 2013 7:29 am

RichardLH, the context of the discussion is Monckton’s statement that “On Dr. Santer’s 17-year test, then, the models may have failed. A rethink is needed.” This statement is based on a (IMHO probably intentional) mixing of the measured trend which is what Santer was talking about and whether the trend is statistically significant or not. How can a model be falsified by a value of the trend that isn’t significantly different from the expected?

Latitude
June 13, 2013 7:30 am

This whole argument is the most ridiculous thing I’ve ever seen…
…who in their right mind would argue with these nutters when you start out by letting them define what’s “normal”
You guys have sat back and let the enemy define where that “normal” line is drawn…
….and then you argue with them that it’s above or below “normal”
Look at any paleo temp record……and realize how stupid this argument is

RichardLH
June 13, 2013 7:51 am

Thomas says:
June 13, 2013 at 7:29 am
RichardLH, the context of the discussion is Monckton’s statement that “On Dr. Santer’s 17-year test, then, the models may have failed. A rethink is needed.”
I suspect that if you visit the link provided you might discover that there is indeed some supporting evidence from the satellite record for his observation.

Dr. Lurtz(@jlurtz)
June 13, 2013 7:54 am

Do not be forlorn, deniers. The decline from a Solar Cycle peak to the valley typically causes a global temperature reduction of about 0.1 C.
Unfortunately, we will all suffer if the global temperature decreases. Paradoxically, fewer hurricanes [cooler ocean temperatures] but greater crop damage due to cold temperature swings.
This is one case where I really wish that I was wrong. Heat bothers me, cold scares me. I’m too old to transition my life style and become an Eskimo.

June 13, 2013 7:58 am

Warmers went full stupid on predictions and pay the price now, skeptics shouldn’t emulate the behavior. By doing so it validates the junk nature of the temperature stats as being linked to human co2 and carbon. Which is total speculation and not supported by long-term proxies.
AGW is an emotional political argument, by playing make believe “it’s about science” meme only helps continue what should be dead on arrival in the first place. A hundred year temp chart given the tiny scale involved is fundamentally meaningless from the science view. That the models failed
isn’t a surprise and is a cost to advocates but making claims about co2 impact or no based on the temp stat is validating the canard of AGW at the same time it is trying to be critical of it. The stat has nothing to say about “cause” one way or the other. It’s o.k. to point out warmer failure and manipulation on the topic but it has nothing to say about “why” things are the way they are in climate.
I thought the mitigation film support from Monckton suffered the same flaws, why validate mythology of your opponent as a tactic? Looks like a rabbit hole.

tonyb(@climatereason)
Editor
June 13, 2013 8:01 am

rgbatduke said in part
‘So I would recommend — modestly — that skeptics try very hard not to buy into this and redirect all such discussions to questions such as why the models are in such terrible disagreement with each other, even when applied to identical toy problems that are far simpler than the actual Earth..’
I live 15 miles from the Met Office, who constantly assure us that their 500-year model projections of future climate states are more accurate than their two- or three-day forecasts. Why this is not challenged more I don’t know, because we see the results of modelling every day in the weather forecasts, and even during a single day of feeding in new information the output (the forecast) has changed considerably and bears no relation to the original.
We have a Met Office app, and the weather it gives us for the weekend will have changed twenty times by the time we actually get there. The ‘likely climate’ in 20, 50 or 500 years’ time is infinitely more difficult to know than what is going to happen in two days’ time at a place 15 miles from their head office. The simple answer is that they have no idea of all the components of the climate, and their models are no more able to forecast the climate in future decades than they can the weather of the coming month.
tonyb

June 13, 2013 8:02 am

Latitude is quite right. The models are all structured wrongly, and their average uncertainties take no account of the structural uncertainties. In order to make anthropogenic climate change a factor important enough to justify their own existence and to drive government CO2 policies, the IPCC and its modellers had to perform the following mental gymnastics to produce or support a climate sensitivity to a doubling of CO2 of about 3 degrees.
a) Make the cause follow the effect. I.e., even though CO2 changes follow temperature changes, they simply assume, illogically, that CO2 change is the main driver.
b) The main GHG, water vapour, also follows temperature independently of CO2, yet the effect of water vapour was added on to the CO2 effect as a CO2 feedback for purposes of calculating CO2 sensitivity.
c) Ignore the very serious questions concerning the reliability of the ice core CO2 data. From the Holocene peak temperature to the Little Ice Age CO2 ice core data, for example, one might well conclude that if CO2 was driving temperature it is an Ice House, not a Greenhouse, gas on multi-millennial scales.
The temperature projections of any models based on these irrational and questionable assumptions have no place in serious discussion. All the innumerable doom-laden papers on impacts in the IPCC reports and elsewhere (e.g. the Stern report) which use these projections as a basis are a complete and serious waste of time and money. Until you know within well-defined limits what the natural variability actually is, it is not possible to estimate the sensitivity of global temperatures to anthropogenic CO2 with any accuracy useful for policy.
Unfortunately the establishment scientists have gambled their scientific reputations and positions on these illogical propositions, and are so far out on the limbs of the tree of knowledge that they will find it hard to climb back before their individual boughs break.

Dodgy Geezer
June 13, 2013 8:06 am

My understanding is that we are slowly rising out of the Little Ice Age, so the ‘natural’ temperature condition should be a slight upwards slope – about 0.5 deg C per century.
If this rise is subtracted from the record, how long does the ‘flat’ period then become? A quick eyeball using woodfortrees suggests that it starts around 1995 – giving us 18 flat years so far….
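The subtraction Dodgy Geezer suggests is easy to sketch. The anomaly series below is invented for illustration (it is not HadCRUT or woodfortrees data); the point is only the mechanics of removing an assumed 0.5 C/century recovery trend and looking at what slope remains.

```python
# Hypothetical anomaly series rising at ~0.3 C/century, from which we
# subtract an assumed post-LIA recovery trend of 0.5 C/century.
years = list(range(1995, 2014))
anomalies = [0.003 * (y - 1995) + 0.25 for y in years]   # invented data

baseline = 0.5 / 100.0           # 0.5 C per century, expressed per year
residual = [a - baseline * (y - years[0]) for y, a in zip(years, anomalies)]

# Ordinary least-squares slope of the residual series.
n = len(years)
xbar = sum(years) / n
ybar = sum(residual) / n
slope = sum((x - xbar) * (y - ybar) for x, y in zip(years, residual)) / \
        sum((x - xbar) ** 2 for x in years)
print(f"residual trend: {slope * 100:+.2f} C/century")
```

With these invented numbers the residual trend is slightly negative: a series warming more slowly than the assumed natural recovery rate shows up as flat-to-cooling once the baseline is removed.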

G P Hanner
June 13, 2013 8:22 am

That’s the life cycle of the 17-year cicada.

climatereason(@climatereason)
Editor
June 13, 2013 8:34 am

dodgy geezer
CET has been shown to be a reasonable proxy for global temperatures and here it is from 1538 (my reconstruction) with the Met office instrumental period commencing 1659. It shows a steady rise throughout.
http://wattsupwiththat.com/2013/05/08/the-curious-case-of-rising-co2-and-falling-temperatures/
There has been a substantial downturn here over the last decade which presumably will eventually be reflected in the global temperature
tonyb

Ivan
June 13, 2013 8:37 am

What’s the purpose of using Hadcrut 4 when it is obvious what is going on: they are trying to artificially warm up the later period as compared to Hadcrut 3?

June 13, 2013 8:38 am

Thanks Christopher, great post.
Thanks, rgbatduke; great comment.

johnmarshall
June 13, 2013 8:42 am

Looking at the satellite data sets gives 23 years. Remove the ENSO spikes and there has been a cooling since 1880.

June 13, 2013 8:43 am

Reblogged this on RubinoWorld.

RichardLH
June 13, 2013 8:49 am

I suspect that the inability of climate science to cross-calibrate the various global estimated temperature data sets (satellite, balloon, thermometer) or reconcile any of them to their models is at the heart of the problem.
It does not bode well that the trends distribute Satellite – Thermometer – Model.

Village Idiot
June 13, 2013 9:01 am

“From now on, I propose to publish a monthly index of the variance between the IPCC’s predicted global warming and the thermometers’ measurements…In any event, the index will limit the scope for false claims..”
What a beezer wheeze, Sir Christopher. That’ll defrock the rank amateurs, charlatans and criminals… 🙂

JimF
June 13, 2013 9:10 am

rgbatduke says:
June 13, 2013 at 7:20 am
You…uh, erm…you mean the science ISN’T settled? :0 Nice! Great idea on sorting out the models.

Greg Mansion
June 13, 2013 9:32 am

[snip – Greg House under a new fake name. Verified by network path. Mr. House has been shown the door but decided to come back as a fake persona preaching the Slayer/Principia meme. -Anthony]

June 13, 2013 9:38 am

“But if it’s colder than normal, that’s proof of warming.”
We know they say that, but just now in this video a UN Climate delegate at Bonn says it so explicitly and idiotically that it almost blows your mind. Here the delegate insists that the freezing German summer weather is proof of warming. Insane:

Snotrocket
June 13, 2013 9:42 am

RGBATDUKE says: “…it looks like the frayed end of a rope,” Ahhh, that’ll be the rope that we give ’em enough of to hang themselves…

jai mitchell
June 13, 2013 9:43 am

@Dodgy Geezer
The idea that we are still coming out of the last ice age is a common misperception. The end of the last ice age happened at the beginning of the current Holocene period about 12,000 years ago. Since then temperatures have actually gone down a bit and we have been very stable for the last 6000 years or so.
unless you live in Greenland, of course. . .

Eliza
June 13, 2013 9:44 am

cwon14: I agree totally. CO2 has NO effect whatsoever on temperature, confirmed by Salby et al and many others. To continue to argue with warmists that there is no correlation and put up graphs etc. I believe is a waste of time and is just pandering to them, which is exactly what they want.

Greg Mansion
June 13, 2013 9:59 am

[snip – Greg House under a new fake name. Verified by network path. Mr. House has been shown the door but decided to come back as a fake persona preaching the Slayer/Principia meme. -Anthony]

george e. smith
June 13, 2013 10:12 am

June 13, 2013 at 6:02 am
If memory serves, it seems that the Meteorological community has used the ‘thirty-year’ time frame for standardizing its records, in order to classify climate and climate zones. I suspect that meteorologists might soon suggest that a ‘fifty-year’ or even a ‘sixty-year’ time frame become the standard reference frame.
That would be one way to get around Gavin’s “… seventeen year …” test.
Or, we could just adjust the data some more, to make them fit the models …
Well there’s a very good reason for that “thirty year time frame” for climate results to become “real”, and also a good reason it should increase.
A recent study published in (I believe) Physics Today, analyzed the career outcomes for USA PhD in Physics, “graduates”.
The basic bottom line is that one third of US Physics PhDs eventually land a permanent real job that utilizes their (limited) skill set. About 5% found temporary work. But 2/3 of all of them end up as lifelong post-doc fellows at some institute or other, never ever using their science learning for anything useful.
By going into the “climate field”, with its 30 year “payoff” time scale, these folks can live off grants for their full career, and really never need to show any believable results, before the next generation of unemployable post-doc fellows, take their place.
As current socialist programs slowly strangle the American economy, making it increasingly difficult for the “middle class” to ever achieve a viable retirement state, the mean career length, must necessarily increase, so the time base for “meaningful” climate results, will have to increase.
Recent articles about the fortunes, or lack thereof, of the LLNL NIF, the so-called National Ignition Facility, are hinting that this much-ballyhooed boondoggle will never ever achieve ignition break-even.
We were told it had a 70% chance of igniting, when the project was approved; now they are saying less than 50%. There is a suggestion that they need to go to a somewhat larger DT fuel pellet.
Oh but that is going to require about a 5X increase in the size and power of the laser. Well think how many post-doc fellows that can keep busy.
We already know just how big a Thermo-nuclear energy source has to be to work properly; and also how far away from human habitation it needs to be for safety; about 93 million miles.

June 13, 2013 10:30 am

Greg Mansion says (June 13, 2013 at 9:32 am): “It has been tested already…”
For a different perspective on the R W Wood Experiment:
http://wattsupwiththat.com/2013/02/06/the-r-w-wood-experiment/

douglas
June 13, 2013 10:30 am

Even taking things down to the very simple basics, one cannot dissuade the warmists.
If you have a theory that rising man-made CO2 is causing global warming, and you go ahead with models to show that this is possible/true, then your figures MUST be in agreement with observations. Global warming is at a standstill, but CO2 levels rise … therefore your theory is WRONG.

RayG
June 13, 2013 10:33 am

@rgbatduke. I took the liberty of sending an email to Judy Curry asking that she take a look at your comment and consider asking you to write a tightened-up version to be used as a discussion topic at ClimateEtc. Please give this some thought and ping her at her home institution to the Southwest of you. (Okay, West Southwest.)
Thank you,
RayG

Bob Diaz
June 13, 2013 10:37 am

I want to zero in on the most important line stated, “It is better to focus on the ever-widening discrepancy between predicted and observed warming rates.”
In one sentence Monckton has zeroed into the total failure of the alarmist group, the models are wrong. They have overestimated the impact of increased CO2.

climatereason(@climatereason)
Editor
June 13, 2013 10:44 am

Jai Mitchell said about a comment from Dodgy Geezer
‘The idea that we are still coming out of the last ice age is a common misperception. The end of the last ice age happened at the beginning of the current Holocene period about 12,000 years ago. Since then temperatures have actually gone down a bit and we have been very stable for the last 6000 years or so.’
DG said nothing about the ‘Ice age’. He specifically referenced the ‘Little Ice Age’, meaning the period of intermittent intense cold that ended with the glacier retreat of 1750/1850. That term is something you would have been better employed in commenting on if you had felt like being pedantic:
“The term Little Ice Age was originally coined by F Matthes in 1939 to describe the most recent 4000 year climatic interval (the Late Holocene) associated with a particularly dramatic series of mountain glacier advances and retreats, analogous to, though considerably more moderate than, the Pleistocene glacial fluctuations. This relatively prolonged period has now become known as the Neoglacial period.’ Dr Michael Mann
http://www.meteo.psu.edu/holocene/public_html/shared/articles/littleiceage.pdf
tonyb

John Tillman
June 13, 2013 10:44 am

jai mitchell says:
June 13, 2013 at 9:43 am
@Dodgy Geezer
The idea that we are still coming out of the last ice age is a common misperception. The end of the last ice age happened at the beginning of the current Holocene period about 12,000 years ago. Since then temperatures have actually gone down a bit and we have been very stable for the last 6000 years or so.
unless you live in Greenland, of course. . .
——————————
Dodgy said Little Ice Age, not the “last ice age”.
Earth is at present headed toward the next big ice age (alarmists in the ’70s were right about the direction but wrong as to time scale). Global temperatures are headed down, long-term. The trend for at least the past 3000 years, since the Minoan Warm Period, if not 5000, since the Holocene Optimum, is decidedly down. The short-term trend, since the depths of the Little Ice Age about 300 years ago, is slightly up, of course with decadal fluctuations cyclically above & below the trend line.

jc
June 13, 2013 10:46 am

@rgbatduke says:
June 13, 2013 at 7:20 am
“Let me repeat this. It has no meaning! It is indefensible within the theory and practice of statistical analysis. You might as well use a ouija board as the basis of claims about the future climate history as the ensemble average of different computational physical models that do not differ by truly random variations and are subject to all sorts of omitted variable, selected variable, implementation, and initialization bias. The board might give you the right answer, might not, but good luck justifying the answer it gives on some sort of rational basis.”
———————————————————————————————————————
Whilst grasping the basic principles of what you say, I cannot comment on what might pass for “legitimate” contemporary interpretation of principle and methodology as actually practiced and accepted within the wide range of applications across many disciplines by those claiming an expertise and the right to do so.
I am fairly confident that these are in practice “elastic” depending on requirements, and that on the basis that where justification is required, those promoting and defending such “desirable” formulations bring more energy and commitment, and utilize a mechanism of reference to the “particularities” of their endeavors to which others are not privy, to neutralize any queries. This is of course antithetical to the concept of knowledge, let alone a body of it.
This is pervasive across any field of activity in which an expertise based on specialist understanding is claimed. It cannot be viewed in isolation from the promotion of mere observation and commentary on such things as civic affairs into classified Disciplines such as political “science”, which is in actuality just a matter of opinion and fluid interaction. Such fields actively incorporate the justifying “truth is what you make it” whilst at the same time elevating it to the level of the immutable, governed by autonomous laws, both to dignify the activity in itself and as a mechanism by which its proponents prevail. There can be no appeal to first principles that are accepted as defining the limits of interpretation, because they don’t exist.
“Climate Science” as an orthodoxy, and as a field, as opposed to investigations into particular areas that may have relevance to climate, does not exist as science. What is most obvious and disturbing about AGW is its lack of intellectual underpinning – in fact its defiance of the basic application of intelligence, which you highlight in this abuse of the specific rigor required in adhering to this manifestation of it in statistical methodology.
You are right to say: “do not engage”. It is essential to refuse to concede the legitimacy of interaction with those who claim it when such people are palpably either not sincere, not competent, or not what they claim to be. To state and restate the fundamental basis of inadequacy is what is obligatory. A lack of acknowledgement, and an unwillingness to rethink a position based on this, tells everyone who is willing and capable of listening everything they need to know about such people and the culture that is their vehicle. You do not cater to the dishonest, the deceptive, or the inadequate seeking to maintain advantage after having insinuated themselves, when it is clear what they are. You exclude them.
To be frustrated, although initially unavoidable since it derives from the assumption that others actually have a shared base in respect for the non-personal discipline of reality, is not useful. It is only when the realization occurs that what within those parameters is a “mistake” is not, and will not be, seen as a mistake by its proponents – whether through inadequacy or design – that clarity of understanding and purpose can emerge.
The evidence is constant and overwhelming that “Climate Science” and “Climate Scientists” are not what they claim to be. Whether this is by incompetence or intent is in the first instance irrelevant. They are unfit. What they are; what they represent; what they compel the world to; is degradation.
The blindingly obvious can be repeatedly pointed out to such people to no effect whatsoever.
They must be stopped. They can only be stopped by those who will defend and advance the principles which they have subverted and perverted. This demands hostility and scathing condemnation. This is not a time in history for social etiquettes, whether general or academic.

george e. smith
June 13, 2013 11:01 am

“””””……StephenP says:
June 13, 2013 at 6:28 am
Rather off-topic, but there are 4 questions that I would like the answer to:
1. We are told the concentration of CO2 in the atmosphere is 0.039%, but what is the concentration of CO2 at different heights above the earth’s surface? As CO2 is ‘heavier than air’ one would expect it to be at higher percentages near the earth’s surface.
2. Do the CO2 molecules rise as they absorb heat during the day from the sun? And how far?
3. Do the CO2 molecules fall at night when they no longer get any heat input from the sun?
4. When a CO2 molecule is heated, does it re-radiate equally in all directions, assuming the surroundings are cooler, or does it radiate heat in proportion to the difference in temperature in any particular direction?
Stephen; let’s start at #4. That’s a bit of a tricky question. In an atmospheric situation, any time any molecule or atom “radiates” (they all do), there is no preferred direction for the photon to exit. Arguably, the molecule has no knowledge of direction, or of any conditions of its surroundings, including no knowledge of which direction might be the highest or lowest Temperature gradient. So a radiated photon is equally likely to go in any direction.
As to a CO2 molecule which has captured an LWIR photon, in the 15 micron wavelength region for example, one could argue, that the CO2 molecule has NOT been heated, by such a capture; but its internal energy state has changed, and it now is likely oscillating in its 15 micron “bending mode”, actually one of two identical “degenerate” bending modes.
In the lower atmosphere, it is most likely that the CO2 molecule will soon collide with an N2 molecule, or an O2 molecule, or even an Ar atom. It is most unlikely to collide with another CO2 molecule. At 400 ppm, there are 2500 molecules for each CO2, so the next CO2 is likely to be 13-14 molecular spacings away; our example molecule doesn’t even know another like it is there.
When such a collision occurs, our CO2 molecule is likely to forget about doing the elbow bend, and it will exchange some energy with whoever it hit. Maybe the LWIR photon is re-emitted at that point; perhaps with a Doppler shift in frequency, and over a lot of such encounters, the atmospheric Temperature will change; probably an increase. The CO2 molecule itself, really doesn’t have a Temperature; that is a macro property, of a large assemblage of molecules or atoms.
But the bottom line is that an energy exchange in such an isolated event, is likely to be in any direction whatsoever.
We are told that CO2 is “well mixed” in the atmosphere. I have no idea what that means. At ML in Hawaii, the CO2 cycles about 6ppm p-p each year; at the north pole it is about 18 ppm, and at the South pole it is about -1ppm (opposite phase). That’s not my idea of well mixed.
A well mixed mixture, would have no statistically significant change in composition between samples taken anywhere in the mixture; well in my view anyway.
I suspect that there is a gradient in CO2 abundance with altitude. With all the atmospheric instabilities, I doubt that it is feasible to measure it.
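The 13-14 spacings estimate above is easy to check numerically. A quick Python sketch; the only input is the 400 ppm concentration quoted in the comment:

```python
# Back-of-envelope check of the molecular-spacing estimate above.
# At 400 ppm, each CO2 molecule shares the air with roughly
# 1 / 400e-6 = 2500 molecules; in 3D, the typical distance to the
# nearest CO2 neighbour scales as the cube root of that ratio.
ppm = 400
molecules_per_co2 = 1 / (ppm * 1e-6)       # ~2500 molecules per CO2
spacings = molecules_per_co2 ** (1 / 3)    # CO2-to-CO2 separation in units
                                           # of the mean molecular spacing
print(f"{molecules_per_co2:.0f} molecules per CO2, "
      f"~{spacings:.1f} molecular spacings to the next CO2")
```

The cube root of 2500 is about 13.6, which is where the 13-14 figure comes from.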

Luther Wu
June 13, 2013 11:16 am

It’s his Lordship this, his Lordship that, “he’d deny a blackened pot”
But there for all the world to see, he shows the MET wot’s wot

climatereason(@climatereason)
Editor
June 13, 2013 11:29 am

Luther Wu
Byron will be turning in his grave
tonyb

climatereason(@climatereason)
Editor
June 13, 2013 11:31 am

John Tillman
You must be as amazed as I am that it’s got warmer since the end of the LIA. Who would have thought it?
tonyb

June 13, 2013 11:35 am

rgbatduke says (June 13, 2013 at 7:20 am): [snip]
Wow. I read every word, understood about half, concur with the rest. The part I didn’t understand took me way, way back to college physics, when we solved the Schrödinger equation for the hydrogen atom. That was the closest I ever came to being a physicist. 🙂 While I enjoyed the trip down memory lane, if you expand this comment into an article, I’d suggest using an example more familiar to most readers than the physics of a carbon atom. 🙂
I looked up the xkcd comic for green jelly beans. During my “biostatistician” period, I was actually involved in a real life situation similar to that–’nuff said.
I remember a thread on WUWT in which a commenter cherry-picked an IPCC GCM that came closest to the (then) trend of the so-called global average temperature. Other commenters asked why the IPCC chose to use their “ensemble” instead of this model. Apparently the model that got the temperature “almost right” was worse than the other models at predicting regional cloud cover, precipitation, humidity, temperature patterns, etc. Green jelly beans all over again.

M Courtney
June 13, 2013 11:41 am

rgbatduke says at June 13, 2013 at 7:20 am
A lot of very insightful information.
Of course averaging models ignores what the models are meant to do. They are meant to represent some understanding of the climate. Muddling them up only works if they all have exactly the same understanding.
That is, either they are known to be all perfect, in which case they would all be identical, as there is only one real climate;
or they are known to be all completely unrelated to the actual climate, that is, assumed to be 100% wrong in a random way. If they were systematically wrong they couldn’t be mixed up equally.
So what does the fact that this mixing has been done say about expert opinion on the worth of the climate models?
My only fault with the comment by rgbatduke is that it was a comment not a main post. It deserves to be a main post.

rgbatduke
June 13, 2013 11:42 am

As I understand it, running the same model twice in a row with the same parameters won’t even produce the same results. But somehow averaging the results together is meaningful? Riiiight. As meaningful as a “global temperature” which is not at all.
This, actually, is what MIGHT be meaningful. If the models perfectly reasonably do “Monte Carlo Simulation” by adding random noise to their starting parameters and then generate an ensemble of answers, the average is indeed meaningful within the confines of the model, as is the variance of the individual runs. Also, unless the model internally generates this sort of random noise as part of its operation, it will indeed produce the same numbers from the same exact starting point (or else the computer it runs on is broken). Computer code is deterministic even if nature is not. This isn’t what I have a problem with. What I object to is a model whose predicted warming fails, at the 2-3 sigma level of its OWN sigma, to encompass the current temperatures, still being taken seriously and averaged in to “cancel” models that actually agree at the 1 sigma level, as if the two are somehow equally likely to be right.
The models that produce the least average warming in the whole collection that contributes to AR5 are the only ones that have a reasonable chance of being at least approximately correct. Ones that still predict a climate sensitivity from 3 to 5 C have no place even contributing to the discussion. This is the stuff that really has been falsified (IMO).
Also, global temperature is a meaningful measure that might well be expected to be related to both radiative energy balance and the enthalpy/internal energy content of the Earth. It is not a perfect measure by any means, as temperature distribution is highly inhomogeneous and variable, and it isn’t linearly connected with local internal energy because a lot of that is tied up in latent heat, and a lot more is constantly redistributing among degrees of freedom with vastly different heat capacities, e.g. air, land, ocean, water, ice, water vapor, vegetation.
This is the basis of the search for the “missing heat” — since temperatures aren’t rising but it is believed that the Earth is in a state of constant radiative imbalance, the heat has to be going somewhere where it doesn’t raise the temperature (much). Whether or not you believe in the imbalance (I’m neutral as I haven’t looked at how they supposedly measure it on anything like a continuous basis if they’ve ever actually measured it accurately enough to get out of the noise) the search itself basically reveals that Trenberth actually agrees with you. Global temperature is not a good metric of global warming because one cannot directly and linearly connect absorbed heat with surface temperature changes — it can disappear into the deep ocean for a century or ten, it can be absorbed by water at the surface of the ocean, be turned into latent heat of vaporization, be lost high in the troposphere via radiation above the bulk of the GHE blanket to produce clouds, and increase local albedo to where it reflects 100x as much heat as was involved in the evaporation in the first place before falling as cooler rain back into the ocean, it can go into tropical land surface temperature and be radiated away at enhanced rates from the $T^4$ in the SB equation, or it can be uniformly distributed in the atmosphere and carried north to make surface temperatures more uniform. Only this latter process — improved mixing of temperatures — is likely to be “significantly” net warming as far as global temperatures are concerned.
rgb
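The distinction rgbatduke draws, that an ensemble average is meaningful within a single model perturbed by random noise but not across structurally different models, can be sketched numerically. This is a toy illustration only; the "sensitivities" below are invented numbers, not outputs of any real GCM:

```python
import numpy as np

rng = np.random.default_rng(0)

def model_run(sensitivity, noise=0.1):
    """One 'run' of a toy model: a fixed answer plus random
    perturbation of the starting state (Monte Carlo style)."""
    return sensitivity + noise * rng.standard_normal()

# Within-model ensemble: same physics, perturbed initial conditions.
# Mean and spread here are statistically meaningful *for this model*.
runs_A = [model_run(sensitivity=1.8) for _ in range(100)]
print(np.mean(runs_A), np.std(runs_A))  # ~1.8, spread ~0.1

# Across-model "ensemble": structurally different models with
# different assumed physics. Their spread reflects disagreement
# about the physics, not random sampling error, so this mean
# carries no comparable statistical meaning.
model_means = [1.8, 3.0, 4.4, 2.1]
print(np.mean(model_means))  # just the average of four guesses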

Steven Mosher(@stevemosher)
June 13, 2013 11:54 am

‘The models that produce the least average warming in the whole collection that contributes to AR5 are the only ones that have a reasonable chance of being at least approximately correct. Ones that still predict a climate sensitivity from 3 to 5 C have no place even contributing to the discussion. This is the stuff that really has been falsified (IMO).’
The best estimates of ECS come from paleo data and then observational data. For ECS they range from 1C to 6C.
The climate models range from 2.1C to 4.4C for ECS and much lower for TCR.
Finally, there is no such thing as falsification. There is confirmation and disconfirmation.
even Popper realized this in the end as did Feynman.

M Courtney
June 13, 2013 12:02 pm

Remember that it is global temperature, not energy imbalance, that is the factor expected to be responsible for the feedbacks that turn the gradual changes we have barely noticed into a global catastrophe.
If the energy being absorbed doesn’t cause the global temperature changes then the proposed mechanisms for the feedbacks – like increased water vapour in the atmosphere – don’t work.
And therefore the priority given to the field of Climatology needs to be reassessed.

Eustace Cranch
June 13, 2013 12:11 pm

rgbatduke says:
June 13, 2013 at 11:42 am
“…one cannot directly and linearly connect absorbed heat with surface temperature changes — it can disappear into the deep ocean for a century or ten…”
Disappear? How? Will someone PLEASE explain the mechanism to me?

June 13, 2013 12:15 pm

One must always remember the mandate of the IPCC when reviewing information they provide. They are not mandated to study all possible causes of climate change, only human-caused climate change:
“The Intergovernmental Panel on Climate Change (IPCC) was established by World Meteorological Organization and United Nations Environmental Programme (UNEP) in 1988 to assess scientific, technical, and socioeconomic information that is relevant in understanding human-induced climate change, its potential impacts, and options for mitigation and adaptation.”
Hence, the whole concept of open science within the IPCC is not relevant since they are working with a stated and clear agenda.

Luther Wu
June 13, 2013 12:20 pm

climatereason says:
June 13, 2013 at 11:29 am
Luther Wu
Byron will be turning in his grave
tonyb
________________
I’m sure you meant Kipling…

u.k(us)
June 13, 2013 12:38 pm

The secret to our success, such as it is, is the ability to adapt to changing conditions.
If conditions were unchanging, what would be the point of random mutations in DNA?

June 13, 2013 12:42 pm

Steven Mosher says (June 13, 2013 at 11:54 am): “Finally, there is no such thing as falsification. There is confirmation and disconfirmation. even Popper realized this in the end as did Feynman.”
Perhaps you could explain the difference between “falsification” and “disconfirmation”, or link a reference that does. Preferably at kindergarten level. 🙂

Snotrocket
June 13, 2013 12:49 pm

rgbatduke says: “One cannot generate an ensemble of independent and identically distributed models that have different code.”
Yep. I guess that must be like having an ‘average car’ and then telling children that that’s what all cars really look like…. Now that would be something to see, an average car. (Bearing in mind, an Edsel might well be in the mix somewhere).

Lars P.
June 13, 2013 12:49 pm

rgbatduke says:
June 13, 2013 at 7:20 am
Saying that we need to wait for a certain interval in order to conclude that “the models are wrong” is dangerous and incorrect for two reasons.
Thank you for your post, it is brilliant and should be elevated to a blog post itself. The idea you present is only logical, and indeed it is a shame this has not already been done.
Indeed it makes no sense to continue to use models which are so far away from reality. Only models which have been validated by real data should continue to be used.
It is what scientists do all the time… in science. They scrap models that have been invalidated and focus on those which give the best results; they do not continue to use an ensemble of models of which 95% head off into Nirvana, and then draw a line somewhere between 95% Nirvana and 5% reality.
Then real scientists might contemplate sitting down with those five winners and meditate upon what makes them winners — what makes them come out the closest to reality — and see if they could figure out ways of making them work even better. For example, if they are egregiously high and diverging from the empirical data, one might consider adding previously omitted physics, semi-empirical or heuristic corrections, or adjusting input parameters to improve the fit.
Thank you again!

RCSaumarez
June 13, 2013 1:00 pm

@rgbatduke
Brilliant comment (essay). Of course forming an ensemble of model outputs and saying that its mean is “significant” is arrant nonsense: it isn’t a proper sample or a hypothesis test, and it certainly isn’t a prediction. All one can say, given the disparity of results, is that something is wrong with the models, as you point out. The thing that is so depressing is that people who should know better seem to believe it – probably because they don’t understand it.
On the subject of Monte-Carlo, some non-linear systems can give a very wide range of results which reflect the distribution of inputs that invoke the non-linearity. In my field, cardiac electrophysiology, this is particularly important and small changes in assumptions in a model will lead to unrealistic behaviour. Even forming simple statistics with these results is wrong for the reasons you so eloquently state. Widely diverging results should force attention on the non-linear behaviour that cause this divergence and a basic questioning of the assumptions.
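The point about small input changes invoking non-linearity can be illustrated with the logistic map, a standard toy chaotic system chosen here purely for brevity (it has nothing to do with cardiac electrophysiology specifically): a 0.1% change in one parameter completely decorrelates the trajectory, so naive statistics over a Monte Carlo ensemble of such runs mostly measure the non-linearity, not the inputs.

```python
def logistic_trajectory(r, x0=0.2, n=50):
    """Iterate the logistic map x -> r*x*(1-x) n times and
    return the final state."""
    x = x0
    for _ in range(n):
        x = r * x * (1 - x)
    return x

# In the stable regime, a small change in r barely matters:
print(logistic_trajectory(2.000), logistic_trajectory(2.002))

# In the chaotic regime (r near 3.9), two runs whose parameters
# differ by 0.1% end up in completely unrelated states, so the
# "spread" of an ensemble reflects the non-linearity itself:
print(logistic_trajectory(3.900), logistic_trajectory(3.904))
```

Forming a mean and standard deviation over chaotic-regime runs without questioning the underlying assumptions is exactly the error described above.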

June 13, 2013 1:11 pm

You can have hours of fun trying to estimate m with statistical significance when given a data set generated by
y = m*x + c
where
(y[n]-y[n-1]) ~ F(0,a)
x[n] – x[n-1] = L
where F is a non-stationary non-normal distribution. It is even more fun if you assume that F is normal and stationary even though it is not. But fun does not pay the bills.
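The exercise above is easy to reproduce. A minimal sketch, assuming numpy and taking F as normal and stationary purely for brevity (which, as the comment notes, is itself part of the joke): fitting a straight line to a pure random walk, where the true slope m is zero, still yields a nonzero fitted slope, and the naive OLS significance attached to it is spurious because the residuals are anything but independent.

```python
import numpy as np

rng = np.random.default_rng(1)

n, L, a = 200, 1.0, 0.5
x = np.arange(n) * L                  # x[n] - x[n-1] = L
steps = a * rng.standard_normal(n)    # increments y[n] - y[n-1] ~ F(0, a)
y = np.cumsum(steps)                  # a pure random walk: true m = 0

# Naive OLS treats the residuals as independent noise and will
# happily report a "trend" with a small formal uncertainty.
m_hat, c_hat = np.polyfit(x, y, 1)
print(f"fitted slope {m_hat:+.4f}, despite the true slope being 0")
```

Rerunning with different seeds gives slope "estimates" scattered well outside what the nominal standard errors would suggest, which is the whole point.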

rgbatduke
June 13, 2013 1:17 pm

@ rgb@duke. I took the liberty of sending an email to Judy Curry asking that she take a look at your comment and consider asking you to write a tightened up version to be used as a discussion topic at ClimateEtc. Please give this some thought and ping her at her home institution to the Southwest of you. (Okay, West Southwest.)
Thank you,
RayG

Sounds like work. Which is fine, but I’m actually up to my ears in work that I’m getting paid for at the moment. To do a “tightened up version” I would — properly speaking — need to read and understand the basic structure of each GCM as it is distinguished from all of the rest. This is not because I think there is anything in what I wrote above that is incorrect, but because due diligence for an actual publication is different from due diligence for a blog post, especially when one is getting ready to call 40 or 50 GCMs crap and the rest merely not yet correct while not quite making it to the level of being crap. Also, since I’m a computational physicist and moderately expert in Bayesian reasoning, statistics, and hypothesis testing, I’d very likely want to grab the sources for some of the GCMs and run them myself to get a feel for their range of individual variance (likely to increase their crap rating still further).
That’s not only not a blog post, that’s a full time research job for a couple of years, supported by a grant big enough to fund access to supercomputing resources adequate to do the study properly. Otherwise it is a meta-study (like the blog post above) and a pain in the ass to defend properly, e.g. to the point where it might get past referees. In climate science, anyway — it might actually make it past the referees of a stats journal with only a bit of tweaking as the fundamental point is beyond contention — the average and variance badly violate the axioms of statistics, hence they always call it a “projection” (a meaningless term) instead of a prediction predicated upon sound statistical analysis where the variance could be used as the basis of falsification.
The amusing thing is just how easy it is to manipulate this snarl of models to obtain any “average” prediction you like. Suppose we have only two models — G and B. G predicts moderate to low warming, gets things like cloud cover and so on crudely right, it is “good” in the sense that it doesn’t obviously fail to agree with empirical data within some reasonable estimate of method error/data error combined. B predicts very high warming, melting of the ice pack in five years, 5 meter SLR in fifty years, and generally fails to come close to agreeing with contemporary observations, it is “bad” in the specific sense that it is already clearly falsified by any reasonable comparison with empirical data.
I, however, am a nefarious individual who has invested my life savings in carbon futures, wind generation, and banks that help third world countries launder the money they get from carbon taxes on first world countries while ensuring that those countries aren’t permitted to use the money to actually build power plants because the only ones that could meet their needs burn things like coal and oil.
So, I take model B, and I add a new dynamical term to it, one that averages out close to zero. I now have model B1 — son of B, gives slightly variant predictions (so they aren’t embarrassingly identical) but still, it predicts very high warming. I generate model B2 — brother to B1, it adds a different term, or computes the same general quantities (same physics) on a different grid. Again, different numbers for this “new” model, but nothing has really changed.
Initially, we had two models, and when we stupidly averaged their predictions we got a prediction that was much worse than G, much better than B, and where G was well within the plausible range, at the absolute edge of plausible. But now there are three bad models, B, B1, and B2, and G. Since all four models are equally weighted, independent of how good a job they do predicting the actual temperature and other climate features I have successfully shifted the mean over to strongly favor model B so that G is starting to look like an absolute outlier. Obviously, there is no real reason I have to start with only two “original” GCMs, and no reason I have to stop with only 3 irrelevant clones of B.
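The arithmetic of the G/B/B1/B2 example above is trivial but worth making concrete. The numbers below (G at 1.8, B at 4.5, in Cº of predicted warming) are invented purely for the sketch:

```python
# Equal-weight ensemble averaging rewards cloning, not skill.
G = 1.8            # a model roughly consistent with observations
B = 4.5            # a model already falsified by observations

# Two models: the mean sits halfway between them (3.15).
print((G + B) / 2)

# Clone B twice with cosmetic perturbations and weight all four
# equally: the mean (3.825) is dragged toward B, and G now looks
# like an outlier, though no new physics has been added.
ensemble = [G, B, B + 0.1, B - 0.1]   # B1 and B2, "different" models
print(sum(ensemble) / len(ensemble))
```

Nothing about the equal-weight average penalizes the clones for being clones, which is the mechanism being described.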
Because I am truly nefarious and heavily invested in convincing the world that the dire predictions are true so that they buy more carbon futures, subsidize more windmills, and transfer still more money to third world money launderers, all I have to do is sell it. But that is easy! All of the models, G and B+ (and C+ and D+ if needed) are defensible in the sense that they are all based on the equations of physics at some point plus some dynamical (e.g. Markov) process. The simple majority of them favor extreme warming and SLR. There are always extreme weather events happening somewhere, and some of them are always “disastrous”. So I establish it as a well-known “fact” that physics itself — the one science that people generally trust — unambiguously predicts warming because a simple majority of all of these different GCMs agree, and point to any and all anecdotal evidence to support my claim. Since humans live only a pitiful 30 or 40 adult years where they might give a rat’s ass about things like this (and have memories consisting of nothing but anecdotes) it is easy to convince 80% of the population, including a lot of scientists who ought to know better, that it really, truly is going to warm due to our own production of CO_2 unless we all implement a huge number of inconvenient and expensive measures that — not at all coincidentally — line my personal pocket.
Did I mention that I’m (imaginarily) an oil company executive? Well, turns out that I am. After all, who makes the most money from the CAGW/CACC scare? Anything and everything that makes oil look “scarce” bumps the price of oil. Anything and everything that adds to the cost of oil, including special taxes and things that are supposed to decrease the utilization of oil, make me my margin on an ever improving price basis in a market that is not only inelastic but growing rapidly as the third world (tries to) develop. I can always sell all of my oil — I have to artificially limit supply as it is to maintain high profits and prolong the expected lifetime of my resources. Greenpeace can burn me in friggin’ effigy for all I care — the more they drive up oil costs the more money I make, which is all that matters. Besides, they all drive SUVs themselves to get out into the wilderness and burn lots of oil flying around lobbying “against” me. I make sure that I donate generously to groups that promote the entire climate research industry and lobby for aggressive action on climate change — after all, who actually gets grants to build biofuel plants, solar foundries, wind farms, and so on? Shell Oil. Exxon. BP. Of course. They/we advertise it on TV so people will know how pious the oil/energy industry is regarding global warming.
Not that I’m asserting that this is why there are so many GCMs and they are all equally weighted in the AR5 average — that’s the sort of thing that I’d literally have to go into not only the internals but the lineage of across all the contributing GCMs to get a feel for whether or not it is conceivably true. It seems odd that there are so many — one would think that there is just one set of correct physics, after all, and one sign of a correctly done computation based on correct physics is that one gets the same answer within a meaningful range. I would think that four GCMs would be plenty — if GCMs worked at all. Or five. Not twenty, thirty, fifty (most running as ensembles themselves and presenting ensemble averages with huge variances in the first place). But then, Anthony just posted a link to a Science article that suggests that four distinct GCMs don’t agree within spitting distance in a toy problem, the sort of thing one would ordinarily do first to validate a new model and ensure that all of the models are indeed incorporating the right physics.
These four didn’t. Which means that at least three out of four GCMs tested are wrong! Significantly wrong. And who really doubts that the correct count is 4/4?
I’m actually not a conspiracy theorist. I think it is entirely possible to explain the proliferation of models on the fishtank evolutionary theory of government funded research. The entire science community is effectively a closed fishtank that produces no actual fish food. The government comes along and periodically sprinkles fish food on the surface, food tailored for various specific kinds of fish. One decade they just love guppies, so the tank is chock full of guppies (and the ubiquitous bottom feeders) but neons and swordtails suffer and starve. Another year betas (fighting fish) are favored — there’s a war on and we all need to be patriotic. Then guppies fall out of fashion and neons are fed and coddled while the guppies start to death and are eaten by the betas and bottom dwellers. Suddenly there is a tankful of neons and even the algae-eaters and sharks are feeling the burn.
Well, we’ve been sprinkling climate research fish food grants on the tank for just about as long as there has been little to no warming. Generations of grad students have babysat early generation GCMs, gone out and gotten tenured positions and government research positions where in order to get tenure they have had to write their own GCMs. So they started with the GCMs they worked with in grad school (the only ones whose source code they had absolutely handy), looked over the physics, made what I have no doubt was a very sincere attempt to improve the model in some way, renamed it, got funding to run it, and voila — B1 was born of B, every four or five years, and then B1′ born of B1 as the first generation of graduates produced graduate students of their own (who went on to get jobs), etc — compound “interest” growth without any need for conspiracy. And no doubt there is some movement along the G lines as well.
In a sane universe, this is half of the desired genetic optimization algorithm that leads to ever improving theories and models. The other half is eliminating the culls on some sort of objective basis. This can only happen by fiat — grant officers that defund losers, period — or by limiting the food supply so that the only way to get continued grant support is to actually do better in competition for scarce grant resources.
This ecology has many exemplars in all of the sciences, but especially in medical research (the deepest, richest, least critical pockets the world has ever known) and certain branches of physics. In physics you see it when (for a decade) e.g. string theory is favored and graduate programs produce a generation of string theorists, but then string theory fails in its promise (for the moment) and supersymmetry picks up steam, and so on. This isn’t a bad ecology, as long as there is some measure of culling. In climate science, however, there has been anti-culling — the deliberate elimination of those that disagree with the party line of catastrophic warming, the preservation of GCMs that have failed and their inclusion on an equal basis in meaningless mass averages over whole families of tightly linked descendants where whole branches probably need to go away.
Who has time to mess with this? Who can afford it? I’m writing this instead of grading papers, but that happy time-out has to come to an end because I have to FINISH grading, meet with students for hours, and prepare and administer a final exam in introductory physics all before noon tomorrow. While doing six other things in my copious free moments. Ain’t got no grant money, boss, gotta work for a living…
rgb

Nick Stokes(@bilby)
June 13, 2013 1:24 pm

rgbatduke says: June 13, 2013 at 7:20 am
“One simply wishes to bitch-slap whoever it was that assembled the graph and ensure that they never work or publish in the field of science or statistics ever again.”

Well, who did assemble it? It says at the top “lordmoncktonfoundation.com”.

rgbatduke
June 13, 2013 1:33 pm

On the subject of Monte-Carlo, some non-linear systems can give a very wide range of results which reflect the distribution of inputs that invoke the non-linearity. In my field, cardiac electrophysiology, this is particularly important, and small changes in assumptions in a model will lead to unrealistic behaviour. Even forming simple statistics with these results is wrong for the reasons you so eloquently state. Widely diverging results should force attention on the non-linear behaviour that causes this divergence and a basic questioning of the assumptions.
Eloquently said right back at you. Computational statistics in nonlinear modeling is a field where angels fear to tread. Indeed, nonlinear regression itself is one of the most difficult of statistical endeavors because there really aren’t any intrinsic limits on the complexity of nonlinear multivariate functions. In the example I gave before, the correct many electron wavefunction is a function that vanishes when any two electron coordinates (all of which can independently vary over all space) are the same, that vanishes systematically when any single electron coordinate becomes large compared to the size of the atom, that is integrable at the origin in the vicinity of the nucleus (in all coordinates separately or together), that satisfies a nonlinear partial differential equation in the electron-electron and electron nucleus interaction, that is fully antisymmetric, and that obeys the Pauli exclusion principle. One cannot realize this as the product of single electron wavefunctions, but that is pretty much all we know how to build or sanely represent as any sort of numerical or analytic function.
And it is still simple compared to climate science. At least one can prove the solutions exist — which one cannot do in the general case for Navier-Stokes equations.
Does climate science truly stand alone in failing to recognize unrealistic behavior when it bites it in the ass? Widely diverging results should indeed force attention on the non-linear behavior that causes the divergence and a basic questioning of the assumptions. Which is, still fairly quietly, actually happening, I think. The climate research community is starting to face up to the proposition that no matter how invested they are in GCM predictions, they aren’t working and the fiction that the AR collective reports are somehow “projective” let alone predictive is increasingly untenable.
Personally, I think that if they want to avoid pitchforks and torches or worse, congressional hearings, the community needs to work a bit harder and faster to fix this in AR5 and needs to swallow their pride and be the ones to announce to the media that perhaps the “catastrophe” they predicted ten years ago was a wee bit exaggerated. Yes, their credibility will take a well-deserved hit! Yes, this will elevate the lukewarmers to the status of well-earned greatness (it’s tough to hold out in the face of extensive peer disapproval and claims that you are a “denier” for doubting a scientific claim and suggesting that public policy is being ill advised by those with a vested interest in the outcome). Tough. But if they wait much longer they won’t even be able to pretend objectivity — it will smack of a cover-up, and given the amount of money that has been pissed away on the predicted/projected catastrophe, there will be hell to pay if congress decides it may have been actually lied to.
rgb
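The antisymmetry constraint rgb describes (the wavefunction vanishing whenever two electron coordinates coincide) can be made concrete with a toy one-dimensional, two-electron sketch. The orbitals below are hypothetical Gaussian functions, not a real atomic calculation; the point is only that the Slater-determinant combination is exactly zero at coincident coordinates and flips sign under exchange:

```python
import math

# Minimal 1-D, two-"electron" sketch of the antisymmetry described
# above: a Slater determinant of two single-particle orbitals.
# The orbitals are hypothetical toys, not real atomic wavefunctions.
def phi_a(x):
    return math.exp(-x * x)          # ground-state-like orbital

def phi_b(x):
    return x * math.exp(-x * x)      # orbital with a node at the origin

def psi(x1, x2):
    # Antisymmetrized product: vanishes whenever x1 == x2 (Pauli).
    return phi_a(x1) * phi_b(x2) - phi_a(x2) * phi_b(x1)

print(psi(0.3, 0.3))                   # 0.0 -- two electrons at one point
print(psi(0.1, 0.7), -psi(0.7, 0.1))   # equal: antisymmetric under exchange
```

As rgb notes, a single such determinant of one-electron orbitals is essentially all we know how to build sanely, and the true correlated wavefunction is far harder than this.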

Lars P.
June 13, 2013 1:34 pm

rgbatduke says:
June 13, 2013 at 1:17 pm
rgbatduke, thanks for the good laugh and brilliant additional post!
I am sure your horoscope looks 5 stars for you today.

June 13, 2013 1:39 pm

Santer’s later paper:
http://www.pnas.org/content/early/2012/11/28/1210514109.full.pdf
Admits the models failed, and it turns out we didn’t have to wait 17 years to establish that fact after all.

Frank Slojkowski
June 13, 2013 1:40 pm

Why do we even waste time arguing over the statistical significance of every minor blip in the temperature curves? Another recent peer-reviewed paper assures us once again that the tropical hot spot, that inseparable signature of the models, is nowhere to be found. As Dr. Feynman has taught us, the models have failed the data test and are therefore worthless. It’s as simple as that.

rgbatduke
June 13, 2013 1:45 pm

“Well, who did assemble it? It says at the top “lordmoncktonfoundation.com”.”
Aw, c’mon Nick, you can do better than that. Clearly I was referring to the AR5 ensemble average over climate models, which is pulled from the actual publication IIRC. This is hardly the first time it has been presented on WUWT.
And the spaghetti graph is even worse. Which is why they don’t present it in any sort of summary — even lay people inclined to believe in CAGW would question GCMs if they could see how divergent the predictions are from each other and from the actual climate record over the last 33 years, especially with regard to LTT and SST and SLR. SLR predictions are a joke. SST predictions have people scrabbling after missing heat and magic heat transport processes. The troposphere is a major fail. Everybody in climate science knows that these models are failing, and are already looking to explain the failures but only in ways that don’t lose the original message, the prediction (sorry, “projection”) of catastrophe.
I’ve communicated with perfectly reasonable climate scientists who take the average over the spaghetti seriously and hence endorse the 2.5C estimate that comes directly from the average. It’s high time that it was pointed out that this average is a completely meaningless quantity, and that 2/3 of the spaghetti needs to go straight into the toilet as failed, not worth the energy spent running the code. But if they did that, 2.5 C would “instantly” turn into 1-1.5C, or even less, and this would be the equivalent of Mount Tambora exploding under the asses of climate scientists everywhere, an oops so big that nobody would ever trust them again.
Bear in mind that I personally have no opinion. I think if anything all of these computations are unverified and hence unreliable science. We’re decades premature in claiming we have quantitative understanding of the climate. Possible disaster at stake or not, the minute you start lying in science for somebody’s supposed own benefit, you aren’t even on the slippery slope to hell, you’re already in it. Science runs on pure, brutal honesty.
Do you seriously think that is what the AR’s have produced? Honest reporting of the actual science, including its uncertainties and disagreements?
Really?
rgb
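The “meaningless average” point can be illustrated with a toy ensemble (all numbers below are invented for illustration, not taken from any actual GCM output): if the “models” disagree systematically on the trend, the ensemble standard deviation measures that structural disagreement, not random scatter about a shared truth, so quoting mean ± spread as an error bar on one underlying climate is misleading.

```python
import numpy as np

# Toy ensemble (invented numbers): 20 "models" that disagree
# systematically on the warming trend, each with a little internal
# weather noise. The ensemble spread is then dominated by structural
# disagreement between models, not by random scatter about a truth.
rng = np.random.default_rng(0)
years = np.arange(1980, 2014)

trends = np.linspace(0.010, 0.040, 20)              # degrees C per year
runs = np.array([k * (years - years[0]) + rng.normal(0.0, 0.05, years.size)
                 for k in trends])

ens_std = runs.std(axis=0)                          # "error bar" at each year
structural = trends.std() * (years[-1] - years[0])  # spread from trends alone

# By the end of the run, the quoted "uncertainty" is almost entirely
# the models disagreeing with one another:
print(ens_std[-1], structural)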

Lil Fella from OZ
June 13, 2013 1:58 pm

Dr. Pachauri said that he would not take notice of these trends unless they continued for 40 years.
I could not work that out, seeing Dr. Carter wrote that 30-year spans are climate, as opposed to the general comment regarding weather. Does the money run out then!?

Nick Stokes(@bilby)
June 13, 2013 2:10 pm

rgbatduke says: June 13, 2013 at 1:45 pm
“Aw, c’mon Nick, you can do better than that. Clearly I was referring to the AR5 ensemble average over climate models, which is pulled from the actual publication IIRC.”

You say exactly what you are referring to:
“This is reflected in the graphs Monckton publishes above, where the AR5 trend line is the average over all of these models and in spite of the number of contributors the variance of the models is huge. It is also clearly evident if one publishes a “spaghetti graph” of the individual model projections (as Roy Spencer recently did in another thread) — it looks like the frayed end of a rope, not like a coherent spread around some physics supported result.”
The graphs Monckton publishes above! But these are clearly marked “lordmoncktonfoundation.com” – not a common IPCC adornment. You’ve cited Monckton graphs, Spencer graphs. If there is an AR5 graph with the features you condemn (AR5 trend line etc) where is it?

jai mitchell
June 13, 2013 2:15 pm

& John Tillman
–Yes, I misread his statement, but then it only makes one wonder. If you all think that we are actually supposed to be headed into another ice age, then why are we “recovering” from the little ice age?
And if you are all such big fans of the medieval warm period, why wasn’t the little ice age a “recovery” from that (since we are supposed to be headed into another ice age)?
it sounds to me like you are really grasping at straws here.

rgbatduke
June 13, 2013 2:16 pm

Disappear? How? Will someone PLEASE explain the mechanism to me?
One proposed mechanism is that e.g. UV light passes into the ocean, bypassing the surface layer where absorbed IR turns straight into latent heat with no actual heating, and warms it at some moderate depth, whence it is gradually mixed downward to the thermocline.
The catch is, the water in the deep ocean is stable — denser and colder than the surface layer. It turns over due to variations in surface salinity in the so-called “global conveyor belt” of oceanic heat circulation on a timescale of centuries, and much of this turnover skips the really deep ocean below the thermocline because it is so very stable at a nearly uniform temperature of 4 C. Also, water has a truly enormous specific heat compared to air; even dumping all of the supposed radiative imbalance into the ocean over decades might be expected to produce a truly tiny change in water temperature, especially if the heat makes it all the way down to and through the thermocline.
So one ends up with deep water that is a fraction of a degree warmer than it might have been otherwise (but nevertheless with a huge amount of heat tied up in that temperature increase) that isn’t going anywhere until the oceanic circulation carries it to the surface decades to centuries from now.
To give you some idea of how long it takes to equilibrate some kinds of circulation processes, Jupiter may well be still giving off its heat of formation from four and a half billion years ago! It is radiating away more energy than it is receiving; or there could be other processes contributing to that heat. Brown dwarf stars don’t generate heat from fusion, but nevertheless are expected to radiate heat away for 100 billion years from their heat of formation. The Earth’s oceans won’t take that long, but they are always disequilibrated with the atmosphere and land and act as a vast thermal reservoir, effectively a “capacitor” that can absorb or release heat to moderate more rapid/transient changes in the surface/atmospheric reservoirs, which is why Durham (where I live most of the year) is currently 5-10 F warmer outside than where I am sitting in Beaufort next to the ocean at this minute.
So if the “missing heat” really is missing, and is going into the ocean, that is great news, as the ocean could absorb it all for 100 years and hardly notice, moderating any predicted temperature increase in the air and on land the entire time, and who knows, perhaps release it slowly to delay the advent of the next glacial epoch a few centuries from now. Although truthfully nobody knows what the climate will do next year, ten years from now, or a century from now, because our current climate models and theories do not seem to work to explain the past (at all!), the present outside of a narrow range across which they are effectively fit, or the future beyond whenever they were fit. Indeed, they often omit variables that appear to be important in the past, but nobody really knows why.
rgb
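The “absorb it all for 100 years and hardly notice” claim is easy to check with round numbers. Everything below is an assumption for the sake of the estimate: a sustained 0.5 W/m² imbalance, all of it going into an ocean that mixes the heat through its entire mass.

```python
# Back-of-envelope check of the "ocean could absorb it all for 100
# years and hardly notice" claim. All inputs are round assumptions
# (a 0.5 W/m^2 imbalance, heat mixed through the entire ocean).
SECONDS_PER_YEAR = 3.156e7
EARTH_AREA = 5.1e14       # m^2, whole-Earth surface area
IMBALANCE = 0.5           # W/m^2, assumed radiative imbalance
OCEAN_MASS = 1.4e21       # kg, approximate mass of the oceans
C_WATER = 4.0e3           # J/(kg K), rough specific heat of seawater

years = 100
joules = IMBALANCE * EARTH_AREA * SECONDS_PER_YEAR * years
delta_T = joules / (OCEAN_MASS * C_WATER)
print(round(delta_T, 2), "K over a century")
```

With these assumed inputs the whole-ocean warming comes out on the order of a tenth of a degree per century, consistent with the “fraction of a degree warmer” figure above; concentrating the same heat in a shallower layer would of course give a proportionally larger number.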

Bruce Cobb
June 13, 2013 2:17 pm

rgbatduke says:
In the end, they’re all sons of B’s, aren’t they?

dbstealey(@dbstealey)
June 13, 2013 2:29 pm

rgb@duke says:
“To give you some idea of how long it takes to equilibrate some kinds of circulation processes, Jupiter may well be still giving off its heat of formation from four and a half billion years ago!”
But since Jupiter’s year is 11.89 years long, it has been radiating for only a mere 379 million years.
[Just practicing to be a SkS ‘science’ writer… ☺]

climatereason(@climatereason)
Editor
June 13, 2013 2:32 pm

jai Mitchell said
‘Yes, I misread his statement but then it only makes one consider. If you all think that we are actually supposed to be headed into another ice age, then why are we “recovering” from the little ice age?
And if you are all such big fans if the medieval warm period, why wasn’t the little ice age a “recovery” from that, (since we are supposed to be headed into another ice age)’
So you misread the comment (we all do it), but instead of acknowledging that, you go off at a tangent. The world warms and cools. It cools down before we reach a glacial period and warms up after it. THE Ice Age is the daddy of them all, but there have been numerous lesser glacial periods within the last 4000 years of ‘neo-glaciation’, or a number of little ice ages if you like, with ‘our’ LIA being the coldest of them all during the Holocene. I’ve graphed six periods of glaciation over the last 3000 years; ‘our’ LIA wasn’t the only one, as Matthes pointed out, just the most recent.
tonyb

taxed
June 13, 2013 2:32 pm

I expect to see further cooling for the rest of the year.
The current jet stream pattern is what is putting a brake on the warming, but I do think we can expect to see more heavy rain and the risk of floods across the NH during the rest of the year, as Arctic air dives deep to the south.

rgbatduke
June 13, 2013 2:43 pm

Jeeze, Nick:
First of all, note “fig 11.33a” on the graph above. Second, note reproductions from the AR5 report here:
http://wattsupwiththat.com/2012/12/30/ar5-chapter-11-hiding-the-decline-part-ii/
Then there is figure 1.4:
http://wattsupwiththat.com/2012/12/14/the-real-ipcc-ar5-draft-bombshell-plus-a-poll/
Sure, these are all from the previously released draft, and who knows, somebody may have fixed them. But pretending that they are all Monckton’s idea and not part of the actual content of AR5 at least as of five or six months ago is silly. If you are having difficulty accessing the leaked AR5 report and looking at figure 11.33, let me know (it is reproduced in the WUWT post above, though, so I don’t see how you could be). You might peek at a few other figures where yes, they average over a bunch of GCMs. Is Monckton’s graph a precise reproduction of AR5 11.33a? No, but it comes damn close to 11.33b. And 11.33a reveals the spaghetti snarl in the models themselves and makes it pretty evident that the actual observational data is creeping along the lower edge of the spaghetti from 1998/1999 (La Nina) on.
So, is there a point to your objection, or were you just trying to suggest that AR5 does not present averages over spaghetti and base its confidence interval on the range it occupies? Because 11.33b looks like it does, I’m just sayin’. So does 11.11a. So does 1.4, which has the additional evil of adding entirely idiotic and obviously hand-drawn “error bars” onto the observational data points.
But where in AR5 does it say “these models appear to be failing”? Or just “Oops”?
Mind you, perhaps they’ve completely rewritten it in the meantime. Who would know? Not me.
rgb

JJ
June 13, 2013 2:47 pm

Nick Stokes says:
You say exactly what you are referring to:

Yes, he does. And you understand quite well what he said. And yet you lie and pretend otherwise. Why must you lie, Nick?
The graphs Monckton publishes above! But these are clearly marked “lordmoncktonfoundation.com” – not a common IPCC adornment. You’ve cited Monckton graphs, Spencer graphs. If there is an AR5 graph with the features you condemn (AR5 trend line etc) where is it?
It is on the graphs that Monckton and Spencer published, of course. But then, you knew that.
Monckton and Spencer cite and present the AR5 model ensemble graphs in their own insightful critiques of the AR5 work. Duke expands on those critiques in a particularly cogent way. And you lie about it. Everything under heaven has its purpose, it seems.

David L.
June 13, 2013 2:48 pm

First and foremost: a line is simply the wrong function. Period.

jc
June 13, 2013 2:58 pm

@ rgbatduke says:
June 13, 2013 at 1:45 pm
“…perfectly reasonable climate scientists…”
————————————————————————————————————————–
Should read: “…give the impression of perfectly reasonable…”
A simulation.
Reason is not restricted to the capacity to follow one comment or assertion (in any language including mathematical) with another that in itself does not create an obvious disjunction with either the first or with other points of apparent relevance that it is obviously contingent on at that particular point. This is mechanical in nature, and relies on the perception that what can be expressed within those particular confines constitutes all that is both required and possible.
This is a lawyer’s mode of being, with apparent plausibility of association being in itself the demonstration of the required reality to be established. It is also the mechanism which is used when it is said that someone is “being reasonable” in that they will accept a situation or proposition on the basis that a resolution is desirable quite regardless of the seen and understood, and incompletely identified or acknowledged, elements or context that would otherwise “complicate” matters. These rely on a circumscribed view, and a self-contained justification. Not fundamental principle.
Being “reasonable” in the above social or procedural way is not evidence of reason. Reason, or the effective existence and application of intelligence, requires, at the start, not just acceptance of a reality but the desire to be subject to it. At any and all times.
The world is full of people who are practiced at, by virtue of not appearing hostile, or not observably failing to agree with that which cannot be denied, seeming “reasonable”. This, in itself, is meaningless. To be genuinely reasonable requires a readiness to admit realities that undermine conveniences built on and around a contrary conception.
Reason and honesty are synonymous.
In the case of “Climate Scientists” who will not or cannot acknowledge a reality pertaining to this field, they are not “reasonable” in any meaningful way. If, in conjunction with such a position, they can pass this off as “reasonable” it merely illustrates a core aspect of their character.
I realize that your use of the word reasonable above was both off-hand and likely intended to communicate the socially civilized nature of the exchanges you refer to, with no apparent hostility or reticence that might be characterized as evasion or duplicity.
But it is very important not to paint a false picture. A pretense of openness and “reasonableness” fails if basic foundational issues of indisputable importance are not acknowledged. And that is the case with these “scientists”.
A stick is a stick. Two plus two does not equal five.
There are no excuses.

pat
June 13, 2013 3:14 pm

and the Bonn talks end in failure:
14 June: Bloomberg: Alessandro Vitelli: UN Climate-Talks Collapse Piles Pressure on November Summit
United Nations talks on reforms to emissions-market rules stalled this week after members rejected a proposal to reconsider the body’s decision-making rules, putting additional pressure on a climate summit in November.
The loss of two weeks’ negotiating time means that items that were due to be discussed in Bonn from June 3 through June 14 may now be revisited at the UN’s annual climate conference in Warsaw at the end of the year, adding to an already-packed agenda that may not be fully addressed, according to a project developers’ group…
The loss of two weeks’ negotiating time may mean that a review of UN offset market rules may not be completed by the end of the year, said Gareth Phillips, chairman of the Project Developers’ Forum, a group representing investors and developers of clean energy projects that generate carbon credits.
“We’ve lost a massive amount of time,” Phillips said today in an interview in Bonn. “Parties were already in two minds over whether they could complete the review of the CDM in Warsaw, so now it looks very unlikely we can conclude the work by then.”…
***“You really can’t expect there to be a negotiation at the seriousness of this one, which is about transforming the whole global energy economy, without there being hurdles and obstacles,” she (Ruth Davis, political director of Greenpeace U.K.) said today in an interview in Bonn…
http://www.bloomberg.com/news/2013-06-13/un-climate-talks-collapse-piles-pressure-on-november-summit.html

Billy Liar
June 13, 2013 3:20 pm

Eustace Cranch says:
June 13, 2013 at 12:11 pm
“…one cannot directly and linearly connect absorbed heat with surface temperature changes — it can disappear into the deep ocean for a century or ten…”
Disappear? How? Will someone PLEASE explain the mechanism to me?

Disappeared = not currently measured

Nick Stokes(@bilby)
June 13, 2013 3:23 pm

rgbatduke says: June 13, 2013 at 2:43 pm
“Jeeze, Nick:
First of all, note “fig 11.33a” on the graph above.”

Yes, but the graph is not Fig 11.33a. Nothing like it.
You said, for example,
“Note the implicit swindle in this graph — by forming a mean and standard deviation over model projections and then using the mean as a “most likely” projection and the variance as representative of the range of the error, one is treating the differences between the models as if they are uncorrelated random variates causing deviation around a true mean!”
The AR5 graphs you linked to do not do any of that. No variance or standard deviation is quoted. They do show quantiles of the actual model results, but that is just arithmetic. At most they speak of an “assessed likely range”. There’s nothing anywhere about variance, uncorrelated random deviates etc. That’s all Monckton’s addition.
JJ says: June 13, 2013 at 2:47 pm
“And you understand quite well what he said. And yet you lie and pretend otherwise. Why must you lie, Nick?”

What an absurd charge. Yes, I understand quite well what he said. He said that the graphs that are shown are a swindle, the maker should be bitch-slapped etc. And he clearly thought that he was talking about the IPCC. But he got it wrong, and won’t admit it. The things he’s accusing the IPCC of are actually Monckton alterations of what the IPCC did.
Now you may think that doesn’t matter. But what does factual accuracy count for anyway, in your world?

JJ
June 13, 2013 3:25 pm

M Courtney says:
My only fault with the comment by rgbatduke is that it was a comment not a main post. It deserves to be a main post.

I concur!
With the title “The Average of Bull\$#!^ is not Roses”
🙂

phlogiston
June 13, 2013 3:26 pm

Steven Mosher says:
June 13, 2013 at 11:54 am
Finally, there is no such thing as falsification. There is confirmation and disconfirmation.
even Popper realized this in the end as did Feynman.

At least you recognise that Popper’s philosophy is toxic to AGW, as it is to other anti-science scams such as the linear no-threshold hypothesis of radiation carcinogenesis, politically mandated to strip the west of its nuclear industry.
However as Popper says, “there are no inductive inferences”. Induction will only take you down the garden path.
Notice how AGW is being pushed into untestable corners, like longer timescales and the deep ocean. You guys are scared of Popper. You need to be.

phlogiston
June 13, 2013 3:37 pm

Nick Stokes says:
June 13, 2013 at 3:23 pm
rgbatduke says: June 13, 2013 at 2:43 pm
“Jeeze, Nick:
First of all, note “fig 11.33a” on the graph above.”
Yes, but the graph is not Fig 11.33a. Nothing like it.
The great AGW WORM-OUT has begun.
“Predict global warming? Me?? No – that’s just a Monkton fabrication.
All we did was project a statistical envelope of warm-cold, wet-dry, storm-notstorm, glacier retreat-advance, more-less twisters and peccatogenic day-to-day change in weather which never happened in pre-industrial times.”
Get used to this, NS is the figurehead (aka frigging in the rigging) of a vast diatribe of AGW denial that is on its way.

Arno Arrak
June 13, 2013 3:39 pm

I don’t like your temperature graph based on HadCRUT4. There are many things wrong with it, starting with the choice of scale. The temperature region included is too narrow and should begin where satellite data begin, which is 1979. Bimonthly resolution is too coarse for significant detail – at least monthly resolution should be used. And a linear fit through a forest of noise is worthless.

The right way to show a temperature record is not to use a running mean or to fit any graph to it but to outline it with a broad semi-transparent band as wide as the average random fuzz that is part of the record. That random fuzz is not noise but represents cloudiness that varies randomly. This limits its amplitude, and anything that sticks far out is an anthropogenic artifact. You can use a linear fit later, once you can actually see that it is linear. To find the shape of the mean temperature curve in the presence of ENSO oscillations (which are everywhere), you start by putting dots in the middle of each line connecting an El Nino peak with its neighboring La Nina valley and connecting the dots. This is done after the transparent band is laid down. There will be some random deviations, but that is the nearest you will ever get to global mean temperature. These are just general requirements.

In my opinion only satellite data should be used when available, because ground-based data have been manipulated and secretly computer processed. They do not show the true height of El Nino peaks, and their twenty-first-century segments have all been raised up by as much as a tenth of a degree. But their worst imaginary feature has been a non-existent warming in the eighties and nineties. They call it the late twentieth century warming and it is still part of AR5 previews like the horsetail graphs of CMIP5. In researching my book What Warming? I compared satellite and ground-based temperature curves and found that satellite curves showed an 18-year linear segment from 1979 to 1997.
But ground-based curves showed a steady warming in that time slot, which they called “late twentieth century warming.” I considered it fake and put that in the book. Nothing happened. Until last fall, that is, when GISTEMP, HadCRUT, and NCDC temperature repositories decided in unison to get rid of that fake warming and follow the satellite data in the eighties and nineties. Nothing was said about it. I consider this coordinated action an admission that they knew the warming was fake. Their twenty-first-century data are likewise screwed up and cannot be trusted. I also discovered that all three were secretly computer processed. That was an accident, because they did not know that their software left traces of its work in their database. These consist of sharp, high spikes sticking up from the broad magic-marker band at the beginnings of years. They looked like noise, but noise does not know the human calendar. They are in exactly the same places in all three data sets and have been there at least as far back as 2008. What connection, if any, they have with that fake warming I do not know. But now that we know there is a no-warming zone in the eighties and nineties and a no-warming zone also in the twenty-first century, we can put it all together. There is only a narrow strip between, enough to accommodate the super El Nino of 1998 and its associated step warming. The step warming was caused by the large amount of warm water the super El Nino carried across the ocean. In four years it raised global temperature by a third of a degree Celsius and then stopped. As a result, all twenty-first-century temperatures are higher than the nineties. Hansen noticed this and pointed out that of the ten highest temperatures, nine occurred after 2000. Not surprising, since they all sit on the high warm platform created by the step warming, the only warming during the entire satellite era.
These years cannot be greenhouse warming years because the step warming was oceanic, not atmospheric in origin. There is actually no room left for greenhouse warming during the satellite era because the two no-warming stretches and the super El Nino use up the entire time available. That means no greenhouse warming for the last 34 years. With this fact in mind, can you believe that any of the warming that preceded the satellite era can be greenhouse warming? I think not.

jai mitchell
June 13, 2013 3:41 pm

ClimateReason,
(how do you quote somebody on this?)
u said, ” I’ve graphed 6 periods of glaciation over the last 3000 years-’our’ lia wasn’t the only one as Matthes pointed out, just the most recent.”
hasn’t that shown that the temperatures have been going down during this period? The LIA is associated with the Maunder Minimum. Saying that we are “recovering” from that implies that the sun itself is “recovering” from it. However, the change in temperatures during the last 5 decades is not based on changes in the sun’s intensity, since that effect is pretty much instantaneous.
http://chartsgraphs.files.wordpress.com/2009/09/tsi_1611_2009_11yr_ma.png
If you look at this chart, and if you think that solar irradiance is the cause of the variation, then we would have 1.5 C average variation every 6.5 years due to the solar cycle. (The solar cycle does cause some variation, but only very little, since it is only .075% of the total sun’s activity, peak to trough.)

Ian Robinson
June 13, 2013 3:50 pm

The Met Office is so worried, it’s holding a meeting to discuss why the UK is no longer experiencing…
… any warming since 2006!

Bart
June 13, 2013 3:51 pm

jai mitchell says:
June 13, 2013 at 3:41 pm
“However, the change in temperatures during the last 5 decades are not based on changes in the sun’s intensity since that effect is pretty much instantaneous.”
Sigh… Just another guy who does not understand the concept of frequency response.
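Bart’s frequency-response point can be sketched with a toy first-order “thermal reservoir,” dT/dt = (F − T)/τ: such a system strongly attenuates a fast 11-year solar-cycle wiggle but passes a slow secular change of the same amplitude almost unchanged, and with a lag. The time constant τ = 30 years below is an assumed, purely illustrative number, not a measured ocean property.

```python
import numpy as np

# Toy first-order lag dT/dt = (F - T)/tau: fast forcing is attenuated,
# slow forcing of the same amplitude passes through (with a lag).
# tau = 30 years is an assumed, illustrative response time.
tau = 30.0
dt = 0.1
t = np.arange(0.0, 300.0, dt)

fast = np.sin(2 * np.pi * t / 11.0)     # 11-year cycle, amplitude 1
slow = np.sin(2 * np.pi * t / 300.0)    # slow change, same amplitude

def respond(forcing):
    T = np.zeros_like(forcing)
    for i in range(1, forcing.size):
        T[i] = T[i - 1] + dt * (forcing[i - 1] - T[i - 1]) / tau
    return T

resp_fast = respond(fast)
resp_slow = respond(slow)

late = t > 150                          # skip the start-up transient
amp_fast = (resp_fast[late].max() - resp_fast[late].min()) / 2
print(amp_fast, resp_slow.max())        # fast cycle is strongly attenuated
```

So “the sun’s effect is pretty much instantaneous” is exactly what a system with thermal inertia does not do: the response depends on the frequency of the forcing.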

Nick Stokes(@bilby)
June 13, 2013 3:52 pm

phlogiston says: June 13, 2013 at 3:37 pm
” No – that’s just a Monkton fabrication.”

I don’t think it’s a Monckton fabrication. The attribution could be clearer, but it’s properly marked “lordmoncktonfoundation.com”. I don’t even think it’s that bad, but RGB says:
“One simply wishes to bitch-slap whoever it was that assembled the graph and ensure that they never work or publish in the field of science or statistics ever again.”
Clearly he thought he was referring to the IPCC, but the graph is labelled “Monckton”, and his diatribe matches the graph in this post. It does not match the AR5 graphs that he later linked to.

phlogiston
June 13, 2013 4:06 pm

Nick Stokes says:
June 13, 2013 at 3:52 pm
phlogiston says: June 13, 2013 at 3:37 pm
” No – that’s just a Monkton fabrication.”
Clearly he thought he was referring to the IPCC, but the graph is labelled “Monckton”, and his diatribe matches the graph in this post. It does not match the AR5 graphs that he later linked to.
I’m sure Monkton himself can clarify the provenance of this figure.

Nick Stokes(@bilby)
June 13, 2013 4:27 pm

phlogiston says: June 13, 2013 at 4:06 pm
“I’m sure Monkton himself can clarify the provenance of this figure.”

Lord M says
“The correlation coefficient is low, the period of record is short, and I have not yet obtained the monthly projected-anomaly data from the modelers to allow a proper p-value comparison.”
It sure sounds like he’s doing the stats and graphing himself.

phlogiston
June 13, 2013 4:36 pm

Nick Stokes says:
June 13, 2013 at 4:27 pm
An un-vetted person doing statistics, how shocking!
Do you assert – contrary to Monkton – that the ensemble models are spot-on in predicting the global temperature trend in the last two decades? Or are we still in the cloud of unknowing?

Jim
June 13, 2013 4:36 pm

Hmmm, I was never too good at math. But let me give this a try. We are about 5 years through solar cycle 24, and in this cycle the sun is very quiet. Solar cycle 23 lasted for 12.6 years, and the sun was very quiet during that cycle as well. In fact, there were 821 spotless days during cycle 23, and that level of spotless days or more was last achieved about 100 years before, during solar cycle 11.
But back to the math part, at which I am terrible. However, I can do simple arithmetic. So the total length of solar cycles 23 and 24 is 12.6 years + 5 years = 17.6 years. Now, you say that global warming has stopped for 17 years?
I guess I am too simple to figure these things out. Climate is soooo complicated.

Nick Stokes(@bilby)
June 13, 2013 5:02 pm

phlogiston says: June 13, 2013 at 4:36 pm
“Nick Stokes says:
An un-vetted person doing statistics, how shocking!”

I am not shocked. It was RGB who spoke harshly of it.
“Do you assert – contrary to Monkton – that the ensemble models are spot-on in predicting the global temperature trend in the last two decades?”
No, and they don’t claim to be. Basically GCMs are numerical weather programs that generate weather. But they are not forecasting weather – there is no expectation that the weather will progress just as a model predicts (that’s mostly why they disagree so much on this scale). The expectation is that, as with reality, the weather will average out into an identifiable climate. And as with reality, that takes a while.

Myrrh
June 13, 2013 5:02 pm

[snip – more PSI/Slayers Junk science -mod]

Olaf Koenders
June 13, 2013 5:20 pm

Why is it that when we cherry-pick a start date of the year 1000 the warmists suddenly shut up? It’s probably because of the Viking swords in their backs…

June 13, 2013 5:55 pm

Greg Mansion says:
June 13, 2013 at 9:59 am
Greg, I’m sure there were many years when Mayan priests threw the virgins into the pit and crop results improved. By chance, of course. All the spaghetti graphs in the world wouldn’t improve the “science” of human sacrifice and crop results. Nor would reasoned people have started a dissident debate based on graphs produced by the priests of the time, even if they were on the right side of science. The Mayans, I’m sure, were more honest and didn’t attribute their beliefs to science at all.
It seems to me many skeptics make a priority of co-opting the basic warming talking points, which are a pure fallacy. Causal assumptions about the temperature stats always peeve me. Lowering the logic bar is essential for the AGW believers, and the temp-stat graph does exactly that. Long-term, the short-term changes of the graph don’t matter; if you accept the talking point you’ve lost an important piece of logic in the farce of AGW debating.
There should be a lot more qualifying of the important points about the AGW scam when people comment on the temp stat and the “pause” from the skeptical side. Then again, many skeptics live for weeds of the debate like this, which will go on forever if left to them. Monckton is slipping.

Shawnhet
June 13, 2013 6:09 pm

Steven Mosher says:
June 13, 2013 at 11:54 am
“Finally, there is no such thing as falsification. There is confirmation and disconfirmation.
even Popper realized this in the end as did Feynman.”
So, your first statement makes the claim that it is false to claim that there is such a thing as falsification? 😉
Seriously, I’m pretty sure that you misunderstood Popper (I can’t be sure about Feynman, but it doesn’t sound like him either). I think you must be thinking of naive falsificationism, which is not the same thing.
Cheers, 🙂

kim
June 13, 2013 6:37 pm

The racehorse is running to provenance, but it’s in Rhode Island. Seems skeert blue of the devil, so stridefully he avoids the point.
==========

Gary Pearse
June 13, 2013 6:37 pm

“bring out pitchforks and torches as people realize just how badly they’ve been used by a small group of scientists and politicians, how much they are the victims of indefensible abuse of statistics to average in the terrible with the merely poor as if they are all equally likely to be true with randomly distributed differences.
rgb”
rgb, this sort of thing is the modus operandi of bad climate science. The adjustments made to the temperature record took the good, high-quality rural thermometers and averaged them with the poorly sited ones, and apparently added something additional. The rural sites averaged a 0.155 C/decade trend, the poorly sited ones 0.248 C/decade, and NOAA’s final adjustment resulted in a 0.309 C/decade average in the contiguous 48. How on earth could the best model in the world, based on good physics, ever “hindcast” or “project” this? Assuming the rest of the world’s temps are fiddled in similar fashion, as they most certainly are, this would mean that the “observed” trends are exaggerated and the departure from projections even greater.

June 13, 2013 6:39 pm

Has Ben Santer taken a swing at anyone yet?

Gary Pearse
June 13, 2013 6:40 pm

Sorry, I left out the link to the NOAA changes to the US Temps:
http://wattsupwiththat.com/2012/07/29/press-release-2/

Shawnhet
June 13, 2013 6:41 pm

Nick Stokes says:
June 13, 2013 at 5:02 pm
“The expectation is that, as with reality, the weather will average out into an identifiable climate. And as with reality, that takes a while.”
I don’t think that’s right at all. From the IPCC’s Third Assessment report Section 8.5.1.1
“The model evaluation chapter of the IPCC Second Assessment Report (Gates et al., 1996) found that “large-scale features of the current climate are well simulated on average by current coupled models.””
From the above we can see that models are averaged together because doing so allows you to “simulate” the large scale features of the climate. IOW, individual models on their own do not simulate those large-scale features(if some did there would be no need to average them at all). It has nothing to do with the “time” you let a model run for.
Cheers, 🙂

William Astley
June 13, 2013 6:48 pm

The fact that there has been no warming for 17 years is an anomaly. An anomaly is an observation that cannot be explained by the assumed mechanisms, the assumed hypothesis/hypotheses. There are three standard approaches to addressing anomalies: 1) Ignore them (that is the most common approach; name-calling is useful if there are ignorant people who persist in bringing up the anomalies, and the use of the word ‘denier’ is the type of imaginative approach that can be used to stifle discussion), 2) Make the anomaly go away by reinterpreting the data (GISS is an example of that approach), or 3) Develop a modified mechanism or a new mechanism to explain them away.
There is no question that the lack of warming for 17 years is real, not an instrumental error or a misinterpretation of the measurements. The Hadcrut3-to-Hadcrut4 and the GISS manipulations are pathetic warmist attempts to raise planetary temperature, which only muddy the water and do not remove the anomaly.
Thermometers have not changed with time. There is no logical reason to propose a change in the laws of physics to explain what is observed. The laws of physics have not changed with time.
If the CO2 mechanism (William: big if) does not saturate, increasing CO2 in the atmosphere should result in an increase in forcing, which should result in a gradually increasing planetary temperature that oscillates with the normal ‘chaotic’ planetary mechanisms. What should be observed as atmospheric CO2 increases is a wavy, asymptotically increasing planetary temperature (increasing as the CO2 forcing continually increases).
That is not observed.
The warmists have proposed that the additional forcing due to increased atmospheric CO2 is hiding in the ocean. They also tried the hypothesis that increased aerosols due to Chinese coal use inhibited the warming. Some scallywag, however, noted that the majority of the warming was observed in the Northern Hemisphere, where Chinese aerosol concentration should be highest and should therefore inhibit warming, which is the opposite of observations. The Northern Hemisphere ex-tropics warmed four times more than the tropics, twice as much as the planet as a whole (which, curiously, is also what happens during a Dansgaard-Oeschger cycle).
The problem with the heat-hiding-in-the-ocean hypothesis is that there must be a mechanism that would suddenly send the additional energy from the CO2 forcing into the ocean to stall the warming. In addition to the requirement for a new mechanism that would suddenly send heat into the deep ocean, a heat-regulating mechanism must mysteriously increase to cap the CO2 warming. (i.e., The heat hiding in the ocean must fortuitously increase to cap the planetary temperature rise.)
The warmists if they were interested in solving the scientific puzzle should have summarized the problem situation and possibilities. When that is done it is clear some hypotheses are not valid.
Summary of the CO2 mechanism in accordance with warmist theory.
1) Based on theoretical calculations and measurements, increased atmospheric CO2 does not result in a significant increase in planetary temperature in the lower troposphere. That region of the atmosphere is saturated, as the absorption spectra of CO2 and water overlap and there is sufficient CO2 in the lower troposphere. CO2 is a heavier-than-air molecule (its concentration is proportionally greater at lower elevations due to its higher mass compared with O2 and N2), and there is a greater amount of water vapour, so increased CO2 does not theoretically result in significant warming in the lower troposphere.
2) At higher elevations in the atmosphere there is less water vapour, so, all else being equal (i.e., the conditions at that elevation are as assumed by the models), the additional atmospheric CO2 should theoretically cause increased warming at higher elevations in the troposphere. The warming in the higher regions of the troposphere should then, by emitting long-wave radiation, warm the planet’s surface.
Logical Option A:
If heat is not hiding in the ocean and the laws of physics hold, then something is causing the CO2 mechanism to saturate in the upper troposphere, such that increased CO2 or other greenhouse gases do not cause warming in that region of the atmosphere. If logical option A is correct, and if the upper troposphere was already saturated such that increased CO2 does not cause significant warming, then something else caused the warming of the last 70 years.
It is known that planetary temperature has cyclically warmed and cooled in the past (Dansgaard-Oeschger cycles) and it is known that there are solar magnetic cycle changes that correlate with the warming and cooling cycles. An example is the Medieval Warm period that is followed by the Little Ice age.
The warmists have chosen to ignore the fact that there is cyclic warming and cooling in the paleo record.
Greenland ice temperature, last 11,000 years determined from ice core analysis, Richard Alley’s paper.
http://www.climate4you.com/images/GISP2%20TemperatureSince10700%20BP%20with%20CO2%20from%20EPICA%20DomeC.gif
http://www.climate4you.com/
So if the CO2 mechanism was saturated at a level of, say, 200 ppm, then additional CO2 has a negligible effect on planetary temperature. A new mechanism is therefore required to explain the 70 years of warming that is observed.
The above graph shows a new mechanism is not required. The same mechanism that caused the Dansgaard-Oeschger warming and cooling caused the warming in the last 70 years.
Now as the solar magnetic cycle has rapidly slowed down, we would expect the planet to cool.
If the planet cools, we will know that something in the upper troposphere differs from the model assumptions, and that the something that is different inhibits the greenhouse warming mechanism. (Inhibit is the correct term rather than saturate.)
Logical Option B:
The heat is hiding in the oceans. Planetary temperature has not risen in the tropics, where there should be the greatest CO2 forcing on the planet, as the tropical region emits the greatest amount of long-wave radiation to space and there is ample water to amplify the CO2 warming. The heat-hiding-in-the-ocean hypothesis requires, particularly in the tropics, a step increase in ocean mixing to hide the heat in the deep ocean.
There is no observational evidence of increased surface winds, and why would there be? Temperatures in the tropics have not increased significantly. There is no driver to force heat into the deep ocean. The question is why heat should suddenly start to hide in the deep ocean now. There needs to be a physical explanation of what has suddenly changed to force heat, particularly in the tropics, into the deep ocean. Ignoring the fact that there is no explanation of what would turn on heat-hiding in the ocean, there is a further ignored problem: if surface waters suddenly intermix with deep ocean waters, atmospheric CO2 levels should drop as CO2 is pulled into the colder, deeper waters. That is not observed. Atmospheric CO2 is gradually rising.

dbstealey(@dbstealey)
June 13, 2013 7:00 pm

jai mitchell says:
“The LIA is associated with the maurader minimum…”
+_+_+_+_+_+_+_+_+_+_+_++++++++_+_+_+_+_
Admit it: you’re just winging it. Anyone who doesn’t understand the context [or how to spell] the Maunder Minimum [which refers to sunspot numbers] is only pretending to understand the subject.

pottereaton
June 13, 2013 7:02 pm

Nick Stokes is here to quibble again. rgbatduke has written some very compelling posts today, so Nick must punish him by quibbling over trivial points, which customarily arise from Nick’s deliberately obtuse misreading of isolated statements while ignoring the most forceful and incisive arguments found in the comments. It’s his modus operandi at ClimateAudit, so I suppose we shouldn’t be surprised to witness it in spades here.
Thanks to rgb for the being generous with his time and for passionately dissecting these issues in depth. Occasionally people convey a deep understanding of the core problems facing climate science and rgb did it brilliantly today. It’s comforting to know he’s teaching at a well-known university.

LdB
June 13, 2013 7:06 pm

“The expectation is that, as with reality, the weather will average out into an identifiable climate. And as with reality, that takes a while.”
Given your liking of the facts, as a man of science myself, can you expand on the factual basis for why the weather must average out, and, when you say it “takes a while”, how long is that and on what basis do you make that statement?
For the record, I believe that both sides of the climate change argument are about as far from science as you can get, and neither side should be able to use science in the description of what they are doing … it is about as scientific as astrology and horoscopes based on political agendas.

ferdberple(@ferdberple)
June 13, 2013 7:30 pm

Thomas says:
June 13, 2013 at 4:17 am
as is clear from the diagram there has been warming, only not large enough to be statistically significant.
=============
Wrong. The error bars show that there may or may not have been warming. There is no way to know for sure.
That is the meaning of statistical significance: that within certain bounds you cannot say which way the answer lies. The temperature trend is within those bounds, so you cannot accurately say “there has been warming”.
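
[For readers who want to see the test being argued about, here is a minimal sketch on synthetic numbers (not the actual HadCRUT4 series): fit a least-squares trend and check whether zero lies inside the 2-sigma band around the slope. If it does, the trend is statistically indistinguishable from "no warming". The slope and noise levels below are assumptions for illustration only.]

```python
import random

random.seed(42)
n = 208  # months, roughly 17 years 4 months
x = list(range(n))
# assumed synthetic series: a tiny trend buried in month-to-month noise
y = [0.0005 * xi + random.gauss(0.0, 0.15) for xi in x]

mx = sum(x) / n
my = sum(y) / n
sxx = sum((xi - mx) ** 2 for xi in x)
slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx
intercept = my - slope * mx

# residual variance gives the standard error of the fitted slope
resid = [yi - (intercept + slope * xi) for xi, yi in zip(x, y)]
s2 = sum(r * r for r in resid) / (n - 2)
stderr = (s2 / sxx) ** 0.5

significant = abs(slope) > 2 * stderr
print(f"slope={slope:.5f}, 2 sigma={2 * stderr:.5f}, significant={significant}")
```

Whether `significant` comes out true or false depends entirely on how large the trend is relative to the noise, which is ferdberple’s point about the error bars.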

ferdberple(@ferdberple)
June 13, 2013 7:34 pm

Nick Stokes says:
June 13, 2013 at 5:02 pm
The expectation is that, as with reality, the weather will average out into an identifiable climate. And as with reality, that takes a while.
============
That doesn’t make the expectation correct. The law of large numbers does not hold for chaotic time series. You cannot calculate a meaningful average for chaotic systems over time. The result is spurious nonsense.
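
[The sensitivity ferdberple alludes to is easy to demonstrate with a toy system; the logistic map below is a standard textbook example and has nothing to do with climate data. Whether long-run averages of chaotic systems are meaningful is a separate and contested point (many chaotic systems do have well-defined statistical averages), but the sensitive dependence on initial conditions itself looks like this:]

```python
# Two trajectories of the chaotic logistic map (r = 4), started a hair apart.
def logistic(x, r=4.0):
    return r * x * (1.0 - x)

a, b = 0.400000, 0.400001  # initial conditions differing by only 1e-6
for _ in range(50):
    a, b = logistic(a), logistic(b)

# after 50 steps the tiny initial difference has been amplified enormously
print(abs(a - b))
```

The exponential error growth is why individual model runs diverge on short timescales; the debate is over what, if anything, their long-run statistics mean.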

jim2
June 13, 2013 7:40 pm
RoyFOMR
June 13, 2013 7:59 pm

@rgbatduke says:
June 13, 2013 at 7:20 am
I’ve read thousands of posts on science blogs, and this post of yours stands head and shoulders above any other that I’ve read, and I’ve read many excellent ones.
I don’t know how you did it but, by God, it really hit the spot for me and I’m sure for many others too.
Thank you.
(Mr Watts, I’ve posted rgb’s full text from a H/T from StreetCred on BishopHill. Apologies if I’ve overstepped the mark and please feel free to snip)

John Tillman
June 13, 2013 8:17 pm

jai mitchell says:
June 13, 2013 at 2:15 pm
& John Tillman
–Yes, I misread his statement, but then it only makes one wonder. If you all think that we are actually supposed to be headed into another ice age, then why are we “recovering” from the Little Ice Age?
And if you are all such big fans of the Medieval Warm Period, why wasn’t the Little Ice Age a “recovery” from that (since we are supposed to be headed into another ice age)?
It sounds to me like you are really grasping at straws here.
—————————————-
This has been explained many times to you. Either you somehow missed all the explanations or want to remain willfully obtuse.
“Recovery” means regression to the mean from excursion above or below a trendline. The world recovered from the Medieval Warm Period by returning back to the trend, then continuing on below it into the LIA. Since about 1700 Earth has been “recovering” from that cold period.
From the Minoan Warm Period 3000 years ago, the long term temperature trend line has been down, but with (possibly quasi-sine wave) cyclical excursions above & below it, all occurring naturally. The Minoan WP was followed by a cold period, which was followed by the Roman WP, followed by the Dark Ages Cold Period, interrupted by the lesser Sui-Tang WP (the peak of which was lower than the Roman & the subsequent Medieval WPs), followed by more cold, then the Medieval WP, followed by the remarkably frigid LIA, followed by the Modern WP. The trend line connecting the peak of the Minoan, Roman, Medieval & Modern WPs is decidedly down.
There is no prima facie case for any significant human effect on climate unless & until the Modern WP gets warmer than the Medieval, which hasn’t happened yet. Each recovery from the preceding cycle, whether warm or cold, has peaked or troughed out at a lower temperature, based upon proxy data, such as the Greenland ice cores. This is just one of many inconvenient truths about CACCA.
Had you really tried to study & understand Dr. Aksofu’s graph, you would grasp this simple concept instead of clutching at CAGW straws.

John Tillman
June 13, 2013 8:18 pm
barry
June 13, 2013 8:22 pm

He says we shall soon be approaching Dr. Ben Santer’s 17-year test: if there is no warming for 17 years, the models are wrong.

There is a propensity to quote one sentence from the Santer paper (in the abstract) as if it were the defining point therein, and to wield it as a benchmark for the statistical significance of the surface data, or for model verification, or to claim that the anthropogenic signal is lost. This is a profound misunderstanding of the paper, which concludes:

In summary, because of the effects of natural internal climate variability, we do not expect each year to be inexorably warmer than the preceding year, or each decade to be warmer than the last decade, even in the presence of strong anthropogenic forcing of the climate system. The clear message from our signal-to-noise analysis is that multi-decadal records are required for identifying human effects on tropospheric temperature.

This is not a discrepancy with the abstract, which maintains that you need *at least* 17 years of data from the MSU records, but that may not always be sufficient.

When trends are computed over 20-year periods, there is a reduction in the amplitude of both the control run noise and the noise superimposed on the externally forced TLT signal in the 20CEN/A1B runs. Because of this noise reduction, the signal component of TLT trends becomes clearer, and the distributions of unforced and forced trends begin to separate (Figure 4B). Separation is virtually complete for 30-year trends
…On timescales longer than 17 years, the average trends in RSS and UAH near-global TLT data consistently exceed 95% of the unforced trends in the CMIP-3 control runs (Figure 6D), clearly indicating that the observed multi-decadal warming of the lower troposphere is too large to be explained by model estimates of natural internal variability….
For timescales ranging from roughly 19 to 30 years, the LAD estimator yields systematically higher values of pf – i.e., model forced trends are in closer agreement with observations….

The 17-year quote is a minimum under one of their testing scenarios. They do not recommend a ‘benchmark’ at all, but point out that the noise declines the more data you have.
It is not enough to cite a quote out of context. Data, too, must be analysed carefully, and not simply stamped with pass/fail based on a quote. Other attempts at finding a benchmark (a sound principle) are similar to Santer’s general conclusion that you need multi-decadal records to get a good grasp of the signal (20, 30, 40 years).

ferdberple(@ferdberple)
June 13, 2013 8:26 pm

ditto on the comments of praise for @rgbatduke postings above. home run after home run. I felt what I was reading was truly inspired. I would like to echo the other comments that these posting be elevated to a blog article. Perhaps just collected “as is” into a posting.
The logic to me is inescapable. Ask 10 people the answer to a question. If you get 10 different answers then one can be pretty sure that at least 9 of them are wrong, and 1 of them might be right. You cannot improve the (at most) 1 possibly right answer by averaging it with the other (at least) 9 wrong answers.
So why, when we have 30 models that all give different answers, do we average them together? Doesn’t this mean that the climate scientists themselves don’t know which one is right? So how can they be so sure that any of them are right?
If you asked 30 people the answer to a question and they all gave the wrong answer, what are the odds that you can average all the wrong answers and get a right answer? Very likely one of the wrong answers is closer to the right answer than is the average.
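
[The "average of wrong answers" argument can be made concrete with a toy simulation; the numbers below are assumptions for illustration, not a climate result. The key statistical fact is that averaging helps only when the errors are independent and unbiased; if the 30 answers share a common bias, the ensemble mean inherits that bias in full:]

```python
import random

random.seed(0)
truth = 10.0

# correlated case: every "answer" shares the same +2.0 systematic bias
biased = [truth + 2.0 + random.gauss(0.0, 0.5) for _ in range(30)]
# independent case: errors scatter around the truth with no shared bias
unbiased = [truth + random.gauss(0.0, 2.0) for _ in range(30)]

err_biased_mean = abs(sum(biased) / 30 - truth)      # stays near the bias, ~2.0
err_unbiased_mean = abs(sum(unbiased) / 30 - truth)  # shrinks roughly as 2/sqrt(30)
best_unbiased = min(abs(v - truth) for v in unbiased)

print(err_biased_mean, err_unbiased_mean, best_unbiased)
```

So the real question about a model ensemble is not whether averaging is ever legitimate, but whether the models’ errors are independent and unbiased, which is precisely what is in dispute in this thread.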

Steve from Rockwood
June 13, 2013 8:27 pm

10 years minimum, but 15 years practically, 17 years for confirmation, 20 years with padded proof, 30 years would eliminate any natural effects, 60 years would clarify the long-term natural trends and 90 years would definitely answer some important questions…but if we had 120 years of worldwide satellite coverage I couldn’t really predict what we would know…surely we should collect such data and then reconvene.

SAMURAI
June 13, 2013 8:29 pm

Thank you, Lord Monckton of Brenchley, for a job well done.
I especially enjoyed seeing the R2 value of the 17 year 4 month trend……0.11…
0.11?… 0.11!? Are you frigging kidding me?
And we still take these grant whor….umm.. bed-wetters seriously?
It is to laugh.
If it weren’t for the $TRILLIONS being wasted on this hoax, it would almost be funny…Almost…
The eventual cost to the scientific community’s credibility and the actual economic and social destruction this silly hoax has inflicted on the world’s economy so far has not been so humorous; tragic comes to mind.

June 13, 2013 9:07 pm

Samurai, I also nearly dropped my uppers when I saw that the R2 value is 0.11.
It’s almost ZERO! Close enough to almost call it zero. At least it isn’t negative, but then, it could start to be without much of a change.
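
[For context on what an R2 of 0.11 means: it says the fitted trend line explains about 11% of the month-to-month variance, which is unsurprising when noise dwarfs a small trend. The sketch below uses a synthetic series with an assumed slope and noise level chosen purely for illustration, not the actual HadCRUT4 anomalies:]

```python
import random

random.seed(1)
n = 208  # months, about 17 years 4 months
x = list(range(n))
# assumed toy series: a small monthly trend under much larger noise
y = [0.0007 * xi + random.gauss(0.0, 0.12) for xi in x]

mx = sum(x) / n
my = sum(y) / n
sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
sxx = sum((xi - mx) ** 2 for xi in x)
syy = sum((yi - my) ** 2 for yi in y)
r2 = sxy * sxy / (sxx * syy)  # R^2 of the least-squares line

print(round(r2, 2))
```

A low R2 therefore signals that the series is noisy relative to any trend; by itself it does not say whether the underlying trend is zero.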

JimF
June 13, 2013 9:10 pm

Anthony/moderators: You come down hard on others, like the dragons or whatever, and some others. Why not give Nick Stokes his one little chance at puerile nastiness, then cut off all his even more juvenile following posts?
Mosher: In re “falsification” as used by rgb@duke. I don’t think he used it in the sense that you think he did. I think he used it in the sense of something a tort lawyer would love to sink his claws into; i.e., “climate scientists” lying through their teeth and misappropriating public funds either through sheer venality or total lack of skill. You may want to clarify that with Mr. RGB – who has clearly posted some of the best thinking we’ve seen on this matter of GCMs.
REPLY: Well, as much as I think Nick has his head up some orifice at times due to his career in government making him wear wonk blinders, he did do one thing that sets him apart from many others who argue against us here. When our beloved friend and moderator Robert Phelan died, Nick Stokes was the only person on the other side of the argument here (that I am aware of) who made a donation to the drive I set up to help his family pay for his funeral.
For that, he deserves some slack, but I will ask him to just keep it cool. – Anthony

John Archer
June 13, 2013 9:26 pm

Finally, there is no such thing as falsification. There is confirmation and disconfirmation.” — Steven Mosher, June 13, 2013 at 11:54 am
I agree with that. Just as verification, in the sense of proving a truth, can’t be had in any science, neither can falsification which is the same thing — proving a truth, the truth that something is false. Haha! Doh!
Even Popper realized this in the end as did Feynman.
Feynman, of course, is no surprise. Besides, I understand he didn’t have a lot of time for philosophy. You can see why. 🙂
But Popper, on the other hand, is a surprise to me. I’ve read some of his stuff and similar but not all of it by a long shot, and on the whole I am very sympathetic to it, except for your point above where I thought he had a big blind spot. He kept banging on about corroboration, for instance, when confirming (putative) truths but seemed a little more adamant when it came to falsification. Dogmatic I’d say.
In fact, the last I heard on this—and that was at least about a couple of decades ago maybe—was that he used to throw a hissy fit if someone brought the symmetry up. Ooh, touchy! 🙂
I didn’t know he recanted though. That’s news to me. Good for him.
The upshot is that he took us all round the houses and back to where we started in the first place — stuck with induction. Haha. Fun if you have nothing better to do.

AJB
June 13, 2013 9:49 pm

The video that perked my scepticism in the whole hullabaloo:
http://edge.org/conversation/the-physics-that-we-know
“Is there anything you can say at all?” … about 7:45 mins in.

June 13, 2013 9:49 pm

pottereaton says: June 13, 2013 at 7:02 pm

Nick Stokes is here to quibble again. rgbatduke has written some very compelling posts today so Nick must punish him by quibbling over trivial points which customarily arise from Nick’s deliberately obtuse misreading

And if there is anything at which Nick Stokes has proven himself to be the numero uno expert, it is in the art and artifice of “deliberately obtuse misreading” (although, much to my disappointment, there have been times – of which this thread is one – that Steve Mosher has been running neck and neck with Stokes)
But that aside … having just subjected myself (albeit somewhat fortified by a glass of Shiraz) to watching the performances (courtesy of Bishop Hill) across the pond of so-called experts providing testimony at a hearing of the U.K. House of Commons Environmental Audit Committee, I’ve come to the conclusion that ‘t would have been a far, far better thing had they requested the appearance and testimony of rgbatduke than they have ever done before!

JimF
June 13, 2013 9:54 pm

Anthony: fair enough. I will just “ignore” him. ‘Nuff said.

Steven Mosher(@stevemosher)
June 13, 2013 10:08 pm

“But Popper, on the other hand, is a surprise to me. I’ve read some of his stuff and similar but not all of it by a long shot, and on the whole I am very sympathetic to it, except for your point above where I thought he had a big blind spot. He kept banging on about corroboration, for instance, when confirming (putative) truths but seemed a little more adamant when it came to falsification. Dogmatic I’d say.”
In the end, of course, he had to admit that real scientists don’t actually falsify theories. They adapt them. I’m referring to his little fudge around the issue of auxiliary hypotheses.
“As regards auxiliary hypotheses we propose to lay down the rule that only those are acceptable whose introduction does not diminish the degree of falsifiability or testability of the system in question, but, on the contrary, increases it.”
That in my mind is an admission that scientists in fact have options when data contradict a theory: namely, the introduction of auxiliary hypotheses. Popper tried to patch this with a “rule” about auxiliary hypotheses, but the rule in fact was disproved. Yup, his philosophical rule was shown to be wrong, pragmatically.
In Popper’s formulation we are only allowed to introduce auxiliary hypotheses if they are testable and if they don’t “diminish” falsifiability (however you measure that is a mystery). This approach to science was luckily ignored by working scientists. The upshot of Popper’s approach is that one could reject theories that were actually true.
In the 1920’s physicists noted that in beta decay (a neutron decaying into a proton and an electron) the combined energy of the proton and the electron was less than the energy of the neutron.
This led some physicists to claim that conservation of energy was falsified.
Pauli suggested that there was also an invisible particle emitted. Fermi named it the neutrino.
However, at the time there was no way of detecting it. By adding this auxiliary hypothesis conservation of energy was saved, BUT the auxiliary hypothesis was not testable. Popper’s rule would have said “thou shalt not save the theory.”
Of course, in 1956 the neutrino was detected and conservation of energy was preserved, but by Popper’s “rulz” the theory would have been tossed. The point is that theories don’t get tossed. They get changed. Improved. And there are no set rules for how this happens. It’s a pragmatic endeavor. So scientists will keep a theory around, even one that has particles that can’t be detected, as long as that theory is better than any other. Skepticism is a tool of science; it is not science itself.
If you want an even funnier example see what Feynman said about renormalization.
“The shell game that we play … is technically called ‘renormalization’. But no matter how clever the word, it is still what I would call a dippy process! Having to resort to such hocus-pocus has prevented us from proving that the theory of quantum electrodynamics is mathematically self-consistent. It’s surprising that the theory still hasn’t been proved self-consistent one way or the other by now; I suspect that renormalization is not mathematically legitimate.”
So there you go. In order to keep a theory in play, a theory that worked, Feynman used a process that he thought was mathematically suspect. Haha, changing the math to fit the theory.

Steven Mosher(@stevemosher)
June 13, 2013 10:23 pm

Hilary
“And if there is anything at which Nick Stokes has proven himself to be the numero uno expert, it is in the art and artifice of “deliberately obtuse misreading” (although, much to my disappointment, there have been times – of which this thread is one – that Steve Mosher has been running neck and neck with Stokes)”
I find your intolerance of Nick’s contrary opinions and other contrary opinions to be out of line with the praise for this site which the good Lord bestowed just the other day.
Let’s be clear on a couple of things. Feynman is no authority on how science works; read his opinion on renormalization and you will understand that he did not practice what he preached.
Popper was likewise wrong about science. This isn’t a matter of philosophical debate; it’s a matter of historical fact.
Here is a hint. You can be a sceptic and not rely on either of these guys’ flawed ideas about how science in fact operates. Theories rarely get “falsified”; they get changed, improved, or forgotten when some better theory comes along. Absent a better theory, folks work with the best they have.

John Tillman
June 13, 2013 10:25 pm

Steven Mosher says:
June 13, 2013 at 11:54 am
Finally, there is no such thing as falsification. There is confirmation and disconfirmation.
even Popper realized this in the end as did Feynman.
———————————————–
Please confirm with actual statements by Popper & Feynman that they “realized” this. Absent your providing evidence to this effect, I think that you have misunderstood the mature thought of both men.
The physicists and philosophers of science Alan Sokal and Jean Bricmont, among others, could not have disagreed with you more. In their 1997 (French; English 1998) book “Fashionable Nonsense” they wrote, “When a theory successfully withstands an attempt at falsification, a scientist will, quite naturally, consider the theory to be partially confirmed and will accord it a greater likelihood or a higher subjective probability… But Popper will have none of this: throughout his life he was a stubborn opponent of any idea of ‘confirmation’ of a theory, or even of its ‘probability’…(however) the history of science teaches us that scientific theories come to be accepted above all because of their successes”.
The history of science is rife with instances of falsification, which neither Popper nor Feynman would I’m sure deny (again, please provide evidence against this view, given their well known support of the theory of falsifiability). There very much indeed is such a thing. Nor would either deny that to be scientific an hypothesis must make falsifiable predictions. If either man did deny this tenet, please show me where.
For instance, Galileo’s observation of the phases of Venus conclusively falsified the Ptolemaic system, without confirming Copernicus’ versus Tycho’s.
As you’re probably aware, Popper initially considered the theory of natural selection to be unfalsifiable, but later changed his mind. I have never read anywhere in his work that he changed his mind about falsifiability. The kind of ad hoc backpedaling in which CACCA engages is precisely what Popper criticized as unscientific to the end. If I’m wrong, please show me where & how.
And that goes double for Feynman.

pat
June 13, 2013 10:36 pm

Samurai says –
“If it weren’t for the $TRILLIONS being wasted on this hoax, it would almost be funny…Almost…
The eventual cost to the scientific community’s credibility and the actual economic and social destruction this silly hoax has inflicted on the world’s economy so far has not been so humorous; tragic comes to mind.”
INVESTORS are really, really concerned about CAGW and the environment!!! nil chance they’ll ever admit it’s a hoax:
13 June: Reuters: Laura Zuckerman: Native Americans decry eagle deaths tied to wind farms
A Native American tribe in Oklahoma on Thursday registered its opposition to a U.S. government plan that would allow a wind farm to kill as many as three bald eagles a year despite special federal protections afforded the birds…
They spoke during an Internet forum arranged by conservationists seeking to draw attention to deaths of protected bald and golden eagles caused when they collide with turbines and other structures at wind farms.
The project proposed by Wind Capital Group of St. Louis would erect 94 wind turbines on 8,400 acres (3,400 hectares) that the Osage Nation says contains key eagle-nesting habitat and migratory routes.
The permit application acknowledges that up three bald eagles a year could be killed by the development over the 40-year life of the project…
The fight in Oklahoma points to the deepening divide between some conservationists and the Obama administration over its push to clear the way for renewable energy development despite hazards to eagles and other protected species.
The U.S. Fish and Wildlife Service, the Interior Department agency tasked with protecting eagles and other wildlife to ensure their survival, is not sure how many eagles have been killed each year by wind farms amid rapid expansion of the facilities under the Obama administration.
UNDERESTIMATED EAGLE DEATHS
***Reporting is voluntary by wind companies whose facilities kill eagles, said Alicia King, spokeswoman for the agency’s migratory bird program.
She estimated wind farms have caused 85 deaths of bald and golden eagles nationwide since 1997, with most occurring in the last three years as wind farms gained ground through federal and state grants and other government incentives…
***Some eagle experts say federal officials are drastically underestimating wind farm-related eagle mortality. For example, a single wind turbine array in northern California, the Altamont Pass Wind Resource Area, is known to kill from 50 to 70 golden eagles a year, according to Doug Bell, wildlife program manager with the East Bay Regional Park District.
Golden eagle numbers in the vicinity are plummeting, with a death rate so high that the local breeding population can no longer replace itself, Bell said.
The U.S. government has predicted that a 1,000-turbine project planned for south-central Wyoming could kill as many as 64 eagles a year.
***It is illegal to kill bald and golden eagles, either deliberately or inadvertently, under protections afforded them by two federal laws, the Migratory Bird Treaty Act and the Bald and Golden Eagle Protection Act…
In the past, federal permits allowing a limited number of eagle deaths were restricted to narrow activities such as scientific research…
***Now the U.S. Fish and Wildlife Service is seeking to lengthen the duration of those permits from five to 30 years to satisfy an emerging industry dependent on investors seeking stable returns…
http://in.reuters.com/article/2013/06/13/usa-eagles-wind-idINL2N0EP1ZS20130613
——————————————————————————–

June 13, 2013 10:54 pm

rgbatduke at 1:17 pm – Oh Yes, follow the money. Corporate America, which of course includes Big Oil, has consistently been the main supplier of money to the Green Movement for decades.

Thomas
June 13, 2013 11:50 pm

ferdberple 7:30 pm. Impressive cherrypicking of a partial sentence there to make it sound as if I’m wrong.

M Courtney
June 14, 2013 12:26 am

In the past I have defended Nick Stokes for making pertinent points despite their being unpopular here.
However, he has really made a fool of himself here.
The question isn’t who made this particular average of model outputs; it is whether anyone should make an average of model outputs at all. Clearly, Monckton has made this average of model outputs to criticise the average of model outputs in the forthcoming AR5 (read the post).
Yet, the posts of rgbatduke persuasively argue that making an average of model outputs is a meaningless exercise anyway.
But criticising Monckton for taking the methodology of AR5 seriously is daft.
Criticising AR5 for not being serious is the appropriate response.
I look forward to Nick Stokes strongly condemning any averaging of models in AR5. But I fear I may be disappointed.

David Cage
June 14, 2013 12:32 am

Why do all these predictions get based on a linear projection? Try putting a cyclic waveform on the noisy one and compare the correlations then. They beat the hell out of any linear ones.
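The comparison David Cage suggests can be sketched with synthetic data (a hypothetical illustration, not the actual temperature record): fit both a straight line and a fixed-period sinusoid to a noisy cycle and compare the variance each explains.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 60.0, 720)  # 60 "years" of monthly samples
y = 0.2 * np.sin(2 * np.pi * t / 60.0) + rng.normal(0.0, 0.05, t.size)

# Straight-line least-squares fit
lin = np.polyval(np.polyfit(t, y, 1), t)

# Sinusoid of known period, fitted by linear least squares on sin/cos terms
X = np.column_stack([np.sin(2 * np.pi * t / 60.0),
                     np.cos(2 * np.pi * t / 60.0),
                     np.ones_like(t)])
cyc = X @ np.linalg.lstsq(X, y, rcond=None)[0]

ss = np.sum((y - y.mean()) ** 2)
r2_lin = 1.0 - np.sum((y - lin) ** 2) / ss
r2_cyc = 1.0 - np.sum((y - cyc) ** 2) / ss
print(f"linear R^2 = {r2_lin:.2f}, cyclic R^2 = {r2_cyc:.2f}")
```

On data that really does contain a cycle, the sinusoidal fit explains far more variance; whether the real record contains such a cycle is, of course, the point in dispute.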

Nick Stokes(@bilby)
June 14, 2013 1:05 am

M Courtney says: June 14, 2013 at 12:26 am
“However he has really made a fool of himself here.
The question isn’t who made this particular average of model outputs it is whether anyone should make an average of model outputs at all.”

Model averaging is only a small part of the argument here. Let me just give a few quotes from the original RGB post:
“Note the implicit swindle in this graph — by forming a mean and standard deviation over model projections and then using the mean as a “most likely” projection and the variance as representative of the range of the error, one is treating the differences between the models as if they are uncorrelated random variates causing deviation around a true mean!”
“What I’m trying to say is that the variance and mean of the “ensemble” of models is completely meaningless, statistically because the inputs do not possess the most basic properties required for a meaningful interpretation. “
“Why even pay lip service to the notion that R² or p for a linear fit, or for a Kolmogorov-Smirnov comparison of the real temperature record and the extrapolated model prediction, has some meaning?”
“This is why it is actually wrong-headed to acquiesce in the notion that any sort of p-value or Rsquared derived from an AR5 mean has any meaning.”
My simple point is that these are features of Lord Monckton’s graphs, duly signed, in this post. It is statistical analysis that he added. There is no evidence that the IPCC is in any way responsible. Clear?
As to averaging models, no, I don’t condemn it. It has been the practice since the beginning, and for good reason. As I said above, models generate weather, from which we try to discern climate. In reality, we just have to wait for long-term averages and patterns to emerge. In model world, we can rerun simultaneously to try to get a common signal. It’s true that models form an imperfect population, and fancy population statistics may be hard to justify. But I repeat, the fancy statistics here seem to be Monckton’s. If there is a common signal, averaging across model runs is the way to get it.
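The “common signal” argument can be sketched with toy numbers (assumed values, not output from any actual model): give every run the same underlying trend plus independent noise, and the ensemble mean recovers the trend far better than any single run.

```python
import numpy as np

rng = np.random.default_rng(1)
n_runs, n_months = 20, 240
signal = 0.005 * np.arange(n_months)                      # shared underlying trend
runs = signal + rng.normal(0.0, 0.3, (n_runs, n_months))  # each run's own "weather"

ensemble_mean = runs.mean(axis=0)

def rmse(a):
    return np.sqrt(np.mean((a - signal) ** 2))

rmse_single, rmse_mean = rmse(runs[0]), rmse(ensemble_mean)
print(f"single run RMSE = {rmse_single:.3f}, ensemble-mean RMSE = {rmse_mean:.3f}")
```

The catch, which rgbatduke’s objection turns on, is the built-in assumption that the runs differ only by independent noise around one true signal; runs from structurally different models need not satisfy that.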

RichardLH(@richardlinsleyhood)
June 14, 2013 1:57 am

David Cage says:
June 14, 2013 at 12:32 am
Why do all these predictions get based on a linear projection? Try putting a cyclic waveform on the noisy one and compare the correlations then. They beat the hell out of any linear ones.
Indeed this analysis (which shows short term cyclic forms in the UAH data) http://s1291.photobucket.com/user/RichardLH/story/70051 supports the non-linear argument.

M Courtney
June 14, 2013 2:14 am

It’s true that models form an imperfect population, and fancy population statistics may be hard to justify.

Which is the point.
Monckton can justify it by referring to AR5, which he is commenting on. Whatever fancy statistics he uses are not relevant to the question of whether including different models – that have no proven common physics – is appropriate at all. He is commenting on AR5.
The point of the original RGB post, as you quote, is the latter idea: The question of whether including different models that have no common physics is appropriate at all.
So what Monckton did is irrelevant to the original RGB post. Monckton was addressing AR5.
AR5 is the problem here (assuming the blending of disparate models still occurs in the published version).

William Astley
June 14, 2013 3:58 am

The following is a summary of the comments concerning the observed and unexplained end of global warming. The comments are interesting as they show a gradual change in attitudes/beliefs concerning what is the end of global warming.
Comment:
If the reasoning in my above comment is correct the planet will now cool which would be an end to global warming as opposed to a pause in global warming.
Source: “No Tricks Zone”
5 July, 2005
“The scientific community would come down on me in no uncertain terms if I said the world had cooled from 1998. OK it has but it is only 7 years of data and it isn’t statistically significant…,” Dr. Phil Jones – CRU emails.
7 May, 2009
“No upward trend…has to continue for a total of 15 years before we get worried,” Dr. Phil Jones – CRU emails.
15 Aug 2009
“…This lack of overall warming is analogous to the period from 2002 to 2008 when decreasing solar irradiance also countered much of the anthropogenic warming…,” Dr. Judith L. Lean – Geophysical Research Letters.
19 November 2009
“At present, however, the warming is taking a break.[…] There can be no argument about that,” Dr. Mojib Latif – Spiegel.
19 November 2009
“It cannot be denied that this is one of the hottest issues in the scientific community. [….] We don’t really know why this stagnation is taking place at this point,” Dr. Jochem Marotzke – Spiegel.
13 February 2010
Phil Jones: “I’m a scientist trying to measure temperature. If I registered that the climate has been cooling I’d say so. But it hasn’t until recently – and then barely at all.”
BBC: “Do you agree that from 1995 to the present there has been no statistically-significant global warming?”
Phil Jones: “Yes, but only just.”
2010
“…The decade of 1999-2008 is still the warmest of the last 30 years, though the global temperature increment is near zero…,” Prof. Shaowu Wang et al – Advances in Climate Change Research.
2 June 2011
“…it has been unclear why global surface temperatures did not rise between 1998 and 2008…,” Dr Robert K. Kaufmann – PNAS.
18 September 2011
“There have been decades, such as 2000–2009, when the observed globally averaged surface-temperature time series shows little increase or even a slightly negative trend1 (a hiatus period)…,” Dr. Gerald A. Meehl – Nature Climate Change.
14 October 2012
“We agree with Mr Rose that there has been only a very small amount of warming in the 21st Century. As stated in our response, this is 0.05 degrees Celsius since 1997 equivalent to 0.03 degrees Celsius per decade.” Source: metofficenews.wordpress.com/, Met Office Blog – Dave Britton (10:48:21) –
30 March 2013
“…the five-year mean global temperature has been flat for a decade,” Dr. James Hansen –
The Economist.
7 April 2013
“…Despite a sustained production of anthropogenic greenhouse gases, the Earth’s mean near-surface temperature paused its rise during the 2000–2010 period…,” Dr. Virginie Guemas – Nature Climate Change.
22 February 2013
“People have to question these things and science only thrives on the basis of questioning,” Dr. Rajendra Pachauri – The Australian.
27 May 2013
“I note this last decade or so has been fairly flat,” Lord Stern (economist) – Telegraph.

Patrick
June 14, 2013 4:16 am

“13 February 2010
Phil Jones: “I’m a scientist trying to measure temperature. ….”
I can read a thermometer AND can use Microsoft Excel. Now where is my grant money?

Paul Mackey
June 14, 2013 4:41 am

Actually, I am very surprised to hear the Guardian still has ANY readers left.

cRR Kampen
June 14, 2013 4:50 am

How dead can a turkey be 🙂

June 14, 2013 4:56 am

“Lets be clear on a couple things. Feynman is no authority on how science works. read his opinion on renormalization and you will understand that he did not practice what he preached.
Popper was likewise wrong about science. This isnt a matter of philosophical debate, its a matter of historical fact.”

Wow, Mosher bashes Feynman and Popper. So what’s your achievement in science compared with Feynman and Popper, Mosher? Already received a Nobel prize? Your arrogance is toe-curling…

David L.
June 14, 2013 4:58 am

Steven says: June 13, 2013 at 4:36 am
“I keep seeing these graphs with linear progressions. Seriously. I mean seriously. Since when is weather/climate a linear behavorist? The equations that attempt to map/predict magnetic fields of the earth are complex Fourier series. Is someone, somewhere suggesting that the magnetic field is more complex than the climate envelope about the earth? I realize this is a short timescale and things may look linear but they are not. Not even close. Like I said in the beginning, the great climate hoax is nothing more than what I just called it. I am glad someone has the tolerance to deal with these idiots. I certainly don’t.”
————————————–
YES, YES, and YES!
I can’t see how any legitimate scientist would entertain these climate hacks beyond the first mention of a linear projection in their papers. At that statement they prove they don’t know what they are talking about. I agree you can use a line to interpolate data between two actual data points, but to fit a line and then project that into the distant future? Give me a giant break.
If you don’t know the real function it is wrong to assume a line will work. You might as well assume a Taylor expansion out to twelfth order for that matter. Assume anything; you’ll most likely be wrong. Assuming a line doesn’t get you any closer to being right.
The most amazing thing to me is that the line doesn’t even fit the data displayed! If they would analyze the residuals they’d see they weren’t normally distributed. The line isn’t even appropriate over the short timescale they plot.
Dr. Santer’s 17 year plot clearly shows the temperatures have gone up and are now coming back down. It’s not even leveling off, no more than the peak of the voltage on an AC circuit. It smoothly goes up and comes back down.
Can you imagine these guys as an artillery battery? They’d plot the first few points of the shell as it comes out of the barrel and project it linearly to their target.
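The residual check David L. describes is easy to run on made-up data (a sketch assuming a smooth rise-and-fall signal, not any particular temperature series): a straight line fitted to such data leaves residuals that are strongly serially correlated rather than random.

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0.0, 10.0, 200)
y = np.sin(t) + rng.normal(0.0, 0.1, t.size)  # rises then falls, like a wave peak

resid = y - np.polyval(np.polyfit(t, y, 1), t)

# Lag-1 autocorrelation of residuals: near zero for a well-specified model,
# strongly positive when the fitted form is wrong.
lag1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]
print(f"lag-1 residual autocorrelation = {lag1:.2f}")
```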

jim2
June 14, 2013 5:06 am

Those who can do science, do it. Those who can’t become philosophers.

June 14, 2013 5:11 am

Blimey – some incredible minds here. Genuinely impressive stuff!
I shall now sum up my research in this matter using my limited intellect.
It’s June.
I’m cold.

el gordo
June 14, 2013 5:31 am

‘After summer floods and droughts, freezing winters and even widespread snow in May this year, something is clearly wrong with Britain’s weather.
‘Concerns about the extreme conditions the UK consistently suffers have increased to such an extent that the Met Office has called a meeting next week to talk about it.
‘Leading meteorologists and scientists will discuss one key issue: is Britain’s often terrible weather down to climate change, or just typical?’

June 14, 2013 5:35 am

Can I edit Dr. Stokes’ comment to make clearer?
If there is a common signal programmed into the code of multiple models, averaging across model runs is the way to get it to show up in the output.

ferdberple(@ferdberple)
June 14, 2013 6:01 am

Thomas says:
June 13, 2013 at 11:50 pm
I’m wrong.
==========
I can’t disagree.

Richard M
June 14, 2013 6:01 am

barry says:
It is not enough to cite a quote out of context. Data, too, must be analysed carefully, and not simply stamped with pass/fail based on a quote. Other attempts at finding a benchmark (a sound principle) are similar to Santer’s general conclusion that you need multi-decadal records to get a good grasp of the signal (20, 30, 40 years).

I actually agree with this statement. The amount of time is not the biggest factor. The question is related to finding some factors that could come into play (“principle”). That is why the almost perfect fit of global temperatures with the PDO is so significant.
The current 16.5 years of no warming is actually around 8 years of warming followed by 8+ years of cooling that peaks right at the PDO switch. That is the “sound principle” that demonstrates that we really don’t even need to wait 17 years; we can say with high certainty that the PDO has a stronger influence on temperatures than CO2. And, if that is true, then CO2’s effect is very small.

garymount(@garymount)
June 14, 2013 6:04 am

You can calculate the circumference of a circle as accurately as you like with straight (linear) lines:
see page 8.
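The referenced page isn’t reproduced here, but the classical construction is easy to check numerically: half the perimeter of a regular n-gon inscribed in a unit circle converges to π as n grows (Archimedes used a 96-gon).

```python
import math

# Half the perimeter of a regular n-gon inscribed in a unit circle is
# n * sin(pi / n), which converges to pi as n grows.
for n in (6, 96, 10_000):
    print(n, n * math.sin(math.pi / n))
```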

Richard M
June 14, 2013 6:08 am

Nick Stokes says:
As to averaging models, no, I don’t condemn it. It has been the practice since the beginning, and for good reason. As I said above, models generate weather, from which we try to discern climate. In reality, we just have to wait for long-term averages and patterns to emerge. In model world, we can rerun simultaneously to try to get a common signal. It’s true that models form an imperfect population, and fancy population statistics may be hard to justify. But I repeat, the fancy statistics here seem to be Monckton’s. If there is a common signal, averaging across model runs is the way to get it.

Nick, the reason averaging A MODEL makes sense is because you are trying to eliminate the effect of noise. When you average multiple models, what are you doing? In essence you are averaging differing implementations of physics. Please inform me what a normal distribution of different physics provides. And what is the meaning of the mean of a normal distribution of different physics? Dr. Brown made this clear. It is so idiotic I can’t even imagine you supporting this nonsense. You are smarter than that.

ferdberple(@ferdberple)
June 14, 2013 6:19 am

Nick Stokes says:
June 14, 2013 at 1:05 am
As to averaging models, no, I don’t condemn it. It has been the practice since the beginning, and for good reason. As I said above, models generate weather, from which we try to discern climate. In reality, we just have to wait for long-term averages and patterns to emerge.
============
There is no good reason to average chaos. It is a mathematical nonsense to do so because the law of large numbers does not apply to chaotic time series. There is no mean around which the data can be expected to converge.
The reason averaging works for some problems is because there is a mean to be discovered. Your sample contains noise, and over time the noise will be random: some positive and some negative. Over time the law of large numbers operates to even out the positive and negative noise, and the signal will emerge.
However, as rgbatduke has posted, all this goes out the window when you are dealing with chaos. Chaotic systems are missing a constant mean and constant deviation. There is no convergence, only spurious convergence. False, misleading convergence that is not what it appears.
In chaotic systems you have attractors, which might be considered local means. When you use standard statistics to analyze them, you appear to get good results while the system is orbiting an attractor, but then it shoots off towards another attractor and makes a nonsense of your results.
So the idea that you can improve your results by taking longer samples of chaotic systems is nonsense. The longer a chaotic system is sampled, the more likely it will diverge towards another attractor, making your results less certain, not more certain.
This is the fundamental mistake in the mathematics of climate. The assumption that you can average a chaotic system (weather) over time and the chaos can be evened out as noise. That is mathematical wishful thinking, nothing more. Chaos is not noise. It looks like noise, but it is not noise and cannot be treated as noise if you want to arrive at a meaningful result.
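The defining feature ferdberple is pointing at, sensitive dependence on initial conditions, shows up even in the simplest chaotic system (a toy illustration, not a climate model):

```python
# Logistic map x -> 4x(1-x), a textbook chaotic system: two trajectories
# starting one part in 10^12 apart separate to order one within a few dozen steps.
x, y = 0.2, 0.2 + 1e-12
max_gap = 0.0
for _ in range(100):
    x, y = 4 * x * (1 - x), 4 * y * (1 - y)
    max_gap = max(max_gap, abs(x - y))
print(f"largest separation over 100 steps: {max_gap:.3f}")
```

Whether long-run averages of such a system settle down depends on the structure of its attractors, which is precisely the substance of the dispute above.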

Richard M
June 14, 2013 6:20 am

Why are there so many models? In engineering we have standards and committees to make changes to the standards. Assuming there is only one physics there should be only one model where all the changes that get made must be approved by a standards committee. Sure that takes a little extra effort but the result is one, arguably better, model. Instead we have dozens of which none are of much value (other than to the paychecks of the modelers).
Of course, this is the difference between researchers and engineers. The former is not too concerned with accuracy.

RichardLH(@richardlinsleyhood)
June 14, 2013 6:24 am

Richard M says:
June 14, 2013 at 6:20 am
“Of course, this is the difference between researchers and engineers. The former is not too concerned with accuracy.”
Also the difference between discovery and manufacture.

Patrick
June 14, 2013 6:33 am

“Richard M says:
June 14, 2013 at 6:20 am”
In engineering we (I do) know what +/- 2 microns is (+/- 3 microns: bin the job and start again). It is measurable; it is finite. On the other hand, computer-based climate cartoon-ography, sorry, I mean climate modelling, is, in its basic form, just a WAG where nothing is finite nor even measured (other than the monthly pay check).

Nick Stokes(@bilby)
June 14, 2013 6:33 am

Richard M says: June 14, 2013 at 6:08 am
“When you average multiple models what are you doing? In essence you are averaging differing implementations of physics. Please inform me what a normal distribution of different physics provides?”

There is no expectation of a normal distribution involved in averaging.
But why do you think different models use different physics?
ferdberple says: June 14, 2013 at 6:19 am
“There is no good reason to average chaos.”

This would mean that you could never speak of any weather average. But we do that all the time, and find it useful.
Some folks are overly dogmatic about chaos.

June 14, 2013 6:36 am

I am most grateful to Professor Brown for having pointed out that taking an ensemble of models that use different code, as the Climate Model Intercomparison Project does, is questionable, and that it is interesting to note the breadth of the interval of projections from models each of which claims to be rooted in physics.
In answer to Mr. Stokes, the orange region representing the interval of models’ outputs will be found to correspond with the region shown in the spaghetti-graph of models’ projections from 2005-2050 at Fig. 11.33a of the Fifth Assessment Report. The correspondence between my region and that in Fig. 11.33a was explained in detail in an earlier posting. The central projection of 2.33 K/century equivalent that I derived from Fig. 11.33a seems fairly to reflect the models’ output. If Mr. Stokes thinks the models are projecting some warming rate other than that for the 45 years 2005-2050, perhaps he would like to state what he thinks their central projection is.
Several commenters object to applying linear regression to the temperature data. Yet this standard technique helpfully indicates whether and at what rate stochastic data are trending upward or downward, and allows comparison of temperature trends with projections such as those in the Fifth Assessment Report. A simple linear regression is preferable to higher-order polynomial fits where – as here – the data uncertainties are substantial.
Some commenters object to making any comparison at all between what the models predict and what is happening in the real world. However, it is time the models’ projections were regularly benchmarked against reality, and I shall be doing that benchmarking every month from now on. If anyone prefers benchmarking methods other than mine, feel free to do your own thing. One understands that the cry-babies and bed-wetters will not be at all keen to have the variance between prediction and observation regularly and clearly demonstrated: but the monthly Global Warming Prediction Index and comparison graph are already being circulated so widely that it will soon be impossible for anyone to get away with lying to the effect that global warming is occurring at an unprecedented rate, or that it is worse than we ever thought possible, or that the models are doing a splendid job, or that we must defer to the consensus because consensus must be right.
Finally, Mr. Mansion says that, just as correlation does not imply causation, absence of correlation does not imply absence of causation. In logic he is incorrect. Though correlation indeed does not imply causation, absence of correlation necessarily implies absence of causation. CO2 concentration continues to increase, but temperature is not following it. So, at least at present, the influence of CO2 concentration change on temperature change is not discernible.
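For readers who want to reproduce this kind of trend benchmarking, here is a minimal ordinary-least-squares sketch on synthetic data (the real calculation on the HadCRUT series would also need to account for autocorrelation in the monthly anomalies, which widens the uncertainty):

```python
import numpy as np

def trend_with_2sigma(t, y):
    """OLS slope, its standard error, and a crude |slope| > 2*SE significance
    flag (ignores serial correlation, which understates the true uncertainty)."""
    slope, intercept = np.polyfit(t, y, 1)
    resid = y - (slope * t + intercept)
    se = np.sqrt(np.sum(resid ** 2) / (t.size - 2) / np.sum((t - t.mean()) ** 2))
    return slope, se, abs(slope) > 2 * se

# 208 months = 17 years 4 months of synthetic, trend-free anomalies
rng = np.random.default_rng(3)
t = np.arange(208.0)
slope, se, significant = trend_with_2sigma(t, rng.normal(0.0, 0.1, t.size))
print(f"slope = {slope:.2e}/month, 2*SE = {2 * se:.2e}, significant: {significant}")
```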

Patrick
June 14, 2013 6:36 am

“RichardLH says:
June 14, 2013 at 6:24 am
Also the difference between discovery and manufacture.”
Based on my previous post, the discovery is that you haven’t read (understood) the science (drawing)! I agree!

ferdberple(@ferdberple)
June 14, 2013 6:43 am

barry says:
June 13, 2013 at 8:22 pm
similar to Santer’s general conclusions that you need multi-decadal records to get a good grasp of signal (20, 30, 40 years).
================
The problem is that we are likely dealing with a strange attractor, a fractal distribution, which implies that, regardless of the scale, the variability will appear the same. What this means mathematically is that no time scale will prove satisfactory. There is no time scale at which you can expect the signal to emerge from the noise, because the noise is not noise. It is chaos. The system will continue to diverge whether you collect data for 100, 1,000, 1 million, or 1 billion years.
The best that can be hoped for in our current understanding is to look for patterns in how the system orbits its attractors. This behavior may give some degree of cyclical predictability, or not, depending on the motion of the attractors. We use this approach to calculate the ocean tides with a high degree of precision, even though the underlying physics is chaotic.
Climate science on the other hand has ignored the cyclical behavior of climate and instead attempted to use a linear approximation of a non-linear system. And is now confused because the linear projections are diverging from observation. Yet this divergence is guaranteed as a result of the underlying chaotic time series.

Patrick
June 14, 2013 6:46 am

“Nick Stokes says:
June 14, 2013 at 6:33 am
This would mean that you could never speak of any weather average. But we do that all the time, and find it useful.
Some folks are overly dogmatic about chaos.”
And some folks are overly accepting of “averages”. It’s meaningless to compare an absolute, as is ALWAYS the case in weathercasts, with an average. But it is done everyday, in every weathercast.

Nick Stokes(@bilby)
June 14, 2013 6:46 am

Monckton of Brenchley says: June 14, 2013 at 6:36 am
“The central projection of 2.33 K/century equivalent that I derived from Fig. 11.33a seems fairly to reflect the models’ output. If Mr. Stokes thinks the models are projecting some warming rate other than that for the 45 years 2005-2050…”

I was far less critical of your graphs than Prof Brown, and I don’t particularly want to argue projections here. I was merely pointing out that they are indeed your estimates and statistics, and the graphs are not IPCC graphs, as they are indeed clearly marked.

M Courtney
June 14, 2013 6:54 am

Nick Stokes says at June 14, 2013 at 6:33 am

But why do you think different models use different physics?

Because they all give different results. Sure they must have some bits in common (I hope they use a round planet) but they don’t all model everything in the same way.
So what are you bundling?
Not variations in inputs to see what the model predicts are the most significant component.
Not variations in a single parameter model to see if that parameter is modelled correctly.
You are averaging a load of different concepts about how the climate works. That is the error that rgbatduke skewered at June 13, 2013 at 7:20 am…

there are a lot of different models out there, all supposedly built on top of physics, and yet no two of them give anywhere near the same results!

BTW, Nick Stokes: Please don’t think I am criticising you personally. I greatly respect your coming here into the lion’s den. I just have nowhere else to go now I can’t engage at the Guardian (sigh).

ferdberple(@ferdberple)
June 14, 2013 6:56 am

The faulty mathematics of the hockey stick and tree ring calibration could well be what led climate science down a dead end. The hockey stick made climate appear linear over large enough time scales to give some assurance of predictability. By minimizing the signal and amplifying the noise, tree ring calibration made temperatures appear stable over very long time periods, leading climate scientists to believe that linear models would prove well behaved. However, they were built on faulty mathematics. The fault is called “selection by the dependent variable”. It results in a circular argument. It is a reasonably well known statistical error and it is hard to believe the scientists involved were not aware of this, because some of them were formally trained in mathematics.
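The “selection by the dependent variable” error ferdberple names is easy to demonstrate numerically (a toy screening exercise, not a claim about any specific reconstruction): screen pure-noise series for correlation with a target record, average the survivors, and the average “reconstructs” the target out of nothing.

```python
import numpy as np

rng = np.random.default_rng(4)
target = np.cumsum(rng.normal(0.0, 1.0, 100))   # a random-walk "temperature" record
proxies = rng.normal(0.0, 1.0, (5000, 100))     # 5000 series of pure white noise

# Keep only the noise series that happen to correlate with the target...
corrs = np.array([np.corrcoef(p, target)[0, 1] for p in proxies])
survivors = proxies[corrs > 0.25]

# ...and their average now tracks the target, despite containing no signal at all.
recon = survivors.mean(axis=0)
print(len(survivors), f"corr(recon, target) = {np.corrcoef(recon, target)[0, 1]:.2f}")
```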

mogamboguru
June 14, 2013 6:57 am

Duncan says:
June 14, 2013 at 5:11 am
Blimey – some incredible minds here. Genuinely impressive stuff!
I shall now sum up my research in this matter using my limited intellect.
It’s June. I’m cold.
——————————————————————————————————-
And I am out of funds, too, because this year’s unnervingly long, cold winter cost me 1000 Euros extra just for heating my home. Out of the window go my summer holidays…
Global warming? I am all for it! But where is it?

June 14, 2013 7:04 am

Mr. Stokes vexatiously persists in maintaining that Professor Brown had criticized my graphs, long after the Professor himself has plainly stated he had criticized not my graphs but the IPCC’s graphs, from one of which I had derived the interval of models’ projections displayed in orange and correctly attributed in the second of the two graphs in the head posting.
Of course it is embarrassing to Mr. Stokes that global warming is not occurring at anything like the predicted rate; and it is still more embarrassing to him that the variance between prediction and reality is now going to be visibly displayed every month. But continuing to lie to the effect that Professor Brown was criticizing my graphs when the Professor has said he was doing no such thing does not impress. Intellectual dishonesty of this kind has become the hallmark of the climate extremists.

Justthinkin
June 14, 2013 7:43 am

Lars said…Only models which have been validated by real data should continue to be used.
I’m a little confused. If you are using real data, then I was taught, and have experienced, that I do not need a model. Right now (8:30 AM MDT), my thermometer outside my south-facing window, in the shade, shows it is +5 C. After a little further checking, yup, it is indeed June 14, 2013, not November, so it is cool out. I do not need a model to tell me that! The only model I need is the one of the F-104 (in 1/48th scale) I helped my then 12-year-old step-sister build, which still hangs in her bedroom. It is no more capable of doing anything other than collecting dust than a climate model is. And the rednekk truck I will use today to get to the lake, pulling the boat I will use for fishing (fingers crossed), is a reality, not a model.
Has everybody forgotten GIGO?

barry
June 14, 2013 7:45 am

The problem is that we are likely dealing with a strange attractor, a fractal distribution, Which implies that regardless of the scale, the variability will appear the same.

Climate, in loose terms, is the average of the variability.

What this means mathematically is that there is no time scale that will prove satisfactory. There is no time scale at which you can expect the signal to emerge from the noise, because the noise is not noise. It is chaos. The system will continue to diverge, no matter if you collect data for 100, 1000, 1 million, 1 billion years.

By that reckoning, the seasons should be indistinguishable.

The best that can be hoped for in our current understanding is to look for patterns in how the system orbits its attractors. This behavior may give some degree of cyclical predictability, or not, depending on the motion of the attractors. We use this approach to calculate the ocean tides with a high degree of precision, even though the underlying physics is chaotic.

There is no reason to presume that, given an ever increasing forcing, climate should be cyclical. On geological time scales stretching to hundreds of millions of years, there is no cyclical behaviour. There is no reason to expect it on every time scale. The cyclical, or oscillating, processes we are sure of (ENSO, the solar cycle on a multi-decadal scale) are the variability within the climate system. You appear to be arguing that the world’s climate has oscillated roughly evenly around a mean for the length of its existence. Surely you know that this is wrong.

Climate science on the other hand has ignored the cyclical behavior of climate and instead attempted to use a linear approximation of a non-linear system. And is now confused because the linear projections are diverging from observation. Yet this divergence is guaranteed as a result of the underlying chaotic time series.
I'm fairly confident 'climate science', which discusses the four seasons, is aware of cyclical behaviour. Weather is chaos, climate is more predictable. The millennial reconstructions don't have cyclical patterns, but they do have fluctuations. We now have an ever-increasing forcing agent, so the question is not whether the global climate will change, but by how much. That is where the discussion of supposedly diverging trends is centred.
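The tidal example raised above is worth making concrete: practical tide prediction sums a handful of astronomical harmonic constituents, sidestepping the chaotic fluid dynamics entirely. A bare-bones sketch, with made-up constituent amplitudes and phases (illustrative only, not a real tide table):

```python
import math

# (amplitude in m, period in hours, phase in radians) -- invented
# values, loosely in the spirit of the M2/S2/K1 constituents.
constituents = [
    (1.20, 12.42, 0.0),   # principal lunar semidiurnal (M2-like)
    (0.40, 12.00, 1.0),   # principal solar semidiurnal (S2-like)
    (0.25, 23.93, 2.0),   # lunisolar diurnal (K1-like)
]

def tide_height(t_hours):
    """Harmonic synthesis: one cosine term per constituent."""
    return sum(a * math.cos(2 * math.pi * t_hours / p + ph)
               for a, p, ph in constituents)

for t in range(0, 25, 6):
    print(f"t = {t:2d} h: height = {tide_height(t):+.2f} m")
```

The point is that the predictable part of a chaotic system can sometimes be captured by a few fixed cycles, which is exactly the kind of structure the commenters are debating.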

Werner Brozek
June 14, 2013 8:06 am

Another strong el Niño could – at least temporarily – bring the long period without warming to an end.
That is true. It will certainly not be CO2 that brings the long period without warming to an end. Look at the following graph for RSS.
The area on the left that is below the green flat slope line needs a 1998 or 2010 El Nino to counter it. Any El Nino that is less strong will merely move the start time for a flat slope for RSS from December 1996 towards December 1997.

juan slayton
June 14, 2013 9:15 am

Monckton of Brenchley:
Though correlation indeed does not imply causation, absence of correlation necessarily implies absence of causation.
You have me puzzling on this one. If true, then RSA encryption should be impossible. The input causes the output, but as I understand it (probably incorrectly) it is next to impossible to find a correlation between the two. I don’t immediately see how your statement is a logical necessity.
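Juan’s RSA intuition can be checked numerically with a toy scrambling function standing in for real encryption (an illustration only, not actual RSA): the output is wholly caused by the input, yet the linear (Pearson) correlation between them is near zero, which suggests the dictum only holds for a suitably broad notion of “correlation”.

```python
import math

def toy_scramble(x):
    """Deterministic scrambling map on [0, 1): the output is caused
    entirely by the input, but varies wildly under tiny input changes."""
    return (x * 997.0) % 1.0

def pearson(xs, ys):
    """Plain Pearson linear correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

xs = [i / 10000 for i in range(10000)]
ys = [toy_scramble(x) for x in xs]
print(pearson(xs, ys))  # close to zero despite perfect determinism
```

Whether this refutes the dictum depends, as Jim Ryan says, on what one is willing to count as “correlation”; linear correlation is clearly not enough.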

Jim Ryan
June 14, 2013 9:38 am

Juan, it would be the encryption algorithm that causes the output, wouldn’t it? Inspecting these two, you could discover a correlation to the output.
I think Monckton’s point holds. Consider two kinds of event which are uncorrelated. What would you take as evidence that “in spite of complete lack of correlation, events of type A cause events of type B”? I don’t think anything would count as evidence, do you? I can’t imagine a possible world in which there is such evidence. The meaning of “causation” and “complete lack of correlation” just don’t overlap. So, I would conclude that absence of correlation necessarily implies absence of causation.

Bart
June 14, 2013 9:49 am

juan slayton says:
June 14, 2013 at 9:15 am
“…it is next to impossible to find a correlation between the two.”
You generally do not have “the two”, just the one, the output.

Alpha Tango
June 14, 2013 9:50 am

All those little adjustments upwards in recent history have come back to haunt the alarmists. The temperatures keep on failing to rise, so they have to keep on adjusting just to keep the trend from going flat, hehe

June 14, 2013 10:04 am

For those with an interest: several months ago the University of Kentucky hosted a forum on climate change with three excellent speakers, all self-described conservatives. Liberals in attendance reported that they came to better understand that there are thoughtful conservative perspectives on, and solutions to, climate change, allowing for a broadened public discussion. Conservatives in attendance learned the same thing. You can watch the recording of this event at http://bit.ly/135gvNa. The starting time for each speaker is noted on that page, so you can listen to the speakers of greatest interest to you.

Beta Blocker
June 14, 2013 10:10 am

ferdberple says: June 14, 2013 at 6:56 am “The faulty mathematics of the hockey stick and tree ring calibration could well be what led climate science down a dead end. The hockey stick made climate appear linear over large enough time scales to give some assurance of predictability. …….”

The hockey stick is primarily an AGW industry marketing tool created by Michael Mann, Limited Liability Climatologist (LLC), in response to a pressing market need for a scientific-looking analysis product which eliminates the Medieval Warm Period.
But do the climate modelers take the hockey stick seriously enough to incorporate its purported historical data into their hindcasts and/or their predictions, either directly or indirectly? Perhaps someone can tell us whether they do or they don’t.
In any case, whatever happens with the future trend in global mean temperature — up, down, or flat — the climate science community as a whole will never abandon its AGW dogma.
The great majority of climate scientists — 80%, 90%, 97%, whatever percentage it actually is — will continue with “It’s the CO2, and nothing else but the CO2, so help us God”, regardless of how convoluted the explanations must become to support that narrative.

William Astley
June 14, 2013 10:11 am

I am trying, just for fun, as a kind of game, to imagine how the politicians, public, and warmists would react to global cooling. It is curiously difficult to imagine the scenario of global cooling after 20 years of nonstop media discussion, scientific papers, IPCC reports, yearly climate conferences, and books all pushing global warming as a crisis.
To imagine global cooling, it seems it is necessary to pretend, or try to imagine, that the warming of the last 70 years had nothing to do with the increase in atmospheric CO2. Try to imagine that the warming was 100% due to solar magnetic cycle changes. (That makes it possible for the warming to be reversible.) Got that picture? Now imagine the onset of significant cooling, back to 1850s climate. The cooling will be significant and rapid, occurring over roughly 5 years. Can you picture that change?
http://www.solen.info/solar/images/comparison_recent_cycles.png
Will the public request a scientific explanation for the onset of significant planetary cooling? Will the media start to interview the so-called ‘skeptics’? Will the media connect the sudden slowdown of the solar magnetic cycle with the planetary cooling? Will the media ask why no one noticed that there are cycles of warming and cooling in the paleo climate record that correlate with solar magnetic cycle changes? The warming and cooling cycles are clearly evident. There are peer-reviewed papers that connected past solar magnetic cycle changes with the warming and cooling cycles. How is it possible that this evidence was ignored? When there were 17 years without warming, why did no one relook at the theory?
How long will the public accept massive subsidies for scam green energy if there is unequivocal evidence the planet is cooling? Add a stock market crash and a currency crisis to the picture.
Greenland ice temperature, last 11,000 years determined from ice core analysis, Richard Alley’s paper.
http://www.climate4you.com/images/GISP2%20TemperatureSince10700%20BP%20with%20CO2%20from%20EPICA%20DomeC.gif
http://en.wikipedia.org/wiki/Little_Ice_Age
Little Ice Age
The Little Ice Age (LIA) was a period of cooling that occurred after the Medieval Warm Period (Medieval Climate Optimum).[1] While it was not a true ice age, the term was introduced into the scientific literature by François E. Matthes in 1939.[2] It has been conventionally defined as a period extending from the 16th to the 19th centuries,[3][4][5] or alternatively, from about 1350 to about 1850,[6]….
Europe/North America
…The population of Iceland fell by half, but this was perhaps caused by fluorosis after the eruption of the volcano Laki in 1783.[20] Iceland also suffered failures of cereal crops, and people moved away from a grain-based diet.[21] The Norse colonies in Greenland starved and vanished (by the early 15th century), as crops failed and livestock … Hubert Lamb said that in many years, “snowfall was much heavier …” Crop practices throughout Europe had to be altered to adapt to the shortened, less reliable growing season, and there were many years of dearth and famine (such as the Great Famine of 1315–1317, although this may have been before the LIA proper).[25] According to Elizabeth Ewan and Janay Nugent, “Famines in France 1693–94, Norway 1695–96 and Sweden 1696–97 claimed roughly 10% of the population of each country. In Estonia and Finland in 1696–97, losses have been estimated at a fifth and a third of the national populations, respectively.”[26] Viticulture disappeared from some northern regions. Violent storms caused serious flooding and loss of life. Some of these resulted in permanent loss of large areas of land from the Danish, German and Dutch coasts.[24] … Historian Wolfgang Behringer has linked intensive witch-hunting episodes in Europe to agricultural failures during the Little Ice Age.[36]
Comment:
As the planet has suddenly started to cool, I would assume GCR now again modulates planetary cloud cover. We certainly appear to live in interesting times.
http://ocean.dmi.dk/arctic/meant80n.uk.php
http://nsidc.org/data/seaice_index/images/daily_images/S_timeseries.png

Richard Day
June 14, 2013 10:13 am

Hypothesis: 17y 4m > 17y.
Hey warmists, lmao.

Reg Nelson
June 14, 2013 10:13 am

Accuracy was never the goal of climate models — there’s no money in that. Scientists were forced to “Chicken Little” the results to try and spur action by governments. Seventeen years later and the sky hasn’t fallen. The new “Chicken Little” meme is “Extreme Climate Events”.

Gail Combs
June 14, 2013 10:42 am

Steven says:
June 13, 2013 at 4:36 am
I keep seeing these graphs with linear progressions. Seriously. I mean seriously. Since when is weather/climate a linear behavorist? …. I am glad someone has the tolerance to deal with these idiots. I certainly don’t.
>>>>>>>>>>>>>>>>>>>
Monckton and others use the assumptions made by the Warmists, like linear behavior and use their much abused/fudged data sets and STILL win the scientific debate. No wonder the Climastrologists refused to debate knowledgeable people or even entertain questions about warming from the lay audience. Only by continually moving the goal posts and silencing any and all questions can they keep the Hoax going.

george e. smith
June 14, 2013 11:11 am

“””””…..ferdberple says:
June 14, 2013 at 6:19 am
Nick Stokes says:
June 14, 2013 at 1:05 am
As to averaging models, no, I don’t condemn it. It has been the practice since the beginning, and for good reason. As I said above, models generate weather, from which we try to discern climate. In reality, we just have to wait for long-term averages and patterns to emerge.
============
There is no good reason to average chaos. It is a mathematical nonsense to do so because the law of large numbers does not apply to chaotic time series. There is no mean around which the data can be expected to converge………”””””””””
Averaging is a quite well defined, and quite fictitious process, that we simply made up in our heads; like all mathematics. It’s over half a century, since I last had any formal instruction in mathematics; but I do have a degree in it, so I vaguely recollect how it can sometimes be quite pedantic, in its exact wording.
But in layperson lingo, it is quite simple. You have a set of numbers; hopefully each of them expressed in the same number system; binary/octal/decimal/whatever.
You add all of the numbers together, using the rules for addition that apply to whatever branch of arithmetic you are using, and then you divide the total by the number of original input numbers you started with; the result is called the “average”. Some may use the word “mean” as having the same meaning; but I prefer to be cautious, and not assume that “mean” and “average” are exactly the same thing.
So that is what “average” is. Now notice, I said nothing about the original numbers, other than they all belong to the same number system. There is no assumption that the numbers are anything other than some numbers, and are quite independent of each other.
No matter, the definition of “average” doesn’t assume any connections, real or imagined, between the numbers. There also is no assumption that the “average” has ANY meaning whatsoever. It simply is the result of applying a well defined algorithm, to a set of numbers.
So it works for the money amount on your pay check, each pay interval, or for the telephone numbers in your local phone book, or for the number of “animals” (say larger than an ant) per square meter of the earth surface (if you want to bother checking the number in your yard.)
Or it also works for the number you get if you read the thermometer once per day, or once per hour, outside your back door.
In all cases, it has NO meaning, other than fitting the defined function “average”, that we made up.
If you sit down in your back yard, mark out a square meter, and then count the number of larger-than-ant-sized animals in that area, you are not likely to count a number that equals the global average value. Likewise, whatever the source of your set of numbers, you aren’t likely to ever find that average number wherever you got your set from. It’s not a “real” number; pure fiction, as distinct from the numbers you read off your back-door thermometer, which could be classified as “data”.
Averages, are not “data”; they are the defined result of applying a made up algorithm to a set of numbers; ANY set of numbers drawn from a single number system, and the only meaning they have, is that they are the “average”.
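For the record, the algorithm Mr. Smith describes really is that short, and it indeed assumes nothing about what the numbers mean (a minimal sketch):

```python
def average(numbers):
    """Sum the inputs and divide by their count -- nothing more.

    The definition makes no assumption about where the numbers came
    from or whether the result 'means' anything."""
    return sum(numbers) / len(numbers)

# Works equally well for thermometer readings or phone-book entries:
print(average([13.1, 14.2, 12.7]))   # back-door thermometer, deg C
print(average([5551234, 5559876]))   # telephone numbers
```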

jai mitchell
June 14, 2013 11:22 am

@Bart
Bart says:
June 13, 2013 at 3:51 pm
jai mitchell says:
June 13, 2013 at 3:41 pm
“However, the change in temperatures during the last 5 decades are not based on changes in the sun’s intensity since that effect is pretty much instantaneous.”
Sigh… Just another guy who does not understand the concept of frequency response.
————-
Bart,
Your link simply says that the response can be delayed by up to 90 degrees. Since the period of the cycle is 11 years, a 90-degree lag is about 2.75 years.
The average over the entire cycle has not changed significantly over the last 50 years. It sounds like you don’t understand the question.
I will restate it.
If the LIA was caused solely by solar activity (and not also by an abnormal increase in volcanic activity), then the amount of warming since then would cause a significant change in the cycle of temperatures, following the current solar cycle every 6 years or so (from trough to maximum).
Your link only says that this effect is “delayed”, not “averaged”, over the period of the sine wave.

Just Steve
June 14, 2013 11:35 am

http://gizmodo.com/uh-why-is-an-artist-living-inside-a-floating-wooden-eg-512882997
Apparently this genius hasn’t gotten the memo yet……

Just Steve
June 14, 2013 11:37 am

Forgot to h/t Steve Milloy at Junk Science

Nick Stokes(@bilby)
June 14, 2013 11:55 am

M Courtney says:June 14, 2013 at 6:54 am
“That is the error that rgbatduke skewered at June 13, 2013 at 7:20 am…

‘there are a lot of different models out there, all supposedly built on top of physics, and yet no two of them give anywhere near the same results!’

That’s not “skewered”. It’s nonsense. The Earth “uses the same physics” and famously gets different results, day after day. The physics the models use is clearly stated, and differs very little. The same model will produce different weather with a small change in initial conditions (the butterfly effect).

June 14, 2013 11:58 am

Solar magnetic dipole is finally on the move
http://www.vukcevic.talktalk.net/LFC6.htm

RCSaumarez
June 14, 2013 12:02 pm

@rgbatduke
One thing in your first post that I missed is your reference to Taylor’s theorem, which is a really important point.
Given small intervals, Taylor’s theorem allows one to linearise a system by ignoring higher derivatives. Normally we do this knowing that it is an approximation, and compute new results as we extend the interval from the initial condition, taking care to maintain stability. As the interval increases, one needs an increasing number of higher-order terms to describe the system. However, in an observed system such as temperature, we have difficulty extracting the first derivative, let alone the higher derivatives. Hence we use linear trends, because we can’t measure the signal accurately enough to do anything else.
This impinges on averaging model results. If we have several models, their outputs at T+Dt could be identical, and we could say the models were good. However, the mechanisms could differ, and so could the higher-order derivatives at T=0. The models have been calibrated over a short period so that they conform to a data set. When one averages the “output”, one is, by implication, also averaging the initial derivatives, which seems highly questionable. As time increases, the results of the models will depend increasingly on the higher derivatives at the initial conditions, and they will then diverge. One could say that the models’ first-order term is reasonably correct, but by averaging one is also saying that the higher derivatives don’t matter.
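The point about agreeing early and diverging later can be shown numerically (a toy sketch; the two quadratic “models” below are invented for illustration): two trajectories that share the same value and first derivative at t = 0 but differ in curvature diverge over time, and their average reflects neither.

```python
# Two toy 'models': identical value and slope at t = 0, but with
# opposite second derivatives (curvature).
def model_a(t):
    return 0.1 * t + 0.02 * t ** 2

def model_b(t):
    return 0.1 * t - 0.02 * t ** 2

for t in (1, 5, 20):
    a, b = model_a(t), model_b(t)
    avg = (a + b) / 2
    print(f"t={t:2d}: A={a:7.2f}  B={b:7.2f}  avg={avg:7.2f}  spread={a - b:7.2f}")
# At t=1 the models nearly agree; by t=20 they differ by 16, while
# the average collapses to the straight line 0.1*t -- the higher-order
# behaviour of both models has been averaged away.
```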

R. de Haan
June 14, 2013 12:07 pm

If it isn’t CO2 heating up the planet, it is coal dust blown from a moving train polluting surface waters, and so it goes. The US $40-billion-a-year coal industry fights for survival while the green apparatchiks want to kill it: http://www.nytimes.com/2013/06/15/business/energy-environment/a-fight-over-coal-exports-and-the-industrys-future.html?_r=0&hp=&adxnnl=1&adxnnlx=1371236502-ynr7hQjeIpB7QrZ2jnxtEQ

Nick Stokes(@bilby)
June 14, 2013 12:09 pm

Monckton of Brenchley says:
June 14, 2013 at 7:04 am
“Mr. Stokes vexatiously persists in maintaining that Professor Brown had criticized my graphs, long after the Professor himself has plainly stated he had criticized not my graphs but the IPCC’s graphs”

The Professor plainly stated what graphs he was criticising:
“This is reflected in the graphs Monckton publishes above, where …”
He seems to have been under the impression that they are IPCC graphs, but they aren’t, are they?
His criticisms are quite specific. No IPCC graphs have been nominated which have the kind of statistics that he criticises. Your graphs do.

george e. smith
June 14, 2013 12:52 pm

I often read these threads from the bottom up, so I see the recent comments, and can go back up to see what inspired them.
So I finally got to the original post of rgbatduke, that many had referenced.
So now I know, that I made a correct decision, when I decided to forgo the pleasures of rigorous quantum mechanics; and launch into a career in industry instead. Even so, starting in electronics with a degree in Physics and Maths, instead of an EE degree, made me already a bit of an oddball.
But I also remember when I got dissatisfied with accepting that the Voltage gain for a Pentode stage was simply “gm. Rl” and I figured, I should be able to start from the actual electrode geometries inside a triode or pentode, and solve the electrostatic field equations, to figure out where all the electrons would go, so I would have a more accurate model of the circuit behavior. And this was before PCs and Spice. Well I also remember, when I decided on the total idiocy of that venture, and consigned it to the circular file.
So Prof. Robert demonstrated why sometimes, too much accuracy is less than useless, if you can’t actually use the result to solve real problems. Well I eventually accepted that Vg = gm.Rl is also good enough for both bipolar and MOS transistors too, much of the time. Well, you eventually accept that negative feedback designs are even better, and can make the active devices almost irrelevant.
It is good if your Physics can be rendered sufficiently real, so you might derive Robert’s carbon spectrum to advance atomic modeling capability, for a better understanding of what we think matter is; but no, it isn’t the way to predict the weather next week.
A recently acquired PhD physicist friend, who is currently boning up on QM at Stanford; mostly as an anti-Alzheimer’s brain stimulus, told me directly, that QM can only mess things up, more than they currently are; well unless of course, you need it.
Thanks Robert..

June 14, 2013 12:55 pm

Nick Stokes says (June 14, 2013 at 11:55 am): “The Earth “uses the same physics” and famously gets different results, day after day.”
Different from what? AFAIK we only have one Earth.

bob droege
June 14, 2013 1:09 pm

One thing about the uncertainty in the trend since Jan 1996: the trend could be as low as −0.029 C per decade or as high as 0.207 C per decade, but it is most likely around 0.089 C per decade.
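For readers who want to see where numbers like these come from, the slope and its uncertainty fall out of an ordinary least-squares fit. A minimal sketch on synthetic monthly anomalies (the trend and noise level below are made up for illustration; this is not the HadCRUT4 or RSS series):

```python
import math
import random

def ols_trend(t, y):
    """Ordinary least-squares slope and its standard error.

    t: time (here in decades), y: anomalies in C. Returns (slope, se)."""
    n = len(t)
    tbar = sum(t) / n
    ybar = sum(y) / n
    sxx = sum((ti - tbar) ** 2 for ti in t)
    sxy = sum((ti - tbar) * (yi - ybar) for ti, yi in zip(t, y))
    slope = sxy / sxx
    intercept = ybar - slope * tbar
    resid = [yi - (intercept + slope * ti) for ti, yi in zip(t, y)]
    s2 = sum(r * r for r in resid) / (n - 2)   # residual variance
    se = math.sqrt(s2 / sxx)
    return slope, se

random.seed(0)
months = 208                                   # 17 years 4 months
t = [m / 120 for m in range(months)]           # time in decades
y = [0.089 * ti + random.gauss(0, 0.12) for ti in t]   # synthetic data
slope, se = ols_trend(t, y)
print(f"trend = {slope:.3f} C/decade, 2-sigma = {2 * se:.3f}")
# If the band [slope - 2*se, slope + 2*se] includes zero, the trend
# is statistically indistinguishable from zero at roughly 95%.
```

Note this simple formula treats the residuals as independent; serially correlated monthly data widens the true uncertainty further.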

Patrick
June 14, 2013 1:18 pm

June 14, 2013 at 12:55 pm
“Different from what? AFAIK we only have one Earth.”
Certainly only one Earth in my world however some may live multiple realities.

jai mitchell
June 14, 2013 1:41 pm

Even if we have had no significant warming in 17 years 4 months, we still have 0.28 C average warming in both UAH and HadCRUT4 over 20 years (equaling a 1.4 C increase in 100 years, and a 2.1 C increase by 2100 from 1940 levels).

Patrick
June 14, 2013 1:43 pm

dbstealey(@dbstealey)
June 14, 2013 2:00 pm

Patrick,
True. And the ‘lower tropo’ is cherry-picked. Global surface temps are the relevant metric. See here.

Margaret Smith
June 14, 2013 2:04 pm

rgbatduke says:
“snarl of models”
Excellent collective noun!

M Courtney
June 14, 2013 2:07 pm

ferdberple said quite a lot at June 14, 2013 at 6:19 am about chaos meaning that models couldn’t be aggregated, notably:

There is no good reason to average chaos

Nick Stokes replied at June 14, 2013 at 6:33 am

This would mean that you could never speak of any weather average. But we do that all the time, and find it useful. Some folks are overly dogmatic about chaos.

Which sounded reasonable. I took Nick Stokes at his word, but then he says…

The Earth “uses the same physics” and famously gets different results, day after day. The physics models use is clearly stated, and differs very little. The same model will produce different weather with a small change in initial conditions (butterfly effect).

So which is it?
Nick Stokes, you are not sounding consistent.

M Courtney
June 14, 2013 2:15 pm

Consistency
Nick Stokes says: at June 14, 2013 at 12:09 pm in reply to Monckton of Brenchley, June 14, 2013 at 7:04 am:

His criticisms are quite specific. No IPCC graphs have been nominated which have the kind of statistics that he criticises. Your graphs do.

But Monckton is still responding in kind to the leaked AR5 graphs. He has to compare apples with apples, even if we don’t like them apples.
I agree the leaked AR5 graphs are rubbish.
But for consistency, will you (Nick Stokes) condemn the IPCC if the published AR5 includes such an average?

jbird
June 14, 2013 2:31 pm

The theory that there is any such thing as a “global temperature” upon which you can calculate anomalies is terribly flawed in and of itself. Simply beginning with such a poorly defined concept will ultimately lead to all kinds of logical errors – the state of climate science today.
There are many good discussions of this. Below is just one of them:

Bart
June 14, 2013 2:35 pm

jai mitchell says:
June 14, 2013 at 11:22 am
No, what it says is gain is distributed over frequency, and you cannot deduce sensitivity to long term excitations based on short term ones without thorough knowledge of the frequency response of the system.
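Bart’s point about frequency response can be made concrete with the simplest possible system, a first-order lag (a toy illustration; the 30-year response time is an assumed number, not a measured climate property): the same forcing amplitude produces very different responses at different frequencies.

```python
import math

def first_order_gain_phase(tau, period):
    """Steady-state gain and phase lag of dy/dt = (x - y)/tau
    when driven by a sinusoid of the given period."""
    w = 2 * math.pi / period
    gain = 1 / math.sqrt(1 + (w * tau) ** 2)
    phase_deg = math.degrees(math.atan(w * tau))
    return gain, phase_deg

tau = 30.0  # response time in years -- assumed, purely for illustration
for period in (11.0, 100.0, 1000.0):
    g, p = first_order_gain_phase(tau, period)
    print(f"period {period:6.0f} y: gain {g:.3f}, lag {p:.1f} deg")
# The 11-year cycle is strongly attenuated (and lagged toward 90 deg),
# while slow, sustained forcing passes through at nearly full gain --
# so a small response to the solar cycle says little by itself about
# the response to a long-term change in the mean.
```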

jai mitchell
June 14, 2013 2:55 pm

Bart,
The sinusoidal solar cycle, held at a constant average incidence for 50 years, will not produce long-term warming.
There is a point when extremist contrarianism becomes disinformation; you have completely crossed that line.
The solar function has a period of about 11 years. On average, it has been relatively constant for over 50 years. You cannot infer warming from a relatively constant average solar irradiation, even if it does operate on a sinusoidal function. That is just voodoo science.
Unless you can prove to me that the earth’s response to increased solar activity isn’t felt for over 40 years… I suppose you have a peer-reviewed document that states something to that effect?

jai mitchell
June 14, 2013 2:57 pm

@ Patrick

jai mitchell
June 14, 2013 3:01 pm

DbStealey
you said,
True. And the ‘lower tropo’ is cherry-picked. Global surface temps are the relevant metric. See here.
But your link shows RSS lower-troposphere values, not global surface. If you wanted to use global surface then you should have looked here:
http://www.woodfortrees.org/plot/gistemp/from:1993/plot/gistemp/from:1993/trend/plot/esrl-co2/from:1993/normalise/offset:0.68/plot/esrl-co2/from:1993/normalise/offset:0.68/trend
(Note: the original plot was from 1993, not 1997.9, just before the largest El Niño in recorded history, which you decided to cherry-pick.)

milodonharlani
June 14, 2013 3:04 pm

jai mitchell says:
June 14, 2013 at 2:55 pm
————————————–
As has been commented upon in this blog many times, the UV component of TSI fluctuates by a factor of two, on about the time scale of observed sine wave above & below the trend line of recovery from the LIA in average temperature, with appropriate lag to produce observed PDO & AMO oscillations.

Nick Stokes(@bilby)
June 14, 2013 3:04 pm

M Courtney says: June 14, 2013 at 2:07 pm
“So which is it?”

There is no inconsistency there. Weather outcomes are very sensitive to perturbations; this is reflected in model performance. But long-term climate averages make sense and are universally used in the everyday world.
Fluid mechanics has dealt with this for many years. Turbulence is classic chaotic flow. For over a century it has been dealt with by Reynolds averaging.
“But Monkton is still responding in kind to the leaked AR5 graphs.”
That makes no sense, and he didn’t even talk about model averaging in his post. I’m simply dealing with his ridiculous attempt to pretend that RGB was not talking about the graphs published in this post, when he clearly said that he was.
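The claim that a chaotic system can nonetheless have a well-behaved long-run average is easy to check with the textbook example, the logistic map (this illustrates the statistical point only; it says nothing about whether GCM ensembles are analogous):

```python
def logistic_orbit(r, x0, n, discard=1000):
    """Iterate x -> r*x*(1-x), discard a transient, return the orbit."""
    x = x0
    for _ in range(discard):
        x = r * x * (1 - x)
    orbit = []
    for _ in range(n):
        x = r * x * (1 - x)
        orbit.append(x)
    return orbit

r = 4.0                                   # fully chaotic regime
a = logistic_orbit(r, 0.2, 50000)
b = logistic_orbit(r, 0.2000001, 50000)   # tiny change in start point
# Individual iterates decorrelate completely (butterfly effect)...
print(abs(a[500] - b[500]))
# ...yet the long-run averages agree closely with each other and with
# the mean of the invariant distribution (0.5):
print(sum(a) / len(a), sum(b) / len(b))
```

This is the sense in which "climate" (the average) can be predictable while "weather" (the orbit) is not; whether real climate models share this property is exactly what is in dispute.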

June 14, 2013 3:48 pm

Mr. Stokes continues to lie in his habitual fashion. Professor Brown states quite plainly that it was the IPCC’s graph, reproduced as part of one of my graphs, that he was criticizing.
After you had speculated on who had compiled my graph, which has the words “lordmoncktonfoundation.com” plainly written on it, Professor Brown writes: “Aw, c’mon Nick, you can do better than that. Clearly I was referring to the AR5 ensemble average over climate models, which is pulled from the actual publication IIRC.”
Professor Brown’s criticism is directed at the compilation of an ensemble from models using different code. That is what the IPCC reproduced in its draft of AR5, and that is what I reproduced from AR5, and labelled it as such.
I note that Mr. Stokes is entirely unable to refute what my graph demonstrates: that the models are over-predicting global temperatures. It does not matter whether one takes the upper bound or lower bound of the models’ temperature projections or anywhere in between: the models are predicting that global warming should by now be occurring at a rate that is not evident in observed reality. Get used to it.
The moderators may like to consider whether outright lying on Mr. Stokes’ part is a useful contribution here. It illustrates the intellectual bankruptcy of the paid and unpaid trolls who cling to climate extremism notwithstanding the evidence, but otherwise it is merely vexatious.

dbstealey(@dbstealey)
June 14, 2013 3:53 pm

jai mitchell says:
“…a 2.1C increase in 2100 from 1940 levels.”
And you accuse me of cherry-picking!
Bart is right, you are way out of your depth. Even the über-alarmist NY Times now admits that global warming has stopped. Go argue with them if you don’t like it.

Lars P.
June 14, 2013 4:13 pm

Nick Stokes says:
June 14, 2013 at 3:04 pm
M Courtney says: June 14, 2013 at 2:07 pm
“So which is it?”
There is no inconsistency there. Weather outcomes are very sensitive to perturbations; this is reflected in model performance. But long term climate averages make sense and are universally used in the everyday world.
Fluid mechanics have dealt with this for many years. Turbulence is classic chaotic flow. For over a century it has been dealt with by Reynolds averaging.
“But Monkton is still responding in kind to the leaked AR5 graphs.”
That makes no sense, and he didn’t even talk about model averaging in his post. I’m simply dealing with his ridiculous attempt to pretend that RGB was not talking about the graphs published in this post, when he clearly said that he was.

Nick, you know perfectly well that rgb was addressing the divergence between the models and reality, and between the models themselves. He makes it pretty clear in his post that far too many models which model reality so badly are still in use — models which contradict one another. This is not weather perturbation reflected in model performance; the divergence keeps growing.
Yes, long-term climate averages are universally used; however, this is exactly what rgb shows to be wrong. Averaging dirt does not give good results.
You know perfectly well that the outputs from the IPCC models are exactly as rgb describes them.
And you know perfectly well that you are simply inventing excuses for the divergence.
You also know that it is not climate variance and turbulence that carries the models so far away from reality. The issue is simply that they do not model the current processes correctly, or they miss something. rgb’s post makes perfect sense, and he does not criticise Christopher Monckton’s chart but the majority of models used by current climate science to produce those averages. You know that what he says makes perfect sense; that is why you try to steer the discussion into a collateral diversion.

Nick Stokes(@bilby)
June 14, 2013 5:11 pm

Monckton of Brenchley says: June 14, 2013 at 3:48 pm
Professor Brown states quite plainly that it was the IPCC’s graph, reproduced as part of one of my graphs, that he was criticizing.

“Reproduced as part of”? Here’s how it is described above:
“In answer to Mr. Stokes, the orange region representing the interval of models’ outputs will be found to correspond with the region shown in the spaghetti-graph of models’ projections from 2005-2050 at Fig. 11.33a of the Fifth Assessment Report. The correspondence between my region and that in Fig. 11.33a was explained in detail in an earlier posting. The central projection of 2.33 K/century equivalent that I derived from Fig. 11.33a seems fairly to reflect the models’ output.”
And here is Fig 11.33a. Reproduced? “Seems fairly to reflect”?
But RGB’s criticism was directed at the statistics in Lord Monckton’s graph. Let me quote:
“Note the implicit swindle in this graph — by forming a mean and standard deviation over model projections and then using the mean as a “most likely” projection and the variance as representative of the range of the error, one is treating the differences between the models as if they are uncorrelated random variates causing deviation around a true mean!”
Nowhere in the AR5 Fig 11.33 is a mean and standard deviation created, with variance, treating the difference as if they are uncorrelated random variates etc. Those are Lord M’s statistics.
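The statistical procedure rgb objects to can be made concrete with a small sketch (the trend numbers below are invented for illustration only, not taken from AR5 or any actual model ensemble): forming a mean and standard deviation across structurally different models treats their disagreements as if they were uncorrelated random errors scattered around a true value.

```python
import numpy as np

# Hypothetical "model" trend projections in K/decade.
# Purely illustrative values -- not real CMIP5 output.
model_trends = np.array([0.15, 0.25, 0.35])

# The contested procedure: ensemble mean as "most likely" projection,
# sample standard deviation as the "uncertainty" around it.
ensemble_mean = model_trends.mean()
ensemble_std = model_trends.std(ddof=1)

print(f"ensemble mean: {ensemble_mean:.2f} K/decade")
print(f"ensemble std:  {ensemble_std:.2f} K/decade")

# rgb's point: that standard deviation is a meaningful error bar only if
# the models were independent random draws around a true value. Models
# built from different (and partly shared) code and assumptions are not
# such draws, so the spread does not measure uncertainty about reality.
```

Whether this is a fair description of what was done to produce the orange region is the point in dispute between Mr. Stokes and Lord Monckton; the sketch only shows the arithmetic being argued about.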

jai mitchell