Here’s something really interesting: two comparisons between model ensembles and three well-known global temperature metrics plotted together. The interesting part is what happens in the near present. While the climate models and the measurements start out in sync in 1979, they don’t stay that way as we approach the present.
Here are the trends from 1979 to each end year for HadCRUT, NOAA, and GISTemp, compared with the trend from 16 AOGCMs driven by volcanic forcings:
A second graph, showing 20-year trends, makes the divergence even more pronounced. Lucia Liljegren of The Blackboard produced both of these plots, and she writes:
Note: I show models with volcanic forcings partly out of laziness and partly because the period shown is affected by eruptions of both Pinatubo and El Chichon.
Here are the 20-year trends as a function of end year:
One thing stands out clearly in both graphs: in the past few years the global climate models and the measured global temperatures have been diverging.
Lucia goes on to say:
I want to compare how the observed trends fit into the ±95 range of “all trends for all weather in all models”. For now I’ll stick with the volcano models. I’ll do that tomorrow. With any luck, HadCrut will report, and I can show it with March Data. NOAA reported today.
Coming to the rescue was Blackboard commenter Chad, who did his own plot showing ±95% confidence intervals from the model ensembles against HadCRUT. He found very similar divergence to Lucia’s plots, starting around 2006.
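For readers who want to reproduce the flavor of these plots, here is a minimal sketch in Python, assuming hypothetical arrays of annual global-mean anomalies; it is not Lucia’s or Chad’s actual code. It computes the least-squares trend from 1979 to each end year for an observed series, along with a rough ±95% range from the spread of the individual model-run trends.

# Illustrative sketch only: 'obs' is an observed annual anomaly series and
# 'model_runs' is a (n_runs, n_years) array of model anomalies, both starting in 1979.
import numpy as np

def trend_per_decade(years, anomalies):
    # Ordinary least-squares slope, converted to degrees C per decade.
    return np.polyfit(years, anomalies, 1)[0] * 10.0

def trends_to_each_end_year(years, obs, model_runs, start=1979, min_len=10):
    results = []
    for end in years[years >= start + min_len - 1]:
        m = (years >= start) & (years <= end)
        obs_trend = trend_per_decade(years[m], obs[m])
        run_trends = [trend_per_decade(years[m], run[m]) for run in model_runs]
        lo, hi = np.percentile(run_trends, [2.5, 97.5])  # ~95% range of model trends
        results.append((int(end), obs_trend, float(np.mean(run_trends)), lo, hi))
    return results

The trailing 20-year trends in the second graph are the same calculation with the start year set to the end year minus 19 rather than fixed at 1979.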
So the question becomes: is this the beginning of a new trend, or just short-term climatic noise? Only time will tell for certain. In the meantime it is interesting to watch.



AGW seems to be running up against Saul Alinsky’s 7th Rule for Radicals: “A tactic that drags on too long becomes a drag.” Alinsky, a hero of many in the Green Movement, understood that a political issue, especially one that relies on an emotional response, becomes boring in time to both the radical and those they target. Alinsky should also have warned radicals to reach a political solution quickly, before the “cause” is exposed to full analysis.
MattB (07:36:36) :
“hareynolds and others… there has never been an outright, total, worldwide ban on DDT. If you can’t even get something as basic as that right is it any wonder you don’t get AGW.”
The Stockholm Convention, with 98 signatory nations, banned the use of DDT worldwide effective 2002, with some parts of the world, controversially, still using it for disease-vector control. Nations that needed it were severely limited or couldn’t get it because the reduced supply increased the cost, and the organizations that funded many of the eradication efforts succumbed to political-correctness pressures and refused to fund DDT use.
If one is to be anally literal, your statement would stand. In a pragmatic sense, though, the ban is worldwide and total.
For such a literal-minded person, I’m surprised you are not concerned by the dichotomy between the AGW theories and models, and real-world observation.
This is an excellent document from Lord Monckton: http://scienceandpublicpolicy.org/images/stories/papers/reprint/markey_and_barton_letter.pdf
It’s also funny how he rubs NOAA’s nose in it by pointing to the Santa Rosa station, which has a rating of 5 in the surfacestations project database. Excellent work!
Mike Bryant (08:32:49),
Is this what you’re looking for?: click
DJ: They aren’t forecasts, they are “projections”, and as meteorologists like Anthony can tell you, what makes a “forecast” impressive is not how certain it is, but how successful it is…
Someone above mentioned Rahmstorf (2007). Note that since he was looking at a sixteen-year period, the Pinatubo eruption was ~very~ near an end point of the interval, as was the 2005 El Niño; this combination will inflate the apparent trend. David Stockwell also has some posts on it:
http://landshape.org/enm/rahmstorf-revisited/
and you gotta love that graph!
CO2 Realist (07:53:41) :
On RealClimate.org, Gavin Schmidt argues against industry-standard practices of source-code management, configuration management, and disclosure of code and data. Here’s a salient quote from Schmidt, in a response to comment 89 in the post On Replication:
“My working directories are always a mess – full of dead ends, things that turned out to be irrelevent or that never made it into the paper, or are part of further ongoing projects. Some elements (such a one line unix processing) aren’t written down anywhere. Extracting exactly the part that corresponds to a single paper and documenting it so that it is clear what your conventions are (often unstated) is non-trivial. – gavin]”
If this isn’t a reason to use source code control, documentation and configuration management, I don’t know what is.
—
Thanks for the quote. This is pretty shocking to say the least! Now, this would not be an issue to me if these codes were truly “research codes”. However, the fact of the matter is that the numerical solutions being generated by AOGCMs such as Model E have formed the basis of ** numerous ** scare stories in the media about tipping points, species extinction, ice free poles, etc. And now, with Cap and Trade looming, the influence of these codes and their “solutions” is going to have a crippling effect on our economy.
For this and many more reasons, I think all of the major climate models employed by the government for use in climate prediction studies should undergo a vigorous and thorough review process, where the authors are ** required ** to produce complete documentation of their algorithms, verification and validation procedures for all subroutines, and source code control documents which can trace all changes to the source code. This is the very least these people can do. But, alas, it’s not a very rewarding activity because, as Dr. Schmidt has said before, he’s not paid to document his work, he’s paid to do “science”!
Re DDT: for information, the following is from Wikipedia (http://en.wikipedia.org/wiki/DDT):
Production and use statistics
From 1950 to 1980, when DDT was extensively used in agriculture, more than 40,000 tonnes were used each year worldwide,[7] and it has been estimated that a total of 1.8 million tonnes of DDT have been produced globally since the 1940s.[1] In the U.S., where it was manufactured by Ciba,[8] Montrose Chemical Company and Velsicol Chemical Corporation,[9] production peaked in 1963 at 82,000 tonnes per year.[3] More than 600,000 tonnes (1.35 billion lbs) were applied in the U.S. before the 1972 ban, with usage peaking in 1959 with about 36,000 tonnes applied that year.[10]
Today, 4-5,000 tonnes of DDT are used each year for the control of malaria and visceral leishmaniasis, with India being the largest consumer. India, China, and North Korea are the only countries still producing and exporting it, and production is reportedly on the rise.[11]
I’m not really sure you can call it ‘diverging’ since, except at the 1999 start point, the lines on that graph were never that close anyway.
‘Getting further apart than ever’ would be a more accurate description.
Sam the Skeptic (10:33:39) :
O/T
My eyesight is not what it was, but is there a very small sun-speck at about 12 o’clock?
Well… Your eyesight is fine. There is a bright spot at about 12 and another tiny black spot at about 5 o’clock. The latter has been there for weeks. I don’t have a good explanation for that tiny speck.
“How can a graph be so very wrong?”
The subject is the population of the UK.
http://news.bbc.co.uk/1/hi/magazine/8000402.stm
“The answer, crudely, is that the track record of population projection is abysmal. It borders on being a statistical lottery.”
The same people who spread alarm over AGW also see population as a problem:
http://news.bbc.co.uk/1/hi/sci/tech/4584572.stm
As the old faint praise goes: “You may not do very good work, but you sure are slow!”
In this context, I really don’t care that the PlayStation video-game climate models, or Global Circulation Models if you want to be pedantic, are no good, because I’m absolutely sure that the raw data that goes into them is pure garbage.
Until those who make the experimental observations start complying with the Nyquist sampling criterion, which governs ALL sampled-data systems, these models will always predict nonsense.
Please wake me up when ANY of these GCMs, run backwards, correctly predicts the LIA and the MWP.
A good scientific model ought to be able to reproduce the raw data that was used to create the model.
Ho hum.
George
PS But nice work Lucia.
Reply: “You may not do very good work, but you sure are slow!” George… you’re killing me! ~ charles the moderator
I also see (the lack of growth in) population (of native-born residents of Western democracies) as a problem. It will shortly become an even bigger problem. At some point the crash will affect the already weak economy. The global economy has never previously experienced a decline in market size; the previous declines, the Plague and the Dark Ages, came before a truly global economy existed. What we face is worse from an economic-impact perspective. Even the third world is coming up against fecundity issues; meanwhile, in the more developed areas the die is already cast, and it is too late to turn back from the increasingly steep slippery slope.
Note: the impacts I mentioned will start to hit the developed world over the next 10 years, and elsewhere no later than 2060. The problem is compounded by the retirement and aging of the Baby Boomers.
George E. Smith (11:49:03) :
because I’m absolutely sure that the raw data that goes into them is pure garbage.
I don’t disagree.
Until those who make the experimental observations start complying with the Nyquist sampling criterion, which governs ALL sampled-data systems, these models will always predict nonsense.
I disagree. Actually, it all depends. What are the highest frequencies present in our climate? Most likely the day/night cycle is the highest, or at least, the highest that has any real impact. There may be higher frequencies, but I’ve never seen any discussion that indicates they have significance, which means they won’t cause problems if they get aliased. That said, at what time resolution do GCMs run?
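For the curious, here is a toy illustration of the aliasing issue being argued about; it is purely hypothetical and not taken from any observing network or GCM. A pure 24-hour cycle sampled once every 25 hours (below the Nyquist rate of one sample every 12 hours) shows up in the record as a spurious 25-day oscillation.

# Toy aliasing demo: a diurnal cycle sampled every 25 hours aliases to ~600 hours.
import numpy as np

hours = np.arange(24 * 365)               # one year of hourly "truth"
diurnal = np.sin(2 * np.pi * hours / 24)  # pure 24-hour cycle

step = 25                                 # sampling interval in hours (below Nyquist)
sampled = diurnal[::step]                 # what the under-sampled record sees

# The aliased frequency is |1/24 - 1/25| cycles per hour, i.e. a 600-hour period.
alias_period = 1.0 / abs(1.0 / 24 - 1.0 / step)
print(f"True period: 24 h; apparent period in the sampled record: {alias_period:.0f} h")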
Please wake me up when ANY of these GCMs, run backwards, correctly predicts the LIA and the MWP.
Given that they are likely highly non-linear, this can’t happen, either. An unfortunate consequence of non-linearity, indeed.
Mark
Nasif and Sam: Black speck at 5 o’clock may be a bad pixel in the camera. Leif has mentioned such things before.
CO2 Realist (07:53:41) writes about the lack of software quality control for the GCMs.
I have some experience with a large Fortran model with a similar lack of software quality: the Dutch airport risk-calculation tool. I left the Dutch National Aerospace Lab mostly because this piece of crap software was being used to generate risk contours on a map with actual legal consequences.
The errors in this p.o.s. were staggering. It generated contours that were turned into law in which only 1 in 25 grid cells was correctly calculated; most were randomly wrong, and about 8 in 25 were calculated from undetermined data.
All this happened because one of the data files (surface texture) had two spaces between each figure instead of three! I kid you not!
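To make that failure mode concrete, here is a hypothetical miniature of it in Python (nothing to do with the actual NLR code): a fixed-width reader written for one column spacing silently garbles a file that uses another.

# Hypothetical illustration of a fixed-width parsing failure.
def read_fixed_width(line, width=6, nfields=3):
    # Slice the line into fixed-width fields, as rigid Fortran-style formats do.
    return [line[i * width:(i + 1) * width].strip() for i in range(nfields)]

line_expected = "  1.5   2.0   3.5"   # the spacing the reader was written for
line_actual   = " 1.5  2.0  3.5"      # the spacing actually delivered

print(read_fixed_width(line_expected))  # ['1.5', '2.0', '3.5']    -> correct
print(read_fixed_width(line_actual))    # ['1.5', '2.0  3', '.5']  -> silently wrong

No exception is raised; the numbers are simply wrong, which is exactly how such a bug can end up in legally binding contours.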
Another important bug in this model was in the actual equations, which would lead to an infinite risk density right at the center point of any circle segment of an aircraft’s track.
I assume that the programmer who wrote it put some arbitrary limit in, or that it never actually hit the infinite value because the aircraft tracks did not align exactly with the grid.
When I got there, nobody knew how to visualize the data, and I spent some time building something to produce false-color pictures from the risk-probability-density files. Then I noticed these weird patterns every 500 meters or so, and very bright spots right in the center of an aircraft track.
The moral of the story: you GCM modellers had better show us your code, every single line of it, ALL the data, and the actual equations you’re trying to model, and I’m damn sure we’ll find plenty of bugs in them, some of them major.
The gall of the GCM people, thinking they can program something that big without very tight software quality management, is mind-blowing. The credibility of your model calculations is NULL.
Nasif, thanks for confirming.
Jack — that was what I thought as well. I remember somebody mentioning it.
Read this nonsense
http://news.bbc.co.uk/1/hi/sci/tech/8003060.stm
I would be interested to see the graphs compared with RSS or UAH data, since HadCRUT and GISS have been diverging from the satellite data.
Over the last 32 years, for example, UAH and GISS have diverged by approximately 0.3 degrees. I expect that if the IPCC projections were compared with the more reliable (though not perfect) satellite data, the models’ divergence from reality would be even more apparent.
Replying to…
Aron (12:33:02) :
Read this nonsense
http://news.bbc.co.uk/1/hi/sci/tech/8003060.stm
Apart from the bits about “man made” and “greenhouse gas emissions”, it’s a very good article…;)
Aron (12:33:02) :
Read this nonsense
http://news.bbc.co.uk/1/hi/sci/tech/8003060.stm
Hmm…I wonder what might have contributed to the 1650-1750 mega-drought in West Africa…?
Regarding the discussion of quality control and disclosure of the models:
Frank K. (11:17:51) :
“…I think all of the major climate models employed by the government for use in climate prediction studies should undergo a vigorous and thorough review process, where the authors are ** required ** to produce complete documentation of their algorithms, verification and validation procedures for all subroutines, and source code control documents which can trace all changes to the source code.”
Bart van Deenen (12:30:04) :
“The moral of the story: you GCM modellers had better show us your code, every single line of it, ALL the data, and the actual equations you’re trying to model, and I’m damn sure we’ll find plenty of bugs in them, some of them major… The credibility of your model calculations is NULL.”
I couldn’t agree more with both of you, and my experience is only with commercial business software. I’ve even heard the argument that there’s no need to document code: just read the code. But what if the code is wrong? Then you’re gleaning the logic and intent from incorrectly written code.
With the huge financial, social, and political impacts, the models need, at the very least, to be held to the standards of commercial software. And don’t get me started on garbage in, garbage out; we know from this blog that there are huge issues with data quality.
Frank K and CO2 Realist:
Perhaps I’m stating the obvious about the conclusions to be drawn from the comments on Gavin S.’s work: Gavin appears to be a first-rate garbage collector.
“jack mosevich (12:14:20) :
Nasif and Sam: Black speck at 5 o’clock may be a bad pixel in the camera. Leif has mentioned such things before.”
Yeah, it is a burnt pixel on the detector. It might go away the next time they do a burnout on the detectors, but for now it is a permanent feature. Actually, they are so desperate to see the sunspot number going up that they want to give this burnt pixel a permanent sunspot number. 😉
Ron de Haan (09:19:27) :
Ray (11:01:19) : This is an excellent document from Lord Monckton:
http://scienceandpublicpolicy.org/images/stories/papers/reprint/markey_and_barton_letter.pdf
Excellent letter to the House Committee. Now if only a few would read the 40 pages.
The source of the Santa Rosa graphs was quoted as “Dr. Anthony Watts”.