Climate Models -vs- Climate Reality: diverging or just a dip?

Here’s something really interesting: two comparisons between model ensembles and three well-known global climate metrics, plotted together. The notable part is what happens in the near present: while the climate models and the climate measurements start out in sync in 1979, they don’t stay that way as we approach the present.

Here are the trends from 1979 to “Year” for HadCRUT, NOAA, and GISTEMP, compared with the trend from 16 AOGCMs driven by volcanic forcings:

Figure 1: Trends since 1979, ending in ‘Year’.

A second graph, showing 20-year trends, makes the divergence even more pronounced. Lucia Liljegren of The Blackboard produced both of these, and she writes:

Note: I show models with volcanic forcings partly out of laziness and partly because the period shown is affected by eruptions of both Pinatubo and El Chichon.

Here are the 20-year trends as a function of end year:

Figure 2: Twenty-year trends as a function of end year.

One thing stands out clearly in both graphs: in the past few years the global climate models and the measured global climate reality have been diverging.

Lucia goes on to say:

I want to compare how the observed trends fit into the ±95% range of “all trends for all weather in all models”. For now I’ll stick with the volcano models. I’ll do that tomorrow. With any luck, HadCrut will report and I can show it with March data. NOAA reported today.

Coming to the rescue was Blackboard commenter Chad, who made his own plot showing ±95% confidence intervals from the model ensembles against HadCRUT. His results diverge much as Lucia’s plots do, starting about 2006.

http://scientificprospective.files.wordpress.com/2009/04/hadcrut_models_01.png
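Chad’s plot isn’t reproduced here, but the calculation behind it is simple to sketch. The following is a minimal illustration (synthetic stand-in data, not Chad’s actual code or the real HadCRUT series): fit a trailing trend to each ensemble member, take the 2.5th and 97.5th percentiles across members as the ±95% band, and see where the observed trend falls.

```python
import numpy as np

def trailing_trend(series, months=240):
    """Least-squares slope over the final `months` points, per decade."""
    window = series[-months:]
    x = np.arange(months) / 120.0   # months -> decades
    return np.polyfit(x, window, 1)[0]

rng = np.random.default_rng(0)
# Hypothetical stand-ins: 16 model runs and one observed series of
# monthly anomalies (30 years = 360 months).
model_runs = 0.002 * np.arange(360) + rng.normal(0, 0.1, size=(16, 360))
observed = 0.001 * np.arange(360) + rng.normal(0, 0.1, size=360)

member_trends = np.array([trailing_trend(run) for run in model_runs])
lo, hi = np.percentile(member_trends, [2.5, 97.5])
obs = trailing_trend(observed)
print(f"model 95% range: [{lo:.3f}, {hi:.3f}] per decade; observed: {obs:.3f}")
```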

So the question becomes: is this the beginning of a new trend, or just short-term climatic noise? Only time will tell for certain. In the meantime, it is interesting to watch.

April 16, 2009 4:34 pm

Smokey (16:18:35) :
In the report, Viscount Monckton thoroughly demolishes every claim made by the global warming contingent. Highly recommended! In fact, everyone who reads the report will find new reasons to be skeptical of the climate alarmist argument.
I had always been suspicious of that abrupt, year-long spike in temperatures in 1998. After Lord Monckton’s report, I can easily explain the disparity as a combination of unusual solar activity and the El Niño oscillation.

Arn Riewe
April 16, 2009 4:34 pm

Several have posted about Nat Geo. I agree that it’s become no more than an AGW advocacy site.
One of my favorites from the “HEADS I WIN, TAILS YOU LOSE DEPARTMENT”:
http://news.nationalgeographic.com/news/2006/09/060911-growing-glaciers.html

April 16, 2009 5:11 pm

Hi all,
I have again shamelessly stolen your chart for use on my weblog (hopefully with adequate credit to WUWT–let me know if you want more praise, and I’ll cheerfully give it to you). It’s here with my additional point: http://newsfan.typepad.co.uk/liberals_can_be_skeptics_/2009/04/predictions-vs-observations.html
My point is, let’s play to win. I am a liberal Democrat and I want to change liberal Democratic policy. These figures, along with the other points made on this blog, at Climate Skeptic, by Jennifer Marohasy, and at The Blackboard, strongly indicate that we at the very least have time to go out and collect more and better data: to site temperature stations correctly and fill in the blank spots on the map, to get better proxies, to put anchored radiosonde balloons in the troposphere, and to make sure the satellites are getting and reporting data in the best way possible.
If nothing else: even if global warming is all true (which it obviously isn’t), and even if catastrophe is coming down the road (which I don’t believe for a minute), it is further down the road than the alarmists thought. We have time to do the science right.
Isn’t this something we can push for?

Mike S
April 16, 2009 5:18 pm

DJ – Sorry, but you sound like a victim of revisionism. 2004 was an El Niño year in its latter months, while 2005 was more or less ENSO-neutral. Don’t you just hate it when they go back and do stuff like that? Don’t shoot the messenger; I’m just trying to figure this stuff out myself.

Squidly
April 16, 2009 5:21 pm

CO2 Realist (07:53:41) :
Re: Frank K. (05:29:12) :
“In my opinion, the credibility of many climate models will be close to zero because the code developers (in particular the NASA GISS Model E) do a horrible job documenting and validating their codes (there are some exceptions). ”
This is a sore point with me as well, since I have some background in programming.
On RealClimate.org, Gavin Schmidt argues against industry standard practices of source code management, configuration management, and disclosure of code and data….

You have hit on what I consider one of many extremely important points. As a long-time computer scientist myself, I cannot imagine how one could seriously accept conclusions from code developed so haphazardly. The absence of sound design and construction methods evident in every model I have inspected (Model E, for example) leaves me completely unconvinced of the accuracy and consequence of their results. I simply cannot take seriously a piece of software so poorly written, poorly designed, poorly constructed, poorly managed, and poorly maintained. It is simply non-credible, and bad science in the worst of ways.

Squidly
April 16, 2009 5:37 pm

Dave Middleton (08:27:10) :

The reason that the climate models are so wrong is simple…They fail to properly account for water vapor and clouds. Every model assumes a positive feedback mechanism from water vapor. Yet, there is no empirical or observational evidence to support such a positive feedback mechanism. If anything, the observational evidence supports a negative feedback mechanism…
http://wattsupwiththat.com/2009/03/05/negative-feedback-in-climate-empirical-or-emotional/

This is not the only reason, but it is certainly a very important one nonetheless. A very simple everyday demonstration is to compare deserts to the tropics: which is warmer, and why?
Further, has a runaway heating effect ever been found on Earth? If not, why not?
The answer to all of these questions is quite simple (and you don’t need a computer model): the feedback produced by radiative forcing between CO2 and water vapor is NEGATIVE, not positive. Like almost all natural processes and responses, our climate is dominated by negative feedback, not positive feedback.
“We now conclude this test of the AGW Broadcast System, had this been a real emergency, you would have been instructed to relinquish your hard earned cash and give up your right to breathe.”

BarryW
April 16, 2009 5:48 pm

Mike McMillan (15:36:52) :
Mike, I explained it for Tom here: BarryW (14:16:34):
The trick is to plot the trends based on the ending year, not the start year, so the plotted trend is based on the months prior to that date, not after it. For January 2009, the plotted 20-year trend is based on the preceding 240 months of data (Feb 1989 through Jan 2009).
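In code terms, the convention BarryW describes might look like the sketch below (synthetic fake anomalies, not his actual calculation): each plotted point is a trend fit to the 240 months ending at that date.

```python
import numpy as np

temps = np.random.default_rng(1).normal(0, 0.1, 600).cumsum() / 20  # fake anomalies
months = 240                           # 20-year window
x = np.arange(months) / 120.0          # window time axis, in decades

# Slope of the window *ending* at each month, not starting from it.
end_trends = [np.polyfit(x, temps[end - months:end], 1)[0]
              for end in range(months, len(temps) + 1)]
# end_trends[i] is the 20-year trend for the window ending at month i + months
```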

Thom Scrutchin
April 16, 2009 5:54 pm

I would like to make a suggestion. We should stop using the acronym AGW since it is becoming clearer and clearer that AGW doesn’t exist.
The new acronym should be AGWA for Anthropogenic Global Warming Alarmism. Since that is not a scientific claim, I can appeal to the new consensus to prove that AGWA really does exist.

Philip_B
April 16, 2009 5:54 pm

Lucia, I think the $64K question here is: how does the current climate downturn compare with the last cyclic downturn and the last cyclic upturn?
If this downturn is similar to or smaller than the last downturn and the last upturn, then we are just going back to where we were 60 or so years ago. Not good news, but not particularly bad news either.
Eyeballing the data with volcanos removed (link below) the current downturn looks very similar to the 1940s downturn. The 1940s downturn stopped at around the current level (in terms of the drop in the anomaly) and then was basically flat for a decade or so, before resuming the up trend.
If we repeat the 1940s then temperatures will stay around current levels for a decade or more, which is nothing to worry about. Although the models will be categorically invalidated.
If we see further falls from here, say by more than -0.2C, then the current temperature downturn will look more like the 30 years of cooling temps from 1890.
If we see significantly more than -0.2C then we are into uncharted territory in terms of the (thermometer) temperature record and who knows where we go from there.
http://www.worldclimatereport.com/index.php/2008/12/17/recent-temperature-trends-in-context/
How large can temperature changes get? Well, the study below shows a 22C rise (no mistake there) in temperatures at the start of the current interglacial, around 10,000 years ago.
http://cat.inist.fr/?aModele=afficheN&cpsidt=2934948

KimW
April 16, 2009 6:12 pm

The problem is not just with climate models, but with the refusal to think about what the evidence implies. From an article in Science:
” Overpeck and his colleagues studied sediments beneath Lake Bosumtwi in Ghana that gave an almost year-by-year record of droughts in the area going back 3,000 years. Until now, the instrumental climate record in this region stretched back only 100 years or so. The researchers found a pattern of decades-long droughts like the one that began in the Sahel in the 1960s that killed at least 100,000 people, as well as centuries-long “megadroughts” throughout this long period, with the most recent lasting from 1400 to 1750.
The scientists also described signs of submerged forests that grew around the lake when it dried up for hundreds of years. The tops of some of these tropical trees can still be seen poking up from the lake water. …
… The cause of centuries-long megadroughts is not known, but he said the added burden of climate change could make this kind of drought more devastating. Temperatures in this region are expected to rise by 5 to 10 degrees F (2.77 to 5.55 degrees C) this century, the scientists said, even if there is some curbing of the greenhouse emissions that spur climate change. “We might actually proceed into the future … we could cross a threshold driving the (climate) system into one of those big droughts without even knowing it’s coming,” Overpeck said.”
I am gobsmacked! I mean to say, if there were these ‘megadroughts’ in history before the present, there had to be climate change; so how the heck can he assume there is “man-made climate change” now, rather than a continuing cyclic climate pattern?

wenx
April 16, 2009 6:30 pm

Which curve is reality?
Or which one is closer to reality?
Could anyone tell me that?
Thank you.

Philip_B
April 16, 2009 6:37 pm

A very simple everyday demonstration of this is to compare deserts to tropics. Which is warmer? and why?
The humid tropics are a lot warmer than hot deserts such as the Sahara, by about 10C (annual average). This is due to the water vapour greenhouse effect (in the tropics).
Although, having lived in both the humid tropics and hot desert climates, I know clouds have a much greater cooling effect in hot deserts than the humid tropics.
So, a wetter (more water vapour) world is a warmer world, but a cloudier world is a cooler world.

April 16, 2009 6:40 pm

KimW (18:12:41) :…centuries-long “megadroughts”…climate change could make this kind of drought more devastating…
Wow – AGW could turn the “megadroughts” into “super megadroughts”.
Is anyone else getting tired of hearing this crap?

April 16, 2009 6:42 pm

Continuing on the quality of the models:
Richard Lawson (16:07:10) writes:
“Do you think climate modelers have this many man-hours applied to their code? I think not. Enough said.”
And Squidly (17:21:09) writes:
“As a long time computer scientist myself, I cannot imagine how one could seriously consider conclusions from codes developed so haphazardly… I simply cannot take seriously a piece of software so poorly written, poorly designed, poorly constructed, poorly managed and poorly maintained.”

Looks like many of us are on the same page regarding quality and documentation of the models. Even though I’m not a scientist, the part of the climate debate I do know something about (programming and standard practices) is hopelessly corrupted. Why should I not think the other parts I’m less well versed in are of similarly dubious quality?

Law of Nature
April 16, 2009 7:45 pm

Hi John and Douglas, a very warm “Hello there!” to Anthony
and many thx to Lucia and Chad for the graphs! 🙂
Greetings, reader! 🙂
John Finn (06:38:54) :
Douglas DC (06:21:40) :
I say cooling further. Soon too…
Could you be more specific. What do you mean by “soon”?
Well, I just wanted to comment that what you are seeing here are the integrated long-term trends.
Even if the temperature were to start rising faster, the trends shown on these graphs would still differ from the models for years to come.
In other words, this year’s data affects every 20-year trend whose window includes it, i.e., every trend ending within the next 20 years!
All the best,
LoN

BarryW
April 16, 2009 8:20 pm

CO2 Realist (18:42:37) :
Guys, welcome to the 20th century. Go back to the 1980’s and this is what you got for code quality and development. I heard those sorts of arguments from some of the big defense contractors when monitoring their software development back then; they couldn’t understand why they needed specs, documentation, and testing. They just threw the code and hardware into the integration facility and kept banging on it until it worked (mostly). These guys are just hacking code until they get the answer that “looks right”.

Frank K.
April 16, 2009 9:50 pm

CO2 Realist (18:42:37) :
“Looks like many of us are on the same page regarding quality and documentation of the models. Even though I’m not a scientist, the part of the climate debate I do know something about (programming and standard practices) is hopelessly corrupted. Why should I not think the other parts I’m less well versed in are of similarly dubious quality?”
Before we get off this topic, those who are interested should check out the worst example of climate code documentation – the GISS Model E. You can read about it here:
http://www.giss.nasa.gov/tools/modelE/index.html
For their code “documentation” they reference the following paper:
http://pubs.giss.nasa.gov/abstracts/2006/Schmidt_etal_1.html
Please look at this paper and count the number of equations. I think there are six altogether.
I’ve looked around, and there doesn’t appear to be any place where the GISS researchers are brave enough to list the differential equations they are solving. And of course there is no complete description of their algorithms beyond the cursory descriptions given in the link above. Like many research organizations, GISS feels that a list of possibly relevant papers is enough code documentation.
And then there is the code itself, which can be found here…
http://www.giss.nasa.gov/tools/modelE/modelEsrc/
The FORTRAN here is typical research code: mostly no comments, poor formatting, very few pointers to any documentation, and nearly impossible to follow unless you were one of the original developers.
For another example of GISS coding standards, there is the infamous GISTEMP here:
http://data.giss.nasa.gov/gistemp/sources/
Try to figure out what this code is really doing versus the description given in the “documentation”…

David Ball
April 16, 2009 10:15 pm

George E. Smith, I was wondering if you had the time and inclination to run in the next presidential election, assuming of course that you live in the U.S. You are one of my heroes!!! And a great sense of humor, to boot !!! I was going to add “master debator”, but I have too much class for that, …..

AJ
April 16, 2009 10:28 pm

Great information … I have bookmarked this site and will come back from time to time.

April 16, 2009 10:32 pm

More on model quality:
BarryW (20:20:41) says:
Guys, welcome to the 20th century. Go back to the 1980’s and this is what you got for code quality and development.

Barry, you made me laugh! I was working on IBM mainframes (COBOL, CICS, IMS database) in the mid 80s. I can certainly agree with your statement. How come our “scientists” haven’t progressed to the 21st century?
Frank K. (21:50:45) says:
Try to figure out what this code is really doing versus the description given in the “documentation”…

I’ll give it a go, but I’m sure I could take your word for it! It will be an enlightening exercise.
While I can certainly appreciate all that Lucia, Steve McIntyre, Jeff ID, and others do with the statistics and math, the models fall apart much earlier than that. Try starting with surfacestations.org and the poor quality of the input data.
Seriously, I’m surprised more scientists, mathematicians, statisticians, programmers, and others don’t call B.S. on the models and the claims of 90% certainty. Sheesh!
Oh, and for those that can remember, doing these HTML embedded format codes for italics and the like reminds me of the commands for “pretty printers” on mainframes and WordStar. Oops! Did I just date myself?
While I like WordPress, you’d think they could add a WYSIWYG editor for comments.

Squidly
April 17, 2009 12:22 am

Philip_B (18:37:10) :
A very simple everyday demonstration of this is to compare deserts to tropics. Which is warmer? and why?
The humid tropics are a lot warmer than hot deserts such as the Sahara, by about 10C (annual average). This is due to the water vapour greenhouse effect (in the tropics).
Although, having lived in both the humid tropics and hot desert climates, I know clouds have a much greater cooling effect in hot deserts than the humid tropics.
So, a wetter (more water vapour) world is a warmer world, but a cloudier world is a cooler world.

Yes, the average temperature in the tropics stays higher because the water vapor holds the heat; however, Tmax does not reach nearly as high in the tropics as it does in the desert, because of the negative feedback of the water vapor. Several papers (many listed in other threads on WUWT) clearly show this to be true.

anna v
April 17, 2009 1:17 am

KimW (18:12:41) :
from your quote:
The problem is not just with climate models, but the refusal to think on what evidence implies. From an article in Science,
” Overpeck and his colleagues studied sediments beneath Lake Bosumtwi in Ghana that gave an almost year-by-year record of droughts in the area going back 3,000 years. Until now, the instrumental climate record in this region stretched back only 100 years or so. The researchers found a pattern of decades-long droughts like the one that began in the Sahel in the 1960s that killed at least 100,000 people, as well as centuries-long “megadroughts” throughout this long period, with the most recent lasting from 1400 to 1750.

Bold mine, to point out that the greatest drought happened during what was the little ice age for the rest of the world.
Actually, if one looks at the ice core records, desertification happened during the deep cold (note the dust levels, red curve):
http://upload.wikimedia.org/wikipedia/commons/thumb/c/c2/Vostok-ice-core-petit.png/400px-Vostok-ice-core-petit.png
This is because a lot of the water is locked up in the ice sheets, the sea level is very low, and there are no rains.
“We might actually proceed into the future … we could cross a threshold driving the (climate) system into one of those big droughts without even knowing it’s coming,” Overpeck said.
Not unless he is prophesying the imminence of the next ice age.
This is sophistry in the extreme.

masonmart
April 17, 2009 1:21 am

Am I missing something? I’ve worked in the Sahara and Arabian deserts and in Malaysia/Indonesia in the tropics, and the deserts are much hotter. Last year I took a walk in 55C. The tropics are typically 20C below that. The tropics can feel uncomfortable because of the humidity, but a dry 55C is still pretty shrivelling.

E.M.Smith
Editor
April 17, 2009 1:57 am

George E. Smith (13:56:13) : From all the postings I have seen here; GISStemp inputs are based on daily max/min readings of what; Anomalies; or temperatures ?
The bulk of the GIStemp input comes as an already computed MONTHLY average temperature. NOAA has taken MIN / MAX for each day and turned it into a monthly average single data point for each month for each year and it is that which is taken in by GIStemp and {averaged, homogenized, blended, chopped, formed, anomalized, and extruded} as pasteurized processed data food product anomaly maps.
See: http://chiefio.wordpress.com/2009/02/26/gistemp-step0-input-files/
This is only for the land data, which is too sparse in the time domain (not enough old thermometers) and too sparse in the space domain (most thermometers in the U.S.A., Europe, and Japan; not much elsewhere). See:
http://chiefio.wordpress.com/2009/02/24/so_many_thermometers_so_little_time/
For the ocean and arctic coverage, the sampling is even worse. The anomaly maps are adjusted in step4_5 by applying an anomaly map from elsewhere that uses satellite derived anomalies and simulations based on ice coverage estimates. From the bottom of:
http://chiefio.wordpress.com/2009/02/25/inside-gistemp-an-overview/
In STEP4_5 there are a couple more bits of data downloaded. gistemp.txt lists these as:
http://www.hadobs.org (HadISST1: 1870-present)
http://ftp.emc.ncep.noaa.gov cmb/sst/oimonth_v2 (Reynolds: 11/1981-present)
One of these (oimonth_v2) is a Sea Surface Temperature anomaly map. Some folks have asserted this means that GIStemp uses satellite data, since this anomaly map is used. Yes, the map is made from a combination of surface records and satellite data, but by the time it gets here it is just a grid of 1-degree cells (lat/long) with a single anomaly number per month. Not exactly what I’d call “satellite data”. More like a “satellite-derived anomaly map” product.

Which I critique in:
http://chiefio.wordpress.com/2009/03/05/illudium/
And, of course, there is the whole issue of False Precision that runs through GIStemp. It starts with the NOAA data, which are presented with a precision in the 1/100 F place yet are derived from readings that were only recorded to the whole degree F (and are NOT oversampled, each day being unique). So exactly where do you get any accuracy at all in the 1/10ths place when there is no data in the 1/10ths place? Hmmm?
GIStemp then converts this to 1/10 C and runs with the false precision as I discuss in:
http://chiefio.wordpress.com/2009/03/05/mr-mcguire-would-not-approve/
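To make the precision point concrete, here is a toy version of the chain described above (hypothetical numbers, not the actual NOAA or GIStemp code): whole-degree-F min/max readings become a daily midpoint, a month of midpoints becomes one mean quoted to 1/100 F, and that is converted to a 1/10 C value.

```python
import numpy as np

rng = np.random.default_rng(2)
tmin = np.round(rng.normal(50, 3, 31))   # daily minima, recorded to whole deg F
tmax = np.round(rng.normal(70, 3, 31))   # daily maxima, recorded to whole deg F

daily_mid = (tmin + tmax) / 2            # standard (min + max) / 2 daily value
monthly_f = daily_mid.mean()             # quoted to 1/100 F, e.g. 60.18
monthly_c = round((monthly_f - 32) * 5 / 9, 1)   # GIStemp-style 1/10 C value
print(f"{monthly_f:.2f} F -> {monthly_c} C, from whole-degree readings")
```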
So we end up with folks panicking over 1/10 C “trends” and what import this has for the world when it is all fantasy precision and trash values based on inadequate data sources thoroughly masticated.
Don’t even mention Antarctica; as the Steig paper demonstrates, that place is a textbook case of sampling “genius”. Was it 12 stations, or some similar number, that were used to concoct his analysis?
Or the Arctic, which is based on anomaly maps from interpolations from simulations based on estimates of ice cover… (NO exaggeration.)
Just as a wild guess, I would put the spatial undersampling at 4-5 orders of magnitude at a minimum.
Given that the GIStemp sea “temperature” input is a 1 degree cell anomaly map, you may be off by a few more orders of magnitude… and even much of THAT may be made up. From:
http://chiefio.wordpress.com/2009/02/28/hansen-global-surface-air-temps-1995/
Yup, next paragraph. They talk about “Empirical Orthogonal Functions” used to fill in some South Pacific data, but it uses “Optimal Interpolation”, which sure sounds like they are just cooking each datapoint independently… From here on out, when they use EOF data they are talking about this synthetic data. It also looks like they use 1982-1993 base years to create the offsets that are used to cook the data for 1950-81. Wonder if any major ocean patterns were different in those two time periods, and just what surface (ship/buoy) readings were used to make the Sea Surface Temp reconstructions? They do say: “The SST field reconstructed from these spatial and temporal modes is confined to 59 deg. N – 45 deg. S because of limited in situ data at higher latitudes.” OK, got it. You are making up data based on what you hope are decent guesses. But in GIStemp “nearby” can be 1000 km away with no consideration for climate differences, so I’m concerned that the same quality of care is being given here.
So you have a huge gap from 59 N to 90 N and another from 45 S to 90 S (when you get to fill in some Antarctic data, via more massaging from too few sites).
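(Aside, for readers unfamiliar with the jargon in the quoted paragraph: an EOF reconstruction decomposes a gridded field into spatial modes and fills gaps by fitting the leading modes to whatever observations exist. A rough, purely illustrative sketch with random stand-in data, nothing like the actual Reynolds/NOAA procedure:)

```python
import numpy as np

rng = np.random.default_rng(3)
field = rng.normal(size=(120, 50))       # months x grid cells, stand-in anomalies

# EOFs via SVD: rows of Vt are spatial modes, U*s their amplitudes in time.
U, s, Vt = np.linalg.svd(field, full_matrices=False)
k = 5                                    # keep only the leading modes
reconstructed = (U[:, :k] * s[:k]) @ Vt[:k, :]

# A real reconstruction instead fits the k mode amplitudes to the sparse
# in-situ observations for each month, then evaluates the modes everywhere;
# that fitting step is where the "made up" filled-in values come from.
```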
But even that doesn’t quite make it clear. Inspection of their map at:
http://www.emc.ncep.noaa.gov/research/cmb/sst_analysis/images/inscol.png
Shows that ship-based data are highly concentrated in the northern hemisphere, near North America, Europe, and the China Sea, while the ice is estimated crudely; the buoy data have better distribution but, being largely from ‘floaters’, may have very poor temporal resolution for any one location.
My conclusion from all this is that GIStemp is largely a made up fantasy with no significance and is substantially divorced from reality.
Other than that, no problem (with sarcastic tone, please!)