Tisdale on model initialization in wake of the leaked IPCC draft

Should Climate Models Be Initialized To Replicate The Multidecadal Variability Of The Instrument Temperature Record During The 20th Century?

Guest post by Bob Tisdale

The coupled climate models used to hindcast past and project future climate in the IPCC’s 2007 report AR4 were not initialized so that they could reproduce the multidecadal variations that exist in the global temperature record. This has been known for years. For those who weren’t aware of it, refer to Nature’s Climate Feedback: Predictions of climate post, written by Kevin Trenberth.

The question this post asks is: should the IPCC’s coupled climate models be initialized so that they reproduce the multidecadal variability that exists in the instrument-based global temperature records of the past 100 years and project those multidecadal variations into the future?

Coincidentally, as I finished writing this post, I discovered Benny Peiser’s post with the title Leaked IPCC Draft: Climate Change Signals Expected To Be Relatively Small Over Coming 20-30 Years at WattsUpWithThat. It includes a link to the following quote from Richard Black of BBC News:

And for the future, the [IPCC] draft gives even less succour to those seeking here a new mandate for urgent action on greenhouse gas emissions, declaring: “Uncertainty in the sign of projected changes in climate extremes over the coming two to three decades is relatively large because climate change signals are expected to be relatively small compared to natural climate variability”.

That’s IPCC speak, and it really doesn’t say they’re expecting global surface temperatures to flatten for the next two or three decades. And we have already found that at least one of the climate models submitted to the CMIP5 archive for inclusion in the IPCC’s AR5 does not reproduce a multidecadal temperature signal. In other words, that model shows no skill at matching the multidecadal temperature variations of the 20th Century. So the question still stands:

Should IPCC climate models be initialized so that they replicate the multidecadal variability of the instrument temperature record during the past 100 years and project those multidecadal variations into the future?

In the post An Initial Look At The Hindcasts Of The NCAR CCSM4 Coupled Climate Model, after illustrating that the NCAR CCSM4 (from the CMIP5 Archive, being used for the upcoming IPCC AR5) does not reproduce the multidecadal variations of the instrument temperature record of the 20th Century, I included the following discussion under the heading of NOTE ON MULTIDECADAL VARIABILITY OF THE MODELS:

…And when the models don’t resemble the global temperature observations, inasmuch as the models do not have the multidecadal variations of the instrument temperature record, the layman becomes wary. They casually research and discover that natural multidecadal variations have stopped the global warming in the past for 30 years, and they believe it can happen again. Also, the layman can see very clearly that the models have latched onto a portion of the natural warming trends, and that the models have projected upwards from there, continuing the naturally higher multidecadal trend, without considering the potential for a future flattening for two or three or four decades. In short, to the layman, the models appear bogus.

To help clarify those statements, and to present them using Sea Surface Temperatures, the source of the multidecadal variability, I’ve prepared Figure 1. It compares observations to climate model outputs for the period of 1910 to year-to-date 2011. The Global Sea Surface Temperature anomaly dataset is HadISST. The model output is the model mean of the hindcasts and projections of Sea Surface Temperature anomalies from the coupled climate models prepared for the Fourth Assessment Report (AR4) of the Intergovernmental Panel on Climate Change (IPCC), published in 2007. As shown, the period of 1975 to 2000 is really the only multidecadal period when the models come close to matching the observed data. The two datasets diverge before and after that period.

Figure 1

Refer to Animation 1 for a further clarification. (It’s a 4-frame gif animation, with 15 seconds between frames.) It compares the linear trends of the Global Sea Surface Temperature anomaly observations to the model mean, same two datasets, for the periods of 1910 to 1945, 1945 to 1975, and 1975 to 2000. Sure does look like the models were programmed to latch onto that 1975 to 2000 portion of the data, which is an upward swing in the natural multidecadal variations.

Animation 1

A NOTE ABOUT BASE YEARS: Before somebody asks, I used the period of 1910 to 1940 as base years for anomalies. This period was chosen for an animation that I removed and posted separately. The base years make sense for the graphs included in that animation. But I used the same base years for the graphs that remain in this post, which is why all of the data has been shifted up from where you would normally expect to see it.
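For readers who want to check the arithmetic, converting a temperature series to anomalies against a base period is just a matter of subtracting the base-period mean. Here is a minimal sketch with made-up numbers, not the actual SST data:

```python
import numpy as np

def to_anomalies(years, temps, base_start, base_end):
    """Express temperatures as anomalies relative to the mean of a base period."""
    years = np.asarray(years)
    temps = np.asarray(temps, dtype=float)
    base = (years >= base_start) & (years <= base_end)
    return temps - temps[base].mean()

# Hypothetical absolute temperatures with a slow rise, referenced to 1910-1940.
years = np.arange(1910, 1951)
temps = 15.0 + 0.01 * (years - 1910)
anoms = to_anomalies(years, temps, 1910, 1940)
# The 1910-1940 portion of the anomaly series now averages to zero;
# choosing different base years shifts the whole curve up or down.
```

Changing the base period only shifts the curve vertically, which is why the trends discussed in this post are unaffected by the choice of base years.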

Figure 2 includes the linear trends of the Global Sea Surface Temperature observations from 1910 to 2010 and from 1975 to 2000, along with the trend of the model mean of the IPCC AR4 projection from 2000 to 2099. The data for the IPCC AR4 hindcast from 1910 to 2000 is also illustrated. The three trends are presented to show the disparity between them. The long-term (100-year) trend in the observations is only 0.054 deg C/decade. Keeping in mind that the trends for the models and observations were basically identical for the period of 1975 to 2000 (and approximately the same as the early warming period of 1910 to 1945), the high-end (short-term) trend for a warming period during those 100 years of observations is about twice the long-term trend, or approximately 0.11 deg C per decade. And then there’s the model projection from 2000 to 2099. Its trend appears to go off on a tangent, skyrocketing at a pace almost twice the high-end short-term trend from the observations: 0.2 deg C per decade. I said in the earlier post, “the layman can see very clearly that the models have latched onto a portion of the natural warming trends, and that the models have projected upwards from there, continuing the naturally higher multidecadal trend, without considering the potential for a future flattening for two or three or four decades.” The models not only continued that trend, they increased it substantially, and they’ve clearly overlooked the fact that there is a multidecadal component to the instrument-based Sea Surface Temperature record. The IPCC projection looks bogus to anyone who takes the time to plot it. It really does.

Figure 2
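For anyone who wants to reproduce per-decade trends like the ones quoted above, a least-squares fit to annual data is all that is involved. This sketch uses synthetic numbers, not the actual observations:

```python
import numpy as np

def trend_per_decade(years, anomalies):
    """Least-squares linear trend of an annual anomaly series, in deg C per decade."""
    slope_per_year = np.polyfit(years, anomalies, 1)[0]
    return slope_per_year * 10.0

# Synthetic anomalies rising at 0.0054 deg C/year, i.e. 0.054 deg C/decade,
# the long-term observed value discussed above.
years = np.arange(1910, 2011)
anomalies = 0.0054 * (years - 1910)
print(trend_per_decade(years, anomalies))  # ~0.054
```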

CLOSING

The climate models used by the IPCC appear to be missing a number of components that produce the natural multidecadal signal that exists in the instrument-based Sea Surface Temperature record. And if these multidecadal components continue to exist over the next century at similar frequencies and magnitudes, future Sea Surface Temperature observations could fall well short of those projected by the models.

SOURCES

Both the HadISST Sea Surface Temperature data and the IPCC AR4 Hindcast/Projection (TOS) data used in this post are available through the KNMI Climate Explorer. The HadISST data is found at the Monthly observations webpage, and the model data at the Monthly CMIP3+ scenario runs webpage. I converted the monthly data to annual averages for this post to simplify the graphs and discussion. And again, the period of 1910 to 1940 was used as the base years for the anomalies.
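The monthly-to-annual conversion mentioned above is straightforward averaging; here is a minimal sketch, assuming whole calendar years of data:

```python
import numpy as np

def monthly_to_annual(monthly):
    """Average a monthly series into calendar-year means."""
    monthly = np.asarray(monthly, dtype=float)
    assert monthly.size % 12 == 0, "need whole years of monthly data"
    return monthly.reshape(-1, 12).mean(axis=1)

# Two toy years of monthly anomalies: 0.1 every month, then 0.3 every month.
monthly = np.array([0.1] * 12 + [0.3] * 12)
annual = monthly_to_annual(monthly)  # annual means: 0.1 and 0.3
```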

ABOUT: Bob Tisdale – Climate Observations

Editor
November 15, 2011 12:47 pm

Roger Knights says: “In online disputes with warmists, I’ve encountered the claim that the global SST is rising. I haven’t been able to find a chart that I could link to that would counter that–there isn’t one here on WUWT. Is there one anywhere?”
Hi, Roger. The rise in Global SST anomalies has slowed considerably over the past decade or so. It really depends on the timeframe they’re looking at. Here are a couple of graphs I prepared for this post but felt didn’t contribute to it when I finished writing. On a decadal basis, the ten-year trends are back below zero (2010 and YTD 2011) for the first time since 1979, while model projections are nowhere close to zero:
http://i40.tinypic.com/dg44uh.jpg
This one really highlights the differences between the observations and the models: On a multidecadal basis, the thirty-year trends peaked around 2005 and appear to be dropping in response to a multidecadal signal that exists over the term of the data, while the model trends continue their march skywards:
http://i41.tinypic.com/280i3bl.jpg
And I would also disagree with any assumption on their parts that the cause of the warming is greenhouse gases. They would need to supply peer-reviewed papers that explain why Sea Surface Temperature anomalies for the East Pacific Ocean (90S-90N, 180-80W) have not risen for the past 30 years:
http://i40.tinypic.com/a5hyti.jpg
And why, between significant El Niño events, Sea Surface Temperature anomalies for the rest of the global oceans (90S-90N, 80W-180) don’t rise for decade-long stretches:
http://i44.tinypic.com/r7jbdf.jpg
ENSO is a process, so there is no way to remove its effects through linear regression, as is so often attempted.
As far as I know, there are no papers that address this very obvious relationship that exists in the data. All one needs to do is volcano-adjust the data, and those relationships stand out like sore thumbs. The most recent discussions of that are here (part 1):
http://bobtisdale.wordpress.com/2011/07/26/enso-indices-do-not-represent-the-process-of-enso-or-its-impact-on-global-temperature/
And here (part 2):
http://bobtisdale.wordpress.com/2011/08/07/supplement-to-enso-indices-do-not-represent-the-process-of-enso-or-its-impact-on-global-temperature/

Christopher Hanley
November 15, 2011 12:51 pm

Dr. Syun Akasofu is making a similar point here I think:
http://wattsupwiththat.com/2009/03/20/dr-syun-akasofu-on-ipccs-forecast-accuracy/

Editor
November 15, 2011 12:52 pm

steven mosher says: “The models are not programmed to ‘latch’ onto the period 1975-2000.”
It was a poor choice of words in my post last week that was carried over to this one because I could not, for the life of me, remember the term curve fitting. I will fix this tomorrow in both posts.
Regards

More Soylent Green!
November 15, 2011 1:30 pm

Brian H says:
November 15, 2011 at 4:06 am
John B, et al;
The model runs are not “initialized” on real world conditions at all. They’re just tweaked to see what effect fiddling the forcings has on their simplified assumptions. It’s not for nothing that the IPCC says in the fine print that they create ‘scenarios’, not projections. Even though they then go on to treat them as projections.

Given what I understand to be the chaotic nature of our climate, a small variation in initial conditions should mean greatly different results from model runs. Unless the models don’t work the way our climate actually works.
Regardless, I’m not sure our data are good enough to give an accurate set of initial conditions.
I’d also like to see some test runs of various models using multiple standard sets of initial data. Create perhaps a dozen different sets of initial data and run each model with each set. It would be interesting to see the results of each.
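The sensitivity to initial conditions described in this comment is easy to demonstrate with a toy chaotic system. Here is a sketch using the logistic map (a stand-in chosen for illustration, not a climate model), in which two runs whose starting values differ by one part in a billion end up completely decorrelated:

```python
def logistic_traj(x0, r=3.9, steps=100):
    """Iterate the logistic map x -> r*x*(1-x), a classic toy chaotic system."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_traj(0.400000000)
b = logistic_traj(0.400000001)  # initial condition perturbed by 1e-9
max_gap = max(abs(x - y) for x, y in zip(a, b))
# The trajectories start indistinguishable but eventually differ by order one.
```

Whether full climate models behave this way at the scales that matter is exactly what this thread is debating; the sketch only shows why the initialization question is not a trivial one.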

John B
November 15, 2011 1:46 pm

More Soylent Green! says:
November 15, 2011 at 1:30 pm

Given what I understand to be the chaotic nature of our climate, a small variation in initial conditions should mean greatly different results from model runs. Unless the models don’t work the way our climate actually works.
Regardless, I’m not sure our data are good enough to give an accurate set of initial conditions.
I’d also like to see some test runs of various models using multiple standard sets of initial data. Create perhaps a dozen different sets of initial data and run each model with each set. It would be interesting to see the results of each.
—————–
As I understand it, weather is chaotic, climate not so much. As Steven Mosher pointed out upthread, NASA started a model run with zero water vapour and the water vapour “appeared”. If the model is good, and is run for long enough, it will be relatively insensitive to starting conditions.

November 15, 2011 3:21 pm

Tisdale:
Thanks for those two charts. I hope they do get posted here in the Reference section. But if they do, I suggest that each have documentation added to the caption describing and linking to the source data and the charting procedure used. They would then be useful “ammo” in the battle.

Editor
November 15, 2011 4:15 pm

Roger Knights says: “Thanks for those two charts. I hope they do get posted here in the Reference section.”
Assuming you’re talking about the East Pacific and Rest-Of-The-World SST anomaly graphs, they are included in my monthly SST anomaly updates. Example:
http://bobtisdale.wordpress.com/2011/11/07/october-2011-sea-surface-temperature-sst-anomaly-update/

November 15, 2011 4:18 pm

Bob, it’s not curve fitting either.
The models output absolute C
The actual values are not even close to the real temperature. Those values are then averaged and anomalized. Then they are base-shifted (as I recall) to align with the temperature record.
The period selected is 1975 to 2000.
Jones even discusses this issue in the mails.
It has nothing whatsoever to do with “curve” fitting models to data.

Keith
November 15, 2011 4:46 pm

Hi Steven (may I call you Mosh?),
Thanks for your comments. While I’ve looked into this area a little bit (gotta try to prevent brain atrophy when you’re ill somehow), I’ve certainly not invested the time that you and many others have! I don’t understand how the GCMs work, but I’m interested, and also wondering how they should work.
I’m of the view that atmospheric gases should cause warming and that they have their part to play. From what I’ve seen I still don’t think we’re close to pinning down a value (for CO2 particularly, given the overlap with water vapour at certain wavelengths). It seems that known physics has been applied to a number of known potential factors, with a balancing item of ‘feedbacks’ to try to square the circle. This is still open to refinement, or perhaps wholesale reassessment, with every new study and discovery.
My initial point was around whether we know enough about all conceivable material factors to be able to assign accurate weightings to them in models, and whether by adjusting the weightings and playing with the possible impacts of not-well-understood potential factors we could obtain a better match to past temp trends. Given the resources available and the potential importance of the issue, I would expect that someone may have run some what-ifs to try to get ever-better results, assuming cognitive dissonance doesn’t get in the way.
Clearly we don’t know 100%, but how far out are we?
We seem to have volcanic/SO2 forcings tied down quite well, with a decent amount of empirical evidence of effects on reflected shortwave to back it up. Solar TSI in itself is also a comparatively basic calculation. Quantification of the empirical impact of CO2 in an open system seems more shaky, while we’ve not had particularly good measurements of UV/EUV variation, so this area is shakier still. Solar magnetic flux and other suppositions related to it are seemingly still speculative, with no clear physical mechanism, but this doesn’t preclude a mechanism ever being identified. I’m not comfortable with what appears to be an assumption that manmade CO2 should be held to be the cause of warming not clearly attributable to other physical processes.
I mentioned oceanic cycles not just because of the PDO and AMO, but also due to the vast timescales involved in thermohaline circulation. 20th century upper oceanic heat content may not necessarily be purely determined by 20th century forcings. Bob looks to have as good an understanding of the multidecadal variations as anyone around, but I’ve not seen anybody demonstrate how the centuries-long cycles may affect climate and vice versa. It’s another world down in the depths, so I don’t know if we’ll ever get the data coverage to fully understand it.
Clouds may be an output but, if Svensmark and Stephen Wilde are on the right tracks, there may be variations in global cloud cover that have a solar cause, which will have a complex and variable effect on climate. Might be another area of attribution towards 20th century warming that is not due to CO2 increase. And then there may be factors that nobody has even thought of yet, never mind identified and quantified.
The various models are not in the same county as perfect yet, as you say. Surface temp forecasts based on a dominant role for CO2 haven’t been stunningly accurate, while the tropical mid-tropospheric hotspot isn’t as predicted. Any what-if studies using GCMs that could rule in or out other factors for closer investigation (i.e. to seek potential causation given sufficient correlation, in the looser sense) would be of interest.

Legatus
November 15, 2011 5:53 pm

“Leaked IPCC Draft: Climate Change Signals Expected To Be Relatively Small Over Coming 20-30 Years”
“That’s IPCC speak, and it really doesn’t say they’re expecting global surface temperatures to flatten for the next two or three decades.”

One must ask, well, then, what does “climate change signals” mean, exactly? The only signals it can mean are warming; if it means fewer signals, and it clearly does, then it means less warming. The only possible effect CO2 can cause, according to physics, is increased downward longwave radiation. Over the oceans (71% of the earth), this results in a slightly greater amount of evaporation, almost instantly countered by that creating clouds and shade, reducing the incoming shortwave radiation and thus reducing the longwave radiation upward which could be trapped and radiated back down. Only over land does CO2 cause warming, and that only when it does not cause too much evaporation, which would produce the above effect. Result: CO2 causes some warming only over some of the land, but it is warming. Thus, if we have less “climate change signals”, we must have (slightly) less warming.
I think this is what the IPCC is saying, basically the same thing you are saying here: that the ocean-driven variability means the warming over the next several decades will be quite small. Of course, the IPCC will then claim that it will start to go up again, and will go up even faster, thus saying that their prediction of doom and gloom was right all along. What you are showing is that, if it goes up at the rate it has been, with the natural variability included (which the IPCC has now admitted needs to be included), it will not go up faster in the future, and will slow down one or more times in the future again. Thus, 100 years from now, it will be warmer, but not much warmer, not nearly as warm as the IPCC has been claiming all along (not warm enough to justify wrecking the world economy to eliminate CO2).
The simple fact however is that these models have no predictive value. There is no understanding of what causes the ocean circulations that they are now admitting cause “natural variability” or how and why they add or subtract from atmospheric heat, and there is no understanding of solar dynamics enough to be able to tell us why the sun sometimes goes into long periods of quietness which can cause a little ice age or even perhaps a full blown ice age. As such, any statement that their models can predict anything over 100 years is wrong (which the IPCC makes but Bob Tisdale does not).
Simply put, the models are not based on reality. The reason for that is seen in this quote from Upton Sinclair: “It is difficult to get a man to understand something when his job depends on not understanding it.” Basically, the stated job of the IPCC is to “prove” that CO2 causes enough warming that they, the IPCC, need to take over to prevent this disaster. Because of this, they investigate CO2; they do not investigate things like ENSO or other ocean effects, or solar variability and its effect on climate. In short, they do not investigate natural climate change causes at all, because they do not wish to. As such, they simply do not understand the climate, and thus cannot tell us what it will do in any future.
The only forecast I saw that may explain some of what will happen in the future was one that correctly hindcasted the Dalton and Maunder minimums, forecast the current quiet sun (unlike NASA, which got it completely wrong), and forecast that the next solar cycle (or two) will be quieter yet. This will of course be moderated by the oceans, which will slow everything down and make it harder to tell if the sun is doing it, since what the sun does will not immediately show up down here due to the stored heat of the oceans. I also expect, however, that upper latitudes will most definitely notice it, as the cold from the poles invades their land, especially well away from the oceans’ moderating influence. I expect increasingly cold winters in places like upper North America and upper Europe and Asia. The IPCC will, of course, try either (or both!) to explain this away as merely local conditions, or somehow blame it on CO2 (and there are plenty of people dumb enough to believe them).
I just had a thought… (it won’t happen again, I promise!):
Let us say that the earth has natural temperature regulators, that tend to keep the temperature relatively stable, despite what the sun does.
The sun was [quiet] several hundred years ago (Maunder and Dalton minimums).
The earth then went into La Nina mode to try and capture more sunlight in the oceans to keep things warm.
This means more trade winds.
In those days, ships used sails.
The discovery and colonization of the new world, and the age of sail and discovery, may have been aided and abetted by the quiet sun and the earth’s La Nina adjustment to it.
The only way we will know if this may be true is to wait a decade or two and see if the sun goes quiet as predicted, and then see if the earth goes into La Nina mode to catch more rays.

Editor
November 15, 2011 6:11 pm

steven mosher says: “Bob its not curve fitting either.”
Steven: It appears as though the models are programmed so that the outputs create a multidecadal trend from the mid-1970s to 2000 that is close to the observed trend. They make no effort to reproduce the early 20th century rise in temperature that is comparable in trend to the latter warming period, and they make no effort to reproduce the mid-century flattening. The appearance is of a simple fit of the forcings curve so that the model output aligns with the first couple of years of observations and the last few decades of the 20th Century. I could overlay the forcings curve atop the model output if you like to illustrate what I’m talking about.

Richard M
November 15, 2011 7:30 pm

I keep wondering why … why, if it’s claimed that the short term cooling is natural variability that will eventually turn around, then why haven’t the models factored this in?
Oh, you mean they don’t understand exactly what has caused this cooling and yet we are to believe they KNOW it will turn around. Yeah, right.
The problem is simple. Unknowns and a few complete errors in their knowledge. The believers simply can’t or don’t want to process the idea of unknowns. We’ve seen this in science over and over again, why would anyone believe we won’t see it with climate? Of course, we already know the answer to that as well. Politics.

P. Solar
November 15, 2011 10:42 pm

John B says:
>>
That doesn’t follow. As an analogy, we can model the tides pretty accurately, but if you stand on a beach and watch the waves crashing in, you might be tempted to say “the tide signal is too small to be modelled”. So it is with climate. And the analogy goes further: we can’t model the details of the beach, the turbulence in the waves, the wind on a particular day, etc., but we can still produce tide tables. The waves may be hugely important if you are swimming on that beach, but they do not affect the longer term trend of the tide.
>>
Your analogy is a good choice.
We can accurately predict tides by analysing the cycles. We have no idea whatsoever how to model ocean dynamics in a way that can predict tides.
The same is true of climate.
We can do better by looking for cycles and the linearly increasing _rate of change_ of temp due to increasing CO2 than by trying to calculate everything from scratch when we don’t know how to do it.
The most trivial curve fitting does better than 20 supercomputer models and budgets of several billions:
http://tinypic.com/view.php?pic=2nrn24m&s=5
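For the curious, the kind of “trivial curve fitting” alluded to here can be sketched as a least-squares fit of a linear trend plus a fixed-period multidecadal cosine. The functional form and numbers below are my own toy assumptions, not necessarily what the linked chart uses:

```python
import numpy as np

# Synthetic SST-like anomalies: linear trend plus a 65-year oscillation plus noise.
rng = np.random.default_rng(0)
t = np.arange(1880, 2011, dtype=float)
y = (0.005 * (t - 1880)
     + 0.12 * np.cos(2 * np.pi * (t - 1880) / 65.0)
     + 0.02 * rng.standard_normal(t.size))

# With the period held fixed, the fit is ordinary linear least squares:
# intercept, trend, and a cosine/sine pair that sets the cycle's phase.
X = np.column_stack([
    np.ones_like(t),
    t - 1880,
    np.cos(2 * np.pi * (t - 1880) / 65.0),
    np.sin(2 * np.pi * (t - 1880) / 65.0),
])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
trend_per_decade = coef[1] * 10  # recovers roughly the 0.05 deg C/decade put in
```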

P. Solar
November 15, 2011 11:06 pm

steven mosher says: “Bob its not curve fitting either.”
Sorry, I don’t think that is correct. Until models can derive ALL the fundamentals from first principles, the basic operation does come down to curve fitting. They have a good understanding of a lot of things, but of the really important factors like cloud formation, precipitation, ocean currents and atmospheric circulation they really have no idea whatsoever.
The bits they can’t model, they have to make up based on empirical data. They give this a fancy name like parametrisation to make it sound scientific but this is basically empirical tweaking of model INPUTS until the hindcast is reasonably close. In other words : curve fitting.
The “curves” in question are the little bits of climate they do know how to model precisely and the “parameters”, rather than pure cosines, but the process is fundamentally the same thing.
Don’t get confused by the detail: climate models are all fundamentally an exercise in curve fitting.

marcoinpanama
November 16, 2011 5:03 am

Theo Goodwin says:
“Isn’t it amazing that Trenberth can show that he understands the problems with the models yet continue to act as an attack dog for climate science?”
Not if his motivation is politics instead of science. I often refer to AGW extremists as the new Bolsheviks. This is where the extreme left wraps around to meet the extreme right and says “Stop the world, I want to get off…” by tearing down the institutions and industries that have brought the human race progress, redistributing the wealth and returning to a “simpler time.” As so aptly described in Atlas Shrugged. AGW is merely the convenient whipping boy.

Editor
November 16, 2011 3:10 pm

Thanks, Anthony

November 16, 2011 5:01 pm

Warning – Titanic Disaster Ahead
Arguing here whether or not the ~65year AMO natural oceanic temperature cycle should be included in future IPCC-approved climate models smacks of discussing how best to re-arrange the deck chairs on the Titanic. Reading through the blog comments to Bob Tisdale’s excellent and timely posting, it is clear that climate models are bunk and of no value in the climate debate. So we really do need to move on to focus on real climate data.
I have been monitoring the HadCRUT world land/sea temperature series for the past decade.  When I started in 2001, and much to my astonishment, I discovered that the temperature data I plotted out (from 1850 to 2001, a span of 152 years) looked decidedly unalarming. Although the data showed much wild variability from year to year, the long term average temperature rise over the full period was a puny 0.4degC per century.  
As each year has gone by, and despite the ever-growing climate alarmism of this last decade, my original observation that the temperature trend was completely benign has been increasingly confirmed. Here is that temperature chart, now brought up to date:
   http://www.thetruthaboutclimatechange.org/tempsworld.html 
The HadCRUT3 annual average world land/sea temperature data points are plotted in grey. 
The blue linear regression line shows an average rate of temperature increase over the 161 year span of 0.0041degC per year (hence 0.41degC per century). 
The red line shows an 11-year running mean version of the temperature data. This eliminates year-to-year natural variability and reveals a cyclic oscillation of about 67-year period, swinging approximately 0.25degC above and below the long-term trend line. This oscillation is generally regarded as due to long-term cyclic changes in ocean heat distribution and is certainly natural.
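The 11-year running mean mentioned above is a simple centered moving average; an 11-year window is a traditional choice because it also averages out the roughly 11-year solar cycle. A minimal sketch:

```python
import numpy as np

def running_mean(x, window=11):
    """Centered moving average; the output is shorter by window - 1 points."""
    kernel = np.ones(window) / window
    return np.convolve(np.asarray(x, dtype=float), kernel, mode="valid")

# Toy check: every 11-point window of a repeating 11-point ramp contains one
# full period, so the running mean is perfectly flat at the period's mean (5.0).
x = np.tile(np.arange(11, dtype=float), 3)
smooth = running_mean(x)
```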
Those who believe in the power of increased greenhouse gases to raise the world temperature alarmingly have fixated on the 30 year period from 1970 to 2000 when the ocean oscillation was in its rising phase. This allowed alarmists to report an apparently dangerous rate of warming which they ascribed to the big increase in atmospheric CO2 that occurred in the second half of the 20th century due to post-World War 2 industrialisation. They consequently, and erroneously, projected this apparently permanent temperature trend out to show an alarming rise of several degrees C by 2100.
I am not alone in making the above skeptical analysis. Those of us who did so have expected that the temperature curve would turn downwards as we got into the falling phase of the ocean oscillation. Since that is exactly what has happened during the last decade, panic has increasingly set in amongst the climate alarmist fraternity. If this downward swing continues for the next 20 years or so, the skeptical position will surely be overwhelmingly verified. The ‘great global warming panic’ will by then have become simply an object of historical curiosity.
In summary, therefore, I believe that skeptics do their cause no good by wasting time agonising over whether or not the climate models should be tweaked by including this or that new feature. That is surely something for the climate alarmists in their ivory towers to worry about as they contemplate the possible demise of their grand theories.

November 17, 2011 10:17 pm

I know I’m a couple of days late to this, but I must say I’m disturbed by the state of global climate modeling, if the results are only discussed on a global scale.
My comments:
1) If multidecadal (20 to 50+ years) timescale variations are important to the global temperature signal, then the model should be run for a minimum of 200 to 400 years, with observed data to compare to. As John B states, multidecadal variations average out over time, so the only way to know if your model is accounting for them well enough, or if they’re averaging out, is to run it over multiple oscillations. Assuming there aren’t multi-century variations that need to be accounted for (not necessarily a good assumption), then 400 years seems to be a good minimum model run time (8 fifty year cycles).
2) How is comparing a global average temperature supposed to indicate the skill of a model?
If it’s a decent global model, then regional variations (at least dividing the globe into 16 to 64 regions or more), should all have results that are reasonable, compared to observed data. If it hasn’t enough skill to match regional temperature variations, then why should I believe its global results?
I model hydrology for a living, and it’s not enough to know that the flow at the outlet of your system compares well to observed data. If you have 4 ‘subwatersheds’ and have data for each, then you should compare your model at the outlet of each ‘subwatershed,’ as well as the combined ‘watershed’ level results. Just because the watershed-level results look OK, doesn’t mean that the model is actually representing reality very well. It’s possible you’re just getting lucky.
Global average results are a horrible comparison, if your model is designed to consider regional or local physical processes.

November 18, 2011 1:37 am

Anthony Holder is eloquently confirming my previous point above that the existing climate models are bunk. In addition to the fact that they only predict over a comparatively short time span (in climate change terms), rather than the hundreds of years that would be necessary to even out ~67-year natural cyclic variations, they have no skill at all at the subset level, i.e. regional predictions.
All this confirms that we should simply stop looking at them, obsessing over them, or giving them the oxygen of our attention. To do so deflects honest people who are trying to understand the climate change controversy away from the fact that the “emperor truly has no clothes”, as is evidenced by the long term instrumental temperature record which shows only a paltry 0.4degC per century rise since 1850. You might be surprised how few people know this simple fact which puts this whole silly debate into proper perspective.
