I recently covered a press release from Dr. Ben Santer where it was claimed that:
In order to separate human-caused global warming from the “noise” of purely natural climate fluctuations, temperature records must be at least 17 years long, according to climate scientists.
Bob Tisdale decided to run the numbers on the AR4 models:
17-Year And 30-Year Trends In Sea Surface Temperature Anomalies: The Differences Between Observed And IPCC AR4 Climate Models
By Bob Tisdale
We’ve illustrated and discussed in a number of recent posts how poorly the hindcasts and projections of the coupled climate models used in the Intergovernmental Panel on Climate Change’s 4th Assessment Report (IPCC AR4) compare to instrument-based observations. This post is yet another way to illustrate that fact. We’ll plot the 17-year and 30-year trends in global and hemispheric Sea Surface Temperature anomalies from January 1900 to August 2011 (the Hadley Centre’s updates of the HADISST data used in this post can lag by a few months) and compare them to the model mean of the hindcasts and projections of the coupled climate models used in the IPCC AR4. As one would expect, the model mean shows little to no multidecadal variability, which is commonly known; refer to the June 4, 2007 post at Nature’s Climate Feedback, Predictions of climate, written by Kevin Trenberth. But there is evidence that the recent flattening of Global Sea Surface Temperature anomalies, and their resulting divergence from the model projections, is a result of multidecadal variations in Sea Surface Temperatures.
WHY 17-YEAR AND 30-YEAR TRENDS?
A recent paper, Santer et al. (2011) Separating Signal and Noise in Atmospheric Temperature Change: The Importance of Timescale, states at the conclusion of its abstract that, “Our results show that temperature records of at least 17 years in length are required for identifying human effects on global-mean tropospheric temperature.” Sea surface temperature data are not as noisy as Lower Troposphere temperature anomalies, so we’ll assume that 17 years is an appropriate timescale on which to present sea surface temperature trends on global and hemispheric bases as well. And 30 years: Wikipedia defines climate as “the weather averaged over a long period. The standard averaging period is 30 years, but other periods may be used depending on the purpose.”
But we’re using monthly data so the trends are actually for 204- and 360-month periods.
ABOUT THE GRAPHS IN THIS POST
This post does NOT present graphs of sea surface temperature anomalies, with the exception of Figures 2 and 3, which are provided as references. The graphs in this post present 17-year and 30-year linear trends of Sea Surface Temperature anomalies in Deg C per Decade on a monthly basis, and they cover the period of January 1900 to August 2011 for the observation-based Sea Surface Temperature data and the period of January 1900 to December 2099 for the model mean hindcasts and projections. Figure 1 is a sample graph of the 360-month (30-year) trends for the observations, and it includes descriptions of a few of the data points. Basically, the first data point represents the linear trend of the Sea Surface Temperature anomalies for the period of January 1900 to December 1929, the second data point shows the linear trend of the data for the period of February 1900 to January 1930, and so on, until the last data point, which covers the most recent 360-month (30-year) period of September 1981 to August 2011.
Figure 1
Note also how the trends vary on a multidecadal basis. The model-mean data do not produce these variations, as you shall see, and you’ll also see why they should: they are important. Observed trends are dropping, but the model-mean trends are not.
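For readers who want to reproduce this kind of rolling-trend figure, here is a minimal sketch of the calculation in Python with NumPy. The function and the synthetic input series are illustrative only; they are not the code or data behind the figures in this post.

import numpy as np

def rolling_trends(anomalies, window):
    # Least-squares slope over each successive `window`-month period,
    # expressed in deg C per decade, mirroring the Figure 1 description:
    # the first value covers months 1..window, the next months 2..window+1, etc.
    decades = np.arange(window) / 120.0   # 120 months per decade
    slopes = []
    for start in range(len(anomalies) - window + 1):
        segment = anomalies[start:start + window]
        slope, _intercept = np.polyfit(decades, segment, 1)
        slopes.append(slope)
    return np.array(slopes)

# Illustrative use with a made-up series standing in for HADISST anomalies
# (January 1900 to August 2011 is 1340 months):
rng = np.random.default_rng(0)
sst_anomalies = 0.005 / 12 * np.arange(1340) + 0.1 * rng.standard_normal(1340)
trends_30yr = rolling_trends(sst_anomalies, 360)   # 360-month (30-year) trends
trends_17yr = rolling_trends(sst_anomalies, 204)   # 204-month (17-year) trends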
I’ve provided the following two comparisons of the “raw” Sea Surface Temperature anomalies and the 360-month (Figure 2) and 204-month (Figure 3) trends as references.
Figure 2
Figure 3
COMPARISONS OF SEA SURFACE TEMPERATURE ANOMALY TRENDS OF CLIMATE MODEL OUTPUTS AND INSTRUMENT-BASED OBSERVATIONS
In each of the following graphs, I’ve included two notes. The first one reads,
The Models Do Not Produce Multidecadal Variations In Sea Surface Temperature Anomalies Comparable To Those Observed, Because They Are Not Initialized To Do So. This, As It Should Be, Is Also Evident In Trends.
And since those notes in red are the same for Figures 4 through 9, you’ll probably elect to overlook them. The other note on each of the graphs describes the difference between the observed trends for the most recent period and the trends hindcast and projected by the models. Those differences are significant, so don’t overlook those notes.
There’s no reason for me to repeat what’s discussed in the notes on the graphs, so I’ll present the comparisons of the 360-month and 204-month trends first for Global Sea Surface Temperature anomalies, then for the Northern Hemisphere data, and finally for the Southern Hemisphere Sea Surface Temperature anomaly data. Some of you may find the results surprising.
GLOBAL SEA SURFACE TEMPERATURE COMPARISONS
Figure 4
Figure 5
NORTHERN HEMISPHERE SEA SURFACE TEMPERATURE COMPARISONS
Figure 6
Figure 7
SOUTHERN HEMISPHERE SEA SURFACE TEMPERATURE COMPARISONS
Figure 8
Figure 9
CLOSING
Table 1 shows the observed Global and Hemispheric Sea Surface Temperature anomaly trends, 204-Month (17-Year) and 360-Month (30-Year), for the period ending August 2011. Also illustrated are the trends for the Sea Surface Temperature anomalies as hindcast and projected by the model mean of the coupled climate models employed in the IPCC AR4.
Table 1
Comparing the 204-month and 360-month hindcast and projected Sea Surface Temperature anomaly trends of the coupled climate models used in the IPCC AR4 to the trends of the observed Sea Surface Temperature anomalies is yet another way to show that the models have shown no skill at replicating and projecting past and present variations in Sea Surface Temperature on multidecadal bases. Why should we believe they have any value as a means of projecting future climate?
SOURCE
Both the HADISST Sea Surface Temperature data and the IPCC AR4 Hindcast/Projection (TOS) data used in this post are available through the KNMI Climate Explorer. The HADISST data is found at the Monthly observations webpage, and the model data is found at the Monthly CMIP3+ scenario runs webpage.
DirkH:
Tamino’s post at
http://tamino.wordpress.com/2011/11/20/tisdale-fumbles-pielke-cheers/
is really funny. He tries to debunk Bob with Mosher’s multi-model argument. Look at his last figure – the single model runs he shows vs. the “multi model mean”. How can a multi model mean have a higher value than all the single runs? Only in climate science.
Naw, this is that newfangled method of using stats. You will note he didn’t include the error bars of the models, so about the only thing one can conclude is that the mean presented is at the top of the error bars…..right?
The graph is just priceless. And he thinks he is a really good statistician. Ay…..yep….
@Camburn 9:58
I think Tamino was calculating in tongues.
I think most people here would agree that 17 years, or even 30 years, isn’t long enough to monitor any change in climate
– a 30 year period covers only half of what appears to be a 60 – 70 year cycle of highs & lows
– so it’s much better to look over, say, 66 years
The only problem with this is that we only have good data for 100 years at best, but it does show some sort of trend:
http://www.woodfortrees.org/plot/hadsst2gl/mean:792/plot/hadcrut3gl/mean:792
The nice thing about hitting the right cycle period is that it averages out intra-cycle variations, leaving just the underlying trend that we’re interested in.
– so, here, we’re left with an upward trend of about 0.4C per century…
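A quick sketch of why a running mean whose window matches the cycle period leaves only the underlying trend – purely an illustration with a made-up series, not the real HADSST2/HADCRUT3 data behind the links above:

import numpy as np

months = np.arange(1320)                               # 110 years of monthly data
trend = 0.004 / 12 * months                            # roughly 0.4 deg C per century
cycle = 0.15 * np.sin(2 * np.pi * months / 792)        # a ~66-year oscillation
noise = 0.1 * np.random.default_rng(1).standard_normal(months.size)
series = trend + cycle + noise

def running_mean(values, window):
    # Boxcar filter, analogous to woodfortrees' "mean:N"
    return np.convolve(values, np.ones(window) / window, mode="valid")

smoothed = running_mean(series, 792)
# Averaging over exactly one full cycle cancels the oscillation, so the
# fitted slope of the smoothed series recovers the ~0.4 deg C/century trend.
centuries = np.arange(smoothed.size) / 1200.0
slope_per_century = np.polyfit(centuries, smoothed, 1)[0]
print(round(slope_per_century, 2))     # ~0.4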
Anthony:
Looks to me like Tamino has one heck of a wagging tongue. And you know what the old saying about those are?
To further illustrate my point, here’s a few more plots using different filter sizes
http://www.woodfortrees.org/plot/hadsst2gl/mean:360/plot/hadsst2gl/mean:480/plot/hadsst2gl/mean:600/plot/hadsst2gl/mean:720/plot/hadsst2gl/mean:840
Assuming the underlying data is correct, then it’s clear there’s an upward trend
– of about 0.4C per century
Climate models don’t really need to model the ‘multi-decadal’ oscillations, as long as they accept their limitations and can correctly model & predict the long-term trends.
Also, it seems to me that the long-term trends are fairly obvious, and not very difficult to model.
John Marshall says:
November 20, 2011 at 3:09 am
quote
ie, untrained sailors,
end quote
Sorry sir – utter codswallop.
OBS were/are taken every 6 hours GMT by the ship’s Navigators.
The deck watchkeeping officers being the 1st mate; 2nd mate and 3rd mate.
Amongst the things they were trained in was how to take the simple meteo observations with the equipment at hand. These were professionals who, amongst other things, had to have managed to get their heads around spherical trigonometry; and the 2nd & 3rd mates would demonstrate the use of a sextant almost every day – you try ‘shooting noon’ on a pitching deck, and then working out where on the globe you are. Untrained my backside.
Of course; some of them ‘flogged’ the obs reports – just like some scientists ‘flog’ their reports too.
One instantly easy to use correction would be to observe the various watches moving against GMT and any idle Officer’s garbage should stick out like a sore thumb. Except – the most common form of ‘flogging’ was just to use the last obs.
This is the normal cack reason rolled out as to why millions of reports are not used – but hey, it’s ok to use the discredited bulk of land-based stations with their DEMONSTRATED bad siting.
I smell a large and deliberate denigration of what should be a well-mined dataset of worldwide reports. I can only assume that Merchant Marine OBS totally failed to show any signs of an AGW signal; probably the contrary. If anyone was to bother, they would probably find lots of proof of not just the PDO and AMO, Uncle Tom Cobley and all, but many other interesting temperatures – including the previously reported low ice levels in the Arctic.
I suggest, sir, that before you throw out insults like “ie, untrained sailors,” you get off your high horse and find out who these “untrained sailors” were, and just what their training was.
For your information, I was an untrained sailor – I was so untrained as a Radio Electronics Officer that I obviously couldn’t manage to send the OBS reports in to the local radio stations – incidentally, I used to help prepare the OBS and kick any slackers.
Most of the “untrained sailors” who put the OBS together had their own selfish interest at heart, of course. After all, the OBS were used by the WMO and others to put together the world’s shipping forecasts that the “untrained sailors” relied on – sometimes a wrong weather forecast could be the difference between life and death.
Nick Stokes says:
November 20, 2011 at 5:54 am
I wrote a post here which compares them all. Except for the period 1930-1960, when there is some large oscillation, the comparison of the three with the models (graph here) shows that the model tracks the path of the observations better.
The global terrestrial temperature profile can be simulated simply, with a good proxy, using some (six to eleven) solar tide functions – whether for the decade from 1960 to 1970 or for the time from 1900 to 2020.
V.
It is interesting that Sweden was snow-free on November 17 (that has never happened before that late) and that the temperature anomaly was tremendous. Also no snow for Norway; the season is delayed by two weeks at least. In Finland, an unusually low amount of snow. No skiing in Scotland: no snow. No skiing in the Alps: too warm, no snow.
http://www.smhi.se/sgn0102/maps/ttmk_111119.gif
NotTheAussiePhilM says:
November 20, 2011 at 10:19 am
“To further illustrate my point, here’s a few more plots using different filter sizes”
The 0.4 deg/century trend starts before the massive industrial CO2 emissions of the 1950s… not a good omen for the CAGW theory and the future performance of their models. 😉
So Santer beats the c*p out of himself here. Neat.
By his 17-year standard, (a) it’s cooling (b) this contradicts the models’ predictions (c) all the models are useless.
The fit is pretty well nonexistent except for around 1990-2000 global deltaT (fig 5) which I guess reflects the time Santer was developing his models. More of his record-counterfeiting here. I’m quoting WUWT commenter Andrew Russell who reckons this is Pat Michaels’ work that made Santer want to “beat the c**p” out of him.
But the records do support Akasofu’s picture of 60-year cycles imposed on a slow linear recovery from the LIA (or on another, much slower cycle).
Thank you Bob. I hope we will see your graphic outing of Santer elevated to the MSM somewhere.
JOSH??
Camburn, Anthony
don’t make me laugh!
“that newfangled method of using stats” has to be “calculating in forked tongues”

Bob Tisdale says: November 20, 2011 at 9:37 am
“Do you really think your graph shows the model mean track the multidecadal variations in the trends of any of those datasets?”
My point is that the SST measures vary among themselves, and their deviation from the model mean is not hugely different from what their variation about their own mean would be.
Theo Goodwin says: November 20, 2011 at 8:57 am
“Does it mean “looks more like the path?” If so, then you are just eyeballing something and calling it science.”
Theo, this is an eye-balling post. All I’m saying is that if you cast your eyeball over a wider range of SST datasets, what you think you saw with just HADISST looks a lot different.
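One way to put numbers on that eyeballing – comparing the scatter among the SST datasets with their departure from the model mean – is sketched below with placeholder arrays, not the actual HADISST/HADSST2/ERSST series:

import numpy as np

def spread_vs_model(observed_series, model_mean):
    # RMS scatter of the observational datasets about their own ensemble mean,
    # compared with their RMS departure from the model mean.
    obs = np.vstack(observed_series)
    ensemble_mean = obs.mean(axis=0)
    scatter = np.sqrt(np.mean((obs - ensemble_mean) ** 2))
    departure = np.sqrt(np.mean((obs - model_mean) ** 2))
    return scatter, departure

# Placeholder example: three random-walk series standing in for SST datasets
rng = np.random.default_rng(2)
base = np.cumsum(0.02 * rng.standard_normal(1340))
observations = [base + 0.05 * rng.standard_normal(1340) for _ in range(3)]
model = base + 0.03 * rng.standard_normal(1340)
print(spread_vs_model(observations, model))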
Werner Brozek: Here’s a further explanation that will complement that of Nick Stokes.
HADSST2 is a spatially incomplete dataset for starters. Refer to:
http://bobtisdale.wordpress.com/2010/07/05/an-overview-of-sea-surface-temperature-datasets-used-in-global-temperature-products/
There is a significant amount of data missing in the Southern Hemisphere in HADSST2, and, in case you’re not aware, the spatially complete, satellite-based SST datasets like HADISST and Reynolds OI.v2 show significant cooling at the high latitudes of the Southern Hemisphere. An example:
http://i42.tinypic.com/2z9f7nq.jpg
That graph is included in my monthly updates:
http://bobtisdale.wordpress.com/2011/11/07/october-2011-sea-surface-temperature-sst-anomaly-update/
Second, HADSST2 has another bias. The Hadley Centre spliced two “incompatible” (for lack of a better word) source datasets together in 1998 and it created an upward shift in the HADSST2 data that does not exist in any of the other SST datasets. The upward shift in the HADSST2 data after 1998 is approximately 0.065 deg C compared to HADISST. That’s a lot of upward bias.
http://i56.tinypic.com/308fjar.jpg
The graph is from this post:
http://bobtisdale.wordpress.com/2010/11/26/does-hadley-centre-sea-surface-temperature-data-hadsst2-underestimate-recent-warming/
I also discussed and illustrated it with other SST datasets in this post:
http://bobtisdale.wordpress.com/2009/12/12/met-office-prediction-%e2%80%9cclimate-could-warm-to-record-levels-in-2010%e2%80%9d/
So those two biases are the likely reasons that the HADSST2 trends are closer to the model mean in Nick’s post.
Regards
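For anyone who wants to check the claimed ~0.065 deg C step themselves, a minimal sketch of the comparison is below. It assumes the HADSST2 and HADISST global anomaly series have already been downloaded (e.g. via the KNMI Climate Explorer) and aligned on the same months; the variable names are placeholders.

import numpy as np

def splice_offset(hadsst2, hadisst, decimal_years, splice_year=1998):
    # Change in the mean HADSST2-minus-HADISST difference across the 1998
    # splice; a positive value indicates an upward shift in HADSST2.
    diff = np.asarray(hadsst2) - np.asarray(hadisst)
    years = np.asarray(decimal_years)
    before = diff[years < splice_year].mean()
    after = diff[years >= splice_year].mean()
    return after - before

# Usage once the two anomaly series and their time axis are loaded:
# shift = splice_offset(hadsst2_anom, hadisst_anom, years)
# print("Post-1998 shift: %.3f deg C" % shift)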
DirkH
Actually, some models do get the decadal oscillations correct – in frequency and magnitude. They can only get the timing correct by dumb luck; that’s just math and the test constraints.
When you download model data and have a look at it, let me know. You’ll be less of a waste of time.
Systems theory and ecosystems theory all over again, welcome back to the 1970’s.
Dennis Nikols — ” I see Roger Pielke Sr. is suggesting you submit this for peer review. Great idea but not sure it would help.”
And I’d wager Pielke Sr. wouldn’t go within a mile of acting as co-author on this with Tisdale.
One reason for the “17 years” should be the COP “17” for this year.
Next year he will be shouting “18 years”, and in 2013 “19 years” ….. /sarc
J Bowers says: “And I’d wager Pielke Sr. wouldn’t go within a mile of acting as co-author on this with Tisdale.”
No reason to speculate or wager. I have no intention of writing a paper. I write blog posts.
“Bob Tisdale says:
November 20, 2011 at 1:37 pm
Werner Brozek: Here’s a further explanation that will complement that of Nick Stokes.”
Thank you for these replies. I have not digested them yet but plan to work on it. But in the meantime, unless I missed it, HADISST is not part of the options on the http://www.woodfortrees.org/plot/. Should it be there and if so, can you use your influence to get it added? Thank you!
Truth is the IPCC models (all of them) don’t hindcast or forecast, even unanimously getting the sign wrong. They are all worthless and GIGO continues to rule. Let’s move on.
For grins, I took the global SST 360-month figure above, superimposed solar cycle data from http://solarphysics.livingreviews.org/Articles/lrsp-2010-1/, screen-scraped and (linearly) scaled by eyeball so that the 240-month anomaly looks like it would be roughly proportional to the integrated area of the previous two cycles at the beginning, and added four vertical lines. The result is here http://www.phy.duke.edu/~rgb/combined.jpg (sorry, don’t know how to include it directly in a reply in this interface).
The four vertical lines are:
a) 1945 — nuclear testing begins
b) 1957 — above ground nuclear testing exceeds 40 blasts a year, many in the megaton range.
c) 1963 — above ground testing peaks (1962 at 140 tests/year) followed by test ban treaty in 63.
c-d) 26 years over which there are at LEAST 40 tests/year, underground but (of course) vented to the atmosphere in such a way that there is still detectable fallout in the upper atmosphere. From d) on there have still been tests, but only a relative handful, with less total energy than was probably produced by volcanic activity over the same general timeframe.
I’ve often wondered why nuclear testing has never been considered (AFAIK) as a confounding parameter to simple climate models based on insolation alone as proxied by the solar cycle. As you can see, by aligning the solar cycle so that it merely eyeballs out normalized to the linear trend on a 20-previous year basis, one expects warming where it warms everywhere but the stretch from 1945-1967 where (I would postulate) we had a “nuclear winter” from the HUGE amount of aerosols and radioactive dust dumped into the stratosphere and beyond. If there is any truth to the idea that clouds are nucleated by GCRs as well as particulate matter, how much more strongly must they be nucleated by radioactive particulate matter, not to mention the substantial volume of exotic aerosols each blast no doubt produced.
If one inverts the graph of nuclear testing from wikipedia, it almost perfectly fits the “hole” in SSTs relative to 2-cycle average of solar output. This also explains why warming has been predominantly northern hemisphere — the anomalous COOLING was due to (mostly) northern hemisphere blasts, and the clearing of radioactive dust has had a stronger effect there as blasts were gradually cleaned further and further up by successive treaties (plus the fact that the tests themselves were smaller and smaller — numerical simulation was considered adequate to design most warheads from somewhere late 70’s on, and we had reliable designs for BIG bombs, all we would ever need on both sides of the cold war).
This also explains the part of the figure I haven’t drawn in — the last two solar cycles, if added, suggest that the anomaly SHOULD be more or less back on the pre-1945 pattern of depending mostly on the previous two solar cycles, which makes so very much sense, ESPECIALLY for SSTs where LOCAL warming and UHI effects are not easily cherrypicked and human-manipulated confounding effects.
Just a thought. I’d guess that the DoD or somebody has long-term measurements of the annual patterns of residual fallout, enough to estimate how they might compare and contribute to ordinary dust, aerosol/ghg and GCR modulation of insolation over the last sixty years.
rgb
Try as I might, I can’t find a way of getting around this statement. Your linked graph is pixelated beyond use. I can make out the vertical lines, is all. The meaningful plot would be nuclear tests vs cloudiness vs SSTs. I think Bob Tisdale or Willis Eschenbach can direct you to some sort of cloudiness data. I may be wrong. GK
I apologize — probably a result of laziness — I scraped the screen to build it, but I screwed up the resolution of the original figure in the process, I guess. Try again, I (should have) fixed it now. I also put up an eps version of the figure combined.eps at the same (otherwise) URL if the jpg doesn’t come through clean.
I’m not certain one should ignore the direct effect of aerosols and dust from nuclear blasts and go only with clouds. If clouds were strongly correlated, it might help support the GCR-cloud connection, but volcanoes can produce significant cooling without the radiation and with similar or even smaller total energies being released. It’s a multivariate process and hence it would be dangerous to assume that we know which of several possible mechanism dominate the effect (if any). However, everybody remembers that one of the predicted doomsday aspects of global nuclear war was “nuclear winter”.
It would be interesting to get a firmer figure on the total megatonnage blown per year — the wikipedia page has only the total number of tests, not the size of each test. The larger tests in the 50’s were up to 15 megatons IIRC, in a single blast right on the water (huge chunk of hot, radioactive water blasted up through all atmospheric layers); many of them (especially larger ones) were on islands and nearly all were above ground. The Apple 1 (no, not the computer) test blast part of the “teapot” series was set off on my birthday (3/29/55) at right about the time I was born (again, for grins); this series added up to around 150 kt set off in roughly 2 months, all of them ground blasts IIRC and hence they kicked up much dust.
From the figure it looks like even the very FEW blasts set off in 1945 very likely had an immediate effect followed by a cumulative (integrated) but gradually decaying longer-term effect, just as the “right” way to deal with solar activity is almost certainly to use e.g. an exponential integration window that stretches back at least three or four cycles, not a straight-up 30-month running average. This is probably true for Tisdale’s figures as well — he’s using a square window 204 or 360 months into the past, and for his purpose that is fine, but almost all of the “reasonable” ways to average past behavior to acquire a smoothed current behavior should be non-Markovian integrals with an exponential window in the past. Or maybe non-exponential, or (most likely) multi-exponential.
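To make the square-window versus exponential-window distinction concrete, here is a small sketch of the two weighting schemes applied to an arbitrary monthly series; it illustrates only the filters being contrasted, not any particular model of solar or oceanic memory.

import numpy as np

def boxcar(values, window):
    # Equal weight on the previous `window` samples (the "square window").
    return np.convolve(values, np.ones(window) / window, mode="valid")

def exponential_memory(values, tau):
    # Exponentially fading memory: recent samples count most, but the whole
    # past contributes with weight proportional to exp(-age / tau).
    alpha = 1.0 - np.exp(-1.0 / tau)
    out = np.empty(len(values))
    out[0] = values[0]
    for i in range(1, len(values)):
        out[i] = alpha * values[i] + (1.0 - alpha) * out[i - 1]
    return out

# Example: a random-walk stand-in for a monthly record, smoothed both ways
rng = np.random.default_rng(3)
record = np.cumsum(0.01 * rng.standard_normal(2400))
square_window = boxcar(record, 360)              # 30-year equal-weight average
fading_window = exponential_memory(record, 360)  # ~30-year e-folding memory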
The timescale for turnover of the ocean is of order 1000 years, with the potential for non-Markovian behavior on longer scales still (see Global Physical Climatology by Hartmann (1994)). It’s not just what Mr. Sun is doing today, it is what Mr. Sun was doing ten, fifty, a hundred years ago. To quote: “The thermal, chemical, and biological properties of the deep ocean therefore constitute a potential source of long-term memory for the climate system for timescales up to a millennium.” Somehow I don’t think that the GCM people are including water cooled during the Maunder minimum (that is very likely still randomly upwelling at various points in the oceanic turnover), let alone water warmed during the MWP roughly 1000 years ago — but they should. Hartmann’s book seems to have preceded Mann et al and the Hockey Team — he states quite clearly “The early Holocene from about 10000-5000 years ago was a time when Northern Hemisphere climates were warmer than today’s”. Gosh, I wonder how that happened, since it wasn’t anthropogenic CO_2. In fact, the facts as laid out TEXTBOOK style (as in everybody knew them) in 1994 offer scant support to the AGW due to CO_2 hypothesis.
rgb
“But we’re using monthly data so the trends are actually for 204- and 360-month periods.”
No, data is the individual thermometer readings.
You are talking about monthly averages, and then you (and everybody else) are making averages of averages.
Nick Stokes says:
November 20, 2011 at 11:06 am
I wrote a post here which compares them all. Except for the period 1930-1960, when there is some large oscillation, the comparison of the three with the models (graph here) shows that the model tracks the path of the observations better.
I think the relevant point of this subject is the physical nature of that oscillating global climate and the processes and geometries involved. As long as there is no satisfactory explanation of that – of what is called ‘noise’ on a ‘holy’ ever-increasing function – this subject has no relevance.
Science always has to show by argument whether there is an effect of relevance. Because the oscillating frequencies of temperature proxies are visible from 1/month to 1/millennium, these analysed frequencies and their functions are the basis for assessing a possible human contribution. But as a first step, the very nature of these oscillations has to be explained, and this has not yet been done in the climate science community. Although there is some FFT work on the oscillations, it seems that the power frequencies mean nothing to the workers. Yet a short look for possible sources of these oscillations easily finds them in the synodic tide profiles of the couples orbiting the Sun.
There is a cave in Wanxiang, China, where, 1 km deep, people have taken a stalagmite (WX42b) and analysed its delta O18 values from 195 AD to 2003 AD. It makes a difference in many ways whether one takes 14C data or 18O data, and moreover there are big differences between locations on the globe. However, the real point of these data (for the oscillations) is the time coherence of the changing values, and not in the first order the amplitude of the proxy temperature, because all these methods seem to be nonlinear with respect to seasons and/or temperature and atmosphere. And because there is no nonlinearity in the time, each movement around the Sun can be calculated to seconds of time and can serve as a linear reference for the time-calibrated scale from the decay data.
There is a possibly interesting trend visible in the densities of the planets, in that, first, the density has an impact on the tide-like functions visible in the samples, and second, it seems there could be a quantum effect in the densities. If there were steps of integer density values and one subtracted the integers, then the densities would be a linear function of the range order.
http://volker-doormann.org/images/densityfraction.gif
However, it is evident that the synodic function of the plutinos Quaoar and Pluto has a main impact on the amplitudes in the temperature proxies from all known samples in the last two millennia, followed by the synodic function of Neptune and Pluto.
http://volker-doormann.org/images/wx42b_1500_2000.gif
http://volker-doormann.org/images/wx42b_0_2000.gif
http://volker-doormann.org/images/wx42b_100o_1500.gif
http://volker-doormann.org/images/wx42b_500_1000.gif
http://volker-doormann.org/images/wx42b_1500_2000.gif
http://volker-doormann.org/images/wx42b_0_500.gif
It is not always clear from the data whether a high-frequency cut has been applied or not. Because of that, it is possible that the amplitudes relating to the inner planets like Mercury and/or Venus are smoothed to death. But it can be shown that these fast-running objects leave footprints in the hadcrut3 history.
From the given evidence, the couple of Quaoar and Pluto traces out its complicated period of 913.5 years; this means that the increasing heat profile since the LIA can be taken as a phase of that period.
For the subject of this thread, it means that there is no basis for the hoax of human CO2 and no basis for such speculation.
V.
Doesn’t this put the lie to Trenberth’s position that the models have not been initialized?
Hegerl:
[IPCC AR5 models]
So using the 20th c for tuning is just doing what some people have long suspected us of doing […] and what the nonpublished diagram from NCAR showing correlation between aerosol forcing and sensitivity also suggested.