I recently covered a press release from Dr. Ben Santer in which it was claimed that:
In order to separate human-caused global warming from the “noise” of purely natural climate fluctuations, temperature records must be at least 17 years long, according to climate scientists.
Bob Tisdale decided to run the numbers on the AR4 models:
17-Year And 30-Year Trends In Sea Surface Temperature Anomalies: The Differences Between Observed And IPCC AR4 Climate Models
By Bob Tisdale
We’ve illustrated and discussed in a number of recent posts how poorly the hindcasts and projections of the coupled climate models used in the Intergovernmental Panel on Climate Change’s 4th Assessment Report (IPCC AR4) compare to instrument-based observations. This post is yet another way to illustrate that fact. We’ll plot the 17-year and 30-year trends in global and hemispheric Sea Surface Temperature anomalies from January 1900 to August 2011 (the Hadley Centre’s updates of the HADISST data used in this post can lag by a few months) and compare them to the model mean of the hindcasts and projections of the coupled climate models used in the IPCC AR4. As one would expect, the model mean shows little to no multidecadal variability, as is commonly known. Refer to the June 4, 2007 post at Nature’s Climate Feedback: Predictions of climate, written by Kevin Trenberth. But there is evidence that the recent flattening of Global Sea Surface Temperature anomalies, and their resulting divergence from model projections, is a result of multidecadal variations in Sea Surface Temperatures.
WHY 17-YEAR AND 30-YEAR TRENDS?
A recent paper by Santer et al (2011), Separating Signal and Noise in Atmospheric Temperature Change: The Importance of Timescale, states at the conclusion of its abstract that, “Our results show that temperature records of at least 17 years in length are required for identifying human effects on global-mean tropospheric temperature.” Sea Surface Temperature data is not as noisy as Lower Troposphere temperature anomalies, so we’ll assume that 17 years is an appropriate timescale on which to present Sea Surface Temperature trends on global and hemispheric bases as well. And 30 years: Wikipedia defines climate as “the weather averaged over a long period. The standard averaging period is 30 years, but other periods may be used depending on the purpose.”
But we’re using monthly data, so the trends are actually for 204- and 360-month periods.
ABOUT THE GRAPHS IN THIS POST
This post does NOT present graphs of Sea Surface Temperature anomalies, with the exception of Figures 2 and 3, which are provided as references. The graphs in this post present 17-year and 30-year linear trends of Sea Surface Temperature anomalies, in Deg C per Decade, on a monthly basis. They cover the period of January 1900 to August 2011 for the observation-based Sea Surface Temperature data and the period of January 1900 to December 2099 for the model-mean hindcasts and projections. Figure 1 is a sample graph of the 360-month (30-year) trends for the observations, and it includes descriptions of a few of the data points. Basically, the first data point represents the linear trend of the Sea Surface Temperature anomalies for the period of January 1900 to December 1929, the second data point shows the linear trend for the period of February 1900 to January 1930, and so on, until the last data point, which covers the most recent 360-month (30-year) period of September 1981 to August 2011.
Figure 1
Note also how the trends vary on a multidecadal basis. The model-mean data do not produce these variations, as you shall see, and you’ll also see why they should, because those variations are important. Observed trends are dropping, but the model-mean trends are not.
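For anyone who wants to reproduce the rolling-trend calculation described above, here is a minimal sketch. This is not Bob’s actual code: the file name, variable names, and data layout are hypothetical, and it assumes a plain monthly anomaly series downloaded from the KNMI Climate Explorer.

```python
import numpy as np

def rolling_trends(anomalies, window_months):
    """Linear trend, in deg C per decade, of every window_months-long
    span of a monthly anomaly series, stepping forward one month at a time."""
    years = np.arange(window_months) / 12.0  # time axis in years
    trends = []
    for start in range(len(anomalies) - window_months + 1):
        segment = anomalies[start:start + window_months]
        slope_per_year = np.polyfit(years, segment, 1)[0]  # deg C per year
        trends.append(slope_per_year * 10.0)  # convert to deg C per decade
    return np.asarray(trends)

# sst = np.loadtxt("hadisst_global_monthly.txt")  # hypothetical file layout
# trends_360 = rolling_trends(sst, 360)  # first point: Jan 1900 - Dec 1929
# trends_204 = rolling_trends(sst, 204)  # the 17-year trends
```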
I’ve provided the following two comparisons of the “raw” Sea Surface Temperature anomalies and the 360-month (Figure 2) and 204-month (Figure 3) trends as references.
Figure 2
Figure 3
COMPARISONS OF SEA SURFACE TEMPERATURE ANOMALY TRENDS OF CLIMATE MODEL OUTPUTS AND INSTRUMENT-BASED OBSERVATIONS
In each of the following graphs, I’ve included two notes. The first one reads:
The Models Do Not Produce Multidecadal Variations In Sea Surface Temperature Anomalies Comparable To Those Observed, Because They Are Not Initialized To Do So. This, As It Should Be, Is Also Evident In Trends.
And since those notes in red are the same for Figures 4 through 9, you’ll probably elect to overlook them. The other note on each of the graphs describes the difference between the observed trends for the most recent period and the trends hindcast and projected by the models. Those differences are significant, so don’t overlook those notes.
There’s no reason for me to repeat what’s discussed in the notes on the graphs, so I’ll present the comparisons of the 360-month and 204-month trends first for Global Sea Surface Temperature anomalies, then for the Northern Hemisphere data, and finally for the Southern Hemisphere Sea Surface Temperature anomaly data. Some of you may find the results surprising.
GLOBAL SEA SURFACE TEMPERATURE COMPARISONS
Figure 4
Figure 5
NORTHERN HEMISPHERE SEA SURFACE TEMPERATURE COMPARISONS
Figure 6
Figure 7
SOUTHERN HEMISPHERE SEA SURFACE TEMPERATURE COMPARISONS
Figure 8
Figure 9
CLOSING
Table 1 shows the observed Global and Hemispheric Sea Surface Temperature anomaly trends, 204-Month (17-Year) and 360-Month (30-Year), for the period ending August 2011. Also illustrated are the trends for the Sea Surface Temperature anomalies as hindcast and projected by the model mean of the coupled climate models employed in the IPCC AR4.
Table 1
Comparing the 204-month and 360-month hindcast and projected Sea Surface Temperature anomaly trends of the coupled climate models used in the IPCC AR4 to the trends of the observed Sea Surface Temperature anomalies is yet another way to show that the models have shown no skill at replicating and projecting past and present variations in Sea Surface Temperature on multidecadal bases. Why should we believe they have any value as a means of projecting future climate?
SOURCE
Both the HADISST Sea Surface Temperature data and the IPCC AR4 Hindcast/Projection (TOS) data used in this post are available through the KNMI Climate Explorer. The HADISST data is found at the Monthly observations webpage, and the model data is found at the Monthly CMIP3+ scenario runs webpage.
“A recent paper by Santer et al (2011) Separating Signal and Noise in Atmospheric Temperature Change: The Importance of Timescale, states at the conclusion of its abstract that, “Our results show that temperature records of at least 17 years in length are required for identifying human effects on global-mean tropospheric temperature.”
In case Santer has objections to you using the SST instead of the TLT, the difference between the RSS and HADSST2 trends (0.00624 and 0.00757) for the last 17 years is not that great:
http://www.woodfortrees.org/plot/hadsst2gl/from:1979/plot/hadsst2gl/from:1995/trend/plot/rss/from:1979/plot/rss/from:1995/trend
But if “17 years in length are required for identifying human effects”, would that not imply that the human effects are very small and that no urgent action is required?
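A minimal sketch of the kind of 17-year trend comparison Werner describes, assuming `rss` and `hadsst2` are monthly anomaly arrays that both begin in January 1979; the names and the index arithmetic are illustrative, not the woodfortrees internals:

```python
import numpy as np

def trend_from(series, start_index):
    """Least-squares slope, in deg C per year, from start_index to the end."""
    segment = series[start_index:]
    years = np.arange(len(segment)) / 12.0
    return np.polyfit(years, segment, 1)[0]

# Both series assumed monthly from Jan 1979, so Jan 1995 is index 16 * 12 = 192.
# print(trend_from(hadsst2, 192), trend_from(rss, 192))  # the two 17-year slopes
```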
Santer wants it to be 17 years because he can’t count to 30.
Werner Brozek:
But if “17 years in length are required for identifying human effects”, would that not imply that the human effects are very small and that no urgent action is required?>>>
I find that the alarmists get very quiet when one rolls out arguments such as that. If the sensitivity is as high as they claim, we would clearly see the warming signal above and beyond natural variation. If sensitivity is so low that it can barely be teased from the “noise”, and natural variability so easily overcomes it, then it never mattered in the first place.
Thanks Bob. Well done, as usual.
Without doing a lot of reading in search of this “17” noise eliminator, I am perplexed, because many temperature records are longer than this. Should we not, then, have been able to “separate human-caused global warming” trends from the noise [Santer]? By “we” I don’t mean you or me but rather the “climate scientists” referenced (or not) in the press release.
What purpose is being served by this pronouncement? It seems as though some folks are frustrated by the lack of a signal for AGW and expect natural fluctuations to dominate for a while, but promise it will clear up soon.
So I wonder, if natural fluctuations can swamp the signal, then natural fluctuations ought to be able to push the system beyond some presumed “tipping point” without any help from humans. Surely they don’t believe that. I hope someone will comment on where they see Santer and colleagues going with this – ‘cause I sure don’t.
Don’t throw out the bad model with the bath water. Because that’s the proof… the cold bath water is proof that you’re taking a hot bath.
Typo:
yet another way to show the models have no shown no skill
Thanks for your report Bob. Another good comparison
Good, as usual.
Bob.
What you want to do to finish the test is the following. You need to compute the difference between the trends and test whether the modelled trend (plus CIs) and the observed trend (plus CIs) are significantly different.
We know that they MUST necessarily differ. They must differ because of how models are initialized, and they MUST differ because the earth is one realization. If they were ever the same, that would indicate something was amiss.
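For what it’s worth, here is a rough sketch of the test Mosher describes, under strong simplifying assumptions: the two slope estimates are treated as independent and normal, and no correction is made for serial correlation (which makes naive confidence intervals too narrow, and which Santer et al do adjust for). The function names are illustrative:

```python
import numpy as np
from scipy import stats

def trend_and_stderr(y):
    """OLS trend of a monthly series, in deg C per decade, with its standard error."""
    t = np.arange(len(y)) / 120.0  # time axis in decades (120 months per decade)
    fit = stats.linregress(t, y)
    return fit.slope, fit.stderr

def trends_differ(observed, modelled, z_crit=1.96):
    """Two-sided z-test on the trend difference at roughly the 95% level.
    Ignoring autocorrelation understates the standard errors, so this
    version is biased toward finding a significant difference."""
    b_obs, se_obs = trend_and_stderr(observed)
    b_mod, se_mod = trend_and_stderr(modelled)
    z = (b_obs - b_mod) / np.hypot(se_obs, se_mod)
    return abs(z) > z_crit, z
```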
I will add observational data and ECHAM5 model scenario for the North Atlantic alone:
http://oi56.tinypic.com/wa6mia.jpg
And for the claim “our models predicted exactly such lull in warming”, here is another one:
http://oi55.tinypic.com/14mf04i.jpg
I am not sure whether the missing “initialization” is the culprit in the models’ failure. Models obviously do not model the natural variability; they just blindly follow the Keeling curve, since they are all based on purely radiative assumptions (remember that thick arrow from the K-H diagrams). The warm phase of the AMO is then faultily expressed as CO2-induced whatever.
Speaking rather generally, measurements over a long time period are not strictly necessary to understand a system. If the state can be measured accurately enough, then the time needed to establish the relationship between the various state variables can be as small as the accuracy of the measurement allows.
With satellites and Argo floats, one would imagine that the measurements should be comprehensive enough to begin understanding how it all interacts, provided ideological blinkers such as the necessity to prove AGW are removed.
From Richard Blacks blog:
http://www.bbc.co.uk/news/science-environment-15698183
Black tells us the report “found its way into his possession” and goes on to say:
“Uncertainty in the sign of projected changes in climate extremes over the coming two to three decades is relatively large because climate change signals are expected to be relatively small compared to natural climate variability”.
And then Santer tells us that 17 years are needed for a sign of climate change.
Conclusion?
With superinventive authority equal to Ben Santer’s, I state that 13 years is the minimum period required to show the effect of human influence. Here is the consequence: if you take the last big temperature anomaly, 1998, and add 13, you get 2011. Papers published up to 2011 can go in the next IPCC round.
But Ben Santer can see that 1998 + 17 = 2015, so papers covering the whole 17 years since the big event cannot find their way into the next IPCC performance.
The year 1998 is a problematic break point, because global temperatures after it have barely changed despite CO2 continuing to increase; and because other indices, like cyclonic storm frequency/severity, the effects of severe climate events on humans, and the rate of rise of ocean levels, are decreasing, while good factors like food production are rising, as shown by Indur M. Goklany in the WUWT article above this one.
Why go for a complex explanation when a simple one stares you in the face?
A 30-year time span is too short. Try 100 or 200 years and just maybe……
“But if “17 years in length are required for identifying human effects”, would that not imply that the human effects are very small and that no urgent action is required?”
No. It implies three things:
1. A noisy observation dataset
2. Decadal oscillations in the system
3. A signal that is small relative to these… in the PAST
Bob,
A right-fieldish question.
My background is range science, in which the estimation of herbage biomass is a common measurement. This being commonly done by clipping said biomass within a frame – “quadrat” for the technical.
And one of the considerations is that this quadrat be large enough to cover pattern in the vegetation (see Greig-Smith, “Quantitative Plant Ecology”).
So it seems to me that, with “climate” and cycles of around 60 years being in evidence, a quadrat of 30 years doesn’t cut the mustard?
Bob
I always like reading your posts; I find them very informative.
You know that you are onto something when Mr Gates does not pop up defending Santer’s work!
Reference R. Black’s BBC blog.
It would seem the Climate Vulnerable Forum established itself for money. In their eyes, there is no greater virtue than holding out the begging bowl.
President Nasheed of the Maldives has warned that climate change may mean the end of his nation. His Government is working to construct 11 new regional airports in 11 regions, and work is under way to complete them as soon as possible, said Minister of Communication and Civil Aviation Mahmoud Razi. Razi, who is among the three newest cabinet ministers appointed by President Mohamed Nasheed in June, said so while answering questions in the People’s Majlis. Razi said regional airports will be constructed in Shaviyani, Noonu, Raa, Baa, Lhaviyani, Alifu Dhaalu, Dhaalu, Gaafu Alifu, Gaafu Dhaalu and Gnaviyani atolls.
http://www.maldivestourismupdate.com
Originally posted at http://www.real-science.com/drowning-islands-building-eleven-new-airports
Every day that I visit WUWT, I am astonished at the cerebral firepower that is constantly on display. Bob’s graphs have dissected and demolished the obfuscations inherent in sanctimonious Santer’s desire for 17 years to pass before conclusions are drawn to determine a climate signal.
It’s a travesty that Santer is unable to foresee that our sceptical scientists at WUWT are better at analysing the flaws in warmist dogma than are the grossly overpaid, post-normal scientists who continue with their pretence that “CO2 is a pollutant”. Can honesty be modelled on a computer?
Perhaps Bob could produce a graph charting the inevitable decline to null of Santer’s reputation and funding. Cognitive dissonance has prevented Santer from understanding that when in a hole, the first rule is “Stop Digging”. A Tisdale Graph would assist him in his confabulations.
Werner Brozek says: “In case Santer has objections to you using the SST instead of the TLT, the difference between RSS and hadsst2…”
There are significant differences between the HADISST dataset I used in this post and the HADSST2 data you’ve presented, one of which is an upward bias in the HADSST2 data in 1998 that happened when they spliced two different source datasets together. Refer to:
http://bobtisdale.wordpress.com/2010/07/05/an-overview-of-sea-surface-temperature-datasets-used-in-global-temperature-products/
Can anyone confirm the moment when global warming stopped and add 17 years?
It’s just that I’d like to know the exact year, month, day, hour and minute that the argument should finally stop.
I guess that’s a bit optimistic, huh?
Thank you, Bob, for again showing that the derivative of SST equals the PDO.
http://virakkraft.com/SST-deriv-PDO.png
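A guess at the kind of calculation behind that chart; the exact processing isn’t given, so the smoothing window and the alignment against the PDO index below are assumptions:

```python
import numpy as np

def sst_derivative(sst, smooth_months=121):
    """Time derivative, in deg C per year, of a running-mean-smoothed
    monthly SST series (window length is an assumption)."""
    kernel = np.ones(smooth_months) / smooth_months
    smoothed = np.convolve(sst, kernel, mode="valid")  # running mean
    return np.diff(smoothed) * 12.0  # per-month difference -> per year

# After trimming both series to a common period (alignment assumed):
# r = np.corrcoef(sst_derivative(sst), pdo_index_trimmed)[0, 1]
```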
The time period required is directly related to how that time period can be used to support AGW, or not. And so we see “weather is not climate” means nothing if the weather event “proves” AGW, such as heatwaves. Meanwhile, ten or more years of temperatures failing to increase is too short a time to disprove AGW, as you need longer; in fact there is no ceiling on this number, and it gets higher as the figures continue to fail to match the “models”.
Once you get your head around that idea, you get to the bottom line, which is: the time period required is not a “scientific measurement” but, like so much in climate science, a political one, which is why it can be many things and is constantly changing.
There appears to be a slipped decimal point on the upward trends in Fig. 1.
How reliable are the pre-ARGO temperatures? Not very. Even the satellite data set is of lower accuracy than ARGO, which is one reason why ARGO was put in place. Pre-1979 sea surface temperatures are very suspect, due in part to the great variety of measuring devices and readers, i.e., untrained sailors, who gathered the data.
Apart from this thanks for an informative post.
This, to me, is roughly as “disruptive” as Steve M’s dissection of the hockey stick. It becomes almost physically painful to see the nonsense continue. Is there some credible way to form a national or international pressure group that could find a voice, beyond the blogs (no disrespect intended – vital to be here!) – some powerful advocacy for the real science? Or do we just have to wait for the truth to permeate out by osmosis?
I have spent a career looking for signals within noise, and if I had ever dared to present paying customers with waveforms as dodgy as those accepted within this “discipline”, the reaction (from heads of R&D within industrial OEMs) would have been swift and unhesitating…
I can’t express the sense of frustration. Needless to say, I’m in the UK, home of the Climate Change Act 2008, which will go down in the history books as evidence of how 21st century insanity could pass into legislation without a murmur.