Santer's "17 years needed for a sign of climate change" compared against the IPCC models

I recently covered a press release from Dr. Ben Santer where it was claimed that:

In order to separate human-caused global warming from the “noise” of purely natural climate fluctuations, temperature records must be at least 17 years long, according to climate scientists.

Bob Tisdale decided to run the numbers on the AR4 models:

17-Year And 30-Year Trends In Sea Surface Temperature Anomalies: The Differences Between Observed And IPCC AR4 Climate Models

By Bob Tisdale

We’ve illustrated and discussed in a number of recent posts how poorly the hindcasts and projections of the coupled climate models used in the Intergovernmental Panel on Climate Change’s 4th Assessment Report (IPCC AR4) compare to instrument-based observations. This post is yet another way to illustrate that fact. We’ll plot the 17-year and 30-year trends in global and hemispheric Sea Surface Temperature anomalies from January 1900 to August 2011 (the Hadley Centre’s updates of the HADISST data used in this post can lag by a few months) and compare them to the model mean of the hindcasts and projections of the coupled climate models used in the IPCC AR4. As one would expect, and as is commonly known, the model mean shows little to no multidecadal variability. Refer to the June 4, 2007 post at Nature’s Climate Feedback, Predictions of climate, written by Kevin Trenberth. But there is evidence that the recent flattening of Global Sea Surface Temperature anomalies, and their resulting divergence from model projections, is a result of multidecadal variations in Sea Surface Temperatures.

 

WHY 17-YEAR AND 30-YEAR TRENDS?

A recent paper, Santer et al (2011) Separating Signal and Noise in Atmospheric Temperature Change: The Importance of Timescale, states at the conclusion of its abstract that, “Our results show that temperature records of at least 17 years in length are required for identifying human effects on global-mean tropospheric temperature.” Sea surface temperature data is not as noisy as Lower Troposphere temperature anomalies, so we’ll assume that 17 years is also an appropriate timescale on which to present sea surface temperature trends on global and hemispheric bases. And 30 years: Wikipedia defines climate as “the weather averaged over a long period. The standard averaging period is 30 years, but other periods may be used depending on the purpose.”

But we’re using monthly data so the trends are actually for 204- and 360-month periods.

ABOUT THE GRAPHS IN THIS POST

This post does NOT present graphs of sea surface temperature anomalies, with the exception of Figures 2 and 3, which are provided as references. The graphs in this post present 17-year and 30-year linear trends of Sea Surface Temperature anomalies in Deg C per Decade on a monthly basis, and they cover the period of January 1900 to August 2011 for the observation-based Sea Surface Temperature data and the period of January 1900 to December 2099 for the model-mean hindcasts and projections. Figure 1 is a sample graph of the 360-month (30-year) trends for the observations, and it includes descriptions of a few of the data points. Basically, the first data point represents the linear trend of the Sea Surface Temperature anomalies for the period of January 1900 to December 1929, the second data point shows the linear trend of the data for the period of February 1900 to January 1930, and so on, until the last data point, which covers the most recent 360-month (30-year) period of September 1981 to August 2011.
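The running-trend construction described above is straightforward to reproduce. As a minimal sketch (this is not Tisdale's actual code; it only assumes a plain 1-D array of monthly anomalies), the trends could be computed like so:

```python
import numpy as np

def rolling_trends(anoms, window):
    """Least-squares trend of every `window`-month span, in deg C/decade.

    anoms  : 1-D array of monthly SST anomalies (deg C)
    window : 204 for the 17-year trends, 360 for the 30-year trends
    The fitted slope is per month, so it is multiplied by 120
    (months per decade) to match the units used in the figures.
    """
    t = np.arange(window)
    trends = []
    for start in range(len(anoms) - window + 1):
        # Slope of the least-squares line through this window
        slope = np.polyfit(t, anoms[start:start + window], 1)[0]
        trends.append(slope * 120.0)
    return np.array(trends)
```

With `window=360`, the first element corresponds to January 1900 to December 1929, the second to February 1900 to January 1930, and so on, exactly as Figure 1 describes.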

Figure 1

Note also how the trends vary on a multidecadal basis. The model-mean data do not reproduce these variations, as you shall see. You’ll also see why they should, because those variations are important: observed trends are dropping, but the model-mean trends are not.

I’ve provided the following two comparisons of the “raw” Sea Surface Temperature anomalies and the 360-month (Figure 2) and 204-month (Figure 3) trends as references.

Figure 2


Figure 3

COMPARISONS OF SEA SURFACE TEMPERATURE ANOMALY TRENDS OF CLIMATE MODEL OUTPUTS AND INSTRUMENT-BASED OBSERVATIONS

In each of the following graphs, I’ve included the following notes. The first one reads,

The Models Do Not Produce Multidecadal Variations In Sea Surface Temperature Anomalies Comparable To Those Observed, Because They Are Not Initialized To Do So. This, As It Should Be, Is Also Evident In Trends.

And since those notes in red are the same for Figures 4 through 9, you’ll probably elect to overlook them. The other note on each of the graphs describes the difference between the observed trends for the most recent period and the trends hindcast and projected by the models. Those differences are significant, so don’t overlook those notes.

There’s no reason for me to repeat what’s discussed in the notes on the graphs, so I’ll present the comparisons of the 360-month and 204-month trends first for Global Sea Surface Temperature anomalies, then for the Northern Hemisphere data, and finally for the Southern Hemisphere Sea Surface Temperature anomaly data. Some of you may find the results surprising.

GLOBAL SEA SURFACE TEMPERATURE COMPARISONS

Figure 4


Figure 5

NORTHERN HEMISPHERE SEA SURFACE TEMPERATURE COMPARISONS

Figure 6


Figure 7

SOUTHERN HEMISPHERE SEA SURFACE TEMPERATURE COMPARISONS

Figure 8


Figure 9

CLOSING

Table 1 shows the observed Global and Hemispheric Sea Surface Temperature anomaly trends, 204-Month (17-Year) and 360-Month (30-Year), for the period ending August 2011. Also illustrated are the trends for the Sea Surface Temperature anomalies as hindcast and projected by the model mean of the coupled climate models employed in the IPCC AR4.

Table 1

Comparing the 204-month and 360-month hindcast and projected Sea Surface Temperature anomaly trends of the coupled climate models used in the IPCC AR4 to the trends of the observed Sea Surface Temperature anomalies is yet another way to show the models have shown no skill at replicating and projecting past and present variations in Sea Surface Temperature on multidecadal bases. Why should we believe they have any value as a means of projecting future climate?

SOURCE

Both the HADISST Sea Surface Temperature data and the IPCC AR4 Hindcast/Projection (TOS) data used in this post are available through the KNMI Climate Explorer. The HADISST data is found at the Monthly observations webpage, and the model data is found at the Monthly CMIP3+ scenario runs webpage.

tokyoboy
November 19, 2011 4:32 pm

Some typos in Fig. 1?
At least the 1938-68 slope is much larger (in absolute value) than 0.01 degC/decade.

Interstellar Bill
November 19, 2011 4:53 pm

Bob, Bob, you’ve got to stop all this promiscuous data-flinging
and just accept that the consensus is in and the debate is over.
The 20th century warming was ‘Global’, you see,
because we know CO2 caused it all.
Never mind that a third of the world got colder,
we’ll just stop putting thermometers there.
Look, Bob, we know that eeville Satanic Gasses are marching us to Doomsday,
and that only Government can Save Us, with more taxes and control.
Can’t you just get along?
See you at our Durban-fest! (It’s how we reward the believers.)

Editor
November 19, 2011 5:01 pm

tokyoboy says: “Some typos in Fig. 1?”
Nope. I just rounded it off. The global HADISST SST anomaly trend calculated by Excel for the 360-month period of June 1938 to May 1968 was -0.0099 deg C/Decade.

AndyG55
November 19, 2011 5:05 pm

If 17 years is necessary to see a trend, HOW did the Global Warming Scare start in the late 1970’s?

Camburn
November 19, 2011 5:08 pm

Thank you Mr. Tisdale for providing more confirmation of AR4, Section 8: the subsections indicating that climate models have large areas of uncertainty have been confirmed.
Now the question arises: there are at least two large discrepancies that pass the 17-year Santer time window. The stratosphere has not been cooling for 17 years, and the SST is not complying in a statistically verifiable way.
Ok… the large question: why are we spending large sums of money on climate models that continue to produce poor levels of certainty?
This would have been an interesting presentation at Rep. Markey’s hearing last week. Then he could have said that four people attended.

pete50
November 19, 2011 5:08 pm

OMG, it’s worse than we thought!

DirkH
November 19, 2011 5:11 pm

Differentiating the temperature time series and smoothing the derivative should yield a curve with a similar shape to your trends; that’s basically an operation with a bandpass characteristic. So the information obtained is already part of the spectrum of the original signal, as the filter operations are linear.
Hmmm… can we test it with woodfortrees? Yes, seems to work:
http://www.woodfortrees.org/plot/hadcrut3gl/derivative/mean:360
So from a signal theoretical approach it looks like the models get the higher frequencies in the signal completely wrong. Should be interesting to look at with a wavelet analysis.
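DirkH's observation can be checked numerically: a first difference followed by a long running mean is, for smooth signals, nearly proportional to the rolling least-squares trend, since both are linear filters with a similar passband. A rough sketch with purely synthetic data (the series below is illustrative, not HADISST):

```python
import numpy as np

# Synthetic monthly series: a slow 60-year oscillation plus noise
rng = np.random.default_rng(0)
n, w = 1200, 360
x = (0.2 * np.sin(2 * np.pi * np.arange(n) / 720)
     + 0.05 * rng.standard_normal(n))

# Rolling 360-month least-squares trend (deg C per month)
t = np.arange(w)
trend = np.array([np.polyfit(t, x[i:i + w], 1)[0]
                  for i in range(n - w + 1)])

# First difference smoothed with a 360-month running mean
smoothed = np.convolve(np.diff(x), np.ones(w) / w, mode="valid")

# The two filtered series track each other closely
m = min(len(trend), len(smoothed))
r = np.corrcoef(trend[:m], smoothed[:m])[0, 1]
```

The correlation `r` comes out close to 1, which is DirkH's point: the rolling-trend curves contain no information beyond a band-pass-filtered view of the original series.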

wayne
November 19, 2011 5:20 pm

“Why should we believe they have any value as a means of projecting future climate?”
Good question Bob and the answer is both simple and correct, we shouldn’t.

ew-3
November 19, 2011 5:37 pm

pete50 says:
November 19, 2011 at 5:08 pm
OMG, it’s worse than we thought!
Quoting a certain Penn State professor ?

Lawrie Ayres
November 19, 2011 5:47 pm

Australia has had a “carbon” tax foisted upon us as a direct result of these models and our own scientists, those in government employ, have been too lazy or too beholden to do what Bob Tisdale has done; challenge the findings with real data. I accept it won’t happen but those who led us here should be jailed for fraud, accepting payment under false pretenses.

wayne
November 19, 2011 5:49 pm

tokyoboy, at first I was thinking the same, but look at the ‘y’ axis label; it is trend rate, not temperature.

tokyoboy
November 19, 2011 6:09 pm

wayne says: November 19, 2011 at 5:49 pm…..
Yassuh. Got it. Thanks……. And sorry, Bob.

Mr.D.Imwit
November 19, 2011 6:13 pm

Completely off topic but just as newsworthy: the B.B.C. (British Bullsh*t Corporation) caught with their pants down http://www.dailymail.co.uk/news/article-2063737/BBCs-Mr-Climate-Change-accepted-15-000-grants-university-rocked-global-warning-scandal.html

November 19, 2011 6:14 pm

The significance of Santer, et al is that they are saying that 17 years without statistically significant warming proves the models wrong.
I’d say your table 1 is proof of no significant warming in SSTs over the last 17 years.
BTW, Santer is referring to surface temps and not as widely claimed by Warmers, troposphere temps.

Tom
November 19, 2011 6:15 pm

Good work, Bob. The stuff that really matters is always simple.

Dennis Nikols, P. Geo
November 19, 2011 6:17 pm

Thanks for this Bob it is well done. One would think it would put an end to Santer’s bafflegab. Probably not. In 1975 J. Tuzo Wilson (Geophysicist) was asking why rational men, who have a history of rational thought, are so irrational? I don’t think he had a good answer, none of us do either. I see Roger Pielke Sr. is suggesting you submit this for peer review. Great idea but not sure it would help. This model foolishness is not about science it is about ideology and Wilson’s irrationality.

Manfred
November 19, 2011 6:18 pm

Figure 1 clearly shows the superposition of a natural cycle of about 60-70 years in length and perhaps a linear trend.
The best linear trend estimate can then be deduced at equivalent cycle states, such as the difference between the 2000s and the 1940s, which is approximately 0.2-0.3 degrees according to HadSST3, or approximately 0.3-0.4 degrees per century.
A 30-year timescale such as 1979 to 2009 starts around a natural cycle minimum and ends at a cycle maximum, grossly inflating the linear trend. The same error applies to the period from 1900 until today.

Gail Combs
November 19, 2011 6:20 pm

AndyG55 says:
November 19, 2011 at 5:05 pm
If 17 years is necessary to see a trend, HOW did the Global Warming Scare start in the late 1970′s?
__________________________________
Because at the First Earth Summit in 1972 Maurice Strong started the ball rolling.

“It is instructive to read Strong’s 1972 Stockholm speech and compare it with the issues of Earth Summit 1992. Strong warned urgently about global warming, the devastation of forests, the loss of biodiversity, polluted oceans, the population time bomb. Then as now, he invited to the conference the brand-new environmental NGOs [non-governmental organizations]: he gave them money to come; they were invited to raise hell at home. After Stockholm, environment issues became part of the administrative framework in Canada, the U.S., Britain, and Europe. “ http://www.afn.org/~govern/strong.html

November 19, 2011 6:23 pm

Fine article, Bob. I always enjoy your well thought out posts. And I really like your charts!

Iskandar
November 19, 2011 6:25 pm

Throw out the models; they are not worth the kWh of electricity they consumed!

RockyRoad
November 19, 2011 6:50 pm

Models, models, models. Nothing more needs to be sad. (OOpss… typo….lol).

Steve from Rockwood
November 19, 2011 7:06 pm

Nice work Bob.
We used to have 10. Now we have 17 and 30.
It feels like the goal posts are moving.

wayne
November 19, 2011 7:14 pm

Iskandar… COME ON, don’t throw out all of the models! Had some great, very soft, models as friends in the past and they do tend to consume many kWh of electricity for sure… they’re just not of the type you are speaking. 😉

November 19, 2011 7:24 pm

FORGET IPCC’s SCARING PROJECTIONS: Mushrooming of Desalination Systems in the M.E. started after 1980.

November 19, 2011 7:45 pm

First, sorry for my English, but I must comment on the matter.
Really, to establish a minimum period for knowing how the climate changes, we would need an eternity!

Werner Brozek
November 19, 2011 7:46 pm

“A recent paper by Santer et al (2011) Separating Signal and Noise in Atmospheric Temperature Change: The Importance of Timescale, state at the conclusion of their abstract that, “Our results show that temperature records of at least 17 years in length are required for identifying human effects on global-mean tropospheric temperature.”
In case Santer has objections to you using the SST instead of the TLT, the difference between the RSS and HadSST2 trends (0.00624 and 0.00757) for the last 17 years is not that great:
http://www.woodfortrees.org/plot/hadsst2gl/from:1979/plot/hadsst2gl/from:1995/trend/plot/rss/from:1979/plot/rss/from:1995/trend
But if “17 years in length are required for identifying human effects”, would that not imply that the human effects are very small and that no urgent action is required?

cohenite
November 19, 2011 8:03 pm

Santer wants it to be 17 years because he can’t count to 30.

November 19, 2011 8:14 pm

Werner Brozek;
But if “17 years in length are required for identifying human effects”, would that not imply that the human effects are very small and that no urgent action is required?>>>
I find that the alarmists get very quiet when one rolls out arguments such as that. If the sensitivity is as high as they claim, we would clearly see the warming signal above and beyond natural variation. If sensitivity is so low that it can barely be teased from the “noise”, and natural variability so easily overcomes it, then it never mattered in the first place.

John F. Hultquist
November 19, 2011 8:33 pm

Thanks Bob. Well done, as usual.
Without doing a lot of reading searching for this “17” noise eliminator, I am perplexed, because many temperature records are longer than this. Should we not, then, have been able to “separate human-caused global warming” trends from the noise? [Santer] – by “we” I don’t mean you or me but rather the “climate scientists” referenced (or not) in the press release.
What purpose is being served by this pronouncement? It seems as though some folks are frustrated by the lack of a signal for AGW and expect natural fluctuations to dominate for a while, but promise it will clear up soon.
So I wonder, if natural fluctuations can swamp the signal, then natural fluctuations ought to be able to push the system beyond some presumed “tipping point” without any help from humans. Surely they don’t believe that. I hope someone will comment on where they see Santer and colleagues going with this – ‘cause I sure don’t.

November 19, 2011 9:03 pm

Don’t throw out the bad model with the bath water. Because that’s the proof… the cold bath water is proof that you’re taking a hot bath.

Ben
November 19, 2011 10:16 pm

Typo:
yet another way to show the models have no shown no skill
Thanks for your report Bob. Another good comparison

jorgekafkazar
November 19, 2011 10:29 pm

Good, as usual.

November 19, 2011 10:45 pm

Bob.
What you want to do to finish the test is the following. You need to compute the difference between the trends and test whether the modelled trend (plus CIs) and the observed trend (plus CIs) are significantly different.
We know that they MUST necessarily differ. They must differ because of how models are initialized, and they MUST differ because the earth is one realization. If they were ever the same, that would indicate something was amiss.
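For what it's worth, the comparison Mosher describes reduces to a standard two-sample test on the trend estimates. A hypothetical sketch, with placeholder trend values and standard errors, assuming the two estimates are independent and their standard errors already account for autocorrelation:

```python
import math

def trends_differ(b_obs, se_obs, b_mod, se_mod, z_crit=1.96):
    """Two-sided ~95% test of whether two independent trend
    estimates (deg C/decade) differ significantly.
    Returns (significant?, z-score of the difference)."""
    z = (b_obs - b_mod) / math.sqrt(se_obs ** 2 + se_mod ** 2)
    return abs(z) > z_crit, z

# Placeholder numbers, purely for illustration
sig, z = trends_differ(b_obs=0.05, se_obs=0.03,
                       b_mod=0.18, se_mod=0.02)
```

With these made-up inputs the difference is well outside the combined confidence interval, so the test reports a significant difference; real HADISST and AR4 trend values with honest standard errors would be needed to draw any actual conclusion.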

November 19, 2011 11:16 pm

I will add observational data and ECHAM5 model scenario for the North Atlantic alone:
http://oi56.tinypic.com/wa6mia.jpg
And for the claim “our models predicted exactly such lull in warming”, here is another one:
http://oi55.tinypic.com/14mf04i.jpg
I am not sure whether the missing “initialization” is the culprit of the model failure. Models obviously do not model the natural variability; they just blindly follow the Keeling curve, since they are all based on purely radiative assumptions (remember that thick arrow from the K-H diagrams). The warm phase of the AMO is then faultily expressed as CO2-induced whatever.

peter_ga
November 19, 2011 11:37 pm

Speaking rather generally, measurements over a long time period are not strictly necessary to understand a system. If the state can be measured accurately enough, then the time needed to establish the relationship between the various state variables can be as small as the accuracy of the measurement allows.
With satellites and argos, one would imagine that the measurements should be comprehensive enough to begin understanding how it all interacts, provided ideological blinkers such as the necessity to prove AGW are removed.

MangoChutney
November 19, 2011 11:41 pm

From Richard Blacks blog:
http://www.bbc.co.uk/news/science-environment-15698183
Black tells us the report “found it’s way into his possession” and goes on to say:
“Uncertainty in the sign of projected changes in climate extremes over the coming two to three decades is relatively large because climate change signals are expected to be relatively small compared to natural climate variability”.
And then Santer tells us 17 years needed for a sign of climate change
Conclusion?

Geoff Sherrington
November 20, 2011 12:03 am

With equal superinventive authority to Ben Santer, I state that 13 years is the minimum period required to show the effect of human influence. Here is the consequence. If you take the last big temperature anomaly, 1998, and add 13, you get 2011. Papers published up to 2011 can go in the IPCC next round.
But Ben Santer can see that 1998 + 17 = 2015, so papers covering the whole 17 years since the big event cannot find their way into the next IPCC performance.
The year 1998 is a problematic break point, because global temperatures after it have barely changed despite CO2 keeping on the increase; and because other indices like cyclonic storm frequency/severity and effects of severe climate events on humans and the rate of rise of ocean levels are decreasing – or good factors like food production are rising – as shown by Indur M. Goklany in the WUWT article above this one.
Why go for a complex explanation when a simple one stares you in the face?

old construction worker
November 20, 2011 12:28 am

A 30-year time span is too short. Try 100 or 200 years and just maybe……

November 20, 2011 12:46 am

“But if “17 years in length are required for identifying human effects”, would that not imply that the human effects are very small and that no urgent action is required
No. It implies three things.
1. A noisy observation dataset
2. decadal oscillations in the system
3. A signal that is small relative to these.. in the PAST

Another Ian
November 20, 2011 1:48 am

Bob,
A right-fieldish question.
My background is range science, in which the estimation of herbage biomass is a common measurement. This being commonly done by clipping said biomass within a frame – “quadrat” for the technical.
And one of the considerations is that this quadrat be large enough to cover pattern in the vegetation (see Greig-Smith, “Quantitative Plant Ecology”).
So it seems to me that, with “climate” and cycles of around 60 years being in evidence, a quadrat of 30 years doesn’t cut the mustard?

richard verney
November 20, 2011 1:53 am

Bob
I always like reading your posts; I find them very informative.
You know that you are onto something when Mr Gates does not pop up defending Santer’s work!

Perry
November 20, 2011 1:55 am

Reference R. Black’s BBC blog.
It would seem the Climate Vulnerable Forum established itself for money. In their eyes, there is no greater virtue than holding out the begging bowl.
President Nasheed of the Maldives has warned that climate change may mean the end of his nation. His Government is working to construct 11 new regional airports in 11 regions, and work is under way to complete them as soon as possible, said Minister of Communication and Civil Aviation Mahmoud Razi. Razi, who is among the newest three cabinet ministers appointed by President Mohamed Nasheed in June, said so while answering questions in the People’s Majlis. Razi said regional airports will be constructed in Shaviyani, Noonu, Raa, Baa, Lhaviyani, Alifu Dhaalu, Dhaalu, Gaafu Alifu, Gaafu Dhaalu and Gnaviyani atolls.
http://www.maldivestourismupdate.com
Originally posted at http://www.real-science.com/drowning-islands-building-eleven-new-airports

Perry
November 20, 2011 1:56 am

Every day that I visit WUWT, I am astonished at the cerebral firepower that is constantly on display. Bob’s graphs have dissected and demolished the obfuscations inherent in sanctimonious Santer’s desire for 17 years to pass before conclusions are drawn to determine a climate signal.
It’s a travesty that Santer is unable to foresee that our sceptical scientists at WUWT are better at analysing the flaws in warmist dogma than are the grossly overpaid, post-normal scientists who continue with their pretence that “CO2 is a pollutant”. Can honesty be modelled on a computer?
Perhaps Bob could produce a graph which charts the inevitable decline to null of Santer’s reputation and funding. Cognitive dissonance has prevented Santer from understanding that when in a hole, the first rule is “Stop Digging”. A Tisdale Graph would assist him in his confabulations.

Editor
November 20, 2011 1:56 am

Werner Brozek says: “In case Santer has objections to you using the SST instead of the TLT, the difference between RSS and hadsst2…”
There are significant differences between the HADISST dataset I used in this post and the HADSST2 data that you’ve presented, one of which is an upward bias in the HADSST2 data in 1998 that happened when they spliced two different source datasets together. Refer to:
http://bobtisdale.wordpress.com/2010/07/05/an-overview-of-sea-surface-temperature-datasets-used-in-global-temperature-products/

cui bono
November 20, 2011 2:01 am

Can anyone confirm the moment when global warming stopped and add 17 years?
It’s just that I’d like to know the exact year, month, day, hour and minute that the argument should finally stop.
I guess that’s a bit optimistic, huh?

lgl
November 20, 2011 2:01 am

Thank you Bob for again showing the derivative of SST equals the PDO.
http://virakkraft.com/SST-deriv-PDO.png

KnR
November 20, 2011 2:35 am

The time period required is directly related to how that time period can be used to support AGW or not. And so “weather is not climate” means nothing if the weather event ‘proves’ AGW, such as heatwaves, while 10 years or more of temperatures failing to increase is too short a time to disprove AGW, as you need longer; in fact there is no ceiling on this number, and it gets higher as the figures continue to fail to match the ‘models’.
Once you get your head around that idea, you get to the bottom line, which is: the time period required is not a ‘scientific measurement’ but, like so much in climate science, a political one, which is why it can be many things and constantly changing.

Bart
November 20, 2011 2:53 am

There appears to be a slipped decimal point on the upward trends in Fig. 1.

November 20, 2011 3:09 am

How reliable are the pre-ARGO temperatures? Not very. Even the satellite dataset is of lower accuracy than ARGO, which is one reason why ARGO was put in place. Pre-1979 sea surface temperatures are very suspect, due in part to the great variety of measuring devices and readers, i.e., untrained sailors, who gathered the data.
Apart from this thanks for an informative post.

sensorman
November 20, 2011 3:36 am

This, to me, is roughly as “disruptive” as Steve M’s dissection of the hockey stick. It becomes almost physically painful to see the nonsense continue. Is there some credible way to form a national or international pressure group that could find a voice, beyond the blogs (no disrespect intended – vital to be here!) – some powerful advocacy for the real science? Or do we just have to wait for the truth to permeate out by osmosis?
I have spent a career looking for signals within noise, and if I had ever dared to present paying customers with waveforms as dodgy as those accepted within this “discipline”, the reaction (from heads of R&D within industrial OEMs) would be swift and unhesitating…
I can’t express the sense of frustration. Needless to say, I’m in the UK, home of the Climate Change Act 2008, which will go down in the history books as evidence of how 21st century insanity could pass into legislation without a murmur.

son of mulder
November 20, 2011 4:30 am

I noticed no change between the ages of 0 & 16, or between 17 & 32, or between 33 & 48, or since I was 48. Seems to work.

Matt G
November 20, 2011 4:51 am

It just shows a sine-wave pattern, with the last cycle no higher than the previous, and shows no influence from anything other than the natural cycle. If there were extra warming from humans, the sine wave would have shown a higher trend. The evidence has been in for ages now that the trend doesn’t show any AGW influence, and, as in Bob’s previous posts, the ENSO over one of these sine waves controls global ocean temperatures and thereby atmospheric temperatures generally.

Bill Illis
November 20, 2011 4:59 am

Great stuff Bob.
I don’t think anyone has shown the climate model projections for ocean temperatures before and now we know how far off the projections are compared to the actual results.
Your analysis shows that global warming is not significant for the oceans (Santer and his 20 co-authors certainly did not know this or they wouldn’t have written that paper). The lower troposphere will soon meet Santer’s 17 year timeline for significance as well. You’ve also shown the ocean heat content actuals versus climate models before and they are also far off.
Land temperatures? Well, they are four times more variable than the other measures, and we should wait to see how they respond to the current La Nina. We have to start thinking there is a land amplification signal like there is a polar amplification signal (at least that is what the data shows).
“Global warming” must be renamed to “Insignificant-and-far-less-than-predicted-warming.” That is what I am going to call it from now on.

November 20, 2011 5:00 am

The graphs in this post present 17-year and 30-year linear trends of Sea Surface Temperature anomalies in Deg C per Decade on a monthly basis, and they cover the period of January 1900 to August 2011 for the observation-based Sea Surface data and the period of January 1900 to December 2099 for the model mean hindcasts and projections.
If I have understood it correctly, then the purpose of this work is twofold: first, to give scientific arguments for the global SST profile in the past, and second, to make hindcasts and projections for this century.
I have a problem by using linear trends in physics if there is no physical mechanism and/or no physical relation as basis discussed (like the trend of a linear increasing velocity from a constant acceleration of a mass).
What I can see in the SST profile for the last century from the HadSST data is a lot of frequencies of different power, and because it is not out of the question that there are frequencies lower than 1/100 y^-1, there is a need to know the temperature profile for one millennium back, or two. This can help to identify low frequencies of about 1/1000 y^-1.
This is significant because it is well known that ~500 years ago the climate was cold, but ~1000 years ago the climate was as warm as today. And this is an important point, because it concerns the climate level in this century.
There is indeed a known astronomic function with a main frequency of 2/1827 y^-1, or 1/913.5 y^-1, from the tide profile of the couple of Quaoar and Pluto. Solar spring tides are related to warm times, and neap tides are related to cold times. But this relation seems not to be only a special geometry function of that synodic couple; mainly all neighbour couples in the solar system from Mercury to Quaoar are involved in that music. Summing up all these (empirically weighted) tide functions, one gets a function (GHI x) to check the hindcast. In the end it is no different work to sum up the tide functions for the whole of this coming century.
http://volker-doormann.org/images/s17_0a.gif
http://volker-doormann.org/images/s17_1000.gif
http://volker-doormann.org/images/s17_1500.gif
http://volker-doormann.org/images/s17_1800.gif
http://volker-doormann.org/images/s17_1900.gif
http://volker-doormann.org/images/s17_1960.gif
http://volker-doormann.org/images/s17_2000.gif
http://volker-doormann.org/images/s17_1900b.gif
http://volker-doormann.org/images/s17_2000a.gif
I think that if some twilight climate authorities argue with the term ‘trend’ as a quotient of temperature divided by an arbitrary earth time interval, this function must not be taken into the scientific discussion about the causes of the climate profile, with its frequencies from months to millennia.
In the other thread about the ethics of J. Hansen I have given a hindcast to his trivial triple function (CO2 + volcanos + sun):
http://volker-doormann.org/images/hansen_verification1.jpg
Sometimes in science it needs more than Occam’s knife.
V.

RB
November 20, 2011 5:02 am

What KnR said.
It is now so obvious as to be beyond question.
Oh, and can someone please bring Al Gore up to date about extreme weather event attribution? Thanks in advance.

richard verney
November 20, 2011 5:07 am

cui bono says:
November 20, 2011 at 2:01 am
/////////////////////////////////////////////////////////
If 1998 is seen as an outlier (due to a strong El Nino) there has been little warming (even on the basis of the Team’s adjusted/harmonised data sets) since 1995 and that is why in 2010 Phil Jones conceded that there had been no ‘statistically significant’ warming for the past 15 years (ie., since 1995).
On that basis, there is an argument that the 17 years is up at the end of 2012.
2010 was quite a warm year (another reasonably strong El Nino) and that caused Phil Jones in 2011 to suggest that there was now ‘statistically significant’ warming since 1995. Whilst this is a matter of some conjecture, since the 2011 data is not yet in, it is likely that 2011 will not be regarded as particularly warm and will tend to depress the warming since 1995 such that it is likely that once the 2011 data is in, Phil Jones would be forced to accept that between 1995 and 2011 there is no ‘statistically significant’ warming. That being the case, it makes 2012 a very interesting year.
Many are of the view that 2012 will be influenced by La Nina conditions. If so, 2012 is unlikely to be particularly warm. One can therefore foresee a significant likelihood that when the temperature for the period 1995 to 2012 is considered, it will consist of 17 years worth of data showing no ‘statistically significant’ warming trend. If it pans out like that, I wonder what Santer (and/or other members of the Team) will say.
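The ‘statistically significant’ warming question the comments keep returning to is just whether the 95% confidence interval on an ordinary least-squares trend excludes zero. A minimal sketch with synthetic data (the trend slope and noise level below are illustrative assumptions, not HadCRUT figures, and the 1.96 critical value is a normal approximation; for a series this short a t value of about 2.13 would be slightly stricter):

```python
# Illustrative sketch (not the Phil Jones calculation): fit an OLS trend to a
# short synthetic temperature series and ask whether the slope is
# "statistically significant" at the 95% level, i.e. whether the 95%
# confidence interval on the slope excludes zero.
import math
import random

def trend_with_ci(y):
    """Return (slope per step, half-width of approx. 95% CI) for y ~ a + b*t."""
    n = len(y)
    t = list(range(n))
    tbar = sum(t) / n
    ybar = sum(y) / n
    sxx = sum((ti - tbar) ** 2 for ti in t)
    sxy = sum((ti - tbar) * (yi - ybar) for ti, yi in zip(t, y))
    b = sxy / sxx
    a = ybar - b * tbar
    resid = [yi - (a + b * ti) for ti, yi in zip(t, y)]
    se = math.sqrt(sum(r * r for r in resid) / (n - 2) / sxx)
    return b, 1.96 * se  # normal approximation to the t critical value

random.seed(0)
# 17 synthetic "years" of a weak trend (0.01/yr) buried in noise of sd 0.1
y = [0.01 * i + random.gauss(0, 0.1) for i in range(17)]
slope, half = trend_with_ci(y)
print(f"slope = {slope:+.4f}/yr, 95% CI = [{slope - half:+.4f}, {slope + half:+.4f}]")
print("significant at 95%:", abs(slope) > half)
```

With a small trend and realistic noise, 17 points can easily fail the significance test even when a real trend is present, which is exactly the ambiguity being argued over.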

Snotrocket
November 20, 2011 5:26 am

Bob Tisdale writes, in his conclusion: “…the models have [ ] shown no skill at replicating and projecting past and present variations in Sea Surface Temperature on multidecadal bases. Why should we believe they have any value as a means of projecting future climate?”
Forgive me my deep feeling of smugness: as I read Bob’s article and studied the graphs I came to exactly the same conclusion before I ever got to the summary. The hind-casting is just so grossly out – apart from anything else!
WUWT, you are a great educator. Thanks.

matt v.
November 20, 2011 5:29 am

Thanks Bob for again going the extra step to keep us informed and doing all these fact checks. Your contributions are much appreciated and have been invaluable in this climate debate.

November 20, 2011 5:34 am

steven mosher says:
November 20, 2011 at 12:46 am
1. A noisy observation dataset
2. decadal oscillations in the system
3. A signal that is small relative to these.. in the PAST
====================================================================
That nails it !!

November 20, 2011 5:54 am

Bob Tisdale says: November 20, 2011 at 1:56 am
“There are signifcant differences between the HADISST dataset I used in this post and HADSST2 that you’ve presented, one of which is an upward bias in the HADSST2 data in 1998 that happened when they spliced two different source datasets together.”

Indeed so, and also with HADSST3. I wrote a post here which compares them all. Except for the period 1930-1960, when there is some large oscillation, the comparison of the three with the models (graph here) shows that the model tracks the path of the observations better. This underlines Tamino’s point that you can’t expect a model mean to show the variability of climate; it is much smoother than any of its constituent model instances. There’s also a gadget in the post which illustrates the underlying determination of the trend variation.
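The smoothness point being argued here is easy to demonstrate with toy data: averaging many runs that share a forced trend but have independent internal variability suppresses the variability by roughly the square root of the number of runs. A minimal sketch with synthetic runs (the trend, noise level, and run count are assumptions for illustration, not actual AR4 output):

```python
# Toy illustration of why a multi-model mean is smoother than any single run:
# averaging N independent realizations shrinks the internal-variability
# component by roughly sqrt(N), leaving mostly the shared forced signal.
# (Synthetic data; no real model output is used.)
import math
import random

random.seed(1)
n_runs, n_years = 20, 100
trend = [0.01 * t for t in range(n_years)]          # shared forced signal
runs = [[trend[t] + random.gauss(0, 0.15) for t in range(n_years)]
        for _ in range(n_runs)]
mean_run = [sum(r[t] for r in runs) / n_runs for t in range(n_years)]

def detrended_sd(series):
    """Standard deviation of a series about the known linear signal."""
    d = [x - s for x, s in zip(series, trend)]
    m = sum(d) / len(d)
    return math.sqrt(sum((x - m) ** 2 for x in d) / len(d))

print("single-run scatter :", round(detrended_sd(runs[0]), 3))
print("multi-model mean   :", round(detrended_sd(mean_run), 3))
```

This is the mechanical reason a model mean cannot be expected to reproduce observed multidecadal wiggles; whether that counts as a defect of the models or of the comparison is the substance of the disagreement above.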

November 20, 2011 5:55 am

Thank you very much Bob!
Excellent article!
No warming since the global warming alarm was sounded: False alarm.

DirkH
November 20, 2011 6:17 am

steven mosher says:
November 20, 2011 at 12:46 am
“No. It implies three things.
1. A noisy observation dataset
2. decadal oscillations in the system
3. A signal that is small relative to these.. in the PAST”
I find it disingenuous that you added the threatening “…in the PAST” to your point 3, but omitted the necessary caveat at 2:
2. decadal oscillations in the system … that the models cannot simulate.
Because that would spell out too clearly that the models are crap, wouldn’t it, Steven.

Bill Illis
November 20, 2011 6:50 am

I see Tamino has done his usual “drive-by”.
What he doesn’t understand is that he is in effect saying that the model runs with lots of (almost multidecadal) variability, and the runs that produce almost no warming, are the most accurate runs.
He’s right. The climate models which have 65% less warming than the average, plus a multidecadal natural signal, are the only accurate models.
Instead of trying to defend all the climate models by saying that a few of the no-warming ones are accurate enough, they should just get rid of the inaccurate high-warming models.

November 20, 2011 7:29 am

Remember when they used to tell us how this warming was the worst in 6000 years… Then the worst in 1000… Then that it was the rate that was abnormal, rather than the peak… Then that the last 30 years were where the real problems are… Then that the 30-year trend is what is important, not the last 14 years… and so on.
Santer is really playing a losing game here because his new argument requires a dramatic warming to occur in the next 16 years when all signs point to nothing of the sort in the cards. His faith is being tested.

Camburn
November 20, 2011 7:36 am

Kinda amusing how Mr. Foster didn’t notice the large deviation at the beginning, and the large deviation at the end of his last graph.
If his graph isn’t a poster child of how poor the models are doing, both hindcast and future cast, then I need new glasses.
He has only confirmed Mr. Tisdale’s point.

David
November 20, 2011 7:48 am

How come all the IPCC hindcast projection graphs show a reduction after 2050, but their predictions of warming in the atmosphere continue?

old construction worker
November 20, 2011 8:00 am

“David says:
November 20, 2011 at 7:48 am
How come all the IPCC hindcast projection graphs show a reduction after 2050, but their predictions of warming in the atmosphere continue?”
They are trying to find the “Hot Spot”.

DirkH
November 20, 2011 8:08 am

Tamino’s post at
http://tamino.wordpress.com/2011/11/20/tisdale-fumbles-pielke-cheers/
is really funny. He tries to debunk Bob with Mosher’s multi-model argument. Look at his last figure – the single model runs he shows vs. the “multi model mean”. How can a multi model mean have a higher value than all the single runs? Only in climate science.
Also, he admits defeat:
“There are definitely problems with the models. For one thing, they don’t reproduce the rapid warming of sea surface temperature from 1915 to 1945 as strongly as the observed data indicate. But overall they’re not bad, and the amount of natural variability they show is realistic.”
The problem is, Tamino, that you can’t cry wolf all day and at the same time admit that your forecasting technology is still in its infancy. We KNOW it’s in its infancy; we’ve been saying that for years; it’s nice that you agree. Would you CAGW fellows now shut up for the next fifty years and stop demanding control of the global economy? Oh, and expect a slight reduction in funding; your toys are just not that important.

Werner Brozek
November 20, 2011 8:16 am

“cui bono says:
November 20, 2011 at 2:01 am
Can anyone confirm the moment when global warming stopped and add 17 years?”
It depends on the data set you use. On HadCrut3:
http://www.cru.uea.ac.uk/cru/data/temperature/hadcrut3gl.txt
The peak was February, 1998. However the interesting thing is that if you plot from February 1998 to the present and take the slope, it is positive. But if you plot a bit further back to May 1997, you get a negative slope. So to answer your question, see what happens with the HadCrut3 slope in May, 2014.
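The start-date sensitivity described above is easy to reproduce: with a large spike near the window boundary, moving the start by one month can flip the sign of the fitted slope. A minimal sketch with a synthetic series (the spike size and placement are illustrative stand-ins for the 1998 El Nino, not HadCrut3 values):

```python
# Sketch of how the OLS slope of a temperature window can change sign
# depending on whether the window starts at, or just after, a large spike.
# Purely synthetic data.
def ols_slope(y):
    """Ordinary least-squares slope of y against its index."""
    n = len(y)
    tbar = (n - 1) / 2
    ybar = sum(y) / n
    num = sum((i - tbar) * (yi - ybar) for i, yi in enumerate(y))
    den = sum((i - tbar) ** 2 for i in range(n))
    return num / den

# Flat "temperatures" with one big warm spike (an El Nino stand-in) at month 10.
series = [0.0] * 10 + [0.8] + [0.0] * 200

print("window starting at the spike :", round(ols_slope(series[10:]), 5))
print("window starting after it     :", round(ols_slope(series[11:]), 5))
```

Starting at the spike pulls the fitted line downward toward the flat tail (negative slope); starting one step later gives no trend at all, which is why the choice between February 1998 and May 1997 matters so much in these arguments.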

ferd berple
November 20, 2011 8:19 am

AndyG55 says:
November 19, 2011 at 5:05 pm
If 17 years is necessary to see a trend, HOW did the Global Warming Scare start in the late 1970′s?
Because anyone that studies climate knew at that time based on previous cycles that the climate was going to switch from a cold cycle to a warm cycle (happens every 30 years), and by predicting that things were going to get warm they stood to make themselves famous. They just needed to tie it all to pollution so they could get the environmental movement on board.

Donald Mitchell
November 20, 2011 8:23 am

I do not understand the use of the word initialized regarding the failure to reproduce multidecadal variations in the models. It seems more likely to me that the models do not have an understanding of the causes of the multidecadal variations. If the models have a decent understanding of the multidecadal variations, is it simply that the users do not know how to initialize their own models?

ferd berple
November 20, 2011 8:38 am

“John F. Hultquist says:
November 19, 2011 at 8:33 pm
So I wonder, if natural fluctuations can swamp the signal, then natural fluctuations ought to be able to push the system beyond some presumed “tipping point” without any help from humans.”
That is why life is extinct on earth. The logic of AGW is that we must keep CO2 within a narrow range, otherwise as the earth warms due to CO2, more CO2 will be naturally released, leading to run-away warming.
So, since we now know natural variability can itself push temperatures outside that narrow range, natural variability will lead to run-away warming without any human influence leading to extinction of life on planet earth. Therefore none of us can be here, because life is already extinct from run-away global warming due to natural causes.

Werner Brozek
November 20, 2011 8:49 am

“richard verney says:
November 20, 2011 at 5:07 am
(ie., since 1995).
On that basis, there is an argument that the 17 years is up at the end of 2012.”
If 1995 is included, then the 17 years is up in six weeks at the end of 2011, is it not? But now we need to know exactly what Dr. Santer meant. Was it no statistically significant warming at the 95% certainty level? Or does the slope have to be 0 or less? Or can the slope be something like 0.006/year or less? And does it require both UAH and RSS to meet his number?

Theo Goodwin
November 20, 2011 8:57 am

Nick Stokes says:
November 20, 2011 at 5:54 am
“…shows that the model tracks the path of the observations better.”
Can you, or anyone in climate science, explicate the phrase “tracks the path of the observations” in a way that meets at least some rudimentary standard of scientific reasoning?
Does it mean “reproduces the path?” If so, please explain “reproduces the path better.” Any reproduction that is not exact is a failure, right?
Does it mean “looks more like the path?” If so, then you are just eyeballing something and calling it science.
Does it mean “scores higher on our metric than any competitor?” If so, please set forth the “metric” you are using.
Does it mean “here is my bluff?” I think it means this.

Editor
November 20, 2011 9:37 am

Nick Stokes says: “Indeed so, and also with HADSST3. I wrote a post here which compares them all. Except for the period 1930-1960, when there is some large oscillation, the comparison of the three with the models (graph here) shows that the model tracks the path of the observations better.”
Do you really think your graph shows the model mean track the multidecadal variations in the trends of any of those datasets?
Nick Stokes says: “This underlines Tamino’s point that you can’t expect a model mean to show the variability of climate; it is much smoother than any of its constituent model instances.”
Tamino’s post was a distraction from the point of my post. Nothing more, nothing less. I replied to it here:
http://bobtisdale.wordpress.com/2011/11/20/tamino-misses-the-point-and-attempts-to-distract-his-readers/

Camburn
November 20, 2011 9:58 am

DirkH:
Tamino’s post at
http://tamino.wordpress.com/2011/11/20/tisdale-fumbles-pielke-cheers/
is really funny. He tries to debunk Bob with Mosher’s multi-model argument. Look at his last figure – the single model runs he shows vs. the “multi model mean”. How can a multi model mean have a higher value than all the single runs? Only in climate science.
Naw, this is that newfangled method of using stats. You will note he didn’t include the error bars of the models, so about the only thing one can conclude is that the mean presented is at the top of the error bars… right?
The graph is just priceless. And he thinks he is a really good statistician. Ay…..yep….

NotTheAussiePhilM
November 20, 2011 10:01 am

I think most people here would agree that 17 years, or even 30 years, isn’t long enough to monitor any change in climate
– a 30 year period covers only half of what appears to be a 60 – 70 year cycle of highs & lows
– so it’s much better to look over, say, 66 years
The only problem with this is that we only have good data for 100 years at best, but it does show some sort of trend:
http://www.woodfortrees.org/plot/hadsst2gl/mean:792/plot/hadcrut3gl/mean:792
The nice thing about hitting the right cycle period is that it averages out intra-cycle variations, leaving just the underlying trend that we’re interested in.
– so, here, we’re left with an upward trend of about 0.4C per century…
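The reason the 792-month mean isolates the trend can be shown directly: a boxcar average whose window equals the period of a cycle sums the cycle over exactly one full period, which is zero, leaving only the underlying trend. A minimal sketch with a synthetic 66-year sinusoid on a 0.4C/century trend (both numbers are assumptions for illustration, not fitted values):

```python
# Why a filter the length of the cycle isolates the trend: a boxcar mean whose
# window exactly matches the period of a sinusoid averages it to zero, leaving
# only the underlying linear trend. (Synthetic series, purely illustrative.)
import math

period = 66 * 12                       # 66-year cycle, in months
n = 120 * 12                           # 120 years of monthly data
series = [0.004 / 12 * t               # 0.4C/century underlying trend
          + 0.15 * math.sin(2 * math.pi * t / period)
          for t in range(n)]

def boxcar(y, w):
    """Running mean with window w."""
    return [sum(y[i:i + w]) / w for i in range(len(y) - w + 1)]

smoothed = boxcar(series, period)
# After smoothing, successive differences are essentially constant:
# the cycle has been averaged away and only the linear trend remains.
steps = [b - a for a, b in zip(smoothed, smoothed[1:])]
print("residual wobble:", max(steps) - min(steps))
```

A shorter window (say 30 years, i.e. half the cycle) would leave a large residual oscillation, which is the point being made about 17- and 30-year trends.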

Camburn
November 20, 2011 10:16 am

Anthony:
Looks to me like Tamino has one heck of a wagging tongue. And you know what the old saying about those are?

NotTheAussiePhilM
November 20, 2011 10:19 am

To further illustrate my point, here’s a few more plots using different filter sizes
http://www.woodfortrees.org/plot/hadsst2gl/mean:360/plot/hadsst2gl/mean:480/plot/hadsst2gl/mean:600/plot/hadsst2gl/mean:720/plot/hadsst2gl/mean:840
Assuming the underlying data is correct, then it’s clear there’s an upward trend
– of about 0.4C per century
Climate models don’t really need to model the ‘multi-decadal’ oscillations, as long as they accept their limitations and can correctly model and predict the long-term trends.
Also, it seems to me that the long-term trends are fairly obvious, and not very difficult to model.

peter_dtm
November 20, 2011 10:24 am

John Marshall says:
November 20, 2011 at 3:09 am
quote
ie, untrained sailors,
end quote
Sorry sir – utter codswallop.
OBS were/are taken every 6 hours GMT by the ship’s Navigators.
The deck watchkeeping officers being the 1st mate; 2nd mate and 3rd mate.
Amongst the things they were trained in was how to take the simple meteo observations with the equipment at hand. These were professionals who, amongst other things, had to have managed to get their heads around spherical trigonometry; and the 2nd & 3rd mates would demonstrate the use of a sextant almost every day – you try ‘shooting noon’ on a pitching deck, and then working out where on the globe you are. Untrained my backside.
Of course; some of them ‘flogged’ the obs reports – just like some scientists ‘flog’ their reports too.
One instantly easy to use correction would be to observe the various watches moving against GMT and any idle Officer’s garbage should stick out like a sore thumb. Except – the most common form of ‘flogging’ was just to use the last obs.
This is the normal cack reason rolled out as to why millions of reports are not used – but, hey, it’s OK to use the discredited bulk of land-based stations with their DEMONSTRATED bad siting.
I smell a large and deliberate denigration of what should be a well-mined dataset of worldwide reports. I can only assume that Merchant Marine OBS totally failed to show any signs of an AGW signal – probably the contrary. If anyone were to bother, they would probably find lots of evidence of not just the PDO and AMO, Uncle Tom Cobley and all, but many other interesting temperatures – including the previously reported low ice levels in the Arctic.
I suggest, sir, that before you throw out insults like “ie, untrained sailors,” you get off your high horse and find out who these “ie, untrained sailors” were, and just what their training was.
For your information, I was an untrained sailor – I was so untrained as a Radio Electronics officer that I obviously couldn’t manage to send the OBS reports in to the local radio stations. Incidentally, I used to help prepare the OBS and kick any slackers.
Most of the “untrained sailors” who put the OBS together had their own selfish interest at heart, of course. After all, the OBS were used by the WMO and others to put together the world’s shipping forecasts that the “untrained sailors” relied on – sometimes a wrong weather forecast could be the difference between life and death.

November 20, 2011 11:06 am

Nick Stokes says:
November 20, 2011 at 5:54 am
I wrote a post here which compares them all. Except for the period 1930-1960, when there is some large oscillation, the comparison of the three with the models (graph here) shows that the model tracks the path of the observations better.
The global terrestrial temperature profile can be simulated simply, as a good proxy, with some (six to eleven) solar tide functions – whether the decade from 1960 to 1970 or the whole span from 1900 to 2020.
V.

November 20, 2011 11:25 am

Interesting is that Sweden was snow-free on November 17 (that has never happened this late before) and that the temperature anomaly was tremendous. Also no snow for Norway; the season is delayed by at least two weeks. In Finland, an unusually low amount of snow. No skiing in Scotland: no snow. No skiing in the Alps: too warm, no snow.
http://www.smhi.se/sgn0102/maps/ttmk_111119.gif

DirkH
November 20, 2011 11:28 am

NotTheAussiePhilM says:
November 20, 2011 at 10:19 am
“To further illustrate my point, here’s a few more plots using different filter sizes”
The 0.4 deg/century trend starts before massive industrial CO2 emissions begin in the 1950s… not a good omen for the CAGW theory and the future performance of their models. 😉

November 20, 2011 11:39 am

So Santer beats the c*p out of himself here. Neat.
By his 17-year standard, (a) it’s cooling (b) this contradicts the models’ predictions (c) all the models are useless.
The fit is pretty well nonexistent except for around 1990-2000 global deltaT (fig 5) which I guess reflects the time Santer was developing his models. More of his record-counterfeiting here. I’m quoting WUWT commenter Andrew Russell who reckons this is Pat Michaels’ work that made Santer want to “beat the c**p” out of him.
But the records do support Akasofu’s picture of 60-year cycles imposed on a slow linear recovery from the Little Ice Age (or on another, much slower cycle).
Thank you Bob. I hope we will see your graphic outing of Santer elevated to the MSM somewhere.
JOSH??

November 20, 2011 11:49 am

Camburn, Anthony
don’t make me laugh!
“that new fangled method of using stats” has to be “calculating in tongues” “calculating in forked forking tongues”

November 20, 2011 11:49 am

Bob Tisdale says: November 20, 2011 at 9:37 am
“Do you really think your graph shows the model mean track the multidecadal variations in the trends of any of those datasets?”

My point is that the SST measures vary among themselves, and their deviation from the model mean is not hugely different from what their variation from their own mean would be.
Theo Goodwin says: November 20, 2011 at 8:57 am
“Does it mean “looks more like the path?” If so, then you are just eyeballing something and calling it science.”

Theo, this is an eye-balling post. All I’m saying is that if you cast your eyeball over a wider range of SST datasets, what you think you saw with just HADISST looks a lot different.

Editor
November 20, 2011 1:37 pm

Werner Brozek: Here’s a further explanation that will complement that of Nick Stokes.
HADSST2 is a spatially incomplete dataset, for starters. Refer to:
http://bobtisdale.wordpress.com/2010/07/05/an-overview-of-sea-surface-temperature-datasets-used-in-global-temperature-products/
There is a significant amount of data missing in the Southern Hemisphere in HADSST2, and in case you’re not aware, the spatially complete, satellite-based SST datasets like HADISST and Reynolds OI.v2 show a significant cooling at the high latitudes of the Southern Hemisphere. An example:
http://i42.tinypic.com/2z9f7nq.jpg
That graph is included in my monthly updates:
http://bobtisdale.wordpress.com/2011/11/07/october-2011-sea-surface-temperature-sst-anomaly-update/
Second, HADSST2 has another bias. The Hadley Centre spliced two “incompatible” (for lack of a better word) source datasets together in 1998 and it created an upward shift in the HADSST2 data that does not exist in any of the other SST datasets. The upward shift in the HADSST2 data after 1998 is approximately 0.065 deg C compared to HADISST. That’s a lot of upward bias.
http://i56.tinypic.com/308fjar.jpg
The graph is from this post:
http://bobtisdale.wordpress.com/2010/11/26/does-hadley-centre-sea-surface-temperature-data-hadsst2-underestimate-recent-warming/
I also discussed and illustrated it with other SST datasets in this post:
http://bobtisdale.wordpress.com/2009/12/12/met-office-prediction-%e2%80%9cclimate-could-warm-to-record-levels-in-2010%e2%80%9d/
So those two biases are the likely reasons that the HADSST2 trends are closer to the model mean in Nick’s post.
Regards

November 20, 2011 2:12 pm

DirkH
actually some models do get the decadal oscillations correct. In frequency and magnitude. They can only get the timing correct by dumb luck. that’s just math and the test constraints.
when you download model data and have a look at it let me know. you’ll be less of a waste of time

November 20, 2011 3:03 pm

Systems theory and ecosystems theory all over again, welcome back to the 1970’s.

J Bowers
November 20, 2011 3:08 pm

Dennis Nikols — ” I see Roger Pielke Sr. is suggesting you submit this for peer review. Great idea but not sure it would help.”
And I’d wager Pielke Sr. wouldn’t go within a mile of acting as co-author on this with Tisdale.

tokyoboy
November 20, 2011 3:23 pm

One reason for the “17 years” should be the COP “17” for this year.
Next year he will be shouting “18 years”, and in 2013 “19 years” ….. /sarc

Editor
November 20, 2011 4:31 pm

J Bowers says: “And I’d wager Pielke Sr. wouldn’t go within a mile of acting as co-author on this with Tisdale.”
No reason to speculate or wager. I have no intention of writing a paper. I write blog posts.

Werner Brozek
November 20, 2011 5:17 pm

“Bob Tisdale says:
November 20, 2011 at 1:37 pm
Werner Brozek: Here’s a further explanation that will complement that of Nick Stokes.”
Thank you for these replies. I have not digested them yet but plan to work on it. But in the meantime, unless I missed it, HADISST is not part of the options on the http://www.woodfortrees.org/plot/. Should it be there and if so, can you use your influence to get it added? Thank you!

gene watson
November 21, 2011 9:16 am

Truth is, the IPCC models (all of them) don’t hindcast or forecast correctly, even unanimously getting the sign wrong. They are all worthless and GIGO continues to rule. Let’s move on.

November 21, 2011 10:13 am

For grins, I took the global SST 360-month figure above, superimposed solar cycle data from http://solarphysics.livingreviews.org/Articles/lrsp-2010-1/, screen-scraped and (linearly) scaled by eyeball so that the 240-month anomaly looks roughly proportional to the integrated area of the previous two cycles at the beginning, and added four vertical lines. The result is here: http://www.phy.duke.edu/~rgb/combined.jpg (sorry, I don’t know how to include it directly in a reply in this interface).
The four vertical lines are:
a) 1945 — nuclear testing begins
b) 1957 — above ground nuclear testing exceeds 40 blasts a year, many in the megaton range.
c) 1963 — above ground testing peaks (1962 at 140 tests/year) followed by test ban treaty in 63.
c-d) 26 years over which there are at LEAST 40 tests/year, underground but (of course) vented to the atmosphere in such a way that there is still detectable fallout in the upper atmosphere. From d) on there have still been tests, but only a relative handful, with less total energy than was probably produced by volcanic activity over the same general timeframe.
I’ve often wondered why nuclear testing has never been considered (AFAIK) as a confounding parameter to simple climate models based on insolation alone as proxied by the solar cycle. As you can see, by aligning the solar cycle so that it merely eyeballs out normalized to the linear trend on a 20-previous year basis, one expects warming where it warms everywhere but the stretch from 1945-1967 where (I would postulate) we had a “nuclear winter” from the HUGE amount of aerosols and radioactive dust dumped into the stratosphere and beyond. If there is any truth to the idea that clouds are nucleated by GCRs as well as particulate matter, how much more strongly must they be nucleated by radioactive particulate matter, not to mention the substantial volume of exotic aerosols each blast no doubt produced.
If one inverts the graph of nuclear testing from wikipedia, it almost perfectly fits the “hole” in SSTs relative to 2-cycle average of solar output. This also explains why warming has been predominantly northern hemisphere — the anomalous COOLING was due to (mostly) northern hemisphere blasts, and the clearing of radioactive dust has had a stronger effect there as blasts were gradually cleaned further and further up by successive treaties (plus the fact that the tests themselves were smaller and smaller — numerical simulation was considered adequate to design most warheads from somewhere late 70’s on, and we had reliable designs for BIG bombs, all we would ever need on both sides of the cold war).
This also explains the part of the figure I haven’t drawn in – the last two solar cycles, if added, suggest that the anomaly SHOULD be more or less back on the pre-1945 pattern of depending mostly on the previous two solar cycles, which makes so very much sense, ESPECIALLY for SSTs, where LOCAL warming and UHI effects are not available as easily cherrypicked, human-manipulated confounding factors.
Just a thought. I’d guess that the DoD or somebody has long-term measurements of the annual patterns of residual fallout, enough to estimate how they might compare and contribute to ordinary dust, aerosol/ghg and GCR modulation of insolation over the last sixty years.
rgb

G. Karst
November 21, 2011 11:11 am

Robert Brown says:
November 21, 2011 at 10:13 am
If there is any truth to the idea that clouds are nucleated by GCRs as well as particulate matter, how much more strongly must they be nucleated by radioactive particulate matter, not to mention the substantial volume of exotic aerosols each blast no doubt produced.

Try as I might, I can’t find a way of getting around this statement. Your linked graph is pixelated beyond use. I can make out the vertical lines, is all. The meaningful plot would be nuclear tests vs cloudiness vs SSTs. I think Bob Tisdale or Willis Eschenbach can direct you to some sort of cloudiness data. I may be wrong. GK

November 21, 2011 1:58 pm

I apologize — probably a result of laziness — I scraped the screen to build it, but I screwed up the resolution of the original figure in the process, I guess. Try again, I (should have) fixed it now. I also put up an eps version of the figure combined.eps at the same (otherwise) URL if the jpg doesn’t come through clean.
I’m not certain one should ignore the direct effect of aerosols and dust from nuclear blasts and go only with clouds. If clouds were strongly correlated, it might help support the GCR-cloud connection, but volcanoes can produce significant cooling without the radiation and with similar or even smaller total energies being released. It’s a multivariate process, and hence it would be dangerous to assume that we know which of several possible mechanisms dominates the effect (if any). However, everybody remembers that one of the predicted doomsday aspects of global nuclear war was “nuclear winter”.
It would be interesting to get a firmer figure on the total megatonnage blown per year – the wikipedia page has only the total number of tests, not the size of each test. The larger tests in the 50’s were up to 15 megatons IIRC, in a single blast right on the water (a huge chunk of hot, radioactive water blasted up through all atmospheric layers); many of them (especially the larger ones) were on islands, and nearly all were above ground. The Apple 1 (no, not the computer) test blast, part of the “Teapot” series, was set off on my birthday (3/29/55) at right about the time I was born (again, for grins); this series added up to around 150 kt set off in roughly 2 months, all of them ground blasts IIRC, and hence they kicked up much dust.
From the figure it looks like even the very FEW blasts set off in 1945 very likely had an immediate effect followed by a cumulative (integrated) but gradually decaying longer-term effect, just as the “right” way to deal with solar activity is almost certainly to use e.g. an exponential integration window that stretches back at least three or four cycles, not a straight-up 30-month running average. This is probably true for Tisdale’s figures as well – he’s using a square window 204 or 360 months into the past, and for his purpose that is fine, but almost all of the “reasonable” ways to average past behavior to acquire a smoothed current behavior should be non-Markovian integrals with an exponential window into the past. Or maybe non-exponential, or (most likely) multi-exponential.
The timescale for turnover of the ocean is of order 1000 years, with the potential for non-Markovian behavior on longer scales still (see Global Physical Climatology by Hartmann, 1994). It’s not just what Mr. Sun is doing today; it is what Mr. Sun was doing ten, fifty, a hundred years ago. To quote: “The thermal, chemical, and biological properties of the deep ocean therefore constitute a potential source of long-term memory for the climate system for timescales up to a millenium.” Somehow I don’t think that the GCM people are including water cooled during the Maunder minimum (that is very likely still randomly upwelling at various points in the oceanic turnover), let alone water warmed during the MWP roughly 1000 years ago – but they should. Hartmann’s book seems to have preceded Mann et al and the Hockey Team – he states quite clearly, “The early Holocene from about 10000-5000 years ago was a time when Northern Hemisphere climates were warmer than today’s”. Gosh, I wonder how that happened, since it wasn’t anthropogenic CO_2. In fact, the facts as laid out TEXTBOOK style (as in everybody knew them) in 1994 offer scant support to the AGW-due-to-CO_2 hypothesis.
rgb

John Silver
November 21, 2011 11:56 pm

“But we’re using monthly data so the trends are actually for 204- and 360-month periods.”
No, the data are the individual thermometer readings.
You are talking about monthly averages, and then you (and everybody else) are taking averages of averages.

November 22, 2011 11:41 am

November 20, 2011 at 11:06 am
Nick Stokes says:
I wrote a post here which compares them all. Except for the period 1930-1960, when there is some large oscillation, the comparison of the three with the models (graph here) shows that the model tracks the path of the observations better.
I think the relevant point of this subject is the physical nature of that oscillating global climate and the processes and geometries involved. As long as there is no satisfactory explanation of what is called ‘noise’ riding on a supposedly ‘holy’ ever-increasing function, this subject has no relevance.
Science always has to show by argument whether an effect is relevant. Because the oscillating frequencies visible in temperature proxies range from 1/month to 1/millennium, these analysed frequencies and their functions are the basis for assessing a possible human contribution. But as a first step the very nature of these oscillations has to be explained, and this has not yet been done in the climate science community. Although there is some FFT work on the oscillations, it seems the power frequencies say nothing to the workers. Yet with a short look, possible sources of these oscillations can easily be found in the synodic tide profiles of the couples frequenting the Sun.
There is a cave at Wanxiang in China, and from a depth of about 1 km people have taken a stalagmite (WX42b) and analysed its delta-O18 values from 195 AD to 2003 AD. Results differ in many ways depending on whether one takes 14C data or 18O data, and moreover there are big differences between locations on the globe. However, the key point of these data (for the oscillations) is the time coherence of the changing values, and not in the first order the amplitude of the proxy temperature, because all these methods seem to be nonlinear with respect to seasons and/or temperature and atmosphere. And because there is no nonlinearity in time, each movement around the Sun can be calculated to the second and can serve as a linear reference for the time-calibrated scale from the decay data.
There is a possibly interesting trend visible in the densities of the planets, in that, first, the density has an impact on the tide-like functions visible in the samples and, second, there seems to be something like a quantum effect in the densities: if there were steps of integer density values and one subtracted the integers, the densities would lie on a linear function of the range order.
http://volker-doormann.org/images/densityfraction.gif
However, it is evident that the synodic function of the plutinos Quaoar and Pluto has the main impact on the amplitudes in the temperature proxies from all known samples over the last two millennia, followed by the synodic function of Neptune and Pluto.
http://volker-doormann.org/images/wx42b_1500_2000.gif
http://volker-doormann.org/images/wx42b_0_2000.gif
http://volker-doormann.org/images/wx42b_100o_1500.gif
http://volker-doormann.org/images/wx42b_500_1000.gif
http://volker-doormann.org/images/wx42b_0_500.gif
It is not always clear from the data whether a high-frequency cut has been applied. Because of that it is possible that the amplitudes relating to the inner planets like Mercury and/or Venus are smoothed to death. But it can be shown that these fast-moving objects leave footprints in the HadCRUT3 history.
From the given evidence the couple of Quaoar and Pluto makes its complicated period of 913.5 years, which means that the increasing heat profile since the LIA can be taken as one phase of a period.
For the subject of this thread it means that there is no basis for the hoax of human CO2 and no basis for such speculation.
V.

ferd berple
November 22, 2011 7:06 pm

Doesn’t this put the lie to Trenberth’s position that the models have not been initialized?
Hegerl:
[IPCC AR5 models]
So using the 20th c for tuning is just doing what some people have long
suspected us of doing […] and what the nonpublished diagram from NCAR showing
correlation between aerosol forcing and sensitivity also suggested.