Joe D’Aleo and Don Easterbrook have produced a new paper for SPPI. This graph of US Mean temperature versus the AMO and PDO ocean cycles is prominently featured:

I particularly liked the regression forecast fit:

They have this caveat:
Note this data plot started in 1905 because the PDO was only available from 1900. The divergence 2000 and after was either (1) greenhouse warming finally kicking in or (2) an issue with the new USHCN version 2 data.
Hmm. I’m betting USHCNv2.
Abstract:
Perlwitz et al. (2009) used computer model suites to contend that the 2008 North American cooling was naturally induced as a result of the continent’s sensitivity to widespread cooling of the tropical (La Nina) and northeastern Pacific sea surface temperatures.
But they concluded from their models that warming is likely to resume in coming years and that climate is unlikely to embark upon a prolonged period of cooling. We here show how their models fail to recognize the multidecadal behavior of sea surface temperatures in the Pacific Basin, which determines the frequency of El Ninos and La Ninas and suggests that the cooling will likely continue for several decades. We show how this will be reinforced by a multidecadal shift in the Atlantic.
Here’s the paper you can download:

UPDATE: The goodness of fit seems almost too good. There may be a reason. I’m reminded in comments of this article by statistician William Briggs (thanks, Mosh):
Do not smooth times series, you hockey puck!
Where he points out:
Now I’m going to tell you the great truth of time series analysis. Ready? Unless the data is measured with error, you never, ever, for no reason, under no threat, SMOOTH the series! And if for some bizarre reason you do smooth it, you absolutely on pain of death do NOT use the smoothed series as input for other analyses! If the data is measured with error, you might attempt to model it (which means smooth it) in an attempt to estimate the measurement error, but even in these rare cases you have to have an outside (the learned word is “exogenous”) estimate of that error, that is, one not based on your current data.
If, in a moment of insanity, you do smooth time series data and you do use it as input to other analyses, you dramatically increase the probability of fooling yourself! This is because smoothing induces spurious signals—signals that look real to other analytical methods. No matter what you will be too certain of your final results! Mann et al. first dramatically smoothed their series, then analyzed them separately. Regardless of whether their thesis is true—whether there really is a dramatic increase in temperature lately—it is guaranteed that they are now too certain of their conclusion.
Perhaps Mr. Briggs can have a look and expound in comments. I only have the output, not the method. But let’s find out and determine how good the “fit” truly is. – Anthony
UPDATE: Statistician Matt Briggs responds in depth here. He says:
I want to stress that if D&E did not smooth their data, the correlation would not have been as high; but as high as it would have been, it would still have been expected. All that smoothing has done here is artificially inflated the confidence D&E have in their results. It does not change the fact that AMO + PDO is well correlated with air temperature.
OK, so I can see how smoothing (removal of high-frequency components) would make two time series generated from white noise more correlated after smoothing than before, but I’m less clear about series generated from red or brown noise.
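For anyone who wants to poke at Briggs’ point directly, here is a minimal sketch of the kind of experiment he describes, using made-up white-noise series; the series length, window width, and trial count are arbitrary choices, not anything taken from the D&E paper:

```python
# Correlate two independent white-noise series before and after smoothing.
# Illustrative only: lengths and window widths are arbitrary.
import numpy as np

rng = np.random.default_rng(42)
n, trials, window = 200, 1000, 21
kernel = np.ones(window) / window  # simple moving-average smoother

raw_r, smooth_r = [], []
for _ in range(trials):
    x, y = rng.standard_normal(n), rng.standard_normal(n)
    raw_r.append(np.corrcoef(x, y)[0, 1])
    xs = np.convolve(x, kernel, mode="valid")
    ys = np.convolve(y, kernel, mode="valid")
    smooth_r.append(np.corrcoef(xs, ys)[0, 1])

# The mean correlation stays near zero either way, but the spread of the
# smoothed correlations is much wider, so large |r| values "by accident"
# become far more common after smoothing.
print(f"raw      |r|: mean = {np.mean(np.abs(raw_r)):.3f}")
print(f"smoothed |r|: mean = {np.mean(np.abs(smooth_r)):.3f}")
```

Swapping the white noise for a cumulative sum of noise (a crude brown-noise stand-in) is one way to probe the red/brown-noise question raised above.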
Geoff Sharp says:
October 1, 2010 at 7:43 am
Maybe with all your wisdom you can tell us why the F10.7 record jumped from roughly 80 to 90 while in the background the sunspot area is diminishing with no flare activity?
F10.7 does not directly answer to sunspot number or area, but rather to the integrated effect of emissions from loops in the corona. If you have several [even small] active regions close together [like 1109 and 1110], an extensive loop system develops with increased F10.7 as result.
Brian G Valentine says:
October 1, 2010 at 8:46 am
Proclaimeth the great Svalgaard:
the whole analysis is worthless.
Indeed, if you base it on a high R^2 from heavily smoothed, obsolete data.
rbateman says:
October 1, 2010 at 9:28 am
Leif: Geoff is talking about a certain ‘type’ of sunspot where, when it rounds the limb to the backside, the flux rises abruptly.
No such thing. The Sun is just messy. You can find all kinds of odd behavior, if you want to.
I’ll suggest that if SPPI wants to have any credibility that they send out pieces for HARD critical review. The harshest critics they can find. I can think of three from these parts.. one who covers the sea, one who covers the sun, and a third who covers the land.
screw peer review. you want to “murder board” this stuff.
[snip – If you can’t be polite don’t bother, ~jove, mod]
Adding in ENSO will probably improve the fit (a minimal sketch follows below). The divergence starting in 2000 looks like it actually starts closer to 1998, when a record-setting El Nino took place.
In any case, the global ocean drives the climate. Its average temperature is 4C, it has 1000 times more mass than the atmosphere, and it’s dynamic. We need to figure out what (other than the sun) drives the ocean – in particular the mixing of the frigid deep with the warm surface layer: how much mixing occurs, when, where, and what factors can vary it.
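To make the “add in ENSO” suggestion concrete, here is a hedged ordinary-least-squares sketch; the function and the synthetic index series below are placeholders for illustration, not the data or method D&E actually used:

```python
# Compare the regression fit with and without an ENSO column.
import numpy as np

def fit_temperature(temp, amo, pdo, enso=None):
    """Ordinary least squares of annual temperature on ocean indices.
    Returns the coefficients and the R^2 of the fit."""
    cols = [np.ones_like(temp), amo, pdo]
    if enso is not None:
        cols.append(enso)
    X = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(X, temp, rcond=None)
    fitted = X @ beta
    ss_res = np.sum((temp - fitted) ** 2)
    ss_tot = np.sum((temp - temp.mean()) ** 2)
    return beta, 1.0 - ss_res / ss_tot

# Synthetic stand-ins just so the function can be exercised end to end.
rng = np.random.default_rng(1)
yrs = 106
amo, pdo, enso = rng.standard_normal((3, yrs))
temp = 0.5 * amo + 0.4 * pdo + 0.2 * enso + 0.1 * rng.standard_normal(yrs)

print(fit_temperature(temp, amo, pdo)[1])        # R^2 without ENSO
print(fit_temperature(temp, amo, pdo, enso)[1])  # R^2 with ENSO
```

Comparing the two R^2 values (with real indices, not these stand-ins) would show whether the post-1998 divergence shrinks once El Nino years are accounted for.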
Stephen Mosher:
I agree. What could be good work is getting subverted, in my opinion, by emotion.
At this point I would like to see the ASA or one of its journals conduct a review of the statistical methods used in the prominent climatology papers. A scorecard (overconfident, confident, flawed, etc.) with recommendations for improvement would really help clear up the battlefield.
feet2thefire says:
September 30, 2010 at 10:17 pm
“I’ve been trying to find what causes the El Niño, and haven’t found anything on it. Everyone seems to assume the El Niño is the cause of other things – as if it just appears out of nowhere.”
Weakened trade winds result in less mixing of the warm surface layer with the cold deep water, allowing the sun to warm the surface layer more than it does when the trade winds are stronger.
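A toy two-layer calculation can illustrate that mechanism; every number in it is invented purely for illustration:

```python
# The surface mixed layer is warmed by the sun and cooled by exchange with
# the cold deep ocean. Weaker mixing -> warmer equilibrium surface layer.

def equilibrium_surface_temp(q_solar, k_mix, t_deep):
    """Steady state of  d(Ts)/dt = q_solar - k_mix * (Ts - t_deep)."""
    return t_deep + q_solar / k_mix

T_DEEP = 4.0     # deep-ocean temperature, deg C
Q_SOLAR = 50.0   # net solar heating of the surface layer, arbitrary units

print(equilibrium_surface_temp(Q_SOLAR, k_mix=2.0, t_deep=T_DEEP))  # strong trades, more mixing
print(equilibrium_surface_temp(Q_SOLAR, k_mix=1.0, t_deep=T_DEEP))  # weak trades: warmer surface
```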
If you take the Total Solar Irradiance for the last 110 years, you get a real eye-opening overlap with AMO/PDO and Temperature.
Meanwhile, poor little CO2 and fossil-fuel-use plots just go willy-nilly, having no relation to temperature whatsoever.
Climate Change (née Global Warming) marches to the tune of planetary mechanics. CO2 is just coming along for the ride.
steven mosher says:
October 1, 2010 at 10:31 am
‘I’ll suggest that if SPPI wants to have any credibility that they send out pieces for HARD critical review. The harshest critics they can find. I can think of three from these parts.. one who covers the sea, one who covers the sun, and a third who covers the land.
screw peer review. you want to “murder board” this stuff.’
Agreed. Good work will hold up.
And Bob Tisdale seems to have to make the same points re PDO and AMO over and over again on various threads. It must be like trying to herd the sea while ploughing cats.
Through my last several years of reading climate blogs I have learned some great defenses here that Joe D’Aleo and Don Easterbrook can put to work.
1. The method used may be wrong, but the results are consistent with numerous other peer reviewed climate research studies.
2. Regarding the regression fit and that nasty divergence after 2000, there is a climate-science-approved trick that can fix that right up. Perhaps “trick” is a poor choice of words; it’s more of a “good way to deal with a problem”.
The posts consistently focus on the fine points of statistical manipulation. As I pointed out in my earlier post, there are two primary points of contention:
1) the temperature history by UAH rather than GISS, and
2) the premise that a post-LIA warming event is still going on, rather than having stopped in the late 80s, as per the IPCC report.
What effect do these two points have on the conclusions? The warmists, I suggest, would say the paper is invalid because of them. The statistical details fail if the underlying data/assumptions are wrong – exactly what we skeptics claim is wrong with the whole CAGW-by-CO2 hypothesis.
No comments? Am I wrong in these points?
dp says:
October 1, 2010 at 8:37 am
“This has been kicked around for some time:
http://www.ukweatherworld.co.uk/forum/forums/thread-view.asp?tid=17838&start=1
The point of which is there is a natural signal and an anthro signal in the global data. Neither signal is known to be anything but cyclical, and the science is far from settled regarding positive or negative feedback. We know from the historic record that climate swings from one condition to another, and that no condition has been permanent, and that no swing was ever unidirectional.
Correct. The closest we can get to a simple solution of the interlinked/dependent cycles seen in our climate is to examine the behaviour of driven pendulums/oscillators.
The signal from any possible effect of human activity cannot be separated from these natural cycles, as it will be smeared across many processes, and trying to identify and separate all these tiny ‘human’ dTs is outside the granularity of our measurement systems.
Are not the PDO and AMO defined with “de-trended” data, i.e. with the warming trends removed? If so, how can they be used to prove that there is no overall warming? Also, surely these oscillations are spatial as well as temporal, i.e. when one part of the Pacific gets warmer, a different part gets cooler, with the net effect averaging to zero as required by basic thermodynamics. If so, how can this be used to say anything about the presence or absence of a global effect?
RC Saumarez says:
October 1, 2010 at 2:45 am
“Surely the procedure for correlating smoothed data sets is well known (Papoulis, Bendat & Piersol). The number of degrees of freedom of the data are reduced according to the serial correlation in the signals.”
How very true! But referencing widely-acclaimed monographs will not deter some web-resident “experts,” who don’t understand the basics of signal analysis, from drawing quite a different conclusion. After all, we’re talking “climate science” here.
As for the results, there is little new or surprising here. As others have pointed out, low-frequency temperature variations at continental stations should resemble those of surrounding oceans.
BBD.
Yes, Bob is tireless.
Let me try to explain why each and every study that tries to “explain” or “model” the temperature by appealing to factors such as SST indices or sunspots or almost anything else (position of planets, etc.) is destined to be taken to pieces and shown to be wrong in some fundamental way. The whole enterprise is mistaken and wrong from the start. Let’s just start with the fundamental physics which all these approaches ignore. GHGs will cause warming. Aerosols will cause cooling. If you leave those two out of your equations, you are going to be wrong. One way or another, you are going to be wrong. Next, the real criticism of AGW lies not in criticizing the properties of GHGs, but rather in understanding how the models are uncertain. That is, the climate is very complex. Understanding how much warming we will see is very complex. We have reason to doubt the accuracy of large complex models. WHY, why, why would you even think that a simple model using SST indices could explain everything you needed to know about the future climate? So, if you believe that climate models are barely able to predict the future, if you believe THAT, then you can with impunity reject any simplified “regression”. It has to be wrong. The joy is in finding which way it is wrong, but you know it’s wrong.
So every time I see one of these things I know it’s wrong. I know the various classes of mistakes that are made in these types of approaches. I know the various methodological tricks one can use to make results look good, and deconstructing them is shooting fish in a barrel. I know that any approach that starts by fitting observational data to a set of equations is wrong, wrong, wrong, if it leaves key variables out.
If you want to see how to do a simple model that actually does things more or less right, then look at Lucia’s model “Lumpy”.
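For readers who haven’t seen it, “Lumpy” is a lumped single-box energy-balance model. The sketch below is not Lucia’s code, just the generic form such a model takes, with illustrative parameter values:

```python
# A lumped, single-box energy-balance model:  C * dT/dt = F(t) - lambda * T
# Parameters here are illustrative, not fitted values.
import numpy as np

def lumped_response(forcing, lam=1.2, heat_cap=8.0, dt=1.0):
    """Integrate C dT/dt = F - lam*T with a forward-Euler step.
    forcing: W/m^2 each year; lam: W/m^2/K; heat_cap: W yr/m^2/K."""
    temp = np.zeros(len(forcing))
    for i in range(1, len(forcing)):
        temp[i] = temp[i - 1] + dt * (forcing[i - 1] - lam * temp[i - 1]) / heat_cap
    return temp

# Example: ramp the forcing linearly (e.g. rising GHG forcing). The response
# lags and is damped by the heat capacity standing in for the ocean.
years = np.arange(1900, 2011)
forcing = 0.03 * (years - years[0])   # ~3.3 W/m^2 over 110 years, illustrative
temperature = lumped_response(forcing)
```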
SteveSadlov says: at 8:08 am RE: grape harvest in CA
On Tips & Notes I remarked re Wash. State
TIPS AND NOTES. 9/20/2010
Harvest time in the vineyard is based on (among other things) the accumulation of heat by wine grapes over the summer. Harvest time and grape quality are great ways of summarizing, or integrating, weather with respect to long-term expectations – especially those of the vines. That said, in central Washington State’s vineyards the summer was underwhelming.
http://www.yakima-herald.com/stories/2010/09/19/grape-harvest-is-a-bit-later-than-usual-this-year
I agree with your conclusion:
This is definitely more in the climate category than the weather category.
I’d rather work with grapes and wine than all those messy numbers, smoothing, and correlation (or lack thereof).
“[…] you never, ever, for no reason, under no threat, SMOOTH the series!”
Nonsense. In advanced physical geography there is a novel concept called scale. Aggregation criteria affect spatiotemporal patterns. The responsible thing to do in exploratory data analysis is to report on such variation. Statistical inference is quite another thing – and the assumptions upon which it is based rarely hold in nature.
Steven Mosher says:
October 1, 2010 at 4:24 pm
—–
That is very Wrong.
One very clear example is ENSO: a natural oscillation that affects global temperatures by a clear +/-0.2C and is not caused by GHGs or aerosols, nor by any climate forcing included in any climate model. The ENSO regions themselves have not warmed at all in the last 140 years – flat, that is.
So, your statement is prima facie incorrect.
Now take the simple idea of how ENSO works and one can indeed extend it to other ocean areas; prima facie, we can start to include other natural oscillations that have a temperature impact and are not caused by a climate-model forcing.
Smoothing seems to be widely misunderstood here. The effects of any data smoothing on the signal can always be determined EXACTLY from the z-transform of the smoothing weights, yielding both the amplitude and phase characteristics. As Bill Illis demonstrated with US monthly anomalies, we gain insight by eliminating intra-annual variations. The advisability of smoothing depends entirely on the signal characteristics. What it does NOT do, however, is introduce spurious correlation between two time series where none existed before. Computer results suggesting it does are the artifact of algorithms that fail to produce truly independent sequences of random numbers. Nor is it true that smoothing always increases correlation. If the coherence is largely at high-frequency components, smoothing them away will DECREASE THE CORRELATION.
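To make the z-transform point concrete: the frequency response of a smoother follows directly from its weights. A sketch for an 11-point moving average (the window length is just an example):

```python
# Amplitude and phase response of a moving-average smoother from its weights.
import numpy as np
from scipy.signal import freqz

window = 11
weights = np.ones(window) / window   # moving-average smoothing weights
w, h = freqz(weights, worN=512)      # H(e^{jw}) evaluated from the weights

amplitude = np.abs(h)                # gain at each normalized frequency
phase = np.unwrap(np.angle(h))       # phase shift (linear for this filter)
# High frequencies are strongly attenuated; nothing hidden, nothing mysterious.
print(amplitude[:5], amplitude[-5:])
```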
“Take two randomly generated sets of numbers, pretend they are time series, and then calculate the correlation between the two. Should be close to 0 because, obviously, there is no relation between the two sets. After all, we made them up.
But start smoothing those series and then calculate the correlation between the two smoothed series. You will always find that the correlation between the two smoothed series is larger than between the non-smoothed series. Further, the more smoothing, the higher the correlation.”
Incorrect. Your awareness-raising exercise was worthwhile, but you’ve gone wrong with “always”. Exceptions exist.
Doug Proctor says:
October 1, 2010 at 1:48 pm
The posts consistently focus on the fine points of statistical manipulation. As I pointed out in my earlier post, there are two primary points of contention:
1) the temperature history by UAH rather than GISS, and
What is the relevance of this? The UAH record began in 1979. I can’t see what your point is. I did read the earlier post.
2) the premise that a post-LIA warming event is still going on, rather than having stopped in the late 80s, as per the IPCC report.
Can you define what you mean by a “post-LIA warming event”? When did it start? What triggered it? It’s impossible to comment on such vague concepts.
“Note this data plot started in 1905 because the PDO was only available from 1900.”
Not likely.
There are too many climate nonalarmists who think PDO is the same thing as North Pacific SST. Such serious misunderstandings undermine the credibility of the nonalarmist movement. [It’s not an offensive stretch to speculate that most readers here lack the background necessary to think critically about factor analysis (such as PCA). What is PDO, if not North Pacific SST? SAS has some good webpages for those tackling this question.]
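For those tackling that question, here is a hedged sketch of the recipe behind a PDO-style index – the leading principal component of North Pacific SST anomalies after the global-mean SST signal is removed. Synthetic numbers stand in for a real gridded SST field:

```python
# A PDO-like index as the leading mode of detrended SST anomalies.
import numpy as np

rng = np.random.default_rng(0)
months, gridpoints = 600, 400
sst = rng.standard_normal((months, gridpoints))   # stand-in SST anomaly field

global_mean = sst.mean(axis=1, keepdims=True)     # global-mean SST each month
residual = sst - global_mean                      # remove the common warming signal
residual -= residual.mean(axis=0)                 # anomalies at each gridpoint

# Leading empirical orthogonal function / principal component via SVD.
u, s, vt = np.linalg.svd(residual, full_matrices=False)
pdo_like_index = u[:, 0] * s[0]                   # time series of the leading mode

# The index is a pattern amplitude, not an area-average temperature, which is
# why "PDO" and "North Pacific SST" are not interchangeable.
```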
“[…] the more smoothing, the higher the correlation.”
Not true. Exceptions exist. (I’ve had to point this out to career, tenured, academic statisticians [some of whom have terribly weak intuition about smoothing, its utility, its hazards, etc.])
Ian W, yes, the CO2 effect is logarithmic, so the 25 ppm rise should be divided by about 300, and the 90 ppm rise should be divided by about 370 to compare them (0.08 versus 0.24). So OK, the warming effect sixty years later is three times as much, not four. As I said, it is accelerating.
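Checking that arithmetic with the logarithmic form (the roughly 300 ppm and 370 ppm baselines are taken from the comment itself):

```python
# With a logarithmic CO2 effect, forcing scales as ln(C_new / C_old).
import math

early = math.log((300 + 25) / 300)   # 25 ppm rise on a ~300 ppm baseline
late = math.log((370 + 90) / 370)    # 90 ppm rise on a ~370 ppm baseline
print(early, late, late / early)     # ~0.080, ~0.218, ~2.7x (roughly "three times")
```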