Guest post by Dr. David Stockwell
I suspect that the only really convincing evidence against global warming is a sustained period of no global warming or cooling — climate sensitivity and feedbacks are too esoteric.
I have followed the recent global temperatures with some excitement, and started to prepare a follow-up to a previous article I wrote on the failure of global temperature to meet AGW expectations.
The Nature publication “Recent Climate Observations Compared to Projections” by Rahmstorf, Hansen and others in 2007 claimed that an up-tick in a graph showed that “global temperatures were increasing faster than expected”, and consequently climate change would be worse than expected. In “Recent Climate Observations: Disagreement with Projections”, using their methodology and two additional years’ data, the up-tick was shown to be an artefact of inadequate smoothing of the effects of a strong El Nino. Perhaps this rebuttal played some part in subsequent revisions of Rahmstorf’s graph with longer smoothing, which had the unfortunate effect (for him) of removing the up-tick, so they could no longer claim that “global temperatures were increasing faster than expected”.
Can we answer the question “Is the atmosphere still warming?” in a reasonable way?
From the field of econometrics comes empirical fluctuation processes (EFP), available to programmers in an R package called strucchange – developed to analyse such things as changes in exchange rates by the brilliant Achim Zeileis. The idea is to find a test of the null hypothesis that the slope parameter m for a section of a series has not changed over time:
H0: m1 = m2 versus the alternative H1: m1 not equal to m2
The idea is to move a window of constant width over the whole sample period, and compare local trends with the overall distribution of trends. The resulting process should not fluctuate (deviate from zero) too much under the null hypothesis and—as the asymptotic distributions of these processes are well-known—boundaries can be computed, which are only crossed with certain probability. If, on the other hand, the empirical process shows large fluctuations and crosses the boundary, there is evidence that the data contains a structural change in the parameter. The peaks can be dated and segmented regression lines fit between the breaks in slope.
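As a rough sketch of what this looks like in R with strucchange (here D is a placeholder name for a temperature anomaly series stored as a ts object):

library(strucchange)
# D: a temperature anomaly series as a ts object (placeholder name)
fp <- efp(D ~ time(D), type = "Rec-MOSUM")  # moving-window fluctuation process
plot(fp)     # plots the process against its significance boundaries
sctest(fp)   # formal test of H0: the regression coefficients are constant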
I applied the strucchange function EFP to the five official global temperature data sets (CRU, GISS, NOAA, UAH and RSS) from 1978 using the latest values in 2011, and to mean global sea level. The results for the global temperature are below:

The fluctuation process (top panel) crosses the upper significance boundary a number of times, indicating that the trend parameter is unstable. For example, it crosses in 1998, coincident with the strong El Nino, and then relaxes. Most recently, three of the five data sets are at the lower boundary, indicating that at least the CRU, NOAA and RSS datasets have shifted away from the overall warming trend since 1978.
The middle panel shows the structural break model for the CRU data, with the optimal number of breaks given by the minimum of the Bayesian Information Criterion (BIC) (bottom panel). The locations of the breaks are coincident (with a lag) with major events: the ultra-Plinian (stratosphere-reaching) eruptions of El Chichón and Mt Pinatubo, the Super El Niño and the Pacific Decadal Oscillation (PDO) phase change in 2005.
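A sketch of how that break count might be obtained with strucchange (again with D as a placeholder for the series):

library(strucchange)
bp <- breakpoints(D ~ time(D))  # dynamic-programming search over break dates
plot(bp)                        # BIC and residual sum of squares vs number of breaks
breakdates(bp)                  # break dates at the BIC-optimal partition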
Sometimes these types of models are sensitive to the start and end points, so I re-ran the analysis with data from 1950. Figure 2 is the resulting structural break model for CRU. While the fluctuation process did not show the same degree of recent downtrend, the structural break model is similar to the shorter series in Figure 1, except that the temperatures since 1998 are fitted with a single flat segment.
The temperature is plotted over multiple random AR(1) simulations, showing that it has ranged within the extremes of an AR(1) model over the period.
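One way such an overlay can be produced in base R, with the AR(1) coefficient estimated from the series itself (the details below are an illustrative guess, not necessarily how the figure was made):

ar1 <- arima(D, order = c(1, 0, 0))  # fit an AR(1) model to the series
phi <- coef(ar1)["ar1"]
plot(D, type = "n")                  # set up the axes
for (i in 1:50) {                    # overlay 50 simulated AR(1) paths
  sim <- arima.sim(list(ar = phi), n = length(D), sd = sqrt(ar1$sigma2))
  lines(ts(sim + mean(D), start = start(D), frequency = frequency(D)), col = "grey")
}
lines(D, lwd = 2)                    # the observed temperature on top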

Another indication of global temperature is the mean global sea level, both with and without the inverse barometer adjustment. Global sea levels tell the same story as atmospheric temperature, with a significant deceleration in sea level rise around the PDO shift in 2005.

By these objective criteria, there does appear to be a structural change away from the medium-term warming trend. Does this mean global warming has stopped?
What are the arguments that warming continues unabated?
Easterling and Wehner, in their article “Is the climate warming or cooling?”, lambasted “Numerous websites, blogs and articles in the media [that] have claimed that the climate is no longer warming, and is now cooling” for “cherry picking” the recent data. They examined the distribution of 10-year slopes in both the observed and modelled global temperature, and argued that because a small number of flat 10-year periods is to be expected, the long-term warming trend is intact.
Both E&W and EFP agree that there is a small chance of flat temperatures for 10 years (EFP says around 5%) during a longer-term warming trend. What E&W are saying is that given a small chance at any one time, the chance of flat temperatures at some time over the last 50 years, say, is much higher. This doesn’t alter the fact that to an observer during any of those decades when temperature was flat (as now) there would still be a 5% chance of a break in the long-term trend.
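A toy simulation makes the distinction concrete (the trend, noise level and autocorrelation below are illustrative guesses, not E&W’s numbers): the chance that a given decade is flat is small, but the chance of at least one flat decade somewhere in 50 years is much larger.

set.seed(1)
slope <- function(y) coef(lm(y ~ seq_along(y)))[2]
one_run <- function() {
  y <- 0.017 * (1:50) + arima.sim(list(ar = 0.6), n = 50, sd = 0.1)  # warming trend plus AR(1) noise
  s <- sapply(1:41, function(i) slope(y[i:(i + 9)]))                 # every 10-year slope
  c(pointwise = mean(s <= 0), anywhere = any(s <= 0))
}
rowMeans(replicate(5000, one_run()))  # per-decade chance vs chance of any flat decade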
Breusch and Vahid (2008, updated in 2011) chimed in with “Global Temperature Trends”, stating “there is no significant evidence for a break in trend in the late 1990s”, and “There is nothing to suggest that anything remarkable has happened around 1998.” As hard as I looked, I could not find any estimates of significance to back up this claim.
The statement is even more puzzling as the last 15% at each end of the series is typically not tested for breaks, due to the low power of the test on the diminishing number of data points. The 1990s fall within that outer 15%. A break of the size of the one in 1976 would not have been detected in their data.
Of course, there are a variety of other observations of the Earth’s radiative balance and ocean heat content supportive of the “no warming” claim, by top researchers such as Douglass and Loehle. There does not appear to be any credible empirical evidence from the AGW camp that the atmosphere is still warming.
I suspect that as in “Recent Climate Observations” where climate scientists were fooled into thinking that “climate change will be worse than expected” by the steep up-tick in global temperatures during a strong El Nino, they have also been fooled by a steep but longer-term up-tick in global temperatures associated with a positive phase of the PDO.
“When I hear the words “pretty flat” or “pretty close” I immediately think “He must be a climate scientist”. As to whether it rings true for you, or me — who cares?”
Best put-down of the week.
And this will be music to TC’s ears:
“The most basic assumption of linear regression is that the slope parameter m in y=mx+c is constant. It’s not. So the model is wrong.”
Hansen’s promises come true:
http://stevengoddard.wordpress.com/2011/04/14/hiding-the-la-nina-at-giss/
What is up with GISS????
Frequently we hear climate science castigated for publishing results that are not reproducible because computer programs and even data are not available. I would like to reproduce the graphs that David Stockwell has provided, and indeed make some of my own in the same way. But he does not give me enough information to do that. To be sure I am very much a tyro at statistics – but I do have ‘R’ installed on my PC. I can also obtain various temperature records. I’m interested particularly in how the breaks were identified after EFP gave a verdict that a simple linear model was inappropriate. [Aside: “The model y=mx+c is WRONG as m is not a constant.” This is not true; the model is wrong, but ‘m’ is a constant.]
All that aside it is a really excellent post – many thanks.
PS Which colour line is UAH data?
David Stockwell says: “When I say PDO I mean all those processes that PDO represents, that seem to explain the multi-decadal temperature fluctuations so well.”
I know of no processes represented by the PDO. It’s an aftereffect of ENSO.
A simple explanation of how the PDO patterns are formed in the North Pacific: The Eastern North Pacific SST anomalies (north of 20N) rise during an El Niño due to changes in atmospheric circulation, coastally trapped Kelvin waves, etc. The El Niño also slows or reverses trade winds in the western tropical Pacific, so less warm water is spun up into the Kuroshio Extension and the SST anomalies in the western and central North Pacific can drop. Warm SST anomalies in the eastern North Pacific and cool SST anomalies in the central and western North Pacific result in a positive PDO. During a La Niña, the Eastern North Pacific can cool, and due to the increased strength of the trade winds, more warm water than normal (leftover from the El Nino and created by the increase in Downward Shortwave Radiation over the tropical Pacific) is spun up to the Kuroshio Extension. That causes higher than normal SST anomalies in the western and central North Pacific. The pattern with the cool SST anomalies in the east and warm anomalies in the central and west is represented by a negative PDO.
Jimbo says: “What is up with GISS????” And you linked Steve Goddard’s post that starts with the sentence, “Incredibly, GISS has higher temperature anomalies now than they did in the middle of last year’s El Nino summer.”
http://stevengoddard.wordpress.com/2011/04/14/hiding-the-la-nina-at-giss/
There was no El Nino last summer. The average June-July-August 2010 NINO3.4 SST anomaly was about -1 deg C. That’s well into La Nina range. While I understand he’s trying to show the divergence between GISS and the other datasets, starting the post off with an incorrect statement doesn’t help. Then there’s the other very obvious question: why did he exclude the NCDC data?
I’m not sure what the significance of this analysis is.
David Stockwell’s hypothesis tests
H0: m1 = m2 versus the alternative H1: m1 not equal to m2
Of course, m1 is not equal to m2. Why would it be? Any underlying trend (due to AGW or other causes) is bound to be amplified or attenuated over various timescales by natural factors – particularly ENSO. For example, the Pinatubo eruption in 1991 would have the effect of reducing the trend of any 10-year period which ended in the early 1990s but increasing any trend beginning in the late 1980s. Include 1998 and adjacent trends will be markedly different.
As for the recent “flat” period
1. Since 2002 solar activity has fallen (from max->min). This may be responsible for a drop of ~0.1 deg due to the widely accepted fall in TSI over the solar cycle.
2. Since 2002 the Multivariate ENSO Index has fallen significantly.
The combination of these two factors could easily account for a 0.2 deg per decade trend. I’d hang on a few more years before calling the end of global warming.
The salient question is whether the atmosphere has warmed at all from anthropogenic CO2. Theoretically it should have! In the real world I have my doubts. We have reliable temperature records from continental Antarctica dating back to 1955. There is no warming going on there. None at all. Zero. Zip. Zilch. Nada. In fact the Amundsen-Scott station at the South Pole has shown a 0.17 C/decade cooling trend from 1958 to 2000, although when that’s combined with the other stations the cooling is statistically insignificant.
Now dig this. Antarctica is special because it has by far the driest air on Earth. There is essentially no water vapor there to interfere with the greenhouse effect of carbon dioxide. There is no anthropogenic soot that reaches it to darken the snow and thus raise the surface temperature. There are no anthropogenic land use changes to muddy up the picture. There are no improperly sited thermometers or urban heat islands there. CO2 concentration in Antarctica mirrors the rise observed at Mauna Loa. In short, Antarctica is the perfect place to observe CO2-induced warming, and there ain’t no warming there.
I’m not disputing the observations of significant warming over land surfaces in the northern hemisphere over the past century but I’m more than a little skeptical about it being due to anthropogenic CO2 given that the best place to observe its effect in isolation shows no significant change in temperature over the past 60 years.
David Stockwell said:
“Either it increased from 1978 because of CO2 but has ‘maxed out’, or it was caused by the sun all along.”
Or the Kyoto protocol is working and warming will resume when it expires? The team will thank you for proving that recent mitigation efforts have been more successful than hoped. This is a grand day for Al Gore; somebody call the Nobel committee. Or maybe it’s the windmills that are saving us? /sarc
All kidding aside, I agree with above comments suggesting that the record isn’t long enough to eliminate noise. This analysis does do a good job of showing the effects of decadal noise on the trends though. The next 30 years of satellite observations should be very illuminating. If we can only hold back the knee jerk reactions long enough to get better evidence…
Well Dave, I almost agree with you.
Wiki says this about the Atacama Desert in Chile, reputedly the driest place on earth:
“Some parts of Atacama Desert, especially the surroundings of the abandoned Yungay town (in Antofagasta Region, Chile), are arguably the driest places on Earth, and are virtually sterile because they are blocked from moisture on both sides by the Andes mountains and by the Chilean Coast Range. … The average rainfall in the Chilean region of Antofagasta is just 1 millimetre (0.04 in) per year. Some weather stations in the Atacama have never received rain. Evidence suggests that the Atacama may not have had any significant rainfall from 1570 to 1971. It is so arid that mountains that reach as high as 6,885 metres (22,589 ft) are completely free of glaciers and, in the southern part from 25°S to 27°S, may have been glacier-free throughout the Quaternary, though permafrost extends down to an altitude of 4,400 metres (14,400 ft) and is continuous above 5,600 metres (18,400 ft). Studies by a group of British scientists have suggested that some river beds have been dry for 120,000 years. …”
Now because of its Temperature, the actual molecular abundance of the average atmosphere in Antarctica, may be lower than the Atacama; but after all, there had to be SOMETHING to put all of that ice there on Antarctica; whereas the Atacama has no ice, even where it is cold enough.
But I will give you this point anyway. The Temperature over most of Antarctica, is so low, that even if CO2 was as greenhouse potent as a kindling wood; and even if there were ten times as much of it, it still couldn’t raise the surface Temperature enough to put any significant amount of additional water in the atmosphere.
In other words; you cannot start your barbecue in Antarctica, with the CO2 kindling wood theory; and therefore I agree with you; Antarctica proves the point that it cannot be CO2 that got the Greenhouse warming going in the first place.
The notion that earth would be a frozen ice ball, sans CO2 (in the atmosphere) is a figment of the phony energy budget cartoon of Kevin Trenberth (et al).
At the 340.5 W/m^2 average global TSI of KT’s model, you might be able to show that a frozen ice ball will remain frozen; but that is not what happens in Mother Gaia’s laboratory. She sees the whole 1362 W/m^2 of TSI acting locally over part of the tropics at all times, and with that blow torch going, particularly in a sky devoid of clouds and with low moisture (or CO2), there’s not a snowball’s chance in hell that the sun won’t burn through any ice and evaporate water into the atmosphere. That is one of the benefits of having H2O be largely transparent to the main energy-carrying portion of the solar spectrum.
I’m absolutely convinced beyond any shadow of a doubt, that earth with its entire water mass frozen solid, and with nary a single molecule of CO2 or ANY other GHG in the atmosphere, would recover to its present condition given just the present sun and earth orbit. Maybe sans CO2 it might be slightly less cloudy than it is now, but nobody would notice the Temperature change. And of course you would never ever get to that frozen ice ball in the first place; unless you shut the sun off for a very long time. Of course that’s stretching a point a bit, since sans CO2 there would be no life on earth, so no biosphere to interact with the weather/climate system.
But I agree with you Dave, Antarctica demonstrates the utter impotence of CO2.
David, it is evident that atmospheric and sea temps are not rising and haven’t been since the close of Earth’s great year, which is when they spiked in July 1998.
“Theoretically it should have!”
Maybe, maybe not. I’m still waiting for any one of the physicists out there to quantify the cooling effect of CO2. Yes, we know it intercepts radiation from the ground and sends some of it back towards earth. But it can also be heated by the atmosphere itself and send some of that energy to space. So, is one effect greater than the other? Or do they balance out like so many things in nature?
the decline in warming is worse than we thought.
David Stockwell says:
That is exactly the sense in which I used those terms. I meant that although I agree with the idea that it “appears” temps have been “pretty flat” (ie I’m basically in agreement with your conclusion) I can only say that in vague, hand-waving terms.
I would welcome some objective method (that cannot be dismissed as hand-picking start and end points) that could back up my impression of no warming.
I think that is what you are trying to do and I support that effort. I just remain unconvinced by your method (not) shown here. Maybe because you have not presented it well.
I agree that is what is needed and for those reasons. Sadly you do not explain what your method is.
It is quite obvious that a single slope is a gross simplification. It does not follow that several linear slopes are adequate. There are known pseudo-cyclic tendencies and some would suggest an exponential growth term 😉 . You are making an arbitrary and unfounded assumption that several discontinuous slopes are better.
It is difficult to “go to the literature” when you don’t say what you are doing, but you are not understanding my question. You are applying some abstract statistical test that says two slopes with a discontinuous jump have smaller residuals than one straight line.
What I am asking is how you think you can apply this to the physical system you are looking at. Suggesting a model with a step discontinuity in global temperature clearly does not apply to the physical world. You cannot suggest that the slopes you derive have any physical meaning. This seems to make the result rather pointless.
You need to impose a further condition on your maths to ensure that the end of one slope is continuous with the next. I would not expect to be able to do an “optimised” segmentation of this data with less than about 10 segments, some of which would be negative. You then have to re-apply your original test to see what efp says about each of the new linear fits you have done. If you think efp is relevant to dismiss one straight line, why do you not use it to test your suggested improvements?
It’s good to see someone involved in environmental science that is not hooked up in the AGW hysteria but that does not stop me being critical of what you present here.
Sorry, that’s a bit long. The main point is that I don’t think you explain what you’ve done, so it’s impossible to evaluate your method, and I don’t think your discontinuous model has any physical meaning, whatever the slopes or method used.
Sadly, in the complete absence of any proper explanation of what you did to get your results, that’s all I can say. Maybe I should have said: undisclosed statistical methods – who cares?
P.Solar: Yours are all good questions to go into. Looking at this question:
“how you think you can apply this to the physical system you are looking at. Suggesting a model with a step discontinuity in global temperature clearly does not apply to the physical world.”
This needs to be considered in the comparison to an alternative. In the case of a volcanic eruption, it seems quite well modeled by a sudden change in level. Another approach of fitting a straight line through it is not right, as it ignores the known perturbation. Other smooth approaches could have more parameters, or their own problems near the end points.
Similarly, the increase in temperature from the 1997-98 El Nino was so fast, it is also well approximated by an abrupt change in level.
A model is, after all, only an abstraction. Allowing changes in both level and slope allows you to represent discrete events, within a system that tends to trend, with a relatively small number of parameters.
I have thought about segmented models (without changes in level) but here you are making another assumption. These climatic events (volcanoes and big El Niños) are relatively very quick.
What it is saying, without getting too complicated, is: there are very fast changes (level) and there are slow changes (trend). The fast changes are caused by known events (volcanoes and big ENSO events).
There is a limit to the number of parameters that can be used for a given length of data, and this is suggested by the minimum of the BIC.
I say “who cares about what you or I feel” not to be rude, but to say that the point is a repeatable, defensible method, so I think we are on the same page actually. This is not “my method”, but a well-used method that the interested reader can follow up from the links I provided, a full description being beyond the scope of this post.
Your suggestion to “why do you not use it to test your suggested improvements” seems a good test of the residuals.
As to cyclicity, there is a new R package that builds on strucchange called bfast that removes periodicity first, and also tests for a change in the amplitude of the periodicity. The result is no different for global temperatures, as the periodicity is relatively very small. However, I have run it on monthly Arctic ice extent data and it is very interesting. Here you really do need to remove the cyclical component first as it’s very pronounced. There have been some very sudden changes in Arctic ice that it identifies, and it’s quite different to global temperatures.
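For anyone wanting to repeat this, a minimal bfast call looks roughly like the following (ice is a placeholder for a monthly ice-extent series with frequency 12; the parameters are illustrative):

library(bfast)
fit <- bfast(ice, h = 0.15, season = "harmonic", max.iter = 1)  # remove the seasonal cycle, then locate breaks
plot(fit)  # trend and seasonal components with their detected breaks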
If people are interested and Anthony is OK with it I can write up a post with those results too.
peter2108:
The code is roughly as follows, where D is a time series of temperature. I can send all the code if you want to contact me.
library(strucchange)
# D: the temperature time series (a ts object)
e <- efp(D ~ time(D), type = "Rec-MOSUM")   # empirical fluctuation process
plot(e)                                      # process with significance boundaries
bp <- breakpoints(D ~ time(D))               # breakpoint search; plot(bp) shows the BIC
fm1 <- lm(D ~ breakfactor(bp)/time(D))       # separate intercept and slope per segment
plot(D)
lines(ts(fitted(fm1), start = start(D), frequency = frequency(D)), col = 2, lwd = 2)
David Stockwell says:
“I have thought about segmented models (without changes in level) but here you are making another assumption. These climatic events (volcanoes and big El Niños) are relatively very quick.”
I don’t really get your point here. What am I assuming?
There are many changes in *both* directions which are “relatively very quick”. In fact there are slopes in both directions, both long and short, that do not get modelled in the same way. All get modelled by steady rises and discontinuous drops. That seems to indicate an artefact of your method rather than the data.
We still don’t know your method so I can’t comment further on why that may be. You seem to steer around explaining what you do by vague references to the literature and “beyond the scope of” comments.
Your claim of a “minimum” in the BIC seems very contentious based on the limited plot you showed. Did you try running up to 10 segments to see if that minimum becomes more well defined?
Maybe I’m overlooking something but I don’t see the links you say explain your method. You only use efp to say one straight line is invalid. What you do instead seems rather opaque.
If I’ve missed something please point out my error and how you applied it to what you have done.
There are many changes in *both* directions which are “relatively very quick”. In fact there are slopes in both directions, both long and short, that do not get modelled in the same way. All get modelled by steady rises and discontinuous drops.
The 1998 El Nino gets modelled by an abrupt rise.
I don’t see the links you say explain your method.
I attempted to emulate the method used in the paper “Testing, Monitoring, and Dating Structural Changes in Exchange Rate Regimes” linked to in the post. Here it is again
http://eeecon.uibk.ac.at/~zeileis/papers/Zeileis+Shah+Patnaik-2010.pdf
BIC seems very contentious based on the limited plot
I am not going to argue with BIC, it’s been around since the beginning (http://en.wikipedia.org/wiki/Bayesian_information_criterion).
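For reference, BIC = k·ln(n) − 2·ln(L), where k is the number of fitted parameters, n the number of observations and L the maximised likelihood; each additional break adds parameters, so it must improve the likelihood enough to overcome the k·ln(n) penalty before the criterion will prefer it.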
David Stockwell says:
I’m not asking you to argue with BIC but I am asking you not to chop out half of what I say and then offer some irrelevant reply to something I never said in place of answering what I did say.
Your blatant avoidance of the question and disingenuous editing of my post seem to indicate that you did not, and that you do not have any good reason for pretending that graph has a “minimum” in anything but the most banal literal meaning.
It seems you are as incapable of holding a meaningful discussion as doing meaningful science.
P Solar: For discussion on the limitations of BIC see Bai and Perron 2003, http://onlinelibrary.wiley.com/doi/10.1002/jae.659/pdf, the section Computation and Analysis of Multiple Structural Change Models.
I think I answered your question already: But BIC should be a global optimum — it would always be greater with more parameters I think, unless it is a really weird type of series (I could imagine one I suppose if it was highly periodic).
I don’t think a “double-dip” is likely with a BIC type index.
P Solar: If I do have concerns with the BIC, it would be that the optimum has too many breaks in the case of auto-correlated data. If it were used in a situation where robustness of the model was important (not the case in an exploratory study) I would consider using fewer breaks than suggested. However, I am using the suggested method here.
Dr Stockwell,
BIC is a mathematically derived method based on the assumption that the data has an exponential-class distribution. That may be true for the example exchange-rate data in between changes in structure due to central bank intervention etc. (to take the case explored in the paper).
The climate data you are looking at has a pseudo-cyclic element, to a grossly simplified extent like a rectified sine wave. The size of this part of the signal is larger than the linear trends everyone is trying to fit to the data.
That in itself would seem to indicate that applying BIC to this data is not valid. It’s like applying OLS to data with significant uncertainty in the x-data. The derivation assumes minimal x uncertainty, and if you ignore that criterion you still get a least-squares fit but the slope is wrong. This may or may not be obvious to the eye, depending upon the actual data.
This sort of fundamental criterion cannot be waved aside even by saying it’s “just an exploratory study”.
P Solar: One validation I have done is to test the significance of one break by simulating random AR(1) series with and without a known break. The F value for the model with one break, which occurs in 1998, exceeds the 95% CL. So I know the real distribution of the F value for a one-break model. This is the model that is flat from 1998, so I have confidence that the change in slope is a real feature of the data, and this has nothing to do with BIC.
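A minimal sketch of this sort of check with strucchange (the AR coefficient, series length and replicate count here are illustrative, not the values actually used):

library(strucchange)
set.seed(42)
supF <- replicate(1000, {
  x <- arima.sim(list(ar = 0.6), n = 400)   # AR(1) series with no break
  max(Fstats(x ~ time(x))$Fstats)           # supF statistic for a single break
})
quantile(supF, 0.95)  # simulated 95% critical value under the no-break null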
What is meant by exploratory is that whether it is a 3-, 4- or 5-break model is not really important for this study. The validation, I think, comes about from the independent identification of abrupt features like eruptions. Moreover, the flatness of the period since 1997–98 is not affected by the number of breaks, as the additional breaks tend to occur during the 70s and 80s. So it’s a robust feature that is not really affected by the use of BIC.
The BIC is not unique, there are other measures used with pros and cons, so you could get a different number of breaks with AIC for example. But since we do not ‘know’ how many breaks there should be, there is always going to be a level of uncertainty there.
All of the common error distributions come from the exponential class, so I don’t understand your objection. That includes normal, Poisson, etc, all with exponential tails. It would be strange if they did not. The exponential refers to the errors, not to the shape of the model, or whether it is cyclical.
I am not sure if this answers your question. I look for validation through robust, independent predictions, and trying to get the main features right, so there are choices about what things I think might undermine the result. If you can say how the choice of 3, 4, or 5, or 10 breakpoints might undermine the result, maybe I’ll get it.
If the BIC gave 10 breakpoints as a maximum and they were random, with no coincidence with any abrupt event, then that would be a problem. As it is, I think there could be a few more breakpoints in there (Pinatubo is not recognized in the 1950-2010 series for example).
re exponential class distributions: the “errors” here are the deviations from the straight line that is being attempted as a model. If there is a strong pseudo cyclical element to the data this will be equally present in the “errors” . That is why I was suggesting that I don’t think the errors can be even approximately described as conforming to exponential class.
The point it appears you are missing is that these are not simple experimental errors, which may often conform to some kind of exponential distribution; they are real physical variations.
While I agree that the idea of an objective method “discovering” known features would be a good indication, I think attribution of your discoveries here is dangerously close to seeing what you would like to see and could be easily attacked.
If the model “discovers” events both before and after they happen, by largely varying margins, and misses other equally important features like the dips in 1988 and 2007, I’m not sure this is much more than chance.
I don’t think this would stand up to a sceptical criticism, and science (apart from climatology of course) demands scepticism.
I do think the EFP shows that one single slope is not valid because there are changes in structure, notably in 1987/88 and after 1995. Unfortunately what you do after that fails to detect the 88 break point and runs straight through it.
In fact your EFP result contradicts the latter part of what you do.
One possible reason for this is that BIC is not applicable to this data.
Dr Stockwell says:
I suspect that as in “Recent Climate Observations” where climate scientists were fooled into thinking that “climate change will be worse than expected” by the steep up-tick in global temperatures during a strong El Nino, they have also been fooled by a steep but longer-term up-tick in global temperatures associated with a positive phase of the PDO.
http://atmoz.org/blog/2008/05/14/timescale-of-the-pdo-nao-and-amo/
From the University of Washington, we see that the PDO is defined as “leading PC of monthly SST anomalies in the North Pacific Ocean, poleward of 20N. The monthly mean global average SST anomalies are removed to separate this pattern of variability from any “global warming” signal that may be present in the data.”
So based on this definition, the PDO cannot be a direct contributor to the trend of the average global SST.
The El Nino index is a natural variation that does contribute to the global average temperature, and so do sunspot cycles and the aerosol emissions from volcanoes. When these sources of natural variation are analyzed to determine their effects on global average temperature, and their impact is subtracted from global average temperature, a long-lasting trend emerges since 1975.
http://tamino.wordpress.com/2011/01/20/how-fast-is-earth-warming/
The consistency of the trend that one gets is remarkable:
http://tamino.files.wordpress.com/2011/01/adj1yr.jpg