Pielke Sr. on the 30-year random walk in the surface temperature record

First, some background for readers who may not be familiar with the term “random walk”:

See: http://en.wikipedia.org/wiki/Random_walk

[Figure, from Wikipedia: eight random walks in one dimension starting at 0. The plot shows the position on the line (vertical axis) versus the time step (horizontal axis).]
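For readers who want a feel for the concept, here is a minimal Python sketch that reproduces a figure like the one described above (illustrative only; it is not taken from the Bye et al. paper):

```python
# Eight one-dimensional random walks of +/-1 steps, starting at 0,
# as in the Wikipedia figure referenced above.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
n_steps, n_walks = 200, 8
steps = rng.choice([-1, 1], size=(n_walks, n_steps))  # +/-1 at each time step
walks = np.cumsum(steps, axis=1)                      # running position on the line

for w in walks:
    plt.plot(w)
plt.xlabel("time step")
plt.ylabel("position")
plt.title("Eight one-dimensional random walks starting at 0")
plt.show()
```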
============================================================

New Paper “Random Walk Lengths Of About 30 Years In Global Climate” By Bye Et Al 2011

There is a new paper [h/t to Ryan Maue and Anthony Watts] titled

Bye, J., K. Fraedrich, E. Kirk, S. Schubert, and X. Zhu (2011), Random walk lengths of about 30 years in global climate, Geophys. Res. Lett., doi:10.1029/2010GL046333, in press. (accepted 7 February 2011)

The abstract reads [highlight added]

“We have applied the relation for the mean of the expected values of the maximum excursion in a bounded random walk to estimate the random walk length from time series of eight independent global mean quantities (temperature maximum, summer lag, temperature minimum and winter lag over the land and in the ocean) derived from the NCEP twentieth century reanalysis (V2) (1871-2008) and the ECHAM5 IPCC AR4 twentieth century run for 1860-2100, and also the Millenium 3100 yr control run mil01, which was segmented into records of specified period. The results for NCEP, ECHAM5 and mil01 (mean of thirty 100 yr segments) are very similar and indicate a random walk length on land of 24 yr and over the ocean of 20 yr. Using three 1000 yr segments from mil01, the random walk lengths increased to 37 yr on land and 33 yr over the ocean. This result indicates that the shorter records may not totally capture the random variability of climate relevant on the time scale of civilizations, for which the random walk length is likely to be about 30 years. For this random walk length, the observed standard deviations of maximum temperature and minimum temperature yield respective expected maximum excursions on land of 1.4 and 0.5 C and over the ocean of 2.3 and 0.7 C, which are substantial fractions of the global warming signal.”
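For readers who want a feel for the “expected maximum excursion” statistic the abstract relies on: the paper’s exact relation for a bounded walk is not reproduced here, but for the classical unbounded simple walk the asymptotic result is E[max_k S_k] ≈ sqrt(2n/π), which a quick Monte Carlo sketch confirms (a generic illustration, not the authors’ method):

```python
# Monte Carlo check of the asymptotic expected maximum excursion
# E[max_k S_k] ~ sqrt(2n/pi) for a simple +/-1 random walk of n steps.
import numpy as np

rng = np.random.default_rng(1)
n, trials = 30, 100_000                         # n ~ a 30 "year" walk
steps = rng.choice([-1, 1], size=(trials, n))
paths = np.cumsum(steps, axis=1)
max_excursion = paths.max(axis=1).clip(min=0)   # max over the path, with S_0 = 0

print("Monte Carlo mean maximum excursion:", round(float(max_excursion.mean()), 3))
print("asymptotic sqrt(2n/pi):            ", round(float(np.sqrt(2 * n / np.pi)), 3))
```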

The text starts with

“The annual cycle is the largest climate signal, however its variability has often been overlooked as a climate diagnostic, even though global climate has received intensive study in recent times, e.g. IPCC (2007), with a primary aim of accurate prediction under global warming.”

We agree with the authors of the paper on this statement. This is one of the reasons we completed the paper

Herman, B.M., M.A. Brunke, R.A. Pielke Sr., J.R. Christy, and R.T. McNider, 2010: Global and hemispheric lower tropospheric temperature trends. Remote Sensing, 2, 2561-2570, doi:10.3390/rs2112561.

where our abstract reads

“Previous analyses of the Earth’s annual cycle and its trends have utilized surface temperature data sets. Here we introduce a new analysis of the global and hemispheric annual cycle using a satellite remote sensing derived data set during the period 1979–2009, as determined from the lower tropospheric (LT) channel of the MSU satellite. While the surface annual cycle is tied directly to the heating and cooling of the land areas, the tropospheric annual cycle involves additionally the gain or loss of heat between the surface and atmosphere. The peak in the global tropospheric temperature in the 30 year period occurs on 10 July and the minimum on 9 February in response to the larger land mass in the Northern Hemisphere. The actual dates of the hemispheric maxima and minima are a complex function of many variables which can change from year to year thereby altering these dates.

Here we examine the time of occurrence of the global and hemispheric maxima and minima lower tropospheric temperatures, the values of the annual maxima and minima, and the slopes and significance of the changes in these metrics. The statistically significant trends are all relatively small. The values of the global annual maximum and minimum showed a small, but significant trend. Northern and Southern Hemisphere maxima and minima show a slight trend toward occurring later in the year. Most recent analyses of trends in the global annual cycle using observed surface data have indicated a trend toward earlier maxima and minima.”
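As a rough illustration of the timing diagnostic described in that abstract, the sketch below finds the day of year of each year’s temperature maximum and minimum in a synthetic series (an annual cycle peaking near 10 July plus noise; all numbers are made up for illustration) and fits a linear trend to those dates:

```python
# Find the date of each year's temperature maximum/minimum and fit a
# linear trend to those dates. Data are synthetic, purely illustrative.
import numpy as np

rng = np.random.default_rng(2)
days = np.arange(365)
years = range(1979, 2010)

max_doy, min_doy = [], []
for _ in years:
    # annual cycle peaking at day-of-year 191 (10 July) plus weather noise
    t = 15 * np.cos(2 * np.pi * (days - 191) / 365) + rng.normal(0, 1, 365)
    max_doy.append(days[np.argmax(t)])
    min_doy.append(days[np.argmin(t)])

yrs = np.array(list(years), dtype=float)
slope_max = np.polyfit(yrs, max_doy, 1)[0]   # days/year drift of the maximum
slope_min = np.polyfit(yrs, min_doy, 1)[0]   # days/year drift of the minimum
print(f"trend in date of annual maximum: {slope_max:+.3f} days/yr")
print(f"trend in date of annual minimum: {slope_min:+.3f} days/yr")
```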

The conclusion of the Bye et al. (2011) GRL paper reads

“In 1935, the International Meteorological Organisation confirmed that ‘climate is the average weather’ and adopted the years 1901-1930 as the ‘climate normal period’. Subsequently a period of thirty years has been retained as the classical period of averaging (IPCC 2007). Our analysis suggests that this administrative decision was an inspired guess. Random walks of length about 30 years within natural variability are an ‘inconvenient truth’ which must be taken into account in the global warming debate. This is particularly true when the causes of trends in the temperature record are under consideration.”

This paper is yet another significant contribution that raises further issues on the use of multi-decadal linear surface temperature trends to diagnose climate change.

============================================================

Comments
John Whitman
February 18, 2011 9:12 pm

sky says:
February 18, 2011 at 2:00 pm
” . . . discretely sampled continuous signals are subject to arbitrary choices of sampling interval that can introduce aliasing and other artifacts in the time series.”

– – – – – – –
sky,
I appreciate your dialog.
The GST (global surface temperature) time series in widespread use, from GISS and HadCRU, span 1881 to present and 1850 to present, respectively. Their sampling frequency is fixed and historical; it is a given, not an arbitrary choice. Therefore I find your point confusing. I also note that the time period covered by the data is not insignificant.
Now, again, this is actual empirical data from a physical system (our climate). The increasing collaboration and interest of professional statisticians in the data have shown that there have been, and are, significant statistical misapplications and errors in major climate research papers and in IPCC analyses. So the question of whether the data contain certain statistical properties (such as unit roots) will likely remain a focus. Likewise, verifying correct statistical inference is now highly visible in the scientific community.
So, what model best describes the data generating process (DGP) embedded in the data? That is now an interesting thrust of climate research.
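For concreteness, a typical unit-root check of the kind referred to above might look like the following sketch (the `anomalies` array below is a placeholder simulated walk; one would substitute the GISS or HadCRU annual means):

```python
# Augmented Dickey-Fuller unit-root test on an annual anomaly series.
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(3)
anomalies = np.cumsum(rng.normal(0, 0.1, 130))  # placeholder: a pure random walk

stat, pvalue, *_ = adfuller(anomalies, regression="ct")  # constant + trend
print(f"ADF statistic = {stat:.3f}, p-value = {pvalue:.3f}")
# A large p-value means the unit-root null cannot be rejected, i.e. the
# series is statistically consistent with a (drifting) random walk.
```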
We now have the luxury of a growing set of independent professional statisticians performing audits of past climate research. That is a wonderful development.
John

sky
February 19, 2011 1:33 pm

John Whitman says:
February 18, 2011 at 9:12 pm
John,
I’ll make my last attempt to clarify certain vital points:
1) The “fixed and historical” sampling interval of computing monthly averages is arbitrary; it imposes the somewhat uneven interval of a “month” in constructing the series. But far more important is the potential aliasing of the significantly large even-ordered harmonics of the powerful diurnal cycle into the lowest frequencies of the “climatic” series of comparatively minuscule monthly “anomalies” (a toy aliasing sketch follows point 3 below).
2) The GISS and HadCRU global anomaly series are synthesized over the time intervals that you cite from an inconsistent set of stations, most of which are inconsistently afflicted by UHI effects. This results in a time-dependent bias that may ultimately resemble a logistic curve. It is baldly claimed, however, to be a linear secular trend not only over the entire available series, but over much shorter stretches that constitute effectively a half-cycle of the irregular quasi-centennial oscillations that are a persistent feature of the best proxy records (GISP2) available for the Holocene.
3) Model identification for wide-band stochastic processes is a very tricky problem even without the foregoing data corruptions. A plethora of assumed stochastic structures may prove statistically “consistent” with the data over decades or even a century or two, but diverge materially thereafter. So far I have seen nothing from academic statisticians oriented toward various linear combinations of iid “innovations” processes that offers a model for GST variability that stands up to rigorous testing. Low-order ARIMA models fail repeatedly, whereas high-order analysis schemes (e.g., Burg’s maximum entropy method) point to several quasi-centennial oscillations embedded in a broad spectral continuum. IMO, advances in understanding climate variability will come from real-world-oriented signal and system analyses, rather than from the restrictive iid academic viewpoint of statistical time series.
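Here is the toy aliasing sketch promised in point 1; the numbers are generic and not tied to any particular station record or to the monthly-averaging case:

```python
# Sampling a fast cycle at an interval that is not an exact multiple of
# its period folds ("aliases") it into a spurious slow oscillation:
# a pure 24 h cycle sampled every 25 h appears as a 600 h oscillation.
import numpy as np

t_hours = np.arange(0, 4800, 25.0)            # sample every 25 h (192 samples)
diurnal = np.sin(2 * np.pi * t_hours / 24.0)  # pure 24 h cycle

# Alias frequency is |1/24 - 1/25| = 1/600 cycles per hour.
freqs = np.fft.rfftfreq(len(diurnal), d=25.0)          # cycles per hour
power = np.abs(np.fft.rfft(diurnal - diurnal.mean()))
peak = freqs[np.argmax(power[1:]) + 1]                 # skip the zero frequency
print(f"dominant period in the sampled series: ~{1 / peak:.0f} hours")
```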
Let’s both enjoy the remaining weekend.

VS
February 21, 2011 12:58 am

I haven’t had the time to read this paper yet (I will), but I presume that the results are another side of the same coin I’ve addressed, now almost a year ago, on Bart’s blog.
[snip . . reading the paper before commenting is considered to be good manners]

VS
February 21, 2011 2:35 am

Dear moderator,
I was replying to the comments which were related to the discussion at Bart’s blog (the so called ‘unit root thread’). Given that it’s my results and findings being discussed here, I didn’t think it bad form to actually clarify some misconceptions.
I haven’t commented on the paper itself, precisely because I haven’t read it. I also have to add that I have no access to it through my institution (following the link above), so I wonder how many of the posters commenting here have actually read it (re: consistent application of the ‘rule’ you cite).
***
Having said that, this is your blog, and these are your rules, so I’ll respect that. In any case, I hope this was just a misunderstanding, so I’ll post my comment again, below.
If you snip it again, I know enough.
All the best, VS

VS
February 21, 2011 2:52 am

I haven’t had the time to read this paper yet (I will), but I presume that the results are another side of the same coin I’ve addressed, now almost a year ago, on Bart’s blog.
In the near future, I plan to write up the whole story in a short methodological piece, and then I guess we’ll have another debate on the topic, and perhaps my argument will then seem less elusive (ts ts, Bart ;).
Having said that, it does seem important to address one particular question; that of stationarity and cyclicality of global mean temperature, and how my results relate to the method via which one should proceed to model it for the purpose of forecasting and inference.
I have never claimed that the hypercomplex process governing the global mean temperature itself is non-stationary. I believe quite the opposite: namely that it is cyclical and stationary.
However, care has to be taken to distinguish between the process governing the realization of our observation set (i.e. our sample draw) and the underlying hypercomplex process from which it stems. I simply pointed out that, over a period of 120-something years, annual global mean temperatures should be modeled as a non-stationary process.
This is not just ‘my opinion’, this is the result of formal inference. Anybody interested in the argument (and the diagnostics, and the references, and the Matlab code for the simulations which disqualify the PP unit root test in this instance, etc etc… I think I alone wrote over 50,000 words there) is kindly referred to Bart’s blog.
What follows trivially from this is that any trend estimation on this particular data (GISS) which ignores these analytical facts, and implicitly takes ‘trend stationarity’ as its starting point, is inherently inferior to one taking non-stationarity as the starting point. Hence, the conclusions (e.g. a ‘kink’ in the trend somewhere in the 70s, ‘accelerated warming’, etc.) are also inferior to those stemming from a non-stationarity-based approach to trend estimation.
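To make the contrast concrete, here is a minimal sketch (simulated data, not GISS) of the two starting points: an OLS trend fitted to the levels of a driftless random walk versus the drift estimated from its first differences:

```python
# Trend estimation under a trend-stationary assumption (OLS on levels)
# versus under a unit-root assumption (mean of first differences = drift).
# The data are a driftless random walk, so any apparent OLS trend is spurious.
import numpy as np

rng = np.random.default_rng(4)
n = 130
y = np.cumsum(rng.normal(0, 0.1, n))          # driftless random walk "anomalies"
t = np.arange(n, dtype=float)

ols_slope = np.polyfit(t, y, 1)[0]            # trend-stationary estimate
dy = np.diff(y)
drift = dy.mean()                             # unit-root (drift) estimate
drift_se = dy.std(ddof=1) / np.sqrt(len(dy))  # its valid standard error

print(f"OLS slope on levels : {ols_slope:+.4f} per step")
print(f"drift of differences: {drift:+.4f} +/- {drift_se:.4f} per step")
# Under a unit root the OLS slope has a non-standard distribution and its
# usual t-statistic overstates significance; the drift estimate does not.
```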
That’s it.
It seems to me that some (some!) physicists nowadays have forgotten the basic tenant of their discipline, namely that models, or if you will, conjectures which connect together facts, are per definition *false*(!). From this it follows that ‘true model’ is an oxymoron. Taking a more positive view, I believe that Niels Bohr worded the idea behind modeling best, when he said:
“The opposite of a fact is falsehood, but the opposite of one profound truth may very well be another profound truth.”
Religiously believing in one’s model automatically blinds and creates (non-scientific) attachments to conjectures. This is unfortunate, as it breeds vicious intolerance towards dissenting views. You know, the type of intolerance that had Galileo locked up. But I guess anybody frequenting the climate blogosphere, from either ‘side’, is more than familiar with this.
All the best,
VS

Brian H
February 21, 2011 11:47 pm

VS;
Any tenant of the Warming Church will not be comfortable with the tenets of science regarding testing of conjectures.

VS
February 22, 2011 4:58 am

Ah, what an eloquent save of my hasty-rewrite-induced typo 😉 … (the remaining credits go to the, equally hasty, moderator 🙂)
