Pielke Sr. on the 30 year random walk in surface temperature record

First, some background for our readers who may not be familiar with the term “random walk”:

See: http://en.wikipedia.org/wiki/Random_walk

From Wikipedia: the accompanying figure shows eight example random walks in one dimension starting at 0, plotting the current position on the line (vertical axis) against the time step (horizontal axis).
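For readers who prefer code to pictures, the same idea can be sketched in a few lines of Python. This is purely illustrative background (the function name and parameters are ours, not from any paper discussed below): at each step the walker moves up or down by one with equal probability.

```python
import random

def random_walk(n_steps, seed=None):
    """Simulate a simple 1-D random walk: at each step move +1 or -1
    with equal probability, starting from position 0."""
    rng = random.Random(seed)
    position = 0
    path = [position]
    for _ in range(n_steps):
        position += rng.choice((-1, 1))
        path.append(position)
    return path

# Eight independent walks of 100 steps, as in the Wikipedia figure.
walks = [random_walk(100, seed=k) for k in range(8)]
for w in walks:
    print(w[-1])  # final displacement of each walk
```

Even though every step is a fair coin flip, individual walks routinely drift far from zero — which is the whole point of the discussion that follows.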
============================================================

New Paper “Random Walk Lengths Of About 30 Years In Global Climate” By Bye Et Al 2011

There is a new paper [h/t to Ryan Maue and Anthony Watts] titled

Bye, J., K. Fraedrich, E. Kirk, S. Schubert, and X. Zhu (2011), Random walk lengths of about 30 years in global climate, Geophys. Res. Lett., doi:10.1029/2010GL046333, in press. (accepted 7 February 2011)

The abstract reads [highlight added]

“We have applied the relation for the mean of the expected values of the maximum excursion in a bounded random walk to estimate the random walk length from time series of eight independent global mean quantities (temperature maximum, summer lag, temperature minimum and winter lag over the land and in the ocean) derived from the NCEP twentieth century reanalysis (V2) (1871-2008) and the ECHAM5 IPCC AR4 twentieth century run for 1860-2100, and also the Millenium 3100 yr control run mil01, which was segmented into records of specified period. The results for NCEP, ECHAM5 and mil01 (mean of thirty 100 yr segments) are very similar and indicate a random walk length on land of 24 yr and over the ocean of 20 yr. Using three 1000 yr segments from mil01, the random walk lengths increased to 37 yr on land and 33 yr over the ocean. This result indicates that the shorter records may not totally capture the random variability of climate relevant on the time scale of civilizations, for which the random walk length is likely to be about 30 years. For this random walk length, the observed standard deviations of maximum temperature and minimum temperature yield respective expected maximum excursions on land of 1.4 and 0.5 C and over the ocean of 2.3 and 0.7 C, which are substantial fractions of the global warming signal.”
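The quantity at the heart of the paper — the expected maximum excursion of a random walk of given length — can be illustrated with a minimal Monte Carlo sketch. To be clear, this is a generic ±1 walk and not the authors' actual relation or data; all names are ours:

```python
import math
import random

def expected_max_excursion(n_steps, n_trials=2000, seed=0):
    """Monte Carlo estimate of the mean maximum excursion (largest
    |position| reached) of a simple +/-1 random walk of n_steps steps."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_trials):
        pos, max_abs = 0, 0
        for _ in range(n_steps):
            pos += rng.choice((-1, 1))
            if abs(pos) > max_abs:
                max_abs = abs(pos)
        total += max_abs
    return total / n_trials

# The expected excursion grows roughly like sqrt(n): quadrupling the
# walk length roughly doubles the mean maximum excursion.
for n in (25, 100, 400):
    print(n, round(expected_max_excursion(n), 1))
```

Because excursion grows with walk length, observed temperature excursions can be inverted (as the paper does, with its own formula) into an estimate of the underlying random walk length.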

The text starts with

“The annual cycle is the largest climate signal, however its variability has often been overlooked as a climate diagnostic, even though global climate has received intensive study in recent times, e.g. IPCC (2007), with a primary aim of accurate prediction under global warming.”

We agree with the authors of the paper on this statement. This is one of the reasons we completed the paper

Herman, B.M. M.A. Brunke, R.A. Pielke Sr., J.R. Christy, and R.T. McNider, 2010: Global and hemispheric lower tropospheric temperature trends. Remote Sensing, 2, 2561-2570; doi:10.3390/rs2112561

where our abstract reads

“Previous analyses of the Earth’s annual cycle and its trends have utilized surface temperature data sets. Here we introduce a new analysis of the global and hemispheric annual cycle using a satellite remote sensing derived data set during the period 1979–2009, as determined from the lower tropospheric (LT) channel of the MSU satellite. While the surface annual cycle is tied directly to the heating and cooling of the land areas, the tropospheric annual cycle involves additionally the gain or loss of heat between the surface and atmosphere. The peak in the global tropospheric temperature in the 30 year period occurs on 10 July and the minimum on 9 February in response to the larger land mass in the Northern Hemisphere. The actual dates of the hemispheric maxima and minima are a complex function of many variables which can change from year to year thereby altering these dates.

Here we examine the time of occurrence of the global and hemispheric maxima and minima lower tropospheric temperatures, the values of the annual maxima and minima, and the slopes and significance of the changes in these metrics. The statistically significant trends are all relatively small. The values of the global annual maximum and minimum showed a small, but significant trend. Northern and Southern Hemisphere maxima and minima show a slight trend toward occurring later in the year. Most recent analyses of trends in the global annual cycle using observed surface data have indicated a trend toward earlier maxima and minima.”

The 2011 Bye et al GRL paper conclusion reads

“In 1935, the International Meteorological Organisation confirmed that ‘climate is the average weather’ and adopted the years 1901-1930 as the ‘climate normal period’. Subsequently a period of thirty years has been retained as the classical period of averaging (IPCC 2007). Our analysis suggests that this administrative decision was an inspired guess. Random walks of length about 30 years within natural variability are an ‘inconvenient truth’ which must be taken into account in the global warming debate. This is particularly true when the causes of trends in the temperature record are under consideration.”

This paper is yet another significant contribution that raises further issues about the use of multi-decadal linear surface temperature trends to diagnose climate change.

107 Comments
February 14, 2011 9:13 am

Yes, 2 is a substantial fraction. Especially when you place it over a 1.

Douglas DC
February 14, 2011 9:13 am

Hmm. How about 30-60 year Ocean cycles? Another inconvenient factor ?..

David Larsen
February 14, 2011 9:19 am

How about interglacial periods of 50-100 thousand years. Oscillations during interglacials. Another inconvenient truth.

reliapundit
February 14, 2011 9:33 am

how about they select any period which helps them prove their case against industrialism and helps them advocate socialism?

February 14, 2011 9:39 am

Chaos and randomness are in the mind of the beholders. Chaos and randomness are always politically convenient.

John Blake
February 14, 2011 9:43 am

Paper ought to state that a 30-component set is the threshold for computing a statistically “normal distribution.” Though 95% Confidence Limits require 120 components, 30-units suffice to plot a Gaussian “normal curve.”
The fact that 30-year climate cycles tend to stabilize at “normal” intervals during this relatively quiescent Holocene Interglacial Epoch indicates a climate-dynamic akin to “random walks” exhibited by broad-market equities as measured by the classic 30 Dow Industrials, representing a disparate amalgam of economic entities related only by large-scale capitalization.
To the extent 30-year climate epochs reflect both land- and sea-surface temperatures, generated for whatever reason, recurrent random-action makes nonsense of linear extrapolations (functions in any case of corrupt data-manipulation, skewed by agenda-driven parties careless of objective fact). As Edward Lorenz showed c. 1960, complex dynamic systems exhibit non-random but indeterminate patterns keyed to “strange attractors”– mathematical artifacts which nonetheless incorporate inescapable features of the real world.
Quite likely, the Green Gang of climate hysterics will (as ever) belittle and bemoan such findings as reported here. Alas for Luddite Thanatists such as Ehrlich, Holdren, lately Keith Farnish, nature takes her course regardless of their vicious prejudices. Meantime, Science as a philosophy, a method, and a practice distinguishes fact from fiction in ways that Briffa, Hansen, Jones, Mann, Trenberth and Steig et al. will not be able to evade for long.

Doug Proctor
February 14, 2011 9:45 am

If I understand the drift of this work correctly, it goes to the heart of what I have been arguing for a while: time and location have an important impact when daily and annual variation in insolation, non-cloud-based albedo differences and cloud type/amounts are taken into account. The averaging of these things insufficiently reflects the impacts the short-term variations have on global heating and cooling. This results in regional differences that in a sense distort the global record on the one hand, and give an appearance to a minor variation in input-output that can be discounted within even a decadal-long period. Small differences add up or add down in a non-random manner within time periods we don’t understand. Feedbacks of a natural nature amplify these perturbations. Going into and out of a LIA could be a manifestation of such a combination as a “random walk”. Certainly the period from 1965 to today could be such a thing.
A gentle global mist gives you the same statistical precipitation as a major cyclone over northeast Australia, but they do not have the same impact or place in the records.

mikep
February 14, 2011 9:46 am

You might also like to look at the article by T C Mills in the Journal of Cosmology 2010, here
http://journalofcosmology.com/ClimateChange112.html
He finds the temperature history to be best described as the combination of a cyclic component with a random element, and a random walk. He speculates that the
” rather volatile behaviour of the cyclical component is also apparent , with the random innovations continually shocking the component away from a smooth oscillation, although a stochastic cycle with an average period of approximately six years is still evident, perhaps reflecting El Niño induced temperature movements.”
The pre-print linked to has some omissions, but the basic idea is clear enough. From the abstract
“This paper considers fitting a flexible model, known as the structural model, to global hemispheric temperature series for the period 1850 to 2009 with the intention of assessing whether a global warming signal can be detected from the trend component of the model. For all three series this trend component is found to be a simple driftless random walk, so that all trend movements are attributed to natural variation and the optimal forecast of the long term trend is the current estimate of the trend, thus ruling out continued global warming and, indeed, global cooling. Polynomial trend functions, being special cases of the general structural model, are found to be statistically invalid, implying that to establish a significant warming signal in the temperature record, some other form of nonlinear trend must be considered.”

Slabadang
February 14, 2011 9:51 am

This is to test the basics … and this is back to what climate science should have started with….but ignored.

Gary Pearse
February 14, 2011 9:55 am

I know what random walk stats are but still don’t see very clearly what is being pointed out here; I’m a bit stupid today, it seems. Does its use not imply that temp fluctuations are natural variations?

A year or so ago, in a comment on major flooding of the Red River of the North on WUWT, I used permutation analysis of the flood record going back 150 years (I believe) to show that the frequency of new records in flooding (assuming that year 1 was a record) roughly equalled the function ln(n), where n is the number of years considered (150). Ln 150 = 5, which means that, if flooding is a random event, there will be 4 new records established after year one. As an aside, this also means, if changes are random, that there should be about 4 new snowfall records (season), rainfall records (annual period), and temperature records (annual average) at a given location.

Note also that the nature of the function is such that the records get ever more widely spaced in time. E.g. ln(400) is 6, meaning that there would only be 5 new records set in 400 years. It’s fun to play with this simple function in putting records into perspective and in judging whether “weather” variation is random or not. Check out CET by making year 1 a record and then counting the number of successively higher temp records there are over 350 years – I haven’t done this, honest, but it should be 4 or 5 if random. If much more than this, then the warming is real.
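Gary Pearse’s ln(n) rule can be checked directly by simulation. For i.i.d. random data, the expected number of record highs in n observations is the harmonic number H_n ≈ ln(n) + 0.577, so his figure of about 4–5 records for 150 years is in the right range. A minimal sketch (function name is ours, purely illustrative):

```python
import math
import random

def count_records(series):
    """Count new record highs, treating the first value as a record."""
    records, best = 0, float("-inf")
    for x in series:
        if x > best:
            records, best = records + 1, x
    return records

# For purely random (i.i.d.) data, the expected number of records in n
# observations is the harmonic number H_n, roughly ln(n) + 0.577.
rng = random.Random(1)
n = 150
trials = [count_records([rng.random() for _ in range(n)]) for _ in range(4000)]
print(round(sum(trials) / len(trials), 2), round(math.log(n) + 0.5772, 2))
```

The simulated mean lands close to ln(150) + 0.577 ≈ 5.6, confirming that a handful of records over 150 years is exactly what pure randomness predicts.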

RiHo08
February 14, 2011 9:56 am

Temperature time series data correlation was assessed in March 2010 by VS on Bart Verheggen’s blog using 1880 to 2008 data, and he concluded that all temperatures fell within normal variance parameters. I described and commented upon the process and conclusions on Real Climate, and the usual vitriolic discussion ensued, including comments from Bart V. and VS. Again, the data that is available does not show a signal apart from normal variance. Over the last year, there have been at least 3 informed discussions regarding time series temperature correlations demonstrating temperatures having characteristics of random walks. This recent paper adds to the increasingly cloudy picture of climate science and the gathering storm of public awareness.

Brian H
February 14, 2011 10:04 am

Random walks are often staggering. By definition.
😉

Brian H
February 14, 2011 10:06 am

Oops, meant to say “drunkards’ walks”. Oh, well.
Is there, btw, a “technical” distinction between “random” and “drunkards'”, anyone? Inebriated minds want to know.

art johnson
February 14, 2011 10:06 am

reliapundit says:
“how about they select any period which helps them prove their case against industrialism and helps them advocate socialism?”
Of course everyone’s entitled to their point of view. My point of view is that I really dislike this sort of comment. The notion that alarmist scientists are somehow all conspiring to destroy capitalism is just hogwash. More importantly, it feeds into certain stereotypes that do the skeptics no good. I wish people would stick to the science, or lack thereof.

ge0050
February 14, 2011 10:19 am

It is amazing the number of climate measures that are integral harmonics of the orbital periods of the planets. It is quite something that the climate on earth, from its position at the center of the solar system, is able to drive the orbit of the planets in this fashion.

Tenuc
February 14, 2011 10:31 am

John Blake says:
February 14, 2011 at 9:43 am
“…To the extent 30-year climate epochs reflect both land- and sea-surface temperatures, generated for whatever reason, recurrent random-action makes nonsense of linear extrapolations (functions in any case of corrupt data-manipulation, skewed by agenda-driven parties careless of objective fact). As Edward Lorenz showed c. 1960, complex dynamic systems exhibit non-random but indeterminate patterns keyed to “strange attractors”– mathematical artefacts which nonetheless incorporate inescapable features of the real world…”
Thanks for bringing this one up, John, and it horrifies me that the IPCC cabal of climate scientists still try to ignore deterministic chaos and rely on linear trends to try to ‘prove’ their case.
Understanding chaos and how complex dynamic systems, like climate, can change behaviour due to strange attractors and the law of MEP is fundamental to being able to understand what the future will bring. It would seem climate scientists today understand less about climate than Lorenz did back in the 60’s – go figure!

Mohib
February 14, 2011 10:39 am

On my timeline, “ClimateGate: 30 Years in the Making” (http://joannenova.com.au/global-warming/climategate-30-year-timeline/), I made the following entry in 2001 about a little-known 2001 report by the UN FAO. It is quite ironic that one side of the UN was making this report while the other side was producing the IPCC reports:
UN FAO: EARTH WARMS AND COOLS EVERY 30 YEARS
The UN Food and Agriculture Organization sought to understand the effect of climate change on long-term fluctuations of commercial catches and reported that several independent measures showed “a clear 55-65 year periodicity” (i.e. approx 30 year warming then cooling) over both short terms (150 years) and long terms (1500 years). [146:1] The report also highlighted that the current “‘latitudinal’ … epoch of the 1970-1990s” is in its final stage and a “‘meridional’ epoch … is now in its initial stage.” [146:2] Latitudinal circulations have corresponded to warm periods and meridional circulations to cool ones.
Abstract [146:1] from the paper:
The main objective of this study was to develop a predictive model based on the observable correlation between well-known climate indices and fish production, and forecast the dynamics of the main commercial fish stocks for 5–15 years ahead. Spectral analysis of the time series of the global air surface temperature anomaly (dT), the Atmospheric Circulation Index (ACI), and Length Of Day (LOD) estimated from direct observations (110-150 years) showed a clear 55-65 year periodicity. Spectral analysis also showed similar periodicity for a reconstructed time series of the air surface temperatures for the last 1500 years, a 1600 years long reconstructed time series of sardine and anchovy biomass in Californian upwelling areas, and catch statistics for the main commercial species during the last 50-100 years. These relationships are used as a basis for a stochastic model intended to forecast the long-term fluctuations of catches of the 12 major commercial species for up to 30 years ahead. According to model calculations, total catch of Atlantic and Pacific herring, Atlantic cod, South African sardine, and Peruvian and Japanese anchovy for the period 2000–2015 will increase by approximately two million tons, and will then decrease. During the same period, total catch of Japanese, Peruvian, Californian and European sardine, Pacific salmon, Alaska pollock and Chilean jack mackerel is predicted to decrease by about 4 million tons, and then increase. The probable scenario of climate and biota changes for next 50-60 years is considered.
Here are the references:
[146]
1. Leonid Klyashtorin, “Climate Change and Long-Term Fluctuations…” (abstract), Food and Agriculture Organization of the UN, Rome, 2001
http://www.fao.org/documents/pub_dett.asp?pub_id=61004&lang=en
2. “Chapter 2: Dynamics Of Climatic And Geophysical Indices”
http://www.fao.org/docrep/005/Y2787E/y2787e03.htm#bm03

Dave Springer
February 14, 2011 10:42 am

A truly random walk might not exist in the real world. The debate over whether the universe is deterministic or not has not been settled. In the meantime there are plenty of things that appear random but in reality are simply too complex for practical prediction or things that appear random but in reality have deterministic influences too subtle to detect over short periods of time.

ge0050
February 14, 2011 10:43 am

>>Gary Pearse says:
February 14, 2011 at 9:55 am
Check out CET by making year 1 a record and then counting the number of successively higher temp records there are over 350 years – I haven’t done this, honest, but it should be 4 or 5 if random. If much more than this, then the warming is real.<<
From my eyeball check of the CET currently on the WUWT home page, the number of record highs and lows over the past 350 years appears pretty well matched at around 5. There are a couple of entries that are so close that you really would need to call them tied, given the likely size of the error bars.

A C Osborn
February 14, 2011 10:47 am

Is one of the co-authors VS from Bart Verheggen’s blog?

Solomon Green
February 14, 2011 10:48 am

For a walk to be truly random, the size and direction of each step must be independent of those of its predecessors. Where the size of the steps is predetermined, the directions must still be random. Little, in his book “Higgledy Piggledy Growth”, supposed that share price movements in the stock market were independent and that therefore a random walk could be assumed.
Forty years ago Granger, who later went on to gain a Nobel in economics, proved that this supposition did not hold in the UK stock market, and the supposition has also been shown to be invalid in the US and Australian stock markets. In all three markets there was a small but significant positive serial correlation between share price movements. Mandelbrot, at about the same time, also discovered that the random walk theory did not hold in stock markets, as recounted, for example, in his book “The (Mis)Behaviour of Markets”. Incidentally, as Mandelbrot also pointed out, dependence can exist without correlation.
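Solomon Green’s independence condition is easy to test on any candidate series: compute the lag-1 autocorrelation of the increments, which should be near zero for a true random walk. A minimal sketch in Python (the helper name and the 0.3 persistence coefficient are ours for illustration, not taken from Granger’s or Mandelbrot’s work):

```python
import random

def lag1_autocorr(xs):
    """Sample lag-1 autocorrelation of a series."""
    n = len(xs)
    mean = sum(xs) / n
    num = sum((xs[i] - mean) * (xs[i + 1] - mean) for i in range(n - 1))
    den = sum((x - mean) ** 2 for x in xs)
    return num / den

rng = random.Random(0)
# Independent increments, as a true random walk requires: near-zero autocorrelation.
iid_steps = [rng.gauss(0, 1) for _ in range(5000)]
# Mildly persistent increments (each step partly repeats the last),
# of the kind Granger found in share prices: clearly positive autocorrelation.
persistent = [rng.gauss(0, 1)]
for _ in range(4999):
    persistent.append(0.3 * persistent[-1] + rng.gauss(0, 1))
print(round(lag1_autocorr(iid_steps), 3), round(lag1_autocorr(persistent), 3))
```

A small positive value on real data, as in the stock market studies, is enough to rule out a strictly random walk even when the series looks random to the eye.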

February 14, 2011 11:00 am

Terence Mills also has a very important paper on representation of trends in climate data series in a recent volume of Climatic Change.
Since the late 1980s there has been a steady stream of papers by econometricians and time series analysts looking at whether temperature is a random walk. It is possible to find conclusions on both sides, as well as the intermediate cases of fractional integration, though my impression is that the longer the data sets get, the more likely it is to find RW behaviour. Mills’ conclusion is that the current data sets are best represented by a combination of RW, trend and cyclical components, but a researcher still has some leeway to specify the trend model. Nonetheless, the model getting the most support in the data indicates RW behaviour and yields a contemporary trend component well below GCM forecasts.
This is an area where it is difficult to fully convey how important the underlying question is. The qualitative difference between a data series that contains a random walk and one that doesn’t is enormous. It is completely routine in economics to test for RW behaviour, or as it is known more formally, unit root processes. If your data contain a unit root it simply comes from another planet than non-unit root data, and you have to use completely different analytical methods both to estimate trends and to fit explanatory models. Conventional methods are built on the assumption of stationary processes that converge to Gaussian distributions in the limit. But unit roots are non-stationary and they do not converge to Gaussian limits. They also imply some fundamental differences about the underlying phenomena, namely that means, variances and covariances vary over time, so any talk about (for example) detecting climate change is very problematic if the underlying system is nonstationary. If the mean is variable over time, observing a change in the mean is no longer evidence that the system itself has changed.
If the IPCC knew enough about this issue to deal with it properly they would have an entire chapter, if not an entire special report, devoted to the question of whether climate is nonstationary and over what time scales. Until this is known there is a very high probability that a lot of statistical analysis of climate is illusory. If that seems harsh, ask any macroeconomist about the value that can be placed on empirical work in macro prior to the 1987 work of Engle and Granger on spurious regression. They wiped out more than a generation’s worth of empirical work, and got a Nobel Prize for their efforts.
If the IPCC were willing or able to deal with this issue, there are a lot of qualified people who could contribute–people of considerable expertise and goodwill. Unfortunately the only mention of the stationarity issue in the AR4 was a brief, indirect comment inserted in the second draft in response to review comments:

Determining the statistical significance of a trend line in geophysical data is difficult, and many oversimplified techniques will tend to overstate the significance. Zheng and Basher (1999), Cohn and Lins (2005) and others have used time series methods to show that failure to properly treat the pervasive forms of long-term persistence and autocorrelation in trend residuals can make erroneous detection of trends a typical outcome in climatic data analysis.

There was also a comment in the chapter appendix cautioning that their “linear trend statistical significances are likely to be overestimated”.
Sometime after the close of IPCC peer review in summer 2006 the above paragraph was deleted, the cautionary statement in the Appendix was muted, and new text was added that claimed IPCC trend estimation methods “accommodated” the persistence of the error term. We know on other grounds that their method was flawed–they applied a DW test to residuals after correcting for AR1, which is an elementary error. The email traffic pertaining to the AR4 Ch 3 review process (not in the Climategate archive, but public nonetheless, if you know where to look) shows that the Ch 3 lead authors knew they were in over their heads. But rather than get help from actual experts, they just made up their own methods, and then after the review process was over they scrubbed the text to remove the caveats inserted during the review process.
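The qualitative difference Ross McKitrick describes between stationary and unit-root data can be seen in a few lines of simulation: the cross-sectional spread of a stationary AR(1) process settles at a constant level, while the spread of a unit-root process grows without bound, roughly like √t. This is an illustrative sketch (not a formal unit-root test such as Dickey-Fuller), with all names ours:

```python
import random
from statistics import pstdev

def ar1_path_end(phi, n, seed):
    """Final value of an AR(1) process x_t = phi*x_{t-1} + e_t starting
    at 0; phi = 1 is the unit-root (random walk) case."""
    rng = random.Random(seed)
    x = 0.0
    for _ in range(n):
        x = phi * x + rng.gauss(0, 1)
    return x

def spread_at(phi, t, trials=400):
    """Cross-sectional standard deviation of many independent runs at time t."""
    return pstdev(ar1_path_end(phi, t, seed) for seed in range(trials))

# Stationary case (phi = 0.5): the spread settles at a constant level.
# Unit-root case (phi = 1.0): the spread keeps growing, roughly like sqrt(t).
for phi in (0.5, 1.0):
    print(phi, round(spread_at(phi, 50), 2), round(spread_at(phi, 800), 2))
```

This time-varying variance is exactly why, as the comment notes, observing a change in the mean of a nonstationary series is not by itself evidence that the underlying system has changed.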

Regg
February 14, 2011 11:11 am

30 years of data is an “inconvenient truth”. Mmmmm.
There’s only 30 years of satellite data in that paper. What’s up, Dr. Pielke?

Horace the Grump
February 14, 2011 11:14 am

mmmm….. brownian motion….

rbateman
February 14, 2011 11:17 am

‘climate is the average weather’
For you C++ fans out there, weather is an instance of the class climate.
You will need many climates to make up the next class. Then there is the added complexity of regions, which can and do act independently and oppositely of neighboring regions, as well as sympathetically.

Mooloo
February 14, 2011 11:37 am

For a walk to be truly random the size and direction of each step must be independent of those of its predecessors.
But we know the “walk” in this case is not random. It is driven by features too complicated for us to determine though. From our point of view this might be taken to be the same thing.
However, the fact that over a 30-year period climate has the features of a random walk strongly suggests that there is no single driving “forcing” (CO2).
The issue with the CO2 warmists is that they think they can actually model the world’s climate with some degree of realism. That a simple random walk model works just as well suggests that they are wrong, not that the climate is actually random.

Billy Liar
February 14, 2011 11:42 am

Dave Springer says:
February 14, 2011 at 10:42 am
Perhaps you could illuminate your waffle with a couple of well-chosen examples?

jorgekafkazar
February 14, 2011 11:43 am

Brian H says: “[Drunkard’s] walks are often staggering. By definition.” 😉
I find them rather pedestrian, myself.

Ammonite
February 14, 2011 12:05 pm

rbateman says: February 14, 2011 at 11:17 am
‘climate is the average weather’ For you C++ fans out there, weather is an instance of the class climate.
An alternate is to consider multiple instances of Class Weather with Climate being a container within which they reside. Climate may then be averaged on a sensible basis, regardless of chaotic behaviour within the individual Weather objects…

KR
February 14, 2011 12:11 pm

“…eight independent global mean quantities (temperature maximum, summer lag, temperature minimum and winter lag over the land and in the ocean)…”
How are these independent? Mins and Maxes should rise and fall with temperature trends, summer and winter lag should be related to how warm or cold it is. I cannot see that these are truly independent quantities.
And, if you account for known forcings (http://tamino.wordpress.com/2011/01/20/how-fast-is-earth-warming/, http://tamino.wordpress.com/2011/01/06/sharper-focus/ – yearly cycle, volcanic activity, solar activity, ENSO, etc.) the actual variability of these quantities is much smaller than what seems to have been used to compute these excursions.
I do not believe this amount of random walk excursion is justified by the data, given actual observed variability of these rather interdependent values after accounting for known perturbations.

KD
February 14, 2011 12:13 pm

art johnson says:
February 14, 2011 at 10:06 am
reliapundit says:
“how about they select any period which helps them prove their case against industrialism and helps them advocate socialism?”
Of course everyone’s entitled to their point of view. My point of view is that I really dislike this sort of comment. The notion that alarmist scientists are somehow all conspiring to destroy capitalism is just hogwash. More importantly, it feeds into certain stereotypes that do the skeptics no good. I wish people would stick to the science, or lack thereof.
_________________________
Out of curiosity, how do you characterize the behavior of the alarmist scientists?
As a group they are advocating a position that would require unprecedented controls by governments around the world to have any chance of lowering CO2 emissions. They advocate this based on flawed science.
What is their motivation?

DesertYote
February 14, 2011 12:17 pm

rbateman says:
February 14, 2011 at 11:17 am
‘climate is the average weather’
For you C++ fans out there, weather is an instance of the class climate.
You will need many climates to make up the next class. Then there is the added complexity of regions, which can and do act independently and oppositely of neighboring regions, as well as sympathetically.
###
Climate is a factory that returns type “weather” :b

DesertYote
February 14, 2011 12:19 pm

DesertYote says:
Your comment is awaiting moderation.
February 14, 2011 at 12:17 pm
rbateman says:
February 14, 2011 at 11:17 am
#####
OOPs, make that “…returns an instance of type weather”.

Martin Brumby
February 14, 2011 12:51 pm

Yup, despite the best computerised prognostications,
the Climate just Keeps on a’Doin’ Whatta Climate’s Gotta Do…….

DeWitt Payne
February 14, 2011 1:03 pm

Ross McKitrick,
I have a hard time believing that any natural process on the planet has a unit root time series. Everything is bounded eventually because the planet has a finite size. If it’s bounded, it cannot have a root identical to one, although it can be very close to one. The classic example is a tank of water to which you randomly add or remove a bucket of water at fixed time intervals. That will behave like a unit root process until the tank overflows or empties. Then there’s the problem that the tank actually leaks. That also makes pure unit root behavior impossible in the long term (I think). Fractional integration, which amounts to long term persistence, makes more sense to me. However, it’s not at all clear that, given spatio-temporal chaos, any statistical process is valid.
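DeWitt Payne’s tank example is easy to simulate: a clipped random walk behaves like a unit-root process until it feels its bounds. A minimal sketch (the function name and the capacity of 20 are ours, purely illustrative):

```python
import random

def tank_level_walk(n_steps, capacity, seed=0):
    """Sketch of the tank example: at each step randomly add or remove
    one 'bucket', but clip the level to [0, capacity] -- the tank cannot
    overflow or hold less than nothing."""
    rng = random.Random(seed)
    level = capacity / 2
    path = []
    for _ in range(n_steps):
        level = min(max(level + rng.choice((-1, 1)), 0), capacity)
        path.append(level)
    return path

path = tank_level_walk(10000, capacity=20, seed=3)
print(min(path), max(path))  # the level never escapes [0, capacity]
```

Over runs much shorter than the time needed to reach the walls, this process is statistically indistinguishable from a true unit root, which is the sense in which a bounded physical system can still look like a random walk on decadal time scales.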

February 14, 2011 1:24 pm

KR says: “How are these independent? Mins and Maxes should rise and fall with temperature trends, summer and winter lag should be related to how warm or cold it is. I cannot see that these are truly independent quantities.”
Have you plotted and analysed them for the full term of GISS LOTI or HADCRUT or the NCDC’s merged land plus sea surface temperature data or are you simply expressing a belief?

cowichan
February 14, 2011 1:29 pm

[snip. You need to provide a short explanation, not just a stand-alone link. ~dbs, mod.]

K
February 14, 2011 1:39 pm

the actual variability of these quantities is much smaller than what seems to have been used to compute these excursions.
In a time series, a random walk tends to manifest itself as small, undetectable errors at each measurement at time t(i). The forcing values may seem small at each measurement, but if they are part of a random walk process then, over a series of measurements, they show up as a large error. Each time a drunk takes an apparently random step, some of his error is white noise and cancels out, but some is RW, and he subsequently wanders hither and yon even though each step is small.
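K’s distinction between white-noise and random-walk error can be made concrete: independent errors keep a constant spread no matter how many steps are taken, while integrated errors grow like √n. A minimal sketch (function name and parameters are ours, purely illustrative):

```python
import random
from statistics import pstdev

def final_error(kind, n, seed):
    """Error after n steps. 'white': each step's error is independent and
    forgotten, so only the latest draw remains. 'walk': each step's error
    is carried forward, so the errors integrate into a random walk."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        e = rng.gauss(0, 1)
        total += e
    return e if kind == "white" else total

# Spread of the final error across many independent 400-step runs:
# the white-noise case stays near 1, the random-walk case near sqrt(400) = 20.
for kind in ("white", "walk"):
    spread = pstdev(final_error(kind, 400, s) for s in range(500))
    print(kind, round(spread, 1))
```

Per-step errors of identical size thus produce wildly different long-run behaviour depending on whether they cancel or accumulate — the drunkard’s small steps versus his large wanderings.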

February 14, 2011 1:40 pm

KR says: “And, if you account for known forcings (…yearly cycle, volcanic activity, solar activity, ENSO, etc.) the actual variability of these quantities is much smaller than what seems to have been used to compute these excursions.”
The seasonal component in global temperature anomaly data and ENSO are not forcings!
Also Tamino’s finding of a solar component with a lag of a few months in the global temperature data is questionable. It’s likely due to his limiting his analysis to the past few decades and possibly due to the ENSO residuals he’s left in the adjusted data. Solar lag is a function of the thermal inertia of the oceans, and if memory serves me well, estimates of the lag vary from 5-7 years at the low end to a couple of decades at the high.

APACHEWHOKNOWS
February 14, 2011 2:26 pm

Mr. Watts,
What is needed is an unbiased third party. They would somehow get the data on all the people, from both sides of this dispute, who have received grants or money from others that might bend their opinions. That third party would need to match that to the posters here in such a way that only the third party knows who took money from whom.
Then a number/ranking would come up beside each poster’s nick to show how much they received, on whichever side. You and the blog would not have that data, just the off-site third party.
[snip]
[reply] can’t post unverified info – sorry RT-mod

KR
February 14, 2011 2:45 pm

Bob Tisdale: “Have you plotted and analysed them for the full term of GISS LOTI or HADCRUT or the NCDC’s merged land plus sea surface temperature data or are you simply expressing a belief?”
Given the warming trend, we’re seeing more record highs than record lows (http://www2.ucar.edu/news/1036/record-high-temperatures-far-outpace-record-lows-across-us), which means that they are correlated to temperature and inversely correlated to each other.
Northern hemisphere annual snow extent is declining (http://web.unbc.ca/%7Esdery/publicationfiles/2007GL031474.pdf), which affects both winter and spring lag dates – again, not independent variables.
So no, I’m not just expressing a belief, I’m looking at the data, and considering the causal links between these various values.

KR
February 14, 2011 2:50 pm

Bob Tisdale: “The seasonal component in global temperature anomaly data and ENSO are not forcings!”
You are correct, Bob, bad terminology on my part, my apologies. They are components of expected variations in temperatures based upon seasonal and oceanic cycles. And accounting for (correcting for) these cyclic variations reduces the underlying variability of the observed temperature data.
Solar lag is (to me) the most questionable part of Tamino’s analysis, but that may be a limitation of my knowledge. I will note that he’s perhaps one of the most skilled time series analysts I know of – if he says that there is a good correlation then I believe that’s what he’s seeing in the data.
I believe he’s finding the transient response to solar variations, not any long-term response – just the upper 100 meters of well-mixed ocean.

February 14, 2011 3:07 pm

Uncanny! Last week I wrote a short random walk program to make 30-step walks, which I graphed in Excel just for kicks. I gave it 3 options in each iteration (same, up, down) and graphed them all out to compare with the new 30 year UAH baseline.
It’s pretty hard to tell blind which is real and which is made up. I found it a pretty interesting exercise. I’m not a programmer, so in BASIC (don’t laugh!) I used a FOR NEXT loop with 2 random number generators and 2 IF THEN statements. Pretty simple fun which demonstrates how many “trends” can appear in spite of the purely random set. It doesn’t tell me much about the real world, but it does tell me something about assuming a trend means anything.

February 14, 2011 3:11 pm

Here’s my random walk code for anyone who wants to make their own data and knows even less about code than me! (Line 10 initializes V so the first PRINT isn’t undefined.)
10 V= RND (6)
20 FOR T=1 TO 30
30 S= RND (2)
32 IF S=2 THEN GOTO 60
40 V= RND (6)
60 PRINT V
70 NEXT T
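For readers without a BASIC interpreter, here is a rough Python equivalent of the exercise described in the comment above — a short series with a "same, up, down" choice at each step. The seed and unit step size are my own choices, not from the original:

```python
import random

random.seed(1)          # remove or change the seed for a fresh walk
position = 0.0
walk = []
for _ in range(30):     # 30 steps, like the 30-year baseline
    position += random.choice([-1, 0, 1])   # down, same, or up
    walk.append(position)

print(walk)
```

Run it a few dozen times and plot the results: many of the purely random series will show apparent "trends", which is the commenter's point.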

DJA
February 14, 2011 3:14 pm

VS’s first entry at Bart Verheggen’s blog said this
“# VS Says:
March 4, 2010 at 13:54
Hi Bart,
Actually, statistically speaking, there is no clear ‘trend’ here, and the Ordinary Least Squares (OLS) trend you estimated up there is simply nonsensical, and has nothing to do with statistics.
Here is a series of Augmented Dickey-Fuller tests performed on temperature series (lag selection on the basis of a standard entropy measure, the SIC), designed to distinguish between deterministic and stochastic trends. This is the first and most essential step in any time series analysis; see for starters Granger’s work at http://nobelprize.org/nobel_prizes/economics/laureates/2003/
Test results:
** CRUTEM3, global mean, 1850-2008:
Level series, ADF test statistic (p-value<):
-0.329923 (0.9164)
First difference series, ADF test statistic (p-value<):
-13.06345 (0.0000)
Conclusion: I(1)
** GISSTEMP, global mean, 1881-2008:
Level series, ADF test statistic (p-value<):
-0.168613 (0.6234)
First difference series, ADF test statistic (p-value<):
-11.53925 (0.0000)
Conclusion: I(1)
** GISSTEMP, global mean, combined, 1881-2008:
Level series, ADF test statistic (p-value<): -0.301710 (0.5752)
First difference series, ADF test statistic (p-value): -10.84587 (0.0000)
Conclusion: I(1)
** HADCRUT, global mean, 1850-2008
Level series, ADF test statistic (p-value<):
-1.061592 (0.2597)
First difference series, ADF test statistic (p-value<):
-11.45482 (0.0000)
Conclusion: I(1)
These results are furthermore in line with the literature on the topic. See the following:
** Woodward and Gray (1995)
– reject I(0), don’t test for I(1)
** Kaufmann and Stern (1999)
– confirm I(1) for all series
** Kaufmann and Stern (2000)
– ADF and KPSS tests indicate I(1) for NHEM, SHEM and GLOB
– PP and SP tests indicate I(0) for NHEM, SHEM and GLOB
** Kaufmann and Stern (2002)
– confirm I(1) for NHEM
– find I(0) for SHEM (weak rejection of H0)
** Beenstock and Reingewertz (2009)
– confirm I(1)
In other words, global temperature contains a stochastic rather than deterministic trend, and is statistically speaking, a random walk. Simply calculating OLS trends and claiming that there is a 'clear increase' is non-sense (non-science). According to what we observe therefore, temperatures might either increase or decrease in the following year (so no 'trend').
There is more. Take a look at Beenstock and Reingewertz (2009). They apply proper econometric techniques (as opposed to e.g. Kaufmann, who performs mathematically/statistically incorrect analyses) for the analysis of such series together with greenhouse forcings, solar irradiance and the like (i.e. the GHG forcings are I(2) and temperatures are I(1) so they cannot be cointegrated, as this makes them asymptotically independent. They, therefore have to be related via more general methods such as polynomial cointegration).
Any long term relationship between CO2 and global temperatures is rejected. This amounts, at the very least, to a huge red flag.
Claims of the type you made here are typical of 'climate science'. You guys apparently believe that you need not pay attention to any already established scientific field (here, statistics). In this context, much of McIntyre's criticism is valid, however much you guys experience it as 'obstructionism'.
It would do your discipline well to develop a proper methodology first, and open up all of your methods to external scrutiny by other scientists, before diving head first into global policy consulting.
PS. Also, even if the temperature series contained a deterministic trend (which it doesn't), your 'interpretation' of the 95% confidence interval is imprecise and misleading, at best. I suggest you brush up on your statistics."
This entry provoked an almost-record 2,184 comments by such eminent writers as Tamino, Dhogaza and many others. Not surprising, really, when VS questioned the CO2/global temperature relationship and dismissed temperature over time as statistically equivalent to a random walk.
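The kind of test VS reports can be reproduced on synthetic data without specialist software. Below is a minimal sketch of a Dickey-Fuller-style regression using plain least squares (not a full ADF with lag terms, and -2.86 is just the usual 5% reference value for the with-constant case — both simplifications are mine): for a simulated random walk, the level series fails to reject a unit root while its first difference rejects it decisively, the same I(1) pattern VS describes for the temperature series.

```python
import numpy as np

def df_tstat(y):
    """t-statistic of b in the regression dy_t = a + b*y_{t-1} + e_t."""
    dy = np.diff(y)
    x = np.column_stack([np.ones(len(dy)), y[:-1]])
    beta, *_ = np.linalg.lstsq(x, dy, rcond=None)
    resid = dy - x @ beta
    sigma2 = resid @ resid / (len(dy) - 2)
    cov = sigma2 * np.linalg.inv(x.T @ x)
    return beta[1] / np.sqrt(cov[1, 1])

rng = np.random.default_rng(7)
walk = np.cumsum(rng.normal(size=1000))   # synthetic unit-root "temperature" series

t_level = df_tstat(walk)            # usually well above -2.86: cannot reject a unit root
t_diff = df_tstat(np.diff(walk))    # strongly negative: the differences look stationary
print(round(t_level, 2), round(t_diff, 2))
```

Note the caveat in the literature VS cites: the Dickey-Fuller statistic does not follow the usual t-distribution, which is why the special critical values are needed.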

Mark Twang
February 14, 2011 3:18 pm

This is why I can neither “believe” nor “disbelieve” in AGW. I consider myself relatively sharp, educated, and well-read, but the title of this article conveys absolutely nothing tangible to my mind, and it just gets worse from there. It might as well be written in Lojban. When I contemplate the effort that would be needed to comprehend it, never mind evaluate it for truth, I make the affirmative decision that I can and should and ought to have nothing to say about the issue.

Mike Haseler
February 14, 2011 3:37 pm

Brian H says: February 14, 2011 at 10:06 am
Oops, meant to say “drunkards’ walks”. Oh, well.
Is there, btw, a “technical” distinction between “random” and “drunkards’”, anyone? Inebriated minds want to know.

A drunk still has some goal in their walk … the randomness has a general direction … given time it will become obvious they are heading somewhere.
A random walk is by definition completely random. At any time all directions are equally probable – they arrive somewhere by chance alone.
Climate is much closer to a drunken walk than a random walk – but in short term simulations it is difficult to ascertain the difference.

Charlie A
February 14, 2011 3:40 pm

Gary Pearse said “…A year or so ago in a comment on major flooding of the Red River of the North on WUWT, I used permutation analysis of the flood record going back 150 years (I believe) to show that the frequency of new records in flooding (assuming that year 1 was a record) roughly equalled the function ln(n), where n is the number of years considered (150). Ln 150 = 5, which means that if flooding is a random event there will be 4 new records established after year one. … ”
I’m pretty sure that your analysis makes assumptions about the statistical character of the record that are not warranted. Actual weather/climate/hydrological records have very high autocorrelation, also sometimes described as being pink or red rather than white noise.
So new records in a real-life climate/weather/hydrological time series will not be spaced as far apart as you would expect, particularly in very long time series.
You might find it interesting to apply your analysis to the longest known hydrological record, the flow measurements on the Nile river.
I’ve found the website of D. Koutsoyiannis to be very useful, with
http://itia.ntua.gr/en/docinfo/511/ being a good starting point.
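The ln(n) rule Gary Pearse used can be checked directly: for an i.i.d. sequence the exact expected number of records is the harmonic number, roughly ln(n) + 0.577. A quick simulation with parameters of my own choosing (this sketch covers only the i.i.d. case, not the autocorrelated case Charlie A raises, where records come more often):

```python
import numpy as np

rng = np.random.default_rng(3)
n_years, n_trials = 150, 10_000

# Count record highs (year 1 always counts) in i.i.d. sequences.
series = rng.normal(size=(n_trials, n_years))
running_max = np.maximum.accumulate(series, axis=1)
records = (series == running_max).sum(axis=1)

mean_records = records.mean()
theory = np.log(n_years) + 0.5772    # harmonic-number approximation
print(round(mean_records, 2), round(theory, 2))
```

For 150 years the theoretical count is about 5.6 rather than exactly ln(150) = 5, but Gary Pearse's order of magnitude is right: only a handful of records are expected by chance alone.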

Charlie A
February 14, 2011 3:46 pm

Headpost says ” …..and indicate a random walk length on land of 24 yr and over the ocean of 20 yr. Using three 1000 yr segments from mil01, the random walk lengths increased to 37 yr on land and 33 yr over the ocean. ”
It appears that the three 1000 year segments were from models, not actual (or reconstructed) data. Various studies have shown that the statistical characteristics of various model outputs have less variability than real life records, so it is likely that the random walk length for 1000 year segments is significantly longer than 37 years.
At least that is my belief. Does anybody have knowledge of what the true rescaling factors are for real life temperature records, such as the long Central England Temperature record?

George E. Smith
February 14, 2011 3:54 pm

“”””” rbateman says:
February 14, 2011 at 11:17 am
‘climate is the average weather’ “””””
Well not exactly; and that is the root of Trenberth’s problem. I would agree if weather/climate were linear; but they are not; certainly the T^4 and T^5 aspects of BB radiation total emittance and spectral peak emittance, respectively, are anything but linear.
So it is more correct to say that climate is the integral of weather; NOT the average of weather. Climate rests on the real time value of EVERYTHING weatherwise, that has previously happened. Mother Gaia does NOT do averages.

February 14, 2011 3:55 pm

How is “random walk length” defined in the paper?

February 14, 2011 3:56 pm

KR: You could have replied simply, “No. I have not plotted and analyzed them for their full term,” in your February 14, 2011 at 2:45 pm reply to my earlier question. It would have saved you some time.
You replied, “You are correct, Bob, bad terminology on my part, my apologies. They are components of expected variations in temperatures based upon seasonal and oceanic cycles. And accounting for (correcting for) these cyclic variations reduces the underlying variability of the observed temperature data.”
But you are overlooking the fact that these seasonal and ENSO-induced variations are parts of the instrument temperature record and, for ENSO, are responses to internal coupled ocean-atmosphere processes that have multiyear aftereffects. They are not noise, and for the paper presented in this post, they would need to remain in the data.
Also, they are not “expected variations”. If they were, climate modelers would be able to duplicate ENSO and its aftereffects since the 1880s, but, of course, they cannot.
You wrote, “I believe he’s finding the transient response to solar variations, not any long-term response – just the upper 100 meters of well-mixed ocean.”
The key word in that sentence is “believe”. The other studies (much discussed and debated a few years ago) with the multiyear and multidecadal lags were also investigations of the surface temperature record, not the upper 100 meters of ocean.
You wrote, “I will note that he’s [Tamino’s] perhaps one of the most skilled time series analysts I know of…”
Anyone [Tamino included] who claims to account for the process of ENSO with linear regression and further claims the trend of the volcano- and ENSO-adjusted global temperature anomalies is caused by anthropogenic global warming [as Tamino did in the two posts you linked earlier] is not a “skilled time series analyst”. ENSO is a process and cannot be accounted for (as Tamino attempted) with linear regression. They are [he is] simply attempting to prolong a myth.

Eric the halibut
February 14, 2011 4:00 pm

I have long regarded that icon of the 60’s, the lava lamp, as a particularly interesting, though simple, example of “climate in a bottle”. You have a closed system with an external energy source which in general, functions within fixed boundaries and in a rather predictable fashion, in that the wax will melt, rise to the top, then fall down again. But the detail is very different – there appears to be no real pattern to how the motion unfolds, a little like a log fire, you can watch the seemingly random activity of either of these for ages (if you have nothing else to do). I have often wondered if anybody had bothered to accurately model what goes on in the lava lamp’s environment, my bet would be that it is possible to simulate the behaviour on a computer, but if you ran it on a monitor next to a real lamp, it would fail to predict the actual sequence of events. And this is my problem with models, they simulate and in broad terms describe outcome scenarios, but rarely, if ever, maintain any direct connection with reality.

Ian W
February 14, 2011 4:06 pm

So if the normal climate variations can be ascribed to a ‘random walk’ can the glacials and inter-glacials be ascribed to ‘Levy Flight’ variances?

Darren Parker
February 14, 2011 4:40 pm

I notice no one is incorporating the 3 year GCR lag. Based on 100 AU it takes 1.5 years for the heliopause to adjust to current sun conditions and then a further 1.5 years for the GCR to reach earth from the heliopause.

KR
February 14, 2011 4:48 pm

Bob Tisdale: “It would have saved you some time.”
So – you are simply ignoring causal relationships between minimum and maximum temperatures? And the relationship of the beginning and ending of seasons as botanical zones move towards the poles? Ignoring cause and effect? If you had just stated that, well, that would have also saved me some time.
Actual statistical analysis of the random walk components of the climate (Gordon and Bye 1993 http://www.sciencedirect.com/science?_ob=ArticleURL&_udi=B6VF0-487DN15-1&_user=10&_coverDate=11%2F30%2F1993&_rdoc=1&_fmt=high&_orig=search&_origin=search&_sort=d&_docanchor=&view=c&_searchStrId=1641686257&_rerunOrigin=google&_acct=C000050221&_version=1&_urlVersion=0&_userid=10&md5=333426af25a2723dce790818c565ceb9&searchtype=a) indicate that maximum excursions of random walks are indeed seen on the scale of ~5 years, matching the ENSO and QBO variations. However, “…the projected temperature rise due to the enhanced Greenhouse effect possibly cannot be supported as a random walk…”.
Unless these papers have disproven Gordon and Bye, their argument about random walks is not valid.

KR
February 14, 2011 4:53 pm

A better link for the paper I noted in my previous post, Gordon and Bye 1993, should be http://linkinghub.elsevier.com/retrieve/pii/092181819390007B
Sorry for the overly long URL.

February 14, 2011 4:53 pm

Eric the halibut,
Many years ago I read an article in the Economist [before it caught the CAGW virus] that reported on the use of lava lamps as random number generators. IIRC, a half-dozen lamps were placed in a mirrored room, with a value assigned to the amount of reflection from the moving wax, which was picked up by a CCTV camera.
Since true random number generators don’t exist and would be very much in demand, I’ve often wondered why this idea never caught on.
…We now return you to your regularly scheduled programming.☺

Wally
February 14, 2011 4:56 pm

RE: Gary Pearse
February 14, 2011 at 9:55 am
Comment on expected number of maximums or minimums in a record. I looked at the Mean monthly and yearly data from CET. I recalculated monthly anomalies based on the average of all the months present.
4224 months: Ln(4224)=8.3. Counting month one, 12 new maximum and 11 new minimum anomalies.
352 years: Ln(352)=5.9. Counting year one, 9 new maximums and 9 new minimums.
Looks like the number of max and mins is a little higher than predicted, but the number of both is very even.
The slope of the monthly anomalies is 0.25°C/100yrs

Charlie A
February 14, 2011 5:16 pm

Ian W mentions ‘Levy Flight’ a couple of posts above.
I’ll have to add that to the growing list of various statistical distributions and descriptions I need to further study.
Levy Flight, Hurst Exponent, rescaled range, fractional Gaussian, fractional Brownian, long range dependence, Hurst-Kolmogorov distribution …… they all show much more low frequency variation than expected from a Gaussian or normal distribution.
So it is very foolish to expect climate variations to follow a sqrt(n) sort of random walk.
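One standard way to test a series against Charlie A's sqrt(n) baseline is the aggregated-variance estimate of the Hurst exponent: average the series in blocks of size m and watch how the variance of the block means decays (the log-log slope is 2H - 2). A sketch on plain white noise, where H should come out near 0.5 — block sizes and series length are my own choices, and long-range dependent data would instead give H > 0.5:

```python
import numpy as np

def hurst_aggvar(x, block_sizes):
    """Aggregated-variance Hurst estimate: var of block means ~ m**(2H - 2)."""
    log_m, log_v = [], []
    for m in block_sizes:
        n_blocks = len(x) // m
        means = x[: n_blocks * m].reshape(n_blocks, m).mean(axis=1)
        log_m.append(np.log(m))
        log_v.append(np.log(means.var()))
    slope = np.polyfit(log_m, log_v, 1)[0]
    return 1 + slope / 2

rng = np.random.default_rng(11)
noise = rng.normal(size=2**14)
H = hurst_aggvar(noise, [2, 4, 8, 16, 32, 64, 128])
print(round(H, 2))   # close to 0.5 for uncorrelated noise
```

Running the same estimator on a long climate record (CET, Nile flows) is exactly the kind of exercise Koutsoyiannis's papers describe, and published estimates there come out well above 0.5.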

Bill Hunter
February 14, 2011 5:27 pm

Mike Haseler says:
“A drunk still has some goal in their walk … the randomness has a general direction … given time it will become obvious they are heading somewhere.”
Perhaps you haven’t been as drunk as I have been. When you figure out what my goal was could you write me and let me know?

izen
February 14, 2011 5:27 pm

DeWitt Payne beat me to the punch in making the point that global climate cannot be a random walk because it is bounded. At least since the Hadean the presence of liquid water shows that.
It is arguable that within those ranges and timescales a ‘random walk’ behavior could dominate variations.
But the obvious correlation with orbital cycles and major volcanic activity indicates that the climate is not causally independent of radiative transfer effects.
I think I remember an article about casinos/gambling devices that sheds light on this. They monitor the wheels etc because although these devices are carefully designed to exhibit ‘random walk’ behavior to prevent prediction and biasing the odds, that random walk may look like a consistent trend over some timescales/magnitudes.
It is actually quite difficult to robustly exclude the possibility that a sequence is a random walk. But the casinos take the opposite approach. Given the possibility that small physical effects can cause a trend or bias in the outcomes, and knowing that others doing analysis could make money from them if their wheels etc. were NOT random, they remove any device that shows a trend long before it is possible to establish that it ISN’T a random walk.
Because the consequences of there being a physical cause of the bias/trend are too great to risk.
Perhaps climate policy need not be as cautious as Las Vegas….

ge0050
February 14, 2011 5:55 pm

>>that global climate cannot be a random walk because it is bounded<<
Short term it can certainly appear to be a random walk. Longer term climate for most of the past 600 million years has swung between an average of 12C and 22C, with little time spent in the middle.
This resembles a drunken walk down a hallway, where the drunk leans against one wall or the other for stability, and occasionally veers from one wall to the other for no apparent reason.
Why the climate appears stable long term at 12C and 22C and unstable elsewhere is an interesting question.
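ge0050's drunk-in-a-hallway picture (two preferred states with rare crossings) is essentially a random walk in a double-well potential. A toy sketch, with the wells placed at ±1 rather than at 12 °C and 22 °C and every parameter invented for illustration: the walker spends nearly all its time hugging one wall or the other, with little time in the middle.

```python
import numpy as np

rng = np.random.default_rng(5)
dt, sigma, n_steps = 0.01, 0.3, 50_000

# Double-well potential V(x) = (x**2 - 1)**2 with minima ("walls") at x = +1 and -1.
def force(x):
    return -4.0 * x * (x * x - 1.0)   # -dV/dx: pushes the walker toward a well

x = 1.0
path = np.empty(n_steps)
for i in range(n_steps):
    # Euler-Maruyama step: deterministic pull toward a well plus random kicks.
    x += force(x) * dt + sigma * np.sqrt(dt) * rng.normal()
    path[i] = x

near_a_wall = np.mean(np.abs(path) > 0.5)
print(round(near_a_wall, 2))   # most of the time is spent near one of the two states
```

With stronger noise (larger sigma), the lurches between walls become more frequent — a crude stand-in for whatever kicks the real climate between its glacial and interglacial states.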

sky
February 14, 2011 5:58 pm

February 14, 2011 at 11:00 am
Ross McKitrick provides a clear and concise summary of the misdirection in econometrics that prevailed on the basis of blind academic faith in the random walk model. In the physical world, apart from diffusion processes, it’s unlikely that a strict random walk (Wiener-Einstein process) is encountered anywhere on geophysical scales. A pervasive problem with many empirical statistical analyses is the employment of weak tests for stochastic independence (e.g., the Durbin-Watson statistic) on increments in arbitrarily discretized (and often aliased) time-histories of physical signals.
Proper power spectrum analyses almost invariably reveal significant oscillatory components quite apart from the periodic diurnal and seasonal cycles. In climatic series, some oscillations are quite wide-band (ENSO) and unpredictable, whereas others are narrow-band enough (multi-decadal oscillations) to provide some useful predictability. There’s an intrinsic difference between the well-studied noise models that academics appeal to theoretically and the signal behavior that is encountered in practice. Sadly, climate science has not produced its own Granger to tell the difference. Consequently, all sorts of baseless statistical speculation tries to fly in a vacuum of analytic grasp of real-world signals.

John F. Hultquist
February 14, 2011 6:39 pm

Bill Hunter says: at 5:27 pm
“When you figure out what my goal . . .

Agreed. And we are in good company.
Life’s Been Good / Joe Walsh
I go to parties, sometimes until four
It’s hard to leave when you can’t find the door

KR
February 14, 2011 6:53 pm

Smokey: Random number generation from lava lamps:
That sounds fantastic – and great fun too. Until the statisticians chill out too much to work while watching the lamps, of course! 🙂

Ross Brisbane
February 14, 2011 7:23 pm

We can navel gaze all we like. We can create a theory that it’s all about socialism. However, we can bleat all we like about dying ideologies – yes, there will be the death of both if we continue to work these theory papers and “play the fiddle whilst Rome burns”.
I know we know about it. You have heard time and time again that the signs globally have been coming out ever stronger since 2005. More and more we are seeing the delayed effects of increased greenhouse gases come through loud and clear as a distinct impact.
Latest readings indicate we are nearing 400 ppm of CO2. Please don’t play the ppm concentration game. Anyone who has a decent high school education knows about CO2’s radiative forcing properties. Anyone also knows that a globally warming world sets the water vapor cycle in motion and speeds it up. That is: water -> water vapor -> upper atmosphere – a potent greenhouse effect that then interplays with our climate.
DID IT EVER OCCUR to anyone that when water vapor goes into the upper atmosphere, it STORES the energy that put it up there in the first place? Did it ever occur to anyone that this stored energy then rebounds back to the earth and is not lost to space?
And let’s be clear on this: heat returns to earth’s surface when it rains.
“Water vapor is a potent greenhouse gas along with other gases such as carbon dioxide and methane.” – Wikipedia
“The water molecule brings heat energy with it. In turn, the temperature of the atmosphere drops slightly. In the atmosphere, condensation produces clouds, fog and precipitation (usually only when facilitated by cloud condensation nuclei). The dew point of an air parcel is the temperature to which it must cool before water vapor in the air begins to condense.
Also, a net condensation of water vapor occurs on surfaces when the temperature of the surface is at or below the dew point temperature of the atmosphere. Deposition, the direct formation of ice from water vapor, is a type of condensation. Frost and snow are examples of deposition.” – Wikipedia
The link is obvious – the climate feedbacks of late indicate that what has been said time and time again is 100% correct.
We do not need the parroting of Climategate or new papers’ assertions of mere theory – we just need to co-operate and not politic this debate.
Science papers that attempt to theorize about our past century, whilst there are obvious signs of definitive global warming, highlight that not enough people really get it.
This is not about politics – this will be a brave new world that we will all face together.

jorgekafkazar
February 14, 2011 7:37 pm

Ross Brisbane says: “We can navel gaze all we like…”
Which is exactly what your comment consists of. You’ve not put forth proof that any of what you say is correct, meaningful, or applicable.

Editor
February 14, 2011 10:08 pm

I love this. The paper is called “Random Walk Lengths Of About 30 Years In Global Climate”. But as near as I can tell, they never look at the global climate at all. It’s kinda like a Hollywood movie that the filmmakers say is “based on a true story” but that changes the plotline entirely: this study is “based on a true climate” but doesn’t use a scrap of climate data.
Instead they take their so-called “data” from the “NCEP twentieth century reanalysis (V2) (1871-2008) and the ECHAM5 IPCC AR4 twentieth century run for 1860-2100, and also the Millenium 3100 yr control run mil01, which was segmented into records of specified period.” Note that none of those are data, they are all the results of climate model runs. Every one of them.
Nor are their “eight independent global mean quantities” independent in any sense, they are all related to each other.
So instead of using eight independent climate variables as they claim, they have used eight closely related climate model outputs … isn’t there some kind of “Truth in Scientific Advertising” law that prevents this kind of misrepresentation? Because if they tried this kind of “bait and switch” tactic while selling soap flakes, they’d be put in jail …
To me, the length of random walks in climate model outputs is one of the most meaningless statistics I can imagine. Why would anyone possibly care? Random walks in real data are interesting, but it has to be real data.
w.

citizenschallenge
February 14, 2011 11:46 pm

You know, scanning down this thread I just couldn’t help think of my Dad laying out his facts and figures, explaining why his next business scheme would be a hit.
But the thing is, just like him, you folks seem to forget that your map is not the territory.
You can make all your learned arguments – but the Earth Observation Data; along with the considerable uptick in extreme weather phenomena; supported by the consensus scientific understanding for why this is happening – really ought to rattle you folks out of your ivory towers.
[snip]

Brian H
February 14, 2011 11:47 pm

jorge;
you were far too kind to Ross.
Not worth replying to, actually; he’s way too whacked to even represent the average Believer.

February 15, 2011 12:28 am

citizenschallenge says:
February 14, 2011 at 11:46 pm

You can make all your learned arguments – but the Earth Observation Data; along with the considerable uptick in extreme weather phenomena; supported by the consensus scientific understanding for why this is happening – really ought to rattle you folks out of your ivory towers.

Could you provide some reference for this “considerable uptick in extreme weather phenomena”? All other sources seem to indicate there is none.
From your own blog:

God is big, huge, beyond anything anyone of us can imagine. Won’t we recognize that when reading, absorbing, witnessing the Bible (or any Holy Book) we interpret it through our individual eyes while weaving our own spirit into our understanding and further telling? This isn’t denying the truths within sacred texts: it is admitting that God’s mysteries and plan are beyond our human ability to grasp.

May I paraphrase:
Climate is big, huge, beyond anything anyone of us can imagine. Won’t we recognize that when reading, absorbing, witnessing the IPCC reports (or any Holy Book) we interpret them through our individual eyes while weaving our own observations into our understanding and further telling? This isn’t denying the truths within sacred texts: it is admitting that the Climate’s mysteries are beyond our laymen’s ability to grasp.

John Whitman
February 15, 2011 12:34 am

Ross McKitrick says:
February 14, 2011 at 11:00 am
“Nonetheless, the model getting the most support in the data indicates RW behavior and yields a contemporary trend component well below GCM forecasts.”
“The email traffic pertaining to the AR4 Ch 3 review process (not in the Climategate archive, but public nonetheless, if you know where to look) shows that the Ch 3 lead authors knew they were in over their heads.”
——————-
Ross McKitrick,
Your comment was very clear and enlightening. Thank you.
Can you kindly answer two questions?
QUESTION #1: Can you please tell us the name of the statistical model getting the most support in the data.
QUESTION #2: Where do I look for the emails pertaining to the AR4 Ch 3 review process?
John

John Whitman
February 15, 2011 1:35 am

DeWitt Payne says:
February 14, 2011 at 1:03 pm
I have a hard time believing that any natural process on the planet has a unit root time series.

DeWitt Payne,
I think the logic is fundamental in support of the existence of a random component(s) to natural processes on this planet. First logical step is to develop and verify some tests for the presence of a random component in the behavior of nature. Next do the tests on some real data (for example GST time series data). Evaluate and cross reference various test results. If they show a random component is involved then, until the tests can be shown to be in error, the treatment of data must include analytical processes that account for random component behavior; analytical processes that do not account for a random component behavior cannot be logically justified.
DeWitt, do you find errors in the statistical tests or their application?
John

February 15, 2011 1:45 am

Ross Brisbane says:
February 14, 2011 at 7:23 pm
This is not about politics – this will be a brave new world that we will all face together.

You don’t have an idea what Brave New World means, do you? Read it, then report back, please.
At least Bokanovsky’s process is definitely about politics, not science. The issues at stake used to be called freedom in Oldspeak.

February 15, 2011 2:15 am

Since VS’ argument about a random walk came up, may I remind you that he also wrote:
“I agree with you that temperatures are not ‘in essence’ a random walk, just like many (if not all) economic variables observed as random walks are in fact not random walks.”
and
“I’m not claiming that temperatures are a random walk.”
I summarized some of his arguments and my position here:
http://ourchangingclimate.wordpress.com/2010/03/18/the-relevance-of-rooting-for-a-unit-root/
and an analogy regarding a random walk and the energy balance here:
http://ourchangingclimate.wordpress.com/2010/04/01/a-rooty-solution-to-my-weight-gain-problem/
The original thread where most of the discussion took place is:
http://ourchangingclimate.wordpress.com/2010/03/08/is-the-increase-in-global-average-temperature-just-a-random-walk/
where in my last comment I provided pointers to some of VS’ main arguments, as the whole discussion is a little hard to navigate.

Geoff Sherrington
February 15, 2011 3:13 am

John Whitman,
For review comments, try http://hcl.harvard.edu/collections/ipcc/index.html
While I’m on line, our math dept (business, not academia) had a guy who could not really converse unless he had a sheet of paper on which he drew a triple integral symbol. Then he would say that the input data had to be real and verified; then that one must study the shape of the distribution before getting fancy. These simple rules seem to get broken. A brave person indeed would do analysis on non-satellite global temperature data sets, which are known to be adjusted and irreproducible. It’s the rough equivalent of loaded dice.
A further (though limited) comment. One can find really hot years at 28 year intervals (approx) starting 1914. They appear all over the world, but not always at all weather stations. They seem incompatible with GHG effects.

Bob Tisdale
February 15, 2011 3:19 am

KR says: “So – you are simply ignoring causal relationships between minimum and maximum temperatures? And the relationship of the beginning and ending of seasons as botanical zones move towards the poles? Ignoring cause and effect? If you had just stated that, well, that would have also saved me some time.”
I’m not ignoring anything. I asked you a question. (Have you plotted and analysed them for the full term of GISS LOTI or HADCRUT or the NCDC’s merged land plus sea surface temperature data or are you simply expressing a belief?) It’s a simple question that could be answered with a yes or a no. The apparent answer is no.
And with your recent answer, you’re assuming that “the beginning and ending of seasons as botanical zones move towards the poles”, etc., can be used as a proxy for the data subsets being discussed in the paper. Half of them are ocean subsets. And the timing of the botanical responses may not coincide with the land surface temperature subsets being used in the paper.
You continued, “Unless these papers have disproven Gordon and Bye, their argument about random walks are not valid.”
It appears that the John A.T. Bye from the 1993 Gordon and Bye paper you linked presented his new findings in the John Bye et al 2011 preprint “Random walk lengths of about 30 years in global climate”. The John Bye from the 2011 paper is from the School of Earth Sciences, The University of Melbourne, and his CV includes the 1993 Gordon and Bye paper.

John Whitman
February 15, 2011 4:31 am

Bart Verheggen says:
February 15, 2011 at 2:15 am
– – – – – –
Bart,
Please desist. I do not think it is at all appropriate for you to speak to the overall position of VS.
Let him say his own words.
John

John Brookes
February 15, 2011 5:39 am

So global temperatures look like they are going up, but it is very hard to tell if it is just a random walk, or a random walk with a bias in one direction.
My question is: how much longer will global temperatures need to keep going up before statistical tests will be able to discount sheer randomness?
Once I did a random walk simulation on a computer. I envisaged a group of gamblers, each with a dollar, at a casino. They would bet the dollar at 50/50 odds. After the first bet, about half the gamblers would be broke, and the other half would have 2 dollars. The ones with money would then bet another dollar, and so on. I was interested in how long people could bet before all their money was gone. So I do a run with 20 gamblers, and get one of them betting for 93 turns before he runs out of money. Run it again, with 100 gamblers. It doesn’t stop. I interrupt it to find that everyone has lost their money, except one gambler who is still going strong. I set it going again, and leave it overnight. Our singularly successful gambler gets to 57,000 turns before I terminate the program for good (it was a slow computer).
Of course, in this simulation, the odds were 50/50. The casino can’t always win in the end: it’s a fair game, and its expected return (same as for the gamblers) is zero. Even so, each individual gambler eventually goes broke with probability 1; it’s just that the expected time to ruin is infinite, which is why some runs last extraordinarily long. Interestingly enough, I ran the simulation again with the casino having 19/36 odds, and out of 100 gamblers, one still made it through some 3000+ bets before he lost his money. He’s the lucky bastard you notice at the casino.
You may think that the random number generator was not up to the simulation, but it was run in Mathematica, where the random number generator is based on Stephen Wolfram’s rule 30 cellular automaton. It’s pretty good.
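For anyone who wants to try the experiment without Mathematica, here is a minimal sketch of the simulation described above in Python (my own reconstruction, not the original code; the seed, turn cap, and function name are illustrative choices):

```python
import random

def gamblers_ruin(n_gamblers, p_win=0.5, max_turns=100_000, seed=42):
    """Each gambler starts with $1 and bets $1 per turn until broke.

    Returns the number of turns each gambler survived (capped at max_turns,
    standing in for 'I terminated the program').
    """
    rng = random.Random(seed)
    survival = []
    for _ in range(n_gamblers):
        bankroll, turns = 1, 0
        while bankroll > 0 and turns < max_turns:
            bankroll += 1 if rng.random() < p_win else -1
            turns += 1
        survival.append(turns)
    return survival

turns = gamblers_ruin(100)
print(max(turns))   # the longest-surviving gambler's run
```

Most gamblers go broke within a handful of turns, while a few run for thousands: exactly the heavy-tailed behavior described in the comment.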

John Brookes
February 15, 2011 5:52 am

In fact, that gives me an idea. Make up a few datasets. Some totally random, and some random but with a bias in one direction. Then give them to the statisticians, and ask them to tell the difference.
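That exercise is easy to set up. A minimal sketch in Python (the drift size and the naive t-test on first differences are my illustrative choices, not a rigorous unit-root test):

```python
import math
import random
import statistics

def make_series(n, drift, seed):
    """Random walk with an optional constant per-step drift (bias)."""
    rng = random.Random(seed)
    x, series = 0.0, []
    for _ in range(n):
        x += drift + rng.gauss(0.0, 1.0)
        series.append(x)
    return series

def drift_t_stat(series):
    """Naive t-statistic for a nonzero mean in the first differences."""
    diffs = [b - a for a, b in zip(series, series[1:])]
    m = statistics.mean(diffs)
    s = statistics.stdev(diffs)
    return m / (s / math.sqrt(len(diffs)))

pure = make_series(1000, drift=0.0, seed=1)      # "totally random"
biased = make_series(1000, drift=0.3, seed=2)    # "random with a bias"
print(drift_t_stat(pure), drift_t_stat(biased))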

eadler
February 15, 2011 7:01 am

Ross McKitrick says:
February 14, 2011 at 11:00 am
Terence Mills also has a very important paper on representation of trends in climate data series in a recent volume of Climatic Change. …..
One of the papers mentioned in McKitrick’s short essay is
http://www.agu.org/journals/ABS/2005/2005GL024476.shtml
Nature’s style: Naturally trendy
Timothy A. Cohn
Harry F. Lins
U.S. Geological Survey, Reston, Virginia, USA
Hydroclimatological time series often exhibit trends. While trend magnitude can be determined with little ambiguity, the corresponding statistical significance, sometimes cited to bolster scientific and political argument, is less certain because significance depends critically on the null hypothesis which in turn reflects subjective notions about what one expects to see. We consider statistical trend tests of hydroclimatological data in the presence of long-term persistence (LTP). Monte Carlo experiments employing FARIMA models indicate that trend tests which fail to consider LTP greatly overstate the statistical significance of observed trends when LTP is present. A new test is presented that avoids this problem. From a practical standpoint, however, it may be preferable to acknowledge that the concept of statistical significance is meaningless when discussing poorly understood systems.

I think this abstract makes the case that the detection of a trend is not proof that the trend is real and ongoing. This is true. Describing the behavior of a physical system with a statistical model that is random in nature is an admission of ignorance, accepted because there is nothing better. Quantum theory is the only example of a real physical theory in which a random process is controlling, but this applies only to the microscopic world, not to climate and weather.
The fact that trends resembling climate behavior can be artificially created from random numbers and statistical functions involving persistence and memory raises the question of what is creating this persistence.
In the area of climate and weather, there are models based on physics, and observations of reproducible weather phenomena, that provide a better level of prediction than the various formulae for random statistical data generation. Such models are used in weather and climate prediction. The underlying physics of radiation and weather provides the basis for the predictions of global warming and climate change that result from AGW, not simply the existence of a trend in the data.
McKitrick’s little essay seems to miss this point, and could be classified as an argument from ignorance.

KR
February 15, 2011 7:27 am

Bob Tisdale – Minimum and maximum temperatures are related by variance around the temperature mean, and hence are not independent. Seasonal length (biological) is directly correlated to mean yearly temperature too, and hence not independent on either land or water. Hence the initial assumption of 8 independent values (which I suspect is required to show such a large and long-term random walk variation) is rather challenging to justify, in my opinion.
The work I’ve seen (including Hasselmann 1976, http://onlinelibrary.wiley.com/doi/10.1111/j.2153-3490.1976.tb00696.x/abstract) indicates that there is a strong break between random variation of the climate (short term, up to 5-10 years) and the long-term behavior of statistical means such as surface temperature and average specific humidity. This isn’t terribly surprising, as weather is an initial value problem (behaving as a chaotic attractor), while climate is a boundary value problem (energy equilibrium over long time scales), changing the center points of the weather chaotic attractor.

As to the new paper having Bye as an author – mea culpa, I overlooked that. I will hold off on any further comments until I have a chance to see a publicly available copy of his paper updating the earlier results. I’ll be very interested in seeing how he justifies the claim of independence of his variables.

art johnson
February 15, 2011 7:52 am

KD writes “Out of curiosity, how do you characterize the behavior of the alarmist scientists? As a group they are advocating a position that would require unprecedented controls by governments around the world to have any chance of lowering CO2 emissions. They advocate this based on flawed science.
What is their motivation?”
Their motivation is economic, generally speaking. They want the grants, they want the prestige, they want the career advancement, they want the awards. This is easily demonstrated, while trying to ascribe political motives to the scientists such as wanting the “overthrow of capitalism” just draws derisive laughter from the alarmist camp. And rightfully so in my opinion.
And if you think about it, selling out the science for the reasons I gave above is far more heinous than acting in accordance with some supposed political conspiracy.
My main point is why leave ourselves open in this way? Argue the science, that’s where we’re on firm ground and that’s what the discussion is about. The rest in my opinion only of course, is paranoid speculation, and utterly baseless.

izen
February 15, 2011 8:03 am

John Whitman says:
February 15, 2011 at 1:35 am
“I think the logic is fundamental in support of the existence of a random component(s) to natural processes on this planet. First logical step is to develop and verify some tests for the presence of a random component in the behavior of nature.”
The underlying quantum processes of chemistry, radioactivity and energy transfer are random. But at macroscopic scales those processes are ergodic; they reduce to NON-random basic relationships like the gas laws, chemical reaction dynamics or thermal diffusion rates. They are deterministic, NOT random.
You may be confusing the deterministic but unpredictable chaotic behavior of complex non-linear processes with random behavior. They are not the same. Chaotic behavior generally occurs within an envelope defined by the thermodynamics of the system.
A random process is by definition unbounded and can vary outside of any specific limits.

Richard A.
February 15, 2011 8:19 am

“Of course everyone’s entitled to their point of view. My point of view is that I really dislike this sort of comment. The notion that alarmist scientists are somehow all conspiring to destroy capitalism is just hogwash. More importantly, it feeds into certain stereotypes that do the skeptics no good. I wish people would stick to the science, or lack thereof.” – art johnson
To a certain degree, I agree, art. But then I do have to wonder why all the ‘solutions’ proposed for this alleged emergency invariably involve the government regulating to the hilt or outright seizing massive portions of the economy and controlling them. I’d be more inclined to agree with you about letting the science simply be the science if these people weren’t constantly advocating measures that brought the USSR to its glorious and prominent position as a current world economic power.
More to the point, whatever AGW alarmists know or do not know about the climate, and even granting they’re right, their economic knowledge is one step below a gerbil’s. And they keep venting on both fronts and should be answered on both. In a larger context this pattern has been going on for millennia, literally. There has always been some one or group proclaiming the world was just about to end and that the only way to avoid it was to give them a boatload of money and power. Funnily enough, the world is still here, and so are the Malthusians milking the dupes.

art johnson
February 15, 2011 9:03 am

No argument Richard. These guys are leaving their labs and classrooms these days much too often to advocate policy. And yes, of course, the inevitable result of accepting first, that AGW is real and potentially cataclysmic, and second that we can do something about it, is more and more stifling regulations and a bigger role for government in general. These things are certainly true…
But I do believe that trying to ascribe political motives to the scientists, as opposed to some politicians, is counter-productive. Even if it’s true…which I don’t accept… it can’t be proven. The science however, is another matter. All I’m saying is that’s where we should be concentrating. Otherwise, we’re just talking to ourselves.
Again, obviously just my opinion.

Dave Andrews
February 15, 2011 2:09 pm

art johnson,
You are correct of course, simple left/right classifications underestimate the complexity of individuals and their responses to AGW. Many of us are well educated and able to judge much of the science for ourselves.
You are also right about scientists feeling the need to move into advocacy. See, for example, biologist Jeff Harvey’s personal page at the Netherlands Institute for Ecology.
Harvey is a prominent poster at that ‘temperate’ blog Deltoid.
http://www.nioo.knaw.nl/users/jharvey

sky
February 15, 2011 4:41 pm

izen says:
February 15, 2011 at 8:03 am
I agree with most of your points. But in claiming that “A random process is by definition unbounded and can vary outside of any specific limits,” you may be simply unaware of random processes other than the random “walk” of stochastically independent increments. A clear counterexample to your claim is a process that abruptly switches between a pair of fixed values, say +1 and -1, at times governed by the Poisson distribution. It has an analytically known acf and corresponding power density spectrum. More applicable to geophysical problems is any band-limited quasi-gaussian process, whose acf depends on the spectral components physically manifested by the process. That basic stochastic structure has far more varieties than Heinz.
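sky’s counterexample — a process that switches between +1 and −1 at Poisson-distributed times — is easy to sketch in discrete time (my illustration, not sky’s; the per-step switching probability is an arbitrary parameter, and geometric holding times are the discrete-time analogue of exponential/Poisson waiting times):

```python
import random

def telegraph(n_steps, switch_prob, seed=0):
    """Two-state (+1/-1) process that flips with a fixed per-step probability.

    The holding time in each state is geometric, the discrete-time
    analogue of the exponential switching times in the example.
    """
    rng = random.Random(seed)
    state, out = 1, []
    for _ in range(n_steps):
        if rng.random() < switch_prob:
            state = -state
        out.append(state)
    return out

sig = telegraph(10_000, 0.05)
# The process is genuinely random yet strictly bounded:
# it never leaves the set {-1, +1}.
```

This is the simplest counterexample to "random implies unbounded": randomness lives in the switching times, not in the amplitude.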

John Whitman
February 15, 2011 5:42 pm

izen says:
February 15, 2011 at 8:03 am

John Whitman says:
February 15, 2011 at 1:35 am
“DeWitt Payne,
I think the logic is fundamental in support of the existence of a random component(s) to natural processes on this planet. First logical step is to develop and verify some tests for the presence of a random component in the behavior of nature. Next do the tests on some real data (for example GST time series data). Evaluate and cross reference various test results. If they show a random component is involved then, until the tests can be shown to be in error, the treatment of data must include analytical processes that account for random component behavior; analytical processes that do not account for a random component behavior cannot be logically justified.
DeWitt, do you find errors in the statistical tests or their application?
John”

The underlying quantum processes of chemistry, radioactivity and energy transfer are random. But at macroscopic scales those process are ergodic, they reduce to NON-random basic relationships like the gas laws, chemical reaction dynamics or thermal diffusion rates. They are deterministic NOT random.
You may be confusing the deterministic but unpredictable chaotic behavior of complex non-linear processes with random behavior. They are not the same. Chaotic behavior generally occurs within an envelope defined by the thermodynamics of the system.
A random process is by definition unbounded and can vary outside of any specific limits.
– – – – – – – – – – –
izen,
Thank you for your comment.
Please realize this is not a discussion of any ‘a priori’ conception of what the behavior of any earth climate system parameters should be. This is about taking the parameters measured over time (time series data) and testing in a statistical manner for the kind of behavior underlying the data. The behavior shown is a determining factor in selecting the appropriate methods for analyzing the data. To treat data containing a random component with analytical tools not appropriate for such data is to bias the analysis or to commit outright error.
There is mounting evidence of a random (walk) component behavior of the GST time series. I think that does tell us that any previous analyses of the data which did not include recognition of that underlying behavior (a random component) of the data are subject to being falsified.
I think Ross McKitrick’s words in his earlier comment strike a reasonable note.

Ross McKitrick says,
February 14, 2011 at 11:00 am
“”””They [unit roots] also imply some fundamental differences about the underlying phenomena, namely that means, variances and covariances vary over time, so any talk about (for example) detecting climate change is very problematic if the underlying system is nonstationary. If the mean is variable over time, observing a change in the mean is no longer evidence that the system itself has changed.””””

The participation of professional statisticians in collaboration on climate science was sorely needed; it is now happening at an accelerating pace . . . that is wonderful news. Those professional statisticians are very welcome, especially by physical scientists with a healthy (normal) skeptical mind.
John

sky
February 15, 2011 6:11 pm

John Whitman says:
February 15, 2011 at 5:42 pm
“There is mounting evidence of a random (walk) component behavior of the GST time series.”
The random walk model necessarily implies that the power spectrum of first differences is that of white noise, i.e., totally flat. No GST time series exhibits anything near such a simple spectrum.
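sky’s diagnostic is easy to illustrate numerically: differencing a pure random walk recovers white noise (near-zero autocorrelation), whereas differencing a mean-reverting series does not. A minimal sketch (my illustration; the AR(1) coefficient 0.5 is an arbitrary choice):

```python
import random
import statistics

def lag1_autocorr(xs):
    """Lag-1 autocorrelation of a series."""
    m = statistics.mean(xs)
    num = sum((xs[i] - m) * (xs[i + 1] - m) for i in range(len(xs) - 1))
    den = sum((x - m) ** 2 for x in xs)
    return num / den

rng = random.Random(0)

# A pure random walk: cumulative sum of white noise.
walk, x = [], 0.0
for _ in range(5000):
    x += rng.gauss(0.0, 1.0)
    walk.append(x)

# A mean-reverting AR(1) series, nothing like a random walk.
ar1, y = [], 0.0
for _ in range(5000):
    y = 0.5 * y + rng.gauss(0.0, 1.0)
    ar1.append(y)

d_walk = [b - a for a, b in zip(walk, walk[1:])]
d_ar1 = [b - a for a, b in zip(ar1, ar1[1:])]

# Differencing the walk leaves white noise (autocorrelation ~ 0);
# differencing the AR(1) does not (theoretical value -0.25 here).
print(lag1_autocorr(d_walk), lag1_autocorr(d_ar1))
```

A flat first-difference spectrum is equivalent to zero autocorrelation at all lags; lag 1 is shown as the simplest check.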

David Socrates
February 16, 2011 12:00 am

This blog trail has been great fun and intellectually stimulating but utterly inconclusive. In particular, I wonder what the point of it is if our overriding goal is to convince others (neutral or warmist) that the temperature series exhibits no evidence of significantly measurable anthropogenic CO2-induced warming.
There is far too much wriggle room in these types of abstruse debates to convince anybody, even reasonable neutral people (who constitute the overwhelming majority, don’t forget), let alone the irrational warmists who shout the loudest.

John Brookes
February 16, 2011 1:48 am

Global temperatures can’t follow a random walk, because (as pointed out above) a random walk is unbounded, and the laws of physics won’t allow too great a departure from equilibrium for too long a period of time (I’m pretty sure the stock market has a similar mechanism built in).
So is the paper saying that over a time scale of ~30 years it looks like a random walk? If so, how do you get from one 30 year block to the next one? You can’t just start the next 30 years of temperatures from where the last lot finished off, or you just have one long random walk. I don’t get it. If anyone can explain it so that it makes sense, I would be most grateful!

Smoking Frog
February 16, 2011 3:54 am

Berényi Péter: “How is ‘random walk length’ defined in the paper?”
Good question! I was wondering the same thing myself. Just because the trend of a random walk changes sign doesn’t mean that you’re at the end of the random walk. Is that what you have in mind?

Smoking Frog
February 16, 2011 3:55 am

Oops – I think I forgot to check “Notify me of follow-up…” So I’m doing it now.

Bob Tisdale
February 16, 2011 3:34 pm

KR: Sorry it took so long to get back to you.
You wrote: “Bob Tisdale – Minimum and maximum temperatures are related by variance around the temperature mean, and hence not independent.”
The maximums and minimums are not dependent on the mean. And this is relatively easy to illustrate. Let’s look at the components of HADCRUT3GL. It consists of CRUTEM3 Land Surface data…
http://i52.tinypic.com/1qrevm.jpg
…and HADSST2 Sea Surface Temperature:
http://i54.tinypic.com/2hydw08.jpg
And if we look at the annual Maximum, Mean and Minimum for CRUTEM3…
http://i55.tinypic.com/2ujnkpj.jpg
…and for HADSST2…
http://i55.tinypic.com/70ym8h.jpg
…we can see that the maximums and minimums do not necessarily follow the mean.
The differences are easier to see if we subtract the annual Mean from the annual Maximum and Minimum values for CRUTEM3…
http://i54.tinypic.com/34o4m1f.jpg
…and for HADSST2:
http://i51.tinypic.com/1zx2iv7.jpg
Or if you prefer, we can subtract the Annual Mean from the Maximum, and the Annual Minimum from the Mean for CRUTEM3…
http://i54.tinypic.com/2qumbde.jpg
…and for HADSST2:
http://i51.tinypic.com/nzkc1z.jpg
And to put those differences in the land and sea surface temperature datasets in perspective:
http://i52.tinypic.com/rw6iom.jpg

John Whitman
February 16, 2011 4:36 pm

sky says:
February 15, 2011 at 6:11 pm

John Whitman says:
February 15, 2011 at 5:42 pm
“There is mounting evidence of a random (walk) component behavior of the GST time series.”

“””””The random walk model necessarily implies that the power spectrum of first differences is that of white noise, i.e., totally flat. No GST time series exhibits anything near such a simple spectrum.””””””
———-
sky,
Sorry, a little late . . . I am traveling around in Tokyo these days.
I take it that you are implying that there is no unit root in any GST time series. Please confirm that you are indeed saying that. Thanks.
John

sky
February 16, 2011 7:55 pm

What I’m saying is that year-to-year variability of GST or other indices (Nino3.4), when viewed over intervals of more than 30 years, is very far from a random walk in its stochastic structure. The concept of unit-root autoregressive processes is not entirely applicable to time series that are obtained by discrete sampling of a CONTINUOUS signal. There are other means of modeling non-stationarity that are far more appropriate in the geophysical context.

John Whitman
February 17, 2011 8:35 pm

sky says:
February 16, 2011 at 7:55 pm
“””””What I’m saying is that year-to-year variability of GST or other indices (Nino3.4), when viewed over intervals of more than 30 years, is very far from a random walk in its stochastic structure . The concept of unit-root autoregressive processes is not entirely applicable to time series that are obtained by discrete sampling of a CONTINUOUS signal. There are other means of modeling non-stationarity that are far more appropriate in the geophysical context”””””
————-
sky,
Thanks for your reply. Had a late business dinner and multiple ‘parties’ last night in the Ginza, just getting back to responding. : )
So, we agree there is a growing body of studies and reviews of GST that show the presence of unit roots in GST time series datasets. Right? Seems straightforward.
Regarding the number of years in a given GST dataset, tests can be made to determine whether there are enough data points to infer anything statistically; so that is not an argument against evaluating time series data for the presence of a unit root (versus trend stationarity).
You imply that “time series that are obtained by discrete sampling of a CONTINUOUS signal” are not suitable subjects for the formal, broad array of established methodologies of time series analysis. Indeed, those are exactly the type of datasets best treated by that array of statistical models/tests, especially time series datasets containing unit roots.
You imply geophysical data (I assume you mean GST records) have special aspects that exempt them from formal general statistical treatment. I do not think geophysical science is exempt.
I would like to paraphrase the words of commenter ‘HAS’ from the famous VS thread (comment # 1784 April 14, 2010 at 23:06) at Bart Verheggen’s blog.

1. The ‘GISSTEMP/ CRUTEM3 (GST)’ data is real, we can see it (i.e. this series is stochastic, as far as we can see)
2. These kinds of statistical tests are the appropriate ones to use when looking at these kinds of time series
No one is going to want to go further in a situation where inconvenient observations are ignored because they don’t fit people’s beliefs about what the physics should be saying, and statistical techniques are treated with suspicion because they don’t produce the results people want.
To solve this stuff we are going to need to respect both statistics and physics.

John

sky
February 18, 2011 2:00 pm

John Whitman says:
February 17, 2011 at 8:35 pm
“You imply geophysical data (I assume you mean GST records) has special aspects that exempt it from formal general statistical treatment. I do not think geophysical science is exempt.”
John,
I imply nothing of the kind. Unlike stochastic processes that are intrinsically discrete-valued series, discretely sampled continuous signals are subject to arbitrary choices of sampling interval that can introduce aliasing and other artifacts into the time series. That’s why continuous random-phase processes, rather than discrete autoregressions, provide the usual stochastic modeling framework in geophysics, not just climatology.
In that framework, trend is intertwined with the very lowest frequency components of the signal, about which little can be discerned on the basis of short records from the satellite era. If there are quasi-centennial components in the physical signal, they may fool some into imputing a LINEAR SECULAR trend where there is none. Unlike economics, where growth or inflation may produce secular (but not necessarily linear) trends, physical constraints preclude such trends in geophysics.
The discrete-time statistical framework of econometricians is very far from being an exhaustive analytic discipline when it comes to random processes.

John Whitman
February 18, 2011 9:12 pm

sky says:
February 18, 2011 at 2:00 pm
” . . . discretely sampled continuous signals are subject to arbitrary choices of sampling interval that can introduce aliasing and other artifacts in the time series.”

– – – – – – –
sky,
I appreciate your dialog.
The GST time series of widespread use from GISS and HadCRU span the periods of 1881 to present and 1850 to present, respectively. The sampling frequency is fixed and historical. Given that, there is no arbitrary sampling frequency. It is a given. Therefore I find your point confusing. I also note the time period covered by the data is not insignificant.
Now, again, this is actual empirical data from a physical system (our climate). The increasing collaboration and interest of professional statisticians in the data has shown that there have been/are significant statistical misapplications/errors in major climate research papers and in IPCC analysis. So, continued focus on the question of whether that data contain certain statistical properties (such as unit roots) will likely persist. Likewise, verifying correct inference statistically is now highly visible in the scientific community.
So, what model best describes the DGP embedded in the data? That is now an interesting thrust of climate research.
We now have the luxury of a growing set of independent professional statisticians performing audits of past climate research. That is a wonderful development.
John

sky
February 19, 2011 1:33 pm

John Whitman says:
February 18, 2011 at 9:12 pm
John,
I’ll make my last attempt to clarify certain vital points:
1) The “fixed and historical” sampling interval of computing monthly averages is arbitrary; it imposes the somewhat uneven interval of a “month” in constructing the series. But far more important is the potential aliasing of the significantly large even-ordered harmonics of the powerful diurnal cycle into the lowest frequencies of the “climatic” series of comparatively minuscule monthly “anomalies.”
2) The GISS and HadCRU global anomaly series are synthesized over the time intervals that you cite from an inconsistent set of stations, most of which are inconsistently afflicted by UHI effects. This results in a time-dependent bias that may ultimately resemble a logistic curve. It is baldly claimed, however, to be a linear secular trend not only over the entire available series, but over much shorter stretches that constitute effectively a half-cycle of the irregular quasi-centennial oscillations that are a persistent feature of the best proxy records (GISP2) available for the Holocene.
3) Model identification for wide-band stochastic processes is a very tricky problem even without the foregoing data corruptions. A plethora of assumed stochastic structures may prove statistically “consistent” with the data over decades or even a century or two, but diverge materially thereafter. So far I have seen nothing from academic statisticians oriented toward various linear combinations of iid “innovations” processes that offers a model for GST variability that stands up to rigorous testing. Low-order ARIMA models fail repeatedly, whereas high-order analysis schemes (e.g., Burg’s maximum entropy method) point to several quasi-centennial oscillations embedded in a broad spectral continuum. IMO, advances in understanding climate variability will come from real-world-oriented signal and system analyses, rather than from the restrictive iid academic viewpoint of statistical time series.
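The folding mechanism in point (1) can be demonstrated in a few lines of Python (my illustration; the 0.9 cycles-per-step frequency is an arbitrary choice, not the actual diurnal/monthly ratio):

```python
import math

# Sampling once per time step makes the Nyquist frequency 0.5 cycles/step.
# A 0.9-cycles/step signal therefore folds (aliases) to 0.9 - 1.0 = -0.1:
# at the sample times it is indistinguishable from a slow 0.1-cycle wave.
true_freq = 0.9
alias_freq = true_freq - 1.0

fast_sampled = [math.sin(2 * math.pi * true_freq * t) for t in range(200)]
slow_wave = [math.sin(2 * math.pi * alias_freq * t) for t in range(200)]

# The two sample sequences coincide: the fast cycle has aliased
# onto an apparent low-frequency signal.
max_gap = max(abs(a - b) for a, b in zip(fast_sampled, slow_wave))
print(max_gap)
```

This is the sense in which energy from a fast cycle can masquerade as low-frequency "climatic" variability in a coarsely averaged series.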
Let’s both enjoy the remaining weekend.

VS
February 21, 2011 12:58 am

I haven’t had the time to read this paper yet (I will), but I presume that the results are another side of the same coin I’ve addressed, now almost a year ago, on Bart’s blog.
[snip . . reading the paper before commenting is considered to be good manners]

VS
February 21, 2011 2:35 am

Dear moderator,
I was replying to the comments which were related to the discussion at Bart’s blog (the so called ‘unit root thread’). Given that it’s my results and findings being discussed here, I didn’t think it bad form to actually clarify some misconceptions.
I haven’t commented on the paper itself, precisely because I haven’t read it. I also have to add that I have no access to it through my institution (following the link above), so I wonder how many of the posters commenting here actually read it (re: consistent application of the ‘rule’ you cite).
***
Having said that, this is your blog, and these are your rules, so I’ll respect that. In any case, I hope this was just a misunderstanding, so I’ll post my comment again, below.
If you snip it again, I know enough.
All the best, VS

VS
February 21, 2011 2:52 am

I haven’t had the time to read this paper yet (I will), but I presume that the results are another side of the same coin I’ve addressed, now almost a year ago, on Bart’s blog.
In the near future, I plan to write up the whole story in a short methodological piece, and then I guess we’ll have another debate on the topic, and perhaps my argument will then seem less elusive (ts ts, Bart ;).
Having said that, it does seem important to address one particular question; that of stationarity and cyclicality of global mean temperature, and how my results relate to the method via which one should proceed to model it for the purpose of forecasting and inference.
I have never claimed that the hypercomplex process governing the global mean temperature itself is non-stationary. I believe quite the opposite: namely that it is cyclical and stationary.
However, care has to be taken to distinguish the process governing the realization of our observation set (i.e. our sample draw), and the underlying hypercomplex process from which it stems. I simply pointed out that over the period of 120 something years, annual global mean temperatures should be modeled as a non-stationary process.
This is not just ‘my opinion’, this is the result of formal inference. Anybody interested in the argument (and the diagnostics, and the references, and the Matlab code for the simulations which disqualify the PP unit root test in this instance, etc etc… I think I alone wrote over 50,000 words there) is kindly referred to Bart’s blog.
What follows trivially from this is that any trend estimation on this particular data (GISS) which ignores these analytical facts, and implicitly takes ‘trend stationarity’ as the starting point, is inherently inferior to one taking non-stationarity as the starting point. Hence, the conclusions (e.g. a ‘kink’ in the trend somewhere in the 70s, ‘accelerated warming’, etc.) are also inferior to those stemming from a non-stationarity-based approach to trend estimation.
That’s it.
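The practical consequence of assuming trend stationarity when a unit root is present can be shown with a small Monte Carlo (my own sketch, not VS’s Matlab code; the series length and trial count are illustrative): fit an OLS trend to driftless random walks and count how often the naive t-test calls the trend "significant".

```python
import math
import random

def naive_trend_tstat(series):
    """OLS slope t-statistic, computed as if residuals were iid."""
    n = len(series)
    xs = range(n)
    mx = sum(xs) / n
    my = sum(series) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, series)) / sxx
    resid = [y - my - slope * (x - mx) for x, y in zip(xs, series)]
    s2 = sum(r * r for r in resid) / (n - 2)
    return slope / math.sqrt(s2 / sxx)

rng = random.Random(0)
trials, hits = 200, 0
for _ in range(trials):
    walk, x = [], 0.0
    for _ in range(130):       # roughly the length of the annual GST record
        x += rng.gauss(0.0, 1.0)
        walk.append(x)
    if abs(naive_trend_tstat(walk)) > 2.0:
        hits += 1

# Fraction of driftless random walks declared to have a "significant"
# trend -- typically far above the nominal 5% level.
print(hits / trials)
```

This is the classic spurious-trend result: under a unit root the usual trend t-test rejects far more often than its nominal size, which is exactly why the choice of starting point matters for inference.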
It seems to me that some (some!) physicists nowadays have forgotten the basic tenant of their discipline, namely that models, or if you will, conjectures which connect together facts, are per definition *false*(!). From this it follows that ‘true model’ is an oxymoron. Taking a more positive view, I believe that Niels Bohr worded the idea behind modeling best, when he said:
“The opposite of a fact is falsehood, but the opposite of one profound truth may very well be another profound truth.”
Religiously believing in one’s model automatically blinds and creates (non-scientific) attachments to conjectures. This is unfortunate, as it breeds vicious intolerance towards dissenting views. You know, the type of intolerance that had Galileo locked up. But I guess anybody frequenting the climate blogosphere, from either ‘side’, is more than familiar with this.
All the best,
VS

Brian H
February 21, 2011 11:47 pm

VS;
Any tenant of the Warming Church will not be comfortable with the tenets of science regarding testing of conjectures.

VS
February 22, 2011 4:58 am

Ah, what an eloquent save of my, hasty-rewrite-induced, typo 😉 … (the remaining credits go to the, equally hasty, moderator 🙂