Pielke Sr. on the 30-year random walk in the surface temperature record

First, some background for readers who may not be familiar with the term “random walk”.

See: http://en.wikipedia.org/wiki/Random_walk

From Wikipedia: Example of eight random walks in one dimension starting at 0. The plot shows the current position on the line (vertical axis) versus the time steps (horizontal axis).
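For readers who want to experiment, here is a minimal Python sketch (illustrative only, not tied to any of the papers discussed below) that generates eight one-dimensional random walks like those in the Wikipedia figure:

```python
import numpy as np

rng = np.random.default_rng(0)
n_walks, n_steps = 8, 1000

# Each step is +1 or -1 with equal probability; the walk is the running sum.
steps = rng.choice([-1, 1], size=(n_walks, n_steps))
walks = steps.cumsum(axis=1)

for i, walk in enumerate(walks):
    print(f"walk {i}: final position {walk[-1]:4d}, max excursion {np.abs(walk).max()}")
```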
============================================================

New Paper “Random Walk Lengths Of About 30 Years In Global Climate” By Bye Et Al 2011

There is a new paper [h/t to Ryan Maue and Anthony Watts] titled

Bye, J., K. Fraedrich, E. Kirk, S. Schubert, and X. Zhu (2011), Random walk lengths of about 30 years in global climate, Geophys. Res. Lett., doi:10.1029/2010GL046333, in press. (accepted 7 February 2011)

The abstract reads [highlight added]

“We have applied the relation for the mean of the expected values of the maximum excursion in a bounded random walk to estimate the random walk length from time series of eight independent global mean quantities (temperature maximum, summer lag, temperature minimum and winter lag over the land and in the ocean) derived from the NCEP twentieth century reanalysis (V2) (1871-2008) and the ECHAM5 IPCC AR4 twentieth century run for 1860-2100, and also the Millenium 3100 yr control run mil01, which was segmented into records of specified period. The results for NCEP, ECHAM5 and mil01 (mean of thirty 100 yr segments) are very similar and indicate a random walk length on land of 24 yr and over the ocean of 20 yr. Using three 1000 yr segments from mil01, the random walk lengths increased to 37 yr on land and 33 yr over the ocean. This result indicates that the shorter records may not totally capture the random variability of climate relevant on the time scale of civilizations, for which the random walk length is likely to be about 30 years. For this random walk length, the observed standard deviations of maximum temperature and minimum temperature yield respective expected maximum excursions on land of 1.4 and 0.5 C and over the ocean of 2.3 and 0.7 C, which are substantial fractions of the global warming signal.”
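The paper’s estimator is not reproduced here, but the quantity it works with, the expected maximum excursion of a random walk, is easy to explore by Monte Carlo. A hedged sketch (the walk lengths 20, 24, 30 and 37 below are simply the values quoted in the abstract; the ±1 step size is an arbitrary assumption):

```python
import numpy as np

rng = np.random.default_rng(1)

def expected_max_excursion(n_steps, n_trials=5000):
    """Monte Carlo estimate of E[max_k |S_k|] for a simple +/-1 random walk."""
    steps = rng.choice([-1.0, 1.0], size=(n_trials, n_steps))
    walks = steps.cumsum(axis=1)
    return np.abs(walks).max(axis=1).mean()

for n in (20, 24, 30, 37):
    est = expected_max_excursion(n)
    print(f"walk length {n:2d}: expected max excursion ~ {est:.2f} "
          f"({est / np.sqrt(n):.2f} * sqrt(n))")
```

The ratio to sqrt(n) approaches a constant, the familiar square-root-of-time scaling of random walk excursions.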

The text starts with

“The annual cycle is the largest climate signal, however its variability has often been overlooked as a climate diagnostic, even though global climate has received intensive study in recent times, e.g. IPCC (2007), with a primary aim of accurate prediction under global warming.”

We agree with the authors of the paper on this statement. This is one of the reasons we completed the paper

Herman, B.M., M.A. Brunke, R.A. Pielke Sr., J.R. Christy, and R.T. McNider, 2010: Global and hemispheric lower tropospheric temperature trends. Remote Sensing, 2, 2561-2570, doi:10.3390/rs2112561.

where our abstract reads

“Previous analyses of the Earth’s annual cycle and its trends have utilized surface temperature data sets. Here we introduce a new analysis of the global and hemispheric annual cycle using a satellite remote sensing derived data set during the period 1979–2009, as determined from the lower tropospheric (LT) channel of the MSU satellite. While the surface annual cycle is tied directly to the heating and cooling of the land areas, the tropospheric annual cycle involves additionally the gain or loss of heat between the surface and atmosphere. The peak in the global tropospheric temperature in the 30 year period occurs on 10 July and the minimum on 9 February in response to the larger land mass in the Northern Hemisphere. The actual dates of the hemispheric maxima and minima are a complex function of many variables which can change from year to year thereby altering these dates.

Here we examine the time of occurrence of the global and hemispheric maxima and minima lower tropospheric temperatures, the values of the annual maxima and minima, and the slopes and significance of the changes in these metrics. The statistically significant trends are all relatively small. The values of the global annual maximum and minimum showed a small, but significant trend. Northern and Southern Hemisphere maxima and minima show a slight trend toward occurring later in the year. Most recent analyses of trends in the global annual cycle using observed surface data have indicated a trend toward earlier maxima and minima.”
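The timing analysis the abstract describes reduces to a simple operation: find the day of year of each year’s maximum and minimum, then look for a trend in those dates. A minimal sketch on synthetic daily data (the cycle amplitude, noise level and peak date are made-up stand-ins, not MSU values):

```python
import numpy as np

rng = np.random.default_rng(4)
years = np.arange(1979, 2010)  # 31 years, as in the study period
doy = np.arange(365)

# Synthetic daily "global LT temperature": annual cycle peaking near day 191
# (10 July) plus weather noise. Purely illustrative numbers.
daily = (2.0 * np.cos(2 * np.pi * (doy - 191) / 365)
         + rng.normal(0, 0.3, (years.size, 365)))

day_of_max = daily.argmax(axis=1)
print("mean day of maximum:", day_of_max.mean())
print("trend in date of maximum (days/yr):", np.polyfit(years, day_of_max, 1)[0])
```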

The 2011 Bye et al GRL paper conclusion reads

“In 1935, the International Meteorological Organisation confirmed that ‘climate is the average weather’ and adopted the years 1901-1930 as the ‘climate normal period’. Subsequently a period of thirty years has been retained as the classical period of averaging (IPCC 2007). Our analysis suggests that this administrative decision was an inspired guess. Random walks of length about 30 years within natural variability are an ‘inconvenient truth’ which must be taken into account in the global warming debate. This is particularly true when the causes of trends in the temperature record are under consideration.”

This paper is yet another significant contribution raising further questions about the use of multi-decadal linear surface temperature trends to diagnose climate change.



Bart Verheggen
February 15, 2011 2:15 am

Since VS’ argument about a random walk came up, may I remind you that he also wrote:
“I agree with you that temperatures are not ‘in essence’ a random walk, just like many (if not all) economic variables observed as random walks are in fact not random walks.”
and
“I’m not claiming that temperatures are a random walk.”
I summarized some of his arguments and my position here:
http://ourchangingclimate.wordpress.com/2010/03/18/the-relevance-of-rooting-for-a-unit-root/
and an analogy regarding a random walk and the energy balance here:
http://ourchangingclimate.wordpress.com/2010/04/01/a-rooty-solution-to-my-weight-gain-problem/
The original thread where most of the discussion took place is:
http://ourchangingclimate.wordpress.com/2010/03/08/is-the-increase-in-global-average-temperature-just-a-random-walk/
where in my last comment I provided pointers to some of VS’ main arguments, as the whole discussion is a little hard to navigate.

Geoff Sherrington
February 15, 2011 3:13 am

John Whitman,
For review comments, try http://hcl.harvard.edu/collections/ipcc/index.html
While I’m online: our math dept (business, not academia) had a guy who could not really converse unless he had a sheet of paper on which he drew a triple integral symbol. Then he would say that the input data had to be real and verified; then that one must study the shape of the distribution before getting fancy. These simple rules seem to get broken. A brave person indeed would do analysis on non-satellite global temperature data sets, which are known to be adjusted and irreproducible. It’s the rough equivalent of loaded dice.
A further (though limited) comment. One can find really hot years at 28 year intervals (approx) starting 1914. They appear all over the world, but not always at all weather stations. They seem incompatible with GHG effects.

Bob Tisdale
February 15, 2011 3:19 am

KR says: “So – you are simply ignoring causal relationships between minimum and maximum temperatures? And the relationship of the beginning and ending of seasons as botanical zones move towards the poles? Ignoring cause and effect? If you had just stated that, well, that would have also saved me some time.”
I’m not ignoring anything. I asked you a question. (Have you plotted and analysed them for the full term of GISS LOTI or HADCRUT or the NCDC’s merged land plus sea surface temperature data or are you simply expressing a belief?) It’s a simple question that could be answered with a yes or a no. The apparent answer is no.
And with your recent answer, you’re assuming that “the beginning and ending of seasons as botanical zones move towards the poles”, etc., can be used as a proxy for the data subsets being discussed in the paper. Half of them are ocean subsets. And the timing of the botanical responses may not coincide with the land surface temperature subsets being used in the paper.
You continued, “Unless these papers have disproven Gordon and Bye, their argument about random walks are not valid.”
It appears that the John A.T. Bye from the 1993 Gordon and Bye paper you linked presented his new findings in the John Bye et al 2011 preprint “Random walk lengths of about 30 years in global climate”. The John Bye from the 2011 paper is from the School of Earth Sciences, The University of Melbourne, and his CV includes the 1993 Gordon and Bye paper.

John Whitman
February 15, 2011 4:31 am

Bart Verheggen says:
February 15, 2011 at 2:15 am
– – – – – –
Bart,
Please desist. I do not think it is at all appropriate for you to speak to the overall position of VS.
Let him say his own words.
John

John Brookes
February 15, 2011 5:39 am

So global temperatures look like they are going up, but it is very hard to tell if it is just a random walk, or a random walk with a bias in one direction.
My question is, how much longer will global temperatures need to keep going up before the statistical tests will be able to discount sheer randomness?
Once I did a random walk simulation on a computer. I envisaged a group of gamblers, each with a dollar, at a casino. They would bet the dollar at 50/50 odds. After the first bet, about half the gamblers would be broke, and the other half would have 2 dollars. The ones with money would then bet another dollar, and so on. I was interested in how long people could bet before all their money was gone. So I do a run with 20 gamblers, and get one of them betting for 93 turns before he runs out of money. Run it again, with 100 gamblers. It doesn’t stop. I interrupt it to find that everyone has lost their money, except one gambler who is still going strong. I set it going again, and leave it overnight. Our singularly successful gambler gets to 57,000 turns before I terminate the program for good (it was a slow computer). Of course, in this simulation, the odds were 50/50. The casino can’t always win in the end: it’s a fair game, and its expected return (same as for the gamblers) is zero. At least one gambler must keep going forever. Interestingly enough, I ran the simulation again with the casino having 19/36 odds, and out of 100 gamblers, one still made it through some 3000+ bets before he lost his money. He’s the lucky bastard you notice at the casino.
You may think that the random number generator was not up to the simulation, but it was run in Mathematica, where the random number generator is based on Stephen Wolfram’s rule 30 cellular automaton. It’s pretty good.
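The experiment John Brookes describes is easy to reproduce. A hedged Python re-implementation (Mathematica in the original; the turn cap is an added safeguard, since a fair gambler’s expected ruin time is infinite):

```python
import random

def gambler_turns(p_win=0.5, bankroll=1, max_turns=1_000_000):
    """One gambler betting 1 unit per turn at probability p_win of winning.
    Returns turns survived (capped, since a fair game can run a very long time)."""
    turns = 0
    while bankroll > 0 and turns < max_turns:
        bankroll += 1 if random.random() < p_win else -1
        turns += 1
    return turns

random.seed(42)
fair = [gambler_turns() for _ in range(100)]
print("longest fair-odds survivor of 100:", max(fair), "turns")

# The comment's second run: the casino wins 19 outcomes out of 36.
biased = [gambler_turns(p_win=17 / 36) for _ in range(100)]
print("longest survivor against the house edge:", max(biased), "turns")
```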

John Brookes
February 15, 2011 5:52 am

In fact, that gives me an idea. Make up a few datasets: some totally random, and some random but with a bias in one direction. Then give them to the statisticians, and ask them to tell the difference.
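That blind test is straightforward to set up. A minimal sketch (drift size and series length are arbitrary choices): for a random walk, a directional bias shows up as a nonzero mean in the first differences, which a one-sample t-test can detect.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n = 360

def make_series(drift=0.0):
    """Random walk with an optional constant drift per step."""
    return np.cumsum(drift + rng.normal(size=n))

for label, drift in [("pure random walk", 0.0), ("walk with drift", 0.2)]:
    steps = np.diff(make_series(drift))       # drift = mean of the increments
    t_stat, p = stats.ttest_1samp(steps, 0.0)
    print(f"{label}: p-value for nonzero mean step = {p:.3f}")
```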

eadler
February 15, 2011 7:01 am

Ross McKitrick says:
February 14, 2011 at 11:00 am
Terence Mills also has a very important paper on representation of trends in climate data series in a recent volume of Climatic Change. …..
One of the papers mentioned in McKitrick’s short essay is
http://www.agu.org/journals/ABS/2005/2005GL024476.shtml
Nature’s style: Naturally trendy
Timothy A. Cohn
Harry F. Lins
U.S. Geological Survey, Reston, Virginia, USA
Hydroclimatological time series often exhibit trends. While trend magnitude can be determined with little ambiguity, the corresponding statistical significance, sometimes cited to bolster scientific and political argument, is less certain because significance depends critically on the null hypothesis which in turn reflects subjective notions about what one expects to see. We consider statistical trend tests of hydroclimatological data in the presence of long-term persistence (LTP). Monte Carlo experiments employing FARIMA models indicate that trend tests which fail to consider LTP greatly overstate the statistical significance of observed trends when LTP is present. A new test is presented that avoids this problem. From a practical standpoint, however, it may be preferable to acknowledge that the concept of statistical significance is meaningless when discussing poorly understood systems.

I think this abstract makes the case that the detection of trends is not proof that the trend is real and ongoing. This is true. Describing the behavior of a physical system with a statistical model that is random in nature is an admission of ignorance, because there is nothing better. Quantum theory is the only example of a real physical theory where a random process is controlling, but this applies only to the microscopic world, not to climate and weather.
The fact that trends resembling climate behavior can be artificially created with some random numbers and statistical functions that involve persistence and memory raises the question of what is creating this persistence.
In the area of climate and weather, there are models based on physics, and observations of reproducible weather phenomena, that provide a better level of prediction than the various formulae for random statistical data generation. Such models are used in weather and climate prediction. The underlying physics of radiation and weather provides the basis for the predictions of global warming and climate change that result from AGW, not simply the existence of a trend in the data.
McKitrick’s little essay seems to miss this point, and could be classified as an argument from ignorance.
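Cohn and Lins’s point about long-term persistence can be illustrated numerically. The sketch below is not their test; it is a bare-bones illustration with assumed parameters (d = 0.4, n = 150). It simulates trendless FARIMA(0,d,0) noise by fractional integration and counts how often a naive OLS trend test declares a “significant” trend:

```python
import numpy as np
from scipy.special import gammaln

rng = np.random.default_rng(3)

def farima_0d0(n, d, burn=500):
    """Trendless FARIMA(0,d,0): fractionally integrated white noise,
    x_t = sum_k psi_k * e_{t-k}, with psi_k = Gamma(k+d)/(Gamma(d)Gamma(k+1))."""
    k = np.arange(n + burn)
    psi = np.exp(gammaln(k + d) - gammaln(d) - gammaln(k + 1))
    e = rng.normal(size=n + burn)
    return np.convolve(e, psi)[burn:n + burn]

n, d, trials = 150, 0.4, 500
t_axis = np.arange(n)
false_hits = 0
for _ in range(trials):
    x = farima_0d0(n, d)
    slope, intercept = np.polyfit(t_axis, x, 1)
    resid = x - (slope * t_axis + intercept)
    # OLS standard error of the slope, (wrongly) assuming white residuals
    se = np.sqrt(resid.var(ddof=2) / ((t_axis - t_axis.mean()) ** 2).sum())
    false_hits += abs(slope / se) > 1.96
print(f"nominal 5% trend test rejected in {100 * false_hits / trials:.0f}% of trendless runs")
```

With long-term persistence present, the rejection rate runs far above the nominal 5%, which is exactly the overstatement of significance the abstract describes.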

KR
February 15, 2011 7:27 am

Bob Tisdale – Minimum and maximum temperatures are related by variance around the temperature mean, and hence not independent. Seasonal length (biological) is directly correlated to the mean yearly temperature too, and hence not independent on either land or water. Hence the initial assumption of 8 independent values (which I suspect is required to show such a large and long-term random walk variation) is rather challenging to justify, in my opinion.
The work I’ve seen (including Hasselmann 1976, http://onlinelibrary.wiley.com/doi/10.1111/j.2153-3490.1976.tb00696.x/abstract) indicates that there is a strong break between random variation of the climate (short term, up to 5-10 years) and the long-term behavior of statistical means such as surface temperature and average specific humidity. This isn’t terribly surprising, as weather is an initial value problem (behaving as a chaotic attractor), while climate is a boundary value problem (energy equilibrium over long time scales), changing the center points of the weather chaotic attractor.

As to the new paper having Bye as an author – mea culpa, I overlooked that. I will hold off on any further comments until I have a chance to see a publicly available copy of his paper updating the earlier results. I’ll be very interested in seeing how he justifies the claim of independence of his variables.

art johnson
February 15, 2011 7:52 am

KD writes “Out of curiosity, how do you characterize the behavior of the alarmist scientists? As a group they are advocating a position that would require unprecedented controls by governments around the world to have any chance of lowering CO2 emissions. They advocate this based on flawed science.
What is their motivation?”
Their motivation is economic, generally speaking. They want the grants, they want the prestige, they want the career advancement, they want the awards. This is easily demonstrated, while trying to ascribe political motives to the scientists such as wanting the “overthrow of capitalism” just draws derisive laughter from the alarmist camp. And rightfully so in my opinion.
And if you think about it, selling out the science for the reasons I gave above is far more heinous than acting in accordance with some supposed political conspiracy.
My main point is: why leave ourselves open in this way? Argue the science; that’s where we’re on firm ground and that’s what the discussion is about. The rest, in my opinion only of course, is paranoid speculation, and utterly baseless.

izen
February 15, 2011 8:03 am

John Whitman says:
February 15, 2011 at 1:35 am
“I think the logic is fundamental in support of the existence of a random component(s) to natural processes on this planet. First logical step is to develop and verify some tests for the presence of a random component in the behavior of nature.”
The underlying quantum processes of chemistry, radioactivity and energy transfer are random. But at macroscopic scales those processes are ergodic; they reduce to NON-random basic relationships like the gas laws, chemical reaction dynamics or thermal diffusion rates. They are deterministic, NOT random.
You may be confusing the deterministic but unpredictable chaotic behavior of complex non-linear processes with random behavior. They are not the same. Chaotic behavior generally occurs within an envelope defined by the thermodynamics of the system.
A random process is by definition unbounded and can vary outside of any specific limits.

Richard A.
February 15, 2011 8:19 am

“Of course everyone’s entitled to their point of view. My point of view is that I really dislike this sort of comment. The notion that alarmist scientists are somehow all conspiring to destroy capitalism is just hogwash. More importantly, it feeds into certain stereotypes that do the skeptics no good. I wish people would stick to the science, or lack thereof.” – art johnson
To a certain degree, I agree, art. But then I do have to wonder why all the ‘solutions’ proposed for this alleged emergency invariably involve the government regulating to the hilt, or outright seizing massive portions of the economy and controlling them. I’d be more inclined to agree with you about letting the science simply be the science if these people weren’t constantly advocating measures that brought the USSR to its glorious and prominent position as a current world economic power.
More to the point, whatever AGW alarmists know or do not know about the climate, and even granting they’re right, their economic knowledge is one step below a gerbil’s. And they keep venting on both fronts and should be answered on both. In a larger context this pattern has been going on for millennia, literally. There has always been some one or group proclaiming the world was just about to end and that the only way to avoid it was to give them a boatload of money and power. Funnily enough, the world is still here, and so are the Malthusians milking the dupes.

art johnson
February 15, 2011 9:03 am

No argument Richard. These guys are leaving their labs and classrooms these days much too often to advocate policy. And yes, of course, the inevitable result of accepting first, that AGW is real and potentially cataclysmic, and second that we can do something about it, is more and more stifling regulations and a bigger role for government in general. These things are certainly true…
But I do believe that trying to ascribe political motives to the scientists, as opposed to some politicians, is counter-productive. Even if it’s true…which I don’t accept… it can’t be proven. The science however, is another matter. All I’m saying is that’s where we should be concentrating. Otherwise, we’re just talking to ourselves.
Again, obviously just my opinion.

Dave Andrews
February 15, 2011 2:09 pm

art johnson,
You are correct of course; simple left/right classifications underestimate the complexity of individuals and their responses to AGW. Many of us are well educated and able to judge much of the science for ourselves.
You are also right about scientists feeling the need to move into advocacy. See, for example, biologist Jeff Harvey’s personal page at the Netherlands Institute for Ecology.
Harvey is a prominent poster at that ‘temperate’ blog Deltoid.
http://www.nioo.knaw.nl/users/jharvey

sky
February 15, 2011 4:41 pm

izen says:
February 15, 2011 at 8:03 am
I agree with most of your points. But in claiming that “A random process is by definition unbounded and can vary outside of any specific limits,” you may be simply unaware of random processes other than the random “walk” of stochastically independent increments. A clear counterexample to your claim is a process that abruptly switches between a pair of fixed values, say +1 and -1, at times governed by the Poisson distribution. It has an analytically known acf and corresponding power density spectrum. More applicable to geophysical problems is any band-limited quasi-gaussian process, whose acf depends on the spectral components physically manifested by the process. That basic stochastic structure has far more varieties than Heinz.
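sky’s counterexample is simple to simulate. A hedged sketch of a random telegraph signal (switch times approximated as a Bernoulli thinning of a fine time grid, which converges to Poisson switching as dt shrinks):

```python
import numpy as np

rng = np.random.default_rng(5)

def random_telegraph(t_max, rate=1.0, dt=0.01):
    """Signal that flips between +1 and -1 at (approximately) Poisson times.
    Bounded by construction, yet fully stochastic, with an exponential acf."""
    t = np.arange(0.0, t_max, dt)
    switches = rng.random(t.size) < rate * dt  # ~Poisson switching as dt -> 0
    x = np.where(np.cumsum(switches) % 2 == 0, 1.0, -1.0)
    return t, x

t, x = random_telegraph(100.0)
print("values taken:", np.unique(x), "-- a bounded random process")
```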

John Whitman
February 15, 2011 5:42 pm

izen says:
February 15, 2011 at 8:03 am

John Whitman says:
February 15, 2011 at 1:35 am
“DeWitt Payne,
I think the logic is fundamental in support of the existence of a random component(s) to natural processes on this planet. First logical step is to develop and verify some tests for the presence of a random component in the behavior of nature. Next do the tests on some real data (for example GST time series data). Evaluate and cross reference various test results. If they show a random component is involved then, until the tests can be shown to be in error, the treatment of data must include analytical processes that account for random component behavior; analytical processes that do not account for a random component behavior cannot be logically justified.
DeWitt, do you find errors in the statistical tests or their application?
John”

The underlying quantum processes of chemistry, radioactivity and energy transfer are random. But at macroscopic scales those processes are ergodic; they reduce to NON-random basic relationships like the gas laws, chemical reaction dynamics or thermal diffusion rates. They are deterministic, NOT random.
You may be confusing the deterministic but unpredictable chaotic behavior of complex non-linear processes with random behavior. They are not the same. Chaotic behavior generally occurs within an envelope defined by the thermodynamics of the system.
A random process is by definition unbounded and can vary outside of any specific limits.
– – – – – – – – – – –
izen,
Thank you for your comment.
Please realize this is not a discussion of any ‘a priori’ conception of what the behavior of any earth climate system parameter should be. This is about taking the parameters measured over time (time series data) and testing in a statistical manner for the kind of behavior underlying the data. The behavior shown is a determining factor in selecting the appropriate methods for analyzing the data. To treat data containing a random component with analytical tools not suited to such data is to bias the analysis, or to commit outright error.
There is mounting evidence of a random (walk) component in the behavior of the GST time series. I think that tells us that any previous analyses of the data which did not recognize that underlying behavior (a random component) are subject to being falsified.
I think Ross McKitrick’s words in his earlier comment strike a reasonable note.

Ross McKitrick says,
February 14, 2011 at 11:00 am
“They [unit roots] also imply some fundamental differences about the underlying phenomena, namely that means, variances and covariances vary over time, so any talk about (for example) detecting climate change is very problematic if the underlying system is nonstationary. If the mean is variable over time, observing a change in the mean is no longer evidence that the system itself has changed.”

The participation of professional statisticians in collaboration on climate science was sorely needed; it is now happening at an accelerating pace . . . that is wonderful news. Those professional statisticians are very welcome, especially by physical scientists with a healthy (normal) skeptical mind.
John
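The nonstationarity McKitrick describes is easy to see in simulation: for a unit-root process there is no fixed mean or variance to estimate, because the ensemble variance grows with time. A minimal sketch (all sizes arbitrary):

```python
import numpy as np

rng = np.random.default_rng(6)
walks = rng.normal(size=(5000, 300)).cumsum(axis=1)  # 5000 unit-root realizations

# Ensemble variance grows linearly with time: nothing stationary to "detect".
for t in (50, 100, 200, 299):
    print(f"t = {t:3d}: ensemble variance = {walks[:, t].var():6.1f} (theory: {t + 1})")
```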

sky
February 15, 2011 6:11 pm

John Whitman says:
February 15, 2011 at 5:42 pm
“There is mounting evidence of a random (walk) component behavior of the GST time series.”
The random walk model necessarily implies that the power spectrum of first differences is that of white noise, i.e., totally flat. No GST time series exhibits anything near such a simple spectrum.
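sky’s criterion can be checked directly: difference the series and estimate its spectrum. A hedged sketch comparing a true random walk against a red (AR(1) with phi = 0.9, an arbitrary choice) series:

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(11)

walk = np.cumsum(rng.normal(size=4096))
f, pxx = signal.welch(np.diff(walk), nperseg=256)
pxx = pxx[1:]  # drop the zero-frequency bin (mean is removed by detrending)
print(f"random walk differences: spectral max/min = {pxx.max() / pxx.min():.1f}")

red = signal.lfilter([1.0], [1.0, -0.9], rng.normal(size=4096))  # AR(1) series
f2, pxx2 = signal.welch(np.diff(red), nperseg=256)
pxx2 = pxx2[1:]
print(f"AR(1) differences:       spectral max/min = {pxx2.max() / pxx2.min():.1f}")
```

The differenced random walk gives a roughly flat (white) spectrum; the differenced AR(1) series does not.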

David Socrates
February 16, 2011 12:00 am

This blog trail has been great fun and intellectually stimulating, but utterly inconclusive. In particular, I wonder what the point of it is if our overriding goal is to convince others (neutral or warmist) that the temperature series exhibits no evidence of significantly measurable anthropogenic CO2-induced warming.
There is far too much wriggle room in this type of abstruse debate to convince anybody, even reasonable neutral people (who constitute the overwhelming majority, don’t forget), let alone the irrational warmists who shout the loudest.

John Brookes
February 16, 2011 1:48 am

Global temperatures can’t follow a random walk, because (as pointed out above) a random walk is unbounded, and the laws of physics won’t allow too great a departure from equilibrium for too long a period of time (I’m pretty sure the stock market has a similar mechanism built in).
So is the paper saying that over a time scale of ~30 years it looks like a random walk? If so, how do you get from one 30 year block to the next one? You can’t just start the next 30 years of temperatures from where the last lot finished off, or you just have one long random walk. I don’t get it. If anyone can explain it so that it makes sense, I would be most grateful!

Smoking Frog
February 16, 2011 3:54 am

Berényi Péter: “How is ‘random walk length’ defined in the paper?”
Good question! I was wondering the same thing myself. Just because the trend of a random walk changes sign doesn’t mean that you’re at the end of the random walk. Is that what you have in mind?

Smoking Frog
February 16, 2011 3:55 am

Oops – I think I forgot to check “Notify me of follow-up…” So I’m doing it now.

Bob Tisdale
February 16, 2011 3:34 pm

KR: Sorry it took so long to get back to you.
You wrote: “Bob Tisdale – Minimum and maximum temperatures are related by variance around the temperature mean, and hence not independent.”
The maximums and minimums are not dependent on the mean. And this is relatively easy to illustrate. Let’s look at the components of HADCRUT3GL. It consists of CRUTEM3 Land Surface data…
http://i52.tinypic.com/1qrevm.jpg
…and HADSST2 Sea Surface Temperature:
http://i54.tinypic.com/2hydw08.jpg
And if we look at the annual Maximum, Mean and Minimum for CRUTEM3…
http://i55.tinypic.com/2ujnkpj.jpg
…and for HADSST2…
http://i55.tinypic.com/70ym8h.jpg
…we can see that the maximums and minimums do not necessarily follow the mean.
The differences are easier to see if we subtract the annual Mean from the annual Maximum and Minimum values for CRUTEM3…
http://i54.tinypic.com/34o4m1f.jpg
…and for HADSST2:
http://i51.tinypic.com/1zx2iv7.jpg
Or if you prefer, we can subtract the Annual Mean from the Maximum, and the Annual Minimum from the Mean for CRUTEM3…
http://i54.tinypic.com/2qumbde.jpg
…and for HADSST2:
http://i51.tinypic.com/nzkc1z.jpg
And to put those differences in the land and sea surface temperature datasets in perspective:
http://i52.tinypic.com/rw6iom.jpg
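For anyone who wants to repeat this comparison on other data, the operations are simple: form annual maximum, mean and minimum series, then subtract. A sketch on synthetic monthly data (the seasonal amplitude and noise level are invented stand-ins, not CRUTEM3 or HADSST2 values):

```python
import numpy as np

rng = np.random.default_rng(2)
n_years = 160
months = np.tile(np.arange(12), n_years)

# Synthetic monthly temperature: seasonal cycle plus weather noise.
monthly = (10 * np.sin(2 * np.pi * months / 12)
           + rng.normal(0, 1.5, months.size)).reshape(n_years, 12)

annual_max = monthly.max(axis=1)
annual_mean = monthly.mean(axis=1)
annual_min = monthly.min(axis=1)

# Do the extremes simply track the mean? Compare the difference series.
print("corr(max - mean, mean):", np.corrcoef(annual_max - annual_mean, annual_mean)[0, 1])
print("corr(mean - min, mean):", np.corrcoef(annual_mean - annual_min, annual_mean)[0, 1])
```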

John Whitman
February 16, 2011 4:36 pm

sky says:
February 15, 2011 at 6:11 pm

John Whitman says:
February 15, 2011 at 5:42 pm
“There is mounting evidence of a random (walk) component behavior of the GST time series.”

“The random walk model necessarily implies that the power spectrum of first differences is that of white noise, i.e., totally flat. No GST time series exhibits anything near such a simple spectrum.”
———-
sky,
Sorry, a little late . . . I am traveling around in Tokyo these days.
I take it that you are implying that there is no unit root in any GST time series. Please confirm that you are indeed saying that. Thanks.
John

sky
February 16, 2011 7:55 pm

What I’m saying is that year-to-year variability of GST or other indices (Nino3.4), when viewed over intervals of more than 30 years, is very far from a random walk in its stochastic structure. The concept of unit-root autoregressive processes is not entirely applicable to time series that are obtained by discrete sampling of a CONTINUOUS signal. There are other means of modeling non-stationarity that are far more appropriate in the geophysical context.

John Whitman
February 17, 2011 8:35 pm

sky says:
February 16, 2011 at 7:55 pm
“What I’m saying is that year-to-year variability of GST or other indices (Nino3.4), when viewed over intervals of more than 30 years, is very far from a random walk in its stochastic structure. The concept of unit-root autoregressive processes is not entirely applicable to time series that are obtained by discrete sampling of a CONTINUOUS signal. There are other means of modeling non-stationarity that are far more appropriate in the geophysical context.”
————-
sky,
Thanks for your reply. Had a late business dinner and multiple ‘parties’ last night in the Ginza, just getting back to responding. : )
So, we agree there is a growing body of studies and reviews of GST showing the presence of unit roots in GST time series datasets. Right? Seems straightforward.
Regarding the number of years in a given GST dataset, tests can be made to determine whether there are enough data points to infer anything statistically; so that is not an argument against evaluating time series data for the presence of a unit root (versus trend stationarity).
You imply that “time series that are obtained by discrete sampling of a CONTINUOUS signal” are not suitable subjects for the broad array of established time series methodologies. Indeed, those types of datasets are exactly the ones best treated by that array of statistical models/tests, especially time series containing unit roots.
You imply that geophysical data (I assume you mean GST records) have special aspects that exempt them from formal general statistical treatment. I do not think geophysical science is exempt.
I would like to paraphrase the words of commenter ‘HAS’ from the famous VS thread (comment # 1784 April 14, 2010 at 23:06) at Bart Verheggen’s blog.

1. The ‘GISSTEMP/ CRUTEM3 (GST)’ data is real, we can see it (i.e. this series is stochastic, as far as we can see)
2. These kinds of statistical tests are the appropriate ones to use when looking at these kinds of time series
No one is going to want to go further in a situation where inconvenient observations are ignored because they don’t fit people’s beliefs about what the physics should be saying, and statistical techniques are treated with suspicion because they don’t produce the results people want.
To solve this stuff we are going to need to respect both statistics and physics.

John

sky
February 18, 2011 2:00 pm

John Whitman says:
February 17, 2011 at 8:35 pm
“You imply geophysical data (I assume you mean GST records) has special aspects that exempt it from formal general statistical treatment. I do not think geophysical science is exempt.”
John,
I imply nothing of the kind. Unlike stochastic processes that are intrinsically discrete-valued series, discretely sampled continuous signals are subject to arbitrary choices of sampling interval that can introduce aliasing and other artifacts in the time series. That’s why continuous random-phase processes, rather than discrete autoregressions, provide the usual stochastic modeling framework in geophysics, not just climatology.
In that framework, trend is intertwined with the very lowest frequency components of the signal, about which little can be discerned on the basis of short records from the satellite era. If there are quasi-centennial components in the physical signal, they may fool some into imputing a LINEAR SECULAR trend where there is none. Unlike economics, where growth or inflation may produce secular (but not necessarily linear) trends, physical constraints preclude such in geophysics.
The discrete-time statistical framework of econometricians is very far from being an exhaustive analytic discipline when it comes to random processes.
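sky’s point about low-frequency components masquerading as secular trends can be shown in a few lines. A minimal sketch (period, window and noise level are arbitrary assumptions): fit a straight line to a 32-year window of a pure 100-year oscillation.

```python
import numpy as np

rng = np.random.default_rng(9)
t = np.arange(1979, 2011)  # a 32-year "satellite era" window
series = np.sin(2 * np.pi * (t - 1930) / 100) + rng.normal(0, 0.1, t.size)

slope = np.polyfit(t, series, 1)[0]
print(f"fitted 'trend': {slope * 10:+.3f} per decade (true secular trend: zero)")
```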