Doug Keenan has just written to Julia Slingo about a problem with the Fifth Assessment Report (see here for context).

Dear Julia,

The IPCC’s AR5 WGI Summary for Policymakers includes the following statement.

The globally averaged combined land and ocean surface temperature data as calculated by a linear trend, show a warming of 0.85 [0.65 to 1.06] °C, over the period 1880–2012….

(The numbers in brackets indicate 90%-confidence intervals.) The statement is near the beginning of the first section after the Introduction; as such, it is especially prominent.

The confidence intervals are derived from a statistical model that comprises a straight line with AR(1) noise. As per your paper “Statistical models and the global temperature record” (May 2013), that statistical model is insupportable, and the confidence intervals should be much wider—perhaps even wide enough to include 0°C.

It would seem to be an important part of the duty of the Chief Scientist of the Met Office to publicly inform UK policymakers that the statement is untenable and the truth is less alarming. I ask if you will be fulfilling that duty, and if not, why not.

To me, this is just more indication that the 95% number claimed by IPCC wasn’t derived mathematically, but was a consensus of opinion like was done last time.

Your article asks “Were those numbers calculated, or just pulled out of some orifice?” They were not calculated, at least if the same procedure from the fourth assessment report was used. In that prior climate assessment, buried in a footnote in the Summary for Policymakers, the IPCC admitted that the reported 90% confidence interval was simply based on “expert judgment”, i.e., conjecture. This, of course, raises the question of how any human being can have “expertise” in attributing temperature trends to human causes when there is no scientific instrument or procedure capable of verifying the expert attributions.


The liars picked 95% because it was higher than the last AR, IMO. As some wit commented earlier, it will probably be Cook’s legendary 97% in the next AR, if there be one.

Same same this time around: 95% probability does not equal confidence interval. Unscientific jiggery-pokery!
“Probabilistic estimates of quantified measures of uncertainty in a finding are based on statistical analysis of observations or model results, or both, and expert judgment2.”
2 In this Summary for Policymakers, the following terms have been used to indicate the assessed likelihood of an outcome or a result: virtually certain 99–100% probability, very likely 90–100%, likely 66–100%, about as likely as not 33–66%, unlikely 0–33%, very unlikely 0–10%, exceptionally unlikely 0–1%. Additional terms (extremely likely: 95–100%, more likely than not >50–100%, and extremely unlikely 0–5%) may also be used when appropriate. Assessed likelihood is typeset in italics, e.g., very likely (see Chapter 1 and Box TS.1 for more details).

From what I have seen of the raw data, the ‘confidence limits’ are untenable. (Strictly, the band that should bound roughly 95% of the data is the prediction interval; the 95% confidence band on the regression line itself is narrower, but both have the same shape.) Anyone who has read chapter 2 of a statistics book discussing linear regression would know that the confidence band on a regression line comprises a pair of hyperbolas (one above the regression line and one below) with their vertices pointing at the overall data mean (the midpoint of the regression line). These hyperbolas are asymptotic to the lines passing through the data mean with slopes equal to the regression slope plus or minus the confidence limits on the slope itself, as illustrated here: http://www.statsoft.com/textbook/multiple-regression/
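The shape of that band can be checked directly. Below is a minimal Python sketch (synthetic data, a normal critical value of 1.96 rather than an exact t quantile): the pointwise half-width of the confidence band on an OLS line is smallest at the mean of x and grows toward the ends.

```python
import numpy as np

def ols_ci_halfwidth(x, y, x0, z=1.96):
    # Pointwise half-width of the confidence band for the fitted mean
    # response at x0; z=1.96 approximates the exact t quantile.
    n = len(x)
    xbar = x.mean()
    Sxx = ((x - xbar) ** 2).sum()
    b = ((x - xbar) * (y - y.mean())).sum() / Sxx   # OLS slope
    a = y.mean() - b * xbar                         # OLS intercept
    s2 = ((y - (a + b * x)) ** 2).sum() / (n - 2)   # residual variance
    return z * np.sqrt(s2 * (1.0 / n + (x0 - xbar) ** 2 / Sxx))

rng = np.random.default_rng(0)
x = np.arange(50, dtype=float)
y = 0.01 * x + rng.normal(0.0, 0.1, 50)

w_mid = ols_ci_halfwidth(x, y, x.mean())   # narrowest at the data mean
w_end = ols_ci_halfwidth(x, y, x.max())    # wider at the ends (hyperbolic)
```

The half-width depends on x0 only through (x0 − xbar)², which is what gives the band its symmetric, hyperbola-like flare.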
I suspect what they are reporting as a CL is on the variance of the slopes among the *models*, which is totally meaningless, as has already been pointed out elsewhere.

When the IPCC pulls numbers out of their collective behinds this is what happens. I am praying their credibility will be shot sooner rather than later.

Billion more dollars needed to fix report.

Well we cannot expect the Summary for Policy Makers to reflect the science can we? After all, the science hasn’t even been published yet! Worse, policy makers had a heavy hand in the wording of the summary of the science which hasn’t yet been published!
A better name might be “Summary of the Policy Makers, By the Policy Makers, For the Policy Makers”.

From the University of Colorado Boulder, Headline: “Shrinking atmosphere linked to lower solar radiation”. In summary, the upper atmosphere has shrunk 30% and cooled by 74 degrees since 1998.
“It is now clear that the record low temperature and density were primarily caused by unusually low levels of solar radiation at the extreme-ultraviolet level,” Solomon said.
CO2 had an impact of less than 5%.
Hmmmmmmmmm……

When I was going to school, a mark between 85% and 100% was considered an A grade. I do not know what I would have done if it was further divided up into “likely,” “very likely,” and “most likely.” If it had, I would have been so confused.

The Belgian news is still on the CAGW line. In the Netherlands there was at last more nuance. Marcel Crok pointed out in a very gentle way that the models don’t fit and the warming could be far less than assumed, whereupon an AGW supporter remarked that the warming had disappeared into the oceans. They don’t realise that, with models failing to predict actual conditions, making predictions for the year 2100 is scientific blasphemy.

Manny M says:
September 27, 2013 at 11:38 am
Forgot to post the URL for the shrinking atmosphere article… http://artsandsciences.colorado.edu/magazine/2010/08/shrinking-atmosphere-linked-to-low-solar-radiation/

Stephen Wilde has been saying this for some time.

My guess is Julia Slingo will not respond or acknowledge any problem; that is her role!

The following is a comment from the InterAcademy Council review of the IPCC process and procedures in 2010:
“The IPCC uncertainty guidance urges authors to provide a traceable account of how authors determined what ratings to use to describe the level of scientific understanding (Table 3.1) and the likelihood that a particular outcome will occur (Table 3.3). However, it is unclear whose judgments are reflected in the ratings that appear in the Fourth Assessment Report or how the judgments were determined. How exactly a consensus was reached regarding subjective probability distributions needs to be documented.”
I couldn’t find any such documentation in the SPM. Perhaps it’s in the AR5 WG1 report coming out soon? Or perhaps it doesn’t exist. Hmmm….

Over a year ago we had the SREX IPCC report, which said:

March 2012 IPCC Special Report on Extreme Events and Disasters:
FAQ 3.1 Is the Climate Becoming More Extreme? […]None of the above instruments has yet been developed sufficiently as to allow us to confidently answer the question posed here. Thus we are restricted to questions about whether specific extremes are becoming more or less common, and our confidence in the answers to such questions, including the direction and magnitude of changes in specific extremes, depends on the type of extreme, as well as on the region and season, linked with the level of understanding of the underlying processes and the reliability of their simulation in models. http://www.ipcc-wg2.gov/SREX/images/uploads/SREX-All_FINAL.pdf

Recently we had the draft Summary for Policymakers.

There is high confidence that this has warmed the ocean, melted snow and ice, raised global mean sea level, and changed some climate extremes, in the second half of the 20th century (see Figure SPM.5 and Table SPM.1). {10.3–10.6, 10.9} http://wattsupwiththat.files.wordpress.com/2013/09/wg1ar5-spm_fd_final-1.pdf

Did the instruments develop over the last week? Wasn’t it scary enough? Bring in the government representatives and what do you get, a consensus among civil servants. The scientists are left scratching their heads, but he who pays the piper……..

27 September 2013
AR5 Summary For Policymakers There has been further strengthening of the evidence for human influence on temperature extremes since the SREX. It is now very likely that human influence has contributed to observed global scale changes in the frequency and intensity of daily temperature extremes

Sorry,
Messed up the html. The last paragraph should also be indented and is part of the quote.

Since a linear trend model plus error has least squares estimators that are asymptotically normal, one standard error of the trend is about 0.2/1.645 degrees C, or roughly 0.12. So 0.85, the quoted trend estimate, has a Z-score of about 5 or more, which is significant well beyond the 95%, 99%, or 99.99% levels. Seems to me the authors are downplaying the evidence. While I have not read the report, I wonder what Doug is arguing. That the AR(1) model is inappropriate? Sure. But statistics has advanced way beyond this in the last few decades. Elaborating: put in a fractionally differenced long-memory component in the model if you want. You’re not going to knock that z-score below significance. Please reinforce your argument.
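That back-of-envelope calculation can be written out in a few lines. This is a sketch only, assuming the quoted 0.65–1.06 °C range is a symmetric, normal-based 90% interval:

```python
# Back out the implied standard error and z-score from the quoted
# 90% interval [0.65, 1.06] around the 0.85 trend estimate.
lo, hi, trend = 0.65, 1.06, 0.85
z90 = 1.645                       # two-sided 90% normal critical value
se = (hi - lo) / (2 * z90)        # implied standard error, about 0.125
zscore = trend / se               # comfortably above 5
```

Of course, this only reproduces the arithmetic inside the AR(1) assumption; the whole dispute in this thread is over whether that assumption is defensible.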

Manny M says:
September 27, 2013 at 11:37 am
From the University of Colorado Boulder, Headline: “Shrinking atmosphere linked to lower solar radiation”. In summary, the upper atmosphere has shrunk 30% and cooled by 74 degrees since 1998.

Hi. Thanks for the post. RE:

“It is now clear that the record low temperature and density were primarily caused by unusually low levels of solar radiation at the extreme-ultraviolet level,” Solomon said. CO2 had an impact of less than 5%.

Worth noting: the Solomon of this 2010 (?) paper is “Stanley”, not Susan, who used to work at NOAA, and won the “Nobble Prize” along with other members of the Algore fan club.

Weather forecasts use the same approach. If three weathermen say it will rain and one disagrees, there’s a 75% chance of showers.

“While I have not read the report, I wonder what is Doug arguing? That the AR(1) model is inappropriate?”
There’s a bit of history behind this. The IPCC in their last report used 90% confidence intervals based on REML linear regression to state bounds on the amount of warming. Doug had a very long argument with the UK Met Office when they used this model to claim the warming was “significant”, via a number of Questions in Parliament, which eventually ended with the Met Office conceding that AR(1) was unphysical and far less likely than some other noise models they could have used. The chief scientist there, Julia Slingo, said that AR(1) was unrealistic and tried to claim they hadn’t used it in making their assessment, instead, like the IPCC, using a wide range of evidence.
So evidently, when the new report came out Doug immediately checked what model they were using. Turns out they’re still using AR(1), the model Julia Slingo said was rubbish. The confidence intervals are wrong, because the error model used to generate them is wrong – on the authority of the Met Office chief scientist.
The question is, will she say so?
“Elaborating: put in a fractionally differenced long-memory component in the model if you want. You’re not going to knock that z-score below significance. Please reinforce your argument.”
Actually, yes you can. That was the basis of Doug’s earlier argument – that a trendless ARIMA(3,1,0) model fits the data a thousand times better than AR(1). There are links to the context at Bishop Hill.
—
In case anyone else wants to check, the following R script ought to replicate (roughly) the IPCC’s calculation. I used GISTEMP here; the IPCC didn’t say which series they used, so I’m assuming a combination of several global temperature series, or possibly different versions. But the closeness of the result indicates that this is indeed what they’ve done.
# ###################
# Replicate IPCC’s confidence interval for warming 1880-2012
library(nlme) # nlme contains gls
# Read in GISTEMP data obtained from
# http://data.giss.nasa.gov/gistemp/tabledata_v3/GLB.Ts+dSST.txt
# Downloaded 27 Sep 2013
gistemp<-ts(c(-22,-13,-16,-19,-27,-25,-24,-31,-19,
-10,-33,-27,-31,-36,-32,-25,-18,-18,-31,-20,-14,-21,
-30,-36,-44,-29,-26,-42,-43,-46,-45,-44,-41,-39,-23,
-16,-36,-44,-31,-29,-27,-21,-29,-26,-24,-22,-9,
-18,-16,-31,-11,-7,-10,-25,-9,-15,-10,3,6,1,6,8,5,6,
14,1,-8,-4,-10,-11,-19,-6,2,9,-11,-12,-18,4,4,3,
-4,5,4,7,-20,-10,-4,-1,-5,6,4,-7,2,16,-7,-1,-12,15,
6,12,23,28,9,27,12,8,15,29,35,24,39,38,19,21,28,
43,33,45,61,40,40,53,61,60,51,65,59,63,49,59,66,
55,58)/100,start=1880)
# Do the regression using AR(1) model, restricted maximum
# likelihood, and show the coefficients of the best fit
glsREML <- gls(gistemp ~ time(gistemp), correlation=corARMA(p=1,q=0), method="REML"); coefficients(glsREML)
# Calculate 90% confidence interval on the slope
confint(glsREML,level=0.9)
# Calculate 90% confidence interval on the increase from 1880 to the end of 2012
confint(glsREML,level=0.9)[c(2,4)]*(2013-1880)
# Plot the data and slope on a chart
plot(gistemp)
abline(glsREML)

IPCC news flash: the global temperature trend going forward is going to be DOWN, not up.
Natural causes can explain the temperature rise from 1880 to 1998: high solar activity, plus a warm PDO after 1980 featuring more El Nino activity, up through 1998.

Davidmhoffer:
Don’t you mean “Summary of the Money Makers, By the Money Makers, For the Money Makers”?

Nullius in Verba says: September 27, 2013 at 1:38 pm
“Turns out they’re still using AR(1), the model Julia Slingo said was rubbish.”
Would you care to quote Julia Slingo saying AR(1) was rubbish?
This seems to be just another episode in Doug Keenan berating people for not using his pet ARIMA(3,1,0) model, which gets a better fit at the expense of extra parameters and physical impossibility. But the IPCC is not claiming that linear+AR(1) is the best model of temperature. They are simply using it as the basis for calculating temperature change over the period.
According to the MO, ARIMA(3,1,0) would give a temperature change of 0.73°C, which is well within the IPCC stated range. So I can’t see how this can be construed as an error.

Simon, except they are not “making money”… they are spending our money.
How about “Summary of the Tax Spenders, By the Tax Spenders, For the Tax Spenders”.
There, much better.

Summary for ScareMongers — 100 Ways to Spread Scare Stories and Make a Million!

Manny M says:
September 27, 2013 at 11:37 am
Would like to hear Dr. Leif Svalgaard’s take on this.

Nick, “Would you care to quote Julia Slingo saying AR(1) was rubbish?”
“However, considering the complex physical nature of the climate system, there is no scientific reason to have expected that a linear model with first order autoregressive noise would be a good emulator of recorded global temperatures, as the ‘residuals’ from a linear trend have varying timescales.”
You could have found that yourself. Doug linked to it.
“But the IPCC is not claiming that linear+AR(1) is the best model of temperature. They are simply using it as the basis for calculating temperature change over the period.”
They’re claiming that these are 90% confidence intervals on the actual temperature rise. And implying, to a statistically non-literate audience who wouldn’t recognise the issues with the AR(1) choice, that they would be justified in thinking these have a 90% probability of covering the value that is being estimated. (Which from a Bayesian point of view is not true, either. That would be a ‘credible interval’, not a ‘confidence interval’. A common error, that.)
I know there’s this thing about “not giving ammunition to sceptics”, but wouldn’t it be simpler, more straightforward, and a lot less desperate, on hearing that the IPCC was basing its confidence intervals on a linear+AR(1) model, to simply say: “That’s wrong; they ought to have either picked a better model, explained the difficulty, or not given confidence intervals at all”?

Nullius in Verba says: September 27, 2013 at 4:06 pm
That’s not Julia Slingo saying AR(1) is rubbish. She’s saying it would not be a good emulator of global temperature. No one ever thought it would be. And that’s nothing to do with AR(1): no linear model would be a good emulator. Nor is Keenan’s model. When he says it is a thousand times more likely, that glosses over the fact that it is still impossibly unlikely.
That’s not the point. It’s used as a basis for computing the difference between temperatures at two times. Regression fits are used for this in all kinds of fields, and they work well.

Whatever happened to 1850 to the present??
Or is that Inconvenient??
regards.

Manny M says: September 27, 2013 at 11:38 am
Ian W says: September 27, 2013 at 11:51 am
Bill Parsons says: September 27, 2013 at 1:21 pm
You all may be interested in this little snippet from NASA about shrinking atmosphere and the cooling effects of CO2 in the thermosphere. http://www.nasa.gov/topics/earth/features/coolingthermosphere.html

“It’s used as a basis for computing the difference between temperatures at two times.”
And reporting confidence intervals. It’s the confidence intervals that are the issue.
And it isn’t a computation of the difference in temperatures at two times. To do that, you would subtract the temperature at one time from the temperature at the other. It’s a much simpler process. What you’re trying to do is something a lot more complicated – by “temperature” you don’t mean the temperature, but an underlying equilibrium temperature due to forcing that has short-term weather superimposed on top of it – a purely theoretical concept that assumes that’s how weather works. You’re trying to estimate the change in the underlying equilibrium, and using a low-pass filter to cut out the high-frequency ‘noise’ – a process that requires accurate statistical models of both signal and noise to do with any quantifiable validity.
The mainstream constantly conflate these two concepts – the observed temperature and the underlying equilibrium temperature – because it gives the impression that the statements are about direct empirical observation, while actually being about an unobservable parameter in a question-begging assumed model.
Had they simply given the OLS trend, you could have argued that it was merely informally descriptive, a rough and unscientific indication of how much temperatures generally had gone up, without making any comment on its significance. However, they stuck a confidence interval on it. Worse, they said there was a 90% likelihood of it covering the quantity being estimated. That gives the impression of a scientifically testable statistical statement. But the “confidence interval” here is a meaningless pair of numbers, because it relies for its validity on an assumption known not to be true.
“Regression fits are used for this in all kinds of fields, and they work well.”
Sadly so. That doesn’t make it right, though. You might well know what they’re doing and that such estimates are to be treated cautiously, but the intended readers of this report don’t. They read it as authoritative science, and if they see confidence intervals being written down, by scientists, they’re going to assume they’re meaningful. In this case, as in so many others, they’d be wrong.

Nullius in Verba says: September 27, 2013 at 5:11 pm
“But the “confidence interval” here is a meaningless pair of numbers, because it relies for its validity on an assumption known not to be true.”
They have given an estimate, and the basis on which it was calculated. And they have given confidence intervals for that calculation. That’s appropriate.
I agree that AR(1) is not the only basis for calculating confidence intervals, and there is a case for others (discussed here). But it’s not meaningless.

Not entirely meaningless, but surely deceptive. It is written in the Summary in such a way as to create a false impression. What it means is not stated clearly. SOP.

You’re right…it’s not “meaningless.” In fact, I’d say it means a lot that they selected an inappropriate basis to determine their estimate and confidence intervals.

“I agree that AR(1) is not the only basis for calculating confidence intervals, and there is a case for others (discussed here). But it’s not meaningless.”
AR(1) is the wrong basis for calculating confidence intervals.
It’s meaningless if it’s based on an untrue assumption. Policy makers need to know how accurately you can state the amount of global warming observed. This does not answer that question.
And the IPCC didn’t fully explain the basis on which it is calculated – that’s something we had to deduce from what they did last time around (and buried in an appendix to the main report), and the fact that the interval they report this time matches that method. What the IPCC say is that we can be confident there’s a 90% likelihood that this interval covers the amount of global warming there has actually been. That’s not true.

The 0.65 and 1.06 °C figures are obviously P.O.O.M.A. numbers. [That stands for Preliminary Order of Magnitude Approximation. Really it does.]

(Fake ‘David Socrates’ sockpuppet ID -mod)

Thomas Stocker had the best line of the IPCC press conference, claiming in substance that we do not have enough data about the last 15 years to properly evaluate the “hiatus”. Really, not enough data in the past 15 years!!!! That has been the most instrumented, most observed period ever… except that it showed no warming.
This guy deserves an IgNobel prize, just for that one!!!

To me, this is just more indication that the 95% number claimed by IPCC wasn’t derived mathematically …

Maybe, but if so, it was at least the result of impeccable risk assessment logic:
The clients must receive what they specified. Otherwise they will defund the project.

Well, so long as the MSM censor dissent, does it matter??
The Guardian is back to censoring again – it censored a within-the-rules challenge to Liberal Democrat (the great Greenies of our major UK political parties) Tim Farron to face reality.
I do wonder whether they have the honesty to draw out historical coverage of the Duma under Brezhnev and compare it to how they write some tripe and get fawning Kommisar after Kommisar to say ‘oh wonderful benefactor, how wise you are!’?
It’s really getting beyond a joke.

“I would say: An ARIMA(3,1,0)? Surely you jest in saying that is a thousand times more likely? I would sure like to see that likelihood comparison.”
Follow the earlier Met Office discussion at Bishop Hill. There are links back to Doug’s calculations, which Slingo confirms.
“Do prove me wrong, but the model you propose has a random walk component, meaning the variance increases linearly in time. That is clearly not the case with this data. What you propose isn’t even a stationary model, which should be the null hypothesis of any climate change argument.”
It’s an approximation for a subset of data, like a linear trend is.
It’s a standard procedure in time series analysis – if there are roots of the characteristic equation very close to the unit circle, any short enough segment of the series looks approximately as if it were on the unit circle (i.e. random-walk-like), and a lot of the standard tools don’t work or give invalid answers. So the standard approach when analysing a new time series is to first test for unit roots, and if they are “found”, to take differences until the result is definitely stationary. It’s an approximate measure to handle situations where you don’t have a long enough sample to fully explore the data’s behaviour, and to avoid getting misleading results because of that.
Think of it as like the situation you get with the series x(t+1) = 0.999999999 x(t) + rand(t) where rand(t) is a zero-mean Gaussian random number series. Technically it’s AR(1) and stationary, but over any interval short of massive it’s going to look indistinguishable from x(t+1) = 1 x(t) + rand(t), which is a random walk. You don’t have enough data to resolve the difference.
Usually, after testing for unit roots and taking differences, the next step is to test to find what ARMA process best fits the result. This is where the ARIMA(3,1,0) model came from – it is the ARIMA process that best fits the short-term behaviour of the data. The process is analogous to fitting a polynomial to a short segment of a function to model its curves. It’s a local approximation that is not expected to apply indefinitely.
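The near-unit-root point above is easy to check by simulation. A minimal Python sketch (synthetic Gaussian noise, 133 points to echo the length of the 1880–2012 record) shows that the "technically stationary" AR(1) recursion and the exact random walk produce numerically indistinguishable paths over a sample this short:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 133                            # roughly the length of the 1880-2012 record
eps = rng.normal(0.0, 0.1, n)

phi = 0.999999999                  # technically stationary AR(1) coefficient
ar1 = np.zeros(n)
rw = np.zeros(n)                   # exact unit root: a pure random walk
for t in range(1, n):
    ar1[t] = phi * ar1[t - 1] + eps[t]
    rw[t] = rw[t - 1] + eps[t]

# Over 133 steps the two paths cannot be told apart
gap = float(np.max(np.abs(ar1 - rw)))
```

The per-step discrepancy is of order (1 − phi) times the level of the series, so over 133 steps the accumulated gap is far below anything a statistical test could detect, which is exactly the comment's point about undetectable unit roots.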

While I agree the AR(1) model is lacking, I can’t understand why people would endorse Keenan’s letter when he seriously suggests the “correct” error margins might include zero. Does anyone actually think we shouldn’t be able to rule out the possibility of no warming in the last 100+ years?

“Nor is Keenan’s model. ”
Did not Keenan say repeatedly that he wasn’t advocating ‘his’ model, but merely using it to illustrate his point?

Brandon,
Depends what you mean by “correct”.
My view is that all this talk about whether changes in temperature are “significant” or not are meaningless without a validated statistical model of ‘signal’ and ‘noise’ derived independently of the data, which we don’t have. We don’t know the statistical characteristics of the normal background variability precisely enough, so it is simply impossible to separate any ‘global warming signal’ from it. All these attempts where you make nice neat mathematical assumptions simply get out what you put in, and your conclusion depends on what you assumed. If you assumed a trend you’ll find a trend. If you assume no trend, you’ll find there’s no trend. Doug’s ARIMA(3,1,0) is merely a standard example derived by the textbook method to illustrate that point.
But it’s got no independent validation, either, so it’s no more “correct” than anything else we could do. It’s simply a better fit.
There are no correct error margins because we don’t have an independent, validated model of the errors. We cannot rule out, by purely statistical means, the possibility of no warming in the last 100+ years. And the IPCC’s confidence intervals are just the same sort of significance testing in disguise.
However, I don’t expect the mainstream is ready to accept that one, so I’ll let it pass. That you accept that linear+AR(1) is “lacking” is a good start, and sufficient for the time being.

Maybe this is too simplistic a way to look at it, but say you have a system with several subsystems, each of which you are 99% confident you understand well enough to model accurately. If there are 6 or more of these subsystems, is it possible for you to be 95% confident of the accuracy of your model of the entire system? 0.99^6 = 94.1%
Given that the climate has well over six subsystems, few of which if any, we are 99% confident that we can model accurately, let alone any interactive effects, it would seem nonsensical simply from a probability standpoint to claim 95% confidence.
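The arithmetic behind that claim is a one-liner; note the key (and labelled) assumption that the subsystem judgments are independent, so the confidences multiply:

```python
# Confidence compounds multiplicatively across subsystems, assuming the
# subsystem judgments are independent of one another.
p, k = 0.99, 6
whole_system = p ** k             # about 0.941, already below 0.95
```

With any dependence between the subsystem errors the combined figure changes, but the multiplicative case is the cleanest illustration of why per-component confidence erodes at the system level.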

Did anyone notice when the panel were questioned about the pause, that the models can not be used to predict individual rain showers or storms but can be used to show you the trend?
It seems to have escaped them that their models are not good at predicting trends either.
If your trend predicting model does not map across to recently gathered real measurements, but is consistently producing higher temperature outputs, then your model is wrong and all of your predictions that come from it are wrong.
How many wrongs make a right?

Nullius in Verba:
You make some interesting comments about the suitability of various statistical methods to be used on different occasions.
Could you state what you currently feel would be the best sequence to use to analyse the various global temperature datasets?
I would really like to know what a person skilled in statistical analysis would say is the correct method or sequence to use.
It’s a shame that most statistical methods give answers irrespective of whether the method should have been used in a particular case.

I have seen this before in my own profession, where I’ve been asked to provide a percentage of accomplishment on a project which involves research into unknowns. First, how can I put a finite number value on an open-ended research project? Secondly, since the research is hardly a linear process, how can any percentage of completion be anything but an ‘idiot meter’ indication of MY confidence that I’ll be done by the deadline I’m assigned?
Answers, respectively: I cannot, it cannot.
Everyone knows this, though several try to pretend it’s not the case. For many, who are stymied in a phase of a project, but wish to show that they’re actually making progress so as to not alarm someone higher up in the food-chain, starting with a very low number and leaving themselves lots of room to up the percentage as time goes by is a familiar strategy which allows them to show “progress” even when there is none.
Trouble occurs as the deadline looms, and you have to show progress but have less and less room before hitting 100%. Where weeks ago you could make 10% per week, at 85%, you can’t. The “idiot meter” indications now take on a definite asymptotic curve, approaching completion.
At what point do people stop and say, “You’re Bee Essing me, right?”
Apparently, the IPCC and their believers aren’t at that point yet, and think they still have room to increase the numbers…we also know that they create more room by lowering their starting point…
Anyone who puts confidence in an “idiot meter” indication…. Well, there’s a REASON we call them “idiot meters”…

“Could you state what you currently feel would be the best sequence to use to analyse the various global temperature datasets?”
The primary problem, like I said, is the lack of a validated model for the background noise. This is where effort needs to be concentrated. Until we have one, none of the methods are going to give reliable answers.
More sophisticated studies in detection and attribution (‘detection’ is what we’re looking at here) use the big climate models to generate statistics on the background variation. There’s a bit more reason to pay attention to these, since they are at least partially based on physics. But there are lots of approximations and parameterisations and fudge-factors galore, they’re not validated in the sense required, they don’t match observations of climate in many different aspects and areas, and their failure to predict the pause (or rather, pauses of a similar length and depth) falsifies them even at the game they were primarily designed and built to play – global temperature.
Because they’re not validated, the fact that they can’t produce a rise like 1978-1998 doesn’t mean anything. But because they’re not validated, the fact they can’t produce a pause like 1998-2013 doesn’t mean anything either, except that one way or another the models are definitely invalidated. The pause doesn’t show that global warming theory is wrong, because it could just be that the models underestimated the natural variation.
If, one day, they can build a climate model that can be shown to predict climate accurately over the range of interest, then the correct approach would be to use this to generate statistics on trend sizes over various lengths, and use that to perform the trend analysis and confidence intervals and so on. That would be the right way of doing it. But we’re not there yet.
“I would really like to know what a person skilled in statistical analysis would say is the correct method or sequence to use.”
Thanks! But I’d only describe myself as ‘vaguely competent’ not ‘skilled’. There are a lot of people far better at this stuff than me!

The 95% # is total f***ing b*llsh*t.

Paul Vaughan is right. As the temperature continues to decline, the IPCC’s ‘confidence’ continues to rise.
It’s a ‘confidence’ scam.

Nullius in Verba says:
September 28, 2013 at 1:21 am
You’re doing a great job, even if it appears to be falling on deaf ears. lund@clemson.edu appears to think that the variance of a random walk expresses high frequency noise level increasing in time. He or she is mistaking an ensemble statistic for a sample statistic.

@ dbstealey (September 28, 2013 at 12:29 pm)
We’re dealing with kamikaze suicide-bomber types. Status quo daily wuwt operations alone can’t and won’t stop such a viciously aggressive force of nature.

For a thorough scientific treatment of a similar perspective as that which Doug is presenting here, I would recommend people read Cohn and Lins 2005 (“Nature’s Style: Naturally Trendy”). It describes in some detail different calculations and the consequence thereof. http://www.timcohn.com/Publications/GRL2005Naturallytrendy.pdf
Saying AR(1) CIs provide some insight is, I am afraid, incorrect. It is like saying IID CIs would provide some insight. Well, no, it just gives an arbitrary and meaningless set of CIs which tell us nothing about the real world. Likewise AR(1) tells us nothing about the real world.
For Brandon: you ask a good question, and in fact the CIs do not say we cannot tell the world is warming. There are two separate tests; one is can we tell whether the world is warmer than 100 years ago. To do this, we would use the error associated with the measurement of global temperature. And I think we can be quite confident that the world has warmed. But this has nothing to do with AR(1) CIs, since the error associated with temperature measuring equipment is not AR(1).
The second question is: can we rule out the possibility of natural variability causing that warming. For this, we need a model for how temperature behaves naturally, and that is where the AR(1) assumption comes in. But the AR(1) assumption is as inappropriate as IID and the CIs generated are meaningless. The consequences of this are outlined in Tim Cohn’s article above.

Sorry for the double post. I felt for clarity (particularly wrt Brandon’s question) I should quote from Cohn and Lins’ conclusions: […] with respect to temperature data there is overwhelming evidence that the planet has warmed during the past century. But could this warming be due to natural dynamics? Given what we know about the complexity, long-term persistence, and non-linearity of the climate system, it seems the answer might be yes. Finally, that reported trends are real yet insignificant indicates a worrisome possibility: natural climatic excursions may be much larger than we imagine. So large, perhaps, that they render insignificant the changes, human-induced or otherwise, observed during the past century.

Aargh! Double paste in the above italicised quote – the same text is repeated twice. Sorry 🙁 If a mod could fix I would be hugely grateful… otherwise readers should just read half of it 🙂

If it has been repeated twice, it must be there three times. Can only see one repetition?

Spence_UK (September 29, 2013 at 1:29 am) wrote: “But could this warming be due to natural dynamics?”
Centennial timescale warming not only “could” be natural — it is natural.

Anthony/Mods
umm
the title
SIGNIFICANT missed an I fellas:-)
[Done. Mod]

As painful and ridiculous the various IPCC ARxx reports are, at least we know there will never be an AR15 report. I wonder, will they go from AR14 to AR16 like hotel floors go from 12 to 14?

Dudley, very good, yes it is repeated once 🙂
Paul, indeed, that is the null hypothesis (although some want to reverse it…)

Bart says:
September 28, 2013 at 12:35 pm
Nullius in Verba says:
September 28, 2013 at 1:21 am
You’re doing a great job, even if it appears to be falling on deaf ears. lund@clemson.edu appears to think that the variance of a random walk expresses high frequency noise level increasing in time. He or she is mistaking an ensemble statistic for a sample statistic.
**************
Lund sez: I hope you guys know that I have been teaching graduate time series for 20 years at the PhD level in a math department. Much of the above is gibberish. Fine…tell me I don’t know jack about the topic. That I don’t understand sampling variability versus theoretical means. Somehow, I don’t think my stint as chief editor of the Journal of the American Statistical Association puts me in the statistical hack category.
As for what the best model is, it contains errors that are fractionally differenced (not fully differenced). How do I know this? I’ve done the comparison on the CONUS series (admittedly not the global series).
Nullius, we don’t fit random walk models to temperature series because they are overdifferenced and have physically unreasonable properties. Please read about fractionally differenced, fully differenced (ARIMA), and ARMA series in a time series text. All I’m trying to tell you is that if one accounted for the longer memory issues in the series than AR(1) models (yeah, one can do better than AR(1)), you still wouldn’t get different conclusions.
You might want to read Bloomfield and Nychka (1992), Climatic Change.

Lund@clemson.edu says:
September 29, 2013 at 3:52 pm
This is total gibberish. Your words from previous post: “Do prove me wrong, but the model you propose has a random walk component, meaning the variance increases linearly in time. That is clearly not the case with this data.”
There is no such “clearly”. A sample series of a random walk does not grow more noisy as you move away from the origin. It simply wanders from the origin, with 1-sigma bounds growing as the square root of time. There is no way to identify by inspection whether this data series has such a non-stationary component as it wanders quite a bit. So clearly, your assertion, that it was “clearly not the case”, is wrong.
“All I’m trying to tell you is that if one accounted for the longer memory issues in the series than AR(1) models (yeah, one can do better than AR(1)), you still wouldn’t get different conclusions.”
Incorrect. The PSD of the detrended temperature series looks like this. There are significant AR(2) processes with energy concentrated at frequencies associated with periods of about 65 and 21 years. You account for these, and I guarantee your conclusions will be wildly different.
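For what it is worth, the claim that a randomly excited resonance concentrates spectral energy near a period is easy to illustrate numerically. A sketch (the 65-sample period and damping r = 0.99 are invented for illustration, not estimated from any temperature data) drives an AR(2) resonance with white noise and locates the peak of the averaged periodogram:

```python
import numpy as np

rng = np.random.default_rng(1)
r, period, n, n_runs = 0.99, 65.0, 4096, 30
w0 = 2 * np.pi / period
a1, a2 = 2 * r * np.cos(w0), -r ** 2   # AR(2): x_t = a1*x_{t-1} + a2*x_{t-2} + z_t

p = np.zeros(n // 2 + 1)
for _ in range(n_runs):
    z = rng.standard_normal(n)
    x = np.zeros(n)
    for i in range(2, n):
        x[i] = a1 * x[i - 1] + a2 * x[i - 2] + z[i]
    p += np.abs(np.fft.rfft(x)) ** 2   # accumulate periodograms to smooth them

f = np.fft.rfftfreq(n)
peak_period = 1.0 / f[1:][np.argmax(p[1:])]
print(peak_period)   # the spectral peak sits near the 65-sample resonance
```

Whether the real temperature record contains such a resonance is exactly what is in dispute here; the sketch only shows what such a process would look like spectrally.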

Lund@clemson.edu
Please read the paper by Cohn and Lins I linked to above. It shows if you deal with longer range dependencies properly you DO get different conclusions.
Please note that time series that exhibit long term persistence (such as climate) result in nontrivial biases in estimated parameters. Many people who have classical training in time series analysis miss this important point.

“Were those numbers calculated, or just pulled out of some orifice?”
Not constructive.

… but big thank yous to both Nullius and Lund, who fought bravely and on topic. I wish I could contribute something more meaningful than a compliment from the peanut gallery.
I judged it an even match until Lund collapsed into an appeal to the authority of his own resume, and attributed Bart’s comment to Nullius rather than responding to Nullius, September 28, 2013 at 12:17 pm
which looks to me like the winning answer to the thread.
Don’t be a stranger, Lund. People who enjoy thinking will take the time to read you here, and even if our intent be to attack you, you’ll be reaching a wider and more attentive audience than your Journal ever did. It ain’t that bad.

Bart said “There is no such “clearly”. A sample series of a random walk does not grow more noisy as you move away from the origin. It simply wanders from the origin, with 1-sigma bounds growing as the square root of time.”
Bart, you are very wrong. Your first two sentences flatly contradict each other.
If X_t is a random walk at time t, then
X_t = X_{t-1}+Z_t, t >= 1 , with X_0=0
and Z_t is some IID noise. So X_t = Z_1 + Z_2+ …. + Z_t and the variance of X_t = t times the variance of Z_1. I think you have been told this!!!! Standard deviations are the square root of the variance.
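That variance identity is easy to check by simulation; a minimal numpy sketch (coin-flip steps chosen purely for illustration, so Var(Z_1) = 1):

```python
import numpy as np

rng = np.random.default_rng(2)
n_paths, T = 5000, 100
Z = rng.choice([-1.0, 1.0], size=(n_paths, T))   # IID steps with Var(Z_1) = 1
X = np.cumsum(Z, axis=1)                          # X_t = Z_1 + ... + Z_t
var_t = X.var(axis=0)                             # ensemble variance at each t
print(var_t[9], var_t[99])                        # near 10 and 100: Var(X_t) = t * Var(Z_1)
```

Note this is an ensemble variance across many paths at a fixed time, which is the distinction the argument above turns on.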

Grover Trattingham says:
September 30, 2013 at 10:44 am
“I think you have been told this!!!”
I think I wrote it!!!!!:

It simply wanders from the origin, with 1-sigma bounds growing as the square root of time.

The mean square error does not tell you everything you need to know about how a time series varies. You need to know the full autocorrelation.
Random walks look like this (upper plot). Not like the lower plot.
When we speak of “noise”, we generally mean higher frequency variation. Random walks do not display this, as the accumulation of independent increments naturally attenuates high frequencies.
There is very clearly some random walk-like behavior in the temperature data within the timeline of interest. Lund made an error. Perhaps in a rush to write something in opposition, but an error nevertheless.

At the end of the day the “global” climate is simply the sum of its factors – an equation. Some of the factors we know, or think that we know, and can attribute a value to them. There are many similar factors that we know, or think that we know, must feature, but to whose value we can only guess. We know that other factors exist, but we do not know how many and can attribute no weight to them. The equation is, for the moment, unsolvable. It follows that any attempted solution, no matter how esoteric the statistical manipulation, can be allocated a very precise confidence level, and that level is Zero.

Yessir! Knocked that one out of the park!! I’m not a statistician, but I can tell when we’ve degenerated to the point of discussing the size of angel’s arses and the area of a pin’s head. It is incontrovertible fact that GCMs are NOT reality. This is simply the modern argument about determinism taken to a new level of abstraction: it is presently impossible to put all the variables into a program and have it accurately model the real world. Even were it possible to “model” the real world, it could not accurately APE the real world, as we cannot predict all the possible extra-solar input to the system, even if we DO know its impact (currently we do not).
To assert that we know enough about all the variables, and that they are all accurately accounted for in the GCMs, such that those GCMs are accurate enough that a statistical analysis of their results is the equivalent of a statistical analysis of actual historical record, is utter absurdity.
Neither GCMs nor their human, thus fallible, programmers are prescient to any reasonable degree. Anyone who argues to the contrary, regardless of the math they cite or the degrees they claim, is a feather merchant. The Second Law of Thermodynamics is supreme, the universe moves toward increased entropy, and computer models are not reality, and cannot be made to be.
In spite of all the alarmist’s assertions to the contrary, reality persists in being what it is, not what they wish it to be.

Sorry, the preceding in agreement with and appreciation of @ Ken Harvey:
At the end of the day the “global” climate is simply the sum of its factors – an equation. Some of the factors we know, or think that we know, and can attribute a value to them. There are many similar factors that we know, or think that we know, must feature, but to whose value we can only guess. We know that other factors exist, but we do not know how many and can attribute no weight to them. The equation is, for the moment, unsolvable. It follows that any attempted solution, no matter how esoteric the statistical manipulation, can be allocated a very precise confidence level, and that level is Zero.

Bart, “lund@clemson.edu appears to think that the variance of a random walk expresses high frequency noise level increasing in time.”
What I think Lund is talking about is an initiated random walk. That’s a random variable that has a known value (zero, say) at the first time, is Bernoulli +/-1 at the second, Binomial (probabilities 1/4, 1/2, 1/4 of values -2, 0, 2) at the third, and so on. The variance of the variable at each timestep increases linearly with time, which is equivalent to saying the standard deviation increases as the square root of time. (Variances of independent variables add.)
A random walk, however, extends forever back in time too and has infinite variance at every time; all you can talk about is the variance of differences in position for times with a given separation, and stuff like that. Non-stationary processes are indeed very strange objects, and we tend to abuse the rigorous mathematics somewhat in practical applications. It’s only because we’re discussing a finite segment of data that we can get away with it.
Lund, “Nullius, we don’t fit random walk models to temperature series because they are overdifferenced and have physically unreasonable properties. Please read about fractionally differenced, fully differenced (ARIMA), and ARMA series in a time series text. All I’m trying to tell you is that if one accounted for the longer memory issues in the series than AR(1) models (yeah, one can do better than AR(1)), you still wouldn’t get different conclusions.”
Ummm. We were talking about ARIMA. ARIMA(3,1,0), to be specific.
ARFIMA, which I presume is what you meant, is a further generalisation. And sure, if you want to argue that ARFIMA would have been an even better choice, I doubt either Doug or I would disagree. Would you agree that the original point still stands – that the IPCC’s use of linear+AR(1) is an untenable choice for a model, given that we know the climate is not AR(1)?
But as for whether its properties are physically unrealistic, bear in mind that a linear trend, if extended backwards a few thousand years, drops below absolute zero. Which is impossible. The point being that the linear trend is only intended as an approximation for a finite segment of the series, and in just the same way the ARIMA model is only an approximation for a finite segment. We can be sure that if we had sufficient data, we would eventually be able to resolve the difference.
If you wish to demonstrate that you wouldn’t get different conclusions, that would be interesting.
“You might want to read Bloomfield and Nychka (1992), Climatic Change.”
I prefer Box and Jenkins, thanks.

If a random walk were best, how likely do you think it would be that the mean is not changing in time but the variance is? This is a property of the ARIMA(3,1,0) statistical error model proposed and should be one reason it is discarded.

Nullius in Verba says:
September 30, 2013 at 2:17 pm
I am speaking of a finite random walk segment as well. Informally, in engineering applications, we usually model such a process as the sampled output of an integrator driven by wideband (effectively “white”, over the frequencies we are interested in), zero mean noise (Wiener Process). We generally assume the input noise is Gaussian which, due to the Central Limit Theorem, is generally a fairly safe assumption. If the input noise has an RMS measure of “sigma”, then the standard deviation of the random walk relative to the initial value grows as sigma * sqrt(time).
The point, though, is that the samples of the random walk are highly correlated in time. If x(t_k) is the process at the kth sampled instant t_k, then the autocorrelation function is E{x(t_k)*x(t_n)} = sigma^2 * min(t_k,t_n). This is not independent noise which flails wildly between the RMS bounds, but rather a slowly evolving trajectory which “walks” up and down over time. Such trajectories typically look like the sample series I plotted in the upper plot here. The standard deviation of sigma * sqrt(time) is a measure of the RMS spread over an ensemble of such processes at the given time, but not of a single sample of such a process.
If you filter out the processes in the temperature record associated with what I suspect is an oceanic-atmospheric resonance at ~65 years, and the ~21 year process which is probably associated with solar cycles, then you are left with something which looks very much like some of these random walk trajectories. There’s no way you can see this by inspection of the raw data. Hence, Lund’s claim that it was “clearly” not there was puzzling. I think it is possible that he momentarily forgot how random walks behave, expecting something more akin to the lower plot at my link above. At least, that was the impression under which I made my comments.
On the other hand, maybe he was saying that any such behavior was clearly not dominant. I can agree with that. Or, maybe he was saying that, as a random walk tends to infinity with time, there clearly cannot be an open ended accumulation over eons of time. I can agree with that, too. But, the answer to that, as you pointed out in a post above, is that over a finite interval of time, a true random walk can be indistinguishable from a stationary low frequency process which is ultimately bounded over time.
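The autocovariance formula above is likewise easy to verify over an ensemble of simulated walks; a small sketch (sigma and the two sample times are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(3)
sigma, n_paths, T = 1.0, 20000, 50
X = np.cumsum(sigma * rng.standard_normal((n_paths, T)), axis=1)
k, m = 10, 40                                # two (1-based) sample times
cov = np.mean(X[:, k - 1] * X[:, m - 1])     # ensemble estimate of E{x(t_k) x(t_m)}
corr = cov / np.sqrt(X[:, k - 1].var() * X[:, m - 1].var())
print(cov, corr)   # cov near sigma^2 * min(k, m) = 10; corr near sqrt(min/max) = 0.5
```

The high correlation between well-separated samples is the “slowly evolving trajectory” behavior being described.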

lund@clemson.edu says:
September 30, 2013 at 4:32 pm
Please do not disregard my point about the PSD. There are powerful, at-least quasi-cyclical processes evident in the data record.
The ~65 year process, in particular, boosted the underlying trend during the latter portion of the 20th century. This natural (obviously, because it has executed more than two complete cycles within the record, before human influence could have been significant) boost in the trend was mistaken for CO2 accelerated warming. If you remove that cyclical influence, you are left with a modest trend which, again, is natural because it has been in evidence since the end of the LIA, before humans could have had an influence.
Properly analyzed, the data show no statistically significant evidence for CO2 induced warming at all.

Now the discussion is getting somewhere.

(Note: This is the latest fake screen name for ‘David Socrates’, ‘Brian G Valentine’, ‘Buster Brown’, ‘Joel D. Jackson’, ‘beckleybud’, ‘Edward Richardson’, ‘H Grouse’, and about twenty others. The same person is also an identity thief who has stolen legitimate commenters’ names. All the time and effort he spent on writing 300 comments under the fake “BusterBrown” name, many of them quite long, are wasted because I am deleting them wholesale. ~mod.)

lund@clemson.edu
Yes, a fractionally integrated solution would be correct. I linked you to a peer-reviewed paper that showed when accounting for the fractionally integrated component, the 95% confidence intervals expand to include zero. I told you to read that paper and warned of the dangers (and difficulty) in estimating this value.
In summary: you claimed using such a model would have no effect, and I linked you to a peer-reviewed paper that does the calculation that shows it has a very large effect.
And without answering the scientific points I raise, you announce you are dropping out of the discussion at this point.
What a surprise.

Bart, the “quasi-periodic” cycles you describe are almost certainly a consequence of fractionally integrated random variations, and not deterministic cycles at all.
Unfortunately you plot your PSD on a linear scale. The CIs of a PSD become very large on this type of scale so the uncertainty is unclear. You should (generally) plot PSD on a log-log or semi-log Y scale. The CIs are fixed width on this type of scale and it is easier to determine if those “quasi periodic cycles” are actually anything of interest.

Spence_UK says:
October 1, 2013 at 6:15 am
“Bart, the “quasi-periodic” cycles you describe are almost certainly a consequence of fractionally integrated random variations, and not deterministic cycles at all.”
They are definitely not deterministic cycles. I never suggested they were. But, they are very likely representative of ordinary resonant systems driven by random excitation.
The ~21 year cycles are likely due to quasi-periodic variations in solar activity. The Hale cycle has a nominal period of about this duration corresponding to oscillation of magnetic polarity (now that I think of it, this might be a manifestation of Svensmark-type cosmic ray modulation). A model for the sunspots, which displays characteristics very similar to observations, can be found here. See, for example, the simulations here and here, and compare to actual sunspot data here. Note, I am pointing out qualitative similarity, not quantitative replication. That analysis is TBD.
Partial differential equations on bounded domains typically produce solutions which can be expanded in a series of normal modes. Random excitation of these normal modes, with some inherent energy dissipation leading to damping, can be represented using a system model such as proffered above for the sun spots. This is standard operating procedure in, e.g., modeling structural vibrations, and is very widespread and well-established.
IMO, the ~65 year process is very likely such a normal mode excitation of the oceanic-atmospheric system. The random excitation doesn’t even actually have to be wideband – just nominally stationary with repeatable energy levels at the resonant frequency.
“Unfortunately you plot your PSD on a linear scale.”
It is better for seeing the spikes indicative of the resonances. A log-log scale is useful for observing power-law types of noise, but not very good for picking out resonances, as the varying scale distorts the Cauchy peaks.

Bart, natural variability *is* a power law. The variability extends across nine orders of magnitude (see here: https://itia.ntua.gr/en/docinfo/1297/)
Your 20 year and 65 year cycle fits right in to that relationship. And yes, pretty much any cycle you can come up with can be matched to some physical phenomenon.

Spence_UK: I must apologize. Your cited paper is certainly relevant. I didn’t read it the first two times you mentioned it. Perhaps I was misled by a lot of the misinformation here. I don’t normally look at this page. You have to yell pretty loud to be heard over all the noise. There is so much to correct.
But your reference contains exactly the type of analysis that I would think is hard to refute. The ARIMA(3,1,0) stuff is going the wrong way. While I have not read your citation in detail, if the fractionally differenced p-value is .07, I would say okay, that increased the p-value way way more than I expected for the value of d quoted. This is not quite the case for the Continental US Series, which I have examined. I’m going to guess there are specifics (yearly instead of monthly series, interval of observation (does it contain the last few years), etc.) to dicker around, but I believe you.
BTW, I am still here. I just don’t have time to reply to everything.

Spence_UK says:
October 1, 2013 at 1:26 pm
“Bart, natural variability *is* a power law.”
To me, a power law is something which manifests as a linear slope in a log-log plot, generally indicative of red or pink noise. Spurious peaks may appear in a PSD analysis performed on such data, but they are generally amorphous and fail to be persistent.
“Your 20 year and 65 year cycle fits right in to that relationship.”
These are concentrated regions of elevated “energy” with common morphology. They indicate distinct processes above and beyond any background variability.
The ~65 year quasi-cycle is of particular interest, because it was the upswing of that cycle which was interpreted as accelerated warming due to CO2, and which has now got the climate community tied up in knots as it shifts into the downward cycle, spoiling their expectations, and poised to bring the entire house of cards tumbling down.
There was a time in which it could be argued that the ~65 year process was a fluke, and just a mirage of random variability. However, when the cycle turned in the mid-2000s right on schedule, that contention became a lot less tenable.

Lund@clemson.edu
In which case I should apologise back for allowing my frustrations to cause me to question your motivations. Sorry for that. I’m glad you found the paper interesting.

Bart, “To me, a power law is something which manifests as a linear slope in a log-log plot”
Yes indeed – and that is exactly what the paper I link above demonstrates, across nine orders of magnitude (out to millions of years). The peaks you show are not special in this context.

Spence_UK says:
October 2, 2013 at 5:55 am
“The peaks you show are not special in this context.”
I always used to tell my students, you can’t rely on all this fancy math. It’s all based on models, and the models aren’t always applicable. You have to dig down into the actual data and do sanity checks.
If you can really look at this plot and not see the ~65 year component blazing in your eyes, then you need to take a break, and drop down to your local pub for a glass of perspective and soda.

Spence_UK says:
October 2, 2013 at 5:54 am
“In which case I should apologise…”
Frankly, I don’t see any reason to apologize. Lund came in swinging with some very churlish comments. Now, he is apologizing to you because he sees that you are sympathetic to his point of view. So, only people sharing his outlook and priorities deserve respect and a fair hearing. Pshaw.

Bart: You are the one who has spewed a ton of misinformation here. Learn what a random walk is.

Lund – charming. You need to learn that there are other conventions in other disciplines than the one in which you are engaged, and expand your mind.
“Learn what a random walk is.”
If you have an objection, state it clearly. My definition is standard, and consonant with descriptions widely available on the web.

Bart: all fractionally integrated time series will show strong “quasi-periodic” cycles near to the length of the time series. You can demonstrate this yourself easily by generating random fractionally integrated time series of a similar length.
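This is straightforward to demonstrate. A short sketch (d = 0.4 and the series length are arbitrary choices for illustration) generates ARFIMA(0,d,0) samples via the standard binomial expansion of (1-B)^(-d) and measures how much of the variance sits at periods comparable to the record length:

```python
import numpy as np

rng = np.random.default_rng(4)
d, n, n_sims = 0.4, 512, 200

# MA(inf) weights of (1 - B)^(-d): psi_0 = 1, psi_k = psi_{k-1} * (k - 1 + d) / k
psi = np.ones(n)
for k in range(1, n):
    psi[k] = psi[k - 1] * (k - 1 + d) / k

shares = []
for _ in range(n_sims):
    x = np.convolve(rng.standard_normal(2 * n), psi)[:n]   # ARFIMA(0, d, 0) sample
    p = np.abs(np.fft.rfft(x - x.mean())) ** 2
    shares.append(p[1:6].sum() / p[1:].sum())   # variance share of the 5 lowest frequencies

print(np.mean(shares))   # far above the ~0.02 a white-noise series would give
```

With so much variance concentrated at periods comparable to the series length, slow excursions that look like “cycles” are the rule, not the exception.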

Spence_UK says:
October 2, 2013 at 1:47 pm
“…all fractionally integrated time series will show strong “quasi-periodic” cycles near to the length of the time series.”
But, not all quasi-periodic cycles are phantoms of a power law process. Furthermore, you appear to be glossing over the fact that the ~65 year quasi-periodic cycle is half the length of the modern instrument record. There are very nearly two full cycles in evidence here.
This argument has been going on here longer than you may realize. Back in the early 2000’s, many people noticed that there appeared to be a ~65 year cyclicality to the temperature data. In particular, the run-up in temperatures from about 1910-1940 was almost precisely the same as that between about 1970-2000.
Even greater force was given to the argument when the turning point came roughly in 2005, right on schedule.
Fractionally integrated noise is a Martingale – the future does not depend on the past. These two events, the replication of the 1910-1940 and 1970-2000 increases and the turnabout in roughly 2005, would be astounding coincidences under the assumption that this is a figment of fractionally integrated noise.

“Fractionally integrated noise is a Martingale…”
This is incorrect. I am looking for the property which will convey my intention. That is basically that there is no compelling reason for fBm to exhibit repeating coherent patterns, and among all the paths it could take, it is unlikely to do so.

Bart,
It is easy to see patterns in fractionally integrated noise, especially when you get to interpret those patterns post hoc.

Spence_UK says:
October 2, 2013 at 3:25 pm
“…especially when you get to interpret those patterns post hoc.”
A) But, I did not see them post hoc. As I stated, many people were waiting to see if the turnaround would come after 2000 as would be expected for a persistent quasi-cyclical process. It did.
B) These patterns would then be uncannily precise. The increase from about 1910-1940 is virtually identical to that from 1970-2000. The turnaround came at exactly the right time.

Put Spence my team,dawg.. Bart, ???????, this is out of control.
[“on” my team? “in” ? Mod]
(Another fake identity -mod)

bart@hotmail.com says:
October 2, 2013 at 5:55 pm
“Please, then, give us your alternative def of a random walk, replete with frequency reasoning.”
This has been done upthread. Don’t be so lazy.
“Folks: do we blather on or STFU?”
Oh, great. The feces flinging monkeys have arrived.
I would recommend you STFU, since you evidently have nothing to contribute.

carol@stat.usc.edu says:
October 2, 2013 at 5:39 pm
If you are confused, perhaps you should ask actual questions. I am not equipped to interpret “?????”.

Bart,
Please, then, give us your alternative def of a random walk, replete with frequency reasoning.
[How many screen names are you using? ~ mod.]
At least 20:

(Note: “Bart” is a fake screen name for ‘David Socrates’, ‘Brian G Valentine’, ‘Buster Brown’, ‘Joel D. Jackson’, ‘beckleybud’, ‘Edward Richardson’, ‘H Grouse’, and about twenty others. The same person is also an identity thief who has stolen legitimate commenters’ names. All the time and effort he spent on writing 300 comments under the fake “BusterBrown” name, many of them quite long, are wasted because I am deleting them wholesale. ~mod.)

Bart says:
October 2, 2013 at 3:06 pm
(This sockpuppet can’t keep his screen names straight. -mod) [1. The repeated STFU’s in repeated replies are not needed, nor useful, nor desired. Cut them out.
2. What did you mean by “for fBm to exhibit repeating coh” ? Mod]

Fake name. -mod.
[When cursing the world, it is usually best to tell the un-cursed-at rest of the world, which other person the writer is mad at. Mod]

(Fake name. -mod)

Lund@hotmail.com says:
October 2, 2013 at 8:44 pm
“Da**it, dude: we are not [retarded].”
I think you may be…
“Bart: you make no sense to me.”
I expect not. You are delving into a new discipline you obviously know nothing about, and your first reaction is to deny there is anything more for you to learn. That’s a really… shall we be nice and say imprudent?… thing to do.
“Rather, I find your frequency domain gibberish (j in the states) non-cool.”
That pretty much says it all. You don’t do frequency domain. Got it.
“But what did you say: random walks are subject to high-frequency noise cancellation? Please tell us more.”
The roll off of gain with frequency due to integration is one of the most elementary control actions there is. In Laplace notation, the transfer function of an integrator is 1/s, where s is the Laplace variable. Evaluating that function at s = j*w, where w is the radial frequency and j is the square root of -1, the gain falls off as the reciprocal of frequency, with a phase shift of -90 degrees, due to the j in the denominator.
It is trivial to see how this affects, e.g., sinusoids. The integral of cos(w*t) is (1/w)*sin(w*t). The higher the frequency, the smaller its integrated amplitude.
This is so absurdly simple, it is utterly amazing that you would stick your neck out without first asking politely for clarification. It is beyond basic.
But then, there’s a good reason Clemson U was never on my short list for schools I would attend. Stick to football, guys. Mathematics, apparently, isn’t your strong suit.
I tell you what, here is a tutorial on PID control design from a real school. PID stands for “proportional-integral-derivative”, and it is just about the most basic and widespread control technique available. There, you will see some discussion of the frequency response of an open loop with an integral control element.
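The 1/w attenuation is trivial to confirm numerically; a tiny sketch (frequencies chosen arbitrarily):

```python
import numpy as np

t = np.linspace(0.0, 10.0, 100001)
dt = t[1] - t[0]
amps = {}
for w in (1.0, 10.0):
    y = np.cumsum(np.cos(w * t)) * dt   # crude numerical integral of cos(w*t)
    amps[w] = np.abs(y).max()           # exact integral is sin(w*t)/w, amplitude 1/w
print(amps[1.0] / amps[10.0])           # near 10: integrator gain falls off as 1/w
```

The tenfold higher frequency comes out with roughly a tenth of the integrated amplitude, which is the roll-off being described.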

I doubt this doofus calling himself “Lund” is actually Dr. Robert Lund of Clemson. Nobody granted such a position would make such a fool of himself, unless things at Clemson are even worse than most people assume.
So, to whoever has appropriated his title, before you say something even stupider like “what does the frequency response of an integral have to do with the frequency content of a random walk,” I will again point out that a Gaussian random walk is equivalent to (and often the result of) sampling the output of an integrating process fed by wideband noise, in the limit as that input noise bandwidth approaches infinity, i.e., the standard Wiener Process.
Random walks look like the top plot here. Or, the plot here, for a non-Gaussian case. As is plainly evident, higher frequency motion is attenuated in each of these cases, in the latter because discrete accumulation, like continuous integration, attenuates higher frequencies.
This property is also immediately evident from the autocorrelation function, which I provided previously for the sampled data case: E{x(t_k)*x(t_n)} = sigma^2 * min(t_k,t_n). The cross correlation coefficient of nearby points is sqrt(min(t_k,t_n)/max(t_k,t_n)), which will be near unity when t_k is near t_n. That means the points tend to stay in the same neighborhood for an extended time, and fail to jump around significantly in narrow time intervals, i.e., their frequency content is weighted toward the low frequencies.
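The autocorrelation claim can be checked by brute force with a Monte Carlo run (again Python for illustration; the particular times t_k = 40, t_n = 50 and the trial count are my choices):

```python
import random, math

random.seed(0)

# Monte Carlo check of E{x(t_k) * x(t_n)} = sigma^2 * min(t_k, t_n)
# for a unit-variance Gaussian random walk (t_k = 40, t_n = 50).
trials, t_k, t_n = 20000, 40, 50
acc = 0.0
for _ in range(trials):
    x = xk = 0.0
    for t in range(1, t_n + 1):
        x += random.gauss(0.0, 1.0)
        if t == t_k:
            xk = x                 # record x(t_k) mid-path
    acc += xk * x                  # x now holds x(t_n)
est = acc / trials                 # should approach min(t_k, t_n) = 40
rho = est / math.sqrt(t_k * t_n)   # estimated correlation coefficient
print(est, rho)
```

The estimate should land near min(40, 50) = 40, and the correlation coefficient near sqrt(40/50) ≈ 0.894, consistent with the sqrt(min/max) expression above.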
But then, this is obvious in the power law of -2 which such a process produces in a PSD estimate. I mean, this is really, really basic stuff.
So, guys… you’ve embarrassed your institution enough. How about you run along and play somewhere else now.

0) OK Class, you’ve had your fun. Cool it. No more comments.
1) Bart: You at least have the covariance function of a random walk right:
Cov(X_t, X_s)=\sigma^2 min(t,s).
Taking t=s shows that the variance of X_t is \sigma^2 t. Ergo, the ratio of the variance of X_n to the variance of X_1 is n. That is the whole premise of why a random walk, and hence why an ARIMA(3,1,0) model, is inappropriate. The issue has nothing to do with frequency domain properties of time series. That is a non sequitur.
2) Please stop with the insults. It torques me that you’ve insulted my math skills and my university. Really.

Robert Lund says:
October 3, 2013 at 8:55 am “You at least have the covariance function of a random walk right:”
I had it right when you were still in diapers. I stated it well before this comment when you had first entered in. Apparently, you were in a blood frenzy, looking forward to tearing into someone who didn’t share your viewpoint, so you glossed over it. “That is the whole premise of why a random walk, and hence why an ARIMA(3,1,0) model, is inappropriate.”
It’s a flawed premise. Over a finite interval, AR processes with long time constants can behave essentially like a random walk. If the aim is prediction in the near term, there is nothing wrong with it. “It torques me that you’ve insulted my math skills and my university. Really.”
It was intended to. It torques me that you paid so little attention to the things I stated, and made to ridicule me based on your incomplete knowledge and lack of experience outside your narrow field. Has the lesson been learned?

” Over a finite interval, AR processes with long time constants can behave essentially like a random walk”
This is more gibberish. An AR process with long time constants? I can’t even begin to try to make sense of this. Do you even know what an autoregression is? Do tell us how a long time can be constant. And you wonder why you’re being ignored.
I look at it this way, Bart: You’re better at insults than math.

Robert Lund says:
October 3, 2013 at 10:27 am
[trimmed. Mod]
Do you even know how to derive the frequency response of a discrete AR system? Have you ever even heard of the Z-Transform? Do you have any idea how engineers design digital control systems? Do you know what a time constant is?
[trimmed. Mod]

“Do you even know how to derive the frequency response of a discrete AR system? Have you ever even heard of the Z-Transform? Do you have any idea how engineers design digital control systems? Do you know what a time constant is?”
What you mean to ask me is “Do I know how to derive the spectral density of an autoregression (your word discrete is inappropriate)”? The answer is yes (use a transfer function argument with a causal linear process driven by white noise, the latter having a constant spectral density). Do I know what a Z-transform is? Yes (better called a power series transform). I also know about Laplace transforms, characteristic functions, moment generating functions, etc. (I do teach differential equations)…
Do I know what you are trying to say? Not in the slightest.

Robert Lund says:
October 3, 2013 at 11:22 am “Do I know what you are trying to say? Not in the slightest.”
Then, ask questions, instead of accusing me of not knowing what I am talking about. If you had asked nicely to begin with, we wouldn’t have had all this nastiness.
A simple example may help. Consider the difference equation
x(k+1) = 0.999*x(k) + w(k)
where w(k) is white noise. The time constant of this system is -T/log(0.999) = 995T, where T is the sample period. If you look at the output of this over a time less than a time constant, it will be very close to a random walk.
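A sketch of that comparison (Python for illustration; the 100-sample window and unit-variance noise are assumptions of mine):

```python
import random

random.seed(1)

# Same noise drives an AR(1) with a = 0.999 and a pure random walk.
# Over 100 samples (much less than the ~1000-sample time constant)
# the two trajectories stay close relative to their excursion.
a, n_short = 0.999, 100
ar = walk = 0.0
max_gap = max_excursion = 0.0
for _ in range(n_short):
    w = random.gauss(0.0, 1.0)
    ar = a * ar + w          # AR(1) recursion x(k+1) = 0.999*x(k) + w(k)
    walk += w                # random walk driven by the same w(k)
    max_gap = max(max_gap, abs(walk - ar))
    max_excursion = max(max_excursion, abs(walk))
print(max_gap, max_excursion)
```

Driving both recursions with the same noise keeps the comparison apples-to-apples: the gap between the two paths stays small relative to the walk's excursion over a window much shorter than the time constant.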

-1/log(0.999) = 999.5

Bart,
Respectfully, this whole thread is about the nuances between a random walk, a fractionally differenced random walk, and a causal AR(1) model. Some of us here have decades of proficiency with the topic. I truly don’t know what you mean to say most of the time. Like in the example above, there is no period T. An AR(1) with autoregressive coefficient of .999 is stationary.
Here’s the deal: an autoregression with an autoregressive polynomial that has roots close to the unit circle will exhibit more persistence than one with roots far from it. Often, a sample path of such a series could be mistaken for a random walk. What we are trying to tell you is that there is something in between an AR(1) and a random walk, dubbed an ARFIMA model, that is the most appropriate error model here. And it’s stationary (a random walk is not).
Look, I’m sorry my time series class picked on you. It was a bad idea to tell them about this thread. Can we move on?

Robert Lund says:
October 3, 2013 at 12:41 pm “Like in the example above, there is no period T. An AR(1) with autoregressive coefficient of .999 is stationary.”
Then, how are the samples at step k and n separated in time? This is how digital control systems are constructed – we sample sensor data at uniform intervals, and apply a control signal back in a feedback loop at that uniform sample interval.
Yes, it is ultimately stationary. But, you wouldn’t know that from a finite sample of data with record length shorter than a time constant. “Can we move on?”
Yes, we can move on.
Don’t take my earlier taunts too seriously. They were intended as a wake-up call. Clemson is a fine university, and I know people I respect who studied there. I have known idiots who attended MIT. The best engineer I ever worked with was from Purdue – that is not a plug, I did not go there. I judge a person by what he or she can do, not by what school he or she managed to get into and out of.
I, also, have many decades of experience with noise modeling and handling in a wide variety of electro-mechanical systems. We have to design systems which work to exacting specifications. I am very good at making them do that.
I am fluent in these topics within the argot of my milieu, which evidently makes for a communications problem. But, once upon a time, I delved into them extensively in an academic setting. My earlier texts included Larson & Shubert, Karlin and Taylor, and Doob.
Since graduating and entering the practical world, I have had very little use for Ito and Stratonovich. Most of the time, I have seen fBm used as a crutch to explain processes which, upon closer examination, bear the marks of poor data collection technique. IMHO, it is a GUT theory of noise. Sort of like string theory – a mathematically elegant artifice, but of little practical value.
But, we could argue about that all day long. I take it as read that you disagree. The bottom line is the same, no matter which of us is closer to the truth. If the apparent ~65 year process is, as I maintain, a result of random excitation of an oceanic-atmospheric mode, then temperatures are poised to go down. If it is more closely a process of fractionally integrated noise, then temperatures are poised to go down, because this fBm obviously would have Hurst coefficient > 0.5, and so would tend to keep going in the same direction it was currently going for an extended time.

because this fBm obviously would have Hurst coefficient > 0.5, and so would tend to keep going in the same direction it was currently going for an extended time.
No, that is not how fractionally integrated noise behaves.
I’m all for making predictions from models, but you need to ensure that (1) you understand how the models actually behave and (2) your predictions need confidence intervals. Without these, predictions are worthless.

The main difference between fractional Brownian motion and regular Brownian motion is that while the increments in Brownian Motion are independent, the opposite is true for fractional Brownian motion. This dependence means that if there is an increasing pattern in the previous steps, then it is likely that the current step will be increasing as well. (If H > 1/2.)

Bart, just look at the example plots (H=0.75, H=0.95) in the wiki article you just linked to. You will notice those plots have many local minima and maxima.
You know what local minima and maxima means? It means that if the current step goes up, the following step might just go down.
Long term persistence (H>0.5, H<=1.0) has a defined population mean, and although it can spend arbitrary periods of time to one side of that mean, it does *not* follow that the current step direction is dependent on the previous one.
Also note that the change in step constitutes a change in the first derivative. Note that differentiation is the equivalent of dividing the power spectral density by a value linearly proportional to f. Since the power spectral density of LTP is 1/f, the first derivative of a long term persistent series can be white, i.e. independent.
Demetris Koutsoyiannis has a nice turn of phrase for this. He notes that the expression "long memory" often associated with long term persistence is a misnomer. In fact, the behaviour of long term persistent series is not one of memory, but much more one of amnesia. But the error you make is a common one.
If you want to see an example of a prediction based on fractionally integrated noise, I recommend you look at UC's blog here: http://uc00.wordpress.com/2011/08/30/first-ever-successful-prediction-of-gmt-3-years-done/
UC's prediction is based on half integrated noise, back in 2008. Please note the central prediction dips slightly downward, contrary to your incorrect understanding. FWIW my instinct is that his confidence intervals are too narrow, but the presence of CIs allows his prediction to be tested. Please note UC’s recommendations on making predictions.
When your prediction rises to this standard, we can see if you can do as well as UC did.

An example to help you better understand Bart: a random walk exhibits greater persistence than either a fractionally integrated time series or an autoregressive time series. In fact a random walk can wander far further than fractionally integrated noise; the random walk does not have a defined population mean, whereas fractionally integrated noise does.
I would hope, since you understand how a random walk is generated (the algorithm is rather simple), you would recognise that such a claim that the direction of the current step is dependent on the previous step is quite incorrect. The next step direction in a random walk is random – by definition! So the direction of steps is independent from sample to sample.
Fractionally integrated time series are no different in this regard. One complication of fractionally integrated time series is what constitutes the next step – scaling properties and self similarity and all. It’s a complex topic.

Spence_UK says:
October 5, 2013 at 1:24 pm “Note that differentiation is the equivalent of dividing the power spectral density by a value linearly proportional to f.”
I assume you were in a hurry, but I believe you meant to say “multiplying”, and differentiation multiplies the PSD by f squared. “…the random walk does not have a defined population mean, whereas fractionally integrated noise does..”
I think we are possibly speaking of two different things, and need to more carefully delineate them.
First of all, it is not true that “the random walk does not have a defined population mean”. The expected value of a random walk, specifically the accumulation from zero of zero mean independent increments, is in fact zero. It is the excursion from zero which is expected to increase with time.
I suspect the property you were referring to was stationarity. But, fBm is not stationary. I am not sure about the process you and Dr. Lund refer to as “fractionally integrated noise”, but both you and he seem to maintain that it is stationary. Frankly, I do not see how this can be when the spectrum appears to still have a singularity at zero, but I have not looked closely at this yet.
In any case, if you have a difference of opinion with the Wikipedia page to which I linked, perhaps you should sign up as an editor and make your disagreement known.

Bart, quite right, I originally wrote something different then edited it and garbled it. Yes, multiplying rather than dividing. But the key point here is that persistence does not result in the property that you claim exists (that the direction of the current step is tied to the previous step). I hope we are clear on that point now, having given both empirical examples and the underlying principles.
As for the rest, I am comfortable that what I say is correct, but beware I am using technical terms with specific meaning.
A random walk does not have a defined *population* mean. It is not a stationary process. Note population mean is quite different to the sample mean.
Secondly, fractionally integrated time series (“noise” is not really appropriate and I try not to use it, although occasionally I slip into bad habits, especially when commenting on blogs…) can be stationary processes. They have a defined, fixed population mean. However, the sample mean is a poor estimator of the population mean (and, on top of that, does not improve much with averaging).
For a more thorough treatment of the concept of stationarity in the context of long term persistence I would recommend the following presentation: Hurst-Kolmogorov dynamics and uncertainty

Spence_UK says:
October 5, 2013 at 5:26 pm “You will notice those plots have many local minima and maxima.”
Yes, but the point is what is likely. Sooner or later, they are likely to switch direction. The question is, how long do they tend, on average, to go largely in one direction or the other before switching to go the other way? I specifically do not mean every bump or bobble which switches directions, but the longer term, quasi-trends. This is largely determined by how strongly succeeding points are positively correlated on a particular timescale. “But the key point here is that persistence does not result in the property that you claim exists (that the direction of the current step is tied to the previous step).”
I don’t think I see that, at least not yet from the point of view of this particular argument. As I mentioned, the differentiated PSD is weighted by the frequency squared, so you cannot get a flat spectrum except in the particular case of random walk. It is indeed true that a random walk is expected neither to go up nor down based on where it was previously heading – it is expected to stay the same because it is a Martingale – but that is the special case of H = 1/2. If you differentiate a process with H > 1/2, you still end up with a downward slope in the PSD, which suggests continued long range positive correlation.
IIRC, one of the links you presented previously estimated H = 0.92, so it appears to me there is still, from this point of view, long range correlation which is significantly positive for an extended interval within the temperature series. This interpretation appears to jibe with what the Wikipedia excerpt to which I linked was stating.
Getting back to the PSD question, one of the reasons I have always been bothered by descriptions of fBm is that it is generally rooted in these power law PSD descriptions. But, the PSD is not really even defined for non-stationary processes, so all we are seeing in PSD estimates is basically a 1-d projection of a 2-d entity. It’s a little like creatures of Flatland trying to work out what 3-d creatures look like based on the various cross-sections they observe. Now we, as 3-d creatures, could do that with enough cross sections, but the Flatlanders have no conception of the 3rd dimension, and so can never visualize it within their sphere (or, circle) of comprehension. Similarly, we have no widely utilized 2-d analysis tool of which I am aware which would allow us to fully comprehend what is going on in every case.
I see, e.g., no reason that the processes which produce 1/f signatures in PSD estimates should have a unique, all-encompassing description. Many non-stationary processes can produce approximately 1/f behavior when processed through a PSD estimation routine. I think better tools which observe the full dimensionality are needed. I could say a bit more on this topic, based on some of my memories of trying to hash out such an approach back when I was studying these topics, but the memories are a bit faded, and it would probably be difficult to get across the concepts in this venue. It had something to do with 2-d Laplace transforms, but that is as far as I can go into it at this moment, for whatever it is worth. “Secondly, fractionally integrated time series … can be stationary processes.”
Hmm… that’s a little less general than your previous statement. I will have to read your link, and reacquaint myself with these things to either find out precisely what you mean, or ask questions which would help elucidate it. I doubt that will happen before this thread gets closed, so maybe we will take it up again at a later time.
But, again, I still believe that the two coincidences in the temperature data set to which I have referred previously indicate that there is a resonance involved, rather than just a random fractal drift, and this indicates that it is likely that temperatures in the next few decades will behave similarly to the era between roughly 1940-1970. I was hoping to make the question moot from your point of view, but it appears that sort of weak agreement will not happen on this thread.
But, I appreciate our civil discussion, as much as I regret the ugliness which passed between myself and Dr. Lund. If the clock runs out on us before there is time to say so, thanks for your time and insights.

Bart, thanks for the discussion as well, and I think all three of us (you, me, Dr Lund) would probably get on pretty well (even if we disagree on some points) if we were to meet over a beer rather than over the internet. Such is the nature of this debate.
I was thinking more about your observation that the first derivative multiplies the PSD through by f squared. This is a point we are in agreement on (once you corrected my silly errors above). When you put white noise through this, you get greater amplitudes at high frequencies than low. This means that we should expect differences to reverse.
I did a quick experiment to confirm this, and indeed randomly generated white noise exhibits anti-persistence in its first derivative. This can be thought through from a probabilistic perspective as well; consider three drawn samples from an i.i.d. Gaussian random number generator. We then select only those cases where the first difference is positive (i.e. sample 2 is greater than sample 1). For this pattern to continue, sample 3 must be greater than sample 2. So sample 3 must be the maximum of the three samples. This happens just one in three times. So for white noise, we are twice as likely to reverse the step as we are to continue it. This is quite consistent with what we have discussed on derivatives.
We also know and agree a random walk step direction is independently random (which is obvious by the algorithmic definition of a random walk, but confirmed by the behaviour of the derivative).
So I ran a short procedure in MATLAB to artificially generate flicker noise, and tested the probability that a positive step would be followed by a positive step. In fact I tested all 3 cases with a sample of 1000 points and my results were:
White noise 36% (expected 33%)
Flicker noise 39% (expected ??)
Random Walk 51% (expected 50%)
As you can see, flicker noise (a form of fractionally integrated time series) sits between white noise and a random walk in terms of the dependency on step direction. Flicker noise is more likely to reverse direction than continue its current direction, although the probability is close to 50% so it takes a reasonable number of samples to confirm this.
The other thing to be careful of is fractionally integrated time series show different properties at different scale, and are continuous systems, so the concept of a “step” can be defined but is potentially misleading.
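For anyone wanting to reproduce this, here is a rough equivalent (Python rather than MATLAB; note the "flicker" series here is a truncated ARFIMA(0, d, 0) filter with d = 0.4, an assumption standing in for whatever generator was actually used above, so the exact percentage will differ):

```python
import random

random.seed(2)

def continuation_prob(x):
    """Fraction of consecutive step pairs that keep the same direction
    (equivalently, the chance an up-step follows an up-step)."""
    up = [x[i + 1] > x[i] for i in range(len(x) - 1)]
    same = sum(1 for i in range(len(up) - 1) if up[i] == up[i + 1])
    return same / (len(up) - 1)

n, K, d = 10000, 150, 0.4
w = [random.gauss(0.0, 1.0) for _ in range(n + K)]

white = w[K:]                      # i.i.d. Gaussian noise
walk, s = [], 0.0
for v in white:                    # cumulative sum -> random walk
    s += v
    walk.append(s)

# Truncated ARFIMA(0, d, 0): x_t = sum_k psi_k * w_{t-k},
# with psi_0 = 1 and psi_k = psi_{k-1} * (k - 1 + d) / k
psi = [1.0]
for k in range(1, K):
    psi.append(psi[-1] * (k - 1 + d) / k)
frac = [sum(psi[k] * w[K + t - k] for k in range(K)) for t in range(n)]

p_white = continuation_prob(white)
p_frac = continuation_prob(frac)
p_walk = continuation_prob(walk)
print(p_white, p_frac, p_walk)
```

The ordering comes out the same as reported above: the fractionally integrated series sits between white noise (~33%) and the random walk (~50%) in continuation probability.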

Spence_UK says:
October 6, 2013 at 11:10 am “Such is the nature of this debate.”
And, the nature of the internet, which suspends the normal bounds of propriety we observe when dealing with one another directly. “When you put white noise through this, you get greater amplitudes at high frequencies than low. This means that we should expect differences to reverse… Flicker noise is more likely to reverse direction than continue its current direction, although the probability is close to 50% so it takes a reasonable number of samples to confirm this.”
But, doesn’t flicker noise have H < 0.5, which is the boundary noted in the Wikipedia article? So, isn't your finding actually consistent with this?

No, flicker noise is H>0.5 and H<1 (aka 1/f noise, or excess noise). All of these have higher spectral power density at the lower frequencies in comparison to the higher frequencies so exhibit long term persistence.
The wikipedia article is incorrect in its statement. Fractionally integrated noise is more likely to reverse direction than continue in the current direction. The probability of this lies between white noise (67% likely to reverse) and a random walk (50% likely to reverse).
Note this also explains UC's prediction quite nicely. A reversal is slightly more probable than not, so the central prediction is slightly down in comparison to the late 20th cent. warming.

OK, apparently my interpretation of H is not quite right. This particular branch of stochastic processes has not been my bag for… decades. But… “So I ran a short procedure in MATLAB to artificially generate flicker noise, and tested the probability that a positive step would be followed by a positive step.”
What we really want is not the conditional expectation of one sample at a time, but of many. What is the likelihood of an overall trend slope being, say, positive in the future, given that it has been positive in the past? And, what timelines are associated with the past trend, and the projected one?
I do not really have the time to formulate a precise statement of the question I am trying to ask which can be tested. But, given that the correlations are all positive, I suspect that some measure capturing this inchoate thought might well prove the Wikipedia statement correct in some sense. I am, at least, predisposed to believe that the author of the statement had something upon which to base it. Even if he was wrong, I don’t think we can make the determination until we know precisely what he meant.

Bart, I cannot think of any reasonable definition that would result in the statement in wikipedia being true. Remember that fractionally integrated time series are self-similar – a “trend” at one scale is simply a step at another scale. If it holds at one scale, it will hold at all.
Of course it is always difficult to know exactly what was intended without a formal definition. But I cannot think of any situation where the wikipedia statement is correct.

Another note – the correlations are all positive, but that is with respect to the absolute values, not the relative change in value (step). That is, if one value is above the population mean, then the next is likely to be also above the population mean. That does not mean the rate of change will be.

Spence_UK says:
October 7, 2013 at 12:31 pm
The question is fairly moot from my perspective. But, how about we just try a simple calculation. Let me see…
Given the normalized autocorrelation function
E{x(t2)x(t1)} = 0.5 * ( abs(t2)^2H + abs(t1)^2H – abs(t2-t1)^2H )
Then,
E{(x(2) – x(1)) * (x(1) – x(0))} = 2^(2H-1) – 1
So, the succeeding increment is expected to be the same sign if 2^(2H-1) > 1, i.e., if H > 0.5. Mmmm… Seems to say Wikipedia is on track, I think.
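The expansion can be checked mechanically (Python for illustration; R and increment_cov are just my names for the covariance formula and the expanded increment product):

```python
# Verify E{(x(2) - x(1)) * (x(1) - x(0))} = 2^(2H - 1) - 1 by expanding
# the product with the fBm covariance
# R(t2, t1) = 0.5 * (|t2|^2H + |t1|^2H - |t2 - t1|^2H).
def R(t2, t1, H):
    return 0.5 * (abs(t2) ** (2 * H) + abs(t1) ** (2 * H)
                  - abs(t2 - t1) ** (2 * H))

def increment_cov(H):
    # E{x2 x1} - E{x2 x0} - E{x1 x1} + E{x1 x0}
    return R(2, 1, H) - R(2, 0, H) - R(1, 1, H) + R(1, 0, H)

for H in (0.3, 0.5, 0.75, 0.9):
    print(H, increment_cov(H), 2 ** (2 * H - 1) - 1)
```

The expanded form matches 2^(2H-1) - 1 exactly: zero at H = 0.5, negative below it, positive above it.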

Bart, sorry but your algebra is doing something odd. The first expectation operator is the normalised autocorrelation coefficient. But the second expectation operator you have used as if determining the expectation of the series itself – these are two completely different operators, and you’ve used them interchangeably.
As a result, your second expression is just a summation of autocorrelation coefficients, which tell us very little of interest. I agree the autocorrelation coefficients of fractionally integrated time series are all positive, but your conclusion about step of the data itself does not follow.
I don’t have much time right now but it is trivial to give an example to show your expression is wrong. Consider white noise, H=0.5. Your expression says E[(x(2)-x(1))*(x(1)-x(0))] should be zero. I can test this easily in MATLAB (or Excel, or any other tool which can draw independent normally distributed random numbers):
>> x = randn(10000,1);
>> x2 = x(3:end);
>> x1 = x(2:end-1);
>> x0 = x(1:end-2);
>> mean((x2-x1).*(x1-x0))
ans = -0.9610
I did several more draws, and they are all around -1, within about 0.1, which is what I would expect (white noise necessarily tends to reverse the sign for the reasons I gave above). But your expression claims the expected value is zero. Why? Well the autocorrelation coefficients of white noise are zero, since the samples are independent. Your strange sum of autocorrelation coefficients will also be zero. But the expected value from the series is highly negative.
Sorry Bart, but your expression is simply not correct, and the wikipedia comment is still wrong.

Spence, it’s a linear operator.
E{(x(2) – x(1)) * (x(1) – x(0))} = E{x(2) * x(1)} – E{x(2) * x(0)} – E{x(1) * x(1)} + E{x(1) * x(0)}
You just plug in the formula. H = 0.5 is random walk with an initial starting point of zero, so yes, the expectation should be zero. You would test it like this:
>> x = cumsum(randn(3,100000));
>> mean((x(3,:)-x(2,:)).*(x(2,:)-x(1,:)))
ans =
2.3749e-04

Or, rather
x = [zeros(1,100000) ; cumsum(randn(2,100000))];
mean((x(3,:)-x(2,:)).*(x(2,:)-x(1,:)))
ans =
5.1115e-05

Note that with H = 0.5,
E{x(t2)x(t1)} = 0.5 * ( abs(t2)^2H + abs(t1)^2H – abs(t2-t1)^2H ) = min(t2,t1)
which is the normalized autocorrelation of a random walk.

Or, you can do it your way, with a slight change:
x = cumsum(randn(1000000,1));
x2 = x(3:end);
x1 = x(2:end-1);
x0 = x(1:end-2);
mean((x2-x1).*(x1-x0))
ans =
5.0511e-04
though it takes more trials to get the number to generally come out insignificant. Over many trials, it tends to zero.
There appears, then, to be a disconnect between your definition of H and that used on the Wiki page, which may explain why we had different ideas about flicker noise. On the Wiki page, H = 0.5 is the designator for random walk, as is seen in my previous post relating their autocorrelation function for H = 0.5 to the standard one for random walk.

Note that for normalized white noise
E{(x(2) – x(1)) * (x(1) – x(0))} = E{x(2) * x(1)} – E{x(2) * x(0)} – E{x(1) * x(1)} + E{x(1) * x(0)} = -1
which is essentially your monte carlo result.

“H = 0.5 is the designator for random walk”
I am being informal. H = 0.5 is actually the designation for Wiener noise on the Wiki page, the sampled-in-time-data version of which is a random walk.

“The first expectation operator is the normalised autocorrelation coefficient.”
This may be a source of confusion. It is normalized autocorrelation, period. The normalization is to a constant noise level. E.g., for a random walk modeled informally as the sampled-in-time integration of white noise, the white noise input is normalized to have standard deviation sigma = unity. The formula gives not a coefficient of correlation, but the actual correlation.

Bart,
H=0.5 is not the designator for a random walk. You are wrong.
The Hurst exponent H is undefined for a random walk.
H=0.5 is for white noise, i.i.d. gaussian.
Your equations are wrong. Full stop.

Spence_UK says:
October 8, 2013 at 11:03 am
Well, I’m sorry you are getting heated, Spence. Clearly, the Wiki site defines H = 0.5 as being a random walk. That may not be the convention with which you are familiar, but it is what the Wiki site is using.
Look at the autocorrelation function they give. For H = 0.5, it is the autocorrelation function for a random walk.
E{x(t2)x(t1)} = 0.5 * ( abs(t2)^2H + abs(t1)^2H – abs(t2-t1)^2H )
Plug in H = 0.5
E{x(t2)x(t1)} = 0.5 * ( abs(t2) + abs(t1) – abs(t2-t1))
If t2 > t1 and both are positive, then
E{x(t2)x(t1)} = 0.5 * ( abs(t2) + abs(t1) – abs(t2-t1)) = 0.5 * (t2 + t1 – (t2-t1)) = t1
The same will hold when t1 > t2. Hence
E{x(t2)x(t1)} = min(t1,t2)
It appears to me there is likely a clash of conventions here. But, under the convention they are using, they are right.
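The reduction to min(t1, t2) can be spot-checked over a grid of times (Python, purely illustrative):

```python
# With H = 0.5 the fBm covariance should reduce to min(t1, t2),
# the covariance of a standard Wiener process / random walk.
def R(t2, t1, H=0.5):
    return 0.5 * (abs(t2) ** (2 * H) + abs(t1) ** (2 * H)
                  - abs(t2 - t1) ** (2 * H))

ok = all(abs(R(t2, t1) - min(t1, t2)) < 1e-9
         for t1 in range(20) for t2 in range(20))
print(ok)
```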

The same will hold mutatis mutandis when t1 > t2.

“H=0.5 is for white noise, i.i.d. gaussian.”
Perhaps you mean the increments are white?

I know the equations are right the way I am using them. And, they agree with the Wiki site. That’s 2:1. If you are sure you are right, then I think the only conclusion can be that there is a schism in the conventions being used, and this is leading to confusion and frustration.
Sometimes… strike that… Often, the biggest part of the problem is defining the conventions and making sure everyone is on the same page.

I’m not getting heated – I’m just telling you that you are wrong.
If the increments are white, you have a random walk, H is undefined for a random walk.
x = cumsum(randn(…));
Gives a random walk. Since H is undefined for this expression, you cannot use your equations at all for your analysis. H *is* defined for white noise, H=0.5. This is calculated in MATLAB as:
x = randn(…);
As we saw, the expected product between the current step and the next step for a time series of H=0.5 with zero mean and unity variance is approximately -1. This is simple and expected as I described above. But your equation does not give -1; it gives 0, because it is wrong.
The equation you give, less a constant applied to each component, is the equation describing the autocorrelation coefficient. You then interchangeably use this with the expected value of the series, which is an entirely different operator.
I’m explaining it in as simple terms as I can, but if you don’t understand the principles under which the maths is defined, I cannot help you.

Well, these guys, these, and these, contradict you. Honestly, I don’t care. But, clearly, if you take the autocorrelation function (not coefficient, function) provided on this page and provided in many other references as well, then for H > 0.5, succeeding deltas are positively correlated, and this should lead to precisely what the article states, that this “dependence means that if there is an increasing pattern in the previous steps, then it is likely that the current step will be increasing as well.”
You haven’t explained why you think they are wrong, you have only asserted it. I do not know why you think there is a principle contained within your assertion which I should be understanding. You have provided no maths. You have provided no alternative autocorrelation function. You have provided no definitions. What I do not understand is why you think I should merely take your word for it that all these sources are wrong, and only you hold the truth. And then, I do not know what you expect me to do with that information once I have accepted it.
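For what it’s worth, the sign of that increment correlation can be checked numerically from the fBm autocovariance C(t1,t2) = 0.5*(t1^2H + t2^2H − |t2−t1|^2H) quoted in this thread. A Python sketch (the function names are mine; the formula is the one cited):

```python
# Sketch: lag-1 correlation of successive fBm increments, computed from the
# fractional-Brownian-motion autocovariance
#   C(t1, t2) = 0.5 * (t1^2H + t2^2H - |t2 - t1|^2H).
# Assumes unit-variance increments; the H values are illustrative.

def C(t1, t2, H):
    """fBm autocovariance (zero mean, unit increment variance)."""
    return 0.5 * (abs(t1) ** (2 * H) + abs(t2) ** (2 * H) - abs(t2 - t1) ** (2 * H))

def lag1_increment_corr(H):
    """Correlation between the increments x(2)-x(1) and x(3)-x(2)."""
    cov = C(2, 3, H) - C(2, 2, H) - C(1, 3, H) + C(1, 2, H)
    var = C(2, 2, H) - 2 * C(1, 2, H) + C(1, 1, H)  # variance of one increment (= 1)
    return cov / var

for H in (0.3, 0.5, 0.7):
    print(H, lag1_increment_corr(H))
# H > 0.5 gives a positive value (persistence), H = 0.5 gives 0 (white
# increments), H < 0.5 gives a negative value (anti-persistence). Working
# the algebra through, the value equals 2^(2H-1) - 1.
```

So the quoted autocovariance does imply positive correlation of successive deltas for H > 0.5, which is the point in dispute here.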

Ah, okay, I’ve realised why you’re confused. I’m referring to the Hurst exponent, H, in all my discussion here (and the equations you give above are the equations for the normalised autocorrelation *coefficient* of the Hurst exponent).
You have linked there to a different (but unfortunately similarly named) measure, called the generalised Hurst exponent, Hq.
These are different measures and *cannot* be used interchangeably as you have done here.
The coefficient used to measure the properties of fractionally integrated time series is the original Hurst exponent coined by Mandelbrot, H, not Hq. Dr Lund and I have argued that a climate follows a pattern associated with H>0.5, *not* Hq>0.5. These are quite different things and should not be confused in the way you have here.
As I explained, the original Hurst exponent is undefined for a random walk. And, as shown by my analysis above and by UC’s prediction of the behaviour of half integrated noise, the future expectation is in either direction, but the central prediction is slightly down (reversing the recent trend).

Is this autocorrelation function valid or not?
E{x(t2)x(t1)} = 0.5 * ( abs(t2)^(2H) + abs(t1)^(2H) - abs(t2-t1)^(2H) )
If it is, then with H = 0.5, it is equal to the minimum of t1 or t2, which is the autocorrelation function of a random walk.
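That reduction is easy to check numerically. A Python sketch (the function name is mine; the formula is the one quoted above; t1, t2 taken positive):

```python
# Sketch: check that the quoted autocorrelation function
#   E{x(t2)x(t1)} = 0.5 * (|t2|^2H + |t1|^2H - |t2 - t1|^2H)
# reduces to min(t1, t2) when H = 0.5, i.e. the autocovariance of a random walk.

def fbm_cov(t1, t2, H):
    return 0.5 * (abs(t2) ** (2 * H) + abs(t1) ** (2 * H) - abs(t2 - t1) ** (2 * H))

for t1 in range(1, 6):
    for t2 in range(1, 6):
        assert fbm_cov(t1, t2, 0.5) == min(t1, t2)
print("H = 0.5 matches the random-walk autocovariance min(t1, t2)")
```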


The liars picked 95% because it was higher than the last AR, IMO. As some wit commented earlier, it will probably be Cook’s legendary 97% in the next AR, if there be one.

Same same this time around: 95% probability does not equal confidence interval. Unscientific jiggery-pokery!

“Probabilistic estimates of quantified measures of uncertainty in a finding are based on statistical analysis of observations or model results, or both, and expert judgment2.”

2 In this Summary for Policymakers, the following terms have been used to indicate the assessed likelihood of an outcome or a result: virtually certain 99–100% probability, very likely 90–100%, likely 66–100%, about as likely as not 33–66%, unlikely 0–33%, very unlikely 0–10%, exceptionally unlikely 0–1%. Additional terms (extremely likely: 95–100%, more likely than not >50–100%, and extremely unlikely 0–5%) may also be used when appropriate. Assessed likelihood is typeset in italics, e.g., very likely (see Chapter 1 and Box TS.1 for more details).

From what I have seen of the raw data, the ‘confidence limits’ are untenable – the 95% CL on the ‘regression’, for example, should bound roughly 95% of the data. Anyone who has read chapter 2 of a statistics book discussing linear regression would know that the CL on a regression line comprises a pair of hyperbolas (one above the regression line and one below) with their vertices pointing at the overall data mean (the midpoint of the regression line). These hyperbolas are asymptotic to the lines passing through the data mean with different slopes, as illustrated here: http://www.statsoft.com/textbook/multiple-regression/

representing the regression slope plus or minus the CLs on the slope itself.
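For readers who want to see the hyperbola shape concretely, here is a minimal Python sketch of the pointwise confidence-band half-width for a regression line. The data and the t quantile are illustrative assumptions, not values from this thread:

```python
# Sketch of why pointwise confidence limits on a fitted regression line form
# hyperbola-like curves: the half-width at a point x0 is
#   t* . s . sqrt(1/n + (x0 - xbar)^2 / Sxx),
# smallest at x0 = xbar and growing as x0 moves away from the data mean.

import math

x = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]            # illustrative data
y = [1.2, 1.9, 3.2, 3.8, 5.1, 6.2, 6.8, 8.1, 9.0, 9.7]

n = len(x)
xbar = sum(x) / n
ybar = sum(y) / n
Sxx = sum((xi - xbar) ** 2 for xi in x)
slope = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / Sxx
intercept = ybar - slope * xbar
resid_ss = sum((yi - (intercept + slope * xi)) ** 2 for xi, yi in zip(x, y))
s = math.sqrt(resid_ss / (n - 2))   # residual standard error
t_star = 2.306                      # tabulated t quantile, 95%, 8 df

def half_width(x0):
    return t_star * s * math.sqrt(1 / n + (x0 - xbar) ** 2 / Sxx)

# The band is narrowest at the mean of x and flares outward on both sides:
assert half_width(xbar) < half_width(min(x)) < half_width(min(x) - 3)
print(half_width(xbar), half_width(min(x)), half_width(min(x) - 3))
```

Note this is the band for the fitted mean response; the band that should cover roughly 95% of the individual data points is the (wider) prediction interval, though both have the same hyperbolic shape.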

I suspect what they are reporting as a CL is on the variance of the slopes among the *models*, which is totally meaningless, as has already been pointed out elsewhere.

When the IPCC pulls numbers out of their collective behinds this is what happens. I am praying their credibility will be shot sooner rather than later.

Billion more dollars needed to fix report.

Well, we cannot expect the Summary for Policy Makers to reflect the science, can we? After all, the science hasn’t even been published yet! Worse, policy makers had a heavy hand in the wording of the summary of the science which hasn’t yet been published!

A better name might be “Summary of the Policy Makers, By the Policy Makers, For the Policy Makers”.

From the University of Colorado Boulder, Headline: “Shrinking atmosphere linked to lower solar radiation”. In summary, the upper atmosphere has shrunk 30% and cooled by 74 degrees since 1998.

“It is now clear that the record low temperature and density were primarily caused by unusually low levels of solar radiation at the extreme-ultraviolet level,” Solomon said.

C02 had an impact of less than 5%.

Hmmmmmmmmm……

Forgot to post the URL for the shrinking atmosphere article…

http://artsandsciences.colorado.edu/magazine/2010/08/shrinking-atmosphere-linked-to-low-solar-radiation/

When I was going to school, a mark between 85% and 100% was considered an A grade. I do not know what I would have done if it had been further divided up into “likely,” “very likely,” and “most likely.” I would have been thoroughly confused.

The Belgian news is still on the CAGW line. In the Netherlands there was at last more nuance. Marcel Crok pointed out in a very gentle way that the models don’t fit and the warming could be far less than assumed, whereupon an AGW supporter made the remark that the warming disappeared in the oceans. They don’t realise that with models failing to predict the actual conditions, predicting for the year 2100 is scientific blasphemy.

Manny M says:

September 27, 2013 at 11:37 am

From the University of Colorado Boulder, Headline: “Shrinking atmosphere linked to lower solar radiation”. In summary, the upper atmosphere has shrunk 30% and cooled by 74 degrees since 1998.

“It is now clear that the record low temperature and density were primarily caused by unusually low levels of solar radiation at the extreme-ultraviolet level,” Solomon said.

C02 had an impact of less than 5%.

Hmmmmmmmmm……

Manny M says:

September 27, 2013 at 11:38 am

Forgot to post the URL for the shrinking atmosphere article…

http://artsandsciences.colorado.edu/magazine/2010/08/shrinking-atmosphere-linked-to-low-solar-radiation/

Stephen Wilde has been saying this for some time.

My guess is Julia Slingo will not respond or acknowledge any problem, that is her role!

The following is a comment from the InterAcademy Council review of the IPCC process and procedures in 2010:

“The IPCC uncertainty guidance urges authors to provide a traceable account of how authors determined what ratings to use to describe the level of scientific understanding (Table 3.1) and the likelihood that a particular outcome will occur (Table 3.3). However, it is unclear whose judgments are reflected in the ratings that appear in the Fourth Assessment Report or how the judgments were determined. How exactly a consensus was reached regarding subjective probability distributions needs to be documented.”

I couldn’t find any such documentation in the SPM. Perhaps it’s in the AR5 WG1 report coming out soon? Or perhaps it doesn’t exist. Hmmm….

Over a year ago we had the SREX IPCC report that said.

Recently we had the draft Summary for Policymakers. Did the instruments develop over the last week? Wasn’t it scary enough? Bring in the government representatives and what do you get: a consensus among civil servants. The scientists are left scratching their heads, but he who pays the piper……..

since the mid-20th century, and likely that human influence has more than doubled the probability of occurrence of heat waves in some locations (see Table SPM.1). {10.6}

http://www.climatechange2013.org/images/uploads/WGIAR5-SPM_Approved27Sep2013.pdf

Sorry,

Messed up the html. The last paragraph should also be indented and is part of the quote.

Since a linear trend model plus error has least squares estimators that are asymptotically normal, one standard error of the trend margin is about 0.2/1.645 degrees C, or on the order of 0.15. So 0.85, the quoted trend estimate, has a Z-score of about 5+, which is way more significant than 95%, 99%, 99.99%, etc…. Seems to me the authors are downplaying the evidence. While I have not read the report, I wonder what is Doug arguing? That the AR(1) model is inappropriate? Sure. But statistics has advanced way beyond this in the last few decades. Elaborating, put in a fractionally differenced long-memory component in the model if you want. You’re not going to knock that z-score below significant. Please reinforce your argument.
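The back-of-envelope arithmetic can be reproduced directly. A Python sketch using only the SPM’s published 0.85 [0.65 to 1.06] °C figures and the two-sided 90% normal quantile 1.645, under the comment’s own asymptotic-normality assumption:

```python
# Sketch: back out the implied standard error and Z-score from the SPM's
# 0.85 [0.65 to 1.06] degC figure, assuming the interval is a symmetric
# normal-theory 90% CI (1.645 is the two-sided 90% normal quantile).

trend = 0.85
half_width = ((0.85 - 0.65) + (1.06 - 0.85)) / 2   # average half-width ~ 0.205
se = half_width / 1.645                            # implied standard error ~ 0.125
z = trend / se                                     # ~ 6.8, comfortably "5+"
print(round(se, 3), round(z, 1))
```

Of course, this Z-score is only as meaningful as the noise model behind the standard error, which is exactly what is in dispute.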

Hi. Thanks for the post. RE:

Worth noting: the Solomon of this 2010 (?) paper is “Stanley”, not Susan, who used to work at NOAA, and won the “Nobble Prize” along with other members of the Algore fan club.

Weather forecasts use the same approach. If three weathermen say it will rain and one disagrees, there’s a 75% chance of showers.

“While I have not read the report, I wonder what is Doug arguing? That the AR(1) model is inappropriate?”

There’s a bit of history behind this. The IPCC in their last report used 90% confidence intervals based on REML linear regression to state bounds on the amount of warming. Doug had a very long argument with the UK Met Office when they used this model to claim the warming was “significant”, via a number of Questions in Parliament, that eventually ended with the Met Office conceding that AR(1) was unphysical and far less likely than some other noise models they could have used. The chief scientist there, Julia Slingo, said that AR(1) was unrealistic and tried to claim they hadn’t used it in making their assessment, instead like the IPCC using a wide range of evidence.

So evidently, when the new report came out Doug immediately checked what model they were using. Turns out they’re still using AR(1), the model Julia Slingo said was rubbish. The confidence intervals are wrong, because the error model used to generate them is wrong – on the authority of the Met Office chief scientist.

The question is, will she say so?

“Elaborating, put in a fractionally differenced long-memory component in the model if you want. You’re not going to knock that z-score below significant. Please reinforce your argument.”

Actually, yes you can. That was the basis of Doug’s earlier argument – that a trendless ARIMA(3,1,0) model fits the data a thousand times better than AR(1). There are links to the context at Bishop Hill.

—

In case anyone else wants to check, the following R script ought to replicate (roughly) the IPCC’s calculation. I used GISTEMP here, although the IPCC didn’t say which series they used; I’m assuming they used a combination of several global temperature series, or possibly different versions. But the closeness of the result indicates that this is indeed what they’ve done.

# ###################
# Replicate IPCC’s confidence interval for warming 1880-2012
library(nlme)   # nlme contains gls

# Read in GISTEMP data obtained from
# http://data.giss.nasa.gov/gistemp/tabledata_v3/GLB.Ts+dSST.txt
# Downloaded 27 Sep 2013
gistemp <- ts(c(-22,-13,-16,-19,-27,-25,-24,-31,-19,
  -10,-33,-27,-31,-36,-32,-25,-18,-18,-31,-20,-14,-21,
  -30,-36,-44,-29,-26,-42,-43,-46,-45,-44,-41,-39,-23,
  -16,-36,-44,-31,-29,-27,-21,-29,-26,-24,-22,-9,
  -18,-16,-31,-11,-7,-10,-25,-9,-15,-10,3,6,1,6,8,5,6,
  14,1,-8,-4,-10,-11,-19,-6,2,9,-11,-12,-18,4,4,3,
  -4,5,4,7,-20,-10,-4,-1,-5,6,4,-7,2,16,-7,-1,-12,15,
  6,12,23,28,9,27,12,8,15,29,35,24,39,38,19,21,28,
  43,33,45,61,40,40,53,61,60,51,65,59,63,49,59,66,
  55,58)/100, start=1880)

# Do the regression using an AR(1) model, restricted maximum
# likelihood, and show the coefficients of the best fit
glsREML <- gls(gistemp ~ time(gistemp), cor=corARMA(p=1,q=0), method="REML")
coefficients(glsREML)

# Calculate 90% confidence interval on the slope
confint(glsREML, level=0.9)

# Calculate 90% confidence interval on the increase from 1880 to the end of 2012
confint(glsREML, level=0.9)[c(2,4)]*(2013-1880)

# Plot the data and slope on a chart
plot(gistemp)
abline(glsREML)
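For anyone without R to hand, the central estimate (though not the AR(1)/REML confidence interval, which is the whole point of the discussion) can be cross-checked with plain ordinary least squares. A Python sketch using the same GISTEMP values as the script above:

```python
# Cross-check (a sketch): ordinary least squares on the same GISTEMP series
# as the R script. OLS reproduces only the central trend estimate, not the
# AR(1)/REML confidence interval.

anoms = [v / 100 for v in [
    -22,-13,-16,-19,-27,-25,-24,-31,-19,-10,-33,-27,-31,-36,-32,-25,-18,-18,
    -31,-20,-14,-21,-30,-36,-44,-29,-26,-42,-43,-46,-45,-44,-41,-39,-23,-16,
    -36,-44,-31,-29,-27,-21,-29,-26,-24,-22,-9,-18,-16,-31,-11,-7,-10,-25,-9,
    -15,-10,3,6,1,6,8,5,6,14,1,-8,-4,-10,-11,-19,-6,2,9,-11,-12,-18,4,4,3,-4,
    5,4,7,-20,-10,-4,-1,-5,6,4,-7,2,16,-7,-1,-12,15,6,12,23,28,9,27,12,8,15,
    29,35,24,39,38,19,21,28,43,33,45,61,40,40,53,61,60,51,65,59,63,49,59,66,
    55,58]]
years = list(range(1880, 1880 + len(anoms)))   # 1880..2012, 133 values

n = len(anoms)
xbar = sum(years) / n
ybar = sum(anoms) / n
slope = sum((x - xbar) * (y - ybar) for x, y in zip(years, anoms)) \
        / sum((x - xbar) ** 2 for x in years)
rise = slope * (2013 - 1880)   # same scaling as the R script
print(round(rise, 2))          # lands near the SPM's central estimate of 0.85
```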

IPCC news flash ,the global temperature trend going forward is going to be DOWN, not up.

Natural causes can explain the temperature rise from 1880-1998: high solar activity, plus a warm PDO post-1980 running up through 1998 and featuring more El Nino activity.

Davidmhoffer:

Don’t you mean “Summary of the Money Makers, By the Money Makers, For the Money Makers”?

Nullius in Verba says: September 27, 2013 at 1:38 pm: “Turns out they’re still using AR(1), the model Julia Slingo said was rubbish.”

Would you care to quote Julia Slingo saying AR(1) was rubbish?

This seems to be just another episode in Doug Keenan berating people for not using his pet ARIMA(3,1,0) model, which gets a better fit at the expense of extra parameters and physical impossibility. But the IPCC is not claiming that linear+AR(1) is the best model of temperature. They are simply using it as the basis for calculating temperature change over the period.

According to the MO, ARIMA(3,1,0) would give a temperature change of 0.73°C, which is well within the IPCC stated range. So I can’t see how this can be construed as an error.

Simon, except they are not “making money”… they are spending our money. How about “Summary of the Tax Spenders, By the Tax Spenders, For the Tax Spenders”.

There, much better.

Summary for ScareMongers — 100 Ways to Spread Scare Stories and Make a Million!

Manny M says:

September 27, 2013 at 11:37 am

Would like to hear Dr. Leif Svalgaard’s take on this.

Nick,

“Would you care to quote Julia Slingo saying AR(1) was rubbish?”

“However, considering the complex physical nature of the climate system, there is no scientific reason to have expected that a linear model with first order autoregressive noise would be a good emulator of recorded global temperatures, as the ‘residuals’ from a linear trend have varying timescales.”

You could have found that yourself. Doug linked to it.

“But the IPCC is not claiming that linear+AR(1) is the best model of temperature. They are simply using it as the basis for calculating temperature change over the period.”

They’re claiming that these are 90% confidence intervals on the actual temperature rise. And implying, to a statistically non-literate audience who wouldn’t recognise the issues with the AR(1) choice, that they would be justified in thinking these have a 90% probability of covering the value that is being estimated. (Which from a Bayesian point of view is not true, either. That would be a ‘credible interval’, not a ‘confidence interval’. A common error, that.)

I know there’s this thing about “not giving ammunition to sceptics”, but wouldn’t it be simpler, more straightforward, and a lot less desperate, on hearing that the IPCC was basing its confidence intervals on a linear+AR(1) model, to simply say: “That’s wrong; they ought to have either picked a better model, explained the difficulty, or not given confidence intervals at all”?

Nullius in Verba says: September 27, 2013 at 4:06 pm

That’s not Julia Slingo saying AR(1) is rubbish. She’s saying it would not be a good emulator of global temperature. No one ever thought it would be. And that’s nothing to do with AR(1). No linear model would be a good emulator. Nor is Keenan’s model. When he says it is a thousand times more likely, that covers over the fact that it is still impossibly unlikely.

That’s not the point. It’s used as a basis for computing the difference between temperatures at two times. Regression fits are used for this in all kinds of fields, and they work well.

Whatever happened to 1850 to the present??

Or is that Inconvenient??

regards.

Manny M says: September 27, 2013 at 11:38 am
Ian W says: September 27, 2013 at 11:51 am
Bill Parsons says: September 27, 2013 at 1:21 pm

You all may be interested in this little snippet from NASA about shrinking atmosphere and the cooling effects of CO2 in the thermosphere: http://www.nasa.gov/topics/earth/features/coolingthermosphere.html

“It’s used as a basis for computing the difference between temperatures at two times.”

And reporting confidence intervals. It’s the confidence intervals that are the issue.

And it isn’t a computation of the difference in temperatures at two times. To do that, you would subtract the temperature at one time from the temperature at the other. It’s a much simpler process. What you’re trying to do is something a lot more complicated – by “temperature” you don’t mean the temperature, but an underlying equilibrium temperature due to forcing that has short-term weather superimposed on top of it – a purely theoretical concept that assumes that’s how weather works. You’re trying to estimate the change in the underlying equilibrium, and using a low-pass filter to cut out the high-frequency ‘noise’ – a process that requires accurate statistical models of both signal and noise to do with any quantifiable validity.

The mainstream constantly conflate these two concepts – the observed temperature and the underlying equilibrium temperature – because it gives the impression that the statements are about direct empirical observation, while actually being about an unobservable parameter in a question-begging assumed model.

Had they simply given the OLS trend, you could have argued that it was merely informally descriptive, a rough and unscientific indication of how much temperatures generally had gone up, without making any comment on its significance. However, they stuck a confidence interval on it. Worse, they said there was a 90% likelihood of it covering the quantity being estimated. That gives the impression of a scientifically testable statistical statement. But the “confidence interval” here is a meaningless pair of numbers, because it relies for its validity on an assumption known not to be true.

“Regression fits are used for this in all kinds of fields, and they work well.”

Sadly so. That doesn’t make it right, though.

You might well know what they’re doing and that such estimates are to be treated cautiously, but the intended readers of this report don’t. They read it as authoritative science, and if they see confidence intervals being written down, by scientists, they’re going to assume they’re meaningful. In this case, as in so many others, they’d be wrong.

Nullius in Verba says: September 27, 2013 at 5:11 pm: “But the “confidence interval” here is a meaningless pair of numbers, because it relies for its validity on an assumption known not to be true.”

They have given an estimate, and the basis on which it was calculated. And they have given confidence intervals for that calculation. That’s appropriate.

I agree that AR(1) is not the only basis for calculating confidence intervals, and there is a case for others (discussed here). But it’s not meaningless.

Not entirely meaningless, but surely deceptive. It is written in the Summary in such a way as to create a false impression. What it means is not stated clearly. SOP.

You’re right…it’s not “meaningless.” In fact, I’d say it means a lot that they selected an inappropriate basis to determine their estimate and confidence intervals.

“I agree that AR(1) is not the only basis for calculating confidence intervals, and there is a case for others (discussed here). But it’s not meaningless.”

AR(1) is the wrong basis for calculating confidence intervals. It’s meaningless if it’s based on an untrue assumption. Policy makers need to know how accurately you can state the amount of global warming observed. This does not answer that question.

And the IPCC didn’t fully explain the basis on which it is calculated – that’s something we had to deduce from what they did last time around (and buried in an appendix to the main report), and the fact that the interval they report this time matches that method. What the IPCC say is that we can be confident there’s a 90% likelihood that this interval covers the amount of global warming there has actually been. That’s not true.

The 0.65 and 1.06 °C figures are obviously P.O.O.M.A. numbers. [That stands for Preliminary Order of Magnitude Approximation. Really it does.]

(Fake ‘David Socrates’ sockpuppet ID -mod)

Thomas Stocker had the best line of the IPCC press conference, claiming in substance that we do not have enough data about the last 15 years to properly evaluate the “hiatus”. Really, not enough data in the past 15 years!!!! That has been the most instrumented, most observed period ever… except that it showed no warming.

This guy deserves an IgNobel prize, just for that one!!!

Maybe, but if so, it was at least the result of impeccable risk assessment logic:

The clients must receive what they specified. Otherwise they will defund the project.

Well, so long as the MSM censor dissent, does it matter??

The Guardian is back to censoring again – it censored a within-the-rules challenge to Liberal Democrat (the great Greenies of our major UK political parties) Tim Farron to face reality.

I do wonder whether they have the honesty to draw out historical coverage of the Duma under Brezhnev and compare it to how they write some tripe and get fawning Kommisar after Kommisar to say ‘oh wonderful benefactor, how wise you are!’?

It’s really getting beyond a joke.

“I would say: An ARIMA(3,1,0)? Surely you jest in saying that is a thousand times more likely? I would sure like to see that likelihood comparison.”

Follow the earlier Met Office discussion at Bishop Hill. There are links back to Doug’s calculations, which Slingo confirms.

“Do prove me wrong, but the model you propose has a random walk component, meaning the variance increases linearly in time. That is clearly not the case with this data. What you propose isn’t even a stationary model, which should be the null hypothesis of any climate change argument.”

It’s an approximation for a subset of data, like a linear trend is.

It’s a standard procedure in time series analysis – if there are roots of the characteristic equation very close to the unit circle, it makes any short enough segment of the series look approximately as if it was on the unit circle (i.e. random-walk-like), and a lot of the standard tools don’t work or give invalid answers. So the standard approach on analysing a new time series is to first test for unit roots, and if “found”, take differences until the result is definitely stationary. It’s an approximate measure to handle situations when you don’t have a long enough sample to fully explore the data’s behaviour, and to avoid getting misleading results because of that.

Think of it as like the situation you get with the series x(t+1) = 0.999999999 x(t) + rand(t), where rand(t) is a zero-mean Gaussian random number series. Technically it’s AR(1) and stationary, but over any interval short of massive it’s going to look indistinguishable from x(t+1) = 1 x(t) + rand(t), which is a random walk. You don’t have enough data to resolve the difference.

Usually, after testing for unit roots and taking differences, the next step is to test to find what ARMA process best fits the result. This is where the ARIMA(3,1,0) model came from – it is the ARIMA process that best fits the short-term behaviour of the data. The process is analogous to fitting a polynomial to a short segment of a function to model its curves. It’s a local approximation that is not expected to apply indefinitely.
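The near-unit-root point about the series x(t+1) = 0.999999999 x(t) + rand(t) can be demonstrated in a few lines. A Python sketch (the coefficient is the one from the comment; the sample length and seed are illustrative):

```python
# Sketch: an AR(1) process with coefficient 0.999999999 is technically
# stationary, but over any realistic sample it is numerically
# indistinguishable from a random walk driven by the same innovations.

import random

random.seed(0)
phi = 0.999999999
n = 1000

ar1, walk = [0.0], [0.0]
for _ in range(n):
    e = random.gauss(0, 1)            # shared innovation
    ar1.append(phi * ar1[-1] + e)     # "stationary" AR(1)
    walk.append(walk[-1] + e)         # true random walk (unit root)

max_gap = max(abs(a - w) for a, w in zip(ar1, walk))
spread = max(abs(w) for w in walk)
print(max_gap, spread)
# Both paths wander far from zero, yet never differ from each other by more
# than a microscopic amount over this sample - no test on data this short
# can tell the two models apart.
```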

While I agree the AR(1) model is lacking, I can’t understand why people would endorse Keenan’s letter when he seriously suggests the “correct” error margins might include zero. Does anyone actually think we shouldn’t be able to rule out the possibility of no warming in the last 100+ years?

“Nor is Keenan’s model. ”

Did not Keenan say repeatedly that he wasn’t advocating ‘his’ model, but merely using it to illustrate his point?

Brandon,

Depends what you mean by “correct”.

My view is that all this talk about whether changes in temperature are “significant” or not are meaningless without a validated statistical model of ‘signal’ and ‘noise’ derived independently of the data, which we don’t have. We don’t know the statistical characteristics of the normal background variability precisely enough, so it is simply impossible to separate any ‘global warming signal’ from it. All these attempts where you make nice neat mathematical assumptions simply get out what you put in, and your conclusion depends on what you assumed. If you assumed a trend you’ll find a trend. If you assume no trend, you’ll find there’s no trend. Doug’s ARIMA(3,1,0) is merely a standard example derived by the textbook method to illustrate that point.

But it’s got no independent validation, either, so it’s no more “correct” than anything else we could do. It’s simply a better fit.

There are no correct error margins because we don’t have an independent, validated model of the errors. We cannot rule out, by purely statistical means, the possibility of no warming in the last 100+ years. And the IPCC’s confidence intervals are just the same sort of significance testing in disguise.

However, I don’t expect the mainstream is ready to accept that one, so I’ll let it pass. That you accept that linear+AR(1) is “lacking” is a good start, and sufficient for the time being.

http://stevengoddard.files.wordpress.com/2013/09/screenhunter_1013-sep-28-00-13.jpg

mwhite says:

September 28, 2013 at 3:22 am

http://stevengoddard.files.wordpress.com/2013/09/screenhunter_1013-sep-28-00-13.jpg

That’s pretty funny.

Maybe this is too simplistic of a way to look at it, but say you have a system with several subsystems, each of which you are 99% confident that you have a sufficient understanding to model accurately. If there are 6 or more of these subsystems, is it possible for you to be 95% confident of the accuracy of your model of the entire system? 0.99^6 = 94.1%

Given that the climate has well over six subsystems, few of which if any, we are 99% confident that we can model accurately, let alone any interactive effects, it would seem nonsensical simply from a probability standpoint to claim 95% confidence.
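The compounding arithmetic checks out (a Python sketch; the independence of the six subsystems is the comment’s own simplifying assumption):

```python
# Checking the compounding arithmetic above: six subsystems, each understood
# with 99% confidence, compound (assuming independence) to just over 94%.

subsystems = 6
confidence_each = 0.99
combined = confidence_each ** subsystems
print(round(combined, 4))  # -> 0.9415
```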

Did anyone notice when the panel were questioned about the pause, that the models can not be used to predict individual rain showers or storms but can be used to show you the trend?

It seems to have escaped them that their models are not good at predicting trends either.

If your trend predicting model does not map across to recently gathered real measurements, but is consistently producing higher temperature outputs, then your model is wrong and all of your predictions that come from it are wrong.

How many wrongs make a right?

Nullius in Verba:

You make some interesting comments about the suitability of various statistical methods to be used on different occasions.

Could you state what you currently feel would be the best sequence to use to analyse the various global temperature datasets?

I would really like to know what a person skilled in statistical analysis would say is the correct method or sequence to use.

It’s a shame that most statistical methods give answers irrespective of whether the method should have been used in a particular case.

I have seen this before in my own profession, where I’ve been asked to provide a percentage of accomplishment in a project which involves research into unknowns. First, how can I put a finite number value on an open-ended research project? Secondly, since the research is hardly a linear process, how can any percentage of completion be anything but an ‘idiot meter’ indication of MY confidence that I’ll be done by the deadline I’m assigned?

Answers, respectively: I cannot, it cannot.

Everyone knows this, though several try to pretend it’s not the case. For many, who are stymied in a phase of a project, but wish to show that they’re actually making progress so as to not alarm someone higher up in the food-chain, starting with a very low number and leaving themselves lots of room to up the percentage as time goes by is a familiar strategy which allows them to show “progress” even when there is none.

Trouble occurs as the deadline looms, and you have to show progress but have less and less room before hitting 100%. Where weeks ago you could make 10% per week, at 85%, you can’t. The “idiot meter” indications now take on a definite asymptotic curve, approaching completion.

At what point do people stop and say, “You’re Bee Essing me, right?”

Apparently, the IPCC and their believers aren’t at that point yet, and think they still have room to increase the numbers…we also know that they create more room by lowering their starting point…

Anyone who puts confidence in an “idiot meter” indication…. Well, there’s a REASON we call them “idiot meters”…

“Could you state what you currently feel would be the best sequence to use to analyse the various global temperature datasets?”

The primary problem, like I said, is the lack of a validated model for the background noise. This is where effort needs to be concentrated. Until we have one, none of the methods are going to give reliable answers.

More sophisticated studies in detection and attribution (‘detection’ is what we’re looking at here) use the big climate models to generate statistics on the background variation. There’s a bit more reason to pay attention to these, since they are at least partially based on physics. But there are lots of approximations and parameterisations and fudge-factors galore, they’re not validated in the sense required, they don’t match observations of climate in many different aspects and areas, and their failure to predict the pause (or rather, pauses of a similar length and depth) falsifies them even at the game they were primarily designed and built to play – global temperature.

Because they’re not validated, the fact that they can’t produce a rise like 1978-1998 doesn’t mean anything. But because they’re not validated, the fact they can’t produce a pause like 1998-2013 doesn’t mean anything either, except that one way or another the models are definitely invalidated. The pause doesn’t show that global warming theory is wrong, because it could just be that the models underestimated the natural variation.

If, one day, they can build a climate model that can be shown to predict climate accurately over the range of interest, then the correct approach would be to use this to generate statistics on trend sizes over various lengths, and use that to perform the trend analysis and confidence intervals and so on. That would be the right way of doing it. But we’re not there yet.

“I would really like to know what a person skilled in statistical analysis would say is the correct method or sequence to use.”

Thanks! But I’d only describe myself as ‘vaguely competent’ not ‘skilled’. There are a lot of people far better at this stuff than me!

The 95% # is total f***ing b*llsh*t.

Paul Vaughan is right. As the temperature continues to decline, the IPCC’s ‘confidence’ continues to rise.

It’s a ‘confidence’ scam.

Nullius in Verba says:

September 28, 2013 at 1:21 am

You’re doing a great job, even if it appears to be falling on deaf ears. lund@clemson.edu appears to think that the variance of a random walk expresses a high-frequency noise level increasing in time. He or she is mistaking an ensemble statistic for a sample statistic.

(@ dbstealey, September 28, 2013 at 12:29 pm)

We’re dealing with kamikaze suicide-bomber types. Status quo daily wuwt operations alone can’t and won’t stop such a viciously aggressive force of nature.

For a thorough scientific treatment of a similar perspective as that which Doug is presenting here, I would recommend people read Cohn and Lins 2005 (“Nature’s Style: Naturally Trendy”). It describes in some detail different calculations and the consequence thereof.

http://www.timcohn.com/Publications/GRL2005Naturallytrendy.pdf

Saying AR(1) CIs provide some insight is, I am afraid, incorrect. It is like saying IID CIs would provide some insight. Well, no, it just gives an arbitrary and meaningless set of CIs which tell us nothing about the real world. Likewise AR(1) tells us nothing about the real world.

For Brandon: you ask a good question, and in fact the CIs do not say we cannot tell the world is warming. There are two separate tests; one is can we tell whether the world is warmer than 100 years ago. To do this, we would use the error associated with the measurement of global temperature. And I think we can be quite confident that the world has warmed. But this has nothing to do with AR(1) CIs, since the error associated with temperature measuring equipment is not AR(1).

The second question is: can we rule out the possibility of natural variability causing that warming. For this, we need a model for how temperature behaves naturally, and that is where the AR(1) assumption comes in. But the AR(1) assumption is as inappropriate as IID and the CIs generated are meaningless. The consequences of this are outlined in Tim Cohn’s article above.
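To make the width difference concrete, here is a minimal numerical sketch (Python with NumPy; the synthetic series and the AR(1) coefficient of 0.6 are purely illustrative, and the (1 + rho) / (1 - rho) variance inflation is the textbook large-sample AR(1) correction, not the IPCC’s exact procedure):

```python
import numpy as np

def trend_ci_halfwidth(y, rho=0.0, z=1.645):
    """90% CI half-width for an OLS trend slope (per step).
    For AR(1) errors, inflate the slope variance by the standard
    large-sample factor (1 + rho) / (1 - rho); rho = 0 gives IID."""
    n = len(y)
    t = np.arange(n, dtype=float)
    X = np.column_stack([np.ones(n), t])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    s2 = resid @ resid / (n - 2)                  # residual variance
    var_slope = s2 / np.sum((t - t.mean()) ** 2)  # OLS slope variance
    var_slope *= (1 + rho) / (1 - rho)            # AR(1) inflation
    return z * np.sqrt(var_slope)

# 133 "years" of synthetic data: small trend plus noise
rng = np.random.default_rng(0)
y = 0.006 * np.arange(133) + rng.normal(0.0, 0.1, 133)
iid = trend_ci_halfwidth(y, rho=0.0)
ar1 = trend_ci_halfwidth(y, rho=0.6)  # wider interval
```

With rho = 0.6 the interval is exactly twice the IID width, since sqrt(1.6 / 0.4) = 2; a long-memory model widens it further still, which is the point of Cohn and Lins.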

Sorry for the double post. I felt for clarity (particularly wrt Brandon’s question) I should quote from Cohn and Lins’ conclusions:

[…] with respect to temperature data there is overwhelming evidence that the planet has warmed during the past century. But could this warming be due to natural dynamics? Given what we know about the complexity, long-term persistence, and non-linearity of the climate system, it seems the answer might be yes. Finally, that reported trends are real yet insignificant indicates a worrisome possibility: natural climatic excursions may be much larger than we imagine. So large, perhaps, that they render insignificant the changes, human-induced or otherwise, observed during the past century.

Aargh! Double paste in the above italicised quote – the same text is repeated twice. Sorry 🙁 If a mod could fix I would be hugely grateful… otherwise readers should just read half of it 🙂

If it has been repeated twice, it must be there three times. Can only see one repetition?

Spence_UK (September 29, 2013 at 1:29 am) wrote:

“But could this warming be due to natural dynamics?”

Centennial timescale warming not only “could” be natural — it is natural.

Anthony/Mods

umm

the title

SIGNIFICANT missed an I fellas:-)

[Done. Mod]

As painful and ridiculous as the various IPCC ARxx reports are, at least we know there will never be an AR15 report. I wonder, will they go from AR14 to AR16 like hotel floors go from 12 to 14?

Dudley, very good, yes it is repeated once 🙂

Paul, indeed, that is the null hypothesis (although some want to reverse it…)

Bart says:

September 28, 2013 at 12:35 pm

Nullius in Verba says:

September 28, 2013 at 1:21 am

You’re doing a great job, even if it appears to be falling on deaf ears. lund@clemson.edu appears to think that the variance of a random walk expresses high frequency noise level increasing in time. He or she is mistaking an ensemble statistic with a sample statistic.

**************

Lund sez: I hope you guys know that I have been teaching graduate time series for 20 years at the PhD level in a math department. Much of the above is gibberish. Fine… tell me I don’t know jack about the topic. That I don’t understand sampling variability versus theoretical means. Somehow, I don’t think my stint as chief editor of the Journal of the American Statistical Association puts me in the statistical hack category.

As for what the best model is, it contains errors that are fractionally differenced (not fully differenced). How do I know this? I’ve done the comparison on the CONUS series (admittedly not the global series).

Nullius, we don’t fit random walk models to temperature series because they are overdifferenced and have physically unreasonable properties. Please read about fractionally differenced, fully differenced (ARIMA), and ARMA series in a time series text. All I’m trying to tell you is that if one accounted for the longer memory issues in the series than AR(1) models (yeah, one can do better than AR(1)), you still wouldn’t get different conclusions.

You might want to read Bloomfield and Nychka (1992), Climatic Change.

Lund@clemson.edu says:

September 29, 2013 at 3:52 pm

This is total gibberish. Your words from a previous post:

“Do prove me wrong, but the model you propose has a random walk component, meaning the variance increases linearly in time. That is clearly not the case with this data.”

There is no such “clearly”. A sample series of a random walk does not grow more noisy as you move away from the origin. It simply wanders from the origin, with 1-sigma bounds growing as the square root of time. There is no way to identify by inspection whether this data series has such a non-stationary component, as it wanders quite a bit. So clearly, your assertion that it was “clearly not the case” is wrong.

“All I’m trying to tell you is that if one accounted for the longer memory issues in the series than AR(1) models (yeah, one can do better than AR(1)), you still wouldn’t get different conclusions.”

Incorrect. The PSD of the detrended temperature series looks like this. There are significant AR(2) processes with energy concentrated at frequencies associated with periods of about 65 and 21 years. You account for these, and I guarantee your conclusions will be wildly different.

Lund@clemson.edu

Please read the paper by Cohn and Lins I linked to above. It shows if you deal with longer range dependencies properly you DO get different conclusions.

Please note that time series that exhibit long term persistence (such as climate) result in nontrivial biases in estimated parameters. Many people who have classical training in time series analysis miss this important point.

“Were those numbers calculated, or just pulled out of some orifice?”

Not constructive.

… but big thank yous to both Nullius and Lund, who fought bravely and on topic. I wish I could contribute something more meaningful than a compliment from the peanut gallery.

I judged it an even match until Lund collapsed into an argument to the authority of his own resume, and attributed Bart’s comment to Nullius rather than responding to Nullius, September 28, 2013 at 12:17 pm, which looks to me like the winning answer to the thread.

Don’t be a stranger, Lund. People who enjoy thinking will take the time to read you here, and even if our intent be to attack you, you’ll be reaching a wider and more attentive audience than your Journal ever did. It ain’t that bad.

Bart said “There is no such “clearly”. A sample series of a random walk does not grow more noisy as you move away from the origin. It simply wanders from the origin, with 1-sigma bounds growing as the square root of time.”

Bart, you are very wrong. Your first two sentences flatly contradict each other.

If X_t is a random walk at time t, then

X_t = X_{t-1} + Z_t, t >= 1, with X_0 = 0

and Z_t is some IID noise. So X_t = Z_1 + Z_2 + … + Z_t, and the variance of X_t = t times the variance of Z_1. I think you have been told this!!!! Standard deviations are the square root of the variance.
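That linear growth of the ensemble variance is easy to verify by simulation (a sketch in Python with NumPy; the path count and seed are arbitrary):

```python
import numpy as np

# Monte Carlo check: for X_t = X_{t-1} + Z_t with X_0 = 0 and
# IID standard normal Z_t, the ensemble variance of X_t is t * Var(Z_1).
rng = np.random.default_rng(42)
n_paths, n_steps = 20000, 100
Z = rng.standard_normal((n_paths, n_steps))
X = np.cumsum(Z, axis=1)        # each row is one random walk path
var_at_t = X.var(axis=0)        # variance across the ensemble at each t
# var_at_t[t-1] should be close to t (here Var(Z_1) = 1)
```

Note this is an ensemble variance over many paths; any single row of X just wanders smoothly, which is the sample-versus-ensemble distinction being argued in this thread.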

Grover Trattingham says:

September 30, 2013 at 10:44 am

“I think you have been told this!!!”

I think I wrote it!!!!!

The mean square error does not tell you everything you need to know about how a time series varies. You need to know the full autocorrelation.

Random walks look like this (upper plot). Not like the lower plot.

When we speak of “noise”, we generally mean higher frequency variation. Random walks do not display this, as the accumulation of independent increments naturally attenuates high frequencies.

There is very clearly some random walk-like behavior in the temperature data within the timeline of interest. Lund made an error. Perhaps in a rush to write something in opposition, but an error nevertheless.

At the end of the day the “global” climate is simply the sum of its factors – an equation. Some of the factors we know, or think that we know, and can attribute a value to them. There are many similar factors that we know, or think that we know, must feature, but to whose value we can only guess. We know that other factors exist, but we do not know how many, and can attribute no weight to them. The equation is, for the moment, unsolvable. It follows that any attempted solution, no matter how esoteric the statistical manipulation, can be allocated a very precise confidence level, and that level is zero.

Yessir! Knocked that one out of the park!! I’m not a statistician, but I can tell when we’ve degenerated to the point of discussing the size of angels’ arses and the area of a pin’s head. It is an incontrovertible fact that GCMs are NOT reality. This is simply the modern argument about determinism taken to a new level of abstraction: it is presently impossible to put all the variables into a program and have it accurately model the real world. Even were it possible to “model” the real world, it could not accurately APE the real world, as we cannot predict all the possible extra-solar input to the system, even if we DO know its impact (currently we do not).

To assert that we know enough about all the variables, and that they are all accurately accounted for in the GCMs, such that those GCMs are accurate enough that a statistical analysis of their results is the equivalent of a statistical analysis of actual historical record, is utter absurdity.

Neither GCMs nor their human, thus fallible, programmers are prescient to any reasonable degree. Anyone who argues to the contrary, regardless of the math they cite or the degrees they claim, is a feather merchant. The Second Law of Thermodynamics is supreme, the universe moves toward increased entropy, and computer models are not reality, and cannot be made to be.

In spite of all the alarmist’s assertions to the contrary, reality persists in being what it is, not what they wish it to be.

Sorry, the preceding in agreement with and appreciation of @ Ken Harvey:

At the end of the day the “global” climate is simply the sum of its factors – an equation. Some of the factors we know, or think that we know, and can attribute a value to them. There are many similar factors that we know, or think that we know, must feature, but to whose value we can only guess. We know that other factors exist, but we do not know how many, and can attribute no weight to them. The equation is, for the moment, unsolvable. It follows that any attempted solution, no matter how esoteric the statistical manipulation, can be allocated a very precise confidence level, and that level is zero.

Bart,

“lund@clemson.edu appears to think that the variance of a random walk expresses high frequency noise level increasing in time.”

What I think Lund is talking about is an initiated random walk. That’s a random variable that has a known value (zero, say) at the first time, is Bernoulli +/-1 at the second time, Binomial (probabilities 1/4, 1/2, 1/4 of values -2, 0, 2) at the third time, and so on. The variance of the variable at each timestep increases linearly with time, which is equivalent to saying the standard deviation increases as the square root of time. (Variances of independent variables add.)

A random walk, however, extends forever back in time too and has infinite variance at every time; all you can talk about is the variance of differences in position for times with a given separation, and stuff like that. Non-stationary processes are indeed very strange objects, and we tend to abuse the rigorous mathematics somewhat in practical applications. It’s only because we’re discussing a finite segment of data that we can get away with it.

Lund,

“Nullius, we don’t fit random walk models to temperature series because they are overdifferenced and have physically unreasonable properties. Please read about fractionally differenced, fully differenced (ARIMA), and ARMA series in a time series text. All I’m trying to tell you is that if one accounted for the longer memory issues in the series than AR(1) models (yeah, one can do better than AR(1)), you still wouldn’t get different conclusions.”

Ummm. We were talking about ARIMA. ARIMA(3,1,0), to be specific. ARFIMA, which I presume is what you meant, is a further generalisation. And sure, if you want to argue that ARFIMA would have been an even better choice, I doubt either Doug or I would disagree. Would you agree that the original point still stands – that the IPCC’s use of linear+AR(1) is an untenable choice for a model, given that we know the climate does not behave that way?

But as for whether its properties are physically unrealistic, bear in mind that a linear trend, if extended backwards a few thousand years, drops below absolute zero. Which is impossible. The point being that the linear trend is only intended as an approximation for a finite segment of the series, and in just the same way the ARIMA model is only an approximation for a finite segment. We can be sure that if we had sufficient data, we would eventually be able to resolve the difference.

If you wish to demonstrate that you wouldn’t get different conclusions, that would be interesting.

“You might want to read Bloomfield and Nychka (1992), Climatic Change.”

I prefer Box and Jenkins, thanks.

If a random walk were best, how likely do you think it would be that the mean is not changing in time but the variance is? This is a property of the ARIMA(3,1,0) statistical error model proposed, and should be one reason it is discarded.

Nullius in Verba says:

September 30, 2013 at 2:17 pm

I am speaking of a finite random walk segment as well. Informally, in engineering applications, we usually model such a process as the sampled output of an integrator driven by wideband (effectively “white”, over the frequencies we are interested in), zero mean noise (Wiener Process). We generally assume the input noise is Gaussian which, due to the Central Limit Theorem, is generally a fairly safe assumption. If the input noise has an RMS measure of “sigma”, then the standard deviation of the random walk relative to the initial value grows as sigma * sqrt(time).

The point, though, is that the samples of the random walk are highly correlated in time. If x(t_k) is the process at the kth sampled instant t_k, then the autocorrelation function is E{x(t_k)*x(t_n)} = sigma^2 * min(t_k,t_n). This is not independent noise which flails wildly between the RMS bounds, but rather a slowly evolving trajectory which “walks” up and down over time. Such trajectories typically look like the sample series I plotted in the upper plot here. The standard deviation of sigma * sqrt(time) is a measure of the RMS spread over an ensemble of such processes at the given time, but not of a single sample of such a process.

If you filter out the processes in the temperature record associated with what I suspect is an oceanic-atmospheric resonance at ~65 years, and the ~21 year process which is probably associated with solar cycles, then you are left with something which looks very much like some of these random walk trajectories. There’s no way you can see this by inspection of the raw data. Hence, Lund’s claim that it was “clearly” not there was puzzling. I think it is possible that he momentarily forgot how random walks behave, expecting something more akin to the lower plot at my link above. At least, that was the impression under which I made my comments.

On the other hand, maybe he was saying that any such behavior was clearly not dominant. I can agree with that. Or, maybe he was saying that, as a random walk tends to infinity with time, there clearly cannot be an open ended accumulation over eons of time. I can agree with that, too. But, the answer to that, as you pointed out in a post above, is that over a finite interval of time, a true random walk can be indistinguishable from a stationary low frequency process which is ultimately bounded over time.
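The min(t_k, t_n) autocovariance quoted above can likewise be checked by simulation (a sketch assuming NumPy; the sample times 20 and 50 are arbitrary choices):

```python
import numpy as np

# Monte Carlo check of E{x(t_k) * x(t_n)} = sigma^2 * min(t_k, t_n)
# for a Gaussian random walk sampled at unit intervals.
rng = np.random.default_rng(1)
n_paths, n_steps, sigma = 50000, 60, 1.0
X = np.cumsum(sigma * rng.standard_normal((n_paths, n_steps)), axis=1)
s, t = 20, 50                           # times, in steps (1-based)
cov = np.mean(X[:, s - 1] * X[:, t - 1])  # the process is zero-mean
# theory: sigma**2 * min(s, t) = 20
```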

lund@clemson.edu says:

September 30, 2013 at 4:32 pm

Please do not disregard my point about the PSD. There are powerful, at-least quasi-cyclical processes evident in the data record.

The ~65 year process, in particular, boosted the underlying trend during the latter portion of the 20th century. This natural (obviously, because it has executed more than two complete cycles within the record, before human influence could have been significant) boost in the trend was mistaken for CO2 accelerated warming. If you remove that cyclical influence, you are left with a modest trend which, again, is natural because it has been in evidence since the end of the LIA, before humans could have had an influence.

Properly analyzed, the data show no statistically significant evidence for CO2 induced warming at all.

Now the discussion is getting somewhere.

lund@clemson.edu

Yes, a fractionally integrated solution would be correct. I linked you to a peer-reviewed paper that showed when accounting for the fractionally integrated component, the 95% confidence intervals expand to include zero. I told you to read that paper and warned of the dangers (and difficulty) in estimating this value.

In summary: you claimed using such a model would have no effect, and I linked you to a peer-reviewed paper that does the calculation that shows it has a very large effect.

And without answering the scientific points I raise, you announce you are dropping out of the discussion at this point.

What a surprise.

Bart, the “quasi-periodic” cycles you describe are almost certainly a consequence of fractionally integrated random variations, and not deterministic cycles at all.

Unfortunately you plot your PSD on a linear scale. The CIs of a PSD become very large on this type of scale so the uncertainty is unclear. You should (generally) plot PSD on a log-log or semi-log Y scale. The CIs are fixed width on this type of scale and it is easier to determine if those “quasi periodic cycles” are actually anything of interest.

Spence_UK says:

October 1, 2013 at 6:15 am

“Bart, the “quasi-periodic” cycles you describe are almost certainly a consequence of fractionally integrated random variations, and not deterministic cycles at all.”

They are definitely not deterministic cycles. I never suggested they were. But, they are very likely representative of ordinary resonant systems driven by random excitation.

The ~21 year cycles are likely due to quasi-periodic variations in solar activity. The Hale cycle has a nominal period of about this duration corresponding to oscillation of magnetic polarity (now that I think of it, this might be a manifestation of Svensmark-type cosmic ray modulation). A model for the sunspots, which displays characteristics very similar to observations, can be found here. See, for example, the simulations here and here, and compare to actual sunspot data here. Note, I am pointing out qualitative similarity, not quantitative replication. That analysis is TBD.

Partial differential equations on bounded domains typically produce solutions which can be expanded in a series of normal modes. Random excitation of these normal modes, with some inherent energy dissipation leading to damping, can be represented using a system model such as proffered above for the sun spots. This is standard operating procedure in, e.g., modeling structural vibrations, and is very widespread and well-established.

IMO, the ~65 year process is very likely such a normal mode excitation of the oceanic-atmospheric system. The random excitation doesn’t even actually have to be wideband – just nominally stationary with repeatable energy levels at the resonant frequency.

“Unfortunately you plot your PSD on a linear scale.”

It is better for seeing the spikes indicative of the resonances. A log-log scale is useful for observing power-law types of noise, but not very good for picking out resonances, as the varying scale distorts the Cauchy peaks.

Bart, natural variability *is* a power law. The variability extends across nine orders of magnitude (see here: https://itia.ntua.gr/en/docinfo/1297/)

Your 20 year and 65 year cycle fits right in to that relationship. And yes, pretty much any cycle you can come up with can be matched to some physical phenomenon.

Spence_UK: I must apologize. Your cited paper is certainly relevant. I didn’t read it the first two times you mentioned it. Perhaps I was misled by a lot of the misinformation here. I don’t normally look at this page. You have to yell pretty loud to be heard over all the noise. There is so much to correct.

But your reference contains exactly the type of analysis that I would think is hard to refute. The ARIMA(3,1,0) stuff is going the wrong way. While I have not read your citation in detail, if the fractionally differenced p-value is .07, I would say okay, that increased the p-value way, way more than I expected for the value of d quoted. This is not quite the case for the Continental US series, which I have examined. I’m going to guess there are specifics (yearly instead of monthly series, interval of observation (does it contain the last few years), etc.) to dicker around with, but I believe you.

BTW, I am still here. I just don’t have time to reply to everything.

Spence_UK says:

October 1, 2013 at 1:26 pm

“Bart, natural variability *is* a power law.”

To me, a power law is something which manifests as a linear slope in a log-log plot, generally indicative of red or pink noise. Spurious peaks may appear in a PSD analysis performed on such data, but they are generally amorphous and fail to be persistent.

“Your 20 year and 65 year cycle fits right in to that relationship.”

These are concentrated regions of elevated “energy” with common morphology. They indicate distinct processes above and beyond any background variability.

The ~65 year quasi-cycle is of particular interest, because it was the upswing of that cycle which was interpreted as accelerated warming due to CO2, and which has now got the climate community tied up in knots as it shifts into the downward cycle, spoiling their expectations, and poised to bring the entire house of cards tumbling down.

There was a time in which it could be argued that the ~65 year process was a fluke, and just a mirage of random variability. However, when the cycle turned in the mid-2000s right on schedule, that contention became a lot less tenable.

Lund@clemson.edu

In which case I should apologise back for allowing my frustrations to cause me to question your motivations. Sorry for that. I’m glad you found the paper interesting.

Bart,

“To me, a power law is something which manifests as a linear slope in a log-log plot”

Yes indeed – and that is exactly what the paper I link above demonstrates, across nine orders of magnitude (out to millions of years). The peaks you show are not special in this context.

Spence_UK says:

October 2, 2013 at 5:55 am

“The peaks you show are not special in this context.”

I always used to tell my students, you can’t rely on all this fancy math. It’s all based on models, and the models aren’t always applicable. You have to dig down into the actual data and do sanity checks.

If you can really look at this plot and not see the ~65 year component blazing in your eyes, then you need to take a break, and drop down to your local pub for a glass of perspective and soda.

Spence_UK says:

October 2, 2013 at 5:54 am

“In which case I should apologise…”

Frankly, I don’t see any reason to apologize. Lund came in swinging with some very churlish comments. Now, he is apologizing to you because he sees that you are sympathetic to his point of view. So, only people sharing his outlook and priorities deserve respect and a fair hearing. Pshaw.

http://stevengoddard.files.wordpress.com/2013/09/screenhunter_1013-sep-28-00-13.jpg

Add a bit of explanation and it makes a great ~~tee-shirt~~ sweatshirt.

Bart: You are the one who has spewed a ton of misinformation here. Learn what a random walk is.

Lund – charming. You need to learn that there are other conventions in other disciplines than the one in which you are engaged, and expand your mind.

“Learn what a random walk is.”

If you have an objection, state it clearly. My definition is standard, and consonant with descriptions widely available on the web.

Bart: all fractionally integrated time series will show strong “quasi-periodic” cycles near to the length of the time series. You can demonstrate this yourself easily by generating random fractionally integrated time series of a similar length.
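Spence_UK’s suggestion is easy to try. A common way to generate ARFIMA(0, d, 0) noise is to truncate the MA(∞) expansion of the fractional integration operator (a sketch in Python with NumPy; d = 0.4 and the series length are illustrative choices, not values from the thread):

```python
import numpy as np

def arfima_0d0(n, d, rng):
    """Generate ARFIMA(0, d, 0) ('fractionally integrated') noise by
    truncating the MA(inf) expansion of (1 - B)^(-d), whose weights
    satisfy psi_0 = 1, psi_k = psi_{k-1} * (k - 1 + d) / k."""
    psi = np.empty(n)
    psi[0] = 1.0
    for k in range(1, n):
        psi[k] = psi[k - 1] * (k - 1 + d) / k
    z = rng.standard_normal(2 * n)
    return np.convolve(z, psi)[n:2 * n]  # discard the warm-up segment

rng = np.random.default_rng(3)
x = arfima_0d0(5000, 0.4, rng)
# Long memory shows up as slowly decaying positive autocorrelation,
# which the eye readily reads as "quasi-periodic cycles".
```

Plotting a handful of such series typically shows slow swings near the length of the record, which is exactly the point being made.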

Spence_UK says:

October 2, 2013 at 1:47 pm

“…all fractionally integrated time series will show strong “quasi-periodic” cycles near to the length of the time series.”

But, not all quasi-periodic cycles are phantoms of a power law process. Furthermore, you appear to be glossing over the fact that the ~65 year quasi-periodic cycle is half the length of the modern instrument record. There are very nearly two full cycles in evidence here.

This argument has been going on here longer than you may realize. Back in the early 2000’s, many people noticed that there appeared to be a ~65 year cyclicality to the temperature data. In particular, the run-up in temperatures from about 1910-1940 was almost precisely the same as that between about 1970-2000.

Even greater force was given to the argument when the turning point came roughly in 2005, right on schedule.

Fractionally integrated noise is a Martingale – the future does not depend on the past. These two events, the replication of the 1910-1940 and 1970-2000 increases and the turnabout in roughly 2005, would be astounding coincidences under the assumption that this is a figment of fractionally integrated noise.

“Fractionally integrated noise is a Martingale…”

This is incorrect. I am looking for the property which will convey my intention. That is basically that there is no compelling reason for fBm to exhibit repeating coherent patterns, and it is unlikely among all the paths that it could take for it to do so.

Bart,

It is easy to see patterns in fractionally integrated noise, especially when you get to interpret those patterns post hoc.

Spence_UK says:

October 2, 2013 at 3:25 pm

“…especially when you get to interpret those patterns post hoc.”

A) But, I did not see them post hoc. As I stated, many people were waiting to see if the turnaround would come after 2000, as would be expected for a persistent quasi-cyclical process. It did.

B) These patterns would then be uncannily precise. The increase from about 1910-1940 is virtually identical to that from 1970-2000. The turnaround came at exactly the right time.

Put Spence my team, dawg.. Bart, ???????, this is out of control.

[“on” my team? “in” ? Mod]

(Another fake identity -mod)

bart@hotmail.com says:

October 2, 2013 at 5:55 pm

“Please, then give us your alternative def of a random walk, replete with frequency reasoning.”

This has been done upthread. Don’t be so lazy.

“Folks: do we blather on or STFU?”

Oh, great. The feces flinging monkeys have arrived.

I would recommend you STFU, since you evidently have nothing to contribute.

carol@stat.usc.edu says:

October 2, 2013 at 5:39 pm

If you are confused, perhaps you should ask actual questions. I am not equipped to interpret “?????”.

Bart,

Please, then give us your alternative def of a random walk, replete with frequency reasoning.

[How many screen names are you using? ~mod.]

At least 20:

Bart says:

October 2, 2013 at 3:06 pm

(This sockpuppet can’t keep his screen names straight. -mod)

[1. The repeated STFU’s in repeated replies are not needed, nor useful, nor desired. Cut them out. 2. What did you mean by “for fBm to exhibit repeating coh”? Mod]

Fake name. -mod.

[When cursing the world, it is usually best to tell the un-cursed-at rest of the world, which other person the writer is mad at. Mod]

(Fake name. -mod)

Lund@hotmail.com says:

October 2, 2013 at 8:44 pm

“Da**it, dude: we are not [retarded].”

I think you may be…

“Bart: you make no sense to me.”

I expect not. You are delving into a new discipline you obviously know nothing about, and your first reaction is to deny there is anything more for you to learn. That’s a really… shall we be nice and say imprudent? …thing to do.

“Rather, I find your frequency domain gibberish (j in the states) non-cool.”

That pretty much says it all. You don’t do frequency domain. Got it.

“But what did you say: random walks are subject to high-frequency noise cancellation? Please tell us more.”

The roll off of gain with frequency due to integration is one of the most elementary control actions there is. In Laplace notation, the transfer function of an integrator is 1/s, where s is the Laplace variable. Evaluating that function at s = j*w, where w is the radial frequency and j is the square root of -1, the gain falls off as the reciprocal of frequency, with a phase shift of -90 degrees, due to the j in the denominator.

It is trivial to see how this affects, e.g., sinusoids. The integral of cos(w*t) is (1/w)*sin(w*t). The higher the frequency, the smaller its integrated amplitude.
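That 1/w attenuation is simple to confirm numerically (a sketch assuming NumPy; the time grid and the three test frequencies are arbitrary):

```python
import numpy as np

# Numerically integrate cos(w*t) and compare the output amplitude
# to 1/w: the integrator's gain falls off as the reciprocal of
# frequency, so higher frequencies are attenuated.
t = np.linspace(0.0, 50.0, 500_001)
dt = t[1] - t[0]
amps = {}
for w in (1.0, 5.0, 25.0):
    y = np.cumsum(np.cos(w * t)) * dt   # ~ (1/w) * sin(w*t)
    amps[w] = 0.5 * (y.max() - y.min())  # amplitude of the integral
```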

This is so absurdly simple, it is utterly amazing that you would stick your neck out without first asking politely for clarification. It is beyond basic.

But then, there’s a good reason Clemson U was never on my short list for schools I would attend. Stick to football, guys. Mathematics, apparently, isn’t your strong suit.

I tell you what, here is a tutorial on PID control design from a real school. PID stands for “proportional-integral-derivative”, and it is just about the most basic and widespread control technique available. There, you will see some discussion of the frequency response of an open loop with an integral control element.

I doubt this doofus calling himself “Lund” is actually Dr. Robert Lund of Clemson. Nobody granted such a position would make such a fool of himself, unless things at Clemson are even worse than most people assume.

So, for whatever person who has appropriated his title, before you say something even stupider like “what does the frequency response of an integral have to do with the frequency content of a random walk,” I will again point out that a Gaussian random walk is equivalent (and often the result of) sampling the output of an integrating process fed by wideband noise, in the limit as that input noise bandwidth approaches infinity, i.e., the standard Wiener Process.

Random walks look like the top plot here. Or, the plot here, for a non-Gaussian case. As is plainly evident, higher frequency motion is attenuated in each of these cases, in the latter because discrete accumulation, like continuous integration, attenuates higher frequencies.

This property is also immediately evident from the autocorrelation function, which I provided previously for the sampled data case: E{x(t_k)*x(t_n)} = sigma^2 * min(t_k,t_n). The cross correlation coefficient of nearby points is sqrt(min(t_k,t_n)/max(t_k,t_n)), which will be near unity when t_k is near t_n. That means the points tend to stay in the same neighborhood for an extended time, and fail to jump around significantly in narrow time intervals, i.e., their frequency content is weighted toward the low frequencies.

But then, this is obvious in the power law of -2 which such a process produces in a PSD estimate. I mean, this is really, really basic stuff.

So, guys… you’ve embarrassed your institution enough. How about you run along and play somewhere else now.
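For what it’s worth, the power law of -2 claimed above for a random walk’s PSD can be reproduced numerically (a sketch assuming NumPy; the fit band, path count, and seed are arbitrary choices):

```python
import numpy as np

# Average periodograms over many simulated random walks, then fit
# a slope on log-log axes; it should come out close to -2.
rng = np.random.default_rng(7)
n, n_paths = 4096, 200
X = np.cumsum(rng.standard_normal((n_paths, n)), axis=1)
X -= X.mean(axis=1, keepdims=True)             # remove the DC offset
P = (np.abs(np.fft.rfft(X, axis=1)) ** 2).mean(axis=0)
f = np.fft.rfftfreq(n)                         # cycles per sample
band = (f > 0.01) & (f < 0.1)                  # mid-band fit region
slope = np.polyfit(np.log(f[band]), np.log(P[band]), 1)[0]
```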

0) OK Class, you’ve had your fun. Cool it. No more comments.

1) Bart: You at least have the covariance function of a random walk right:

Cov(X_t, X_s)=\sigma^2 min(t,s).

Taking t=s shows that the variance of X_t is \sigma^2 t. Ergo, the ratio of the variance of X_n to the variance of X_1 is n. That is the whole premise on why a random walk, and hence an ARIMA(3,1,0) model, is inappropriate. The issue has nothing to do with frequency-domain properties of time series. That is a non sequitur.

2) Please stop with the insults. It torques me that you’ve insulted my math skills and my university. Really.
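Point 1) is easy to verify by simulation; a minimal Python sketch (unit-variance Gaussian increments assumed):

```python
import random
import statistics

random.seed(1)

def walk_value(n):
    """Value X_n of a random walk with unit-variance Gaussian increments."""
    return sum(random.gauss(0.0, 1.0) for _ in range(n))

trials = 20000
var_1  = statistics.pvariance([walk_value(1)  for _ in range(trials)])
var_50 = statistics.pvariance([walk_value(50) for _ in range(trials)])
# Var(X_n) = n * sigma^2, so the ratio Var(X_50)/Var(X_1) should be near 50.
print(round(var_50 / var_1))
```

The ratio comes out near 50, i.e. it grows linearly in n, which is the non-stationarity being argued about.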

Robert Lund says:

October 3, 2013 at 8:55 am

“You at least have the covariance function of a random walk right:”

I had it right when you were still in diapers. I stated it well before this comment when you had first entered in. Apparently, you were in a blood frenzy, looking forward to tearing into someone who didn’t share your viewpoint, so you glossed over it.

“That is the whole premise on why a random walk, and hence an ARIMA(3,1,0) model, is inappropriate.”

It’s a flawed premise. Over a finite interval, AR processes with long time constants can behave essentially like a random walk. If the aim is prediction in the near term, there is nothing wrong with it.

“It torques me that you’ve insulted my math skills and my university. Really.”

It was intended to. It torques me that you paid so little attention to the things I stated, and moved to ridicule me based on your incomplete knowledge and lack of experience outside your narrow field. Has the lesson been learned?

” Over a finite interval, AR processes with long time constants can behave essentially like a random walk”

This is more gibberish. An AR process with long time constants? I can’t even begin to try to make sense of this. Do you even know what an autoregression is? Do tell us how a long time can be constant. And you wonder why you’re being ignored.

I look at it this way, Bart: You’re better at insults than math.

Robert Lund says:

October 3, 2013 at 10:27 am

[trimmed. Mod]

Do you even know how to derive the frequency response of a discrete AR system? Have you ever even heard of the Z-Transform? Do you have any idea how engineers design digital control systems? Do you know what a time constant is?

[trimmed. Mod]

“Do you even know how to derive the frequency response of a discrete AR system? Have you ever even heard of the Z-Transform? Do you have any idea how engineers design digital control systems? Do you know what a time constant is?”

What you mean to ask me is “Do I know how to derive the spectral density of an autoregression (your word discrete is inappropriate)”? The answer is yes (use a transfer function argument with a causal linear process driven by white noise, the latter having a constant spectral density). Do I know what a Z-transform is? Yes (better called a power series transform). I also know about Laplace transforms, characteristic functions, moment generating functions, etc. (I do teach differential equations).

Do I know what you are trying to say? Not in the slightest.

Robert Lund says:

October 3, 2013 at 11:22 am

“Do I know what you are trying to say? Not in the slightest.”

Then, ask questions, instead of accusing me of not knowing what I am talking about. If you had asked nicely to begin with, we wouldn’t have had all this nastiness.

A simple example may help. Consider the difference equation

x(k+1) = 0.999*x(k) + w(k)

where w(k) is white noise. The time constant of this system is -T/log(0.999) ≈ 999.5T, where T is the sample period. If you look at the output of this over a time less than a time constant, it will be very close to a random walk.
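Bart's near-random-walk claim is easy to illustrate numerically; a minimal Python sketch (the thread's other examples are MATLAB, but the arithmetic is the same):

```python
import math
import random

phi = 0.999
tau = -1.0 / math.log(phi)    # time constant in sample periods
print(round(tau, 1))          # 999.5

# Drive an AR(1) and a random walk with the SAME noise sequence; over a
# window much shorter than tau, the two paths stay close together.
random.seed(2)
ar = rw = max_gap = 0.0
for _ in range(100):          # 100 samples << 999.5 sample periods
    w = random.gauss(0.0, 1.0)
    ar = phi * ar + w
    rw = rw + w
    max_gap = max(max_gap, abs(ar - rw))
# max_gap stays small relative to the walk's typical excursion (~sqrt(100) = 10)
```

Over windows short relative to the time constant, the two outputs are nearly indistinguishable, which is the point being made.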

Bart,

Respectfully, this whole thread is about the nuances between a random walk, a fractionally differenced random walk, and a causal AR(1) model. Some of us here have decades of proficiency with the topic. I truly don’t know what you mean to say most of the time. Like in the example above, there is no period T. An AR(1) with autoregressive coefficient of .999 is stationary.

Here’s the deal: an autoregression with an autoregressive polynomial that has roots close to the unit circle will exhibit more persistence than one with roots far from it. Often, a sample path of such a series could be mistaken for a random walk. What we are trying to tell you is that there is something in between an AR(1) and a random walk, dubbed an ARFIMA model, that is the most appropriate error model here. And it is stationary (a random walk is not).

Look, I’m sorry my time series class picked on you. It was a bad idea to tell them about this thread. Can we move on?
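For readers wondering what sits “in between”: a minimal Python sketch of the MA(∞) weights of an ARFIMA(0,d,0) (fractionally integrated) model, using the standard recursion psi_k = psi_{k-1}*(k-1+d)/k. The weights decay hyperbolically (roughly k^(d-1)), slower than the geometric decay of an AR(1) but fast enough to be square-summable for d < 0.5, which is why the process is stationary.

```python
def arfima_ma_weights(d, n):
    """MA(infinity) weights of ARFIMA(0, d, 0): psi_0 = 1, psi_k = psi_{k-1}*(k-1+d)/k."""
    psi = [1.0]
    for k in range(1, n):
        psi.append(psi[-1] * (k - 1 + d) / k)
    return psi

d = 0.4                        # stationary long-memory range: 0 < d < 0.5
psi = arfima_ma_weights(d, 200)
print(psi[1])                  # 0.4 (equals d)
# Hyperbolic decay ~ k^(d-1): doubling the lag shrinks the weight by roughly
# 2^(d-1), unlike an AR(1), whose weights decay geometrically.
print(round(psi[100] / psi[50], 2))
```

The slowly decaying weights are what give sample paths their random-walk-like look over finite records while the process remains stationary.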

Robert Lund says:

October 3, 2013 at 12:41 pm

“Like in the example above, there is no period T. An AR(1) with autoregressive coefficient of .999 is stationary.”

Then, how are the samples at steps k and n separated in time? This is how digital control systems are constructed – we sample sensor data at uniform intervals, and apply a control signal back in a feedback loop at that uniform sample interval.

Yes, it is ultimately stationary. But, you wouldn’t know that from a finite sample of data with record length on the order of less than a time constant.

“Can we move on?”

Yes, we can move on.

Don’t take my earlier taunts too seriously. They were intended as a wake-up call. Clemson is a fine university, and I know people I respect who studied there. I have known idiots who attended MIT. The best engineer I ever worked with was from Purdue – that is not a plug, I did not go there. I judge a person by what he or she can do, not by what school he or she managed to get into and out of.

I, also, have many decades of experience with noise modeling and handling in a wide variety of electro-mechanical systems. We have to design systems which work to exacting specifications. I am very good at making them do that.

I am fluent in these topics within the argot of my milieu, which evidently makes for a communications problem. But, once upon a time, I delved into them extensively in an academic setting. My earlier texts included Larson & Shubert, Karlin and Taylor, and Doob.

Since graduating and entering the practical world, I have had very little use for Ito and Stratonovich. Most of the time, I have seen fBm used as a crutch to explain processes which, upon closer examination, bear the marks of poor data collection technique. IMHO, it is a GUT theory of noise. Sort of like string theory – a mathematically elegant artifice, but of little practical value.

But, we could argue about that all day long. I take it as read that you disagree. The bottom line is the same, no matter which of us is closer to the truth. If the apparent ~65 year process is, as I maintain, a result of random excitation of an oceanic-atmospheric mode, then temperatures are poised to go down. If it is more closely a process of fractionally integrated noise, then temperatures are poised to go down, because this fBm obviously would have Hurst coefficient > 0.5, and so would tend to keep going in the same direction it was currently going for an extended time.

“…because this fBm obviously would have Hurst coefficient > 0.5, and so would tend to keep going in the same direction it was currently going for an extended time.”

No, that is not how fractionally integrated noise behaves.

I’m all for making predictions from models, but you need to ensure that (1) you understand how the models actually behave and (2) your predictions need confidence intervals. Without these, predictions are worthless.

“The main difference between fractional Brownian motion and regular Brownian motion is that while the increments in Brownian Motion are independent, the opposite is true for fractional Brownian motion. This dependence means that if there is an increasing pattern in the previous steps, then it is likely that the current step will be increasing as well. (If H > 1/2.)”

Bart, just look at the example plots (H=0.75, H=0.95) in the wiki article you just linked to. You will notice those plots have many local minima and maxima. You know what local minima and maxima mean? They mean that if the current step goes up, the following step might just go down.

Long term persistence (H>0.5, H<=1.0) has a defined population mean, and although it can spend arbitrary periods of time to one side of that mean, it does *not* follow that the current step direction is dependent on the previous one.

Also note that the change in step constitutes a change in the first derivative. Note that differentiation is the equivalent of dividing the power spectral density by a value linearly proportional to f. Since the power spectral density of LTP is 1/f, the first derivative of a long term persistent series can be white, i.e. independent.
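The algebra here gets straightened out a few comments further on: differencing multiplies the power spectral density by the squared gain of the filter 1 - z^-1, which is approximately f squared at low frequency. A quick Python check of that filter gain:

```python
import cmath

def diff_gain_sq(w):
    """Squared magnitude of the first-difference filter 1 - z^-1 at w rad/sample."""
    return abs(1 - cmath.exp(-1j * w)) ** 2   # equals 2 - 2*cos(w)

# At low frequency the gain is ~ w^2: differencing a 1/f^2 (random-walk-like)
# spectrum flattens it to white, while a 1/f spectrum is tilted upward.
for w in (0.01, 0.05, 0.1):
    print(w, round(diff_gain_sq(w) / w**2, 4))   # ratio near 1
```

The ratio to w^2 sits near 1 for small w, confirming the quadratic low-frequency weighting that both sides eventually agree on.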

Demetris Koutsoyiannis has a nice turn of phrase for this. He notes that the expression "long memory" often associated with long term persistence is a misnomer. In fact, the behaviour of long term persistent series is not one of memory, but much more one of amnesia. But the error you make is a common one.

If you want to see an example of a prediction based on fractionally integrated noise, I recommend you look at UC's blog here:

http://uc00.wordpress.com/2011/08/30/first-ever-successful-prediction-of-gmt-3-years-done/

UC’s prediction is based on half integrated noise, back in 2008. Please note the central prediction dips slightly downward, contrary to your incorrect understanding. FWIW my instinct is that his confidence intervals are too narrow, but the presence of CIs allows his prediction to be tested. Please note UC’s recommendations on making predictions.

When your prediction rises to this standard, we can see whether you do as well as UC did.

An example to help you better understand Bart: a random walk exhibits greater persistence than either a fractionally integrated time series or an autoregressive time series. In fact a random walk can wander far further than fractionally integrated noise; the random walk does not have a defined population mean, whereas fractionally integrated noise does.

I would hope, since you understand how a random walk is generated (the algorithm is rather simple), you would recognise that the claim that the direction of the current step is dependent on the previous step is quite incorrect. The next step direction in a random walk is random – by definition! So the direction of steps is independent from sample to sample.

Fractionally integrated time series are no different in this regard. One complication of fractionally integrated time series is what constitutes the next step – scaling properties and self similarity and all. It’s a complex topic.

Spence_UK says:

October 5, 2013 at 1:24 pm

“Note that differentiation is the equivalent of dividing the power spectral density by a value linearly proportional to f.”

I assume you were in a hurry, but I believe you meant to say “multiplying”, and differentiation multiplies the PSD by f squared.

“…the random walk does not have a defined population mean, whereas fractionally integrated noise does…”

I think we are possibly speaking of two different things, and need to more carefully delineate them.

First of all, it is not true that “the random walk does not have a defined population mean”. The expected value of a random walk, specifically the accumulation from zero of zero mean independent increments, is in fact zero. It is the excursion from zero which is expected to increase with time.

I suspect the property you were referring to was stationarity. But, fBm is not stationary. I am not sure about the process you and Dr. Lund refer to as “fractionally integrated noise”, but both you and he seem to maintain that it is stationary. Frankly, I do not see how this can be when the spectrum appears to still have a singularity at zero, but I have not looked closely at this yet.

In any case, if you have a difference of opinion with the Wikipedia page to which I linked, perhaps you should sign up as an editor and make your disagreement known.

Bart, quite right, I originally wrote something different, then edited it and garbled it. Yes, multiplying rather than dividing. But the key point here is that persistence does not result in the property that you claim exists (that the direction of the current step is tied to the previous step). I hope we are clear on that point now, having given both empirical examples and the underlying principles.

As for the rest, I am comfortable that what I say is correct, but beware I am using technical terms with specific meaning.

A random walk does not have a defined *population* mean. It is not a stationary process. Note population mean is quite different to the sample mean.

Secondly, fractionally integrated time series (“noise” is not really appropriate and I try not to use it, although occasionally I slip into bad habits, especially when commenting on blogs…) can be stationary processes. They have a defined, fixed population mean. However, the sample mean is a poor estimator of the population mean (and, on top of that, does not improve much with averaging).

For a more thorough treatment of the concept of stationarity in the context of long term persistence I would recommend the following presentation:

Hurst-Kolmogorov dynamics and uncertainty

Spence_UK says:

October 5, 2013 at 5:26 pm

“You will notice those plots have many local minima and maxima.”

Yes, but the point is what is likely. Sooner or later, they are likely to switch direction. The question is, how long do they tend, on average, to go largely in one direction or the other before switching to go the other way? I specifically do not mean every bump or bobble which switches directions, but the longer term, quasi-trends. This is largely determined by how strongly succeeding points are positively correlated on a particular timescale.

“But the key point here is that persistence does not result in the property that you claim exists (that the direction of the current step is tied to the previous step).”

I don’t think I see that, at least not yet from the point of view of this particular argument. As I mentioned, the differentiated PSD is weighted by the frequency squared, so you cannot get a flat spectrum except in the particular case of a random walk. It is indeed true that a random walk is expected neither to go up nor down based on where it was previously heading – it is expected to stay the same because it is a Martingale – but that is the special case of H = 1/2. If you differentiate a process with H > 1/2, you still end up with a downward slope in the PSD, which suggests continued long range positive correlation.

IIRC, one of the links you presented previously estimated H = 0.92, so it appears to me there is still, from this point of view, long range correlation which is significantly positive for an extended interval within the temperature series. This interpretation appears to jibe with what the Wikipedia excerpt to which I linked was stating.

Getting back to the PSD question, one of the reasons I have always been bothered by descriptions of fBm is that it is generally rooted in these power law PSD descriptions. But, the PSD is not really even defined for non-stationary processes, so all we are seeing in PSD estimates is basically a 1-d projection of a 2-d entity. It’s a little like creatures of Flatland trying to work out what 3-d creatures look like based on the various cross-sections they observe. Now we, as 3-d creatures, could do that with enough cross sections, but the Flatlanders have no conception of the 3rd dimension, and so can never visualize it within their sphere (or, circle) of comprehension. Similarly, we have no widely utilized 2-d analysis tool of which I am aware which would allow us to fully comprehend what is going on in every case.

I see, e.g., no reason that the processes which produce 1/f signatures in PSD estimates should have a unique, all-encompassing description. Many non-stationary processes can produce approximately 1/f behavior when processed through a PSD estimation routine. I think better tools which observe the full dimensionality are needed. I could say a bit more on this topic, based on some of my memories of trying to hash out such an approach back when I was studying these topics, but the memories are a bit faded, and it would probably be difficult to get across the concepts in this venue. It had something to do with 2-d Laplace transforms, but that is as far as I can go into it at this moment, for whatever it is worth.

“Secondly, fractionally integrated time series … can be stationary processes.”

Hmm… that’s a little less general than your previous statement. I will have to read your link, and reacquaint myself with these things to either find out precisely what you mean, or ask questions which would help elucidate it. I doubt that will happen before this thread gets closed, so maybe we will take it up again at a later time.

But, again, I still believe that the two coincidences in the temperature data set to which I have referred previously indicate that there is a resonance involved, rather than just a random fractal drift, and this indicates that it is likely that temperatures in the next few decades will behave similarly to the era between roughly 1940-1970. I was hoping to make the question moot from your point of view, but it appears that sort of weak agreement will not happen on this thread.

But, I appreciate our civil discussion, as much as I regret the ugliness which passed between myself and Dr. Lund. If the clock runs out on us before there is time to say so, thanks for your time and insights.

Bart, thanks for the discussion as well, and I think all three of us (you, me, Dr Lund) would probably get on pretty well (even if we disagree on some points) if we were to meet over a beer rather than over the internet. Such is the nature of this debate.

I was thinking more about your observation that the first derivative multiplies through by f squared. This is a point we are in agreement on (once you corrected my silly errors above). When you put white noise through this, you get greater amplitudes at high frequencies than low. This means that we should expect differences to reverse.

I did a quick experiment to confirm this, and indeed randomly generated white noise exhibits anti-persistence in its first derivative. This can be thought through from a probabilistic perspective as well; consider three samples drawn from an iid Gaussian random number generator. We then filter out those cases where the first difference is positive (i.e. sample 2 is greater than sample 1). For this pattern to continue, sample 3 must be greater than sample 2. So sample 3 must be the maximum of the three samples. This happens just one in three times. So for white noise, we are twice as likely to reverse the step as we are to continue it. This is quite consistent with what we have discussed on derivatives.

We also know and agree a random walk’s step direction is independently random (which is obvious from the algorithmic definition of a random walk, but confirmed by the behaviour of the derivative).

So I ran a short procedure in MATLAB to artificially generate flicker noise, and tested the probability that a positive step would be followed by a positive step. In fact I tested all 3 cases with a sample of 1000 points and my results were:

White noise 36% (expected 33%)

Flicker noise 39% (expected ??)

Random Walk 51% (expected 50%)

As you can see, flicker noise (a form of fractionally integrated time series) sits between white noise and a random walk in terms of the dependency on step direction. Flicker noise is more likely to reverse direction than continue its current direction, although the probability is close to 50% so it takes a reasonable number of samples to confirm this.

The other thing to be careful of is fractionally integrated time series show different properties at different scale, and are continuous systems, so the concept of a “step” can be defined but is potentially misleading.
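The three-sample argument and the step-continuation percentages above are easy to reproduce; here is a Python sketch of the two cases with exact answers (the flicker-noise case needs a fractional-noise generator, so it is left out):

```python
import itertools
import random
from fractions import Fraction

# Exact result for white noise: with three iid continuous samples, all six
# rank orderings are equally likely, so
# P(step continues) = P(x3 > x2 > x1) / P(x2 > x1) = (1/6) / (1/2) = 1/3.
ups  = [p for p in itertools.permutations((1, 2, 3)) if p[1] > p[0]]
cont = [p for p in ups if p[2] > p[1]]
print(Fraction(len(cont), len(ups)))   # 1/3

# Random walk: step directions are iid signs, so a positive step is followed
# by another positive step with probability 1/2.
random.seed(3)
steps = [random.gauss(0.0, 1.0) for _ in range(200000)]
pos_then_pos = sum(1 for a, b in zip(steps, steps[1:]) if a > 0 and b > 0)
pos = sum(1 for a in steps[:-1] if a > 0)
print(round(pos_then_pos / pos, 2))    # close to 0.5
```

These bracket the flicker-noise figure reported above (39%), consistent with fractionally integrated series sitting between white noise and a random walk.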

Spence_UK says:

October 6, 2013 at 11:10 am

“Such is the nature of this debate.”

And, the nature of the internet, which suspends the normal bounds of propriety we observe when dealing with one another directly.

“When you put white noise through this, you get greater amplitudes at high frequencies than low. This means that we should expect differences to reverse… Flicker noise is more likely to reverse direction than continue its current direction, although the probability is close to 50% so it takes a reasonable number of samples to confirm this.”

But, doesn’t flicker noise have H < 0.5, which is the boundary noted in the Wikipedia article? So, isn’t your finding actually consistent with this?

No, flicker noise is H>0.5 and H<1 (aka 1/f noise, or excess noise). All of these have higher spectral power density at the lower frequencies in comparison to the higher frequencies so exhibit long term persistence.

The wikipedia article is incorrect in its statement. Fractionally integrated noise is more likely to reverse direction than continue in the current direction. The probability of this lies between white noise (67% likely to reverse) and a random walk (50% likely to reverse).

Note this also explains UC's prediction quite nicely. A reversal is slightly more probable than not, so the central prediction is slightly down in comparison to the late 20th century warming.

OK, apparently my interpretation of H is not quite right. This particular branch of stochastic processes has not been my bag for… decades. But…

“So I ran a short procedure in MATLAB to artificially generate flicker noise, and tested the probability that a positive step would be followed by a positive step.”What we really want is not the conditional expectation of one sample at a time, but of many. What is the likelihood of an overall trend slope being, say, positive in the future, given that it has been positive in the past? And, what timelines are associated with the past trend, and the projected one?

I do not really have the time to formulate a precise statement of the question I am trying to ask which can be tested. But, given that the correlations are all positive, I suspect that some measure capturing this inchoate thought might well prove the Wikipedia statement correct in some sense. I am, at least, predisposed to believe that the author of the statement had something upon which to base it. Even if he was wrong, I don’t think we can make the determination until we know precisely what he meant.

Bart, I cannot think of any reasonable definition that would result in the statement in wikipedia being true. Remember that fractionally integrated time series are self-similar – a “trend” at one scale is simply a step at another scale. If it holds at one scale, it will hold at all.

Of course it is always difficult to know exactly what was intended without a formal definition. But I cannot think of any situation where the wikipedia statement is correct.

Another note – the correlations are all positive, but that is with respect to the absolute values, not the relative change in value (step). That is, if one value is above the population mean, then the next is likely to be also above the population mean. That does not mean the rate of change will be.

Spence_UK says:

October 7, 2013 at 12:31 pm

The question is fairly moot from my perspective. But, how about we just try a simple calculation. Let me see…

Given the normalized autocorrelation function

E{x(t2)x(t1)} = 0.5 * ( abs(t2)^2H + abs(t1)^2H – abs(t2-t1)^2H )

Then,

E{(x(2) – x(1)) * (x(1) – x(0))} = 2^(2H-1) – 1

So, the succeeding increment is expected to be the same sign if 2^(2H-1) > 1, i.e., if H > 0.5. Mmmm… Seems to say Wikipedia is on track, I think.
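Whichever convention for H is right, the expansion itself can be checked mechanically. A short Python sketch, taking the quoted covariance function at face value (this confirms only the algebra, not the interpretation):

```python
# R is the covariance function quoted above (sigma normalized to 1);
# increment_cov expands E{(x(2)-x(1)) * (x(1)-x(0))} by linearity of E{.}.
def R(t1, t2, H):
    return 0.5 * (abs(t1)**(2*H) + abs(t2)**(2*H) - abs(t2 - t1)**(2*H))

def increment_cov(H):
    return R(2, 1, H) - R(2, 0, H) - R(1, 1, H) + R(1, 0, H)

for H in (0.3, 0.5, 0.75, 0.95):
    closed_form = 2**(2*H - 1) - 1
    print(H, round(increment_cov(H), 6), round(closed_form, 6))
```

The two columns agree for each H: under this covariance function, the increment covariance is 2^(2H-1) - 1, positive exactly when H > 0.5.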

Bart, sorry but your algebra is doing something odd. The first expectation operator is the normalised autocorrelation coefficient. But the second expectation operator you have used as if determining the expectation of the series itself – these are two completely different operators, and you’ve used them interchangeably.

As a result, your second expression is just a summation of autocorrelation coefficients, which tell us very little of interest. I agree the autocorrelation coefficients of fractionally integrated time series are all positive, but your conclusion about step of the data itself does not follow.

I don’t have much time right now but it is trivial to give an example to show your expression is wrong. Consider white noise, H=0.5. Your expression says E[(x(2)-x(1))*(x(1)-x(0))] should be zero. I can test this easily in MATLAB (or Excel, or any other tool which can draw independent normally distributed random numbers):

>> x = randn(10000,1);

>> x2 = x(3:end);

>> x1 = x(2:end-1);

>> x0 = x(1:end-2);

>> mean((x2-x1).*(x1-x0))

ans = -0.9610

I did several more draws, and they are all around -1, within about 0.1, which is what I would expect (white noise necessarily tends to reverse the sign for the reasons I gave above). But your expression claims the expected value is zero. Why? Well the autocorrelation coefficients of white noise are zero, since the samples are independent. Your strange sum of autocorrelation coefficients will also be zero. But the expected value from the series is highly negative.

Sorry Bart, but your expression is simply not correct, and the wikipedia comment is still wrong.

Spence, it’s a linear operator.

E{(x(2) – x(1)) * (x(1) – x(0))} = E{x(2) * x(1)} – E{x(2) * x(0)} – E{x(1) * x(1)} + E{x(1) * x(0)}

You just plug in the formula. H = 0.5 is random walk with an initial starting point of zero, so yes, the expectation should be zero. You would test it like this:

>> x = cumsum(randn(3,100000));

>> mean((x(3,:)-x(2,:)).*(x(2,:)-x(1,:)))

ans =

2.3749e-04

Or, rather

x = [zeros(1,100000) ; cumsum(randn(2,100000))];

mean((x(3,:)-x(2,:)).*(x(2,:)-x(1,:)))

ans =

5.1115e-05

Note that with H = 0.5,

E{x(t2)x(t1)} = 0.5 * ( abs(t2)^2H + abs(t1)^2H – abs(t2-t1)^2H ) = min(t2,t1)

which is the normalized autocorrelation of a random walk.

Or, you can do it your way, with a slight change:

x = cumsum(randn(1000000,1));

x2 = x(3:end);

x1 = x(2:end-1);

x0 = x(1:end-2);

mean((x2-x1).*(x1-x0))

ans =

5.0511e-04

though it takes more trials to get the number to generally come out insignificant. Over many trials, it tends to zero.

There appears, then, to be a disconnect between your definition of H and that used on the Wiki page, which may explain why we had different ideas about flicker noise. On the Wiki page, H = 0.5 is the designator for a random walk, as is seen in my previous post relating their autocorrelation function for H = 0.5 to the standard one for a random walk.

Note that for normalized white noise

E{(x(2) – x(1)) * (x(1) – x(0))} = E{x(2) * x(1)} – E{x(2) * x(0)} – E{x(1) * x(1)} + E{x(1) * x(0)} = -1

which is essentially your monte carlo result.

“H = 0.5 is the designator for random walk”

I am being informal. H = 0.5 is actually the designation for Wiener noise on the Wiki page, the sampled-in-time-data version of which is a random walk.

“The first expectation operator is the normalised autocorrelation coefficient.”

This may be a source of confusion. It is normalized autocorrelation, period. The normalization is to a constant noise level. E.g., for a random walk modeled informally as the sampled-in-time integration of white noise, the white noise input is normalized to have standard deviation sigma = unity. The formula gives not a coefficient of correlation, but the actual correlation.

Bart,

H=0.5 is not the designator for a random walk. You are wrong.

The Hurst exponent H is undefined for a random walk.

H=0.5 is for white noise, i.i.d. gaussian.

Your equations are wrong. Full stop.

Spence_UK says:

October 8, 2013 at 11:03 am

Well, I’m sorry you are getting heated, Spence. Clearly, the Wiki site defines H = 0.5 as being a random walk. That may not be the convention with which you are familiar, but it is what the Wiki site is using.

Look at the autocorrelation function they give. For H = 0.5, it is the autocorrelation function for a random walk.

E{x(t2)x(t1)} = 0.5 * ( abs(t2)^2H + abs(t1)^2H – abs(t2-t1)^2H )

Plug in H = 0.5

E{x(t2)x(t1)} = 0.5 * ( abs(t2) + abs(t1) – abs(t2-t1))

If t2 > t1 and both are positive, then

E{x(t2)x(t1)} = 0.5 * ( abs(t2) + abs(t1) – abs(t2-t1)) = 0.5 * (t2 + t1 – (t2-t1)) = t1

The same will hold when t1 > t2. Hence

E{x(t2)x(t1)} = min(t1,t2)

It appears to me there is likely a clash of conventions here. But, under the convention they are using, they are right.

The same will hold mutatis mutandis when t1 > t2.

“H=0.5 is for white noise, i.i.d. gaussian.”

Perhaps you mean the increments are white? I know the equations are right the way I am using them. And, they agree with the Wiki site. That’s 2:1. If you are sure you are right, then I think the only conclusion can be that there is a schism in the conventions being used, and this is leading to confusion and frustration.

Sometimes… strike that… Often, the biggest part of the problem is defining the conventions and making sure everyone is on the same page.

I’m not getting heated – I’m just telling you that you are wrong.

If the increments are white, you have a random walk, H is undefined for a random walk.

x = cumsum(randn(…));

Gives a random walk. Since H is undefined for this expression, you cannot use your equations at all for your analysis. H *is* defined for white noise, H=0.5. This is calculated in MATLAB as:

x = randn(…);

As we saw, the expected product between the current step and the next step for a time series of H=0.5 with zero mean and unity variance is approximately -1. This is simple and expected as I described above. But your equation does not give -1; it gives 0, because it is wrong.

The equation you give, less a constant applied to each component, is the equation describing the autocorrelation coefficient. You then interchangeably use this with the expected value of the series, which is an entirely different operator.

I’m explaining it in as simple terms as I can, but if you don’t understand the principles under which the maths is defined, I cannot help you.

Well, these guys, these, and these, contradict you. Honestly, I don’t care. But, clearly, if you take the autocorrelation function (not coefficient, function) provided on this page and provided in many other references as well, then for H > 0.5, succeeding deltas are positively correlated, and this should lead to precisely what the article states: that this “dependence means that if there is an increasing pattern in the previous steps, then it is likely that the current step will be increasing as well.”

You haven’t explained why you think they are wrong, you have only asserted it. I do not know why you think there is a principle contained within your assertion which I should be understanding. You have provided no maths. You have provided no alternative autocorrelation function. You have provided no definitions. What I do not understand is why you think I should merely take your word for it that all these sources are wrong, and only you hold the truth. And then, I do not know what you expect me to do with that information once I have accepted it.

Ah, okay, I’ve realised why you’re confused. I’m referring to the Hurst exponent, H, in all my discussion here (and the equations you give above are the equations for the normalised autocorrelation *coefficient* of the Hurst exponent).

You have linked there to a different (but unfortunately similarly named) measure, called the generalised Hurst exponent, Hq.

These are different measures and *cannot* be used interchangeably as you have done here.

The coefficient used to measure the properties of fractionally integrated time series is the original Hurst exponent coined by Mandelbrot, H, not Hq. Dr Lund and I have argued that climate follows a pattern associated with H>0.5, *not* Hq>0.5. These are quite different things and should not be confused in the way you have here.

As I explained, the original Hurst exponent is undefined for a random walk. And, as shown by my analysis above and by UC’s prediction of the behaviour of half integrated noise, the future expectation is in either direction, but the central prediction is slightly down (reversing the recent trend).

Is this autocorrelation function valid or not?

E{x(t2)x(t1)} = 0.5 * ( abs(t2)^2H + abs(t1)^2H – abs(t2-t1)^2H )

If it is, then with H = 0.5, it is equal to the minimum of t1 or t2, which is the autocorrelation function of a random walk.
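The reduction itself is easy to verify numerically, whatever one concludes about the convention dispute; a closing Python sketch:

```python
# Plugging H = 0.5 into the quoted covariance function and comparing to min(t1, t2).
def R(t1, t2, H=0.5):
    return 0.5 * (abs(t1)**(2*H) + abs(t2)**(2*H) - abs(t2 - t1)**(2*H))

for t1, t2 in [(1, 3), (3, 1), (2, 2), (5, 7), (10, 4)]:
    print(t1, t2, R(t1, t2), min(t1, t2))   # last two columns are equal
```

With H = 0.5 the exponents all become 1, and 0.5*(t1 + t2 - |t2 - t1|) is exactly min(t1, t2), so the arithmetic of the reduction is not in doubt; the argument is over which process that formula describes.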