*Guest essay by Sam Outcalt*

**Introduction: **The object of this document is to present a brief, concise introduction to Hurst rescaling. More detail is presented in a paper by Outcalt et al. (1997), which is posted on the WUWT website ( https://wattsupwiththat.files.wordpress.com/2012/07/sio_hurstrescale-1.pdf ). That paper contains an extensive reference list, so references are omitted here.

**Background: **During a study of the hydrology of the Nile, a British engineer, H. E. Hurst, discovered that the annual runoff appeared to have a *memory*. The Hurst Exponent (H), named in his honor, is calculated using Equation 1, in which R, S and n are the rescaled range, the standard deviation and the number of observations.

Equation 1. H = Log[R(n)/S(n)] / Log (n)

The rescaled range is the amplitude of the integral trace of deviations from the mean of a serial data vector. Hurst had anticipated an exponent near 0.5, which is termed *Brown Noise* and can be simulated from a series of random numbers. Values above 0.5 indicate increasing auto-correlation in the data, with 1.0 termed *Black Noise*, indicating extreme correlation with the past or strong “memory”. In a rather obscure paper, Outcalt et al. (1997) discovered that the extremes and inflections of the integral trace used to determine the value of the rescaled range flagged *regime changes* in the data. These regimes were found to pass tests for a statistically normal distribution at significance levels where the bulk data failed. Linear trend lines fit to the “regimes” also displayed significant slope differences at the transitions.
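As a concrete illustration, Equation 1 can be computed in a few lines. This is a minimal sketch in Python (assuming NumPy; the function and variable names are mine, not from the paper):

```python
import numpy as np

def hurst_exponent(x):
    """Equation 1: H = log(R(n)/S(n)) / log(n), where R(n) is the
    amplitude (max minus min) of the integral trace of deviations
    from the record mean and S(n) is the standard deviation."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    trace = np.cumsum(x - x.mean())    # integral trace of deviations
    r = trace.max() - trace.min()      # rescaled-range amplitude
    return np.log(r / x.std()) / np.log(n)

# A pure random series ("Brown Noise") should come out near 0.5,
# while a strongly trending series scores much higher.
h_random = hurst_exponent(np.random.default_rng(0).standard_normal(10_000))
h_trend = hurst_exponent(np.arange(100.0))
```

The two test cases mirror the description above: uncorrelated data sits near the 0.5 benchmark, and a trending (strong-memory) series well above it.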

**Example: **A useful and instructive example calculation can be carried out on the NASA GISS Data used to document “Global Warming” or “Climate Change” or the pressing need for new Carbon Taxes.

Before presenting the example let’s outline the steps in the calculation.

1. A calculation of the Hurst Exponent can be made to estimate the level of “memory” in the data.

2. The mean is subtracted if the data is not presented as deviations from the record mean.

3. The integral trace is calculated as the accumulated deviations from the record mean.

4. Any slight trend is removed from the integral trace.

An upward linear trend will produce a parabolic trace of negative values below the zero level as the early deviation sum is downward and the later values upward terminating near zero. A downward linear trend will produce a positive parabolic trace. More complex functions with upward and downward trending sectors will display an integral trace with sectors both above and below the zero level.
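The four steps above can be sketched as follows (Python with NumPy; a hypothetical helper, not the author's DPlot workflow):

```python
import numpy as np

def integral_trace(x, detrend=True):
    """Steps 2-4: subtract the record mean, accumulate the deviations
    into the integral trace, and optionally remove any slight linear
    trend from the trace."""
    x = np.asarray(x, dtype=float)
    trace = np.cumsum(x - x.mean())          # steps 2 and 3
    if detrend:                              # step 4
        t = np.arange(len(trace))
        trace = trace - np.polyval(np.polyfit(t, trace, 1), t)
    return trace

# An upward linear trend yields a parabolic trace of negative values,
# exactly as described in the paragraph above.
trace = integral_trace(np.linspace(0.0, 1.0, 101), detrend=False)
```

Feeding in a downward trend instead produces the mirror-image positive parabola, which is a quick sanity check on any implementation.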

The NASA GISS data is displayed in Figure 1. It should be mentioned that this data vector is already in the deviation from record mean format.

Figure 1. The data show a “strong memory” with a Hurst Exponent of 0.787.

There are two major inflections on the integral trace, in 1936 and 1976. The latter inflection is the base of the “hockey stick” warming regime. A 5-year moving average trace indicates that the end of the warming trend may have occurred early in the 21st Century. Before moving to a consideration of the early 21st Century, it is necessary to mention that there is an alternate method for estimating the Hurst Exponent: it can also be calculated from a log-log linear fit to the data FFT, as the X, Y axes of an FFT have the form listed in Equation 2.

Equation 2. Y = a + Exp [H X] or Ln Y = Ln a + H Ln X or Log10 Y = Log10 a + H Log10 X
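A sketch of this spectral alternative (Python with NumPy): fit a straight line to the log-log periodogram and convert the slope to H. The conversion used here, H = (1 + beta)/2 with power falling off as f**(-beta), is the standard convention for fractional Gaussian noise; that mapping is my assumption, since the essay does not spell out which convention it uses.

```python
import numpy as np

def hurst_from_fft(x):
    """Estimate H from the slope of the log-log periodogram.
    Assumes the fractional-Gaussian-noise convention
    H = (1 + beta) / 2, where power ~ f**(-beta)."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    power = np.abs(np.fft.rfft(x)) ** 2
    freq = np.fft.rfftfreq(len(x))
    mask = freq > 0                      # drop the zero-frequency bin
    slope, _ = np.polyfit(np.log(freq[mask]), np.log(power[mask]), 1)
    beta = -slope
    return (1.0 + beta) / 2.0

# White noise has a flat spectrum (beta near 0), so H comes out near 0.5.
h = hurst_from_fft(np.random.default_rng(1).standard_normal(4096))
```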

Figure 2. The slope of the Log-Log transform of the FFT estimates H at 0.774 compared to 0.787 in Figure 1.

To explore the transition into the early years of the 21st Century, the period from 1980-2010 was lifted from Figure 1, the mean was subtracted, and the analysis was carried out on that subset. The results are displayed as Figure 3.

Figure 3. The data show a strong “memory” similar to the 1880-2010 data set and a single inflection at the integral trace minimum value in 1997.

In Figure 3 the integral inflection in 1997 and leveling of the “warming trend” in 2004 hint that the warming regime which began in 1976 may have ended with the turn of the century.

The reader is encouraged to run through the calculations using a spreadsheet program or other resources. I use DPlot software because it has automated functions to create integral traces and FFTs. I strongly believe that it is impossible to understand analytical procedures without actually carrying out the calculations, or better still writing the source code.

**Data from Fountain Hills, AZ:**

There is some controversy about whether a *strong memory* is required to display regime transitions. To test this hypothesis I have analyzed a data set created from readings collected from a WiFi temperature probe. The probe was located on a north-facing wall of my *winter house* in Fountain Hills, AZ. The data were initially collected at a 30-minute interval and decimated to yield only noon readings of air temperature. The decimated data are displayed as Figure 4.

Figure 4. Data collected with a Lascar WiFi Probe from 19 March through 16 April 2013. This data set was decimated to preserve only noon readings.

Even with a Hurst Exponent just above the Brown Noise threshold, and well below the value for the NASA GISS data, the integral trace inflections flag air mass transitions in the Phoenix region. Although there are only 29 observations in the set, the information content of the integral remains intact, indicating air mass transitions. A data set from 3 probes at radically different sites at my *winter house* produces some interesting conclusions. These data are displayed as Figure 5.

Figure 5. Data from 3 USB probe sites at the *Winter House* in Fountain Hills, AZ.

The traces in Figure 5 indicate that the integrals flagged major air mass transitions at 3 probe sites. The strong diurnal signal is still present in the integral but the regime changes are still evident. This indicates that probe location is not critical for airmass transition analysis. The spike in the Wall Plot is due to the beam radiation striking the probe but has little impact on the integral.

**Conclusion: **Hurst rescaling provides a unique method for detecting regime changes in weather and climate data. The detection of strong air mass control of the integral in *Brown Noise* indicates that strong regimes may be detected even in data with almost no auto-correlation. Probe location appears to have little effect on integral transitions. The major consideration in probe location appears to be placing the probes at sites where there is no unnatural time-dependent influence. A probe placed in the exhaust area of a clothes dryer running on an irregular schedule is one thing, but the diurnal passage of shadows due to buildings may have a minimal impact on the integral trace.

The reader is encouraged to collect data using WiFi or USB temperature probes. The Lascar USB-Lite (under US$50) will record data at 30-minute intervals for a month before a battery replacement. I waterproof these tiny probes with a section of old 23C road bike inner tube trimmed to the probe length and mount the probes inside a small styrofoam coffee cup attached to a length of stiff wire. The wire is then attached to a tree limb.

A strong characteristic of chaotic systems is this scale invariance, which would mean that at any level the weather can be analysed as the sum of parts of sub-levels; turtles all the way down. Of course one bookend would be global scale and the other bookend would be the molecular level. Mandelbrot discusses Hurst extensively, in his stock-market book.

For the purposes of debunking AGW, all that is necessary is to show that the warming of the late 20th century could easily be a chaotic blip.

Sam

It’s an interesting way to show the “evolving” integral. I suppose the question is whether this “memory” is actually telling us anything new – I couldn’t quite see where you did this. Surely, as interesting as it may be – and as well written as the piece is – is it all immaterial, since we know location, and more importantly experimental changes (land use change), do affect the recorded trend?

I am aware of the use of the H dimension in the derivation of self-affine fractal dimensions and was just wondering whether or not you were suggesting that experimental changes (location modifications) would lead to a breakdown in the “scaling”; and that this might help decipher stations that have experimental changes better than comparisons with other neighbouring stations. In short, you’re looking for an experimental signal (per station) rather than correction by a statistical method based on an assumption (as is the case at the moment). Is this perhaps what you were alluding to?

peterg

Correct me if I am wrong, but the fractal approach for observed systems is statistical and very often has lower and upper bounds where the scaling holds – hence the introduction of multifractals, themselves statistical. So I can’t see how all this helps other than to state the obvious. I don’t think the onus is on us to prove that AGW is indeed false, just on those who promote it to prove that it is real. We can then debunk the “proof” unless it is real. At the moment the evidence is scant, if there at all.

There’s an interesting contribution on HK dynamics (Hurst-Kolmogorov model) by Demetris Koutsoyiannis “Stochastics and its importance in studying climate” over at Climate Dialogue, well worth reading:

http://www.climatedialogue.org/long-term-persistence-and-trend-significance

Benoit Mandelbrot commented on the Hurst Exponent as a measure of randomness. He also described reality as fractally complex as he cautioned against ignorant inductive inference. Mandelbrot knew that the Black Swan lurks in any sufficiently complex mapping of reality.

Interesting article. One omission is an explanation of R, the rescaled range. Where does that come from?

Outcalt et al.: “… a transformed series R. The adjusted range R(n) of the series is equal to (Rmax – Rmin). The adjusted range, divided by the standard deviation (S) of the record, yields the rescaled range. The rescaled range is expected to increase asymptotically with the square root of the number of record observations (n).”
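The square-root growth claimed in the last sentence of the quote can be checked on simulated uncorrelated data (a sketch, assuming NumPy; averaged over many realizations because any single one is noisy):

```python
import numpy as np

def rescaled_range(x):
    """R(n)/S(n): the adjusted range (Rmax - Rmin) of the accumulated
    deviations from the mean, divided by the standard deviation."""
    trace = np.cumsum(x - x.mean())
    return (trace.max() - trace.min()) / x.std()

# Quadrupling the record length should roughly double R/S for
# uncorrelated data, consistent with sqrt(n) growth (H = 0.5).
ratios = []
for seed in range(50):
    x = np.random.default_rng(seed).standard_normal(40_000)
    ratios.append(rescaled_range(x) / rescaled_range(x[:10_000]))
mean_ratio = float(np.mean(ratios))
```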

Seems highly dependent on two single outliers. :?

The ITIA research group led by D. Koutsoyiannis, have published extensively on Hurst Kolmogorov dynamics in climate, especially on runoff. e.g. Markonis, Y., and D. Koutsoyiannis, Hurst-Kolmogorov dynamics in paleoclimate reconstructions, European Geosciences Union General Assembly 2010, Geophysical Research Abstracts, Vol. 12, Vienna, EGU2010-14816, European Geosciences Union, 2010.

Figure 2. The slope of the Log-Log transform of the FFT estimates H at 0.774 compared to 0.787 in Figure 1.

===============

Doesn’t this imply a power-law distribution? That temperature will exhibit greater tendency to extremes as compared to the “normal” distribution? Thus, the statistical tests that are typically applied to climate data are underestimating the frequency of extreme events due to natural causes, because the normal distribution assumes H = 0.5.

This then implies that “climate change” is perhaps not occurring; rather, the amount of “natural climate change” has been underestimated by incorrect application of statistics. The wrong assumption about temperature has been used to estimate the probability of climate change and extreme events.

” Thus, the statistical tests that are typically applied to climate data are underestimating the frequency of extreme events due to natural causes, because the normal distribution assumes H = 0.5.”

Interesting point. Where do you see temperatures being compared to a normal distribution and declared abnormal, such that this idea can be tested?

I have another method for detecting regime changes in a data series: Looking at the data.

One thing that I got out of this is that GISS is changing. Looking at Figure 1, the change from about 1910 to 1940 used to be almost precisely the same as the change from about 1970 to 2000, as it still is in HADCRUT. Wish I had saved a previous screen shot of GISS.


There’s an interesting contribution on HK dynamics (Hurst-Kolmogorov model) by Demetris Koutsoyiannis, “Stochastics and its importance in studying climate”, over at Climate Dialogue, well worth reading: http://www.climatedialogue.org/long-term-persistence-and-trend-significance

Let me second this. Koutsoyiannis is the HK “man”.

Let me also comment on the connection between HK dynamics and statistics and chaos. Complex nonlinear multivariate systems often exhibit “strange attractors” — local fixed points in a set of coupled nonlinear ordinary differential equations — that function as foci for Poincare cycles in the multivariate phase space. In classical deterministic chaos, a system will often end up in a complex orbit around multiple attractors, one that essentially never repeats (and the attractors themselves may migrate around as this is going on). In a system such as the climate, we can never include enough variables to describe the actual system on all relevant length scales (e.g. the butterfly effect — MICROSCOPIC perturbations grow exponentially in time to drive the system to completely different states over macroscopic time) so the best that we can often do is model it as a complex nonlinear set of ordinary differential equations with stochastic noise terms — a generalized Langevin equation or generalized Master equation, as it were — and average behaviors over what one hopes is a spanning set of butterfly-wing perturbations to assess whether or not the resulting system trajectories fill the available phase space uniformly or perhaps are restricted or constrained in some way. We might physically expect this to happen if the system has strong nonlinear negative feedback terms that stabilize it around some particular (family of) attractors. Or, we might find that the system is in or near a “critical” regime where large fluctuations are possible and literally anything can happen, and then change without warning to anything else, with very little pattern in what happens or how long it lasts.

The solutions in question are usually technically integrodifferential equations with a non-Markovian kernel, which makes them damn difficult to solve. To simplify them, one often uses the Markov approximation and simulates them as e.g. a Markov chain rather than necessarily as a non-Markovian integral. A very reasonable interpretation of HK dynamics in climate science is that each distinct regime represents a period where a particular local attractor is stable and some specific pattern of climate holds (one which might be net warming or net cooling or stable, as all three are clearly visible in even the LOCAL record of just over a century in e.g. GISS, most of it without the help or influence of CO_2).

This sort of time evolution is evident in the longer term — 5 million year — climate data. The Earth entered precisely this sort of multistable regime at the beginning of the Pleistocene, with a clearly evident bistability between dominant glaciation punctuated by comparatively brief interglacials (for all that the whole of recorded human history fits into half of ONE such interglacial). There is no evidence of a stable, still warmer multistable phase — even when the Earth has spiked up to much warmer than it is today, negative feedback has quickly driven it first back to interglacial behavior, then (usually quite abruptly) back into the dominant glaciation mode. This behavior is NOT truly chaotic, but rather has only a few distinct frequencies associated with it, frequencies we can identify (weakly) with various orbital periods and changes in the solar system.

Given this natural history of dramatic, game-changing swings in the Earth’s climate across at least a slowly varying pair of bistable regimes acting as primary attractors, attempting to analyze the Earth’s behavior over the last 30 to 50 years (where we don’t yet understand and cannot predict its gross behavior on all of the timescales longer than this, and hence have no real idea what the climate “should” be doing) is a bit of a joke. Or if you prefer, a *grand challenge problem*, arguably the most difficult problem in science we might have today, more difficult than finding the Higgs or unifying field theory or detecting gravity waves or building a stable exothermic thermonuclear fusion reactor. This isn’t settled science — we haven’t even finished doing the *preliminary* work, the *groundwork*, needed to make serious progress in it. In fifty to a hundred years, we might have enough, good enough, data to make some real progress in the field — if people would take the damn thumbs off of the scales and leave politics out of the science.

Yes, the Earth could experience catastrophic warming, catastrophic cooling, or could have a catastrophe unrelated to heating or cooling in between. No, we do not know enough yet to do more than hint at which one(s) are likely, or how likely. If the Earth exhibits this behavior, it might or might not be “our fault”. Or perhaps it is the fault of a Brazilian butterfly. Or the fault of goats turned loose in what became the Sahara, 9000 years ago. Or the fault of the Earth’s inexorable orbital progression. Or the fault of as yet unknown solar dynamics. It cuts *both ways*.

What we *do* know is that politics and science make poor bedfellows, and that confirmation bias is the bête noire of all scientific research. We *also* know that the measures proposed to combat an unproven possibility of catastrophe in fifty or a hundred years are themselves causing a directly provable catastrophe today. Even if the catastrophists are right, this is a cost-benefit problem, a risk assessment problem, and we have to trade off the certain damage caused by energy poverty that afflicts some 1/2 of the world’s population *now* against the possibility of (probably lesser, quite frankly) damage in fifty years, in a hundred years, should the catastrophists prove correct.

rgb

Can you make a testable prediction about the future?

rgbatduke says:

May 7, 2013 at 10:59 am

Every system is chaotic at some level. But, that does not mean classical analysis techniques cannot be applied. In the neighborhood of a particular attractor, even grossly nonlinear systems can often be modeled using linear systems theory – it’s just a matter of how small your neighborhood needs to be.

In all of the climate data I have viewed, there are trends and regular oscillations typical of energy storage mechanisms with low dissipation rates and persistent excitation, processes which are eminently describable using classical analysis techniques. That the major players do not generally display a good grasp of how to apply those techniques does not indicate that they are inapplicable.

“The rescaled range is the amplitude of the integral trace of deviations from the mean of a serial data vector.”

I’m sure you’re right. Unfortunately, not everyone can translate jargon like this. The WUWT website has a very wide audience, and the few responses generated would indicate that many readers and contributors switched off before your point was made/understood. Please remember that most people are lay readers without the specialist background you clearly have. No doubt each and every reader of WUWT could equally confound you with specialist knowledge of their own.

You have a computational procedure that transforms a roughly cubical curve into a roughly quadratic curve. So what? You still cannot distinguish signal from noise, or predict the trajectory for the next 30 years.

“A Brief Introduction to the Detection of Climate and Weather Transitions using Hurst Rescaling”

The idea of the Hurst Exponent as an indicator is interesting but I don’t see the use of Hurst in the ‘detection’ part here.

Behind all the jargon, integration is just a kind of low-pass filter, so all that is happening here is a detrending and L.P. filtering, then eye-balling for a “regime change”. Seems to have little to do with Hurst.

A cumulative integral like this is more sensitive to change in the early part than later, so it contains some bias dependent on where the changes occur. If it has some special advantage over more conventional L.P. filters, this needs to be pointed out.
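The low-pass characterization of integration is easy to verify numerically (a quick sketch, assuming NumPy): integrating a sinusoid divides its amplitude by its angular frequency, so a component at 10x the frequency comes out roughly 10x smaller.

```python
import numpy as np

# Cumulative sums as a crude integrator: the high-frequency component
# is attenuated by roughly the ratio of the two frequencies.
t = np.linspace(0.0, 20.0 * np.pi, 20_000)
dt = t[1] - t[0]
slow = np.cumsum(np.sin(t)) * dt          # ~ 1 - cos(t)
fast = np.cumsum(np.sin(10.0 * t)) * dt   # ~ (1 - cos(10t)) / 10
attenuation = np.ptp(slow) / np.ptp(fast)
```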

Mathew

I agree – not sure about the cubical bit though – but I don’t really see the point of this piece. I originally assumed I was missing something, but everyone else seems to be of the same opinion.

Greg

I’m not sure that an integral (cumulative, as plotted here) is similar to a low-pass filter. It might have a similar effect as an LP filter where you are dealing with a residual of a stationary series, but it certainly isn’t the same.

But I do concur – where is the punchline?

Nick

I think WUWT has in general a good balance between technical and general articles. But most of these types of discussion break down into rather verbose commentary filled with what often reads like waffle – it has already started here. But you’re right, it doesn’t help when the author doesn’t supply qualification for rather long-winded – and sometimes suspicious – terminology.

“Memory” here presumably implies a delayed nonlinear oscillator, such as the delayed oscillator model Bob Tisdale presents in his book as a mechanism for ENSO (for background on this model see Fiona Eccles, 2001).

I wrote an article on Judith Curry’s blog when Hurst dynamics were all the rage about a year ago.

http://judithcurry.com/2012/02/19/autocorrelation-and-trends/

Frankly, I wish I hadn’t!

My upshot was that it is extremely difficult, using real data, to separate a power law, i.e.: Hurst dynamics from other models.

Even if there is a pure Hurst Law relationship, what does this mean? Basically it means that there are variable dynamics with different delays. I don’t think that saying that we can regard temperature as a Hurst system is particularly helpful, as it doesn’t uncover the basic mechanisms underlying temperature; it simply gives one statistical description of the signal, which could be explained in many other ways.

Sam,

Thanks for the interesting article.

At your suggestion, I tried to reproduce your results, but ran into some trouble. The first problem was that the source listed for your data was CO2 Science, but they do not have data past the year 2008. Therefore, I downloaded the data from http://data.giss.nasa.gov/gistemp/tabledata_v3/GLB.Ts+dSST.txt.

I subtracted the mean from the data as you suggest in (2.). It plots very similarly to your data, with the peaks and troughs in the same years, but it is definitely not the same data set. For example, there was a much more pronounced cooling trend between 1880 and 1909 with the data that I downloaded. My calculated values are as follows: R(n) = 13.31 (in year 1956), S(n) = 0.282 and n = 130. (I used 130, although there are 131 observations – a degrees of freedom thing, I assume.) The calculated Hurst Exponent was 0.79185. In addition to the year 1956, I did get local minima in 1938 and 1976, as you did.

All of the above was done in Excel. I tried checking the results with Octave (a Matlab-like public domain package) and got H = 0.79188.
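For anyone following along, those reported values do reproduce under Equation 1 (plain Python, standard library only):

```python
import math

# Plugging the values reported above into Equation 1:
# R(n) = 13.31, S(n) = 0.282, n = 130
h = math.log(13.31 / 0.282) / math.log(130)
# h comes out to roughly 0.7919, matching the reported 0.79185
```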

Did anyone else try this?

RCSaumarez (May 7, 2013 at 3:02 pm) wrote: “I wrote an article on Judith Curry’s blog when Hurst dynamics were all the rage about a year ago.

http://judithcurry.com/2012/02/19/autocorrelation-and-trends/

Frankly, I wish I hadn’t!

My upshot was that it is extremely difficult, using real data, to separate a power law, i.e.: Hurst dynamics from other models.

Even if there is a pure Hurst Law relationship, what does this mean? Basically it means that there are variable dynamics with different delays. I don’t think that saying that we can regard temperature as a Hurst system is particularly helpful as it doesn’t uncover the basic mechanisms underlying temperature, it simply gives one statistical description of the signal, which could be explained in many other ways.”

I welcome Sam Outcalt’s contributions. I’m also pleased to see you encouraging more sober thinking about fundamentals here. The biggest mistake I see in general with applications of HK (not with Sam’s work) is failure to explore the variability of parameter estimates as a function of aggregation criteria. For example, what insights arise if the data are sorted by month of year, or by some spatial criteria, and by many, *many* other criteria? There’s a *major* blindspot in some of the narratives. The insights will deepen and the stories will get better once aggregation criteria make their way onto the radars of key HK advocates.

RCSaumarez

Good man/woman. The linked article makes things a lot clearer. Still don’t see the point of the H dimension; it seems entirely meaningless outside a purely academic field. A simple Markov approach would do the job without all this arm waving.

1) Detrend the data

2) Discretise the data into bin ranges (e.g. 1 = -1.0 to -0.6 … 5 = 0.6 to 1.0).

3) Construct a Markov mesh using these discrete values (indicators) where the mesh is composed of two sample points at distance h = 0.

Repeat 3 using different mesh parameters: h = 1…10

4) Plot the conditional probabilities against h for each transition (1->2, 1->3 etc.)

Yes this does give you – indirectly – a pseudo-autocorrelation of the thresholded values. The difference however is that you don’t need to model the relationship any further as the conditional probabilities will either be a function of h or not. End of story!
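A minimal sketch of this procedure in Python (assuming NumPy; the bin count and lag range follow the steps above, everything else is an illustrative implementation choice):

```python
import numpy as np

def lagged_transition_probs(x, n_bins=5, max_lag=10):
    """Detrend, bin the values into discrete states, then for each lag h
    estimate the conditional probability P(state j at t+h | state i at t),
    per the Markov-mesh recipe in the comment above."""
    x = np.asarray(x, dtype=float)
    t = np.arange(len(x))
    x = x - np.polyval(np.polyfit(t, x, 1), t)              # step 1: detrend
    edges = np.linspace(x.min(), x.max(), n_bins + 1)       # step 2: bins
    states = np.clip(np.digitize(x, edges) - 1, 0, n_bins - 1)
    probs = {}
    for h in range(1, max_lag + 1):                         # steps 3-4
        counts = np.zeros((n_bins, n_bins))
        for a, b in zip(states[:-h], states[h:]):
            counts[a, b] += 1
        row_sums = counts.sum(axis=1, keepdims=True)
        probs[h] = counts / np.where(row_sums == 0, 1, row_sums)
    return probs

# Example: conditional probabilities for a strongly periodic series.
probs = lagged_transition_probs(np.sin(np.linspace(0.0, 20.0, 500)))
```

Plotting each transition probability against h then shows directly whether the probabilities depend on lag, which is the "End of story!" test proposed above.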

R. C. Saumarez:

Even if there is a pure Hurst Law relationship, what does this mean? Basically it means that there are variable dynamics with different delays. I don’t think that saying that we can regard temperature as a Hurst system is particularly helpful as it doesn’t uncover the basic mechanisms underlying temperature, it simply gives one statistical description of the signal, which could be explained in many other ways.

You are more polite than I was, but yeh. There are zillions of statistical summaries that can be computed for extant data, especially extant data that have been worked over as much as these have been. Without explicit methods to test the fit of the corresponding underlying models, all anyone achieves is another set of statistics of dubious value.

“Equation 2. Y = a + Exp [H X] or Ln Y = Ln a + H Ln X or Log10 Y = Log10 a + H Log10 X”

The first equation is not equivalent to the second and third equations.

The second and third equations are equivalent to Y = a X^H, and not to Y = a + e^(H X).

Please explain what you had in mind when you took the logarithm of the first equation.
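The distinction can be verified numerically (a sketch assuming NumPy; the values of a and H are arbitrary): a power law is exactly linear in log-log coordinates with slope H, while the first form of Equation 2 is not.

```python
import numpy as np

# Y = a * X**H is exactly linear in log-log coordinates (slope H),
# whereas Y = a + exp(H * X) is nowhere near linear there.
a, H = 2.0, 0.8
X = np.linspace(1.0, 100.0, 200)
logX = np.log(X)

slope_pl, _ = np.polyfit(logX, np.log(a * X**H), 1)
resid_pl = np.polyfit(logX, np.log(a * X**H), 1, full=True)[1][0]
resid_exp = np.polyfit(logX, np.log(a + np.exp(H * X)), 1, full=True)[1][0]
```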