Guest Post by Basil Copeland
Figure 1
Each month, readers here at Watts Up With That, over at lucia’s The Blackboard, and elsewhere anxiously await the latest global temperature estimates, as if just one more month of data will determine, one way or the other, the eternal destiny of AGW (the theory of “anthropogenic global warming”). Last month, July, the satellite estimates released by UAH and RSS were up sharply: the UAH estimate rose from essentially zero in June to +0.41°C in July, while the RSS estimate rose from +0.081°C to +0.392°C. Does this sharp rise presage the resumption of global warming, after nearly a decade of relative cooling? Or is it just another in a series of meandering moves reminiscent of what statisticians know as a “random walk?”
I have not researched the literature exhaustively, but the possibility that global temperature follows a random walk was suggested at least as early as 1991 by A.H. Gordon in an article in The Journal of Climate entitled “Global Warming as a Manifestation of a Random Walk.” In 1995 Gordon’s work was extended by Olavi Kärner in a note in the Journal of Climate entitled “Global Temperature Deviations as a Random Walk.” Statistician William Briggs has written about climate behaving like a random walk on his blog.
Now even I will confess that the notion that global temperature, as a manifestation of climate processes, might be essentially random is difficult to accept. But I am coming around to that view, based on what I will present here: monthly global temperature variations do, indeed, behave somewhat like a random walk. The qualifier is important, as I hope to show.
So, what is a “random walk” and why do some think that global temperature behaves, even if only somewhat, like a random walk? And what does it matter, anyhow?
While there are certainly more elegant definitions, a random walk in a time series posits that the direction of change at any point in time is essentially determined by a coin toss, i.e. by chance. As applied to global temperature, that is the same as saying that in any given month, it is just as likely to go up as it is to go down, and vice versa. Were global temperature a true random walk, there would be no underlying trend to the data, and any claimed evidence of a trend would be spurious. One of the best known “features” of a random walk is that in a time series it appears to “trend” up or down over extended periods of time, despite the underlying randomness of the direction of change at each point in time.
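The spurious-trend behavior described above is easy to reproduce in a few lines; a minimal sketch (Python with numpy assumed; the step size and series length are arbitrary, chosen only to resemble a long monthly record):

```python
import numpy as np

rng = np.random.default_rng(42)

# A pure random walk: each step is an independent coin toss (+0.1 or -0.1),
# loosely scaled to resemble monthly anomaly changes in degrees C.
steps = rng.choice([-0.1, 0.1], size=1800)   # ~150 years of monthly steps
walk = np.cumsum(steps)

# Despite having no underlying trend, an OLS line fit to the walk will
# usually report a nonzero slope -- the "spurious trend" of a random walk.
t = np.arange(len(walk))
slope, intercept = np.polyfit(t, walk, 1)
print(f"fitted slope per step: {slope:+.5f}")
```

Rerunning with different seeds produces different, equally convincing-looking "trends," up or down, from the same trendless process.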
So why might we think global temperature follows a random walk? One reason is suggested by a close look at Figure 1. Figure 1 is the familiar HadCRUT3 time series of monthly global temperature anomalies since 1850, with a simple linear trend line fit through the data. When we look close, we see long periods, or “runs,” in which the data are above or below the trend line. If the data were truly generated by a linear process with random variations about the trend, we’d expect to see the deviations scattered approximately randomly above and below the trend line. We see nothing of the kind, suggesting that whatever is happening isn’t likely the result of a linear process.
On the other hand, when we perform what is a very simple transformation in time series analysis to the HadCRUT3 data, we get the result pictured in Figure 2.
Figure 2
A common transformation in time series to investigate the possibility of a random walk is to “difference” the data. Here, because we are using monthly data, a particularly useful type of differencing is seasonal differencing, i.e., comparing one month’s observation to the observation from 12 months preceding. Since 12 months have intervened in computing this difference, it is equivalent to an annual rate of change, or a one month “spot” estimate of the annual “trend” in the undifferenced, or original, series. When we look at Figure 2, it has the characteristic appearance of a random walk.
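The seasonal-differencing step can be sketched directly (Python/numpy assumed; the series below is a toy stand-in for HadCRUT3, built from a pure 12-month cycle plus a small drift):

```python
import numpy as np

# Hypothetical monthly anomaly series: an annual cycle plus a slow drift.
anoms = np.sin(np.arange(120) * 2 * np.pi / 12) + 0.002 * np.arange(120)

# Seasonal (12-month) difference: each value minus the value 12 months
# earlier, i.e. a one-month "spot" estimate of the annual rate of change.
seasonal_diff = anoms[12:] - anoms[:-12]

print(seasonal_diff[:3])
```

Because exactly 12 months separate the two observations, any fixed annual cycle cancels out of the difference, leaving only the year-over-year change.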
But we can do more than just look at the series. We can put a number to it: the Hurst exponent. Here’s a very understandable presentation of the Hurst exponent:
“The values of the Hurst Exponent range between 0 and 1.
- A Hurst Exponent value H close to 0.5 indicates a random walk (a Brownian time series). In a random walk there is no correlation between any element and a future element and there is a 50% probability that future return values will go either up or down. Series of this type are hard to predict.
- A Hurst Exponent value H between 0 and 0.5 exists for time series with “anti-persistent behaviour”. This means that an increase will tend to be followed by a decrease (or a decrease will be followed by an increase). This behaviour is sometimes called “mean reversion” which means future values will have a tendency to return to a longer term mean value. The strength of this mean reversion increases as H approaches 0.
- A Hurst Exponent value H between 0.5 and 1 indicates “persistent behavior”, that is the time series is trending. If there is an increase from time step [t-1] to [t] there will probably be an increase from [t] to [t+1]. The same is true of decreases, where a decrease will tend to follow a decrease. The larger the H value is, the stronger the trend. Series of this type are easier to predict than series falling in the other two categories.”
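The three regimes quoted above can be checked numerically. Here is a minimal rescaled-range (R/S) sketch of a Hurst estimator (Python/numpy assumed; this is the generic textbook estimator, not necessarily the exact method used for the figures in this post):

```python
import numpy as np

def hurst_rs(x, min_chunk=8):
    """Estimate the Hurst exponent by rescaled-range (R/S) analysis.

    Split the series into chunks of various sizes, compute the rescaled
    range R/S for each size, and fit log(R/S) ~ H * log(size).
    """
    x = np.asarray(x, dtype=float)
    sizes, rs_vals = [], []
    n = min_chunk
    while n <= len(x) // 2:
        rs = []
        for start in range(0, len(x) - n + 1, n):
            chunk = x[start:start + n]
            dev = np.cumsum(chunk - chunk.mean())
            r = dev.max() - dev.min()      # range of cumulative deviations
            s = chunk.std()
            if s > 0:
                rs.append(r / s)
        sizes.append(n)
        rs_vals.append(np.mean(rs))
        n *= 2
    H, _ = np.polyfit(np.log(sizes), np.log(rs_vals), 1)
    return H

rng = np.random.default_rng(0)
white = rng.standard_normal(4096)      # iid increments, like a coin toss
print(f"H for white noise: {hurst_rs(white):.2f}")   # expect near 0.5
```

Note that R/S estimates carry a small-sample bias and will not reproduce published values exactly; different software (as PaulM notes further down the thread) gives slightly different numbers.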
So what is the Hurst exponent for the series depicted in Figure 2? It is 0.475, which is very near the value of 0.5 which indicates a pure random walk. And when we exclude the data before 1880, which may be suspect because of a dearth of surface locations in computing the HadCRUT3 series, the Hurst exponent is 0.493, even closer to 0.5. So by all appearances, the global temperature series has the mark of a random walk. But appearances can be deceiving. In Figure 3 I fit a Hodrick-Prescott smooth to the data:
In the upper pane, the undulating blue line depicts the smoothed value derived using Hodrick-Prescott smoothing (lambda is 129,000, for the curious). In the lower pane are detrended seasonal differences, i.e., what is left after removing the smoothed series. Conceptually, the smoothed series can be taken to represent the “true” underlying “trend” in the time series, while the remainder in the bottom pane represents random variations about the trend. In other words, at times, the annual rate of change in temperature is consistently (or persistently, as we shall see) rising, while at other times it is consistently falling. That is, there are trends in the trend, or cycles, if you will. And while it is not obvious, because of the scaling involved, these are essentially the same cycles that Anthony and I have attributed to a lunisolar influence on global temperature trends. That should not be so surprising. In our paper, we smoothed the data first with Hodrick-Prescott smoothing, and then differenced it. Here we’re differencing it first, to show the random walk nature of the series, and then smoothing the differences. But either approach reveals the same pattern of cycles in global temperature trends over time.
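For readers who want to replicate this decomposition, the Hodrick-Prescott smoother is just a penalized least-squares fit; a minimal sketch (Python/numpy assumed; the toy series below stands in for the differenced HadCRUT3 data):

```python
import numpy as np

def hp_filter(y, lamb=129000):
    """Hodrick-Prescott smoother: split y into trend + cycle.

    Solves (I + lamb * D'D) trend = y, where D is the second-difference
    operator -- the standard HP normal equations.
    """
    y = np.asarray(y, dtype=float)
    n = len(y)
    # Build the (n-2) x n second-difference matrix D.
    D = np.zeros((n - 2, n))
    for i in range(n - 2):
        D[i, i:i + 3] = [1.0, -2.0, 1.0]
    trend = np.linalg.solve(np.eye(n) + lamb * D.T @ D, y)
    return trend, y - trend        # (smoothed trend, cyclical remainder)

# Toy series: a slow wave plus noise, as a stand-in for the real data.
rng = np.random.default_rng(1)
t = np.arange(600)
y = 0.3 * np.sin(2 * np.pi * t / 240) + 0.5 * rng.standard_normal(600)
trend, cycle = hp_filter(y)
print(trend[:3], cycle[:3])
```

The two returned pieces always sum back to the original series; lambda controls how stiff the trend is, with 129,000 (the value used here) giving a very smooth, slowly undulating line.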
Looking more closely at the smoothed series, and the random component (labeled “Cyclical component” in Figure 3), we have an interesting result when we compute the Hurst exponents for the two series. The Hurst exponent for the smoothed (blue line) series is 0.835, while the Hurst exponent for the detrended random component (bottom pane) is 0.383. The first is in the range associated with “persistent” behavior, while the second is in the range associated with “anti-persistent” behavior. Let’s discuss the latter first.
Anti-persistence is evidence of mean reversion, or what is also sometimes called “regression toward the mean.” Simply put, when temperatures spike in one direction, there is a strong probability that they will subsequently revert back toward a mean value. Ignoring all other factors, this property would suggest that the dramatic rise in the temperature anomaly for July should lead to subsequent declines back toward some underlying mean or stable value. I think this is probably more what Gordon or Kärner had in mind for the physical processes at work when they proposed treating global temperatures as a random walk. I.e., shocks to the underlying central tendency of the climate system from processes such as volcanism, ENSO events, and similar climate variations create deviations from the central tendency which are followed by reversions back toward the mean or central tendency. Carvalho et al. (2007), using rather complicated procedures, recently laid claim to having first identified the existence of anti-persistence in global temperatures. We’ve identified it here in a much simpler, and more straightforward, fashion. (I’m not trying to take away from the usefulness or significance of their work. Their procedures demonstrate the spatial-temporal nature of anti-persistence in global temperatures, especially on decadal time periods. I think WUWT readers would find their Figure 10 especially interesting, for while they do not use the term, it demonstrates “the great climate shift” of 1976 rather dramatically.)
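Mean reversion of this kind is easy to illustrate with the simplest possible toy model; a sketch (Python/numpy assumed; the AR(1) form and its parameters are illustrative only, not a claim about the actual climate process):

```python
import numpy as np

rng = np.random.default_rng(7)

# Mean-reversion sketch: an AR(1) process with 0 < phi < 1 pulls back
# toward its long-run mean after each shock; phi = 1 would be a random walk.
phi, mu = 0.6, 0.0
x = np.zeros(1200)
for i in range(1, 1200):
    x[i] = mu + phi * (x[i - 1] - mu) + rng.standard_normal()

# After a large positive spike, the next change tends to be negative.
spikes = np.where(x[:-1] > 2.0)[0]
print(f"mean change after a spike: {np.mean(x[spikes + 1] - x[spikes]):.2f}")
```

The average change following a spike comes out negative: the series is pulled back toward its mean, which is the behavior a Hurst exponent below 0.5 is flagging.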
With respect to the smoothed series, the Hurst exponent of 0.835 indicates persistent behavior, i.e. if the series is trending upward, it will have a tendency to continue trending upward, and vice versa. But that is to be expected from the cyclical undulations we observe in the smoothed series. As to the possible physical processes involved in generating these cycles, after Anthony and I posted our paper, comments by Leif Svalgaard prompted me to perform a “superposed epoch analysis” (also known as a “Chree analysis”) on these cycles:
While Leif contends that the analysis should be performed on the raw data, in this case I would beg to differ. As shown in Figure 3, the raw data is dominated by the essentially random character of the monthly changes, completely obscuring the underlying cycles in the data that emerge when we filter out (detrend) the raw data. Arguably, what we have in the blue line in Figure 3 is a “signal” that has been extracted from the “noise” depicted in the bottom pane. Now as such, the “signal” may mean something, or it may not. That is where Figure 4 comes into play. The peaks in the cycles depicted by the blue line in Figure 3 show a strong correspondence to maxima in the lunar nodal cycle (the “luni” part of our suggestion of a “lunisolar” influence on global temperature trends). They also show a strong correspondence to solar maxima associated with odd-numbered solar cycles, especially beginning with solar cycle 17. Are these correspondences mere coincidence? Anthony and I think not. While each may play an independent role in modulating global temperatures, since the 1920s the solar and lunar influences appear to have been roughly in phase to strongly influence temperature trends on a bidecadal time frame. In other words, Figure 4 may be revealing the physical processes at work in explaining the persistence revealed by the Hurst exponent for the blue line in Figure 3.
Taken together, the two Hurst exponents – one for the true “signal” in the series, and the other for the “noise” in the series – essentially offset each other, leaving us with a Hurst exponent for the unsmoothed, raw, seasonal difference of ~0.5, i.e., essentially a random walk. And so on a monthly basis, the global temperature anomalies we await anxiously are essentially unpredictable. However, if the cycles in the smoothed series can be plausibly related to physical processes, as Anthony and I believe, that gives us a clue as to the “general direction” of the monthly anomalies over time.
In our paper together, Anthony and I presented the following projection using a sinusoidal model based on the same cycles shown in the blue smooth in Figure 3:
The light purple line in Figure 5 is, essentially, a continuation, or projection, of the blue smooth in Figure 3. From this, we derived a projection for the HadCRUT3 anomaly (light blue in Figure 5) which has it essentially meandering between 0.3 and 0.5 for the foreseeable future (here, roughly, the next two decades).
But the monthly values will vary substantially around this basically flat trend, with individual monthly values saying little, if anything, about the long term direction of global temperature. In that sense, global temperature will be very much like a random walk.





I’ve promoted the random walk model of temperature for quite some time, e.g.
http://motls.blogspot.com/2009/01/record-breaking-years-in-autocorrelated.html
http://motls.blogspot.com/2009/01/weather-and-climate-noise-and.html
so you shouldn’t expect any devastating criticism from me.
Of course, there’s a lot of influences that are (at least qualitatively) well-understood, like the influence of the orbital variations, solar activity, greenhouse gas increases, or the fluctuations of the cosmic rays.
There are also many approximate predictions that are possible because of various lags – such as the delayed influence of El Ninos.
Still, the bulk of the global mean temperature change may be completely Brownian motion, up to variations of order +-10 deg C where the regulating mechanisms have to become important, like in an AR(1) process, because the temperatures have been stabilized in a 20-deg-C window for millions of years.
But below this +-10 deg C, i.e. below tens of thousands of years, the motion can be completely random. The scaling laws seem to work well. The typical temperature jump after Y centuries is sqrt(Y) degrees Celsius, or one half of it.
For 1 century, the temperature jumps by 1 deg C (or 0.5). After 4 centuries, it’s 2 deg C (or 1). After 9 centuries, it’s 3 deg C (or 1.5). After 100 centuries i.e. 10,000 years, it’s 10 deg C (or 5). This sqrt-scaling law follows from random walk and works remarkably well. These critical exponents contain a lot of actual knowledge about the climate, and one simply shouldn’t deny them just because he would like the world to be predictable. It doesn’t seem to be!
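The sqrt-scaling arithmetic above is just the textbook random-walk result, and a quick Monte Carlo reproduces it (Python/numpy assumed; one step per century with a unit typical jump, per the comment; the numbers are illustrative, not fitted to any climate record):

```python
import numpy as np

rng = np.random.default_rng(3)

# Sqrt scaling of a random walk: the typical (rms) displacement after N
# steps grows like sqrt(N).  One "step" here stands for one century with
# a ~1 deg C typical jump, as in the comment above.
walks = np.cumsum(rng.standard_normal((5000, 100)), axis=1)  # 5000 walks

for n in (1, 4, 9, 100):
    rms = np.sqrt(np.mean(walks[:, n - 1] ** 2))
    print(f"after {n:3d} centuries: rms jump ~ {rms:.2f} (sqrt(n) = {np.sqrt(n):.1f})")
```

The simulated rms jumps track sqrt(N) closely: roughly 1 after one century, 2 after four, 3 after nine, 10 after one hundred.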
Since you mention the Hurst Exponent:
http://www.itia.ntua.gr/en/docinfo/849/
However I would also note that the supposition that one can use HadCRUT for analysis is dangerous, given all the evidence that the trends have biases.
Basil,
Thanks for the explanation – I had thought people were reading more into what you were saying than you intended. It feels to me that what you say in your latest comment (08:15) about the sinusoidal components masking the overall trend is the closest we’ve come to a sane model here for a while. Your search for the causes of the cyclical components is fascinating and valuable – just as long as everyone realises that when they turn positive we will return to – and for a while exceed – the long term trend.
That is, of course, unless there is a very large, very long (100+ years) cycle we’re not aware of yet – but as Steve Fitzpatrick says above, if you attribute *everything* to cycles you have rather thrown the baby out with the bathwater – that is, unless you tear up the basic CO2/radiation physics.
TonyB: The only adjustment done on that graph is to bring everything to the same baseline – see http://www.woodfortrees.org/notes#baselines for details. To believe that GISS, UAH and RSS had all suddenly felt the need to adjust their data upwards in June/July is stretching the conspiracy theory a bit, I think 🙂
Actually, that graph rather illustrates Basil’s discussion here – it makes it very obvious that the random component of the temperature series is a lot bigger than any trends or cycles, at least in the short term. That’s why it’s rather pointless (but fun) to speculate about each new month’s value – although, as John Finn points out above, it does give some interesting pointers to some short-term heat distribution mechanisms.
Darn. The phrase I used earlier
“negative feedback with sufficiently long characteristic time will make time series indistinguishable from the temprature series for any time duration one cares to examine”
should read
“integrated negative feedback with sufficiently long characteristic time will make time series indistinguishable from the temperature series for any time duration one cares to examine”
The integrator is absolutely vital. What provides the integrator? The ocean, soil, albedo as determined with ice, snow, and plants, and so forth all provide some memory of distant past climate and are therefore “integrators”. The characteristic times are not known to me, but some processes involving albedo might be thousands of years.
By the way, I am new to this site, and I should thank everyone here, Basil in this case, who takes time to post these interesting guest papers, those who add erudite commentary, and the moderator who keeps it pretty civil. I really enjoy this site.
All,
More good questions and comments continue to come in. I will not be able to respond in any detail until later this evening (Central Daylight Time in the US).
Basil
Basil (08:15:04) :
Any extrapolation out for two decades, let alone for the next century (a la IPCC) involves what we call in my field “heroic assumptions.”
Except that the lunar influence can be calculated accurately for thousands of years and the solar influence is well-known for centuries and reasonably well-known also for thousands of years, so your input to the ‘signal’ is well-known; therefore your hindcast is not an extrapolation, but an application of known inputs.
Basil (07:39:33),
Many thanks for your courteous reply.
The Hurst phenomenon is a form of serial correlation – so when applying Hurst-type analysis, there is no need to account for the serial correlation separately. It is a form of serial correlation separate and distinct from that of Markovian models. The two are commonly referred to as “Short Term Persistence” (STP) for Markovian dependency, and “Long Term Persistence” (LTP) for Hurst dependency.
As the two are simply different forms of serial correlation, either method can be used to account for the serial correlation. The method chosen is the method applicable to the time series of interest.
Deciding which type of serial correlation should be used is a difficult question. Since STP with very long time constants look very much like LTP series, statistics alone cannot determine which model is more suitable, either requiring enormous lengths of data or a full understanding of the physics (as noted in Tom Vonk’s very good comment), neither of which are readily available. Prof. Koutsoyiannis provides examples of simple chaotic systems which exhibit LTP; this does not prove climate behaves in this way, but it does show that LTP can emerge from simple chaotic systems. (Of course, not all chaotic systems exhibit LTP, so we must keep an open mind).
Tools and estimators designed for STP time series do not always work when applied to LTP time series. Tools for LTP analysis are separate and commonly misunderstood. As suggested, I would strongly recommend reading some of Demetris Koutsoyiannis’ body of work, which does a very good job of explaining the concepts (and common misunderstandings) about LTP series.
You say that a Hurst exponent of 0.967 is a strong indicator of a trend. This is incorrect. A high Hurst exponent shows strong scaling behaviour, i.e. large low-frequency oscillations strongly visible in the time series. However, in the presence of LTP this is a stationary phenomenon and should not be confused as a deterministic trend that needs to be accounted for.
To me, the LTP/STP issue for climate helps to underline how little we know about climate, and how a simple, justifiable change of assumptions can render the 20th century warming statistically insignificant (e.g. Cohn and Lins “Naturally Trendy”)
Basil,
This article starts well but then goes seriously downhill when you start smoothing the data before applying your tests. You must never do this or your results are meaningless. Here’s a quote from the blog of statistician William Briggs:
“Now I’m going to tell you the great truth of time series analysis. Ready? Unless the data is measured with error, you never, ever, for no reason, under no threat, SMOOTH the series! And if for some bizarre reason you do smooth it, you absolutely on pain of death do NOT use the smoothed series as input for other analyses! “
I would say, instead, a levogyre walk.
Basil,
Are you aware of ARIMA processes? They’re a generalisation of random walks that have the same spurious trend properties but avoid the difficulties of unbounded wandering away from the mean. (Strictly speaking, a random walk has no mean.) The similarity has been noticed before.
Climate Audit discussed them at length, a long way back.
(http://www.climateaudit.org/?p=300 and others.)
Sorry, here’s a better one.
http://www.climateaudit.org/?p=332
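The bounded-wandering property that distinguishes such processes from a pure random walk shows up quickly in simulation; a sketch (Python/numpy assumed; phi = 0.98 is an arbitrary illustrative value for a near-unit-root AR(1)):

```python
import numpy as np

rng = np.random.default_rng(5)

# An AR(1) with phi just below 1 wanders like a random walk over short
# horizons but, unlike a true random walk, its variance stays bounded.
phi, n, reps = 0.98, 2000, 500
noise = rng.standard_normal((reps, n))
ar = np.zeros((reps, n))
for i in range(1, n):
    ar[:, i] = phi * ar[:, i - 1] + noise[:, i]
walk = np.cumsum(noise, axis=1)            # the same shocks, accumulated

print(f"var at t=2000: AR(1) ~ {ar[:, -1].var():.0f}, walk ~ {walk[:, -1].var():.0f}")
```

The random walk's variance grows without limit (proportionally to t), while the AR(1) variance saturates near 1/(1 - phi^2); over a window short relative to 1/(1 - phi), the two are hard to tell apart, which is the "spurious trend" similarity the comment refers to.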
I prefer to think of it as more like 1/f noise; the longer you wait, the bigger the transient events will be.
In any case, if one of my students fitted that straight line to that data he would fail my course. There’s no a priori reason that the function should be a straight line.
My eye sees some 30 year ramps; up and down, with some longer term up; but wait long enough and it will come down again.
Stef wrote: “Dr Sanchez, the articles you gave links for are all from different sources, and this latest article is a guest post by Basil Copeland. It might surprise you that different people actually have different opinions. WUWT publishes articles and information from many sources. You do realise that not every single article is penned by Mr. Watts don’t you?”
Yes, I do realize they all come from different sources. And I also realize that journalists around the world are using this website as a source to spread misinformation about the theory of global warming. And I also do realize that an editor of such a website is supposed to have basic knowledge of the theory, and not publish whatever information is convenient at the time because it agrees with the position they are trying to push forward, regardless of how much it contradicts previous information they gave. There is a concept called “responsible journalism” that Mr Watts ought to read up upon. Until then, all credibility from this website is thrown out the window.
As George E. Smith notes, you could still be looking at some form of colored noise in climate. It is a common misconception to equate “randomness” with gaussian, or white, noise (i.e. the system is completely unpredictable and each time step is governed by nothing more than a coin-toss).
Instead of constructing a difference time series of January to January, February to February and so on (I’m not sure exactly what that tells you), why not look at the month-to-month difference time series which would be a more standard analysis?
If the difference (or “jump”) time series is really gaussian, then a frequency spectrum of that difference time series should exhibit zero slope in log-log space (that is, the “magnitude” of the jump at any frequency should be independent of the frequency).
What you tend to find is that most natural phenomena exhibit some form of fractal or power-law scaling (at least over a certain range of scales) and it can be easily identified in their frequency spectrum. For example, take earthquakes – you tend to get a lot of small events and very few large events – the resulting power spectrum exhibits a positive straight-line slope in a log-log plot of Magnitude vs. Frequency – this is the famous “Gutenberg-Richter” relation where the slope indicates something about the scaling and recurrence of earthquakes in that part of the Earth. Take something closer to home – rainfall – you tend to get a lot of small events, and few large events (floods) – that is, power-law scaling. What this type of scaling implies is the system as a whole is far from being Gaussian (and therefore unpredictable), but exhibits structure and is instead, COMPLEX (in the chaos and complexity sense – see http://en.wikipedia.org/wiki/Complex_system), where it may offer some predictability over certain time-scales (e.g. a weather forecast).
A while back (here http://wattsupwiththat.com/2009/03/16/synchronized-chaos-and-climate-change/) Anthony posted a very interesting article which proposed that the climate system might act like synchronized chaos. If that is true (and I think it is), you should see power-law scaling in the frequency spectrum and not pure gaussian (white) noise.
I’d therefore be interested to see what a power spectrum of the straight month-to-month difference time series looks like.
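The spectral check proposed here is straightforward to sketch (Python/numpy assumed; a synthetic random walk stands in for the temperature series, so the flat result below is by construction – the interesting question is what the real data show):

```python
import numpy as np

rng = np.random.default_rng(9)

# If the month-to-month jumps were white (gaussian) noise, the power
# spectrum of the difference series would be flat in log-log space.
series = np.cumsum(rng.standard_normal(4096))    # a toy "temperature" walk
jumps = np.diff(series)                          # month-to-month differences

psd = np.abs(np.fft.rfft(jumps)) ** 2
freqs = np.fft.rfftfreq(len(jumps))
# Fit a slope in log-log space, skipping the zero-frequency bin.
slope, _ = np.polyfit(np.log(freqs[1:]), np.log(psd[1:]), 1)
print(f"log-log spectral slope: {slope:+.2f}")   # white noise -> near 0
```

A clearly nonzero slope on the real difference series would be the power-law signature described above; a slope near zero would support the white-jump picture.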
Fig. 2 looks like tree ring data to me. Only tree ring data has long mesas and grabens in its semi-random walk.
Those hockey sticks to the moon are unreal. Rural data sets don’t show hockey sticks.
1/f (flicker noise) is very common in real world “dirty” environments.
No-one knows why it happens: the lower the frequency, the greater the noise, with no lower frequency limit (counter-intuitive).
This item might interest some.
http://www.dsprelated.com/showarticle/40.php
Wait until it gets to “1/f noise has been observed in the strangest places- electronics, traffic density on freeways, the loudness of classical music, DNA coding, and many others.”
Try some searches.
How about “1/f model for long-time memory of the ocean surface temperature”
http://www.mi.uni-hamburg.de/fileadmin/files/forschung/theomet/docs/pdf/fraelukble04.pdf
Or “1/f (One-Over-F) Noise in Meteorology and Oceanography”
http://www.nslij-genetics.org/wli/1fnoise/1fnoise_meteo.html
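For anyone who wants to experiment with this, 1/f noise can be generated by shaping white noise in the frequency domain; a crude sketch (Python/numpy assumed; this is one of several standard constructions):

```python
import numpy as np

rng = np.random.default_rng(11)

# Crude 1/f ("flicker") noise generator: shape white noise in the
# frequency domain so power falls off as 1/f, then transform back.
n = 4096
white_f = np.fft.rfft(rng.standard_normal(n))
freqs = np.fft.rfftfreq(n)
freqs[0] = freqs[1]                      # avoid dividing by zero at DC
pink = np.fft.irfft(white_f / np.sqrt(freqs), n=n)

# Power now falls with frequency: low-frequency bins dominate.
psd = np.abs(np.fft.rfft(pink)) ** 2
print(psd[1:11].mean() > psd[-10:].mean())
```

Dividing the amplitude by sqrt(f) makes the power go as 1/f, so the slow components carry ever more energy, with no lower frequency limit other than the record length – exactly the counter-intuitive property described above.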
As a student of economics who studied complex modeling and its utter futility as a predictive mechanism, I applaud this post!
I’ve always felt that modeling the climate is about on par with modeling financial systems. And if we think climate modeling is overfunded, trust me: it’s a drop in the bucket compared to what the financial world throws into its analysis, and a monkey throwing darts at the WSJ still picks a better portfolio than the average mutual fund.
It’s like staring into the sun, our primate brains can’t handle the truth 😉
“Yes, I do realize they all come from different sources. And I also realize that journalists around the world are using this website as a source to spread misinformation about the theory of global warming. And I also do realize that an editor of such a website is supposed to have basic knowledge of the theory, and not publish whatever information is convenient at the time because it agrees with the position they are trying to push forward, regardless of how much it contradicts previous information they gave. There is a concept called “responsible journalism” that Mr Watts ought to read up upon. Until then, all credibility from this website is thrown out the window.”
===
Er. no.
See, Dr Sanchez, it is Mann, Steig, Hansen, and their very, very-well-paid international socialist comrades who are ignoring and ridiculing irresponsible results and who are NOT being open about the sources, methods, and even raw data. It is those to whom we’ve paid 70 billions in PUBLIC money who are irresponsible and who are displaying an amazing ignorance of basic science. (ie, “Don’t cook the books and manipulate the data and conceal your equations behind lies about security.”)
That is what this site does: presenting opposing and exploratory ideas – even those which do NOT agree with the 32,000 real scientists who publicly hold skeptical views about global warming.
A privately-supported web blogger is not supposed to be able to find out that your vaunted Mann graphs are totally false -> thereby showing your much-publicized IPCC reports for the past 15 years to be false. But they (M&M) did that. So who are the “responsible” ones? If a private blogger is actually allowing and encouraging open discussions, who is the “responsible” one – he who promotes discussion and finds real data, or those in academia who conceal it and present lies?
(By the way, when Mann, Hansen, Gore, and their European counterparts present and publicize THEIR lies to manipulate public opinion and public policies threatening the lives of billions, threatening the world with extremist policies that will kill hundreds of millions, what members of the “press” have actually come “here” for their data? How many tens of thousands of lies are being promoted by the AGW ecotheists in their “religion” that are sourced by your missing “press” who NEVER read or quote ANY opposing views?)
You are, politely put, also promoting that same propaganda. You are, therefore, also responsible for those deaths. Sleep well with those thoughts.
Re: PaulM (09:58:26)
I can’t let that comment pass. You are propagating the myths that exist about smoothing, which has LEGITIMATE purposes. Some people (including top statisticians I know) have simply not thought about it carefully. I encourage those ‘opposed to smoothing’ to pause for a second to ponder why climatologists have adopted the convention of using anomalies. Spatiotemporal heterogeneity exists. Time-integration is not ‘bad’, but one should assess how parameter estimates vary with scale (as opposed to cherry-picking select scales without providing context for an audience).
George E. Smith (13:53:23) :
“In any case, if one of my students fitted that straight line to that data he would fail my course. There’s no a priori reason that the function should be a straight line.”
===
Yes, there IS a reason for that curve to be straight: Hansen has declared ALL global warming is directly proportional to CO2 levels and CO2 levels have been going up.
Therefore, the “best fit curve” through ANY set of data points IS Hansen’s straight line.
End of discussion. As Dr Sanchez has declared, “The science is settled.”
Basil, thanks for drawing my attention to this:
Carvalho, L.M.V.; Tsonis, A.A.; Jones, C.; Rocha, H.R.; & Polito, P.S. (2007). Anti-persistence in the global temperature anomaly field. Nonlinear Processes in Geophysics 14, 723-733.
http://www.uwm.edu/~aatsonis/npg-14-723-2007.pdf
“[…] significant power exists in the 4-7 years band corresponding to ENSO. Such features, however, are broadband features and do not represent periodic signals; they are the result of nonlinear dynamics (e.g., Eccles and Tziperman, 2004). As such they should not be removed from the records.”
This seems consistent with W.W. Hsieh’s observation that post-1950 NH El Nino response has been nonlinear. This gives cause to review carefully the appropriateness of COWL signal removal (cold oceans – warm land).
I tracked this down:
Eccles, F.; & Tziperman, E. (2004). Nonlinear effects on ENSO’s period. Journal of Atmospheric Science 61, 474-482.
http://www.seas.harvard.edu/climate/eli/reprints/Eccles-Tziperman-2004.pdf
Dr. Sanchez, please note that your views are as welcome here as any. We run a fairly open discussion, here. Compared with pro-CO2/AGW websites, we run a very open shop. I have approved your posts as have other moderators.
Views here are very, very varied. Personally, I am a “lukewarmer”. I think CO2 has had an effect, but seriously doubt the positive feedback assertions, and I question the adjustment procedures of GISS and NOAA. And I deplore the reluctance of many on the severe warming side to disclose data, methods, code, etc. At this point, I tend to be more of a “sea witch” than a “sun worshiper”.
In short, I think there has been some warming, but it has been overestimated and I doubt that 21st century warming will be several times that of the 20th century, as the IPCC estimates.
Many here disagree with this pov (from either direction), but at least we can discuss our views: it takes considerable talent to be repeatedly deleted and even more to be banned entirely.
So feel free to hang out and take part in the debate. Try to be (reasonably) polite and respectful and you are welcome here.
re Ken (06:17:38) :
The “random walk” jargon, by the way, hails to a classic book by Burton Malkiel, “A Random Walk Down Wall Street” 1st published in 1973.
It is an excellent book, but not the originator of the term. I don’t claim to know the first use, but William Feller’s excellent text of 1950 has a chapter on the topic.
A few more thoughts:
1. Basil is doing the right thing in differencing the series before applying the test.
2. The Hurst exponent can only be estimated. When I repeat his calculation (using different software) I get slightly lower numbers, about 0.41 for fig 2, rising to about 0.44 if I throw out the earlier data.
3. If I do a month-to-month difference, as some people have suggested, instead of the seasonal difference, I get a much smaller number, around 0.1 – I don’t understand why this is!
4. The figure of 0.835 for the smoothed series is meaningless. It is purely a consequence of the smoothing. The more you smooth, the bigger this number will be. You can generate as many bogus ‘signals’ as you like by smoothing in different ways. It is instructive to do this with data from a random number generator. Leif is right, the analysis should be performed on the raw data. Reading the comments more carefully I see that Spence_uk made this point too.
5. Essential reading about smoothing: http://wmbriggs.com/blog/?p=195
6. realitycheck – the spectrum of the difference is fairly flat. In fact it increases slightly with frequency. What would that indicate?
7. Dr Sanchez, you are so amazingly wrong I hardly know where to start. The point is that there are many possible ‘explanations’ of the temperature record that are discussed, explored and criticised here. We don’t know which is correct. The misinformation spread by most journalists is that there is only one. The duty of a responsible journalist is to question and challenge things.
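Point 4 is easy to demonstrate for oneself: smooth pure white noise and it acquires strong apparent persistence that was never in the data. A sketch (Python/numpy assumed; lag-1 autocorrelation is used here as a simple stand-in for a full Hurst estimate):

```python
import numpy as np

rng = np.random.default_rng(13)

# Smooth pure white noise and it acquires strong "persistence" -- an
# artifact of the filter, not a signal in the data.
noise = rng.standard_normal(2000)
kernel = np.ones(25) / 25                     # simple 25-point moving average
smooth = np.convolve(noise, kernel, mode="valid")

def lag1_autocorr(x):
    x = x - x.mean()
    return (x[:-1] * x[1:]).sum() / (x * x).sum()

print(f"raw noise:  {lag1_autocorr(noise):+.2f}")   # near 0
print(f"smoothed:   {lag1_autocorr(smooth):+.2f}")  # strongly positive
```

The wider the moving average, the stronger the induced correlation, which is why any statistic of persistence computed on a smoothed series mostly measures the smoother, not the data.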
As Flanagan points out, by differencing first, you are making any ‘linear trend’ into a constant offset
– and then analysing the result to show that we have a ‘random walk’ about a ‘constant offset’ (which is what the linear trend becomes after differencing)
– so in effect, you are saying we have a ‘random walk’ about a ‘linear trend’
– so, what is your point, exactly?