Guest essay by Jeffery S. Patterson
My last post on WUWT demonstrated a detection technique that allows us to de-noise the climate data and extract the various natural modes which dominate the decadal scale variation in temperature. In a follow-up post on my blog, I extend the analysis back to 1850 and show why, to first-order, the detection method used is insensitive to amplitude variations in the primary mode. The result is reproduced here as figure 1.
Figure 1a – First-difference of primary mode
Figure 1b – De-trended first-difference of primary mode
We see from Figure 1b that once de-trended, the slope of the primary mode has remained bounded within a range of ±1.2 °C/century over the entire 163-year record.
The linear trend in slope evident in Figure 1a implies a parabolic temperature trend. The IPCC makes oblique reference to this in the recently released AR5 Summary for Policymakers:
“Each of the last three decades has been successively warmer at the Earth’s surface than any preceding decade since 1850 (see Figure SPM.1). In the Northern Hemisphere, 1983–2012 was likely the warmest 30-year period of the last 1400 years (medium confidence).”
True enough, but that has been true since at least the mid-1800s. The implication of the IPCC’s ominous statement is that anthropogenic effects on the climate have been present since that early time. Let’s examine that hypothesis.
Up to this point I have been using de-trended data in the singular spectrum analysis (SSA) because de-trending helps to isolate the oscillatory modes of the climate system from the low-frequency trend. We are now interested in the characteristics of the trend itself. Figure 2 shows the SSA trend extracted from the raw Hadcrut4 northern hemisphere data.
Figure 2 – SSA[L=82,k = 1,2] on Hadcrut4
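For readers who want to experiment, the basic SSA machinery is easy to sketch. The Python below is a generic textbook implementation, not the author's code; the window length L=82 and the 1-based component indices k=1,2 follow the caption above, and the synthetic series is my own stand-in for the HadCRUT4 data:

```python
import numpy as np

def ssa_components(x, L):
    """Basic singular spectrum analysis: embed the series in a Hankel
    trajectory matrix, take its SVD, and map each rank-one component back
    to a time series by diagonal (Hankel) averaging."""
    N = len(x)
    K = N - L + 1
    # Trajectory matrix, shape (K, L): entry (r, j) is x[r + j]
    X = np.column_stack([x[j:j + K] for j in range(L)])
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    comps = []
    for k in range(len(s)):
        Xk = s[k] * np.outer(U[:, k], Vt[k])   # k-th rank-one piece
        out = np.zeros(N)
        cnt = np.zeros(N)
        for j in range(L):                     # average each anti-diagonal
            out[j:j + K] += Xk[:, j]
            cnt[j:j + K] += 1
        comps.append(out / cnt)
    return comps

# Synthetic stand-in: parabolic trend + ~60-yr oscillation + noise
rng = np.random.default_rng(0)
t = np.arange(163)
x = 1e-4 * (t - 80) ** 2 + 0.1 * np.sin(2 * np.pi * t / 62) \
    + 0.02 * rng.standard_normal(163)
comps = ssa_components(x, L=82)
trend = comps[0] + comps[1]   # k = 1,2 in the post's 1-based notation
```

Summing the first two components recovers the smooth trend; the remaining components carry the oscillatory modes and noise.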
We see the data oscillates about the extracted trend with approximately equal peak-to-peak amplitude until about the year 2000. More about this departure later. The really interesting characteristic of the trend is revealed when we look at the first difference (the time derivative of the red curve of figure 2), shown in figure 3.
Figure 3 – First difference of extracted trend
Any engineer will instantly recognize this shape as the step response of a slightly under-damped second-order system, as described by equation 1:

f(t) = b + a·u(t−τ)·[1 − e^(−ζω(t−τ))·(cos(ω_d(t−τ)) + (ζ/√(1−ζ²))·sin(ω_d(t−τ)))],  where ω_d = ω√(1−ζ²)   (1)

where a is the step size, b the offset, ω the natural frequency, ζ the damping factor and τ the time at which the input step occurs. u(t) is the unit step function, which is zero when its argument is negative and unity elsewhere.
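For concreteness, this standard under-damped second-order step response can be written directly in Python. This is a minimal sketch using the five parameters named in the text, not the author's code:

```python
import numpy as np

def step_response(t, a, b, w, z, tau):
    """Step response of an under-damped (0 < z < 1) second-order system:
    step size a, offset b, natural frequency w, damping factor z, and
    input step applied at time tau."""
    s = np.asarray(t, dtype=float) - tau
    u = (s >= 0).astype(float)                      # unit step u(t - tau)
    wd = w * np.sqrt(1.0 - z * z)                   # damped natural frequency
    decay = np.exp(-z * w * np.where(s > 0, s, 0.0))
    osc = np.cos(wd * s) + (z / np.sqrt(1.0 - z * z)) * np.sin(wd * s)
    return b + a * u * (1.0 - decay * osc)
```

A least-squares fit of these five parameters to the curve of figure 3 (e.g. with scipy.optimize.curve_fit) is the kind of parametric fit figure 4 shows.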
A parametric fit of (1) to the data of figure 3 is shown in figure 4.
Figure 4 – Parametric fit of (1) versus data
I know what you are thinking. That fit is too perfect to be true. It must be an internal response of the SSA filter. We can test that hypothesis by integrating equation (1) and comparing it to the unfiltered data.
Figure 5 – Indefinite integral of (1) versus data
We see the resulting integral fits the unfiltered data, with the residual exhibiting the same oscillatory behavior as before. The integral of (1) yields eqn. 2 below:

F(t) = b·t + a·u(t−τ)·[(t−τ) − 2ζ/ω + e^(−ζω(t−τ))·((2ζ/ω)·cos(ω_d(t−τ)) + ((2ζ²−1)/ω_d)·sin(ω_d(t−τ)))] + C,  where ω_d = ω√(1−ζ²)   (2)
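As a sanity check on the integration step, the sketch below (my own illustration, not the author's code) compares a numerical cumulative integral of the second-order step response against one closed-form antiderivative:

```python
import numpy as np

def f(t, a, b, w, z, tau):
    """Offset step response of an under-damped second-order system."""
    s = t - tau
    u = (s >= 0).astype(float)
    wd = w * np.sqrt(1 - z * z)
    env = np.exp(-z * w * np.where(s > 0, s, 0.0))
    return b + a * u * (1 - env * (np.cos(wd * s)
                                   + z / np.sqrt(1 - z * z) * np.sin(wd * s)))

def F(t, a, b, w, z, tau):
    """Closed-form antiderivative of f, with the integration constant
    chosen so that F(0) = 0 when tau > 0."""
    s = t - tau
    u = (s >= 0).astype(float)
    wd = w * np.sqrt(1 - z * z)
    env = np.exp(-z * w * np.where(s > 0, s, 0.0))
    trans = env * (2 * z / w * np.cos(wd * s)
                   + (2 * z * z - 1) / wd * np.sin(wd * s))
    return b * t + a * u * (s - 2 * z / w + trans)

# Compare against a cumulative trapezoid integral on a fine grid
a, b, w, z, tau = 1.0, 0.1, 0.5, 0.3, 5.0
t = np.linspace(0.0, 60.0, 60001)
dt = t[1] - t[0]
y = f(t, a, b, w, z, tau)
numeric = np.concatenate(([0.0], np.cumsum(0.5 * (y[1:] + y[:-1]) * dt)))
```

The numerical and closed-form curves agree to within the trapezoid-rule error, so the integral of the step response is what gets fitted against the unfiltered data in figure 5.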
I know what you’re thinking. We’ve said all along that the AGW signature would show up as a step in the slope of the de-noised temperature data, precisely what we see in figure 4. Is this the AGW smoking gun? If we plot figure 3 and the raw data on the same graph we see the real smoking gun.
Figure 6 – First-difference of extracted trend versus data
Around the year 1878, a dramatic shift in the climate occurred, coincident with and perhaps triggered by an impulsive spike in temperature. As a result, the climate moved from a cooling phase of about −0.7 °C/century to a warming phase of about +0.5 °C/century, which has remained constant to the present. We see that this period was coincident with a large spike in solar activity, as shown in figure 7.
Figure 7 – Solanki et al, Nature 2004, Figure 2. Comparison between directly measured sunspot number (SN) and SN reconstructed from different cosmogenic isotopes. Plotted are SN reconstructed from Δ14C (blue) and the 10-year averaged group sunspot number (GSN, red)
Virtually all of the climate of the last century and a half is explained by equation (2) and the primary 60+ year mode extracted earlier as shown in figure 8b.
Figure 8 – Primary mode SSA[L=82,k=3,5] vs. residual from eqn.(2) (left) Fig. 8b – eqn. (2) + primary mode vs. hadcrut4
As others have observed, this 60+ year mode plotted in figure 8a is highly correlated to solar irradiance.
Figure 9 – This image was created by Robert A. Rohde from the data sources listed below
1. Irradiance: http://www.pmodwrc.ch/pmod.php?topic=tsi/composite/SolarConstant
2. International sunspot number: http://www.ngdc.noaa.gov/stp/SOLAR/ftpsunspotnumber.html
3. Flare index: http://www.koeri.boun.edu.tr/astronomy/readme.html
4. 10.7cm radio flux: http://www.drao-ofr.hia-iha.nrc-cnrc.gc.ca/icarus/www/sol_home.shtml
Note that the reconstruction due to Solanki et al shown in figure 7 disagrees with figure 9 in terms of present-day solar activity. The temperature record clearly tracks Solanki, but I’ll leave that controversy to others.
The residual from Figure 8b, shown in Figure 10, shows no trend or other signs of anthropogenic effects.
Figure 10a – Residual from eqn. (2) + primary mode Figure 10b – Smoothed histogram of residual
A similar analysis was done on the sea-surface temperature record. The results are shown in Figure 11:
Figure 11 – SST (red) vs. Hadcrut4 (blue)
We see the land temperatures follow the ocean surface temperature with a 4-5 year lag.
Conclusion
The climate record of the past 163 years is well explained as the integral second-order response to a triggering event that occurred in the mid-to-late 1870s, plus an oscillatory mode regulated by solar irradiance. There is no evidence in the temperature records analyzed here supporting the hypothesis that mankind has had a measurable effect on the global climate.
Related articles
- Detecting the AGW Needle in the SST Haystack (wattsupwiththat.com)
The above points out the large degree of uncertainty that exists about how variable the sun may or may not be.
Jeff: “But even if that shift isn’t related to the coincident impulsive event and was instead due completely to anthropogenic CO2 (that has doubled in concentration every 29 years since), the sensitivity to that forcing must be exceedingly small. ”
Ahh, I’ve just got what you meant. I left a comment on your blog but didn’t realise where the mistake was. It’s no good saying human emissions have doubled in x years and thinking that provides the doubling time while ignoring what was already in the atmosphere.
It’s the latter that affects climate, not raw emissions. That has not even doubled once yet since pre-industrial. Expected to be double circa 2050-2060.
Also suggesting that CO2 may also be responsible for stopping the late 19th c. cooling rather assumes that cooling is what climate would be doing without AGW. That is not representative of the state of ‘natural’ climate, it was just inter-decadal variation.
If you put that on the back of AGW you’ll probably double sensitivity in the process.
Before Hadley got hold of the data and started playing with their buckets, the 19th c. had larger variability than the 20th c. They just cut it in half.
In fact variability in 20th c. was less. One could suggest that is the blanket effect of CO2, but not at the same time as playing the ‘unprecedented’ game.
Those engaged in the solar wars here (as well as everyone else) may be interested in this update. I’ve convolved the system response described by equation 2 above with the daily sunspot number (taken here as the input to the climate system described by eqn 2) and reproduced the SST anomaly from 1850 until 1980 (at which point the system kernel edge effects begin to dominate). Pretty interesting.
This entire series of posts relies upon connecting three naive premises:
1. HadCRUT4 provides genuine indication of “global average temperature.”
2. Orthogonal decomposition of the time series “denoises” the signal, which
is assumed to be simply a sum of periodic components and a linear trend.
3. There are “natural modes” of temperature variation, akin to those of a
second-order dynamic system, revealed by such decomposition.
The author seems unaware that there were no time-series of SST recorded at
any fixed station at sea before the advent of weather ships in the middle
of the last century. The various SST time series that extend further back
in time are manufactured by using the low-order principal components of
extremely sparse and temporally scattered observations made by ships of
opportunity, using very different sampling equipment. Moreover the land
station records throughout most of the continents are heavily corrupted by
UHI.
Empirical orthogonal decompositions, which underlie SSA, are far
from unique and merely decompose the available record into mutually
orthogonal mathematical components that have no particular physical
meaning. Lacking any reliable determination of the actual signal, there is
no analytic criterion for distinguishing it from “noise.” What can be said
on the basis of the (first-order) differential equations governing
thermodynamics is that any periodic temperature signal requires a periodic
driver. Once the annual and diurnal cycles are suppressed in decimating the
data into yearly average values, there is no credible periodic driver to
justify the 2nd premise.
Finally, the appeal to “what every engineer will recognize as the response
of second-order system to a step” is an appeal to scientific ignorance.
In situ temperature variations lack the “natural modes” of response of
second-order systems incorporating a restoring force; the underlying heat
has to be dispersed or dissipated. Meanwhile the power spectra of GISP2
isotope data reveal wide-band multi-CENTENNIAL variations ~3 times more
powerful than the fairly narrow-band ~60yr oscillations. It is these
oscillations that most likely produce what is mistakenly taken to be a
“trend.”
@1sky1 says: October 7, 2013 at 5:28 pm
If you are convinced adaptive filters, of which SSA is a class, cannot dramatically improve the signal-to-noise ratio, you should throw your cell phone away – it doesn’t work. And if the climate response is not well modeled by the equation derived above, what is your explanation for the fact demonstrated here, that the SST record equals the SSN convolved with eqn. (2)?
Hi Jeff, I had not seen your SSN convolution on first reading there. It starts to deviate around 1950, which is when the Hadley Centre’s speculative adjustments start to phase in a 0.5 deg C cooling. Whatever way you try to model climate this down step is a problem.
If you model the later period (like IPCC model calibration concentrates on) then your model fails badly pre-WWII. This is shown in Bob Tisdale’s latest article:
http://bobtisdale.files.wordpress.com/2013/10/figure-22.png?w=640&h=441
If you try to fit harmonic functions, like Scafetta and others, this dip can never be made to fit properly.
My examination of the frequency spectrum showed this “correction” made significant changes to the spectrum between the earlier and later periods, which are very similar without the step.
Now your analysis with a 2nd order function also trips on this down step.
I’m starting to think that Folland’s Folly and the Hadley Centre’s insistence on maintaining this “correction” in one form or another for the last 30 years may be one of the biggest barriers to our getting any understanding of long-term climate change.
Greg says: October 8, 2013 at 12:11 am
It starts to deviate around 1950…
To my eye it deviates abruptly at sample 125 which is 1975 (t=0 is 1850). The kernel used in the circular convolution is eqn 2a, time shifted to place the step in eq1 at t=0 and discretized to 70 samples to capture the majority of the transient behavior. Using circular convolution removes the causality requirement- we can start at “negative time” by rotating the kernel which just time shifts the output. I’ve chosen this lag to align the output with the steep rise in temperature between 1910 and 1940, effectively removing the kernel delay so as to make the correlation obvious to the eye. This rotation moves all of the filter’s edge effects to the end (in essence we’re using the time from 1975 to 2012 to pre-charge the filter, which is why the step transient starts in 1975). All of the data after that is bogus.
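The rotation trick described here can be sketched as follows. This is a generic illustration of circular convolution with a rotated kernel, using a toy decaying-oscillation kernel standing in for the 70-sample discretization of eqn. (2):

```python
import numpy as np

def circular_convolve(x, kernel, rotate=0):
    """Circular convolution via the FFT. The kernel is zero-padded to the
    record length and then rotated left by `rotate` samples: this advances
    the output in time, moving the filter's wrap-around edge effects to the
    end of the record instead of the beginning."""
    k = np.zeros(len(x))
    k[:len(kernel)] = kernel
    k = np.roll(k, -rotate)              # rotation removes the kernel delay
    return np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(k)))

# Toy example: a 70-sample decaying-oscillation kernel driven by an impulse
n = np.arange(70)
kernel = np.exp(-0.05 * n) * np.sin(0.3 * n)
drive = np.zeros(163)                    # 163 "years" of input
drive[0] = 1.0                           # unit impulse at t = 0
out = circular_convolve(drive, kernel, rotate=25)
```

Driving the filter with a unit impulse simply reproduces the kernel, shifted 25 samples early, which makes the effect of the rotation easy to verify by eye.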
1sky1: “The various SST time series that extend further back
in time are manufactured by using the low-order principal components of
extremely sparse and temporally scattered observations made by ships of
opportunity, using very different sampling equipment. ”
Thank you. I had not been able to find any kind of detail on the ICOADS processing. Do you have a link to some documentation on that?
@Greg
I’ve added a standard zero-padded convolution (puts filter effects at the beginning) compared to your corrected data on my blog. It produces both the early 20th century dip and matches current warming quite well. Very interesting.
Jeff Patterson;
Your conceptualizations are patently derived from work with man-made systems, which scarcely provide guidance to what is happening in the geophysical setting. SSA relies entirely upon empirical orthogonal function decomposition of a data series. Since all of the decomposed “modes” are mutually orthogonal, there is no solid analytic basis for declaring higher-order ones to be “noise.” That is a far cry from how adaptive filters operate in cases where something is known of the spectral composition of both signal and noise.
If you apply Kalman filters to any geophysical time-series you’ll find that, contrary to your presumption, the predictive power of the lower-order modes is lackluster, at best. It’s a rookie mistake to assume that the lower-order modes can be reliably extrapolated. Had you followed up on my previous suggestion to split HadCRUT4 into three 54-yr segments, you could have seen this for yourself. As it stands, you fall into the common trap of believing that you have discovered something intrinsic about the physical system, as opposed to merely a mathematical curve-fitting “explanation.” Even your modest “trend” of 0.05 K/decade would imply an ice age at the time of the MWP.
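The split-sample test suggested here is easy to try. The sketch below is a generic illustration, not either party's actual analysis, and it uses a simple polynomial "trend" in place of SSA: fit a low-order trend on part of a synthetic series that contains only a multidecadal oscillation, then extrapolate it forward:

```python
import numpy as np

# Synthetic "temperature": a pure ~60-yr oscillation of amplitude 0.2 degC,
# with no true long-term trend at all
t = np.arange(160, dtype=float)
x = 0.2 * np.sin(2 * np.pi * t / 60.0)

# Fit a quadratic "trend" on the first 40 years only...
coef = np.polyfit(t[:40], x[:40], deg=2)
in_rmse = np.sqrt(np.mean((np.polyval(coef, t[:40]) - x[:40]) ** 2))

# ...then extrapolate it over the next 40 years
out_rmse = np.sqrt(np.mean((np.polyval(coef, t[40:80]) - x[40:80]) ** 2))
```

Even on a noise-free record the in-sample fit looks convincing while the extrapolation fails badly, which is exactly the hazard the split-sample test is designed to expose.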
Greg Goodman:
Can’t help you with particulars of ICOADS processing, but insofar as the manufacture of other SST time-series is concerned, they rely upon the standard technique of PC analysis. I’ve seen a few technical reports on this many years ago, but can’t locate one on the web.
1sky1 says:
Jeff Patterson;
Your conceptualizations are patently derived from work with man-made systems, which scarcely provide guidance to what is happening in the geophysical setting.
===
There is no reason that the basics of systems analysis should not be applied to natural systems as well as man-made ones. In fact that is exactly what has been missing in the last 30 years of climatology.
Most work seems stuck at the high-school level of fitting linear trends to everything, and we think we are getting really clever if we try a linear relaxation model akin to a trivial RC circuit.
Running means seem to be de rigueur as far as frequency filtering goes, and multivariate (linear) regression is deemed sufficient to prove (or disprove) the influence of a driving force.
That Jeff’s conceptualizations are “patently derived from work with man-made systems” would seem to be a step in the right direction. Such ‘man made systems’ got several teams to the moon and back with less computer power than is available in an iPhone.
If climate scientists tried to apply some of the centuries worth of engineering experience we have rather than splashing around in the kiddies pool and making it up as they go along we may not have had to deal with Mann’s hockey stick and woefully inadequate models.
1sky1 says:
Greg Goodman:
Can’t help you with particulars of ICOADS processing, but insofar as the manufacture of other SST time-series is concerned, they rely upon the standard technique of PC analysis. I’ve seen a few technical reports on this many years ago, but can’t locate one on the web.
===
The problem is not with ICOADS processing but what Hadley does with it to make hadSST3 and derivatives. That is not PC as in principal components but more like PC as in politically correct.
The processing is outlined in Brohan 2006 and seems based on applying distorting running-mean filters in a loop until the result “converges”, as well as some rather speculative “corrections”.
That has come to be regarded as the ‘gold standard’ of surface temperature records.