Guest Post by Willis Eschenbach
There’s a new paper published in Nature’s Scientific Reports called “Identification of the driving forces of climate change using the longest instrumental temperature record”, by Geli Wang et al., hereinafter Wang2017.
By “the longest instrumental temperature record” they mean the Central England Temperature, commonly called the “CET”. Unfortunately, the CET is a spliced, averaged, and adjusted temperature record. Not only that, but the underlying datasets from which it is constructed have changed over time. Here are some details from the study by Parker et al.:
Between 1772 and 1876 the daily series is based on a sequence of single stations whose variance has been reduced to counter the artificial increase that results from sampling single locations. For subsequent years, the series has been produced from combinations of as few stations as can reliably represent central England in the manner defined by Manley. We have used the daily series to update Manley’s published monthly series in a consistent way.
We have evaluated recent urban warming influences at the chosen stations by comparison with nearby rural stations, and have corrected the series from 1974 onwards.
According to the Parker paper, no fewer than 14 different and distinct datasets were used to construct the CET.
Perhaps predictably, the authors of Wang2017 completely fail to mention any of this … instead, they simply say:
As the world’s longest instrumental record of temperature, the Met Office Hadley Centre’s CET time series represents the monthly mean surface air temperature averaged over the English midlands and spans the period January 1659 to December 2013.
Well … no, not really. And more to the point, using such a spliced, averaged, and adjusted dataset for an analysis of the underlying “driving forces” is totally inappropriate.
Now, in the Wang2017 analysis, they claim to find a couple of “driving forces” of the CET. Of these they say:
The peak L1 = 3.36 years seems to empirically correspond to the El Niño-Southern Oscillation (ENSO) signal, which has a period range of within 3 to 6 years. ENSO is arguably the most important global climate pattern and the dominant mode of climate variability13. The effect of ENSO on climate in Europe has been studied intensively using both models and observational or proxy data e.g. refs 14, 15, and a consistent and statistically significant ENSO signal on the European climate has been found e.g. refs 14 and 16.
The peak L4 = 22.6 years is coincident with the Hale sunspot cycle.
Let me start by saying that a claim that something “seems to empirically correspond” with something else is not a falsifiable claim … and that means it is not a scientific claim in any sense. And the same is true for a claim that something “is coincident with” something else. The use of such terms is scientific doublespeak, bafflegab of the highest order.
Setting that aside, here’s what the CET actually looks like:

Now, there is a commonly used way to determine whether two datasets are related: cross-correlation analysis. This shows more than just the overall correlation of the two datasets; it shows the correlation at various lag and lead times. Here is the cross-correlation analysis of the Central England Temperature and the El Niño data:

Does El Niño affect the temperature in Central England? Well, perhaps, with a half-year lag or so. But it’s a very, very weak effect.
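For anyone who wants to check this at home, here is a minimal sketch of a lagged cross-correlation in Python. The cross_correlation helper is my own illustration, not code from the paper or the post, and the two series below are random placeholders standing in for the actual monthly CET and NINO3.4 anomalies, which you would need to download separately:

```python
import numpy as np

def cross_correlation(x, y, max_lag):
    """Pearson correlation of x against y at each lag.
    A positive lag means y leads x by that many steps."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    out = {}
    for lag in range(-max_lag, max_lag + 1):
        if lag > 0:
            a, b = x[lag:], y[:-lag]
        elif lag < 0:
            a, b = x[:lag], y[-lag:]
        else:
            a, b = x, y
        # np.corrcoef subtracts the means internally, so each value
        # is a proper Pearson correlation for that lag.
        out[lag] = np.corrcoef(a, b)[0, 1]
    return out

# Random placeholders; substitute real monthly CET and NINO3.4 anomalies.
rng = np.random.default_rng(0)
cet, nino = rng.normal(size=600), rng.normal(size=600)

ccf = cross_correlation(cet, nino, max_lag=24)
best = max(ccf, key=lambda k: abs(ccf[k]))
print(f"largest |r| = {ccf[best]:.3f} at lag {best} months")
```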
Then we have their claim about the relationship of the CET with sunspots, wherein they make the claim that a 22.6-year signal is “coincident with” the sun’s Hale Cycle. “Coincident with” … sheesh …
Now, the “Hale Cycle” reflects the fact that around the time of the sunspot maximum, the magnetic polarity of the sun reverses. As a result, the Hale Cycle is the length of time from any given sunspot peak to the peak of the second sunspot cycle following the given peak.
And how long is the Hale Cycle? Well, here’s a histogram of the different lengths, from NASA data …

So … is a signal with a 22.6-year cycle “coincident with” the length of the Hale Cycle? Well, sure … but the same is true of any cycle length from 17 to 28 years. Color me totally unimpressed.
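As a rough illustration of just how forgiving that “coincidence” is, here is a sketch that computes Hale-cycle lengths (peak to second-following peak) from approximate dates of 20th-century solar maxima. The dates below are round figures for illustration only, not the NASA data behind the histogram above:

```python
# Approximate, rounded years of 20th-century solar maxima (illustration only).
maxima = [1906, 1917, 1928, 1937, 1947, 1958, 1968, 1979, 1989, 2000, 2014]

# A Hale cycle runs from one sunspot peak to the second-following peak,
# i.e. it spans two consecutive Schwabe cycles.
hale_lengths = [later - earlier for earlier, later in zip(maxima, maxima[2:])]

print(hale_lengths)                # [22, 20, 19, 21, 21, 21, 21, 21, 25]
print(min(hale_lengths), "to", max(hale_lengths), "years")
```

Even with round-number dates the spread is already 19 to 25 years, which is the point: “coincident with the Hale cycle” is a very loose target.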
Next, do the sunspots actually affect the temperature of Central England? Again, the cross-correlation function comes into play:

Basic answer is … well … no. Cross-correlation shows no evidence that the sunspots have any significant effect on the CET.
Finally, what kinds of signals do show up in the CET data? To answer this question, I use the Complete Ensemble Empirical Mode Decomposition (CEEMD) method, as discussed here. Below, I show the CEEMD decomposition of the monthly CET data. The upper graph (blue) shows the different empirical mode cycles (C1 to C7) which, when added together along with the residual, give us the raw data shown in the top panel.
The lower graph (red) shows the periodogram of each of those same empirical mode cycles.


Of all of these empirical modes, the strongest signal is at about 15 years (C4, lower red graph). There is a signal at about 24 years (C5, lower red graph), but it is much weaker, less than half the strength. In the corresponding mode C5 in the upper blue graph we can see why—sometimes we can see a cycle in the 25 to 30-year range, but it fades in and out.
To me, this is one big advantage of the CEEMD method—it shows not only the strength of the various cycle lengths, but also just where in the entire timespan of the dataset the cycles are strong, weak, or non-existent. This lets us see whether we are looking at a real persistent cycle which is visible from the start to the finish of the data, or whether it is a pseudo-cycle which fades in and out of existence.
Finally, is there any evidence of anthropogenic global warming in the CET data? To answer this, look at the residual signal in the bottom panel of the blue graph above. This is what remains after all of the underlying cyclical waves have been removed … and looking at that, it seems there is no unusual recent warming of any kind.
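For anyone who wants to experiment with this kind of decomposition, here is a minimal sketch using the freely available PyEMD package’s CEEMDAN implementation. To be clear, that package is my suggestion of one tool that does the job, not necessarily what was used for the graphs above, and the series below is a made-up placeholder standing in for the monthly CET data:

```python
import numpy as np
from PyEMD import CEEMDAN          # pip install EMD-signal
from scipy.signal import periodogram

# Placeholder standing in for the monthly CET record: a 15-year cycle
# buried in noise, 100 years of monthly values.
rng = np.random.default_rng(1)
t = np.arange(12 * 100) / 12.0
signal = np.sin(2 * np.pi * t / 15.0) + 0.4 * rng.normal(size=t.size)

ceemdan = CEEMDAN(trials=50)       # noise-assisted ensemble of 50 runs
imfs = ceemdan(signal)             # rows are the empirical modes C1, C2, ...
residual = signal - imfs.sum(axis=0)

# Periodogram of each mode, with frequency converted to a period in years.
for i, mode in enumerate(imfs, start=1):
    freqs, power = periodogram(mode, fs=12.0)   # 12 samples per year
    peak = freqs[np.argmax(power[1:]) + 1]      # skip the zero frequency
    print(f"C{i}: dominant period ~ {1.0 / peak:.1f} years")
```

The trials parameter sets how many noise realizations the ensemble averages over; more trials give smoother modes at the cost of run time.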
My conclusion? If you use enough different statistical methods, as Wang2017 has done, you can dig out even the most trivial underlying cycles of a dataset … but the reality is, when you decompose even the most random of signals, it will show peaks in the underlying cycles. It has to—as Joe Fourier showed, every signal can be decomposed into underlying simpler waves. However, this does not mean that those underlying simpler waves have any kind of meaning or significance.
Finally, an oddity. Look at Mode C2 in the upper blue graph. I suspect that those blips are related to the spliced nature of the CET dataset. When you splice two datasets together, it seems to me that you’ll get some kind of higher-frequency “ringing” around the splice. Below I show Mode C2 along with what I can determine regarding the dates of the splices in the CET …

Is this probative of the theory that these are related to the splices? By no means … but it is certainly suggestive.
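The ringing idea is easy to test on synthetic data. Here is a sketch that makes no assumptions about the CET itself: two smooth noisy segments joined with a step offset, decomposed with plain EMD (again via the PyEMD package). Whether and how strongly the burst appears depends on the details, but on toy examples like this the splice typically shows up as a localized burst in the highest-frequency mode:

```python
import numpy as np
from PyEMD import EMD              # pip install EMD-signal

# Two smooth noisy segments spliced with a step offset at sample 500.
rng = np.random.default_rng(2)
t = np.arange(1000)
signal = np.sin(2 * np.pi * t / 120.0) + 0.05 * rng.normal(size=t.size)
signal[500:] += 0.5                # the "splice" discontinuity

imfs = EMD()(signal)               # plain empirical mode decomposition
c1 = imfs[0]                       # the highest-frequency mode

# Compare the fastest mode's amplitude away from and around the splice.
print("RMS of C1 far from the splice:", np.sqrt(np.mean(c1[100:400] ** 2)))
print("RMS of C1 around the splice:  ", np.sqrt(np.mean(c1[480:520] ** 2)))
```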
Here on the coast north of San Francisco, after two days of one hundred plus temperatures the clouds and fog are returning, and the evening is cool … what a world.
My best to everyone, in warm and cool climes alike,
w.
My Usual Request: When you comment, please QUOTE THE EXACT WORDS THAT YOU ARE DISCUSSING, so that we can all understand just what you are referring to.
My Other Request: Please do not stop after merely claiming I’m using the wrong dataset or the wrong method. I may well be wrong, but such observations are not meaningful unless and until you add a link to the proper dataset or give us a demonstration of the right method.
You’d think so … but these are climate scientists, not signal analysts …
w.
Willis, a nice quick shredding of ‘forced’ science. Two things about the volatility at your conjectured splice points in the record:
a) Are the splice points not recorded by the Met Office? If available, it is a fine series of data points you’ve teased out.
b) the data of the perturbed points would seem to offer a means to judge the legitimacy of what was done and, in some cases, to refine and improve the splice.
“Well … no, not really. And more to the point, using such a spliced, averaged, and adjusted dataset for an analysis of the underlying “driving forces” is totally inappropriate.”
Actually even MORE to the point, if they’re averaging anything more than a single temp station, then their result is physically meaningless.
not physically meaningless.
It was colder in the LIA.
Well done, Willis. Read the paper because am deeply interested in all things attribution. Thought it was awful, but was too lazy to write something up for here. You have properly shredded their shoddy analysis.
The only interesting attribution analysis I am aware of is Marohasy’s new paper using an advanced neural-network AI trained on 6 carefully selected high-resolution paleoclimate proxies over the past millennium to project natural warming since 1900, with the residual assumed to be AGW. Her numbers are >75% natural and <25% AGW since 1950. The main methodological issue is that proxy time resolution is still arguably poor compared to the attribution period examined. Would be interested in your keen analysis of the paper.
Here’s an interesting fact: the planet’s temperature hasn’t changed since it was made part of the international physical standards.
Day in and day out, the planet’s temp and all its important parameters remain firmly unchanged.
If it had changed, the international legal and regulatory authorities would have modified the certifications of everything from your home oven and air conditioner to the sensitive equipment in operating rooms and the instruments on the planes at the airport.
Indeed, if these values were changing, as they must be if the climate were changing, everyone in instrumentation and gas-related fields would be able to tell you about the changes.
No such revision of international physical standards has happened because the temperature of the global atmosphere hasn’t changed.
Literally – even the claim that “climate must be changing” is the purest of technical falsehoods.
Willis, just a short question about this excellent post: did you calculate the cross-correlations on the raw series (series1, series2), or did you first subtract each series’ average from every data point (i.e. calculate crosscor(series1 - average(series1), series2 - average(series2)))?
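For readers wondering what difference that makes, here is a minimal sketch: a raw sliding product is dominated by the means of the two series, while a proper Pearson correlation (np.corrcoef subtracts the means internally either way) is not:

```python
import numpy as np

rng = np.random.default_rng(3)
a = 10.0 + rng.normal(size=500)   # two unrelated series, both with mean ~10
b = 10.0 + rng.normal(size=500)

# Raw sliding product at zero lag: dominated by the means.
raw = float(np.correlate(a, b, mode="valid")[0]) / len(a)

# Pearson correlation: np.corrcoef subtracts the means internally.
pearson = np.corrcoef(a, b)[0, 1]

print("raw product:", raw)        # ~100, an artifact of the big means
print("pearson    :", pearson)    # ~0, the true (non-)relationship
```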
Having collected environmental data of all sorts in the AGW “debate”, I have pondered how temperature has been measured over time. How we measure temperature has changed from LIG thermometers, to thermocouples, to satellites. Even LIG thermometers from different runs from the same manufacturer may have different precision. While there are ways to “adjust” as one replaces a thermometer at a weather station, I know that for a few stations it is seldom done.
Willis: Excellent post, as we have come to expect. Deflating dubious conclusions is fun.
Philip Eden (very distinguished, retired meteorologist) maintained an independent CET series because he wasn’t sure that The Met Office/Hadley was an appropriate custodian of the data, and he wasn’t sure of how accurately they added current temperature records to their version of CET. This is a quote from his website that articulates his concern, in a nicely understated, English way:
Eden seems to have stopped updating his CET series in 2014 – that’s when his website stopped getting updates. His data set from 1974 (when Gordon Manley died) to 2014 is here:
http://www.climate-uk.com/provisional.htm
At one point, he posted these plots on his website. The web page is here:
http://www.climate-uk.com/CETcheck.htm
http://www.climate-uk.com/CETcheck_files/CETcheck_31604_image001.gif
http://www.climate-uk.com/CETcheck_files/CETcheck_31604_image002.gif
And his text (hidden behind the images, you have to get the page source) says:
The plot of “Hadley CET minus MO E&W” (I think that’s Met Office England and Wales) goes negative by about 0.25°C between July 2004 and July 2005, while the plot of “Philip Eden CET minus MO E&W” stays more or less flat. In other words, the Hadley CET went up by 0.25°C in one year, compared with an independent data series that follows a constant methodology.
Does this conclusion have a familiar ring to it? Do I hear “homogenization”? Karlization?
I’m sort of bogged down in work today, but I’m going to play with Eden’s data and see if the difference continues, or continues to increase. I may be back later today if work goes well and I can make the time.
I hope the images come out – they are old and the site isn’t https
Perhaps a new motto is in order: “The Met: by Royal Appointment, fiddling temperatures since 1772”.
The temperature at any specific point (e.g. a CET measurement station) varies continuously during any 24 hr period. This is normal and expected because we see the sun rise and set and note it has gone away for about half of the 24 hrs. Interestingly, if you stare at the thermometer all day for several days and take regular notes every minute or five, you will also see the variations are erratic and unpredictable. Sometimes the temperature moves quickly in a matter of minutes. Sometimes it hardly moves for hours. It appears quite random sometimes.
If you were interested in the underlying “heat” energy involved in these movements you would really need to take loads of measurements each day, plot the graphs, and look at the areas under the graphs too. Taking just the MAX and MIN isn’t going to show you much about what is really happening. And using (MAX+MIN)/2 adds nothing to your lack of knowledge!
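To illustrate with a toy example (the diurnal profile below is invented purely for illustration), compare the midrange (MAX+MIN)/2 against the average of readings taken every five minutes:

```python
import numpy as np

# Invented diurnal profile: a cool day with a short warm spike at 2 pm,
# sampled every 5 minutes over 24 hours.
minutes = np.arange(0, 24 * 60, 5)
temps = 10.0 + 8.0 * np.exp(-(((minutes - 14 * 60) / 90.0) ** 2))

midrange = (temps.max() + temps.min()) / 2.0  # what (MAX+MIN)/2 reports
true_mean = temps.mean()                      # what frequent sampling reports

print(f"midrange : {midrange:.1f} C")         # ~14.0 C
print(f"true mean: {true_mean:.1f} C")        # ~10.9 C
```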
Of course if what you are really interested in is the surface temperature of the Earth then you had better move those thermometers and stick them in the ground, or at least in contact with it because right now (or in 1752) you appear to be measuring the temperature of that thin wispy stuff with varying amounts of water in it which we call the atmosphere, or at least the tiny bit of it in your measuring cabinet blown one way or another by the wind.
In conclusion : CET data may be interesting but it is of no use whatsoever in studying the SURFACE temperature of the planet BECAUSE THE THERMOMETERS ARE NOT ON THE SURFACE.
FUNDAMENTAL !
https://www.facebook.com/photo.php?fbid=10214058825687420&set=a.1609229073358.2079038.1315144916&type=3&theater
Mods.
I was trying to upload an image I have from a book, a graph of CET record which I had on my Facebook page. Doesn’t seem to have worked! Can anyone tell me the easiest way to do this? It’s really worth seeing.
Charles
Do you have a reference or description as the image will exist elsewhere?
Tonyb
the graph is called
Accumulated temperatures during the growing season in ‘month-degrees’, 1749-1950
Location: 600 ft above sea level. Western Pennines.
It is from a book called “Climate and the British Scene” by Gordon Manley. Published by Collins in 1952.
It was compiled from ‘unadjusted data’ during a period of genuine scientific curiosity by people with no agenda.
And in my eyes it is clear evidence of scientific fraud.
Charles
I note the book is for sale on Amazon so I might get it.
In what way does it illustrate ‘scientific fraud?’
If you have Dropbox it might be easier to put it into that and then provide a link?
tonyb
That there is no correlation between CET (an energy measure) and SSN (a proxy for a forcing, i.e. a power measure) should not be a surprise. Why should there be? Are you surprised if a plot of your watt-hour meter readings is a different shape from a plot of your wattmeter readings?
A rational comparison would be between CET and the time-integral of the SSN anomaly. Properly accounting also for the net effect of ocean cycles and water vapor achieves a 98% match with measured temperatures since 1895.
integral of sunspots is meaningless
Ste – If you understood this stuff you might recognize the SSN anomaly time-integral as a proxy for energy retained by the planet and thus contributing to the average global temperature anomaly. Click my name for an analysis that explains how it works.
Dan Pangburn September 5, 2017 at 11:42 am
Dan, I agree with Steven on this one. The problem is that the shape of the integral is totally determined by the choice of the zero point. If you choose one point the integral increases steeply … choose another zero point and the integral plunges downwards. Choose a third zero point and the integral winds up right back where it started.
Which one is correct? …
As a result, as Steven says, it is useless for analyzing the system.
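Here’s a minimal sketch of the problem, using an invented sunspot-like series; the three baselines are arbitrary choices, which is exactly the point:

```python
import numpy as np

# Invented sunspot-like series: a ~11-year cycle around a mean level of 80.
rng = np.random.default_rng(4)
months = np.arange(1200)
ssn = 80.0 + 60.0 * np.sin(2 * np.pi * months / 132.0) \
      + 10.0 * rng.normal(size=months.size)

# Cumulative integral of (SSN - baseline) for three choices of zero point.
for baseline in (70.0, ssn.mean(), 90.0):
    integral = np.cumsum(ssn - baseline)
    print(f"baseline {baseline:6.1f} -> final value {integral[-1]:10.0f}")

# Below the mean the integral climbs forever, above it the integral plunges,
# and at exactly the mean it wanders back toward zero. The "trend" is an
# artifact of the baseline choice, not a property of the data.
```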
w.
One other point. Integrals are not totally useless. While the trends are meaningless, the variations sometimes are not. For example, the integral of the Southern Oscillation Index has interesting properties … more to follow.
w.
Ste & Wil – Of course I am aware of that. What I did avoids that issue.
Sunspots are a proxy for a power thing. The integral is mandatory to get an energy thing and it must be with respect to a nominal to account for both gains and losses. Divide the energy thing by the effective thermal capacitance and you get the contribution to the temperature anomaly.
It appears that you have reached a conclusion prior to a rigorous assessment of what was done. Perhaps you would gain better understanding by spending some open-mind time with my blog/analysis.
In brief, what I have discovered is that CO2 has no significant effect on climate and that the rising water vapor (which is IR active) is countering the temperature decline that would otherwise be occurring.
DAV September 4, 2017 at 7:44 pm
Perhaps I’m not making myself clear.
“The water at point X is over 2 metres deep” is a statement which is falsifiable.
“The water at point X is really deep” is a statement which is not falsifiable.
In general, science revolves around falsifiable statements. “E=mc^2” is a falsifiable statement. “John is a handsome man” is not falsifiable. One is a proper subject for scientific study. The other is not.
In fact, this is one of the central problems with climate alarmism, in that so few of the claims made are readily falsifiable.
Now, you say:
“As for falsifiable, a measure of correlation is no more falsifiable than the mean of the data. It is what it is.”
No. If you say “the mean of 6, 8, and 12 is 8.1”, that is a falsifiable statement. If you say “The two datasets have a correlation of 0.82”, again that is falsifiable. Why? Because both the mean and the correlation are measurable.
But if you say that “a period of 10 years is coincident with the length of the sunspot cycle”, that is NOT measurable. There is no measurement of “coincidentness”, so we cannot say whether the statement is true or false.
And that, in turn, means that it is NOT a proper scientific statement. Science is a funny game. Someone makes a claim which can be falsified, like say E=mc^2. Other people try to falsify it. If they can do so, it is not accepted as a provisionally true scientific statement. But if they cannot falsify it despite their best efforts, it is considered provisionally true until someone comes along who can overthrow it.
But you see … all of that process, which is the very heart of science itself, cannot occur unless the statement of the first person is falsifiable.
Best regards,
w.
Willis Eschenbach September 4, 2017 at 9:46 pm
The mean is either correctly calculated or it is not. By itself, it is a meaningless value. With the scientific method, you want to falsify predictions. Falsifying or verifying calculations (by redoing them) is just checking the work.
Neither the mean nor the correlations within the data have any meaning in and of themselves. Now if you were to come up with some hypothesis involving them with a prediction made using that hypothesis then you have something falsifiable.
However, if one is just making an observation, it is perfectly fine to qualitatively say that two things appear to be correlated. Saying it quantitatively doesn’t make it any more precise or scientific. The value really tells you nothing except that when the value is higher there is more correlation and when the value is lower there is less — a qualitative answer. To a lot of people, though, having a numerical answer is more sciency. So I can see where you are coming from and you are not alone.
Even if you really do have numbers to the Nth decimal place, the first thing that comes to my mind is: that’s nice; so what?
In addition, and somewhat tangentially, reliance on statistical significance gives a false sense of accomplishment. Please read the Briggs link.
The correlation is evidence of water vapor controlling daily Min Temp.
And I show that this regulation by water vapor reduces the effects from an increase in CO2 and any other noncondensing GHGs.
https://micro6500blog.wordpress.com/2016/12/01/observational-evidence-for-a-nonlinear-night-time-cooling-mechanism/
micro6500 September 6, 2017 at 5:26 am
The correlation between any two variables is not evidence of a causal relationship between them — thus the “correlation is not necessarily causation” admonishment. At best, it means that a causal relationship cannot be ruled out. Using correlation alone one needs at least three variables and then this minimum can only indicate causation if one of the three is a cause of the other two. See Causality: Models, Reasoning and Inference by Judea Pearl ISBN-13: 978-0521895606.
https://www.amazon.com/Causality-Reasoning-Inference-Judea-Pearl/dp/052189560X/ref=sr_1_3?s=books&ie=UTF8&qid=1504701477&sr=1-3&keywords=judea+pearl
The basic flaw with the temperature records is the fact that absolute temperature may vary within a wide margin of about 2 degrees C. That became the argument for using anomalies instead: the anomaly would show a trend if the temperature changed, despite the problems with the station network. A nice logical assumption, mostly true but not always.
But then they screwed the pooch by changing stations, adjusting stations, and changing the means of extrapolating between stations. They have a nice 2 degree C range of uncertainty that allows them to make that anomaly-shows-the-trend concept do anything they wish within the original believed bounds of accuracy.
Thus all the changes they make could individually be justified, yet the trend could still be all wrong as a result.
This is a big issue in accounting, where determining values is often uncertain. An appraisal, for example, might come up with multiple potential values. Therefore the accountant has to be aware of that as he does an audit, to ensure the company accounts for items in a consistent manner, not cherry-picking one method for one asset and another method for another asset. Also, period-over-period comparisons create a picture of the company’s financial progress, so consistency must be observed there as well. The client can change methods, but to do so and account for it properly the figures have to be redone for all periods using both the new and the old method, so the investor can see what effect the change has on the company’s financial results. None of this is done in climate science, and as a result it is highly unreliable.
Especially in temperature measurement to determine global trends “consistency must be observed”. In addition, for temperature, one must rationally attend to confounding things like ‘heat island effect’ and ‘satellite drift’.
This is basically what BEST is doing; the method is different, but they are pulling the rising GHG signal out of the noise that represents temp data.
The book I read explained it as being able to filter a piano out of the noise in Times Square. Where there is no piano.
Where they fail is that GHGs are not the defining attribute of surface temps. Water vapor is.
The fact that water vapor condenses is both why it controls surface temps and why they erroneously exclude it. But I ask them this: when was the last time the atm did not hold any water vapor?
The non-condensing GHGs are really important to an ice-ball earth. But we’d want more, not less. And we’re not in an ice ball.
Willis Eschenbach, thank you for the essay. I thought that the original and your critique were both worth reading.
If there is (or were) a causal link between the ENSO and the Central England temperatures (of which the CET series is an imperfect record), what do you think the actual R^2 value is (or would be)? This relates to the problem I posed a while ago about the poor statistical power of statistical tests that adhere to the conventional 5% and 1% levels.
While not CET, there is a greater than 97% correlation between dew point temp and Minimum temp over 79 million surface station daily records in the Air Force’s NCDC GSoD daily summary.
Oh, you can’t run cross-correlation code between humidity and temp, the relationship is nonlinear, and the code doesn’t detect anything.
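A toy sketch of that point (invented data, not dew points): a linear correlation can miss a perfectly tight nonlinear relationship, while a rank-based measure still catches the monotonic case:

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

rng = np.random.default_rng(5)
x = rng.uniform(-3, 3, size=2000)
y = x ** 2                        # y is completely determined by x

print("pearson (x, x^2) :", round(pearsonr(x, y)[0], 3))   # ~0
print("spearman(x, x^2) :", round(spearmanr(x, y)[0], 3))  # also ~0

# A monotonic but nonlinear relationship is different: ranks catch it.
z = np.exp(x)
print("pearson (x, e^x) :", round(pearsonr(x, z)[0], 3))   # well below 1
print("spearman(x, e^x) :", round(spearmanr(x, z)[0], 3))  # exactly 1
```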
Matt, thanks for the comments. I do think that there is a link between El Nino and other parts of the globe.

Regarding the value of the link, here’s the oddity.
The temperatures in the Pacific tropics and the North Atlantic seem to be operating in some kind of see-saw pattern. When one is going up the other is going down. Go figure … however, this effect fades out by the time you get to England.
Note that there are other areas of the globe which are more strongly anti-correlated with the NINO3.4 area.
w.
Cool. Thanks again.