Driving Forces

Guest Post by Willis Eschenbach

There’s a new paper published in Nature Scientific Reports called “Identification of the driving forces of climate change using the longest instrumental temperature record”, by Geli Wang et al, hereinafter Wang2017.

By “the longest instrumental temperature record” they mean the Central England Temperature, commonly called the “CET”. Unfortunately, the CET is a spliced, averaged, and adjusted temperature record. Not only that, but the underlying datasets from which it is constructed have changed over time. Here are some details from the study by Parker et al.:

Between 1772 and 1876 the daily series is based on a sequence of single stations whose variance has been reduced to counter the artificial increase that results from sampling single locations. For subsequent years, the series has been produced from combinations of as few stations as can reliably represent central England in the manner defined by Manley. We have used the daily series to update Manley’s published monthly series in a consistent way.

We have evaluated recent urban warming influences at the chosen stations by comparison with nearby rural stations, and have corrected the series from 1974 onwards.

According to that study, no fewer than 14 different and distinct datasets were used to construct the CET.

Perhaps predictably, the authors of Wang2017 completely fail to mention any of this … instead, they simply say:

As the world’s longest instrumental record of temperature, the Met Office Hadley Centre’s CET time series represents the monthly mean surface air temperature averaged over the English midlands and spans the period January 1659 to December 2013.

Well … no, not really. And more to the point, using such a spliced, averaged, and adjusted dataset for an analysis of the underlying “driving forces” is totally inappropriate.

Now, in the Wang2017 analysis, they claim to find a couple of “driving forces” of the CET. Of these they say:

The peak L1 = 3.36 years seems to empirically correspond to the El Niño-Southern Oscillation (ENSO) signal, which has a period range of within 3 to 6 years. ENSO is arguably the most important global climate pattern and the dominant mode of climate variability13. The effect of ENSO on climate in Europe has been studied intensively using both models and observational or proxy data e.g. refs 14, 15, and a consistent and statistically significant ENSO signal on the European climate has been found e.g. refs 14 and 16.

The peak L4 = 22.6 years is coincident with the Hale sunspot cycle.

Let me start by saying that a claim that something “seems to empirically correspond” with something else is not a falsifiable claim … and that means it is not a scientific claim in any sense. And the same is true for a claim that something “is coincident with” something else. The use of such terms is scientific doublespeak, bafflegab of the highest order.

Setting that aside, here’s what the CET actually looks like:

[Figure: CET, 1659–2016]

Now, there is a commonly-used way to determine whether two datasets are related. This is a cross-correlation analysis, which shows more than just the correlation of the two datasets. It shows the correlation at various lag and lead times. Here is the cross-correlation analysis of the Central England Temperature and the El Nino datasets:

[Figure: cross-correlation of CET and ENSO]
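For readers who want to try this kind of check themselves, here is a minimal Python sketch of a lagged cross-correlation. The file names are placeholders for illustration, not the data used for the figure above.

import numpy as np

def cross_correlation(x, y, max_lag=60):
    """Correlation of x and y at lags from -max_lag to +max_lag.
    A positive lag means y leads x by that many time steps."""
    lags = np.arange(-max_lag, max_lag + 1)
    cc = []
    for lag in lags:
        if lag < 0:
            r = np.corrcoef(x[:lag], y[-lag:])[0, 1]
        elif lag > 0:
            r = np.corrcoef(x[lag:], y[:-lag])[0, 1]
        else:
            r = np.corrcoef(x, y)[0, 1]
        cc.append(r)
    return lags, np.array(cc)

# Hypothetical inputs: monthly CET anomalies and a monthly ENSO index,
# already trimmed to the same period.
# cet = np.loadtxt("cet_monthly.txt")
# nino = np.loadtxt("nino34_monthly.txt")
# lags, cc = cross_correlation(cet, nino)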

Does the El Nino affect the temperature in Central England? Well, perhaps, with a half-year lag or so. But it’s a very, very weak effect.

Then we have their claim about the relationship of the CET with sunspots, wherein they make the claim that a 22.6-year signal is “coincident with” the sun’s Hale Cycle. “Coincident with” … sheesh …

Now, the “Hale Cycle” reflects the fact that around the time of the sunspot maximum, the magnetic polarity of the sun reverses. As a result, the Hale Cycle is the length of time from any given sunspot peak to the peak of the second sunspot cycle following the given peak.

And how long is the Hale Cycle? Well, here’s a histogram of the different lengths, from NASA data:

[Figure: histogram of Hale Cycle lengths]
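A rough sketch of how such a histogram can be built, using approximate sunspot-maximum years purely for illustration (round numbers, not the NASA table itself):

import numpy as np

# Approximate years of sunspot maxima, for illustration only.
maxima = [1870, 1884, 1894, 1906, 1917, 1928, 1937, 1947,
          1957, 1968, 1979, 1989, 2000, 2014]

# The Hale cycle runs from one sunspot peak to the peak two cycles later.
hale_lengths = [maxima[i + 2] - maxima[i] for i in range(len(maxima) - 2)]

counts, edges = np.histogram(hale_lengths, bins=range(17, 30))
for length, count in zip(edges[:-1], counts):
    print(f"{int(length)} yr: {'*' * count}")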

So … is a signal with a 22.6-year cycle “coincident with” the length of the Hale Cycle? Well, sure … but the same is true of any cycle length from 17 to 28 years. Color me totally unimpressed.

Next, do the sunspots actually affect the temperature of Central England? Again, the cross-correlation function comes into play:

[Figure: cross-correlation of monthly CET and sunspots]

Basic answer is … well … no. Cross-correlation shows no evidence that the sunspots have any significant effect on the CET.

Finally, what kinds of signals do show up in the CET data? To answer this question, I use the Complete Ensemble Empirical Mode Decomposition (CEEMD) method, as discussed here. Below, I show the CEEMD decomposition of the monthly CET data. The upper graph (blue) shows the different empirical mode cycles (C1 to C7) which, when added together along with the residual, give us the raw data in the top panel.

The lower graph (red) shows the periodogram of each of those same empirical mode cycles.

[Figures: CEEMD decomposition of CET; CEEMD CET spectra]
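For anyone who wants to experiment with this kind of decomposition, here is a minimal sketch using the open-source EMD-signal Python package (imported as PyEMD), whose CEEMDAN routine is close in spirit to the CEEMD used for these figures, though not necessarily the same implementation. The input below is a random stand-in so the sketch runs; replace it with a real monthly series.

import numpy as np
from PyEMD import CEEMDAN            # pip install EMD-signal
from scipy.signal import periodogram

# Stand-in for the monthly CET anomalies (replace with the real series).
rng = np.random.default_rng(0)
cet = rng.standard_normal(12 * 300)

ceemdan = CEEMDAN(trials=20)         # fewer ensemble trials keeps the demo quick
modes = ceemdan(cet)                 # each row is one empirical mode

for i, mode in enumerate(modes, start=1):
    freqs, power = periodogram(mode, fs=12.0)      # 12 samples per year
    peak = freqs[np.argmax(power[1:]) + 1]         # skip the zero-frequency bin
    period = 1.0 / peak if peak > 0 else float("inf")
    print(f"Mode C{i}: strongest period ~ {period:.1f} years")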

Of all of these empirical modes, the strongest signal is at about 15 years (C4, lower red graph). There is a signal at about 24 years (C5, lower red graph), but it is much weaker, less than half the strength. In the corresponding mode C5 in the upper blue graph we can see why—sometimes we can see a cycle in the 25 to 30-year range, but it fades in and out.

To me, this is one big advantage of the CEEMD method—it shows not only the strength of the various cycle lengths, but also just where in the entire timespan of the dataset the cycles are strong, weak, or non-existent. This lets us see whether we are looking at a real persistent cycle which is visible from the start to the finish of the data, or whether it is a pseudo-cycle which fades in and out of existence.

Finally, is there any evidence of anthropogenic global warming in the CET data? To answer this, look at the residual signal in the bottom panel of the blue graph above. This is what remains after all of the underlying cyclical waves have been removed … looking at that it seems that there is no unusual recent warming of any kind.

My conclusion? If you use enough different statistical methods, as Wang2017 has done, you can dig out even the most trivial underlying cycles of a dataset … but the reality is, when you decompose even the most random of signals, it will show peaks in the underlying cycles. It has to—as Joe Fourier showed, every signal can be decomposed into underlying simpler waves. However, this does not mean that those underlying simpler waves have any kind of meaning or significance.
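As a quick toy illustration of that point (synthetic noise, nothing to do with the Wang2017 data), even pure random noise yields apparent spectral peaks:

import numpy as np
from scipy.signal import periodogram

rng = np.random.default_rng(0)
noise = rng.standard_normal(12 * 355)          # ~355 "years" of monthly white noise

freqs, power = periodogram(noise, fs=12.0)     # 12 samples per year
top = np.argsort(power[1:])[::-1][:5] + 1      # five largest non-zero-frequency peaks
for i in top:
    print(f"period ~ {1.0 / freqs[i]:.1f} years, power {power[i]:.2f}")
# The noise contains no real cycles, yet the periodogram always shows peaks.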

Finally, an oddity. Look at Mode C2 in the upper blue graph. I suspect that those blips are related to the spliced nature of the CET dataset. When you splice two datasets together, it seems to me that you’ll get some kind of higher-frequency “ringing” around the splice. Below I show Mode C2 along with what I can determine regarding the dates of the splices in the CET …

[Figure: CET empirical mode C2]

Is this probative of the theory that these are related to the splices? By no means … but it is certainly suggestive.
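One way to put the splice idea to a rough test (a toy sketch on synthetic data, again assuming the EMD-signal package, and not a re-analysis of the CET) is to splice two series with a small offset and look at the highest-frequency empirical mode around the join:

import numpy as np
from PyEMD import EMD        # pip install EMD-signal

rng = np.random.default_rng(1)
t = np.arange(2400)                              # 200 "years" of monthly data
series = np.sin(2 * np.pi * t / 12) + 0.3 * rng.standard_normal(t.size)
series[1200:] += 0.8                             # artificial splice: step offset at month 1200

modes = EMD()(series)                            # modes[0] is the highest-frequency mode
near = slice(1150, 1250)
print("RMS of mode 1 near the splice:", np.sqrt(np.mean(modes[0][near] ** 2)))
print("RMS of mode 1 elsewhere:      ", np.sqrt(np.mean(modes[0][:1000] ** 2)))

If the splice shows up as a local burst of high-frequency energy, that is at least consistent with the “ringing” explanation, though it proves nothing about the CET itself.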


 

Here on the coast north of San Francisco, after two days of one hundred plus temperatures the clouds and fog are returning, and the evening is cool … what a world.

My best to everyone, in warm and cool climes alike,

w.

My Usual Request: When you comment, please QUOTE THE EXACT WORDS THAT YOU ARE DISCUSSING, so that we can all understand just what you are referring to.

My Other Request: Please do not stop after merely claiming I’m using the wrong dataset or the wrong method. I may well be wrong, but such observations are not meaningful unless and until you add a link to the proper dataset or give us a demonstration of the right method.

117 Comments
Mark Luhman
September 3, 2017 9:50 pm

Trying to me sense of red noise is a fools errand, to bad it pay a few idiots so well. Maybe congress need to see in mass the old film Music Man. that would not help much since the film is well above our congress and and Senators head, somehow Jeff Flake is name is very revealing about his mental state. McCain seems to have had the brain tumor for well over twenty years, his dementia rival Reagan later years.

commieBob
Reply to  Mark Luhman
September 4, 2017 12:14 am

Have you perhaps stayed up too late watching baseball? link

DC Cowboy
Editor
Reply to  commieBob
September 4, 2017 4:42 am

and indulging in a bit too much cognac, perhaps?

Pamela Gray
Reply to  Mark Luhman
September 4, 2017 5:51 am

I’ve done this on occasion. AND typed in a WUWT comment or two. I can see that Mark types like I do after a few.

Nylo
September 3, 2017 10:04 pm

“Finally, is there any evidence of anthropogenic global warming in the CET data? To answer this, look at the residual signal in the bottom panel of the blue graph above. This is what remains after all of the underlying cyclical waves have been removed … looking at that it seems that there is no unusual recent warming of any kind.”
Really? Well, I guess it depends on what you understand by “unusual”. It is certainly unusual if you compare it to the rest of the same data set. Warming from 1900 onwards is clearly bigger and faster than before, in the residual. Although according to the climate establishment, only the part after 1950 could be anthropogenic. If that’s what you meant (1950-2013 not different from 1900-1950), then I agree.

September 3, 2017 10:17 pm

Really great critical analysis. Thank you.

jorgekafkazar
September 3, 2017 10:18 pm

“…seems to empirically correspond…” I think this puts the Wang paper well into “wiggle matching” territory

J Mac
September 3, 2017 11:01 pm

Willis,
A thoughtful parsing of this ‘data base’ and the conjectures raised by Wang2017, et.al.
Thanks!

Phillip Bratby
September 3, 2017 11:31 pm

Since 1659, Central England has changed from a very rural environment to a highly densely populated region with several large cities and towns and all the associated infrastructure.

richard verney
September 3, 2017 11:55 pm

Whilst CET is informative of past historic behavioral patterns in temperatures, one has to be very careful with CET since the whole of central England is now one great heat island. Not simply due to urbanisation and urban sprawl, but also changes in farming, irrigation and water management.
No doubt Climatereason will check in and comment since he probably knows more about CET than anyone else and has posted articles on Judith Curry’s site.

Germinio
September 3, 2017 11:58 pm

Two quick comments:
Firstly, if the paper by Wang et al. was worth reading it wouldn’t be in Scientific Reports. This is an open access journal designed by Nature to get authors to pay to publish, and it has essentially no quality standards except that the reviewers don’t think the manuscript is wrong. If this paper had anything significant to say then it would be published elsewhere.
Secondly, Willis and Wang are analysing two different things. The first thing that Wang et al do is to construct a theoretical time series of the driving force for the Central England Temperature record. This is clearly similar to but different from the temperature record itself. I have no idea how this driving force is constructed and so can’t comment, but any discussion about the periodicities in the temperature record is irrelevant for the purposes of this paper. And given where the work was published, I imagine that this work was rejected several times from higher quality journals before ending up here.

Reply to  Germinio
September 4, 2017 1:58 am

“the reviewers don’t think the manuscript is wrong.”
Gosh, I wish all journals were like that – on account of that being exactly and only what peer review ought to be. Trouble is they use “not interesting to our audience” to sink papers that are inconsistent with their thoughts and opinions.

Greg
September 3, 2017 11:59 pm

Hi Willis , always interesting to see periodic analysis of climate and you seem to make a better job of it than Wang et al.
one point:

To me, this is one big advantage of the CEEMD method—it shows not only the strength of the various cycle lengths, but also just where in the entire timespan of the dataset the cycles are strong, weak, or non-existent.

When Fourier decomposition fits a cycle it means that it is always there, even when not visible during time frames because it is cancelled by other cycles that are in anti-phase at that time. Fourier does not deal with cycles of variable magnitude.
So if your periodogram of C5 shows two somewhat broad peaks and the decomposition seems to have times which are rather flat that is due to constructive and destructive interference of these cycles. Sometimes they add sometimes they cancel. It does not mean that they have variable amplitudes. They do not.

Germonio
Reply to  Willis Eschenbach
September 4, 2017 12:28 am

Whether or not “beat frequencies” exist is more subtle than it appears. In a linear system they do not
exist and it is only when the intensity is detected with a “square law” detector do they appear. And with
two closely spaced frequencies what you hear is a change in volume which the brain interprets as a third harmonic. It is not real.

Greg
Reply to  Willis Eschenbach
September 4, 2017 1:48 am

Greg, while that may be true, it’s a difference that makes no difference. If there is destructive interference that totally cancels out a given cycle, in the real world that cycle doesn’t exist, regardless of whether Joe Fourier says it is there or not.

If the aim is to detect and determine drivers of a physical process, it is essential to understand that a periodic forcing may still be present active and real even when it is being cancelled out by other effects and not visible.
That is why, when you ask Joe Fourier’s advice on a signal, you should be prepared to accept what he tells you 😉
If you do not realise and accept this you may erroneously conclude that a driver has no effect on the system under study because at a certain point in time it is not visible. It is a difference which matters.

Richard G.
Reply to  Willis Eschenbach
September 4, 2017 9:40 am

“When two pure tones are near each other, we hear a “beat frequency”. ”
An example in the real world: Twin prop aircraft with unsynchronized propellers ‘throb’ as they cycle though beat frequency.

Hartley
Reply to  Willis Eschenbach
September 5, 2017 6:33 am

Willis, while “beat notes” is really separate from the main subject, I think it might be appropriate to note that “beat notes” (AKA mixing products of two or more signals) are a product of the detector used (your ears, or the detector stage in a radio receiver), and are not present in the environment ahead of the detector. This is why Fourier doesn’t “hear” them.

Reply to  Willis Eschenbach
September 5, 2017 8:31 am

Ger:
It is real to your ear and brain. The interference pattern creates a volume change at a given period. This is a real change to the position of your ear drum,

Greg
Reply to  Willis Eschenbach
September 4, 2017 1:28 am

Thanks Willis, I did appreciate that the CEEMD decomposition is not FA. however each band is then subject to spectral analysis. That is the level at which I was making those points. Eg the variable visibility in the C5 band and periodogram of that band.
There is no contradiction with the beats phenomenon either. FA breaks down such a signal into two close but different fixed frequencies. But this is mathematically identical to an intermediate frequency which is amplitude modulated. There is nothing more “real world” about either representation.
At certain frequencies, human perception of sound may interpret the sound as modulated beats; that is fine. FA resolves this as two fixed amplitudes: that is the “perception” of the Fourier method. Both are valid, real world and totally mathematically equivalent.
To get the human “beats” from the FA result you take the average frequency modulated by a signal of half the difference of the FA frequencies. You then need to realise that human perception of sound only hears the amplitude of the intensity of the sound and is insensitive to the phase. So we do not hear the smoothly varying sine wave envelope but the folded sine wave bumps, i.e. we perceive two bumps per cycle. The “beats” are twice as fast as the physically real sine wave modulation.
Here is what it looks like: [image]
https://climategrog.wordpress.com/beats/

# gnuplot: sum of two cosines with periods p1 and p2 (years)
p1=9.1; p2=10.8
plot [1900:2100] cos(2*pi*x/p1) + cos(2*pi*x/p2)

Greg
Reply to  Willis Eschenbach
September 4, 2017 1:57 am

Here is a useful reminder of the formulae which convert sum of cosines into a single modulated cosine.
https://trans4mind.com/personal_development/mathematics/trigonometry/sumProductCosSin.htm
See the second equation under the heading : “Sum of Cosine and Sine”.
Beats and Fourier descriptions are mathematically identical results.
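For reference, the identity in question is cos(a) + cos(b) = 2 cos((a+b)/2) cos((a−b)/2), which is why two close frequencies are exactly equivalent to a carrier at the mean frequency, amplitude-modulated at half the difference frequency.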

Greg
Reply to  Willis Eschenbach
September 4, 2017 2:14 am

[image]

Greg
Reply to  Willis Eschenbach
September 4, 2017 11:05 am

How about the circa 60y in AMO is a “beats” phenomenon between 9.1 y lunar and nominally 10.8 y solar drivers.

Reply to  Willis Eschenbach
September 5, 2017 10:44 am

How about the circa 60y in AMO is a “beats” phenomenon between 9.1 y lunar and nominally 10.8 y solar drivers.

I do not know what drives the AMO cycle, but I do know temps are regulated by the AMO and PDO, and the way these two control how most of the water vapor get distributed across the planet to cool as it moves poleward.

richard verney
September 4, 2017 12:08 am

the same is true for a claim that something “is coincident with” something else. The use of such terms is scientific doublespeak, bafflegab of the highest order.

I must beg to differ. Personally, I like the expression “coincident with” since it emphasises the point that correlation is not causation, and I consider the expression should be used more often.
As a scientist, one knows (or ought to know) that correlation does not mean causation, but even so, the mere use of the expression correlation carries a subliminal message which may impact upon the way one reads a paper or thinks about a point. To a lay person, the use of the word correlation very probably is suggestive of a causative link.
In my opinion coincident is precisely the right expression to use. It is implies that the link may be nothing more than a fortuity and not in any way whatsoever causally connected, or there could be some causality but we just do not know.

richard verney
Reply to  Willis Eschenbach
September 4, 2017 12:56 am

Coincidence is defined by the fact itself. If it cannot be defined by the fact, it cannot be coincident, and one is forced to use more general expressions such as there are some similarities between which is an expression that I might use when comparing paleo proxy records of temperature and CO2. No correlation, no coincidence, but some similarities.
It does not matter whether it can or cannot be falsified. It is simply a factual statement, and nothing more..
Thus for example, If one looks at the satellite data set, one will note that there is a step change in temperature of around 0.3degC coincident with the Super El Nino of 1997/98 (I don’t like using one hundredths of a degree, so I have rounded the change).
That is just a fact. It points out that two happenings have occurred at the same time, ie., in and around 1997/98 there was a Super El Nino, and in and around 1997/98 there is a step change in temperature.
it does not state that the Super El Nino of 1997/98 drove the step change in temperature. We simply do not know what caused the step change in temperature that occurred. All we know is that the step change in temperature happened, when it happened and also that it happened at the same time that there was a Super El Nino. Are the two in some way connected? Well maybe. Presently our knowledge and understanding is not sufficient to answer that question, but it is a noteworthy point that the step change in temperature and the Super El Nino both occurred in similar time frame, such that it highlights an area of investigation. That is why it would be appropriate to say the two events are coincident with one another.
What needs to be falsifiable is conclusions that are drawn. These conclusions need to be capable of testing, if they are to be something other than mere opinion.
As regards your cycle example, you are right that it is subjective, and I consider that one always has to use an element of commonsense. It is difficult to see that a 20 year cycle and an 18 year cycle can be coincident with one another. Factually, they soon become out of phase with one another. With a short data set, it would be possible that the peak of each cycle is coincident with one another, or that the start of one cycle is coincident with the peak in the other cycle, but it is just a factual matter, and one must take account of margins of errors, uncertainties and de minimis

DAV
Reply to  Willis Eschenbach
September 4, 2017 2:32 am

The same with “correlation”. It too is subjective and only applies to the data set used. It’s whether it can be used for prediction that counts. richard verney September 4, 2017 at 12:56 am perhaps says it better.

Reply to  Willis Eschenbach
September 4, 2017 4:33 am

If one looks at the satellite data set, one will note that there is a step change in temperature of around 0.3degC coincident with the Super El Nino of 1997/98

It always surprises me that people will do this association instead of actually looking at the data. The step warming took place after the 2000-2001 strong La Niña.
http://www.drroyspencer.com/wp-content/uploads/UAH_LT_1979_thru_July_2017_v6-550×317.jpg

Yogi Bear
Reply to  Willis Eschenbach
September 4, 2017 5:35 am

Actually looking at the data, I would reckon that it stepped up from 1995 with the AMO warming.

richard verney
Reply to  Willis Eschenbach
September 4, 2017 5:44 am

When one talks about a step, one has to differentiate between the riser and the tread. In the specific case, the riser is the ENSO event. And the ENSO event is not simply the El Nino but also the following La Nina and recovery therefrom.
The fact is that there is quite a long evolution in ENSO events, and the satellite appears to be more sensitive to warming than it is to cooling possibly due to convection carrying warmth up to the altitude at which the satellite takes its measurements. The satellite is also sensitive to short lived volcanic cooling possibly because the cooling occurs at altitude due to high level aerosol emissions.
The step change is the completion of the ENSO cycle, and whatever small lag there is in the system to that event particularly the oceans.
What one sees in the data set is that before the Super El Nino of 1997/98, the anomaly was tracking at around the – 0.15deg C level, and after the completion of the ENSO cycle (that includes the subsequent La Nina and recovery therefrom),the anomaly tracks around the +0.2 degC level.

Reply to  Willis Eschenbach
September 4, 2017 4:54 pm

the ENSO event is not simply the El Nino but also the following La Nina

You are even drifting further apart from the evidence. There is no such thing as an ENSO event made by an El Niño and following La Niña. The big 2015-16 El Niño was not followed by a La Niña, and the frequencies of Niños and Niñas can be quite different over a certain period of time. ENSO is an oscillation that is anything but regular, and the biggest increases in temperatures come after strong La Niña events. This is only logical as El Niño discharges oceanic temperature to the atmosphere and eventually to space, lost by the climate system, while La Niña recharges oceanic temperature from solar irradiation, due to lower cloud cover in the tropics, that remains in the climate system.

DAV
Reply to  Willis Eschenbach
September 4, 2017 7:44 pm

Willis Eschenbach September 4, 2017 at 7:55 am

We have a mathematical method to measure the correlation of two datasets. We also have methods to determine whether that correlation is statistically significant

THEREFORE, one is a scientifically valuable and falsifiable statement, and one is not..

No. One is quantifiable while the other is qualitative but neither is particularly informative.
As for falsifiable, a measure of correlation is no more falsifiable than the mean of the data. It is what it is. You will get the same value every time for the same data.
I’m not a mind reader but I suspect that measures of correlation and statistical significance don’t mean what I suspect you think they mean. See: http://wmbriggs.com/post/4024/ for more on this topic.

Greg
September 4, 2017 12:10 am

Of all of these empirical modes, the strongest signal is at about 15 years (C4, lower red graph). There is a signal at about 24 years (C5, lower red graph), but it is much weaker, less than half the strength.

Why do you not mention the circa 21y peak? In C6 it is a narrower, clearer peak than the broad 24y lump. It is also present in C5, so you need to add the presence in the two bands C5 and C6, since the arbitrary processing bands have split this peak across the two because, by chance, it falls away from the centre of both and is attenuated by the bandpass effects of CEEMD in both cases. It is not even clear that adding will give an accurate measure, but clearly the attenuated peak in either band is not a full representation of its strength.
Maybe CEEMD could be modified to produce differently centred bands (maybe by choosing more or fewer bands in the processing).
Since you go to some length to challenge the idea of the Hale cycle having an effect, it does not seem right to simply ignore this circa 21y peak in your commentary.
Anyway, thanks for the work , it is enlightening.

Greg
Reply to  Willis Eschenbach
September 4, 2017 1:37 am

Ah, my apologies, I was misreading the log scale on the periodogram. It was the 24y peak I was misreading at being about 21y. There is indeed nothing relevant around 21/22y in what you show.
My comments thus relate to the strength of the 24y peak which is about the same as 15y in strength.
Thanks for the correction.

September 4, 2017 12:17 am

As you rightly say the CET is an average. It is an average of all seasons. Thus any seasonal signals are diluted.
If you just look at the January CET it shows winter temperatures steadily becoming milder overlaid with a variable cycle that is no doubt influenced by jet stream periodicity.
Interestingly the June record shows no temperature change over the 300 years.

Reply to  Willis Eschenbach
September 4, 2017 1:15 am

Here is a link to a Clive Best Blog showing CET seasonality.
http://clivebest.com/?attachment_id=7018
Philip Eden, a UK weatherman used to explain that the UK winter temperature was related to the number of days that we experienced high pressure systems that gave an Easterly cold continental airflow and therefore a reduced prevailing SW mild atlantic weather. Periodicity of this I would attribute to the Jetstream.

richard verney
Reply to  Willis Eschenbach
September 4, 2017 1:06 pm

[image]
I have only viewed this with the mark 1 eyeball but there does not appear to be any obvious and remarkable trends to max temperatures.

Greg
Reply to  Willis Eschenbach
September 4, 2017 4:21 pm

Richard, if you are looking for 1 deg C on a scale that has a range of 20 deg C and a signal with over 15 deg C of noise, would you really expect to see it ?
That is why low-pass filters are used to remove noise and expand the scale to look at smaller long term changes.

richard verney
September 4, 2017 12:27 am

Setting that aside, here’s what the CET actually looks like:

[image]
What is interesting from that plot is that the 20th century was not warmer than ~1775 until around the late 1990s.
It was only around the lead up to Super El Nino of 1997/98 and afterwards, that 20th century temperatures exceeded those seen in ~1775. So much for claims of unprecedented warmth.
Perhaps more significant than that is to look at the rate of the rise in temperatures during the last half of the 20th century (i.e., the modern warm period) and compare that with the rate of rise in temperatures between 1680 and 1735, or for that matter between 1880 and 1940.
As can be seen, there is nothing remarkable about the rate of temperature rise in the last half of the 20th century; there is nothing unprecedented about the rate of change in temperatures, and no suggestion that increasing levels of CO2 have brought about changes in temperatures at a higher rate than previous warming episodes seen in CET.

Greg
September 4, 2017 12:41 am

Wang:

The peak L1 = 3.36 years seems to empirically correspond to the El Niño-Southern Oscillation (ENSO) signal, which has a period range of within 3 to 6 years.

This is nonsense. Either you have a period or you do not. A period can not change by a factor of two and still be a period. If you do not have a fixed period in ENSO you can not attribute it to a fixed peak in a periodic decomposition.
There is quite a strong peak in cross-correlation of UAH TLT (lower troposphere air temp) and the ENSO index: [image]
This does NOT establish that ENSO is a “driver” , there are many regions showing both positive and negative correlation with ENSO. That is not consistent with “driving” changes, simply that it is a planet wide variability which can be characterised by the ENSO index.

Greg
Reply to  Greg
September 4, 2017 12:44 am

There is a clear periodic variation in the positive lag section of the graph with a period of about 272 mo / 6 = 45 mo or 3.8 years. That is over the rather short period of the satellite data since 1979.

Greg
Reply to  Greg
September 4, 2017 12:47 am

correction : (272-5)/6/12 = 3.7 years.

1saveenergy
September 4, 2017 12:49 am

Huge amount (8 pages) of CET info & analysis on xmetman site
http://www.xmetman.com/wp/cet/page/3/

Mick In The Hills
September 4, 2017 1:07 am

I always look forward to your essays Willis.
Both here and on your Skating blog.
The statistical treatments of temp reconstructions are however a bit beyond my ken.
Can you do a pre-emptive piece soon on whether someone on “The Team” has done a paper asserting the correlation of chicken entrails with agw?

richard verney
September 4, 2017 1:48 am

Willis
It is good to see a scientific post by you. These days, these are too few and too far between.
Given the way climate science has developed, namely the extrapolation of trends from data sets not fit for scientific purpose, you highlight a deficiency in this science, namely a lack of competence in data handling, data presentation and statistics. In fact, statistics ought to be the key foundation course, and we all know what Ernest Rutherford said about the use of statistics and their role in science (“If your experiment needs statistics, you ought to have done a better experiment.“).
In my opinion, there is far too much in the way of curve fitting and seeking to make out causative trends when the data is so poor and inconclusive that no firm conclusions can be drawn. I know that you do not like the use of wishy washy expressions such as may or could etc but since almost all the data sets are not fit for scientific purpose (for a variety of different reasons) this is all an honest scientist can correctly state.
You state:

Finally, is there any evidence of anthropogenic global warming in the CET data? To answer this, look at the residual signal in the bottom panel of the blue graph above. This is what remains after all of the underlying cyclical waves have been removed … looking at that it seems that there is no unusual recent warming of any kind.

Obviously, your presentation bears that out, but the position is so stark that the mark 1 eyeball readily reveals the same point (as I noted at September 4, 2017 at 12:27 am above).

Peta of Newark
September 4, 2017 1:52 am

And was Central England, however many years ago, as it is now? The breakpoint being the end of WW2.
I am slap bang in the middle of Central England and right now, when I go driving around/exploring, I see mile after mile of brown dirt.
Only a month ago growing wheat, barley and oilseed – now harvested and gone leaving bare dirt with a bit of dead (organic) trash lying around.
That brown dirt has low albedo, so it gets hot.
It is bone dry – gets hotter than if it were damp.
There are no living plants transpiring water to have a cooling effect.
So it warms up.
That trash, exposed to a still strong sun, is being smashed up (thence oxidised) by the same solar energy that created it less than 2 months ago.
Anything organic in the surface layer of that dirt (the ‘soil organic’ fraction and bacteria) is exposed to the sun is being oxidised.
So it releases carbon dioxide.
And that soil organic fraction is what’s left of what was ‘litter’ from under the forest that covered England, until it was cut down (by Henry VIII?) to make ships, cannons and cannon balls.
That litter was anything up to 2 feet deep and nearly all organic material. The dirt round here now is little different to the sand you’d find in Southern Tunisia. (I say that because I’ve been there)
Tiny little scrap of forest is left (Sherwood Forest they call it round these parts)
I put a datalogger (see the advert for them here) in both a section of forest and another datalogger in my garden – with a cornfield on 2 sides, barley on one side and potatoes on the 4th corner.
The forest datalogger runs 1 degC cooler – on average of course (recording at 30 minute intervals)
And see the CET temp graph.
See it ramp up after WW2.
Because (certainly English) politicians got such a fright from having to beg for food from their colonial cousins, they went hell-for-leather to grow enough ‘at home’
They coincidentally and fortunately got the tools to do that – big tractors and nitrogen fertiliser arrived simultaneously on farms.
CET temperatures and carbon dioxide levels started ramping up at the exact same time.
No coincidence, not in my book.
The bare dirt lifted the temperature and the nitrogen hoisted the CO2,
Tyndall noticed that CO2 had ‘colour’ That’s all.
So what.
Most things do have colour.

richard verney
Reply to  Peta of Newark
September 4, 2017 2:12 am

Unless people have lived in this area, I suspect that they do not appreciate how things have changed these past 100 years. Whilst England still possesses pleasant countryside, the countryside is now very different. The point you make is a good one, providing a little more detail on the point that I made above.

Whilst CET is informative of past historic behavioral patterns in temperatures, one has to be very careful with CET since the whole of central England is now one great heat island. Not simply due to urbanisation and urban sprawl, but also changes in farming, irrigation and water management.

Steve Vertelli
Reply to  Peta of Newark
September 4, 2017 2:48 pm

Most data has been falsely warmed after 1940. If NASA or NOAA have touched it, it’s been intentionally corrupted by government employees.
An excellent place to learn about that is Tony Heller’s site, where he has gone to the trouble of making flashing .GIFs showing the systematic and continual alteration of the past before 1940 and warming all data after 1940.
It’s called ” realclimatescience ” dot com, one of the topics is altered data, at the top of the title page.
This ongoing falsification of records is happening on a continual basis – it’s being done by artificially tying temps to GHG levels.
Everyone should really try to look over his charting of this, it’s simple and transparent once you see it.
I dunno if the records associated with this thread are among those. My bet is that they have been altered, and considerably so.

richard verney
September 4, 2017 2:06 am

Whether CET is a good proxy for the Northern Hemisphere is a matter of conjecture and debate, and whether anything useful can be drawn as from 1950 onwards is also moot.
That said, it is noteworthy to look at the historic data and the large variability in temperatures, It shows a change of about 4 degC from ~7.5degC to ~11.5degC.
If one looks at the historic part of the data set for the period prior to 1950 there is variability of upwards of 2degC. This is very different to the handle of the M@nn Hockey Stick.
One would have expected that if the MBH98 paper sought to proclaim that NH temperatures were essentially flat until the modern warm period, 1960s onwards (ie.the date of the splice on of the adjusted thermometer record), that they would have to have examined that claim in the light of CET for the period 1660 to 1960.
If I recall their first paper did a reconstruction back to about 1400 (and their second paper extended the reconstruction back a thousand years), so CET covers 50% (or more) of their historic reconstruction.
Of course, I accept that the early part of CET is a reconstruction but all of that is something that ought to have been addressed in the MBH98 paper.

climatereason
Editor
September 4, 2017 2:25 am

Hi Willis
Nice article.
As you know I use CET a lot. It was composed in a very painstaking manner by Manley. I have personally discussed it at the Met Office with Parker. He believes it to be a pretty accurate historic compilation, as does the Met Office generally. The older data set to 1659 is not generally used as it provides a monthly rather than daily record, hence why Parker’s 1772 version – when daily records are available – is used.
However that is not to minimise the accuracy of the older set, or CET in general, but we need to bear in mind Hubert Lamb’s caveat concerning historic temperature databases that ‘we can believe in their tendency but not their precision’. So is it accurate to a tenth of a degree? No. Is it broadly accurate in showing us the vagaries of the British climate, enough to be very useful? Yes.
I wrote very extensively on it here citing all the relevant CET back up material ;
https://judithcurry.com/2011/12/01/the-long-slow-thaw/
Can anything be discerned from a ‘spliced’ dataset? Within my study below I looked at the impacts of vulcanism and sun spots on CET. I can not see that either have had a huge impact, although there are one or two extreme examples where there may be some sort of impact.
https://judithcurry.com/2015/02/19/the-intermittent-little-ice-age/
Volcanoes in particular are largely a red herring with the gleeful pointing by certain scientists to sudden dips in temperatures due to a massive eruption (think of tree rings or a lack of them) being contradicted when more detailed records show the cooling had already started to occur several years prior to the eruption
So is CET a useful record of the likely approximate temperature and as some sort of useful proxy for a wider area? Yes. Should it be taken as some definitive record accurate to a fantastic level with all that implies in trying to discern micro trends out of it? No
Note that the graphs in the links above also show my reconstruction of CET prior to 1659. This is not the Met offices material, although they have seen it and have a broad interest. I am currently working on the 13th century CET reconstruction which incudes several large volcanoes enthusiastically endorsed by Mann and Miller as showing cooling. Hmmm.
Is anything much happening in recent times when looking at the broad sweep of British weather records over many centuries? It is hard to see it. What is easy to see is mans impact on their environment by way of UHI. Which is why the recent record makes an allowance for UHI but as I have discussed with David Parker and Richard Betts I do not think enough of an allowance.
All the best.
tonyb

Reply to  climatereason
September 4, 2017 4:18 am

Hi Tony,
The volcanic effect appears to be warming the winters and cooling the summers due to the different stratospheric effect of volcanic stratospheric emissions. While they block a significant part of solar radiation, they also uncouple the Solar-QBO-Polar vortex-NAO signal by reducing ozone levels, and thus tend to be accompanied by higher NAO values in the winter. The winter warming effect at mid-high NH latitudes by stratospheric volcanic eruptions is known since 1992.
Robock, A., & Mao, J. (1992). Winter warming from large volcanic eruptions. Geophysical Research Letters, 19(24), 2405-2408.
Thus the effect of volcanic eruptions on temperatures of the past depends on the proxy you are using. Biological proxies are affected disproportionately as they rely on summer-half year temperatures, and in some cases precipitations. To my knowledge the effect of volcanic eruptions on precipitations has not been clarified. That the time of the temperature effect appears to be off might be a consequence of different dating models. Volcanic eruptions are very precisely dated to the year by their ice core signature, while most biological proxies depend on imprecise age models. Tree rings used for the radiocarbon calibration curve are also precisely dated, and most authors support an effect of volcanic eruptions on tree rings.
Scuderi, L. A. (1990). Tree-ring evidence for climatically effective volcanic eruptions. Quaternary Research, 34(1), 67-85.
“Ringwidth variations from temperature-sensitive upper timberline sites in the Sierra Nevada show a marked correspondence to the decadal pattern of volcanic sulfate aerosols recorded in a Greenland ice-core acidity profile and a significant negative growth response to individual explosive volcanic events.”
Jones, P. D., Briffa, K. R., & Schweingruber, F. H. (1995). Tree‐ring evidence of the widespread effects of explosive volcanic eruptions. Geophysical Research Letters, 22(11), 1333-1336.
“Tree-ring evidence from 97 sites over North America and Europe are used to develop a chronology of widespread cool summers since 1600… A number of the extreme low density years occur in both North America and Europe, suggesting a common response to high-frequency forcing… Of the five common extreme low-density years (1601, 1641, 1669, 1699 and 1912) four are known to be coincident with the year or year following major volcanic eruptions.”

Charlie
September 4, 2017 2:57 am

1. Where are the thermometers used- is it Oxford City?
2. What is change land use for the surrounding 10 miles?
3. How precise are the thermometers?
4. How frequently are readings taken?
From about 1690-1740, over a 50 year period or slightly less, temperatures appear to increase from about 8 C to 11 C, which coincides with the massive growth in the yield of British agriculture. The increase in temperatures produced a very wealthy yeoman farming class (the farm houses of this period are often large) and an aristocratic land-owning class, which enabled the industrial revolution to occur. In the 1780s, a French aristocrat was surprised that all classes could eat butcher’s meat every day.
I would suggest that Britain’s very variable and warm periods have produced a well-fed, contented, resilient, resourceful people who loved liberty: so what is the problem?

tty
September 4, 2017 3:18 am

Why not use the Uppsala series instead? That is a single record with daily data from 1722 to date. It has a few years spliced in from a nearby station in the earliest part and is affected by UHI, but it is still a better quality record than CET:
http://www.smhi.se/polopoly_fs/1.2848.1502711961!/image/temp_ar_uppsala_2016.png_gen/derivatives/Original_1256px/image/temp_ar_uppsala_2016.png

Sixto
Reply to  tty
September 4, 2017 5:07 pm

In which the current warming doesn’t even equal the 1720s, a warm interval in the LIA, let alone the height of the Medieval WP.

Sixto
Reply to  tty
September 4, 2017 5:12 pm

Or how about Armagh Observatory, N. Ireland, 1844-2012?:
https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/20140003180.pdf
In which high point was 10.6 degrees in 2007, edging out 1846 at 10.4.
Study also correlates T with sunspots observed there.

Gloateus
Reply to  Sixto
September 5, 2017 3:12 pm

Temperature strongly correlated with SSN.
Can’t post the graph.

September 4, 2017 4:05 am

Trends and periodicity (spectral composition) of the CET’s summer (JJA) and winter (DJF) data have distinct properties, which tend to be suppressed when only the annual data is analysed.

richard verney
Reply to  vukcevic
September 4, 2017 6:26 am

But also of course, a problem is created by the change in equipment.
Modern thermocouples have a materially different response time to that of the old LIG thermometers, and it may be that this is causing an artificial warming. There is a post about this on Jo Nova’s site.
http://joannenova.com.au/2017/09/bom-scandal-one-second-records-in-australia-how-noise-creates-history-and-a-warming-trend/
This is well worth a read. This is potentially a systemic bias since one never gets a short lived blast of cold air from a jet engine, or a cold gust over a tarmac parking lot etc.
I have for a long time suggested that we should retrofit the best sited stations with the same LIG thermometers used by the stations in question back in the 1930s/1940s and observe using the same practices as used by that station in the 1930s/1940s. We could then obtain RAW data which could be compared to the station’s historic RAW data without the need for any adjustments. That would quickly and easily give us a good idea as to whether temperatures have truly moved as from the 1930s/1940.
It is a pity that BEST did not adopt this type of approach. Ie, come at the temperature record from a different perspective. It should have audited all the stations and selected the best sited and most pristine stations and then retrofitted these stations. 50 to 150 of the best sited most pristine stations would have been plenty.

September 4, 2017 5:04 am

A good piece by willis.
It’s somewhat marred by his attack on CET.
CET represents an area of england.
combining stations is one way of doing this, it has its drawbacks (splicing artefacts)
better to do an area average and avoid splicing.
The differences will be minor, of technical interest only.
A fair test of the hypothesis would be to use some of the other long records.
http://www.geo.umass.edu/faculty/bradley/jones1992a.pdf
One would not expect these drivers to drive one single magical patch on the globe.
Look at all those wiggles… if you went hunting for cycles… wanna bet you’d find some

richard verney
Reply to  Steven Mosher
September 4, 2017 6:12 am

One would not expect these drivers to drive one single magical patch on the globe.

But CO2 is claimed to be a well mixed gas (and is acceptably so at high altitude), and therefore any impact it has ought to be seen to similar extent all over the globe, save only to the extent that there are material differences in humidity, or the impact of unique features due to particular oceanic currents, or unique weather patterns due to topography and the like.
There is no reason why just a handful, or so, of well sited stations could not be capable of observing the signal to CO2, if there be any signal at all to observe.
As you are aware, the contiguous US has not warmed since the 1930s. What makes the contiguous US an outlier? What are its unique features that mean that it does not and could not be expected to behave like the mid latitude region of the NH below the Arctic and above the Tropic of Cancer?

Reply to  richard verney
September 4, 2017 7:54 am

Wrong.
Both on the science of AGW ( its more than c02)
And on the temperature record in the US.
Nice attempt at changing the question that willis raises.

Gary Pearse
Reply to  richard verney
September 4, 2017 9:58 am

Lower 48 raw temperatures have support from records in Canada, Greenland, Iceland, South Africa, Paraguay… What are the statistical chances of these not reflecting actual global patterns? Here is S. Africa, for example, from 1880: [image]
See Paul Homewood’s blog for Paraguay, Greenland etc.

Steve Vertelli
Reply to  richard verney
September 4, 2017 3:27 pm

You’re right Richard, of course.
AGW is the teaching that the class of gases deflecting 20% of total warming firelight from the sun
is causing instruments to detect and depict more light reaching, warming and leaving Earth
as the refractive, insulating GHGs
make less and less light reach, warm, and leave the planet.
Apparently no one has informed computer programmer Mosher,
that’s a crass, fraudulent, shameless violation of Conservation of Energy.
Too bad he can’t figure out less light reaching the planet
can’t make instruments detect, and depict more light reaching and warming the planet.
What a bunch of kooks to have ever thought such shameless violation of Conservation of Energy. is science.
Fortunately the voters see through these con men’s sociopathological, and criminal fraud.

Reply to  Willis Eschenbach
September 5, 2017 5:06 am

true unsupported.
your agreement with what i know to be true isnt required

Jeff Alberts
Reply to  Steven Mosher
September 4, 2017 9:04 am

“combining stations is one way of doing this, it has it’s drawbacks (splicing artefacts)”
You just broke science. Combining stations means your result is physically meaningless.

Reply to  Jeff Alberts
September 5, 2017 5:04 am

we dont combine stations at berkeley.
stole the idea from willis.

September 4, 2017 7:01 am

Regarding the graph of cross correlation of CET with sunspots: I do not expect that to show correlation, because the Hale cycle has two cycles of the number of sunspots, each with opposite overall solar magnetic field.

Gary Pearse
September 4, 2017 8:41 am

Willis, a nice quick shredding of ‘forced’ science. Two things about the volatility at your conjectured splice points in the record:
a) are the splice points not recorded by Met Office? If available, it is a fine series of data points you’ve teased out.
b) the data of the perturbed points would seem to offer a means to judge the legitimacy of what was done and, in some cases, refine and improve the splice.

Jeff Alberts
September 4, 2017 8:59 am

“Well … no, not really. And more to the point, using such a spliced, averaged, and adjusted dataset for an analysis of the underlying “driving forces” is totally inappropriate.”
Actually even MORE to the point, if they’re averaging anything more than a single temp station, then their result is physically meaningless.

Reply to  Jeff Alberts
September 5, 2017 5:02 am

not physically meaningless.
it was colder in tHe LIA

September 4, 2017 9:08 am

Well done, Willis. Read the paper because I am deeply interested in all things attribution. Thought it was awful, but was too lazy to write something up for here. You have properly shredded their shoddy analysis.
The only interesting attribution analysis I am aware of is Marohasy’s new paper using advanced neural network AI trained on 6 carefully selected high resolution paleoclimate proxies over the past millennium to project natural warming since 1900, with the residual assumed to be AGW. Her numbers are >75% natural and <25% AGW since 1950. The main methodological issue is that proxy time resolution is still arguably poor compared to the attribution period examined. Would be interested in your keen analysis of the paper.

Steve Vertelli
Reply to  ristvan
September 4, 2017 5:04 pm

Here’s an interesting fact: the planet’s temperature hasn’t changed since it was made part of the international physical standards.
Day in and out the planet’s temp and all its important parameters remain firmly unchanged.
If it had changed the international legal and regulatory authorities would have modified the certifications of everything from your home oven and air conditioner
to the sensitive equipment in operating rooms and instruments on the planes at the airport.
Indeed if these values were changing as they must were the climate changing
Everyone in instrumentation of gas related fields would be able to explain about the changes.
No such revision of international physical standards has happened because the temperature of the global atmosphere hasn’t changed.
Literally – even the claim that “climate must be changing” is purest of technical falsehoods.

September 4, 2017 9:08 am

Willis, just a short question to this excellent post: did you calculate the cross-correlations on the ensembles (series1, series2), or did you subtract for each series the average from every data point (i.e. calculate crosscor(series1 − average(series1), series2 − average(series2)))?

Edwin
September 4, 2017 9:17 am

Having collected environmental data of all sorts in the AGW “debate”, I have pondered how temperature has been measured over time. How we measure temperature has changed from LIG thermometers, to thermocouples, to satellites. Even LIG thermometers from different runs from the same manufacturer may have different precision. While there are ways to “adjust” as one replaces a thermometer at a weather station, I know that for a few it is seldom done.

September 4, 2017 9:34 am

Willis: Excellent post, as we have come to expect. Deflating dubious conclusions is fun.
Philip Eden (very distinguished, retired meteorologist) maintained an independent CET series because he wasn’t sure that The Met Office/Hadley was an appropriate custodian of the data, and he wasn’t sure of how accurately they added current temperature records to their version of CET. This is a quote from his website that articulates his concern, in a nicely understated, English way:

Since Professor Manley’s death, the Meteorological Office has become the self-appointed guardian of the CET series, although one wonders whether it is a guardianship of which Manley would have approved.

Eden seems to have stopped updating his CET series in 2014 – that’s when his website seems to stop getting updates. His data set from 1974 (when Gordon Manley died) to 2014 is here:
http://www.climate-uk.com/provisional.htm
At one point, he posted these plots on his website. The web page is here:
http://www.climate-uk.com/CETcheck.htm
http://www.climate-uk.com/CETcheck_files/CETcheck_31604_image001.gif
http://www.climate-uk.com/CETcheck_files/CETcheck_31604_image002.gif
And his text (hidden behind the images, you have to get the page source) says:

The Hadley Centre’s CET calculation has recently undergone a major change, involving the replacement of several of the contributing stations. Their series is now based on Stonyhurst (Lancs), Pershore (Worcs) and Rothamsted (Herts), all of which are Campbell Automatic Weather Stations. The Philip Eden series continues to emulate Manley’s original work which calculates the mean between the Oxford district and the Lancashire Plain, and no changes have been introduced to this series in recent months. You can draw your own conclusions as to the efficacy of the Hadley Centre’s change.

The plot of “Hadley CET minus MO E&W” (I think that’s Met Office England and Wales) goes negative by about 0.25°C between July 2004 and July 2005, while the plot of “Philip Eden CET minus MO E&W” stays more or less flat. In other words, the Hadley CET went up by 0.25°C in one year, compared with an independent data series that follows a constant methodology.
Does this conclusion have a familiar ring to it? Do I hear “homogenization”? Karlization?
I’m sort of bogged down in work today, but I’m going to play with Eden’s data and see if the difference continues, or continues to increase. I may be back later today if work goes well and I can make the time.
I hope the images come out – they are old and the site isn’t https

old44
September 4, 2017 9:48 am

Perhaps a new motto is in order: “The MET – by Royal Appointment, fiddling temperatures since 1772”.

The Reverend Badger
September 4, 2017 1:48 pm

The temperature at any specific point (e.g. a CET measurement station) varies continuously during any 24 hr period. This is normal and expected because we see the sun rise and set and note it has gone away for about half of the 24hrs. Interestingly if you stare at the thermometer all day for several days and take regular notes every minute or 5 you will also see the variations are erratic and unpredictable. Sometime the temperature moves quickly in a matter of minutes. Sometimes it hardly moves for hours. It appears quite random sometimes.
If you were interested in the underlying “heat” energy involved in these movements you would really need to take loads of measurements each day, plot the graphs, and look at the areas under the graphs too. Taking just the MAX and MIN isn’t going to show you much about what is really happening. And using MAX+MIN/2 adds nothing to your lack of knowledge!
Of course if what you are really interested in is the surface temperature of the Earth then you had better move those thermometers and stick them in the ground, or at least in contact with it because right now (or in 1752) you appear to be measuring the temperature of that thin wispy stuff with varying amounts of water in it which we call the atmosphere, or at least the tiny bit of it in your measuring cabinet blown one way or another by the wind.
In conclusion : CET data may be interesting but it is of no use whatsoever in studying the SURFACE temperature of the planet BECAUSE THE THERMOMETERS ARE NOT ON THE SURFACE.
FUNDAMENTAL !

Reply to  Charles Gerard Nelson
September 4, 2017 2:15 pm

Mods.
I was trying to upload an image I have from a book, a graph of CET record which I had on my Facebook page. Doesn’t seem to have worked! Can anyone tell me the easiest way to do this? It’s really worth seeing.

climatereason
Editor
Reply to  Charles Gerard Nelson
September 4, 2017 3:37 pm

Charles
Do you have a reference or description as the image will exist elsewhere?
Tonyb

Reply to  Charles Gerard Nelson
September 4, 2017 8:38 pm

the graph is called
Accumulated temperatures during the growing season in ‘month-degrees’, 1749–1950
Location: 600 ft above sea level. Western Pennines.
It is from a book called “Climate and the British Scene” by Gordon Manley. Published by Collins in 1952.
It was compiled from ‘unadjusted data’ during a period of genuine scientific curiosity by people with no agenda.
And in my eyes it is clear evidence of scientific fraud.

climatereason
Editor
Reply to  Charles Gerard Nelson
September 5, 2017 12:59 am

Charles
I note the book is for sale on Amazon, so I might get it.
In what way does it illustrate ‘scientific fraud’?
If you have Dropbox it might be easier to put it into that and then provide a link?
tonyb

September 4, 2017 2:46 pm

That there is no correlation between CET (an energy-type measure) and SSN (a proxy for a forcing, i.e. a power-type measure) should not be a surprise. Why should there be? Are you surprised if a plot of your watt-hour meter reading is a different shape from a plot of your watt meter reading?
A rational comparison would be between CET and the time-integral of the SSN anomaly. Properly accounting also for the net effect of ocean cycles and water vapor achieves a 98% match with measured temperatures since 1895.
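
Here is a minimal sketch of the calculation being described, for readers who want to see it concretely (this is not Dan Pangburn's actual code; the nominal SSN and the scale factor are arbitrary placeholders, not values from his analysis):

import numpy as np

def ssn_temperature_contribution(ssn, ssn_nominal=60.0, scale=1e-4):
    """Integrate the SSN anomaly over time and scale it to a temperature.

    ssn         : monthly sunspot numbers (1-D array)
    ssn_nominal : assumed 'break-even' SSN at which gains balance losses
    scale       : stand-in for 1 / (effective thermal capacitance)
    """
    anomaly = np.asarray(ssn, dtype=float) - ssn_nominal
    return scale * np.cumsum(anomaly)  # power proxy -> energy proxy -> deg C

# Example with synthetic sunspot numbers following a rough 11-year cycle
months = np.arange(12 * 120)
ssn = 80 + 70 * np.sin(2 * np.pi * months / (11 * 12))
delta_t = ssn_temperature_contribution(ssn)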

Reply to  Dan Pangburn
September 5, 2017 5:00 am

integral of sunspots is meaningless

Reply to  Steven Mosher
September 5, 2017 11:42 am

Ste – If you understood this stuff you might recognize the SSN anomaly time-integral as a proxy for energy retained by the planet and thus contributing to the average global temperature anomaly. Click my name for an analysis that explains how it works.

Reply to  Steven Mosher
September 6, 2017 11:28 am

Ste & Wil – Of course I am aware of that. What I did avoids that issue.
Sunspots are a proxy for a power thing. The integral is mandatory to get an energy thing and it must be with respect to a nominal to account for both gains and losses. Divide the energy thing by the effective thermal capacitance and you get the contribution to the temperature anomaly.
It appears that you have reached a conclusion prior to a rigorous assessment of what was done. Perhaps you would gain better understanding by spending some open-mind time with my blog/analysis.
In brief, what I have discovered is that CO2 has no significant effect on climate and that the rising water vapor (which is IR active) is countering the temperature decline that would otherwise be occurring.

DAV
Reply to  Willis Eschenbach
September 5, 2017 9:35 pm

Willis Eschenbach September 4, 2017 at 9:46 pm

No. If you say “the mean of 6, 8, and 12 is 8.1”, that is a falsifiable statement. If you say “The two datasets have a correlation of 0.82”, again that is falsifiable. Why? Because both the mean and the correlation are measurable.

The mean is either correctly calculated or it is not. By itself, it is a meaningless value. With the scientific method, you want to falsify predictions. Falsifying or verifying calculations (by redoing them) is just checking the work.
Neither the mean nor the correlations within the data have any meaning in and of themselves. Now if you were to come up with some hypothesis involving them with a prediction made using that hypothesis then you have something falsifiable.
However, if one is just making an observation, it is perfectly fine to qualitatively say that two things appear to be correlated. Saying it quantitatively doesn’t make it any more precise or scientific. The value really tells you nothing except that when the value is higher there is more correlation and when the value is lower there is less — a qualitative answer. To a lot of people, though, having a numerical answer is more sciency. So I can see where you are coming from and you are not alone.
Even if you really do have numbers to the Nth decimal place, the first thing that comes to my mind is: that’s nice; so what?
In addition, and somewhat tangentially, reliance on statistical significance gives a false sense of accomplishment. Please read the Briggs link.

Reply to  DAV
September 6, 2017 5:26 am

The correlation is evidence of water vapor controlling daily Min Temp.
And I show that this regulation by water vapor reduces the effects from an increase in CO2 and any other non-condensing GHGs.
https://micro6500blog.wordpress.com/2016/12/01/observational-evidence-for-a-nonlinear-night-time-cooling-mechanism/

DAV
Reply to  DAV
September 6, 2017 5:51 am

micro6500 September 6, 2017 at 5:26 am

The correlation is evidence of water vapor controlling daily Min Temp.

The correlation between any two variables is not evidence of a causal relationship between them — thus the “correlation is not necessarily causation” admonishment. At best, it means that a causal relationship cannot be ruled out. Using correlation alone, one needs at least three variables, and even then that minimum can only indicate causation if one of the three is a cause of the other two. See Causality: Models, Reasoning and Inference by Judea Pearl, ISBN-13: 978-0521895606.
https://www.amazon.com/Causality-Reasoning-Inference-Judea-Pearl/dp/052189560X/ref=sr_1_3?s=books&ie=UTF8&qid=1504701477&sr=1-3&keywords=judea+pearl
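
To make that concrete, here is a small synthetic example (an illustration only, not taken from Pearl's book) in which two variables correlate strongly purely because they share a hidden common cause:

import numpy as np

rng = np.random.default_rng(42)
n = 10_000

z = rng.normal(size=n)            # hidden common cause
x = z + 0.5 * rng.normal(size=n)  # x depends on z, not on y
y = z + 0.5 * rng.normal(size=n)  # y depends on z, not on x

print(np.corrcoef(x, y)[0, 1])    # roughly 0.8, yet neither causes the other

# Removing the common cause removes the association entirely
print(np.corrcoef(x - z, y - z)[0, 1])  # roughly 0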

Bill
September 4, 2017 11:49 pm

The basic flaw with the temperature records is the fact that the absolute temperature may vary within a wide margin of about 2 degrees C. That became the argument for using the anomaly instead: the anomaly would show a trend if the temperature changed, despite the problems with the station network. A nice logical assumption, mostly true but not always.
But then they screwed the pooch by changing stations, adjusting stations, and changing the means of extrapolating between stations. They have a nice 2 degree C range of uncertainty that allows them to bend that concept of the anomaly showing the trend into anything they wish while staying within the originally believed bounds of accuracy.
Thus all the changes they make could be individually justified, yet the trend could still be all wrong as a result.
This is a big issue in accounting, where determining values is often uncertain. An appraisal, for example, might come up with multiple potential values. Therefore the accountant has to be aware of that as he does an audit, to ensure the company accounts for items in a consistent manner, not cherry-picking one method for one asset and another method for another asset. Also, period-over-period comparisons create a picture of the company’s financial progress, so consistency must be observed there as well. The client can change methods, but to do so and account for it properly the figures have to be redone for all periods using both the new and old methods, so the investor can see what effect the change has on the company’s financial results. None of this is done in climate science, and as a result it is highly unreliable.
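
As an aside, the “anomaly shows the trend” assumption Bill concedes above can be illustrated in a few lines on made-up data (the station offsets, trend and baseline below are invented purely for illustration):

import numpy as np

years = np.arange(1950, 2021)
trend = 0.01 * (years - 1950)  # a common trend of 0.01 deg C per year

rng = np.random.default_rng(0)
station_a = 9.5 + trend + 0.3 * rng.normal(size=years.size)   # cool site
station_b = 11.8 + trend + 0.3 * rng.normal(size=years.size)  # warm site

# Convert each station to anomalies from its own 1961-1990 mean
baseline = (years >= 1961) & (years <= 1990)
anom_a = station_a - station_a[baseline].mean()
anom_b = station_b - station_b[baseline].mean()

# The absolute series differ by more than 2 deg C, but the anomalies agree
# closely, which is why anomalies are preferred for estimating trends
print(round(np.mean(station_b - station_a), 2))
print(round(np.mean(np.abs(anom_b - anom_a)), 2))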

Reply to  Bill
September 5, 2017 11:58 am

Especially in temperature measurements used to determine global trends, “consistency must be observed”. In addition, for temperature, one must rationally attend to confounding things like the ‘heat island effect’ and ‘satellite drift’.

September 5, 2017 8:03 am

It has to—as Joe Fourier showed, every signal can be decomposed into underlying simpler waves. However, this does not mean that those underlying simpler waves have any kind of meaning or significance.

This is basically what BEST is doing; the method is different, but they are pulling the rising GHG signal out of the noise that represents the temperature data.
The book I read explained it as being like filtering a piano out of the noise in Times Square. Where there is no piano.
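
To illustrate the point in the quoted paragraph, here is a minimal sketch on synthetic data: a Fourier decomposition of pure noise always succeeds and still produces apparent “periodicities”, even though there is no piano in the noise:

import numpy as np

rng = np.random.default_rng(1)
n_years = 355                     # roughly the length of the CET record
noise = rng.normal(size=n_years)  # a "temperature record" with no signal at all

spectrum = np.abs(np.fft.rfft(noise)) ** 2
freqs = np.fft.rfftfreq(n_years, d=1.0)  # cycles per year

# The largest peaks look like periodicities, but they mean nothing
top = np.argsort(spectrum[1:])[-3:] + 1  # skip the zero-frequency bin
for k in top:
    print(f"apparent period ~ {1 / freqs[k]:.1f} years, power {spectrum[k]:.1f}")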

Reply to  micro6500
September 5, 2017 8:09 am

Where they fail is that GHGs are not the defining attribute of surface temps. Water vapor is.
The ruse that water vapor is condensing is both why it controls surface temps and why they erroneously exclude it. But I ask them this: when was the last time the atmosphere did not hold any water vapor?
The non-condensing GHGs are really important on an ice-ball Earth. But we’d want more, not less. And we’re not on an ice ball.

September 5, 2017 11:13 am

Willis Eschenbach, thank you for the essay. I thought that the original and your critique were both worth reading.
If there is (or were) a causal link between the ENSO and the Central England temperatures (of which the CET series is an imperfect record), what do you think the actual R^2 value is (or would be)? This relates to the problem I posed a while ago about the poor statistical power of statistical tests that adhere to the conventional 5% and 1% levels.

Reply to  matthewrmarler
September 5, 2017 11:26 am

If there is (or were) a causal link between the ENSO and the Central England temperatures (of which the CET series is an imperfect record), what do you think the actual R^2 value is (or would be)?

While not CET, there is a greater than 97% correlation between dew point temp and Minimum temp over 79 million surface station daily records in the Air Force’s NCDC GSoD daily summary.

Reply to  micro6500
September 5, 2017 11:29 am

Oh, you can’t run cross-correlation code between humidity and temp; the relationship is nonlinear, and the code doesn’t detect anything.
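
For what it is worth, here is a minimal sketch (synthetic data, not the GSoD records) of why a linear correlation can badly understate a nonlinear but monotonic relationship, and how a rank correlation such as Spearman’s still picks it up:

import numpy as np
from scipy.stats import pearsonr, spearmanr

rng = np.random.default_rng(7)
x = rng.uniform(0, 1, 5_000)                   # a stand-in "humidity" variable
y = np.exp(10 * x) + rng.normal(0, 1, x.size)  # strongly nonlinear response

print(f"Pearson  r   = {pearsonr(x, y)[0]:.2f}")   # well below 1
print(f"Spearman rho = {spearmanr(x, y)[0]:.2f}")  # close to 1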

Reply to  Willis Eschenbach
September 5, 2017 4:05 pm

Cool. Thanks again.