Anthropogenic Warming in the CET Record?

Guest essay by Neil Catto

The CET record started in 1659, close to the minimum of the Little Ice Age. As such, it is no surprise that last year (2014) was the warmest on record; it would appear to be a natural recovery. The annual mean temperature of 8.87 Deg C in 1659 has increased to 10.95 Deg C in 2014, which equates to about 0.06 Deg C/decade.
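For readers who want to check the arithmetic, the headline rate can be reproduced from the two endpoint values alone. This is a crude endpoint-to-endpoint calculation, not a regression over the full series:

```python
# Back-of-envelope check of the quoted trend, using only the two
# endpoint values from the essay (not a fit to the full record).
start_year, end_year = 1659, 2014
start_temp, end_temp = 8.87, 10.95   # annual mean CET, Deg C

decades = (end_year - start_year) / 10.0        # 35.5 decades
trend_per_decade = (end_temp - start_temp) / decades
print(round(trend_per_decade, 2))  # → 0.06
```

Note that an endpoint calculation like this is sensitive to which two years are chosen; a least-squares fit over all 356 annual values would give a somewhat different figure.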

I used the CET mean monthly data for 1659–2014, downloaded 6th January 2015.

My main interest in this data set is to gain a better understanding of natural variation versus AGW. I consider the CET a reasonable representation of Northern Hemisphere trends. In 1739 Mount Tarumae in Japan erupted with a force of VEI 5. The annual mean CET temperature in 1739 was 9.21 Deg C; in 1740 there was a significant drop to 6.84 Deg C, and in 1741 a recovery to 9.32 Deg C. This natural occurrence was equivalent to a drop in temperature of -23.5 Deg C/decade and a recovery of 24.6 Deg C/decade. With natural variation of this magnitude I have never understood the alarm about 2.0 Deg C of warming; human life survived and grew exponentially in numbers.

The last time I downloaded CET data was 22nd May 2013. Out of interest I thought I would compare the two data sets. The results were interesting to say the least.


Fig 1: Anomalies between CET downloaded in May 2013 and CET downloaded in January 2015 (data to December 2014)

It is noticeable that nearly every adjustment is positive, with no negative changes. Across the whole data set the average increase is 0.03 Deg C in 20 months, equivalent to 0.18 Deg C/decade.
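The comparison behind Fig 1 can be sketched in a few lines: take two archived downloads of the monthly series, align them by (year, month), and subtract. The values below are invented stand-ins for the two real downloads, which would first need to be parsed into this form:

```python
# Toy stand-ins for two archived downloads of the CET monthly series,
# keyed by (year, month). Real files would be parsed into this form.
may_2013 = {(1740, 1): -2.8, (1740, 2): -1.6, (1740, 3): 1.0}
jan_2015 = {(1740, 1): -2.8, (1740, 2): -1.5, (1740, 3): 1.1}

# Month-by-month anomaly: later download minus earlier download.
anomalies = {
    key: round(jan_2015[key] - may_2013[key], 2)
    for key in may_2013
    if key in jan_2015        # only months present in both downloads
}
# Keep only the months whose values actually changed.
adjusted = {k: v for k, v in anomalies.items() if v != 0}
print(adjusted)  # → {(1740, 2): 0.1, (1740, 3): 0.1}
```

Archiving each download and diffing them this way is the only reliable method a reader has for detecting silent revisions to a published series.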


What is the reason for these data adjustments?

How often and by how much are these data adjusted?

Is this anthropogenic warming caused by man-made adjustments?

January 8, 2015 10:25 pm

Remember in school how we were told how horrible the Soviet Union was, how the country was evil because Stalin had rewritten history? At least Baghdad Bob was entertaining.

January 8, 2015 10:31 pm

There is not a dataset which has not seen “adjustments.” It has become increasingly impossible to have faith in any of them. Thankfully, RSS data appears to be holding true.

Village Idiot
Reply to  Alan Poirier
January 9, 2015 1:19 am

RSS data “adjustments” can be checked here:

Reply to  Alan Poirier
January 9, 2015 1:44 am

There isn’t a lot of difference between RSS and UAH, or the other datasets for that matter. We are only talking about a few tenths or hundredths of a degree. And no two data sets agree exactly.
The problem for the alarmist crowd is that global warming essentially stopped — quite a few years ago. That fact contradicts a raft of wild-eyed predictions. ALL of the alarmist predictions were wrong, and not one of their models predicted that warming would stop. But it did.
Why should we still believe anything they tell us?

Reply to  dbstealey
January 9, 2015 6:25 am

All “data sets” containing the same data must “agree exactly”. However, once the data are “adjusted”, becoming “un-data” or “non-data” in the process, they would only “agree exactly” if they were adjusted using the exact same methods.
The numbers used to produce the various temperature anomaly products are the producers’ estimates of what the data might have been, had they been collected in a timely fashion from properly selected, calibrated, installed and maintained sensors; and, in some cases, what they might have been had they been collected at all.
The calculated anomalies rely on the assumption that, despite the inaccuracies of the data, the condition of the sensors and their surroundings have not changed over the measurement period.

Reply to  dbstealey
January 9, 2015 6:26 am

A couple of years ago I reconstructed CET from its end point of 1659 back to 1538.
I am currently working on pushing that date back to between 1200 and 1400 AD in order to try to discern the descent from the MWP to the LIA.
Anyone reading the article will see in detail how Manley constructed CET, and in carrying out the reconstruction I also compared it with the ones carried out by Hubert Lamb (first director of CRU) and Dr Michael Mann.
Firstly, in any historic temperature reconstruction we ought to follow Lamb’s maxim that we can understand the tendency (of the trend) but not the precision (of each data point).
There is far too much certainty in almost any historic record and the idea, for example, that we know to tenths of a degree a global temperature or NH temperature back to say 1400 (or even 1880) is nonsensical, as is the idea that we have an accurate idea of global SST’s back to 1850.
CET – a monthly record – was carefully assembled by Gordon Manley and has been much scrutinised. David Parker of the Met Office then carried out the work to calculate a daily index, which commences in 1772, when sufficiently detailed daily information became available. I met him last year at the Met Office to discuss CET and my own work.
A number of adjustments to CET are made in peer-reviewed papers detailing the reasons for the changes. Due to the evolving circumstances of the stations used, they are sometimes substituted for others. In this regard, in recent decades it was felt that CET was running ‘too warm’, and replacement stations were used that might better reflect previous readings.
CET makes an allowance for UHI. Personally I suspect it is not enough, as Britain is the size of New York State and has been described as one large heat island, that effect becoming worse as the population grew and energy consumption rose during the 19th century. Many readings were from previously rural areas that subsequently became urbanised, to which can be added the complication of pollution, which encouraged famous artists to visit Britain to marvel at the atmospheric conditions that caused brilliant sunsets. It seems impossible to believe that this didn’t have an impact on temperatures.
As far as I can see the low point of CET was the 1690s. It made such a remarkable recovery in the 1730s that Phil Jones studied the period in 2006 and confirmed that natural variability was much greater than he had hitherto realised, as that decade came to a bone-chilling halt with the winter of 1740.
There has been a steady upwards trend since the 1690 low point, albeit with ups and downs, which can be seen in CET and the extended version of BEST.
Working back from 1690 to the 1538 point I reached, the temperature also trends upward (albeit with peaks and troughs), and the years around that date appear to me to contain some 4 or 5 of the warmest in the record.
The LIA was episodic, not one long deep freeze and there were many warm as well as cold periods within it which I hope to explore further in my next article ‘tranquillity, transition and turbulence’ exploring the 1200 to 1400 period.
So, is CET a faithful record of every month accurate to tenths of a degree? No.
Is it a good indicator of the ups, downs and general trends of historic climate? Yes.
Incidentally, anyone reading my article will see that there are many scientists and organisations that believe CET to be a good (but not perfect) proxy for the global or NH record, including the Met Office themselves. As such it is an especially valuable record, as studying it is likely to yield much broader lessons on likely climate states elsewhere in the world.

Reply to  dbstealey
January 9, 2015 3:02 pm

climatereason January 9, 2015 at 6:26 am wrote

There is far too much certainty in almost any historic record and the idea, for example, that we know to tenths of a degree a global temperature or NH temperature back to say 1400 (or even 1880) is nonsensical, as is the idea that we have an accurate idea of global SST’s back to 1850.

Could your comment be read as a justification for GISS’s ongoing program of editing the historical record?

Reply to  Alan Poirier
January 9, 2015 3:20 am

The Met Office corrected a long-standing error in calculating annual data from the daily and monthly temperature compilations. I alerted them to the error in early August 2014 and suggested a method of recalculation, which they appear to have adopted, correcting the annual values.
For more see:

January 8, 2015 10:32 pm

Amateur hour at the CET data caretaker’s?

January 8, 2015 10:39 pm

as the globe warms, past temperature will necessarily rise. history will get warmer and warmer due to increased CO2 in the atmosphere.

Robert of Ottawa
Reply to  ferdberple
January 9, 2015 2:53 am

Well, the warmista tactic is to cool the past.

Reply to  ferdberple
January 9, 2015 11:11 am

Pretty sure the warming is really due to increased hot air. Retire and gag a few politicians and we could be in for a serious glacial episode.

January 8, 2015 10:41 pm

The observations must be wrong – CO2 was lower in the pre-industrial age.

January 8, 2015 10:43 pm

They must have had very accurate thermometers in 1659, despite the lack of Stevenson screens, which weren’t invented until 200 years later. No wonder the data needs ‘adjusting’!

Reply to  phillipbratby
January 8, 2015 10:48 pm

They probably were quite accurate. Each one was hand-made and calibrated.

M Courtney
Reply to  phillipbratby
January 9, 2015 12:33 am

No wonder the data needs adjusting – since 2013?

Don K
Reply to  phillipbratby
January 9, 2015 3:15 am

Not only did they lack Stevenson screens; prior to 1720 or so, they lacked mercury thermometers with uniform, fine graduations. And it seems they probably lacked a uniform method of calibration. Not that it would have been utterly impossible to make precise measurements in the seventeenth century that could be translated to degrees Celsius. But it wouldn’t have been easy. I’d like to see some serious estimates of the observation errors in CET temps prior to the mid-eighteenth century before I buy into any analysis based on CET.

Reply to  Don K
January 9, 2015 5:47 am

True, but how lucky do you have to be that all the ‘adjustments’ that need to be made are made in such a way as to favour the narrative you’re pushing – one that offers both personal and professional benefits to those making these ‘adjustments’?
If a fox had to build a hen house, do you think it would build it so it was easy or hard for a fox to get into?

January 8, 2015 10:44 pm

New fashion warming tuned to belief algorithm?

January 8, 2015 10:47 pm

So, after adjustments, the warming in the CET amounts to about 0.06 Deg C/decade. But adjustments have added about 0.18 Deg C/decade of warming. That means without adjustments we would see about 0.12 Deg C/decade of cooling in the CET record. That can’t be right. What did I do wrong?

Adam Gallon
Reply to  Louis
January 9, 2015 12:27 am

Am I misreading this? Does the OP say that there have been +0.03 C of adjustments made since he last looked at the dataset in 2013? Or compared to raw data?

Rainer Bensch
Reply to  Louis
January 9, 2015 12:37 am

“What did I do wrong?”
The sign.

Reply to  Louis
January 9, 2015 2:32 am

The CET data were not gathered by a national organisation like the UKMO but by a few rich people with time on their hands. The data are very sparse and were not taken at set times by trained personnel but by the nearest servant willing to brave the cold. So this data set should be viewed with care.

Alan Robertson
Reply to  johnmarshall
January 9, 2015 4:15 am

An equally likely scenario is that any such duties were assigned to individuals fully trained and personally motivated to perform their tasks with best efforts.

Alan Robertson
Reply to  johnmarshall
January 9, 2015 4:19 am

Still, your point remains. The technology and data collection methods were not evolved and were sketchy, at best.

Reply to  johnmarshall
January 9, 2015 4:27 am

They seemed to be pretty good scientists back then: the 1887 Ethernet experiment (Michelson–Morley).

Reply to  johnmarshall
January 9, 2015 5:45 am

“…1887 Ethernet experiment”
Ethernet in 1887? I thought that was more of a 1970’s thingy?

Reply to  johnmarshall
January 9, 2015 9:04 am

“…1887 Ethernet experiment”

So, they are now teaching the younger generations that the “telegraph” is/was an Ethernet.
To wit, excerpted from:

The beginning of the National Weather Service we know today started on February 9th, 1870 …
At 7:35 a.m. on November 1, 1870, the first systematized and synchronous meteorological reports were taken by observer-sergeants at 24 stations in the new agency. …
The Signal Service’s field stations grew in number from 24 in 1870 to 284 in 1878. Three times a day (usually 7:35 a.m., 4:35 p.m., and 11:35 p.m.), each station telegraphed an observation to Washington, D.C.

And the majority of those field stations were located east of the Mississippi River.

Reply to  johnmarshall
January 9, 2015 11:15 am

January 9, 2015 at 4:27 am

You need to use a sarc tag there. Otherwise folks are going to think you’re serious.

David R
January 8, 2015 11:04 pm

Hard to see how this can be explained away as a ‘natural recovery’.
The first year that CET recorded an annual average temperature above 10.0C was in 1686. That record wasn’t broken for 47 years, until 1733. The 1733 record wasn’t broken for a further 101 years, until 1834. The 1834 record then stood for 115 years, finally being broken in 1949.
However, in just the past 25 years the CET warmest annual record has been broken three times: 1990, 2006 and now 2014. That’s not evidence of a ‘natural recovery’; it’s evidence of an exceptional period of warming late in the record.

Reply to  David R
January 8, 2015 11:12 pm

Before or after adjustments?

David R
Reply to  Alex
January 8, 2015 11:23 pm

No one questioned the adjustment process in 2013 when temperatures appeared to be falling. Maybe they were adjusting them downwards?

Reply to  David R
January 9, 2015 1:07 am

lol @ your logic…

Reply to  David R
January 9, 2015 1:32 am

Evidence of an exceptional period of warming in which it warmed by a tiny fraction of a degree.
Now give me the evidence that $1bn a DAY of anti carbon spending did anything to prevent this.

Robert B
Reply to  David R
January 9, 2015 1:52 am

“The first year that CET recorded an annual average temperature above 10.0C was in 1686.”
It was a spike and the LIA is a misnomer as not all years were colder than the average year. More than half of the years after 1980 were colder than 1688. 1686 is in the top quartile for the warmest years and the average for the last 15 years is only 0.1°C higher with 2011 in the coldest 30% of years.

The trend from 1688 to about 1730 is similar to the two periods in the 20th century, the first that couldn’t be attributed to fossil fuel use and the second that possibly could. The LIA was supposed to have ended at the end of the 19th century.

“However, in just the past 25 years the CET warmest annual record has been broken three times: 1990, 2006 and now 2014. That’s not evidence of a ‘natural recovery’; it’s evidence of an exceptional period of warming late in the record.”

You could have made a similar claim in 1737, 1834 and 1950. How did that prediction in 1737 of 2°C per century work out? Is it 4°C warmer than the early 18th century?

Robert B
Reply to  Robert B
January 9, 2015 1:54 am

Ooops. pasted the blockquote in the wrong spot. ” marks the end of the quote.

Reply to  David R
January 9, 2015 2:07 am

What’s so great about an annual average temperature below 10°C?! That’s cold!

Old England
Reply to  David R
January 9, 2015 2:16 am

@David R
Weather forecasts in the UK regularly predict night-time temperatures to be up to 3 or even 4 Deg C colder outside the urban centres where UHI distorts recorded temperatures.
Last time I looked, CRU at UEA were adjusting urban temperatures down for UHI by some 1.5 Deg C – as opposed to the actual 3+ Deg C – so it is small wonder that the CET shows ‘warming’, as the adjustments are less than the true difference.
There is also a powerful argument that CET temperatures prior to the late 1950s should all be adjusted up quite significantly to compensate for the artificially low winter temperatures caused by smog and smoke blocking out sunlight for the preceding 200 years or more.
The effect of that would be to reduce the claimed record years you refer to to nothing unusual at all – so it is highly unlikely that CRU will ever make the truly valid adjustments it should for the ending of fog and smog in the late 1950s in the UK.
It seems that global temperature data sets have been adjusted with ever increasing frequency, particularly in recent years, as the lack of any increase in global temperatures has persisted for some 18–20 years.
All and any adjustments, if they are to be believable, must have a full justification and methodology published. Any changes ever made should be accompanied not just by that information, but also by the original raw data alongside the adjusted data.
In the meantime keep making regular downloads of the latest record set and archive them for comparison.
Maybe someone could assemble those and make them available to anyone interested in studying them.

David R
Reply to  Old England
January 9, 2015 2:31 am

CET is recorded from very few sites and UHI is accounted for in the quality controlled data. The method is explained here [pdf]:
If you disagree with that, or you have some firm evidence that UHI is being systematically underestimated, then there’s nothing to prevent you from highlighting this via the normal process.

Martin Reed
Reply to  Old England
January 9, 2015 3:10 am

There is only one way around the adjustment problem – rigorously weed out all weather stations that are subject to UHI biases. That is, only use data from correctly located stations. Yes, I know that would decimate the available data but at least it would be remotely believable, which is more than can be said for the adjusted data the AGW religion prefers. If such a filtering process left no data intact then so be it, we’d just have to honestly admit we haven’t a clue as to what has happened over those 350 years and start again with a properly instrumented planet.

Village Idiot
Reply to  David R
January 9, 2015 2:19 am

Never really understood the basis for the term ‘natural recovery’. ‘Recovery’ to the planet’s ‘natural’ temperature? Is there a Law of Nature that states that when the temperature goes down a bit, it must ‘recover’?
What if the global temperature goes up (note the record high recorded surface temperature for 2014)? Does this Law mean that temperatures must soon undergo a ‘natural relapse’ to get back to what they’re ‘supposed’ to be?

Reply to  David R
January 9, 2015 2:34 am

No, natural recovery would go in fits and starts. Nothing in nature is as smooth as you expect.

John West
Reply to  David R
January 9, 2015 5:09 am

Have you ever seen a sine wave?

Richards in Vancouver
Reply to  John West
January 9, 2015 6:10 am

Yes, but only in very high winds.

Reply to  John West
January 9, 2015 12:49 pm

Kids around here shoot ’em full of holes so that they don’t wave in the wind.

Reply to  David R
January 9, 2015 8:04 am

David R: I think you are attributing far too much importance to ‘records’ which are, after all, only fraction-of-a-degree changes.
Annual average temperatures above 10 C aren’t a rare event in the CET. I note your comment that in the past 25 years the CET warmest annual record was broken three times – but the 10 C record was also broken three times in the 1820s: during 1822, 1826 and 1828!
In the real world, what difference does it make if it’s 9 C or 10 C in the record? These averages tell us nothing about the weather conditions for a given year.
We’re told that the pre-industrial (before 1750) CO2 level in the atmosphere was 280 ppm. Now it’s 400 ppm.
That’s a 43% increase in CO2, yet the CET temperatures show changes which are in my view negligible.
Very good evidence for a lack of any disastrous changes due to CO2 – in my view at least.

Reply to  David R
January 9, 2015 9:17 am

David R,
Yes, there was some anomalous warming recently. But it has remained within long-term parameters, which have held since the LIA. Note that in the late 1800s, well before any significant CO2 emissions, global T shot up even more than it did recently. Is your contention that the 1800s warming was natural, but the recent, less extreme warming is not?
The ‘predictions’ of the alarmist cult were that we would experience man-made runaway global warming. That has not happened; every alarming prediction has failed. Objective people will look at that record of failure and conclude that the alarmists are wrong.
Here is another view showing that everything being observed remains within the parameters of natural climate variability. The planet is warming from an anomalous cooling, the LIA — the second coldest episode of the entire 10,000+ year long Holocene.
The alarmist crowd keeps trying to make human emissions the villain. But where is the proof? So far, there is none at all.

John Finn
Reply to  David R
January 9, 2015 6:28 pm

Yes it does, unfortunately, and as someone who lives in the Central England region and has done for many years I can confirm that 2014 has been a particularly warm year.

david smith
Reply to  John Finn
January 10, 2015 5:07 am

“…2014 has been a particularly warm year.”
Yes. I’ve really enjoyed it and thermageddon has still failed to appear.
Personally, I love it when it’s warm, as I can go scuba diving more often.

Cold in Wisconsin
January 8, 2015 11:06 pm

This is an absolute scandal. I cannot understand how the scientific establishment permits this chicanery! Those who would attempt to defraud the public depend on the evidence being destroyed. They will likely claim that you are wrong, and your numbers were corrupted, while theirs are correct. There should be NO adjustments whatsoever, unless presented alongside the original, unadjusted data.

Reply to  Cold in Wisconsin
January 9, 2015 12:16 am

Wholeheartedly agree.

Reply to  Cold in Wisconsin
January 9, 2015 5:09 am

Because a large part of the scientific establishment RELIES on this chicanery.

Cold in Wisconsin
January 8, 2015 11:08 pm

To David R: are those statistics using the “corrected” numbers or the original numbers?

Steven Beck
January 8, 2015 11:10 pm

and don’t forget Urban Warming. Central England is practically one big city now.

The Ghost Of Big Jim Cooley
Reply to  Steven Beck
January 9, 2015 12:03 am

Sorry Steven, but that’s just not accurate. I live within the triangle that is the CET. Vast amounts of that triangle are farmland. Admittedly, it includes a part of London, Birmingham, and Bristol. But all the rest is small towns, villages, and vast areas of green land.

Pete in Cumbria UK
Reply to  The Ghost Of Big Jim Cooley
January 9, 2015 1:32 am

For Jim and Steven – both of those things will cause a warming signal: the urban heat island, obviously and well agreed upon, but also the farmers. They bust their proverbial guts trying to warm up their land (to extend the growing and/or harvesting season), and do it very simply by using the plough to create large areas of ground with low albedo at times of the year when the sun is almost at its strongest. They have to do that because what’s left of the fertile soil on planet Earth is at high latitudes, where glaciers/ice sheets recently ploughed it, mashed it up and exposed fresh rock for the plants to get their teeth into.
With spring-sown arable crops this was obviously in April, May and early June, until the seeds had germinated and grown sufficiently to cover/shield the bare soil.
Could not The Pause be in part explained by the wholesale move to autumn-sown crops, which go into springtime covering the soil and raising the albedo, and also by the trend to ‘no-till’ or ‘low-till’ farming, which leaves high-albedo trash on the surface? Also, all that trash (straw) was historically burned, which produced huge quantities of smoke/soot with yet more albedo-reducing properties.
Personally I’ve long thought not to trust the CET record simply because it covers THE most intensively farmed piece of ground almost anywhere on the planet.

Alan Robertson
Reply to  The Ghost Of Big Jim Cooley
January 9, 2015 4:25 am

Pete, you are correct. Temps around both tilled farmland and UHIs are higher than other areas.

Reply to  The Ghost Of Big Jim Cooley
January 9, 2015 4:59 am

“…Pete in Cumbria UK ”

Just what crops are they sowing in the fall? Winter wheat? Winter Rye? Perhaps some of the brassica family?
A farmer tills the soil to prepare the ground, not for albedo effects. Often when a farmer tills the soil during the fall, winter and summer seasons it is to kill weeds.
Winter ground cover crops are sown to get a jump on the next season aiming for multiple harvests. Extension of harvest season, which I expect means delaying frost kills, is negligible.

“…They have to do that because what’s left of the fertile soil on planet earth is at high latitudes where glaciers/ice sheets recently ploughed it, mashed it up and exposed fresh rock for the plants to get teeth into…”

No. Perhaps better said as: Hell no!
Ice age glaciers did not ever cover as much of the Earth as you suppose. Crops are grown down to the equator.
I have no idea what you think is the relationship between plants, rocks and teeth…
Arable is a primarily British word, (courtesy Merriam-Webster), meaning land able to be plowed.
Land able to be effectively utilized for crops is anywhere where sufficient water is available for the crops desired.
Mankind has commercialized the use of hydroponic farming for providing many of the ‘fresh’ vegetables during ‘winter’. No soil needed.

“…also the trend to ‘no-till’ or ‘low till’ that leaves high albedo trash on the surface…”

High-albedo trash? What color are those dead plant stalks – or, better phrased, what color is straw? – and is that high or low albedo?
But that’s a nit pick. Farmers still work the land to ‘remove’ trash into additional harvests, animal feed or compost and to prepare the land for the next crop.
Just leaving crop detritus on the land invites trouble as that indicates the owner is not trying to control pests or disease. Clearing the surface of dead vegetation whether by removal or turning over the soil are control methods for insects and diseases.

John Finn
Reply to  Steven Beck
January 9, 2015 6:30 pm

Says someone who lives where exactly? It’s not one big city or even close.

January 8, 2015 11:11 pm

The author is right: the key problem for warmists is to demonstrate that 1.5 C or so of warming per century is catastrophic in a system that handles much larger seasonal changes and, where I live, can sometimes manage a 20 C swing in half an hour.

Timo Soren
January 8, 2015 11:13 pm

Not sure what you meant. Is the 2014 vs 2013 data graph the difference of the two?

Reply to  Timo Soren
January 8, 2015 11:17 pm

The graph refers to the anomaly between the two, i.e. the difference.

Non Nomen
January 8, 2015 11:15 pm

Has there been any research on the thermometers of that time – their manufacture, calibration and the rules of temperature measurement? If not, any adjustment of date is something like a shot in the dark, I suppose.

Non Nomen
Reply to  Non Nomen
January 8, 2015 11:15 pm

correct: “data”

Reply to  Non Nomen
January 8, 2015 11:20 pm

Those guys would have been very careful. The scientific method was developed by them. They probably didn’t calibrate just once.

Reply to  Alex
January 9, 2015 12:26 am

One possible way of checking the accuracy of thermometers 100 or 200 years ago might be to look at the temperatures reported in the literature of the time, or better the lab notebooks if they exist, for well-known physical data such as the boiling point of pure alcohol, the melting point of a pure fat or organic chemical, or the freezing point of a salt/water solution of accurately known composition, and compare them with modern values.
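The fixed-point idea above amounts to a two-point linear recalibration: if a period instrument's recorded readings at two known physical references survive, its weather readings can be mapped onto the modern scale. The reference readings below are invented purely for illustration:

```python
# Two-point recalibration sketch. Suppose a period thermometer read
# 1.5 at melting ice (true 0 C) and 101.0 at boiling pure water
# (true 100 C); both reference readings are invented for illustration.
ice_raw, boil_raw = 1.5, 101.0

def corrected(raw_reading):
    """Map a raw instrument reading onto the modern Celsius scale."""
    scale = 100.0 / (boil_raw - ice_raw)   # true degrees per raw degree
    return (raw_reading - ice_raw) * scale

print(round(corrected(11.45), 2))  # → 10.0
```

This assumes the instrument's error is linear between the two fixed points; real period instruments could also have non-uniform graduations, which a two-point check would not catch.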

Gary Pearse
Reply to  Non Nomen
January 9, 2015 5:31 am

See my post at
Gary Pearse
January 9, 2015 at 5:19 am

Reply to  Non Nomen
January 9, 2015 9:36 am

To wit, excerpted from: History of Vintage Outdoor Thermometers

Companies dispatched legions of salesmen to sell every type of product imaginable. Hair gel, chewing tobacco, soda pop, crop seeds and farm implements were offered to local retailers for resale. Believing that bearing gifts while visiting a merchant might produce more sales, many companies would provide merchants with tokens. In a practice that is still alive today, salesmen would provide promotional metal signs and outdoor thermometers. Outdoor thermometers and metal signs are now highly prized by collectors of Americana.
It was not uncommon to see general stores and gas stations festooned with metal signs. The outdoor thermometer became particularly popular if, for no other reason, because of its utility. During the first half of the Twentieth Century, the science of meteorology was still evolving. Knowing the temperature and which way the wind blew gave rural folks a pretty good indication of what to expect from Mother Nature.
Some of the most well known brand names in the world first appeared on metal outdoor thermometers. NeHi® soda pop, John Deere® tractors, Mail Pouch® tobacco and dozens of other popular brand names owe their success in part to the humble outdoor thermometer. Quality reproductions of these and other famous outdoor thermometers are available to those folks who are not collectors but can appreciate the feelings of nostalgia evoked by items from our collective past.

January 8, 2015 11:19 pm

What is the correct interpretation here? The commentary seems to suggest that the yearly anomalies accumulate, while the natural interpretation of the anomaly graph is that the whole record was adjusted upwards by a very slight amount (about 0.03 C), leaving the trend nearly unchanged. Without seeing the underlying data the reader can’t verify what is being depicted, but if the difference between the records was growing over time then shouldn’t that be what is shown in the anomaly graph?

Reply to  Alec Rawls
January 8, 2015 11:44 pm


Reply to  Alec Rawls
January 9, 2015 3:40 am

Hi Mr Rawls
I alerted them to the error in early August 2014 and suggested a method of recalculation, which they appear to have adopted, correcting the annual values. For more see:

David R
January 8, 2015 11:20 pm

“Is this anthropogenic warming caused by man-made adjustments?”
There weren’t too many people here a couple of years ago questioning the CET record when it was showing a short term cooling trend. In fact, David Archibald was forecasting alarming cooling based on his ‘solar model’ (as usual with DA’s predictions, it didn’t materialise):
Now that CET has returned to warming there’s suddenly a question about the adjustments. No one was questioning the adjustment process when the record showed cooling.

Reply to  David R
January 8, 2015 11:27 pm

‘ No one was questioning the adjustment process when the record showed cooling.’
So you admit the ‘adjustments’?

David R
Reply to  lee
January 9, 2015 2:24 am

Would you prefer them to use data that wasn’t adjusted to reflect UHI, for instance?

Reply to  lee
January 9, 2015 2:54 am

Adjusting more contemporary measurements for increasing UHI effects would lower present day adjusted readings, not increase them.

Reply to  David R
January 8, 2015 11:27 pm

I’ve lost trust in any of the records. If the IRS/ Tax department was auditing those records, then I believe most of the ‘fiddlers’ would be in prison now.

Reply to  David R
January 8, 2015 11:33 pm

David R
It does not matter who questions the “adjustments” to the data, why they question, or when they question.
It only matters if the unadjusted data are retained and that the “adjustments” are explained and justified.
Please address the issues which matter (i.e. have some importance) and desist from making posts about trivia.

Reply to  richardscourtney
January 9, 2015 12:43 am

I wonder if part of this may be that there has been a growing realisation that the “official” records are being fudged, and that these records are being verified more carefully. Just a suggestion.

Alan Robertson
Reply to  richardscourtney
January 9, 2015 4:28 am

David R is just doing his job, defending the meme that last year was the hottest ever.

Reply to  richardscourtney
January 9, 2015 5:37 am

I second your comment Richardscourtney!
As an ex-database keeper/feeder I am horrified that records are ‘adjusted’.
All records should be kept in a pristine state.
Any ‘adjustments’ should be explicitly identified, verified, justified in detail, and dated, with the ownership of the adjuster identified.
No records are ‘changed’ in place. Error? Document it and enter an ‘adjustment’ under a proper metadata label.
Users should be able to pull reports from the pristine originally kept data along with adjustments to prepare or compile reports. All reports should explicitly define what is being shown in the report; e.g. “figures include quarterly and annual adjustments”.
Ideally everyone reading that report understands exactly what the adjustments described mean!
“It’s too much trouble”, cry the climatologists.
A cry that I just don’t understand, coming from a finance background with some industrial experience. Try keeping complete pay records for a million employees, including all corrections and adjustments. Any employee at any time is entitled to request and promptly receive explicit details of their entire record, with any ‘corrections’ or ‘adjustments’: who applied them, when, and why.
Since pay is dependent on work, all information about the employee’s activity is also retained, usually in detailed records at sub-minute intervals.
Or tracking a machine’s action on product at 35,000 pieces per hour; every piece, when produced, batch, who is running the machine, Supervisor of operation, time run, date run, QC…
All this information is live! Available now in full complete data form!
Instead the climatalogers are thrilled to boast about their ‘most powerful computer’, but their data keeping standards are bogus. (Is that bogus at full mannian meaning?)
Processing mass adjustments by program?
Keeping data only in the adjusted state?
Making it difficult or impossible to pull pristine un-retouched data?
Adjustments sorely lack detail and/or metadata?
Who cares what CET currently shows? As the CET anomaly graph demonstrates, adjusted data, especially mysteriously adjusted data, cannot be trusted.
Whether the trend is positive or negative, adjusted positive or negative, the data foundation is untrustworthy and the keepers of that data should be embarrassed to claim ownership.
Two questions should destroy any public presentation of climatology data: “Is that data adjusted?” and “Please explain every adjustment right here and right now, with full justification.”
Climatalogers should be shunned, not humored. Are many of them already in Coventry?
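The append-only record keeping described above can be sketched in a few lines. This is a minimal illustration with invented names and values; it reflects no real archive’s schema, only the principle that raw readings are never mutated and every adjustment is a separate, attributed entry:

```python
from dataclasses import dataclass, field

@dataclass
class TemperatureRecord:
    raw: dict                                        # year -> original reading, never mutated
    adjustments: list = field(default_factory=list)  # attributed adjustment entries

    def add_adjustment(self, year, delta, who, why, when):
        # Errors are documented as new entries; the raw value stays untouched.
        self.adjustments.append({"year": year, "delta": delta,
                                 "who": who, "why": why, "when": when})

    def adjusted(self, year):
        # Reports are derived views: pristine data plus labelled adjustments.
        return round(self.raw[year] +
                     sum(a["delta"] for a in self.adjustments if a["year"] == year), 2)

# Hypothetical example: one raw reading, one documented correction.
rec = TemperatureRecord(raw={2014: 10.95})
rec.add_adjustment(2014, -0.02, who="QC team", why="sensor drift", when="2015-01-06")
```

Any report can then state exactly which adjustments it includes, and a user can always pull `rec.raw` to see the pristine original.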

Paul Mackey
Reply to  David R
January 9, 2015 1:41 am

@David R. I think everybody here would question any adjustments, up or down, as bad science, unless the reasons are good and the methods are published and open to scrutiny. Your comment is vacuous, since the post describes finding the adjustment and reports it for the first time; yet you seem surprised that no one mentioned it prior to this post.
I do hope all these data warehouses are keeping the original measurements prior to adjustments on record. Destroying the source material would be unforgivable, if not bordering on criminal. Given that existing, published data is adjusted upwards with no explanation or reason, the author poses reasonable questions.

Reply to  Paul Mackey
January 9, 2015 5:15 am

Re: data warehousing of records.
One way to hide such records, used in Australia by the BoM, is to store them in files the average PC can’t handle, as I found when looking.
You might eventually find where they moved the data to, but you can’t access it anyway.

Reply to  Paul Mackey
January 9, 2015 10:07 am

The currently cited Official Temperature Record from 1880 to present had its humble beginning post-1960, with the majority of the included temperature values being either interpolated or extrapolated, and thus any present-day “adjustments” to the aforesaid “values” have no meaning whatsoever given the already highly questionable status of said Official Temperature Record.

January 8, 2015 11:22 pm

Lysenkoist vandalism.

January 8, 2015 11:28 pm

Can you compare the rise 1659-1940 with 1940-2014? My cellphone is very limiting, but I’d love to see how small the difference is.

Old Ranga
January 8, 2015 11:37 pm

Please define CET for a curious non-scientist.

Reply to  Old Ranga
January 8, 2015 11:40 pm

Central England Temperature

Reply to  Old Ranga
January 8, 2015 11:46 pm

Old Ranga
I think this brief explanation is what you want.

Old Ranga
Reply to  richardscourtney
January 9, 2015 12:14 am

Alex and Richard: Muchas gracias. Information now filed.

Steve Case
Reply to  Old Ranga
January 9, 2015 6:35 am

The modern world drives me crazy. It seems everyone uses acronyms for nearly everything. Probably because of Twitter & phone texting. Maybe they think it makes them look smart.
I did know what CET meant, but I certainly understand why someone wouldn’t have had a clue.

January 8, 2015 11:43 pm

Very sloppily written. What did the author mean?
“Fig 1 anomalies between CET downloaded in May 2013 with CET downloaded in Jan 2015 (data to Dec 2014)”
What has been subtracted from what? What does the graph show: 2013 minus 2015, or 2015 minus 2013?
“It is noticeable that nearly every adjustment is positive, with no negative changes.”
So what? There is apparently no trend in the “adjustment”; the graph is flat. Who cares if somebody adds a constant?
“The whole data set shows an average increase of 0.03 Deg C in 20 months or equivalent to 0.18 Deg C/decade.”
What shows the trend? Which “whole data set”: 2013, 2015, or the adjustment?
The alleged “adjustment” evidently shows no trend.
Thus, if “the whole data set shows” a trend, then it is real and is not due to any “adjustment”.
WUWT should strengthen its quality control before publishing such confusing, and probably wrong, articles.

Reply to  alex
January 8, 2015 11:50 pm

Please rewrite your complaint in clearer and more restrained language. Someone who cites wiki needs to be much more careful.

Reply to  richardscourtney
January 9, 2015 12:02 am

There is an alex and an Alex, on different sides of the fence. Please don’t confuse the two. I refer to wiki only on non-controversial subjects.

Reply to  richardscourtney
January 9, 2015 12:04 am

I apologise for any confusion.

Reply to  alex
January 8, 2015 11:56 pm

WUWT should strengthen their quality control before publishing such confusing – and probably wrong – articles.

More proof that skeptics are not well funded.

Reply to  alex
January 8, 2015 11:59 pm

The way I read it, the author downloaded two data sets, one in May 2013 and one in January 2015. The data should be the same; it wasn’t, and there was no explanation as to why. Is English your second language? Whether the author miscalculated or not could be another question.

Reply to  alex
January 9, 2015 12:07 am

No problem

Reply to  alex
January 9, 2015 5:43 am

Excuse me, alex, what do you mean?
Where you should ask in detail, you use pronouns (it, what).
You claim the adjustment trend is flat, but you didn’t calculate it. Instead you just assumed, even though the author explains the adjustment trend.
Perhaps you should work on your reading comprehension and social skills before demeaning others’ work and quality?

John Finn
Reply to  alex
January 9, 2015 6:21 pm


January 8, 2015 11:45 pm

David R
January 8, 2015 at 11:23 pm
No one questioned the adjustment process in 2013 when temperatures appeared to be falling. Maybe they were adjusting them downwards?
Okay, is it just me, or is this batshit insane?
(I was going to just say crazy, but if I understand what this poster said correctly, it’s way over that line.)

george e. smith
January 8, 2015 11:57 pm

“””””….. I consider the CET as a reasonable representation of Northern Hemisphere trends. …….”””””
Well there’s your problem right there.
I consider CET as a reasonable representation of Central England Temperature.
I don’t consider CET as a reasonable representation of anything else; it isn’t.
But I also abhor the adjustments; for any reason.

The Ghost Of Big Jim Cooley
Reply to  george e. smith
January 9, 2015 12:24 am

Indeed, we (in England) very often get very different ‘weather’ from other places at the same latitude (due, they say, to the Gulf Stream). It’s often much milder than tourists imagine. This weather becomes climate, so it is not indicative of the NH. The only thing you can say is that the data would have been meticulously recorded; we have many anal people here who are anally insistent on noting things down to the tenth decimal place. Having said that, the earliest recordings are simply worthless, as the ‘thermometer’ (in its early days) wasn’t calibrated against anything.

Reply to  The Ghost Of Big Jim Cooley
January 9, 2015 12:39 am

Thermometers were calibrated against the freezing and boiling points of water, and also against the body heat of a human (remarkably consistent). 0-100 C would have been quite accurate, with excursions either side. Gentlemen scientists would have traveled the length and breadth of Europe for scientific debates and comparison of equipment. I think those gentlemen were quite thorough, because they had to face their peers. They got many things wrong, but most were ‘man’ enough to face it. This is not the case today.
Don’t take away the effort of these ‘early guys’. The principle of the thermometer goes back two thousand years.

Reply to  The Ghost Of Big Jim Cooley
January 9, 2015 1:15 am

London is 51.5072° N, Calgary is 51.0500° N. I can assure you that our climates are about as far apart as can be imagined. For example, I’m looking at -20C right now, yesterday it was +4, and any time during December, January or February we can encounter anything from below -40C to above +25C. So yeah, guaranteed there is no connection whatsoever.

The Ghost Of Big Jim Cooley
Reply to  The Ghost Of Big Jim Cooley
January 9, 2015 4:39 am

Sorry Alex, I can’t agree with that. Given all that I’ve read in the past, and the musings in this post, the fact remains that historic data cannot be trusted, for many reasons, not just calibration (maybe I should have said that).

Reply to  The Ghost Of Big Jim Cooley
January 9, 2015 7:09 am

Sorry ghost
I frankly don’t know what happened in 1659. In reality I don’t know the quality of calibration at that time. People seem to think that the temperature records in those times are valuable. I don’t know if they are or not. It is the time that thermometers were being developed. My understanding from various things I read was that those people were very careful with the latest technology at that time.
That is not my real point.
The data obtained at that time should be left unchanged. If there is some subsequent finding that the instrumentation was not ‘up to par’, then that should be noted, and the data not changed to fit some political agenda.

Reply to  george e. smith
January 9, 2015 12:27 am

Agree totally

Rainer Bensch
Reply to  george e. smith
January 9, 2015 12:59 am

One reason would be that a change in the properties of water led to an adjustment of the Celsius scale somewhere between 2013 and 2015. /sarc, because that wouldn’t explain different adjustment values over time.

Reply to  Rainer Bensch
January 9, 2015 2:25 am

There’s a lot more CO2 in the water now.
Haven’t you noticed how fizzy the water out of your tap has become ?

Reply to  Rainer Bensch
January 9, 2015 5:45 am

Your ‘tap water’ is now sparkling mineral water? Wonderful!

Harry Passfield
Reply to  george e. smith
January 9, 2015 1:44 am

George: Good point. However, substitute Bristlecone Pines tree rings for ‘CET’ and the point is still well made.

Reply to  george e. smith
January 9, 2015 9:26 am

As usual, georgesmith sifts the wheat from the chaff.

Reply to  george e. smith
January 10, 2015 9:31 am

I agree with George e. Smith. The statement by Mr. Catto that “I consider the CET as a reasonable representation of Northern Hemisphere trends” is illogical and needs explaining. The trend in Central England represents the northern half of the globe? Does climate science really accept that the temperature trend in one small region, like New York, is the same as the temperature trend for San Diego, or for the entire northern half of the globe? Maybe I read it wrong. Maybe I’m learning something new. It may not be particularly key to the point Mr. Catto is making about adjustments, but with all the garbage data in, I have yet to understand why good climate skeptics, in making a point, seem to concede points unnecessarily that are either a small or large part of the reason for all the skepticism in the first place. I can give examples of why it’s so problematic to concede such points, but that’s for another time. For now, I just need to understand Mr. Catto’s basis for accepting this, on its face, significantly illogical extrapolation.

January 9, 2015 12:20 am

The future is inevitable, as long as one can make adjustments to it.

January 9, 2015 12:33 am

I don’t think any adjustments can accurately account for the massive UHI effect in this heavily industrialized region.

January 9, 2015 12:35 am

Fig 1 would have been more meaningful if the Y scale was adjusted (!) to show the full extent of the negative-going spikes. Why “clip” the plot?

Reply to  Sensorman
January 9, 2015 4:09 am

“Why “clip” the plot?”
It has not been clipped. The anomalies are all positive; there are no “negative-going spikes”. That is one of the writer’s main points: why are all of the adjustments positive?
Would you prefer to see a -0.01 under the zero line where no anomalies reached it? You would probably complain that the -0.01 graphing was unnecessary!

Non Nomen
January 9, 2015 12:37 am

I understand that this article implies that
a) CET temp have been “adjusted”
b) it is unclear whether these adjustments led to the conclusion of AGW being whipped up.
There was no standard scale of temperature measurement until Huygens in 1665 proposed the boiling and freezing points of water as references. So each thermometer used in the early CET days, until scientific standards were imposed, ought to be known and referred to individually to make readings comparable. Have adjustments been made to these thermometer readings? Who performed them, when, and on what basis? Were these adjustments made bona fide or, as one might assume knowing about the procedures of a certain Mann and that ilk, malevolently? Are the resulting, adjusted data correct?
From the Met Office Hadley Centre on CET:
“These daily and monthly temperatures are representative of a roughly triangular area of the United Kingdom enclosed by Lancashire, London and Bristol. The monthly series, which begins in 1659, is the longest available instrumental record of temperature in the world. The daily series begins in 1772. Manley (1953, 1974) compiled most of the monthly series, covering 1659 to 1973. These data were updated to 1991 by Parker et al (1992), who also calculated the daily series. Both series are now kept up to date by the Climate Data Monitoring section of the Hadley Centre, Met Office. Since 1974 the data have been adjusted to allow for urban warming.”

Reply to  Non Nomen
January 9, 2015 3:14 am

Since 1974 the data have been adjusted to allow for urban warming.
That does not mean, “Only the data collected since 1974 is corrected for UHI effects.”
It means what it says, that is, “The entire historical CET dataset has been rejiggered as UKMO sees fit under the guise of urban warming adjustment. We started these adjustments in 1974.”

January 9, 2015 12:59 am

It seems the explanation for the adjustment may be on their page.
“Since 1974 the data have been adjusted to allow for urban warming.”
The normal convention is that current data is held correct, so past data is adjusted relative to it. If current data should be adjusted down for UHI, the effect will be that past data is adjusted up. This could well be just the adjustment for 2014. It’s true that “keep the present correct” is not really appropriate for UHI, but it may be what they do.
So why isn’t the past adjustment uniform? That could well be rounding. Clearly it is rounded to two figures. If the months were adjusted, then rounded, then annually averaged, this scatter could well result.
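The “hold the present correct” convention Nick describes can be sketched numerically. This is a minimal illustration with invented values, not the Met Office’s actual procedure: if a hypothetical 0.1 C urban-warming correction is due, rather than lowering current readings, the pre-urbanisation past is raised by the same amount, leaving present-day values as observed.

```python
uhi = 0.1                  # hypothetical urban-warming correction, deg C

past = [9.1, 9.3, 9.0]     # pre-urbanisation annual means (invented values)
present = [10.2, 10.4]     # recent annual means, kept as observed

# Warming the past by the correction, instead of cooling the present,
# shrinks the past-to-present difference by the same amount; only the
# reference point differs.
adjusted_past = [round(t + uhi, 2) for t in past]
```

Either way the trend between past and present is reduced identically; the choice of which end to move is purely a convention about which values match today’s raw observations.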

Reply to  Nick Stokes
January 9, 2015 1:19 am

It’s an explanation of how they do things. Unfortunately what they are doing is insane

Reply to  Nick Stokes
January 9, 2015 1:30 am

I’ve just had to have a drink to get into the ‘logic’ (I use the term loosely) of your statement.
Let me see:
The present is always correct so the past has to be adjusted to reflect that.
If the present needs to be adjusted then the past has to be adjusted too.
Sounds like ‘heads I win tails you lose’
That doesn’t sound insane to you?

Harry Passfield
Reply to  Alex
January 9, 2015 1:48 am

No, Alex, I think Nick was explaining how the two ends of the series are relative: if one end goes down it’s the same as if the other had gone up, relatively.

Reply to  Alex
January 9, 2015 2:14 am

He said it is adjusted. It’s not like one end is adjusted and therefore the trend line ‘naturally’ changes. It sounds to me, from what he says, that both ends are adjusted. I am not accusing him of doing this, just interpreting what he is telling me about what ‘they’ do.

Reply to  Alex
January 9, 2015 2:41 am

It’s just a convention. If an instrument is moved, for example, you have to adjust one to meet the other. The present is always with us, so it makes a reasonable reference point. With UHI, it might seem less logical, thinking that the past is right and now is wrong. But that’s not really sustainable. Urbanisation might be artificial, but it isn’t going to go away.

Reply to  Alex
January 9, 2015 8:01 am

I heartily endorse your logic visualization technique.
To me, the insanity is not necessarily all the adjustments and deletions, but the notion that what you wind up with is a very good facsimile of reality.
The expectation on the skeptic side that there is a different way that will render the Truth is equally deluded.
Without all the hubris, zeal and hysteria, studying the weather would be good, clean fun with elements of practical merit, but I’ll confess, it wouldn’t have drawn my interest to the insane extent it has.

Joe Public
Reply to  Nick Stokes
January 9, 2015 1:58 am

“The normal convention is that current data is held correct, so past data is adjusted relative to it. ”
IMHO that is crazy logic.
‘Current’ data is current on one day only.

Reply to  Nick Stokes
January 9, 2015 3:41 am

The normal convention is that current data is held correct, so past data is adjusted relative to it. If current data should be adjusted down for UHI, the effect will be that past data is adjusted up. ~ Stokes

And here we have yet another definition of insanity.
Stokes is telling us that since changes need to be made to the current measurements because of urban warming (UHI), we hold current measurements as correct even though we know they are not, and change the past measurements, the ones from before urbanization. In other words, if today’s temp is wrong (too high) I don’t reduce today’s temperature at all. No sir, that would be bad for the scam, so I pretend to know how to adjust temperatures way in the past as if I had a time machine.
An honest group of men (good luck finding such men in climatology) would adjust the current temperatures downward due to the urban heat island (UHI) effect. Instead they use the hottest temperature they can find and adjust every other measurement to that.
They admit this bogus operation and pretend it is “science”. I guess they think some hand-waving and statistical mumbo-jumbo will fool us all.
And this site will automatically send your comment to moderation if you use the Fr*ud word? Oh my.

Rud Istvan
Reply to  markstoval
January 9, 2015 7:48 am

If there is a growing UHI problem that overstates a temperature trend, there are two logical ways to adjust to get a ‘true’ trend: either cool the present or warm the past. Cooling the present results in a discrepancy with current actual observation, so the preference is to warm the past. The NASA GISS website uses Tokyo as the example.
The problem is that when you actually compare what is done, the past is cooled on average, not warmed. This is so even where there should be no adjustment at all because there is no UHI.
Many specific examples for many land temperature series are provided in the essay “When Data Isn’t” in Blowing Smoke. This CET post perhaps adds to the pile of examples.

Reply to  Nick Stokes
January 9, 2015 4:28 am

BS, Nick. I am always finding on my “Accuweather” app that the current temperature and the predicted temperature are not the same.
Example: the 8am temp was forecast to be 50F, and yet the actual temp at 8am is 46F. AND we have clowns shoving hundredths of a degree down our throats!
Pure and simple BULLSHYT.
If the temp cannot be accurately measured for me today with all of our technology, no one can tell me what it will be in 100 years. PERIOD.

Reply to  gaelansclark
January 9, 2015 7:59 am

This really depends on where you are relative to the reporting station. I’m about 4 miles from mine according to “Accuweather” (yes, I use the term VERY loosely), and probably 150ft higher. The two will rarely, if ever, exactly agree.

Reply to  Nick Stokes
January 9, 2015 5:53 am

So the past is always wrong, Nick?
When the temperature trend falls, will those climate data detail devils ‘readjust’ the data down?
Honest data is kept as-is, period. Adjustments are kept as-is, period.
All reports using the data are identified as ‘adjusted’ if they are, and the adjustments are then identified in detail.
e.g. ** Report uses TOB adjustments; TOB adjustments incurred 1930…2014 inclusive, on a monthly basis.
Tell me that when data is so frequently manipulated/tortured, readers would not immediately question the efficacy of the report and ask for an ‘original’ untouched report.

January 9, 2015 1:01 am

Dear Mr Catto
Below is a link to the work of Professor Gordon Manley. The CET record runs to 1974 showing actual temperatures. You may of course already be aware of this; if not, it should help in showing what Jones et al have done with the record before 1974. The paper is worth a read in any case, as the Professor identifies the effects of urbanisation on temperature readings.
If the link doesn’t work, Google “Manley” and “Royal Meteorological Society”.

January 9, 2015 1:06 am

Thankfully the “official” data is homogenised in the Met Office blender. Manley gives a different version, which shows a much steadier increase from the LIA. No wonder the results keep changing!

January 9, 2015 1:19 am

I would prefer to look at the figures another way, ignoring CO2, adjustments etc.
2 degrees C in 355 years equates to a warming rate of 0.6 degrees C per century, which is well within the bounds of natural variability?
An oversimplification of course, but Je Suis Charlie.

January 9, 2015 1:52 am

There is nothing suspicious about this:
A few months ago I wrote to the Met Office about a small error in their annual data.
CET monthly data is given to one decimal place; the twelve values are added together, divided by 12, and rounded (by the usual rule) to two decimal places.
I repeated the process, and a zero difference between the so-calculated (annual from the monthly) data and the Met Office annual numbers (last column)
confirms the method.
Since the monthly data is itself made of daily numbers and the months are of different lengths, this is the wrong method, resulting in the Met Office annual data being fractionally wrong.
As a test (using 1900-2013 data) I multiplied each month’s data by its number of days, added all months, then divided the sum by 365 or 366 as appropriate.
This method gives annual data which is fractionally warmer, mainly due to the short February; the warm long Jul & Aug are balanced by the cold long Dec & Jan, and the rest makes a tiny difference.
This is not much, but still important: in every single year the second decimal digit (hundredth of a degree) is wrong; the maximum difference is 0.07 and the minimum 0.01 C.
I also discussed the subject with Tony Brown (TonyB) via email, and may even have commented on WUWT.
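The two averaging methods vukcevic compares can be sketched directly. The monthly values below are invented for a hypothetical non-leap year, not actual CET data; the point is only the arithmetic: the unweighted mean over-weights short, cold February, so the day-weighted annual comes out fractionally warmer.

```python
# Invented monthly mean temperatures (deg C) for a hypothetical non-leap year.
months = [3.2, 3.1, 5.4, 8.0, 11.3, 14.4, 16.5, 16.2, 13.7, 10.1, 6.2, 4.0]
days   = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]

# Method vukcevic says the Met Office used: unweighted mean of 12 monthly values.
simple = sum(months) / 12

# Corrected method: weight each month by its length, divide by days in the year.
weighted = sum(m * d for m, d in zip(months, days)) / sum(days)

# Short, cold February is over-weighted by the simple mean, so the
# day-weighted annual is slightly warmer.
difference = round(weighted - simple, 3)
```

With these invented values the difference is a few hundredths of a degree, the same order as the 0.01-0.07 C discrepancies vukcevic reports.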

David R
Reply to  vukcevic
January 9, 2015 2:34 am

That’s very informative, thanks.

Reply to  vukcevic
January 9, 2015 3:33 am

If the effect is mainly due to a February short but cold bias, your graph suggests February has been getting colder since about 1995.

Reply to  joelobryan
January 9, 2015 3:55 am

You are entirely correct.
February had nearly flat trend for 250 years (1730 – 1980)
(I alerted the Met Office to the February effect and the error in their data in early August 2014.)

Reply to  joelobryan
January 9, 2015 4:09 am

It has, but strangely, January has been getting warmer.
January used to be much colder than February but now they are much closer (30 year mean), although that may be a return to the situation in the early 1700’s.

Reply to  vukcevic
January 9, 2015 4:21 am

Are you saying that the adjustments depicted in Mr. Catto’s graph all result from correcting the month-weighting error you brought to their attention? If so, you may want to suggest to Mr. Watts or Mr. Catto that an update be made to the head post.

Reply to  Joe Born
January 9, 2015 5:46 am

I only did test for 1900-2013, then wrote to Met Office. As far as I can see my graph is identical to the one posted by Mr. Catto (post 1900). I am only a visitor here and not qualified to comment on the editorial matters at the WUWT.

Reply to  Joe Born
January 9, 2015 3:23 pm

This is actually not the responsibility of Mr Watts, Mr Catto or anybody, except the custodians of the CET who made the change! I accept that the change needed to be made, and that it appears to have little effect on the trend, but why is it not documented, so that people who use the data KNOW WHAT HAS BEEN DONE?
Sorry for shouting, but as someone who uses databases regularly, this kind of lack of documentation really screws up my day.

January 9, 2015 2:12 am

The oldest CET data file I can find is from July 2012 and I did a comparison between that file and the end of 2014.
As far as I can tell from the graph, the annual adjustments are the same as above, and the average adjustment is 0.02796c.
It is possible that this was a “one off” adjustment and consequently translating it into a 0.18c/decade may not be valid.
I don’t know when these adjustments were made, but it is possible that there is an explanation somewhere on the MO website.
It should also be remembered that the locations of the sites used to calculate CET have changed over the years.
By the way, the warmest year in the CET record was not 2014, but the 12 months ending April 2007, at 11.63c, compared to calendar year 2014 at 10.88c.

Reply to  QV
January 9, 2015 2:53 am

I found a file for 2007, and there were no changes to figures between 2007 and 2012.

David R
Reply to  QV
January 9, 2015 3:18 am

“By the way, the warmest year in the CET record was not 2014, but the 12 months ending April 2007, at 11.63c, compared to calendar year 2014 at 10.88c.”
Then by that reckoning the 2006 record had already been beaten in 2007! Funny that nobody mentioned that until now.
Plus, if you’re going to take the warmest consecutive 12 month period in a temperature record as the warmest ‘year’, then you have to also accept that the warmest ‘year’ in HadCRUT4 ended in September 2014, and in both NOAA and GISS it ended in November 2014.
By that system we don’t need to await the December figures; 2014 already ends the warmest ‘year’ in all three surface data sets.

Reply to  David R
January 9, 2015 3:37 am

Can you explain to me why we should only use (arbitrary) calendar years and ignore all 12 month periods?
It is sheer chance that we measure our “year” from January to December.
Rolling annual figures tell us more than calendar year figures do.

David R
Reply to  David R
January 9, 2015 3:47 am

Why not use the daily data? Or, on the other hand, why limit it to a 12 month period? Why not 6 months, 3 months… 63 months?!
Whole calendar years is probably an easier metric for the average punter to grasp, isn’t it?

Reply to  David R
January 9, 2015 3:59 am

The 12 month period has a natural, astronomical basis.
Not using rolling 12-month periods misses the fact that annual CET reached 11.63c.
You could use daily data, but it has complications, e.g. leap years which make it difficult to calculate.
Averages are calculated over 3 month periods, e.g. winter/summer which also have astronomical significance.
You didn’t really answer my question, merely replaced it with several others.

Alan Robertson
Reply to  David R
January 9, 2015 4:57 am

David R- So what?

January 9, 2015 2:18 am

Sorry for repeating this from previous threads, but I can’t seem to get an answer.
It’s about adjustments, so it seems pertinent on this thread.
I thought that USHCN and USCRN were very different systems, using totally different climate stations.
How can it be that on this page (link below) they match so closely?
Thanks for any explanations.

Peter Azlac
January 9, 2015 2:19 am

The CET record is often claimed to be the longest continuous temperature series, but this is far from the truth, as it is in fact a record compiled from many differing measurement sites that have changed frequently, not least since the 1950s, when sites like Ringway (now a major airport) were replaced by other sites, including Rothamsted, which is close to another major airport, Luton. The CET data we see presented most often is the annual mean temperature, which is misleading, as the maximum and minimum data (not available in as much detail from the beginning of the series) show different trends, with the minimum values having been subject to substantial adjustments. To be fair to the UK Met Office, the uncertainties in the values and the problems in compiling the record have been openly published in papers by Parker and Horton, with which anyone considering this record needs to be familiar:
Of importance is the trend in rainfall over the period and not just the annual distribution but the soil types of the stations used in the record on which it falls since soil moisture content and the ease with which that moisture is released to the atmosphere can have substantial effects on minimum and maximum surface temperatures – cooling the maximum and warming the minimum. These effects depend on surface solar insolation (hence cloud cover) as well as wind speed and other factors considered in this model by Berg and colleagues:
and this work by Monroe and colleagues
and this modelling study by Hirsch and colleagues:
or this by Herb and colleagues:
There are several other studies but these serve to show that evaluating surface temperature without regard to factors such as rainfall, solar surface insolation and soil conditions leads to erroneous results, especially in the homogenization of data from sites with differing profiles for these factors and especially for the UK where surface insolation has increased since the 1960’s.
This link between temperature, soil conditions, solar insolation and rainfall has been well known since the Köppen climate zone classification was first promoted, and in effect it means that global temperature trends have little meaning, especially when presented as annual means. The only trends with some value are those within climate zones (such as the work presented by Frank Lasner of Hidethedecline), and especially on the periphery of such zones, where they indicate the direction of the climate trend – for example the cycles in climate linked to the Sahel area of Africa, which changes as the Hadley cell expands and retreats from the equator, bringing changes in rainfall and surface insolation. Perhaps of more importance in this regard is the expansion/contraction of the cereal-growing areas of the Northern Hemisphere, which brought crop failures and hence widespread famine during the Little Ice Age – a situation that looks like it is about to repeat.
To get back to the CET record: anyone familiar with the climate of the UK will see that the record is a mixture of data from several climate zones and urbanized areas and as such, whilst interesting as a guide to long-term trends, it lacks the precision to detect real changes in climate over decadal periods. Discussing the mean CET trend is also misleading, as one needs to look at the mean minimum and maximum values and consider the impact of UHI as well as rainfall, soil moisture, surface insolation and wind speed. Finally, it is interesting to observe that a recognized measure of these values in agriculture is the Class A pan evaporation data, which globally shows no clear warming trend, though there are local trends depending on the factors discussed above:

January 9, 2015 2:21 am

First: The temperature data in these files are not anomalies. They are absolute temperatures.
Second: The whole series in the annual column has been shifted upward. So there is no change in trend.
Third: I had a version of the file downloaded December 2013. Comparing this file to the newest gives the same result as in this post. But when I compute the mean of the monthly values from the two files, the results are identical: the monthly values are actually unchanged. And computing the mean of the monthly values from the newest file and comparing those with the annual column of the same file gives the same result as comparing the yearly columns from the two files.
That means: This is just a glitch in the year column. An error. It is not an adjustment at all. And the error will be corrected.
That correction might of course result in some interesting thoughts from some.

Reply to  rooter
January 9, 2015 9:32 am

Thanks for that. I agree that there is no change in trend.
The entire ‘man-made global warming’ scare is based on just a few years’ of anomalous warming. That has stopped. But the scare continues as if global warming had continued.

January 9, 2015 2:58 am

It was important to ask the question. Seems we have the answers from vukcevic and rooter. Even if CET data are reliable, can we say the same for NOAA and others? The discussion should continue.

January 9, 2015 3:25 am

Hadcrut V18.23 will show warming between 1850 and 2014 was indeed 2.0C just as the climate models had predicted.
The data will also show that there were immense impacts on the climate, as severe hurricanes tripled in number and rainfall extremes increased by 137%, while there were over 50 million climate refugees by 2010. And cats and dogs went extinct in 2023.

Reply to  Bill Illis
January 9, 2015 3:44 am

You forgot to mention that the adjusted summer Arctic ice extent fell below 1 million km^2 in 2013, and the last polar bears drowned in 2015.

January 9, 2015 4:02 am

Thanks for all the comments.
Whilst vukcevic provides an explanation for the changes, I would rather have a file showing raw (unadjusted) data. From these data it would be possible for anyone to provide their own interpretation for situations such as UHI. Mainly because I disagree with P. Jones's figure for UHI and consider it 0.8 Deg C, after an analysis between London Heathrow and London Gatwick (using 15 years of data).
I can see there is no trend in the adjustments, however I asked the questions “how often and by how much are these adjustments done?” It was a bit sarcastic of me to relate the adjustment to a /decade figure not knowing the answers to the questions. If these adjustments happen frequently then the whole record warms and there is a greater potential for the latest year to be warmer.
As far as the warmest year is concerned, I have been collecting 24 hourly data on a daily basis for 27 locations in the UK since October 1998 as part of my work. One of the locations is Birmingham, right in the middle of the CET. When I analysed the average daily/monthly/yearly data I found in 1999 the average temperature was 11.1, which is 0.6 Deg C higher than the CET for 201.
[“higher than the CET for 201.” ? .mod]

Reply to  NeilC
January 9, 2015 4:07 am

I alerted Met Office to the error in early August 2014 and suggested method of recalculation which they appear to have adopted and corrected the annual values.

Reply to  NeilC
January 9, 2015 5:03 am

Just ask for the raw unadjusted data then.
No big deal.
For this series they apply a UHI adjustment of 0.1–0.3 degrees (depending on month – biggest adjustment in summer). That of course gives a lower adjusted temperature than the raw unadjusted one.
Like in Birmingham….
But if you think that is wrong you can of course suggest another adjustment. Just be sure to give good arguments for changing the adjustments.
Or perhaps you would keep the raw unadjusted data?

Reply to  rooter
January 9, 2015 9:33 am

I would keep both adjusted and raw data side by side. That way everyone can see what has been done.

Reply to  NeilC
January 9, 2015 8:12 am

If vukcevic’s explanation is correct and they did correct their annual mean calculation then the original data from which it is derived will still be there and will be unchanged only the derived annual data will have changed. Should be easy to check.

January 9, 2015 4:08 am

An analogy exists between average global temperature resulting from forcings and the level of water in a bucket containing a hole in the bottom being filled with a hose. If the inflow or hole size is suddenly (or gradually) changed to a different value the level of water would slowly change until equilibrium between inflow and outflow was reestablished. The water level would change according to the time-integral of the difference between inflow and outflow. Likewise, average global temperature depends on the time-integral of the net effect of forcings.
If global warming was caused by CO2 (which it isn’t), warming rate (rate-of-change of average global temperature) instead of (as usually presented) the temperature itself would vary with the CO2 level. To be valid, the comparison should be between the temperature and the time-integral of CO2 level and/or the time-integral of any other factor(s) proportional to energy rate.
Thus any co-plot of CO2 level and temperature or any other implication that average global temperature depends directly on CO2 level is misleading and physically and mathematically wrong.
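[The leaky-bucket analogy above can be sketched numerically. This is an illustrative toy only – the constants, the linear outflow law and the function name `simulate_bucket` are my own assumptions, not from the comment – but it shows the claimed behaviour: the level integrates the difference between inflow and outflow, and after a step change in inflow it drifts to a new equilibrium.]

```python
# Toy Euler integration of the bucket analogy: dL/dt = inflow(t) - k * L,
# i.e. the water level is the time-integral of (inflow - outflow),
# with outflow assumed proportional to the level (an illustrative choice).

def simulate_bucket(inflow, k=0.5, level0=4.0, dt=0.01, steps=5000):
    """Return the water level after integrating for steps*dt time units."""
    level = level0
    for n in range(steps):
        t = n * dt
        level += (inflow(t) - k * level) * dt  # accumulate net flow
    return level

# Inflow stepped up to a constant 3.0 units; with outflow = 0.5 * L the
# level drifts from 4.0 toward the new equilibrium 3.0 / 0.5 = 6.0.
final = simulate_bucket(lambda t: 3.0)
print(round(final, 2))  # 6.0
```

[The same structure is what the comment argues for temperature: it is the integral of net forcing, not the instantaneous forcing, that sets the level.]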
An analysis at derives a physics-based equation which, using the time-integral of forcings, accurately calculates the uptrends and down trends of average global temperatures irrespective of whether CO2 change is included or not. The paper at this link discloses:
1. A reference which provides historical evidence that CO2 change does not cause climate change.
2. The two factors that do explain climate change. The correlation is 95% with average global temperatures since before 1900; including the current plateau. The analysis also predicts the ongoing down trend of average global temperature.
3. An explanation of why any credible CO2 change does not cause significant climate change.
The two factors are also identified in a peer reviewed paper published in Energy and Environment, vol. 25, No. 8, 1455-1471.

Gary Pearse
January 9, 2015 5:19 am

January 8, 2015 at 10:43 pm
Phillipbratby, think about this (I mean it sincerely). If the world is going to warm 2 to 5C and this is very alarming, there is no need to do all this adjusting of temperature records because:
A) The warming WILL overwhelm any errors. Thermometers show (roughly?) a global 0.6- 0.7C per century increase and this is supported in most cases regionally with different thermometer sets in different continents. It does not matter if this is out 0.1-0.2C, an order of magnitude less than the signal we are worried about.
B) If a modest network of ideally located thermometers is used for monitoring the warming and assessing its seriousness, they can be left raw. The magical-seeming Central Limit Theorem [CLT] says that a large number of measurements of a metric averages out to a normal distribution [bell curve], with the temperature we seek indicated by the peak in the bell. The application of the CLT is a legitimate one for this use because, over the several hundred years since the thermometer’s invention, it has been calibrated at 0C (or the earlier 32F) using a mix of ice and water, at the boiling point of 100C (212F) at sea level, and later with human body temperature for interpolation – and probably today with a number of other known constant-temperature references. Each of these calibrations is essentially one iteration of hundreds of thousands of measurements of freezing, boiling and body temperatures. Their averages will be precise for the task at hand. Individual thermometers will read somewhat too warm or too cool, but the averages you can rely on.
I’m surprised that this hasn’t been prominently presented in any of the thousands of posts I’ve read on warming and the data sets. Indeed, a dozen pristinely located thermometers distributed around the world would be fully adequate to give us a signal of any worrying trends up or down. I think the problem is modern hubris, an idea that our quaint predecessors’ efforts at instrumentation were crude in their manufacture. Had I time, I would be happy to expand on this. Suffice it to say for now that hand-crafted instrumentation of several hundred years ago was remarkably intricate.
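[The CLT point above can be illustrated with a toy simulation. This is a sketch under my own assumptions – the bias and noise magnitudes are invented, and crucially the per-reading biases are assumed to average to zero; a shared systematic bias common to all instruments would not cancel this way.]

```python
import random

# Toy CLT demonstration: each reading carries its own random instrument
# bias and observer noise, yet the mean of many readings converges on
# the true value (assuming the errors are unbiased on average).

random.seed(42)
true_temp = 10.0
readings = []
for _ in range(100_000):
    bias = random.gauss(0.0, 0.2)    # per-reading instrument error
    noise = random.gauss(0.0, 0.5)   # observer/reading error
    readings.append(true_temp + bias + noise)

mean = sum(readings) / len(readings)
print(round(mean, 2))  # very close to 10.0
```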
I have a ‘recipe’ for making a reflecting telescope’s parabolic mirror that is that of Sir Isaac Newton himself (not an original, but the recipe is). It uses two flat round plates of glass – the tool (on top) and the reflector – with grit between them to grind down the reflector (ever finer grit as you progress, until you get to jewelers’ rouge). I won’t get into the regular turning of the assembly and the walking around it in the opposite direction to grind back and forth from all directions (CLT works admirably here, too!!).
When this stage is complete, you have a spherical indentation from the edge inwards in the reflector. Now you have to change this to a flatish parabolic shape so that the lines of light converge at a focus. This is done by coating the tool with beeswax, grooving the beeswax in a reticular pattern and resuming the same grinding motions with jewelers rouge. The beeswax, not being rigid deforms slightly with each rub and in time you have a parabolic, polished surface.
To check it out, you set the mirror on its edge, find the focus using a light passing through a slit, and adjust the distance until the entire reflector is illuminated (remember Newton was also a seminal researcher in optics). To check for “bumps” you slowly slide a razor blade in guides into the slit and look for topographic irregularities (usually smooth, polished, rounded bumps) as the slit of light sweeps the reflector. This method will let you map bumps greater than 2 to 3 millionths of an inch on your reflector. You remove these bumps by putting a thin coating of jewelers’ rouge on your scrubbed thumb and giving the bump a few light rubs. Repeat until the deflection from the bumps can no longer be detected with the advancing razor blade.
I don’t have a link and the out-of-print book is a few thousand kilometres away from me at home, but I’m sure you can find it or another by googling Newton’s method. Oh, and 2 millionths of an inch is 0.05 microns (cigarette smoke particles are 2 microns). How’s that for precision in the 17th century? You may also not know that modern telescope mirrors still get their final finishing by human hand!! They have the same recipe, I’m sure.

January 9, 2015 5:21 am

I don’t see any problem here. I don’t see any trend in the difference graph, it just adds approximately 0.03 degree celsius to the whole record, making both present and past look veeeery slightly warmer. Magnitude of warming is difference between now and then, and this difference is in general untouched. So are any temperature trends.
As to why are these changes made, my honest guess is they come from improved methodology, perhaps slight changes in weights given to individual stations, or changes to interpolation for missing data.
I don’t see any reason to assume conspiracy.

Reply to  Kasuha
January 9, 2015 5:51 am

‘I don’t see any reason to assume conspiracy.’
I think you only have to look at the track record of those making these adjustments, often carried out in a poor way with little or no justification, to see why people tend to ask questions of them. After all, if someone spends 90% of their time lying to you, you are likely to question what they say for the other 10% too.

January 9, 2015 5:55 am

Typo near the end. It says 2.0 C/decade.

Gary Pearse
January 9, 2015 5:57 am

Oh and apropos of my comment above, many of the historic thermometers are still available for testing.

January 9, 2015 6:00 am

This is a complete nothing. Slight change in the 1961-90 monthly means. Doesn’t change trends.
Nor is the pre-anomaly data hard to find. It’s here
Completely different animal than GISS two-legged adjustments.

Reply to  Steve McIntyre
January 9, 2015 7:46 am

Given your reputation, I am surprised you would accept any change, regardless of how benign. Perhaps you’re only interested in the large changes. Perhaps you have become numb to these things these days. Unfortunately I think that one finger instead of three is still rape.

Reply to  Steve McIntyre
January 9, 2015 10:07 am

Reply to SMc ==> absolutely correct. The reported “anomaly” is hundredths of a degree — if this were a medical study, it would fail the “Minimal Clinically Important Difference” test. — kh

January 9, 2015 6:09 am

There is of course a marked difference between temperature readings in towns and cities and the countryside but even if we took all readings in rural areas there could still be a marked difference. My readings are taken in my garden at 400ft above sea level. On a still clear night readings taken at the bottom of the valley 300ft below will regularly be 1 to 2C below my own.

January 9, 2015 6:12 am

Dan Pangburn.
Thanks for your input to the discussion. I am an Electrical Engineer M.Sc. and have been working with process control systems in the geothermal industry for decades. I was also a lecturer in Modern Control Engineering (Ogata) at the local university for several years, so I have a fair understanding of thermodynamics and mathematical modeling of physical systems.
I agree with you that the correct method is to use the time-integral of the forcing signal. It is simply wrong to compare directly the instant value of CO2 or variable TSI and the Earth’s temperature.
Therefore I consider your analysis at very valuable.
Best regards

January 9, 2015 6:16 am

What is the reason for these data adjustments?
Zealots changing the record.
How often and by how much are these data adjusted?
Every chance they get.
Is this anthropogenic warming caused by man-made adjustments?
There you go. My best guesses.

January 9, 2015 6:20 am

I got a copy of the file from december 2014. The annual column in that file was identical to the file from 2013. And had of course therefore the same difference in comparison to the latest file.
Supports the glitch hypothesis.

January 9, 2015 6:30 am
Mount Tambora
[Excerpt from wiki]
With an estimated ejecta volume of 160 km3 (38 cu mi), Tambora’s 1815 outburst was the largest volcanic eruption in recorded history. The explosion was heard on Sumatra island more than 2,000 km (1,200 mi) away. Heavy volcanic ash falls were observed as far away as Borneo, Sulawesi, Java, and Maluku Islands. Most deaths from the eruption were from starvation and disease, as the eruptive fallout ruined agricultural productivity in the local region. The death toll was at least 71,000 people, of whom 11,000–12,000 were killed directly by the eruption;[6] the often-cited figure of 92,000 people killed is believed to be overestimated.[7]
The eruption caused global climate anomalies that included the phenomenon known as “volcanic winter”: 1816 became known as the “Year Without a Summer” because of the effect on North American and European weather. Crops failed and livestock died in much of the Northern Hemisphere, resulting in the worst famine of the 19th century.[6]
[end of excerpt]
[Excerpt from my above post]
The Dalton Minimum had 2 back-to-back low SC’s with SSNmax of 48 in 1804 and 46 in 1816. Tambora erupted in 1815.
Two of the coldest years in the Dalton were 1814 (7.75C year avg CET) and 1816 (7.87C year avg CET).
[end of excerpt]
So, for CET’s, it appears that the 1815 eruption of Tambora had minimal effect, since CET’s in 1814 were slightly lower than CET’s for 1816.
However, the anecdotal evidence suggests that 1816 was a much harder year for humanity than 1814.
What to believe?
Best to all, Allan

January 9, 2015 6:40 am

It seems the entire record went up, from far to near past. How could this affect trends?

January 9, 2015 6:55 am

Neil Catto ==> Had you seen this record from Greenwich?
Don’t know how they relate to the CET — but they are “original” and published by the man who kept the record during that period.
It might be interesting to figure the adjustments made during these 34 years alone — as we have a guaranteed set of actual recorded temperatures that have never been touched.
– kh

Stephen Ricahrds
January 9, 2015 7:04 am

I don’t know how often the following needs to be said before crimatologists take note but here goes :
In no other science is it acceptable to alter, modify, adjust or change in any way shape or form, past data. It is what it is and must be left untouched. You may choose to critique the data but you may not change it.
Only in climate novels is it seen as normal practice to alter both past and present data and, of course, future data.

Non Nomen
Reply to  Stephen Ricahrds
January 9, 2015 8:31 am

E.g.: the air pressure reading at a given location is 1290hPa. Surrounding stations show an average of 1025hPa. In this case, it seems to me that action is required. There has never been an air pressure of 1290hPa, but averaging that out by mixing this value with other stations values would be a gross distortion. So the only way not to distort the results is -to me- to omit that 1290hPa completely with a remark “data not plausible – typo”.
Would you agree that there are situations where it becomes necessary to “manipulate” data?
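[A minimal sketch of the quality-control rule described above: omit an implausible reading rather than average it in. The thresholds and the helper name `plausible` are my own illustrative assumptions, not any agency's actual procedure.]

```python
# Flag a pressure reading as implausible when it falls outside a
# physically credible range or departs too far from nearby stations;
# flagged values are excluded with a remark, never blended in.

def plausible(value, neighbours, abs_range=(870.0, 1090.0), max_dev=30.0):
    """Return True if a pressure reading (hPa) passes both sanity checks.

    abs_range: roughly bracketing the extreme sea-level pressures on record
               (illustrative bounds).
    max_dev:   maximum allowed departure from the neighbour average (hPa).
    """
    lo, hi = abs_range
    if not lo <= value <= hi:
        return False
    neighbour_mean = sum(neighbours) / len(neighbours)
    return abs(value - neighbour_mean) <= max_dev

print(plausible(1290.0, [1025.0, 1024.0, 1026.0]))  # False: likely a typo, omit
print(plausible(1023.0, [1025.0, 1024.0, 1026.0]))  # True: keep as recorded
```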

Richard Ilfeld
January 9, 2015 8:14 am

Imagine there’s no thermometers.
It’s easy if you try
No adjustments possible
Above us only sky
Imagine all the people
Living as they do
Some years they skate the Thames
It isn’t hard to do
While once vintners worked
and we had wine not brew.
Imagine all the people
Living life in peace
You may say I’m a denier
But I’m not the only one
I hope someday for science
And the world will be as one
Imagine no adjustments
I wonder if you can
No hockey sticks or models
where true science isn’t banned
Imagine all the sunshine
and all the climate too
You, you may say I’m a dreamer
But I’m not the only one
I hope someday there will be science
And the models are all gone.
And the world can continue to do what it damn well pleases!

Bruce Cobb
Reply to  Richard Ilfeld
January 9, 2015 9:41 am

Good one! Lol.

Keith A. Nonemaker
January 9, 2015 8:29 am

The fastest warming over a 50-year period was from Oct. 1688 to Sept. 1738 with a warming of 1.90059 degrees. The warming of the second half of the twentieth century was 0.82415 degrees, less than half as fast.

January 9, 2015 9:17 am

I live in central England, if this is the warmest year ever, I must have totally imagined 1976 (massive enduring heatwave). I think I may need some revisionist history training, I believe there’s a course available at the UEA?

Reply to  sophie
January 9, 2015 9:30 am

You’re confusing a hot summer with a hot year.
2014 hasn’t been particularly hot, just not particularly cold.
Only 1 month was below normal, whereas in 1976 4 months were – in spring and autumn, so you wouldn’t really notice. OTOH the summer was an average of 2.5 degrees above normal.

James Abbott
Reply to  sophie
January 9, 2015 11:03 am

Massive heatwave indeed – but for a few months. What about the rest of 1976 ?

Stephen Wilde
Reply to  sophie
January 9, 2015 2:35 pm

The lack of cold weather in the early months has heavily skewed the averages for CET.
That lack of cold weather last winter was due to a stronger jet stream across western Europe, which in turn was caused by very cold plunges across North America.
Perversely, our warm winter last year was actually a sign of a cooling globe, with more meridional jets and greater global cloudiness.

John Finn
Reply to  Stephen Wilde
January 9, 2015 6:48 pm

The lack of cold weather in the early months has heavily skewed the averages for CET.

Ah that explains it, Steve. The “lack of cold weather” is the reason it was warmer.
Cheers, mate.

Reply to  sophie
January 10, 2015 3:58 pm

I think the issue is that “warmest” seems to be that the average temperature is higher, which does not necessarily mean higher summer temperatures. For example 2012 was the “warmest” year in the US, but actually due to a mild winter/spring, see
I wonder if there are better definitions for comparing temperature across two time periods, reducing the entire range of a year to a single value loses a lot of information.

Doug Proctor
January 9, 2015 9:29 am

Looks more like a bulk shift to me. You need to run a line-producing algorithm.

See - owe to Rich
January 9, 2015 9:40 am

Although CET mean was a record, funnily enough neither CET max nor CET min was a record. The former, which I was watching closely for the whole of December (using a good proxy to CET) came second to 2003, and the latter came second to 2006.
Re Sophie and 1976, none of 2014’s seasons was a record on CET max. Winter (counting December 2013 rather than 2014) came 6th, spring came 8th, summer came 28th, autumn came 4th. See for these.

January 9, 2015 10:13 am

I hate abbreviations and will soon stop reading your blog.
Do you think people know what CET means? You should at least explain it the first time you use it in a post.

Reply to  fritz
January 20, 2015 2:53 am

There is a glossary in the mast head.
CET = Central England Temp.

James Abbott
January 9, 2015 11:02 am

“The CET record started in 1659 close to the minimum of the little ice age. As such, it is with no surprise that last year (2014) was the warmest on record”
And the link between those two events is ?
If we get a record warm year in 2020, or any other year, would the author say the same ? Presumably he would as he just thinks that current warming is part of an endless rebound from the LIA.
It’s another version of string theory – how long is a piece of it?

January 9, 2015 11:58 am

James Abbott says:
And the link between those two events is ?
I can hardly believe that is a serious question. The LIA was one of the coldest episodes of the entire Holocene. What does Abbott expect? That when temperatures go down, that they stay down forever?

James Abbott
January 9, 2015 12:15 pm

The fact that you find my question so unbelievable says a lot. Yes the LIA was a cold episode, well documented. But the article does not explain, and nor do any of those who rely on the LIA rebound as a cause of current warming, as to WHEN the LIA rebound ends. So my point is, we are now over 300 years since the lowest point of the LIA – how long is this piece of string ?
The CET shows a notable warming from about 1900 but which then flattens out until about 1980 – then it clearly jumps upwards. The ten year running mean is clearly at a higher level post the 1980s than the C20th recovery period or the preceding LIA period.
How can that upwards jump from 1980 onwards still be pinned to the LIA recovery ? How do you define the end of the LIA ?

Stephen Wilde
Reply to  James Abbott
January 9, 2015 2:30 pm

The peak to trough periodicity is historically about 500 years.
The lowest level of solar activity was about 350 years ago.
There could be another solar induced warmer period before we reach the warmth of the Mediaeval Warm Period again and then start a decline towards the next colder spell.
It is possible that the latest solar quietness is the beginning of an earlier decline, but that is not certain.

January 9, 2015 1:51 pm

Regarding Central England and London as just one large urban heat island makes a lot of sense. There is precious little open space left in Central England, and the population has increased from 8,900,000 in 1800 to 53,000,000 now.

john cooknell
January 9, 2015 1:59 pm

My opinion is that the CET is an adjusted data set that shows very little upward trend till the 1980s.
In the 1980’s it became the established scientific consensus that increasing CO2 and other man made gas emissions would cause the surface temperature to warm. I am not surprised that the temperature data measurements match the consensus theory, any other result would not be allowed.
However I would have expected that the effects of man made emissions would also be noticeable to a similar extent in the precipitation data record but my understanding is that so far the record shows no trend.

Reply to  john cooknell
January 9, 2015 2:52 pm

Adjustments after 1975 include reducing temperatures after 1975 – adjustments for UHI. Raw unadjusted temperatures are higher.
Must be wrong. We want raw unadjusted data. They show more temperature increase. That does not match the consensus theory.

Stephen Wilde
January 9, 2015 2:22 pm

Very sad.
Hitherto I had more confidence in CET than other data sets.
All gone.

Reply to  Stephen Wilde
January 9, 2015 2:46 pm

You lose confidence in CET because there is a difference between the yearly values and the yearly means of the monthly values in the latest hadcet file?
A difference that does not make any difference for the temperature increase.
Easily unconvinced. Unconvincingly so.

Stephen Wilde
January 9, 2015 2:41 pm

vukcevic said:
January 9, 2015 at 1:52 am
“There is nothing suspicious about this:”
Does the feature you spotted account for ALL the anomalies in the head post?

Reply to  Stephen Wilde
January 9, 2015 3:45 pm

Hi Mr Wilde
As far as I can see, the graph I got is identical with the corresponding section of what is shown by Mr. Catto. In my view it doesn’t change anything significant. I did think that if the annual CET is given to two decimal places (which may or may not be justifiable) then the Met Office should recalculate the data based on a year of 365/6 days and not on 12 equal months.
In the email to the Met Office I said:
Since monthly data is made of the daily numbers and months are of different length, as a test (1900-2013 data, see graph attached) I recalculated the annual numbers, weighting each month’s data within the annual composite according to the number of days in the month concerned.
This method gives annual data which is fractionally higher, mainly due to short February (28 or 29 days), see the attachment.
Differences are minor, but still important, maximum difference is ~ 0.07 and minimum 0.01 degrees C.
I am of the view that the month-weighted data calculation is the correct method.
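[The month-weighted recalculation described in the email above can be sketched like this. An illustrative example only – the monthly values below are invented, not CET data.]

```python
import calendar

# Day-weighted annual mean: weight each monthly mean temperature by its
# number of days instead of averaging the 12 months equally.

def annual_mean(monthly, year):
    """Return the day-weighted annual mean from 12 monthly mean temps."""
    days = [calendar.monthrange(year, m)[1] for m in range(1, 13)]
    return sum(t * d for t, d in zip(monthly, days)) / sum(days)

# Invented monthly means with a cold, short February: the equal-weight
# mean gives February the same weight as 31-day months, pulling the
# annual figure down slightly relative to the day-weighted mean.
monthly = [3.5, 2.9, 5.7, 8.0, 11.2, 14.3, 16.5, 16.2, 13.6, 10.1, 6.2, 4.0]
simple = sum(monthly) / 12
weighted = annual_mean(monthly, 2014)
print(round(weighted - simple, 3))  # small positive difference (~0.04 here)
```

[This matches the direction of the effect reported in the email: weighting by days shifts the annual figure fractionally upward, mainly because of the short, cold February.]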

This is a copy of the actual graph I attached to the email:

Stephen Wilde
Reply to  vukcevic
January 10, 2015 6:31 pm

Thanks, Vuk, very helpful.

January 9, 2015 2:58 pm

Re the “rebounding theory”:
Why was there no rebounding 1820 – 1920?
And no rebounding 1940 – 1980 (actually the opposite of a rebouding)?
Does the rebounding theory imply that the LIA ended in 1980?

January 9, 2015 5:34 pm

Data for models keeps getting revised after the fact
Factor key to the models is admittedly unknown (how Clouds work)
All Models failed… after 18 years….
DNA evidence would never have been allowed in at trial with the type of ‘scientific’ foundation seen in the AGW area. Any clear-thinking juror or judge would be skeptical. After that miserable testimony on the ever-changing historical data and the failed models, jurors would be particularly skeptical when the scientist on the stand was unable even to understand why anyone else would be skeptical – screaming like an old-time preacher, calling everyone in the room ‘deniers’ of the ‘obvious’ truth of his ‘science’.
Conclusions based on Astrology would have a better chance at trial based on that type of proof.

John Finn
January 9, 2015 6:17 pm

It is noticeable that nearly every adjustment is positive, with no negative changes. The whole data set shows an average increase of 0.03 Deg C in 20 months or equivalent to 0.18 Deg C/decade.

Right and ….

Is this anthropogenic warming caused by man-made adjustments?

No, Neil, it’s not because the ‘adjustments’ haven’t produced any warming.

don penman
January 10, 2015 3:35 am

The question that needs an answer here is: how can the yearly average be so high while the max temps and min temps are not exceeded? Why does the atmosphere not accumulate heat (the greenhouse effect)? My answer is that the clouds (storminess) do not increase the greenhouse effect – yes, the surface temperature is a little higher, but clouds reflect more sunlight back into space and lose heat. It is false logic to believe that if the surface temperature anomaly rises then it must have been caused by the greenhouse effect.

See - owe to Rich
January 10, 2015 4:10 am

The head posting arose because 2014 had the highest mean CET on record (though only 2nd for max and for min).
Today, 10 days after 2014 was ushered out, I believe I have just witnessed the warmest English January day evah! That is, my estimate of CET max for January 9th is 14.8degC, smashing 1930’s record of 13.6degC. (There are 17 previous years breaking the 13 degree mark.)
It’s been a bit freakish. Oddly enough, my thermometer registered only 14.6, but Pershore’s 15 miles away apparently made 16.0. Last night I drove north to Worcester (beyond Pershore) and noticed it getting warmer as I drove north.
One final point: there was statistically significant cooling in CET max from 2002-2013 inclusive. That cooling now seems to be over, unless 2014 and January 2015 turn out to be a mere blip.

Sal Minella
January 10, 2015 7:17 am

Amazing science! We have temp data, precise to the hundredth of a degree, from all over the Earth, recorded in the thirteenth century. This information just blows my mind as I thought that temperature data of that quality has only been available in the latter part of the satellite era. Silly me.

Reply to  Sal Minella
January 10, 2015 12:36 pm

No you aren’t silly (you were being ironic I know).
There was nothing to see all along.
The data remains the data and is unaltered.
And the respected posters on here got righteously indignant.
Which was of course all that was wanted.

Reply to  Toneb
January 11, 2015 3:39 am

You are right, completely unaltered. Just like the axe that has been in my family for 350 years. Over time we had to replace the head three times and the handle five times. Same axe.

Richard Barraclough
January 11, 2015 3:52 am

The Met Office made an elementary error in averaging months of different lengths to produce the annual temperature. They have corrected this by averaging the 365 (or 366) daily temperatures, which leaves all the daily readings unchanged and increases the annual temperature very slightly because the shortest month is one of the coldest of the year.
To suggest that there is some subterfuge involved would be a pretty dumb isolated comment. How does it acquire the status of “Guest Essay”?
Well done Vuk for getting them to correct it. I’d noticed a slight discrepancy in my own spreadsheets, which also use the appropriate number of days for annual, seasonal and monthly averages, but had just assumed I must be missing the odd significant digit in the monthly figures here and there.

Reply to  Richard Barraclough
January 11, 2015 6:43 am

Sloppy work is not acceptable. It leads to thoughts of incompetence. How many years have those ‘learned’ folk been operating? They had to have an outsider point out something fundamentally wrong with their ability to average something correctly.

Reply to  Alex
January 11, 2015 6:51 am

A bit harsh vs Catto there, Alex.

Reply to  Alex
January 11, 2015 6:55 am

I was referring to the Met Office. I have no problem with Catto.

Reply to  Alex
January 11, 2015 8:43 am

Perhaps fiddling is warranted then, Alex. The previous yearly averaging was “sloppy work” according to you, and the new method is more correct.
I take it that you don’t prefer the sloppy work.

Reply to  Richard Barraclough
January 11, 2015 6:44 am

I also did a recalculation of yearly means adjusted for month length, and got results very close to the newest HadCET yearly values. One reason for the minimal discrepancy is of course that I did not bother to account for leap years. Perhaps they use daily recordings.
The reason the series is shifted up is of course that the reweighting reduces the influence of winter recordings relative to summer recordings. And as most of us know: summer is warmer than winter. This series uses absolute temperatures, and when summer recordings carry more weight relative to winter recordings, the series will give higher temperatures on average.
The difference is small, but the new calculation of yearly averages is obviously more correct. Some will of course maintain that it must be wrong because it is different from the previous calculation.
So what happens? Some lose their “confidence” in the series. Some call it rape. Etc. The usual stuff.
Now that we know the reason for the adjustment to the yearly averages: why not update the post? Would Catto answer his own questions? And perhaps he could correct this:
“The whole data set shows an average increase of 0.03 Deg C in 20 months or equivalent to 0.18 Deg C/decade.”
Which is just s….. wrong.

Reply to  rooter
January 11, 2015 7:20 am

My comments are about my dislike of data being fiddled with. I have absolutely no idea whether Catto has got his calculations wrong. Around the world you have stations moved and data adjusted to reflect what would have been if the station had been at a particular place for 200 years. So you have this ‘floating average’ that is in fact not an average of anything; it’s just -ish. You can’t have a floating ‘earth’ in electrical circuits because you get screwy results. The whole climate thing is so screwy now that I have no idea whether the climate is getting warmer or cooler. I’m better off licking my finger and testing the wind, or asking grandma if her bones are aching.
It’s ‘climate something’, not ‘climate science’.

January 11, 2015 6:00 am

It does seem strange that the producers of this record would first create monthly averages (fine), then average this intermediate result, with its rounding, a second time to produce the annual figure.
It seems obvious that you should use all 365 days’ worth of readings to create the annual figure, which will minimise any unwanted mathematical artefacts being introduced.
I wonder if this process is in common usage in the climate industry?

Reply to  steverichards1984
January 11, 2015 6:49 am

Using absolute temperatures will give this effect, small as it is. Using anomalies would make this effect even more negligible.

January 11, 2015 9:54 am

The accuracy of early thermometers has been questioned by some, but others in this thread suggest that because the early experimenters were ‘gentlemen scientists’ they would take great care to use well-made thermometers.
CET records temperature from 1659.
The Royal Society set up a committee in 1777 to find out how to reliably fix the temperature of boiling water.
Modern experiments show that at sea level the boiling point of water varies from 100 C to 103 C depending upon how you go about the boiling process, as documented in the above link.
How can we rely upon pre-1777 temperatures when we could not ‘fix’ the boiling point of water to enable the calibration of thermometers?
Also, when did it become common knowledge and practice to adjust your boiling-point calibration setting to account for your altitude?
Were many thermometers of the day made in the Midlands (Birmingham, UK, 99 m above sea level, which lowers the boiling point of water by roughly 0.3 C), or were they manufactured on the banks of the river Thames at sea level?
With so many factors creating genuine unknowns in early temperature metrology, perhaps we should stop trying to reduce the size of error bars and increase them instead.
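For what it’s worth, the altitude effect is easy to ballpark. The sketch below uses textbook standard-atmosphere constants and an inverted Clausius–Clapeyron relation (assumed round-number values, not a calibration procedure) to estimate the boiling point of water at a few altitudes:

```python
import math

P0 = 101325.0      # sea-level pressure, Pa
T0 = 288.15        # sea-level standard temperature, K
LAPSE = 0.0065     # temperature lapse rate, K per metre
EXPONENT = 5.2561  # g*M/(R*LAPSE) exponent of the standard atmosphere
L_VAP = 40660.0    # molar enthalpy of vaporisation of water, J/mol
R = 8.314          # gas constant, J/(mol K)
T_BOIL0 = 373.15   # boiling point of water at P0, K

def pressure_at(height_m: float) -> float:
    """Standard-atmosphere pressure (Pa) at a given altitude."""
    return P0 * (1.0 - LAPSE * height_m / T0) ** EXPONENT

def boiling_point(pressure_pa: float) -> float:
    """Boiling point of water (deg C) from an inverted Clausius-Clapeyron relation."""
    inv_t = 1.0 / T_BOIL0 - (R / L_VAP) * math.log(pressure_pa / P0)
    return 1.0 / inv_t - 273.15

for h in (0.0, 99.0, 1000.0):
    print(f"{h:6.0f} m: water boils at about {boiling_point(pressure_at(h)):6.2f} C")
```

At Birmingham’s ~99 m this gives a depression of roughly a third of a degree, the same order as the figure mentioned in the comment above.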

Reply to  steverichards1984
January 11, 2015 3:37 pm

Or you could of course just not use the recordings before ca. 1750. They have many issues, including measurements taken indoors, values that are not actual temperature readings, readings not taken in England, etc.

Reply to  steverichards1984
January 11, 2015 5:53 pm

So at 10.3 C it is actually 10 C. How accurately did they read ‘better quality’ thermometers up to 100 years ago? Have you heard of parallax error? Rounding up and rounding down?
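As a toy illustration of the rounding question (synthetic numbers, not CET readings): each observation rounded to the nearest whole degree carries up to half a degree of quantisation error, yet over many readings those random errors largely cancel in the mean.

```python
import random

random.seed(42)  # reproducible synthetic example

# 10,000 'true' temperatures, each then rounded to the nearest whole degree,
# as a coarse thermometer reader might record them
true_temps = [random.uniform(5.0, 15.0) for _ in range(10_000)]
read_temps = [round(t) for t in true_temps]

true_mean = sum(true_temps) / len(true_temps)
read_mean = sum(read_temps) / len(read_temps)

worst = max(abs(r - t) for r, t in zip(read_temps, true_temps))
print(f"worst single-reading error: {worst:.3f} C")
print(f"bias of the mean          : {read_mean - true_mean:+.4f} C")
```

This says nothing about parallax or systematic mis-reading, of course: only random rounding averages out, while a reader who consistently rounded up would bias every mean.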
