Guest essay by Ron Clutz
This is a study of what the world’s best stations (a subset I selected as “world class” by the criteria below) are telling us about climate change over the long term. There are three principal findings.
To be included, a station needed at least 200 years of continuous records up to the present. Geographical location was not a criterion for selection, only the quality and length of the histories. The average length of service in this dataset, extracted from CRUTEM4, is 247 years.
The 25 stations that qualified are located in Russia, Norway, Denmark, Sweden, the Netherlands, Germany, Austria, Italy, England, Poland, Hungary, Lithuania, Switzerland, France and the Czech Republic. I am indebted to Richard Mallett for his work to identify the best station histories and to gather and format the data from CRUTEM4.
The Central England Temperature (CET) series is included here from 1772, the onset of daily observations with more precise instruments. Those who have asserted that CET is a proxy for Northern Hemisphere temperatures will find some support in this analysis: CET, at 0.38°C/Century, nearly matches the central tendency of the group of stations.
1. A rise of 0.41°C per century is observed over the last 250 years.
| Area | World Class Stations |
|---|---|
| History | 1706 to 2011 |
| Stations | 25 |
| Average length | 247 years |
| Average trend | 0.41 °C/Century |
| Standard deviation | 0.19 °C/Century |
| Max trend | 0.80 °C/Century |
| Min trend | 0.04 °C/Century |
The average station shows an accumulated rise of about 1°C over its 247-year record (0.41°C/Century × 2.47 centuries). The large standard deviation, and the fact that at least one station shows almost no warming over the centuries, indicate that warming has not been extreme and varies considerably from place to place.
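For readers who want to check the arithmetic, here is a minimal sketch of how the table’s figures can be reproduced from per-station trends. The five values below are placeholders, not data from the workbook; the actual 25 trends are in the World Class TTA spreadsheet.

```python
import numpy as np

# Placeholder per-station trends in °C/Century -- the actual 25 values
# are in the World Class TTA workbook; these are illustrative only.
station_trends = np.array([0.04, 0.28, 0.41, 0.52, 0.80])

print(f"Average trend: {station_trends.mean():.2f} °C/Century")
print(f"Std deviation: {station_trends.std(ddof=1):.2f} °C/Century")
print(f"Max / Min:     {station_trends.max():.2f} / {station_trends.min():.2f}")

# Accumulated rise for the average station over its 247-year record:
print(f"Accumulated rise: {0.41 * 2.47:.2f} °C")   # about 1 °C
```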
2. The warming is occurring mostly in the coldest months.
The average station reports that the coldest months, October through April, are all warming at about 0.3°C/Century or more, while the warmest months, May through September, are warming at roughly 0.2°C/Century or less.
| Month | Trend (°C/Century) | Std Dev |
|---|---|---|
| Jan | 0.96 | 0.31 |
| Feb | 0.37 | 0.27 |
| Mar | 0.71 | 0.27 |
| Apr | 0.33 | 0.28 |
| May | 0.18 | 0.25 |
| June | 0.13 | 0.30 |
| July | 0.21 | 0.30 |
| Aug | 0.16 | 0.26 |
| Sep | 0.16 | 0.28 |
| Oct | 0.34 | 0.27 |
| Nov | 0.59 | 0.23 |
| Dec | 0.76 | 0.27 |
In fact, the months of May through September warmed at an average rate of 0.17°C/Century, while October through April warmed at an average rate of 0.58°C/Century, more than three times as fast. This suggests that the climate is not getting hotter; it is becoming less cold.
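Those two seasonal averages follow directly from the table above; a quick check in Python confirms them:

```python
# Seasonal split of the monthly trends (°C/Century) from the table above.
monthly_trends = {
    "Jan": 0.96, "Feb": 0.37, "Mar": 0.71, "Apr": 0.33,
    "May": 0.18, "June": 0.13, "July": 0.21, "Aug": 0.16,
    "Sep": 0.16, "Oct": 0.34, "Nov": 0.59, "Dec": 0.76,
}
warm = ["May", "June", "July", "Aug", "Sep"]              # May-September
cold = ["Oct", "Nov", "Dec", "Jan", "Feb", "Mar", "Apr"]  # October-April

warm_avg = sum(monthly_trends[m] for m in warm) / len(warm)
cold_avg = sum(monthly_trends[m] for m in cold) / len(cold)
print(f"May-Sep average: {warm_avg:.2f} °C/Century")   # 0.17
print(f"Oct-Apr average: {cold_avg:.2f} °C/Century")   # 0.58
print(f"Ratio: {cold_avg / warm_avg:.1f}x")            # about 3.5x
```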
3. An increase in warming is observed since 1950.
In a long time series, there are likely periods when the rate of change is higher or lower than the rate for the whole series. In this study it was interesting to see period trends around three breakpoints:
- 1850, widely regarded as the end of the Little Ice Age (LIA);
- 1900, as the midpoint between the last two centuries of observations;
- 1950, as the date from which CO2 emissions are claimed to begin causing higher temperatures.
For the set of stations the results are:
| Start | End | Trend (°C/Century) |
|---|---|---|
| 1700s | 1850 | -0.38 |
| 1850 | 2011 | 0.95 |
| 1800 | 1900 | -0.14 |
| 1900 | 1950 | 1.45 |
| 1950 | 2011 | 2.57 |
From 1850 to the present, we see an average upward rate of almost a degree per century, 0.95°C/Century, or an observed rise of about 1.53°C up to 2011. Contrary to conventional wisdom, the after-effects of the LIA lingered until 1900. The average rate since 1950 is 2.57°C/Century, higher than the natural rate of 1.45°C/Century in the preceding 50 years. Of course, this analysis cannot identify the causes of the roughly 1.1°C/Century added to the rate since 1950. However, it is useful to see the scale of warming that might be attributable to CO2, among other factors.
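As a sketch of how such period trends can be computed, the snippet below fits an ordinary least-squares slope over a chosen interval. The variable names and the synthetic demo series are my own illustration; the actual calculations are in the workbook.

```python
import numpy as np

def period_trend(years, temps, start, end):
    """Least-squares slope over [start, end], returned in °C/Century.
    Years left blank (NaN) are simply skipped."""
    mask = (years >= start) & (years <= end) & ~np.isnan(temps)
    slope_per_year = np.polyfit(years[mask], temps[mask], 1)[0]
    return slope_per_year * 100.0   # °C/year -> °C/Century

# Synthetic demo: an annual series warming at exactly 1.0 °C/Century.
years = np.arange(1850.0, 2012.0)
temps = 9.0 + 0.01 * (years - 1850.0)
print(period_trend(years, temps, 1950, 2011))   # ~1.0
```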
Of course climate is much more than surface temperatures, but the media are full of stories about global warming, the hottest decade or month in history, and so on. So people do wonder: “Are present temperatures unusual, and should we be worried?” In other words, “Is it weather or a changing climate?” The answer for the place where you live depends on knowing your climate, that is, the long-term weather trends.
Note: These trends were calculated directly from the temperature records, without any adjustments, anomalies or homogenization. The principle is: to understand temperature change, analyze the changes, not the temperatures.
Along with this post I have submitted the World Class TTA Excel workbook for readers to download for their own use and to check the data and calculations. You can download it from this link: World Class TTA (.xls)
For those who might be interested, the method and rationale are described at this link, along with the pilot test results on a set of Kansas stations.
Jeff
As Richard says, look at the individual station sheets. There you will find the procedure: the monthly slopes are calculated for each station, and those slopes are averaged to give the station trend.
No temperatures are averaged across stations.
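In code, the per-station procedure Ron describes might look something like this minimal sketch (the function and array names are illustrative, not from the workbook):

```python
import numpy as np

def station_trend(years, monthly_temps):
    """Trend for one station, in °C/Century.
    monthly_temps has shape (n_years, 12); NaN marks a blank month.
    One slope is fitted per calendar month, then the 12 slopes are
    averaged -- temperatures themselves are never averaged."""
    slopes = []
    for m in range(12):
        col = monthly_temps[:, m]
        ok = ~np.isnan(col)
        if ok.sum() >= 2:            # need at least two points to fit
            slopes.append(np.polyfit(years[ok], col[ok], 1)[0])
    return np.mean(slopes) * 100.0   # °C/year -> °C/Century

# The group figure is then the mean of the 25 station trends:
# group_trend = np.mean([station_trend(y, t) for (y, t) in stations])
```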
Micro
Impressive work. Much more quantitative than my admittedly qualitative approach. Good to see that your work is supportive.
Ron C. commented
And this is important: different people doing similar but independent work are getting similar results, results that do not show the trend that over-processing the data produces.
Willis Eschenbach says:
July 28, 2014 at 4:37 pm
Willis, I was a little surprised by your suggestion to use a breakpoint identification algorithm for reviewing the subject data. If I remember correctly, there was a fairly lengthy discussion here (at WUWT) on this subject just recently. Wasn’t it something to the effect that the algorithm inherently added warming based on the way it was set up? Or is there a difference between a breakpoint identification algorithm and the one that was discussed previously (which didn’t just identify, but also adjusted)?
Just curious. Thanks in advance.
rip
Ron C. says:
Greg Goodman
My bad, trusting the CRUTEM4 data. So even if, as you seem to believe, the data keepers have baked some warming into the records, the analysis shows modest warming, and mostly in the coldest months.
====
Thanks Ron. I’m not saying CRUTEM4 has had unjustified changes made: I have not looked. My main objection was that the way you presented it read as if it were not adjusted/homogenised data, which it is.
The HISTALP data, from which you use several stations, has huge adjustments, and if you read the Boehm paper it is fairly obvious there are some pretty stupid errors in what they’ve done. Yet they are playing games and hiding the data to prevent anyone from validating or correcting what they’ve done.
That is a shame, because they are potentially some of the most valuable long-term records.
Yet more activist scientists who seem to think throwing scientific integrity under a bus is somehow going to help the enviro cause. In fact they are destroying it.
ripshin says:
July 29, 2014 at 11:42 am
Good question, rip. The issue in that discussion was the often indiscriminate use of the breakpoint algorithm to identify putative breakpoints that are NOT shown in the metadata.
I’m suggesting going at it the other way: use the metadata to identify physical changes (instrumentation, location, etc.).
Now, if your accurate thermometer breaks and you replace it with an equally accurate thermometer, no harm, no foul; you don’t need to do anything. But if a location change has introduced a drop or rise in the average temperature, or you’ve replaced an accurate thermometer with a biased thermometer, you need to deal with that.
So I’m recommending the use of the breakpoint algorithm to VERIFY that the known physical change has resulted in a step change in the temperature. How you deal with that is up to you.
w.
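As a rough sketch of Willis’s metadata-first approach (this is my illustration, not code from the thread), one could test whether a documented station change coincides with a step in the record by comparing the means on either side of the change date:

```python
import numpy as np
from scipy import stats

def verify_breakpoint(years, temps, change_year, window=10):
    """Compare mean temperature in the `window` years before and after
    a station change documented in the metadata, using Welch's t-test.
    Returns the apparent step size (°C) and its p-value."""
    before = temps[(years >= change_year - window) & (years < change_year)]
    after = temps[(years > change_year) & (years <= change_year + window)]
    before, after = before[~np.isnan(before)], after[~np.isnan(after)]
    t_stat, p_value = stats.ttest_ind(before, after, equal_var=False)
    return after.mean() - before.mean(), p_value
```

A small p-value at a documented change date would support treating it as a genuine step; an insignificant result suggests the change did no harm.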
This is really interesting, but I have a question. As someone who has been recording temperatures as part of non-climate studies for the past 45 years, I note that the lab thermometers I started out using in the 1960s had variations of about 2°C within a lot of 25 lab mercury-type thermometers. In recent decades I have noted that the variation has improved to about 1°C among a similar lot of thermometers. That’s comparing the same brand and production cycles of thermometers; changing brands produced different error variability. As well, in our research we noted that individuals varied in how they read the same temperature on the same thermometer. I believe the science of measurement is called metrology.
Going back to the 1700s, you would be dealing with hand-made thermometers from many different producers, or self-made by the data gatherer, with obviously even higher variance. While you can average out measurements from similar variable sources, you cannot accurately average out measurements with different variables: different instrument producers, brands, measurement protocols, etc. So my question is: how have these obvious inaccuracies been sorted out of not only the historical temperature recordings and recording protocols, but even present-day ones, to accurately conclude increments of 0.1°C or less?
Durwood
Good comment. It is also why I avoid any averaging of the temperatures themselves. As said above, the records are imperfect. Considerable effort is applied to the station reports to identify erroneous daily data points, which are then removed in calculating the station’s monthly average. With too many missing dailies, the average for that month is left blank. Since we are working with trends of station-specific monthly averages, occasional scattered blanks do not impede the linear regressions.
But your main point is that we cannot claim to know more than we know; the analysis is qualitative and meant to provide a historical frame of reference.
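For the “too many missing dailies” rule Ron mentions, a sketch might look like the following; the cutoff of five missing days is my assumption, not a figure from the post:

```python
import numpy as np

def monthly_mean(daily_temps, max_missing=5):
    """Station monthly average: flagged or missing dailies (NaN) are
    dropped, and the whole month is left blank (NaN) when too many
    dailies are missing. The threshold here is illustrative."""
    vals = daily_temps[~np.isnan(daily_temps)]
    if len(daily_temps) - len(vals) > max_missing:
        return np.nan                 # blank month; regressions skip it
    return vals.mean()

# Example: one missing daily out of 30 still yields a monthly average.
june = np.array([15.2, 16.1, np.nan, 14.8] + [15.5] * 26)
print(monthly_mean(june))
```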