Guest essay by Ron Clutz
This is a study of what the world's best stations (a subset of all stations, selected as "world class" by the criteria below) are telling us about climate change over the long term. There are three principal findings.
To be included, a station needed at least 200 years of continuous records up to the present. Geographical location was not a criterion for selection, only the quality and length of the histories. The average record length in this dataset, extracted from CRUTEM4, is 247 years.
The 25 stations that qualified are located in Russia, Norway, Denmark, Sweden, the Netherlands, Germany, Austria, Italy, England, Poland, Hungary, Lithuania, Switzerland, France and the Czech Republic. I am indebted to Richard Mallett for his work identifying the best station histories and gathering and formatting the data from CRUTEM4.
The Central England Temperature (CET) series is included here from 1772, the onset of daily observations with more precise instruments. Those who have asserted that CET is a proxy for Northern Hemisphere temperatures will have some support in this analysis: CET at 0.38°C/Century nearly matches the central tendency of the group of stations.
1. A rise of 0.41°C per century is observed over the last 250 years.
| Summary | Value |
| --- | --- |
| Area | WORLD CLASS STATIONS |
| History | 1706 to 2011 |
| Stations | 25 |
| Average Length | 247 years |
| Average Trend | 0.41 °C/Century |
| Standard Deviation | 0.19 °C/Century |
| Max Trend | 0.80 °C/Century |
| Min Trend | 0.04 °C/Century |
The average station shows an accumulated rise of about 1°C over its record (0.41°C/Century over an average of 247 years). The large standard deviation, and the fact that at least one station shows almost no warming over the centuries, indicate that warming has not been extreme and varies considerably from place to place.
2. The warming is occurring mostly in the coldest months.
The average station reports that the coldest months, October through April, are all warming at 0.3°C/Century or more, while the warmest months, May through September, are warming at about 0.2°C/Century or less.
| Month | Trend (°C/Century) | Std Dev |
| --- | --- | --- |
| Jan | 0.96 | 0.31 |
| Feb | 0.37 | 0.27 |
| Mar | 0.71 | 0.27 |
| Apr | 0.33 | 0.28 |
| May | 0.18 | 0.25 |
| Jun | 0.13 | 0.30 |
| Jul | 0.21 | 0.30 |
| Aug | 0.16 | 0.26 |
| Sep | 0.16 | 0.28 |
| Oct | 0.34 | 0.27 |
| Nov | 0.59 | 0.23 |
| Dec | 0.76 | 0.27 |
In fact, the months of May through September warmed at an average rate of 0.17°C/Century, while October through April warmed at an average rate of 0.58°C/Century, more than three times as fast. This suggests that the climate is not getting hotter; it has become less cold.
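For readers who want to check that seasonal arithmetic, here is a quick sketch using the trend values from the table above:

```python
# Seasonal averages of the monthly trends tabulated above (°C/Century).
monthly = {"Jan": 0.96, "Feb": 0.37, "Mar": 0.71, "Apr": 0.33,
           "May": 0.18, "Jun": 0.13, "Jul": 0.21, "Aug": 0.16,
           "Sep": 0.16, "Oct": 0.34, "Nov": 0.59, "Dec": 0.76}

warm = ["May", "Jun", "Jul", "Aug", "Sep"]
cold = ["Oct", "Nov", "Dec", "Jan", "Feb", "Mar", "Apr"]

warm_avg = sum(monthly[m] for m in warm) / len(warm)  # ~0.17
cold_avg = sum(monthly[m] for m in cold) / len(cold)  # ~0.58
print(f"May-Sep: {warm_avg:.2f}  Oct-Apr: {cold_avg:.2f}  "
      f"ratio: {cold_avg / warm_avg:.1f}x")
```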
3. An increase in warming is observed since 1950.
In a long time series, there are likely periods when the rate of change is higher or lower than the rate for the whole series. In this study it was interesting to see period trends around three breakpoints:
- 1850, widely regarded as the end of the Little Ice Age (LIA);
- 1900, as the midpoint between the last two centuries of observations;
- 1950, as the date from which CO2 emissions are claimed to have begun causing higher temperatures.
For the set of stations the results are:
| Start | End | Trend (°C/Century) |
| --- | --- | --- |
| 1700s | 1850 | -0.38 |
| 1850 | 2011 | 0.95 |
| 1800 | 1900 | -0.14 |
| 1900 | 1950 | 1.45 |
| 1950 | 2011 | 2.57 |
From 1850 to the present, we see an average upward rate of almost a degree per century, 0.95°C/Century, or an accumulated rise of 1.53°C up to 2011. Contrary to conventional wisdom, the aftereffects of the LIA lingered until 1900. The average rate since 1950 is 2.57°C/Century, higher than the natural rate of 1.45°C/Century in the preceding 50 years. Of course, this analysis cannot identify the causes of the roughly 1.1°C/Century added to the rate since 1950. However, it is useful to see the scale of warming that might be attributable to CO2, among other factors.
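As a check on the arithmetic, a linear trend in °C/Century converts to an accumulated rise by multiplying by the span in centuries; a minimal sketch:

```python
def accumulated_rise(trend_per_century, start_year, end_year):
    """Total rise implied by a linear trend over a span of years."""
    return trend_per_century * (end_year - start_year) / 100.0

print(accumulated_rise(0.95, 1850, 2011))  # ~1.53 °C since 1850
print(2.57 - 1.45)  # ~1.1 °C/Century added to the rate after 1950
```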
Of course climate is much more than surface temperatures, but the media are full of stories about global warming, the hottest decade or month in history, and so on. So people do wonder: "Are present temperatures unusual, and should we be worried?" In other words, "Is it weather or a changing climate?" The answer in the place where you live depends on knowing your climate, that is, the long-term weather trends.
Note: These trends were calculated directly from the temperature records without any use of adjustments, anomalies or homogenizing. The principle is: To understand temperature change, analyze the changes, not the temperatures.
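The principle can be sketched in code: fit a least-squares slope to each station's record, then summarize the slopes across stations. A minimal illustration with toy records standing in for the CRUTEM4 extracts (an assumption of mine, not the workbook's actual calculation):

```python
import numpy as np

def station_trend(years, temps):
    """Least-squares slope of annual mean temperature, in °C/Century."""
    return np.polyfit(years, temps, 1)[0] * 100.0  # °C/yr -> °C/Century

# Toy records standing in for the real station histories (not the actual data).
rng = np.random.default_rng(0)
years = np.arange(1765, 2012)
stations = {f"station_{i:02d}": 8.0 + 0.004 * (years - years[0])
            + rng.normal(0.0, 0.6, years.size) for i in range(25)}

trends = [station_trend(years, t) for t in stations.values()]
print(f"mean {np.mean(trends):.2f}, sd {np.std(trends):.2f} °C/Century")
```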
Along with this post I have submitted the World Class TTA Excel workbook for readers to download for their own use and to check the data and calculations. You can download it from this link: World Class TTA (.xls)
For those who might be interested, the method and rationale are described at this link, along with the pilot test results on a set of Kansas stations:
TheLastDemocrat says:
July 28, 2014 at 11:59 am
Effects of the warming are supposedly evident in every corner of the globe. We are told that species habitats are moving all over the place. Species are going extinct all over the place. Drought, flood, tsunami, hail, and locust plagues are busting out all over.
=========================
TLD,
You forgot boils… boils are breaking out worldwide… and pustules, too.
Willis, thanks for the comment.
Are the temperature records imperfect? Absolutely.
Can we go somewhere and get perfect data? Not on this planet, not in our lifetimes.
Are the temperature histories "garbage?" No way; thousands of professional meteorologists are doing the best they can to document the weather as it happens.
My approach: Let’s take the best records we have (warts and all), and let the data with minimum processing tell us about the issue we have: Are present temperatures unusual, and should we be worried?
So which comes first, the chicken (warming) or the egg (CO2)? Who's on first? I don't know. Do you?
nick,
the answer to the chicken and egg question is here:
http://www.lavoisier.com.au/articles/greenhouse-science/solar-cycles/IanwilsonForum2008.pdf
Ron C. says:
July 28, 2014 at 3:30 pm
Agreed.
Agreed.
Here we part company. First, in the US at least, the people collecting the data are not “professional meteorologists”. They are volunteers. And indeed, any records prior to about 1900 are extremely unlikely to be from “professional meteorologists”. So your point about meteorologists simply won’t wash.
Second, no matter whether the data collectors were professional meteorologists, most if not all temperature records have the problems I noted:
Third, as far as I can see you have done absolutely no quality control of any kind … so how on earth would you even begin to know if some, many, or all of them were unfit for the purpose to which you have put them?
That is not what you claimed above. Above you made the claim that you had selected the records for “quality”, viz:
Since you have done absolutely no quality control, how could you possibly know which are the “best records”? And if you can’t trust your data and you’ve done no quality control, there is no possible way to know if present temperatures are “unusual” in any way.
Ron, I understand your desire to use the “raw data”, but in most cases and in most fields, doing absolutely no quality control of your data is a serious mistake. Remember that e.g. a change in observation time can easily create a totally spurious warming, and a station move of only fifty feet can easily create a totally spurious cooling.
So while I totally disagree with the automated “homogenization” algorithms used by e.g. Berkeley Earth and other organizations, I also am mad keen about accounting for known errors in any dataset. If you know for a fact from the station metadata that a station move occurred in e.g. 1917, and the station record shows a 1° drop in temperature in 1917, and the other nearby stations show no such drop, you’d be a fool to use that record as is. GIGO, with the result that you are misleading both yourself and your readers.
Having said that, you are close to the finish line. Were I in your shoes I'd obtain the metadata for the 25 stations and note carefully any changes in instruments, time of observation, location, and the rest. I suspect that the information is most easily available from the Berkeley Earth dataset.
Then I'd look at the data and see if there is a visible jump at that time, using one of the recognized algorithms for detecting a step change. If so, simply cut the one record into two records at the step change.
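[Willis doesn't name a specific algorithm here; SNHT and the Pettitt test are the usual candidates. Purely as an illustration, a toy least-squares scan for a single step change in an annual series might look like this:

```python
import numpy as np

def simple_step_scan(series, min_seg=10):
    """Toy single-changepoint scan: pick the split that minimizes the
    within-segment sum of squares. Illustrative only; formal tests such
    as SNHT or Pettitt add proper significance testing."""
    x = np.asarray(series, dtype=float)
    best_k, best_shift, best_sse = None, 0.0, np.inf
    for k in range(min_seg, len(x) - min_seg):
        left, right = x[:k], x[k:]
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if sse < best_sse:
            best_k, best_shift, best_sse = k, right.mean() - left.mean(), sse
    return best_k, best_shift  # index of the split and size of the jump
```

If the best split lines up with a documented station move and the jump is large relative to the year-to-year noise, cutting the record there is the conservative choice.]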
Then I’d take the “first differences” of all the datasets, average them by year, and cumulatively sum them. This would give me the average of the data, from which I’d obtain the trends over the periods.
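[The first-difference recipe can be sketched in a few lines, assuming each station is a pandas Series of annual means indexed by year; this is a sketch of the described method, not code from the post:

```python
import pandas as pd

def first_diff_composite(station_series):
    """Difference each station year over year, average the differences
    across stations for each year, then cumulatively sum. The result is
    a composite anomaly series, not a temperature."""
    diffs = pd.concat([s.diff() for s in station_series], axis=1)
    return diffs.mean(axis=1).cumsum()

# Toy usage with two short stations:
a = pd.Series([8.0, 8.2, 8.1, 8.4], index=[1900, 1901, 1902, 1903])
b = pd.Series([9.0, 9.1, 9.3, 9.2], index=[1900, 1901, 1902, 1903])
print(first_diff_composite([a, b]))
```
]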
But to just take raw data with absolutely no quality control and use them as is? Sorry, that’s anathema in my world.
Please take this in the supportive sense in which it is offered. Your work is a start, but there is more to do before any weight can be put on your results.
All the best,
w.
Reply to Willis Eschenbach :-
Since I’m a raw novice here, I will need some help along the way.
Berkeley Earth provides raw and adjusted values, as well as flags for station moves, gaps, TOBS changes and other inhomogeneities, as well as a regionally expected time series, so that looks potentially very useful.
What do you mean by ‘one of the recognized algorithms for detecting a step change’ ?
If you could help me along the way, I would be very grateful.
Richard Mallett says:
July 28, 2014 at 2:31 pm
As with many temperature records, the long-term Armagh record (which starts in 1796) is made up of several overlapping records from one station. This is quite common, and is likely (unknown to either you or Ron) the case with a number of the allegedly “continuous” records you have used in your dataset. This highlights another problem with using station records “as is”, of just grabbing “raw data” without examining its provenance … you don’t know whether it’s even “raw” at all.
In any case, the Armagh record goes back to 1796, as detailed here. That description is well worth a read, as they detail a number of issues with the datasets, and how they have adjusted for them.
By modern standards, not really. Remember that none of these stations was established or maintained with any idea that they would be used to determine century-long trends to the nearest tenth of a degree. Any given one may or may not be usable even after removing the obvious documented inhomogeneities.
See my reply to Ron above.
Regards, perseverance furthers.
w.
Willis, I beg to differ. You dismiss out of hand the quality control done by the NMSs. I do not.
And your averaging of temperatures will lose the variations which are the very thing of interest.
Willis, your link in your reply to Richard comes up “forbidden.”
Steven Mosher July 28, 2014 at 9:50 am says:
“CET is not a station. it has been adjusted and homogenized.
…now people will defend using adjusted data. crutemp4 is adjusted and homogenized. CET is likewise.”
This is just the problem. He knows it; I and many other workers do not. This makes it impossible for us to know what the real temperature history of the earth is. If you find errors, write a paper about it. Under no circumstances should a publicly available data-set be changed while pretending that it is still the same data-set. Such stealth changes to scientific data are not scientific and are an invitation to manufacture false information about the climate, meant to support a particular view of temperature history.

There is almost no chance of checking this unless you get lucky. I got lucky when I compared satellite and ground-based data for the eighties and nineties. Satellite data show that there was no warming between 1979 and 1997. There were five El Nino peaks, but the mean temperature stayed constant for 18 years. But this is not what you find in the GISS, NCDC, and HadCRUT data-sets. They show an upward slope that gains 0.1 degrees Celsius between these two data points. This happens to be important for current temperature history.

As everybody knows, there has been no warming for the last 17 years. What you don't know is that in the eighties and nineties there was also no warming for 18 years. You don't know this thanks to the phony temperature graphs that are foisted upon us as climate science. Evaluating these two temperature standstills together, you will see that they are separated by the super El Nino of 1998. If it wasn't for the interference from this super El Nino, the two flat regions would have joined up. An unexpected feature of this interference is that global temperature took a step warming of 0.3 degrees Celsius right after the super El Nino left, which is why the twenty-first century is warmer than the nineties.

This is neatly fudged out in the three temperature curves by giving the eighties and the nineties an upward slope. Their cooperation is apparently inter-continental. How do I know this? Because they screwed up when they computer-processed all three temperature curves by an identical device. As an unanticipated consequence, this computer processing left traces of its work on the finished product: sharp upward spikes at the beginnings of years, in the exact same locations in all three databases (statisticians, calculate this!). Twelve of them are easily visible if you have a good-resolution graph.
Finally, I want to quote from Michael Crichton’s presentation to the United States Senate in 2005:
"…let me tell you a story. It's 1991, I am flying home from Germany, sitting next to a man who is almost in tears, he is so upset. He's a physician involved in an FDA study of a new drug. It's a double-blind study involving four separate teams—one plans the study, another administers the drug to patients, a third assesses the effect on patients, and a fourth analyzes results. The teams do not know each other, and are prohibited from personal contact of any sort, on peril of contaminating the results. This man had been sitting in the Frankfurt airport, innocently chatting with another man, when they discovered to their mutual horror they were on two different teams studying the same drug. They were required to report their encounter to the FDA. And my companion was now waiting to see if the FDA would declare their multi-year, multi-million-dollar study invalid because of this contact.
For a person with a medical background, accustomed to this degree of rigor in research, the protocols of climate science appear considerably more relaxed. A striking feature of climate science is that it’s permissible for raw data to be “touched,” or modified, by many hands. Gaps in temperature and proxy records are filled in. Suspect values are deleted because a scientist deems them erroneous. A researcher may elect to use parts of existing records, ignoring other parts. But the fact that the data has been modified in so many ways inevitably raises the question of whether the results of a given study are wholly or partially caused by the modifications themselves.”
@Ron C., who says, Let’s take the best records we have (warts and all), and let the data with minimum processing tell us about the issue we have: Are present temperatures unusual, and should we be worried?
Are present temperatures unusual? Yes. Geologically speaking, they’re unusually cold.
Should we be worried? Crimenently, we're in the early stretches of a freakin' ice age. This one is generally considered to have begun 2.5-3 million years ago. The shortest previous ice age in the geologic record is 30 million years long — a full order of magnitude longer. The average length of an ice age in the geologic record is more like 90 million years. My money sez the ice is coming back — and considering that 460 mya, CO2 concentrations were four THOUSAND ppm and we were, nonetheless, in a deep ice age (the 30-million-year one, in fact), I don't think there is anything that will save Manhattan from being scraped off the face of the continent. Not that I'll miss it. And the residents will have time to move.
Honest to goodness, there are far too few paleoclimatologists involved in this discussion. Yours was a very nice analysis, but most everybody talking on this topic just makes me think of a bunch of mayflies worrying about the afternoon getting hotter — and the afternoon, I must add, of a warm day in January.
Your non-article value of 1998-2010 being -1.77 C/century — now, that stood my neck hairs on end!
Ron C. says:
July 28, 2014 at 5:05 pm
Perhaps that is because I am more aware of the quality of their quality control than you are … I’ve seen hideous stuff that has passed local muster. But please note I don’t “dismiss it out of hand”, on the contrary. I simply suggest that it is incumbent on YOU to do quality control yourself, regardless of who else you think has done it.
Sorry, but I don’t understand that. Which “very thing of interest” is lost by averaging?
Finally, let me recommend the following paragraphs to you from the Armagh study cited above, which is an excellent introduction to the art and science of quality control of temperature datasets, a study which I would recommend that you read very carefully. From their introduction (emphasis mine):
So I greatly suspect that your long series are composites of two or more records from a single site, rather than single continuous records … but since you’re doing no quality control of any kind, how would you know? Heck, you’ve even included the CET in your dataset, which is a pastiche of stations, the number and identity of which changed over time … it’s a reasonably good pastiche, but not “raw data” by any definition.
Look, I’m not trying to bust your chops, Ron, you’re well on the way. I’m just trying to let you know how to convert what you’ve done to a serious study. Saying that you “beg to differ” should be put off until you’ve actually taken a long hard look at your own data and have inspected the metadata for each and every station. Claiming that you trust the NMSs (national meteorological services) have done a good job will just make serious students of the subject laugh and pass your work by. A real scientist trusts nothing, especially himself, and certainly not the work of random foreign bureaucrats …
My best to you,
w.
OK, Willis, I will think on that. Do you have a link that doesn't return "forbidden?"
[I put it into my dropbox here. -w.]
Reply to Ron C :-
climate.arm.ac.uk/calibrated/airtemp/Met-Data-Vol6.pdf is probably the same. I would strongly recommend that we take Willis’ advice. He’s a good man.
I doubt that taking temperature records from two separate stations, and averaging them, gives you anything physically meaningful. So I DEFINITELY doubt that taking hundreds of station records and averaging them gives you anything physically meaningful either.
Why? Intensive properties: temperature is an intensive quantity, and an average of intensive values from different systems is not the temperature of anything physical.
This entire post, along with many others, is really academic.
I did similar work with Tmin and Tmax from a larger dataset and found similar results. But because of the larger area it covered, I found that the large changes to Tmin took place regionally at different times. I suspect that this is a function of SSTs suddenly changing.
Alternate link for the Armagh study is here.
w.
Can the summer/winter anomaly simply be explained by the local populations heating their homes and work areas in winter, directly adding heat to the local climate, whereas in summer, even with air conditioning (not extensively used in these mostly northern European locales), there is little or no heat added to the system (just concentrated in a/c exhaust)?
Hi guys,
I live in De Bilt (the small town that is home to the Dutch meteorological institute), and I can tell you the town grew from marshy land without roads or major built-up areas into a fairly large town: the marsh was reclaimed and is now a major crossroads of two main highways (all black tarmac). This all happened in the last 70 years. Furthermore, 30 km away they reclaimed 2,500 km2 of inland sea into dry land (farmland). I think those effects would cause the anthropogenic local warming.
“This entire post, along with many others, is really academic.”
1. Uses data with large non-validated “corrections” and homogenisations.
2. Fits linear trends to data that is not at all linear
3. Does not state selection criteria, no visible QA before declaring data to be “world class”
4. No uncertainty figures for data or fitted “trends”.
Yes, that does seem to be typical of what passes in academia these days, so I suppose it is accurate to call it "really academic." I'm sure he could get these unfounded "trends" published in peer-reviewed journals.
Ron C.” Willis, I beg to differ. You dismiss out of hand the quality control done by the NMSs. I do not.”
Indeed, you accept it without question and without examination. You then state that your results are obtained without homogenisation, which gives the impression you are working with raw data, a situation you then refuse to clarify.
“Note: These trends were calculated directly from the temperature records without any use of adjustments, anomalies or homogenizing. ”
So you have used adjusted and homogenised data “without any use of adjustments, anomalies or homogenizing. ” LOL.
I suggest you look at the adjustments that are made to the many HISTALP records that you have included. Those long records have very little long term trend until they are “corrected”.
http://climategrog.wordpress.com/?attachment_id=999
http://www.slf.ch/fe/landschaftsdynamik/dendroclimatology/Publikationen/index_DE/Bohm_2010_ClimCha.pdf
Jeff Alberts,
That is why we did no averaging of temperatures. These are independent trends taken directly from the data. We analyzed the changes, not the temperatures!
Greg Goodman
My bad, for trusting the CRUTEM4 data. So even if, as you seem to believe, the data keepers have baked some warming into the records, the analysis still shows modest warming, mostly in the coldest months.
Reply to Ron C :-
This is what I always say. If the adjusted / averaged NCDC / GISS / HadCRUT4 show a trend since 1880 of 0.64 C per century, then it’s likely that the ‘real’ value is even less than that.
Good effort, but as most of your stations are urban, what you have mainly demonstrated is the UHI effect over that time period, especially for Paris, Berlin Tempelhof (an urban airport) and Copenhagen. Prague showed only a very modest trend until the site was moved to the airport. De Bilt, like CET, is a composite of sites known to have problems with UHI; CET is a composite of changing stations, several of which are at airports (Ringway, Rothamsted, Luton, Squires Gate), and these show differing trends. Look at the data from Frank Lansner at Hidethedecline to see these effects.

I do not see any truly rural stations on your list, though several exist; you can find them at http://www.john-daly.com/ges/surftmp/surftemp.htm with most included in the BEST climate map. Good examples of single rural station records are Valentia and Armagh, Ireland from the 1860s; Vardo, Norway from 1949; Akureyri 1882; Haparanda, Finland from 1860; Lampasas, Texas 1890; Punta Arenas, Chile 1888; Adelaide, Australia 1857 (urban, with a cooling trend!); Snoqualmie Falls, USA 1899; Thorshavn, Faroe 1857; Angmagssalik, Greenland 1895; Darwin, Australia 1882 (urban, with cooling trend); Lander, USA 1892; Lamar, Colorado 1898; Spickard, Missouri 1896; Concord, USA 1870; Sodankyla, Finland 1908; Farmington, Maine 1890; Gloversville, NY 1893; West Point and Central Park 1820 (showing a UHI effect over this period of approx 2°C); Waverley, USA 1883; etc. Most of these show no trend, 1930s warmer than 1990s, or cooling, especially in the northern NH where, according to Arrhenius, a doubling of CO2 should have its greatest effect.
Excellent comment there.
Let me ask this … Mosher (or anyone else), is there *any* pure station that you are aware of? The criterion for "pure," in my humble opinion, is that the data have not been tampered with *and* the station has not moved, nor have its surroundings been compromised. Does such a thing even exist, where we can compare data at one single location over a long time period?
Furthermore, why can we not create at least one single station today using the exact original equipment, same time of observation, carefully recreating the same surroundings to ensure an Apples to Apples scientific control? It seems to me that this is the most scientific approach to comparing temps from one time period to another. Adjusting temp data is utterly ridiculous. Even if these people could be trusted (and I don’t think so) why introduce error pathways and new variables to something so damn simple?
I’m sure I remember Leif stating that something similar to this idea is being done with sunspots (using original centuries old telescopes and locations). If I am remembering correctly I would ask him to explain this and perhaps suggest how this might be accomplished in the temperature gathering field.
Peter Aslac
Thanks for your comment, your list of stations, and the link.
Ron,
I'm not seeing any trends for individual stations, only single numbers for all of them, unless I'm missing something. You've combined stations in some way to get single numbers. If you're talking of trends for something someone else already averaged, the problem still exists. You've only ended up with a fanciful statistical construct that has no meaning in the real world.
Reply to Jeff Alberts :-
The trends for individual stations are in the spreadsheet.
Ron C,
I have 120 million records from about 25,000 stations that I've assembled into averages for various-sized areas, working with Min, Max, the day-to-day difference in Min and Max, overnight cooling, surface pressure, humidity and rain. Just follow the URL in my name.
I’m currently generating a 1 x 1 Lat/Lon box for the globe that I will upload once it’s finished.
I've taken the other tack: I do minimal filtering of stations, so there's no quibbling over which stations I included. The important part is that I came to a similar conclusion, that "warming" is just swings of Min temp: http://www.science20.com/virtual_worlds/blog/global_warming_really_recovery_regional_cooling-121820
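[For readers wondering what a 1 x 1 degree gridding step might look like, here is a minimal sketch; the DataFrame and column names are hypothetical, not Mi Cro's actual code:

```python
import numpy as np
import pandas as pd

def grid_1x1(df):
    """Average station readings into 1-degree lat/lon boxes.
    df is a hypothetical DataFrame with 'lat', 'lon', 'temp' columns."""
    boxes = df.assign(lat_box=np.floor(df["lat"]), lon_box=np.floor(df["lon"]))
    return boxes.groupby(["lat_box", "lon_box"])["temp"].mean()

# Toy usage:
readings = pd.DataFrame({"lat": [51.2, 51.8, 52.3],
                         "lon": [4.5, 4.9, 5.1],
                         "temp": [9.8, 10.1, 9.5]})
print(grid_1x1(readings))
```
]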