Analysis of Temperature Change using World Class Stations

Guest essay by Ron Clutz

This is a study of what the world’s best stations (a subset of all stations, selected as “world class” by explicit criteria) are telling us about climate change over the long term. There are three principal findings.

To be included, a station needed at least 200 years of continuous records up to the present. Geographical location was not a criterion for selection, only the quality and length of the histories. The average record length in this dataset, extracted from CRUTEM4, is 247 years.
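The selection rule described above can be sketched as a simple filter. The station tuples below are hypothetical, for illustration only; the actual records came from CRUTEM4.

```python
# Selection rule: at least 200 years of continuous records up to the present.
# Station records here are illustrative, not actual CRUTEM4 entries.
PRESENT = 2011  # last year covered by this dataset

stations = {
    "Station A": (1706, 2011),  # (first year, last year)
    "Station B": (1850, 2011),  # reaches the present, but under 200 years
    "Station C": (1756, 2005),  # long record, but does not reach the present
}

def qualifies(first, last, min_years=200, present=PRESENT):
    """True if the record reaches the present and spans at least min_years."""
    return last >= present and (last - first + 1) >= min_years

selected = sorted(name for name, (f, l) in stations.items() if qualifies(f, l))
```

Only "Station A" passes both tests, which is the intent of the rule: length alone is not enough, and currency alone is not enough.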

The 25 stations that qualified are located in Russia, Norway, Denmark, Sweden, the Netherlands, Germany, Austria, Italy, England, Poland, Hungary, Lithuania, Switzerland, France and the Czech Republic. I am indebted to Richard Mallett for his work identifying the best station histories and gathering and formatting the data from CRUTEM4.

The Central England Temperature (CET) series is included here from 1772, the onset of daily observations with more precise instruments. Those who have asserted that CET is a proxy for Northern Hemisphere temperatures will have some support in this analysis: CET at 0.38°C/Century nearly matches the central tendency of the group of stations.

1. A rise of 0.41°C per century is observed over the last 250 years.

Area: WORLD CLASS STATIONS
History: 1706 to 2011
Stations: 25
Average Length: 247 Years
Average Trend: 0.41 °C/Century
Standard Deviation: 0.19 °C/Century
Max Trend: 0.80 °C/Century
Min Trend: 0.04 °C/Century

The average station shows an accumulated rise of about 1°C over its roughly two-and-a-half-century record. The large standard deviation, and the fact that at least one station shows almost no warming over the centuries, indicate that warming has not been extreme and varies considerably from place to place.
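The “about 1°C” figure follows directly from the table: a minimal arithmetic check, assuming the average trend applies over the average record length.

```python
# Accumulated rise = average trend (°C/century) x average length (centuries).
avg_trend = 0.41          # °C per century, from the table above
avg_length_years = 247    # average record length in years

rise = avg_trend * avg_length_years / 100  # about 1.01 °C
```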

2. The warming is occurring mostly in the coldest months.

The average station reports that the coldest months, October through April, are all warming at 0.3°C/Century or more, while the hottest months are warming at about 0.2°C/Century or less.

Month °C/Century Std Dev
Jan 0.96 0.31
Feb 0.37 0.27
Mar 0.71 0.27
Apr 0.33 0.28
May 0.18 0.25
June 0.13 0.30
July 0.21 0.30
Aug 0.16 0.26
Sep 0.16 0.28
Oct 0.34 0.27
Nov 0.59 0.23
Dec 0.76 0.27

In fact, the months of May through September warmed at an average rate of 0.17°C/Century, while October through April warmed at an average rate of 0.58°C/Century, more than three times as fast. This suggests that the climate is not getting hotter; it has become less cold.
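The seasonal split quoted above can be reproduced directly from the monthly table:

```python
# Monthly trends (°C/century) copied from the table above.
monthly = {
    "Jan": 0.96, "Feb": 0.37, "Mar": 0.71, "Apr": 0.33,
    "May": 0.18, "Jun": 0.13, "Jul": 0.21, "Aug": 0.16,
    "Sep": 0.16, "Oct": 0.34, "Nov": 0.59, "Dec": 0.76,
}
warm_months = ("May", "Jun", "Jul", "Aug", "Sep")
cold_months = ("Oct", "Nov", "Dec", "Jan", "Feb", "Mar", "Apr")

warm_avg = sum(monthly[m] for m in warm_months) / len(warm_months)  # ~0.17
cold_avg = sum(monthly[m] for m in cold_months) / len(cold_months)  # ~0.58
ratio = cold_avg / warm_avg  # a little over 3
```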

3. An increase in warming is observed since 1950.

In a long time series, there are likely periods when the rate of change is higher or lower than the rate for the whole series. In this study it was interesting to see period trends around three breakpoints:

  1. 1850, widely regarded as the end of the Little Ice Age (LIA);
  2. 1900, as the midpoint between the last two centuries of observations;
  3. 1950, as the date from which it is claimed that CO2 emissions began to cause higher temperatures.

For the set of stations the results are:

°C/Century Start End
-0.38 1700’s 1850
0.95 1850 2011
-0.14 1800 1900
1.45 1900 1950
2.57 1950 2011

From 1850 to the present, we see an average upward rate of almost a degree per century, 0.95°C/Century, or an observed rise of 1.53°C up to 2011. Contrary to conventional wisdom, the aftereffects of the LIA lingered on until 1900. The average rate since 1950 is 2.6°C/Century, higher than the natural rate of 1.5°C/Century in the preceding 50 years. Of course, this analysis cannot identify the causes of the 1.1°C/Century added to the rate since 1950. However, it is useful to see the scale of warming that might be attributable to CO2, among other factors.
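The 1.53°C figure is simply the 1850-2011 rate applied over the elapsed interval, a check that assumes a constant linear trend over the whole period:

```python
# Rise since 1850 = rate (°C/century) x elapsed time (centuries).
rate = 0.95           # °C per century, average trend 1850-2011
years = 2011 - 1850   # 161 years elapsed

rise = rate * years / 100  # about 1.53 °C
```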

Of course climate is much more than surface temperatures, but the media are full of stories about global warming, the hottest decade or month in history, etc. So people do wonder: “Are present temperatures unusual, and should we be worried?” In other words, “Is it weather or a changing climate?” The answer in the place where you live depends on knowing your climate, that is, the long-term weather trends.

Note: These trends were calculated directly from the temperature records without any use of adjustments, anomalies or homogenizing. The principle is: To understand temperature change, analyze the changes, not the temperatures.
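The trend calculation behind these numbers is an ordinary least-squares slope fitted to the raw station temperatures. A minimal sketch, using a synthetic series rather than CRUTEM4 data:

```python
# Ordinary least-squares slope of temperature vs. year, in °C/century.
# Works directly on the temperatures -- no anomalies or homogenising.
def trend_per_century(years, temps):
    n = len(years)
    mean_y = sum(years) / n
    mean_t = sum(temps) / n
    num = sum((y - mean_y) * (t - mean_t) for y, t in zip(years, temps))
    den = sum((y - mean_y) ** 2 for y in years)
    return num / den * 100.0  # °C/year -> °C/century

# Synthetic series warming at exactly 1 °C per century:
yrs = list(range(1850, 2012))
temps = [9.0 + 0.01 * (y - 1850) for y in yrs]
```

The same function applied over subranges of years gives the period trends in the breakpoint table.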

Along with this post I have submitted the World Class TTA Excel workbook for readers to download for their own use and to check the data and calculations. You can download it from this link: World Class TTA (.xls)

For those who might be interested, the method and rationale are described at this link, along with the pilot test results on a set of Kansas stations:

http://wattsupwiththat.com/2014/07/12/a-way-of-calculating-local-climate-trends-without-the-need-for-a-government-supercomputer/


124 thoughts on “Analysis of Temperature Change using World Class Stations”

  1. Two more categories would be of use:

    1. 1979 to present – to compare to the satellite record
    2. 1998 to present – to compare to “the pause”

    I expect the record will deviate from the satellite record due to UHI and siting issues. By how much would be of as much interest as the actual trend reported itself.

  2. Reply to Dave :-

    I suggested this to Ron, but also mentioned that this blog may not be the best medium for that, because there are 25 stations, 12 months, and annual trends, so that could be 325 graphs. What would you like to see? I have plotted the annual average anomalies from the 1961-1990 average for each station.

  3. Interesting, but what does it really mean?
    Surely, the 25 stations were all located in urban areas that produced more high rises, electric heaters, etc. etc. over time?

    Who knows what happened in fields and woods 1/4 mile plus from the nearest roads in the same nations at the same time?

  4. [snip - your comment makes no sense as posted. If you have a point, make it. Drive by comments consisting of one line Mosherisms add nothing to the conversation. You are welcome to re-submit a comment of substance. - Anthony]

  5. Richard Mallet;
    What would you like to see ? I have plotted the annual average anomalies from the 1961-1990 average for each station.
    >>>>>>>>>>>>>>>>>>

    Just average them for the time being, just like you did in the table.

  6. “The Central England Temperature (CET) series is included here from 1772, the onset of daily observations with more precise instruments. Those who have asserted that CET is a proxy for Northern Hemisphere temperatures will have some support in this analysis: CET at 0.38°C/Century nearly matches the central tendency of the group of stations.”

    CET is not a station. it has been adjusted and homogenized.

    now people will defend using adjusted data. crutemp4 is adjusted and homogenized. CET is likewise.

    Here is a clue.

    Dont use monthly data as your source.

    Dont use “regional series” as your source.

    and dont believe that “long stations” are necessarily the best. doubt everything

  7. But if there a temperature station that’s been around for 247 years, then it’s probably in a city that has grown all this time, right?

  8. Reply to davidmhoffer :-

    For my list of 28 best stations (Ron dropped one of each pair of stations from the same location) the annual trend from 1979-2010 (the limit of CRUTem4) is +4.78 C/century and from 1998-2010 is -1.77 C/century.

  9. “The average rate since 1950 is 2.6°C/Century, higher than the natural rate of 1.5°C in the preceding 50 years. Of course, this analysis cannot identify the causes of the 1.1°C added to the rate since 1950. However it is useful to see the scale of warming that might be attributable to CO2, among other factors.”

    – You assume the “natural rate” of 1.5°C would have continued and that therefore the rate attributable to CO2 is 1.1°C. What is the justification for that?

  10. Howsmart says:
    July 28, 2014 at 9:40 am
    —————————————————————————————————————————
    The only problem is that the CO2 hasn’t even doubled yet over the period this is talking about. So there is something more going on than changes in the [CO2].

  11. Reply to davidmhoffer :-

    I don’t know how to post images on here. Maybe Ron does.

    [Reply: Only those with Edit privileges can post images. But you can post the image address, or link to an image. ~mod.]

  12. Great report and info.

    At first glance, the temperature increases seem to match increases in solar activity over the period.
    Now we need someone to compare this data with true sunspot data in a well laid out chart (not my Art Charts).

    Also, I know you say they are world class stations, but what is the human population growth surrounding the stations, and what % of the surrounding area has been built up with buildings, homes and black pavement? Did the areas with less population growth, fewer new buildings, homes and paved surfaces show more or less of a temperature increase?


    • Reply to njsnowfan :-

      Where can I find population data from 1700 to the present for individual locations ?

  13. Thanks Anthony.

    We knew CET relies on more than one station, but it was included because it is a well-respected long record. As it turns out, removing it would change little, since it mirrors the central tendency.

    I share Mosher’s distrust of data averages, and intend to look at Tmax and Tmin. As for regional series, did not Mosher himself once say that 10 long station records would give approximately the same result as that from all records?

  14. Mr. Clutz,

    You have done good and valuable work here. Urban Heat Island effects have been shown, in the USA at Las Vegas, to explain as much as 3 degrees C spurious temperature increases since the 1950’s. If there were an easy way to pick a rural station associated geographically with each of your All-Star 25, perhaps with a continuous record of several decades back from present, it seems that your analysis could be made even more valuable.

    Your 2.57 C/Century increase from 1950-2011 would not seem plausible absent UHI effects; any thoughts?

    I appreciate your contribution…

  15. Stockholm and Uppsala (Sweden) are close to each other. Stockholm is a large city, Uppsala a small town. Compare the results (trends, etc.)! Best from Finland, Antero Jarvinen

    • Reply to Antero Jarvinen :-

      Overall, Uppsala has an annual trend of +0.10C per century.
      Stockholm +0.28 C per century.
      Stockholm Bromma (shorter record, not used by Ron) +0.33 C per century.

  16. “Richard Mallett says:
    July 28, 2014 at 10:10 am
    Reply to njsnowfan :-
    Where can I find population data from 1700 to the present for individual locations ?”

    I have no idea where to find that data. The influence is most likely small but would like to see what the % is.
    Solar activity since the end of the last mini ice age is in line with the data and the strongest solar cycles ever recorded is also in line with the sharper increase since 1950.

  17. All of those stations are in cities, so couldn’t UHI could be even larger than 2.57 C. TOBS? Calibration? Station movement? Thermometer changes? etc etc etc.

    I don’t think anything can be learned from any of the ground based temperature records. Obviously a lot of $$ can be gained though…

  18. So it’s 0.41C per CENTURY (100 years), so probably insignificant either way. Why haven’t 2012, 2013 and 2014 been included? Thanks for your efforts BTW.

  19. Richard Mallett says:
    July 28, 2014 at 10:00 am
    Reply to davidmhoffer :-
    For my list of 28 best stations (Ron dropped one of each pair of stations from the same location) the annual trend from 1979-2010 (the limit of CRUTem4) is +4.78 C/century and from 1998-2010 is -1.77 C/century.
    >>>>>>>>>>>>>>

    Well that’s interesting. That 1979 to 2010 is well above satellite, but 1998 to 2010 is well below. I expected the first result due to UHI but the second result baffles me. If they were commensurate with one another, you’d have something of interest for the longer record of the ground stations. But since you don’t, I’m not sure that the record is of use unless you can figure out what the driving factors are in the disparities, and then (dreaded word though it is) adjust accordingly.

    • Reply to davidmhoffer :-

      Inevitably, the stations with the longest (and most complete) temperature records are all in Europe, so we cannot extrapolate to the rest of the world.

  20. Mosher,

    The objections here to “adjusted” data, mine included, are based on traditional scientific/engineering standards. Data revision with no political agenda, properly done, is not objectionable. Clearly our current Federal government functionaries have their thumbs on the scale, as Steyer and his ilk have instructed the Dem lackeys in Washington to do.

    I just looked at Moshtemp, to get some idea of who you are and why you do what you do. Have you become lost in the trees, no longer able to see the forest? Your defense of BEST methods, while ignoring the political machinations behind them, is disingenuous at best. There is a word an English PhD could appreciate!

    Odd that you lecture Mr. Clutz as if he, and all of us, are your students. Not so much…

  21. Reply to Michael Moon:
    No doubt most of these stations are in urban heat islands, many of them located at airports. As we know, it’s the changes in air traffic, population density, buildings, pavement, etc. that impact temperature changes.

    One clue is CET, a mixture of rural and urban sites, shows the following:
    1900-1950 +1.32C/Century
    1950-2014 +1.66C/Century

  22. Reply to Richard Mallett: Yes, I noticed the difference, too, and made some graphs. A big difference between these neighboring towns, and a small trend in both of them. Antero

  23. “Of course, this analysis cannot identify the causes of the 1.1°C added to the rate since 1950.”
    Layman here… All the sites are in Europe, does AMO influence temps in Europe? I believe AMO was at a low point in 1950 and a high point in 2011

  24. Almost all of these are in urban areas and should be subject to at least some UHI effect.

  25. You can make your own chart to match what data you want, with the spread sheet provided.
    Select the years you want (column C) hold the ctrl key and select the temperatures you want that match the years. The select insert > scatter chart and select the one you like.
    You have to enable editing first and get full ribbon – the little arrow beside the blue ?.
    If you need help – Google is your friend:

    http://office.microsoft.com/en-ca/excel-help/add-or-remove-titles-in-a-chart-HP010342154.aspx

  26. The Stockholm comparison underscores the Urban Heat Island influence.

    I wonder if there are comparable data from somewhere outside of Europe. e.g. China?

    • Reply to Michael D :-

      All the long temperature records without major gaps in the annual record are in Europe.

      In China, Beijing starts in 1841, and only has 116 years (68%) with complete data.
      Shanghai starts in 1847, and has 138 out of 165 years (84%) with complete data.

      For me, that’s not enough coverage of the pre-industrial period.

  27. “Note: These trends were calculated directly from the temperature records without any use of adjustments, anomalies or homogenizing. ”

    Where did you get that idea from ??!

    http://onlinelibrary.wiley.com/doi/10.1029/2011JD017139/abstract;jsessionid=DC740DACE56ABD9B9BAEA0440C44FA80.f01t03

    “This study is an extensive revision of the Climatic Research Unit (CRU) land station temperature database…. Many station records have had their data replaced by newly homogenized series that have been produced by a number of studies”

  28. E.S., thanks for that tip; Richard is more of a graphics person than I am, but this is useful.

    Daffy, Don’t know about AMO effects upon N. Europe. One surprise for me was to see how slow was the recovery from the LIA; I now question whether that was a steady rise.

  29. This article, like several others, has noted the issue of “Less Cold”.

    I have been plotting certain Western Canadian stations downloaded from Environment Canada for my own interest. I picked places I know, and for diverse local climates. The issue of getting “LESS COLD” shows up in every plot I have done so far. I like to look at Max Extremes versus Min Extremes along with Max Monthly Means and Min Monthly Means. In all the sites I have looked at, it seems the temperatures are getting “less” extreme, with highs getting lower or staying the same and lows getting higher. The “average” of course shows increasing temperatures, even where the max declines, because the minimums are less cold. EC also provided precipitation, and that looks like a sine curve, so I suspect Western Canada precipitation is affected by the oscillations of the Pacific and the jet stream (no surprise there). Of course, in Western Canada I can only get 60 to 110 years of data, and the longer series often have holes in them. The trends, however, hold across the breaks. The technical aspects of this can be argued ad infinitum.

    But I am curious about the “Less Cold” aspect of this warming, as I am seeing it commented on more frequently. Just as less cold night-time temperatures affect daily averages/means, it would seem less heat is being radiated out at night and when it is cold. Undoubtedly there are people here with ideas as to why the earth seems to be modulating temperature/heat in this way? If you do, please do away with the formulae for radiative heat transfer and black body discussions and see if you can put it into lay terms.

    Thanks.

    Examples:
    Trends are picked from graphed trend line.
    Bella Coola, west coast of BC, on the Ocean, 107 years of record, Mean monthly trend + 1C;
    Monthly Extreme Max – no trend, Monthly Extreme Min +2.5C
    Rocky Mountain House, AB Lee of Rocky Mountains. 90 years of record, Mean monthly trend +1C;
    Monthly Extreme Max -0.5C, Monthly Extreme Min +5C;
    Monthly Mean Max -1, Monthly Mean Min +2C
    Grand Forks, BC – Desert – highs close to 40, 67 years of record, Mean Monthly trend +1C:
    Monthly Extreme Max -1.2C, Monthly Extreme Min +2.5
    Monthly Mean Max -0.5+-, Monthly Mean Min +2.5

  30. Greg Goodman

    Yes, there is a fine line between quality control of errors, and tampering. The studies mentioned are done by NMSs, who are the people producing, verifying and submitting the data. I tend to believe they are trying to get the record right.

  31. Excellent analysis. Do you maybe observe a ~60 yr oscillation in these data? E.g. high at ~1876, low at 1911, high at 1945, low at 1976, high at 2007?

    thanks!

    • Reply to David Dohbro :-

      I don’t see that in my annual average anomalies. I see (roughly) a gradual climb from 1894 to 1942, a decline to 1990, followed by an increase.

  32. Oh, I should have mentioned all the monthly processed data I seem to be able to find in CSV format on the Environment Canada Site seems to end in 2007 at present so that is what I used.

  33. Perhaps in part or mostly due to an increase in relative humidity during the winter? Is data on humidity and dew point collected and retained at these weather stations?

  34. Wayne Delbeke

    You have a colleague in J.R. Wakefield. If you have not yet seen this:

  35. ” crutemp4 is adjusted and homogenized. CET is likewise.”

    This needs correcting in the article. It is a significant error and likely to mislead many readers.

    “Dont use monthly data as your source.
    Dont use “regional series” as your source.
    and dont believe that “long stations” are necessarily the best. doubt everything”

    Indeed. There is no reason to assume a long record is accurate long term. They are very likely to be affected by the UHI effect.

    Also I see Kremsmunster, Hohenpeissenberg and other HISTALP stations. Their long term variation is nothing but “bias corrections”. Most of them were pretty flat before they got “corrected” and homogenised.

    You really need to research your sources better before playing with your spreadsheet.

    Clearly you have not even checked whether these data are adjusted or not before incorrectly assuring everyone they are not.

    No cookie. :(

  36. Anthony, could you look at this incorrect statement that these data are not adjusted / homogenised? It is grossly misleading and needs correcting in the article.

    [I'll check with the author -A]

  37. Re: “This suggests that the climate is not getting hotter, it has become less cold.”
    Actually, this is well established as due to heat transport, which distributes the warming more to the colder parts of the year and colder parts of the globe.

  38. Michael Moon and others interested in UHI

    I will take Mosher’s point to this extent:

    Steven Mosher says:
    May 3, 2012 at 1:51 pm
    The last study Zeke, nick stokes and I did, suggested a UHI trend from 1979-2010. That trend, about .04C per decade, is consistent with the handful of regional studies of UHI which all show trend bias of .03C to .125C per decade over the same period. It is REGIONALLY variable. One UHI does not fit all. In the SH, UHI is much smaller. In china and japan and Korean building practice drives it higher.

    http://wattsupwiththat.com/2012/05/03/has-the-crutem4-data-been-fiddled-with/

    • Reply to Alex S :-

      I have specifically stated that I am not making any claims about the world outside Europe. I am making claims about Europe before 1850 and after 1850.

  39. “..did not Mosher himself once say that 10 long station records would give approximately the same result as that from all records?”
    Here’s an easy sample-size calculator:

    http://www.measuringusability.com/ci-calc.php

    It looks like what was taught in business statistics classes.
    Enter 10 random numbers from 0.0 to 0.3 to represent temperature increases from 10 stations over 30 years. Then look for the confidence interval on the graph. It is possible that 10 stations are enough to draw a useful conclusion. I think it is argued that a sample size of 30 is pretty accurate. More than 30 doesn’t tighten the confidence interval much.
    Interesting post. A skeptic and a warmist do a study of 30 randomly picked high quality stations.
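The confidence-interval intuition in this comment can be sketched with a normal-approximation interval for the mean trend. The ten trend values below are hypothetical, and for n = 10 a t critical value would widen the interval somewhat.

```python
import statistics as st

def mean_ci(xs, z=1.96):
    """Approximate 95% CI for the mean (normal approximation)."""
    m = st.mean(xs)
    se = st.stdev(xs) / len(xs) ** 0.5  # standard error of the mean
    return m - z * se, m + z * se

# Ten hypothetical station trends (°C/century) in the 0.0-0.3 range:
trends = [0.04, 0.10, 0.14, 0.18, 0.21, 0.24, 0.26, 0.28, 0.29, 0.30]
lo, hi = mean_ci(trends)  # interval narrows as 1/sqrt(n)
```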

    • Reply to Ragnaar :-

      I don’t know who’s the sceptic and who’s the warmist, but the stations were not randomly selected. I plotted all 172 stations in CRUTem4 that had temperature records that started before or in 1850, then did conditional formatting in Excel to select those that had more than 200 years of coverage, 90% or greater coverage (Berlin had only 90% coverage) and had at least 70 years of coverage before 1850.

  40. “Inevitably, the stations with the longest (and most complete) temperature records are all in Europe, so we cannot extrapolate to the rest of the world.”

    Sure, we can. This is what the man-made global warming theory says. Overall, temps are rising. On a long-enough time scale, you will see this nearly everywhere or everywhere you have a long record, barring some contrary trend in occasional spots, such as the specific coastline where the Gulf Stream might lay its heat upon the British Isles across time.

    Effects of the warming are supposedly evident in every corner of the globe. We are told that species habitats are moving all over the place. Species are going extinct all over the place. Drought, flood, tsunami, hail, and locust plagues are busting out all over.

    The AGW cult members said it and they get to own it. Temps modestly representing any of the continents ought to show us this extreme effect. If a continent, or decent-size country, does not have significant, notable warming above and beyond natural variation, there ought to be a fairly obvious explanation for how it has ducked this global calamity.

    • Reply to The Last Democrat :-

      Globally, from 1880-2014, GISS and NCDC have trended +0.65 C per century, and HadCRUT4 has trended +0.63 C per century. That looks pretty modest to me.

  41. Quality measurements must be samples that represent the surrounding “climate”, not air-conditioner exhaust. Show the “rural” temperatures. The “urban” temperature records must be rejected completely. Very few thermometers were ever placed with consistent scientific methodology.
    That’s why the USCRN was created only recently.

    http://hidethedecline.eu/pages/ruti/europe/western-europe-rural-temperature-trend.php

    The global surface temperature record has never been measured with any scientific integrity.
    Satellites do not measure surface temperatures.

  42. Welcome to the end of the LIA… Expect glaciers to retreat even more and spruce forests to establish like during the MWP.

  43. Wayne Delbeke says:
    July 28, 2014 at 11:11 am

    “it would seem less heat is being radiated out at night and when it is cold. Undoubtedly there are people here with ideas as to why the earth seems to be modulating temperature/heat in this way?”

    An astute observation. It’s what you would expect from CO2. In daytime convection is working to maintain the lapse rate. At night convection stops and window radiation rules. “Window radiation” is radiation through that part of the IR spectrum that is transparent to radiation from the surface. The CO2 bands absorb on the edge of the window radiation hole, so with more CO2 the window gets smaller and surface temperatures must increase to get the same radiant flux through the narrower window and out to space. So we see nighttime warming, more in the continental interiors, and more at northern locations where there is little moisture in the air so that CO2 is relatively more important. In daylight hours convection largely negates the effect of CO2.

  44. Greg Goodman

    This study did not aim to critique the quality of the temperature data itself. We were interested in seeing the trends arising from CRUTEM4 data taken at face value. In the post, my comment means that we did no adjusting, anomalies or homogenizing to the data in our method of analysis.

  45. Is there a seasonal variation to UHI? I realize it depends on the particulars, but I was wondering if anyone had studied the general problem.

  46. This data gives a good insight into what has happened in northern Europe. I trust original data more than “adjusted” as do many others. There seems to be a definite warming trend with quite a bit of variation. I am not surprised as I have found quite a bit of variation between stations that are less than 20 miles apart here on the south coast of the UK. If CO2 was the main cause of warming all over the world then surely we should see a much more uniform warming.

  47. “Richard Mallett says:

    July 28, 2014 at 10:00 am

    For my list of 28 best stations (Ron dropped one of each pair of stations from the same location) the annual trend from 1979-2010 (the limit of CRUTem4) is +4.78 C/century and from 1998-2010 is -1.77 C/century.”

    Whoa!

    Serious downturn lately.

  48. Graph from Boehm et al 2009 showing “homogenisation” applied. Note that summer temps show virtually no rise since 1750 until “corrected”. Note that Dr. Phil “why should I give you our data, you only want to find something wrong with it” Jones is one of the co-authors.

    http://climategrog.wordpress.com/?attachment_id=999

    Several of the HISTALP sites have interesting long records but are heavily manipulated long term. Several sites in this group were used here.

    What is worse is that the raw data is a closely guarded secret, requiring a substantial fee and a non-disclosure agreement to even get a copy. This means that these substantial additions to the long term variability cannot be verified, and anyone who found a conflicting result would not be able to publish a contrary study that itself could be reproduced, because he is not allowed to publish the data.

  49. TRG says:

    July 28, 2014 at 12:39 pm

    Is there a seasonal variation to UHI? I realize it depends on the particulars, but I was wondering if anyone had studied the general problem.

    Make that “variation to UHI effect at each station” and not just seasonal, but perhaps even daily. Besides the environmental static items (such as asphalt pavement) that have an effect, there are the dynamic items such as wind, cloud cover, and humidity.

    Individual, improperly sited stations give a warmer than correct reading and the probability that they are being adjusted to a proper reading is extremely doubtful.

  50. Ron C. says:

    Greg Goodman

    This study did not aim to critique the quality of the temperature data itself. We were interested in seeing the trends arising from CRUTEM4 data taken at face value. In the post, my comment means that we did no adjusting, anomalies or homogenizing to the data in our method of analysis.
    ====

    Thanks Ron. Then you probably need to make this clear in the article. It currently says:
    “Note: These trends were calculated directly from the temperature records without any use of adjustments, anomalies or homogenizing.”

    Now you’ve “explained” what you meant, it can be read that way but several people have questioned you on the same point. It is misleading.

    If you mean “I have not added any homogenisation to the substantial manipulations already made to the data to make it conform to AGW thinking”, you need to make it clear that that is what you are saying.

    Currently you give the impression that the data is not homogenised at all and that is very misleading.

  51. It seems that the data is essentially worthless; all are highly urban stations: Paris, Berlin, Prague, Stockholm, Turin, St Petersburg… No wonder the temperature is skyrocketing in the 20th century.

    • Reply to Ivan :-

      Trend from 1702 to 1900 is -0.15 C per century. Trend from 1900 to 2010 is +1.33 C per century. That doesn’t conform to my definition of sky-rocketing.

  52. Derek says:
    This data gives a good insight into what has happened in northern Europe. I trust original data more than “adjusted” as do many others. There seems to be a definite warming trend with quite a bit of variation.

    ====

    Sorry Derek, you too have been misled by the author’s description. This data is _heavily_ adjusted (just not by Ron).

    Anthony, perhaps you could encourage Ron to add a clarification. This data, especially the several HISTALP stations used here, has far more “homogenisation” problems than the USHCN data that has been getting so much press recently.

    We’re talking 0.5 deg C or more of “corrections”, just to the summer temps:

    http://climategrog.wordpress.com/?attachment_id=999

    Worse, the HISTALP group are just as secretive and obstructive as Phil Jones, and unlike the US data, no one gets to look at the raw data and do some auditing.

  53. Ron C says: “This study did not aim to critique the quality of the temperature data itself. ”

    Well it does, because you present this as a specially selected group of “world class data”. I really don’t see that you have done any QA other than picking long records. Which, as Mosh’ and others have commented, in no way guarantees quality.

    Indeed, you did not “critique” the quality, but maybe you should have done so before applying the epithet “world class” to it.

  54. Temperatures from Uppsala and Stockholm have been discussed at ‘Klimatupplysningen.se’ in three threads. Even though it is written in Swedish, the graphs and pictures are informative.

    My personal reflection is that you always get into difficulties when you use data for a purpose they were not collected for.

    For those of you that look for amusement I think this citation may make your day:
    “Hur snurrigt det här än låter så verkar det som att Uppsalas väder bestäms av en väderstation utanför staden vid F16. Uppsalas (och Sveriges) klimat uppmäts av en station i Uppsala.”

    My translation:
    “Despite the oddity it seems that the weather in Uppsala is determined from a station outside the city at the air force base, F16. The climate in Uppsala (and Sweden) is measured by a station inside the city.”

    Link, to a thread that has links to the other two:

    http://www.klimatupplysningen.se/2014/04/08/uppsalatemperaturer/

  55. darwin wyatt says:
    July 28, 2014 at 12:12 pm

    Welcome to the end of the LIA… Expect glaciers to retreat even more and spruce forests to establish like during the MWP.
    ========================================================================
    and Vikings will once again return to flourish in Greenland. Sharpen the axes.

  56. “TRG says:
    July 28, 2014 at 12:39 pm

    Is there a seasonal variation to UHI? I realize it depends on the particulars, but I was wondering if anyone had studied the general problem.”

    Not sure about seasonal, but there is a variation with prevailing conditions…cloudy/sunny and it’s also not uniform during the day. On a bright, clear, sunny day, it will be more pronounced than on an overcast one. It should also peak in the mid/late afternoon.

    Just a simple, not very accurate test over the past few weeks shows up to a 6F difference along the main street of my little town. On a clear, sunny day, the stretch from the grocery store to the gas station is about 6F warmer than even a block on either side. It’s all pavement, buildings and concrete. No trees or anything else. The buildings are tall enough (3 to 5 stories) to block most of the wind, so it’s also a pretty dead space. On cloudy days it’s about a 2-3F difference. It also stays at that 2-3F through most of the night, if not all night (no, I haven’t checked much at 5 AM).

    And no, it’s not being done with a very accurate thermometer…but it is consistent with itself and its response time is good enough to get a stable reading while stopped at the traffic light I’m using as a reference point.

    It also seems to diminish, but not completely disappear if there are several overcast days, with fairly uniform temperatures and little separation between high and low, in a row.

    UHI is NOT a uniform correction factor.

  57. Ron C. says:

    Greg Goodman

    Yes, there is a fine line between quality control of errors, and tampering. The studies mentioned are done by NMSs, who are the people producing, verifying and submitting the data. I tend to believe they are trying to get the record right.

    =====

    Thanks again, Ron. However, UEA’s CRU is not a national weather service. Their top man said he’d rather delete the data if he were ever forced to hand it over (a criminal act under the law of England and Wales). He also said he would “hide behind” intellectual property arguments to avoid releasing the raw data. Then they “lost” it.

    The HISTALP group is similarly secretive and obstructive, and the same P. D. Jones is a co-author on the paper (Boehm et al. 2009) that explains their ‘homogenisation’. In fact they hold international meetings on “data homogenisation” to ensure that all their methods give results that are ‘homogeneous’. Don’t want some “poor quality” data telling the wrong story, do we.

    There is no reason to be overly cynical, but having seen behind the curtain, and read some of what this little team of zealots were getting up to, there are no longer grounds for a generous presumption of good faith. You have the right to that presumption only until you get caught being cheating and manipulative.

    Once gone, trust does not come back.

  58. We came out of an ice age about 150 years ago? An age? So temperature observations have been going up some? (yawn)

  59. The idea of finding “world class” stations is excellent. It lends itself to a solid PR strategy. Focusing on getting more and more credible raw data is a winning topic for skeptics.

    The main thing I got from this post was winter is getting less cold. I understand what that means, but I don’t like the way it sounds. Imagine the headline: “Global warming skeptics: It’s not getting warmer, it’s getting less cold.” The “convinced” would have a field day with it.

    Maybe that statement has scientific underpinnings, but it sounds so much like bureaucratic gobbledygook that I would hesitate to use it. Thankfully you explained it. Even though it takes more time and uses more words, the full explanation sounds far more credible to my ear than “it’s getting less cold”.

    Thank you for sharing the spreadsheet.

  60. Ron, first, my thanks for your interesting work. Right or wrong, it’s good to see people doing their own research.

    That said, I didn’t see the criteria you used to select your stations. You state:

    Geographical location was not a criterion for selection, only the quality and length of the histories.

    While the length of the histories is easy to determine, how did you determine the “quality” of the records?

    I ask because it is far from a trivial problem. Most, perhaps all, long term station records contain instrument changes, location changes, shelter changes, time-of-observation changes, or other alterations that can significantly change the size and even the sign of the trend. How did you winnow these records for “quality”?

    In this regard, you say:

    Note: These trends were calculated directly from the temperature records without any use of adjustments, anomalies or homogenizing. The principle is: To understand temperature change, analyze the changes, not the temperatures.

    Let me explain what you are actually saying:

    Note: These trends were calculated directly from a bunch of shonky temperature records with known issues and inconsistencies, without any quality control of any kind. The principle is: To understand temperature change, analyze the changes in a bunch of error-ridden temperature records, not the temperatures.

    I fear you’ve forgotten the old maxim, “Garbage in, garbage out”. When your data contains known problems, as almost all temperature records do, using them as-is is just GIGO.

    As I said, I give you high marks for doing the work, explaining it clearly, and providing your data and code. That is how science is done.

    However …

    Best regards,

    w.

    PS—In addition, I was surprised not to see the Armagh Observatory record among the ones chosen. What was your reason for rejecting it?

    • Reply to Willis Eschenbach :-

      I selected the records, based on the criteria I explained to Ragnar.

      Regarding Armagh, the record starts in 1844, so it would have been of no help in determining trend before 1850. I wanted to compare trends before and after 1850. It was convenient that this gave me 148 years before 1850, and 160 years after 1850.

      Are there any long temperature records that are not ‘shonky’ or error ridden ?

      My originally selected set of 28 temperature records (Ron eliminated 3 duplicates) gave a trend of +0.26 C per century since 1702. What do you suggest we do to improve the quality of this figure, while maintaining the value of its longevity ?

  61. Matt L.

    The definitive answer comes only by analyzing TMaxs and TMins to see which one is driving the rising TAvg. But the results here suggest that winters are milder, springs earlier and autumns later–What’s not to like? JR Wakefield proved that to be the case in Ontario.

  62. What we are seeing with the lows getting warmer and the highs getting cooler is a thermodynamic Regression to the Mean. It is a mathematical artifact of the sparse sample size over land and a too short measurement period.

    With the Satellite data we will continue to see the same thing until we get a long enough data period. Adding data points, like Argo, will shorten the time period to the regression to the mean and show an increased rate of warming until then.

    We will continue to see the same warming trend in the near future but at a declining rate.

    The reason we see the Regression to the Mean and the warming is because the ocean surface temperature is the Mean and it has on average, warmer lows and lower highs than land surface measurements.

  63. Instead of increased CO2, I would bet on increased irrigation and surface area of reservoirs and plowed fields of decreased albedo as playing a larger role (if any) in temperature increase.

  64. Ron C.

    … the results here suggests that winters are milder, springs earlier and autumns later–What’s not to like?

    Nothing. I like it fine. But I like the way you wrote it in the quote above better than “it has become less cold.”

    If this article spawns a peer reviewed paper, the peers will likely “get it”.

    If this article gets picked up by MSM, “it’s not becoming warmer, it’s becoming less cold” will come across differently to the public.

  65. First of all, thanks for publicly sharing your results and your method and for the time and effort required to look at the record from a new perspective. I noted with some interest that all the stations were in countries in the northern hemisphere. As the CAGW community was so fond of saying when trying to diminish the MWP, we have no way of knowing if the results are global in nature. But unless we’re willing to dig through Southern Hemisphere records, we also have no way of knowing they aren’t. Food for thought.

    • Reply to The definition guy :-

      The only Southern Hemisphere stations with data from before 1850 are Rio de Janeiro (started in 1832, first reliable year 1851), Pamplemousses, Mauritius (1787-1960, only 62 of 174 complete years, and only 3 years before 1850) and Hobart, Tasmania (started 1841, first reliable year 1883).

  66. TheLastDemocrat says:
    July 28, 2014 at 11:59 am
    Effects of the warming are supposedly evident in every corner of the globe. We are told that species habitats are moving all over the place. Species are going extinct all over the place. Drought, flood, tsunami, hail, and locust plagues are busting out all over.

    =========================
    TLD,
    You forgot boils… boils are breaking out worldwide… and pustules, too.

  67. Willis, thanks for the comment.

    Are the temperature records imperfect? Absolutely.

    Can we go somewhere and get perfect data? Not on this planet, not in our lifetimes.

    Are the temperature histories “garbage?” No way–thousands of professional meteorologists are doing the best they can to document the weather as it happens.

    My approach: Let’s take the best records we have (warts and all), and let the data with minimum processing tell us about the issue we have: Are present temperatures unusual, and should we be worried?

  68. Ron C. says:
    July 28, 2014 at 3:30 pm

    Willis, thanks for the comment.

    Are the temperature records imperfect? Absolutely.

    Agreed.

    Can we go somewhere and get perfect data? Not on this planet, not in our lifetimes.

    Agreed.

    Are the temperature histories “garbage?” No way–thousands of professional meteorologists are doing the best they can to document the weather as it happens.

    Here we part company. First, in the US at least, the people collecting the data are not “professional meteorologists”. They are volunteers. And indeed, any records prior to about 1900 are extremely unlikely to be from “professional meteorologists”. So your point about meteorologists simply won’t wash.

    Second, no matter whether the data collectors were professional meteorologists, most if not all temperature records have the problems I noted:

    … instrument changes, location changes, shelter changes, time-of-observation changes, or other alterations that can significantly change the size and even the sign of the trend.

    Third, as far as I can see, you have done absolutely no quality control of any kind … so how on earth would you even begin to know if some, many, or all of them were unfit for the purpose to which you have put them?

    My approach: Let’s take the best records we have (warts and all), and let the data with minimum processing tell us about the issue we have: Are present temperatures unusual, and should we be worried?

    That is not what you claimed above. Above you made the claim that you had selected the records for “quality”, viz:

    Geographical location was not a criterion for selection, only the quality and length of the histories.

    Since you have done absolutely no quality control, how could you possibly know which are the “best records”? And if you can’t trust your data and you’ve done no quality control, there is no possible way to know if present temperatures are “unusual” in any way.

    Ron, I understand your desire to use the “raw data”, but in most cases and in most fields, doing absolutely no quality control of your data is a serious mistake. Remember that e.g. a change in observation time can easily create a totally spurious warming, and a station move of only fifty feet can easily create a totally spurious cooling.

    So while I totally disagree with the automated “homogenization” algorithms used by e.g. Berkeley Earth and other organizations, I also am mad keen about accounting for known errors in any dataset. If you know for a fact from the station metadata that a station move occurred in e.g. 1917, and the station record shows a 1° drop in temperature in 1917, and the other nearby stations show no such drop, you’d be a fool to use that record as is. GIGO, with the result that you are misleading both yourself and your readers.

    Having said that, you are close to the finish line. Were I in your shoes, I’d obtain the metadata for the 25 stations and note carefully any changes in instruments, time of observation, location, and the rest. I suspect that the information is most easily available from the Berkeley Earth dataset.

    Then I’d look at the data and see if there is a visible jump at that time, using one of the recognized algorithms for detecting a step change. If so, simply cut the one record into two records at the step change.
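    For anyone wanting to try this, here is a minimal sketch of the kind of step-change scan being described. It is a crude standardized mean-difference scan, not any official homogenization algorithm such as SNHT, and the synthetic series and the function name `detect_step` are invented for illustration only:

```python
import numpy as np

def detect_step(series, min_seg=10):
    # Scan every candidate breakpoint and return the index with the
    # largest standardized difference between the means on either side.
    # A crude stand-in for formal tests such as SNHT.
    x = np.asarray(series, dtype=float)
    best_idx, best_stat = None, 0.0
    for k in range(min_seg, len(x) - min_seg):
        left, right = x[:k], x[k:]
        pooled_se = np.sqrt(left.var(ddof=1) / len(left) +
                            right.var(ddof=1) / len(right))
        stat = abs(right.mean() - left.mean()) / pooled_se
        if stat > best_stat:
            best_idx, best_stat = k, stat
    return best_idx, best_stat

# Synthetic annual series with a 1 degree step at index 60,
# mimicking an undocumented station move.
rng = np.random.default_rng(0)
series = np.concatenate([rng.normal(9.0, 0.5, 60),
                         rng.normal(10.0, 0.5, 60)])
break_idx, stat = detect_step(series)
```

    If such a scan flags a jump that coincides with a documented station move in the metadata, the record can then be split into two segments at that index, as suggested above.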

    Then I’d take the “first differences” of all the datasets, average them by year, and cumulatively sum them. This would give me the average of the data, from which I’d obtain the trends over the periods.
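    The first-differences recipe in the previous paragraph can be sketched in a few lines; the three station series below are made up solely to show the mechanics:

```python
import numpy as np

# Three hypothetical station series (rows = stations, columns = years).
# Absolute levels differ between stations; only the year-to-year
# changes enter the composite.
stations = np.array([
    [10.0, 10.2, 10.1, 10.4, 10.6],
    [ 8.0,  8.1,  8.3,  8.2,  8.5],
    [12.0, 12.3, 12.2, 12.5, 12.6],
])

diffs = np.diff(stations, axis=1)        # first differences per station
mean_diff = diffs.mean(axis=0)           # average change across stations, by year
composite = np.concatenate([[0.0], np.cumsum(mean_diff)])  # cumulative sum

# Trend of the composite in degrees per year (multiply by 100 for per century).
years = np.arange(composite.size)
trend = np.polyfit(years, composite, 1)[0]
```

    The advantage of working in differences is that each station contributes only its changes, so differing absolute levels (altitude, exposure, and so on) drop out of the average.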

    But to just take raw data with absolutely no quality control and use them as is? Sorry, that’s anathema in my world.

    Please take this in the supportive sense in which it is offered. Your work is a start, but there is more to do before any weight can be put on your results.

    All the best,

    w.

    • Reply to Willis Eschenbach :-

      Since I’m a raw novice here, I will need some help along the way.
      Berkeley Earth provides raw and adjusted values, as well as flags for station moves, gaps, TOBS changes and other inhomogeneities, as well as a regionally expected time series, so that looks potentially very useful.
      What do you mean by ‘one of the recognized algorithms for detecting a step change’ ?
      If you could help me along the way, I would be very grateful.

  69. Richard Mallett says:
    July 28, 2014 at 2:31 pm

    Reply to Willis Eschenbach :-

    I selected the records, based on the criteria I explained to Ragnar.

    Regarding Armagh, the record starts in 1844, so it would have been of no help in determining trend before 1850. I wanted to compare trends before and after 1850. It was convenient that this gave me 148 years before 1850, and 160 years after 1850.

    As with many temperature records, the long-term Armagh record (which starts in 1796) is made up of several overlapping records from one station. This is quite common, and is likely (unknown to either you or Ron) the case with a number of the allegedly “continuous” records you have used in your dataset. This highlights another problem with using station records “as is”, of just grabbing “raw data” without examining its provenance … you don’t know whether it’s even “raw” at all.

    In any case, the Armagh record goes back to 1796, as detailed here. That description is well worth a read, as they detail a number of issues with the datasets, and how they have adjusted for them.

    Are there any long temperature records that are not ‘shonky’ or error ridden?

    By modern standards, not really. Remember that none of these stations was established or maintained with any idea that they would be used to determine century-long trends to the nearest tenth of a degree. Any given one may or may not be usable even after removing the obvious documented inhomogeneities.

    What do you suggest we do to improve the quality of this figure, while maintaining the value of its longevity ?

    See my reply to Ron above.

    Regards, perseverance furthers.

    w.

  70. Willis, I beg to differ. You dismiss out of hand the quality control done by the NMSs. I do not.
    And your averaging of temperatures will lose the variations which are the very thing of interest.

  71. Steven Mosher July 28, 2014 at 9:50 am says:

    “CET is not a station. it has been adjusted and homogenized.
    …now people will defend using adjusted data. crutemp4 is adjusted and homogenized. CET is likewise.”

    This is just the problem. He knows it; I and many other workers do not. This makes it impossible for us to know what the real temperature history of the earth is. If you find errors, write a paper about it. Under no circumstances should a publicly available data-set be changed while pretending that it is still the same data-set. Such stealth changes to scientific data are not scientific, and they are an invitation to manufacture false information about the climate meant to support a particular view of temperature history.

    There is almost no chance of checking this unless you get lucky. I got lucky when I compared satellite and ground-based data for the eighties and nineties. Satellite data show that there was no warming between 1979 and 1997. There were five El Nino peaks, but the mean temperature stayed constant for 18 years. That is not what you find in the GISS, NCDC, and HadCRUT data-sets, which show an upward slope that gains 0.1 degrees Celsius between these two data points. This happens to be important for current temperature history. As everybody knows, there has been no warming for the last 17 years. What you don’t know is that in the eighties and nineties there was also no warming for 18 years. You don’t know this thanks to the phony temperature graphs that are foisted upon us as climate science. Evaluating these two temperature standstills together, you will see that they are separated by the super El Nino of 1998. If it wasn’t for the interference from this super El Nino, the two flat regions would have joined up. An unexpected feature of this interference is that global temperature took a step warming of 0.3 degrees Celsius right after the super El Nino left, thereby making the twenty-first century warmer than the nineties. This is neatly fudged out in the three temperature curves by giving the eighties and the nineties an upward slope. Their cooperation is apparently inter-continental.

    How do I know this? Because they screwed up when they computer-processed all three temperature curves with an identical device. As an unanticipated consequence, this computer processing left traces of its work on the finished product: sharp upward spikes at the beginnings of years that happen to sit in the exact same locations in all three databases (statisticians, calculate this!). Twelve of them are easily visible if you have a good-resolution graph.

    Finally, I want to quote from Michael Crichton’s presentation to the United States Senate in 2005:
    “…let me tell you a story. It’s 1991, I am flying home from Germany, sitting next to a man who is almost in tears, he is so upset. He’s a physician involved in an FDA study of a new drug. It’s a double-blind study involving four separate teams—one plans the study, another administers the drug to patients, a third assess the effect on patients, and a fourth analyzes results. The teams do not know each other, and are prohibited from personal contact of any sort, on peril of contaminating the results. This man had been sitting in the Frankfurt airport, innocently chatting with another man, when they discovered to their mutual horror they are on two different teams studying the same drug. They were required to report their encounter to the FDA. And my companion was now waiting to see if the FDA would declare their multi-year, multi-million-dollar study invalid because of this contact.
    For a person with a medical background, accustomed to this degree of rigor in research, the protocols of climate science appear considerably more relaxed. A striking feature of climate science is that it’s permissible for raw data to be “touched,” or modified, by many hands. Gaps in temperature and proxy records are filled in. Suspect values are deleted because a scientist deems them erroneous. A researcher may elect to use parts of existing records, ignoring other parts. But the fact that the data has been modified in so many ways inevitably raises the question of whether the results of a given study are wholly or partially caused by the modifications themselves.”

  72. @Ron C., who says, Let’s take the best records we have (warts and all), and let the data with minimum processing tell us about the issue we have: Are present temperatures unusual, and should we be worried?

    Are present temperatures unusual? Yes. Geologically speaking, they’re unusually cold.

    Should we be worried? Crimenently, we’re in the early stretches of a freakin’ ice age. This one is generally considered to have begun 2.5-3 million years ago. The shortest previous ice age in the geologic record is 30 million years long — a full order of magnitude longer. The average length of an ice age in the geologic record is more like 90 million years long. My money sez the ice is coming back — and considering that 460 mya, CO2 concentrations were four THOUSAND ppm and we were, nonetheless, in a deep ice age (the 30-million-year one, in fact), I don’t think there is anything that will save Manhattan from being scraped off the face of the continent. Not that I’ll miss it. And the residents will have time to move.

    Honest to goodness, there are far too few paleoclimatologists involved in this discussion. Yours was a very nice analysis, but most everybody talking on this topic just makes me think of a bunch of mayflies worrying about the afternoon getting hotter — and the afternoon, I must add, of a warm day in January.

    Your non-article value of 1998-2010 being -1.77 C/century — now, that stood my neck hairs on end!

  73. Ron C. says:
    July 28, 2014 at 5:05 pm

    Willis, I beg to differ. You dismiss out of hand the quality control done by the NMSs. I do not.

    Perhaps that is because I am more aware of the quality of their quality control than you are … I’ve seen hideous stuff that has passed local muster. But please note I don’t “dismiss it out of hand”, on the contrary. I simply suggest that it is incumbent on YOU to do quality control yourself, regardless of who else you think has done it.

    And your averaging of temperatures will lose the variations which are the very thing of interest.

    Sorry, but I don’t understand that. Which “very thing of interest” is lost by averaging?

    Finally, let me recommend the following paragraphs to you from the Armagh study cited above, which is an excellent introduction to the art and science of quality control of temperature datasets, a study which I would recommend that you read very carefully. From their introduction (emphasis mine):

    Our instrumental knowledge of climate change prior to the mid-19th century relies heavily on a few long meteorological series, most of which are from Europe. Even here, good instrumental series longer than 150 years are extremely rare, and it is essential that those that exist be carefully calibrated and standardized in order to make what comparison we can with modern measurements. Of particular importance are: (1) the longevity of the series; (2) the availability of meta-data concerning the instruments used and measurement practices; (3) knowledge of the position and exposure of instruments; (4) knowledge of any local time- dependent micro-climatic effects (e.g. from an urban heat island).

    To date, only a handful of temperature series fulfil these broad criteria, and most, if not all of them, suffer from deficiencies at some time or another. Some long series (e.g. central England; Manley, 1974; Parker et al., 1992) are in fact composite series, containing data from a number of sites. Although this can be an advantage in the sense that climate is being measured over a larger area than a single location, it can be a disadvantage if the same distribution of sites is not maintained (i.e. different sites are used at different times).

    So I greatly suspect that your long series are composites of two or more records from a single site, rather than single continuous records … but since you’re doing no quality control of any kind, how would you know? Heck, you’ve even included the CET in your dataset, which is a pastiche of stations, the number and identity of which changed over time … it’s a reasonably good pastiche, but not “raw data” by any definition.

    Look, I’m not trying to bust your chops, Ron, you’re well on the way. I’m just trying to let you know how to convert what you’ve done to a serious study. Saying that you “beg to differ” should be put off until you’ve actually taken a long hard look at your own data and have inspected the metadata for each and every station. Claiming that you trust the NMSs (national meteorological services) have done a good job will just make serious students of the subject laugh and pass your work by. A real scientist trusts nothing, especially himself, and certainly not the work of random foreign bureaucrats …

    My best to you,

    w.

  74. OK, Willis, I will think on that. Do you have a link that doesn’t return “forbidden”?

    [I put it into my dropbox here. -w.]

    • Reply to Ron C :-

      climate.arm.ac.uk/calibrated/airtemp/Met-Data-Vol6.pdf is probably the same. I would strongly recommend that we take Willis’ advice. He’s a good man.

  75. Steven Mosher says:
    July 28, 2014 at 9:50 am

    and don’t believe that “long stations” are necessarily the best. doubt everything

    I doubt that taking temperature records from two separate stations, and averaging them, gives you anything physically meaningful. So I DEFINITELY doubt that taking hundreds of station records and averaging them gives you anything physically meaningful either.

    Why? Intensive properties.

    This entire post, along with many others, is really academic.

  76. I did similar work with Tmin and Tmax from a larger dataset and found similar results. But because of the larger area it covered I found the large changes to Tmin took place regionally at different times. I suspect that this is a function of SST’s suddenly changing.

  77. Can the summer/winter anomaly simply be explained by the local populations heating their homes and work areas in winter time, absolutely adding heat to the local climate, whereas in summer, even with air conditioning (not extensively used in these mostly northern European locales), there is little or no heat added to the system (just concentrated in a/c exhaust)?

  78. Hi Guys

    I live in De Bilt (the small town of the Dutch meteorological institute) and I can tell you the town grew from marshy land, without roads or major built-up areas, into a fairly large town where the marsh was reclaimed and which is now a major crossroads of 2 main highways (all black tarmac). This all happened in the last 70 years. Furthermore, 30 km away they reclaimed 2500 km2 of inland sea into dry land (farmland). I think those effects would cause the anthropogenic local warming.

  79. “This entire post, along with many others, is really academic.”

    1. Uses data with large non-validated “corrections” and homogenisations.
    2. Fits linear trends to data that is not at all linear
    3. Does not state selection criteria, no visible QA before declaring data to be “world class”
    4. No uncertainty figures for data or fitted “trends”.

    Yes, that does seem to be typical of what passes in academia these days, so I suppose it is accurate to call it “really academic.” I’m sure he could get these unfounded “trends” published in peer-reviewed journals.
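    On point 4, the kind of uncertainty figure being asked for is cheap to compute. A minimal OLS sketch on invented annual anomalies (the numbers here are made up, and real station series are autocorrelated, which widens these error bars further):

```python
import numpy as np

# Invented annual anomalies: a 0.4 C/century trend plus 0.3 C noise.
rng = np.random.default_rng(1)
years = np.arange(1900, 2000)
temps = 0.004 * (years - years[0]) + rng.normal(0.0, 0.3, years.size)

# Ordinary least squares slope and its standard error (per year).
n = years.size
x = years - years.mean()
slope = (x * (temps - temps.mean())).sum() / (x * x).sum()
resid = temps - temps.mean() - slope * x
se = np.sqrt((resid ** 2).sum() / (n - 2) / (x * x).sum())

trend_per_century = 100 * slope
ci95_per_century = 100 * 1.96 * se   # approximate 95% interval half-width
```

    With 0.3 C of year-to-year noise over a single century, the 95% half-width comes out around 0.2 C/century, which is why a bare trend number without its error bar says little.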

  80. Ron C.: “Willis, I beg to differ. You dismiss out of hand the quality control done by the NMSs. I do not.”

    Indeed, you accept it without question and without examination. You then state that your results are obtained without homogenisation, which gives the impression you are working with raw data, a situation you then refuse to clarify.

    “Note: These trends were calculated directly from the temperature records without any use of adjustments, anomalies or homogenizing. ”

    So you have used adjusted and homogenised data “without any use of adjustments, anomalies or homogenizing. ” LOL.

    I suggest you look at the adjustments that are made to the many HISTALP records that you have included. Those long records have very little long term trend until they are “corrected”.

    http://climategrog.wordpress.com/?attachment_id=999

    http://www.slf.ch/fe/landschaftsdynamik/dendroclimatology/Publikationen/index_DE/Bohm_2010_ClimCha.pdf

  81. Jeff Albert

    That is why we did no averaging of temperatures. These are independent trends taken directly from the data. We analyzed the changes, not the temperatures!

    Greg Goodman

    My bad, trusting the CRUTEM4 data. So even if, as you seem to believe, the data keepers have baked some warming into the records, the analysis shows modest warming, mostly in the coldest months.

    • Reply to Ron C :-

      This is what I always say. If the adjusted / averaged NCDC / GISS / HadCRUT4 show a trend since 1880 of 0.64 C per century, then it’s likely that the ‘real’ value is even less than that.

  82. Good effort, but as most of your stations are urban, what you have mainly demonstrated is the UHI effect over that time period, especially for Paris, Berlin, Tempelhof (an urban airport) and Copenhagen. Prague only showed a very modest trend until the site was moved to the airport. De Bilt, like CET, is a composite of sites known to have problems with UHI – CET is a composite of changing stations, several of which are at airports (Ringway, Rothamsted – Luton, Squires Gate) and show differing trends. Look at the data from Frank Lansner at Hidethedecline to see these effects.

    I do not see any truly rural stations among your list, though several exist – you can find them at http://www.john-daly.com/ges/surftmp/surftemp.htm with most included in the BEST climate map. Good examples of single rural station records are Valentia and Armagh Ireland from the 1860s, Vardo Norway from 1949, Akureyri 1882, Haparanda from 1860, Lampasas Texas 1890, Punta Arenas Chile 1888, Adelaide Australia 1857 – urban with a cooling trend!, Snoqualmie Falls USA 1899, Thorshavn Faroe 1857, Angmagssalik Greenland 1895, Darwin Australia 1882 – urban with cooling trend, Lander USA 1892, Lamar Colorado USA 1898, Spickard Missouri 1896, Concord USA 1870, Sodankyla Finland 1908, Farmington Maine 1890, Gloversville NY 1893, West Point and Central Park 1820 showing a UHI effect over this period of approx 2 C, Waverley USA 1883, etc. Most of these show no trend, 1930s warmer than 1990s, or cooling – especially in the high-latitude NH where, according to Arrhenius, a doubling of CO2 should have its greatest effect.

  83. Michael Moon [July 28, 2014 at 10:36 am] says:

    Mosher,

    The objections here to “adjusted” data, mine included, are based on traditional scientific/engineering standards. Data revision with no political agenda, properly done, is not objectionable. Clearly our current Federal government functionaries have their thumbs on the scale, as Steyer and his ilk have instructed the Dem lackeys in Washington to do.

    Excellent comment there.

    Let me ask this … Mosher (or anyone else), is there *any* pure station that you are aware of? My criterion for “pure,” in my humble opinion, is that the data have not been tampered with *and* the station has not moved, nor have its surroundings been compromised. Does such a thing even exist, where we can compare data at one single location over a long time period?

    Furthermore, why can we not create at least one single station today using the exact original equipment, same time of observation, carefully recreating the same surroundings to ensure an Apples to Apples scientific control? It seems to me that this is the most scientific approach to comparing temps from one time period to another. Adjusting temp data is utterly ridiculous. Even if these people could be trusted (and I don’t think so) why introduce error pathways and new variables to something so damn simple?

    I’m sure I remember Leif stating that something similar to this idea is being done with sunspots (using original centuries old telescopes and locations). If I am remembering correctly I would ask him to explain this and perhaps suggest how this might be accomplished in the temperature gathering field.

  84. Ron C. says:
    July 29, 2014 at 4:09 am

    Jeff Alberts

    That is why we did no averaging of temperatures. These are independent trends taken directly from the data. We analyzed the changes, not the temperatures!

    Ron,

    I’m not seeing any trends for individual stations, only single numbers for all of them, unless I’m missing something. You’ve combined stations in some way to get single numbers. If you’re talking about trends for something someone else has already averaged, the problem still exists. You’ve only ended up with a fanciful statistical construct that has no meaning in the real world.

    • Reply to Jeff Alberts :-

      The trends for individual stations are in the spreadsheet.

  85. Ron C,
    I have 120 million records from about 25,000 stations that I’ve assembled into averages for various sized areas, working with Min, Max, the day-to-day difference in Min and Max, overnight cooling, surface pressure, humidity and rain. Just follow the URL in my name.
    I’m currently generating a 1 x 1 Lat/Lon box for the globe that I will upload once it’s finished.
    I’ve taken the other tack: I do minimal filtering of stations, so there can be no quibbling over which stations I included. The important part is that I came to a similar conclusion – “warming” is just swings of Min temp: http://www.science20.com/virtual_worlds/blog/global_warming_really_recovery_regional_cooling-121820
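The 1 × 1 lat/lon gridding described above can be sketched as follows. This is only an illustration of the binning idea: the station coordinates and temperatures below are invented, not drawn from the actual 25,000-station dataset.

```python
# Sketch of binning station readings into 1-degree lat/lon grid boxes.
# Station list and values are made up for illustration.
from collections import defaultdict

stations = [
    # (latitude, longitude, mean temperature in deg C) -- hypothetical
    (52.4, 13.3, 9.6),   # a Berlin-area station
    (52.1, 13.8, 9.2),   # a second station in the same box
    (48.2, 16.4, 10.1),  # a Vienna-area station
]

boxes = defaultdict(list)
for lat, lon, temp in stations:
    # Floor to the 1 x 1 degree box containing the station
    boxes[(int(lat // 1), int(lon // 1))].append(temp)

# Average within each box so dense station clusters are not over-weighted
box_means = {box: sum(v) / len(v) for box, v in boxes.items()}
print(box_means)
```

Averaging within a box before combining boxes is what keeps a cluster of nearby urban stations from dominating a regional mean.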

  86. Jeff

    As Richard says, look at the individual station sheets. There you will find the procedure: calculate the monthly slopes for that station, then average those slopes to get the station trend.
    No temperatures are averaged across stations.
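That per-station procedure can be sketched in outline. The data below are synthetic (a contrived 0.004 °C/yr series standing in for a real station sheet); only the shape of the calculation follows the description above.

```python
# Minimal sketch: fit a trend to each calendar month's series across
# years, then average the 12 monthly slopes for the station trend.
# No temperatures are averaged across stations.

def slope(xs, ys):
    """Ordinary least-squares slope, skipping missing (None) values."""
    pairs = [(x, y) for x, y in zip(xs, ys) if y is not None]
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    num = sum((x - mx) * (y - my) for x, y in pairs)
    den = sum((x - mx) ** 2 for x, _ in pairs)
    return num / den

years = list(range(1800, 1811))
# monthly_series[m] = that month's mean each year (None = blank month)
monthly_series = {m: [5.0 + 0.004 * (y - 1800) for y in years]
                  for m in range(12)}
monthly_series[0][3] = None  # a scattered blank does not stop the fit

monthly_slopes = [slope(years, monthly_series[m]) for m in range(12)]
station_trend = sum(monthly_slopes) / 12
print(round(station_trend * 100, 2), "deg C per century")
```

Because each slope is fitted within one calendar month, seasonal differences in level never enter the trend.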

  87. Micro

    Impressive work. Much more quantitative than my admittedly qualitative approach. Good to see that your work is supportive.

    • Ron C. commented

      Good to see that your work is supportive.

      And this is important: different people doing similar but different work, and getting similar results – results that do not show the same trend that over-processing the data produces.

  88. Willis Eschenbach says:
    July 28, 2014 at 4:37 pm

    Willis, I was a little surprised by your suggestion to use a breakpoint identification algorithm for reviewing the subject data. If I remember correctly, there was a fairly lengthy discussion here (at WUWT) on this subject just recently. Wasn’t it something to the effect that the algorithm inherently added warming based on the way it was set-up? Or, is there a difference between a breakpoint identification algorithm, and the one that was discussed previously (which didn’t just identify, but also adjusted)?

    Just curious. Thanks in advance.

    rip

  89. Ron C. says:

    Greg Goodman

    My bad, trusting the CRUTEM4 data. So even if, as you seem to believe, the data keepers have baked some warming into the records, the analysis shows modest warming, mostly in the coldest months.

    ====

    Thanks Ron. I’m not saying CRUTEM4 has had unjustified changes made: I have not looked. My main objection was that the way you presented it read as though it were not adjusted/homogenised data, which it is.

    The HISTALP data, from which you use several stations, have huge adjustments, and if you read the Boehm paper it is fairly obvious there are some pretty stupid errors in what they’ve done. Yet they are playing games and hiding the data to prevent anyone validating or correcting what they’ve done.

    That is a shame because they are potentially some of the most valuable long term records.

    Yet more activist scientists that seem to think throwing scientific integrity under a bus is somehow going to help the enviro cause. In fact they are destroying it.

  90. ripshin says:
    July 29, 2014 at 11:42 am

    Willis Eschenbach says:
    July 28, 2014 at 4:37 pm

    Willis, I was a little surprised by your suggestion to use a breakpoint identification algorithm for reviewing the subject data. If I remember correctly, there was a fairly lengthy discussion here (at WUWT) on this subject just recently. Wasn’t it something to the effect that the algorithm inherently added warming based on the way it was set-up? Or, is there a difference between a breakpoint identification algorithm, and the one that was discussed previously (which didn’t just identify, but also adjusted)?

    Just curious. Thanks in advance.

    rip

    Good question, rip. The issue in that discussion was the often indiscriminate use of the breakpoint algorithm to identify putative breakpoints which are NOT shown in the meta-data.

    I’m suggesting going at it the other way: use the metadata to identify physical changes (instrumentation, location, etc.).

    Now, if your accurate thermometer breaks and you replace it with an equally accurate thermometer, no harm, no foul. You don’t want to do anything. But if a location change has introduced a drop or rise in the average temperature, or you’ve replaced an accurate thermometer with a biased thermometer, you need to deal with that.

    So I’m recommending the use of the breakpoint algorithm to VERIFY that that known physical change has resulted in a step change in the temperature. How you deal with that is up to you.

    w.
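The verification Willis describes might look something like this in outline. Everything here is invented for illustration: the anomaly series, the position of the documented move, and the crude two-sigma rule of thumb are assumptions, not his prescription.

```python
# Sketch: given a change date from the station metadata, check whether
# the series actually shows a step at that date. Data are invented.
from statistics import mean, stdev

anomalies = [0.1, -0.2, 0.0, 0.1, -0.1,   # before documented station move
             0.6, 0.8, 0.7, 0.9, 0.6]     # after the move
move_index = 5  # position of the documented move in the series

before, after = anomalies[:move_index], anomalies[move_index:]
step = mean(after) - mean(before)
# Crude check: is the step large relative to the scatter on either side?
noise = max(stdev(before), stdev(after))
verified = abs(step) > 2 * noise
print(f"step = {step:.2f} C, verified: {verified}")
```

The key difference from the earlier WUWT discussion is direction: the metadata supplies the candidate date first, and the statistics only confirm or reject it, rather than the algorithm hunting for breakpoints on its own.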

  91. This is really interesting, but I have a question. As someone who has been recording temperatures as part of non-climate studies for the past 45 years – I note that the lab thermometers I started out using in the 1960s had variations of about 2 degrees C within a lot of 25 lab mercury-type thermometers. In recent decades I have noted the accuracy variation has improved to about 1 degree C among a similar lot of thermometers. That’s comparing the same brand and production cycles of thermometers. Changing brands produced different error variability. As well, in our research we noted that individuals recording temperatures varied in how they read the same temperature on the same thermometer. I believe the science of measurement is called metrology.

    Going back to the 1700s, you would be dealing with hand-made thermometers from many different producers, or self-made by each data gatherer, and obviously with even higher variance. While you can average out measurements from similar variable sources, you can’t accurately average out measurements with different variables – different instrument producers, brands, measurement protocols, etc. So my question is: how have these obvious inaccuracies been sorted out of not only historical temperature recordings and recording protocols, but even present-day ones, to accurately conclude increments of 0.1 degrees C or less?
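Durwood's point about averaging can be illustrated with a small simulation. The bias and scatter values below are arbitrary; the sketch only shows the statistical distinction: independent random error shrinks as readings accumulate, while a shared instrument bias survives averaging untouched.

```python
# Averaging many readings shrinks independent random error, but a
# systematic offset shared by all readings is fully preserved.
import random

random.seed(42)
true_temp = 15.0
instrument_bias = 0.8  # same thermometer, same offset every reading
readings = [true_temp + instrument_bias + random.gauss(0, 1.0)
            for _ in range(10_000)]

avg = sum(readings) / len(readings)
# The random scatter (sigma = 1 C) has largely averaged away,
# but the 0.8 C bias is still present in the mean.
print(f"average error: {avg - true_temp:.2f} C")
```

This is why differing producers, brands and protocols matter: each introduces its own systematic component, and no amount of averaging within one instrument's record removes it.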

  92. Durwood

    Good comment. It is also why I avoid averaging the temperatures themselves. As noted above, the records are imperfect. Considerable effort is applied to the station reports to identify erroneous daily data points, which are removed before calculating the station’s monthly average. If too many dailies are missing, the average for that month is left blank. Since we are working with trends of station-specific monthly averages, occasional scattered blanks do not impede the linear regressions.
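That screening rule can be sketched as follows. The 20-day threshold here is an illustrative choice, not necessarily the one used in the study; the point is that under-sampled months drop out as blanks rather than contaminating the regression.

```python
# Sketch: average a month's daily readings only if enough days survive
# screening; otherwise leave the month blank (None) so it simply drops
# out of the later trend regression.

def monthly_mean(dailies, min_days=20):
    """Mean of valid dailies, or None if too many days are missing."""
    valid = [t for t in dailies if t is not None]
    return sum(valid) / len(valid) if len(valid) >= min_days else None

full_month = [10.0] * 25 + [None] * 6      # 25 valid days -> usable
sparse_month = [10.0] * 8 + [None] * 23    # too many missing -> blank

print(monthly_mean(full_month))
print(monthly_mean(sparse_month))
```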

    But your main point is that we cannot claim to know more than we know–the analysis is qualitative and meant to provide an historical frame of reference.

Comments are closed.