A way of calculating local climate trends without the need for a government supercomputer

This method may or may not have merit – readers are invited to test it for themselves; the method is provided. – Anthony

Guest essay by Ron Clutz

People in different places are wondering: What are temperatures doing in my area? Are they trending up, down or sideways? Of course, from official quarters, the answer is: The globe is warming, so it is safe to assume that your area is warming also.

But what if you don’t want to assume and don’t want to take someone else’s word for it? You can answer the question yourself if you take on board one simplifying concept:

“If you want to understand temperature changes, you should analyze temperature changes, not the temperatures.”

Analyzing temperature change is in fact much simpler and avoids data manipulations like anomalies, averaging, gridding, adjusting and homogenizing. Temperature Trend Analysis starts with recognizing that each microclimate is distinct, with its own unique climate patterns. So you work on the raw, unadjusted data produced, validated and submitted by local meteorologists. This is accessible in the HADCRUT3 dataset made public in July 2011.

The dataset includes 5000+ stations around the world, and only someone adept with statistical software running on a robust computer could deal with all of it. But the Met Office provides it in folders that cluster stations according to their WMO codes.

http://www.metoffice.gov.uk/research/climate/climate-monitoring/land-and-atmosphere/surface-station-records


Anyone with modest spreadsheet skills and a notebook computer can deal with a set of stations of interest. Of course, there are missing datapoints which cause much work for temperature analysts. Those are not a big deal for trend analysis.

The method involves creating for each station a spreadsheet that calculates a trend for each month for all of the years recorded. Then the monthly trends are averaged together for a lifetime trend for that station. To be comparable to others, the station trend is normalized to 100 years. A summary sheet collects all the trends from all the sheets to provide trend analysis for the geographical area of interest.
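The spreadsheet steps just described can also be sketched in a few lines of code. The Python below is a minimal illustration, not the workbook itself: it assumes a hypothetical data layout (a dict mapping each calendar month to that station’s (year, temperature) pairs), computes a least-squares slope per month, averages the twelve slopes, and scales the result to °C per century. Missing readings simply never enter a month’s list, which is why gaps are not a big deal for trend analysis.

```python
# Minimal sketch of the per-station trend method described above.
# `data` maps month (1-12) -> list of (year, temp) pairs from the raw
# record; this layout is illustrative, not the workbook's actual one.

def monthly_trend(pairs):
    """Ordinary least-squares slope of temperature vs. year (deg C/yr)."""
    n = len(pairs)
    mean_yr = sum(yr for yr, _ in pairs) / n
    mean_t = sum(t for _, t in pairs) / n
    num = sum((yr - mean_yr) * (t - mean_t) for yr, t in pairs)
    den = sum((yr - mean_yr) ** 2 for yr, _ in pairs)
    return num / den

def station_trend_per_century(data):
    """Average the twelve monthly slopes, normalized to deg C per 100 years."""
    slopes = [monthly_trend(p) for p in data.values() if len(p) >= 2]
    return 100 * sum(slopes) / len(slopes)
```

For example, a synthetic station warming at a steady 0.01°C per year in every month comes out at 1.0°C/century, which is the normalization described above.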

I have built an Excel workbook to do this analysis, and as a proof of concept, I have loaded in temperature data for Kansas. Kansas is an interesting choice for several reasons:

1) It’s exactly in the middle of the US with no (significant) changes in elevation;

2) It has a manageable number of HCN stations;

3) It has been the subject lately of discussion about temperature processing effects;

4) Kansas legislators are concerned and looking for the facts; and

5) As a lad, my first awareness of extreme weather was the tornado in Oz, after which Dorothy famously said: “We’re not in Kansas anymore, Toto.”

I am not the first one to think of this. Richard Wakefield did similar analyses in Ontario years ago, and Lubos Motl did trend analysis on the entire HADCRUT3 in July 2011. With this simplifying concept and a template, it is possible for anyone with modest spreadsheet skills and a notebook computer to answer how area temperatures are trending. I don’t claim this analysis is better than those done with multimillion-dollar computers, but it does serve as a “sanity check” against exaggerated claims and hype.

For the Kansas example, we see that BEST shows on its climate page that the State has warmed 1.98 +/-0.14°C since 1960. That looks like temperatures will be another 2°C higher in the next 50 years, and we should be alarmed.

Well, the results from temperature trend analysis tell a different story.

From the summary page of the workbook:


Area: State of Kansas, USA
History: 1843 to 2011
Stations: 26
Average Length: 115 years
Average Trend: 0.70 °C/century
Standard Deviation: 0.45 °C/century
Max Trend: 1.89 °C/century
Min Trend: -0.04 °C/century


So in the last century the average Kansas station has warmed 0.70+/-0.45°C, with at least one site cooling over that time. The +/- 0.45 deviation shows that climate is different from site to site even when all are located on the same prairie.

And the variability over the seasons is also considerable:


Month °C/century Std Dev
Jan 0.59 1.30
Feb 1.53 0.73
Mar 1.59 2.07
Apr 0.76 0.79
May 0.73 0.76
Jun 0.66 0.66
Jul 0.92 0.63
Aug 0.58 0.65
Sep -0.01 0.72
Oct 0.43 0.94
Nov 0.82 0.66
Dec 0.39 0.50


Note that February and March are warming strongly, while September is sideways. That’s good news for farming, I think.

Temperature change depends on your location and time of the year. The rate of warming is not extreme and if the next 100 years is anything like the last 100, in Kansas there will likely be less than a degree C added.


Final point:

When you look behind the summary page at BEST, it reports that the Kansas warming trend since 1910 is 0.75°C +/-0.08, close to what my analysis showed. So the alarming number at the top was not the accumulated rise in temperatures; it was the rate for a century, projected from 1960. The actual observed century rate is far less disturbing. And the variability across the state is considerable and is much more evident in the trend analysis. I had wanted to use raw data from BEST in this study, because some stations showed longer records there, but for comparable years the numbers didn’t match with HADCRUT3.

Not only does this approach maintain the integrity of the historical record, it also facilitates what policy makers desperately need: climate outlooks based on observations for specific jurisdictions. Since the analysis is bottom-up, microclimate trends can be compiled together for any desired scope: municipal, district, region, province, nation, continent.

If there is sufficient interest in using this method and tool, I can provide some procedural instructions along with the template.

The Kansas Excel workbook is provided as an example: HADCRUT3 Kansas.xls


73 thoughts on “A way of calculating local climate trends without the need for a government supercomputer”

  1. My understanding of the AGW theory (whoops; hypothesis) is that the most pronounced warming should be present in the coldest and driest air masses, and during winter. While that’s interpreted to be polar regions I think Kansas during the winter is in a position to compete. It’s interesting, therefore, that while February and March exhibit the strongest upward seasonal trend (seemingly validating the hypothesis; oops, theory), December and January exhibit the smallest upward trend (contradicting the … oh, forget it) excepting September. And following February and March the next highest trend was in July which is where it shouldn’t be.

    Another example of the mysterious AGW thingy that also backflips when it comes to Arctic and Antarctic regions.

  2. September is when briefly neither heating nor air conditioning are used much in buildings so the urban heat island effect on thermometer stations is lessened.

  3. CET is the best known and the longest regional data set. It may come as a surprise to many that its 360 year long record conclusively shows that increased insolation suppresses long term temperature up-trend (see link)

  4. “For the Kansas example, we see that BEST shows on its climate page that the State has warmed 1.98 +/-0.14°C since 1960. That looks like temperatures will be another 2°C higher in the next 50 years, and we should be alarmed.”
    This is misinterpreting the BEST results. The page clearly labels the result as “Warming since 1960
    (°C / century)”. Ron gets it right later, but there is no reason to say BEST is misleading anyone. It does NOT look like it will be warming 2C in 50 years; it looks exactly like the warming will be 2C in the next 100 years.

    “Since the analysis is bottom-up, microclimate trends can be compiled together for any desired scope: municipal, district, region, province, nation, continent.”
    But what you would really want is some sort of weighted average. If the eastern half of Kansas had 20 stations and the western half (with far fewer people and cities) had 6, simply averaging these will not give a good estimate of the warming for the state. Similar problems continue when going to national or global estimates.

    “So in the last century the average Kansas station has warmed 0.70+/-0.45°C …”
    One rather interesting thing here is that this means there was COOLING for the first half of the century, and warming for the second half. If the stations warmed ~ 1 C in the last 50 years (ie 2 C/century), then simple subtraction says the stations must have cooled by ~ 0.3 C in the previous 50 years to get the 0.7 C figure for the entire 100 years.
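The subtraction described in this comment can be written out explicitly. A short illustrative calculation, using only the figures already quoted in the thread (no new data):

```python
# Explicit version of the halves-of-the-century arithmetic above.
total_100yr = 0.70                      # deg C over the full century (trend analysis)
recent_rate = 2.0                       # approx. deg C/century since ~1960 (BEST figure)
recent_50yr = recent_rate * 0.5         # ~1.0 C of warming in the last 50 years
early_50yr = total_100yr - recent_50yr  # implied change in the first 50 years
print(round(early_50yr, 2))             # -0.3, i.e. net cooling in the earlier half
```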

  5. If anyone is interested in looking a little deeper into the weather station trends, here are a few pointers.

    First, the weather station data is the air temperature measured in an enclosure placed at eye level 1.5 to 2 m above the ground. The minimum temperature is an approximate measure of the bulk air temperature of the base of the local weather systems as they pass through. The maximum temperature is a measure of the convective mixing of the warm air rising from the solar heated ground below with the cooler air at the thermometer level. This depends on cloud cover and the surface evaporation.

    In many regions, the prevailing weather systems are formed over the oceans. The trend in the minimum temperature could have a strong signal from the trend in ocean surface temperature in the region of origin. For example, in California, the PDO sticks out like a sore thumb. In the UK and surrounding regions it is the AMO.

    To start, download the relevant ocean trend (AMO, PDO, etc.), do a 5-year rolling average for both the ocean and station data, do an XY plot and look for the ‘fingerprint’. Then run a linear fit to the 5-year averages and get the slope from the Excel ‘Trendline’ function. The differences in slope should give information on the local urban heat island effects.

    Now subtract the maximum temperatures from the minimum and look at the trend in the delta T. The delta T will increase with sunlight and low rainfall (less surface evaporation).
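The procedure in this comment is easy to sketch in code. A minimal Python version, assuming the station and ocean-index series have already been downloaded as plain annual lists (the variable names in the usage comment are hypothetical): a 5-year rolling mean for each series, then a least-squares slope, which is what Excel’s Trendline reports for a linear fit.

```python
# Sketch of the rolling-average-plus-slope comparison described above.

def rolling_mean(values, window=5):
    """Trailing rolling average; returns len(values) - window + 1 points."""
    return [sum(values[i:i + window]) / window
            for i in range(len(values) - window + 1)]

def trend_slope(xs, ys):
    """Least-squares slope, equivalent to Excel's Trendline for a linear fit."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Usage (hypothetical annual series of equal length):
#   station_smooth = rolling_mean(station_minima)
#   index_smooth = rolling_mean(pdo_index)
# then compare trend_slope(years, station_smooth) with trend_slope(years, index_smooth)
```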

    By the way, none of this has anything to do with CO2 ……..

    Further details can be found at:

    http://scienceandpublicpolicy.org/originals/pacific_decadal.html

    http://www.venturaphotonics.com

  6. Feb. and Mar. this year, in “my” part of KS were 8.37 and 2.75 deg F below avg. (high temps)
    (per AccuWeather records). And if Big Joe B. is right, this trend may continue … Brrrr …

  7. Since I have spreadsheet skills on par with a “Climate Scientist”, could someone tell me if one of the station records is a reliable one that would pass Mr. Watts’ station siting criteria for minimal UHI? And if so, what is its trend all by itself?

  8. When you look behind the summary page at BEST, it reports that the Kansas warming trend since 1910 is 0.75°C +/-0.08
    ===============
    Paul Homewood:USHCN Adjustments In Kansas
    In addition to recent temperatures being adjusted upwards, we also find that historical ones have been adjusted down. So, for instance we find that the January 1934 mean temperature at Ashland has been adjusted from 3.78C to 3.10C, whilst at Columbus there is a reduction from 4.00C to 3.52C.

    In total, therefore, there has been a warming trend of about 1C added since 1934

    http://notalotofpeopleknowthat.wordpress.com/2014/06/28/ushcn-adjustments-in-kansas/

  9. @ F Ross at 9:21 am…The AccuWeather record and forecast has July being 1.77 deg below avg.
    TOFA says a couple deg. above..will be interesting to see which is closest to actual temps.

  10. “For the Kansas example, we see that BEST shows on its climate page that the State has warmed 1.98 +/-0.14°C since 1960. That looks like temperatures will be another 2°C higher in the next 50 years, and we should be alarmed.”

    STOP. This is just the stupid simplistic kind of thing that AGW is based on.

    First there is no reason to fit a linear model to climate data because nothing about it is linear. It’s a fallacy before you do the calculation. Second and probably worse is the idea that the average straight line through all the ups and downs of climate will be the same for the next 100 years as it was for the last 100 (or 50 or 35 or whatever amount of data you happen to have for one area). What on earth is that idea based on?

    The whole problem is this idea of a “trend”. Climate does not “trend”, it changes rather erratically. Somewhere in this word “trend” is an unspoken assumption that once a “trend” is measured we can assume that it will be “likely” to continue. THIS IS A FALLACY.

    “If you want to understand temperature changes, you should analyze temperature changes, not the temperatures.”

    That’s great. I’ve been saying this for years but stick to looking at rate of change of temperature and stop thinking of “trends”.

    BTW the land data you are looking at is not HadCrut3; that is a land-and-sea dataset made from combining HadSST2 and CRUTem3. I think you mean CRUTem3. It does wonders for credibility if you at least know the name of the dataset you are using and know the difference between land and sea.

    You should probably correct this in the article.

  11. Another problem you have with this approach is what I call “cosine warming”.

    http://climategrog.wordpress.com/?attachment_id=209

    It depends upon where you start and finish in the data. This has been heavily discussed, usually with reference to “cherry picking”. Even if you do not cherry pick but use the data you have, if each station you look at has a different time span, your idea of “normalising” them all to a per-century “trend” will be heavily biased by where any particular record starts and ends.

    You make the erroneous assumption that they are all somehow comparable once you have “normalised” them.

    This is a major problem since you can easily double a rate of change found in a cosine, which has zero long term trend anyway. The whole thing is a fiction.
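This “cosine warming” point is easy to demonstrate numerically: a pure cosine has zero long-term trend, yet a linear fit over a partial window can report a sizeable per-century rate, depending entirely on where the record starts and ends. A small sketch with an illustrative ~60-year, 0.3°C-amplitude cycle (both numbers are assumptions, not data):

```python
# Demonstration of "cosine warming": the fitted trend of a zero-trend
# cosine depends on the window. Period and amplitude are illustrative.
import math

def slope(xs, ys):
    """Least-squares slope."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

period, amp = 60.0, 0.3   # years, deg C (illustrative AMO/PDO-like cycle)

# A 30-year record that happens to start on the rising limb of the cycle:
years = list(range(30))
temps = [amp * math.cos(2 * math.pi * (y - period / 2) / period) for y in years]
partial_trend = 100 * slope(years, temps)   # a sizeable deg C/century "trend"

# The same cosine sampled over a full period: essentially no trend.
years_full = list(range(60))
temps_full = [amp * math.cos(2 * math.pi * y / period) for y in years_full]
full_trend = 100 * slope(years_full, temps_full)
```

With these numbers the 30-year window fits to roughly 2°C/century, while the full-period fit is near zero, even though the underlying signal is identical.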

  12. http://www.metoffice.gov.uk/hadobs/hadcrut3/

    “HadCRUT3 dataset
    The gridded data are a blend of the CRUTEM3 land-surface air temperature dataset and the HadSST2 sea-surface temperature dataset. ”

    However, having read the page you linked to, I can see why you were confused. It certainly reads like HadCrut3 is the land station data you used. This is very poorly presented.

  13. For the last several months I have been downloading Environment Canada data for sites that I have interest in, sometimes because people have said it is warming and I really haven’t noticed much.

    The data says some places are warming, some places are not. Some show significant MEAN warming, but virtually every data set I have looked at shows NO INCREASE in Extreme Maximum or Mean Maximum temperatures. What they do show is that it is getting LESS COLD. That is, the Extreme Minimum and the Mean Minimum temperatures are not as low as they were 60, 80 or 100 years ago. That increases the Mean Temperature and provides a Global Warming “signal” even though it isn’t getting hotter – at least in my terms.

    And in those locations where the Extreme Maximum Annual temperatures and the Maximum Mean Annual temperatures do show an upward trend, it is because seasonally low maximums (winter) have increased, not because summer temperatures have increased. In fact in several cases I have looked at (Winnipeg, Manitoba for example) the summer maximum temperatures show a slight decline in trend (cooling) while the winter maximum temperatures show a slight rise in trend (warming). In fact, I see a number of CONVERGING trend lines in the data when plotting sites with 80 to over 100 years of temperature data.

    I have seen some papers on WUWT suggesting that Average global warming is more due to LESS cold than actual warming. Of course, that is net warming but I think it is important to know what is causing that warming in the data because it isn’t because it is getting so hot we are all going to have to move towards the poles. It seems that weather is simply getting less extreme, with low mean and low extreme temperatures getting LESS cold.

    With the exception of our desire in my region to see extreme cold weather to kill the western pine beetle, this would seem to be a benefit. It isn’t any hotter in the summer (the hot years were back in the 30’s and 40’s) and you don’t have to use so much heat in the winter (as also opposed to the 30’s and 40’s and the turn of the century as where I live temperatures regularly hit -45 C and even lower – Dec 1924 & Jan 1935: -53.9, Feb 1936: -55.6, Environment Canada data, Rocky Mountain House, AB)

    I am not a statistician, climatologist or mathematician, just an old engineer/rancher interested in comparing data to what I have experienced and heard from my prairie ancestors.

    I have no idea if the Environment Canada data has been adjusted, but I draw my speculation from what I have downloaded from the EC site. I also admit that I have at times spliced data due to changes in the local stations, since one will have a record from say 1917 to 1940 and a second from 1942 to 2007. A lot of stations stop at 2007 for some reason. And a lot of stations have missing data and breaks, especially some of the really old and rural sites.

    And finally, since what I have provided is just anecdotal so far, I have posted two sites I just started some work on yesterday for people to examine if they wish.

    Rocky Mountain House, AB: https://www.dropbox.com/s/j2bu889j037q6j6/Rocky%20Mountain%20House.ods

    Winnipeg, MB:

    https://www.dropbox.com/s/98i6yrkf94o8zpx/Winnipeg.ods

  14. The local rag ran an article about the warming of the Northeast and Southwest and a companion article, in which they claim that the average summer temps in Carson City, NV have risen by 6.8 degrees F since 1984.
    The last couple of summers we have not had more than 1-2, or NO days of 100+ temps.
    This June seems to be warmer than usual. The incompetent idiots who supply the weather data to the NV Appeal always have highs 1-4 degrees higher than anything the TV weather stations report.
    In any event, an increase of 6.8 degrees seems over the top!
    Maybe the temps are recorded beside a compost pile? Since they are all in for AGW, I wouldn’t put it past them.

  15. I like this approach because whenever you take any kind of average you are losing information, and in this case the averaging is being done AFTER the trend analysis rather than before, making each of them easier to both understand and address. Thus, if someone wants to argue that the mean for the state needs weighting, they can do that to their heart’s content separately from the trend analysis; and if someone wants to complain that the trends are not calculated properly, they can do that separately from the averaging.

    The bad thing about it is that it gives people lots of scope to do exactly that – keep playing around with weighting and trend analysis until they get an answer that suits them. This is the crux of the matter with land temperature records – they are full of so many little problems (TOBS bias, UHI effects, station site changes) that you can pretty much justify almost any manipulation you like until you come to the answer you are looking for.

    At the end of the day, historical surface records are only ever going to be a battle-ground in the climate change argument and one in which the equivalent of trench warfare ca. 1914-1918 is being undertaken: Lots of effort and many casualties for little overall gain.

  16. ecowan says:
    July 12, 2014 at 11:26 am

    Take a look at the minimum temperatures in Carson. Per my post above, I suspect the minimums have risen, thus causing the “Average” temperature to rise. I have looked at several sites and most of them show this warming bias due to the low temperatures becoming LESS cold over time.

  17. Thank you so much for this. I have been collecting data from the longest records in the world in CRUTem4. I will probably end up with 28 stations with good full year coverage both before and after 1850 (which people seem to use as the start of the industrial era)

  18. vukcevic says:
    July 12, 2014 at 8:37 am
    CET is the best known and the longest regional data set. It may come as a surprise to many that its 360 year long record conclusively shows that increased insolation suppresses long term temperature up-trend (see link)
    ——————————————————————————————-
    Explain what you mean by “insolation” please.

  19. Gamecock says:
    July 12, 2014 at 8:52 am

    “funky”

    I’ve been making statements such as this for almost 6 years since getting a fast internet connection in fall of 2008. That funky thing – average global temperature – seems to have stuck in people’s minds regardless of how useless it is. Now in a BSk and today at the local air station (KELN) the temp just reached 102°F. Six months from now we will have a night or two (or weeks) when it will go to -10°F; or maybe colder. Our precip is mostly controlled by the rain shadow effect of the Cascade Mountains; averaging about 8 inches per year.

  20. You can look up the corresponding GHCN unadjusted trends on this globe map if you want. Just click on stations for details.

    “The +/- 0.45 deviation shows that climate is different from site to site even when all are located on the same prairie.”

    It doesn’t show that. It shows that the raw readings, which are actually reports of max/min markers at certain times and places, yield a different calculated monthly average from station to station. You choose to disregard adjustments, but they are what are needed to see whether the deviations are indeed climate.

    A station changing its time of observation can easily make a difference of 0.8°C to the average monthly temperature calculated from the readings. That’s not a difference in climate.

  21. what a difference six months can make!!!!

    13 July: Khaleej Times: Unmasking the chilling conspiracy of global warming
    A tragedy turned an Indian chemistry graduate into a filmmaker. Now he wants to expose what he says is the global warming myth, reports Nivriti Butalia…
    Nine countries in the past four months. Not a bad record, especially when you consider that Zanane Rajsingh, a 26-year old filmmaker from Gujarat, first went abroad in January…
    His new film is a full-length feature, an American-Indian venture on climate change produced by US-based company Nanoland. The script is by Dr Rajeshkumar Acharya, Nanoland’s corporate advisor.
    The preparation for the movie took time: Six months of reading to be convinced that global warming was a farce, as claimed by the script writer, Dr Acharya…
    “People think climate change is (due to) global warming. But global warming is a myth.”
    That’s what the film talks about, the global warming ‘conspiracy’.
    The first part of the film releases in January 2015. Currently, the director is on a packed schedule and constantly travelling…

    http://www.khaleejtimes.com/kt-article-display-1.asp?xfile=data/nationgeneral/2014/July/nationgeneral_July55.xml&section=nationgeneral

    8 Feb: Times of India: Ankur Tewari: ‘I am Earth, ravaged by climate change’
    All these and other mysteries about the environment will be dealt with in India’s first ‘feature film’ on climate change – It’s Tomorrow’ – written by non-resident Gujarati, Dr Rajeshkumar Acharya, who currently lives in the US. It is directed by Zanane Rajsingh, who was born in Nagpur but studied in Ahmedabad. The director said they had initially planned a documentary but had finally decided to make a feature-length film on the subject. Interestingly, the movie has the Earth talking to the audience directly and telling them about its plight caused by climate change.
    Will human beings become extinct in another 100 years? Startling questions like this one about the environment and life on earth will be answered in the film which is about 20% ready. It will explain climate change, its impact and the factors that influence climate on earth, Rajsingh said.
    “‘It’s Tomorrow’ aims to sensitize the audience about climate change and global warming. It is a wake-up call to the world. The one-and-a-half-hour film explores what a worst-case scenario might look like and gives a glimpse of the kind of floods and other natural disasters that might visit us in future…
    Talking about the crew of the film, Acharya said that ‘It’s Tomorrow’ is a political drama that revolves around the ongoing climate change phenomenon around the world. “Hence we’ve political giants, renowned scientists and actors playing roles in it. Actor Letiana Bohlke from Colombia will play the role of a strong political leader of Argentina, while Xenia Henriquez will essay the role of a former president of Philippines. Kiyoshi Kuzuwa, a noted Japanese scientist, will play himself in the film,” said Acharya…
    The film is jointly produced by a US-based firm Nanoland Inc and an Indian firm, Nanoland Ltd…

    http://timesofindia.indiatimes.com/city/ahmedabad/I-am-Earth-ravaged-by-climate-change/articleshow/30026768.cms

  22. Changing climates? Forget temperature; watch the vegetation. If you do not want to use Koeppen, use the USDA growing regions map (plant and seed catalogs) and see if the plants in a higher numbered region are starting to grow naturally and thrive in your (lower numbered?) region. Crepe Myrtles grow well in zone 7-9, for instance. I live in Maryland and CAGW becomes real for me when palm trees grow naturally in my back yard, moving zone 9 vegetation up to the north end of zone 7. How long will that take do you think?

  23. Agree with Gamecock. The author deals with temperature and talks of “climate”. The climate of a site or region cannot be described by talking of temperature alone. Equally important are factors such as humidity, altitude, prevailing winds, precipitation, and daily and seasonal variations in them. The CAGW alarmists “jumped the shark” when they latched on to “climate change” so that they could latch onto every extreme weather event. We should call them out consistently on this, not follow them in their bastardization of the meaning of “climate”.

  24. We in Australia know exactly what they, who control our temperature data, are doing.
    As do you in America.
    But that is not the problem.
    It’s holding these fraudsters to account that is the problem.
    Yes, fraudsters is a very strong accusation, but these adjustments did not happen by accident.
    They were deliberately done to project a specific result.
    Every adjustment was up, to further enhance the fraud of global warming.
    Over at Jo Nova an independent check of only 100 stations in Australia has exposed 84 of those station’s with some significant abnomylys.
    The following being but one example.
    “The most extreme example that Ken found of data corruption was at Amberley, near Brisbane, Queensland, where a cooling minima trend was effectively reversed.”
    We already know what they are doing; now we have to bring it to the attention of the wider public.
    Where it directly affects who sits on the gravy train of politics in Australia.
    I don’t know about America but that’s how it’s done in Australia.
    The current government gives us a glimmer of hope but is battling to get anything done against a tide of opposition corruption.
    Who wholeheartedly support the fraud of global warming.

  25. Leigh;
    stations
    anomalies

    Do you have a good link for following this Australian saga? It’s very interesting, and important.

  26. Yes, thanks for the correction Brian, but spell check can be a bitch when you’re typing angry.
    The link is just one of many over at Jo Nova.
    If you do a search there under “BOM audit” many more will pop up.
    Like a broken record I continue to post these links so others that may have missed them get to read them.
    Unlike the global warmists, they are not fairytales.
    They are evidence of crimes committed.
    People need to understand the keepers of temperature records around the world are systematically altering history to further the global warming fraud.
    Those data changes are not happening by accident.
    Once or twice might be an accident but hundreds of times is not.
    Somebody in every government controlled temperature record keeping place around the world is consciously altering temperature records.
    They call them adjustments to raw data (without explanation).
    I call it fraud.
    We know they’re doing it, but how do we stop them?

    http://joannenova.com.au/2014/06/australian-bom-neutral-adjustments-increase-minima-trends-up-60/

  27. This post is timely for me. I was just thinking the exact thing – what is my local climate doing? I had downloaded USHCN data for a couple of stations and actually drove out in the desert (Arizona) looking for them to make sure they were really rural. Both were, one nestled in a farm and one on an Indian reservation. Tomorrow, out to Ajo. Great tips. Thanks much.

    I have also measured UHI out here in Arizona and it is wicked – 5 to 7 degrees F delta between city and field.

    Great tip, I just have to say Thanks!! again. You saved me SOOoooo much time and effort.

  28. CET-record adjustments?
    I am not a statistician, but it seems to me that the CET annual distribution has at least two suspect areas; of course I could be wrong. It appears that there is an unexpected boost in the number of years with cooler temperatures (around 8.6C), which is below the 9.2C average, and some above the average (around 10.5C). One possible effect of this (depending on the actual years concerned) would be to make the cooler past a bit colder and the more recent warm period even warmer. However, these are likely to be natural outliers in the normally expected annual distribution of some 650 data points.

  29. “It appears that there is a unexpected boost in numbers of years with cooler temperatures”
    In Australia those downward “adjustments” are pre 1950.
    Upward “Adjustments” are after 1950 to give an appearance of rapid temperature rises to coincide with increased industrialization after the war.
    It is a simple “trick” being applied worldwide to temperature records.
    Commonly known as catastrophic anthropogenic global warming.
    We know they’re doing it and we know it’s no accident.
    But how do average people in the street stop them?
    It is going to take a greater combined effort by those who can academically challenge them.
    Back in March Eric Worrall touched on that in this post.

    http://wattsupwiththat.com/2014/03/22/occams-razor-and-climate-change/

  30. Interesting to read the responses here. Thanks for reading and commenting.

    Some say the climate is much more than temperature, and I agree. A proper climate outlook would cover additional variables important to life, especially precipitation. A corollary principle: If you want to understand precipitation changes, study the changes, not the rainfall itself.

    Preoccupation with surface temperatures comes from the global warming theory that posits temperatures are/will rise with rising CO2. I thought people should have a tool to look at the trends as they are.

    It is said that a line is too simple a depiction of a complex phenomenon. True, but here’s the thing: The attempts to extract other signals from the noise are fraught with difficulty and statistical controversy. Just look at the long discussion at Bishop Hill (with some insightful comments from Nullis in Verba).

    http://www.bishop-hill.net/blog/2014/7/3/where-there-is-harmony-let-us-create-discord.html#comments

    It is common practice to start looking for the relation between two variables (here, time and temperature) by drawing a linear regression. That applies here, and it is rough, but useful. The deviations suggest that much more is going on and needs to be understood. But it’s a start to have a frame of reference.

    I object to the BEST summary description of Kansas temperatures as rising 1.98 +/-0.14°C. Why? Because the number is partly fact and partly projection, a mixup of observed and not yet observed. It’s another indication of the malaise in climate science, where expectation and observation are repeatedly confused.

    An example in the thread above is Nick Stokes’ comments. He basically is saying: These stations should not have readings so different from each other. There must be mistakes, and once we adjust for them, the records will be more in line. Well, yeah! BEST got the deviation down from 0.45 to 0.14. More about this later.

    In keeping with what I said about BEST, I have to drop from my workbook the one station with a history of less than 100 years. I didn’t like its shortness, but didn’t want to be accused of cherry picking (i.e. excluding available data from analysis). It happens to be only 64 years long with a century trend of 1.89°C, and removing it makes little difference to the results. Now the station ages range from 96 to 148 years old, so century observations are covered.

    The point: Trustworthy people, including scientists, make it clear when they are reporting facts that have occurred, and when they are predicting future facts. Knowledge is proven ex post facto. That is why I said in the post, “if the next 100 years is anything like the last 100, then . . .” The conditional clause tells you that what follows depends on assuming the long term trend continues. If temperatures are much above or below trend, then we know something different is happening.

  31. BTW, there is something valuable I left hidden in the workbook.

    Think of it as an Easter egg: “egg” = something hidden that links to more information.

    Has anyone found a peculiar thing about the workbook?

  32. Results are sorted by state and posted at

    http://sowellslawblog.blogspot.com/2010/02/usa-cities-hadcrut3-temperatures.html

    ===========
    What is interesting about these graphs is that they show how much variation there is in the data as compared to the average.

    Most of the stations show a small increase or decrease in average temperatures, but the change itself is minuscule in relation to the natural variability. Thus, it could be that temperatures are going up or down simply as a result of chance, due to the large variability in the underlying data.

  33. Have you ever turned on the TV only to see your favorite team get scored on? Have you ever thought that maybe it was you watching the game that somehow caused your team to get scored on? So you turned off the TV, and heard the next day that your team immediately rallied after you turned off the TV, and came back to win the game. Have you ever thought, as a result, that maybe somehow you can help your team win by not watching them?

    Is it possible that climate change / global warming is no different? That temperature is going up and down largely by chance, and we think we are the cause, so we try to figure out what we are doing that affects temperature. But in this case, instead of turning the TV on/off to help our team, we think that turning industrial production on/off will help temperatures.

  34. Wayne Delbeke says:
    July 12, 2014 at 11:25 am

    That’s interesting. My next project is Canadian long term stations. Can you provide a link to raw station data from EC? I have data from HADCRUT3, and comparing could be interesting, also maybe EC records are longer.

  35. An example in the thread above is Nick Stokes’ comments. He basically is saying: These stations should not have readings so different from each other. There must be mistakes, and once we adjust for them, the records will be more in line.
    ===========
    The problem is that there is no allowance for station quality. Maybe two stations are different because there is a quality difference. As a result you may end up adjusting the good data to match the bad data, especially when confounding factors such as urbanization and land clearing affect the majority of the stations and their data quality. In the end, the large number of poor-quality stations will overwhelm the good-quality stations, so that all you have left is poor data quality.

    The problem is that the temperature data was never intended to be used to measure climate change. It was intended to provide short-term weather forecasting. As such, it didn’t matter if there was drift in the station data year to year; short-term weather forecasting is largely immune to this sort of change. However, even the slightest drift invalidates the data for climate purposes, because the climate signal over 100 years is so small compared to the daily temperature signal.

    Daily temperatures vary by 10-20C or more. And from this we are trying to tease out a climate signal of what, about 1C or less over 100 years. This is the nonsense of using weather station data. This data could easily drift 1C over 100 years and it would make no difference to short term weather forecasting, because the daily cycle is so large in comparison. So there was no reason to make the weather stations accurate enough for climate analysis. They were accurate enough for their purpose, which was short term weather forecasting.

  36. Latitude, ferdberple

    I concur with many issues you are raising. It is why I wanted data as close to the station sources as possible. At least with HADCRUT3, the raw data is distinguishable from their gridded product. As Greg Goodman points out, there is some nomenclature confusion: in the link below, the land station data is referred to as a HADCRUT3 “subset” without naming it specifically, ocean data being another “subset.”

    The FAQs say that this raw data is cleaned observation, prior to any gridding, anomalies or averaging.

    “The station data that we are providing are from the database used to produce the global temperature series. Some of these data are the original underlying observations and some are observations adjusted to account for non climatic influences, for example changes in observations methods or site location.

    The database, therefore, consists of the ‘value added’ product that has been quality-controlled and adjusted to account for identified non-climatic influences. Adjustments were only applied to a subset of the stations, so in many cases the data provided are the underlying data minus any obviously erroneous values removed by quality control. The Met Office do not hold information as to adjustments that were applied and, so, cannot advise as to which stations are underlying data only and which contain adjustments.

    Underlying data are held by the National Meteorological Services (NMSs) and other data providers. Such data have in certain cases been released for research purposes under specific licences that govern their usage and distribution.

    It is important to distinguish between the data released by the NMSs and the truly raw data, e.g. the temperature readings noted by the observer. The data may have been adjusted to take account of non-climatic influences, for example changes in observing methods, and in some cases this adjustment may not have been recorded, so it may not be possible to recreate the original data as recorded by the observer.”

    http://www.metoffice.gov.uk/research/climate/climate-monitoring/land-and-atmosphere/surface-station-records/faq

    Interesting that the adjustments Nick wants to make have already been done by HADCRU, and possibly also by the NMSs. Too many cooks could spoil this broth.

    Someone (our host, perhaps) said recently: “Raw data ain’t what it used to be.”

    But I persist because I know that there will continue to be claims of rising temperatures.

  37. Nice post Ron. I’m a DIY data nerd and this post is right up my alley. I’ve been doing some basic pivot table analysis on various datasets since last week.

    If these next questions seem a bit on the under side of the over-under scale of eruditeness, bear in mind, I’m also a numskull.

    Are there any datasets with temperatures captured from the darkest hour of the day? Or the time of day when direct sunlight has the least effect on surface temperature? Is that what TMIN represents?

    Would trending the temp at the darkest hour of day give us a good idea of “trapped” warmth?

  38. Matt L.

    Good questions. Glad to hear you do pivot tables; with that skill you can import from BEST data tables. There, when you select all of the rows, you also get all of the columns, even though the raw data is only in the first 3. With a pivot table on those 3, you get the array with years in the left column and months in the top row. Paste into the template and you are good to go.
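    The pivot step just described can also be sketched with pandas, assuming raw rows of (year, month, temperature) like the first three BEST columns; the values below are invented for illustration:

```python
# Pivot raw (year, month, temperature) rows into a year-by-month array,
# ready to paste into the trend template.
import pandas as pd

raw = pd.DataFrame({
    "year":  [1900, 1900, 1901, 1901],
    "month": [1, 2, 1, 2],
    "temp":  [-3.1, -1.8, -2.9, -1.5],
})

# Years down the left, months across the top.
array = raw.pivot_table(index="year", columns="month", values="temp")
print(array)
```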

    Usually TMin occurs in the early am before sunrise, and TMax in the mid to late afternoon. Of course, especially in winter, a storm with sharp temperature changes or a heat wave could throw off the coolest or hottest time of the day. How fast or slow the air temperature cools overnight is a function of many things, but principally, humidity. Dry deserts can get freezing cold after sundown, while moist tropical places are mildly warm all night.

    In the HADCRUT3 land dataset, the base data is a monthly average of daily averages: (TMin+TMax)/2. Some datasets do include maxes and mins as well, and these are quite informative, as mentioned by Wayne Delbeke upthread.

    Richard Wakefield is my mentor on this so have a look here:

    http://cdnsurfacetemps.wordpress.com/

  39. Brian H I have evidence that the BOM wiped approximately 2 MJ/m²/day off the insolation record across Australia for 2013. In the case of Sydney, the record tampering goes back to 1 Jul 2011. The high levels of insolation were an inconvenient truth that does not fit the BOM’s political agenda or the CAGW mantra. I used to have a healthy respect for the BOM but these days I question everything that they produce.

  40. I’ve been asked why do this analysis. That’s really asking why are people wondering about the temperature trends. The underlying issue is: Are present temperatures and weather unusual and something we should worry about? In other words, “Is it weather or a changing climate?”

    The answer in the place where you live depends on knowing your climate, that is the long-term weather trends. This analysis enables you to find from relevant station histories both the long-term trend and the normal range. With this context, you can judge for yourself between weather and climate change.

    The workbook deals only with temperature monthly station averages (HADCRUT3 raw data), but can be expanded to TMaxes, TMins and precipitation, if you have access to those data.

    BTW, for those interested, here is:
    Why adjusting temperature records is a bad idea, by JR Wakefield

  41. Matt L
    Most weather stations record temperatures at hourly or quarter-hourly observations. Calculating the average using these numbers is more accurate than (max+min)/2, but it also takes more computing time.
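    A toy comparison makes the point, with invented hourly readings for a station that warms up quickly in the morning; because the daily curve is not symmetric, the two estimates disagree:

```python
# Mean of all hourly readings vs. the (max+min)/2 shortcut.
hourly = [5, 4, 3, 3, 2, 2, 3, 6, 10, 13, 15, 16,
          17, 18, 18, 17, 15, 13, 11, 9, 8, 7, 6, 5]

mean_of_hours = sum(hourly) / len(hourly)       # true daily average
min_max_mean = (max(hourly) + min(hourly)) / 2  # the usual shortcut

print(mean_of_hours, min_max_mean)  # the two estimates differ
```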

  42. “Ron C. says:
    July 13, 2014 at 8:32 am
    Wayne Delbeke says:
    July 12, 2014 at 11:25 am

    That’s interesting. My next project is Canadian long term stations. Can you provide a link to raw station data from EC? I have data from HADCRUT3, and comparing could be interesting, also maybe EC records are longer.”

    HADCRUT3 and 4 use a homogenized version of Canadian data.

    Env Canada data can be downloaded, but it’s a nightmare. I wrote an R package to do this; it’s called CHCN.

    A few years back we did a station-by-station comparison between Env Canada and BEST input data.

    There might be a few stations we miss.

    Env Canada takes its data and submits it to GHCN-D as well as other collections, so you will find duplication between Env Canada and GHCN-D.

    Your biggest problem is trusting Env Canada. And you have to ask “why didn’t they submit this station to official collections” and why did they submit this station instead.

    The biggest problem is that people assume there is a historical truth to get at here. There isn’t. There is an estimation of what the past was. The biggest assumption is that there is a thing called ‘raw’ data. What there is, in fact, is data that has been declared “adjusted” and “other” data. You have some evidence that adjusted data is adjusted: somebody called it adjusted. You have no evidence that “other” data, or data that is reported to be raw, is in fact raw.

    In short, skeptics for the most part are not skeptical enough when it comes to the distinction between raw and adjusted.

    If we define raw as ‘what the sensor showed’, then we never have raw data. We never have what the sensor showed; we have what the observer wrote down, or we have what the electronic system reported or stored. Neither of these is exactly what the sensor showed. One can assume zero transmission and transcription errors, but of course we know this assumption is wrong. In fact, what the sensor showed is lost. In practice, however, people refer to stuff the observer wrote down as a raw report, and people refer to the electronic transmissions of sensor data as raw. But a good skeptic knows that neither of these is the same thing as what the sensor sensed. Otherwise we would believe an observer when they write down 100C in the dead of winter, or we would believe the automated reporting system when it records 15000C.

    So there are two classes of data: data which somebody declares has been adjusted, and data that is either silent on the issue of adjustment or declares itself as raw. But we have no evidence that raw is in fact raw, that raw is in fact what the sensor showed in the past. We can call it raw, and many of us do call it raw, but in fact, if we are skeptical, we don’t know that it is raw. We only know that somebody called it raw, or that no record of adjustment is extant.

    How do you proceed?

    To do this right you first start by developing a method. Or using a known proven method.

    You then test that method by using synthetic data to see any systematic bias in the method.

    Then you take all the data you can find. You take a subset of that data. You hold the rest out.

    Then you use the data selected to create an estimation of the past.

    You then use the out of sample data to test this estimate.

    What you don’t do is start with a method that you think is “good”. You FIRST have to show the method is good APART FROM the data you are looking at. This post doesn’t do that. Grade: F.

    Then you don’t start by cherry-picking data, or picking data according to some criteria that you have never tested. You have the universe of data. You select a random subset. You create an estimation.

    Then you test other subsets. You will find cases where the estimate fails because of data issues, and cases where it fails because of estimation (statistical model) issues.
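    A minimal sketch of that test procedure, using synthetic data with a known trend built in (the trend value, noise level and station count below are arbitrary choices for illustration, not part of anyone's actual method): estimate from a random subset of stations, then check against the held-out stations.

```python
# Held-out validation of a trend-estimation method on synthetic stations.
import random
import numpy as np

rng = np.random.default_rng(0)
TRUE_TREND = 0.8  # C/century, built into every synthetic station

def make_station():
    years = np.arange(1900, 2000)
    noise = rng.normal(0.0, 0.5, years.size)  # year-to-year weather noise
    return years, 10.0 + TRUE_TREND / 100.0 * (years - 1900) + noise

def station_trend(years, temps):
    return np.polyfit(years, temps, 1)[0] * 100.0  # C/century

stations = [make_station() for _ in range(40)]
random.seed(0)
random.shuffle(stations)                  # pick the subset at random
fit_set, held_out = stations[:20], stations[20:]

estimate = np.mean([station_trend(y, t) for y, t in fit_set])
check = np.mean([station_trend(y, t) for y, t in held_out])
# If the method is sound, the in-sample estimate and the out-of-sample
# check should both land near the built-in 0.8 C/century.
print(round(estimate, 2), round(check, 2))
```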

  43. Thanks, Steven, for your comment.

    As you say, measurements in the past have uncertainty, including today’s readings, which despite our best efforts to be accurate, will soon join the history and its uncertainty. I do hope we can agree that there is an important difference between a fact estimated to have already occurred in the past and a fact estimated to occur in the future.

    You confirm what HADCRU also says, that NWSs like Environment Canada (EC) do their own adjusting and homogenizing in the interest of highest quality data.

    Now I believe that NWSs are sincere and competently trying to get the record right. But since the rise of the global warming cause, unfortunately, some of the people working with these data do have an agenda. A skeptic has an additional uncertainty: Are they altering the record to suit their cause?
    As you say, the “raw data” is already processed, and hopefully improved by removing errors. In the past, we could assume that the adjustments would be randomly distributed, but today bias could be creeping in. So we look for data as close to the instruments as possible, with as little processing as possible, verified by meteorologists whose mission is to report the weather as it happens.

    I have found access to the EC history of monthly averages for Canadian stations (including TMaxs and TMins BTW). The record has gone through 2 homogenizations, and I will take their word that they have only improved the accuracy by that process. As you say, getting the data into my workbook is labor-intensive, cut-and-paste stuff, and I can myself introduce errors if I take my eye off the ball.

    But I am following a focused method rather than a global one, and so the numbers of stations make the project doable.

    I am pleased with the proof of concept, having verified that the spreadsheet calculations are working as intended. I also take comfort that my result was so close to that obtained by BEST with a far more sophisticated method and tools.

  44. Maybe not a supercomputer, but how about an old Olivetti Programma 101? NASA used one in calculating orbits for the Moon missions.

  45. In response to some comments above:

    The rationale for this method of analysis is simple and compelling.

    Temperature is an intrinsic quality of the object being measured. We can take the temperature of a rock and a pail of water, and the average of the two tells us . . . Nothing. However, if we have a time series, the two temperature trends do inform us about changes in the two environments where the rock and water are located.

    Weather stations measure the temperature of air in thermal contact with the underlying terrain. Each site has a different terrain, and for a host of landscape features documented by Pielke Sr., the temperature patterns will differ, even in nearby locations. However, if we have station histories (and we do), then trends from different stations can be compared to see similarities and differences.

    In summary, temperatures from different stations should not be interchanged or averaged, since they come from different physical realities. The trends can be compiled to tell us about the direction, extent and scope of temperature changes.
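    The difference can be seen in a small synthetic example (two invented stations, not real data): averaging temperatures across stations with different base climates and record lengths can manufacture a spurious trend, while averaging the per-station trends recovers the true one.

```python
# Averaging trends vs. averaging temperatures across stations.
import numpy as np

def trend(years, temps):
    """Least-squares slope, normalized to C per century."""
    return np.polyfit(np.asarray(years, float),
                      np.asarray(temps, float), 1)[0] * 100.0

yrs_a = np.arange(1900, 2000)
sta_a = 10.0 + 0.01 * (yrs_a - 1900)  # warm site, full record, +1 C/century
yrs_b = np.arange(1950, 2000)
sta_b = 0.0 + 0.01 * (yrs_b - 1900)   # cold site, short record, +1 C/century

# Averaging the temperatures: the combined series lurches downward in 1950
# when the cold station enters the average, manufacturing a cooling trend.
records = {int(y): [t] for y, t in zip(yrs_a, sta_a)}
for y, t in zip(yrs_b, sta_b):
    records[int(y)].append(t)
yrs_c = sorted(records)
temps_c = [np.mean(records[y]) for y in yrs_c]
naive = trend(yrs_c, temps_c)

# Averaging the trends: each station is fit on its own record first.
per_station = (trend(yrs_a, sta_a) + trend(yrs_b, sta_b)) / 2

print(round(naive, 2), round(per_station, 2))  # naive is strongly negative; per-station is 1.0
```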

    • In reply to Ron C in particular :-
      Here are the results from the 32 stations with the best CRUTem4 coverage before and after 1850 :-

      1700-2013 Maximum +0.81 (St. Petersburg) Minimum +0.04 (Paris le Bourget and Vilnius) Average +0.41 C/century
      1700-1850 Maximum +0.97 (Vilnius) Minimum -1.09 (Kobenhavn) Average -0.39 C/century
      1850-2013 Maximum +1.45 (St. Petersburg) Minimum -0.01 (Vilnius) Average +0.94 C/century
      Difference Maximum +2.44 (Kobenhavn) Minimum -0.98 (Vilnius) Average +1.33 C/century

  46. Richard,

    Interesting results. Let me see if I get the meaning.

    We have a trend over 313 years of 0.41 C/century. The first 150 years cooled at -0.39 C/century. The last 163 years warmed at +0.94 C/century.

    Don’t know the time frame for your fourth line; I do note that Kobenhavn was the minimum in 1700-1850, and the maximum in the last interval, whatever it is. Can you clarify?

    Also, am I right to think that these are European sites?

    • Reply to Ron C :-

      Yes, those (32) sites with the most coverage both before and after 1850 were all European sites. Philadelphia started in 1758 but only had 106 years with 12 months of data.

      The last line is just the change from the pre-1850 trend to the post-1850 trend. So Vilnius was +0.97 C/century before 1850 and -0.01 C after 1850, for a difference of -0.98 C/century. Kobenhavn was -1.09 C/century before 1850 and +1.35 C/century after 1850, for a difference of +2.44 C/century.

  47. Richard,
    We would expect cooling before 1850, and warming from 1850 to present. Usually the graphs show 0.5 C/century from end of LIA (1850). Vilnius is really counter to that pattern, and Kobenhavn appears quite volatile being the outlier in both periods.

    Is there a way to have a copy of your workbook? I would like to look deeper at this.

    • Reply to Ron C :-

      How would I get it to you ? I produced workbooks for the individual stations, calculated the annual values, then copied and pasted the annual values into the ‘master’ workbook.

  48. Richard

    That sounds like a lot of work. Can you briefly describe your method? I.e., raw data and source, sequence of calculations, end results?

    Also, I have another idea. If you provide a list of station names (with WMO ID numbers, if handy), I can get GHCN data for the stations and analyze the trends since 1850.

    • Reply to Ron C :-

      1. I went to http://www.cru.uea.ac.uk/cru/data/temperature/crutem4/station-data.htm then scrolled down to the bottom and downloaded
      crutem4_asof020611_stns_used.zip Station data
      crutem4_asof020611_stns_used_hdr.txt Header lines from above

      2. The header lines formed the master workbook.
      3. Sorted the header lines in order of ‘First Year’
      4. For each station with ‘First Year’ 1850 or before, searched for it in the station data.
      5. Copied and pasted the station data into a new text file.
      6. Imported the text file into a new workbook.
      7. Calculated the temperatures by dividing by 10.
      8. Calculated the average temperature for each year.
      9. Cleared each year’s average for which there was a month with ‘-99.9′ indicating missing value.
      10. Plotted graph of yearly average, 10 year average and linear trend line for each station.
      11. Copied and pasted each station’s yearly values (169 rows) into master workbook.
      12. Calculated (for each station) :-
      (a) Range = Last Year – First Year
      (b) % Coverage = % Years with values
      (c) Number of years with values to 1850
      13. Selected those 32 stations with the top values in (a) (b) (c) – this eliminated Frankfurt, Philadelphia and Moscow for sparse coverage.
      14. Calculated (for each of 32 stations) :-
      (a) slope over whole period
      (b) slope before 1850
      (c) slope after 1850
      (d) difference between (b) and (c)
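      Steps 7–9 and 14 above can be sketched in Python. This assumes CRUTEM4-style rows of twelve monthly values in tenths of a degree, with the missing-value flag appearing as -99.9 after the divide-by-10 of step 7 (an assumption drawn from step 9):

```python
# Yearly averages and pre/post-1850 slopes from monthly station rows.
import numpy as np

def yearly_average(row_tenths):
    """Steps 7-9: convert tenths of a degree to degrees, and drop any
    year that has a month flagged -99.9 (missing)."""
    temps = np.asarray(row_tenths, dtype=float) / 10.0  # step 7
    if np.any(temps == -99.9):                          # step 9
        return float("nan")
    return temps.mean()                                 # step 8

def slopes(years, annual):
    """Step 14: least-squares slope in C/century over the whole period,
    before 1850, and after 1850."""
    years = np.asarray(years, float)
    annual = np.asarray(annual, float)
    ok = ~np.isnan(annual)
    def s(mask):
        return np.polyfit(years[mask], annual[mask], 1)[0] * 100.0
    return s(ok), s(ok & (years <= 1850)), s(ok & (years >= 1850))
```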

      WMO Station No.
      103840
      62600
      24581
      260630
      160590
      67000
      66450
      24851
      24640
      106380
      724080
      160600
      12710
      160811
      71500
      110120
      61860
      115180
      107760
      160950
      110350
      113200
      267300
      66310
      107270
      123750
      276120
      128390
      128430
      109620
      108650

      Station Name Country
      BERLIN GERMANY
      DE BILT NETHERLANDS
      UPPSALA SWEDEN
      ST.-PETERSBURG RUSSIA
      Torino ITALY
      Geneve-Cointrin SWITZERLAND
      Basel-Binningen SWITZERLAND
      STOCKHOLM SWEDEN
      STOCKHOLM/BROMMA SWEDEN
      FRANKFURT A MAIN GERMANY
      PHILADELPHIA, PA USA
      Torino ITALY
      Trondheim/Vaernes NORWAY
      Milano-Brera ITALY
      PARIS/LE BOURGET FRANCE
      Kremsmunster AUSTRIA
      Koebenhavn DENMARK
      PRAHA/RUZYNE AIRPORT CZECH REPUBLI
      Regensburg GERMANY
      Padova ITALY
      Wien-Hohe Warte AUSTRIA
      Innsbruck-Universita AUSTRIA
      VILNIUS LITHUANIA
      Bern-Zollikofen SWITZERLAND
      Karlsruhe GERMANY
      WARSAW-OKECIE POLAND
      MOSKVA RUSSIA
      Budapest – Lorinc Ai HUNGARY
      Budapest – Lorinc Ai HUNGARY
      Hohenpeisenberg GERMANY
      Munchen-Stadt GERMANY

  49. Thanks for the info and link.

    Impressive–full marks for dedication and persistence.
    Assuming you are paid normal hourly rates, it seems you have serious money from Big Oil behind you. ;>)

    The only difference in method I can see is my workbook also does trends for each month to get at seasonality. Were your trend lines done on the monthly TAvg values? Did you also do Std. Dev.?

    This source appears preferable to the Met Office, since at least in the Kansas subset, some records inexplicably stopped in 1990. You have of course selected long-service sites still active, so this is not an issue. Others have written, Jeff Id for example, on the dying out of stations, a lot of them in the US and also Canada.

    I am also considering the NOAA GHCN dataset here:
    ftp://ftp.ncdc.noaa.gov/pub/data/ghcn/v3/

    It has the advantage of being up to date (June 2014), but it does have some unwanted columns to work around. The data is called qcu (quality controlled unadjusted) so that looks good. They also provide TMaxs and TMins in separate additional files, which could be interesting.

    Looks like you’ve put the ball in my court.

    There may be a delay in getting this done as my Big Oil check seems lost in the mail.

    • Reply to Ron C :-

      No pay, I just love Excel :-) The trend lines were on the calculated annual averages. No, I didn’t calculate standard deviation. As I say, if you want any of my files, and can tell me a way to get it to you, please let me know.

  50. Richard, thanks for sending the files. I have learned a lot today.
    I sent back your Vilnius spreadsheet with a Vilnius TTA spreadsheet added.

    I see now that you have a temperature averaging method, including averaging all stations together for each year. As I have said in the WUWT post, I do not support averaging (or exchanging) temperatures among different stations. For reasons I have explained above, only the trends should be averaged.

    In the amended Vilnius file, you can see the difference between the two methods.

    TTA results for Vilnius:

    C/Century 0.0266
    C/Lifetime 0.0625
    C/1849 0.0006
    Yrs. Lifetime 235
    Ave Annual C 6.35
    Std. Dev C 2.23

    As you can see, the results are quite different. According to the TTA, Vilnius temperatures have been trending sideways since 1850, having gained 1C in the 73 years before 1850. The overall trend is 0.03 C/century, the average annual temperature is 6.35C, with Std. Dev. of +/- 2.2C.

    Please do check these calculations to see if I have made any errors.

  51. The table didn’t paste completely above:
    C/Century 0.0266
    C/Lifetime 0.0625
    C/1849 0.0006
    Yrs. Lifetime 235
    Ave Annual C 6.35
    Std. Dev C 2.23

Comments are closed.