CRU produces something useful for a change

World temperature records available via Google Earth

Climate researchers at the University of East Anglia have made the world’s temperature records available via Google Earth.

The Climatic Research Unit Temperature Version 4 (CRUTEM4) land-surface air temperature dataset is one of the most widely used records of the climate system.

The new Google Earth format allows users to scroll around the world, zoom in on 6,000 weather stations, and view monthly, seasonal and annual temperature data more easily than ever before.

Users can drill down to see some 20,000 graphs – some of which show temperature records dating back to 1850.

The move is part of an ongoing effort to make data about past climate and climate change as accessible and transparent as possible.

Dr Tim Osborn from UEA’s Climatic Research Unit said: “The beauty of using Google Earth is that you can instantly see where the weather stations are, zoom in on specific countries, and see station datasets much more clearly.

“The data itself comes from the latest CRUTEM4 figures, which have been freely available on our website and via the Met Office. But we wanted to make this key temperature dataset as interactive and user-friendly as possible.”

The Google Earth interface shows how the globe has been split into 5° latitude and longitude grid boxes. The boxes are about 550km wide along the Equator, narrowing towards the North and South poles. This red and green checkerboard covers most of the Earth and indicates areas of land where station data are available. Clicking on a grid box reveals the area’s annual temperatures, as well as links to more detailed downloadable station data.
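How quickly the boxes narrow with latitude is simple spherical geometry; a rough sketch (the function name and the 6,371 km mean Earth radius are my assumptions, not from the paper):

```python
import math

def gridbox_width_km(lat_deg, box_deg=5.0, earth_radius_km=6371.0):
    """East-west width of a box_deg-wide grid box at a given latitude."""
    return earth_radius_km * math.radians(box_deg) * math.cos(math.radians(lat_deg))

print(round(gridbox_width_km(0)))   # ~556 km at the Equator
print(round(gridbox_width_km(60)))  # ~278 km at 60 degrees, half as wide
```

At 60°N the same 5° box is only half the width it is at the Equator, which is why the checkerboard squares visibly shrink toward the poles.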

But while the new initiative does allow greater accessibility, the research team do expect to find errors.

Dr Osborn said: “This dataset combines monthly records from 6,000 weather stations around the world – some of which date back more than 150 years. That’s a lot of data, so we would expect to see a few errors. We very much encourage people to alert us to any records that seem unusual.

“There are some gaps in the grid – this is because there are no weather stations in remote areas such as the Sahara. Users may also spot that the location of some weather stations is not exact. This is because the information we have about the latitude and longitude of each station is limited to 1 decimal place, so the station markers could be a few kilometres from the actual location.

“This isn’t a problem scientifically because the temperature records do not depend on the precise location of each station. But it is something which will improve over time as more detailed location information becomes available.”
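Dr Osborn's "few kilometres" figure is easy to check: rounding to 1 decimal place means each coordinate can be off by up to 0.05°. A sketch of the worst-case marker offset (function name mine; a spherical Earth of 6,371 km radius is assumed):

```python
import math

EARTH_RADIUS_KM = 6371.0

def max_offset_km(lat_deg, precision_deg=0.1):
    """Worst-case marker offset when lat/lon are rounded to precision_deg."""
    half = precision_deg / 2.0
    dlat_km = EARTH_RADIUS_KM * math.radians(half)            # north-south error
    dlon_km = dlat_km * math.cos(math.radians(lat_deg))       # east-west error shrinks with latitude
    return math.hypot(dlat_km, dlon_km)                       # diagonal worst case

print(round(max_offset_km(52.5), 1))  # ~6.5 km for a UK-latitude station
```

So a marker can indeed sit several kilometres from the true site, consistent with the "few kilometres" stated above.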

This new initiative is described in a new research paper published on February 4 in the journal Earth System Science Data (Osborn T.J. and Jones P.D., 2014: The CRUTEM4 land-surface air temperature dataset: construction, previous versions and dissemination via Google Earth).

For instructions on accessing and using the CRUTEM Google Earth interface (and to find out more about the project) visit http://www.cru.uea.ac.uk/cru/data/crutem/ge/. To view the new interface, download Google Earth and then open the KML file CRUTEM4-2013-03_gridboxes.kml.
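The interface itself is an ordinary KML file, so its placemarks can also be inspected programmatically. A minimal sketch using Python's standard library (assuming the standard KML 2.2 namespace; the file name is the one given above):

```python
import xml.etree.ElementTree as ET

KML_NS = {"kml": "http://www.opengis.net/kml/2.2"}

def list_placemarks(path):
    """Yield (name, coordinates) for each Placemark in a KML file."""
    root = ET.parse(path).getroot()
    for pm in root.iter("{http://www.opengis.net/kml/2.2}Placemark"):
        name = pm.findtext("kml:name", default="?", namespaces=KML_NS)
        coords = pm.findtext(".//kml:coordinates", default="", namespaces=KML_NS).strip()
        yield name, coords

# for name, coords in list_placemarks("CRUTEM4-2013-03_gridboxes.kml"):
#     print(name, coords)
```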

Karen
February 6, 2014 10:19 pm

tiz interesting that the temp’s also work on the moon and Mars. lol

February 6, 2014 10:47 pm

“ferdberple says:
February 6, 2014 at 6:28 pm
Steven Mosher says:
February 6, 2014 at 9:31 am
CRU do not use EnvCanada.
============
We know that EnvCanada data actually come from the Canadian weather stations. We are not at all sure where the CRU data comes from, that much is clear in the climategate emails. Since it appears that CRU and EnvCanada do not agree on Canadian temps, either one or both of them must be wrong.
WRONG. CRU use data from Env Canada that has been homogenized. It's in their documentation. All you have to do is read it. At one point I spent about 3 months comparing Env Canada data (I wrote an R package for downloading it all), the Berkeley data and CRU data. CRU is a subset of Env Canada; however, they rely on homogenized versions.
Env Canada data can be in really poor shape depending on the station.
##############################

Steven Mosher
February 6, 2014 10:53 pm

“In terms of your comment regarding BEST matching CRUTEM4 – you do know that CRUTEM4 is a gridded (ie. homogenised and averaged) dataset, don’t you? In other words, a highly averaged dataset (BEST) is quite likely to agree with another highly averaged dataset! Moreover, since BEST incorporated such data into its compilation, it makes logical sense for it to follow the same trends, depending, of course, on the ‘weighting’ applied to the various components.
###################
wrong on several counts.
There are substantial differences between CRU and BEST.
1. They grid at 5 degrees.
2. We grid at 1 degree and 1/4 degree,
3. CRU use homogenized data. We use unadjusted data.
4. we dont average.
here is what 1/4 degree grids look like
http://static.berkeleyearth.org/posters/agu-2013-poster-1.pdf

February 6, 2014 11:03 pm

“I attempted to locate 6 sites with lat/long information within 30 minutes of my house. Using Google Earth I drove to those locations and was only able to locate 2 out of the 6. One was at a county airport and the other was at a radio station. Little wonder with a tolerance of up to ± 0.05 degrees (almost 3 miles).”
The data we use is ingested from public archives. No secret data; no data that, like Jones's, we cannot share. That data comes as is, including location errors.
If you found them, did you record the exact lat/lon with GPS? That's really important information, and you can do your part by sharing that data back so that the public records get corrected.
If you find an error, please write to me. We are constantly updating the data and fixing known problems, or upstreaming the fixes so they are fixed at the source.
Especially station identity issues. In the raw sources there are 300,000 pairs of stations within 1 km. That's before we "de-duplicate". Just last week one guy wrote me with a pair of duplicates that we missed. 300K is a lot to sort through.
So, whatever you find, send me documentation (I like to keep records) and I'll tackle it with my new data helper.
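For readers curious what "pairs of stations within 1 km" involves, here is a toy sketch of the distance test. This is illustrative only, not the code BEST actually uses; a real de-duplication pass over 300,000 candidate pairs would need spatial indexing rather than this O(n²) scan:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2, r=6371.0):
    """Great-circle distance between two (lat, lon) points, in km."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = p2 - p1
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def near_duplicates(stations, max_km=1.0):
    """Naive scan for station pairs closer than max_km.
    stations: list of (station_id, lat, lon) tuples."""
    pairs = []
    for i in range(len(stations)):
        for j in range(i + 1, len(stations)):
            a, b = stations[i], stations[j]
            if haversine_km(a[1], a[2], b[1], b[2]) < max_km:
                pairs.append((a[0], b[0]))
    return pairs
```

Flagged pairs still need a human (or metadata) check: two stations 500 m apart may be genuine neighbours rather than duplicate records.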

February 6, 2014 11:05 pm

“ferdberple says:
February 6, 2014 at 7:05 am
Something doesn’t add up. A quick look at the GE data shows warming on the south west coast of BC Canada. The weather records from Environment Canada show no such warming.”
###############
Be careful with the BC records of Env Canada.

February 6, 2014 11:06 pm

“Bill from Nevada says:
February 6, 2014 at 5:55 am
Here is what the writer of the now legendary file in the climategate emails called “Harry_Read_me.txt”
###############################
Harry read me had to do with an entirely different team and entirely different dataset.

Nick Stokes
February 6, 2014 11:34 pm

Patrick says: February 6, 2014 at 8:43 pm
“Whichever way you want to say it the CRU lost the raw data.”

CRU does not provide raw data. GHCN unadjusted does. You just have to go to the right place.

Patrick
February 7, 2014 2:05 am

“Nick Stokes says:
February 6, 2014 at 11:34 pm”
I never said they do. For a “scientific” body, to LOSE that data, and then “provide adjusted” data without the ability to refer BACK to the raw data, is the problem. But, still, you go on defending BS (bad science).

richardcfromnz
February 7, 2014 2:16 am

Steven Mosher says:
February 6, 2014 at 10:53 pm
>”There are substantial differences between CRU and BEST.
[……]
3. CRU use homogenized data. We use unadjusted data.”
But your method does produce adjusted data on the way to kriging so you do, in effect, use adjusted data. Proof:
BEST adjusted data for AUCKLAND, ALBERT PARK NZ: http://berkeleyearth.lbl.gov/stations/157062
BEST adjusted data for CHRISTCHURCH AP/HAREWOOD NZ: http://berkeleyearth.lbl.gov/stations/157045
For every raw station dataset you produce a corresponding “Breakpoint Adjusted” dataset (examples above) as method output along with the multi-station composite output.
Albert Park above has 3 site move adjustments and 9 “empirical break” adjustments.
Christchurch AP/Harewood above has 2 site move adjustments and 5 “empirical break” adjustments.
That’s a lot of adjustments for a method that you say uses “unadjusted data”. Although I don’t find breakpoint analysis controversial, it’s the composite kriging that doesn’t pass observational V&V.
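For readers unfamiliar with "empirical break" adjustments, here is a toy illustration of the underlying idea: scan a series for the split point with the largest shift in mean. This is emphatically not BEST's scalpel method, just a minimal sketch of single-breakpoint detection:

```python
def best_breakpoint(series):
    """Find the split point that maximizes the mean shift between the two
    halves of a series. Toy 'empirical break' detection; real methods
    (e.g. BEST's scalpel) are far more sophisticated."""
    best_k, best_shift = None, 0.0
    for k in range(2, len(series) - 2):
        left = sum(series[:k]) / k
        right = sum(series[k:]) / (len(series) - k)
        if abs(right - left) > abs(best_shift):
            best_k, best_shift = k, right - left
    return best_k, best_shift

# A series with a clean step of +1.0 at index 10:
print(best_breakpoint([0.0] * 10 + [1.0] * 10))  # (10, 1.0)
```

Real station series then get cut (or adjusted) at detected breaks, which is why one station can accumulate many adjustments.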

RichardLH
February 7, 2014 2:21 am

Steven Mosher says:
February 6, 2014 at 10:53 pm
“There are substantial differences between CRU and BEST.
1. They grid at 5 degrees.
2. We grid at 1 degree and 1/4 degree,
3. CRU use homogenized data. We use unadjusted data.
4. we dont average.”
Can you please provide one simple statistic? What are the 1*1 (or 1/4) degree cells that have data in them at today and 50, 100 and 150 years in the past. As a percentage of the available cells.
Station numbers are of little use as there are multiple duplicates, and multiple stations per cell as the BEST database show.
P.S. You do still have some internal data inconsistencies within your published data which I find hard to reconcile with these being separate ‘views’ of the same internal data.

Kev-in-Uk
February 7, 2014 10:12 am

richardcfromnz says:
February 7, 2014 at 2:16 am
absolutely! I find it hard to imagine that Mosh claims unadjusted data when they use CRU homogenised and gridded data? Or have CRU now found the ‘raw’ data? LOL.
Obviously, there could be debate about what is ‘raw’ data – but in essence, to my mind, it would be the initially recorded values, QC’d as required. I don’t think the CRU data within the BEST dataset is like this at all !

Matt G
February 7, 2014 11:57 am

David Sanger (@davidsanger) says:
February 6, 2014 at 3:47 pm
“Matt G. I’m not sure what you are looking at, but on the Google Earth display Dec 2010 shows an anomaly of -0.31ºC for the grid-box covering most of England (52.5N 2.5W) . coldest since 1988”
The one I was referring to was created and displayed in January 2011; this data is different to what it was then, so it must have changed since. Still, the temperatures for England were over 3.7°C below normal and were the coldest since the 1890s (CET). Just stating that it was the coldest since 1988 already shows it was wrong.
For example, look at the difference in temperatures between December 1988 and December 2010. December 1988 was one of the mildest recorded since 1934 and December 2010 was the coldest (2nd coldest for CET since 1890).
CET
December 1988: 7.5°C
December 2010: -0.7°C
Therefore December for England in 1988 was on average 8.2°C warmer than 2010, yet the grid data apparently shows 1988 to be somehow even colder. December 2010 broke records across all the UK for severe cold and plenty of snow in some areas.
The weighting is worthless for grid data.

richardcfromnz
February 7, 2014 1:01 pm

Kev-in-Uk says:
February 7, 2014 at 10:12 am
>”…. they [BEST] use CRU homogenised and gridded data”
No, they don’t. Say for the Albert Park example above, the raw data they start with is supplied to Global Historical Climatology Network (GHCN), WMO, and whoever by New Zealand’s national weather and climate institutions. That same Albert Park raw data can be accessed by anyone from NIWA’s CliFlo database in New Zealand.
So in the case of Auckland, BEST uses ALL of the raw Albert Park data and doesn’t adjust for UHI/sheltering. NIWA doesn’t use all but doesn’t adjust for UHI either. Albert Park is UHI/sheltering contaminated and has to be corrected for that. But even without UHI correction, BEST’s break adjusted Albert Park corroborates the NZCSET audit series of NIWA’s 7SS Auckland location but eliminates NIWA’s Auckland series (trend far too steep – shonky site move adjustments, didn’t correct for UHI). Correcting BEST Albert Park for UHI/sheltering would give a series trend less than the NZCSET audit series trend (which was much less than NIWA’s) and not much above flat.
Once a site was established at Auckland Airport (Mangere), NIWA ceased using Albert Park and used Mangere raw data instead. BEST continued using Albert Park however. GISS uses Mangere but not Albert Park.
BEST’s adjustment method is an in-house development they’ve termed the “scalpel” method. You can read about it in their Method paper at the BEST website. In short, BEST, GISS, and CRU, start by selecting their respective raw data from the same sources (GHCN) and adjust it themselves by their own methods. But for New Zealand, NIWA uses sites from their CLiFlo database some of which are used by BEST, GISS, CRU and some aren’t.
In New Zealand therefore (and elsewhere), we have independent means of checking location series that BEST, GISS, CRU produce starting with the same raw data that anyone has access to and can select from.

Kev-in-Uk
February 7, 2014 3:40 pm

richardcfromnz says:
February 7, 2014 at 1:01 pm
I’m not sure to be honest. The data page lists that they use CRU data (as part of the many datasets they use), but unless CRU supply raw data, I’m presuming it is ‘adjusted’? If it is not ‘adjusted’, I’d be pleased to know, as this would mean the dataset you can download from the Berkeley Earth site for CRU can be compared to other CRU products?

sonofametman
February 7, 2014 4:01 pm

My father worked for the UK Met Office, in various roles, for over 40 years. He never lost his interest in getting the data right, and eventually gave up the marine division and moved to forecasting for the RAF as he was not being taken seriously. He was concerned about the accuracy of sea surface temperature measurements, as the methods used were crude. He thought that the measurements were liable to errors from evaporative cooling as well as radiative heating from the weathership itself, and wanted to do experiments to eliminate any problems. He was ignored, and so spent the next 20 odd years in the air division instead. When I see a fuss being made about 1 deg C heating, I begin to wonder….
What I’m getting at is that the raw temperature data, especially older data, may not be as reliable as it might be convenient to imagine.
Lest we might take data for granted, in this age of remote sensing and the internet, just imagine being on station on a weathership in the Denmark Strait, in winter.

richardcfromnz
February 7, 2014 5:39 pm

Kev-in-Uk says:
February 7, 2014 at 3:40 pm
>”The data page lists that they use CRU data”
That’s something I wasn’t aware of. Can you link to that data page you’re referring to please?
I’ve looked at your comments up-thread but can’t see a link to anything like that.
Makes a huge difference because that would introduce 2 layers of adjustment to an adjusted CRU station if BEST actually do use ‘adjusted’ rather than raw to start with in some cases – CRU’s and BEST’s. I’ll be surprised if they’re doing that so I’d like to see the facts.
BEST describe their adjustment method by “scalpel” analogy because of the very short overlap required for it. Given their 9 “empirical break” adjustments to Albert Park above, I’m more inclined to think of a butchers meat slicer analogy than a scalpel, but that’s just me. Australia’s BOM make a similar number of “empirical break” adjustments for their ACORN-SAT series so they’re not alone.
I’ll make the point again though, it’s not BEST’s breakpoint adjustments to single stations that do the most damage – it’s their subsequent composite kriging that churns out the rubbish.

Michael Whittemore
February 7, 2014 10:04 pm

Are these data sets taking into account urban heat?

Patrick
February 8, 2014 1:31 am

“richardcfromnz says:
February 7, 2014 at 1:01 pm”
Puhlease! Ignore anything NIWA says about climate. I have been there, seen how they work…it’s not pretty!

Mervyn
February 8, 2014 5:31 am

Following climategate, I have absolutely no confidence in the CRU and its temperature data. I’m with the Russian scientists on this… the instrumental surface temperature data has been subjected to so much fudging, it’s now totally unreliable.

Kev-in-Uk
February 8, 2014 5:32 am

richardcfromnz says:
February 7, 2014 at 5:39 pm
Hi Richard – the links are straightforward…
http://berkeleyearth.org/data
then click on source files
http://berkeleyearth.org/source-files
CRU data is listed half way down that page. As I say, it’s not clear what the provenance of the data actually is but on the presumption CRU allegedly doesn’t have raw data, I assume this ‘isn’t’ either?

February 8, 2014 11:08 am

Nick Stokes said February 6, 2014 at 3:46 pm

ThinkingScientist says: February 6, 2014 at 2:52 pm
“No its not a lot of data, its a trivial amount of data.”
It’s a lot of numbers to type. And they had to do some of that. It’s a lot of numbers to track down, gather and check.
“And what’s the excuse for CRU “losing” some? Even a tiny memory stick could hold all the worlds temperature data recorded on a monthly basis.”
They did not have memory sticks in 1984 when Phil Jones was putting his dataset together. They had 360 Kb floppies and 10 Mb hard drives.

We also had 1.2 MB 8″ floppies. However, then as now, bulk archival storage was on tape. In 1984 IBM released the 3480 cartridge tape system as a replacement for traditional magnetic tape reels. It was a 4″ x 5″ cartridge that held more than the reels, with a capacity of 200 MB. They were slow to catch on, though, so I suspect that the tapes Jones's predecessor recycled would have been reels.
I remember doing a backup using DOS 3.x back then to ~50 3.5″ floppies. It was a large dataset for the day. When it came time to restore, I discovered that the maximum number of floppies in a DOS 3.x backup was 9 (IIRC) due to a bug. Not that CRU would have been using PCs, or DOS.
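The storage arithmetic in this exchange is easy to make concrete. A back-of-envelope sketch (the station count and start date are taken from the article above; the 4-bytes-per-value figure is my own assumption):

```python
# Back-of-envelope: 6,000 stations with monthly values from 1850 to 2013.
stations = 6_000
months = (2013 - 1850 + 1) * 12      # 1,968 monthly values per station
values = stations * months           # ~11.8 million readings
size_mb = values * 4 / 1e6           # ~47 MB as 4-byte floats, uncompressed

# Trivial for a modern memory stick, but far beyond a 10 MB 1984 hard
# drive or a stack of 360 KB floppies.
print(values, round(size_mb))
```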

Kev-in-Uk
February 8, 2014 12:36 pm

richardcfromnz says:
February 7, 2014 at 5:39 pm
and just to illustrate the point, – this quote from the http://berkeleyearth.org/about-data-set page quotes ‘The Berkeley Earth Surface Temperature Study has created a preliminary merged data set by combining 1.6 billion temperature reports from 16 preexisting data archives. Whenever possible, we have used raw data rather than previously homogenized or edited data.”
Obviously, the term 'whenever possible' is probably there to cover the known use of non-raw data?

richardcfromnz
February 8, 2014 12:40 pm

Kev-in-Uk says:
February 8, 2014 at 5:32 am
>”CRU data is listed half way down that page.”
Thanks for this. I went to CRU website “Station data used for generating CRUTEM4”, found:
‘CRUTEM4 Temperature station data’
http://www.cru.uea.ac.uk/cru/data/temperature/crutem4/station-data.htm
>”As I say, it’s not clear what the provenance of the data actually is but on the presumption CRU allegedly doesn’t have raw data, I assume this ‘isn’t’ either?”
I think you’re right. Rather than bother finding the provenance of CRU TAVG, at the CRU link above I see “GHCNv2 (adjusted series)” i.e. not raw. BEST, from your link, uses “GHCN Monthly version 3”. NCDC states re GHCN-Mv3:
“Methods for removing inhomogeneities from the data record associated with non-climatic influences such as changes in instrumentation, station environment, and observing practices that occur over time were also included in the version 2 release (Peterson and Easterling, 1994; Easterling and Peterson 1995). Since that time efforts have focused on continued improvements in dataset development methods including new quality control processes and advanced techniques for removing data inhomogeneities (Menne and Williams, 2009). Effective May 2, 2011, the Global Historical Climatology Network-Monthly (GHCN-M) version 3 dataset of monthly mean temperature has replaced GHCN-M version 2 as the dataset for operational climate monitoring activities.”
So yes, if BEST uses adjusted GHCN-Mv3 (and then adjusts it again by their own method ?) it’s probably safe to assume the situation is the same for BEST and CRU TAVG.
This is an eye opener for me Kev. Not what I thought was going on at all at CRU and BEST.

richardcfromnz
February 8, 2014 12:48 pm

Kev-in-Uk says:
February 8, 2014 at 12:36 pm
>”and just to illustrate the point”
>”Whenever possible, we [BEST] have used raw data rather than previously homogenized or edited data.”
Yep, I certainly get it now. Thanks.

Kev-in-Uk
February 8, 2014 12:48 pm

richardcfromnz says:
February 8, 2014 at 12:40 pm
Precisely! Kind of makes a mockery of Mosh’s claim that BEST doesn’t use adjusted data?
regards
Kev