World temperature records available via Google Earth

Climate researchers at the University of East Anglia have made the world’s temperature records available via Google Earth.
The Climatic Research Unit Temperature Version 4 (CRUTEM4) land-surface air temperature dataset is one of the most widely used records of the climate system.
The new Google Earth format allows users to scroll around the world, zoom in on 6,000 weather stations, and view monthly, seasonal and annual temperature data more easily than ever before.
Users can drill down to see some 20,000 graphs – some of which show temperature records dating back to 1850.
The move is part of an ongoing effort to make data about past climate and climate change as accessible and transparent as possible.
Dr Tim Osborn from UEA’s Climatic Research Unit said: “The beauty of using Google Earth is that you can instantly see where the weather stations are, zoom in on specific countries, and see station datasets much more clearly.
“The data itself comes from the latest CRUTEM4 figures, which have been freely available on our website and via the Met Office. But we wanted to make this key temperature dataset as interactive and user-friendly as possible.”
The Google Earth interface shows how the globe has been split into 5° latitude and longitude grid boxes. The boxes are about 550km wide along the Equator, narrowing towards the North and South poles. This red and green checkerboard covers most of the Earth and indicates areas of land where station data are available. Clicking on a grid box reveals the area’s annual temperatures, as well as links to more detailed downloadable station data.
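As a rough check on those quoted widths, here is a minimal sketch (assuming a spherical Earth of mean radius 6371 km; the function name is purely illustrative) of how the east-west width of a 5° box shrinks with latitude:

```python
import math

EARTH_RADIUS_KM = 6371.0  # mean Earth radius, spherical approximation

def box_width_km(lat_deg, box_deg=5.0):
    """East-west width of a grid box spanning box_deg of longitude at a given latitude."""
    return math.radians(box_deg) * EARTH_RADIUS_KM * math.cos(math.radians(lat_deg))

for lat in (0, 30, 60, 85):
    print(f"{lat:>2}N: {box_width_km(lat):6.0f} km wide")
# ~556 km at the Equator, ~481 km at 30N, ~278 km at 60N, ~48 km at 85N
```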
But while the new initiative does allow greater accessibility, the research team do expect to find errors.
Dr Osborn said: “This dataset combines monthly records from 6,000 weather stations around the world – some of which date back more than 150 years. That’s a lot of data, so we would expect to see a few errors. We very much encourage people to alert us to any records that seem unusual.
“There are some gaps in the grid – this is because there are no weather stations in remote areas such as the Sahara. Users may also spot that the location of some weather stations is not exact. This is because the information we have about the latitude and longitude of each station is limited to 1 decimal place, so the station markers could be a few kilometres from the actual location.
“This isn’t a problem scientifically because the temperature records do not depend on the precise location of each station. But it is something which will improve over time as more detailed location information becomes available.”
This new initiative is described in a new research paper published on February 4 in the journal Earth System Science Data (Osborn T.J. and Jones P.D., 2014: The CRUTEM4 land-surface air temperature dataset: construction, previous versions and dissemination via Google Earth).
For instructions about accessing and using the CRUTEM Google Earth interface (and to find out more about the project) visit http://www.cru.uea.ac.uk/cru/data/crutem/ge/. To view the new Google Earth interface, download Google Earth, then open this file: CRUTEM4-2013-03_gridboxes.kml.
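KML is an XML format, so the placemarks in the downloaded file can also be listed programmatically. The sketch below uses only Python's standard library and assumes the file follows the standard KML 2.2 schema with named Placemark elements; confirm the actual element structure against the file itself before relying on it.

```python
# Minimal sketch: list Placemark names from a KML file such as
# CRUTEM4-2013-03_gridboxes.kml (exact internal structure is an assumption;
# inspect the file to confirm element names before relying on this).
import xml.etree.ElementTree as ET

KML_NS = {"kml": "http://www.opengis.net/kml/2.2"}  # standard KML 2.2 namespace

def list_placemarks(path):
    tree = ET.parse(path)
    for pm in tree.getroot().iter(f"{{{KML_NS['kml']}}}Placemark"):
        name = pm.find("kml:name", KML_NS)
        print(name.text if name is not None else "(unnamed placemark)")

# Usage: list_placemarks("CRUTEM4-2013-03_gridboxes.kml")
```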
Ref Steven Mosher: February 6, 2014 at 8:13 am
Thanks for the link to “Map of the more than 40,000 temperature stations used by the Berkeley Earth analysis.”
I attempted to locate 6 sites with lat/long information within 30 minutes of my house. Using Google Earth I drove to those locations and was only able to locate 2 out of the 6. One was at a county airport and the other was at a radio station. Little wonder with a tolerance of up to ± 0.05 degrees (almost 3 miles).
Dr Osborn said: “This dataset combines monthly records from 6,000 weather stations around the world – some of which date back more than 150 years. That’s a lot of data…”
No, it's not a lot of data; it's a trivial amount of data.
6000 * 12 * 150 years = 10.8 million data values (actually it's probably a lot less than that in practice, as there were fewer thermometers 150 years ago, but let's be generous). So in single-precision floating point that's 43.2 million bytes, or a 41 MB data file (not including location and metadata).
If the climate scientists think that’s a lot of data they need to get out more.
A medium-size 1,000 km^2 3D seismic survey recording at 25 x 25 m trace spacing (pretty coarse these days), with pre-stack fold of 60, sampled at 4 ms and covering a time window of 5.0 secs, contains 447 GB of raw data, and these are being recorded and processed on a continuous basis by seismic contractors all over the world. Even the final stacked, migrated, processed data volume would be a 7.5 GB volume. I have dozens of these datasets sitting on my servers right now.
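Both back-of-envelope figures are easy to reproduce. The sketch below simply redoes the arithmetic using the assumptions stated above (6,000 stations, 150 years of monthly values, 4-byte samples, and the quoted survey geometry):

```python
# Back-of-envelope data volumes, using the assumptions stated above.
BYTES_PER_VALUE = 4  # single-precision float

# CRUTEM-style monthly station data
stations, months_per_year, years = 6000, 12, 150
temp_bytes = stations * months_per_year * years * BYTES_PER_VALUE
print(f"Monthly station data: {temp_bytes / 2**20:.1f} MiB")   # ~41.2 MiB

# Medium 3D seismic survey: 1000 km^2, 25 m x 25 m bins, fold 60,
# 5.0 s record length at 4 ms sampling
bins = (1000 * 1_000_000) / (25 * 25)          # 1.6 million bins
prestack_traces = bins * 60
samples_per_trace = int(5.0 / 0.004)           # 1250 samples per trace
raw_bytes = prestack_traces * samples_per_trace * BYTES_PER_VALUE
stack_bytes = bins * samples_per_trace * BYTES_PER_VALUE
print(f"Pre-stack raw data:   {raw_bytes / 2**30:.0f} GiB")    # ~447 GiB
print(f"Stacked volume:       {stack_bytes / 2**30:.1f} GiB")  # ~7.5 GiB
```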
Why do climate scientists make so much fuss about such trivial amounts of data? And what's the excuse for CRU “losing” some? Even a tiny memory stick could hold all the world's temperature data recorded on a monthly basis. Frankly, it's pathetic. 41 MB? I've got Excel and PowerPoint files much bigger than that.
Richard Mallett says:
February 6, 2014 at 1:30 pm
Did you mean to put a /sarc, or are you being serious?
When your motor mechanic says you need a new engine, do you automatically believe him? What if you find that the ‘data’ is actually this: he says it needs a new engine because he had it checked over by a mate (because he was busy on another motor), who was also busy, so had it checked by his mate (a junior trainee mechanic), who heard a slight rattle and said it sounded bad; so the second mate said it was a ‘bag of spanners’ and your actual mechanic interpreted that as the engine being completely ‘fecked’ beyond repair… would you accept that at face value? Yeah, right!
Just so there’s no confusion with my analogy. Some guy records the temperature data, several decades later, some other guy decides it needs adjusting. Some years later, another guy runs a computer analysis of the data and sees some anomalies, so writes a program to ‘smooth’ them, produces a ‘final’ set of data and then ‘accidentally’ destroys the original data, and the notes of all the adjustments. You wanna believe that data as correct?
So when Berkeley Earth says, for each station:
% The data for this station is presented below in several columns and in
% several forms. The temperature values are reported as “raw”,
% “adjusted”, and “regional expectation”.
%
% The “raw” values reflect the observations as originally ingested by
% the Berkeley Earth system from one or more originating archive(s).
% These “raw” values may reflect the merger of more than one temperature
% time series if multiple archives reported values for this location.
%
and you find that their raw data agrees with the data from CRUTEM4 and from Rimfrost, are you saying that we should throw away all these station records because some unknown people, at some unknown time, for some unknown reasons, might have adjusted them before they arrived at Berkeley Earth, Rimfrost and CRUTEM4? That would make all historical scientific data unreliable.
It will be interesting to compare New Zealand CRUTEM since 1986 (see wazsah says: February 6, 2014 at 12:47 pm) with the New Zealand stations they DON’T use vs their grid output. Also vs NIWA’s 7SS and the NZCSET audit of it.
I did a similar V&V with BEST for Auckland and Hamilton, Waikato District. BEST output is not verified by the B station (non-climatically influenced, i.e. no UHI etc.) Te Aroha in the Waikato, nor is it verified by NIWA Ruakura in the Waikato.
On the other hand, and critically, BEST input data for Auckland Albert Park after breakpoint analysis and before kriging DOES corroborate the NZCSET Auckland series but DOES NOT corroborate NIWA’s Auckland series. After kriging, BEST Auckland is nonsense, as it is for Hamilton.
BEST Auckland NZ case study:
http://www.climateconversation.wordshine.co.nz/2014/01/salingers-status-clarified/#comment-535373
BEST Hamilton NZ, Waikato District, case study:
http://www.climateconversation.wordshine.co.nz/2014/01/salingers-status-clarified/#comment-541646
Wouldn’t surprise me if CRUTEM GE fails V&V in the same area too.
Those grid cells sure show the average temperature per cell for the dubious metric it is.
Has the CRU raw data been recompiled yet?
Did I miss it?
Or is it still waiting for the Met Office to fulfill the promise of the CRU email whitewashes?
3 years are up.
I can’t wait to start analyzing this stuff against areas I know.
Very nice. It’s an easy way to access and download the station data and also see the annual and seasonal graphs.
ThinkingScientist says: February 6, 2014 at 2:52 pm
“No, it’s not a lot of data; it’s a trivial amount of data.”
It’s a lot of numbers to type. And they had to do some of that. It’s a lot of numbers to track down, gather and check.
“And what’s the excuse for CRU “losing” some? Even a tiny memory stick could hold all the world’s temperature data recorded on a monthly basis.”
They did not have memory sticks in 1984 when Phil Jones was putting his dataset together. They had 360 KB floppies and 10 MB hard drives.
Matt G, I’m not sure what you are looking at, but on the Google Earth display Dec 2010 shows an anomaly of -0.31 °C for the grid box covering most of England (52.5N, 2.5W), the coldest since 1988.
So I divided the Earth’s surface into 6,000 equal-area squares, one for each global measuring station, and I get a grid 291.5 km on a side, which is also 157.4 nautical miles, or about 2.62 degrees on a side.
I see I have already screwed up, because one of those references to the data set, says there aren’t 6,000 world wide measuring sites but something more than 5,500. Now I know lots of numbers bigger than 5,500, but I have no way to know which ones are on the side of a climate station.
Does UEACRU have a better guess, than “more than 5,500” I mean, they either have the data or they don’t; so why the secrecy about how many informants they have ??
So if their global grid is 5 degrees on a side, and not 2.63 degrees, then they only have one quarter of the grid cells in my grid, so their cells are more like 583 km on a side, which is still comfortably smaller than Hansen’s 1,000 km correlation radius.
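For what it is worth, the equal-area arithmetic above checks out; here is a minimal sketch of the same calculation, assuming a spherical Earth of radius 6371 km:

```python
import math

R = 6371.0                                   # km, spherical Earth
surface_km2 = 4 * math.pi * R**2             # ~510 million km^2
stations = 6000

cell_area = surface_km2 / stations           # ~85,000 km^2 per station
side_km = math.sqrt(cell_area)               # ~291.5 km
side_nm = side_km / 1.852                    # ~157.4 nautical miles
side_deg = side_nm / 60.0                    # ~2.62 degrees (1 nm = 1 arc-minute)

print(f"{side_km:.1f} km = {side_nm:.1f} nm = {side_deg:.2f} degrees per side")
```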
I crossed the Pacific ocean on a ship once, from Wellington NZ, to Manhattan NY, via Panama; spent a lot of time at the bow of the ship filming flying fish flying off the bow wake. Had a whole month in Feb/Mar to do this.
Never ever saw a climate measuring station out there; but the ship did get hit by a tidal wave going 400 knots (the wave, silly; not the ship). Bloody exciting; the wave was 150 miles long, and a foot high. The ship’s captain tooted the horn right when we were on top of the wave (so’s we’d know).
So UEACRU must have built all those climate stations since 1961, although they say they go back to 1850. Never travelled anywhere in 1850, so I wouldn’t know.
Last time I checked on the Nyquist theorem, it didn’t say anything about AVERAGE sample intervals. It relates to the “Maximum Permissible sampling interval”, not the average. You can have shorter than maximum, sampling intervals, but not greater. The sampling can be random, so long as it is more often than the maximum interval.
I suspect that much of UEACRU’s 1850-plus data is not REAL data at all, but is contaminated with aliasing noise, which can never be filtered out, since the noise is inside the signal bandwidth; so to lose the noise you also have to lose some signal, which is noise of a different kind.
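As a toy illustration of that aliasing point (an invented example, not a claim about any actual station record), a component above the Nyquist frequency is indistinguishable, at the sample times, from a spurious low-frequency one:

```python
import numpy as np

# A signal sampled once per day has a Nyquist limit of 0.5 cycles per day.
# A 0.9 cpd component sampled once per day aliases to 0.1 cpd.
t_daily = np.arange(0, 60, 1.0)               # one sample per day for 60 days
true_freq = 0.9                               # cycles per day, above Nyquist
aliased_freq = abs(true_freq - 1.0)           # folds to 0.1 cycles per day

undersampled = np.sin(2 * np.pi * true_freq * t_daily)
low_freq_fit = np.sin(2 * np.pi * aliased_freq * t_daily)

# At the sample times the undersampled high-frequency signal is identical
# (up to sign) to the low-frequency one, so no filter can separate them.
print(np.allclose(undersampled, -low_freq_fit))   # True
```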
Prof John Christy et al. showed that oceanic near-surface water Temperatures (one meter deep) and oceanic near-surface lower-troposphere air Temperatures (three meters high) are (a) not the same; and (b) not correlated.
So it is impossible to correct all those ancient oceanic temperatures obtained from a bucket of water from uncontrolled depth, in waters that meander around on ocean currents. They do not reflect the lower-tropo air Temperatures that are measured at land-based stations. For the roughly 20 years of buoy data that Christy studied, the actual air Temperature warming was about 60% of the actual water Temperature warming. Well, it might have been that the water warming was 40% above the air Temperature warming; I forget, it has been a long time since I read their paper (Jan 2001 Geophysical Research Letters, I think).
Now, important: they didn’t say water warming is 40% more than air warming. They said that is what the numbers were for that particular 20 years of data; not forever and ever, Amen.
So now we have the Google Earth maps and we can see which places on Earth are not correctly Nyquist sampled.
Can’t recall how Nyquist works for multivariable sampling; for temporal and spatial sampling, I believe that all spatial samples have to be taken simultaneously, and all temporal samples have to be taken at the same place. So spatial samples can only reconstruct the spatial map if all locations are sampled at the same time, and you can’t use time sampling at just any old place to get the temporal data; the temps have to be for a specific place.
Well the whole idea of a global climate map is nutty anyhow. Climate is a local thing (check Vostok Station against YOUR climate); and it is the long-term integral of your weather; everything that already happened at your spot.
We didn’t have a tropical storm Sandy in California, or a Hurricane Sandy, so it isn’t showing up in our climate.
Getting tired of clicking on links that don’t return me to where I clicked!!
Richard Mallett says:
February 6, 2014 at 3:22 pm
I assume you are responding to my earlier post?
Firstly, raw data is sacrosanct in the scientific world – or at least it should be. In simple terms, the terminology is called ‘traceability’. You can ‘change’ things a thousand times, so long as you can trace it back to the original data before, during and after each change. For example, your thermometer might be a degree or so ‘out’; that’s fine, so long as you find out at some stage and then you can correct for it – but you NEVER change the actual original recorded data, because you may find out later that the correction was too much/too little, etc. So that’s rule 1.
Secondly, if you make a change, for a good reason, you record ‘why’ and keep a copy of before and after values.
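A minimal sketch of the bookkeeping being described, purely illustrative (the station name, reading and adjustment are invented):

```python
# Keep the raw reading untouched; record every adjustment alongside it.
from dataclasses import dataclass, field

@dataclass
class Reading:
    station: str
    date: str
    raw_c: float                      # original value, never modified
    adjustments: list = field(default_factory=list)

    def adjust(self, delta_c, reason):
        """Record an adjustment without overwriting the raw value."""
        self.adjustments.append({"delta_c": delta_c, "reason": reason})

    @property
    def adjusted_c(self):
        return self.raw_c + sum(a["delta_c"] for a in self.adjustments)

r = Reading("Hypothetical-Station-1", "1901-07", raw_c=15.2)
r.adjust(-0.3, "thermometer later found to read 0.3 C high")
print(r.raw_c, r.adjusted_c, r.adjustments)   # raw preserved, change traceable
```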
In terms of your comment regarding BEST matching CRUTEM4 – you do know that CRUTEM4 is a gridded (i.e. homogenised and averaged) dataset, don’t you? In other words, a highly averaged dataset (BEST) is quite likely to agree with another highly averaged dataset! Moreover, since BEST incorporated such data into its compilation, it makes logical sense for it to follow the same trends, depending, of course, on the ‘weighting’ applied to the various components.
As for throwing away station records – that is a somewhat crass statement. It is exactly what I am not advocating! Indeed, I’d like the original station records (as in, the ones ‘lost’ by CRU) to be found and published…
I enter my replies in the box that says ‘Leave a Reply to Kev-in-Uk’ – I don’t know of any other way to reply to your posts. I am talking about station records from BEST, CRUTEM4 and Rimfrost, not the 5 by 5 degree gridded data. No weighting or averaging needed to be applied before publishing CRUTEM4, BEST or Rimfrost, as explained in the BEST station data header that I quoted. I was asking you if we should throw away the raw (i.e. original) station data that were received (and published on the three websites) on the grounds that we don’t know if they were adjusted at source. That would be the only explanation if a record from the same station from BEST, CRUTEM4 and Rimfrost were all identical.
Berényi Péter: KML is indeed an open standard and you can find the specs here.
Time samples at the same place (at all times), and spatial samples at the same time (for all sites). You can’t just gather up samples made at random times in random locations and say they represent a two-D map.
At ANY sampled time, you must get a sample at each site, simultaneously, in order to say the global Temperature map was thus at THIS TIME; and at the next sampling time instant, you again need simultaneous samples from all sites to be able to say: this is what the spatial map changed to at this time.
Even if you aren’t interested in seeing the reconstructed CONTINUOUS FUNCTION, but only want an AVERAGE in TIME or in SPACE or in both, a factor of only two under-sampling will fold the noisy spectrum back to zero frequency and corrupt the average.
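A concrete toy version of that last point: a cycle sitting at exactly the sampling frequency folds straight to zero frequency, so the computed average shifts by an amount that depends only on when the samples happen to be taken (invented numbers, for illustration only):

```python
import numpy as np

# True signal: constant 10 C plus a 3 C-amplitude daily cycle (true mean is exactly 10 C).
def temp(t_hours):
    return 10.0 + 3.0 * np.sin(2 * np.pi * t_hours / 24.0)

days = np.arange(365)
for hour in (6, 12, 18):                      # one sample per day, at a fixed hour
    samples = temp(days * 24.0 + hour)
    print(f"sampled daily at {hour:02d}:00 -> mean = {samples.mean():.2f} C")
# Output: 13.00, 10.00 and 7.00 C. The daily cycle aliases onto DC, so the
# computed 'mean' is off by up to the full 3 C amplitude, purely an artefact
# of when the single daily sample is taken.
```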
Steven Mosher says:
February 6, 2014 at 8:13 am
Can you provide a percentage figure for the global 1×1 degree cells that are covered by actual data?
The ratio of measured as opposed to interpolated values, at say 50, 100 and 150 years ago?
My counts when using the BEST databases show that the figures drop off very fast.
Station counts are of little use as the same station can appear more than once and there may well be more than one station per cell as well.
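One way such a percentage could be estimated from a published station list is sketched below; the input file and its column names are hypothetical and would need to match whatever format the archive actually provides:

```python
# Rough coverage estimate: fraction of 1x1 degree cells containing at least
# one station with data in a given year. Input format is hypothetical.
import csv
import math

def coverage_fraction(csv_path, year):
    cells = set()
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):    # expected columns: lat, lon, first_year, last_year
            if int(row["first_year"]) <= year <= int(row["last_year"]):
                cells.add((math.floor(float(row["lat"])), math.floor(float(row["lon"]))))
    return len(cells) / (180 * 360)      # fraction of all 1x1 cells, land and ocean

# Usage: print(coverage_fraction("stations.csv", 1900))
```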
george e. smith says:
February 6, 2014 at 4:21 pm
“Time samples at the same place (at all times), and spatial samples at the same time (for all sites). You can’t just gather up samples made at random times in random locations and say they represent a two-D map.”
Actually it is a lot more complicated than that. Capturing the evolution of a 2D map in time with the data regime available is pushing the limits very hard indeed.
Firstly, each sampling point provides, in time-series terms, the average for noon the previous day, as acquired from midnight to midnight over the whole of the preceding day. So we have a continuously varying temporal scheme merged in with a non-optimum spatial one. We also mainly have (TMax + TMin) / 2 = TMean which, although close enough for government work, is hardly a high-quality averaging methodology. It is really only close to the right answer on 12-hour days. Most of the rest of the time it is out by ±1.0 °C, possibly more. On 12-hour days the ‘range / 2’ is OK, but for the rest it ought to be a more complicated sine-wave-plus-DC-offset equation which I don’t have to hand on my mobile – assuming the ‘drain’ to cold is a nearly linear slope anyway.
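A toy diurnal cycle (asymmetric warm-up and cool-down, with invented parameters) shows how far (TMax + TMin)/2 can sit from the true time-average:

```python
import numpy as np

# Toy asymmetric diurnal cycle: quarter-sine warming for 8 h, exponential
# cooling for 16 h. Shape and parameters are purely illustrative.
t = np.linspace(0.0, 24.0, 24 * 60, endpoint=False)      # minute resolution
shape = np.where(t <= 8.0,
                 np.sin(np.pi * t / 16.0),                # warm-up to the peak
                 np.exp(-(t - 8.0) / 6.0))                # cool-down overnight

t_min, t_range = 5.0, 10.0                                # degrees C
temperature = t_min + t_range * shape

true_mean = temperature.mean()                            # time-integrated daily mean
minmax_mean = (temperature.max() + temperature.min()) / 2.0
print(f"true mean {true_mean:.2f} C, (Tmax+Tmin)/2 {minmax_mean:.2f} C, "
      f"difference {minmax_mean - true_mean:.2f} C")      # ~0.5 C for this shape
```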
Then we have a jittering sampling period: 28, 30 or 31 days in a month, and even 365, 365, 365, 366 days in a year. All play havoc with the pure temperature figure.
Weather, and hence temperature, also moves across this sampling map, which smears things out as well. If we did jpg or mpg sampling on the basis that climate work is done, none of us would ever be able to view pictures or watch movies!
The CRU lost all their raw data in office moves in the 1990s. What was archived in 1998 would appear to be “adjusted” data and thus invalid IMO. And again, we see Nick Stokes defend the indefensible.
What I see here is an attempt to “sex up” the AGW agenda using Google Earth on what would ordinarily be boring min/max temperature tables. Nothing to see, move along.
I just checked the data against the numbers for my own location (which I know are quality controlled and fully adjusted for whatever needs adjusting).
And it is close, but the CRUTEM4 trend is 0.048 °C per decade, or a total of 0.6 °C over the whole record, higher than my own location’s (which I know is quality controlled and fully adjusted).
So, another nail in the coffin in my opinion.
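The kind of check described in the comment above takes only a few lines once both series are in hand; the arrays below are placeholders, not real data:

```python
import numpy as np

def trend_per_decade(years, temps_c):
    """Least-squares linear trend in degrees C per decade."""
    slope_per_year = np.polyfit(years, temps_c, 1)[0]
    return 10.0 * slope_per_year

# Placeholder arrays standing in for a local QC'd record and the matching
# CRUTEM4 grid-box series; substitute real data before drawing conclusions.
years = np.arange(1900, 2014)
local_series = 9.0 + 0.005 * (years - 1900) + np.random.default_rng(0).normal(0, 0.3, years.size)
gridbox_series = local_series + 0.0048 * (years - 1900)   # synthetic 0.048 C/decade excess

diff = trend_per_decade(years, gridbox_series) - trend_per_decade(years, local_series)
print(f"grid-box trend exceeds local trend by {diff:.3f} C per decade")
```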
Steven Mosher says:
February 6, 2014 at 9:31 am
CRU do not use EnvCanada.
============
We know that EnvCanada data actually come from the Canadian weather stations. We are not at all sure where the CRU data comes from; that much is clear in the Climategate emails. Since it appears that CRU and EnvCanada do not agree on Canadian temps, either one or both of them must be wrong.
Second biggest country in the world, and they’ve screwed the pooch on temps. But trust us, we are scientists, we know what we are doing. No, you are not scientists, you are academics. Big, big difference.
Kev-in-Uk says:
February 6, 2014 at 4:05 pm
Firstly, raw data is sacrosanct in the scientific world – or at least it should be.
============
Unfortunately we know for a fact that someone at CRU threw away the raw data. We’ve heard excuses why that was done. We also have Climategate emails from an academic at CRU saying he’d rather destroy the data than give it to Steve M.
Coincidence? In police work, there is no such thing as coincidence. The raw data is gone, and we can finally see that CRU doesn’t match the actual weather stations it is supposed to match.
Patrick says: February 6, 2014 at 5:01 pm
“The CRU lost all their raw data in office moves in the 1990s.”
No. CRU says:
“Data storage availability in the 1980s meant that we were not able to keep the multiple sources for some sites, only the station series after adjustment for homogeneity issues. “
“The move is part of an ongoing effort to make data about past climate and climate change as accessible and transparent as possible.”
Until they tell us how the raw data gets translated into adjusted data, there remains NO transparency of data.
Re: “So the people at CRU, Rimfrost and Berkeley Earth are all motivated by money and/or power?” (Mallett at 1:30pm)
Who knows what motivates them? It doesn’t matter. CRU is a known liar. Thus, believing anything they say is irrational. The other temperature reconstruction groups have been exposed repeatedly as using recklessly unscientific methods (at best). What their motive is, is irrelevant.
You attempted far above to denigrate the rational view that the CRU Google data is suspect from the get-go by asserting that our claim that the vast majority of the temperature reconstructionists are crooked is tantamount to an irrational belief in a conspiracy. Our rational conclusion does not, however, depend on there being any such conspiracy. I told you why above.
From your comments, you are CLEARLY a troll. Congrats on getting so many of us to respond. If your ego-clogged brain can take this fact in, note: we do it solely to prevent your slimy half-truths and red herrings from fooling anyone reading here (good job, Kev-in-UK). You have proven your complete insincerity and ignorance (or pride-blind stupidity or whatever is the cause of your poor reasoning and factual errors, it makes no difference) out of your own mouth.
Just know that any replies you receive here are US using YOU.
If that’s what floats your boat, be our guest!
“…some of them want to get used by you…”
(“Sweet Dreams (Are Made of This)” – Eurythmics)
Personal attack by Janice Moore (who also makes accusations without evidence against the current data sets published by Berkeley Earth, CRU and Rimfrost) ignored.
@Nick Stokes 6:44.
So they chose not to retain the raw data.
Rather than deliberately lost it.
That sure clears up your disagreement with Patrick of 5:01.
So what raw data did CRU use to produce their product, “the station series after adjustment for homogeneity issues”?
How might one verify the validity of these homogeneity issues?
How could one replicate their methodology, given we still cannot verify exactly which station data was used?
“Nick Stokes says:
February 6, 2014 at 6:44 pm”
Yes. They threw away some of the raw data in the 80s and lost even more in office moves in the 90s. Whichever way you want to say it, the CRU lost the raw data.
Kev-in-Uk says:
February 6, 2014 at 4:05 pm
>”In terms of your comment regarding BEST matching CRUTEM4 – you do know that CRUTEM4 is a gridded (i.e. homogenised and averaged) dataset, don’t you? In other words, a highly averaged dataset (BEST) is quite likely to agree with another highly averaged dataset!”
Waste of time comparing CRU to BEST. The only V&V is to compare observed actual measurements at a specific location, i.e. find a long-running station requiring no adjustment whatsoever (e.g. no UHI) that either CRU or BEST have NOT used (rare, yes, but possible) and compare the series profile, trend, and absolute values to the gridded, interpolated output. Either that, or as Bill Illis says (February 6, 2014 at 5:42 pm):
>”I just checked the data against the numbers for my own location (which I know are quality controlled and fully adjusted for whatever needs adjusting). And it is close, but the CRUTEM4 trend is 0.048 °C per decade, or a total of 0.6 °C over the whole record, higher than my own location’s (which I know is quality controlled and fully adjusted).”
Where was that Bill?
Up-thread I posted links to two such case studies, one comparing BEST to an adjusted series, the other to a long running non-adjusted series:
http://wattsupwiththat.com/2014/02/06/cru-produces-something-useful-for-a-change/#comment-1560562
BEST fails badly on the output V&V (worse than Bill’s CRU check) but their adjusted input datasets appear rather better. Haven’t done the same for CRUTEM yet but it seems to me that kriging, averaging, interpolation, whatever, just doesn’t work in the real world.
It may work (case study reqd – Bill?) if the orography, and therefore the microclimate, doesn’t change over vast distances, like say parts of Australia, Africa, or the US Midwest, but in New Zealand, where the microclimate changes from district to district in the space of 100 km, it just does not work.
In the New Zealand examples linked above, BEST uses the same output temperature profile for the adjacent Auckland, Waikato, and Bay of Plenty districts. All that happens is the absolute levels move up and down the y-axis. Problem is, each respective district microclimate is completely different, and the profile of two Waikato stations doesn’t match the BEST output profile for Waikato that is also common to Auckland and Bay of Plenty.