CRU produces something useful for a change

World temperature records available via Google Earth

Climate researchers at the University of East Anglia have made the world’s temperature records available via Google Earth.

The Climatic Research Unit Temperature Version 4 (CRUTEM4) land-surface air temperature dataset is one of the most widely used records of the climate system.

The new Google Earth format allows users to scroll around the world, zoom in on 6,000 weather stations, and view monthly, seasonal and annual temperature data more easily than ever before.

Users can drill down to see some 20,000 graphs – some of which show temperature records dating back to 1850.

The move is part of an ongoing effort to make data about past climate and climate change as accessible and transparent as possible.

Dr Tim Osborn from UEA’s Climatic Research Unit said: “The beauty of using Google Earth is that you can instantly see where the weather stations are, zoom in on specific countries, and see station datasets much more clearly.

“The data itself comes from the latest CRUTEM4 figures, which have been freely available on our website and via the Met Office. But we wanted to make this key temperature dataset as interactive and user-friendly as possible.”

The Google Earth interface shows how the globe has been split into 5° latitude and longitude grid boxes. The boxes are about 550km wide along the Equator, narrowing towards the North and South poles. This red and green checkerboard covers most of the Earth and indicates areas of land where station data are available. Clicking on a grid box reveals the area’s annual temperatures, as well as links to more detailed downloadable station data.
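The arithmetic behind those figures is easy to check. Here is a minimal sketch, assuming a spherical Earth of mean radius 6371 km (the 5° box size and the ~550 km figure come from the article; everything else is illustrative):

```python
import math

EARTH_RADIUS_KM = 6371.0  # mean Earth radius, spherical approximation

def grid_box_width_km(lat_deg, box_deg=5.0):
    """East-west width of a box_deg-wide longitude band at latitude lat_deg."""
    return math.radians(box_deg) * EARTH_RADIUS_KM * math.cos(math.radians(lat_deg))

for lat in (0, 30, 60, 85):
    print(f"lat {lat:2d}: {grid_box_width_km(lat):5.0f} km wide")
# lat  0:   556 km  (the article's "about 550 km" along the Equator)
# lat 30:   481 km
# lat 60:   278 km
# lat 85:    48 km  (boxes narrow sharply towards the poles)
```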

But while the new initiative does allow greater accessibility, the research team do expect to find errors.

Dr Osborn said: “This dataset combines monthly records from 6,000 weather stations around the world – some of which date back more than 150 years. That’s a lot of data, so we would expect to see a few errors. We very much encourage people to alert us to any records that seem unusual.

“There are some gaps in the grid – this is because there are no weather stations in remote areas such as the Sahara. Users may also spot that the location of some weather stations is not exact. This is because the information we have about the latitude and longitude of each station is limited to 1 decimal place, so the station markers could be a few kilometres from the actual location.

“This isn’t a problem scientifically because the temperature records do not depend on the precise location of each station. But it is something which will improve over time as more detailed location information becomes available.”
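Dr Osborn’s “few kilometres” is simple to verify: a coordinate stored to one decimal place can be rounded by up to 0.05° in each direction. A quick sketch, using the standard ~111.2 km-per-degree spherical approximation (the function name is ours, for illustration only):

```python
import math

KM_PER_DEG = 111.2  # kilometres per degree of latitude, spherical approximation

def max_offset_km(lat_deg, precision_deg=0.1):
    """Worst-case marker displacement for coordinates rounded to precision_deg."""
    half = precision_deg / 2                                    # max rounding error
    dlat = half * KM_PER_DEG                                    # north-south error
    dlon = half * KM_PER_DEG * math.cos(math.radians(lat_deg))  # east-west error
    return math.hypot(dlat, dlon)

print(f"{max_offset_km(52.0):.1f} km")  # ~6.5 km for a station at UK latitudes
```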

This initiative is described in a research paper published on February 4 in the journal Earth System Science Data (Osborn T.J. and Jones P.D., 2014: The CRUTEM4 land-surface air temperature dataset: construction, previous versions and dissemination via Google Earth).

For instructions about accessing and using the CRUTEM Google Earth interface (and to find out more about the project) visit http://www.cru.uea.ac.uk/cru/data/crutem/ge/. To view the new interface, download Google Earth and then open the KML file CRUTEM4-2013-03_gridboxes.kml.
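For readers unfamiliar with the format, KML is plain XML, so a station placemark of the general kind such a file contains can be written by hand. This is generic KML rather than the actual CRUTEM4 file’s structure, and the station name and coordinates below are hypothetical (rounded to one decimal place, as described above):

```python
# Writes a one-station KML file that Google Earth can open directly.
placemark_kml = """<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Document>
    <Placemark>
      <name>EXAMPLE STATION (hypothetical)</name>
      <description>Monthly mean temperatures, 1853-2013</description>
      <Point>
        <!-- KML coordinate order is longitude,latitude,altitude -->
        <coordinates>-1.3,51.8,0</coordinates>
      </Point>
    </Placemark>
  </Document>
</kml>
"""

with open("example_station.kml", "w") as f:
    f.write(placemark_kml)
```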

134 Comments
Steve W
February 6, 2014 12:13 am

At last we are getting somewhere with transparency. Well done to the CRU!

John Peter
February 6, 2014 12:20 am

So will it now be possible for independent analysts to ascertain whether CRUTEM4 is reliable as an indicator, or whether “warming” has been added lately by reducing pre-satellite-era temperatures through “adjustments”?

February 6, 2014 12:23 am

Are those temperature records raw data or have they been, “Hansen-ed”? Is not putting carefully-selected everythings on Google a good way of turning doubtful computer-generated data into accepted truth to underwrite the CAGW narrative? “A lie will be halfway round the world before the truth can get its boots on”. One of Stalin’s favourite sayings just may be the watchword behind this move. CRU has form in this matter.

Bertram Felden
February 6, 2014 12:24 am

Top job CRU. Kudos to Dr Osborn and the team. It’s great to see this kind of openness. Given the amount of rain the UK has had lately, I wonder whether there will be a project to get precipitation and wind speed data on there too?

Rob
February 6, 2014 12:33 am

A potentially positive step. I have most of the raw and “corrected” data for the U.S.
I’ll be watching with interest!

Mailman
February 6, 2014 12:44 am

As a few others have already touched upon, it would be interesting to see if you could run “reports” using unadjusted temperature data, wouldn’t it?
Somehow I doubt this data will be available. Hopefully I’m wrong and unadjusted temp data is available, but I suspect it’s not.
Regards
Mailman

Somebody
February 6, 2014 1:00 am

“the temperature records do not depend on the precise location of each station”
Notice the wording. ‘Temperature records’.
Not temperature itself, which of course does depend on position; the climate system is not at thermodynamic equilibrium, so it does not have the same temperature everywhere. In fact, it needs equilibrium to have a temperature defined at all…

rtj1211
February 6, 2014 1:21 am

I have to say that getting the defendants at trial to put their version of events, verbatim, into the judge’s summation to the jury does seem a slightly strange way of proceeding in climate justice.
The data should only be ‘put out there’ after it is accepted that it is raw data. Sanitised data can only be put out there if it includes all the details of how it was sanitised and how that sanitisation has been justified.
Otherwise, you’ve just got ‘digital warming’ gone mad……..

Stephen Richards
February 6, 2014 1:21 am

Presumably these are the adjusted, sanitised and greenpiss-approved temperatures. What’s the point? Let’s see the RAW data. You know, the stuff they haven’t adulterated in the name of CO2 tax.

wayne
February 6, 2014 1:24 am

“… the world’s temperature records …”
What the heck does that mean? Will the CRU also provide the adjustments applied to the temperature records per location or cell and actually make this transparent and “open”? Without letting people also see what they have done to the raw temperature data this is but another huge layer of Global Warming propaganda. No? Well many understand why… because all of the collective adjustments are upward, inverted, and then applied negatively backwards to make the far-past readings cooler than the thermometers literally read at that historic time and location. Voilà… climatologist-made Global Warming.
If I am wrong on this and CRU used the pre-adjusted records on Google Earth, my apologies in advance, but that will be an incredible first.

Kev-in-Uk
February 6, 2014 1:30 am

Regarding others’ comments about data adjustment, I also tend to view it with suspicion. If unreasonable data adjustment has taken place, it might be fairly easy to find out (in reasonably developed areas at least). Take your local ‘main’ library for example: it may house weather records, or a copy of them. Hence, a bit like the surfacestations project, a number of volunteers could perhaps search for the ‘written’ information and then compare it to the ‘official’ record shown in this dataset?
Much as I am sure that many older written records could have been removed (intentionally or not) – I’m also sure that many will remain forgotten on dusty bookshelves!

D. Cohen
February 6, 2014 1:39 am

It should provide access to the raw data at the locations — the specified data stations — where it was collected by specified individuals or organizations. Otherwise, not interested. Don’t be fooled by the illusion of having nothing to hide.

Disputin
February 6, 2014 1:42 am

As I read it, Wayne is being a little unfair (quite understandable, given the history of bad faith from warmisti). It seems they are going to put actual weather station records on. If so, well done indeed CRU!
I’ve long argued that, to see if the world is warming, or cooling, or just buggering about, it is only necessary to look at trends of individual stations, rather than trying to get an “average temperature”, which is completely meaningless for all the reasons people on here have said. Then any trends can be evaluated, e.g. for urbanisation or other land use changes, and obvious causes eliminated. Then, should you wish, you can take the average trend.

Nick Stokes
February 6, 2014 1:47 am

Mailman says: February 6, 2014 at 12:44 am
“As a few others have already touched upon it would be interesting to see if you could run “reports” using unadjusted temperature data wouldn’t it?
Somehow I doubt this data will be available? Hopefully I’m wrong and unadjusted temp data is available but I suspect it’s not.”

This is gridded data, so the notion of “raw” data doesn’t really apply. It’s locally averaged, and some kind of homogenisation is likely done; it should be.
If you want unadjusted station data, it’s all on the GHCN unadjusted file.
If you want to see that in a GE-like environment, it’s here, month-by-month. It won’t, currently, pop up a graph, but it will produce the monthly numbers.

Berényi Péter
February 6, 2014 1:55 am

Here is the paper.

We have now developed a KML interface that enables both the gridded temperature anomalies and the weather station temperatures to be visualized and accessed within Google Earth or other Earth browsers.

Fine. Where is the interface definition?
If it is done properly it should be flexible enough to accommodate any other dataset, including raw temperature data, wind speed, pressure, precipitation, etc.
Therefore this device needs to be published urgently under GPL in a revision control system, otherwise it is nothing but another useless propaganda tool.

Patrick
February 6, 2014 2:00 am

“Nick Stokes says:
February 6, 2014 at 1:47 am
This is gridded data, so the notion of “raw” data doesn’t really apply. It’s locally averaged, and some kind of homogenisation is likely done; it should be.”
Rubbish! Stokes stop trying!

February 6, 2014 2:03 am

Now wait just a multi-decadal minute! Haven’t I seen this P..D…O…. before?????

Patrick
February 6, 2014 2:04 am

I disagree. It “appears” useful, to the “useful idiots” (politicians and the like). Given the screenshot, how many ground-based thermometers are there in Australia? I understand it is ~180, of which ~112 are used to calculate a national “average”. LOL, it’s total bullcarp!

Old Ranga
February 6, 2014 2:06 am

Are the figures fudged or unfudged, dare one ask?

Nick Stokes
February 6, 2014 2:09 am

Patrick says: February 6, 2014 at 2:00 am
“Rubbish! Stokes stop trying!”

OK, Patrick, where would you expect to find raw data for a grid cell?

Nick Stokes
February 6, 2014 2:18 am

Nick Stokes says: February 6, 2014 at 1:47 am
“This is gridded data,…”

Although the top level data is gridded, I see you can drill down to get station data.

mark
February 6, 2014 2:20 am

Newbie questions about the temperature data.
– The data in the dozen or so stations I looked at are all monthly averages. Is that the data that CRUTEM4 uses?
– In this paper http://www.nrcse.washington.edu/NordicNetwork/reports/temp.pdf they show that computing averages by the minute vs. min/max gives major differences in standard deviations. Is there a standard algorithm that the various stations use? Or does each station “operator” choose the algorithm they use and provide the daily/monthly average to CRU? Do they report the algorithm they use to CRU?
Sorry if this is basic info – links to educate myself would be great.
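The difference mark is asking about is easy to illustrate. Traditional monthly means are built from daily (Tmax + Tmin)/2 midrange values, and for an asymmetric diurnal cycle the midrange differs from the true time average. A toy simulation with invented numbers (this is not CRU’s actual processing, just the general effect):

```python
import math

# One simulated day at 1-minute resolution: a flat cool night with a
# sine-shaped daytime bump (deliberately asymmetric; numbers are made up).
temps = [10.0 + 6.0 * max(0.0, math.sin(2 * math.pi * (m - 6 * 60) / (24 * 60)))
         for m in range(24 * 60)]

true_mean = sum(temps) / len(temps)         # average of every minute
midrange = (min(temps) + max(temps)) / 2.0  # traditional (Tmin + Tmax) / 2

print(f"true mean {true_mean:.2f}, midrange {midrange:.2f}")
# true mean 11.91, midrange 13.00 -- about a degree apart for this shape
```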

holts
February 6, 2014 2:20 am

by using raw data on each grid cell mean!

charles nelson
February 6, 2014 2:21 am

As the Warmists flood the world with their adjusted, homogenised, gridded data, one is reminded of the switch from the Julian Calendar to the Gregorian Calendar used today…it was only when people began to notice that Christmas was getting closer to the middle of Spring that the revision took place.
Already there is a strong sense that simply by looking out the window the general public are becoming more and more skeptical of Warmist claims.

DDP
February 6, 2014 2:33 am

“This isn’t a problem scientifically because the temperature records do not depend on the precise location of each station.”
Derp. Lots of small errors combined make big errors

John Shade
February 6, 2014 2:37 am

I do not think anything from CRU deserves such automatic trust and admiration, although I admire the generosity of spirit that such responses reveal. My own immediate reaction was less noble. It was along the lines of ‘what are they up to now?’. I would like to see some critical review of this product.

Bloke down the pub
February 6, 2014 2:41 am

“This isn’t a problem scientifically because the temperature records do not depend on the precise location of each station.”
Now remind me, how do we know if a station’s record is affected by UHI?

johnmarshall
February 6, 2014 2:50 am

How do we know if this data is altered or raw data? Only raw data will do.

Patrick
February 6, 2014 2:57 am

“Nick Stokes says:
February 6, 2014 at 2:09 am”
Well, nowhere. That’s the point.

Alan the Brit
February 6, 2014 3:00 am

I hate to sound like such a Grumpy Old Man, but I think the lines spoken by the late, great Trevor Howard (actor), in a scene from the movie Battle of Britain, may ring true… “the bastards are up to something!” It just sounds a little too good to be true at the moment, for me anyway.

Nick Stokes
February 6, 2014 3:04 am

I see that CRU has archived versions back to 1998, if you want to see what changes have been made.

troe
February 6, 2014 3:08 am

Anthony
You have to post the “Professor living in a dumpster for a year in Austin, TX” story up on Climate Depot. It really can’t wait till Friday.

Nick Stokes
February 6, 2014 3:14 am

Patrick says: February 6, 2014 at 2:57 am
“Well, nowhere. That’s the point.”

Well, my point is that grid-averaged data is necessarily processed. You at least need to anomalise to average.
Anyway, their paper is here. They describe homogenisation in Sec 2.2. In fact, it seems they don’t do much now. They did in early days. So the underlying station data, which they show, is rarely changed.
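To make the “anomalise” step concrete: before station series are combined into a grid box, each monthly value is normally expressed relative to that station’s own average for the same calendar month over a baseline period (1961–90 for CRUTEM4), so stations at different absolute temperatures can be averaged together. A minimal sketch with invented data structures:

```python
from statistics import mean

def anomalise(monthly, base_years=range(1961, 1991)):
    """monthly: {(year, month): temp_C} -> {(year, month): anomaly_C}."""
    base = {}
    for m in range(1, 13):
        vals = [t for (y, mo), t in monthly.items() if mo == m and y in base_years]
        if vals:
            base[m] = mean(vals)  # the station's own normal for calendar month m
    return {(y, m): t - base[m] for (y, m), t in monthly.items() if m in base}

# Toy check: Januaries averaging 2.0 C in the baseline, then a 2.5 C January.
data = {(y, 1): 2.0 for y in range(1961, 1991)}
data[(2000, 1)] = 2.5
print(anomalise(data)[(2000, 1)])  # 0.5
```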

February 6, 2014 3:52 am

I notice Skeptical Science now has “Google Earth: how much has global warming raised temperatures near you?” [It hasn’t]
Local station seems to have values from before it came into existence, otherwise the trend looks believable. 1 degree rise in average, airport site over 50+ years, a few prop aircraft to numerous jets, UHI.

wayne
February 6, 2014 4:17 am

Here is one example of the “adjustments” I am speaking of, in a town in my state, randomly chosen:
Went to http://cdiac.ornl.gov/epubs/ndp/ushcn/usa_monthly.html, scrolled down to map of states to pick mine.
Get the raw monthly minimums for example:
http://cdiac.ornl.gov/cgi-bin/broker?id=343821&_PROGRAM=prog.gplot_meanclim_mon_yr2012.sas&_SERVICE=default&param=TMINRAW&minyear=1892&maxyear=2012
Get the “adjusted” monthly minimums:
http://cdiac.ornl.gov/cgi-bin/broker?id=343821&_PROGRAM=prog.gplot_meanclim_mon_yr2012.sas&_SERVICE=default&param=TMIN&minyear=1892&maxyear=2012
Notice the difference? You should! Seems 1900’s temp was moved all of the way down from 51°F to 47°F, just four degrees, that’s all. Same for years close to 1900. THAT is what I mean when I said the adjustments are overwhelmingly upward but are being applied inversely, that is downward, to the far-past years. The further back you go, the larger the negative adjustment applied to the ABSOLUTE values! This is not only seen in anomalies.
I rest my case.
Try some towns about your state. It is quite easy.
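The check wayne describes amounts to differencing two series for the same station. A sketch with made-up stand-in values (echoing, not reproducing, the 51°F to 47°F example above; real values would come from the CDIAC pages he links):

```python
# Hypothetical annual-mean TMIN values (deg F) for one station.
raw      = {1900: 51.0, 1950: 50.2, 2000: 50.5}
adjusted = {1900: 47.0, 1950: 49.0, 2000: 50.5}

for year in sorted(raw):
    delta = adjusted[year] - raw[year]
    print(f"{year}: raw {raw[year]:.1f}  adjusted {adjusted[year]:.1f}  "
          f"adjustment {delta:+.1f}")
# An adjustment that grows more negative toward the past steepens the
# apparent warming trend without changing any modern reading.
```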

February 6, 2014 4:50 am

Now any monkey can get raw data on historical temps around the globe, but only a “Climate Scientist” knows how to make the data do what he wants.

Juraj V
February 6, 2014 5:21 am

I actually like the GISTEMP colored maps, very useful for making trends since then, or comparing some period against any selected reference period. Pity that the data itself is crap.
http://data.giss.nasa.gov/gistemp/maps/

Editor
February 6, 2014 5:21 am

If you want to compare unadjusted (raw) and adjusted data via Google Earth, KML files have been available since 2010 for GHCNv2 and GHCNv3 (beta):
http://diggingintheclay.wordpress.com/2010/10/06/google-earth-kml-files-spot-the-global-warming/
http://diggingintheclay.wordpress.com/2010/10/08/kml-maps-slideshow/
Note though that these are snapshots of the data in time and have not been updated. It is however instructive to see the trends of the individual stations and their variability.
Concerning gridded data, I am not a fan. I agree some adjustments are necessary (basically agree with evanmjones’ comments here: http://wattsupwiththat.com/2014/01/29/important-study-on-temperature-adjustments-homogenization-can-lead-to-a-significant-overestimate-of-rising-trends-of-surface-air-temperature/) but as soon as you homogenise to produce gridded data you mix well-sited stations with badly sited ones, and methods to pick out station moves etc. are far from perfect.

RichardLH
February 6, 2014 5:39 am

Nick Stokes says:
February 6, 2014 at 2:09 am
“OK, Patrick, where would you expect to find raw data for a grid cell?”
Why would you want grid cell, partially interpolated, information in the first place?
That is just an exercise in trying to re-create a 2D field when it would appear that you do not have the Nyquist level of sampling required to do so accurately.
It just, in effect, creates a set of weighting factors that are then applied to the sampling point records themselves.
You could just track the changes in the sampling points directly and achieve a higher overall level of accuracy.

richardscourtney
February 6, 2014 5:47 am

RichardLH:
At February 6, 2014 at 5:39 am you write

Why would you want grid cell, partially interpolated, information in the first place?
That is just an exercise in trying to re-create a 2D field when it would appear that you do not have the Nyquist level of sampling required to do so accurately.
It just, in effect, creates a set of weighting factors that are then applied to the sampling point records themselves.
You could just track the changes in the sampling points directly and achieve a higher overall level of accuracy.

Yes! Well said!
I have repeated it as emphasis and in hope that this will catch the attention of any who missed it when you wrote it.
Richard

Bill from Nevada
February 6, 2014 5:55 am

Here is what the writer of the now-legendary file in the Climategate emails called “Harry_Read_me.txt” had to say about CRU and the shape of the information they are in control of. If you haven’t really pored through the Climategate emails, go online to some of the many sites where the emails are highlighted and the background explained.
Someone who was a computer modeler doing global climate research put notes into the ‘remarks’ of one of the climate models he was building. In programming, you can insert “remarks” which give details, or important fundamentals, regarding the program, and since they are annotated AS “remarks”, the program when running simply ignores those lines. But later on, people working with the computer program can read and discover for themselves whatever is documented about places the program performs well, or performs poorly, or really just about anything.
Here is a quick list of some of the most telling things said about CRU and its data manipulation in the Harry_Read_me.txt file. If you’ve already seen it all then it’s old news. But every day there’s a whole group of people checking into this for the first time. Obviously the lines lifted from the Harry_Read_me.txt file below are highlights. I googled the file, opened some tabs, and grabbed some excerpts.
=======
“But what are all those monthly files? DON’T KNOW, UNDOCUMENTED. Wherever I look, there are data files, no info about what they are other than their names. And that’s useless …” (Page 17)
– “It’s botch after botch after botch.” (Page 18)
“Am I the first person to attempt to get the CRU databases in working order?!!” (Page 47)
– “COBAR AIRPORT AWS (data from an Australian weather station) cannot start in 1962, it didn’t open until 1993!” (Page 71)
“What the hell is supposed to happen here? Oh yeah — there is no ‘supposed,’ I can make it up. So I have : – )” (Page 98)
– “You can’t imagine what this has cost me — to actually allow the operator to assign false WMO (World Meteorological Organization) codes!! But what else is there in such situations? Especially when dealing with a ‘Master’ database of dubious provenance …” (Page 98)
– “So with a somewhat cynical shrug, I added the nuclear option — to match every WMO possible, and turn the rest into new stations … In other words what CRU usually do. It will allow bad databases to pass unnoticed, and good databases to become bad …” (Pages 98-9)
– “OH F— THIS. It’s Sunday evening, I’ve worked all weekend, and just when I thought it was done, I’m hitting yet another problem that’s based on the hopeless state of our databases.” (Page 241).
– “This whole project is SUCH A MESS …” (Page 266)

RichardLH
February 6, 2014 6:04 am

richardscourtney says:
February 6, 2014 at 5:47 am
Thanks. Just a simple engineering point of view 🙂

Greg
February 6, 2014 6:19 am

“The move is part of an ongoing effort to make data about past climate and climate change as accessible and transparent as possible.”
Where’s the “transparency” in all this? CRUTemX is all based on unverified adjustments to non-existent data.
Has anyone forgotten Prof. Phil Jones’ famous “why should I give you our data, you only want to find something wrong with it”?
Or the “oops, the dog ate it” excuses for not having the original data ANYWHERE?
Or “if I did have to hand over the file I think I’d rather destroy it”?
Or the Information Commissioner’s decision that there probably were grounds for prosecuting a criminal breach of FOIA, except that they were smart enough to procrastinate long enough for the statutory time limit to run out on the offence?
Sorry guys, but this is no more “transparent” than it was last week. They’ve just made their data, which has no grounding in observable records (they’re gone), more readily available so that people can be more easily duped into thinking it is an objective scientific record.

February 6, 2014 6:37 am

Now we can finally see for ourselves how much the parking lots of the world have warmed.

Greg
February 6, 2014 6:41 am

Bill from Nevada says:
February 6, 2014 at 5:55 am
Thanks for the helpful tips from “Harry”. We can see why Phil Jones would rather destroy their files than let someone rigorous like Steve McIntyre get a look at them.

Greg
February 6, 2014 6:48 am

Juraj V says:
February 6, 2014 at 5:21 am
I actually like the GISTEMP colored maps, very useful for making a trends since-then, or comparing some period against any selected reference period. Pity that data itself are crap.
http://data.giss.nasa.gov/gistemp/maps/
===
Indeed, this whole exercise of putting a fancy hi-tech front-end on a corrupt and non-verifiable database is like having jacked-up suspension and a custom paint job on a car with a worn-out Lada engine.
It’s masking the mess that is inside and trying to fool the observer.

ferdberple
February 6, 2014 7:05 am

Something doesn’t add up. A quick look at the GE data shows warming on the south west coast of BC Canada. The weather records from Environment Canada show no such warming.

ferdberple
February 6, 2014 7:26 am

RichardLH says:
February 6, 2014 at 5:39 am
You could just track the changes in the sampling points directly and achieve a higher overall level of accuracy.
============
Assuming that accuracy was the objective.

Pamela Gray
February 6, 2014 7:30 am

Gridded data is convenient. For all the wrong reasons scientifically but for all the right reasons politically. Station data is inconvenient. For all the right reasons scientifically but for all the wrong reasons politically.

JJ
February 6, 2014 7:33 am

“This isn’t a problem scientifically because the temperature records do not depend on the precise location of each station.”
This is the mindset that ultimately succumbs to the notion that it isn’t a problem scientifically that the temperature records do not depend on the precise temperature of each station.
Of course, they don’t subscribe to the notion that it is a problem scientifically that the things they call “temperature records” are not temperature records. The rest follows.

February 6, 2014 7:42 am

My grid cell has been [warmer in the past] and it is cooling now. I’m not a big believer in AGW. Thanks CRU!
http://sunshinehours.wordpress.com/2014/02/06/crutem4-is-on-google-earth/

February 6, 2014 7:43 am

I meant “warmer in the past”. Not “warming”. Darn.

C.M. Carmichael
February 6, 2014 7:43 am

Will I be able to access unadjusted data? Is it OK to change the data if it makes my model output look like hash?

Greg
February 6, 2014 7:57 am

Stephen Richards says:
Presumably these are the adjusted, sanitised and greenpiss-approved temperatures. What’s the point? Let’s see the RAW data. You know, the stuff they haven’t adulterated in the name of CO2 tax.
==
Sorry Stephen , apparently they “lost” the original data years ago.

February 6, 2014 8:14 am

Great, so now they can feed Google Earth with fraudulent, tainted data and say ‘presto, see, it really is warming…’ and hide all declines behind Google. In this vein, Jan ’14 will be the warmest ever month in US history.

Old'un
February 6, 2014 8:50 am

Credit where credit is due – in their introduction to CRUTEM 4, CRU say (caps mine):
‘These datasets have been widely used for assessing THE POSSIBILITY OF anthropogenic climate change’
Three little words that make a world of difference – and sadly, could get someone up before the inquisition.

Dave in Canmore
February 6, 2014 9:10 am

Checked a couple stations quickly and my gridcell in the Rocky Mountains of Canada bears little resemblance to the station data from Environment Canada. Fun project tonight will be to get every station from the gridcell and take a look.

RichardLH
February 6, 2014 9:17 am

ferdberple says:
February 6, 2014 at 7:26 am
“Assuming that accuracy was the objective.”
As an Engineer and Scientist I try to logically follow only that path.

Steven Mosher
February 6, 2014 9:31 am

“Dave in Canmore says:
February 6, 2014 at 9:10 am
Checked a couple stations quickly and my gridcell in the Rocky Mountains of Canada bears little resemblance to the station data from Environment Canada. Fun project tonight will be to get every station from the gridcell and take a look.”
CRU do not use EnvCanada.
1. They use data that has been homogenized.
2. Env Canada data comes in two formats, QC and non-QC, so unless you are careful you may be comparing junk data to homogenized data. The guy named sunshine hours made this mistake. So, check the quality flags.

Brian H
February 6, 2014 9:42 am

Real transparency would allow “data drilling” into each record to view all adjustments and their rationale and provenance. The way it’s supposed to be.

Richard Mallett
February 6, 2014 10:00 am

You can compare the station data with Berkeley Earth and with http://www.rimfrost.no for example, unless you assume that they are all in on the conspiracy.

Janice Moore
February 6, 2014 10:44 am

No applause, just cool, arms folded, healthy, skepticism.
Repeat a lie often enough… .
********************************************
Re: “…all in on the conspiracy.” (Mallett at 10am) — It doesn’t require a “conspiracy” for a bunch of people, all independently motivated by essentially the same thing (money and/or power), all aware of a given con game, to act corruptly.
I.E., having the same goal is NOT necessarily being on the same team. Every runner in a race is aiming for the finish line, only a few are on the same team.
***********************************************************
Re CRU: Once someone is a known l!ar, to trust anything they say is irrational, thus, here:
Do NOT trust — ONLY VERIFY.

Richard Mallett
Reply to  Janice Moore
February 6, 2014 1:30 pm

So the people at CRU, Rimfrost and Berkeley Earth are all motivated by money and / or power ? And they all think they can gain money and / or power by producing the same false station temperatures from all over the world, without anybody noticing, except those who say that the station temperatures must be false (without evidence) because all those people are not to be trusted ?

February 6, 2014 11:07 am

Mosher: “The guy named sunshine hours made this mistake.”
Mosher prefers heavily homogenized data.

wayne
February 6, 2014 11:54 am

It’s funny what catches your eye while browsing about.
Nice to see that CRUTEM4 on Google Earth has the Great Lakes area 9°F (5°C) warmer today than in 1880. Don’t think history agrees there, though, but will have to check this out. As the lakes completely freeze over today they can at least feel so much warmer after viewing this warming data, but I bet many don’t agree this day of Feb 6, 2014.
Per this map, at 717491 — PORT ARTHUR, up north and to the west of Lake Superior, the trend went nearly linear from a low average of 31°F in about 1883 to a high average of 42°F in about 1933, an 11 degree warming trek upward pre-AGW, more than even the averaged grid area, before that temperature station was evidently discontinued. Hmm. Drilling down to the station level is going to be rather amusing and something to do for the next two weeks until it is projected to finally get back above freezing; we keep hovering some twenty degrees below normal. But I must say this does make it so much easier to browse about, though including the adjustment differentials sure would put it all in one place.

Tez
February 6, 2014 12:02 pm

In 2009, CRU admitted that:
“Data storage availability in the 1980s meant that we were not able to keep the multiple sources for some sites, only the station series after adjustment for homogeneity issues. We, therefore, do not hold the original raw data but only the value-added (i.e. quality controlled and homogenized) data.”
The raw data has ceased to exist.

KNR
February 6, 2014 12:23 pm

If the CRU told me it was raining I’d go outside to check; that level of distrust is one they have fully earned.

Kev-in-Uk
February 6, 2014 12:28 pm

To be fair, I don’t think Mosh just likes homogenised data – but like all of us, he does like his data to be verified via some form of quality control – which is a perfectly valid stance!
However, and this is the crux that the data keepers (and the data ‘pushers’) don’t like to shout about – many of the QC procedures are either computerised checks/algorithms (rerun on previous computer entered datasets), or were manually made, and not ‘recorded’ as demonstrated in Harry-readme.txt. The bottom line being that we (public) usually have to accept that the presented data (i.e. the stuff from the datakeepers, homogenised or otherwise) is correct, as we have very little means to check it. What causes my personal continued skepticism about the data is that whenever anyone seems to check it (especially individual station data), they nearly always find inexplicable adjustments, or at best, poorly explained or simply unjustified ‘adjustments’. Now, I accept that the adjustments may well be justified, but without detailed explanation – how do we know?
Even drilling down to the station data gains nothing, unless you can see what QC and adjustments (as required, and as explained) were made to it.

Kev-in-Uk
February 6, 2014 12:33 pm

KNR says:
February 6, 2014 at 12:23 pm
I think it’s worse than that – I wouldn’t even bother to go and check, as I’d fully expect it to be the opposite!

wazsah
February 6, 2014 12:47 pm

Nick Stokes said February 6, 2014 at 1:47 am –
[If you want unadjusted station data, it’s all on the GHCN unadjusted file.]
That might lead up false alleyways, Nick. And yes, I realise PDJ himself has suggested this in recent years.
If anybody wants to check what CRU do to data they have to get the CRU station strings they actually use at various stages in their process.
In another time summaries of two versions were published – and I have an html version of their 1986 Southern Hemisphere book –
http://www.warwickhughes.com/cru86/tr027/index.htm
SH stations were only several hundred then so it is easier to try and follow what was done.
Appendix A lists “Station history Information and Homogeneity Assessment Details” – something we never see post 1991.
http://www.warwickhughes.com/cru86/tr027/tr027_60.gif – an example
Appendix B lists the fewer – “Stations used in the gridding algorithm”
http://www.warwickhughes.com/cru86/tr027/tr027_72.gif
When you dig back beyond CRU’s WMO stations into national data – prepare for various surprises – such as Sydney Airport transmogrifying into the WMO Sydney used by CRU. In the case of some Perth data CRU quote, I tried to get the source but, despite asking the BoM, never found what station it came from.
I think Harry saw the scene clearer than most of us ever could.

Nick Stokes
February 6, 2014 12:55 pm

Tez says: February 6, 2014 at 12:02 pm
“The raw data has ceased to exist.”

The raw data has not ceased to exist. It sits, as it always did, with the national met offices, from which CRU had copies. Organisations like GHCN make it conveniently available.

AlexS
February 6, 2014 1:00 pm

So WUWT is happy with more lies being promoted by CRU… who would have thought.

Matt G
February 6, 2014 1:07 pm

RichardLH says:
February 6, 2014 at 6:04 am
richardscourtney says:
February 6, 2014 at 5:47 am
“Thanks. Just a simple engineering point of view :-)”
A very good point of view.
Exactly, grid cells create false weighting that doesn’t represent the data samples in the first place.
Anyone remember December 2010 for the UK?
http://www.metoffice.gov.uk/pub/data/weather/uk/climate/anomacts/2010/12/2010_12_MeanTemp_Actual.gif
It was an extremely cold month, with the CET (not shown above) only just being the 2nd coldest December on record in the data set, going back to the 17th century. What did the grid cell version covering the UK show for December? Between 0.5°C and 1.0°C above normal, and that’s when I realized what a waste of time grid cell data was; it doesn’t represent the instrumental data stations accurately at all.
Will repeat that again: the grid cell data showed between 0.5°C and 1.0°C above normal (in the light pink/red color).

Matt G
February 6, 2014 1:25 pm

This was the anomaly of the same month with the baseline the coldest period available here.
http://www.metoffice.gov.uk/pub/data/weather/uk/climate/anomacts/2010/12/2010_12_MeanTemp_Anomaly_1961-1990.gif
Still off the scale for most of the UK.

Clive Best
February 6, 2014 1:26 pm

Two years ago I put all the hadcrut3 station data up on an interactive map to drill down to the data http://clivebest.com/world/Map-data.html.

PMHinSC
February 6, 2014 2:45 pm

Ref Steven Mosher: February 6, 2014 at 8:13 am
Thanks for the link to “Map of the more than 40,000 temperature stations used by the Berkeley Earth analysis.”
I attempted to locate 6 sites with lat/long information within 30 minutes of my house. Using Google Earth I drove to those locations and was only able to locate 2 out of the 6. One was at a county airport and the other was at a radio station. Little wonder with a tolerance of up to ± 0.05 degrees (almost 3 miles).

ThinkingScientist
February 6, 2014 2:52 pm

Dr Osborn said: “This dataset combines monthly records from 6,000 weather stations around the world – some of which date back more than 150 years. That’s a lot of data…”
No, it’s not a lot of data; it’s a trivial amount of data.
6000 * 12 * 150 years = 10.8 million data values (actually it’s probably a lot less than that in practice, as there were fewer thermometers 150 years ago, but let’s be generous). So in single precision floating point that’s 43.2 million bytes, or a 41 MB data file (not including location and metadata).
If the climate scientists think that’s a lot of data they need to get out more.
A medium-size 1,000 km^2 3D seismic survey recording at 25 x 25 m trace spacing (pretty coarse these days), with pre-stack fold of 60, sampled at 4 ms and covering a time window of 5.0 secs contains 447 Gbytes of raw data, and these are being recorded and processed on a continuous basis by seismic contractors all over the world. Even the final stacked, migrated, data-processed volume would be a 7.5 GB volume. I have dozens of these datasets sitting on my servers right now.
Why do climate scientists make so much fuss about such trivial amounts of data? And what’s the excuse for CRU “losing” some? Even a tiny memory stick could hold all the world’s temperature data recorded on a monthly basis. Frankly, it’s pathetic. 41 MB? I’ve got Excel and PowerPoint files much bigger than that.
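For what it’s worth, the back-of-envelope arithmetic above checks out:

```python
stations, months_per_year, years = 6000, 12, 150
values = stations * months_per_year * years   # 10,800,000 monthly values
size_bytes = values * 4                       # 4-byte single-precision floats
print(f"{values:,} values, {size_bytes / 2**20:.0f} MB")
# 10,800,000 values, 41 MB
```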

Kev-in-Uk
February 6, 2014 3:03 pm

Richard Mallett says:
February 6, 2014 at 1:30 pm
did you mean to put a /sarc? or are you being serious?
When your motor mechanic says you need a new engine do you automatically believe him? What if you find that the ‘data’ is actually that he says it needs a new engine because he had it checked over by a mate (because he was busy on another motor), who was also busy, so had it checked by his mate (a junior trainee mechanic), who heard a slight rattle and said it sounded bad, and so the second mate said it was a ‘bag of spanners’ and your actual mechanic interpreted that as the engine being completely ‘fecked’ beyond repair……would you accept it on face value? Yeah, right!
Just so there’s no confusion with my analogy. Some guy records the temperature data, several decades later, some other guy decides it needs adjusting. Some years later, another guy runs a computer analysis of the data and sees some anomalies, so writes a program to ‘smooth’ them, produces a ‘final’ set of data and then ‘accidentally’ destroys the original data, and the notes of all the adjustments. You wanna believe that data as correct?

Richard Mallett
Reply to  Kev-in-Uk
February 6, 2014 3:22 pm

So when Berkeley Earth says, for each station :-
% The data for this station is presented below in several columns and in
% several forms. The temperature values are reported as “raw”,
% “adjusted”, and “regional expectation”.
%
% The “raw” values reflect the observations as originally ingested by
% the Berkeley Earth system from one or more originating archive(s).
% These “raw” values may reflect the merger of more than one temperature
% time series if multiple archives reported values for this location.
%
and you find that their raw data agrees with the data from CRUTEM4 and from Rimfrost, are you saying that we should throw away all these station records, because some unknown people, at some unknown time, for some unknown reasons, might have adjusted them before they arrived at Berkeley Earth, Rimfrost and CRUTEM4? That would make all historical scientific data unreliable.

richardcfromnz
February 6, 2014 3:09 pm

It will be interesting to compare New Zealand CRUTEM since 1986 (see wazsah says: February 6, 2014 at 12:47 pm) with the New Zealand stations they DON’T use vs their grid output. Also vs NIWA’s 7SS and the NZCSET audit of it.
I did a similar V&V with BEST for Auckland and Hamilton, Waikato District. BEST output is not verified by the B station (non-climatically influenced, i.e. no UHI etc.) Te Aroha in the Waikato; neither is it verified by NIWA Ruakura in the Waikato.
On the other hand, and critically, BEST input data for Auckland Albert Park after breakpoint analysis and before kriging DOES corroborate the NZCSET Auckland series but DOES NOT corroborate NIWA’s Auckland series. After kriging, BEST Auckland is nonsense as for Hamilton.
BEST Auckland NZ case study:
http://www.climateconversation.wordshine.co.nz/2014/01/salingers-status-clarified/#comment-535373
BEST Hamilton NZ, Waikato District, case study:
http://www.climateconversation.wordshine.co.nz/2014/01/salingers-status-clarified/#comment-541646
Wouldn’t surprise me if CRUTEM GE fails V&V in the same area too.

February 6, 2014 3:33 pm

Those grid cells sure render the average temperature/cell for the dubious metric it is.
Has the CRU raw data been recompiled yet?
Did I miss it?
Or is it still waiting for the MET to fulfill the promise of the CRU email whitewashes?
3 years are up.

Rob aka Flatlander
February 6, 2014 3:41 pm

I can’t wait to start analyzing this stuff against areas I know.

February 6, 2014 3:44 pm

Very nice. It’s an easy way to access and download the station data and also see the annual and seasonal graphs.

February 6, 2014 3:46 pm

ThinkingScientist says: February 6, 2014 at 2:52 pm
“No its not a lot of data, its a trivial amount of data.”

It’s a lot of numbers to type. And they had to do some of that. It’s a lot of numbers to track down, gather and check.
“And what’s the excuse for CRU “losing” some? Even a tiny memory stick could hold all the worlds temperature data recorded on a monthly basis.”
They did not have memory sticks in 1984 when Phil Jones was putting his dataset together. They had 360 KB floppies and 10 MB hard drives.

February 6, 2014 3:47 pm

Matt G, I’m not sure what you are looking at, but on the Google Earth display Dec 2010 shows an anomaly of -0.31ºC for the grid-box covering most of England (52.5N 2.5W), the coldest since 1988.

george e. smith
February 6, 2014 4:03 pm

So I divided earth’s surface into 6,000 equal area squares; one for each global measuring station, and I get a grid 291.5 km on a side; which is also 157.4 nautical miles, which is about 2.6226 degrees on a side.
I see I have already screwed up, because one of those references to the data set says there aren’t 6,000 worldwide measuring sites but something more than 5,500. Now I know lots of numbers bigger than 5,500, but I have no way to know which ones are on the side of a climate station.
Does UEACRU have a better guess than “more than 5,500”? I mean, they either have the data or they don’t; so why the secrecy about how many informants they have??
So if their global grid is 5 degrees on a side, and not 2.63 degrees, then they only have one quarter of the grid cells in my grid, so their cells are more like 583 km on a side, which is still comfortably smaller than Hansen’s 1,000 km correlation radius.
I crossed the Pacific ocean on a ship once, from Wellington NZ, to Manhattan NY, via Panama; spent a lot of time at the bow of the ship filming flying fish flying off the bow wake. Had a whole month in Feb/Mar to do this.
Never ever saw a climate measuring station out there; but the ship did get hit by a tidal wave going 400 knots, (the wave silly; not the ship). Bloody exciting; the wave was 150 miles long, and a foot high. The ships captain tooted the horn right when we were on top of the wave (so’s we’d know).
So UEACRU must have built all those climate stations since 1961, although they say they go back to 1850. Never travelled anywhere in 1850, so I wouldn’t know.
Last time I checked on the Nyquist theorem, it didn’t say anything about AVERAGE sample intervals. It relates to the “Maximum Permissible sampling interval”, not the average. You can have shorter than maximum, sampling intervals, but not greater. The sampling can be random, so long as it is more often than the maximum interval.
I suspect that much of UEACRU’s 1850 plus data, is not REAL data at all; but is contaminated with aliasing noise; which can never be filtered out, since the noise is inside the signal bandwidth; so to lose the noise, you also have to lose some signal, which is noise of a different kind.
Prof John Christy, et al showed that oceanic near surface water Temperatures (one meter deep) and oceanic near surface lower troposphere air Temperatures (three meters high); are (a) not the same; and (b) not correlated.
So it is impossible to correct all those ancient oceanic temperatures obtained from a bucket of water from uncontrolled depth, in waters that meander around on ocean currents. They do not reflect the lower tropo air Temperatures that are measured at land based stations. For the roughly 20 years of buoy data that Christy studied, the actual air Temperature warming was about 60% of the actual water Temperature warming. Well, it might have been that the water warming was 40% above the air Temperature warming; I forget, long time since I read their paper (Jan 2001 Geophysical Research Letters (I think)).
Now “important”; they didn’t say water warming is 40% more than air warming. They said the numbers were; for that particular 20 years of data; not forever and ever Amen.
So now we have the Google earth maps and we can see which places on earth are not correctly Nyquist sampled.
Can’t recall how Nyquist works for multivariable sampling; for temporal and spatial sampling, I believe that all spatial samples have to be taken simultaneously, and all temporal samples have to be taken at the same place. So spatial samples can only reconstruct the spatial map if all locations are sampled at the same time, and you can’t use time sampling at just any old place to get the temporal data; the temps have to be for a specific place.
Well the whole idea of a global climate map is nutty anyhow. Climate is a local thing (check Vostok Station against YOUR climate) ; and it is the long term integral of your weather; everything that already happened at your spot.
We didn’t have a tropical storm Sandy in California, or a Hurricane Sandy, so it isn’t showing up in our climate.
Getting tired of clicking on links that don’t return me to where I clicked!
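The equal-area arithmetic in the opening paragraph of that comment is easy to reproduce (spherical Earth, mean radius 6371 km; the 6,000-station figure is from the article):

```python
import math

EARTH_RADIUS_KM = 6371.0
surface_km2 = 4 * math.pi * EARTH_RADIUS_KM**2  # ~510 million km^2
cell_km2 = surface_km2 / 6000                   # one equal-area cell per station
side_km = math.sqrt(cell_km2)

print(f"{side_km:.1f} km per side")             # 291.6 km
print(f"{side_km / 1.852:.1f} nautical miles")  # 157.4 nm
print(f"{side_km / 111.12:.2f} degrees")        # 2.62 degrees (1 nm = 1/60 deg)
```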

Kev-in-Uk
February 6, 2014 4:05 pm

Richard Mallett says:
February 6, 2014 at 3:22 pm
I assume you are responding to my earlier post?
Firstly, raw data is sacrosanct in the scientific world – or at least it should be. In simple terms, the terminology is called ‘traceability’. You can ‘change’ things a thousand times, so long as you can trace it back to original data, before, during and after each change. For example, your thermometer might be a degree or so ‘out’; that’s fine, so long as you find out at some stage and then you can correct for it – but you NEVER change the actual original recorded data, because you may find out later that the correction was too much/too little, etc, etc. So that’s rule 1.
Secondly, if you make a change, for a good reason, you record ‘why’ and keep a copy of before and after values.
In terms of your comment regarding BEST matching CRUTEM4 – you do know that CRUTEM4 is a gridded (ie. homogenised and averaged) dataset, don’t you? In other words, a highly averaged dataset (BEST) is quite likely to agree with another highly averaged dataset! Moreover, since BEST incorporated such data into its compilation, it makes logical sense for it to follow the same trends, depending, of course, on the ‘weighting’ applied to the various components.
As for throwing away station records – that is a somewhat crass statement. It is exactly what I am not advocating! Indeed, I’d like the original station records (as in, the ones ‘lost’ by CRU) to be found and published……….

Richard Mallett
Reply to  Kev-in-Uk
February 6, 2014 4:24 pm

I enter my replies in the box that says ‘Leave a Reply to Kev-in-Uk’ – I don’t know of any other way to reply to your posts. I am talking about station records from BEST, CRUTEM4 and Rimfrost, not the 5 by 5 degree gridded data. No weighting or averaging needed to be applied before publishing CRUTEM4, BEST or Rimfrost, as explained in the BEST station data header that I quoted. I was asking you if we should throw away the raw (i.e. original) station data that were received (and published on the three websites) on the grounds that we don’t know if they were adjusted at source. That would be the only explanation if a record from the same station from BEST, CRUTEM4 and Rimfrost were all identical.

February 6, 2014 4:17 pm

Berényi Péter : KML is indeed an open standard and you can find the specs here

george e. smith
February 6, 2014 4:21 pm

Time samples at the same place (at all times), and spatial samples at the same time (for all sites). You can’t just gather up samples made at random times in random locations and say they represent a two-D map.
At ANY sampled time, you must get a sample at each site, simultaneously, in order to say the global Temperature map was thus at THIS TIME; and at the next sampling time instant, you again need simultaneous samples from all sites to be able to say: this is what the spatial map changed to at this time.
Even if you aren’t interested in seeing the reconstructed CONTINUOUS FUNCTION, but only want an AVERAGE in TIME or in SPACE or in both, a factor of only two under-sampling will fold the noisy spectrum back to zero and corrupt the average.

RichardLH
February 6, 2014 4:27 pm

Steven Mosher says:
February 6, 2014 at 8:13 am
Can you provide a percentage figure for the Global 1*1 degree cells that are covered by the actual data?
The ratio of measured as opposed to interpolated values. At say 50, 100, 150 years ago?
My counts when using the BEST databases show that the figures drop off very fast.
Station counts are of little use as the same station can appear more than once and there may well be more than one station per cell as well.

RichardLH
February 6, 2014 4:58 pm

george e. smith says:
February 6, 2014 at 4:21 pm
“Time samples at the same place, ( at all times) and spatial samples at the same time (for all sites) You can’t just gather up samples made at random times in random locations, and say they represent a two D map.”
Actually it is a lot more complicated than that. To capture the evolution of a 2d map in time terms with the data regime available is pushing the limits very hard indeed.
Firstly, each sampling point provides, in time series terms, the average for noon the previous day as acquired from midnight to midnight over the whole of the preceding day. So we have a continuously varying temporal scheme merged in with a non-optimum spatial one. We also mainly have (TMax + TMin) / 2 = TMean which, although close enough for government work, is hardly a high quality averaging methodology. It really is only close to the right answer on 12 hour days. Most of the rest of the time it is ±1.0°C out, possibly more. On 12 hour days the ‘range / 2’ is OK, but for the rest it ought to be a more complicated sine wave plus DC offset equation which I don’t have to hand on my mobile. Assuming the ‘drain’ to cold is a nearly linear slope anyway.
Then we have a jittering sampling period. 28,30,31 and even 365,365,365,366. All play havoc with the pure temperature figure.
Weather, and hence temperature, also move across this sampling map, which smears that out as well. If we did jpg or mpg sampling on the basis that climate work is done, none of us would ever be able to view pictures or watch movies!

Patrick
February 6, 2014 5:01 pm

The CRU lost all their raw data in office moves in the 1990s. What was archived in 1998 would appear to be “adjusted” data and thus invalid IMO. And again, we see Nick Stokes defend the indefensible.
What I see here is an attempt to “sex up” the AGW agenda using Google Earth on what would ordinarily be boring min/max temperature tables. Nothing to see, move along.

Bill Illis
February 6, 2014 5:42 pm

I just checked the data against the numbers for my own location (which I know are quality controlled and fully adjusted for whatever needs adjusting).
And it is close, but the CRUTEM4 trend is 0.048C per decade higher than my own location’s (which I know is quality controlled and fully adjusted) – a total of 0.6C higher over the whole record.
So, another nail in the coffin in my opinion.

ferdberple
February 6, 2014 6:28 pm

Steven Mosher says:
February 6, 2014 at 9:31 am
CRU do not use EnvCanada.
============
We know that EnvCanada data actually come from the Canadian weather stations. We are not at all sure where the CRU data comes from, that much is clear in the climategate emails. Since it appears that CRU and EnvCanada do not agree on Canadian temps, either one or both of them must be wrong.
Second biggest country in the world, and they’ve screwed the pooch on temps. But trust us, we are scientists, we know what we are doing. No, you are not scientists, you are academics. Big, big difference.

ferdberple
February 6, 2014 6:34 pm

Kev-in-Uk says:
February 6, 2014 at 4:05 pm
Firstly, raw data is sacrosanct in the scientific world – or at least it should be.
============
Unfortunately we know for a fact that someone at CRU threw away the raw data. We’ve heard excuses why that was done. We also have Climategate emails from an academic at CRU saying he’d rather destroy the data than give it to Steve M.
Co-incidence? In police work, there is no such thing as co-incidence. The raw data is gone and we can finally see that CRU doesn’t match the actual weather stations it is supposed to match.

Nick Stokes
February 6, 2014 6:44 pm

Patrick says: February 6, 2014 at 5:01 pm
“The CRU lost all their raw data in office moves in the 1990′s.”

No. CRU says:
“Data storage availability in the 1980s meant that we were not able to keep the multiple sources for some sites, only the station series after adjustment for homogeneity issues. “

Greg Cavanagh
February 6, 2014 7:04 pm

“The move is part of an ongoing effort to make data about past climate and climate change as accessible and transparent as possible.”
Until they tell us how the raw data gets translated into adjusted data, there remains NO transparency of data.

Janice Moore
February 6, 2014 7:43 pm

Re: “So the people at CRU, Rimfrost and Berkeley Earth are all motivated by money and / or power ?” (Mallett at 1:30pm)
Who knows what motivates them? It doesn’t matter. CRU is a known l1ar. Thus, believing anything they say is irrational. The other temperature reconstruction groups have been exposed repeatedly as using recklessly unscientific methods (at best). What their motive is, is irrelevant.
You attempted far above to denigrate the rational view that the CRU Google data is suspect-from-the-get-go by asserting that our claim that the vast majority of the temp. reconstructionists are crooked is tantamount to an irrational belief in a conspiracy. Our rational conclusion does not, however, depend on there being any such conspiracy. I told you why above.
From your comments, you are CLEARLY a troll. Congrats on getting so many of us to respond. If your ego-clogged brain can take this fact in, note: we do it solely to prevent your slimy half-truths and red herrings from fooling anyone reading here (good job, Kev-in-UK). You have proven your complete insincerity and ignorance (or pride-blind stupidity or whatever is the cause of your poor reasoning and factual errors, it makes no difference) out of your own mouth.
Just know that any replies you receive here are US using YOU.
If that’s what floats your boat, be our guest!
“… some of them want to get used by you… .”
(“Sweet Dreams” (Are Made of This) – Eurythmics)

Richard Mallett
Reply to  Janice Moore
February 7, 2014 8:25 am

Personal attack by Janice Moore (who also makes accusations without evidence against the current data sets published by Berkeley Earth, CRU and Rimfrost) ignored.

February 6, 2014 8:23 pm

@ Nick Stokes 6:44.
So they chose not to retain the raw data.
Rather than deliberately lost it.
That sure clears up your disagreement with Patrick of 5:01.
So what raw data did CRU use to produce their product, the station series after adjustment for homogeneity issues?
How might one verify the validity of these homogeneity issues?
How could one replicate their methodology, given we still cannot verify exactly which station data was used?

Patrick
February 6, 2014 8:43 pm

“Nick Stokes says:
February 6, 2014 at 6:44 pm”
Yes. They threw away some of the raw data in the 80s and lost even more in office moves in the 90s. Whichever way you want to say it, the CRU lost the raw data.

richardcfromnz
February 6, 2014 8:47 pm

Kev-in-Uk says:
February 6, 2014 at 4:05 pm
>”In terms of your comment regarding BEST matching CRUTEM4 – you do know that CRUTEM4 is a gridded (ie. homogenised and averaged) dataset, don’t you? In other words, a highly averaged dataset (BEST) is quite likely to agree with another highly averaged dataset!”
Waste of time comparing CRU to BEST. The only V&V is to compare observed actual measurements at a specific location i.e. find a long running station requiring no adjustment whatsoever (e.g. no UHI) that either CRU or BEST have NOT used (rare yes but possible) and compare the series profile, trend, and absolute values to gridded interpolated output. Either that or as Bill Illis says: February 6, 2014 at 5:42 pm:
>”I just checked the data against the numbers for my own location (which I know are quality controlled and fully adjusted for whatever needs adjusting). And it is close, but the Crutemp4 trend is 0.048C per decade or a total of 0.6C over the whole record higher than my own location (which I know is quality controlled and fully adjusted).”
Where was that Bill?
Up-thread I posted links to two such case studies, one comparing BEST to an adjusted series, the other to a long running non-adjusted series:
http://wattsupwiththat.com/2014/02/06/cru-produces-something-useful-for-a-change/#comment-1560562
BEST fails badly on the output V&V (worse than Bill’s CRU check), but their adjusted input datasets appear rather better. I haven’t done the same for CRUTEM yet, but it seems to me that kriging, averaging, interpolation, whatever, just doesn’t work in the real world.
It may work (case study reqd – Bill?) if the orography, and therefore the microclimate, doesn’t change over vast distances, as in, say, parts of Australia, Africa or the US Midwest, but in New Zealand, where microclimate changes from district to district in the space of 100 km, it just does not work.
In the New Zealand examples linked above, BEST uses the same output temperature profile for the adjacent Auckland, Waikato and Bay of Plenty districts. All that happens is the absolute levels move up and down the y-axis. Problem is, each respective district microclimate is completely different, and the profiles of two Waikato stations don’t match the BEST output profile for Waikato that is also common to Auckland and Bay of Plenty.
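For concreteness, here is a minimal Python sketch of the single-station V&V check described above: take a long, unadjusted station series and the gridded output for the cell containing it, then compare trend and mean offset over the overlap. The file names and the shared “temp” column are hypothetical stand-ins, not any group’s actual formats.

# Minimal sketch: compare a long-running, unadjusted station series against
# the gridded output for the cell that contains it. File names, column
# layout and units are assumptions, not any group's actual formats.
import numpy as np
import pandas as pd

# Both hypothetical CSVs have columns: date, temp (monthly mean, deg C).
station = pd.read_csv("station_monthly.csv", parse_dates=["date"], index_col="date")
gridbox = pd.read_csv("gridbox_monthly.csv", parse_dates=["date"], index_col="date")

# Restrict to the overlapping months only.
common = station.join(gridbox, lsuffix="_stn", rsuffix="_grid", how="inner")

def decadal_trend(series):
    """OLS slope in deg C per decade (x expressed in fractional years)."""
    s = series.dropna()
    years = s.index.year + (s.index.month - 0.5) / 12.0
    return 10.0 * np.polyfit(years, s.values, 1)[0]

print("station trend:", decadal_trend(common["temp_stn"]), "C/decade")
print("gridbox trend:", decadal_trend(common["temp_grid"]), "C/decade")
print("mean offset:", (common["temp_grid"] - common["temp_stn"]).mean(), "C")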

Karen
February 6, 2014 10:19 pm

’Tis interesting that the temps also work on the Moon and Mars. lol

Steven Mosher
February 6, 2014 10:47 pm

“ferdberple says:
February 6, 2014 at 6:28 pm
Steven Mosher says:
February 6, 2014 at 9:31 am
CRU do not use EnvCanada.
============
We know that EnvCanada data actually come from the Canadian weather stations. We are not at all sure where the CRU data comes from, that much is clear in the climategate emails. Since it appears that CRU and EnvCanada do not agree on Canadian temps, either one or both of them must be wrong.
###############################
WRONG. CRU use data from Env Canada that has been homogenized. It’s in their documentation. All you have to do is read it. At one point I spent about 3 months comparing Env Canada data (I wrote an R package for downloading it all), the Berkeley data and the CRU data. CRU is a subset of Env Canada; however, they rely on homogenized versions.
Env Canada data can be in really poor shape depending on the station.
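A trivial Python sketch of the subset check being described, with invented station IDs; a real comparison would of course match on data values and reporting periods, not just identifiers.

# Trivial sketch: is one network's station list a subset of another's?
# IDs are invented for illustration only.
cru_ids = {"CA001234", "CA005678", "CA009999"}
env_canada_ids = {"CA001234", "CA005678", "CA009999", "CA011111", "CA022222"}

print("CRU subset of Env Canada:", cru_ids <= env_canada_ids)
print("Env Canada stations not in CRU:", sorted(env_canada_ids - cru_ids))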

Steven Mosher
February 6, 2014 10:53 pm

“In terms of your comment regarding BEST matching CRUTEM4 – you do know that CRUTEM4 is a gridded (ie. homogenised and averaged) dataset, don’t you? In other words, a highly averaged dataset (BEST) is quite likely to agree with another highly averaged dataset! Moreover, since BEST incorporated such data into its compilation, it makes logical sense for it to follow the same trends, depending, of course, on the ‘weighting’ applied to the various components.”
###################
Wrong on several counts.
There are substantial differences between CRU and BEST.
1. They grid at 5 degrees.
2. We grid at 1 degree and 1/4 degree.
3. CRU use homogenized data. We use unadjusted data.
4. We don’t average.
Here is what 1/4-degree grids look like:
http://static.berkeleyearth.org/posters/agu-2013-poster-1.pdf
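To make the gridding contrast concrete, here is a rough Python sketch of plain box-averaging of station anomalies at an arbitrary resolution. The toy station table is invented, and the real products (CRU’s gridding, BEST’s kriging) are considerably more involved than this.

# Rough sketch of lat/lon box-averaging at an arbitrary resolution.
# Station rows (lat, lon, anomaly) are invented; real methods are far
# more sophisticated (area weighting, kriging, etc.).
import numpy as np
import pandas as pd

def grid_average(stations, deg):
    """Average station anomalies within deg x deg lat/lon boxes."""
    df = stations.copy()
    df["lat_box"] = np.floor(df["lat"] / deg) * deg
    df["lon_box"] = np.floor(df["lon"] / deg) * deg
    return df.groupby(["lat_box", "lon_box"])["anomaly"].mean()

stations = pd.DataFrame({
    "lat": [51.5, 52.1, 36.7, -36.8],
    "lon": [-0.1, 0.5, -4.4, 174.8],
    "anomaly": [0.3, 0.5, 0.1, 0.2],
})
print(grid_average(stations, 5.0))   # CRUTEM-style 5 degree boxes
print(grid_average(stations, 1.0))   # finer 1 degree boxes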

Steven Mosher
February 6, 2014 11:03 pm

“I attempted to locate 6 sites with lat/long information within 30 minutes of my house. Using Google Earth I drove to those locations and was only able to locate 2 out of the 6. One was at a county airport and the other was at a radio station. Little wonder with a tolerance of up to ± 0.05 degrees (almost 3 miles).”
The data we use is ingested from public archives. No secret data; no data that, like Jones’s, we cannot share. That data comes as is, including location errors.
If you found them, did you record the exact lat/lon with GPS? That’s really important information, and you can do your part by sharing that data back so that the public records get corrected.
If you find an error, please write to me. We are constantly updating the data and fixing known problems, or upstreaming the fixes so they are fixed at the source. Especially station identity issues. In the raw sources there are 300,000 pairs of stations within 1 km. That’s before we “de-duplicate”. Just last week one guy wrote me with a pair of duplicates that we missed. 300K is a lot to sort through.
So, whatever you find, send me documentation (I like to keep records) and I’ll tackle it with my new data helper.
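As an illustration of what flagging candidate duplicates can look like, a short Python sketch with made-up station records; a real de-duplication pass would also compare names, record overlap and the data values themselves, and would use a spatial index rather than an all-pairs loop.

# Sketch: flag station pairs closer than 1 km as candidate duplicates.
# Station records are made up; O(n^2) looping is fine for a toy list only.
from math import asin, cos, radians, sin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two lat/lon points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

stations = [
    ("EXAMPLE PARK", -36.85, 174.77),
    ("EXAMPLE CITY", -36.85, 174.76),
    ("EXAMPLE AIRPORT", -43.49, 172.53),
]
for i in range(len(stations)):
    for j in range(i + 1, len(stations)):
        name1, lat1, lon1 = stations[i]
        name2, lat2, lon2 = stations[j]
        if haversine_km(lat1, lon1, lat2, lon2) < 1.0:
            print("possible duplicate:", name1, "/", name2)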

Steven Mosher
February 6, 2014 11:05 pm

“ferdberple says:
February 6, 2014 at 7:05 am
Something doesn’t add up. A quick look at the GE data shows warming on the south west coast of BC Canada. The weather records from Environment Canada show no such warming.”
###############
Be careful with the BC records of Env Canada.

Steven Mosher
February 6, 2014 11:06 pm

“Bill from Nevada says:
February 6, 2014 at 5:55 am
Here is what the writer of the now
legendary file in the climate gate emails called “Harry_Read_me.txt
###############################
HARRY_READ_ME.txt had to do with an entirely different team and an entirely different dataset.

Nick Stokes
February 6, 2014 11:34 pm

Patrick says: February 6, 2014 at 8:43 pm
“Whichever way you want to say it the CRU lost the raw data.”

CRU does not provide raw data. GHCN unadjusted does. You just have to go to the right place.

Patrick
February 7, 2014 2:05 am

“Nick Stokes says:
February 6, 2014 at 11:34 pm”
I never said they do. For a “scientific” body, to LOSE that data, and then “provide adjusted” data without the ability to refer BACK to the raw data, is the problem. But, still, you go on defending BS (bad science).

richardcfromnz
February 7, 2014 2:16 am

Steven Mosher says:
February 6, 2014 at 10:53 pm
>”There are substantial differences between CRU and BEST.
[……]
3. CRU use homogenized data. We use unadjusted data.”
But your method does produce adjusted data on the way to kriging so you do, in effect, use adjusted data. Proof:
BEST adjusted data for AUCKLAND, ALBERT PARK NZ: http://berkeleyearth.lbl.gov/stations/157062
BEST adjusted data for CHRISTCHURCH AP/HAREWOOD NZ: http://berkeleyearth.lbl.gov/stations/157045
For every raw station dataset you produce a corresponding “Breakpoint Adjusted” dataset (examples above) as method output along with the multi-station composite output.
Albert Park above has 3 site move adjustments and 9 “empirical break” adjustments.
Christchurch AP/Harewood above has 2 site move adjustments and 5 “empirical break” adjustments.
That’s a lot of adjustments for a method that you say uses “unadjusted data”. Although I don’t find breakpoint analysis controversial, it’s the composite kriging that doesn’t pass observational V&V.
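For readers unfamiliar with the idea, here is a toy Python sketch of the “scalpel” notion discussed in this thread: rather than adjusting a series across a documented break (site move, instrument change), cut it at the breakpoints and let each fragment enter the analysis as a separate record. The series values and break dates are invented.

# Toy sketch of the "scalpel" idea: split a station series at breakpoints
# instead of adjusting across them. Values and break dates are invented.
import pandas as pd

dates = pd.date_range("1950-01", periods=240, freq="MS")
series = pd.Series(range(240), index=dates, dtype=float)  # stand-in data

breakpoints = [pd.Timestamp("1958-06-01"), pd.Timestamp("1965-03-01")]

def scalpel(series, breakpoints):
    """Split a series into fragments at the given breakpoint dates."""
    edges = ([series.index[0]] + sorted(breakpoints)
             + [series.index[-1] + pd.offsets.MonthBegin()])
    return [series[(series.index >= a) & (series.index < b)]
            for a, b in zip(edges, edges[1:])]

for k, frag in enumerate(scalpel(series, breakpoints)):
    print(f"fragment {k}: {frag.index[0]:%Y-%m} to {frag.index[-1]:%Y-%m}, n={len(frag)}")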

RichardLH
February 7, 2014 2:21 am

Steven Mosher says:
February 6, 2014 at 10:53 pm
“There are substantial differences between CRU and BEST.
1. They grid at 5 degrees.
2. We grid at 1 degree and 1/4 degree,
3. CRU use homogenized data. We use unadjusted data.
4. we dont average.”
Can you please provide one simple statistic? What percentage of the 1×1 degree (or 1/4 degree) cells have data in them today, and at 50, 100 and 150 years in the past?
Station numbers are of little use, as there are multiple duplicates and multiple stations per cell, as the BEST database shows.
P.S. You do still have some internal data inconsistencies within your published data, which I find hard to reconcile with these being separate ‘views’ of the same internal data.
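One way a coverage statistic like that could be computed, sketched in Python against a hypothetical station table holding coordinates and first/last reporting years; the denominator here is every cell on the globe, though one could equally restrict it to land cells.

# Sketch: percentage of deg x deg cells containing at least one station
# active in a given year. The station table layout is hypothetical.
import numpy as np
import pandas as pd

def coverage_percent(stations, year, deg=1.0):
    """Share of all deg x deg cells with an active station in `year`."""
    active = stations[(stations.first_year <= year) & (stations.last_year >= year)]
    cells = set(zip(np.floor(active.lat / deg), np.floor(active.lon / deg)))
    total = int(180 / deg) * int(360 / deg)
    return 100.0 * len(cells) / total

stations = pd.DataFrame({
    "lat": [51.5, -36.8, 40.7], "lon": [-0.1, 174.8, -74.0],
    "first_year": [1850, 1910, 1870], "last_year": [2013, 2013, 1990],
})
for y in (2013, 1963, 1913, 1863):
    print(y, f"{coverage_percent(stations, y):.4f}% of 1-degree cells")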

Kev-in-Uk
February 7, 2014 10:12 am

richardcfromnz says:
February 7, 2014 at 2:16 am
Absolutely! I find it hard to see how Mosh can claim unadjusted data when they use CRU homogenised and gridded data. Or have CRU now found the ‘raw’ data? LOL.
Obviously, there could be debate about what ‘raw’ data is – but in essence, to my mind, it would be the initially recorded values, QC’d as required. I don’t think the CRU data within the BEST dataset is like this at all!

Matt G
February 7, 2014 11:57 am

David Sanger (@davidsanger) says:
February 6, 2014 at 3:47 pm
“Matt G. I’m not sure what you are looking at, but on the Google Earth display Dec 2010 shows an anomaly of -0.31ºC for the grid-box covering most of England (52.5N 2.5W) . coldest since 1988”
The one I was referring to was created and displayed in January 2011; this data is different from what it was then, so it must have changed since. Still, the temperatures for England were over 3.7°C below normal and were the coldest since the 1890s (CET). Just stating that it was the coldest since 1988 already shows it is wrong.
For example, look at the difference in temperatures between December 1988 and December 2010. December 1988 was one of the mildest recorded since 1934 and December 2010 was the coldest (2nd coldest for CET since 1890).
CET
December 1988: 7.5°C
December 2010: -0.7°C
Therefore December for England in 1988 was on average 8.2°C warmer than 2010, yet the grid data apparently shows 1988 to be somehow even colder. December 2010 broke records across all the UK for severe cold and plenty of snow in some areas.
The weighting is worthless for grid data.

richardcfromnz
February 7, 2014 1:01 pm

Kev-in-Uk says:
February 7, 2014 at 10:12 am
>”…. they [BEST] use CRU homogenised and gridded data”
No, they don’t. Say for the Albert Park example above, the raw data they start with is supplied to Global Historical Climatology Network (GHCN), WMO, and whoever by New Zealand’s national weather and climate institutions. That same Albert Park raw data can be accessed by anyone from NIWA’s CliFlo database in New Zealand.
So in the case of Auckland, BEST uses ALL of the raw Albert Park data and doesn’t adjust for UHI/sheltering. NIWA doesn’t use all but doesn’t adjust for UHI either. Albert Park is UHI/sheltering contaminated and has to be corrected for that. But even without UHI correction, BEST’s break adjusted Albert Park corroborates the NZCSET audit series of NIWA’s 7SS Auckland location but eliminates NIWA’s Auckland series (trend far too steep – shonky site move adjustments, didn’t correct for UHI). Correcting BEST Albert Park for UHI/sheltering would give a series trend less than the NZCSET audit series trend (which was much less than NIWA’s) and not much above flat.
Once a site was established at Auckland Airport (Mangere), NIWA ceased using Albert Park and used Mangere raw data instead. BEST continued using Albert Park however. GISS uses Mangere but not Albert Park.
BEST’s adjustment method is an in-house development they’ve termed the “scalpel” method. You can read about it in their Method paper at the BEST website. In short, BEST, GISS and CRU start by selecting their respective raw data from the same sources (GHCN) and adjust it themselves by their own methods. But for New Zealand, NIWA uses sites from their CliFlo database, some of which are used by BEST, GISS and CRU and some of which aren’t.
In New Zealand therefore (and elsewhere), we have independent means of checking location series that BEST, GISS, CRU produce starting with the same raw data that anyone has access to and can select from.

Kev-in-Uk
February 7, 2014 3:40 pm

richardcfromnz says:
February 7, 2014 at 1:01 pm
I’m not sure, to be honest. The data page lists CRU data among the many datasets they use, but unless CRU supply raw data, I’m presuming it is ‘adjusted’? If it is not ‘adjusted’, I’d be pleased to know, as this would mean the CRU dataset you can download from the Berkeley Earth site can be compared to other CRU products.

sonofametman
February 7, 2014 4:01 pm

My father worked for the UK Met Office, in various roles, for over 40 years. He never lost his interest in getting the data right, and eventually gave up the marine division and moved to forecasting for the RAF as he was not being taken seriously. He was concerned about the accuracy of sea surface temperature measurements, as the methods used were crude. He thought that the measurements were liable to errors from evaporative cooling as well as radiative heating from the weathership itself, and wanted to do experiments to eliminate any problems. He was ignored, and so spent the next 20 odd years in the air division instead. When I see a fuss being made about 1 deg C heating, I begin to wonder….
What I’m getting at is that the raw temperature data, especially older data, may not be as reliable as it might be convenient to imagine.
Lest we might take data for granted, in this age of remote sensing and the internet, just imagine being on station on a weathership in the Denmark Strait, in winter.

richardcfromnz
February 7, 2014 5:39 pm

Kev-in-Uk says:
February 7, 2014 at 3:40 pm
>”The data page lists that they use CRU data”
That’s something I wasn’t aware of. Can you link to that data page you’re referring to please?
I’ve looked at your comments up-thread but can’t see a link to anything like that.
Makes a huge difference because that would introduce 2 layers of adjustment to an adjusted CRU station if BEST actually do use ‘adjusted’ rather than raw to start with in some cases – CRU’s and BEST’s. I’ll be surprised if they’re doing that so I’d like to see the facts.
BEST describe their adjustment method by “scalpel” analogy because of the very short overlap required for it. Given their 9 “empirical break” adjustments to Albert Park above, I’m more inclined to think of a butchers meat slicer analogy than a scalpel, but that’s just me. Australia’s BOM make a similar number of “empirical break” adjustments for their ACORN-SAT series so they’re not alone.
I’ll make the point again though, it’s not BEST’s breakpoint adjustments to single stations that do the most damage – it’s their subsequent composite kriging that churns out the rubbish.

Michael Whittemore
February 7, 2014 10:04 pm

Are these data sets taking into account urban heat?

Patrick
February 8, 2014 1:31 am

“richardcfromnz says:
February 7, 2014 at 1:01 pm”
Puhlease! Ignore anything NIWA says about climate. I have been there, seen how they work…it’s not pretty!

Mervyn
February 8, 2014 5:31 am

Following climategate, I have absolutely no confidence in the CRU and its temperature data. I’m with the Russian scientists on this… the instrumental surface temperature data has been subjected to so much fudging, it’s now totally unreliable.

Kev-in-Uk
February 8, 2014 5:32 am

richardcfromnz says:
February 7, 2014 at 5:39 pm
Hi Richard – the links are straightforward…
http://berkeleyearth.org/data
then click on source files
http://berkeleyearth.org/source-files
CRU data is listed halfway down that page. As I say, it’s not clear what the provenance of the data actually is, but on the presumption that CRU allegedly doesn’t have raw data, I assume this isn’t either?

The Pompous Git
February 8, 2014 11:08 am

Nick Stokes said @ February 6, 2014 at 3:46 pm

ThinkingScientist says: February 6, 2014 at 2:52 pm
“No its not a lot of data, its a trivial amount of data.”
It’s a lot of numbers to type. And they had to do some of that. It’s a lot of numbers to track down, gather and check.
“And what’s the excuse for CRU “losing” some? Even a tiny memory stick could hold all the worlds temperature data recorded on a monthly basis.”
They did not have memory sticks in 1984 when Phil Jones was putting his dataset together. They had 360 Kb floppies and 10 Mb hard drives.

We also had 1.2 MB 8″ floppies. However, then as now, bulk archival storage was on tape. In 1984 IBM released the 3480 cartridge tape system as a replacement for the traditional magnetic tape reels. It was a 4″ × 5″ cartridge that held more information than reels, the capacity being 200 MB. They were slow to catch on, though, so I suspect that the tapes Jones’s predecessor recycled would have been reels.
I remember doing a backup using DOS 3.x back then to ~50 3.5″ floppies. It was a large dataset for the day. When it came time to restore, I discovered that the maximum number of floppies in a DOS 3.x backup was 9 (IIRC) due to a bug. Not that CRU would have been using PCs, or DOS.

Kev-in-Uk
February 8, 2014 12:36 pm

richardcfromnz says:
February 7, 2014 at 5:39 pm
and just to illustrate the point – this quote is from the http://berkeleyearth.org/about-data-set page: “The Berkeley Earth Surface Temperature Study has created a preliminary merged data set by combining 1.6 billion temperature reports from 16 preexisting data archives. Whenever possible, we have used raw data rather than previously homogenized or edited data.”
Obviously, the term ‘whenever possible’ is probably there to cover the known use of non-raw data?

richardcfromnz
February 8, 2014 12:40 pm

Kev-in-Uk says:
February 8, 2014 at 5:32 am
>”CRU data is listed half way down that page.”
Thanks for this. I went to CRU website “Station data used for generating CRUTEM4”, found:
‘CRUTEM4 Temperature station data’
http://www.cru.uea.ac.uk/cru/data/temperature/crutem4/station-data.htm
>”As I say, it’s not clear what the provenance of the data actually is but on the presumption CRU allegedly doesn’t have raw data, I assume this ‘isn’t’ either?”
I think you’re right. Rather than bother finding the provenance of CRU TAVG, at the CRU link above I see “GHCNv2 (adjusted series)” i.e. not raw. BEST, from your link, uses “GHCN Monthly version 3”. NCDC states re GHCN-Mv3:
“Methods for removing inhomogeneities from the data record associated with non-climatic influences such as changes in instrumentation, station environment, and observing practices that occur over time were also included in the version 2 release (Peterson and Easterling, 1994; Easterling and Peterson 1995). Since that time efforts have focused on continued improvements in dataset development methods including new quality control processes and advanced techniques for removing data inhomogeneities (Menne and Williams, 2009). Effective May 2, 2011, the Global Historical Climatology Network-Monthly (GHCN-M) version 3 dataset of monthly mean temperature has replaced GHCN-M version 2 as the dataset for operational climate monitoring activities.”
So yes, if BEST uses adjusted GHCN-Mv3 (and then adjusts it again by their own method?), it’s probably safe to assume the situation is the same for BEST and CRU TAVG.
This is an eye opener for me Kev. Not what I thought was going on at all at CRU and BEST.

richardcfromnz
February 8, 2014 12:48 pm

Kev-in-Uk says:
February 8, 2014 at 12:36 pm
>”and just to illustrate the point”
>”Whenever possible, we [BEST] have used raw data rather than previously homogenized or edited data.”
Yep, I certainly get it now. Thanks.

Kev-in-Uk
February 8, 2014 12:48 pm

richardcfromnz says:
February 8, 2014 at 12:40 pm
Precisely! Kind of makes a mockery of Mosh’s claim that BEST doesn’t use adjusted data?
regards
Kev

richardcfromnz
February 8, 2014 2:56 pm

Patrick says:
February 8, 2014 at 1:31 am
>”Ignore anything NIWA says about climate.”
Know what you mean, but it’s not ALL bad; you just have to be careful with what you do access from NIWA. For example, the NZCSET audit of their 7SS agrees with NIWA since about 1970. It is only really the pre-1970 adjustments that are contentious. BEST doesn’t provide a trend from 1970 but does for 1990, when NIWA’s series is fully acceptable. So the comparison for New Zealand is:
0.24 °C / Decade BEST NZ 1990 – Nov 2013
0.265 °C / Decade NIWA NZ 1990 – end 2013 (2013 was an exceptionally warm year)
0.21 °C / Decade NIWA NZ 1990 – end 2012
Reasonable, except it’s not apples-to-apples:
11 °C BEST’s latest NZ temps on average
13 °C NIWA’s latest NZ temps on average
It’s the same all over NZ; BEST’s absolute temps are wildly at odds with the observations at each location, whether in-situ or adjusted, and some of the other trends are way out, e.g. Hamilton:
0.01 °C/decade NIWA Ruakura (Hamilton) 1970 – 2009 (post 1970 – this isn’t dodgy)
0.107 °C/decade BEST Hamilton 1970 – 2009 using 1960 – pres trend
10.7 times more slope in BEST there (maybe the 1960 – 1970 decade skews the slope in BEST).
Don’t be too hasty throwing ALL of NIWA’s work away, Patrick; their post-1970 7SS is a very useful check of BEST, for example. The problem with CRUTEM4, as we’ve discovered up-thread, is that CRU uses the entire adjusted 7SS, and the 11SS:
‘Station data used for generating CRUTEM4′ – ‘CRUTEM4 Temperature station data’
http://www.cru.uea.ac.uk/cru/data/temperature/crutem4/station-data.htm
Code, Station Count, Regions, Sources (paper, project acronym or website)
41, 13, New Zealand, Homogenized series , NIWA, New Zealand
http://www.niwa.co.nz/our-science/climate/news/all/nz-temp-record
‘Seven-station’ series – 7SS, ‘Eleven-station’ series – 11SS
CRU don’t start with raw data for NZ in CRUTEM4 now; they use the NIWA-adjusted 7SS and 11SS directly.
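A small Python illustration of why the absolute-level disagreement and the trend disagreement above are separate problems: a constant offset between two series (say, one running 2 °C cooler than the other) has no effect at all on the fitted trend. The numbers are invented.

# Illustration: a constant offset between two series changes the mean
# but not the fitted trend. All numbers are invented.
import numpy as np

years = np.arange(1990, 2014)
series_a = 13.0 + 0.021 * (years - years[0])   # ~0.21 C/decade, 13 C level
series_b = series_a - 2.0                      # same shape, 2 C cooler

for name, s in (("A", series_a), ("B", series_b)):
    slope = np.polyfit(years, s, 1)[0] * 10.0  # C per decade
    print(name, f"mean={s.mean():.1f} C, trend={slope:.2f} C/decade")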

Patrick
February 8, 2014 5:24 pm

“The Pompous Git says:
February 8, 2014 at 11:08 am
In 1984 IBM’s released the 3480 cartridge tape system as a replacement for the traditional magnetic tape reels. It was a 4” x 5” cartridge that held more information than reels the capacity being 200MB. They were slow to catch on though, so I suspect that the tapes Jones’s predecessor recycled would have been reels.”
Exactly right! Jones and the CRU had the technology available then to conduct proper data archiving, even if they only had access to 3470 tape reels. Heck, I worked for IBM in the late 80’s and we still had 3470 reels. I am pretty sure 360 KB 3.5″ floppies were not available in 1984; 5.25″ most likely. I reckon one 12″ reel would have been sufficient to store a bunch of temperature numbers, given that back then we were able to back up entire 16 MB address spaces (24-bit addressing MVS) on tape.
Simply no excuse to LOSE the data your product and “science” are based on, ESPECIALLY on such a sensitive subject, i.e. global warming.

The Pompous Git
February 8, 2014 6:50 pm

@ Patrick
I was using 3.5″ floppies ca. 1984. They were single-sided and cost about $AU10 each. However, IIRC it was on Tom Wigley’s watch that the data got “lost”, not Phil Jones’. Did Wigley make the decision? Did the IT department give Wigley a choice of saving X or Y? Will we ever know what really happened?
I don’t recall global warming being a hot topic (so to speak) back in those days. Climatology only really started becoming newsworthy late in the decade/early 90s which was when I was first interviewed about global warming on Radio National.

Patrick
February 8, 2014 7:29 pm

“The Pompous Git says:
February 8, 2014 at 6:50 pm”
It certainly was a hot topic in the UK, along with acid rain and the UK being labelled the “dirty man of Europe” because of sulfur emissions (now CO2 is the bogeyman) from, primarily, coal-fired industrial power plants and their impact on the environment and global warming. The term is still used today. Global warming became politicised in the 90’s, and thus more widely discussed, mostly due to the fact that politicians and environmental groups like Greenpeace/Friends of the Earth etc. were constantly bleating on and on about it.
Regardless, whoever was in the driving seat at the CRU didn’t protect that data. That’s BS (Bad Science) and should be treated as such.

February 9, 2014 3:28 am

Nick Stokes: I was using 9-track magnetic tape in 1984. IBM introduced 9-track tape in 1964. Magnetic tape is a surprisingly tolerant and reliable medium for digital data storage – we still archive our datasets on the modern equivalent of DAT tape even today, as it is more reliable than CD and DVD, especially for large file sizes.
A standard 2400 ft reel of 1/2 inch, 9-track tape will store up to a maximum of 170 MB of data, but more typically stores about 100+ MB at typical block sizes.
So, no excuses for CRU then. They spent all that time and effort typing in records but couldn’t keep the equivalent of one 2400 ft tape reel of data, using technology that was at least 15 years old at the start of the 1980’s and still in regular use until the end of the 1990’s?
Give me a break.
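The back-of-envelope arithmetic behind that claim, in Python, under rough assumptions about record counts and encoding:

# Rough check of the storage claim: ~6,000 stations of monthly values
# back to 1850, at a generous byte count per value in plain text.
stations = 6000
months = 164 * 12        # 1850-2013, monthly
bytes_per_value = 8      # e.g. "-12.3," plus slack

total_bytes = stations * months * bytes_per_value
print(f"{total_bytes / 1e6:.0f} MB")                       # ~94 MB
print(f"2400 ft reels needed: {total_bytes / 170e6:.2f}")  # about half a reel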