How good is the NASA GISS global temperature dataset?

Guest essay by Rud Istvan

It is generally accepted that there are two major land temperature record issues: microsite problems, and urban heat island (UHI) effects. Both introduce warming biases.

The SurfaceStations.org project manually inspected and rated 1007 of 1221 USHCN stations (82.5%) using the 2002 Climate Reference Network (CRN) classification scheme (handbook section 2.2.1). The resulting preliminary paper shows a large temperature trend difference (about 0.1C/decade) between acceptably sited stations (CRN 1 or 2) and those with material microsite problems (CRN 3, 4, 5).

[Figure: temperature trend comparison, well-sited (CRN 1-2) vs poorly sited (CRN 3-5) USHCN stations]

That is a real problem, since only 7.9% of USHCN is CRN 1 or 2. The NOAA solution has been to set up USCRN. This is not yet (AFAIK) being used to detect/correct USHCN station microsite issues in either the NCDC or GISS homogenization algorithms.

[Figure: USCRN network]

What about UHI? The NASA GISS website uses Tokyo to explain the issue and its homogenization solution. One could either cool the present to remove UHI, or warm the past (inserting artificial UHI for trend comparison purposes). Warming the past is less discordant with the reported present (the UHI correction is less noticeable), so GISS prefers it.
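For trend purposes the two choices are arithmetically equivalent; a toy sketch (invented numbers, not actual GISS data or code) makes that concrete:

```python
# Hypothetical station: a slow background trend plus a 0.5 C urban "step"
# appearing in 1970. Homogenize either by cooling the present or warming
# the past; the corrected TREND is identical, only the level differs.
years = list(range(1950, 2001))
raw = [10.0 + 0.002 * (y - 1950) + (0.5 if y >= 1970 else 0.0) for y in years]

uhi = 0.5  # assumed size of the urban step
cool_present = [t - uhi if y >= 1970 else t for y, t in zip(years, raw)]
warm_past    = [t + uhi if y < 1970 else t for y, t in zip(years, raw)]

def trend(xs, ys):
    # ordinary least-squares slope
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)

# both corrections recover the 0.002 C/yr background trend exactly
```

Warming the past leaves the reported present-day values untouched, which is why it is the less conspicuous of the two equivalent corrections.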

[Figure: GISS Tokyo UHI homogenization example]

In the Surface Stations supplemental materials (available at www.surfacestations.org) only 14 CONUS stations have pristine CRN 1 siting (1.2%). 4 are labeled urban, 3 are suburban, and 7 are rural. Since these 14 have zero microsite issues, they can be used to examine the GISS UHI homogenization. Both the ‘raw’ and the ‘adjusted’ data can be accessed at www.data.giss.nasa.gov/gistemp. Just click on the monthly chart to go to the station selector page, and enter a station name. The following uses [combined location sources] raw v2, and homogenized v3 (since that is all that is now publicly available). Only 13 stations proved usable; Corpus Christi v2 raw (urban) has different lat/lon coordinates than v3 homogenized. That could be a mistake, or it might introduce an unfair comparison. Corpus Christi was therefore excluded; the final GISS CRN 1 sample size is N=13.

Is UHI evident in the raw urban stations compared to rural stations (like the GISS Tokyo/Hachijyo example)? Yes. All three urban stations evidence UHI, for example San Antonio TX and Syracuse NY.

[Figure: raw data for urban San Antonio TX and Syracuse NY, showing UHI]

But in suburban Laramie WY or Baker OR, UHI is not evident in the raw data, just as, for example, there is no UHI in rural Hobart OK or Fairmont CA.

[Figure: raw data for suburban and rural CRN 1 examples, no UHI evident]

How good was GISS at removing the apparent UHI bias from raw San Antonio and Syracuse? Hard to tell for sure, but it is evident that the past was warmed some to compensate, just as GISS says its homogenization works.

[Figure: homogenized San Antonio and Syracuse, past warmed]

The third pristine urban station, Savannah GA, was homogenized so much its raw UHI warming trend was fully removed. That might make sense given Savannah’s coastal location, moderated by ocean proximity.

[Figure: Savannah GA, raw vs homogenized]

GISS should logically leave non-UHI suburban and rural stations relatively untouched. Oops. GISS homogenization cooled the past to add a spurious warming trend to all but one pristine station. For example these two:

[Figure: two stations whose past was cooled by homogenization]

In some cases the past was cooled AND the present warmed, as in Laramie WY.

[Figure: Laramie WY, past cooled and present warmed]

A spurious warming trend was introduced into all three suburban and 6 of 7 rural CRN 1 stations. Only Apalachicola FL emerged from GISS unscathed.

Automated homogenization algorithms like GISS’s use some form of regional expectation, comparing a station to its ‘neighbors’ to detect and correct ‘outliers’. BUT 92% of US stations have microsite issues, so most neighbors are artificially warm. So the GISS algorithm makes the hash illustrated above. How could it not? And by extension NCDC, BEST, the Australian BOM, …
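The failure mode is easy to caricature. The sketch below is not the actual GISS or NCDC pairwise algorithm; the trend values, the 0.8 weight, and the median rule are all invented for illustration:

```python
# A well-sited station surrounded by neighbors whose trends carry a
# spurious warm bias. A naive "regional expectation" pass nudges each
# station toward the neighborhood median, so the good station inherits
# warming it never measured.
neighbor_trends = [0.25, 0.28, 0.22, 0.30, 0.27]  # C/decade, warm-biased sites
good_station = 0.12                               # C/decade, well-sited

def homogenize(station, neighbors, weight=0.8):
    expectation = sorted(neighbors)[len(neighbors) // 2]  # median neighbor
    return station + weight * (expectation - station)

adjusted = homogenize(good_station, neighbor_trends)
# adjusted = 0.24 C/decade: doubled, dragged toward the biased majority
```

With the bias in the majority, the "outlier" the algorithm corrects is the one station that was right.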


157 Comments
John
August 3, 2015 9:22 am

SeaTac has a sighting issue since the third runway went in, and they’re going to claim it’s the hottest summer in Seattle this year.
http://www.seattletimes.com/seattle-news/seattles-scorching-summer-sizzles-on/

John
Reply to  John
August 3, 2015 9:22 am

“… siting…”

Reply to  John
August 3, 2015 9:29 am

I just talked to my sister in Seattle, and she says it IS the hottest summer, since she arrived in the 90’s…

Science or Fiction
Reply to  Michael Moon
August 3, 2015 10:38 am

Are you serious?

Bernie
Reply to  Michael Moon
August 3, 2015 11:22 am

…and all summers are hotter than I remember when I was a kid.

James at 48
Reply to  Michael Moon
August 3, 2015 1:39 pm

Seattle’s finally having a warm summer after a series of crummy ones. In spite of stereotypes, being on an inland branch of the ocean, Seattle gets considerably less marine cooling than the CA coast (plus the SSTs are actually warmer, due to less upwelling). When there are not fronts coming through, it can get pretty toasty in Seattle (and even more toasty down the road in Portland).

Joseph Murphy
Reply to  Michael Moon
August 4, 2015 2:08 pm

Bernie, that reminds me of a conversation I had with my parents. They would tell me winters were so much worse when they were kids. ‘We would have snow for Thanksgiving!’, my dad would tell me. Being a big fan of snow this got me excited. ‘And what about your parents?’, I asked. ‘Did they have even more snow when they were kids?’ This caused my dad to pause. ‘No’, he replied thinking back, ‘They actually complained about how much worse the winters had become.’ What a disappointment, I thought.

Reply to  Michael Moon
August 5, 2015 5:00 pm

since she arrived in the 90’s

Alarmists would ignore that conditional – smart people notice – and they would want to know if this is anecdotal evidence or based on some sort of proof.
It is always a risk to give one person’s assessment, based on their limited experience, more than a passing glance.
And I don’t have to – I’ve been living in Seattle since the late 70’s.

Mark Albright
Reply to  John
August 3, 2015 10:08 am

I have been one of those who watches the SeaTac temperature record closely. SeaTac was considerably warmer (+2.2 to +2.5 F) than nearby sites in the greener and cooler neighborhoods surrounding the airport last month (July 2015):
July 2015
Site Mean Diff
——————
SEAT4 68.7 -2.5
NORM3 69.0 -2.2
DESM8 68.9 -2.3
——————
KSEA 71.2

Rhoowl
Reply to  John
August 3, 2015 11:39 am

I’ve been in tx for about 13 years now….and this is the coolest summer I remember…..we haven’t hit 100 yet….like last year we were in the 100’s almost everyday about now….

Reply to  John
August 3, 2015 7:24 pm

SeaTac’s temperature records date from 1945, Landsburg’s from 1915.
SeaTac’s 2015 Jun-Jul mean max temperature was 80.8°F, a whopping 3.5°F warmer than the 1958 Jun-Jul mean max of 77.3°F—previously the warmest on record.
Landsburg is 17 miles east of SeaTac. Landsburg’s 2015 Jun-Jul mean max was 80.4°F, warmer than the 1958 mean max of 78.6°F but cooler than the 1926 Jun-Jul mean max of 81.4°F.
If SeaTac existed in its current form in 1926, its 2015 Jun-Jul warmth might not be unprecedented.
Highest temperature at SeaTac in Jun-Jul 2015 was 95°F, 8° below the record.

knr
August 3, 2015 9:23 am

The question I would raise is: if you were to sit down and think about what it would take to scientifically come up with a meaningful value for the average temperature of the planet, how well do we currently match that?
I suspect we would find that the conditions required to produce this value in a manner that actually has scientific value are not met, and that we are using a value which in reality is ‘better than nothing’.
Experimental design 101: if you cannot take the measurements in the manner required, then any value you produce is suspect and subject to errors; and if you do not know the errors it is subject to, then you are ‘guessing with numbers’.
Now, what is the actual state of our ability to produce this value in a meaningful way? Anyone know?

BFL
Reply to  knr
August 3, 2015 9:33 am

“actual state of our ability”
None

Auto
Reply to  BFL
August 3, 2015 3:13 pm

BFL
Given the lack of careful observation/precision for the oceans [note – most merchant ships – VOSs – are still using buckets; or Engine Room Intakes, which may be as much as twenty metres [>60 feet] below the surface they are ‘supposed’ to be measuring] I suggest that despite ARGO buoys doing a measurement every quarter million square miles – we are still far – I say again : f a r – from being able to quote an average temperature for the globe to even one dp C.
So – 14 – or 15 – or 13.
As best as we can accept. I suggest.
And in 257915 BC – or BP -or any other six digit number produced by letting my fingers wave at the top line on my keyboard – let’s all wave our arms!
We’re within an order or so – probably.
14.04 or 15.92 or 13.83 – or anything to 2 dp – is an expletive carping deletive.
Even 14.9 or 16.2 or – ahhhh – anything better than a whole degree – is exploitative and likely wrong.
As noted – we can probably do the nearest degree . . . probably . . . .
At best for lots and lots of thousands of years ago – two C – and I think that’s optimistic.
[Hugely optimistic? I think so, but stand to be corrected, with evidence.]
NB – I stand to be corrected, with evidence . . . .
Prove me wrong – with evidence – and I will praise you.
With evidence.
Auto.
Note. Now, I’m borrowing from Willis – with huge acknowledgments – and much appreciation.
My BORROWED – From Willis (super star) – Plea is: If you disagree with someone, please quote the exact words that you object to. That way we can all understand exactly who and what you are objecting to.
I might be wrong – quote where I am in error, and how – and corrections.
Much appreciated. Auto

Auto
Reply to  BFL
August 3, 2015 3:15 pm

So – if I’m talking bollocks – TELL ME.
Thanks.
Auto

BFL
Reply to  BFL
August 3, 2015 8:18 pm

Auto:
“the actual state of our ability to produce this value [avg. earth temp.] in meaningful way”
Okay then I stand corrected, and change “none” to “effectively none” as in possible, but with large error bars.

Reply to  knr
August 3, 2015 10:37 am

@Knr: No averaged temperature of the whole planet could ever be meaningful. Even if you had temperature measurements from every square foot of the planet, it wouldn’t be meaningful once it was all averaged. Think about it this way, if the whole planet became 57 degrees F tomorrow and stayed that way every day, everything frozen would be melting, and life in the tropics would be dying, but your precious average would remain unchanged. It is no more meaningful than sampling the wavelengths of light in the rainbow and averaging them into a single wavelength measurement corresponding to an “average color” for the rainbow. It wouldn’t and couldn’t mean anything. It is just a mathematical exercise. The world is a rainbow of temperatures that fluctuate wildly every day, and cannot be represented by a single temperature number. Tracking the average temperature year after year will never tell you what is causing the temperature to fluctuate because you’ve averaged out all of the details.

knr
Reply to  Hoyt Clagwell
August 3, 2015 1:43 pm

You’re right. On this meaningless value, which we are not even in a position to measure in a manner that has scientific meaning, a great deal has been built.
Sceptics are called ‘science-deniers’ by the faithful of CAGW, and yet the real mass denial is in failing to admit that we are still failing science 101, failing the basic tenets of good scientific practice in this area.
For if we cannot measure, we cannot ‘know’ but only ‘guess’.

Glenn999
Reply to  Hoyt Clagwell
August 4, 2015 9:11 am

I agree completely on the global average thingy. Absolutely useless. Why would we want to average the poles with the tropics, and what would that number mean anyway?
A proper use of averages would be to average your local temperature and weather to look for trends in your neighborhood. This could possibly be extended to other local areas nearby, but the physical characteristics would need to be similar also.

Karl Compton
Reply to  Hoyt Clagwell
August 5, 2015 8:16 am

Hoyt, we are being told ad nauseam that “The earth is warming.” Indeed, if CO2 (or something else) is causing global warming the average would by definition go up, so it is indeed of interest as a proof/disproof of the thesis. Though local variations are of more immediate interest, being able to know whether the earth is actually warming, and at what rate, is certainly of considerable interest, and might actually save us a few trillion wasted dollars.

Bill Treuren
Reply to  knr
August 3, 2015 1:57 pm

The question is very valid and the temperature presented will always be disputable by someone.
However, if you are consistent and you include your errors, even a moderately faulty process could be a valid measure for trends and change.
My issue is that the errors are much ignored.

Evan Jones
Editor
Reply to  Bill Treuren
August 6, 2015 3:20 am

Well, we do have RSS and UAH — fortunately . . .

Eugene WR Gallun
Reply to  knr
August 3, 2015 5:54 pm

knr
“Better than nothing” or worse than nothing?
Eugene WR Gallun

Evan Jones
Editor
Reply to  Eugene WR Gallun
August 6, 2015 3:21 am

Yes.

Reply to  knr
August 3, 2015 7:23 pm

One problem is that radiation is proportional to the fourth power of temperature. So the average radiative ability of anything is not the same as the averaged value of T over the whole object. (Something could become on average cooler and yet radiate more if the hot spots got hotter and most of the rest got cooler.) Since dissipation of heat is a prime function of weather, average temperature is a poor metric. For what is it a good metric?
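A minimal numeric check of this point, using the Stefan-Boltzmann T^4 law on two hypothetical equal-area surface patches (the temperatures are invented for illustration):

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def mean_temp(patches):
    # simple average temperature of equal-area patches, K
    return sum(patches) / len(patches)

def mean_flux(patches):
    # area-averaged radiated power of equal-area patches, W m^-2
    return sum(SIGMA * t ** 4 for t in patches) / len(patches)

before = [300.0, 300.0]  # K: uniform surface
after = [330.0, 268.0]   # K: hot spot hotter, the rest cooler

# average temperature DROPS (300 -> 299 K), yet radiated flux RISES,
# because T^4 weights the hot spot disproportionately
```

This is Jensen's inequality for the convex function T^4: the average of T^4 always exceeds the fourth power of the average T unless the field is uniform.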

August 3, 2015 9:28 am

Your charts should have a third panel: GISS Homogen – Raw.
I suspect the trend, the intermediate trends, and the noise will all be impossible to justify.
That difference will also be a source of uncertainty.
Whatever the uncertainty in the Raw, the variance from the Difference must be ADDED to the raw to get uncertainty in the final GISS Homogenized result.
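Concretely, for independent error sources the variances add, i.e. the standard uncertainties combine in quadrature; a one-line sketch with assumed (hypothetical) numbers:

```python
import math

u_raw = 0.10  # assumed 1-sigma uncertainty of the raw trend, C/decade
u_adj = 0.08  # assumed 1-sigma uncertainty of the (Homogenized - Raw) difference

# uncertainty of the homogenized result: variances add for independent errors
u_total = math.sqrt(u_raw ** 2 + u_adj ** 2)  # ~0.128, larger than either input
```

However the adjustment uncertainty is estimated, the homogenized series cannot end up with smaller error bars than the raw series it was built from.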

August 3, 2015 9:28 am

Our tax dollars at work…

Schrodinger's Cat
August 3, 2015 9:29 am

It seems to me that by the time GISS has finished adjusting the data it is unsuitable for any scientific purpose. It is more instructive to look at the raw data with some knowledge of a problem it may have (or had).

Louis Hunt
Reply to  Schrodinger's Cat
August 3, 2015 9:54 am

Will they ever be finished adjusting the data? With their track record, it seems doubtful. But we do know one thing. The current GISS data set is unsuitable for any scientific purpose because we know it will be “corrected” again in the future. In fact, they will very likely make many future corrections to it. That means the current data are “wrong” and therefore worthless for any purpose other than propaganda. But that seems to suit their purpose anyway.

Reply to  Schrodinger's Cat
August 3, 2015 10:28 am

I totally agree with you; how can the data be adjusted accurately? For example, a weather station sits next to a runway at a provincial airport; the airport expands and gets another runway; an accurate adjustment is impossible. The only way of getting accurate figures is to totally disregard all data from sites with heat island or microsite problems.
A few years ago in Newcastle upon Tyne, UK where I live, we had the reputation as the most air polluted city in the country. This changed after a few months, to one of the least air polluted cities in the country, because they moved the pollution sensor from the walled and ceilinged bus concourse where dozens of diesel powered coaches and buses were running their engines from 5:00am to midnight to a more sensible location.

george e. smith
Reply to  Schrodinger's Cat
August 3, 2015 11:28 am

Well I don’t think that “heat islands” are any kind of problem. They are in fact real places on earth’s surface, and they have a temperature that may be different from that at surrounding areas.
The problem is that some people like NASA’s Dr. Hansen, think that it is ok to use that same temperature for places 1200 km away from the thermometer.
Siting is an issue in that many thermometers are situated on or near airfield runways, and are intended for flying safety data gathering (take offs and landings, and aircraft loading).
But that’s the same issue as the UHIs. Don’t use airfield thermometers for some place 1200 km away; or even 12 km away.
The big issue with the surface “data” gathering is that it doesn’t comply with the fundamental laws governing sampled data systems; so it is just gathering noise, NOT “data”.
And there is that other issue that the historic oceanic near surface Temperature data, prior to about 1980, is just useless rubbish, since water and air temperatures aren’t the same, and aren’t correlated.
Other than that; the Temperature range on earth is about 100-150 deg. C so trying to keep track of hundredths of a deg. C is plain silly. It certainly isn’t science.
Just my opinion of course; not good for any class credits.
g

Walt D.
Reply to  george e. smith
August 3, 2015 11:43 am

So you don’t believe that London and Barcelona temperatures are the same?
Everyone knows that Seattle and San Francisco temperatures are the same.
Venice and Munich?
What’s the problem.

ripshin
Editor
Reply to  george e. smith
August 4, 2015 8:11 am

George, the issue with UHI is the distortion of the trend due to factors other than those being sought by climate scientists. An upward trend from a thermometer sited in an urban area is more likely due to localized phenomena than regional climatic changes.
The distortion is further magnified if the UHI-affected site is transposed onto a non-UHI region. The resulting temperature trend will be artificially shifted up, concealing what is truly happening with the temperatures in the area.
rip

george e. smith
Reply to  george e. smith
August 5, 2015 7:40 pm

The idea of samples in a sampled data system is very simple. There is a tacit assumption that the sample value is a credible value for nearby points that could have been sampled.
That’s the 4-H club description of the Nyquist sampling theorem. Theoretically the sample is the value at some instant of time (or other sampled variable), so ideal samples are zero width. If the signal is band limited (which it must be), then the point value can only change by small amounts in between samples.
So the Nyquist criterion requires that the function not change radically in between samples, and that is why the maximum sample spacing may not exceed the half period at the band limit frequency. So it is ludicrous to space position samples 1200 km apart, as Hansen claims you can do, when significant temperature cycles can take place in just a few km.
In the greater SF Bay area, temperatures can go through a five deg. C cycle in perhaps five km distance.
It doesn’t matter a whit what causes a UHI or how small or large it is. Its temperature is a valid data point, but it may not be valid to use it even a few km away. So this 1200 km sampling bs is just that. The resultant measurements are just noise, and contain no reliable information.
A properly sampled continuous function can (in principle) be exactly reconstructed from the point samples.
It doesn’t matter whether you want to reconstruct the entire continuous function or not. Any statistical computations made on the data, such as an average value, are also invalid if the function is not sampled properly.
In the case of the average value of the continuous function, that corresponds to the zero frequency component of the frequency spectrum of the function. If you undersample by just a factor of two, that means there are frequency components at twice the maximum useful bandwidth that can be sampled at that rate.
So if B is the bandwidth limit for a set of samples taken at a rate 2B, then an out-of-band frequency component at frequency B+B will be reconstructed at a frequency of B-B, which is zero, so it will result in aliasing noise that changes the value of the average of the function.
So whether you reconstruct the continuous function or not, the average of the samples will not be correct if you undersample by a factor of two.
I don’t know why it is that statisticians simply refuse to accept this limitation on the numerical origami mastications they perform.
It epitomizes the GIGO syndrome.
Improperly sampled sets of numbers are NOT data; they are noise.
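The zero-frequency (average) corruption described above is easy to demonstrate. The toy below (invented signal, not real station data) samples a series containing a diurnal cycle once per day at a fixed hour, so the period-one cycle aliases straight into the computed mean:

```python
import math

def temp(t_days):
    # "true" temperature: 15 C mean plus a 5 C diurnal cycle (period = 1 day)
    return 15.0 + 5.0 * math.sin(2 * math.pi * t_days)

# hourly sampling (far above Nyquist for the diurnal cycle): cycle averages out
dense = [temp(i / 24) for i in range(24 * 365)]

# one reading per day, always at the same phase of the cycle: the cycle
# aliases to zero frequency and shifts the computed mean
sparse = [temp(i + 0.25) for i in range(365)]

mean_dense = sum(dense) / len(dense)    # ~15.0, the true mean
mean_sparse = sum(sparse) / len(sparse) # ~20.0, biased by the full cycle amplitude
```

This is the same mechanism behind time-of-observation bias: a fixed once-a-day reading folds the diurnal cycle into the long-term average.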

Evan Jones
Editor
Reply to  george e. smith
August 6, 2015 3:35 am

That ‘s the 4-H club description of the Nyquist sampling theorem.
Unfortunately all we got going is Heat Sink, Homogenization, Hype, and Hansen.

Reply to  Schrodinger's Cat
August 4, 2015 8:12 am

+1

Evan Jones
Editor
Reply to  Schrodinger's Cat
August 6, 2015 3:30 am

Better yet, do what Anthony and our team does: Drop the perturbed (moved, TOBS-biased) stations, applying only MMTS adjustment (unfortunately necessary — and probably flawed) to the remainder.
That works for the station-dense, metadata-rich CONUS. (For Outer Mongolia, not so much, though.)
Note that CRS in and of itself, is biased. It carries its own heat sink around on its back. There is a severe Tmax bias in CRS stations.
We’ll be coming through with our own set of “microsite adjustments” in due course.

August 3, 2015 9:32 am

The GISS data is worthless and should be thrown out.

Tim
August 3, 2015 9:34 am

I’m afraid that the scientific value is much less important to the politicians than a politically correct value.

Harry Twinotter
August 3, 2015 9:40 am

“It is generally accepted that there are two major land temperature record issues: microsite problems, and urban heat island (UHI) effects”.
Not really. When you average out the stations across a country the UHI effects are not large. When you average out the stations across the globe, even less.

Editor
Reply to  Harry Twinotter
August 3, 2015 3:07 pm

Not so. When other stations’ data is adjusted to match UHI-influenced stations, then the UHI starts to play a large role in the overall average.

Louis Hunt
August 3, 2015 9:43 am

Berkeley Earth (BEST) has the following comment in their FAQ:
“Our UHI paper analyzing this indicates that the urban heat island effect on our global estimate of land temperatures is indistinguishable from zero.”
How can they honestly make such a statement?

Reply to  Louis Hunt
August 3, 2015 1:43 pm

Well it’s true of the globe, which is why satellites can’t find warming. It would be true if thermometers were randomly situated (including 70% in the ocean, which experiences little UHI).

Reply to  Louis Hunt
August 4, 2015 3:28 pm

Hey, this is climate “science”!
What the heck has honesty got to do with it?
I am reminded of my favorite Mae West line ever:
https://youtu.be/u7ekAQ_Plxk?t=36s

Evan Jones
Editor
Reply to  Menicholas
August 6, 2015 3:39 am

Climatology — ask me no questions and I’ll tell you no lies.

Jimmy
August 3, 2015 9:46 am

“It is generally accepted that there are two major land temperature record issues: microsite problems, and urban heat island (UHI) effects. Both introduce warming biases.”
While these are a couple of the most serious issues, there’s more than just two generally accepted major land temperature record issues. For example, time of observation is known to introduce some pretty serious artifacts. There’s also the issue of discontinuous records.

Science or Fiction
Reply to  Jimmy
August 3, 2015 10:16 am

Regarding Time of OBServation adjustment I find the following little test revealing:
What Is The Real Value Of TOBS?
http://realclimatescience.com/2015/07/what-is-the-real-value-of-tobs/

Louis Hunt
Reply to  Science or Fiction
August 3, 2015 11:00 am

Tony Heller’s article reveals something quite interesting. By removing the stations that took afternoon readings he determined that, “The total bias caused by afternoon TOBS is a little more than 0.1C”. But then he goes on to point out, “The total NOAA adjustment is nearly two degrees F. It is unsupportable nonsense, and fraud.”

Reply to  Science or Fiction
August 3, 2015 11:33 am

Or maybe from a more basic level about TOBS:
http://climate.n0gw.net/TOBS.pdf

Science or Fiction
Reply to  Science or Fiction
August 3, 2015 2:06 pm

Louis Hunt August 3, 2015 at 11:00 am
He did not provide any support in the linked article for the claim that the NOAA adjustments are unsupportable nonsense and fraud; however, he has done many previous tests which are quite convincing:
1. The best correlation I have ever seen within climate science:
https://stevengoddard.wordpress.com/2014/10/02/co2-drives-ncdc-data-tampering/
2. Very good visualisation of the adjustments:
http://realclimatescience.com/alterations-to-climate-data/

Science or Fiction
Reply to  Science or Fiction
August 3, 2015 3:08 pm

Great work on explaining reasons for Time of OBServation bias adjustments, Gary.
However, when things get just a little bit more complicated than very, very trivial – when there are several influencing parameters, variables and uncertainties – I tend to believe nothing and require appropriate testing.

August 3, 2015 9:57 am

July anomaly: +0.18 C. Satellite data: the correct data, the only data.

August 3, 2015 10:02 am

GISS says their annual temperature anomaly is accurate to .1 degree with a 95% confidence factor. Then, five or ten years later, they adjust it outside of that .1 degree range. That doesn’t even make sense. What is the error bar after the adjustment?

phodges
Reply to  Cardin Drake
August 3, 2015 10:40 am

More like one year later….or even one month

Paul
Reply to  Cardin Drake
August 3, 2015 10:44 am

“What is the error bar after the adjustment?”
My best guess would be 0.01 degrees. If you can adjust data, you can adjust error bars too.

george e. smith
Reply to  Cardin Drake
August 3, 2015 11:37 am

Which is basically gobbledegook anyway (AKA Statistician shop talk).
There isn’t any statistical significance to anything that only happens once, and climate weather data gathering is a one time affair. The Temperature is here today and gone tomorrow, to be replaced by tomorrow’s Temperature. So you have a sample of one for each member of the data set. And nobody knows, who it was that actually measured even that one sample, or where and when they measured it. Well they don’t measure it anyway; they calculate it from some model, so it is not even real observations of anything physical.

peter
August 3, 2015 10:06 am

I think temperature is a bit of a red herring. There is no possible way to measure global change over decades with the spotty recording record from the past. Maybe a century down the road with a hundred years of modern measurements we might be able to make a judgement as to the rise and/or fall of global temperatures.
A more valuable comparison tool would be major weather events. Because they were major, they were recorded, and often in some detail. We can compare what happened in the past and see how it compares to the present.
Unprecedented is a term tossed around with great frequency, but from this site I’ve learned that pretty much every extreme weather event there is a counterpart if you check back fifty or a hundred years.
For instance, we know that serious hurricanes have hit the New York area in the period since European colonization, and before that from sedimentary deposits. So there was nothing unprecedented about Sandy, which was not even a hurricane.
Let them point out serious weather events that have no corresponding occurrences. After all, that is what they claim the whole crisis is about. If there are no such events, then there is no crisis, no matter what the temperature is doing.

Mike M. (period)
Reply to  peter
August 3, 2015 10:49 am

Peter,
I see two obvious problems with using major weather events. One is that they are rare, so the statistics suck. The other problem is that you will have to quantitatively define a threshold for what constitutes a major event. Events near the threshold will be much more common than events clearly above the threshold, that is the nature of extreme events. So a small change or error in the threshold produces a large change in the number of events. Now you are back to the problems of comparing old measurements to recent ones, but the errors are amplified.
Consider heat waves. A bias of one degree in temperature dramatically alters the odds of getting N consecutive days above a given T.
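That sensitivity is easy to quantify. Assuming daily maxima are normally distributed and independent (purely illustrative numbers, not real station statistics):

```python
import math

def p_above(threshold, mu, sigma):
    # P(daily max > threshold) for a Normal(mu, sigma) day
    return 0.5 * math.erfc((threshold - mu) / (sigma * math.sqrt(2.0)))

def p_streak(n, threshold, mu, sigma):
    # chance that n given consecutive (independent) days all exceed threshold
    return p_above(threshold, mu, sigma) ** n

base = p_streak(5, 35.0, mu=30.0, sigma=3.0)    # unbiased record
biased = p_streak(5, 35.0, mu=31.0, sigma=3.0)  # same record with a +1 degree bias

# the 1-degree bias multiplies the odds of a 5-day "heat wave" roughly 25-fold
```

Because the streak probability is the daily exceedance probability raised to the Nth power, even a modest per-day bias compounds dramatically in the tail statistics.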

peter
Reply to  Mike M. (period)
August 3, 2015 4:14 pm

All too true. But, your common person does not pay attention to single digit temperature changes. They do recall the roof blowing off the house. I’m firmly convinced that the only reason the whole GW thing got a toe hold was because we had the perfect storm of events. The internet, and a period of above average years in first world countries making it easy to sell the idea that Temperatures were rising.

Reply to  Mike M. (period)
August 3, 2015 6:05 pm

The ironic part about the overuse and misuse of the term “unprecedented” in climate and weather reporting is that it is almost invariably immediately followed by the word “since”.

Mike M. (period)
Reply to  Mike M. (period)
August 4, 2015 8:13 am

Peter,
“I’m firmly convinced that the only reason the whole GW thing got a toe hold was because we had the perfect storm of events.”
True. The crazy 2005 hurricane season set things up for Al Gore and the complete politicization of global warming. But it was likely just a statistical aberration, aided by the AMO. As I said before the statistics of extreme events suck.
Politically, extreme events are a ratchet. People remember the roof blowing off the house, but they do not remember the roof not blowing off the house.

Ronald
August 3, 2015 10:10 am

The only good data is the raw data. Every adjustment is plain wrong. But yes, I do understand that adjustments need to be made to keep up with the non-existent global warming. So both past and present temperatures must be adjusted to fit the models.
It’s not good, but OK, what to do about it? If you tell someone the temperature is adjusted, you’re a skeptic who doesn’t know about climate.
The only thing we can do is sit back, relax, and watch the world turn colder, colder and colder.

Reply to  Ronald
August 3, 2015 10:38 am

Their adjusted data is nonsense. It does not count.

Evan Jones
Editor
Reply to  Salvatore Del Prete
August 6, 2015 3:52 am

But raw data will lie to you. The current adjustment procedures are done in exactly the wrong way — and in the wrong direction — but some adjustment (including dropping the badly perturbed stations) is required.
For example, all of the CRS trend data is spuriously high because of equipment issues. So the entire surface record is inflated from the getgo. And rather than adjusting CRS data to conform with what we know about MMTS trends, NOAA and GISS do the opposite and adjust MMTS trends to conform with CRS trends. While homogenization adjusts the well sited station trends upward to match those of the poorly sited stations. All ass-backwards.
If you want to “disprove 10,000 scientists” all you have to do is kick the pins out from under their data. That’s where Anthony and our team comes in.

kentclizbe
Reply to  Evan Jones
August 6, 2015 5:02 am

“If you want to “disprove 10,000 scientists” all you have to do is kick the pins out from under their data. ”
Evan,
That’s a good point. You’ll find that Tony Heller has been kicking pins for several years. See this compilation for the best evisceration of the NOAA/GISS fraud:
Alterations to Climate Data
https://stevengoddard.wordpress.com/alterations-to-climate-data/

Mark Albright
August 3, 2015 10:20 am

I have begun monitoring the USA monthly mean temperature using the USCRN data:
http://www.atmos.washington.edu/marka/crn/
July 2015 finished 1.0 degrees F below normal (2005-14):
http://www.atmos.washington.edu/marka/crn/201507.69.txt

climanrecon
August 3, 2015 10:48 am

Land air temperatures are obviously of interest to Man, but for AGW it is the sea that matters, since there is a lot more sea than land, and the “scary” warming only comes about from water vapour, which depends on sea surface temperature.
GISS may well be making a pig’s ear of land air temperatures, but maybe sceptics are devoting an undue amount of energy to the issue.

Mike M. (period)
Reply to  climanrecon
August 3, 2015 10:51 am

“GISS may well be making a pig’s ear of land air temperatures, but maybe sceptics are devoting an undue amount of energy to the issue.”
Thumbs up.

Bill Treuren
Reply to  Mike M. (period)
August 3, 2015 2:03 pm

Yup, but the lag is greatest at sea. Logically, the land temps are the canary in the coal mine.
I would like to see the satellite data split between land and sea; it might give a better picture. The urban impact is trivial at a land-cover level.

george e. smith
Reply to  climanrecon
August 3, 2015 11:43 am

Well, sea Temperatures prior to about 1980 were just rubbish anyway, because sea water Temperatures and sea air Temperatures are not the same, and they are not correlated, so you can’t, after the fact, get one from the other.
Add to that, ocean waters circulate and the currents meander. So even if a research vessel returns to the same GPS co-ordinates, a month or a year later, there is no assurance that it is in the same water it was previously in.
g

Science or Fiction
Reply to  climanrecon
August 3, 2015 3:40 pm

Karl Popper warned about the problem with such adjustments in his book The Logic of Scientific Discovery. (As you have a degree in physics and hold a PhD you will know the following; anyway: Karl Popper was the mastermind behind the modern scientific method, the empirical method.)
“it is still impossible, for various reasons, that any theoretical system can ever be conclusively falsified. For it is always possible to find some way of evading falsification, for example by introducing ad hoc an auxiliary hypothesis, or by changing ad hoc a definition. It is even possible without logical inconsistency to adopt the position of simply refusing to acknowledge any falsifying experience whatsoever. Admittedly, scientists do not usually proceed in this way, but logically such procedure is possible”
For this reason Karl Popper ruled out imprecise definitions, ad hoc changes of hypotheses, and ad hoc changes of definitions from the empirical method. Hence such changes are unscientific and are excluded from the modern scientific method.

Science or Fiction
Reply to  climanrecon
August 3, 2015 4:22 pm

As you point out, water vapor is very significant, but believe it or not the IPCC doesn’t even regard it as a “natural forcing” agent. It seems they regard the system as inherently and extremely stable.
Regarding sea temperature and land air temperature and their combination, there are several issues both with definitions, physics and scientific theory.
For example: Exactly what is supposed to be warming – and how much?
Is it the troposphere, the near-surface air temperature, the sea surface temperature, the temperature of the deep oceans, or some combination?
It matters, because the amount of energy which may warm the atmosphere by 1 K (1 K equals 1 °C for temperature differences) is only enough to warm the oceans by about 0.001 K.
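That heat-capacity comparison is easy to check on the back of an envelope. The masses and specific heats below are standard round figures assumed for illustration, not taken from the comment:

```python
# Rough check: the energy that warms the whole atmosphere by 1 K warms the
# oceans by only about a thousandth of a kelvin. All figures are standard
# round values assumed for illustration.
m_atm = 5.1e18      # kg, total mass of the atmosphere
cp_air = 1004.0     # J/(kg*K), specific heat of air at constant pressure
m_ocean = 1.4e21    # kg, total mass of the oceans
cp_sea = 3990.0     # J/(kg*K), specific heat of seawater

energy_for_1K_atm = m_atm * cp_air                  # joules
dT_ocean = energy_for_1K_atm / (m_ocean * cp_sea)   # kelvin

print(f"ocean warming: {dT_ocean:.1e} K")  # on the order of 0.001 K
```

The ratio of the two heat capacities is roughly a thousand to one, which is where the 0.001 K figure comes from.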
But is the theory precisely defined?
No!
What about the various temperature products which estimate global temperature then, do they take into account the different heat capacity of oceans and troposphere?
No!
Have they defined the measurand (what they are measuring, or providing an estimate for)?
No!
Have they predicted a range of observations which would falsify their theory?
No!
And that is a pity, because the theory is then not falsifiable.
And if it isn’t falsifiable, it isn’t science.
As phrased by Popper:
“I shall not require of a scientific system that it shall be capable of being singled out, once and for all, in a positive sense; but I shall require that its logical form shall be such that it can be singled out, by means of empirical tests, in a negative sense: it must be possible for an empirical scientific system to be refuted by experience.”

August 3, 2015 10:50 am

Mark, how does the trend for the last ten years from USCRN compare to GISS for the US?

Mark Albright
Reply to  Cardin Drake
August 3, 2015 11:57 am

69 sites out of 114 total USCRN sites now have 10 years of record over the USA48 domain. Here are the annual results for the 10 years of the “USA National Thermometer” (NAT69) in deg F ranked cold to warm:
Rank  Year  Mean   Anom
-----------------------
 1)   2009  51.75  -1.03
 2)   2008  51.81  -0.92
 3)   2013  51.99  -0.79
 4)   2014  52.10  -0.67
 5)   2010  52.53  -0.26
 6)   2011  52.78   0.00
 7)   2005  53.05   0.27
 8)   2007  53.15   0.37
 9)   2006  53.73   0.95
10)   2012  54.89   2.12
I don’t have a comparison to GISS but here is the comparison of annual mean temperature between NAT69 and USHCN:
Year  USHCN  NAT69   DIFF
-------------------------
2005  53.64  53.05  +0.59
2006  54.25  53.73  +0.52
2007  53.65  53.15  +0.50
2008  52.29  51.81  +0.48
2009  52.39  51.75  +0.64
2010  52.98  52.53  +0.45
2011  53.18  52.78  +0.40
2012  55.28  54.89  +0.39
2013  52.43  51.99  +0.44
2014  52.53  52.10  +0.43
-------------------------
MEAN  53.26  52.78  +0.48
2005-2009 mean DIFF: +0.55
2010-2014 mean DIFF: +0.42
Except for a half-degree offset between the two measures, the variability in annual mean temperature matches quite closely over the USA48 domain for the 10-year period 2005-2014.
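The reported offsets can be re-derived from the table with a few lines of code; the values below are transcribed directly from the annual means quoted above (deg F):

```python
# Recompute the USHCN-minus-NAT69 offsets to check the reported +0.48 F
# mean difference. Values are the annual means quoted in the comment.
ushcn = {2005: 53.64, 2006: 54.25, 2007: 53.65, 2008: 52.29, 2009: 52.39,
         2010: 52.98, 2011: 53.18, 2012: 55.28, 2013: 52.43, 2014: 52.53}
nat69 = {2005: 53.05, 2006: 53.73, 2007: 53.15, 2008: 51.81, 2009: 51.75,
         2010: 52.53, 2011: 52.78, 2012: 54.89, 2013: 51.99, 2014: 52.10}

diffs = {yr: round(ushcn[yr] - nat69[yr], 2) for yr in ushcn}
mean_diff = sum(diffs.values()) / len(diffs)

print(diffs[2005])          # 0.59, matching the table
print(round(mean_diff, 2))  # 0.48, matching the reported mean offset
```

The per-year differences and the overall mean reproduce the table's DIFF column exactly.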

Reply to  Mark Albright
August 3, 2015 7:08 pm

Thanks, that’s interesting. So they correlate well, but the half degree is about 75% of the “global warming” in the U.S.

more soylent green!
August 3, 2015 11:02 am

My contention is that after all the homogenization and adjustments, the GISS dataset no longer qualifies as “data.”

Reply to  more soylent green!
August 3, 2015 1:14 pm

Correct. It is an estimate of what the data might have been, had they been collected in a timely fashion from properly selected, sited, calibrated, installed and maintained instruments.
We do not do ourselves any favors by referring to the post “adjustment” temperature records as data sets.

Anne Ominous
August 3, 2015 11:16 am

When including charts and graphs like the pie chart above, by all means size it to fit the page as you have here. But please PLEASE include a link to a full-size version. Because those are just too small to see clearly.

Anne Ominous
Reply to  Anne Ominous
August 3, 2015 11:18 am

Pardon… the pie chart is fine. I meant the map. It is just too small to read clearly.

Reply to  Anne Ominous
August 3, 2015 11:37 am

Anne, the full preliminary surface stations paper by Watts et al. is available in the lower right corner of WUWT.

Latitude
August 3, 2015 11:22 am

warming the past moves all the temperatures up…
Since UHI makes nights warmer…changing the time of day moves it all up again

george e. smith
Reply to  Latitude
August 3, 2015 11:49 am

And if UHI makes nights warmer, that means the UHI will radiate faster than before and so contribute a greater amount of energy to the Earth’s energy loss; heat islands may be a good thing, provided you assign the correct Temperature to them, and not some homogenized fake Temperature. Don’t forget, they also radiate much faster during the day; much, much more than at night.

Latitude
Reply to  george e. smith
August 3, 2015 12:09 pm

don’t forget to adjust up for UHI….

August 3, 2015 11:22 am

Suppose that we could find 20 or so sites around the USA that have been there since 1900 or so; and that have always been away from urbanization, have not moved, and we don’t suspect that any government goons have tampered with the records. Suppose we used these long term sites and their raw data — what do you suspect we would find?
Can this be done?

Reply to  markstoval
August 3, 2015 11:41 am

You can do it. The post provides all the rural CRN 1 stations. Go add all the rural CRN 2 stations from surfacestations.org. Average the lot for starters. To be fancy, do a spatially weighted average.
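A minimal sketch of that spatially weighted average, weighting each station's trend by the cosine of its latitude as a crude proxy for the area it represents. The station names and trend values are hypothetical placeholders, not numbers from the post:

```python
# Compare a simple mean of station trends against a latitude-weighted mean.
# Stations and trends are illustrative placeholders only.
import math

stations = [
    # (name, latitude_deg, trend_C_per_decade)
    ("rural_A", 34.2, 0.08),
    ("rural_B", 41.7, 0.12),
    ("rural_C", 46.9, 0.05),
]

# cos(latitude) is a rough area weight: a degree of longitude spans less
# ground at high latitudes, so high-latitude stations represent less area.
weights = [math.cos(math.radians(lat)) for _, lat, _ in stations]
weighted = sum(w * t for w, (_, _, t) in zip(weights, stations)) / sum(weights)
simple = sum(t for _, _, t in stations) / len(stations)

print(round(simple, 3), round(weighted, 3))
```

A real analysis would grid the stations and average grid cells, but the cosine weight captures the basic idea of not letting dense clusters or high-latitude stations dominate.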

Reply to  ristvan
August 3, 2015 12:20 pm

One would assume you mean following one of the two links in the post above. The one to GISS gives me a DNS error, and the one to surfacestations.org could use a link to the data you suggest. But as school has re-started for the year here, I’ll not have time to play with the idea for a long while.
I do wonder why no organization has offered to publish the raw data from selected “good” sites. Perhaps Wood for Trees will be helpful in this regard. I’ll check when time allows.

Reply to  markstoval
August 3, 2015 1:55 pm

It’s been done. You will find a minimal UHI effect.
Zeke’s paper:
http://onlinelibrary.wiley.com/doi/10.1029/2012JD018509/full
On a global basis Zeke and I did the same thing; again a minimal effect was found, in the noise.
The biggest problem is coming up with a good definition of what counts as rural and what counts as urban.
The first thing to look for (see the post) is the absence of a QUANTIFIABLE definition of what is rural. As Oke found, over 50% of UHI studies used airports as the rural station. Basically, define rural in a way that removes human judgment from the process (avoiding confirmation bias).
Next, look at only rural stations. You will find that the trends over time don’t change.
Other things you can do:
1. Look at reanalysis: same trends.
2. Look at marine air temperature: same trends.
3. Look at reanalysis that uses no temperature measurements as inputs: same trends.
Bottom line: no measurable UHI effect on a global basis. You can, however, cherry-pick anything you want out of records.

Reply to  Steven Mosher
August 3, 2015 4:41 pm

Hi SM. Figured in advance you would eventually show up. This logical ambush was set up partly just for you months ago. Let’s now collect some ‘scalps’.
1. The example set chosen was all SS.org CRN1, less one that was explicitly dubious on its face. No cherry pick at all. Nice try. Fail. You deny that pristine sites are relevant? Or resent ‘cherry picking’ only sites without any siting problems? Shame on you…. Berkeley science, perhaps. NOT Feynman science.
2. The UHI correction BEST denies but NASA explicitly acknowledges is demonstrated by NASA GISS Tokyo. I even illustrated NASA’s own example. Take your discrepancy up with NASA, not me. They think it exists. EPA thinks it exists. Their websites both say so. (And I believe them.) Your problem, not mine.
3. Now, you might reply that on average for all BEST sites UHI does not exist. Well, GISS and NCDC and AUS BOM obviously disagree. And BEST also provably does not always do proper data ingestion, so any conclusions have to be treated circumspectly. Ingestion examples (oh my, irrefutably illustrated in the ebook, or perhaps also in a sequel to this post) include BEST problems in Reykjavik and Rutherglen, two pristine non-US stations.
And BEST has yet to explain its ‘regional expectation’ corrections to station 166900. Look it up, along with previous comments about it. See also footnote 24 to essay When Data Isn’t (no different than here in GISS, just differently proven) for that station’s BEST specifics. You imploded over that example some time ago over at Climate Etc., but still have no answer other than that the BEST ‘model’ is better than what was actually measured at BEST 166900, to which I say: what are you smoking? Anthony Watts got those referenced details along with my guest submission. He chose not to post them (yet). They were provided to him to establish my proposed post’s bona fides, for him to use at his discretion. Have a nice day.

kentclizbe
Reply to  Steven Mosher
August 3, 2015 5:45 pm

Beautiful response, ristvan.
The data manipulators, at BEST, NASA, NOAA, and all, are apparently incapable of taking on competent criticism and feedback.
They’ve evidently built quite a tower on very shaky ground. Pointing out the emperor has no clothes is a threat.
Stay on them. Don’t let their arrogance shake you from their tail. Clear, factual analysis scares the daylights out of them.

Reply to  Steven Mosher
August 3, 2015 7:18 pm

Hi rud
“Hi SM. Figured in advance you would eventually show up. This logical ambush was set up partly just for you months ago. Lets now collect some ‘scalps’.
1. The example set chosen was all SS.org CRN1, less one that was explicitly dubious on its face. No cherry pick at all. Nice try. Fail. You deny that pristine sites are relevant? Or resent ‘cherry picking’ only sites without any siting problems? Shame on you…. Berkeley science, perhaps. NOT Feynman science.
a) Anthony has new ratings; sorry, you used old data.
b) The rating system itself is subjective and has never been field tested. I actually talked to LeRoy’s colleagues about this; they did limited testing.
c) There are actually over 200 sites that are ranked as pristine, with better sensors than the 14 you selected.
2. The UHI correction BEST denies but NASA explicitly acknowledges is demonstrated by NASA GISS Tokyo. I even illustrated NASA’s own example. Take your discrepancy up with NASA, not me. They think it exists. EPA thinks it exists. Their websites both say so. (And I believe them.) Your problem, not mine.
a) We don’t deny any UHI corrections; the algorithms do them.
b) On a GLOBAL basis no one has successfully shown a UHI effect. I’ve got piles of failed attempts and one attempt that showed a slight effect.
3. Now, you might reply that on average for all BEST sites UHI does not exist. Well, GISS and NCDC and AUS BOM obviously disagree. And, BEST also provably does not always do proper data ingestion, so any conclusions have to be treated circumspectly. Ingestion examples (oh my, irrefutably illustrated in the ebook, or perhaps also in a sequelae to this post) include BEST problems in Reykjavik and Rutherglen, two pristine non-US stations.
a) The argument is NOT that it doesn’t exist.
b) The effect exists and is real, and you can find it without a doubt.
c) Negative UHI effects also exist. Google that; have fun.
d) On a global basis the effect is near the noise floor. There is still some hope of pulling it out, but it won’t change the science.
e) Neither Reykjavik nor Rutherglen is pristine.
f) The problem in Iceland is not what people think; it’s actually a change in land cover (confirmed by a recent visit to their headquarters).
g) There is no ingest problem.
And BEST has yet to explain its ‘regional expectation’ corrections to station 166900. Look it and previous comments about it up. See also footnote 24 to essay When Data Isn’t (no different than here in GISS, just differently proven) for that station’s BEST specifics. You imploded over that example some time ago over at Climate Etc. But still have no answer than BEST ‘model’ is better than watch was actually measured at BEST 166900–to which I say, what are you smoking?
a) 166900 is in Antarctica.
b) This has been explained to you before, but you choose not to read, or choose to forget.
c) I will try again. Antarctica is one of the most challenging areas for any spatial statistics. The reasons are pretty simple: 1) the shortness of the records; 2) the large distance between stations; 3) the presence of weather phenomena (katabatic winds) which are challenging for an approach that relies on LAPSE RATE, as ours does. The corrections to that record are VERY LIKELY TO BE WRONG, as I pointed out to the first person who ever commented on them. Globally they make no difference. How do we know that? We know it because the answer you get using ONLY RAW DATA and no adjustments isn’t that much different. If you want a more accurate version of Antarctica, I would suggest using the approach that O’Donnell used; it’s like the Cowtan and Way approach, only for the South Pole. Or you could do a specialized regression for that area that takes into account the unique geography of the region. We model the climate as a function of latitude and elevation, just as Willis has done here. That regression explains 90+% of the variation. Where in the world does this type of regression break down? It breaks down (has larger errors) in places where temperature inversions dominate during certain seasons. It also has larger errors where there are strong coastal effects. The regional-expectation approach minimizes the GLOBAL error; minimizing the global error does not mean that large local errors cease to exist.
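The latitude-and-elevation regression idea, and why large local errors can persist even when the overall fit is good, can be sketched in a few lines. The climatology coefficients below are typical textbook values assumed for illustration (about -0.6 C per degree of latitude, a -6.5 C/km lapse rate), not BEST's actual fit, and the stations are synthetic:

```python
# Toy version of modeling station temperature as a function of latitude
# and elevation. Coefficients and stations are illustrative assumptions.
def climatology(lat_deg, elev_km, a=30.0, b=-0.6, c=-6.5):
    """Expected mean temperature (C) from latitude and elevation alone."""
    return a + b * lat_deg + c * elev_km

# "Observed" stations: the first two follow the regression exactly; the
# third sits in a wintertime inversion and runs colder than predicted.
stations = [
    ("plains",    40.0, 0.3, climatology(40.0, 0.3)),
    ("foothills", 39.0, 1.5, climatology(39.0, 1.5)),
    ("valley",    41.0, 0.8, climatology(41.0, 0.8) - 2.0),  # inversion-prone
]

# Residuals from the regional expectation: a large local error (the
# valley's -2 C) survives even though most stations fit perfectly.
residuals = {name: round(obs - climatology(lat, elev), 2)
             for name, lat, elev, obs in stations}
print(residuals)  # {'plains': 0.0, 'foothills': 0.0, 'valley': -2.0}
```

This is exactly the failure mode described above for inversion-dominated and coastal sites: a regression that minimizes global error can still leave large residuals at individual stations.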
Yet you have avoided the real issue. Take the 200+ pristine stations (CRN and RCRN): they don’t differ from the “bad” stations.
Every year going forward, that story will be the same.

willnitschke
Reply to  Steven Mosher
August 3, 2015 9:57 pm

In actual science, when it’s shown that your methods repeatedly fail, you’re sent back to the drawing board. In climate science you just hand wave it away with a few snide remarks about “cherry picking” and carry on doing your junk science. Enjoy it while you can get away with it, but the tide always turns eventually.

RWturner
August 3, 2015 11:23 am

I know how this problem can be fixed without homogenization and interpolation. Let’s build and deploy an array of satellites with advanced microwave sounding units on board to derive the temperature of the troposphere. If only someone had thought of this before, we could have been more accurately measuring the global average temperature since 1979.

kim
August 3, 2015 11:32 am

Heh, I first read the headline as: How good is the NASA GISS global temperature disaster?
==============

August 3, 2015 12:07 pm

GISS data is obsolete; it is that simple. Satellite data has replaced it.

Reply to  Salvatore Del Prete
August 3, 2015 1:48 pm

While we’re on the subject: since we have satellites, WHY does NASA even bother with land data? (I know why they do NOW, but why did they bother before Obama?)

Reply to  Andrew
August 3, 2015 4:25 pm

1. Because satellites measure something different.
2. Because the surface is where we live.
3. Understanding the climate means understanding temperature from the bottom of the sea to the TOA.
4. Satellite series are short (the LIA and the MWP disappear).
5. Curiosity.

Science or Fiction
Reply to  Andrew
August 3, 2015 5:15 pm

Good point – satellites measure the temperature in the troposphere.
According to the theory, energy is absorbed by CO2 mainly in the troposphere.
When the troposphere is not warming, how can the deep oceans be warming (if they are)?
I think they don’t like the satellite records because the records are suitable for falsifying their theory.
That is, if the theory had been precisely defined, and thereby falsifiable.
And if they had acted scientifically and predicted a range of observations which could falsify their theory.

Simon
Reply to  Andrew
August 3, 2015 6:01 pm

Steven
Thank you. It’s a pity people around here don’t consider your 5 good reasons before commenting.

Reply to  Andrew
August 3, 2015 6:41 pm

There is no missing hotspot.

willnitschke
Reply to  Andrew
August 3, 2015 9:45 pm

Why do Mosher’s points sound like the usual climate alarmist talking points/drivel, rather than a serious discussion of the issues?

Mark Albright
August 3, 2015 12:11 pm

The surfacestations.org web page has a link to the gallery, but it seems to be broken:
http://gallery.surfacestations.org/main.php
Does anyone know how to access the gallery? I would like to begin reviewing each site in Washington and Oregon.