How good is the NASA GISS global temperature dataset?

Guest essay by Rud Istvan

It is generally accepted that there are two major land temperature record issues: microsite problems, and urban heat island (UHI) effects. Both introduce warming biases.

The SurfaceStations.org project manually inspected and rated 1007 of 1221 USHCN stations (82.5%) using the 2002 Climate Reference Network (CRN) classification scheme (handbook section 2.2.1). The resulting preliminary paper shows a large temperature trend difference (about 0.1C/decade) between acceptably sited stations (CRN 1 or 2) and those with material microsite problems (CRN 3, 4, 5).
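The 0.1C/decade figure is a difference between ordinary least-squares trends. A minimal sketch of that comparison (the station series here are synthetic, invented for illustration; they are not the SurfaceStations data):

```python
import numpy as np

def trend_per_decade(years, temps):
    """Ordinary least-squares slope, scaled from per-year to per-decade."""
    slope = np.polyfit(years, temps, 1)[0]
    return slope * 10.0

rng = np.random.default_rng(0)
years = np.arange(1979, 2009)

# Well-sited (CRN 1-2) station: 0.15 C/decade underlying trend plus noise
good = 0.015 * (years - years[0]) + rng.normal(0.0, 0.1, years.size)
# Poorly sited (CRN 3-5) station: an extra ~0.1 C/decade of microsite warming
poor = 0.025 * (years - years[0]) + rng.normal(0.0, 0.1, years.size)

diff = trend_per_decade(years, poor) - trend_per_decade(years, good)
print(f"trend difference: {diff:.2f} C/decade")
```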

[Figure: trend difference between well-sited and poorly sited USHCN stations]

That is a real problem, since only 7.9% of USHCN is CRN 1 or 2. The NOAA solution has been to set up USCRN. This is not yet (AFAIK) being used to detect/correct USHCN station microsite issues in either the NCDC or GISS homogenization algorithms.

[Figure: USCRN network]

What about UHI? The NASA GISS website uses Tokyo to explain the issue and its homogenization solution. One could either cool the present to remove UHI or warm the past (inserting artificial UHI for trend comparison purposes). Warming the past is less discordant with the reported present (the UHI correction less noticeable), so preferred by GISS.
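GISS describes the choice only qualitatively here. As a toy illustration of the two options (a linear UHI contamination is assumed; this is not the actual GISS algorithm), either correction removes the same spurious trend, but only warming the past leaves the reported present-day value untouched:

```python
import numpy as np

years = np.arange(1950, 2011)
uhi = 0.01 * (years - 1950)                # assumed linear UHI contamination, C
raw = 12.0 + 0.005 * (years - 1950) + uhi  # raw series = climate trend + UHI

# Option A: cool the present (anchor the earliest value)
cool_present = raw - uhi
# Option B: warm the past (anchor the most recent value)
warm_past = raw - uhi + uhi[-1]

# Both corrections remove the same spurious trend...
assert np.allclose(np.polyfit(years, cool_present, 1)[0],
                   np.polyfit(years, warm_past, 1)[0])
# ...but warming the past leaves the present-day value unchanged,
# so the correction is less noticeable against current data.
print(raw[-1], cool_present[-1], warm_past[-1])  # ~12.9, ~12.3, ~12.9
```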

[Figure: GISS Tokyo UHI homogenization example]

In the Surface Stations supplemental materials (available at www.surfacestations.org) only 14 CONUS stations have pristine CRN 1 siting (1.2%). 4 are labeled urban, 3 are suburban, and 7 are rural. Since these 14 have zero microsite issues, they can be used to examine the GISS UHI homogenization. Both the ‘raw’ and the ‘adjusted’ data can be accessed at www.data.giss.nasa.gov/gistemp. Just click on the monthly chart to go to the station selector page, and enter a station name. The following uses [combined location sources] raw v2, and homogenized v3 (since that is all that is now publicly available). Only 13 stations proved usable; Corpus Christi v2 raw (urban) has different lat/lon coordinates than v3 homogenized. That could be a mistake, or it might introduce an unfair comparison. Corpus Christi was therefore excluded; the final GISS CRN 1 sample size is N=13.

Is UHI evident in the raw urban stations compared to rural stations (like the GISS Tokyo/Hachijyo example)? Yes. All three urban stations evidence UHI, for example San Antonio TX and Syracuse NY.

[Figure: raw data, San Antonio TX and Syracuse NY]

But in suburban Laramie WY or Baker OR UHI is not evident in the raw data–just as, for example, there is no UHI in rural Hobart OK or Fairmont CA.

[Figure: raw data, stations without evident UHI]

How good was GISS at removing the apparent UHI bias from raw San Antonio and Syracuse? Hard to tell for sure, but it is evident that the past was warmed some to compensate, just as GISS says its homogenization works.

[Figure: GISS homogenization of San Antonio and Syracuse]

The third pristine urban station, Savannah GA, was homogenized so much its raw UHI warming trend was fully removed. That might make sense given Savannah’s coastal location, moderated by ocean proximity.

[Figure: Savannah GA, raw vs homogenized]

GISS should logically leave non-UHI suburban and rural stations relatively untouched. Oops. GISS homogenization cooled the past to add a spurious warming trend to all but one pristine station. For example these two:

[Figure: two suburban/rural stations with homogenization-induced warming]

In some cases the past was cooled AND the present warmed, as in Laramie WY.

[Figure: Laramie WY, raw vs homogenized]

A spurious warming trend was introduced into all three suburban and 6 of 7 rural CRN 1 stations. Only Apalachicola FL emerged from GISS unscathed.

Automated homogenization algorithms like GISS use some form of a regional expectation, comparing a station to ‘neighbors’ to detect/correct ‘outliers’. BUT 92% of US stations have microsite issues. So most neighbors are artificially warm. So the GISS algorithm makes the hash illustrated above. How could it not? And by extension NCDC, BEST, Australian BOM, …
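The post characterizes the algorithm only at this level of detail. A toy sketch of the claimed failure mode (a deliberately crude neighbor-median adjustment, not GISS’s actual pairwise algorithm) shows how a lone well-sited station inherits the warm bias of the majority:

```python
import numpy as np

years = np.arange(1980, 2011)
true_trend = 0.005   # C/yr of real regional warming
bias_trend = 0.010   # C/yr of extra spurious microsite warming

# 11 of 12 neighbors carry the microsite bias (~92%, as in the USHCN survey);
# one station is well sited.
stations = [(true_trend + bias_trend) * (years - years[0]) for _ in range(11)]
stations.append(true_trend * (years - years[0]))

slopes = np.array([np.polyfit(years, s, 1)[0] for s in stations])
regional = np.median(slopes)               # the "regional expectation"

# Crude outlier correction: nudge every station halfway toward the median
adjusted = slopes + 0.5 * (regional - slopes)

good_before = slopes[-1] * 10   # well-sited station, C/decade
good_after = adjusted[-1] * 10
print(good_before, good_after)  # the clean station is warmed toward the biased majority
```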

157 Comments
John
August 3, 2015 9:22 am

SeaTac has a sighting issue since the third runway went in, and they’re going to claim it’s the hottest summer in Seattle this year.
http://www.seattletimes.com/seattle-news/seattles-scorching-summer-sizzles-on/

John
Reply to  John
August 3, 2015 9:22 am

“… siting…”

Reply to  John
August 3, 2015 9:29 am

I just talked to my sister in Seattle, and she says it IS the hottest summer, since she arrived in the 90’s…

Reply to  Michael Moon
August 3, 2015 10:38 am

Are you serious?

Bernie
Reply to  Michael Moon
August 3, 2015 11:22 am

…and all summers are hotter than I remember when I was a kid.

James at 48
Reply to  Michael Moon
August 3, 2015 1:39 pm

Seattle’s finally having a warm summer after a series of crummy ones. In spite of stereotypes, being on an inland branch of the ocean, Seattle gets considerably less marine cooling than the CA coast (plus the SSTs are actually warmer, due to less upwelling). When there are not fronts coming through, it can get pretty toasty in Seattle (and even more toasty down the road in Portland).

Joseph Murphy
Reply to  Michael Moon
August 4, 2015 2:08 pm

Bernie, that reminds me of a conversation I had with my parents. They would tell me winters were so much worse when they were kids. ‘We would have snow for Thanksgiving!’, my dad would tell me. Being a big fan of snow this got me excited. ‘And what about your parents?’, I asked. ‘Did they have even more snow when they were kids?’ This caused my dad to pause. ‘No’, he replied thinking back, ‘They actually complained about how much worse the winters had become.’ What a disappointment, I thought.

Reply to  Michael Moon
August 5, 2015 5:00 pm

since she arrived in the 90’s

alarmists would ignore that conditional – smart people notice – and they would want to know if this is anecdotal evidence – or is it based on some sort of proof
always a risk giving one person’s assessment based on their limited experience more than a passing glance
and i don’t have to – i’ve been living in seattle since the late 70’s

Mark Albright
Reply to  John
August 3, 2015 10:08 am

I have been one of those who watches the SeaTac temperature record closely. SeaTac was considerably warmer (+2.2 to +2.5 F) than nearby sites in the greener and cooler neighborhoods surrounding the airport last month (July 2015):
July 2015
Site Mean Diff
——————
SEAT4 68.7 -2.5
NORM3 69.0 -2.2
DESM8 68.9 -2.3
——————
KSEA 71.2

Rhoowl
Reply to  John
August 3, 2015 11:39 am

I’ve been in tx for about 13 years now….and this is the coolest summer I remember…..we haven’t hit 100 yet….like last year we were in the 100’s almost everyday about now….

Reply to  John
August 3, 2015 7:24 pm

SeaTac’s temperature records date from 1945, Landsburg’s from 1915.
SeaTac’s 2015 Jun-Jul mean max temperature was 80.8°F, a whopping 3.5°F warmer than the 1958 Jun-Jul mean max of 77.3°F—previously the warmest on record.
Landsburg is 17 miles east of SeaTac. Landsburg’s 2015 Jun-Jul mean max was 80.4°F, warmer than the 1958 mean max of 78.6°F but cooler than the 1926 Jun-Jul mean max of 81.4°F.
If SeaTac existed in its current form in 1926, its 2015 Jun-Jul warmth might not be unprecedented.
Highest temperature at SeaTac in Jun-Jul 2015 was 95°F, 8° below the record.

knr
August 3, 2015 9:23 am

The question I would raise is: if you were to sit down and think about what it would take to scientifically come up with a meaningful value for the average temperature of the planet, how well do we currently match that?
I suspect we would find that the conditions required to produce this value in a manner that actually has scientific value are not met, and that we are using a value which in reality is ‘better than nothing’.
Experimental design 101: if you cannot take the measurements in the manner required, then any value you produce is suspect and subject to errors, and if you do not know the errors it is subject to, then you are ‘guessing with numbers’.
Now what is the actual state of our ability to produce this value in a meaningful way – anyone know?

BFL
Reply to  knr
August 3, 2015 9:33 am

“actual state of our ability”
None

Auto
Reply to  BFL
August 3, 2015 3:13 pm

BFL
Given the lack of careful observation/precision for the oceans [note – most merchant ships – VOSs – are still using buckets; or Engine Room Intakes, which may be as much as twenty metres [>60 feet] below the surface they are ‘supposed’ to be measuring] I suggest that despite ARGO buoys doing a measurement every quarter million square miles – we are still far – I say again : f a r – from being able to quote an average temperature for the globe to even one dp C.
So – 14 – or 15 – or 13.
As best as we can accept. I suggest.
And in 257915 BC – or BP -or any other six digit number produced by letting my fingers wave at the top line on my keyboard – let’s all wave our arms!
We’re within an order or so – probably.
14.04 or 15.92 or 13.83 – or anything to 2 dp – is an expletive carping deletive.
Even 14.9 or 16.2 or – ahhhhh, anything better than a whole degree – is exploitative and likely wrong.
As noted – we can probably do the nearest degree . . . probably . . . .
At best for lots and lots of thousands of years ago – two C – and I think that’s optimistic.
[Hugely optimistic? I think so, but stand to be corrected, with evidence.]
NB – I stand to be corrected, with evidence . . . .
Prove me wrong – with evidence – and I will praise you.
With evidence.
Auto.
Note. Now, I’m borrowing from Willis – with huge acknowledgments – and much appreciation.
My BORROWED – From Willis (super star) – Plea is: If you disagree with someone, please quote the exact words that you object to. That way we can all understand exactly who and what you are objecting to.
I might be wrong – quote where I am in error, and how – and corrections.
Much appreciated. Auto

Auto
Reply to  BFL
August 3, 2015 3:15 pm

So – if I’m talking bollocks – TELL ME.
Thanks.
Auto

BFL
Reply to  BFL
August 3, 2015 8:18 pm

Auto:
“the actual state of our ability to produce this value [avg. earth temp.] in meaningful way”
Okay then I stand corrected, and change “none” to “effectively none” as in possible, but with large error bars.

Hoyt Clagwell
Reply to  knr
August 3, 2015 10:37 am

@Knr: No averaged temperature of the whole planet could ever be meaningful. Even if you had temperature measurements from every square foot of the planet, it wouldn’t be meaningful once it was all averaged. Think about it this way, if the whole planet became 57 degrees F tomorrow and stayed that way every day, everything frozen would be melting, and life in the tropics would be dying, but your precious average would remain unchanged. It is no more meaningful than sampling the wavelengths of light in the rainbow and averaging them into a single wavelength measurement corresponding to an “average color” for the rainbow. It wouldn’t and couldn’t mean anything. It is just a mathematical exercise. The world is a rainbow of temperatures that fluctuate wildly every day, and cannot be represented by a single temperature number. Tracking the average temperature year after year will never tell you what is causing the temperature to fluctuate because you’ve averaged out all of the details.

knr
Reply to  Hoyt Clagwell
August 3, 2015 1:43 pm

You’re right – on this meaningless value, which we are not even in a position to measure in a manner that has scientific meaning, a great deal has been built.
Sceptics are called ‘science-deniers’ by the faithful of CAGW, and yet the real mass denial is in failing to admit that we are still failing science 101 – failing the basic tenets of good scientific practice in this area.
For if we cannot measure, we cannot ‘know’ but only ‘guess’.

Glenn999
Reply to  Hoyt Clagwell
August 4, 2015 9:11 am

I agree completely on the global average thingy. Absolutely useless. Why would we want to average the poles with the tropics, and what would that number mean anyway?
A proper use of averages would be to average your local temperature and weather to look for trends in your neighborhood. This could possibly be extended to other local areas nearby, but the physical characteristics would need to be similar also.

Karl Compton
Reply to  Hoyt Clagwell
August 5, 2015 8:16 am

Hoyt, we are being told ad nauseam that “The earth is warming.” Indeed, if CO2 (or something else) is causing global warming the average would by definition go up, so it is indeed of interest as a proof/disproof of the thesis. Though local variations are of more immediate interest, being able to know whether the earth is actually warming and at what rate is certainly of considerable interest, and might actually save us a few trillion wasted dollars.

Bill Treuren
Reply to  knr
August 3, 2015 1:57 pm

The question is very valid and the temperature presented will always be disputable by someone.
However if you are consistent and you include your errors even a moderately faulty process could be a valid measure for trends and change.
my issue is the errors are much ignored.

Editor
Reply to  Bill Treuren
August 6, 2015 3:20 am

Well, we do have RSS and UAH — fortunately . . .

Eugene WR Gallun
Reply to  knr
August 3, 2015 5:54 pm

knr
“Better than nothing” or worse than nothing?
Eugene WR Gallun

Editor
Reply to  Eugene WR Gallun
August 6, 2015 3:21 am

Yes.

Reply to  knr
August 3, 2015 7:23 pm

One problem is that radiation is proportional to the fourth power of temperature. So the average radiative ability of anything is not the same as the averaged value of T over the whole object. (Something could become on average cooler and yet radiate more if the hot spots got hotter and most of the rest got cooler.) Since dissipation of heat is a prime function of weather, average temperature is a poor metric. For what is it a good metric?
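The fourth-power point is easy to verify numerically. A minimal sketch using the Stefan-Boltzmann law (the example temperatures are invented): two surfaces with the same mean temperature radiate differently, and a surface can even be cooler on average yet radiate more, exactly as the comment says.

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/m^2/K^4

def mean_emission(temps_k):
    """Average radiated flux of a surface made of equal-area patches."""
    return SIGMA * sum(t ** 4 for t in temps_k) / len(temps_k)

uniform = [288.0, 288.0]   # mean temperature 288 K
contrast = [268.0, 308.0]  # same 288 K mean, but with a hot spot
cooler = [260.0, 310.0]    # mean only 285 K, hotter hot spot

print(mean_emission(uniform) < mean_emission(contrast))  # same mean T, more radiation
print(mean_emission(uniform) < mean_emission(cooler))    # cooler on average, yet radiates more
```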

August 3, 2015 9:28 am

Your charts should have a third panel: GISS Homogen – Raw.
I suspect the trend, the intermediate trends, and the noise will all be impossible to justify.
That difference will also be a source of uncertainty.
Whatever the uncertainty in the Raw, the variance from the Difference must be ADDED to the raw to get uncertainty in the final GISS Homogenized result.
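One refinement: for independent errors it is the variances that add, so the standard deviations combine in quadrature. A sketch of the suggested third panel and the combined uncertainty (synthetic stand-in series, and an assumed raw-data uncertainty):

```python
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1950, 2011)

# Synthetic stand-ins for the two published panels
raw = 12.0 + 0.010 * (years - 1950) + rng.normal(0.0, 0.3, years.size)
homog = raw + 0.005 * (years - 1950) + rng.normal(0.0, 0.1, years.size)

diff = homog - raw               # the suggested third panel: Homogen - Raw

sigma_raw = 0.3                  # assumed uncertainty of the raw data, C
sigma_adj = np.std(diff, ddof=1) # spread introduced by the adjustment
# Independent errors: variances add, standard deviations combine in quadrature
sigma_total = np.sqrt(sigma_raw ** 2 + sigma_adj ** 2)
print(sigma_raw, round(sigma_adj, 3), round(sigma_total, 3))
```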

August 3, 2015 9:28 am

Our tax dollars at work…

Schrodinger's Cat
August 3, 2015 9:29 am

It seems to me that by the time GISS has finished adjusting the data it is unsuitable for any scientific purpose. It is more instructive to look at the raw data with some knowledge of a problem it may have (or had).

Louis Hunt
Reply to  Schrodinger's Cat
August 3, 2015 9:54 am

Will they ever be finished adjusting the data? With their track record, it seems doubtful. But we do know one thing. The current GISS data set is unsuitable for any scientific purpose because we know it will be “corrected” again in the future. In fact, they will very likely make many future corrections to it. That means the current data are “wrong” and therefore worthless for any purpose other than propaganda. But that seems to suit their purpose anyway.

Reply to  Schrodinger's Cat
August 3, 2015 10:28 am

I totally agree with you; how can the data be adjusted accurately? For example, a weather station sits next to a runway at a provincial airport; the airport expands and gets another runway; an accurate adjustment is impossible. The only way of getting accurate figures is to totally disregard all data from sites with heat island or microsite problems.
A few years ago in Newcastle upon Tyne, UK where I live, we had the reputation as the most air polluted city in the country. This changed after a few months, to one of the least air polluted cities in the country, because they moved the pollution sensor from the walled and ceilinged bus concourse where dozens of diesel powered coaches and buses were running their engines from 5:00am to midnight to a more sensible location.

george e. smith
Reply to  Schrodinger's Cat
August 3, 2015 11:28 am

Well I don’t think that ” heat islands ” are any kind of problem. They are in fact real places on earth’s surface, and they have a Temperature that may be different from that at surrounding areas.
The problem is that some people like NASA’s Dr. Hansen, think that it is ok to use that same temperature for places 1200 km away from the thermometer.
Siting is an issue in that many thermometers are situated on or near airfield runways, and are intended for flying safety data gathering (take offs and landings, and aircraft loading).
But that’s the same issue as the UHIs. Don’t use airfield thermometers for some place 1200 km away; or even 12 km away.
The big issue with the surface ” data ” gathering is that it doesn’t comply with the fundamental laws governing sampled data systems; so it is just gathering noise; NOT ” data “.
And there is that other issue that the historic oceanic near surface Temperature data, prior to about 1980, is just useless rubbish, since water and air temperatures aren’t the same, and aren’t correlated.
Other than that; the Temperature range on earth is about 100-150 deg. C so trying to keep track of hundredths of a deg. C is plain silly. It certainly isn’t science.
Just my opinion of course; not good for any class credits.
g

Walt D.
Reply to  george e. smith
August 3, 2015 11:43 am

So you don’t believe that London and Barcelona temperatures are the same?
Everyone knows that Seattle and San Francisco temperatures are the same.
Venice and Munich?
What’s the problem?

Editor
Reply to  george e. smith
August 4, 2015 8:11 am

George, the issue with UHI is the distortion of the trend due to factors other than those being sought by climate scientists. An upward trend from a thermometer sited in an urban area is more likely due to localized phenomena than regional climatic changes.
The distortion is further magnified if the UHI-affected site is transposed onto a non-UHI region. The resulting temperature trend will be artificially shifted up, concealing what is truly happening with the temperatures in the area.
rip

george e. smith
Reply to  george e. smith
August 5, 2015 7:40 pm

The idea of samples in a sampled data system, is very simple. There is a tacit assumption that the sample value is a credible value for nearby points that could have been sampled.
That’s the 4-H club description of the Nyquist sampling theorem. Theoretically the sample is the value at some instant of time (or other sampled variable), so ideal samples are zero width. If the signal is band limited (which it must be), then the point value can only change by small amounts in between samples.
So the Nyquist criterion requires that the function not change radically in between samples, and that is why the maximum sample spacing may not exceed the half period at the band limit frequency. So it is ludicrous to space position samples 1200 km apart as Hansen claims you can do, when significant temperature cycles can take place in just a few km.
In the greater SF Bay area, Temperatures can go through a five deg. C Temp cycle in perhaps five km distance.
It doesn’t matter a whit what causes a UHI and how small or large it is. Its temperature is a valid data point, but it may not be valid to use even a few km away. So this 1200 km sampling bs is just that. The resultant measurements are just noise, and contain no reliable information.
A properly sampled continuous function can (in principle) be exactly reconstructed from the point samples.
It doesn’t matter whether you want to reconstruct the entire continuous function or not. Any statistical computations made on the data such as an average value, are also invalid, if the function is not sampled properly.
In the case of the average value of the continuous function, that corresponds to the zero frequency component of the frequency spectrum of the function. If you undersample by just a factor of two, that means there are frequency components at twice the maximum useful bandwidth that can be sampled at that rate.
So if B is the bandwidth limit for a set of samples taken at a rate 2B, then an out-of-band frequency component at a frequency B+B will be reconstructed at a frequency of B-B, which is zero, so that will result in aliasing noise that changes the value of the average of the function.
So whether you reconstruct the continuous function or not, the average of the samples will not be correct if you undersample by a factor of two.
I don’t know why it is that statisticians simply refuse to accept this limitation on the numerical origami mastications they perform.
It epitomizes the GIGO syndrome.
Improperly sampled sets of numbers are NOT data; they are noise.
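The aliasing claim above can be demonstrated directly (a toy sketch, with an arbitrary phase offset): sample a zero-mean sinusoid at exactly one sample per period and the component folds down to zero frequency, so the computed average lands far from the true mean.

```python
import numpy as np

f = 1.0          # cycles per unit time; Nyquist would demand a rate > 2*f
true_mean = 0.0  # a pure sinusoid averages to zero over whole periods

# Undersample: exactly one sample per period, always at the same phase
t = np.arange(100) / f + 0.13
samples = np.sin(2 * np.pi * f * t)

# The component aliases to zero frequency: every sample is (nearly) identical,
# and the computed "average" is badly wrong.
print(samples.mean(), "vs true mean", true_mean)
```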

Editor
Reply to  george e. smith
August 6, 2015 3:35 am

That’s the 4-H club description of the Nyquist sampling theorem.
Unfortunately all we got going is Heat Sink, Homogenization, Hype, and Hansen.

Cube
Reply to  Schrodinger's Cat
August 4, 2015 8:12 am

+1

Editor
Reply to  Schrodinger's Cat
August 6, 2015 3:30 am

Better yet, do what Anthony and our team do: drop the perturbed (moved, TOBS-biased) stations, applying only the MMTS adjustment (unfortunately necessary — and probably flawed) to the remainder.
That works for the station-dense, metadata-rich CONUS. (For Outer Mongolia, not so much, though.)
Note that CRS in and of itself, is biased. It carries its own heat sink around on its back. There is a severe Tmax bias in CRS stations.
We’ll be coming through with our own set of “microsite adjustments” in due course.

August 3, 2015 9:32 am

The GISS data is worthless and should be thrown out.

Tim
August 3, 2015 9:34 am

I’m afraid that the scientific value is much less important to the politicians than a politically correct value.

Harry Twinotter
August 3, 2015 9:40 am

“It is generally accepted that there are two major land temperature record issues: microsite problems, and urban heat island (UHI) effects”.
Not really. When you average out the stations across a country the UHI effects are not large. When you average out the stations across the globe, even less.

Editor
Reply to  Harry Twinotter
August 3, 2015 3:07 pm

Not so. When other stations’ data is adjusted to match UHI-influenced stations, then the UHI starts to play a large role in the overall average.

Louis Hunt
August 3, 2015 9:43 am

Berkeley Earth (BEST) has the following comment in their FAQ:
“Our UHI paper analyzing this indicates that the urban heat island effect on our global estimate of land temperatures is indistinguishable from zero.”
How can they honestly make such a statement?

Andrew
Reply to  Louis Hunt
August 3, 2015 1:43 pm

Well it’s true of the globe, which is why satellites can’t find warming. It would be true if thermometers were randomly situated (including 70% in the ocean, which experiences little UHI).

Menicholas
Reply to  Louis Hunt
August 4, 2015 3:28 pm

Hey, this is climate “science”!
What the heck has honesty got to do with it?
I am reminded of my favorite Mae West line ever:
https://youtu.be/u7ekAQ_Plxk?t=36s

Editor
Reply to  Menicholas
August 6, 2015 3:39 am

Climatology — ask me no questions and I’ll tell you no lies.

Jimmy
August 3, 2015 9:46 am

“It is generally accepted that there are two major land temperature record issues: microsite problems, and urban heat island (UHI) effects. Both introduce warming biases.”
While these are a couple of the most serious issues, there’s more than just two generally accepted major land temperature record issues. For example, time of observation is known to introduce some pretty serious artifacts. There’s also the issue of discontinuous records.

Reply to  Jimmy
August 3, 2015 10:16 am

Regarding Time of OBServation adjustment I find the following little test revealing:
What Is The Real Value Of TOBS?
http://realclimatescience.com/2015/07/what-is-the-real-value-of-tobs/

Louis Hunt
Reply to  Science or Fiction
August 3, 2015 11:00 am

Tony Heller’s article reveals something quite interesting. By removing the stations that took afternoon readings he determined that, “The total bias caused by afternoon TOBS is a little more than 0.1C”. But then he goes on to point out, “The total NOAA adjustment is nearly two degrees F. It is unsupportable nonsense, and fraud.”

Reply to  Science or Fiction
August 3, 2015 11:33 am

Or maybe from a more basic level about TOBS:
http://climate.n0gw.net/TOBS.pdf

Reply to  Science or Fiction
August 3, 2015 2:06 pm

Louis Hunt August 3, 2015 at 11:00 am
He did not provide any support for the claim that the NOAA adjustments are unsupportable nonsense, and fraud, in the linked article; however he has done many previous tests which are quite convincing:
1. The best correlation I have ever seen within climate science:
https://stevengoddard.wordpress.com/2014/10/02/co2-drives-ncdc-data-tampering/
2. Very good visualisation of the adjustments:
http://realclimatescience.com/alterations-to-climate-data/

Reply to  Science or Fiction
August 3, 2015 3:08 pm

Great work on explaining reasons for Time of OBServation bias adjustments Gary.
However – when things get just a little bit more complicated than the very trivial – then there are several influencing parameters, variables and uncertainties – I tend to believe nothing and require appropriate testing.

August 3, 2015 9:57 am

July anomaly +0.18C – satellite data: the correct data, the only data.

August 3, 2015 10:02 am

GISS says their annual temperature anomaly is accurate to .1 degree with a 95% confidence factor. Then, five or ten years later, they adjust it outside of that .1 degree range. That doesn’t even make sense. What is the error bar after the adjustment?

phodges
Reply to  Cardin Drake
August 3, 2015 10:40 am

More like one year later….or even one month

Paul
Reply to  Cardin Drake
August 3, 2015 10:44 am

“What is the error bar after the adjustment?”
My best guess would be 0.01 degrees. If you can adjust data, you can adjust error bars too.

george e. smith
Reply to  Cardin Drake
August 3, 2015 11:37 am

Which is basically gobbledegook anyway (AKA Statistician shop talk).
There isn’t any statistical significance to anything that only happens once, and climate weather data gathering is a one time affair. The Temperature is here today and gone tomorrow, to be replaced by tomorrow’s Temperature. So you have a sample of one for each member of the data set. And nobody knows, who it was that actually measured even that one sample, or where and when they measured it. Well they don’t measure it anyway; they calculate it from some model, so it is not even real observations of anything physical.

peter
August 3, 2015 10:06 am

I think temperature is a bit of a red herring. There is no possible way to measure global change over decades with the spotty recording record from the past. Maybe a century down the road with a hundred years of modern measurements we might be able to make a judgement as to the rise and/or fall of global temperatures.
A more valuable comparison tool would be major weather events. Because they were major, they were recorded, and often in some detail. We can compare what happened in the past and see how it compares to the present.
Unprecedented is a term tossed around with great frequency, but from this site I’ve learned that for pretty much every extreme weather event there is a counterpart if you check back fifty or a hundred years.
For instance we know that serious hurricanes have hit the New York area in the period since European colonization, and earlier still from sedimentary deposits. So there was nothing unprecedented about Sandy, which was not even a hurricane.
Let them point out serious weather events that have no corresponding occurrences. After all, that is what they claim the whole crisis is about. If there are no such events, then there is no crisis, no matter what the temperature is doing.

Mike M. (period)
Reply to  peter
August 3, 2015 10:49 am

Peter,
I see two obvious problems with using major weather events. One is that they are rare, so the statistics suck. The other problem is that you will have to quantitatively define a threshold for what constitutes a major event. Events near the threshold will be much more common than events clearly above the threshold, that is the nature of extreme events. So a small change or error in the threshold produces a large change in the number of events. Now you are back to the problems of comparing old measurements to recent ones, but the errors are amplified.
Consider heat waves. A bias of one degree in temperature dramatically alters the odds of getting N consecutive days above a given T.
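That sensitivity is easy to quantify with a small Monte Carlo (all distribution parameters here are assumed purely for illustration): shift daily temperatures by one degree and the probability of five consecutive days above a fixed threshold rises severalfold.

```python
import numpy as np

rng = np.random.default_rng(2)
threshold = 33.0  # assumed heat-wave threshold, C
run_length = 5    # N consecutive days above threshold
n_summers = 5000

def heatwave_prob(bias):
    """Fraction of simulated 90-day summers containing a qualifying run."""
    hits = 0
    for _ in range(n_summers):
        temps = rng.normal(30.0, 3.0, 90) + bias  # assumed daily mean 30 C, sd 3 C
        run = 0
        for t in temps:
            run = run + 1 if t > threshold else 0
            if run >= run_length:
                hits += 1
                break
    return hits / n_summers

p0 = heatwave_prob(0.0)  # unbiased record
p1 = heatwave_prob(1.0)  # one degree of warm bias
print(p0, p1, p1 / p0)
```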

peter
Reply to  Mike M. (period)
August 3, 2015 4:14 pm

All too true. But, your common person does not pay attention to single digit temperature changes. They do recall the roof blowing off the house. I’m firmly convinced that the only reason the whole GW thing got a toe hold was because we had the perfect storm of events. The internet, and a period of above average years in first world countries making it easy to sell the idea that Temperatures were rising.

markx
Reply to  Mike M. (period)
August 3, 2015 6:05 pm

The ironic part about the overuse and misuse of the term “unprecedented” in climate and weather reporting is that it is almost invariably immediately followed by the word “since”.

Mike M. (period)
Reply to  Mike M. (period)
August 4, 2015 8:13 am

Peter,
“I’m firmly convinced that the only reason the whole GW thing got a toe hold was because we had the perfect storm of events.”
True. The crazy 2005 hurricane season set things up for Al Gore and the complete politicization of global warming. But it was likely just a statistical aberration, aided by the AMO. As I said before the statistics of extreme events suck.
Politically, extreme events are a ratchet. People remember the roof blowing off the house, but they do not remember the roof not blowing off the house.

Ronald
August 3, 2015 10:10 am

The only good data is the raw data. Every adjustment is plain wrong. But yes, I do understand that adjustments need to be made to keep up with the non-existing global warming. So both past and present temperatures must be adjusted to fit the models.
It’s not good, but OK, what to do about it? If you tell someone the temperature is adjusted, you’re a skeptic who doesn’t know about climate.
The only thing we can do is sit back, relax and watch the world turn colder, colder and colder.

Reply to  Ronald
August 3, 2015 10:38 am

Their adjusted data is nonsense. It does not count.

Editor
Reply to  Salvatore Del Prete
August 6, 2015 3:52 am

But raw data will lie to you. The current adjustment procedures are done in exactly the wrong way — and in the wrong direction — but some adjustment (including dropping the badly perturbed stations) is required.
For example, all of the CRS trend data is spuriously high because of equipment issues. So the entire surface record is inflated from the getgo. And rather than adjusting CRS data to conform with what we know about MMTS trends, NOAA and GISS do the opposite and adjust MMTS trends to conform with CRS trends. While homogenization adjusts the well sited station trends upward to match those of the poorly sited stations. All ass-backwards.
If you want to “disprove 10,000 scientists” all you have to do is kick the pins out from under their data. That’s where Anthony and our team come in.

Reply to  Evan Jones
August 6, 2015 5:02 am

“If you want to “disprove 10,000 scientists” all you have to do is kick the pins out from under their data. ”
Evan,
That’s a good point. You’ll find that Tony Heller has been kicking pins for several years. See this compilation for the best evisceration of the NOAA/GISS fraud:
Alterations to Climate Data
https://stevengoddard.wordpress.com/alterations-to-climate-data/

Mark Albright
August 3, 2015 10:20 am

I have begun monitoring the USA monthly mean temperature using the USCRN data:
http://www.atmos.washington.edu/marka/crn/
July 2015 finished -1.0 degrees F below normal (2005-14):
http://www.atmos.washington.edu/marka/crn/201507.69.txt

August 3, 2015 10:48 am

Land air temperatures are obviously of interest to Man, but for AGW it is the sea that matters, since there is a lot more sea than land, and the “scary” warmings only come about from water vapour, which depends on sea surface temperature.
GISS may well be making a pig’s ear of land air temperatures, but maybe sceptics are devoting an undue amount of energy to the issue.

Mike M. (period)
Reply to  climanrecon
August 3, 2015 10:51 am

“GISS may well be making a pig’s ear of land air temperatures, but maybe sceptics are devoting an undue amount of energy to the issue.”
Thumbs up.

Bill Treuren
Reply to  Mike M. (period)
August 3, 2015 2:03 pm

Yup, but the lag is greatest at sea. Logically the land temps are the canary in the coal mine.
I would like to see the satellite data split between land and sea; it may give a better picture. The urban impact is trivial at a land-cover level.

george e. smith
Reply to  climanrecon
August 3, 2015 11:43 am

Well, sea temperatures prior to about 1980 were just rubbish anyway, because sea water temperatures and sea air temperatures are not the same, and they are not correlated, so you can’t after the fact get one from the other.
Add to that, ocean waters circulate and the currents meander. So even if a research vessel returns to the same GPS co-ordinates a month or a year later, there is no assurance that it is in the same water it was previously in.
g

Reply to  climanrecon
August 3, 2015 3:40 pm

The problem with the adjustments was warned about by Karl Popper in his book The Logic of Scientific Discovery. (As you have a degree in physics and hold a PhD you will know the following; anyway, Karl Popper was the mastermind behind the modern scientific method, the empirical method.):
“it is still impossible, for various reasons, that any theoretical system can ever be conclusively falsified. For it is always possible to find some way of evading falsification, for example by introducing ad hoc an auxiliary hypothesis, or by changing ad hoc a definition. It is even possible without logical inconsistency to adopt the position of simply refusing to acknowledge any falsifying experience whatsoever. Admittedly, scientists do not usually proceed in this way, but logically such procedure is possible”
For this reason Karl Popper ruled out imprecise definitions, ad hoc changes of hypotheses, and ad hoc changes of definitions from the empirical method. Hence such changes are unscientific and are excluded from the modern scientific method.

Reply to  climanrecon
August 3, 2015 4:22 pm

As you point out, water vapor is very significant – but believe it or not the IPCC doesn’t even regard it as a “natural forcing” agent. It seems like they regard the system as inherently and extremely stable.
Regarding sea temperature and land air temperature and their combination, there are several issues with definitions, physics, and scientific theory.
For example: exactly what is supposed to be warming – and how much?
Is it the troposphere, the near-surface air temperature, the sea surface temperature, the temperature of the deep oceans, or some combination?
It matters, because the amount of energy which may warm the atmosphere by 1 K (K = kelvin; a 1 K change equals a 1 °C change) is only enough to warm the oceans by about 0.001 K.
But is the theory precisely defined?
No!
What about the various temperature products which estimate global temperature, then – do they take into account the different heat capacities of the oceans and the troposphere?
No!
Have they defined the measurand (what they are measuring – or providing an estimate for)?
No!
Have they predicted a range of observations which would falsify their theory?
No!
And that is a pity – because the theory will then not be falsifiable.
And if it isn’t falsifiable it isn’t science.
As phrased by Popper:
“I shall not require of a scientific system that it shall be capable of being singled out, once and for all, in a positive sense; but I shall require that its logical form shall be such that it can be singled out, by means of empirical tests, in a negative sense: it must be possible for an empirical scientific system to be refuted by experience.”

August 3, 2015 10:50 am

Mark, how does the trend for the last ten years from USCRN compare to GISS for the US?

Mark Albright
Reply to  Cardin Drake
August 3, 2015 11:57 am

69 sites out of 114 total USCRN sites now have 10 years of record over the USA48 domain. Here are the annual results for the 10 years of the “USA National Thermometer” (NAT69) in deg F ranked cold to warm:
Mean Anom
————————————–
1) 2009 51.75|-1.03
2) 2008 51.81|-0.92
3) 2013 51.99|-0.79
4) 2014 52.10|-0.67
5) 2010 52.53|-0.26
6) 2011 52.78| 0.00
7) 2005 53.05| 0.27
8) 2007 53.15| 0.37
9) 2006 53.73| 0.95
10) 2012 54.89| 2.12
I don’t have a comparison to GISS but here is the comparison of annual mean temperature between NAT69 and USHCN:
USHCN NAT69 DIFF
————————————
2005 53.64 53.05 +0.59
2006 54.25 53.73 +0.52
2007 53.65 53.15 +0.50
2008 52.29 51.81 +0.48
2009 52.39 51.75 +0.64
2010 52.98 52.53 +0.45
2011 53.18 52.78 +0.40
2012 55.28 54.89 +0.39
2013 52.43 51.99 +0.44
2014 52.53 52.10 +0.43
————————————
MEAN 53.26 52.78 +0.48
2005-2009 +0.55
2010-2014 +0.42
Except for a half degree offset between the two measures of USA48 annual mean temperature, the variability in annual temperature matches quite closely over the USA48 domain and over the 10 year period 2005-2014.
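As a sanity check, the anomaly column above can be recomputed from the listed annual means – a quick sketch only; small mismatches (e.g. 2008) presumably reflect rounding in the published means:

```python
# Recompute NAT69 anomalies from the annual means listed above,
# using the 2005-2014 mean as the baseline.
means = {
    2005: 53.05, 2006: 53.73, 2007: 53.15, 2008: 51.81, 2009: 51.75,
    2010: 52.53, 2011: 52.78, 2012: 54.89, 2013: 51.99, 2014: 52.10,
}
baseline = sum(means.values()) / len(means)            # ~52.78 deg F
anomalies = {yr: round(t - baseline, 2) for yr, t in means.items()}

print(f"baseline = {baseline:.2f} F")
for yr in sorted(anomalies, key=anomalies.get):        # ranked cold to warm
    print(yr, f"{anomalies[yr]:+.2f}")
```

Most values match the table to within a couple of hundredths, consistent with the anomalies having been computed from unrounded means.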

Reply to  Mark Albright
August 3, 2015 7:08 pm

Thanks, that’s interesting. So they correlate well, but the half degree is about 75% of the “global warming” in the U.S.

more soylent green!
August 3, 2015 11:02 am

My contention is that after all the homogenization and adjustments, the GISS dataset no longer qualifies as “data.”

firetoice2014
Reply to  more soylent green!
August 3, 2015 1:14 pm

Correct. It is an estimate of what the data might have been, had they been collected in a timely manner from properly selected, sited, calibrated, installed, and maintained instruments.
We do not do ourselves any favors by referring to the post “adjustment” temperature records as data sets.

Anne Ominous
August 3, 2015 11:16 am

When including charts and graphs like the pie chart above, by all means size it to fit the page as you have here. But please PLEASE include a link to a full-size version. Because those are just too small to see clearly.

Anne Ominous
Reply to  Anne Ominous
August 3, 2015 11:18 am

Pardon… the pie chart is fine. I meant the map. It is just too small to read clearly.

Reply to  Anne Ominous
August 3, 2015 11:37 am

Anne, the full preliminary Surface Stations paper by Watts et al. is available in the lower right corner of WUWT.

Latitude
August 3, 2015 11:22 am

warming the past moves all the temperatures up…
Since UHI makes nights warmer…changing the time of day moves it all up again

george e. smith
Reply to  Latitude
August 3, 2015 11:49 am

And if UHI makes nights warmer, that means that the UHI will radiate faster than before so contribute a greater amount of energy to the earth energy loss, so heat islands may be a good thing, provided you assign the correct Temperature to them, and not some homo-genized fake Temperature. Don’t forget, they also radiate much faster during the day; much, much more than at night.

Latitude
Reply to  george e. smith
August 3, 2015 12:09 pm

don’t forget to adjust up for UHI….

August 3, 2015 11:22 am

Suppose that we could find 20 or so sites around the USA that have been there since 1900 or so; and that have always been away from urbanization, have not moved, and we don’t suspect that any government goons have tampered with the records. Suppose we used these long term sites and their raw data — what do you suspect we would find?
Can this be done?

Reply to  markstoval
August 3, 2015 11:41 am

You can do it. The post provides all the rural CRN 1 stations. Go add all the rural CRN 2 stations from SurfaceStations.org. Average the lot for starters. To be fancy, do a spatially weighted average.
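A minimal sketch of the spatially weighted average suggested here, using a coarse lat/lon grid with cosine-latitude area weights; the station names, coordinates, and trend numbers below are invented purely for illustration:

```python
import math

# Area-weight rural station trends onto a lat/lon grid so that clusters
# of nearby stations don't dominate the national mean.
# (station_name, lat, lon, trend_C_per_decade) -- illustrative numbers only
stations = [
    ("A", 44.8, -117.8, 0.05),
    ("B", 44.9, -117.9, 0.07),   # close to A: shares a grid cell with it
    ("C", 34.7, -118.4, 0.12),
]

CELL = 5.0  # grid cell size in degrees

def cell_of(lat, lon):
    return (math.floor(lat / CELL), math.floor(lon / CELL))

# 1) average the stations within each grid cell
cells = {}
for _, lat, lon, trend in stations:
    cells.setdefault(cell_of(lat, lon), []).append(trend)
cell_means = {c: sum(v) / len(v) for c, v in cells.items()}

# 2) weight each cell by cos(latitude of cell center), proportional to area
num = den = 0.0
for (ilat, _), mean in cell_means.items():
    w = math.cos(math.radians((ilat + 0.5) * CELL))
    num += w * mean
    den += w
weighted_trend = num / den
print(f"{weighted_trend:.3f} C/decade")
```

Note how stations A and B collapse into one cell first, so the pair counts once rather than twice in the national average.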

Reply to  ristvan
August 3, 2015 12:20 pm

One would assume you mean follow one of the two links in the post above. The one to GISS gives me a DNS error, and the one to SurfaceStations.org could use a link to the data you suggest. But as school has re-started for the year here, I’ll not have time to play with the idea for a long while.
I do wonder why no organization has offered to publish the raw data from selected “good” sites. Perhaps Wood for Trees will be helpful in this regard. I’ll check when time allows.

Reply to  markstoval
August 3, 2015 1:55 pm

It’s been done.
You will find a minimal UHI effect.
Zeke’s paper:
http://onlinelibrary.wiley.com/doi/10.1029/2012JD018509/full
On a global basis Zeke and I did the same thing.
Again a minimal effect was found – in the noise.
The biggest problem is coming up with a good definition of what counts as rural and what counts as urban.
The first thing to look for (see the post) is the absence of a QUANTIFIABLE definition of what is rural.
As Oke found, over 50% of UHI studies used airports as the rural station.
Basically, define rural in a way that human judgment is removed from the process (avoiding confirmation bias).
Next look at only rural stations.
You will find that the trends over time don’t change.
Other things you can do:
1. Look at reanalysis: same trends
2. Look at marine air temperature: same trends
3. Look at reanalysis that uses no temperature measurements as inputs: same trends
Bottom line: no measurable UHI effect on a global basis.
You can however cherry-pick anything you want out of records.
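The “quantifiable definition of rural” point above can be sketched as a purely numeric classifier. The metadata fields and thresholds here are hypothetical, chosen only to show how human judgment drops out of the selection:

```python
# Replace subjective "looks rural to me" judgments with fixed numeric
# thresholds applied to station metadata. All cutoffs are hypothetical.

def is_rural(pop_density_per_km2, km_to_nearest_airport, night_light_radiance):
    """Classify a station as rural using only numeric criteria."""
    return (pop_density_per_km2 < 10        # sparsely populated area
            and km_to_nearest_airport > 10  # airports are a poor 'rural' proxy
            and night_light_radiance < 5)   # dark at night in satellite imagery

# A station next to an airport fails even if the population is low:
print(is_rural(2, 3, 1))    # False
print(is_rural(2, 40, 1))   # True
```

Any such rule can be argued with, but because it is fixed in advance it cannot be bent station-by-station to confirm a preferred result.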

Reply to  Steven Mosher
August 3, 2015 4:41 pm

Hi SM. Figured in advance you would eventually show up. This logical ambush was set up partly just for you months ago. Let’s now collect some ‘scalps’.
1. The example set chosen was all SS.org CRN 1, less one that was explicitly dubious on its face. No cherry-pick at all. Nice try. Fail. You deny that pristine sites are relevant? Or resent ‘cherry picking’ only sites without any siting problems? Shame on you…. Berkeley science, perhaps. NOT Feynman science.
2. The UHI correction BEST denies but NASA explicitly acknowledges is demonstrated by NASA GISS Tokyo. I even illustrated NASA’s own example. Take your discrepancy up with NASA, not me. They think it exists. EPA thinks it exists. Their websites both say so. (And I believe them.) Your problem, not mine.
3. Now, you might reply that on average for all BEST sites UHI does not exist. Well, GISS and NCDC and AUS BOM obviously disagree. And, BEST also provably does not always do proper data ingestion, so any conclusions have to be treated circumspectly. Ingestion examples (oh my, irrefutably illustrated in the ebook, or perhaps also in a sequel to this post) include BEST problems in Reykjavik and Rutherglen, two pristine non-US stations.
And BEST has yet to explain its ‘regional expectation’ corrections to station 166900. Look it up, along with previous comments about it. See also footnote 24 to essay When Data Isn’t (no different than here in GISS, just differently proven) for that station’s BEST specifics. You imploded over that example some time ago over at Climate Etc. But you still have no answer other than that the BEST ‘model’ is better than what was actually measured at BEST 166900 – to which I say, what are you smoking? Anthony Watts got those referenced details along with my guest submission. He chose not to post them (yet). They were provided to him to establish my proposed post’s bona fides, for him to use at his discretion. Have a nice day.

Reply to  Steven Mosher
August 3, 2015 5:45 pm

Beautiful response, ristvan.
The data manipulators, at BEST, NASA, NOAA, and all, are apparently incapable of taking on competent criticism and feedback.
They’ve evidently built quite a tower on very shaky ground. Pointing out the emperor has no clothes is a threat.
Stay on them. Don’t let their arrogance shake you from their tail. Clear, factual analysis scares the daylights out of them.

Reply to  Steven Mosher
August 3, 2015 7:18 pm

Hi rud
“Hi SM. Figured in advance you would eventually show up. This logical ambush was set up partly just for you months ago. Lets now collect some ‘scalps’.
1. The example set chosen was all SS.org CRN1, less one that was explicitly dubious on its face. No cherry pick at all. Nice try. Fail. You deny that pristine sites are relevant? Or resent ‘cherry picking’ only sites without any siting problems? Shame on you…. Berkeley science, perhaps. NOT Feynman science.
a) Anthony has new ratings. Sorry, you used old data.
b) The rating system itself is subjective and has never been field tested.
I actually talked to LeRoy’s colleagues about this; they did limited testing.
c) There are actually over 200 sites that are ranked as pristine, with better sensors than the 14 you selected.
2. The UHI correction BEST denies but NASA explicitly acknowledges is demonstrated by NASA GISS Tokyo. I even illustrated NASA’s own example. Take your discrepancy up with NASA, not me. They think it exists. EPA thinks it exists. Their websites both say so. (And I believe them.) Your problem, not mine.
a) We don’t deny any UHI corrections. The algorithms do them.
b) On a GLOBAL basis no one has successfully shown a UHI effect.
I’ve got piles of failed attempts and one attempt that showed a slight effect.
3. Now, you might reply that on average for all BEST sites UHI does not exist. Well, GISS and NCDC and AUS BOM obviously disagree. And, BEST also provably does not always do proper data ingestion, so any conclusions have to be treated circumspectly. Ingestion examples (oh my, irrefutably illustrated in the ebook, or perhaps also in a sequelae to this post) include BEST problems in Reykjavik and Rutherglen, two pristine non-US stations.
a) The argument is NOT that it doesn’t exist.
b) The effect exists and is real, and you can find it without a doubt.
c) Negative UHI effects also exist. Google that. Have fun.
d) On a global basis the effect is near the noise floor. There is still some hope of pulling it out, but it won’t change the science.
e) Neither Reykjavik nor Rutherglen is pristine.
f) The problem in Iceland is not what people think. It’s actually a change in land cover.
(Confirmed by a recent visit to their headquarters.)
g) There is no ingest problem.
And BEST has yet to explain its ‘regional expectation’ corrections to station 166900. Look it and previous comments about it up. See also footnote 24 to essay When Data Isn’t (no different than here in GISS, just differently proven) for that station’s BEST specifics. You imploded over that example some time ago over at Climate Etc. But still have no answer than BEST ‘model’ is better than watch was actually measured at BEST 166900–to which I say, what are you smoking?
a) 166900 is in Antarctica.
b) This has been explained to you before, but you choose NOT to read or choose to forget.
c) I will try again. Antarctica is one of the most challenging areas for any spatial statistics. The reasons are pretty simple: 1) the shortness of the records; 2) the large distance between stations; 3) the presence of weather phenomena (katabatic winds) which are challenging for an approach that relies on LAPSE RATE, as ours does. The corrections to that record are VERY LIKELY TO BE WRONG, as I pointed out to the first person who ever commented on them. Globally they make no difference. How do we know that? We know because the answer you get using ONLY RAW DATA and no adjustments isn’t that much different. IF you want a more accurate version of Antarctica I would suggest using the approach that O’Donnell used. It’s like the Cowtan and Way approach, only for the South Pole. OR you could do a specialized regression for that area that took into account the unique geography of the region. We model the climate as a function of latitude and elevation, just as Willis has done here.
That regression explains 90+% of the variation. WHERE in the world does this type of regression break down? It breaks down (has larger errors) in places where temperature inversions dominate during certain seasons. It also has larger errors where there are strong coastal effects. The regional-expectation approach minimizes the GLOBAL ERROR. Minimizing the global error does not mean that large local errors cease to exist.
Yet you have avoided the real issue. Take the 200+ pristine stations (CRN and RCRN): they don’t differ from the “bad” stations.
Every year going forward that story will be the same.
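The “climate as a function of latitude and elevation” regression described above can be sketched as an ordinary least-squares fit. The data below are synthetic, generated with an assumed latitude gradient and a ~6.5 C/km lapse rate, so this only illustrates the form of the model, not any real product:

```python
import numpy as np

# Model a station's mean temperature as a linear function of latitude
# and elevation, then fit by ordinary least squares.
rng = np.random.default_rng(0)
n = 200
lat = rng.uniform(25, 65, n)          # degrees north
elev = rng.uniform(0, 3000, n)        # metres
temp = 30.0 - 0.6 * lat - 0.0065 * elev + rng.normal(0, 0.5, n)

# OLS fit: temp ~ b0 + b1*lat + b2*elev
X = np.column_stack([np.ones(n), lat, elev])
b, *_ = np.linalg.lstsq(X, temp, rcond=None)

resid = temp - X @ b
r2 = 1 - resid.var() / temp.var()
print(f"lat coeff {b[1]:.3f} C/deg, lapse {b[2]*1000:.2f} C/km, R^2 {r2:.3f}")
```

On data like this the fit explains well over 90% of the variance; it is exactly where inversions or strong coastal effects break the lapse-rate assumption that the residuals grow.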

willnitschke
Reply to  Steven Mosher
August 3, 2015 9:57 pm

In actual science, when it’s shown that your methods repeatedly fail, you’re sent back to the drawing board. In climate science you just hand wave it away with a few snide remarks about “cherry picking” and carry on doing your junk science. Enjoy it while you can get away with it, but the tide always turns eventually.

RWturner
August 3, 2015 11:23 am

I know how this problem can be fixed without homogenization and interpolation. Let’s build and deploy an array of satellites with advanced microwave sounding units on board to derive the temperature of the troposphere. If only someone had thought of this before, we could have been more accurately measuring the global average temperature since 1979.

kim
August 3, 2015 11:32 am

Heh, I first read the headline as: How good is the NASA GISS global temperature disaster?
==============

August 3, 2015 12:07 pm

GISS data is obsolete; it is that simple. Satellite data has replaced it.

Andrew
Reply to  Salvatore Del Prete
August 3, 2015 1:48 pm

While we’re on the subject: since we have satellites, WHY does NASA even bother with land data? (I know why they do NOW, but why did they bother before Obama?)

Reply to  Andrew
August 3, 2015 4:25 pm

1. because satellites measure something different
2. because the surface is where we live
3. because understanding the climate means understanding temperature from the bottom of the sea to the TOA
4. because satellite series are short (the LIA disappears and the MWP disappears)
5. curiosity

Reply to  Andrew
August 3, 2015 5:15 pm

Good point – satellites measure the temperature in the troposphere.
According to the theory, energy is absorbed by CO2 mainly in the troposphere.
When the troposphere is not warming, how can the deep oceans be warming (if they are)?
I think they don’t like the satellite records because the records are suitable to falsify their theory.
That is – if the theory had been precisely defined, and thereby falsifiable.
And if they had acted scientifically and predicted a range of observations which could falsify their theory.

Simon
Reply to  Andrew
August 3, 2015 6:01 pm

Steven
Thank you. It’s a pity people round here don’t understand your 5 good reasons before commenting here.

Reply to  Andrew
August 3, 2015 6:41 pm

There is no missing hotspot.

willnitschke
Reply to  Andrew
August 3, 2015 9:45 pm

Why do Mosher’s points sound like the usual climate alarmist talking points/drivel, rather than a serious discussion of the issues?

Mark Albright
August 3, 2015 12:11 pm

The surfacestations.org web page has a link to the gallery, but it seems to be broken:
http://gallery.surfacestations.org/main.php
Does anyone know how to access the gallery? I would like to begin reviewing each site in Washington and Oregon.

Catcracking
August 3, 2015 12:12 pm

This is an excellent presentation for knowledgeable “skeptics”.
We need a version of the presentation that can be understood by average folk who cannot or will not wade through a comprehensive presentation, if we are even to engage in the “battle” and refute the administration’s false claims.

Reply to  Catcracking
August 3, 2015 4:46 pm

Catcracking, that was done on this specific topic in essay When Data Isn’t in ebook Blowing Smoke, available from my publisher in iBooks or Amazon Kindle or Kobo or B&N Nook. Foreword by Prof. Judith Curry. And much more ammunition was provided–all with references for you all to use.

August 3, 2015 12:41 pm

It might be a lot easier, and more useful, if you just provided a link to the expert on this issue:
https://stevengoddard.wordpress.com/alterations-to-climate-data/

Eric Barnes
Reply to  kentclizbe
August 3, 2015 4:56 pm

It doesn’t take an expert to follow a little bit of logic and common sense. Tony’s efforts demonstrate that the magnitude of the adjustments don’t follow from the data.

Reply to  kentclizbe
August 3, 2015 5:25 pm

Tony Heller is an expert in coming up with clever tests, capable of falsifying even poorly defined theories.
That is science. And that would be science even if it was done by my dog.

Reply to  kentclizbe
August 3, 2015 6:13 pm

Brian,
You speak your English, and the rest of us will speak our English, ok?
expert–n. a person who has a comprehensive and authoritative knowledge of or skill in a particular area
Tony Heller has a comprehensive and authoritative knowledge of the historical temperature data manipulation committed by the apocalypse-mongers at NOAA and their accomplice groups and individuals.
Tony has great skill in ferreting out the truth, from actual data, that is erased by the carbon-cabal.
If that’s not an expert, I don’t know what is.
And to complement his expertise, Tony has the cojones to stand up to the warmers with their insults, threats, and arrogance. At the same time, Tony has to deal with the back-biters and chihuahuas nipping at his heels from behind.
Who do you think is an expert on this issue?

Brian G Valentine
Reply to  kentclizbe
August 3, 2015 6:25 pm

[commenter using fake identity, deleted per WUWT policy –mod]

Reply to  Brian G Valentine
August 3, 2015 7:00 pm

Brian,
The subject matter is data interpretation and analysis, and software code writing and manipulation. What do you think a software engineer does? That’s Tony’s profession. He’s not an academic, or a grant-sucker like your supposed “experts.”
Michael Mann? You’re joking, right? The “expert” who tortured code until it spit out a hockey stick?
I’m not here to defend Tony, but you clearly harbor animus towards him (that’s not healthy you know, let it go, you’ll feel better!), so I’ll just share Tony’s own previous comments on this issue:
“I have been emphasizing the difference between commercial and government software.
“Commercial software goes through constant review. Mine gets reviewed 2-3 times a day by my boss.
“Government software on the other hand has no quality control. Consider the Obamacare web site or the just announced USHCN software disaster.
“A bunch of scientists with no software training cranking out code, with the only review process being that the output confirms their biases. The error USHCN has uncovered is so blatant, that it obviously has never been through any kind of serious verification.
“They were just happy to see a lot of warming, and it didn’t matter that global warming research, climate models, and US domestic policy in Washington were based on their graphs. It wasn’t worth spending two hours doing any verification.”
Yes, that’s an expert.
Jealousy is a green-eyed monster, you know.

Reply to  kentclizbe
August 3, 2015 7:18 pm

Brian,
Clearly you have an issue with language, I apologize for not realizing that sooner.
Again, Heller is an expert at analyzing data–specifically NOAA/GISS’s “homogenizations” of the actual, raw temperature data.
That’s what this discussion is about. Heller, as I mentioned at the beginning of this discussion, is the leading expert in the world on this issue.
Is that clear enough? Here, I’ll help a bit more: I did not say Heller is an expert climatologist. I did not say Heller is an expert at peer reviewed publishing. I did not say Heller is an expert academic.
Heller’s insights about the NOAA/GISS changes/manipulations/homogenizations/cool-the-past-warm-the-present are the ne plus ultra on this issue.
Hope that helps clear up the misunderstanding. You may want to just re-read the earlier messages. They are very clear. Good luck!

lee
Reply to  kentclizbe
August 3, 2015 8:58 pm

‘Logic and common sense still do not make one an expert.’
But you are a pretty poor expert without the same.

Reply to  kentclizbe
August 4, 2015 6:03 am

Brian,
You’re arguing with yourself, dude!
At least you’re guaranteed a win! Might be a good strategy for a high school debate squad!
Who said Heller is an expert in climatology?
You brought it up, not me.
Try focusing on the content of my notes, not the voices in your head, if you’re responding to me. If you’re responding to voices that only you hear, and languages that only you comprehend, you might want to just send yourself an email instead.
Here, I’ll try again: Tony Heller is an expert in data and identifying manipulation of data. Tony Heller has extensive experience (that’s the semantic root of the word “expert”) in examining and interpreting the raw data sets used by NASA/NOAA/GISS and all their partners in scare-mongering. Tony Heller has extensive experience in examining and interpreting the changes/homogenization/tweaks/adjustments/algorithms used to cool the past and warm the present by NASA/NOAA/GISS and all their co-conspirators. Tony Heller has extensive experience in writing about the results of his investigations. Tony Heller has extensive experience in responding to luke-warmers who jumped on the bandwagon to criticize his reporting of the fraudulent GISS/NOAA/NASA temperature data manipulation. Tony Heller has extensive experience in responding to man-made-global-warming-Gaia-is-boiling crazies’ attacks.
Just read the above paragraph slowly. Focus. Count to 3. Notice that there is nothing about Heller being an expert in climatology, or tree rings, or CO2, or Shetland sheepdogs. Breathe deeply. Let it sink in.
Happy to discuss the issue with you. But you’ll have to continue the argument with yourself all by yourself. Good luck!

Cube
Reply to  kentclizbe
August 4, 2015 8:18 am

Expert = a higher authority that confirms my preexisting bias.

Reply to  kentclizbe
August 4, 2015 9:21 am

Valentine,
Keep arguing with yourself. It’s fun to watch!

Reply to  kentclizbe
August 4, 2015 10:00 am

Valentine,
Yes, international cooperation among a select group who benefit financially, professionally, socially, and more = conspiracy.
Clearly the concept is beyond you, but Tony Heller has been proclaiming it for years now.
The host here, for years, denigrated Tony’s work. Now even WUWT is finally realizing the reality that Tony has been on to for a long time – the international “climate change” scam is a criminal conspiracy, making use of fraudulent data, in order to support a power-grab by international politicians.
See the article just posted on WUWT:
http://wattsupwiththat.com/2015/08/04/hadcrut4-joins-the-terrestrial-temperature-tamperers/
If you’re still confused, see the ClimateGate emails–that was the smoking gun.
Keep searching for your “experts” and we’ll bust the scammers.

Reply to  kentclizbe
August 4, 2015 12:07 pm

You like Tony’s quotes?
Here is a recent interview with Tony. He explains his background, his work against the scam, and more.
Listen and weep:
http://duanelester.com/2015/08/03/interviews-with-jeff-dunetz-tony-heller-from-real-science-and-sarah-zagorski-from-livenews-com/

August 3, 2015 1:05 pm

“That is a real problem, since only 7.9% of USHCN is CRN 1 or 2. The NOAA solution has been to set up USCRN. This is not yet (AFAIK) being used to detect/correct USHCN station microsite issues in either the NCDC or GISS homogenization algorithms.”
http://journals.ametsoc.org/doi/abs/10.1175/JTECH-D-14-00172.1
https://www.ncdc.noaa.gov/crn/publications.html
In general CRN trends and GISS trends are the SAME.

Reply to  Steven Mosher
August 3, 2015 3:03 pm

Trends over the duration of the hiatus say nothing about the baked-in (homogenized) additions.
Basic calculus: the first derivative removes the underlying constant value. And as long as appropriate comparison intervals are selected, GISS and NCEI achieve their politically useful results.
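The calculus point is easy to check numerically: adding a constant (time-invariant) adjustment shifts the level of a series but leaves its fitted trend untouched; only a time-varying adjustment can change the trend. A sketch with an illustrative linear series:

```python
import numpy as np

# A constant "baked-in" offset changes the level but not the trend:
# d/dt (T(t) + c) = dT/dt. Demonstrate with a linear least-squares fit.
years = np.arange(2005, 2015, dtype=float)
temps = 52.5 + 0.02 * (years - 2005)     # illustrative series, 0.02 deg/yr

slope_raw = np.polyfit(years, temps, 1)[0]
slope_adj = np.polyfit(years, temps + 0.5, 1)[0]   # add a 0.5-degree offset
print(slope_raw, slope_adj)   # identical slopes
```

This is why a constant half-degree offset between two networks can coexist with closely matching trends, and why the choice of comparison interval matters for any time-varying adjustment.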

Reply to  joelobryan
August 3, 2015 3:08 pm

Easier.
Pick stations NOT USED by GISS.
There are 20K stations in the US.
Remove the 1,200 used by GISS.
Remove all urban stations.
Answer: Doesn’t change.
Been there, done that.

Village Idiot
August 3, 2015 1:08 pm

There’s only one litmus test as to whether data sets are true or contrived. Do the graphs point up or down (flat will do)?
Four legs good, two legs bad

August 3, 2015 2:28 pm

If you want to compare the best data, CRN, with its triple-redundant thermometers, against the “bad” (haha) stations, you can do that here:
http://www.ncdc.noaa.gov/temp-and-precip/national-temperature-index/
Oops.
You have a standard – the WUWT-approved gold standard, CRN.
You have a theory: non-gold stations show artificial warming.
If non-gold stations show artificial warming, then when you compare them with gold stations you should see a difference.
After 10 years of data…
No difference.
5 years from now, if there is no difference, what will skeptics say?
10 years?
20 years?
Bottom line: you expect there to be a difference.
There isn’t.
Next.
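The test described above amounts to fitting a trend to the difference of the two network averages. A sketch with synthetic monthly anomalies (seed fixed, no spurious warming injected into either network) shows what a null result looks like:

```python
import numpy as np

# If poorly sited stations added spurious warming, the trend of
# (non-gold minus gold) network averages should be positive.
# Synthetic 10-year monthly anomalies with an identical climate signal.
rng = np.random.default_rng(1)
months = np.arange(120)
signal = 0.001 * months + 0.3 * np.sin(2 * np.pi * months / 12)
gold = signal + rng.normal(0, 0.05, 120)
nongold = signal + rng.normal(0, 0.05, 120)   # no extra warming added

diff = nongold - gold
trend = np.polyfit(months, diff, 1)[0] * 120  # degrees per decade
print(f"difference trend: {trend:+.3f} per decade")
```

The shared climate signal cancels entirely in the difference, so whatever trend remains reflects only siting effects plus noise – the quantity actually in dispute.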

Reply to  Steven Mosher
August 3, 2015 4:55 pm

Yup. No difference during the ‘pause’. Difference before? Perhaps you are clairvoyant. We have no data to examine, despite Karl’s ludicrous attempt to adjust using 0.12 ± 1.7C (per his reference Kennedy (2011), since neither Karl nor Huang gave an error estimate – but that is some previous thread somewhere else, so I digress), especially for oceans before ARGO. Difference to CMIP5 models… well, bring out your favorite model apologies. You know: missing heat, deep heat, intramodel diversions, and (what?) about 52 other excuses for their now 18-year projection failures.
SM, it is increasing fun to see your increasing spin on all this. Berkeley Earth appears to be getting warmer (metaphorically)?

Reply to  ristvan
August 3, 2015 6:38 pm

Karl is SST.
There is no UHI in SST.
The pause should make no difference unless you believe that UHI only magically operates during the pause.
In other words: during the pause there was no warming at pristine stations.
IF UHI introduces false warming, you would expect the rest of the network (UHI-infected stations) to show SOME warming.
But they don’t.
If the gold standard stations were FLAT, what would you expect bad stations to show?
A) also flat?
B) cooling?
C) warming?
Maybe UHI took a vacation? But the physical causes of UHI were still there:
A) changes to surface properties (decreased evapotranspiration)
B) waste heat from human activity
C) changes in albedo
D) changes in surface roughness
Did these stop causing elevated warming?
No.

willnitschke
Reply to  ristvan
August 3, 2015 9:49 pm

“The pause should make no difference unless you beleive that UHI only magically operates during the pause.”
More drivel… If you’re measuring how different models of land temperature affect trends when temperatures change, and there is no temperature change, you will not see an effect on trends. Mosher seems to be making it up now as he goes along.

Reply to  Steven Mosher
August 3, 2015 9:05 pm

That’s interesting. When Mark Albright compared the two a little earlier in the thread, the correlation was strong but there was about a half degree difference in actual numbers. Do you know why that is not showing up in this temperature anomaly comparison? Strange.

Juan Slayton
August 3, 2015 2:29 pm

Hi Rud,
Since I took pictures of 2 of the stations you graph (Baker City and Fairmont) I thought I would take a closer look than I normally would, but I am having trouble finding what you reference. Could you advise me as to where in http://www.surfacestations.org/ it lists the 14 “pristine” stations? Also, at http://www.data.giss.nasa.gov/gistemp you advise me to “click on the monthly chart,” but I don’t find such a chart. Would appreciate your help here.
I am not qualified to rate these stations; I leave that to Watts, Jones, et al. However, I would have real reservations about the assumption that Fairmont has no micro-site problems. When we got a good look at it we found that the Stevenson screen was set on a concrete structure similar to a bird-bath, which pretty much blocked the bottom ventilation. My own concern was that the screen sat on a steep rise just to the south of the reservoir. Seems to me any significant amount of water in the reservoir would be likely to affect the air temperature coming up that slope. One would need a record of the fill levels over time to even begin to assess possible effects. (The reservoir was closed after the Sylmar quake; I don’t know if it is currently used for anything. It was pretty much dry when I was there.)

Reply to  Juan Slayton
August 3, 2015 4:58 pm

I tried to provide the linked references. The SurfaceStations.org material is in an Excel spreadsheet that lists each station, its category (1-5), and its setting (urban/suburban/rural). The GISS raw data is as linked. My apologies if those embedded links do not work for you. They did for me (Mac, not Windows).

Reply to  ristvan
August 3, 2015 6:31 pm

Anthony has since changed the ratings. Best to ask him for the updated unpublished version.

RickA
August 3, 2015 2:37 pm

I don’t like the idea of adjusting data for UHI at all.
We should neither cool the past nor warm the present to adjust for UHI.
If downtown Minneapolis is warmer than rural Minnesota – so be it.
I would rather determine the percentages of urban, suburban, and rural land area, and then make sure that the mix of CRN 1 and 2 sites matches those overall percentages for the USA (as an example).
So if 10% of the area is urban, then only 10% of the thermometers should be urban, and so on.
Wouldn't that be a better way to address the problem than pretending that it isn't actually warmer downtown than it is?
If more people over time change rural to urban, then the proper way to adjust is to adjust the mix of urban and rural thermometers in the database – not to pretend the temperature in downtown Minneapolis was colder than it really was in the past.
Just one person’s opinion.
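RickA's proposal above is, in effect, stratified (area-weighted) averaging. A minimal sketch of the idea, with entirely hypothetical station readings and land-area fractions (none of these numbers come from any real dataset):

```python
# Stratified average: weight each siting class by its share of land area,
# rather than by its share of thermometers. All values are illustrative.

def stratified_mean(temps_by_class, area_fractions):
    """Average station temperatures so each class (urban/suburban/rural)
    contributes in proportion to its land area, not its station count."""
    total = 0.0
    for cls, temps in temps_by_class.items():
        class_mean = sum(temps) / len(temps)      # simple mean within class
        total += area_fractions[cls] * class_mean
    return total

# Hypothetical example: urban stations run warm but cover only 10% of area.
temps = {
    "urban":    [16.2, 16.5, 16.0],               # deg C, warm-biased
    "suburban": [15.1, 15.3],
    "rural":    [14.2, 14.0, 14.4, 14.1],
}
areas = {"urban": 0.10, "suburban": 0.20, "rural": 0.70}

all_temps = [t for ts in temps.values() for t in ts]
naive = sum(all_temps) / len(all_temps)           # station-count weighted
strat = stratified_mean(temps, areas)             # area weighted
```

With these made-up numbers the stratified mean comes out cooler than the naive mean, because the over-sampled urban class is down-weighted to its 10% area share.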

RobL
August 3, 2015 2:46 pm

An interesting opportunity exists in the USA to look at the reverse of UHI – several large cities have been heavily depopulated over the last 30-50 years (St Louis and Detroit, at >60%, being the most standout examples).
https://en.wikipedia.org/wiki/Shrinking_cities_in_the_United_States
Even though a lot of the buildings are still there, their energy consumption will have dropped dramatically. Could make for some interesting comparisons with growing cities.

Reply to  RobL
August 3, 2015 3:06 pm

been there done that.

Tim Hammond
Reply to  Steven Mosher
August 4, 2015 3:21 am

And? If you are going to bother to tell us you have done it, how about bothering to tell us the result?

Neville
August 3, 2015 3:02 pm

Ken Stewart has listed the areas of the planet that have shown no warming. He has used the UAH v6 data set, which shows no warming for 18-plus years. But the south polar area has shown no warming for over 35 years, the contiguous USA for over 18 years, and Australia for over 17 years.
https://kenskingdom.wordpress.com/2015/05/13/call-that-a-pause/

August 3, 2015 3:09 pm

The satellite data sets are the real fly-in-the-ointment for the High priests of the Church of CAGW.

Dennis Bird
August 3, 2015 3:21 pm

I can attest to the heat island effect here in Houston. During this heat wave the official temperature from the airport (11 miles away) is consistently 8 degrees above what I am recording at my home. The official temperature in Pearland, the municipality closest to me (2 miles), is also 8 degrees above.

August 3, 2015 3:26 pm

Trust em as far as I can throw em

Reply to  wickedwenchfan
August 3, 2015 5:02 pm

On my farm, haystacks not cow pies.

Neville
August 3, 2015 3:27 pm

I understand that the IPCC uses HadCRUT4 as the data set of choice. Yet HadCRUT4 shows about 0.8°C of warming since 1850 – that is, over the last 165 years.
http://www.woodfortrees.org/plot/hadcrut4gl/from:1850/offset/trend
But the Lloyd et al. study found that the standard deviation over a century is about 1°C. This IPCC author used the last 8,000 years of ice cores as a proxy. So how is just 0.8°C of warming over the last 165 years supposed to be unusual or unprecedented? And this slight warming comes at the end of a minor ice age. Here's the Lloyd study.
http://multi-science.atypon.com/doi/abs/10.1260/0958-305X.26.3.417

Another Scott
August 3, 2015 3:28 pm

Too bad the politics around CO2 are so charged. If we cared only about how good we were at gathering and analyzing surface temperature data everyone would be better off….

August 3, 2015 3:41 pm

The GISS temperature record is continually adjusted, and the data as it exists now is no longer data; it is unsuitable for any scientific purpose. Like much of the CAGW narrative, it is now just a fairy tale of fiction.

August 3, 2015 4:48 pm

If UHI does not cause a false reading in temperature, then why not use the rural stations for the official record, as they are not encumbered with buildings, vehicles, people, streets, mass transit, etc.?

Reply to  Dale Hartz
August 3, 2015 6:30 pm

Because you get the same answer if you include or exclude urban stations.

Reply to  Steven Mosher
August 3, 2015 10:37 pm

And yet Cleveland Airport temps are 3-5F warmer than the surrounding area. Most (okay, many) NOAA stations are at airports.
Asphalt is 30-40F warmer than grass 20 feet away, and takes more than 12 hours to cool to within even 10F of the grass.

Tim Hammond
Reply to  Steven Mosher
August 4, 2015 3:23 am

So including or excluding data that has a different value from a set has no impact on the set?

Reply to  Steven Mosher
August 5, 2015 7:18 am

Steven – the evidence doesn't show your claim is valid. So prove it.

Steve in Seattle
August 3, 2015 5:41 pm

I have come to the conclusion that almost all the government temp records are NOT reliable, and are simply tools used by the current administration to push the agenda.
NOAA Global Analysis – Annual – Land & Ocean
https://www.ncdc.noaa.gov/sotc/global/201413
Year    Land & Ocean (°C)    (°F)
2014    +0.69 ± 0.09         +1.24 ± 0.16
2013    +0.62 ± 0.09         +1.12 ± 0.16
2012    +0.57 ± 0.08         +1.03 ± 0.14
2011    +0.51 ± 0.08         +0.92 ± 0.14
2010    +0.62 ± 0.07         +1.12 ± 0.13
2009    +0.56                +1.01
2008    +0.49                +0.88
2007    +0.55                +0.99
2006    +0.54                +0.97
2005    +0.62 **             +1.12
2004    +0.54 **             +0.97
2003    +0.56                +1.01
2002    +0.56                +1.01
2001    +0.51                +0.92
2000    no annual data
1999    +0.41                +0.74

** Improved data, Smith & Reynolds
Note: the 1880–2003 average combined land and ocean annual temperature is 13.9°C (56.9°F).
No error information from 2010 back = pure guesses.
Only possible hope is the pristine USCRN network, and with that said, it is way too early to draw any conclusions.
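For what it's worth, the anomalies quoted above can be turned into absolute temperatures using the 13.9°C base-period mean from the footnote, and a rough trend fitted to them. A quick sketch (values transcribed from the list above; 2000 has no annual figure, so it is skipped):

```python
# Convert the quoted NOAA land+ocean anomalies (deg C) to absolute temps
# using the 13.9 C base-period mean, and fit a least-squares trend.
# Values transcribed from the comment above; 2000 has no annual figure.

anoms = {
    1999: 0.41, 2001: 0.51, 2002: 0.56, 2003: 0.56, 2004: 0.54,
    2005: 0.62, 2006: 0.54, 2007: 0.55, 2008: 0.49, 2009: 0.56,
    2010: 0.62, 2011: 0.51, 2012: 0.57, 2013: 0.62, 2014: 0.69,
}
BASE = 13.9  # 1880-2003 average, deg C (from the footnote)

absolute = {yr: BASE + a for yr, a in anoms.items()}  # e.g. 2014 -> 14.59 C

# Ordinary least-squares slope, deg C per year
xs, ys = zip(*sorted(anoms.items()))
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
```

On these 15 numbers the fitted slope works out to roughly 0.09°C/decade, but without the error bars the commenter is complaining about, the significance of that figure can't be judged from the table alone.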

talldave2
August 3, 2015 8:22 pm

I’m no longer able to believe this is happening by accident. Fraud. Scientific fraud.

August 3, 2015 10:29 pm

Here’s ir thermometer readings from (lowest temp to highest temp)
Clear sky
Grass
Concrete
Asphalt
From the front of my house, which faces east; in the afternoon a shadow starts at the front door (where I start taking measurements) and follows the concrete front sidewalk to the asphalt driveway, which gets sun until early evening.
My backyard abuts a 30,000-40,000 acre National Park, air temp is taken on the west side of the house under a 10′ deck. (Park side).

Editor
Reply to  micro6500
August 3, 2015 10:38 pm

micro6500
What have you found for the IR reading for a clear sky (daytime and nighttime) when surface air temperature changes significantly?

Reply to  RACookPE1978
August 3, 2015 10:49 pm

” What have you found for the IR reading for a clear sky (daytime and nighttime) when surface air temperature changes significantly?”
I take the difference between clear sky and concrete (I use that as a reference), while I also have the data from my weather station.
Sky temps track surface temps to some extent, and typically run 80F to over 100F colder, day or night; the difference is smaller the higher the humidity. Clouds are always warmer than clear skies, and can be within about 30F of surface temps.
Now, my IR thermometer measures 8-14 µm, so you'd have to add any other GHG forcing, but 3.7 W/m² is a few degrees max; both clouds and water vapor are much, much larger.
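As a rough sanity check on why a clear sky reads so much colder than the ground: treating both as blackbodies in the Stefan-Boltzmann law gives a net upward flux. The 75F surface and 90F-colder sky below are hypothetical numbers inside the range quoted above, and a real 8-14 µm instrument sees only the atmospheric-window part of the spectrum, so this is an order-of-magnitude sketch only:

```python
# Net blackbody flux between a warm surface and a clear sky that an IR
# thermometer reads ~90 F colder. Illustrative numbers only; an 8-14 um
# instrument samples just part of the longwave spectrum.
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/m^2/K^4

def f_to_k(temp_f):
    """Convert Fahrenheit to Kelvin."""
    return (temp_f - 32.0) * 5.0 / 9.0 + 273.15

surface_k = f_to_k(75.0)          # ~297 K ground/air temperature
sky_k     = f_to_k(75.0 - 90.0)   # sky brightness temperature, ~247 K

# Net upward longwave flux if both behaved as ideal blackbodies
net_flux = SIGMA * (surface_k**4 - sky_k**4)  # roughly 230 W/m^2
```

A net loss of a couple hundred W/m² to a clear sky is the same effect that lets surfaces frost on clear nights even when the air stays above freezing.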

August 4, 2015 12:52 am

“How good is the GISS dataset?”
Not very good.
Another answer is fabricated fictional garbage.

Editor
August 4, 2015 1:10 am

Rud,
Your link to the GISS website: http://www.data.giss.nasa.gov/gistemp is broken, and I am unable to find the page where UHI correction is mentioned. Tokyo is now only minimally adjusted.
Raw: http://data.giss.nasa.gov/cgi-bin/gistemp/show_station.cgi?id=210476620000&dt=1&ds=12
Adjusted: http://data.giss.nasa.gov/cgi-bin/gistemp/show_station.cgi?id=210476620000&dt=1&ds=14
It is also minimally adjusted in the source GHCN V3 dataset: ftp://ftp.ncdc.noaa.gov/pub/data/ghcn/v3/products/stnplots/2/21047662000.gif
The Tokyo graph appears to come from here: https://diggingintheclay.wordpress.com/2009/11/13/climate-fast-food/. Much of the data detailed there no longer holds true due to updates and refinements in the data and processing methods. I have not kept up with the changes, but such changes quickly date any analysis.

Frank Lansner
August 4, 2015 2:32 am

Fantastic, effective, precise, and relevant article!!!
Thank you so much for your effort to shed light on these things, you cannot overestimate how important this is.
Kind Regards, Frank Lansner

ren
August 4, 2015 7:51 am

Pressure anomalies over the Northern Polar Circle. The obvious effect of high cosmic radiation.
http://www.cpc.ncep.noaa.gov/products/stratosphere/strat-trop/gif_files/time_pres_HGT_ANOM_JAS_NH_2015.gif
http://oi58.tinypic.com/vyqkjk.jpg

power engineer
August 5, 2015 3:19 am

The urban heat island effect does not only affect urban areas. My small town of 5,000 people is a good example. Over the last 60 years it has experienced the fate of small towns everywhere: most people have moved to the surrounding countryside, creating more car traffic and the need to put down more sun-absorbing blacktop for highways. Trees and lawns have been converted to blacktop parking lots for the lawyers' and doctors' offices and apartments that dominate the town now. Before, it was mostly single-family homes, and most people walked to work and school.
So those who minimize UHI by comparing urban and non-urban sites and showing there is no difference are not drawing a valid conclusion. UHI is present in all the data. It should be renamed LHE (Land Heat Effect). Suggest a contest to rename it and come up with the best acronym. Three categories of prizes: most scientifically accurate, most catchy, and funniest.

Reply to  power engineer
August 5, 2015 4:59 am

” So those who minimize UHI by comparing urban and non urban sites and showing there is no difference are not drawing a valid conclusion. UHI is present in all the data. ”
There is a 30 to 50F difference between grass and asphalt. And that would explain a lot of the difference between hemispheres.
Global warming is man-made; it's just got little to do with CO2.

August 6, 2015 11:50 am

Reblogged this on Climate Collections and commented:
Rud Istvan provides an outstanding analysis of NASA GISS homogenization of “pristine” Climate Reference Network (CRN) sites.
