GISS Swiss Cheese

By Steve Goddard

We are all familiar with the GISS graph below, showing how the world has warmed since 1880.

http://data.giss.nasa.gov/gistemp/graphs/Fig.A2.lrg.gif

The GISS map below shows the geographic details of how they believe the planet has warmed. It uses 1200 km smoothing, a technique that allows them to generate data where they have none, based on the idea that temperatures don’t vary much over 1200 km. By that logic it seems “reasonable enough” to use the Monaco weather forecast to make picnic plans in Birmingham, England. Similarly, we could assume that the weather and climate in Portland, Oregon can be inferred from that of Death Valley.
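
For readers who want a feel for what 1200 km smoothing does, here is a minimal sketch (my own illustration, not GISS code) of the distance-weighted averaging described in Hansen and Lebedeff (1987), in which a station’s anomaly contributes with a weight that falls linearly from 1 at the target point to 0 at 1200 km:

import numpy as np

def interpolated_anomaly(station_dists_km, station_anoms, radius_km=1200.0):
    """Weighted average of station anomalies; weights fall linearly from 1 at the
    target point to 0 at radius_km. A sketch of the idea, not the GISTEMP code."""
    d = np.asarray(station_dists_km, dtype=float)
    a = np.asarray(station_anoms, dtype=float)
    w = np.clip(1.0 - d / radius_km, 0.0, None)   # zero weight beyond the radius
    if w.sum() == 0.0:
        return np.nan                             # no station in range: no estimate
    return float(np.sum(w * a) / np.sum(w))

# A grid cell with no data of its own, two stations 900 km and 1150 km away:
print(interpolated_anomaly([900, 1150], [2.5, 0.3]))   # about 2.2 C, dominated by the closer station

With this kind of weighting, a single station hundreds of kilometres away can set the value shown for a cell that has no observations at all.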

GISS 1200 km

The map below uses 250 km smoothing, which allows us to see a little better where they actually have trend data from 1880-2009.

GISS 250 km

I took the two maps above, projected them onto a sphere representing the Earth, and made them blink back and forth between 250 km and 1200 km smoothing. The Arctic is particularly impressive. GISS has determined that the Arctic is warming rapidly across vast distances where they have no 250 km data (pink).

You can verify for yourself that there is no data in the region by using the GISTEMP map locator at http://data.giss.nasa.gov/gistemp/station_data/

If we choose 90N 0E (North Pole) as the center point for finding nearby stations:

http://data.giss.nasa.gov/cgi-bin/gistemp/findstation.py?datatype=gistemp&data_set=1&name=&world_map.x=369&world_map.y=1

We find that the closest station to the North Pole is Alert, NWT, 834 km (518 miles) away. That’s about the distance from Montreal to Washington, DC. Is the temperature data in Montreal valid for Washington, DC?
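
You can check that distance yourself with a simple great-circle calculation. The sketch below uses approximate coordinates for Alert (about 82.5 N, 62.3 W) and reproduces the 834 km figure:

from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2, r_earth=6371.0):
    """Great-circle distance between two points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * r_earth * asin(sqrt(h))

# North Pole to Alert, NWT (coordinates approximate)
print(round(haversine_km(90.0, 0.0, 82.5, -62.3)))   # ~834 km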

Even worse, there’s no data in GISTEMP for Alert, NWT since 1991. Funny, though: you can get current data right now, today, from Weather Underground, right here. WUWT?

Here’s the METAR report for Alert, NWT from today:

METAR CYLT 261900Z 31007KT 10SM OVC020 01/M00 A2967 RMK ST8 LAST OBS/NEXT 270600 UTC SLP051
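
The temperature/dewpoint group in that report is 01/M00, i.e. +1°C with a dewpoint just below zero (METAR uses a leading M for negative values). Here is a rough sketch of pulling that group out of the string; this is an illustration, not a full METAR decoder:

import re

def metar_temp_dewpoint(report):
    """Extract the temperature/dewpoint group (e.g. '01/M00') from a METAR string."""
    m = re.search(r"\b(M?\d{2})/(M?\d{2})\b", report)
    if not m:
        return None
    decode = lambda s: -int(s[1:]) if s.startswith("M") else int(s)
    return decode(m.group(1)), decode(m.group(2))

report = "METAR CYLT 261900Z 31007KT 10SM OVC020 01/M00 A2967 RMK ST8 LAST OBS/NEXT 270600 UTC SLP051"
print(metar_temp_dewpoint(report))   # (1, 0): +1 C, dewpoint reported as minus zero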

The next closest GISTEMP station is Nord, ADS, 935 km (580 miles) away.

Most Arctic stations used in GISTEMP are 1000 km (621 miles) or more from the North Pole. That is about the distance from Chicago to Atlanta. Again, would you use climate records from Atlanta to gauge what is happening in Chicago?

Note the area between Svalbard and the North Pole in the globe below. There is no data in the 250 km 1880-2009 trend map indicating that region has warmed significantly, yet the GISS 1200 km 1880-2009 map has it warming 2-4°C. Same story for northern Greenland, the Beaufort Sea, etc. There are a lot of holes in the polar data that have been filled by interpolation.

The GISS Arctic (non) data has been widely misinterpreted. Below is a good example:

Apr 8, 2009

Monitoring Greenland’s melting

The ten warmest years since 1880 have all taken place within the 12-year period of 1997–2008, according to the NASA Goddard Institute for Space Studies (GISS) surface temperature analysis. The Arctic has been subject to exceptionally warm conditions and is showing an extraordinary response to increasing temperatures. The changes in polar ice have the potential to profoundly affect Earth’s climate; in 2007, sea-ice extent reached a historical minimum, as a consequence of warm and clear sky conditions.

If we look at the only two long-term stations which GISS does have in Greenland, it becomes clear that there has been nothing extraordinary or record-breaking about the last 12 years (other than one probably errant data point). The 1930s were warmer in Greenland.

Similarly, GISS has essentially no 250 km 1880-2009 data in the interior of Africa, yet has managed to generate a detailed profile across the entire continent for that same time period. In the process of doing this, they “disappeared” a cold spot in what is now Zimbabwe.

Same story for Asia.

Same story for South America. Note how they moved a cold area from Argentina to Bolivia, and created an imaginary hot spot in Brazil.

Pay no attention to that man behind the curtain.


starzmom
July 26, 2010 6:24 pm

Unbelievable. But here it is. Will Hansen et al take notice? I doubt it.

David Jay
July 26, 2010 6:45 pm

I understand the loss of the cool spot in Africa, averaging (smearing?) should move temperatures away from extremes. However, the hot spot in Brazil is a winner. I want to hear an explanation of that methodology!

Methow Ken
July 26, 2010 6:45 pm

This doesn’t even qualify as “torturing” data until it confesses what U want:
In this case GISS is effectively pulling whatever data they need to make their “case” out of UNmonitored thin air.

Curiousgeorge
July 26, 2010 6:55 pm

From the Moon, the Earth does look pretty smooth. Even smoother from Mars. But it looks the smoothest when ones head is up ones rectum.
Smoothing – even in the mathematical/statistical sense – requires reasonable, logical, and defensible excuses. Analytical laziness, or lack of funding for decent data collection does not qualify.

DirkH
July 26, 2010 6:56 pm

Just wait til next year.

John Eggert
July 26, 2010 6:58 pm

I’m wondering what the source of temperatures for the high arctic was prior to 1950 or so. Alert’s data only goes back to 1950. Certainly aren’t any trees up there.
Interestingly, Alert’s summer temperatures (mean monthly, based on the arithmetic mean of hourly data) for May, June, July and August are nearly perfectly flat. The increase in mean annual temperature since 1977 has been most pronounced over the fall, from September to November. July and August have negative trends. (Please note I have performed NO quality checks on my data and have limited skills with statistics, beyond a Student’s t-test.)
All of this information was calculated using the freely available information from the Environment Canada website. Makes one wonder about the CRU claim that Canada was one of the countries with a confidentiality agreement, precluding FOI requests. (hope that link worked!)
JE
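
A rough sketch of the kind of check described in the comment above, computing monthly means from hourly Environment Canada data and a trend for each calendar month since 1977 (the file name and column names here are assumptions, not the commenter’s actual workflow):

import numpy as np
import pandas as pd

hourly = pd.read_csv("alert_hourly.csv", parse_dates=["Date/Time"], index_col="Date/Time")
monthly = hourly["Temp (C)"].resample("MS").mean()      # monthly mean of hourly readings

recent = monthly[monthly.index.year >= 1977].dropna()
for month, series in recent.groupby(recent.index.month):
    slope = np.polyfit(series.index.year, series.values, 1)[0] * 10   # C per decade
    print(f"month {month:2d}: {slope:+.2f} C/decade")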

INGSOC
July 26, 2010 7:00 pm

If Giss’s credibility were a hot air balloon, how far up do you think it would go, shot so full of holes?

July 26, 2010 7:07 pm

Steve Goddard: I hope you’re aware that the GISS trend maps do not present you with all of the stations used by GISS in their product with 250km radius smoothing.
There are fewer stations used in 1880 than what you’ve presented in the trend maps but the numbers increase with time.

ROM
July 26, 2010 7:13 pm

For some time I have tried to follow some of these or similar climate statistical debates on Lucia’s Blackboard blog even though the statistics are way beyond this old farm boy’s understanding.
As an ordinary citizen, one of those who are required by the IPCC, CRU, GISS, Hansen and etc to “trust us as we know what is best for you,” what does strike me about the Blackboard’s debates about many of the statistical techniques for deriving global temperatures and climate analysis techniques are the similarities to the passionate religious debates of the medieval period about “how many Angels can dance on the head of a pin”.
And those claims and debates about climate statistical analysis techniques have about the same relationship to real life [ and real weather and the real climate ] as the “how many angels can dance on the head of a pin” did back in those times past and are about as accurate in their analysis of the real situation, be it religion or climate.

pwl
July 26, 2010 7:14 pm

There is no other way to put it, fabrication of data – BY ANY means and any way you slice it – IS FRAUD. [snip, a bit OTT]
Who is involved at NASA in GISS?
What scientific basis do they allege allows them to fabricate data?
How can they be prosecuted?
REPLY: please tone it down a bit. – Anthony

Dave F
July 26, 2010 7:14 pm

Haha, but Steve, they are realistic-looking meteorological patterns, they must be right!
“Qualitative support for the greater Arctic anomaly of the GISS analysis is provided by Arctic temperature anomaly patterns in the GISS analysis: regions warmer or cooler than average when the mean anomaly is adjusted to zero are realistic-looking meteorological patterns.”
http://data.giss.nasa.gov/gistemp/paper/gistemp2010_draft0601.pdf

AJB
July 26, 2010 7:15 pm

Steve,
This is one powerful illustration or four. But what’s that big red blob flashing on the bottom of the South America one, have you got a South Pole one for a complete set? What percentage of the land area has been homogenised (looks way more than half) and where have the SSTs come from? Beats me how you can possibly have an error margin less than the anomaly being measured on this basis? Sort of reminds me of an old Christmas tree decoration that’s started to peel and lost its lustre. It’s all coming apart Jim.

latitude
July 26, 2010 7:15 pm

Steve
What strikes me, even more than the Arctic, is North America.
In your last blink globe, NA is in the top left corner.
Look at the south east. The entire SE US jumps from white to yellow.
When you compare both 1200 km smoothing and 250 km smoothing, the reds
stay about the same. Everything else in the US gets adjusted up. The entire SE US goes from below normal to above normal

nandheeswarn jothi
July 26, 2010 7:17 pm

they are not ignorant…. for them to take notice.
they are misrepresenting knowingly.

Geoff Smith
July 26, 2010 7:24 pm

So this is outright lying, but why? So many groups are doing this; can it be out of self-interest and funding?
Maybe there really is something to the Iron Mountain Report.

rbateman
July 26, 2010 7:29 pm

What? No station on Baffin Island, home of the Coming Ice Age (starring Leonard Nimoy)?

Mike
July 26, 2010 7:32 pm

So there is uncertainty and gaps in the data. Maybe that is why the first GISS graph has error bars on it. If you can demonstrate that their error bars are smaller than they should be you might have something worth talking about.

SteveFromWinnipeg
July 26, 2010 7:36 pm

i can’t help but question…
for the stations that have not been moved, or had their surroundings changed, AND have been around since 1900…. what is the trend line? seems to me that those are the only stations that are acceptable for use.
those smoothed 1200km pictures look far too pretty to be acceptable to anyone who knows that the information comes from stations all over the world.

Roger Knights
July 26, 2010 7:43 pm

Pay no attention to that man behind the curtain.

It’s the whizzer of Ooze.

Grant Hillemeyer
July 26, 2010 7:45 pm

I tell people when this subject comes up that it’s laughable to compare modern global temperatures measured by satellites to the late 19th and early 20th centuries. They don’t care about the reality of it; they just imagine weather stations all over the world with infallible, meticulous records since 1880 lining the bookshelves. When an authority like NASA/GISS says something, people assume it’s true and don’t seem to care about the nuts and bolts of it. I usually go back to sea levels, because there are meticulous tide gauge records back to 1880, and ask them, “If it has warmed so much, why hasn’t the rate of sea rise accelerated with it?” I don’t hear much about that from AGWers except that we haven’t reached the tipping point. Well, I live at 3000′ so maybe I’m safe.

TWE
July 26, 2010 7:48 pm

Very good video from Lord Monckton- A left-wing environmentalist gives his views on CAGW:
[youtube=http://www.youtube.com/watch?v=VWVXarkPOAo&hl=en_GB&fs=1]

Joe Lalonde
July 26, 2010 7:57 pm

Steve,
Do you know how the sun can change the heat energy hitting this planet?
How the sun can be responsible for generating an Ice Age even though we are so close to it?
It is a mechanical process from rotation.

July 26, 2010 8:01 pm

What Bob Tisdale said.
Steve Goddard, your primary discovery is that there isn’t much data in GHCN south of the equator in 1880. By forcing a trend to go back to 1880s, you are excluding all the stations whose data series begin in 1890 or 1900 or 1910 or 1920 or …
You can get a better feel for what areas are covered when here:
http://rhinohide.wordpress.com/2010/07/26/ghcn-station-history-a-pretty-chart-ii/

July 26, 2010 8:09 pm

Yes, it’s true, and well known, that there were not a lot of met stations operating in interior of Africa, or the Amazon jungle, in 1880. And that there is not a long history of measurements on the sea ice of the Arctic Ocean.
So what was your point?

Leon Brozyna
July 26, 2010 8:09 pm

I wonder what gaming system the GISS geeks use to create their artificial constructs.

HaroldW
July 26, 2010 8:11 pm

Did anyone ever figure out how the trends in the interior of Greenland could exceed the trends actually observed at stations*? Since there are no stations in the interior, the trends there must be computed by interpolating from nearby (coastal) stations. But I would expect that any interpolation would be a linear combination of actual station data (with positive weights) and hence couldn’t have a larger trend than those stations do.
*As discussed in the “Greenland” section at http://wattsupwiththat.com/2010/07/17/noaas-jan-jun-2010-warmest-ever-missing-data-false-impressions/
You can see something similar in the above maps in north central Africa. In the 1200-km-smoothed version, there’s a small region which is colored red (>2K), but in the 250-km-smoothed version which presumably shows the trends of the actual observation sites, there’s no red but only orange (1-2 K).

July 26, 2010 8:18 pm

The GISS anomaly maps for June, 2010 show holes in the Arctic, Africa and South America – almost as large as those in the trend maps. The data is missing at both ends.

Kate
July 26, 2010 8:18 pm

Can someone look through this list and tell me if anything has changed?
http://www.cpo.noaa.gov/index.jsp?pg=/opportunities/opp_index.jsp&opp=2011/program_elements.jsp

Ben
July 26, 2010 8:20 pm

“So there is uncertainty and gaps in the data. Maybe that is why the first GISS graph has error bars on it. If you can demonstrate that their error bars are smaller than they should be you might have something worth talking about.”
Uncertainty is the error in the measuring devices. Did you read the article? This is talking about bad statistics which interpolate data over distances of up to that from Atlanta to Chicago. This argument is old, and if you still believe in the righteousness of the models, nothing will convince you otherwise until you are hit in the backside with a glacier, in which case you will probably claim “the glaciers are just weather”.
If you want to talk science, do so, but claiming that the interpolation is OK because there are error bars shows a very serious lack of statistical knowledge. To describe interpolation in a short format: if it is done poorly there can be no error bars, simply because there is no way to calculate the error when it is done poorly. The results can best be described as “anyone’s guess”, or “Only God knows”, or “Only Hansen knows”. That all depends on which religion you follow, but shrug, we all have our beliefs now…

Kate
July 26, 2010 8:26 pm

Regarding: “REPLY: please tone it down a bit. – Anthony”
Dear Anthony,
I noticed Mosher’s recent angry outcry against crying “fraud.” I did not understand it. We bloggers understand that we are able to “vent” here more comfortably than at CA, but even the Bishop allows my political questions. Please just keep up what you are doing. Thanks!

Stephen Pruett
July 26, 2010 8:32 pm

There is a popular saying in my field of research: “fools pool”. This is worse than pooling data, it is creating data and then using it for global climate analysis. It is becoming increasingly clear that the standards in climate science are not as rigorous as they should be.

July 26, 2010 8:32 pm

I wonder if they do TOBS adjustments on non-existent data points?

Andrew30
July 26, 2010 8:33 pm

HaroldW says: July 26, 2010 at 8:11 pm
“Since there are no stations in the interior, the trends there must be computed by interpolating from nearby (coastal) stations. “
Why “must” they be computed?
If there is no data it should be treated as a Null Value. A Null value does not affect a calculation in any way; a Null is Not a zero. The average of 2 + 3 + Null + 4 + Null is 3.
They should base their trends on the data they have, if there is warming or cooling it would show up in the data they have.
What they are doing is not correct.
They should say:
“There is a warming/cooling trend in the currently monitored parts of the planet”
There is no reason whatsoever that they “Must” compute unknowns out of the data, the trends would be unaffected by Nulls.
They are just making stuff up.
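
A minimal illustration of the null-skipping average Andrew30 describes, where missing values simply drop out of the calculation instead of counting as zero:

def mean_skipping_nulls(values):
    """Average only the observed values; None entries carry no weight at all."""
    observed = [v for v in values if v is not None]
    return sum(observed) / len(observed) if observed else None

print(mean_skipping_nulls([2, 3, None, 4, None]))   # 3.0, not 1.8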

Amino Acids in Meteorites
July 26, 2010 8:52 pm

Be nice if GISS had a whistle blower.

LearDog
July 26, 2010 8:55 pm

For all of the money being thrown at this problem – these should be unstructured grids tied to topography and known ocean currents – with datapoints readily visible and documented.
What is SO well shown here – is that these ‘anomalies’ are manufactured out of whole cloth – and purely a function of the gridding algorithm.
And they know this.
Shameful. And these guys – attached to NASA. I guess quality doesn’t matter anymore ?

July 26, 2010 8:59 pm

David Jay says:July 26, 2010 at 6:45 pm
However, the hot spot in Brazil is a winner. I want to hear an explanation of that methodology!

David, Mr Goddard did not make clear what is being plotted here. It isn’t simple interpolation. The colors represent trends over 130 years, and the gray areas in the 250km plot show where info was not available for the full period. But that does not mean that there was no information there.
When GISS plots the 1200km trend plot, for most years they use the local data, which don’t appear in the 250km plot. They only interpolate to fill in the missing years.

HaroldW
July 26, 2010 9:05 pm

Andrew30 (July 26, 2010 at 8:33 pm) ,
I wouldn’t disagree with you, nor would CRU — their HadCRU temperature datasets do not presume to attribute a value far from any measurements.
But given that GISS *do* assign a value up to 1200 km from a measuring station, then the values assigned must come from some sort of smoothing/extrapolation algorithm. I understood that they used a weighted average of the values (anomalies) at stations within a 1200 km radius. The weights presumably are greatest for nearby stations and diminish to 0 at a distance of 1200 km.
However, a weighted average — at least one in which the weights are positive and sum to 1 — can’t produce a value larger than the largest of the values which are being averaged. So, because (as in the Africa case cited above) there are extrapolations which exceed the nearby stations, my understanding of how the smoothing/extrapolation is performed must be incorrect. I was looking for some information about the actual approach.
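
A quick numerical check of the bound HaroldW describes: a weighted average with positive weights summing to one can never exceed the largest of its inputs (the station trends below are made-up values):

import numpy as np

rng = np.random.default_rng(0)
station_trends = np.array([0.8, 1.1, 1.4])   # hypothetical coastal-station trends, C/century

for _ in range(5):
    w = rng.random(3)
    w /= w.sum()                              # positive weights summing to 1
    blended = float(w @ station_trends)
    assert blended <= station_trends.max()    # a convex combination stays within the inputs
    print(round(blended, 2))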

Amino Acids in Meteorites
July 26, 2010 9:06 pm

starzmom says:
July 26, 2010 at 6:24 pm
Unbelievable. But here it is. Will Hansen et al take notice? I doubt it.
Someone named Gavin posts comments here occasionally. If it is that Gavin then he will know about this post. Also, about other posts like it previously. I can only guess what the reaction at the office will be but maybe WattsUpWithThat are words that cannot be mentioned there. 😉
REPLY: He is NOT the RC Gavin, though he’d be welcome to post if it was RC Gavin – Anthony

pwl
July 26, 2010 9:08 pm

No over the top intended Anthony. It is just infuriating that Nasa GISS climate scientists believe that they are doing hard science when they fabricate data with “statistical interpolation” for vast areas. I’m just calling it like I see it. I’ve dealt with fraudsters in business and you have to hit them hard when they are discovered otherwise they are more likely to get away with their fraudulent deeds and keep on repeating them.
Extraordinary claims require extraordinary evidence. If we can’t even get honest ordinary evidence out of Nasa GISS how can any of their data be trusted?
If those that continue to push fabricated data as if it’s real data are not held to account by social control mechanisms (legal sanctions) then they have won the day and very likely the war with their political agenda supported by fabricated data.
As someone interested in the integrity of science the techniques used by climate scientists that fabricate data are of the most serious concern as that fabricated data is used in conclusions presented as “the real world” to politicians and policy makers who then spend vast sums of public monies.
What happened to only using the data that is actually observed?
Hansen and Lebedeff published a paper in 1987 that lays out the “acceptable” use of fabrication of data where non exists. Amazing.

“The oddity about the picture is that we are given temperature data where none exists. We have very little temperature data for the Arctic Ocean, for example. Yet the GISS map shows radical heating in the Arctic Ocean. How do they do that?
The procedure is one that is laid out in a 1987 paper by Hansen and Lebedeff In that paper, they note that annual temperature changes are well correlated over a large distance, out to 1200 kilometres (~750 miles).” – GISScapades, WUWT

How can we have this 1987 paper, “Global Trends of Measured Surface Air Temperature”, by Hansen and Lebedeff falsified and rescinded?
One way is for a new paper to be published that falsifies it, correct?
What other papers are based upon this one? How can they then be falsified and rescinded or redone? How far do the dominoes fall? What is the process for tipping them over?
What ways are there to hold Hansen et al. legally responsible for misrepresenting fabricated data as if it’s real data? Public funds, massive amounts, have been and will be spent based upon the GISS representations. That’s just not right.

Boris
July 26, 2010 9:10 pm

Instead of snarking about Atlanta vs. Chicago, why don’t you guys use some of this cognitive surplus to disprove the papers that support the 1200km correlation? I don’t think you’re up to it because it’s pretty obvious that anomalies do correlate over long distances. Maybe I’m wrong.
REPLY: You know Boris, providing people with distances they can relate to is not snark, it’s called communicating. It is germane since most people have no idea about distances and data sparseness in the Arctic. You are one to talk about “snark” since personal denigration and snark is all you do on blogs, and from the typical drive by coward standpoint. Frankly I don’t give hoot if you are offended because I read the nasty things you write about me and the people that frequent this blog elsewhere. If you weren’t so downright obnoxious, and with a long track record of it, I’d not have an issue discussing it with you. But experience has shown it to be a waste of everyone’s time when you are involved. This post is not about disproving 1200 km correlation, it is about demonstrating vast areas of missing data that is being infilled by extrapolation/gridding/homogenization. I’m fairly sure you will repost this on another forum, and watch happily while the typical insults fly, but let’s see if you have the integrity to resist your urges, just this once. – Anthony

July 26, 2010 9:18 pm

Steve.
You should read what Bob, Ron and Nick have to say.
I’ll go at the issue in a slightly different way.
Let’s suppose that we have data for the latitude band 60-70N. Let’s suppose that data
shows a flat TREND from 1900 to 2009. Anomaly = 0. To be sure, within this band some places may be warmer, some cooler. But each location shows a zero trend.
Thats the thought experiment.
Question: I tell you to estimate the TREND from 70N to 90N. What’s your answer?
1. Assume a negative TREND.
2. Assume a positive TREND
3. Impute the same trend as you see from 60-70 degrees northward.
4. Say there is no way to estimate and shrug your shoulders.
is imputing a trend not seen (1,2) defensible?
is imputing the same trend defensible?
is refusing to make any estimate defensible?
If 60-70N saw a positive trend of 1C, would you expect 70-90N to see
a higher trend? lower trend? or the same trend.

DR
July 26, 2010 9:19 pm

Extrapolation and interpolation are two different things, and which one is being used should be specified in these discussions. So which does GISS do?

gallopingcamel
July 26, 2010 9:35 pm

I guess that the 1,200 km smoothing is what justifies the “station drop offs” in the Canadian and Russian arctics.
It makes you wonder why NOAA and NASA have so many stations in the lower latitudes and in the USA. With 1,200 km smoothing a few dozen stations should be enough for the entire world.

July 26, 2010 9:39 pm

How do such systemic problems with data manipulation/insertion persist across so many administrations? The heads of NOAA and NASA are political appointees, I would have expected one of them along the line would have demanded data integrity.
It’s incredible to me that the status quo has existed for so long, as if no-one really paid much attention or care to what these agencies were doing for so long, but now we rely so heavily on what they say, seemingly only because they’ve been saying it for so long.
Mystifying.

Bryan A
July 26, 2010 9:43 pm

According to this site
http://www.athropolis.com/map2.htm
Alert is currently 1c
and Thule Air Base is 5c

Amino Acids in Meteorites
July 26, 2010 9:46 pm

If GISTemp actual temperature was the same as other data sets that show lower temperature I wouldn’t have a problem with these methods. But GISTemp keeps showing hottest-ever-this-and-that. You don’t hear of anomalies in the news, like, “The NASA data set anomaly is the same as the other data sets anomaly. So there’s nothing to see here.” We just hear hottest (temperature) years all the time. This continual stressing in the blogosphere of paying attention to anomaly and not temperature has a bad smell to it. So I will pay attention to the actual temperature behind the anomaly curtain. And something is wrong with GISTemp.

July 26, 2010 9:52 pm

Unbelievable. But here it is. Will Hansen et al take notice? I doubt it.
Unbelievable. But here it is. Will anyone outside our circle take notice? I doubt it.

Ian George
July 26, 2010 9:55 pm

Just checked Gmo Im.E.K.F in N Russia (77.7N). Shows 1-2C warming in the GISS world map. But check the GISS graph at:-
http://data.giss.nasa.gov/cgi-bin/gistemp/gistemp_station.py?id=222202920005&data_set=1&num_neighbors=1
Again, much more warming in the ’30s and ’40s. Do they take the 50s as their starting point?
How many more examples are there?

Nigel Harris
July 26, 2010 9:58 pm

It seems “reasonable enough” to use the Monaco weather forecast to make picnic plans in Birmingham, England.

Deliberately misleading and snarky. We’re talking about trends in anomalies here, not absolute temperatures. And we’re talking about climate, not weather.
GISS is, at worst, assuming that the change in temperature over decades in Monaco might be a reasonable basis for estimating the change in temperature over the same timescale in Birmingham, England (but only if there is no data available for Birmingham.) If the temperature in Monaco now is, on average, unchanged from the 1880s, then let’s assume Birmingham is also unchanged from what it was in the 1880s (but still somewhat cooler than Monaco, of course). And if Monaco has got cooler, in a way that is consistent with other nearby stations where we have data, then let’s assume that Birmingham (where we don’t have data) has also cooled by the same amount. That’s all.
As it happens, Friday’s forecast for Birmingham, England is partly sunny, 26C and Friday’s forecast for Monaco is sunny, 24C.

James Sexton
July 26, 2010 9:59 pm

Steven Mosher says:
July 26, 2010 at 9:18 pm
“…..is imputing the same trend defensible?
is refusing to make any estimate defensible?”
Of course, the problem isn’t guestimates, the problem is presenting the guestimates as factual. If one has never recorded the 70-90N temps nor tracked the trends, how on earth could you possibly relate them to the trends of 60-70N? Further, if this is going to be lent any validity, why bother with measuring the temps at 60-70N? Just put a couple on the equator and extrapolate up and down the grids. If one looks at the trend maps, one could state extrapolating in that manner doesn’t appear to reflect reality.
At any rate to answer a question posed to another person(sorry) it is indefensible to publish such an estimate in the manner in which they do. It is ok to postulate and hypothesize about various events and objects. It is not ok to mislead the public. Oddly enough, some people take this garbage as gospel.

Al Gored
July 26, 2010 10:00 pm

Excellent article. It just keeps getting worse.
I sure would love to hear a response on this from the GISS. But I’m guessing they wouldn’t dare even try.
Sad days for the credibility of the scientific establishment, to put it mildly.

Editor
Reply to  Al Gored
July 26, 2010 10:07 pm

Inspired by this article, tonite I spent some time doing some graphics work. Hope you like it:
GISS' Fake'N Bake tactics create global warming data where none exists

rbateman
July 26, 2010 10:08 pm

It would be most interesting to take each year’s data, put it through an image restoration process to bring the point spread function (PSF) down, and leave the non-data as NaN. The GISS images look smeared and lacking in resolution.
There are many things that could be done to present the data in a more realistic manner.

July 26, 2010 10:13 pm

There is a fellow from my Church who decided, after doing several “visits” to Kenya in the late ’80s and early ’90s, to move there and work with a Christian college/university outside of Nairobi.
When he was back in the Spring, I arranged (beauty of Email) to meet at a local coffee house, have breakfast, and talk about his “work” in Kenya.
The one thing I got COMPLETELY WRONG was my belief about the temperature profiles, and what it meant to live in Nairobi.
Although it gets “hot” for two or three months in the summer, most of the year it’s pretty moderate. AND, he lives in a “compound” outside the city, which is located at about 6,800′ ASL.
He described the temperature fluctuations like this: “Max, my outdoor thermometer broke a while back. I was going to take it down. But then I had a great idea..I took it down and used a red marking pen. I then drew a line from the lowest temperature position, where the ‘bulb’ would be, up to 80 F. I hung that outside, and it’s worked great ever since.”
I was rather perplexed, until he explained: “A variety of factors control the temperature at 7,000′! It’s an intersection of altitude, latitude, and weather factors, which moderates and controls everything. The temperature varies from about 75 at night to a max of 84 during the day. It feels like a nicely warm 80 F day where you live…almost every hour of every day of every year, 24/7, 365…etc.”
I then asked the key question: “How long has it been this way?” Now it was his turn to be perplexed. Finally, after some give and take, he indicated to me that about as long as there are any records, (which for this originally British colony, dates back about 200 years), it has been “that way”.
It seems that with day to day struggles, life in a ‘3rd world’ country, etc, GOREBULL warming doesn’t really rank high on the list in that area of the world!

July 26, 2010 10:20 pm

Sorry to post twice, but one other quick comment: ARCTIC temps going back to the ’50s, ’60s and ’70s might be “unreliable” due to the fact that certain of the DEW Line and BMEWS (Distant Early Warning, for aircraft attack, and Ballistic Missile Early Warning, for ICBM attacks) personnel just DIDN’T MAKE READINGS WHEN IT WAS TOO COLD, and they “made them up”!

July 26, 2010 10:25 pm

Steven Mosher
If Bob believes that the GISS maps do not accurately represent GISS data, he should write an article about it. That is a different issue.
Looking at places which have actual data, there is no reason to assume that one geographic region has the same trend as a nearby region. The US southwest is supposedly warming, while the southeast is cooling.

jose
July 26, 2010 10:30 pm

Anthony writes:
“This post is not about disproving 1200 km correlation, it is about demonstrating vast areas of missing data that is being infilled by extrapolation/gridding/homogenization. ”
Sorry, but this post is primarily about inferring that the methods for doing the necessary interpolation are incorrect. Or was I imagining that The Goddard wrote: “Is the temperature data in Montreal valid for applying to Washington DC.? ”
Maybe Steve should answer his own question. The reason that interpolation works is because these are ANOMALIES that are being interpolated, not absolute temperatures. I’d be willing to wager that the anomalies in Montreal are pretty similar to those in Washington D.C. That’s why the approach is valid in data-poor areas.

UK Sceptic
July 26, 2010 10:56 pm

This, particularly the South American map, elicited a one-word response from me. It begins with “b” and ends with “astards!” It is unbelievable that the people creating this illusion of AGW regard themselves as scientists. Shameful. Truly appalling. That the politicians and media willfully take their word for it is nothing short of criminal. But then, the people here at WUWT already know that. Sigh…

david
July 26, 2010 11:07 pm

It is not intuitive to me that correlations over 1200 km are valid. Very often one area has a high pressure system that keeps it warm, while a low pressure area, often within four or five hundred miles, is causing cool weather there.

July 26, 2010 11:09 pm

What’s Australia look like?
GISS does not plot data for most/all rural stations after the early 1990s. They only show urban data after this. Don’t ask me why.
Ken

MarkG
July 26, 2010 11:16 pm

“I’d be willing to wager that the anomalies in Montreal are pretty similar to those in Washington D.C.”
Even the two GISS station records in Montreal appear to be significantly different over the last seventy years, so why would anyone believe that Montreal’s weather will be anything like DC?
http://data.giss.nasa.gov/cgi-bin/gistemp/gistemp_station.py?id=403716270030&data_set=1&num_neighbors=1
http://data.giss.nasa.gov/cgi-bin/gistemp/gistemp_station.py?id=403716270007&data_set=1&num_neighbors=1
They are simply making data up for most of the world when even a quick look at the data which does exist shows that the correlation they’re claiming does not.

peakbear
July 26, 2010 11:26 pm

“Yes, it’s true, and well known, that there were not a lot of met stations operating in interior of Africa, or the Amazon jungle, in 1880. And that there is not a long history of measurements on the sea ice of the Arctic Ocean.”
Why do we need lots of stations?? To me the fact that the 2 Greenland stations correlate so well with each other convinces me that the thirties were significantly warmer there than now. Are there stations which show a significantly different profile there??

Doubting Thomas
July 26, 2010 11:44 pm

Don’t panic. All those great scientists we have in the U.S. congress are looking into the GISS data. I can’t wait to see the cover-up. The cover-up is always the best part.
– dT
(Are there any scientists in congress? Any engineers?)

HAS
July 26, 2010 11:48 pm

From a naive empiricists point of view isn’t the issue here the error limits around the various estimates of temperature (and then of trends)?
I have been thinking that the use of anomalies does rather lead people into adding them together etc without thinking about whether the variance in the data should be normalized (and what this might mean in turn for the standard errors in the statistics derived from them).
Anyway in posting on some of this at The Blackboard, Carrot Eater drew my attention to what looks like the basis for the errors reported in the GISS series referenced at the beginning of this post – “Global Trends of Measured Surface Air Temperature” Hansen and Lebedeff (1987). As I posted over at The Blackboard I hoped that the approach used in this paper to develop error terms (comparing their estimates to the output of a GCM) wouldn’t pass muster anymore – but I can’t see references to more robust methodologies on the NASA site.
Because of this thread I went and had a closer look at the GCM used by Hansen and Lebedeff (GCM II in Hansen et al 1983 “Efficient 3-D global models for climate studies”?) and do find that GCM II is optimized against observations of surface temperatures i.e. it all looks interdependent. (Also I worry about the coarse resolutions that all this is happening at).
I presume the science has moved on from here but can’t spot it on the NASA site – any helpful suggestions?

tallbloke
July 27, 2010 12:00 am

“The 1930s were warmer in Greenland.”
Warming Arctic Climate Melting Glaciers Faster, Raising Ocean Level, Scientist Says – “A mysterious warming of the climate is slowly manifesting itself in the Arctic, engendering a “serious international problem,” Dr. Hans Ahlmann, noted Swedish geophysicist, said today. – New York Times, May 30, 1937

July 27, 2010 12:17 am

I’m not quite understanding the use of anomalies. If reference station A shows, say, a 14C average over the reference period, and soon-to-be-discontinued station B shows an average of 16C over the same reference period, the difference between them is 2C. So when B is discontinued, and A shows temps of 20C (an anomaly of 6C from its reference), it can then be extrapolated that the temps at B are 22C. Why not use that as the basis for the calculations, instead of adding another layer of indirection by saying that both rose 6C? So much information is lost by doing this. If somebody goes to see if temperatures actually rose at B, they’ve got to do complicated calculations to see if it is in agreement, instead of just being able to say yes, it’s 22C (or not, as the case may be). Further, if any sort of subsequent smoothing or averaging is done on these anomalies, then more errors are introduced, because these are derived values rather than actual.

jason
July 27, 2010 12:41 am

So Mosher and co are saying that less than a century of fairly accurate anomaly readings is adequate?
They are also saying that on a planet where it can rain for a week and be cold in one place yet be ten degrees Celsius 200 miles away, as in the UK a few weeks ago, extrapolation is fine?
And finally: if it’s so unprecedented, explain the panic and the field work and the anecdotal navigational tales of the Arctic in the 1930s.

Baa Humbug
July 27, 2010 12:41 am

Nigel Harris says:
July 26, 2010 at 9:58 pm

Deliberately misleading and snarky. We’re talking about trends in anomalies here, not absolute temperatures. And we’re talking about climate, not weather.

I’m sick of hearing “it’s the anomaly”. Grab yourself a global geography map and a beer, and spend a couple of hours studying the topography of the planet.
Having done that, you will hopefully realise that 1200km or 250km or even 100km smoothing/gridding is not and can never be an accurate gauge of temperature anomalies.
What can be an accurate measure is if the globe was divided into “climatic regions”, with each region having its own station. The size of these individual regions would be irrelevant; some could be relatively small and yet others could be hundreds of square kilometres. But you wouldn’t have mountainous regions coupled together with valleys or coasts, or dry regions coupled with wet regions, etc., as happens now with the current gridding method.
This gridding method was developed by Hansen. It’s wrong, it’s irrelevant, but he will stick with it to the end because it is so easy to manipulate as has been shown by many many people over the last 12 years.

anna v
July 27, 2010 12:43 am

paulhan :
July 27, 2010 at 12:17 am
You are right, anomalies are an extra level of complication. I have an analogy: anomalies are like a map with no scale written on the side, and in addition with unmapped distortions in relative distances.
The reason climatologists love them is because they cannot compute/model global average temperatures, as Lucia has shown. There is a spread of 3C between model outputs, and if they publicized that, how could they claim consistency of model solutions?
The logic of anomalies is the logic of the map I described above. In absence of the real map an anomaly map still carries information, just not information the world should gamble its future on.

Allyarse
July 27, 2010 12:44 am

“based on the idea that temperatures don’t vary much over 1200 km”
It’s temperature anomalies, which are an entirely different thing, and which do show more coherence over large distances [though whether 1200km is too far or not is another issue]. However, that difference alone makes your following sentences entirely pointless, and they are a classic case of misdirection and distraction.
“In the process of doing this, they “disappeared” a cold spot in what is now Zimbabwe”
Look more closely at the whole picture. You will see that the smoothing also “disappears” orange warm anomalies along the west coast of central Africa. This is unsurprising when you consider that they are smoothing the data, a process which is inevitably going to reduce the size of anomalies, and spread out the signal from small-scale anomalies [such as the one in Zimbabwe].
So you can see that the blue over Zimbabwe contributes to an expanded white zone – effectively the blue has been used to cancel out some of the yellow and orange.
There’s no secret method here. Nothing dodgy going on to manipulate the data in one direction. It’s simply an attempt to make the best of the limited data that is available.
Also, if you really want to test whether what they are doing is reasonable, one thing you can do is to take spatially complete data [such as re-analysis data] and subsample it to replicate the holes in the observational network. Then you can compare the global means calculated with all the data and with only the subsampled smoothed data.
This would be more constructive than the “look dodgy!” accusations that you level here. Of course, it’s possible that this sort of analysis has been done already. Does anyone know?

son of mulder
July 27, 2010 1:05 am

Steve, what do the blinking pictures look like if you run against just rural data and then against just urban data for the 2 smoothing scenarios?

Tenuc
July 27, 2010 1:15 am

Due to a paucity of accurate data, poor understanding of atmospheric inhomogeneity and data massaging I don’t think any of the global temperature data-sets are able to produce anomaly information which is meaningful. The production errors are greater than the anomaly which they are trying to measure.
Even Dr. Jones, of the infamous CRU, stated in an interview with the BBC that there has been no statistically significant global warming for the last 15 years. In the same period atmospheric CO2 continued to increase, so in the best case it makes only a small contribution to climate oscillation and thus the CAGW hypothesis is falsified.
The next few years are going to be interesting if the sun stays in quiet mode, as there seems to be a strong link between solar activity and changes in weather regime.

July 27, 2010 1:27 am

“Is the temperature data in Montreal valid for applying to Washington DC.? “
Yes, it is. Here’s the plot. Lots of correlation.

Ammonite
July 27, 2010 1:30 am

pwl says: July 26, 2010 at 9:08 pm
“How can we have this 1987 paper, “Global Trends of Measured Surface Air Temperature”, by Hansen and Lebedeff falsified and rescinded?”
Hi pwl, you and anybody else are most welcome to try. GISTEMP justifies its 1200km smoothing based on this paper. It is over 20 years old. Have at it in the peer reviewed literature! Based on all the outrage it should surely be trivial to show where it all went wrong in a rigorous analysis. Lets see how far everybody gets…

July 27, 2010 1:31 am

stevengoddard says: “If Bob believes that the GISS maps do not accurately represent GISS data, he should write an article about it. That is a different issue.”
I did not write or imply that. The GISS trend maps for 250km radius smoothing have different data to work with than the GISS trend maps with 1200km radius smoothing. The maps with 250km radius smoothing have limited data, while the maps with 1200km radius smoothing are more complete. And this is why you are seeing different trends in areas.

July 27, 2010 1:59 am

I still don’t understand the complete ocean coverage, even at 250km smoothing.
We certainly don’t have data available at 250km intervals across the world’s oceans since 1880. Highly unlikely that we have it any time before satellite data became available.

Harry
July 27, 2010 2:00 am

Are you sure Zimbabwe is cooling? I find this curious, as I was thinking a few weeks ago that Zimbabwe would be a good place to look at UHI signal, since it is one of the only places on earth that is regressing at a rapid rate, and has been doing so for a while. Deurbanisation, I thought, would start showing up, as would a decrease in the use of vehicles. If Zimbabwe is really cooling, I wonder if a graph against GDP or population densities near measuring stations might be revealing.

richard telford
July 27, 2010 2:01 am

Yet another post from Goddard that should be titled “I don’t believe”.
Yet another post from Goddard that allows him to wallow in his personal incredulity and provide his readers their fix of “daily hate”.
Yet another post from Goddard that takes us not one iota towards appreciating the magnitude of the problem.
It would not be hard to write a useful post, to test whether the interpolation of anomalies to 1200 km has any skill. It would require only a modicum of coding and statistical nous.
But Goddard would have to be brave enough to face the risk that the analysis shows that the 1200km interpolation is useful, and that it is most useful in the Arctic.

Mooloo
July 27, 2010 2:11 am

GISS is, at worst, assuming that the change in temperature over decades in Monaco might be a reasonable basis for estimating the change in temperature over the same timescale in Birmingham, England (but only if there is no data available for Birmingham.)
And you are comfortable with this? Really?
My issue is that climate is not static. Even if a good correlation in anomalies can be shown for a time (and even the stated correlations are more like 0.6 than 0.96) there is no proof that they existed before or after the time tested.
The financial markets have a tendency to make fools of themselves with the sorts of thinking we see used by Hansen. Very clever people – and make no mistake, a lot of people in high finance are very clever – find some correlation and ride it for all it’s worth. Until the crash, when the factors causing the correlation break down. “Unbreakable” correlations have a habit of being very breakable. (Remember when it was almost impossible for the whole of the US to have a slowdown in the housing market, because it had never happened before? Thanks guys!)
Hansen has shown a mediocre correlation between some places over time at distances of 1200 km. He appears to use this to justify any places, at any time, having such a correlation. I struggle to agree.

Robert of Ottawa
July 27, 2010 2:11 am

As I’ve said before: The amount of warming is inversely proportional to the number of thermometers.

Alexander K
July 27, 2010 2:18 am

The concept of using statistical methods to replace actual measurement is not real-world thinking, which suggests that those who do this are inhabiting some kind of mental wonderland. I know from my own experience of vastly differing micro-climates in a very small geographic area that anything other than using the correct measurement tool will produce garbage. In this case it may be fascinating and passionately argued garbage, but it is still garbage nonetheless. Why is it impossible for GISS to admit that ‘don’t know, nil measurement’ is an acceptable statement?

Ryan
July 27, 2010 2:26 am

“Monaco might be a reasonable basis for estimating the change in temperature over the same timescale in Birmingham, England”
Good examples, since they demonstrate the dangers of taking such an approach. Birmingham temperatures come from the airport, which is at the intersection of two motorways. It shows appreciable UHI. Monaco doesn’t have an airport at all. Birmingham is on flat land, about as far away from the coast as you can get in the UK, but is still affected by the Gulf Stream. Monaco is between the Alps and the Mediterranean Sea.
Basically, you couldn’t have two measurement sites that were much more different. Unless you head 1200 km north from Birmingham, in which case you end up in the Shetland Isles. On the other hand, you could move 60 kilometres south-west from Birmingham to Ross-on-Wye – but Ross-on-Wye doesn’t show any warming (but then it doesn’t suffer from UHI).
So much for the idea that an anomaly at one site will be replicated at a site 1200 km away.

July 27, 2010 2:30 am

Thanks anna v,
Looking at the comments there, it looks like they are trying to use it as input into the models, rather than as a gauge of temperatures, which is what it was originally meant to be.
I still don’t understand why they don’t use the absolute temperatures ordinarily, and then convert them into anomalies for the models.
P.S. Always like your comments.

RoyFOMR
July 27, 2010 2:32 am

To me the elephant in the room is that we’ve warmed up since the Little Ice Age.
We don’t know why we went into that period and we haven’t a clue how we came out of it.
Now we have a field of science that claims that because we can’t explain recent warming with known natural mechanisms, it must be because of CO2.
That’s a pretty shaky foundation for any hypothesis especially one that demands endless investment and sacrifice if we are to avoid catastrophe.
When that investment flows into the coffers of the High Priesthood and the sacrifices are those of others, then a modicum of cynicism is not surprising.

July 27, 2010 2:42 am

Anthony and Steve Goddard: It appears I should have been clearer in what I wrote first on this thread. I originally wrote:
I hope you’re aware that the GISS trend maps do not present you with all of the stations used by GISS in their product with 250km radius smoothing.
There are fewer stations used in 1880 than what you’ve presented in the trend maps but the numbers increase with time.
I was not commenting on the accuracy of the data used to create the maps. And I was not commenting on the use of 1200km radius smoothing. There are pros and cons to the 1200km radius smoothing. In fact, I’ve written a post that was critical of GISS for presenting maps that give the appearance of a globally complete temperature record, when it is far from complete:
http://bobtisdale.blogspot.com/2010/01/illusions-of-instrument-temperature.html
My comment pertained to the last three maps in Steve’s post, and while not presented clearly, it was a note about why the trends and locations of the trends were changing in the maps with 250km and 1200km radius smoothing.
To create the trend maps, GISS uses cells where at least 66% of the data exists. Refer to their map making webpage:
http://data.giss.nasa.gov/gistemp/maps/
The note toward the bottom of the page reads, “’Trends’ are not reported unless >66% of the needed records are available.”
Since the maps with 250km radius smoothing have much less data from which to create trends (than the maps using 1200km radius smoothing), the trends will be different.
Sorry for the confusion my cryptic first comment created.
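
A sketch of how a completeness rule like the one quoted above might be applied to a single grid cell; this illustrates the idea, not the actual GISTEMP implementation:

import numpy as np

def cell_trend(anoms, min_fraction=0.66):
    """Least-squares trend for one cell, reported only if more than min_fraction
    of the years have data."""
    anoms = np.asarray(anoms, dtype=float)
    years = np.arange(len(anoms))
    ok = ~np.isnan(anoms)
    if ok.mean() <= min_fraction:
        return np.nan                 # too many gaps: no trend reported
    slope, _ = np.polyfit(years[ok], anoms[ok], 1)
    return slope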

Ryan
July 27, 2010 2:44 am

Actually there is a much more obvious way of showing that the 1200km smoothing is unreasonable. If you look at the 250km gridding then you will see that the cool spots are very close to dark orange and red spots. This indicates that contrary to what has been suggested by proponents of AGW, two sites less than 500km apart can show anomalies of opposite (and extreme) polarities. It therefore indicates the fallacy of extrapolating warm anomalies into areas with no data. In fact, I would theorise further that regions that tend to produce more frequent areas of high pressure will tend to produce low pressure in adjacent regions such that hot regions will often be found adjacent to very cool regions.

peakbear
July 27, 2010 2:49 am

Nick Stokes says: July 27, 2010 at 1:27 am
“Yes. it is.. Here’s the plot. Lots of correlation.”
Good correlation Nick, again it points out that a smaller network of high quality sites should be good enough to detect any long term trends (How is the SurfaceStations project going, Anthony?).
What is the gain in spreading these good observations into unobserved areas, though? Just use the data directly and point out caveats, such as poor coverage in some areas, in any conclusions.
I’m interested in what exactly has been driving the cooling in Greenland over the last 70 years. I’d guess it is the PDO, which shows strongly here too.

Ryan
July 27, 2010 3:15 am

The Alert, NWT station is based at an airfield:
http://en.wikipedia.org/wiki/CFS_Alert#Weather_Station
GISS lists it as “rural”. However, the airfield has been much the same as it is today for 60 years, so whilst UHI is probably a factor it is probably stable. GISS/GHCN drops it after 1990 although according to Wikipedia it is still recording temperatures today.
However, the GISS/GHCN data for Alert shows no warming over the period 1950-1990. That makes it even more curious how they have come to the conclusion that the Arctic is warming rapidly when the few reliable stations nearest to the pole don’t show any warming at all.

July 27, 2010 3:22 am

Nick Stokes: July 26, 2010 at 8:09 pm
Yes, it’s true, and well known, that there were not a lot of met stations operating in interior of Africa, or the Amazon jungle, in 1880. And that there is not a long history of measurements on the sea ice of the Arctic Ocean.
So what was your point?

Just a graphic demonstration of *why* HadCRUT’s claim to have an accurate record of surface temperatures from the southern hemisphere dating from 1850 is dubious, at best:
“The historical surface temperature dataset HadCRUT provides a record of surface temperature trends and variability since 1850.”
and renders absurd its contention that:
In earlier periods the uncertainties are larger, but the temperature increase over the 20th century is still significantly larger than its uncertainty.
Introduction to “Uncertainty estimates in regional and global observed temperature changes: a new dataset from 1850”, P. Brohan, J. J. Kennedy, I. Harris, S. F. B. Tett & P. D. Jones, accepted version: December 19th, 2005.

July 27, 2010 3:28 am

Nick Stokes: July 27, 2010 at 1:27 am
“Is the temperature data in Montreal valid for applying to Washington DC.? “
Yes. it is. Here’s the plot. Lots of correlation.

You forgot the sarcasm font.

July 27, 2010 3:30 am

Regarding 1200km radius smoothing, its use by GISS in its GISTEMP product is based on Hansen and Lebedeff (1987) “Global Trends of Measured Surface Air Temperature.”
http://pubs.giss.nasa.gov/docs/1987/1987_Hansen_Lebedeff.pdf
We can all find examples of surface stations where there appears to be little correlation between two surface stations that are less than 1200km apart. Though, in 1987, the correlations apparently existed; refer to the discussion of Figure 3 in Hansen and Lebedeff. But unless we can prove statistically that the correlations presented in Hansen and Lebedeff are wrong or outdated, complaining about it serves little purpose. Why?
GISS creates its global temperature product one way. NCDC infills temperature anomalies around the globe using other methods. And Hadley uses spatially incomplete data with fewer adjustments. Yet the linear trends of the three global land surface temperature anomaly products from 1880 to 2009 are remarkably similar between the latitudes of 60S-60N. Refer to the discussion of Figure 8 in my recent post:
http://bobtisdale.blogspot.com/2010/07/land-surface-temperature-contribution.html
It was also co-posted here at WUWT:
http://wattsupwiththat.com/2010/07/23/bob-tisdale-on-giss-landsea-ratios/

Shevva
July 27, 2010 3:39 am

Do GISS run academic degrees (UK)? I’d like to do a law course, and would love to be able to get a good pass mark in one exam and smooth this across the rest of my exams; I could be a straight-A student even in classes I’ve never been to. Cool. Oh wait, the real world does not work like that; I must have results to show for all the areas I’m studying. Silly me.
/Sarc

July 27, 2010 3:43 am

Following up on Ron Broberg’s post and his neat animation, I’ve put together some KML files so that you can see in Google Earth what stations are in the GHCN v2.mean record for 1880, 1890, 1900… up to 2009. Details are here

July 27, 2010 3:52 am

richard telford says: “But Goddard would have to be brave enough to face the risk that the analysis shows that the 1200km interpolation is useful, and that it is most useful in the Arctic.”
Actually, the 1200km smoothing in the Arctic is only useful for exaggerating Arctic temperature anomalies. GISS deletes SST data in areas where there is seasonal sea ice and extends land surface data out over the Arctic Ocean during seasons without sea ice. By deleting the SST data, GISS biases the Arctic Ocean with the land surface data which has a significantly higher trend. Discussed it in this post:
http://bobtisdale.blogspot.com/2010/05/giss-deletes-arctic-and-southern-ocean.html
And that post also ran here at WUWT:
http://wattsupwiththat.com/2010/05/31/giss-deletes-arctic-and-southern-ocean-sea-surface-temperature-data/

Jimbo
July 27, 2010 3:57 am

When you add what has been brought up in this post by Goddard to the missing data behind the red spots, as well as the poorly sited stations highlighted by Surfacestations, it doesn’t inspire much confidence in me. I haven’t even touched on the divergence problem, Yamal etc.
We “…can’t account for the lack of warming at the moment and it is a travesty that we can’t”, so I say let’s make it up as we go along, eh?

Ripper
July 27, 2010 3:57 am

I have been checking correlations in WA looking for steps.
Here is a sample of the two Kalgoorlie stations compared with stations in a less than 600km radius.
http://members.westnet.com.au/rippersc/kalannualmaxr.jpg
http://members.westnet.com.au/rippersc/kalannualminr.jpg
Note the Kalgoorlie Airport record includes Kanowa in the early years.
The Pearson R correlation is in 11-year segments, which, when done by month, makes any outliers stick out like dog’s balls.

Ryan
July 27, 2010 4:00 am

You know, you can have a lot of fun with that 250km smoothed map. For a start, it demonstrates clearly that we don’t really have any data at all in areas where there isn’t “modern civilisation”. In fact, we have the most data in exactly those regions where we are most likely to be impacted by UHI. In those regions where we are less likely to be affected by UHI, such as Eastern Europe (because the UHI would have been pretty stable during the cold-war period), we don’t see much warming. Alaska sees a lot of warming – but then I have a suspicion that Alaska has seen a lot of UHI since the black gold rush began.
But the most important fact I can glean is that the sites listed on GISTEMP over the period 1950 to present, whether in Alaska, Greenland or Canada, don’t show any significant warming. Any warming that is detected is in the period 1890 to 1920. So the big red dots you see all over the North America and Greenland have little or nothing to do with the alleged increase in CO2 in modern times.
According to AGW theory the temperature changes prior to 1950 are all natural as we didn’t make a big impact on CO2 concentrations before 1950 (according to ice-core data and data from Mauna Loa) – SO WHY ARE THESE MAPS SHOWING ANOMALIES THAT ARE KNOWN TO BE DUE TO NON-CO2 CAUSES????

Pascvaks
July 27, 2010 4:04 am

The only thing you learn in college, in the academic sense, is how to find an answer. There are no guarantees in life. The answer you find may be right, or may not be. But at least you found something. It’s all yours. That is, it is until you come across more information, data, whatnot, and then you can update ‘your’ answer. Or not! Ain’t life a beach?

Jimbo
July 27, 2010 4:05 am

“We find that the closest station from the North Pole is Alert, NWT, 834 km (518 miles) away. That’s about the distance from Montreal to Washington DC. Is the temperature data in Montreal valid for applying to Washington DC.? …….“reasonable enough” to use the Monaco weather forecast to make picnic plans in Birmingham, England. “

It does make you wonder WHY we bother to have so many local weather services? We could probably cover the USA with just 10 thermometers. Does anyone know what that number would be by the way?

Amino Acids in Meteorites
July 27, 2010 4:11 am

No wonder it’s warming so much at the Poles. There’s that big hole at each Pole extending all the way through the earth. Heat from the core of the earth, which is at several million degrees, is pouring out. 😉

July 27, 2010 4:16 am

stevengoddard says: July 26, 2010 at 10:48 pm: “jose – Anomalies vary tremendously over short distances”
From the post: “Note the area between Svalbard and the North Pole in the globe below. There is no data in the 250 km 1880-2009 trend map indicating that region has warmed significantly, yet GISS 1200 km 1880-2009 has it warming 2-4° C.”
Not only spatial but also seasonal anomalies can be significant, as happened during the period 1919-1940, with an extraordinary warming in the Arctic during the winter season – http://www.arctic-warming.com/ – while the summer temperatures changed only very little.
Discussing annual averages may make it difficult to detect significant differences. With regard to Spitsbergen and the Arctic warming 90 years ago (details: http://www.arctic-warming.com/g.php ), the winter warming could only have been supplied by the northern North Atlantic, as any direct sun contribution at this high latitude can be ignored during the winter season.
Regardless of how GISS handled the available data from and around Spitsbergen in the 1910s (data, Jan/Feb: http://www.arctic-warming.com/f.php ), the warming is evident, affected the whole Northern Hemisphere, and was recognised very soon.
(Ifft, George N., 1922, „The Changing Arctic”, Monthly Weather Review, Nov 1922: “Ice conditions were exceptional. In fact, so little ice has never before been noted.”; The Washington Post, Nov. 2, 1922 edition: “Arctic Ocean Getting Warm; Seals Vanish and Icebergs Melt”.)
At least in the Spitsbergen case, any generalisation should be met with reservation.

James Sexton
July 27, 2010 4:18 am

Nick Stokes says: July 27, 2010 at 1:27 am
“Yes. it is.. Here’s the plot. Lots of correlation.”
Uhmm, Nick, looking at the graph, I see lots of correlation from about 1910-1950. But beyond that, no, not so much. In fact, from about 1950-1970, the anomaly looks inverted. According to your graph, it looks like Montreal gave up measuring its own temp about 1980. It looks like the graph starts about 1870 and ends with both about 1980, for a total of 110 years. Montreal and D.C. correlate very well for 40 yrs, somewhat for 50 years, and not at all for 20. Remember this is an anomaly graph, not a temp graph, so similar shapes (bumps) aren’t correlation unless they are very close to each other. So, no, it isn’t correct to say Montreal and D.C. correlate. Sorry.

Amino Acids in Meteorites
July 27, 2010 4:19 am

Doubting Thomas says:
July 26, 2010 at 11:44 pm
Don’t panic. All those great scientists we have in the U.S. congress are looking into the GISS data. I can’t wait to see the cover-up. The cover-up is always the best part.
– dT
(Are there any scientists in congress? Any engineers?)

……………………………………………………………………………………………………………….
There are a couple. But they are few. Mostly it’s this:
“Suppose you were an idiot. And suppose you were a member of Congress. But I repeat myself.”
~Mark Twain

RW
July 27, 2010 4:24 am

“It uses 1200 km smoothing, a technique which allows them to generate data where they have none – based on the idea that temperatures don’t vary much over 1200 km”
That’s incorrect. It is observed that there is a correlation between temperature anomalies at widely spaced locations. In fact, the correlation coefficient is 0.5 or higher out to distances of 1200km at temperate latitudes. The GISS methodology for calculating the temperature anomaly at a given point includes all data from within 1200km, with a weighting that varies linearly from 1 at 0km to 0 at 1200km.
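For anyone who wants to see the arithmetic, here is a minimal sketch of that linear distance weighting (Python, with made-up station distances and anomalies; an illustration of the idea, not the actual GISS code):

    import numpy as np

    def distance_weight(d_km, radius_km=1200.0):
        # Weight falls linearly from 1 at the grid point itself to 0 at the cutoff radius
        return np.clip(1.0 - d_km / radius_km, 0.0, 1.0)

    # Hypothetical stations: distance from the target grid point (km) and anomaly (deg C)
    distances = np.array([150.0, 600.0, 1100.0, 1500.0])   # the last one is beyond 1200 km
    anomalies = np.array([0.8, 0.5, 1.4, -0.2])

    w = distance_weight(distances)                          # roughly [0.875, 0.5, 0.083, 0.0]
    estimate = np.sum(w * anomalies) / np.sum(w)
    print(w, round(estimate, 2))                            # the 1500 km station gets zero weight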
“We find that the closest station from the North Pole is Alert, NWT, 834 km (518 miles) away. That’s about the distance from Montreal to Washington DC. Is the temperature data in Montreal valid for applying to Washington DC.?”
I got hold of weather station data from Montreal and Washington, choosing the station from each which had the longest record. I calculated the mean January temperature, and then subtracted it from the series, to convert it from absolute temperature to temperature anomaly. I calculated the correlation between the anomalies at the two locations. I found a Pearson coefficient of 0.75, which implies a significant correlation. So yes, if one had no data for Washington, one could make a pretty good guess at its temperature anomaly using Montreal data, and vice versa – at least for January. You may be interested in extending this analysis to the rest of the year.
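For anyone who wants to repeat that exercise, a rough sketch of the same calculation (Python, with synthetic January series standing in for the actual Montreal and Washington station files, which you would substitute in):

    import numpy as np

    rng = np.random.default_rng(0)
    years = np.arange(1880, 2010)

    # Stand-ins for two January mean-temperature records (deg C): a shared regional
    # signal plus independent local noise. Replace these with the real station data.
    regional = rng.normal(0.0, 2.0, years.size)
    montreal = -10.0 + regional + rng.normal(0.0, 1.0, years.size)
    washington = 2.0 + 0.8 * regional + rng.normal(0.0, 1.0, years.size)

    # Convert absolute temperatures to anomalies by removing each station's own mean
    montreal_anom = montreal - montreal.mean()
    washington_anom = washington - washington.mean()

    r = np.corrcoef(montreal_anom, washington_anom)[0, 1]
    print(f"Pearson r between January anomalies: {r:.2f}")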
It is always better to analyse the actual data, rather than argue from disbelief. Also, you should report correctly what the methodology actually is.

July 27, 2010 4:28 am

Bob,
I don’t find the fact that “the linear trends of the three global land surface temperature anomaly products from 1880 to 2009 are remarkably similar between the latitudes of 60S-60N” to be particularly interesting.
Hansen says that the reason GISS has diverged from HadCrut over the last decade is due to the Arctic.
And all three suffer from UHI and other issues.

July 27, 2010 4:29 am

“Note: Gray areas signify missing data.”

BBk
July 27, 2010 4:34 am

“David, Mr Goddard did not make clear what is being plotted here. It isn’t simple interpolation. The colors represent trends over 130 years, and the gray areas in the 250km plot show where info was not available for the full period. But that does not mean that there was no information there.
When GISS plots the 1200km trend plot, for most years they use the local data, which don’t appear in the 250km plot. They only interpolate to fill in the missing years.”

So, when they compute the “global temperature average” to determine whether warming has happened or not, are they using fabricated, smoothed, and infilled data, or only the actual hard measurements?
If you answer that they use the smoothed data, then the trend is going to have exactly the same problem outlined here: they’re creating a fictitious baseline based on guesses to compare against (since, as you say, they don’t have measurements in South America and Africa), in addition to the smoothing “spreading around” the heat over 1200 km distances.

July 27, 2010 4:38 am

Steve Goddard: You wrote, “Similarly, GISS has essentially no 250 km 1880-2009 data in the interior of Africa, yet has managed to generate a detailed profile across the entire continent for that same time period. ”
Actually, they do have data for the interior of Africa:
http://i25.tinypic.com/2z7lrty.jpg
They simply do not meet the 66% record threshold GISS uses for creating trends.

BBk
July 27, 2010 4:41 am

Bob Tisdale:
“Since the maps with 250km radius smoothing have much less data from which to create trends (than the maps using 1200km radius smoothing), the trends will be different.”
But it’s not “data.” Data is a measurement, not a guess. The 1200km smoothing creates the APPEARANCE of data where data doesn’t exist, then uses the fabricated data to generate a trend line which is ultimately meaningless.

July 27, 2010 4:45 am

Bob,
If they “do not meet the 66% record threshold GISS uses for creating trends” then they shouldn’t be in their trend maps.
I’m surprised that you are defending them.

jaymam
July 27, 2010 5:19 am

The graph Fig.A2.lrg.gif at the top purports to show that global temperatures have increased by about 8 degrees C between 1900 and 2000.
I’ve seen plenty of weather stations around the world where the temperature has been flat over that period, or has risen less than one degree, or has dropped.
Can anyone give me the names of any of the weather stations used to produce the above graph where the temperature has risen by 8 degrees C or more over the hundred years?
I wish to check the data for those weather stations.
Since most weather stations will have scarcely risen at all, there must be some very very hot ones somewhere. Or the graph is wrong.
My government has just imposed an Emissions Trading tax, partly on the basis of that graph.
The NZ Minister of Climate Change showed a similar graph in a slide show last week, with a graph of CO2 growth also plotted to attempt to show how the two have moved together.

July 27, 2010 5:25 am

It’s very disappointing that many of you don’t understand some basic principles of meteorology and climate.
OVER LAND, regional temperature trends must correlate! I repeat: “REGIONAL” trends.
Coastal stations, instead, must not be used to infer trends over the sea.
All that means that, once a climate land region is identified, almost all the land stations there must have a similar trend. If a station doesn’t correlate, it’s because of some non-climatic influence.
Different reasoning is needed for coastal stations. It is not guaranteed, in fact, that SST and air temperature always have the same trend, or a trend of similar magnitude.
As an example, think of the Arctic Ocean. There, air temperature at a 2m elevation is bounded by the presence of ice or a mixture of ice and water. In coastal Siberia, instead, summer temperatures are free to climb to +30° under some meteorological conditions. An anomaly of, say, +10 °C over Siberia can’t be found over the Arctic Ocean, whatever Hansen thinks.
The only problem I see is identifying climate regions. In some areas 1200 km could be a good approximation; in others it is not. It depends on latitude, geography and climate.

July 27, 2010 5:34 am

HAS: From a naive empiricist’s point of view, isn’t the issue here the error limits around the various estimates of temperature (and then of trends)?
.
Yes. And the error can be quantified.

July 27, 2010 5:36 am

BBk says:
BBk, you have a very wrong idea about interpolation. Everything anyone says about any spatial field is based on interpolation. You can’t measure every point – you have to settle for a finite subsample which then represents the rest. So it’s meaningless to harrumph about “fabricated” data.
When you compute a global average, no explicit interpolation is necessary. You can interpolate points and then add them if you want but the summed result is still just a combination of the data points – just with different weighting. Where points are sparse, you’re just regarding them as representing a larger area. That increases the error range.
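A toy example of that point, on a ten-cell “globe” with made-up anomalies (Python; not any GISS code): interpolating every empty cell from its nearest station and then averaging gives exactly the same number as skipping the interpolation and weighting each station by the area it represents.

    import numpy as np

    station_cell = np.array([1, 5, 8])          # cells that have a station
    station_anom = np.array([0.2, 1.0, -0.4])   # their anomalies (deg C)
    cells = np.arange(10)                       # ten equal-area cells in total

    # (a) interpolate first: give every cell the value of its nearest station, then average
    nearest = np.argmin(np.abs(cells[:, None] - station_cell[None, :]), axis=1)
    avg_interpolated = station_anom[nearest].mean()

    # (b) no interpolation: weight each station by the number of cells it represents
    counts = np.bincount(nearest, minlength=station_cell.size)
    avg_weighted = np.sum(station_anom * counts) / counts.sum()

    print(avg_interpolated, avg_weighted)       # identical: interpolation just re-weights the data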

July 27, 2010 5:40 am

Paolo: The only problem I see is identifying climate regions. In some areas 1200 km could be a good approximation; in others it is not. It depends on latitude, geography and climate.
.
I think that is an excellent point and a question within the ability of several of the technical bloggers. Its a question with an answer (even if I don’t have one at hand).

July 27, 2010 5:41 am

BBk replied: “But it’s not ‘data.'”
The numerical values created by 1200km radius smoothing are data. It might be data derived through methods that you disagree with, but it is data.

E.M.Smith
Editor
July 27, 2010 5:43 am

To the assertions that GIStemp uses different data for the 250 vs 1200 km plots:
The SAME data is fed into GIStemp for both. GIStemp will ‘spread it around’ from where it really is (a single point for each station) to either a 250 or a 1200 km distance.
Depending on which “STEP” in GIStemp you look at, this may or may not be used to ‘fill in missing data’. The version of code that lets you choose the range is not the version that I’ve worked with, so I’m not certain exactly where they do it in the web plots. But in the non-web portion of the code (that OUGHT to be used to make the web plots) the distance of the ‘spread’ is a parameter. I’d expect the ‘spread’ to be done mostly in the “STEP3” part of GIStemp where it calculates the anomalies (what is on the plots) for each grid/box and where you have a parameter for the size of the ‘spread’ used. By that time, all the ‘homogenizing’ infilling and the UHI adjustments have been done (and with their own different distances of spread. 1000 km IIRC in the version of GIStemp from last year.)
So the actual temperatures will have been spread around via homogenizing and UHI adjustments long before you get to the Grid/Box step and set it to 250 or 1200 for that step. The same data will be used from the same real point sources in both cases, though the stations that get merged together into any one grid/box will change with the size of the box.
Yeah, kludgey.
http://chiefio.wordpress.com/2009/11/09/gistemp-a-human-view/

July 27, 2010 5:54 am

stevengoddard replied, “If they ‘do not meet the 66% record threshold GISS uses for creating trends’ then they shouldn’t be in their trend maps.”
They aren’t in the trend maps. That’s the point. The trends for Central Africa, and Asia, and South America do not appear in the trend maps with 250km radius smoothing because they don’t meet the data availability threshold for the maps you’re creating on the GISS webpage. But with the 1200km radius smoothing, more data exists, and because of the increase in data availability, the trend maps for the GISTEMP product with 1200km radius smoothing are more complete. You may disagree with how GISS creates the data with their 1200km radius smoothing, but the increase in data is the reason for the differences in trends and their spatial completeness.
You wrote, “I’m surprised that you are defending them.”
I’m not defending anyone or anything. I’m pointing out errors that exist in your post.

July 27, 2010 6:08 am

stevengoddard replied, “I don’t find the fact that ‘the linear trends of the three global land surface temperature anomaly products from 1880 to 2009 are remarkably similar between the latitudes of 60S-60N’ to be particularly interesting.”
Others might find it interesting, Steve, which is why it was a general comment and not addressed to anyone in particular.
You continued, “Hansen says that the reason GISS has diverged from HadCrut over the last decade is due to the Arctic.”
If the only difference of any value is the Arctic, then why does your post include trend maps for Africa, Asia, and South America? You introduced the lower latitudes, not me.
You wrote, “And all three suffer from UHI and other issues.”
“UHI and other issues” are not the subject of your post.

July 27, 2010 6:45 am

Bob,
Why don’t you write up a separate article about the issues you find interesting?
The point of this article was to show that the GISS 1200 km smoothing is inconsistent with their 250 km smoothing – and that they are claiming to know long term trends in places where there is little or no data.
Do you disagree with that thesis?

Ryan
July 27, 2010 6:46 am

“Even worse, there’s no data in GISTEMP for Alert NWT since 1991. Funny though, you can get current data right now, today, from Weather Underground,”
Arr, well that would be because the Alert NWT weather station was originally run by the Royal Canadian Air Force, and in the 1990s control was handed over to the Canadian Environment Ministry. Sadly it seems nobody in team AGW could bother themselves to find out why 20 years of the most recent data was missing from this fine site, but then again it doesn’t show warming in the period 1951 to 1990, so why bother??? If they had gone to the Canadian Environment Ministry I suspect they would have got the complete record.

JonK
July 27, 2010 6:47 am

Hi Steve,
“GISS Swiss Cheese”, by the way:
can you explain why, among the GISS data for Switzerland (there are several stations mentioned: Geneva, Basel, Zurich, Saentis, Gotthard, stBernhard, Payerne, Jungfraujoch), there’s only one of them – Saentis – showing the whole timescale from 1880 until 2010 (have a look at the graph around 1919!). All the others stop in the eighties, some even in the sixties.
Thanks
JonK

July 27, 2010 6:48 am

Bob,
Your explanation is unpalatable.
The GISS maps say that grey areas represent missing data. If they can’t calculate an accurate 250 km trend for a limited area, then they certainly can’t calculate an accurate 1200 km trend over a larger area.

John F. Hultquist
July 27, 2010 6:49 am

Ya’ll have convinced me this problem is very messy. Starting from scratch, I would not like to be leading the team with the responsibility of designing, developing and reporting on the world’s temperature trends – regardless of the string of academic-letter qualifiers (Ph.D., and so on) for the team.
An important issue remains in that people with qualifiers before their names (such as President, Prime Minister, …) want an answer – preferably a single simple number and, maybe, a nice colorful graphic to help explain the number. The colorful graphic looks so much better on TV. At this stage, then, no one knows or cares about how the number and graphic were derived.
[ In ‘student evaluations of instructors’ students often rank, on a scale of 1 to 5, across as many as 30 items. These ranks are manipulated and presented in summary form to the administration to two significant digits and with standard deviations with respect to your department, your college, and the entire university. Read the first two lines in the first paragraph. ]
Anyway, thanks to you all for the insights and the work you are doing.

July 27, 2010 6:55 am

John F. Hultquist
According to Hansen’s theories, the world should already be heating up out of control.
The fact that they are having to do all these manipulations and artificially bump temperatures up a few tenths of a degree is an indication of how badly the catastrophists are failing.

James Atwell
July 27, 2010 7:04 am

The long term anomaly trends indicate significant static periods.
1880-1920 flat
1920-1945 increasing
1945 -1979 flat
1980 – 2000 rising
2000 – ongoing flat
Over this 120 year period atmospheric carbon dioxide concentrations have continued to increase year by year, but temperatures have not reflected this. For over half this period there has been no upward temperature trend.
One conclusion from these results is that, over significant periods, there are cooling drivers that predominate over any anthropogenic greenhouse effects. Are there similarly other periods when natural global heating drivers predominate?
Are we seeing the heat engine effect described so well in the earlier posts?

Rui Sousa
July 27, 2010 7:06 am

I know it is just another cherry from the lot, but it looks like, as rotten cherries are taken from the lot, there aren’t many of them left…
Portugal is one spot that seems to have some heating influence on Morocco and Spain, but the data for the Lisbon weather station shows cooling in the last series, starting in the 80s. I don’t know why there are so many series for this station, but it seems worth taking a look at:

Series: http://data.giss.nasa.gov/cgi-bin/gistemp/findstation.py?datatype=gistemp&data_set=0&name=lisboa

July 27, 2010 7:34 am

stevengoddard,
1200 km and 250 km smoothing only diverge if you select trends over a time period (e.g. from 1880) where many remote regions do not have a good temperature record. Try comparing the two from 1960 to present, for example, and the differences will be much smaller.
Regardless, I’m not sure how the way that the GISS trend graphing engine handles incomplete data is germane to, well, anything. If you want a more useful exercise, try running GISTemp code with 250 km and 1200 km smoothing for different latitude bands (or specific countries/regions) and compare the resulting anomalies. The smoothing won’t make a huge difference in most places, with a few notable exceptions (the Arctic, parts of inner Africa, etc.). When divergences do exist, you can check the interpolated values against actual GSOD station data in those locations. Nick Stokes has been doing some interesting work to that end lately on his blog (e.g. http://moyhu.blogspot.com/2010/07/revisiting-bolivia.html )
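If someone wants a starting point, here is a rough sketch of the kind of cross-check described above (Python, with synthetic station positions and trends; real GSOD or GHCN data would replace the made-up field):

    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical network: station positions on a plane (km) and their trends (deg C/century)
    pos = rng.uniform(0, 3000, size=(40, 2))
    trend = 0.5 + 0.0003 * pos[:, 0] + rng.normal(0.0, 0.2, 40)   # smooth field plus noise

    def interpolate(target, positions, values, radius=1200.0):
        # Linearly tapered weights out to the cutoff radius, GISS-style
        d = np.hypot(*(positions - target).T)
        w = np.clip(1.0 - d / radius, 0.0, None)
        return np.nan if w.sum() == 0 else np.sum(w * values) / w.sum()

    # Leave-one-out: predict each station from its neighbours, compare with what it measured
    errors = []
    for i in range(len(pos)):
        mask = np.arange(len(pos)) != i
        est = interpolate(pos[i], pos[mask], trend[mask])
        if not np.isnan(est):
            errors.append(est - trend[i])
    print(f"RMS leave-one-out error: {np.sqrt(np.mean(np.square(errors))):.2f} deg C/century")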

Ryan
July 27, 2010 7:35 am

EUREKA!
I want to emphasise again that there cannot have been any man-made CO2 induced warming before 1950. This is quite simply because the Mauna Loa record shows that the concentration of CO2 in the atmosphere was about 310ppm in 1950, compared to a long term trend assumed to be 285ppm. Hence in 1950 the level of CO2 was only 9% higher than the long-term trend and not sufficient to cause significant warming, whereas today it is 390ppm (37% higher than the long-term trend). It follows from this that any AGW from 1950-2010 must be bigger than any warming from 1880-1950 for the AGW theory to hold, because the added CO2 in the latter half of the last hundred years or so is 4x bigger than for the first half.
Now take a look at the graphs that Steve Goddard has included above. They don’t show any warming from 1950 to 2010. However they do show warming from 1880 to 1940. A hell of a lot of warming. In fact it appears to be warming at maybe as much as 6 Celsius per century! That’s a high rate of change – and not one part of it attributable to AGW!
So I took a look at all the GISTEMP raw data for the stations north of 65degrees. Almost all of them show the same trend. Only Alaska shows warming in recent times. Russia, Scandinavia and Greenland sites either show no warming or warming during the years 1880 to 1940, followed by cooling/flat-lining in the years 1950 – 2010. Those sites showing this high degree of warming in the years 1880 to 1940 cannot be showing AGW induced climate change – they are showing the power of the planet to alter its OWN climate all without our help.
All those red dots you see in the northern latitudes in the 250km smoothed map are due to sites that don’t show AGW at all. They show a massive amount of NATURAL warming in the years 1880 – 1940. These have then been used to suggest that the entire Arctic circle is warming up in the 1200km smoothed map, due entirely to AGW. This would be a gross misrepresentation of the truth.
Why did team-AGW want to show the years 1880-1940 in their anomaly maps? Could it be that by roping in the limited, unreliable data for years where AGW was not relevant, that they were able to show spurious warming in parts of the world that have actually cooled since 1950? You decide…..

Spellbound
July 27, 2010 7:36 am

Nick Stokes wrote:
“Is the temperature data in Montreal valid for applying to Washington DC.? “
Yes. it is.. Here’s the plot. Lots of correlation.

Perhaps I misunderstand correlation, but correlation doesn’t connect the magnitude of change, only the direction, correct? So, even if DC and Montreal have a high degree of correlation in temperature anomalies, you still can’t determine the difference in magnitude from that correlation, correct? For example, if it is warmer in DC, I can reasonably conclude that it is likely warmer in Montreal (due to the correlation), but I cannot say that since it is 3C warmer in DC it is therefore 3C warmer in Montreal. It could be .5C warmer, or 5C warmer, and the correlation can still hold.
Why, then, is correlation a defense for smoothing magnitude?
I admit my statistics are rusty, so I could be way off base here.
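A quick made-up example of what I mean (Python, synthetic numbers): two series can be almost perfectly correlated while one swings three times as far as the other, so the correlation alone says nothing about the size of the change.

    import numpy as np

    rng = np.random.default_rng(2)
    a = rng.normal(0.0, 1.0, 200)               # anomaly at station A (deg C)
    b = 3.0 * a + rng.normal(0.0, 0.1, 200)     # station B swings about 3x as much

    r = np.corrcoef(a, b)[0, 1]
    slope = np.polyfit(a, b, 1)[0]
    print(f"r = {r:.3f}, regression slope = {slope:.2f}")
    # r is near 1 even though a 1 deg C anomaly at A goes with roughly 3 deg C at B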

July 27, 2010 7:42 am

stevengoddard,
If GISTemp (and others) are doing “all these manipulations and artificially bump temperatures up a few tenths of degree”, please suggest a way to calculate global temperature using the various raw station datasets available that you think would be ideal. I’d be happy to generate a record of, say, raw GHCN + GSOD stations that have no lights visible to satellites, are away from the coast, are not at airports, and have low population density. Do you think the results will differ that much from, say, NCDC land temps?

July 27, 2010 7:54 am

Zeke,
If there isn’t an adequate dataset to calculate an accurate “global temperature” – then I would recommend that you don’t calculate a “global temperature.”

richard telford
July 27, 2010 8:26 am

Spellbound says:
July 27, 2010 at 7:36 am
Correlation is not the ideal statistic to use here. There are other statistics, for example Reduction in Error (RE) that would be sensitive to correlated trends with different magnitudes.
See http://www.nap.edu/openbook.php?record_id=11676&page=93#p200108c09960093001 for the formula.
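For readers who don’t want to chase the link, a small sketch of the usual RE definition (Python; the numbers are made up, and the linked report gives the exact formula):

    import numpy as np

    def reduction_of_error(obs, est, calibration_mean):
        # RE = 1 - SSE(estimate) / SSE(using the calibration-period mean as a naive estimate)
        # RE > 0 means the estimate beats simply guessing the calibration mean
        obs, est = np.asarray(obs, float), np.asarray(est, float)
        return 1.0 - np.sum((obs - est) ** 2) / np.sum((obs - calibration_mean) ** 2)

    obs = np.array([0.1, 0.5, -0.2, 0.8, 0.3])
    close = obs + 0.05        # right magnitude, small offset
    scaled = 3.0 * obs        # perfectly correlated with obs, wrong magnitude
    print(reduction_of_error(obs, close, calibration_mean=0.0))    # near 1
    print(reduction_of_error(obs, scaled, calibration_mean=0.0))   # strongly negative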

July 27, 2010 8:28 am

Steve.
There is enough data to estimate a global temperature. The key word is estimate.
I can estimate the temperature with many fewer stations than those available. I can also test how sensitive that estimate is by systematically removing stations and seeing that the average does not change. I can (and have) estimated that number by using all the stations, only one station per grid cell, only the stations with complete records for the past 110 years, only the rural stations, etc etc. The estimate does not change in any appreciable way. The estimate does not change for a simple reason.
When the earth warms over long periods, that warming, that trend, is not, for the most part, locally entrained. If it warms by 1C over a century at location lat X, lon Y, then the available data shows the following: it will also warm by 1C at position lat X2, lon Y2. Go figure, heat moves. Now if heat didn’t move, you might see one position warm by 1C over a century and another position cool by 1C, but we don’t see that in data that covers 50% of the total land mass. We don’t see that in the 50% we sample, so I’m baffled why people would think that the 50% we don’t sample is different. Did we magically pick the 50% of the earth where it has warmed and magically miss all those places where it cools?
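A toy version of that sensitivity test (Python, with synthetic stations that all share a common trend plus independent local noise; not my actual station code):

    import numpy as np

    rng = np.random.default_rng(3)
    years = np.arange(1900, 2010)
    signal = 0.007 * (years - years[0])                    # roughly 0.7 deg C per century
    stations = signal + rng.normal(0.0, 0.5, (500, years.size))

    def trend(series):
        return np.polyfit(years, series, 1)[0] * 100       # deg C per century

    full = trend(stations.mean(axis=0))
    subsets = [trend(stations[rng.choice(500, 50, replace=False)].mean(axis=0))
               for _ in range(20)]
    print(f"all 500 stations: {full:.2f}; 50-station subsamples: "
          f"{np.mean(subsets):.2f} +/- {np.std(subsets):.2f} deg C/century")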
Question: is it warmer or cooler now than it was from 1900-1910? can you estimate?

Buffoon
July 27, 2010 8:31 am

To Tisdale et al. above, regarding the 1880 data etc.
If we provide a plot of data from Point A to Point B, using a broad spectrum of data accuracies in between, any trend observed from Point A to Point B is a direct result of the data. The graph shown at the beginning obviously makes a clear case for a net increase in temperature from beginning to end. If the 1880-? data is shown to be inaccurate, the conclusions drawn from (or implied by) the data are as well.
The true contention that can be made against this graph is thus quite simple: the error bar, vs. time, does not change significantly vs. the amount of data interpolated. Interpolated data for non-continuous systems is not real, and should be considered 100% statistical error for the purpose of this graph. However, given the magnitude of interpolation shown in Steve’s representation, I question whether proper mathematical logic is applied, and thus whether the graph, by presenting data and error bars (to inspire confidence in the implication), is representative of the data.
Make the documentation of error correct in a strict sense, and I suspect this graph will look quite heavy to the left, and present a much smaller and more confusing trend (or possibly lack thereof, which is probably the most valid conclusion as well, given the variability in the underlying data set across the period of this graph).

July 27, 2010 8:37 am

steveG.
Smoothing at 250km or 1200km does not make any appreciable difference in the TRENDS (the globally averaged trend). You will see differences in certain places, but when you put it all together there is no difference. I don’t smooth AT ALL to grids where I have no data, and I get the same answer. “Same” means within 10%.

July 27, 2010 8:41 am

James:
“Over this 120 year period atmosperic carbon dioxide concentrations have continued to increase year by year but temperatures have not reflected this. For over half this period there has been no upward temperature trend.”
The AGW theory does not predict a monotonic increase. There are natural cycles of warming and cooling, ups and downs. The warming produced by GHGs happens slowly over time and the impact is lagged. Over time this warming shows up in a long term trend.

July 27, 2010 8:53 am

Steven Mosher
Sorry, I missed the period from 1910-1920
The hottest temperature ever recorded in the western hemisphere occurred in 1913.

Doug Proctor
July 27, 2010 9:00 am

What is the temperature anomaly record like using the 250 km smoothing instead of the 1200 km? Even though there are large gaps, since the effect of CO2 is, by “consensus”, global, shouldn’t the lesser data coverage show the same result? After all, it IS the same data.

July 27, 2010 9:00 am

WRT 1200km
Hansen’s study has been revisited, and before people bluster on about it they should do some reading (hehe, I used to bluster about it). The effectiveness of “smoothing” out to 1200km has long been a bone of contention. The spatial correlation of the climate field is modulated by several factors. Hansen noted a NH and SH difference. Later studies have been more detailed, using more stations to do the same thing.
1200km is not a magic figure. In some places 1200km works fine (high lat) during some seasons. In the END it does not matter (to the global average of everything) whether you use 1200km or 750km or 250km; at 0km it will make a difference. hehe
The most accessible paper I know of on this is here (if you’re not prepared to discuss the literature, then I’ve got other things to do):
http://hadobs.metoffice.com/hadghcnd/HadGHCND_paper.pdf

Ryan
July 27, 2010 9:00 am

@Stephen Mosher “The AGW theory does not predict a monotonic increase. There are natural cycles of warming and cooling. ups and downs. ”
Then what exactly DOES it predict? What is the theory ACTUALLY stating? You see, when I was at university, a scientific theory had to make firm PREDICTIONS which could then be TESTED by OBSERVATION.
What we now have is a theory which makes certain predictions which are so elastic as to permit fitting to any observation that may be made.
Therefore we can see in the graphs above a trend from 1880 to 1940 that suggests a staggering 3 Celsius rise. We are told that this is due to AGW. Hence the maps show red dots in the areas where this AGW is said to have occurred. We can then predict that the rise from 1940 to 2010 should have been a further 12 degrees, to account for the continuing rise in CO2 from 1940 on. Except that the graphs above show that this didn’t happen. It refused to rise any more.
Ah yes, but we have NATURAL changes in climate, you see. Natural changes so big they totally compensate for the AGW from 1940 to 2010 in this case, one presumes? Or was the natural change big enough to cause the warming from 1880 to 1940? Do we know? Do you care? One wonders why we worry about AGW at all when we have such large NATURAL changes occurring at unexpected moments in Earth’s climate cycle.
Whatever. There are RED dots all over Canada and Greenland where the climate change occurring could be natural or AGW. No way of telling really. What matters is the red dot, not the observation, which could be said to contradict the red dot. And having put a spurious red dot where no data supports its existence, why not splurge it out over the upper 30 degrees of the globe?
It’s the new science. It’s not based on using observations to challenge theories. It’s based on belief.

July 27, 2010 9:06 am

Nick:
“When you compute a global average, no explicit interpolation is necessary. You can interpolate points and then add them if you want but the summed result is still just a combination of the data points – just with different weighting. Where points are sparse, you’re just regarding them as representing a larger area. That increases the error range.”
You know, I didn’t really GET this point until I actually DID the math. I mean actually DID the calculation. I think some simple examples with a small grid might be useful to explain this to people. THERE IS NO EXPLICIT INTERPOLATION.
Maybe a joint post?

July 27, 2010 9:11 am

Hansen claims that the reason he sees warming during the past decade (while HadCrut doesn’t) is because of his better Arctic coverage.
GISS doesn’t have better Arctic coverage and talk of how everything comes out in the wash does not impress me.
Hansen forecast rapidly warming temperatures for the 21st century – which are not happening.

July 27, 2010 9:11 am

Ryan says: July 27, 2010 at 7:35 am : “They show a massive amount of NATURAL warming in the years 1880 – 1940.”
Ryan says: July 27, 2010 at 9:00 am : “Ah yes, but we have NATURAL changes in climate you see”
What is NATURAL if atmospheric phenomena are physical and chemical processes and energetics?

July 27, 2010 9:13 am

Alexander K
“The concept of using statistical methods to replace actual measurement is not real-world thinking, which suggests that those who do this are inhabiting some kind of mental wonderland.”
Hmm.
1. I land you in the middle of a mystery desert. It’s 110F at your current location.
I ask you to estimate the temperature in any given direction. What do you estimate?
2. I tell you that 1200km north of you is a spot where the temperature is 100F. East of you at 1200KM is spot that is 110F. South of you is a spot that is 80F. West of you at 1200km it is 115F. Which direction do you walk and why?
We estimate what we don’t observe all the time.

Ryan
July 27, 2010 9:19 am

@Stephen Mosher: “2. I tell you that 1200km north of you is a spot where the temperature is 100F. East of you at 1200KM is spot that is 110F. South of you is a spot that is 80F. West of you at 1200km it is 115F. Which direction do you walk and why?”
West. The point at 115F is probably an airport.

July 27, 2010 9:22 am

Steven Mosher
I remember a hot summer day in San Jose during the summer of 1998. It was 102 degrees, so we drove up to San Francisco to cool down. It was bitter cold in San Francisco.
Which direction do you drive, and why?

Steve M. from TN
July 27, 2010 9:24 am

Ok let me see if I can paraphrase what Bob Tisdale has been alluding to:
250km smoothing:
Grid cell X in the middle of Africa has 50% data recorded. This does not meet the GISS Standard of 66%, so the algorithm looks for stations within 250km to “fill in” the missing data. It can’t find any, so it is displayed as gray “no data available”
1200km smoothing:
Same grid cell, looks for other stations with in 1200km and finds some, so is able to “fill in” the missing data using trends from other stations, displays an anomaly color on the map.
Am I close Bob?
This brings a question to mind: can “filled in” data propagate across multiple grid squares? Or does the algorithm use only “raw” data?

Gail Combs
July 27, 2010 9:26 am

Geoff Smith says:
July 26, 2010 at 7:24 pm
So this is outright lying, but why? So many groups are doing this; can it be out of self-interest and funding?
Maybe there really is something to the Iron Mountain Report.
______________________________________________________________
Google “Maurice Strong” he has his fingers in a lot of pies starting with the Un’s First Earth Summit in 1972.
Who controls the food supply controls the people; who controls the energy can control whole continents; who controls money can control the world. – Kissinger
MONEY
“A Primer on Money” — published by the Government Printing Office in 1964
Report to Pres. Reagan: all taxes go to banks as interest
READ the comments on the Financial Stability Board
Official text: Financial Regulatory Reform: A New Foundation
Explanation of Obama’s “Financial Regulatory Reform
FOOD
History, HACCP regs and Food
food system is in trouble: My comment has a lot of links so I will not repeat them
ENERGY
Climategate e-mail on Global Governance & Sustainable Development (B1)
Here is more on the (B1) scenario IPCC Emissions Scenarios
Here is who Ged Davis is (Shell Oil executive with IPCC connection)
Here is the context and history:
In Maurice Strong’s 1972 First Earth Summit speech, Strong warned urgently about global warming
Obama’s Chief Science Adviser is John Holdren. In their 1973 book “Human Ecology: Problems and Solutions,” Holdren and co-authors Paul and Anne Ehrlich wrote:
The de-development plan is UN Division for Sustainable Development – full text of Agenda 21
UN REFORM – Restructuring for Global Governance
Our Global Neighborhood – Report of the Commission on Global Governance: a summary analysis
a lot of research and links about Agenda 21 in the USA
USA and EU sign law harmonization agreement
====================
REPLY: This is getting quite a ways off-topic – Anthony

dp
July 27, 2010 9:26 am

“Which direction do you walk and why?”
Which way is the wind blowing?

July 27, 2010 9:26 am

It was almost 100 degrees in Colorado yesterday. San Diego was 65 degrees.
Therefore, Death Valley must have been about 70 degrees yesterday afternoon, and Chicago must have been about 125 degrees.

Gail Combs
July 27, 2010 9:30 am

Mike says:
July 26, 2010 at 7:32 pm
So there is uncertainty and gaps in the data. Maybe that is why the first GISS graph has error bars on it. If you can demonstrate that their error bars are smaller than they should be you might have something worth talking about.
__________________________________________
I suggest you read AJStrata’s analysis of the error in the temperature data:
http://strata-sphere.com/blog/index.php/archives/11420

Christopher
July 27, 2010 9:39 am

Why is it that all the most severe warming always seems to happen in areas that are sparsely populated? Just a fluke? Or something more sinister?

July 27, 2010 9:42 am

Steven Mosher: July 27, 2010 at 9:13 am
2. I tell you that 1200km north of you is a spot where the temperature is 100F. East of you at 1200KM is spot that is 110F. South of you is a spot that is 80F. West of you at 1200km it is 115F. Which direction do you walk and why?
Southeast.
If it gets warm as I head north and cool as I head south, then I must be in the Southern Hemisphere. If the mystery desert is 2,400km wide, then I must be in Australia.
Therefore, I head southeast because I know some *great* bars in Sydney…

wayne
July 27, 2010 9:42 am

Steven Mosher says:
July 27, 2010 at 8:41 am
The warming produced by GHGs happens slowly over time and the impact is lagged. over time this warming shows up in a long term trend.
~~~~
Just as with a secular trend in solar insolation during the recovery from the LIA of the 1600s, mainly found in the 1900s. It was the sun, Steven Mosher.
There is about 2% water vapor in the air at any given time. CO2 could only have an effect in the ratio of CO2 to water vapor, about .04% / 2% at most, or in other words 2% of any warming since the year 1700; they are both GHGs.
Another way to get a rough estimate is by taking 20000 water molecules plus 270 CO2 molecules per one million in the year 1700 and comparing that to 20000 water molecules plus 400 CO2 molecules per one million in 2010. That is about 1 – 20270/20400, or less than a 1% effect. You see, the effect from the minuscule increase in CO2 is not what we have seen; at most, if water vapor molecules have not decreased by the 400-less-270 difference to compensate, there would have been no effect from the CO2 increase. It was the sun, Steven, the sun. I’m surprised you have eaten some of the figments being passed around. 🙂
Your big mistake: you leave out the 20000-per-million water vapor molecules in your calculations. What happens to CO2 molecules also happens to water vapor molecules; there are just a huge number more water vapor molecules.

July 27, 2010 10:07 am

wayne:
“There is about 2% water vapor in the air at any given time. CO2 could only have an affect at the ratio of CO2 to water vapor of about .04% / 2% at most or in other words 2% of any warming since year 1700, they are both GHGs.”
You should meet mr stratosphere. I don’t think you understand; using a % tells you very little. CO2 exists throughout the atmospheric column, as does H2O, at varying concentrations. Go say hello to mr stratosphere. Then go look at “line broadening”.
Then come back. Or go study a LBL radiative transfer model. Or if you’re an engineer who has worked with radiative physics, just run the code you use every day to design sensors or missiles or airplanes that have IR stealth. We all know how important CO2 is to the transfer of radiation in the real world. Ya, it warms the planet. Definitely doesn’t COOL the planet. Warms it. The question is how much. Read your Lindzen, Spencer, Christy, Willis, McIntyre, Monckton, etc etc. Yup. Warms the planet. How much? That’s the real question.

Ben
July 27, 2010 10:11 am

Looking at the GISS map at the top of this page, is there a way to produce a map of 1200 km smoothing with Death Valley assigned temps from Portland, Oregon and Monaco given temps from Birmingham, England, etc? I’m just curious what that would look like. It would probably make me uncomfortably cold all year long.

max
July 27, 2010 10:45 am

So it raises the question: what % of all the warming in the last 100 years comes from within these GISS 1200 km smoothing areas?

frank
July 27, 2010 10:45 am

David Jay says: July 26, 2010 at 6:45 pm
I understand the loss of the cool spot in Africa, averaging (smearing?) should move temperatures away from extremes. However, the hot spot in Brazil is a winner. I want to hear an explanation of that methodology!
A possible explanation: let’s suppose a large number of stations were added to Brazil between 1880 and 1920, and that these stations show strong enough warming from 1920-2010 to deserve bright red color. These new stations don’t show up in the 1880-2010 data at 250 km resolution because the first 40 years of data are missing. At 1200 km resolution, we have data for many of these grid cells extrapolated from 250-1200 km away for 1880-1920. This allows us to combine extrapolated data for 1880-1920 with “real” data for 1920-2010 and see the red color that would be present on a 1920-2010 plot. The question is: will Steve Goddard present data for any period besides 1880-2010, so that we can see if the mysterious red color arises from real data or extrapolated data?
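To see how that could work mechanically, a toy sketch (Python, invented numbers): a station whose first 40 years are filled from a distant neighbour can end up with a 130-year trend quite different from what its own 90 years of data show.

    import numpy as np

    years = np.arange(1880, 2010)
    # Real record starts in 1920 and warms about 0.9 C over 90 years; earlier years are missing
    own = np.where(years >= 1920, 0.01 * (years - 1920), np.nan)
    # Distant station used to fill 1880-1919 sits colder and nearly flat
    neighbour = -0.5 + 0.002 * (years - 1880)
    filled = np.where(np.isnan(own), neighbour, own)

    def trend(series, yrs):
        return np.polyfit(yrs, series, 1)[0] * 100   # deg C per century

    print("trend of the real 1920-2009 record:  ", round(trend(own[years >= 1920], years[years >= 1920]), 2))
    print("trend of the 1880-2009 filled record:", round(trend(filled, years), 2))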

July 27, 2010 10:54 am

Ben,
I am working on just such an analysis to show people how much missing data could ‘skew’ the results. First I have to finish the user guide.. maybe next week.
But you DON’T assign TEMPS, you assign ANOMALIES, that is, deviations from the norm.
For example, if Death Valley temps were constant, its anomaly would be ZERO.
If I use Death Valley anomalies to “fill in” the Arctic, I would input a ZERO anomaly to the Arctic. That is, I would assume that if Death Valley hadn’t warmed then it is safe to assume that the Arctic hadn’t warmed (if I choose to make that assumption).
When it comes to ‘infilling” I can
1. NOT infill
2. Infill the GLOBAL AVERAGE of all grids.
3. Infill using the closest grids.
4. Infill with the highest anomaly on the planet( worst case)
5. Infill with the lowest anomaly on the planet (best case)
then I can compare those 5 approaches (a toy version is sketched below). That gives me an idea of how important missing data could be.
That exercise might be instructive.
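A toy comparison of those strategies (Python; made-up anomalies on a 100-cell grid with 30% of the cells missing; option 3 is skipped because it needs real cell geometry, and option 2 necessarily equals option 1):

    import numpy as np

    rng = np.random.default_rng(4)
    anomalies = rng.normal(0.6, 0.4, 100)     # "true" anomaly in every cell
    obs = anomalies[:70]                      # only 70 cells are actually observed

    def with_infill(fill_value):
        return np.concatenate([obs, np.full(30, fill_value)]).mean()

    print("1. no infill (observed cells only):", round(obs.mean(), 2))
    print("2. infill with global average:     ", round(with_infill(obs.mean()), 2))
    print("4. infill with highest anomaly:    ", round(with_infill(obs.max()), 2))
    print("5. infill with lowest anomaly:     ", round(with_infill(obs.min()), 2))
    # The spread between 4 and 5 brackets how much the missing 30% could possibly matter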
It’s about 50F in SF now. How warm is it where you are?

Contrarian
July 27, 2010 11:00 am

Steve Mosher wrote,
“If 60-70N saw a positive trend of 1C, would you expect 70-90N to see
a higher trend? lower trend? or the same trend.”
I’ll tackle that one. You should answer “NA” unless you have *data* showing a reasonably strong correlation between trends from 60-70N and 70-90N over a different period of time. By hypothesis, there is no other period with data. So you answer, “NA” —not available.

A C Osborn
July 27, 2010 11:04 am

Steven Mosher says:
July 27, 2010 at 8:37 am
Zeke Hausfather says:
July 27, 2010 at 7:42 am
So you guys are saying that E.M. Smith’s analysis of all of the raw data is wrong?
That Ken Stewart’s analysis of Australian data is wrong?
Don’t you find it odd that when other people look at the data they don’t get the same trends as you and GISS get?
Are you saying that they can’t do the analysis correctly?
If so have you pointed out their errors?

Spector
July 27, 2010 11:06 am

Piecewise Linear Data Approximation
Just for reference, I have noticed that it is possible to do a piecewise linear step approximation of the global ocean surface temperature anomaly data from January 1880 to June 2010 with just five linear segments and have a root-mean-square error of 0.0934 deg C. For this data set, that appears to be equivalent to the error obtained using a 15-term, power-series approximation.
The Microsoft Excel Solver tool was allowed to pick the initial value of the first segment, the interior segment date points (subject to manual pruning) and the slopes for each segment. Interior initial values were the calculated end-points of the previous segment. The Microsoft Excel Offset and Match functions were used to select the segment data applicable for each source date.
This representation appears to show a cyclic process having about a 30-year half period. I present this because I have not seen this technique used before to represent climate data. I hope it is useful.

Seg          Decimal            Initial      Slope
No.           Dates              Value      Deg/yr
 1    1880.042 -to- 1910.621    -0.0324    -0.01061
 2    1910.621 -to- 1941.127    -0.3569     0.01357
 3    1941.127 -to- 1971.831     0.0571    -0.00085
 4    1971.831 -to- 2004.208     0.0309     0.01309
 5    2004.208 -to- 2010.458     0.4548    -0.00049
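For readers who want to play with the fit, a small sketch (Python) that just evaluates the five segments tabulated above; it reproduces the reported piecewise line but does not redo the Solver optimisation:

    import numpy as np

    # Breakpoints, initial values and slopes copied from the table above
    breaks = np.array([1880.042, 1910.621, 1941.127, 1971.831, 2004.208, 2010.458])
    start = np.array([-0.0324, -0.3569, 0.0571, 0.0309, 0.4548])
    slope = np.array([-0.01061, 0.01357, -0.00085, 0.01309, -0.00049])

    def fitted_anomaly(t):
        # Pick the segment containing decimal year t and evaluate its straight line
        i = np.clip(np.searchsorted(breaks, t, side="right") - 1, 0, 4)
        return start[i] + slope[i] * (t - breaks[i])

    for year in (1880.05, 1900.0, 1950.0, 2000.0, 2010.0):
        print(year, round(float(fitted_anomaly(year)), 3))
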
July 27, 2010 11:15 am

wayne:
“You see, the effect from the minuscule increase in CO2 is not what we have seen at most if water molecules have not decreased by the 400 less 270 difference to compensate, then there would have been no affect from Co2 increase. It was the sun, Steven, the sun. I’m surprised you have eaten some of the figments being passed around. 🙂
Your big mistake, you leave out the 20000 per million water vapor molecules in your calculations. What happens to CO2 molecules also happens to water vapor molecules, there are just a huge amount more water vapor molecules.”
You clearly don’t understand the physics of radiative transfer, the windows through which IR can pass and those through which it cannot. You too need to meet mr. stratosphere.
Here is a question. If you were building an airplane to fly at 50K feet, and you wanted that plane to be INVISIBLE to IR sensors on the ground, you would want to know how the IR energy that plane gave off was TRANSMITTED through the atmosphere. The plane has ‘hot parts’, a gas plume, and heating due to aero friction (called aero heating). Each of these heat sources radiates at different wavelengths (more or less).
So, you have this heat source (IR) in the sky. can you see it at the ground? can a stinger missile “see” it. can a SAM missile see it? How much IR energy makes it through the atmosphere?
is the IR “blocked?” yup. but some gets through. How much gets “blocked”. It depends what is “in the way.” Did the crazy engineers who designed this baby
http://en.wikipedia.org/wiki/Northrop_YF-23
have to understand, predict, and verify how IR transmits through the atmosphere?
YUP. yup, we sure did. What tools did we use? RTEs, radiative transfer equations.
Did the physics underlying those tools predict accurately? yup. Did those physics say that CO2 would block some of the IR? yup. Was it altitude dependent? yup. Are those same physics the ones that underlie the AGW theory? yup. Are the models they use today much better than the classified ones we used in the early 80s? yup.
Did we fly the plane and check our predictions? yup. Were they accurate? yup.
did we use the same tools on the B2? yup. CO2 has an effect. been there, done that.
is H2O more important? yup. Does that make AGW false? nope. It’s part of the theory.

A C Osborn
July 27, 2010 11:15 am

frank says:
July 27, 2010 at 10:45 am
It doesn’t look like it.
http://chiefio.files.wordpress.com/2010/03/brazil_full_hair.png

James Sexton
July 27, 2010 11:18 am

Unbelievable conversation. Steve Goddard, keep plugging away at it. Eventually, the rest will see the truth in what you are stating.
To the GISS rationalizers: we can’t extrapolate or assume things of which we have no experience. For instance, I know fire is hot, not because each time I see a fire I experience its heat, but rather because I’ve experienced fire’s heat on many occasions in many different manners, so I assume all fires operate in a similar fashion because of my experience. No one really knows if we can extrapolate 1200 km in Africa or not. Why? Because we’ve never observed, or experienced, the temperature anomalies there compared to where we do have thermometers. For instance, we don’t know how to figure the temp anomaly in Tanzania from thermometer anomaly data in Capetown. Maybe one can, maybe one can’t. We don’t know, because we’re not currently measuring the temp anomaly in Tanzania. Let’s work that out in a math formula. Let’s say, just for argument’s sake, the current average yearly temp in Capetown is 16C and that the anomaly is +2. So, the equation would look like 16(x)=2. The current average temp in Tanzania is ?, well, we don’t know, so we’ll call it Y. So, the anomaly is……….well, we don’t know, because we couldn’t possibly know how it relates to the anomaly in Capetown. I’m glad you guys aren’t accountants. Someone show me the formula for knowing something never measured, specific to the composition, time and proximity.
One thing we do know is that while we can average temperatures for the globe, the heat isn’t evenly distributed; we just don’t know how or why. (We know some things, but we don’t know many things.) But we (the GISS) are basically performing an average distribution function to determine specifically where the distribution of the heat is going. I don’t understand how you guys and gals can’t see that. Someone send me the extrapolating formula and I’ll destroy it in less than a day.
“It’s getting hotter where we don’t measure temps.” Brilliant and beautiful. With apologies to the late Pierre Bosquet, “It is magnificent, but it is not war science: it is madness.”

frank
July 27, 2010 11:20 am

The fundamental question is: “How much error does extrapolation over 1200 km introduce?” Conventional wisdom asserts that there is a strong enough correlation between temperature anomalies at stations within 1200 km that useful information can be obtained from sites that far away. When I look at the real data presented in Hansen and Lebedeff (http://pubs.giss.nasa.gov/docs/1987/1987_Hansen_Lebedeff.pdf), the situation isn’t as simple as I thought. In the tropics, correlation is weak (0.8 to <0) even over distances of 250 miles. In the Arctic, the correlation between stations that are about 1000 km apart ranges from 0.8 to 0.3. What causes this variation? Are the better correlations only seen between inland stations, and worse correlations between coastal and inland stations? Are coastal stations well correlated only when they are influenced by the same ocean currents, and poorly correlated when this isn’t true (for example, different coasts of Greenland, or locations strongly and not strongly influenced by the MOC/Gulf Stream)? Is the good correlation in the Arctic mostly present during seasons when sea ice is melting or forming and temperatures are constrained by phase change to be near 0 degC? If a more refined set of rules about which situations yield useful correlations could be abstracted from the data, we might find that the red areas in the Arctic would shrink dramatically.
J. Box, Int. J. Climatol. 22: 1829 – 1847 (2002) claims that temperatures on the East and West Coasts of Greenland are not correlated. He has a lot of other useful info on Greenland temperatures. http://www.astro.uu.nl/~werkhvn/study/Y3_05_06/data/ex6/gl.pdf shows that

Frank K.
July 27, 2010 11:25 am

“When you compute a global average, no explicit interpolation is necessary. You can interpolate points and then add them if you want but the summed result is still just a combination of the data points – just with different weighting. Where points are sparse, you’re just regarding them as representing a larger area. That increases the error range.”
I suppose this is true if you don’t care what the final average is. If you’re interested in the numerical value of the average itself, then the “weighting” becomes quite important…
In any case, the GISS “reference station” method is an ad hoc approach to deriving a thermodynamically meaningless anomaly index. Given the large uncertainty associated with the spatial correlation of the anomalies (see the original Hansen paper), it is amusing to see people talk about the “highest index value ever!” as if we really know these values within +/- 1 C.
By the way, since the anomalies are calculated relative to a local temporal average (using a predefined reference period), has anyone generated a spatial map of these averages? I’d be interested to see what the “ideal” reference temperatures are for different locations. For example, if the anomaly for 2009 for Chicago was +0.5 C, what is the underlying average that this anomaly is referenced to? 15 C? I wonder how different (or not) this is to, say, Detroit or Indianapolis.

July 27, 2010 11:26 am

Steve M. from TN: Regarding your July 27, 2010 at 9:24 am paraphrasing:
Nope. The 250km and 1200km smoothing has already been performed before the trends are analysed.

Gail Combs
July 27, 2010 11:27 am

RW says:
July 27, 2010 at 4:24 am
“It uses 1200 km smoothing, a technique which allows them to generate data where they have none – based on the idea that temperatures don’t vary much over 1200 km”
That’s incorrect. It is observed that there is a correlation between temperature anomalies at widely spaced locations….
….I got hold of weather station data from Montreal and Washington, choosing the station from each which had the longest record. I calculated the mean January temperature, and then subtracted it from the series, to convert it from absolute temperature to temperature anomaly. I calculated the correlation between the anomalies at the two locations. I found a Pearson coefficient of 0.75, which implies a significant correlation.
_____________________________________________________________
“Pearson correlation coefficient is largely used in economics and social sciences…..” Pearson Coefficient in Analyzing Social and Economical phenomena.
Assumptions in calculating the Pearson’s correlation coefficient:
“1. Independent of case: In Pearson’s correlation of coefficient, cases should be independent to each other.
2. Distribution: In Pearson’s correlation coefficient, variables of the correlation should be normally distributed.
3. Cause and effect relationship: In Pearson’s correlation coefficient, there should be a cause and effect relationship between the correlation variables.
4. Linear relationship: In Pearson’s correlation coefficient, two variables should be linearly related to each other, or if we plot the value of variables on a scatter diagram, it should yield a straight line.”

I do not see why anyone would be using Pearson’s correlation coefficient on temperature data. I do not think the data meets the criteria.
Independent of case:
I am taking it that “cases should be independent to each other” means “Two events are independent if the occurrence of one of the events gives us no information about whether or not the other event will occur; that is, the events have no influence on each other.” http://www.stats.gla.ac.uk/steps/glossary/probability.html#indepevents
Since storm systems sweep whole continents how could the two data sets (cases) be considered independent of each other?
Distribution:
I cannot see how temperature data over a year could be considered normally distributed. I would think it is bimodal (winter/summer).
Cause and effect relationship:
How does the weather in Washington DC cause the weather in Montreal?
Untrained people keep plugging numbers into “statistical programs” without understanding the underlying principles. It is the main reason I hate the flavor of the month programs that supposedly teach managers and scientists statistics and actually muck things up instead.
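For what it is worth, the calculation RW describes (January means, subtract the station’s own mean, correlate) is easy enough to reproduce; whether Pearson’s r is the right tool is exactly the point in dispute. A minimal sketch in Python, with hypothetical file names:

    import pandas as pd
    from scipy.stats import pearsonr

    # Hypothetical files of mean January temperature by year for each station
    montreal = pd.read_csv("montreal_jan.csv", index_col="year")["temp_c"]
    washington = pd.read_csv("washington_jan.csv", index_col="year")["temp_c"]

    # Convert to anomalies by removing each station's own long-term January mean
    m_anom = montreal - montreal.mean()
    w_anom = washington - washington.mean()

    # Align on common years and compute Pearson's r
    common = m_anom.index.intersection(w_anom.index)
    r, p = pearsonr(m_anom[common], w_anom[common])
    print(r, p)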

JAE
July 27, 2010 11:34 am

It is so comforting to know that our premiere space agency is so capable.

Billy Liar
July 27, 2010 11:39 am

richard telford says:
July 27, 2010 at 2:01 am
‘It would not be hard to write a useful post, to test if the interpolation of anomalies to 1200km have any skill. It would require only a modicum of coding and statistical nous.’
I look forward to reading your post.

Ben
July 27, 2010 11:46 am

Hi Steve,
I am in Portland, Oregon. It is now 70 F and at the time of your reply it was about 68 F at my house. Thank you for your clarification regarding ANOMALIES vs TEMPS.

peterhodges
July 27, 2010 11:51 am

Scott Ramsdell says:
July 26, 2010 at 9:39 pm
How do such systemic problems with data manipulation/insertion persist across so many administrations? The heads of NOAA and NASA are political appointees… It’s incredible to me that the status quo has existed for so long…

BINGO
Because there is no difference between administrations. People need to wake up and realise there is only one party in this country: the debt and war party.
As long as folks continue to vote repuglicrat, there will be only the continuing slide towards totalitarianism in this country.

Steve M. from TN
July 27, 2010 11:57 am

Bob Tisdale says:
Nope. The 250km and 1200km smoothing has already been performed before the trends are analysed.
So I’m missing how they fill in the missing data. OK, I think I see why they don’t use trends. Back to the drawing board:
Grid X has less than 66% data, so it reaches out 250km or 1200km looking for nearby grid squares with greater than 66% data. It finds 1 or 2 (whatever) and performs an algorithm to fill in the missing data? Or, more likely, the algorithm doesn’t fill in individual data points, but just calculates an anomaly?

PJB
July 27, 2010 11:57 am

I will duck as is appropriate.
When you deal with anomalies, they are relative to a given baseline (normal, average, median) over a SELECTED time period. (As nice as they may be for modelling and other considerations.)
At least temperatures are absolutes that do not depend on a specific period of reference.

Contrarian
July 27, 2010 12:04 pm

Zeke Hausfather wrote,
“If GISTemp (and others) are doing ‘all these manipulations and artificially bump temperatures up a few tenths of degree’, please suggest a way to calculate global temperature using the various raw station datasets available that you think would be ideal.”
There is no need to calculate a global temperature, nor any method for doing so. What we really wish to know is whether there is a global temperature trend, and its slope. We can do that via sampling — compute trends for those stations which have unbroken records of sufficient length. There is sufficient data for that in N America, parts of W Europe, Japan, Eastern Australia, and a few other places. If trends for those well-recorded areas agree closely, we can be confident there is a global trend.
(Appreciated your temp reconstruction, Zeke, and the others also. Good work. Now someone needs to construct some trends as above, from original (non-adjusted) CLIMAT and similar records).

Gail Combs
July 27, 2010 12:04 pm

frank says:
July 27, 2010 at 10:45 am
David Jay says: July 26, 2010 at 6:45 pm
I understand the loss of the cool spot in Africa, averaging (smearing?) should move temperatures away from extremes. However, the hot spot in Brazil is a winner. I want to hear an explanation of that methodology!
…. The question is will Steve Goddard present data for any period besides 1880-2010 so that we can see if the mysterious red color arises from real data or extrapolated data.
______________________________________________________________________
Steve does not have to, because ChiefIO already has:
http://chiefio.wordpress.com/2009/11/16/ghcn-south-america-andes-what-andes/
http://chiefio.wordpress.com/2010/01/08/ghcn-gistemp-interactions-the-bolivia-effect/

wayne
July 27, 2010 12:14 pm

Steven Mosher says:
July 27, 2010 at 10:07 am
you should meet mr stratosphere. I don’t think you understand; using a % tells you very little. CO2 exists throughout the atmospheric column, as does H2O, at varying concentrations. Go say hello to mr stratosphere. Then go look at “line broadening”.
Then come back. Or go study an LBL radiative transfer model. Or if you’re an engineer who has worked with radiative physics, just run the code you use every day to design sensors or missiles or airplanes that have IR stealth. We all know how important CO2 is to the transfer of radiation in the real world. Ya, it warms the planet. Definitely doesn’t COOL the planet. Warms it. Question is how much. Read your Lindzen, Spencer, Christy, Willis, McIntyre, Monckton, etc. etc. Yup. Warms the planet. How much? That’s the real question.
~~~~~
Please don’t bring up the “authority” bit, Stephen. I’m aware of broadening. I respect all of those men you list, as I do you, but I don’t have to agree with them on every statement they make, and I don’t. Dr. Spencer and Dr. Christy do some great work. Lindzen has his head planted firmly in reality and I’m thankful. Dr. Spencer’s work on population density is one of the best indications of what has happened, and of its scale, that I have seen. I said nothing of COOLING. Do I rule out cooling, overridden by the sun’s increase over the decades? No, not in a bottle, but in the atmosphere as a system. Cool the stratosphere? Yes, it seems it does, lacking much water vapor there. Cool or warm the troposphere? I haven’t seen anything conclusive either way. CO2 can absorb and radiate in bands, especially LW, sure. That still doesn’t settle its interaction in the atmosphere either way. Warm in the lab, in a bottle, warm it does. Is MODTRAN a perfect model at the scale of the atmosphere? I don’t think so. But for you and me to have a serious discussion of CO2’s effect on the atmosphere as a whole and on the climate system, you would have to get off your absolute, unbounded belief.
But you can’t get away from the H2O to CO2 ratio. Explain that and I will listen. Just don’t go off on tangents to try to get me to ‘believe’. Most scientists have been wrong on gray subjects over history, and I keep in mind that most scientists are probably wrong on this one too, maybe both sides, in the details of the cause and effect.
Oh, a rough % tells me wonders in physics, many times. Maybe you can take an LBL radiative transfer model and explain how CO2 is not just a small percentage relative to water vapor in concentration and effect in the real atmosphere. I see your point: if everything stays still, then…, but it doesn’t stay still. Factors in the atmosphere are always changing, and changing only for reasons rooted in physics, always working to maintain equilibrium as far as the system can, again by physics. No mystics.
I’ll give CO2 2% of the warming, plus a bit for the logarithmic nature of all GHGs; right now that is all. I seriously think the balance was the sun over these decades, coinciding with the CO2 rise. If all climates moderate due to the sun’s pause in the next two decades, if the pause continues, we will know.

An Inquirer
July 27, 2010 12:28 pm

Gail Combs says: “It is observed that there is a correlation between temperature anomalies at widely spaced locations….”
The existence of correlation does not suggest that extrapolation over 1200 km or over 250 km is legitimate. If trend A is increasing @ 10% and trend B is increasing @ 1%, you will get an extremely high correlation, but that does not imply that we can toss out trend B and say that the overall trend is 10%.
Numerous studies have shown that temperature trends in urban areas are increasing faster than temperature trends in rural settings. (I would understand an argument that Arctic stations are not urban. However, we have seen some Arctic stations experience local siting issues leading to warming bias.) Moreover, temperature trends on land are increasing faster than trends over oceans, and GISS is using land stations to project temperature trends over oceans.
And back to the question of positive correlation — it is not always positive. My hometown has had anomalies decrease by .033 from the 1930s to the 1990s while Minneapolis (150 km away) has had anomalies increase by .013 over the same time.
(BTW, I noted that you used January temperatures in your Montreal / Washington correlation. Perhaps January was the easiest to compute, but it is curious that you used January rather than the year.)
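The correlation-versus-trend point is easy to demonstrate with synthetic data: two series can be almost perfectly correlated while trending at rates an order of magnitude apart. A rough sketch in Python:

    import numpy as np

    years = np.arange(100)
    noise = np.random.normal(0, 0.1, 100)

    a = 0.10 * years + noise   # trend of 0.10 per year
    b = 0.01 * years + noise   # trend of 0.01 per year, same year-to-year wiggles

    print(np.corrcoef(a, b)[0, 1])       # correlation close to 1.0
    print(np.polyfit(years, a, 1)[0],    # fitted slope ~0.10
          np.polyfit(years, b, 1)[0])    # fitted slope ~0.01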

July 27, 2010 12:32 pm

There is nothing linear about climate or weather.
Sometimes it is minus 20F in Boulder, fifty degrees warmer in the mountains, and seventy degrees warmer on the western slope.

July 27, 2010 12:35 pm

stevengoddard replied, “Your explanation is unpalatable.”
Not sure why. GISS presents how they prepare the trend maps on their mapmaking webpage. GISS writes, “Trends: Temperature change of a specified mean period over a specified time interval based on local linear trends.” And they further qualify it with, “’Trends’ are not reported unless >66% of the needed records are available.” Refer to:
http://i32.tinypic.com/jtkvpy.jpg
And that’s a screen cap from this page:
http://data.giss.nasa.gov/gistemp/maps/
You wrote, “The GISS maps say that grey areas represent missing data.”
Correct. They note this at the top of the map output page:
http://i25.tinypic.com/ngaky8.jpg
But for the trend maps you’ve presented in this post, it does not mean that monthly data does not exist in Africa, or Asia, or South America. It means those grey areas did not pass the requirement of 66% of the needed records in order to create a trend.
If you’re not aware, there is little 250km radius smoothing data in Africa, Asia, and South America for the year 1880:
http://i28.tinypic.com/o00wg1.jpg
But there’s considerably more on those continents in 1930:
http://i27.tinypic.com/70xogn.jpg
And there’s even more in 1980:
http://i26.tinypic.com/2iizfc3.jpg
In fact, in 1980, GISS doesn’t really need the 1200km smoothing to infill data in Africa, Asia, and South America.
And it’s well known that the number of stations has decreased in recent decades, so there is less data available for the 250km radius smoothing in Africa, Asia, and South America in 2009, for example:
http://i29.tinypic.com/2vbn689.jpg
So your statement in the post, “Similarly, GISS has essentially no 250 km 1880-2009 data in the interior of Africa, yet has managed to generate a detailed profile across the entire continent for that same time period,” is incorrect. 250km radius smoothing data does exist on a monthly basis between 1880 and 2009 in Africa, Asia, and South America.
In an earlier comment, you replied, “Why don’t you write up a separate article about the issues you find interesting? “
I do and sometimes I link them here at WUWT to posts where they are relevant, as my comments on this thread have been. Do you find this comment irrelevant, Steve?
http://wattsupwiththat.com/2010/07/26/giss-swiss-cheese/#comment-440748
You wrote, “The point of this article was to show that the GISS 1200 km smoothing is inconsistent with their 250 km smoothing…”
The GISS trends you presented in this post for 1200km radius smoothing are not inconsistent with the 250km radius smoothing. The trend maps present the available data in agreement with the GISS qualifications on their mapmaking webpage, which I have discussed previously.
You continued, “– and that they are claiming to know long term trends in places where there is little or no data.”
No. They aren’t claiming any such thing. They are presenting trends for areas that pass a specified threshold for data availability and the data availability varies depending on the smoothing radius. This is why you’re seeing different trends with the different smoothing.
Steven, in summary, I have shown that monthly data is available in Africa (and Asia and South America) in growing amounts until the mid-to-late 20th century, which contradicts your statement, “…GISS has essentially no 250 km 1880-2009 data in the interior of Africa…”
And I presented the threshold GISS uses to determine trends for the maps, which explains why “…they ‘disappeared’ a cold spot in what is now Zimbabwe…” when you presented the map with the greater smoothing radius.
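For readers trying to follow the 66% qualification quoted above, here is a rough sketch of the idea in Python; it illustrates the stated rule only, and is not GISS’s actual implementation (the function and its interface are made up):

    import numpy as np

    def local_trend(years, anomalies, start=1880, end=2009, threshold=0.66):
        """Return the linear trend (deg C per year) for a grid cell,
        or None if fewer than `threshold` of the needed years have data."""
        years = np.asarray(years)
        anomalies = np.asarray(anomalies, dtype=float)
        mask = (years >= start) & (years <= end) & ~np.isnan(anomalies)
        needed = end - start + 1
        if mask.sum() < threshold * needed:
            return None   # shown as grey ("missing") on the trend map
        slope, intercept = np.polyfit(years[mask], anomalies[mask], 1)
        return slope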

July 27, 2010 12:55 pm

Steve M. from TN says: “Grid X has less than 66% data, so it reaches out 250km or 1200km looking for close by grid squares with greater than 66% data. Finds 1 or 2 (whatever) and performs an algorithm to fill in the missing data? Or more likely the algorithm doesn’t fill in individual datum, but just calculates an anomaly?”
Nope. The 66% threshold of data availability only comes into play with the trends.
GISS provides a brief overview of the process they use to create their GISTEMP product under the heading Current Analysis Method here:
http://data.giss.nasa.gov/gistemp/
And they go into more technical detail here:
http://data.giss.nasa.gov/gistemp/sources/gistemp.html

Billy Liar
July 27, 2010 1:05 pm

Steven Mosher says:
July 27, 2010 at 8:28 am
‘If it warms by 1C over a century at location lat X, Lon Y, then the available data shows the following: It will also warm by 1C at position lat X2, Lon Y2. Go figure, heat moves.’
Gibberish.
Reductio ad absurdum – one thermometer would be enough. Where shall we put it?

Buffoon
July 27, 2010 1:30 pm

“For example, if death valley temps were constant its anomaly would be ZERO.”
//No, the derivative of its anomaly curve would be zero.
Its anomaly would be the distance of that constant temperature from the defined baseline. If the defined baseline is a lower temperature than death valley, death valley would show a constant high anomaly.
“That is, I would assume that if death valley hadn’t warmed then it is safe to assume that the arctic hadn’t warmed (if I choose to make that assumption).”
//I would assume that NYC has warmed, therefore so too has northeast Greenland ( if I chose to make that assumption,) and as you seem to imply, both would be reasonably human conclusions which are concurrently logical fantasies.
“When it comes to ‘infilling’ I can
//Note: You’re talking about infilling locations which *contain (unmeasured) data that could alter the data with which they are infilled.*
*I went to a local high school during basketball season to test your method of estimating pure error. Unfortunately, the basketball team is away now and again.
1. NOT infill
//Subsequently making a conclusion about a total area from some of its points is obviously assumptive and, depending on the dispersion of sampling locations, possibly terrible.
*I measured the heights of 1 student from 9th, 3 students from 10th, 5 students from 11th and 27 students from 12th grade and found an accurate mean height for the whole school.
2. Infill the GLOBAL AVERAGE of all grids.
//Making a conclusion about the contents of any area from the average of other areas also incompletely measured is obviously assumptive
*We measured the heights of all students in grades 9 and 11 and now accurately know the average height of students in 10th and 12th.
3. Infill using the closest grids.
//Making a conclusion about the coupling between adjacent systems is obviously assumptive.
*Cindy Lou Who, a student at Sample HS, is obviously approximately as tall as that guy next to her, Tommy “Treetop” Thompson, because, you know, they’re dating.
4. Infill with the highest anomaly on the planet( worst case)
//Making a conclusion that highest measured anomaly from baseline gives you information about the limitations of unmeasured regions is obviously erroneous.
*Johnny “Checkmate” Chang is the tallest boy on the chess team. Therefore, the whole basketball team must be shorter than John.
5. Infill with the lowest anomaly on the planet (best case)
//Making a conclusion that lowest measured anomaly from baseline gives you information about the limitations of unmeasured regions is obviously erroneous.
*Tommy Thompson is the smallest boy on the basketball team, therefore the rest of the boys in the school must be at least as tall as he.
Assuming unmeasured points could be outside of the scope of the measured points suggests no non-estimate method of error can be obtained for past data sets. Where did they get the error bar from??!?

Doubting Thomas
July 27, 2010 1:44 pm

To the scientists,
Atmospheric temperature is not a measure of atmospheric heat content. Water vapor plays a huge role. This afternoon in Orlando it was 95 deg. F with 44% relative humidity (RH) while at noon today in Phoenix it was 94 deg. F with 27% RH.
The temperature is nearly the same but the total heat content is quite different. Orlando had an enthalpy (total heat content) of 39.3 Btu/lb of air, while Phoenix had 32.7 Btu/lb. The air in Orlando was only 1 deg. F warmer but it had about 17% more heat per pound of atmospheric air than Phoenix.
To put it another way, if Phoenix had the same absolute humidity as it did at noon today, the temperature would have to be increased to about 125 deg. F before that air would have the same energy content as the Orlando air.
Given the above, how is it that temperature tells us anything useful about the heat content of the planet? Sure, Phoenix is usually dry and Orlando is usually humid, but sometimes Phoenix is wetter than normal and sometimes Orlando is drier than normal. It is certainly not true that average temperature at a given location always corresponds with, or correlates to, average total heat content.
For example, the average temperature on 7/22 in Phoenix was 90 deg. F and the average enthalpy was 34.0 Btu/lb but on 7/26 the average temperature was 95 deg. F and the average enthalpy was 33.6 Btu/lb. Total heat content was greater when the temperature was 5 deg. F cooler … no correlation there.
The AGW hypothesis says that increased CO2 will increase total heat by driving the biosphere to a new and higher heat-equilibrium point. I don’t understand how anyone can accept historical records of temperature, alone, as telling us anything about whether the heat content of the biosphere has increased or decreased.
Hopefully one of you can enlighten me on the subject.
dT
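The enthalpy figures above can be checked with a standard psychrometric approximation (sea-level pressure assumed); a minimal sketch in Python, which reproduces the quoted Orlando and Phoenix numbers to within rounding:

    import math

    def enthalpy_btu_per_lb(temp_f, rh_percent, pressure_kpa=101.325):
        """Approximate moist-air enthalpy (Btu per lb of dry air) from
        dry-bulb temperature (deg F) and relative humidity (%)."""
        t_c = (temp_f - 32.0) * 5.0 / 9.0
        # Magnus approximation for saturation vapour pressure (kPa)
        e_sat = 0.6112 * math.exp(17.62 * t_c / (243.12 + t_c))
        e = (rh_percent / 100.0) * e_sat
        w = 0.622 * e / (pressure_kpa - e)   # humidity ratio, lb water / lb dry air
        return 0.240 * temp_f + w * (1061.0 + 0.444 * temp_f)

    print(enthalpy_btu_per_lb(95, 44))   # Orlando: roughly 39-40 Btu/lb
    print(enthalpy_btu_per_lb(94, 27))   # Phoenix: roughly 33 Btu/lb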

Brego
July 27, 2010 1:45 pm

Re: Steven Mosher says:
July 27, 2010 at 11:15 am
Steven, I think you are being just a bit disingenuous here. Heat-seeking missiles utilize near infrared wavelengths, not thermal infrared.
http://www.ausairpower.net/TE-IR-Guidance.html
These same near infrared wavelengths are those utilized by infrared astronomers.
http://www.ipac.caltech.edu/Outreach/Edu/Windows/irwindows.html
High sky transparency and low sky emission are only found in the near infrared, but if clouds enter the picture then all bets are off. 🙂

July 27, 2010 1:47 pm

Bob Tisdale,
This video shows the GISS coverage holes for May 2010. Pink represents missing data. GISS has huge holes in Africa.
http://www.youtube.com/watch?v=NDm4_NwRzVU
When Hansen claims in December that 2010 was the warmest year on record – by 0.01 degrees, are you going to rush to his defense?

July 27, 2010 1:51 pm

Bob,
The GISS 250km 1880-2009 trend map shows “missing data” for almost the entire African interior. The fact that they added a few stations 70 years later does not change the fact that they do not have any 1880-2009 trend data for most of Africa.
This article is about long term trend claims by GISS and their lack of data to support it.

Gail Combs
July 27, 2010 2:02 pm

Here are a couple of station records from my home state, North Carolina:
Norfolk NC
Fayetteville NC
North – Raleigh NC
Or better yet, look at some of the stations in the Tennessee area, all within less than 200 km:
Hendersonville
Copperhill
Knoxville
Dahlonga
Walhalla
Rome
Middlesboro
Valleyhead
Rogersville
Greenville
Scottsboro
Pennington Gap
My Statistics teacher always said to PLOT the data. Well, here are the plots, and I do not see any patterns that justify saying the data from one area can be used to determine the trend in another area in this general location. The patterns of these station records are all over the place: lines up, lines down, broken lines at two different levels, sine waves trending up, sine waves flat. I do not care what type of fancy statistics you use. These plots of the actual data show that the hypothesis that you can use temperature records from one area to predict the temperature data from another area is falsified.
I do not have the computer skills to do the plotting on one graph. I already addressed the use of Pearson’s correlation coefficient in another comment.
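For the record, the overlay plot itself takes only a few lines; a minimal sketch in Python (the CSV file names and column names are placeholders for whatever GHCN extracts are at hand):

    import pandas as pd
    import matplotlib.pyplot as plt

    stations = ["hendersonville.csv", "knoxville.csv", "rome.csv"]  # placeholder files

    for name in stations:
        s = pd.read_csv(name)   # expects columns 'year' and 'temp_c'
        plt.plot(s["year"], s["temp_c"], label=name.replace(".csv", ""))

    plt.xlabel("Year")
    plt.ylabel("Annual mean temperature (deg C)")
    plt.legend()
    plt.show()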

July 27, 2010 2:07 pm

Billy Liar
You hit the nail on the head. The same folks who insist that hot temperatures in the 1930s were unique to the US, also want us to believe that a half dozen thermometers can accurately represent the whole planet.

bemused
July 27, 2010 2:27 pm

Stephen Goddard wrote:
“It was almost 100 degrees in Colorado yesterday. San Diego was 65 degrees.
Therefore, Death Valley must have been about 70 degrees yesterday afternoon, and Chicago must have been about 125 degrees.”

No Stephen, you are still confused about the difference between temperatures and temperature anomalies.
According to Accuweather (not sure how accurate their figures are), Denver had a high of 97F yesterday; the climatological mean maximum is about 89F. That is an anomaly of +8F. San Diego was 68F yesterday, with the climatological mean maximum about 77F, a -9F anomaly. Death Valley is located between the two, and so you might expect the anomaly to sit somewhere in the middle.
In fact, Death Valley had a max of 118F which is an anomaly of about +2F (climate mean max = 116F), i.e. +2F is in between -9F and +8F.
Of course, this isn’t a sensible thing to do for a single day. What GISS do is calculate it for monthly or annual means which will result in a far more robust correlation.

Z
July 27, 2010 2:41 pm

Similarly, GISS has essentially no 250 km 1880-2009 data in the interior of Africa, yet has managed to generate a detailed profile across the entire continent for that same time period. In the process of doing this, they “disappeared” a cold spot in what is now Zimbabwe.
Same story for Asia.
Same story for South America. Note how they moved a cold area from Argentina to Bolivia, and created an imaginary hot spot in Brazil.

Steve, come on now – you should know better than that.
Just as climate does not have any relationship to weather, 1200Km grid squares are not made up of 250Km grid squares put together. They are completely independent. Why should spatial aggregation be different to temporal aggregation?

It's Marcia
July 27, 2010 2:43 pm

This is an interesting comment thread. I want to see more like it!
On weather in the San Francisco area: it can be foggy, cloudy and 55F in San Francisco, and with a 40-minute drive east to a city called Walnut Creek (on the other side of the Oakland Hills from San Francisco) it can be bright, sunny and 97F. This has literally happened.

July 27, 2010 3:00 pm

Contrarian says: July 27, 2010 at 12:04 pm
Contrarian, the GHCN v2.mean data that most of us used does come directly from the CLIMAT forms, in modern practice. Historically other sources were used, but I believe there has been no attempt to adjust them.
But there has been another development – Ron Broberg has been processing the GSOD data, which comes directly from raw SYNOP data. He used that to calculate trends, and so did I.

Stephen Skinner
July 27, 2010 3:10 pm

“…based on the idea that temperatures don’t vary much over 1200 km. ”
Where does this idea come from? I was in Comogli, Italy one Winter and the temp was +3. In Milan, 120km away it was -5, and in Como, a further 50km away it was -20.

July 27, 2010 3:11 pm

MarkG: Comparing stations near Montreal – you need to use anomalies.
Here is a plot comparing Montreal Dorval and McGill (GHCN station data).
Here is Dorval, McGill plus Washington NA (GHCN data).
Here is a plot comparing the 5×5 grids containing Montreal and Washington DC (CRUTEM3 data).
Here is a plot comparing the 5×5 grids containing Birmingham and Monaco (CRUTEM3 data).
Regarding the 1200km smoothing – in many areas it will make no difference, but in many others it will make a difference (as is clear from the main posted article).

SteveSadlov
July 27, 2010 3:15 pm

Here in California, if you move 10 miles the temperature may be radically different. This is due to both the effects of elevation changes as well as our myriad microclimates.

MarkG
July 27, 2010 3:22 pm

“Of course, this isn’t a sensible thing to do for a single day. What GISS do is calculate it for monthly or annual means which will result in a far more robust correlation.”
One of the GISS graphs I posted shows a temperature change of around +1.5C from the 1940s to the 1970s. The other shows around -1C from the 1940s to the 1970s. That’s two temperature stations only a short distance apart, yet for an entire decade their ‘temperature anomaly’ is around 2.5C different.
How can anyone claim ‘robust correlation’ between widely separated areas when two stations only a short distance apart show a difference of 2.5C for pretty much an entire decade?
Either the data from one of those stations is garbage or the idea of correlation between stations is garbage. Neither option is good for the Warmers.

Malcolm Miller
July 27, 2010 3:58 pm

I have read everything on this post today and there is only one conclusion I can come to. It’s the same as that given by Sir Arthur Stanley Eddington, the astrophysicist, as his explanation of the universe: “Something unknown is doing we don’t know what.” Most of the arguments and explanations can be summed up in three words: “We don’t know.”

July 27, 2010 4:09 pm

MarkG
“One of the GISS graphs I posted shows a temperature change of around +1.5C from the 1940s to the 1970s. The other shows around -1C from the 1940s to the 1970s. That’s two temperature stations only a short distance apart, yet for an entire decade their ‘temperature anomaly’ is around 2.5C different.
How can anyone claim ‘robust correlation’ between widely separated areas when two stations only a short distance apart show a difference of 2.5C for pretty much an entire decade?”
The claim is supported by actually DOING the correlation study for a large number of stations (thousands). See the literature I cited above. If you want to discuss actual studies, then that would be great. To be sure, in a hundred-station sample you may find many with correlated positive trends and a few with a negative trend. The approach these methods use is to average all the stations within a given geographical area.
You have a station at position X,Y. Its trend is +1C. You have another at X2,Y2. Its trend is -1C. Estimate the trend at a point in between them? Well, when we allocate stations to grid cells, what we do is average all those within the cell. If the two points in question were the only ones in a grid cell, the cell would get an average of ZERO.
Depending on your gridding, you average stations from varying distances (cell size varies with latitude). GISS uses equal-area grids.
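A bare-bones sketch of that cell-averaging step in Python; the station list is invented, the binning is simple lat/lon rather than the equal-area cells GISS uses, and real analyses also weight by distance, so this only illustrates the averaging idea (including the two-opposite-trends case averaging to zero):

    import numpy as np

    # (lat, lon, trend in deg C/decade) for a handful of made-up stations
    stations = [(45.2, -73.6, 0.10), (45.9, -74.1, -0.10), (38.9, -77.0, 0.15)]

    cell_size = 5.0  # degrees; real analyses use equal-area cells
    cells = {}
    for lat, lon, trend in stations:
        key = (int(lat // cell_size), int(lon // cell_size))
        cells.setdefault(key, []).append(trend)

    # Each cell gets the simple average of the stations falling inside it
    cell_means = {k: np.mean(v) for k, v in cells.items()}

    # Area-weight cells by the cosine of their central latitude for a combined figure
    weights = [np.cos(np.radians((k[0] + 0.5) * cell_size)) for k in cell_means]
    combined = np.average(list(cell_means.values()), weights=weights)
    print(cell_means, combined)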

July 27, 2010 4:12 pm

SteveSadlov says:
July 27, 2010 at 3:15 pm (Edit)
Here in California, if you move 10 miles the temperature may be radically different. This is due to both the effects of elevation changes as well as our myriad microclimates.
Temperature doesn’t matter; trend matters. If it’s 100F in a place forever, the anomaly is zero. If it’s -100F 5 miles away and that temperature is also steady, ITS ANOMALY is zero as well. We don’t work in temperature. We work in anomaly: departure from the mean, the mean for that location over time. TREND differences matter; temperature differences don’t.

July 27, 2010 4:18 pm

Stephen Skinner says:
July 27, 2010 at 3:10 pm (Edit)
“…based on the idea that temperatures don’t vary much over 1200 km. ”
Where does this idea come from? I was in Comogli, Italy one Winter and the temp was +3. In Milan, 120km away it was -5, and in Como, a further 50km away it was -20.
The “idea” is the result of comparing ANOMALIES: changes in temp from the mean for that location.
If your house was 0C in 1900 and 1C now, that’s a 1C change.
If my house was 14C in 1900 and 15C now, that’s a 1C change.
Anomaly measures the change AT A LOCATION from its own local mean.
The desert goes from 50C to 51C: anomaly 1C.
The Arctic goes from -30C to -29C: a 1C change.
It’s not temperature that we deal with. NOT. It’s the change from the local mean. Anomaly.
It’s NOT TEMPERATURE changes over 1200 km. It’s the change in temperatures VERSUS their local mean. You can think of this as a trend change.

Doubting Thomas
July 27, 2010 4:19 pm

Steve Mosher: Would you comment on my post from 1:44 PM (above)?
dT

July 27, 2010 4:36 pm

bemused
Let’s do the math properly for your example. The Mojave Desert is 90% of the way from Denver to San Diego, so if we weight the interpolation correctly it should have been seven degrees below normal in Death Valley, not two degrees above normal. So you missed by nine degrees. That is an error 50 times greater than the precision which GISS reports.
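The arithmetic is plain linear interpolation along the Denver-San Diego line, using the quoted anomalies and the assumed 90% fraction; a one-line sketch in Python:

    denver_anom = 8.0       # deg F above normal (quoted figure)
    san_diego_anom = -9.0   # deg F below normal (quoted figure)
    fraction = 0.9          # assumed: Death Valley ~90% of the way to San Diego

    estimate = denver_anom + fraction * (san_diego_anom - denver_anom)
    print(estimate)   # about -7.3, i.e. roughly seven degrees below normal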

bemused
July 27, 2010 4:41 pm

MarkG:
“How can anyone claim ‘robust correlation’ between widely separated areas when two stations only a short distance apart show a difference of 2.5C for pretty much an entire decade?”
http://wep.fi/pic/1987_Hansen_Lebedeff.pdf
Look at Figure 3.
For stations separated by around 500km, the correlations between anomalies in the northern mid-latitudes are generally around 0.8 or above. Sure, there are a few station pairs which have low correlations even though they are close together, but the vast majority of stations show a good correlation.
Everyday experience also tells you this: if the summer was colder than average in Boston and Atlanta, then it was probably colder than average in Washington DC as well.

July 27, 2010 4:43 pm

Steven Mosher,
If you look at a typical temperature anomaly map, you usually see variations of many degrees over relatively small distances. It is nonsense to assume that the variations are linear.
In this Australian BOM map, you can see anomaly variations of 12C over a few hundred miles
ftp://ftp.bom.gov.au/anon/home/ncc/www/temperature/maxanom/daily/colour/latest.gif
GISS will most likely claim a record this year by one or two hundredths of a degree – using anomalies which may be off by several degrees in many locations.

bemused
July 27, 2010 5:00 pm

Steven Goddard:
“In this Australian BOM map, you can see anomaly variations of 12C over a few hundred miles
ftp://ftp.bom.gov.au/anon/home/ncc/www/temperature/maxanom/daily/colour/latest.gif

That is a daily anomaly map. When you average them over a full month, the variations become much smoother. e.g. see here:
ftp://ftp.bom.gov.au/anon/home/ncc/www/temperature/maxanom/month/colour/latest.gif
In this monthly anomaly map the maximum change in anomaly over 1000 km is more like 2-3 K.
In fact, this looks very similar to the GISS map over Australia (which uses the 1200km smoothing):
http://data.giss.nasa.gov/work/gistemp/NMAPS/tmp_GHCN_GISS_HR2SST_1200km_Anom06_2010_2010_1951_1980/GHCN_GISS_HR2SST_1200km_Anom06_2010_2010_1951_1980.gif

July 27, 2010 5:31 pm

Steven Goddard replied, “This video shows the GISS coverage holes for May, 2010 Pink represents missing data. GISS has huge holes in Africa.”
Are you attempting to redirect the discussion from trends in annual data to a snapshot of monthly data? That tactic doesn’t work with me, Steven.
You wrote, “When Hansen claims in December that 2010 was the warmest year on record – by 0.01 degrees, are you going to rush to his defense?”
Steven, this is another failed attempt at redirection. All the regular bloggers here at WUWT understand my position on AGW, and I replied to a similar accusation from you in my July 27, 2010 at 5:54 am reply above. Here’s a link:
http://wattsupwiththat.com/2010/07/26/giss-swiss-cheese/#comment-440841
In your next comment, you wrote, “The GISS 250km 1880-2009 trend map shows ‘missing data’ for almost the entire African interior.”
That’s correct. And I explained the reason for this numerous times in this thread.
But then you wrote, “The fact that they added a few stations 70 years later does not change the fact that they do not have any 1880-2009 trend data for most of Africa.”
Have you researched this dataset? They didn’t simply add a few stations 70 years later. As I showed in my July 27, 2010 at 12:35 pm comment, the GISTEMP African coverage was almost complete in 1980 with the 250km radius smoothing:
http://i26.tinypic.com/2iizfc3.jpg
It was nearly the same in 1970:
http://i30.tinypic.com/34ynvid.jpg
And in 1960:
http://i31.tinypic.com/2rpe0z6.jpg
So that’s a 20-year span with almost complete coverage of the African interior – and Asia, and South America.
And here’s a gif animation of the annual coverage in 1880, 1890, 1900, 1910, 1920, 1930, 1940, 1950, and 1960:
http://i27.tinypic.com/2pyoxo1.jpg
Sure does look like a gradual increase to me and not a simple addition of “a few stations 70 years later,” as you claimed.
You concluded with, “This article is about long term trend claims by GISS and their lack of data to support it.”
And as I discussed in my earlier reply to you…
http://wattsupwiththat.com/2010/07/26/giss-swiss-cheese/#comment-441123
…GISS does have data to support the trends they present. You may not agree with how they create it, but the data exists.
What part of this reply or the one that preceded it (July 27, 2010 at 12:35 pm) do you disagree with, Steven?

James Sexton
July 27, 2010 5:32 pm

A guy, during his diligent tracking of temperature, finally runs the numbers and realizes his temps are getting warmer. So much so, he can ascertain it has been warming at a trend of 1 degree C over the last 40 years. From that, he knows that 500 miles to the east of him the trend has been 0.5 degrees warmer than average……….. nope, not a chance, no way in hell one can come to that conclusion.
Steven Mosher says:
July 27, 2010 at 4:09 pm
“you have a station at position X,Y. Its trend is +1C. You have another at X2,Y2. Its trend is -1C. Estimate the trend at a point in between them?”
Yes, but the heat doesn’t distribute evenly, so it is thus only an estimate. Or a guess, if you will. One can correlate some parts of the world with other parts of the world, but it has been done by a process called observation. Does it not seem strange to you that the “observed” warming is in places we don’t “observe” anything? Given the way GISS presents its graphs, one has to really look at the fine print to see that these are just guesses. All the while, people take this to be literal. Our friends at GISS know this. PEOPLE IN CONGRESS ARE CONSIDERING LEGISLATION BASED ON THESE GUESSES, WHICH THEY ARE TAKING FOR PROVABLE FACTS! Even if these guesses are correct, it is wrong to present them in the manner in which it is being done.
Moreover, each area of the globe is unique. I can show you from various temp-gathering sources that the trend where I live is different than the trends of places much closer than 500 miles away, but then we’d get into siting issues and the like. But I’ll grant that if it is warmer here (in SE Kansas), it is typically warmer in Wichita and Topeka. That being said, I know this is a unique place in the world, of which there is no other. I know this because I’ve traveled much of the globe and tried to learn as much as I can about the rest. About 500 miles to the west of me sit the Rocky Mountains. There, the weather and currents do some strange things. You can probably talk to Anthony about this phenomenon and be much better informed than by reading this, but it is real just the same. Weather currents and patterns die or develop there. Sometimes they move in a seemingly tangential manner. The fact is, sometimes I can gauge what’s coming to us from the weather in Washington state; other times I can’t. I lived in Fairbanks, Alaska (Ft. Wainwright) for 4 years. As you can imagine, weather and climate dominated much of my thought and concerns. My experience was that we got weather from all directions in no discernible pattern. I’m not saying there wasn’t one; I just couldn’t discern it. My recollection of Germany (BK) is that it was fairly mild there, constantly. Yeh, we got snow, but not too much. Yeh, we had warm days, but they weren’t hot like Kansas. Now go any direction 500 miles from BK, Germany and tell me you can extrapolate the anomaly: not a chance! One can’t sit in Germany and say it’s cold here, so it’ll be real pleasant in Rome. Or, it’s warm here, I’ll go to Edinburgh to cool off. You can do that sometimes, but I wouldn’t pack my suitcase betting it to be true. Now, given the differences in the way each location I described operates, what model are they using for Brazil? Africa? Central Asia? And why? How did they come to the conclusion that because climate behaves in a certain manner in a couple of similar locations, they could come to an understanding of how climate operates in totally different locations? Especially seeing that we don’t measure the temps? Steven, you’re a bright guy and I respect your opinions, but this isn’t defensible. If we don’t have accurate weather readings from these remote places, then we have no way of knowing how the climate acts in those locations. I suspect the plains of Asia west of the Urals act differently than the jungles of the Amazon or the sierras of Africa (as to how they affect surrounding areas). And I didn’t even go into the Antarctic. A one-size-fits-all 1200 km radius probably isn’t the best way to extrapolate temp anomalies for the globe, unless one wants to purposefully create alarmist scenarios.

Frank K.
July 27, 2010 5:36 pm

http://wep.fi/pic/1987_Hansen_Lebedeff.pdf
Look at Figure 3.
“For stations separated by around 500km the correlations between anomalies in the northern mid-latitudes are generally above around 0.8. Sure, there are a few station pairs which have low correlations even though they are close together, but the vast majority of stations show a good correlation.”
I have studied this paper in the past, particularly Figure 3, and the ONLY region where the correlation is any good is 44.4 N and above. Everywhere else, especially the Southern Hemisphere, it’s marginal or complete crud! But hey, it’s climate science – anything goes…

July 27, 2010 5:41 pm

Bob Tisdale,
You do realize that a trend involves data at both ends, as well as the middle.
GISS is lacking data in Africa from the start of the period. They are also lacking data at the end of the period (present.)
I am quite surprised that you don’t seem to understand this rather basic concept.

Doubting Thomas
July 27, 2010 5:42 pm

Bemused:
Would you comment on my post of 1:44 PM (above)? I can’t understand why everyone thinks temperature, alone, tells us anything useful about the planet’s energy budget.
– dT

HAS
July 27, 2010 5:42 pm

Coming back to this I do still think that most of this debate would go away if rigorous error terms were reported for the statistical estimates and relationships being bandied about.
A simple example is the nonsense about the number of points of measurement. If global surface temperature is defined as the integral of temperature over every point on the globe divided by the area of the globe, then a sample of one gives you a very large error, and the error reduces as the number of measurements increases. So more thermometers are obviously better.
Also interpolating data points using no more than actual measurements gives you no more information than you had to begin with. It is a waste of time.
Interpolation only helps if new information is being added, through deriving the additional temperature estimates from observations of other phenomena related to temperature and/or from relationships between temperatures. However, the uncertainty in those derived estimates needs to be reported and is in addition to the uncertainty in the actual temperature observations. Quite a bit of the discussion has been generated because of a lack of transparency over what those models are (and, in my case, the errors implicit in them).
In these circumstances it is quite likely that diminishing returns set in – adding more temperature estimates reduces the certainty of the estimate of global temperature.
I therefore worry that I don’t see much that is easily available that actually addresses these latter errors. In a quick squiz through this thread I see the odd comment about the appropriate statistics to use, but the only reference to error calculations in the literature is Steven Mosher’s reference to Caesar et al (2006), “Large-scale changes in observed daily maximum and minimum temperatures: Creation and analysis of a new gridded data set”. If I read this correctly, it describes a method for calculating errors at points where the temperature has been measured, but is silent on the errors elsewhere (although it perhaps carries the implication that the reported errors can be extended to points where there is no data).
A final point on the importance of errors. The poor person in the desert will happily follow your estimates if you have a reputation for good estimates and ignore you if you don’t.
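The first point, that sampling error shrinks as the number of measurements grows, can at least be illustrated for the idealised case of independent measurements; a rough Monte Carlo sketch in Python with a synthetic field, which says nothing about the harder question of errors in interpolated values:

    import numpy as np

    rng = np.random.default_rng(0)
    true_field = rng.normal(0.0, 1.0, 100000)   # synthetic "point anomalies"
    true_mean = true_field.mean()

    for n in (10, 100, 1000, 10000):
        errors = [abs(rng.choice(true_field, n).mean() - true_mean)
                  for _ in range(200)]
        print(n, np.mean(errors))   # error shrinks roughly as 1/sqrt(n)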

July 27, 2010 5:43 pm

Some people in this discussion seem satisfied with the idea that interpolation sorta, kinda maybe works sometimes within a few degrees.
We are talking about a global temperature measurement reported within one one hundredth of a degree. It is ludicrous.

Bill Illis
July 27, 2010 5:46 pm

The original Hansen and Lebedeff 1987 paper showed that the correlation drops to 0.5 at 1,200 kms for all locations and there are a few latitude bands that are even worse than that.
For those of you using “annual” data or some smoothed annual data routine, the GISS 1,200 km smooth happens at the “monthly anomaly” stage, so you are not measuring what is actually occurring here.
The problem is two-fold: in earlier times, there were not enough stations, so a longer smooth is probably appropriate. In the most recent periods, GISS, NCDC, and Hadley/CRU are just being lazy / picking warming stations, so that there is not adequate coverage.
One would have to assume there is not a weather station in all of Africa or all of central South America. I can guarantee you there are well-run, dedicated, and model weather stations available. “Nobody in Africa cares about the weather?” Sorry, it is fundamental human nature; people want to know, and 5% of people are obsessed with it. There are thermometers everywhere.
In regards to the Alert weather station, well, there is a state-of-the-art weather station at Eureka (not far away) that is staffed by as many as 5 professional scientists at any one time (and has been since 1947), so there is no need for an Alert station too. The air base is not really used any more either.

Spector
July 27, 2010 5:55 pm

I think the obvious solution to this problem is to have more automated stations all over the world. Perhaps special designs should be developed for installation on top of sea ice that may melt and on land-locked glacial ice. Of course, it may not be possible to have unattended stations in areas where they might easily be stolen or vandalized.
I think that all the data points for these global averages should be given the same weight, so that extrapolation over unknown regions does not give some readings an undue weight. I do not think we can reliably project the boundary conditions of an unmeasured region into the interior unless we really do know all the factors forcing the interior region’s weather. This should be obvious.

jaymam
July 27, 2010 5:58 pm

Are the three diagrams at the top of this post created using the same GISS data?
If they are, everybody can forget about all the models and your theories and everything else climate-related that they are doing. Because the graph must be false, and that alone is enough to demolish the AGW myth.
In the world maps labelled 250 km smoothing and 1200 km smoothing, Change 1880-2009, most change around the world is under 1 degree. The maximum change anywhere is 4 degrees, and there are few places like that. (e.g. Canada and Siberia, I’ll bet they are delighted!)
How does that reconcile with the graph labelled Global Land-Ocean Temperature Index which shows an average rise around the world of 8 degrees C over 100 years?
This is the most important topic that sceptics should concentrate on.

July 27, 2010 6:06 pm

jaymam wrote, “How does that reconcile with the graph labelled Global Land-Ocean Temperature Index which shows an average rise around the world of 8 degrees C over 100 years?”
Please look at the graph again, jaymam. The rise is approximately 0.8 deg C, not 8 deg C.
Regards

July 27, 2010 6:29 pm

And this is why I plan to climb Kilimanjaro this winter! Gotta see the glaciers up there before they melt! http://www.offtrackbackpacking.com

trbixler
July 27, 2010 6:59 pm

It makes sense to me to use Death Valley and Palm Springs as my GISS stations of preference for California. Kind of keeps the trends running in the right direction.

DR
July 27, 2010 7:03 pm

I’ll trust John Christy’s research, and that of others who actually do field work, on the effects of land-use change that affect not only urban but “rural” thermometers as well.
http://climateclips.com/archives/212
Also, as Roy Spencer has pointed out, there is an amplification of LT temperatures during El Nino events relative to the surface. In 2010, GISS appears to have completely wiped out that relationship and then some.
Unless and until this mess is sorted out, I find it difficult for anyone to legitimately defend not only GISS but the entire surface station network. Replication of error is still error nonetheless.

July 27, 2010 7:15 pm

stevengoddard says: “You do realize that a trend involves data at both ends, as well as the middle.” And you continued, “GISS is lacking data in Africa from the start of the period. They are also lacking data at the end of the period (present.)”
Let’s see, you object to the trends being presented in the GISTEMP product with 1200km radius smoothing, and for that dataset, here’s the map of annual anomalies for 1880:
http://i27.tinypic.com/2ll1zzb.jpg
And here’s 2009:
http://i28.tinypic.com/1zh3121.jpg
The African coverage may be incomplete in 1880, but again, Steven, GISS writes, “Trends: Temperature change of a specified mean period over a specified time interval based on local linear trends.” And they further qualify it with, “’Trends’ are not reported unless >66% of the needed records are available.”
With that in mind, here’s a gif animation of the maps of the GISTEMP annual temperature anomalies with 1200km radius smoothing, at 20 year intervals from 1890 to 1990. Note how the data in the African interior is almost complete by 1910:
http://i26.tinypic.com/16hojv5.jpg
So apparently GISS has greater than 66% of the needed records; otherwise, they don’t print the trend. Do you have any data or documentation that shows that GISS is presenting trends in areas where they have less than their 66% threshold for the dataset with the 1200km radius smoothing?

DR
July 27, 2010 7:23 pm

Further, RPS has done considerable work in this area.
From his weblog in January:
NASA GISS Inaccurate Press Release On The Surface Temperature Trend Data

My comments below remain unchanged. Readers will note that Jim Hansen does not cite or comment on any of the substantive unresolved uncertainties and systematic warm bias that we report on in our papers. They only report on their research papers. This is a clear example of ignoring peer reviewed studies which conflict with one’s conclusions.

Have GISS apologists addressed the issues outlined by Pielke et al.? If so, where? Certainly not in this thread.

July 27, 2010 7:35 pm

Bob,
Are you trying to make an argument that GISS 250 km maps are incorrect? They clearly show large regions of “no data.”

Doubting Thomas
July 27, 2010 7:53 pm

I give up. Everyone please continue to ignore my post from 1:44 PM. It’s now so far up the page that you would need oxygen tanks and a team of Sherpa to climb back up to it.
– dT

James Sexton
July 27, 2010 7:54 pm

stevengoddard says:
July 27, 2010 at 5:43 pm
“Some people in this discussion seem satisfied with the idea that interpolation sorta, kinda maybe works sometimes within a few degrees.
We are talking about a global temperature measurement reported within one one hundredth of a degree. It is ludicrous.”
Yep, keep hammering!
How most people come to understand temperature anomaly: We gather average observed temperatures for specific periods of time (T). We see that in one period of time the average temperature was N. In a later period of time we observe the average temperature was M. We note the difference between N and M. We see that the difference is P (N-M=P). So the anomaly (Z) = P/T.
The way GISS understands anomalies: Gather average observed temperatures for specific periods of time (T) specific to one location. See that in one period of time the average temperature was N, specific to one location. In a later period of time, observe the average temperature was M, specific to one location. They note the difference between N and M, specific to one location.
Then, they realize there are other locations, only they haven’t gathered the observed temperatures. So they then state: if observed temperatures at this point and time here were N, then over there it must have been iA! And now, if the temperatures here are M, then over there it must be iB! So the difference for there is iA-iB, which = iC, and iC/T is the anomaly (Z) for there! So the total anomaly is Z + iZ/2! Which, of course, = iZ.

James Sexton
July 27, 2010 8:04 pm

I forgot to add, please for the love of every thing holy, take that to your high school algebra teacher or your high school science teacher and have them explain why I’m wrong. I’m wrong on several levels, but when it is explained how I’m wrong, you should come to an understanding as to why GISS is wrong………..on several levels…….sophomoric math and science. How much bastardization of the (once) hard sciences can we stomach? Are we going to continually lower the standards in public education just so we pass this BS off as science? DAMMIT my grandchildren go to public schools!! What’s wrong with you people?

Frank K.
July 27, 2010 8:30 pm

Doubting Thomas says:
July 27, 2010 at 7:53 pm
Hey dT. I looked at your post above and of course you are correct. The global temperature anomaly product of the GISS is merely an ad hoc index, and is thermodynamically meaningless. However, this does not prevent them from saying things like the earth is the “hottest” it’s ever been in 120 years of recorded history!
Moreover, while most of this discussion has focused on the land data, let’s not forget the sea surface temperature data. You know, those ocean temperatures obtained in the past by sailors (er, nautical climatologists) using very careful measuring techniques involving buckets and thermometers…

bemused
July 27, 2010 9:58 pm

Doubting Thomas,
Sorry, have just read your post. You’re right – global surface temperature will not tell the whole story. To look at energy in the system you have to consider sensible and latent heat in the full depth of the atmosphere (not just the near surface layer) and also the ocean heat content.
Here’s an interesting link looking at the moist enthalpy near the surface:
http://atmoz.org/blog/2008/05/07/using-surface-heat-content-to-assess-global-warming/
it doesn’t change the overall signal though…

bemused
July 27, 2010 10:32 pm

Steven Goddard at 4:36
I don’t think GISS says anything about daily temperature anomalies, so I have no idea where you get your x50 precision statement from. In my original post I said that this was a silly thing to do for a daily value.
I was simply explaining to you the difference between absolute temperatures and temperature anomalies; a simple point that you very clearly failed to grasp when you wrote this article and your earlier replies.

James Sexton
July 27, 2010 10:34 pm

Doubting Thomas says:
July 27, 2010 at 7:53 pm
“I give up……”
The reason no one commented is that it is a fairly nuanced perspective. I can’t argue with the logic. Sure, humidity (read: water) carries heat/energy. There is another poster here who affirms the energy is mostly carried in the oceans instead of the air. Similar, but not exactly the same perspective. I think there is much to be made of the water/energy vs the air/energy topic. You guys should get together and write a paper.
Most won’t comment on it because it would change the dynamics of the argument. (I don’t because I can neither add to nor subtract from the assertion.) Most aren’t prepared to discuss the issue with either of you. Most acknowledge that temperature is a measurement of heat. Heat is an expression of energy. Water holds heat, so water holds energy. I’m reading Ohm’s law right now, but I can’t find a reference to water!?!? Dang, I’m talking about electricity when we’re talking about energy, or heat, which electricity is. See where I’m taking this? Your point likely sent many people to the books, but, in general, most alarmists and skeptics alike simply aren’t prepared or equipped to discuss this perspective.
When it leaves the water, where does it go? In what form? How is it measured? How long does the heat stay in the water? Does the density of the water determine how much energy it holds? Is this really relevant to global warming, or is it a constant which holds a certain amount of energy? Does ice or snow hold the energy?
Beyond doubt, water, in vapor or liquid form, holds energy and is an important factor in our climate. But I’d rather talk about temp anomalies that exist where temps don’t.
Just because we don’t comment on it doesn’t mean people haven’t read it; we just don’t know jack about it… I’d stay with it though… seems valid.

July 27, 2010 10:39 pm

bemused
GISS doesn’t measure temperature anomalies, nor does anybody else. Take a trip to your local Wal-Mart and ask them for a “temperature anomaly thermometer.”
What GISS does around the Arctic is to take temperature readings in a few locations, manipulate them upwards, then extrapolate them across vast distances with no data. Then they calculate imaginary anomalies based on past imaginary temperatures.
Perhaps you should think more and accuse less?

Doubting Thomas
July 27, 2010 11:19 pm

bemused says: July 27, 2010 at 9:58 pm (and thanks for responding), “it doesn’t change the overall signal though…” But we’re still stuck with the stark fact that the average “moist” enthalpy in Phoenix (how is “moist” different from “normal” enthalpy?) on 7/22 was greater than on 7/26 when the temperature was 5 deg. F higher. (Ref. my post of 1:44 PM.)
The cited post says, “Put another way, if global warming were to be framed as a change in surface energy as opposed to surface temperature, the degree of warming would be twice as large.” That is not “the same signal.” It’s a much stronger signal.
I have to study the post, and it’s way past my bedtime, but I don’t immediately see how these two statements can sleep comfortably together.
– dT

Amino Acids in Meteorites
July 27, 2010 11:19 pm

Bill Illis says:
July 27, 2010 at 5:46 pm
GISS, NCDC, and Hadley/CRU are just being lazy.
I don’t know if it’s laziness. I think they’ve put effort into deciding which stations to use, and into how to phrase why they chose those stations so as to deflect any appearance of being shifty. So I wouldn’t say lazy. But I’m not as diplomatic as you, I think.

Amino Acids in Meteorites
July 27, 2010 11:24 pm

For those who haven’t seen this video yet, this is Joseph D’Aleo talking about the dropped stations:

jaymam
July 27, 2010 11:25 pm

Thanks Bob Tisdale. I didn’t spot the decimal point.
So an average global rise of 0.008 degrees C each year is a catastrophic problem according to the warmists?

Doubting Thomas
July 27, 2010 11:37 pm

James Sexton,
I dropped out of high school sometime in the 12th grade. If I can get it, you can get it. Find a psychrometric chart. This is simple engineering, not rocket science.
Evaporating water uses energy but energy (they say) is always conserved. So any energy used to evaporate water is still there. It’s in the “form” of evaporated water. It’s called latent heat because it’s not expressed as temperature. Water exists on earth in three phases: ice, liquid and vapor. Anytime it changes from one phase to another it either uses, or gives back, energy. Since our wonderful planet has a whole lot of water, one has to take its three phases into account.
The temperature of atmospheric air is not a measure of its total energy content.
In the context of man-made global warming, all the argumentation about temperature (or temperature anomalies) is, at best, a big waste of time. Temperature, alone, cannot tell us anything about the energy budget of a wet planet.
– dT
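As a side note for readers who want to put numbers on dT’s point about latent heat: near-surface moist enthalpy is roughly h = cp·T + Lv·q (a sensible part plus a latent part). The minimal sketch below is purely illustrative; the constants are textbook approximations and the two air states are invented, not measurements.

# Illustrative sketch: why temperature alone doesn't capture near-surface energy.
# Moist enthalpy per kg of air: h = cp*T + Lv*q  (sensible + latent part).
# The two example states below are hypothetical, not measured data.

CP = 1005.0      # J/(kg K), specific heat of dry air at constant pressure (approximate)
LV = 2.5e6       # J/kg, latent heat of vaporization of water (approximate)

def moist_enthalpy(temp_c, specific_humidity):
    """Approximate moist enthalpy (J/kg) of near-surface air."""
    return CP * temp_c + LV * specific_humidity

# Hotter but drier air vs. cooler but more humid air (assumed values):
hot_dry  = moist_enthalpy(43.0, 0.008)   # 43 C, 8 g/kg water vapor
cool_wet = moist_enthalpy(40.0, 0.014)   # 40 C, 14 g/kg water vapor

print(f"hot/dry  : {hot_dry:,.0f} J/kg")
print(f"cool/wet : {cool_wet:,.0f} J/kg")
# The cooler, moister air carries more energy per kg, which is dT's point about
# latent heat not showing up in the thermometer reading.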

Doubting Thomas
July 27, 2010 11:49 pm

It’s cold in Pasadena, CA. Really cold. Could we get some of that east coast warming out here?
I read somewhere that there are amino acids in meteorites. The D’Aleo video is nice.
Bedtime … zzzzz
– dT

July 28, 2010 2:27 am

stevengoddard says: “Are you trying to make an argument that GISS 250 km maps are incorrect? They clearly show large regions of ‘no data.'”
No. I’m not saying the GISS maps with 250km radius smoothing are incorrect. But they are not the dataset that has the trends applied to it. It’s the maps with the 1200km radius smoothing that have the trends.

July 28, 2010 4:29 am

Steven Goddard: Have you ever plotted the GISTEMP combined land+sea surface temperature anomalies and compared the products with 250km and 1200km radius smoothing? That’s how you determine if the 1200km radius smoothing really influences the end product.
Here’s a comparison of the changes in zonal mean temperature anomalies from 1880 to 2009 for the datasets with the 250km and 1200km radius smoothing. I’ve used the data that GISS presents in the zonal mean plots below the trend maps—the ones you’ve used in this post.
http://i28.tinypic.com/2hqufbb.jpg
Note how the two datasets only diverge at high latitudes. Why? Those are the latitudes where GISS deletes SST data and extends land surface data out over the oceans. Remember my post on this? Here’s a link:
http://bobtisdale.blogspot.com/2010/05/giss-deletes-arctic-and-southern-ocean.html
And here’s a time-series comparison of the two GISTEMP combined products with 250km and 1200km radius smoothing, in which I’ve limited the latitudes to 60S-60N, excluding the poles:
http://i29.tinypic.com/55pc86.jpg
The difference in trends is 0.003 deg C/Decade or 0.03 deg C/Century. This indicates the 1200km radius smoothing adds basically nothing to the GISTEMP product for the vast majority of the globe.
In other words, outside of the Arctic, your complaints about the GISTEMP product with 1200km radius smoothing are unfounded.
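For anyone who wants to reproduce the kind of trend comparison Bob Tisdale describes, a minimal sketch follows. The file names and the two-column layout (decimal year, anomaly) are assumptions; substitute whatever form your own GISTEMP exports actually take.

# Sketch: compare linear trends (deg C/decade) of two monthly anomaly series.
# Assumes each series has been exported as two columns: decimal year, anomaly.
import numpy as np

def trend_per_decade(years, anoms):
    """Least-squares slope of anomaly vs. time, converted to deg C per decade."""
    slope, _ = np.polyfit(years, anoms, 1)
    return slope * 10.0

# Hypothetical file names; replace with your own exports of the GISTEMP products.
t250 = np.loadtxt("gistemp_250km.txt")    # columns: year, anomaly
t1200 = np.loadtxt("gistemp_1200km.txt")  # columns: year, anomaly

d250 = trend_per_decade(t250[:, 0], t250[:, 1])
d1200 = trend_per_decade(t1200[:, 0], t1200[:, 1])
print(f"250 km trend : {d250:+.3f} C/decade")
print(f"1200 km trend: {d1200:+.3f} C/decade")
print(f"difference   : {d1200 - d250:+.3f} C/decade")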

July 28, 2010 5:34 am

Bob Tisdale,
You might want to think about these statements from Hansen. You are completely missing the point.

the 12-month running mean global temperature in the GISS analysis has reached a new record in 2010…. GISS analysis yields 2005 as the warmest calendar year, while the HadCRUT analysis has 1998 as the warmest year. The main factor is our inclusion of estimated temperature change for the Arctic region.

Jose Suro
July 28, 2010 5:49 am

I’ll be blunt. I’ve said it here before and I will say it again. I dislike the use of anomalies to describe a temperature record. Anomalies can be used in two ways: As “snapshots of real data” (good but unnecessary), and as “Fudge Factors” (Bad). Yes, we like to see TRENDS. But real data will show you trends just as well as anomalies when both are based on real data.
As far back as Kepler, who as best I can tell was the first to use the term anomaly in science (as in “mean anomaly”) when describing planetary motion, it has been used as a “fudge factor”; Hence my dislike for the term and its usage. Kepler was wrong by the way, although not by much, but he had to fudge it to make it work, fascinating to read how he did it. He actually built a model, yes a model – (what a novel idea!), and had to fudge that too by the way, and yes, it was also wrong.
So why are we using anomalies to express global temperatures? Both NASA and NOAA go to great lengths to justify the use of anomalies instead of temperatures. I’ll get to those in a second.
First, I have another question. Why would Hansen go to such great lengths back in 1987 to justify the validity of anomalies over ~1000km grids? Regardless of the answer, in this particular application anomalies are obviously being used as “Fudge Factors” – to hide the absence of real data.
And why did they pick the mean to be from 1951 to 1980? This is the mean now used by GISS and NCDC. Well, in the 1987 paper they mention that: “The zero point of the temperature scale is arbitrary.”
Arbitrary – From Webster:
1 : depending on individual discretion (as of a judge) and not fixed by law
Why would they do that? I don’t have an answer, but I will say that where there is individual discretion, there is prejudice (not in the pejorative sense) – by definition.
And that’s the whole problem with anomalies: in this case the zero point is a 30-year period in the life of a 4-billion-year-old planet that just happens to coincide with one of the largest upticks in temperature in the last 100 or so years (not that 100 years means squat in the big planetary time-scale, picture, whatever). And they are creating data where it doesn’t exist.
So why do that? What’s the urgent need for this fudging? I can only speculate and suggest that they needed these uniformly spread out global anomalies for one thing: Model Feed.
Now let’s look at the justifications for anomalies made by NASA and NOAA. First from the NCDC:
“Absolute estimates of global average surface temperature are difficult to compile for several reasons. Some regions have few temperature measurement stations (e.g., the Sahara Desert) and interpolation must be made over large, data-sparse regions. In mountainous areas, most observations come from the inhabited valleys, so the effect of elevation on a region’s average temperature must be considered as well. For example, a summer month over an area may be cooler than average, both at a mountain top and in a nearby valley, but the absolute temperatures will be quite different at the two locations. The use of anomalies in this case will show that temperatures for both locations were below average.”
So, in the first part they say “we need to fudge”. The second part, the last two sentences, is bogus because with real data sets for both locations there is no need for anomalies to calculate “below average” TRENDS.
And from GISS:
“Anomalies and Absolute Temperatures
Our analysis concerns only temperature anomalies, not absolute temperature. Temperature anomalies are computed relative to the base period 1951-1980. The reason to work with anomalies, rather than absolute temperature is that absolute temperature varies markedly in short distances, while monthly or annual temperature anomalies are representative of a much larger region. Indeed, we have shown (Hansen and Lebedeff, 1987) that temperature anomalies are strongly correlated out to distances of the order of 1000 km. For a more detailed discussion, see The Elusive Absolute Surface Air Temperature.”
So I went and read “The Elusive Absolute Surface Air Temperature” and read this:
“Q. If SATs cannot be measured, how are SAT maps created?
A. This can only be done with the help of computer models, the same models that are used to create the daily weather forecasts. We may start out the model with the few observed data that are available and fill in the rest with guesses (also called extrapolations) and then let the model run long enough so that the initial guesses no longer matter, but not too long in order to avoid that the inaccuracies of the model become relevant. This may be done starting from conditions from many years, so that the average (called a ‘climatology’) hopefully represents a typical map for the particular month or day of the year.”
Ahhh, models…. Remember Kepler? Love the word “hopefully” in there too…. Notice that, in contrast to the NCDC, they do not mention the lack of data directly; instead they refer to the Hansen paper saying they can correlate temperatures out to ~1000km. They jump from the premise of big variations over “short distances” to concluding that anomalies are better for “monthly or annual” periods and “much larger regions.” It still means the same thing: they don’t have the data and are fudging it. The really interesting thing is how they go to great lengths to say that SATs are worthless. But they still have to use them as their base data!
I speculate that what might have started as Model Feed at the beginning has now evolved into another global depiction tool all on its own.
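For context on the mechanics being argued over here, below is a minimal sketch of how monthly anomalies against a 1951-1980 base period are typically computed. This is illustrative only, not GISS’s actual code, and the station data are invented.

# Sketch: monthly anomalies relative to a 1951-1980 base period.
# 'temps' is assumed to be a dict mapping year -> list of 12 monthly mean temps (deg C).
import numpy as np

def monthly_anomalies(temps, base_start=1951, base_end=1980):
    """Return {year: 12 anomalies}, each month measured against its own base-period mean."""
    base_years = [y for y in range(base_start, base_end + 1) if y in temps]
    base = np.array([temps[y] for y in base_years])   # shape (n_base_years, 12)
    monthly_means = base.mean(axis=0)                 # one climatology value per month
    return {y: np.array(temps[y]) - monthly_means for y in temps}

# Toy example (invented numbers): a slightly warming station.
temps = {y: [10 + 0.01 * (y - 1950) + m for m in range(12)] for y in range(1951, 2011)}
anoms = monthly_anomalies(temps)
print(anoms[2010][:3])   # small positive anomalies, since 2010 is warmer than the base mean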

Ryan
July 28, 2010 6:09 am

Tisdale: The red blob in South America bears no relation to any of the surrounding data points. According to GISTEMP the nearest data point to the center of that red dot is Cuiaba, 630km away – which shows no warming. Next up would be Santarem showing 1Celsius warming. Next up Manaus – no real warming, maybe 0.5Celsius if you were generous. Next up Corumba (1000km away) – 1 Celsius warming. No more long term data after that.
The GISS map with 1200km smoothing shows a 2-4Celsius anomaly in Brazil. I cannot find any data to support it. The anomaly in GISTEMP is 1Celsius at most.
Looking at the 250km smoothing I think the algorithm is doing rather more than you suppose. The only real data is dispersed around the red dot. The anomaly showing 1-2 Celsius at the north coast and the south coast appears to have been joined up by the algorithm, and the red center then appears to be an extrapolation of the trend from cooling/stasis at the east and west coasts through the warming center to a red-hot core.
All of this appears to be made up, however, since there is no actual data in that red dot at all, let alone any data showing a >1 Celsius anomaly. The area seems to be centered on Mato Grosso, a heavily forested part of Brazil with little human habitation. It doesn’t surprise me that there is no reliable data for this area, given that the stations that could be used to support it are all at the coast.

DR
July 28, 2010 6:14 am

Bob Tisdale;
Is then the divergence of GISS from the rest of the global temperature products solely based on their Arctic interpolation/extrapolation (which is it)?

Ryan
July 28, 2010 6:26 am

Mosher/Bemused: If smoothing anomalies over 1200km were a valid approach, then the maps for the 250km smoothed data would be more or less the same as the map using 1200km data. In fact that is clearly not the case, as you can prove to yourself by comparing the first two maps on this thread.
The map with 250km smoothing shows regions with +3-4 Celsius anomalies right next to -2 Celsius anomalies – and they are only 250km apart. These differences don’t happen just once or twice – they happen all over the map.
Fact is, the idea that temperature anomalies would be the same over distances of 1200km came from the idea that global warming was, well, a global phenomenon. So a 2 Celsius warming seen in Canada must mean 2 Celsius warming everywhere else, more or less. Fact is, the 250km map vividly illustrates the flaw in that argument. It raises the question “just how global is global warming?”
Still, I don’t expect either of you to respond to this post, since I notice that both of you dodge the more difficult questions in favour of using obfuscation and cod science to make weaker points. Tell you what, go away and read the Encyclopaedia Britannica and come back when you’re ready.

July 28, 2010 7:51 am

stevengoddard replied, “You are not making any sense. This map is the GISS 250km trend map… …Note the word ‘trend’ in the link.”
I always make sense.
Apparently, you forget that there is data associated with the maps. What the graphs in my earlier comment showed was, between the latitudes of 60S and 60N, the infilling of the 1200km radius smoothing has had basically no effect on the linear trends from January 1880 to June 2010. Read the comment and understand the graphs, Steven:
http://wattsupwiththat.com/2010/07/26/giss-swiss-cheese/#comment-441550
And to confirm my point, here’s another comparison graph. This one compares the GISTEMP combined land+sea surface temperature data with 250km and 1200km radius smoothing for one of your examples, Africa. The infilling of the 1200km radius smoothing added a whopping 0.002 deg C/decade to the linear trend for Africa, Steven.
http://i26.tinypic.com/25rp2fr.jpg

July 28, 2010 8:01 am

Ryan: July 28, 2010 at 6:26 am
Excuse me for throwing in my two cents. As I showed in my reply to Steven Goddard above…
http://wattsupwiththat.com/2010/07/26/giss-swiss-cheese/#comment-441550
…the infilling and smoothing caused by the 1200km radius smoothing has a significant impact only at high latitudes and this is caused in part by GISS deleting SST data and extending Land Surface data out over the oceans. Refer to:
http://bobtisdale.blogspot.com/2010/05/giss-deletes-arctic-and-southern-ocean.html
But between the latitudes of 60S-60N, the infilling and smoothing of the 1200 km radius smoothing has little to no effect:
http://i29.tinypic.com/55pc86.jpg
That’s the bottom line.

RW
July 28, 2010 8:18 am

Gail Combs – your link contains some quite wrong statements about the Pearson correlation coefficient.
“1. Independent of case: In Pearson’s correlation of coefficient, cases should be independent to each other.”
Not so. This is one of the things we’re testing with correlations, isn’t it? If the two datasets were completely independent, there would be no correlation, would there?
2. Distribution: In Pearson’s correlation coefficient, variables of the correlation should be normally distributed.
Can’t see how this is relevant.
3. Cause and effect relationship: In Pearson’s correlation coefficient, there should be a cause and effect relationship between the correlation variables.
This contradicts point number one, and also the mantra of statistics: “correlation does not imply causation”. Two things can be strongly correlated without any causal connection at all.
4. Linear relationship: In Pearson’s correlation coefficient, two variables should be linearly related to each other, or if we plot the value of variables on a scatter diagram, it should yield a straight line.”
Not correct. If the relationship is non-linear, Pearson’s correlation coefficient is an underestimate of the true correlation, but that does not mean it can’t be used.
Generally, there is no reason not to use Pearson’s correlation coefficient to estimate the correlation between temperature anomalies.
An Inquirer: I think your comments were aimed at me rather than Gail Combs. A trend of 10% and a trend of 1% in two different anomaly series certainly would not imply that the two series were strongly correlated. Even if they were, the GISS methodology would do nothing that resembled “tossing away” the 1% and saying “the overall trend is 10%”. So I really don’t know what your point is.
It’s very easy to find two relatively nearby weather stations for which the anomalies do not correlate well. This is not news. If you look at the plots of this in Hansen’s paper which first described the situation, you can see very clearly that there is a lot of scatter about the relationship between correlation and distance. Nonetheless, the relationship clearly exists.
I started with January because that was the first column in the datasets I downloaded. The results for any other month, or the whole year, would be similar, and just as easy to compute. You’re welcome to replicate my analysis; I don’t have time.

bemused
July 28, 2010 8:20 am

Doubting Thomas:
Re: “same signal” when looking at surface energy rather than temperature.
All I meant by this was that you still see a clear warming signal when you look at surface enthalpy instead of surface temperature. But, as you say, when you look at enthalpy (which is a better reflection of energy in the air) the rate of warming is twice as fast ( at least according to that article). I’m not quite sure if we’re arguing or agreeing 😉

Ryan
July 28, 2010 8:47 am

Tisdale: Basically you are throwing something into the heap that isn’t being discussed here at all. It’s just plain obfuscation. It’s also misleading. If you said that the Arctic was melting and the Sahara was freezing then “on average” the trends would be the same. Not the same impact, however, is it?
The point of these maps is that they show the greatest warming at the centers of greatest propaganda. They suggest that the Amazon is going to dry up and the Arctic melt, leaving the polar bears with no home. However, there is no actual data to back up the claims made by these maps.

Bill Illis
July 28, 2010 9:09 am

Every month, when GISTemp at:
http://data.giss.nasa.gov/gistemp/tabledata/GLB.Ts+dSST.txt
…. is updated, there are over 100 changes in previous monthly averages in their record back to 1880.
Last month, 6 of the 12 monthly numbers from 1880 were changed. 8 of the 12 months from 1881 were changed and so on.
Naturally, the majority of the earlier records are adjusted down and the majority of the recent records are adjusted up. The switch date seems to be around 1977.
This happens every month. I have been tracking it for over two years with this useful website.
http://www.changedetection.com/
So let’s say one has been doing this for 20 years. Each month, you bump down half of the 1880s values by -0.01C: 20 years * -0.005C average * 12 months = -1.2C [i.e. there is lots of opportunity to make a major change in the trend.]
Let’s compare the recent GISTemp to the earliest version at the Wayback Machine (2005): 1880 is now 0.06C cooler than it was before (which seems to be about the average for all the 1880s), and 2003 is now 0.03C warmer than it used to be. So 5 years of changes have added about 0.1C to the trend alone.
http://web.archive.org/web/20050914112121/http://data.giss.nasa.gov/gistemp/tabledata/GLB.Ts+dSST.txt
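For anyone who wants to check Bill Illis’s observation themselves, below is a minimal sketch of comparing two archived copies of the GISTEMP table. The parsing is an assumption (each data row is treated as a 4-digit year followed by 12 monthly values); verify it against the real file layout before trusting the counts.

# Sketch: count how many monthly values changed between two saved copies of GLB.Ts+dSST.txt.
# ASSUMPTION: each data row starts with a 4-digit year followed by 12 monthly values;
# header/footer lines are skipped. Verify against the actual file layout.

def parse(path):
    table = {}
    with open(path) as f:
        for line in f:
            parts = line.split()
            if parts and parts[0].isdigit() and len(parts[0]) == 4:
                year = int(parts[0])
                table[year] = parts[1:13]
    return table

old = parse("GLB.Ts+dSST_last_month.txt")   # hypothetical saved copy
new = parse("GLB.Ts+dSST_this_month.txt")   # hypothetical saved copy

changes = [(y, m + 1, old[y][m], new[y][m])
           for y in sorted(set(old) & set(new))
           for m in range(12)
           if old[y][m] != new[y][m]]
print(f"{len(changes)} monthly values differ")
for y, m, before, after in changes[:10]:
    print(f"{y}-{m:02d}: {before} -> {after}")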

July 28, 2010 9:20 am

Ryan replied, “Basically you are throwing something into the heap that isn’t being discussed here at all.”
Then disregard what I wrote. Simple as that.

Jay
July 28, 2010 9:44 am

Bill Illis wrote
“Last month, 6 of the 12 monthly numbers from 1880 were changed. 8 of the 12 months from 1881 were changed and so on. ”
How are these “adjustments” justified?
In engineering, changing old data is called fudging the data or cheating!
What reason, other than some new information like a thermometer calibration or a time-of-observation change, can be used to validly adjust data? And why would adjustments be needed 130 years later? Surely diligent workers would have caught this stuff in the 1980s!

Jose Suro
July 28, 2010 10:31 am

“Bill Illis says:
July 28, 2010 at 9:09 am
Every month, when GISTemp at:
… is updated, there are over 100 changes in previous monthly averages in their record back to 1880.”
Interesting observation. This could have something to do with how the model regenerates every month while keeping the base period (1951-1980) as the zero point? Another one against anomalies….

Buffoon
July 28, 2010 11:52 am

Tisdale wrote:
But between the latitudes of 60S-60N, the infilling and smoothing of the 1200 km radius smoothing has little to no effect:
http://i29.tinypic.com/55pc86.jpg
Bob, read this very carefully please.
Set 1, real data
(10+20+30+40+50+60+70+80+90+100)
Set 2, infilled by interpolation of indexes
(10+15+20+25+30+35+40+45+50+55+60+65+70+75+80+85+90+95+100)
Amazingly, the mean of both sets is 55. My ‘infilling’ with averages had no effect on my outcome.
The reason they are both 55 is that the additional data in set 2 comes from interpolation of data set 1. I have added interpolated points with the same mean to the original set, producing a larger data set with the same mean but new statistical characteristics.
Data set 2 has 9 more elements than data set 1. From a layman’s standpoint, I have more confidence in my average because I have used more data. From a mathematical standpoint, I have produced recursive, meaningless indexes to the mean but still lowered my standard deviation (commonly used for mathematical error calculations).
However, critically, from a scientific standpoint, I have produced additional data points which have double the pure error of each index from which they are computed. I have not reduced the pure error of my mean at all.
So, making conclusions based on 250km OMITS the empty space from which you are not taking data. Making conclusions based on 1200km smoothing created additional data coverage which has not improved your error against reality. These are mathematical tricks.
In reality, at 250km smoothing, I leave about, say, 90% of Africa untouched. Therefore, at 250km smoothing, computing the average temperature of Africa, I have 10% accuracy by area. At 1200km smoothing, I have now covered 90% of Africa, 80% of that being manufactured data. I have 10% accuracy.
If the data doesn’t exist at smaller smoothing, it shouldn’t be used at higher smoothing unless the system is proven to be linearly coupled (i.e. the average between any two points a given distance apart is invariant to the sample size between them, provided the data set is symmetrically distributed). Prove that and you win the argument…
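Buffoon’s arithmetic is easy to verify. The sketch below just reproduces his two sets and shows that the mean stays at 55 while the naive uncertainty of that mean shrinks, even though no new information has been added.

# Reproduce Buffoon's example: infilling by interpolation leaves the mean alone
# but shrinks the apparent uncertainty of that mean.
import numpy as np

real     = np.arange(10, 101, 10)   # set 1: ten real observations
infilled = np.arange(10, 101, 5)    # set 2: the same span with interpolated midpoints

for name, data in [("real", real), ("infilled", infilled)]:
    sem = data.std(ddof=1) / np.sqrt(data.size)   # naive standard error of the mean
    print(f"{name:9s} n={data.size:2d} mean={data.mean():.1f} "
          f"std={data.std(ddof=1):.2f} naive SEM={sem:.2f}")
# Both means are 55, but the naive SEM is smaller for the infilled set --
# a purely arithmetic gain, since the extra points carry no independent information.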

Buffoon
July 28, 2010 12:10 pm

Mosher:
Anomaly measures the change AT A LOCATION from its own local mean.

it’s NOT TEMPERATURE changes over 1200km. It’s the change in temperatures VERSUS their local mean. You can think of this as a trend change
So.. what you are saying is..
TO BE ACCURATE, EACH RECORD MUST HAVE AN ACCURATE LOCAL MEAN
TO BE ACCURATE, EACH MEAN MUST INCLUDE THE WHOLE SAMPLE PERIOD
TO BE ACCURATE, EACH RECORD MUST BE CONTINUOUS OVER THE PERIOD
TO BE ACCURATE, EACH MEAN IS LOCAL: NOT INTERPOLATED
THEREFORE, INTERPOLATED MEANS TO DERIVE INTERPOLATED ANOMALY IS FAKE DATA
How accurate is a station 1 kilometer away? Giving each area of measurement 1200km of effect means you are assuming that it reflects accurately the local mean OVER 1200KM. That is pure crap.
These are math tricks, and they are wrong. End of story.

July 28, 2010 1:04 pm

Buffoon wrote, “Bob, read this very carefully please,” and you went into a wonderful description of impacts of interpolation. You concluded with, “If the data doesn’t exist at smaller smoothing, it shouldn’t be used at higher smoothing unless the system is proven to be linearly coupled.”
Actually, I object to the use of the 1200km radius smoothing for another reason: the illusion of a spatially complete instrument temperature record. The 1200km radius smoothing gives a false impression of the instrument temperature record. It portrays the data as complete globally from 1880 to 2010, when it is far from complete. Refer to:
http://bobtisdale.blogspot.com/2010/01/illusions-of-instrument-temperature.html
The point of my exercise on this thread, though, was to show something entirely different. There is a common belief that the GISS 1200km radius smoothing raises the trend in temperatures globally, and I have shown that this is not the case. As you quoted [and I will clarify with an addition], But between the latitudes of 60S-60N, the infilling and smoothing of the 1200 km radius smoothing has little to no effect [on long-term trends]:
http://i29.tinypic.com/55pc86.jpg
Regards

Buffoon
July 28, 2010 1:10 pm

Bob,
If that is the case, I simply did not manage to figure out which side of the issue you were arguing. In such an event, apologies for my (correct but not necessarily warranted) pedantry. Let us join hands and run merrily through interpolated fields.

Buffoon
July 28, 2010 1:16 pm

“But between the latitudes of 60S-60N, the infilling and smoothing of the 1200 km radius smoothing has little to no effect [on long-term trends]:”
I see what you are saying, but I suspect my argument may be: without the 1200km smoothing, we do not reach the threshold of spatial coverage from which we may have GISS’s 66% magic number. At 250km there is an abhorrent lack of spatial coverage; at 1200km there is fake spatial coverage, which won’t change the 250km trend since it’s derived from the same dataset.
Steve’s original post seems to be: spatially adequate data doesn’t exist for the whole period, and thus there can’t be a valid trend. I wholeheartedly support that conclusion. The interpolated data provides enough spatial coverage to subjectively meet some threshold value, but the underlying dataset is not complete enough to support the conclusion that is the only valuable purpose of the first graph on the page.
Are we, in fact, on different sides of the issue or not?

July 28, 2010 1:17 pm

Jay and Jose Suro: Some of the changes in the GISTEMP product result from data arriving late or from corrections to the supplied data. GISS also makes improvements to its product from time to time, and these changes can have a wide range of small effects, including on early data. Refer to:
http://data.giss.nasa.gov/gistemp/updates/
And then someone will notice a shift in the value of one month early in the record and there’s no explanation.
Lucia tries to monitor the changes.
http://rankexploits.com/musings/

Z
July 28, 2010 1:47 pm

Bob Tisdale says:
July 28, 2010 at 1:04 pm
The point of my exercise on this thread, though, was to show something entirely different. There is a common belief that the GISS 1200km radius smoothing raises the trend in temperatures globally, and I have shown that this is not the case.

This only holds (as does the example given by Buffoon) when the “smoothing” actually smooths. Smoothing does not, for example, create hot spots in Brazil.
A more accurate representation of what seems to be happening is if we take sequence two from Buffoon’s example and try to transform it into sequence one by dropping the interpolations. Except we don’t drop all the interpolations at once, we drop them one at a time, starting with the lower numbers. Let’s call this something snappy like “progress” – let the critics call it something snappier like “The March of the Thermometers”. What happens to the average over time then? Who thinks we’re at the end of this process?

Gail Combs
July 28, 2010 2:27 pm

RW says:
July 28, 2010 at 8:18 am
Gail Combs – your link contains some quite wrong statements about the Pearson correlation coefficient….
2. Distribution: In Pearson’s correlation coefficient, variables of the correlation should be normally distributed.
Can’t see how this is relevant.

_________________________________________________________________
Your statement that the requirement for normally distributed data is irrelevant is exactly what I was talking about.
People with very little knowledge of statistics dump numbers into a stat program and think that just because they got numbers out everything is OK and they are proving something.
The whole blasted mathematical basis upon which the Pearson correlation coefficient rests is NORMALLY DISTRIBUTED DATA. IF it AIN’T NORMALLY DISTRIBUTED it is garbage in and garbage out.

Gail Combs
July 28, 2010 2:49 pm

bemused says:
July 27, 2010 at 4:41 pm
MarkG:
“How can anyone claim ‘robust correlation’ between widely separated areas when two stations only a short distance apart show a difference of 2.5C for pretty much an entire decade?”
…Sure, there are a few station pairs which have low correlations even though they are close together, but the vast majority of stations show a good correlation.
Everyday experience also tells you this -if the summer was colder than average in Boston and Atlanta, then it was probably colder than average in Washington DC as well.
________________________________________________________________
OK then, we will go with your “colder than average.” It was freeze-your-tail-off cold in North Carolina, Florida, England, Spain, and so cold in Tibet it killed more than a million animals. Even the Russians and Chinese were complaining of the cold this winter. The USA had snow in ALL fifty states.
Here in North Carolina it was colder than normal until the middle of June and the people in California and Oregon are still complaining of the cold. South Africa has dead penguins and South America dead cows because of the cold. The Aussies are also saying it is cooler than normal. Even the sea temperatures have turned cold. So if this world wide temperature “correlation” is so “scientifically” correct how come we are hearing “it is the hottest year EVAHHhh” ????

July 28, 2010 3:40 pm

Z: You replied, “This only (as does the example given by Buffoon) only when the ‘smoothing’ smooths. Smoothing does not, for example, create hot spots in Brazil.”
I can’t describe or comment on the creation of “hotspots in Brazil” because I haven’t investigated it. But GISS extrapolates data. It is not simply a smoothing process.
And please understand that the sentence you are quoting was written because I was illustrating that, outside of the high latitudes, the 1200km radius smoothing does not add to the long-term temperature anomaly trend. As I noted to Buffoon, I am not defending the use of the 1200km radius smoothing. In fact, I’m against its use because it gives the false impression of a globally complete instrument temperature record from 1880 to present, when, in reality, the record is far from complete.

July 28, 2010 3:46 pm

Buffoon wrote, “Steve’s original post seems to be : Spatially adequate data for the period doesn’t exist for the whole period and thus there can’t be a valid trend.”
If you were to run through the thread, you’d find that Steve’s opinion is based on the lack of data in the end years of the data with 250km radius smoothing. Refer to his July 27, 2010 at 5:41 pm comment:
http://wattsupwiththat.com/2010/07/26/giss-swiss-cheese/#comment-441283

KLA
July 28, 2010 4:50 pm

Tisdale wrote:
But between the latitudes of 60S-60N, the infilling and smoothing of the 1200 km radius smoothing has little to no effect:
http://i29.tinypic.com/55pc86.jpg
Bob, read this very carefully please.
Set 1, real data
(10+20+30+40+50+60+70+80+90+100)
Set 2, infilled by interpolation of indexes
(10+15+20+25+30+35+40+45+50+55+60+65+70+75+80+85+90+95+100)

Simple demonstration of the Nyquist sampling theorem:
You have the following real data set on a grid of say 80 km:
(1, 1, 1, 1, 2, 0, 1, 1, 1, 10, 1, 1, 1, 1, -5)
15 real datapoints spaced over 1200 km.
But because of the sparsity of measurement stations, you know only the numbers at index 0 (1), index 9 (10) and index 14 (-5). Linear interpolation (smoothing) would infill the missing ones, creating the following set:
(1,2,3,4,5,6,7,8,9,10,7,4,1,-2,-5)
The average of the interpolated data set is 4, while the average of the real data is 1.2.
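KLA’s example can be reproduced in a few lines. The sketch below interpolates from the three “known” stations and compares the interpolated average to the true one.

# Reproduce KLA's undersampling example: interpolating from 3 of 15 points
# gives a plausible-looking field whose average is far from the true average.
import numpy as np

true_field = np.array([1, 1, 1, 1, 2, 0, 1, 1, 1, 10, 1, 1, 1, 1, -5], dtype=float)
known_idx  = np.array([0, 9, 14])                 # the only "stations" we actually have
interp     = np.interp(np.arange(15), known_idx, true_field[known_idx])

print("interpolated:", interp)                    # 1, 2, ..., 10, 7, 4, 1, -2, -5
print("true mean   :", true_field.mean())         # 1.2
print("interp mean :", interp.mean())             # 4.0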

bemused
July 28, 2010 5:05 pm

Gail Combs wrote:
“OK than we will go with your colder than average. It was freeze you tail off cold in North Carolina, Florida, England, Spain and so cold in Tibet it killed more than a million animals. Even the Russians and Chinese were complaining of the cold this winter. The USA has snow in ALL fifty states.
Here in North Carolina it was colder than normal until the middle of June and the people in California and Oregon are still complaining of the cold. South Africa has dead penguins and South America dead cows because of the cold. The Aussies are also saying it is cooler than normal. Even the sea temperatures have turned cold. So if this world wide temperature “correlation” is so “scientifically” correct how come we are hearing “it is the hottest year EVAHHhh” ????”

Hi Gail, maybe it has something to do with the fact that globally it was a very warm period between December 2009 – February 2010. Sure it was cold over the population centers of Europe and the US, but that is not global.
Did you know that USA takes up only 1.9% of the global surface area? United Kingdom takes up only 0.05%, Australia takes up 1.5%, China also takes up 1.9%, Russia takes up 3.3%, South America is 3.5% (those are the places you mentioned). All those combined only come to just over 12% of the global surface area.
Nobody is saying that correlations are global, only out to around 1000km. Did you know that a 1200km diameter circle (the approximate area where GISS assumes some correlation) occupies only 0.2% of the global surface area – that is hardly a global correlation. So, just because it was a cold winter in some places doesn’t mean that it must have been cold globally.
If you don’t want to take my word for it, take it from Roy Spencer. His satellite data show it to have been a very warm year (particularly last winter in the N Hemisphere):
http://discover.itsc.uah.edu/amsutemps/execute.csh?amsutemps
http://www.drroyspencer.com/2010/07/june-2010-uah-global-temperature-update-0-44-deg-c/
You can argue over whether or not 1998 or 2010 will end up being the warmest year globally since we began taking measurements, but you cannot deny that the last 12 months have been among the warmest few. Just because you had a bit of snow in your back garden doesn’t change that. If you don’t trust the satellite data then I suggest you take that up with Roy Spencer.
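bemused’s 0.2% figure is simple to check. The quick calculation below uses a flat circle of 1200 km diameter against the Earth’s total surface area; the spherical-cap correction is negligible at this scale.

# Quick check of bemused's figure: what fraction of the Earth's surface
# does a 1200 km diameter circle cover?
import math

earth_radius_km = 6371.0
earth_area = 4 * math.pi * earth_radius_km**2     # ~5.1e8 km^2
circle_area = math.pi * (1200.0 / 2) ** 2          # flat-circle approximation

print(f"circle area : {circle_area:,.0f} km^2")
print(f"fraction    : {100 * circle_area / earth_area:.2f}% of the globe")   # ~0.22%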

Gail Combs
July 28, 2010 8:09 pm

bemused says:
July 28, 2010 at 5:05 pm
Hi Gail, maybe it has something to do with the fact that globally it was a very warm period between December 2009 – February 2010. Sure it was cold over the population centers of Europe and the US, but that is not global.
Nobody is saying that correlations are global, only out to around 1000km. Did you know that a 1200km diameter circle (the approximate area where GISS assumes some correlation) occupies only 0.2% of the global surface area – that is hardly a global correlation. So, just because it was a cold winter in some places doesn’t mean that it must have been cold globally.
_________________________________________________
I am well aware that the January, February and March data were warmer because the sea temperatures were warm and the oceans make up 70% of the earth’s surface. However, I have a major problem with climate scientists playing fast and loose with statistics. I am no Steve M, but I made my living as a quality engineer using statistics on a regular basis, so I cringe when I see mismatched data blithely homogenized.
I live in the piedmont/sand hills area. The weather patterns are not the same as in the mountains or the other side of the mountains (500 km) or the seashore (250 km) as I demonstrated in my comment:
Gail Combs says:
July 27, 2010 at 2:02 pm
If you want to do the type of “homogenizing” that is done you had better do the correct type of sampling AND take into account the fact that terrain changes weather patterns. Also do not expect me to believe the data is good to +/-0.1C
I have written too many sampling plans for liquids, solids and widgets to believe what climate scientists are trying to peddle.

RW
July 29, 2010 6:17 am

Gail Combs:
“People with very little knowledge of statistics dump numbers into a stat program and think that just because they got numbers out everything is OK and they are proving something.”
It seems to me that you have very little knowledge of statistics. You found a web page that made a number of mistaken claims about the Pearson coefficient, and you seem to think that permits you to make statements like this. The web page you found was wrong; those four statements about the Pearson coefficient are wrong.
“The whole blasted mathematical basis upon which Pearson correlation coefficient rests is based on NORMALLY DISTRIBUTED DATA. IF it AIN”T NORMALLY DISTRIBUTED it is garbage in and garbage out.”
This is not correct at all. The distribution of the data is to a large extent irrelevant. If you compare the data sets (1,2,3) and (10,20,30), you will find that neither is normally distributed, the Pearson coefficient will be 1, and this makes perfect sense. All the Pearson coefficient is doing is telling you on a scale of 0 to 1 how well a linear relationship can explain the covariance of two variables. The mathematical definition of the Pearson coefficient does not in fact require any particular statistical distribution of the values of the two variables.
It’s poor form to throw around phrases like “People with very little knowledge of statistics…”, even if you yourself understand statistics. It’s much worse when you’re making basic errors.
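RW’s (1,2,3) versus (10,20,30) example takes one line to verify, and a deliberately non-normal pair behaves the same way; the sketch below is just that check.

# Verify RW's example: the Pearson coefficient is defined for any pair of series,
# normally distributed or not.
import numpy as np

print(np.corrcoef([1, 2, 3], [10, 20, 30])[0, 1])   # exactly 1.0: a perfect linear relation

# A deliberately non-normal (heavily skewed) pair still yields a meaningful coefficient:
x = np.array([1, 2, 3, 4, 100], dtype=float)
y = 2 * x + 1
print(np.corrcoef(x, y)[0, 1])                      # also 1.0, despite the extreme value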

Buffoon
July 29, 2010 7:15 am

Bob
If you were to run through the thread, you’d find that Steve’s opinion is based on the *lack of data* in the end years of the data
//Lack of data means the trend isn’t valid. Are you saying there isn’t a lack of data? I don’t get what you’re arguing.
KLA
(1, 1, 1, 1, 2, 0, 1, 1, 1, 10, 1, 1, 1, 1, -5)
15 real datapoints spaced over 1200 km.
But because of sparsity of measerment stations, you know only the number a index 0 (1), index 9 (10) and index 14 (-5). Linear interpolation (smoothing) would infill the missing ones, creating the following set:
(1,2,3,4,5,6,7,8,9,10,7,4,1,-2,-5)
//Sorry, I don’t follow. You give me a set of real data with 15 indexes. Then you construct an interpolation example from 3 indexes. If you interpolate over real data, you can calculate error by location. The average error in your interpolated set vs. your real set is 2.8, handily, exactly the difference in means. You can’t make free accuracy.
Nyquist sampling says that a signal can be reconstructed perfectly if the sampling rate is at least twice the highest frequency in the data. It therefore assumes the data is functional. Temperature data as taken is not functional; it is stepped across irregular areas. The void is assuming that the temperature data is linearly coupled such that it would be subject to the sampling laws for bandlimited signals or averaging processes. Please demonstrate the sampling frequency required for accurate temperature reconstruction of, say, Africa, by finding the smallest distance over which there can be a significant step change, and then come back to me with the suggestion that the data sampling rate shown in the 250km graph would support Nyquist-Shannon reconstruction with an expectation of accuracy.

July 29, 2010 6:01 pm

Buffoon replied, “Lack of data means the trend isn’t valid. Are you saying there isn’t a lack of data? I don’t get what you’re arguing.”
While there is insufficient data to create trends from the dataset with 250km radius smoothing, there is apparently enough data to do it from the dataset with 1200km radius smoothing.
As I explained to Steven upthread:
#############
To create the trend maps, GISS uses cells where at least 66% of the data exists. Refer to their map making webpage:
http://data.giss.nasa.gov/gistemp/maps/
The note toward the bottom of the page reads, “’Trends’ are not reported unless >66% of the needed records are available.”
Since the maps with 250km radius smoothing have much less data from which to create trends (than the maps using 1200km radius smoothing), the trends will be different.
#############
Buffoon, presenting examples to me why this is right or wrong doesn’t serve any purpose (unless you’re trying to educate me), because it is GISS who established the 1200km radius smoothing and GISS who set the threshold on data availability for trends for their data.
I’ve noted to you the problems I find with the 1200km radius smoothing in an earlier comment.
What many took as an argument between Steven and me was simply a lack of communication. I was trying to explain to Steven how and why GISS does it, regardless of whether or not it is correct. In other words, GISS set the “rules” for how this was done and I was simply trying to explain those rules to Steven.
Regards
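A minimal sketch of the availability rule Tisdale quotes above (trends reported only where more than 66% of the needed records exist) is below. The threshold comes from the GISS maps page he links; everything else here, including the invented data, is illustrative only.

# Sketch: report a grid cell's trend only if >66% of its records are present,
# mirroring the rule quoted from the GISS map-making page. Data are invented.
import numpy as np

def cell_trend(years, anoms, threshold=0.66):
    """Return a deg C/decade trend, or None if too much of the record is missing (NaN)."""
    mask = ~np.isnan(anoms)
    if mask.mean() <= threshold:
        return None                          # not enough data: no trend reported
    slope, _ = np.polyfit(years[mask], anoms[mask], 1)
    return slope * 10.0

years = np.arange(1880, 2010, dtype=float)
anoms = 0.005 * (years - 1880) + np.random.default_rng(0).normal(0, 0.2, years.size)
anoms[:80] = np.nan                          # wipe out the first 80 years of the record

print(cell_trend(years, anoms))              # None: only ~38% of the record exists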

Gail Combs
July 29, 2010 6:45 pm

Frank K. says:
July 27, 2010 at 5:36 pm
I have studied this paper in the past, particularly Figure 3, and the ONLY region where the correlation is any good is 44.4 N and above. Everywhere else, especially the Southern Hemisphere, it’s marginal or complete crud! But hey, it’s climate science – anything goes…
_________________________________________________________________
Thank you for checking out the paper. I am not surprised at your finding.
70% of this planet is water, and you only have to look at how the ENSO patterns change to see how utterly bogus the concept of 1200km smoothing is. This is especially true when coupled with the claimed 0.1C accuracy implied by all the media releases.

Gail Combs
July 30, 2010 5:39 am

RW says:
July 29, 2010 at 6:17 am
“It seems to me that you have very little knowledge of statistics. You found a web page that made a number of mistaken claims about the Pearson coefficient, and you seem to think that permits you to make statements like this. The web page you found was wrong; those four statements about the Pearson coefficient are wrong…..
___________________________________________________________________
You are working with the text-only light edition of “H. Lohninger: Teach/Me Data Analysis, Springer-Verlag, Berlin-New York-Tokyo, 1999. ISBN 3-540-14743-8”.
The correlation coefficient r (also called Pearson’s product moment correlation after Karl Pearson) is calculated by ….
Assumptions:
* linear relationship between x and y
* continuous random variables
* both variables must be normally distributed
* x and y must be independent of each other
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This is from a statistical program manual and illustrates the fast and loose use of Pearson correlation that I dislike so much because it is done without any real knowledge of the person typing on the keys:
Chapter Seven: Correlation and Regression
The most widely used bivariate test is the Pearson correlation. It is intended to be used when both variables are measured at either the interval or ratio level, and each variable is normally distributed. However, sometimes we do violate these assumptions. If you do a histogram of both EDUC, chapter 4, and PRESTG80, you will notice that neither is actually normally distributed. Furthermore, if you noted that PRESTG80 is really an ordinal measure, not an interval one, you would be correct. Nevertheless, most analysts would use the Pearson correlation because the variables are close to being normally distributed, the ordinal variable has many ranks, and because the Pearson correlation is the one they are used to. SPSS includes another correlation test, Spearman’s rho, that is designed to analyze variables that are not normally distributed, or are ranked, as is PRESTG80
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
::This article is about the correlation coefficient between two random variables.
If we have a series of n measurements of X and Y written as xi and yi where i = 1, 2, …, n, then the Pearson product-moment correlation coefficient can be used to estimate the correlation of X and Y . The Pearson coefficient is
[formula]
also known as the “sample correlation coefficient”. It is especially important if X and Y are both normally distributed. The Pearson correlation coefficient is then the best estimate of the correlation of X and Y .
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Chapter 12: Correlation (A. Karpinski, 2007)
1. Pearson product moment correlation coefficient…
3. Correlational assumptions
• The assumptions of the test for a correlation are that both X and Y are normally distributed (actually X and Y must jointly follow a bivariate normal distribution).
o No other assumptions are required, but remember:
• The correlation coefficient is very sensitive to outliers
• The correlation coefficient only detects linear relationships….
o The covariance only describes linear relationships between X and Y.
o There may be a non-linear relationship between X and Y, but rXY will only capture linear relationships. rXY will not be useful in measuring non-linear relationships between X and Y.
o The correlation coefficient is quite sensitive to outliers
• Anytime you report a correlation, you should examine the scatterplot between those two variables: to check for outliers, and to make sure that the relationship between the variables is a linear relationship.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Pearson product-moment correlation coefficient
In statistics, the Pearson product-moment correlation coefficient (sometimes referred to as the MCV or PMCC) (r) is a common measure of the correlation between two variables X and Y. When measured in a population the Pearson Product Moment correlation is designated by the Greek letter rho (ρ). When computed in a sample, it is designated by the letter “r”….
The statistic is defined as the sum of the products of the standard scores of the two measures divided by the degrees of freedom: [Formula]
Note that this formula assumes the Z scores are calculated using sample standard deviations which are calculated using n − 1 in the denominator. When using population standard deviations, divide by n instead.
The result obtained is equivalent to dividing the covariance between the two variables by the product of their standard deviations…..
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
http://en.wikipedia.org/wiki/Bimodal_distribution
Bimodal distributions are a commonly-used example of how summary statistics such as the mean, median, and standard deviation can be deceptive when used on an arbitrary distribution. For example, in the distribution in Figure 1, the mean and median would be about zero, even though zero is not a typical value. The standard deviation is also larger than deviation of each normal distribution.
Elementary Concepts in Statistics
Are All Test Statistics Normally Distributed?
Not all, but most of them are either based on the normal distribution directly or on distributions that are related to and can be derived from normal, such as t, F, or Chi-square. Typically, these tests require that the variables analyzed are themselves normally distributed in the population, that is, they meet the so-called “normality assumption.” Many observed variables actually are normally distributed, which is another reason why the normal distribution represents a “general feature” of empirical reality. The problem may occur when we try to use a normal distribution-based test to analyze data from variables that are themselves not normally distributed…..
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
(This is why my Stat. teacher said to PLOT THE DATA!)
____________________________________________________________________
RW says:
July 29, 2010 at 6:17 am
This is not correct at all. The distribution of the data is to a large extent irrelevant.
It’s poor form to throw around phrases like “People with very little knowledge of statistics…”, even if you yourself understand statistics. It’s much worse when you’re making basic errors.
______________________________________________________________
I think your statements are a very good illustration of why I made the statement about a little knowledge of statistics being dangerous. And that is without thirty years of watching snake-oil salesmen/consultants selling high-priced stat programs to novices with promises of magically improved production performance.
Oh and you never even caught the trap I set to find out if anyone here had any real knowledge of Statistics….

ShugNiggurath
July 30, 2010 6:40 am

GISS 250km data is no longer available (the page opens, but there are no graphical representations and all the download links give an error). If I were a US taxpayer I’d ask them where the data went myself.

RW
July 31, 2010 1:23 am

Gail Combs – the internet’s a big place. No matter how wrong the idea, you’ll easily be able to find 20 people who will support it. Your statements about the Pearson coefficient are not outrageously wrong, but they are wrong nonetheless. The maths behind the Pearson correlation coefficient does not assume or require that the data be normally distributed, and the coefficient gives meaningful results for non-normal data. I gave you an example that showed this quite clearly.
The wider point is that the correlation between temperature anomalies at widely spaced locations doesn’t go away if you use a different correlation statistic. It is a physical fact.