On the "march of the thermometers"

I’ve been away from WUWT this weekend, recovering from a cold and spending time with visiting family, so I’m just now getting back to regular posting. Recently there has been a lot of web activity and discussion around the dropping of climatic weather stations, aka “the march of the thermometers,” which Joe D’Aleo and I reported on in this compendium report on issues with surface temperature records.

Most of the station dropout issue covered in that report is based on the hard work of E. M. Smith, aka “chiefio”, who has been aggressively working through the data bias issues that develop when thermometers are dropped from the Global Historical Climate Network (GHCN). My contribution to the study of the dropout issue was essentially zero, as I focused on what I have been studying for the past three years: the USHCN. The USHCN has had a few station dropout issues, mostly due to closures, but nothing compared to the magnitude of what has happened in the GHCN.

That said, the GHCN station dropout Smith has been working on is a significant event: the inventory has gone from about 7,000 stations worldwide to about 1,000 now, with lopsided spatial coverage of the globe. According to Smith, there has also been an affinity for retaining airport stations over other kinds. His count shows 92% of GHCN stations in the USA are sited at airports, with about 41% worldwide.

The dropout issue has been known for quite some time. Here’s a video that WUWT contributor John Goetz made in March 2008 that shows the global station dropout issue over time. You might want to hit the pause button at time 1:06 to see what recent global inventory looks like.

The question that is being debated is how that dropout affects the outcome of absolutes, averages, and trends. Some say that while the data bias issues show up in absolutes and averaging, they don’t affect trends at all when anomaly methods are applied.

Over at Lucia’s Blackboard blog there have been a couple of posts on the issue that raise some questions on methods. I’d like to thank both Lucia Liljegren and Zeke Hausfather for exploring the issue in an “open source” way. All the methods and code used have been posted there at Lucia’s blog, enabling a number of people to examine and replicate the work independently. That’s good.

E. M. Smith, aka “chiefio”, has completed a very detailed response to the issues raised there and elsewhere. You can read his essay here.

His essay is lengthy; I recommend giving yourself more than a few minutes to take it all in.

Joe D’Aleo and I will have more to say on this issue also.

Another Ian
March 8, 2010 1:34 am
dearieme
March 8, 2010 1:39 am

A “march” can go in either direction. What we have is the Retreat of the Thermometers. The enquiring mind wonders whether it was a tactical retreat.

fredb
March 8, 2010 1:59 am

[taunting of Anthony lends nothing to the conversation ~ ctm]

March 8, 2010 2:02 am

I’ve completed USHCN original raw data vs USHCN version 2 revised raw data
blink charts for Iowa. –
http://www.rockyhigh66.org/stuff/USHCN_revisions_iowa.htm
As with Illinois and Wisconsin, the majority of stations had their early temperatures lowered, which artificially increases the warming trend.
These revisions in the base data seem as large as those Dr Hansen did with GISS, but while the GISS homogenized data was in-your-face alteration that you could compare with the original raw data, USHCN version 2 pretends it IS the raw data.
Any use of these revised numbers for climate research is a waste of time.
Here are the Illinois and Wisconsin charts –
http://www.rockyhigh66.org/stuff/USHCN_revisions.htm
http://www.rockyhigh66.org/stuff/USHCN_revisions_wisconsin.htm
These are all the USHCN stations in the three states, no cherry picking.

March 8, 2010 2:22 am

E.M.Smith does not like anomalies and prefers to do his analysis with absolute temperatures. In that world, the “march of the thermometers” towards the Equator, or wherever, may have caused a real temperature bias.
But the climate scientists do it differently. They do two things that prevent that bias. One is the use of anomalies. That is, you form the global mean by averaging differences of station temps from their local mean over a fixed period. If you are only looking at temperatures relative to those means, it scarcely matters whether stations being dropped are hot or cold. It only matters whether they are rising relative to that local long term mean.
The other main protection is gridding. The global average is not an average of stations. It’s an average of grid cell averages, which you can see on the GISS plots. The grid average is an average of stations in the grid. So removing a station affects the global average only by its effect on its grid average. If the station was colder than the grid average (and you weren’t using anomalies), then there might be a small effect on the global average due to that differential. But cold stations live in cold grids, and are about equally likely to be colder or warmer than their grid mean.
There’s no point in advancing an argument about a simple global average of stations’ temperatures. No-one uses it. That is the real spherical cow.
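A minimal sketch of the two steps described above (anomalies against each station’s own base period, then averaging by cell rather than by station); the station series, baseline period, and single “grid cell” are hypothetical illustrations, not GISS code:

```python
# Sketch of anomaly-plus-gridding: hypothetical stations and one grid cell.

def station_anomalies(series, base_start, base_end):
    """Convert a station's yearly temps to anomalies vs its own base-period mean."""
    base = [t for yr, t in series.items() if base_start <= yr <= base_end]
    mean = sum(base) / len(base)
    return {yr: t - mean for yr, t in series.items()}

def grid_mean(stations_in_cell, year):
    """Average the anomalies of whatever stations report in this cell this year."""
    vals = [s[year] for s in stations_in_cell if year in s]
    return sum(vals) / len(vals) if vals else None

def global_mean(cells, year):
    """Average over grid cells, not over raw stations."""
    vals = [grid_mean(c, year) for c in cells]
    vals = [v for v in vals if v is not None]
    return sum(vals) / len(vals)

# Two stations in one cell: one cold, one warm, both warming at the same rate.
cold = {yr: -10.0 + 0.02 * (yr - 1950) for yr in range(1950, 2001)}
warm = {yr: 25.0 + 0.02 * (yr - 1950) for yr in range(1950, 2001)}
cell = [station_anomalies(s, 1951, 1980) for s in (cold, warm)]

# Drop the cold station: the year-2000 anomaly barely changes, because both
# stations sit at the same anomaly relative to their own means.
with_cold = global_mean([cell], 2000)
without_cold = global_mean([[cell[1]]], 2000)
print(round(with_cold, 3), round(without_cold, 3))
```

The sketch shows the mechanism Nick describes; whether the real dropped stations had the same anomaly trend as the survivors is the empirical question the thread is arguing about.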

toyotawhizguy
March 8, 2010 2:50 am

One wonders if the public at large, especially those who have become AGW believers, realize that the average global temperatures (based on earth weather station data) reported by CRU at UEA, the IPCC and others rest on only a very small sampling of actual surface temperatures throughout the world, and that at least 99.9% of the earth’s surface (including ocean surface temperatures) is not adequately represented in the data sets reported by the world-wide network of weather stations.
Such an insignificant sampling of temperatures becomes an issue on a planet that has experienced surface temperature extremes as high as +57.7 deg C (+135.86 deg F)[1] and as low as -89 deg C (-128.2 deg F)[2]. That’s a peak to trough record setting temperature range of 146.7 deg C (264.06 deg F). The usual annual temperature range on the earth is -40 deg C (-40 deg F) to +40 deg C (+104 deg F), which is a peak to trough temperature range of 80 deg C (144 deg F). These record setting extremes and the usual annual ranges should put the Warmist’s alarmism over a +0.6 deg C average global temperature rise in the past century (as claimed by the IPCC) into perspective.
[1] The hottest air temperature ever recorded was 57.7 °C (135.9 °F) at Al ‘Aziziyah, Libya, on September 13 1922. [Source:Wikipedia]
[2]The coldest air temperature ever recorded on Earth is −89.2 °C (−129 °F), at Vostok Station, Antarctica on 21 July 1983. [Source:Wikipedia]

March 8, 2010 2:54 am

My biggest concern is the change from manual to automated measurement of temperature. Having seen how real people behave taking real measurements, I know that there is a significant change going from manual to automatic measurements.

March 8, 2010 3:01 am

But with less thermometers doesn’t the margin of error increase statistically

Peter of Sydney
March 8, 2010 3:18 am

Even if the thermometer readings are correct and that there is a trend of increasing temperature, so what? It doesn’t mean it’s due solely to AGW, if at all.

Editor
March 8, 2010 3:30 am

Nick Stokes (02:22:28) :
As E.M. Smith’s post has just explained – he is looking at the SHAPE of the data that GIStemp has to try to cope with. He is doing this PRIOR to working out HOW WELL GIStemp copes with these biases.
It is worth taking the time to read properly, especially if you THINK you know what he has been saying.

Editor
March 8, 2010 3:32 am

Another Ian (01:34:38) : Thanks for the sign post. Glad you like it!

Graham Gorrie
March 8, 2010 3:40 am

Re: so many thermometers being located at airports, where jet exhaust (among other heat sources) can increase the readings. The AGW alarmists would have us believe that the increase in jet travel contributes to an increase in global temperature. It seems that the planes are directly contributing to a false, higher reading on the thermometers by blasting them. So I guess I can agree that planes are involved.

curious
March 8, 2010 3:45 am

Anthony,
you and D’Aleo wrote:
“It can be shown that they [i.e. NCDC] systematically and purposefully, country by country, removed higher-latitude, higher-altitude and rural locations, all of which had a tendency to be cooler.”
If it “can be shown”, why wasn’t it shown in your “report”? E. M. Smith claimed recently that he only “speculated” about the bias, and that he took “no position on motivation”, because he couldn’t “see inside folks minds”.
But it is you who signed the SPPI report. You should take some responsibility for your [snip]; instead you refer us to E. M. Smith, who seems to change his mind about the existence of the warming bias.
You wrote:
“The thermometers were marched towards the tropics, the sea, and airports near bigger cities. These data were then used to determine the global average temperature and to initialize climate models. Interestingly, the very same stations that have been deleted from the world climate network were retained for computing the average-temperature base periods, further increasing the bias towards overstatement of warming by NOAA.”
Do you think this statement is supported by the “very detailed response” of Smith? What about Roy Spencer’s results then? Are you going to dismiss it, like Smith, as another “hypothetical cow”?

RichieP
March 8, 2010 3:48 am

OT
The ex-editor of the Guardian has brazenly called for “climate change” to be a matter of belief and faith:
http://www.guardian.co.uk/commentisfree/2010/mar/07/climate-change-inertia-prophet#start-of-comments
“the plain fact is that we surely need a prophet, not yet another committee. We need one passionate, persuasive scientist who can connect and convince – not because he preaches apocalypse in gory detail, but in simple, overwhelming terms. We need to be taught to believe by a true believer in a world where belief is the fatal, missing ingredient.”
Though I do like the “gory detail” pun …

Philhippos
March 8, 2010 3:58 am

Can anyone tell me if stations in UK and Europe have been checked for siting as has been done in USA? I would be happy to tour round and find them if I can get a clue to their siting.
Also, can anyone tell me why the Tips & Notes tab on this site doesn’t work for me?
[Reply: on my browser I can’t see Tips & Notes unless I expand the screen. ~dbs, mod.]

Tony Rogers
March 8, 2010 4:06 am

It seems to me that there is an interesting potential bias in GisTemp due to reducing station numbers. I read Hansen et al. 1999 the other day and it contains some interesting information on this.
On page 25 it states – “For most of the twentieth century, 55-60% of the U.S. stations are rural (population less than 10,000), about 20% are small town (population less than 50,000), and 20-25% are urban. However, in the final (near-real time) year of data, before USHCN data are available, the urban proportion of stations jumps to about 50- 55%.”
GisTemp adjusts for UHI by making the trend for urban stations the same as the trend for nearby rural stations. Figure 3 seems to imply that this is done by making prior years warmer rather than recent years cooler. (That seems like saying that the temperature at an urban station 100 years ago was as if it had the same city as today. It would seem to make more sense to adjust today’s temperatures down, i.e. as if the city wasn’t there today.)
Anyway, we can see from Figure 1 (b) that the total number of stations has dropped rapidly from a peak of about 6000 in 1970-ish.
GisTemp calculates the temperature for a 2 deg x 2 deg grid box (about 200 km square) by averaging all stations within 1200 km, with each station weighted by its distance from the grid centre. Then an anomaly is calculated for the grid point.
Now it seems to me that, whilst the trends of both rural and urban stations might be the same, the changing proportions can affect the overall trend.
Consider two sets of stations contributing to a grid point:
Station set A: Rural. 75% of total affecting grid point in 1900, 50% in 2000. Temp in 1900 is 10 deg C. Temperature in 2000 is 10 deg C. Adjustments are zero.
Station set B: Urban. 25% of total affecting grid point in 1900, 50% in 2000. Measured temp in 1900 is 10 deg C. Measured temp in 2000 is 12 deg C due to UHI. UHI adjustment raised temperature in 1900 to match rural trend. i.e. adjusted temp for 1900 is also 12 deg C (because rural trend is 0).
Then calculate the grid point temperature in 1900 and 2000 (assuming equal distances from the grid point).
1900: Temp = 10 * .75 + 12 * .25 = 10.5
2000: Temp = 10 * .5 + 12 * .5 = 11
Haven’t I just created a 0.5 deg per century temperature trend even though the trends of adjusted temperatures for both rural and urban stations were zero?
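The arithmetic above can be run directly; this sketch simply restates the commenter’s hypothetical numbers in Python:

```python
# Reproducing the commenter's arithmetic: both station sets have zero trend
# after adjustment, yet the changing rural/urban weights create a trend.
rural_temp = 10.0          # same in 1900 and 2000, zero trend
urban_adj  = 12.0          # UHI-adjusted to 12 C in both years, zero trend

t_1900 = rural_temp * 0.75 + urban_adj * 0.25   # 75% rural weight in 1900
t_2000 = rural_temp * 0.50 + urban_adj * 0.50   # 50% rural weight in 2000

print(t_1900, t_2000, t_2000 - t_1900)  # 10.5 11.0 0.5
```

The 0.5 °C rise comes entirely from the shifting rural/urban weights, not from any station trend, which is exactly the artifact the comment describes.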

JP Miller
March 8, 2010 4:10 am

Nick Stokes (02:22:28) :
You seem to be missing the point, Nick. Creating anomalies and grid averages does not necessarily eliminate bias. It depends significantly on how you do both of those procedures. EM has described that quite clearly, but you seem either to not understand that or, at least, you do not acknowledge the point.
More importantly, EM is doing what GHCN and GISSTEMP should have done: eliminate bias in the raw data first, then compute anomalies and gridded averages.
And, oh, one more thing: GHCN and GISSTEMP should have made all their raw data and adjustment/ conversion logic and code publicly available so people (scientists and others) could examine whether what they did is reasonable or not.
While he has not completed the global analysis yet, the early returns (e.g., Pacific Basin) suggest the “warming trend” of the last ~30 years is an “anomaly” of the raw temperature data machinations. One hopes he submits this work to a review by unbiased climate scientists (if there is such a thing) and statisticians to get independent judgment as to whether what he has done holds up to serious — not pejorative — scrutiny.
Kinda like science is supposed to be done…

March 8, 2010 4:17 am

Re Nick Stokes (02:22:28) :
I understand that E.M. Smith and others disagree with you and maintain that the anomalies of the dropped stations in a grid are, or can be, affected on a time-specific basis.
But I would like to leave that alone for the moment. I have seen in public literature the claim made that the world mean temperature has moved from this, pick a number, to this, pick a higher number, and so for P.R. purposes it appears sometimes the mean is used.
But my real question (until the rest is resolved, and I do not think it is yet resolved) is, why use a method where the mean is distorted? Are we incapable of both?

Philhippos
March 8, 2010 4:26 am

Tips & Notes page opens ok but I can find no way of putting any text into it. Am using IE8

March 8, 2010 4:26 am

“twawki (03:01:33) :
But with less thermometers doesn’t the margin of error increase statistically”
Yes, it does, but the effect is not linear. Going from 7000 to 1000 stations increases your random error not 7-fold but only by a factor of √7, about 2.6. Depending on the level of significance you were shooting for and the error range of the data itself, you might go from an error of, say, 0.11 °C to roughly 0.29 °C. This is just for random errors in the data and does not address bias issues.
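The √n scaling behind that figure, as a quick sketch (the 7000 and 1000 station counts are from the thread, not measured data):

```python
import math

# Random error of a mean scales as 1/sqrt(n), so cutting stations from
# 7000 to 1000 inflates the random error by sqrt(7), not by a factor of 7.
inflation = math.sqrt(7000 / 1000)
print(round(inflation, 2))  # 2.65

# Applying it to the commenter's illustrative 0.11 C starting error:
print(round(0.11 * inflation, 2))  # 0.29
```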

March 8, 2010 4:27 am

Re Mike McMillan (02:02:16) :
Your blink charts are very impressive and warrant more commentary (a great deal more). The lowering of the past appears the strongest.
Can you give a quick summary of how you know this is USHCN original raw data vs USHCN version 2 revised raw data, and what GISS does with this data?

March 8, 2010 4:31 am

Re: vjones (Mar 8 03:30),
He is doing this PRIOR to working out HOW WELL GIStemp copes with these biases.
Well, I know what the report has been saying. Summary point 5:

There has been a severe bias towards removing higher-altitude, higher-latitude, and rural stations, leading to a further serious overstatement of warming.

Mark
March 8, 2010 4:31 am

There are two issues here:
1. What has E.M. Smith been saying?
2. What have others been saying, based on their understanding of what E.M. Smith has been saying?
Even if Smith has been reasonable, it does not follow that those who have been basing claims on his work, or on their understanding of his work, have also been reasonable.
Anthony’s article goes some way to addressing point 1 but not point 2, though as he says, a response will be forthcoming. And that will be interesting to read.

Doug McGee
March 8, 2010 4:40 am

Toyota,
You do realize, don’t you, that you don’t need a thermometer at every square inch of the planet to get regional (or grid) averages?
Oh yeah, the average temperature of the planet can increase, without ever breaking a daily record.

Medic1532
March 8, 2010 4:46 am

Just my 2 cents’ worth: could it be that the loss of stations is a result of the Cold War ending? SAC (Strategic Air Command) had bases and weather stations all over the world, but mainly focused on the northern hemisphere. With the fall of the USSR we no longer needed super-accurate weather charts to predict things like fallout patterns from a global thermonuclear war.

March 8, 2010 5:09 am

Satellite derived global mean temps show trends that are similar to thermometer temp trends. If station dropout were causing an artificial warming, then how does one explain satellite data?
Are you arguing about a few trees while missing the forest?

Frank K.
March 8, 2010 5:32 am

“But the climate scientists do it differently. They do two things that prevent that bias. One is the use of anomalies. That is, you form the global mean by averaging differences of station temps from their local mean over a fixed period. If you are only looking at temperatures relative to those means, it scarcely matters whether stations being dropped are hot or cold. It only matters whether they are rising relative to that local long term mean.”
Nick – could you please explain how the use of “anomalies” makes any sense thermodynamically? What does a “world average temperature” arrived at with this method really mean? Thanks.

roger
March 8, 2010 5:35 am

re: Philhippos
Not sure if UK stations have been checked for siting; it would be interesting to contribute if it hasn’t been done. They are generally at RAF bases which have been around since prop planes, so they wouldn’t have considered the exhaust from jets. Also the areas of tarmac may have changed over 50 years or so.
Roger

Editor
March 8, 2010 5:56 am

Nick Stokes (04:31:38) :
re Report “Summary point 5:
There has been a severe bias towards removing higher-altitude, higher-latitude, and rural stations, leading to a further serious overstatement of warming.”
Again, you have to read it in the context of what E.M.Smith is saying. He is looking (for the moment anyway) at biases in the raw data that go into GIStemp. The statement above is correct in that context, in that the data going into GIStemp is warming. The report may well take it out of context and imply that this also causes the global average temperature to warm, but EMS is not saying that (yet, anyway), as he has not yet got to that part of his analysis.
On the other hand, as this post outlines, stations from cooler areas (high altitude, high latitude) have greater seasonal temperature variations and are more susceptible to temperature extremes. I understand that much of “Global Warming” has actually been shown to be “Winter warming”; minimum temperatures are not as low. When global temperatures are falling (as now) this may mean lower lows in stations with this type of location, and we should therefore not lose them from the record.
If anyone is in any doubt about the cyclical nature of climate and its effect on the temperature data recorded worldwide, take a look at this post on
mapping global warming (Figures 8, 9 and 10 in particular)
Also Mark (04:31:39) : said it well – I concur.

March 8, 2010 5:57 am

Re: Frank K. (Mar 8 05:32),
Well, Celsius itself is an “anomaly” – it’s the difference between a temperature and the freezing point of water. And it’s fine for most thermodynamics. Anomalies can tell you about change in world average temperature, which is generally proportional to change in heat content (except for phase change etc).

Tim Clark
March 8, 2010 6:01 am

Nick Stokes (02:22:28) :
E.M.Smith does not like anomalies and prefers to do his analysis with absolute temperatures. In that world, the “march of the thermometers” towards the Equator, or wherever, may have caused a real temperature bias.
But the climate scientists do it differently. They do two things that prevent that bias. One is the use of anomalies. That is, you form the global mean by averaging differences of station temps from their local mean over a fixed period. If you are only looking at temperatures relative to those means, it scarcely matters whether stations being dropped are hot or cold. It only matters whether they are rising relative to that local long term mean.

jack mosevich
March 8, 2010 6:11 am

Do land based thermometers matter any more what with Dr. Roy’s satellite measurements?

B.D.
March 8, 2010 6:12 am

The other main protection is gridding. The global average is not an average of stations. It’s an average of grid cell averages, which you can see on the GISS plots.
It should be noted that gridding causes the influence of the dense (e.g. N. America) networks to be reduced. With gridding, the globe is significantly warmer now than the 1930s. Without it, the globe is barely warmer.

Tim Clark
March 8, 2010 6:17 am

Nick Stokes (02:22:28) :
E.M.Smith does not like anomalies and prefers to do his analysis with absolute temperatures. In that world, the “march of the thermometers” towards the Equator, or wherever, may have caused a real temperature bias.
But the climate scientists do it differently. They do two things that prevent that bias. One is the use of anomalies. That is, you form the global mean by averaging differences of station temps from their local mean over a fixed period. If you are only looking at temperatures relative to those means, it scarcely matters whether stations being dropped are hot or cold. It only matters whether they are rising relative to that local long term mean.
Uh, Nick, I think you need to rethink that statement. We’re not talking absolute station temp here. If you drop stations that are showing no trend or a cooling trend and leave mostly stations that have a warming trend (airports), you bias the trend upward, regardless of gridding.
But the climate scientists do it differently.
Yes, we have noticed they are somewhat……kinky.

Tim Clark
March 8, 2010 6:30 am

Forgive the double post, it’s a Monday.

Frank K.
March 8, 2010 6:32 am

Nick Stokes (05:57:35) :
Re: Frank K. (Mar 8 05:32),
“Well, Celsius itself is an ‘anomaly’ – it’s the difference between a temperature and the freezing point of water. And it’s fine for most thermodynamics.”
Actually, the appropriate temperature scales for thermodynamics are Kelvin or Rankine degrees, as you know, since these are absolute scales. Try using Celsius (or Fahrenheit) in the ideal gas law…
“Anomalies can tell you about change in world average temperature, which is generally proportional to change in heat content (except for phase change etc).”
Is that correct? The heat content is given by the first law of thermodynamics, which can be written for a closed system as:
dE/dt = Q-W
where E is the sensible+potential+kinetic energies, Q is the heat transfer, and W is the work transfer. If you integrate this equation over a given time interval, you see that the change in energy E is relative to an ** initial ** state E(t=0), not to some time-averaged state!
Anomalies are useful for characterizing trends and interpolating data, but one shouldn’t endow them with any thermodynamic meaning. Moreover, this idea that you ** can’t ** use absolute temperatures to characterize global surface temperatures is just silly.

March 8, 2010 6:38 am

“Gridding” is not a solution to missing data. One cannot manufacture statistical precision by averaging an existing dataset into subgroups. The only way to create extra statistical precision is by adding more independent (and hopefully, identically distributed) data. The solution to missing data is to use an analytical method that does not rely on a uniformly populated data space.
This is similar to “naive” analysis of econometric and financial data (my fields) in which data gaps are “filled forward.” Although that operation is defensible as a prediction of what value the missing data would actually have taken, it is *not* useful when performing an analysis of the entire dataset. It creates an autocorrelation in the process, which leads to misleading error analysis.
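A sketch of that autocorrelation effect on synthetic data (the 30% gap rate and Gaussian noise are invented for illustration):

```python
import random

# Forward-filling gaps repeats values, which induces spurious lag-1
# autocorrelation in otherwise independent noise.
random.seed(0)
data = [random.gauss(0, 1) if random.random() > 0.3 else None
        for _ in range(5000)]

filled, last = [], 0.0
for x in data:
    last = x if x is not None else last   # forward-fill the gaps
    filled.append(last)

def lag1_corr(xs):
    """Sample lag-1 autocorrelation."""
    n = len(xs) - 1
    mx = sum(xs) / len(xs)
    num = sum((xs[i] - mx) * (xs[i + 1] - mx) for i in range(n))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

complete = [x for x in data if x is not None]
print(round(lag1_corr(complete), 2))  # near zero for the gap-free values
print(round(lag1_corr(filled), 2))    # clearly positive after forward-filling
```

With roughly 30% of points filled forward, the filled series shows a lag-1 autocorrelation near the gap fraction, while the underlying independent values show essentially none.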

Stephen Wilde
March 8, 2010 6:38 am

vjones (05:56:30)
” I understand that much of “Global Warming” has actually been shown to be “Winter warming”; minimum temperatures are not as low.”
That would be consistent with a simple acceleration of the longitudinal progression of air circulation systems.
Faster west/east movement such as we did see during the period 1975 to 2000 would mean that on average air would spend less time over the continents where fastest cooling occurs.
Slower west/east movement would allow continental interiors to cool off more and lead to a cessation of the apparent warming effect.
Are we quite sure that all we have been observing is not just a simple function of the speed of movement of air masses around the globe ?
Losing the stations most likely to be affected would be a neat method of ‘hiding a decline’ now that the speed appears to have fallen once more.

Pascvaks
March 8, 2010 6:39 am

Tis not the thermometer that is the problem..
Tis the reader of the thermometer that is the problem.
Tis not the number on the thermometer that is the problem..
Tis the number the computer program crunches that is the problem.

March 8, 2010 6:57 am

Well, Celsius itself is an “anomaly” – it’s the difference between a temperature and the freezing point of water. And it’s fine for most thermodynamics.

Nope, thermodynamics is based on absolute temperature scales. It must be this way, otherwise the results are dependent on the relative scales used.

Sarnac
March 8, 2010 7:01 am

Simple airport temperature anomaly test …
1: Identify a list of temperature sensors at airports inside the US
2: Look for a temperature trend DROP on 9/12 and 9/13 2001 when US airports were completely closed down compared to 9/10 2001, when airports were at regular-use levels.
3: Compare these temperatures to the change on 9/10 to 9/12 & 9/13 of 2000, 2002, 2003, … up to 2009
Granted, these are different days, and 2-3 days later in the early fall, but if the drop from 9/10 to 9/12 in 2001 is statistically significant compared to the change in all the other years, then we can say those airports are jet-usage-biased.
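A sketch of the proposed comparison, with synthetic numbers standing in for real airport data (the −1.2 °C drop assigned to 2001 is purely hypothetical):

```python
import random
import statistics

# For each year, take the Sep 10 -> Sep 12/13 temperature change at an
# airport, then ask whether 2001 (airports closed) is an outlier.
random.seed(1)
years = list(range(2000, 2010))
delta = {yr: random.gauss(0.0, 0.4) for yr in years}  # typical day-to-day change
delta[2001] = -1.2                                    # hypothetical drop when closed

others = [delta[yr] for yr in years if yr != 2001]
mu, sd = statistics.mean(others), statistics.stdev(others)
z = (delta[2001] - mu) / sd
print(round(z, 1))  # a large negative z-score would suggest a jet-usage bias
```

This is only the shape of the test; with real station data one would also need to control for weather (fronts, cloud cover) on those particular days.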

Sarnac
March 8, 2010 7:22 am

Re: vjones (05:56:30) :
“I understand that much of “Global Warming” has actually been shown to be ‘Winter warming’; minimum temperatures are not as low.”
So if the mins rise but the maxes don’t, this sounds like some form of validation of Willis Eschenbach’s Thunderstorm Thermostat Hypothesis (and its follow-up, sense-and-sensitivity)
So …
1: the planet warms (coming out of the Little Ice Age) (IMHO without statistically significant human help, but that is irrelevant to this argument)
2: assume everywhere warms evenly (absurd but useful for trivial analysis)
3: but where the planet starts to locally overheat, it follows Eschenbach’s logic and locally-thunderstorms, blocking energy absorption by putting up a sun-reflecting “umbrella” of white-topped thunderclouds, increasing local albedo.
THIS IS WONDERFUL

geo
March 8, 2010 7:25 am

It seems to me the range of skeptics runs to three main lines of thought:
1). The globe isn’t warming at all –we’re measuring it wrong (siting, land use, dropouts, whatever).
2). The globe is warming somewhat, but not as much as we think because we’re measuring it wrong, and what is left if you could do it right falls well within the range of expected natural variability.
3). The globe is warming somewhat, but not as much as we think because we’re measuring it wrong, and after you take out what is reasonable to expect for natural variability, the CO2 warming signal is much less (to the point of not being a threat in any urgent timeframe, and a smallish fraction of what the IPCC assigns to CO2).
Reading Chiefio’s article, he seems to be firmly in camp #1. I respect that he gets there by intense “data diving” of thermometer data. I also recognize that if an analysis seems to conflict with the “real world”, it’s likely there’s a flaw with the analysis rather than the real world, even if I can’t put my finger on exactly what it is –the famous “bumble-bees can’t fly; here’s the analysis proving it” scenario.
I generally find myself in category 3.

Richard M
March 8, 2010 7:30 am

I believe Nick Stokes has admitted he has never read chiefio’s reports, yet tells us that he knows exactly what he has done. Sounds exactly like Nick’s work with Miskolczi.
In other words, he’s so embedded in groupthink that he refuses to objectively read something that might change his beliefs. Is that about right, Nick?

carrot eater
March 8, 2010 7:50 am

Tim Clark (06:17:00) :
” We’re not talking absolute station temp here.”
It’s all you see as you flip through EM Smith’s work, or the SPPI report. So clearly, somebody is talking absolute station temps. EM Smith calls it ‘measuring the data’, or somesuch.
“If you drop stations that are showing no trend or a cooling trend and leave mostly stations that have a warming trend (airports), you bias the trend upward, regardless of gridding.”
Indeed you would. Which is exactly the point of the analysis of Zeke, clear climate code, and Tamino. Before the time of the station number drop, the global trends calculated from the dropped stations and the surviving stations are the same.
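The overlap-period check described above can be sketched as follows; the two station series are synthetic, chosen only to show what “same trend” means here:

```python
# Over the years when both groups report, compare the trend of
# later-dropped stations with that of the survivors.

def trend(years, temps):
    """Ordinary least-squares slope, degrees per year."""
    n = len(years)
    my, mt = sum(years) / n, sum(temps) / n
    num = sum((y - my) * (t - mt) for y, t in zip(years, temps))
    den = sum((y - my) ** 2 for y in years)
    return num / den

years = list(range(1950, 1991))
dropped  = [5.0 + 0.015 * (y - 1950) for y in years]   # cold station, later dropped
survivor = [20.0 + 0.015 * (y - 1950) for y in years]  # warm airport, retained

# Both subsets show the same 0.015 deg/yr trend over the overlap period, so
# (in this synthetic case) removing the cold one would not bias an anomaly trend.
print(round(trend(years, dropped), 3), round(trend(years, survivor), 3))
```

Whether the real dropped and surviving GHCN stations actually match this closely is what the Zeke / Clear Climate Code / Tamino analyses set out to test.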

carrot eater
March 8, 2010 7:53 am

Dan Hughes (06:57:37) :
“Nope, thermodynamics is based on absolute temperature scales. It must be this way, otherwise the results are dependent on the relative scales used.”
Depends on what you’re doing. In many cases, simply knowing the relative change or difference in temperature is enough. If all you want to know is whether something is increasing or decreasing in temperature, then relative changes are all you need.

March 8, 2010 7:56 am

geo (07:25:50),
It appears that one of your three scenarios is correct. There may be some overlap, because we don’t have all the answers [and we certainly don’t have sufficient data due to its being “lost,” repeatedly adjusted, fabricated, etc.]
I’m with Prof Lindzen [probably between #2 and #3, with 3 being most likely]. Lindzen thinks the climate sensitivity is below 1, which makes the effect of CO2 insignificant, even if it doubles from here, which is very unlikely.
That’s why the alarmist crowd is flailing around, looking for an alternate excuse to force Cap & Trade. Methane currently seems to be their fallback position.

Sarnac
March 8, 2010 8:26 am

oops … last comment incomplete … hit tab instead of turning off CAPS, then space and accidentally posted … (WUWT could really use a preview button as the first tab after the comment-post-box) … continuing …
THIS IS WONDERFUL
Instead of overheating the planet, we instead are un-freezing after the LIA …
Places that are warm won’t get that much warmer if at all, but the likely will get wetter (and grow more crops and other plants) … and interestingly, the sahara is turning green
(darn … WUWT link has quotes IN the URL, don’t know if my href= can use single-quotes, so here it is again: http://wattsupwiththat.com/2009/12/16/another-al-gore-reality-check-“rising-tree-mortality”)
Places that are colder will get warmer (so they can extend their growing season … and grow more plants and crops).
So the thunderstorms dynamically thermo-regulate, we get more food just as we seem to really need it (human populations are expected to peak between 8B and 10Billion around 2050(wikipedia) to 2070(Nature, $, 2001)) … and hopefully this isn’t the warm-bounce before the end of the Holocene interglacial optimum

March 8, 2010 8:39 am

carrot eater (07:53:33) :
Nope, thermodynamics is based on absolute temperature scales. All thermodynamics textbooks will clearly state this fact early on in the text.
As to the correctness of my statement, I will point to all the textbooks presently in use at universities across the entire planet.
Now, you point me to texts that say relative temperature scales can be used in thermodynamics. As Frank K noted above, try using C or F in the perfect gas law.

mikef2
March 8, 2010 8:47 am

Hi Carrot Eater,
I’ve just posted a similar comment over at Lucia’s, as I’m serially lurking today.
I’m really trying hard to get my head around this issue, which at Lucia’s seems to be “Smith has got it wrong”. But I think the Lucia crowd are starting from a position of acceptance of the raw data, where Smith is not. So your comment above makes me ask again, because I’m really not sure: could you explain how something like the Campbell Island scenario can be handled by the anomaly method?
As I understand it, E.M. Smith has looked at raw data for New Zealand including Campbell Island, and this shows no significant temp trend.
Campbell Island is then dropped from the network, and without it New Zealand now shows a positive warming trend. So he is showing that it’s the choice of data that creates the bias.
How can any anomaly method get around this? The temp trend today shows a positive trend compared to the baseline, which had Campbell Island in it (as it is bound to, since Campbell Island was a ‘cold’ input), but surely today’s trend is just a statistical artifact because Campbell is no longer recorded.
Isn’t this fundamentally biasing the result?

Slartibartfast
March 8, 2010 8:54 am

Nope, thermodynamics is based on absolute temperature scales

To be fair to Stokes, he did mention that they were looking at changes in temperature, and delta T is the same on the Celsius (relative) and Kelvin (absolute) scales.

supercritical
March 8, 2010 8:54 am

Carrot Eater,
If all you want to know is whether something is increasing or decreasing in temperature, then relative changes are all you need.
What is that ‘something’? Is it an object, a location? If so, then you can only determine an increase/decrease from a continuous series of readings taken at that point. A break in the readings means that you can only make the delta determination within the time-period of the continuous sequence. And so, you will have to start again with the start of the next unbroken sequence.
I’ll give you a simple analog. Say I am the weather recorder for my village, but I go on holiday every August, so no records are taken that month. It follows that my records are useless for determining the average annual village temperature!
And, in comparison or in concert with other local village records, my records can ONLY be included in creating averages of monthly temperatures where I have data. As I have no records for any August, my records cannot be used for annual averaging. And any attempt to overcome this problem by filling-in or averaging by reference to other local site records will result in an UNKNOWABLE ERROR in the result.
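[A minimal numeric sketch of the village example above; the monthly values are invented for illustration:]

```python
# Hypothetical monthly temperature normals for the village, in deg C.
monthly_normals = [2, 3, 6, 10, 14, 17, 19, 18, 15, 10, 6, 3]

# Annual mean from a complete record.
full_mean = sum(monthly_normals) / 12

# The recorder is away every August (index 7), the warmest month,
# so the "annual" average is really an eleven-month average.
no_august = monthly_normals[:7] + monthly_normals[8:]
gap_mean = sum(no_august) / 11

print(round(full_mean, 2))  # 10.25
print(round(gap_mean, 2))   # 9.55
```

Dropping the warmest month cools the computed annual mean by about 0.7 °C even though nothing about the village changed; only the sampling did.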

mikef2
March 8, 2010 8:54 am

..or to use Lucias Toy Planet analogy..
I’ve got 10 temp readings on Toy Island and they average 21C
I take one away so I’ve only got 9 readings and now my average is 22C
Therefore I’ve warmed by 1C
…unless the one I took away was always ‘too cold’ and the temp was actually always 22C…..in which case I’m at the same temp as I always was but I now think I’m at +1C
???????
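[The Toy Island arithmetic above, spelled out in a few lines of Python; the readings are invented to match the stated numbers:]

```python
# Nine stations reading 22 C and one persistently cold station at 12 C.
readings = [22.0] * 9 + [12.0]

# Average with all ten stations reporting.
avg_before = sum(readings) / len(readings)

# Drop the cold station and recompute from the remaining nine.
avg_after = sum(readings[:9]) / 9

print(avg_before, avg_after)  # 21.0 22.0
```

The raw average rises 1 °C with no station actually warming, which is exactly the artifact described; whether anomaly methods cancel it is the question the thread is chewing on.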

red432
March 8, 2010 9:02 am

Off topic: apparently carbon dioxide is depleting the oceans’ oxygen: http://news.yahoo.com/s/mcclatchy/20100307/sc_mcclatchy/3444187
— I’m no expert but couldn’t this be more easily blamed on other kinds of pollution like fertilizer runoff?

James Sexton
March 8, 2010 9:13 am

geo (07:25:50) :
It seems to me the range of skeptics runs to three main lines of thought:……..
Another camp…….warming isn’t necessarily evil (and probably good for mankind) and even if we could do something about it, we probably shouldn’t.
Or another: it is impossible to distinguish naturally occurring CO2 from man-made emissions, and ridiculous to even have a conversation about it.

Jack Hughes
March 8, 2010 9:14 am

If losing thermometers “doesn’t matter”, then why not just remove all except one and use the one remaining thermometer to plot the graphs?
This could be in a big “uber-grid” representing the whole world – and we have already learned that thermometers can be dropped from a grid without affecting the grid.

geo
March 8, 2010 9:21 am

@Sarnac (07:01:02) :
Oooh, I kinda like that one! There may be some other adjustments to look at to take changing weather patterns out (say a front is so ungracious as to choose just that moment to move through). You’d probably want to look at the temp change from other close sites to those airports to confirm their temps were pretty reasonably close to each other on 9/10 vs 9/12-9/13.
Doesn’t help with all that tarmac tho. . .just the operation of the jets themselves, and all those cars going to and fro in the close vicinity (that wouldn’t have been on those days).

DR
March 8, 2010 9:29 am

Was the HO-83 hygrothermometer issue taken into account in all this? Weren’t they replaced during roughly the same time frame as the ‘march of the thermometers’?

rbateman
March 8, 2010 9:43 am

I have no idea how GISS manages to justify this:
http://gallery.surfacestations.org/main.php?g2_view=core.DownloadItem&g2_itemId=57846&g2_serialNumber=2
when the raw data for Ashland, Oregon looks like this:
http://www.robertb.darkhorizons.org/TempGr/AshOre1.GIF
other than to use the former as an excuse to drop stations, after monkeying with the data.

rbateman
March 8, 2010 9:45 am

Smokey (07:56:34) :
Methane, running all round my brain. They say it will kill you faster than previously imagined, but they won’t say when.

March 8, 2010 9:46 am

Smokey,
Why do you favour Lindzen’s estimate of climate sensitivity over the large number of studies which have estimated it to be much higher, particularly in the light of the response by Trenberth et al? And what makes you think that the “alarmists” want to force Cap & Trade on everyone, and why would they?

Editor
March 8, 2010 9:49 am

Stephen Wilde (06:38:36) :
“Are we quite sure that all we have been observing is not just a simple function of the speed of movement of air masses around the globe ?”
Interesting – I hadn’t thought of that angle or heard it discussed anywhere, but it seems inherently sensible to consider it.
geo (07:25:50) :
From a position of being a warmist (several years ago, before my skeptic husband challenged me to actually look at the science), I moved to Cat 3, then to Cat 1 (as I dug deeper). I’m probably now in Cat 2, but basically open to whatever the data says.

Pascvaks
March 8, 2010 9:51 am

Ref – geo (07:25:50) :
“It seems to me the range of skeptics runs to three main lines of thought:
1). The globe isn’t warming at all – we’re measuring it wrong (siting, land use, dropouts, whatever).
2). The globe is warming somewhat, but not as much as we think because we’re measuring it wrong, and what is left if you could do it right falls well within the range of expected natural variability.
3). The globe is warming somewhat, but not as much as we think because we’re measuring it wrong, and after you take out what is reasonable to expect for natural variability, the CO2 warming signal is much less (to the point of not being a threat in any urgent timeframe, and a smallish fraction of what IPCC assigns to CO2).
_______________________
geo, I’m a 4).
4.) The globe is warming somewhat, the way it always does following a “Little Ice Age” type of event; one day it will cool again, the way it always does following a Warming type of event; the thermometers we’re using are OK; the computers we’re using are OK; it’s the computer programmers and Chicken Littles of the World that you have to look out for, and squash whenever they start squawking; it’s also the Fat Alberts of the World that you need to throw rotten fruit and vegetables at, whenever they pull into town with their snake-oil cures and rainmaker gizmos.

anna v
March 8, 2010 10:12 am

I will presume to summarize what E.M. Smith is doing.
1) GISS and similar programs take raw temperature DATA as input and, after processing, produce an OUTPUT that says the world is catastrophically warming.
2) To check this prophecy:
a) he looked at the programs producing the prophecy of catastrophe;
b) he looked at the DATA used by those programs to produce it.
It is the raw DATA that is being described as “the march of the thermometers”.
If, for example, he had looked at the raw data and found it was numbers from the New York phone book, it would be clear to all that no matter what the program did, the output would be nonsense.
He did look at the data, and he found that systematically cold places etc. were dropped, and that the DATA entering the program as input, before any manipulation, are biased towards the prophecy of warming. This is before looking at grids, averages, etc., and at whether there are errors within the programs. The DATA is biased.
Biases have to be corrected for the data to be used meaningfully.
He has not found, in the programs that prophesy catastrophe and would stampede the western world into economic stagnation, any effort to acknowledge and correct for these biases.
If a Pamina or a Papageno has a program that corrects for these biases, it cannot correct the IPCC report that used the biased data to project catastrophe, unless the world repents with a major “mea culpa”.
That’s about it, as far as the march of the thermometers goes.
That’s about it, as far as the march of the thermometers goes.

carrot eater
March 8, 2010 10:31 am

Dan Hughes (08:39:39) :
Take a look in your thermo textbooks, and show me where it says that it matters whether you express a change in temperature in Kelvin or Celsius.
Yes, you need the absolute temperature in the gas laws. You need the absolute temperature for Stefan-Boltzmann law.
You definitely don’t need absolute temperatures to express how much a temperature has risen or dropped over time.
And this is exactly what we’re doing here – seeing how much a temperature has risen or dropped over time.

carrot eater
March 8, 2010 10:54 am

Jack Hughes (09:14:25) :
“If losing thermometers “doesn’t matter” then why not just remove all except one and use the one remaining thermometer to plot the graphs.”
Losing thermometers matters if it leaves you undersampled.
Extreme examples, to illustrate the point:
Say you have 1,000 weather stations in North Dakota. All of them say that North Dakota is warming at some x C/decade, +/- some variance. Losing a few of these stations won’t make any significant difference; you simply don’t need that many weather stations to get a good idea of the temperature trends in North Dakota. This is oversampling.
On the other hand, say you only have three thermometers in Brasil. Say North Brasil is warming, and two of your thermometers are there. South Brasil is cooling, and you had one thermometer there, but it broke. Now, you have a problem because of the lost weather station. It leaves you undersampled: not enough measurements to describe the variations in trend over space.
What Zeke Hausfather, Clear Climate Code, and Tamino have done is show that by using only the stations that remain after 1992, you get the same global results as you get by using the stations that dropped off. This tells you that overall, the missing and the current stations were not different in their trends before 1990.
Now, if you look hard enough, you might find some region where the dropped stations were behaving differently from the surviving ones, before 1990. This would be a problem if you wanted to know the trends at that particular location. But we’ve already seen that this would be the exception to the rule.
Now, some parts of the earth were possibly undersampled both before and after 1990. The Arctic comes to mind. Maybe some parts of Africa. So adding measurements from there would be helpful.

carrot eater
March 8, 2010 10:58 am

anna v (10:12:12) :
“He did look at the data and he found that systematically cold places etc were dropped and the DATA entering as input to the program, before any manipulation, are biased towards the prophecy of warming.”
This is precisely why you need to use anomalies. If you use anomalies (and everybody does), then simply dropping a cold place will not make the average warmer. It just won’t.
However, dropping a place that was cooling will make the average trend warmer.
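[Both claims above can be checked with a toy calculation; all three series are invented for illustration: drop a cold station that shares the common trend and the anomaly average is untouched, but drop a cooling station and the average trend warms.]

```python
# Three invented ten-year series:
#   a: warm and warming; b: cold but warming at the same rate; c: cold and cooling.
years = range(10)
a = [20 + 0.1 * t for t in years]
b = [5 + 0.1 * t for t in years]
c = [5 - 0.1 * t for t in years]

def anomalies(series):
    # Use the first three years as the base period.
    base = sum(series[:3]) / 3
    return [x - base for x in series]

def mean_series(series_list):
    # Year-by-year average across stations.
    return [sum(vals) / len(vals) for vals in zip(*series_list)]

# Dropping cold-but-same-trend b changes nothing in the anomaly average...
with_b = mean_series([anomalies(a), anomalies(b)])
without_b = anomalies(a)
assert all(abs(x - y) < 1e-9 for x, y in zip(with_b, without_b))

# ...but dropping cooling c turns a flat average into a warming one.
with_c = mean_series([anomalies(a), anomalies(c)])
print(round(with_c[-1], 1), round(without_b[-1], 1))
```

With c included, the warming of a and the cooling of c cancel and the final anomaly is 0; with c dropped, the average ends 0.8 above the base period.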

mikef2
March 8, 2010 11:00 am

Anna V
That’s how I understand it too.
I just can’t get my head around how lots of seriously bright people want to ignore that part and go off on tangents about their elaborate math to make these anomalies. Cool, great, good luck to them. It makes no difference if the data is contaminated BEFORE they start… do they just not want to see what is painfully obviously staring them in the face?
And the vibe back is that it’s just cherry-picking a few sites to cause a stir… but the actual number of reporting sites used is shockingly low, so these ‘cherry-picked’ sites start to take on some meaning.

HAS
March 8, 2010 11:01 am

Just to help Nick Stokes 2:22:18 and subsequent comments, and to build on Graham Giller 6:38:15: data manipulation to create anomalies doesn’t add any information to your data set, and gridding removes it.
So using anomalies isn’t a silver bullet that removes bias (removing or adjusting your dataset for corrupt data may).
In fact the big risk with using anomalies is that the normalisation of the data seduces people into thinking that they have converted a data set that is clearly derived from sub-climates with quite different characteristics into something much more homogeneous.
They haven’t. To repeat: there is no more and no less information, and its quality stays the same (at least on the assumption you are using 1-1 deterministic transformations of your data).
Better for this reason alone not to use anomalies, because the point where you think it is simplifying things is the very point at which you are making unwarranted assumptions about your data (e.g. combining observations into averages without thinking about the underlying distribution of the individual observations).

jorgekafkazar
March 8, 2010 11:10 am

Sarnac (07:01:02) : “Simple airport temperature anomaly test …”
I like it.

March 8, 2010 11:23 am

Re mikef2 (08:47:25) :
I do think this is the issue: if the bias (false warming or cooling over time, within the stations chosen and dropped that compose a given grid, compared to including all the available stations within the grid over the same period and arriving at a different anomaly and/or mean temperature) is in the GIStemp temperatures, then something is allowing the bias to leak through.

Gary Hladik
March 8, 2010 11:26 am

andrew adams (09:46:35) : “And what makes you think that the “alarmists” want to force Cap & Trade on everyone…”
Not every “alarmist” favors C&T. Hansen, IIRC, prefers taxing fossil fuels instead. But the almost universal “warmist” goal is to reduce CO2 emissions by making carbon combustion more expensive, forcing a switch to other energy sources that would otherwise be more expensive (some include nuclear in the list, others don’t).
“…any why would they?”
Cap & trade has the potential to make a lot of money for people on the inside, like Al Gore (carbon credits), commodities brokers, financial institutions, and not least the oil companies (who always make money when the price of oil rises). “Making” money out of a politically-created market (i.e. out of literally nothing) is a financial wizard’s dream; it’s even better than inflated mortgage-backed securities.
The CO2 emission reduction exercise is pointless, of course, because rising emissions from China, India, and other developing countries will negate any reductions by the gullible, who will end up paying both for their reductions and for the dislocations (if any) their reductions were supposed to prevent. Guess who ends up paying the bill?

Bill S
March 8, 2010 11:42 am

Anyone know what the reason is for the “pulse” of blue reporting stations at the start of each decade? The number of blue stations goes up dramatically in years ending in zero and then drops immediately back to the norm. I don’t know if it’s important but at the very least it seems weird.

PeterB in Indainapolis
March 8, 2010 11:43 am

Nick Stokes,
OK, let’s say you were using a gridding system, and that particular grid was fairly “cold”. If thermometers were removed from higher elevations and more rural areas, whereas thermometers in lower elevations and more urban areas were kept, should that not make the overall grid “less cold” than it was with all of the thermometers included?
If not, why not?

anna v
March 8, 2010 12:01 pm

Re: carrot eater (Mar 8 10:31),
And this is exactly what we’re doing here – seeing how much a temperature has risen or dropped over time.
You have to realize that as far as global warming or cooling goes, which is the real problem we are being forced to face, it is the value of the temperature in Kelvin that is important. That is what turns temperature from a measure of whether to expect ice on the road or melting tarmac into a heat gauge. It is excess or lack of heat that will tell us whether the planet is warming or cooling.
An attempt has been made to make a heat budget for the earth, but as far as I can see in the 2008 Trenberth paper it was done using satellite data, not temperature data.
The temperature is measured at 2 m in the atmosphere. Most of the heat content, or lack thereof, is in the surface layers of ocean and land. If there were no winds, no evaporation, etc., the surface temperature and the 2 m temperature would be at equilibrium, and one could use the 2 m temperature to gauge the heat radiated.
Because there are convection currents, storm systems, etc., and large bodies of atmosphere are moved to surface areas with temperatures that do not correspond to the surface temperature, the correspondence of energy with temperature is broken. Thus there could be large changes, because of redistribution of heat (and lack thereof) on planet-wide scales, in a non-linear manner. When one comes to calculating anomalies over the average temperature of the region, the correlation with what is really happening with the heating and cooling of the planet is really third-hand. It could be that the temperatures are absolutely stable and anomalies show heating because of peculiar redistributions of heat (PDO, ENSO, etc.), as seen in http://nsidc.org/images/arcticseaicenews/20100303_Figure4.png .
If anybody has a link of the time dependence of “radiation energy in in minus radiation energy out” from satellites I would really like to see it.

anna v
March 8, 2010 12:08 pm

It could be that the temperatures are absolutely stable and anomalies show heating because of peculiar redistributions of heat, (PDO, ENSO etc. etc) as seen in http://nsidc.org/images/arcticseaicenews/20100303_Figure4.png .
Sorry, this makes little sense. My only excuse is it is close to my bedtime 🙂 .
I meant that the heat input output is absolutely stable over the planet but anomalies show heating because of peculiar redistributions of heat, (PDO, ENSO etc. etc) as seen in http://nsidc.org/images/arcticseaicenews/20100303_Figure4.png .

March 8, 2010 12:16 pm

Re: Frank K. (Mar 8 06:32),
As to the ideal gas law, the Charles Law part was worked out in the eighteenth century by scientists using Celsius. But yes, in thermo it’s more natural to use Kelvin.
My point was that a lot of thermo is about heat fluxes and changes in heat content. That’s the case in atmospheric science. And you can measure that with K, C, F or anomalies.

March 8, 2010 12:17 pm

David A (04:27:37) :
Re Mike McMillan (02:02:16) :
Your blink charts are very impressive and warrant more commentary (a great deal more). The lowering of the past appears the strongest.
Can you give a quick summary of how you know this is USHCN original raw data vs USHCN version 2 revised raw data, and what GISS does with this data?

Last summer (’09) I downloaded all the Raw and all the Homogenized charts from
http://data.giss.nasa.gov/gistemp/station_data/
for the states of Illinois, Wisconsin, and Iowa. I made raw/homogenized blink charts, and have been posting them on surfacestations.org’s gallery as the stations were surveyed. Since I had the old raw charts when the “new improved” raw charts came out, I downloaded the new ones and made the raw/new-raw blinks.
What I’ve noticed while gluing the charts together was that the temperature peak in the 1930’s was nearly always lowered, and the 1998 peak was often adjusted lower, but Not lower than the 1930’s peak.
I’m guessing that across the USHCN, that adjustment results in 1998 being hotter than the 1934 record, but no longer so high that subsequent years couldn’t beat it. After all, you can’t have global warming if you can’t break old records.
The new GISS homogenized charts that I’ve examined show even more warming, but the difference between new raw and homogenized charts is much less than between the original raw and homogenized.
The removal of the original USHCN raw charts makes it much more difficult to expose the computer-induced warming that’s going on.

A C Osborn
March 8, 2010 12:24 pm

HAS (11:01:40) :
But if they can’t use “Adjustments”, “Homogenizing”, “Anomalies” and “Gridding”, they wouldn’t be able to “Hide the Decline” with their Computer Programs.

A C Osborn
March 8, 2010 12:26 pm

carrot eater (10:54:41) : – “Now, some parts of the earth were possibly undersampled”
Like no thermometers at all even though they are still there and still providing valuable data?

Steve Keohane
March 8, 2010 12:30 pm

carrot eater (10:31:57) : And this is exactly what we’re doing here – seeing how much a temperature has risen or dropped over time.
You’re wrong, we are looking at the raw data, and how it can bias any further analysis, even extracting your beloved ‘anomaly’.

Jerker Andersson
March 8, 2010 1:03 pm

“The question that is being debated is how that dropout affects the outcome of absolutes, averages, and trends.”
Wouldn’t one easy test be to just use the period when there are up to 7000 stations and then calculate the average over that time?
After that you could simulate a station drop out during the exact same period and calculate averages again.
That would give a good hint, IMO, of what the station dropout may do to the measured global temperatures.
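[A deterministic sketch of the test proposed above, with invented station values: hold every station’s climate fixed, then “drop” the cold half and watch the absolute average.]

```python
# 100 invented station normals, evenly spaced from -5.0 C to 24.7 C.
# No station warms or cools; we only change which stations report.
normals = [-5 + 0.3 * i for i in range(100)]

mean_all = sum(normals) / 100            # every station reporting
mean_survivors = sum(normals[50:]) / 50  # cold half "dropped"

print(round(mean_all, 2), round(mean_survivors, 2))  # 9.85 17.35
```

The absolute average steps up 7.5 °C purely from station selection, with no climate change at all; the debated question is whether the anomaly methods used in the published indices fully remove that step.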

March 8, 2010 1:04 pm

Re: PeterB in Indainapolis (Mar 8 11:43),
Yes, removing thermometers that are colder than the grid cell average, as I said, will make the grid cell average temperature colder. Mountain sites are likely to be in that category. But, as I also said, there are two protections against bias. Gridding doesn’t work so well in that case, but anomalies will.

Feet2theFire
March 8, 2010 1:13 pm

I am doing a lot of reading on the Younger Dryas impact event, which – like the dinosaur killer of 65 million years ago – seems to have killed all the megafauna in North America. It is still in dispute whether that is real or not (I think it is very likely true).
The thermometer record seems to have been hit by an extinction event of its own around 1989, with a lesser one just recently.
I love the term, “The Great Dying of the Thermometers.” It fits in well with what else I am into.

March 8, 2010 1:26 pm

In an airport on the way back to NYC at the moment, so I probably can’t contribute as much to this post as I should.
A few quick points:
1) The strong correlation between temps reconstructed based on only “dropped” and only continuous stations pre-1992 should lay to rest any claims of fraud or manipulation on the part of NOAA/NCDC. The fact that “dropped” stations show a greater warming trend on average than continuous stations is actually consistent with a drop-out of colder thermometers, since colder places tend to have a greater warming trend on average.
Pre-cutoff and post-cutoff stations: http://rankexploits.com/musings/wp-content/uploads/2010/03/Picture-98.png
Post-cutoff minus pre-cutoff: http://i81.photobucket.com/albums/j237/hausfath/Picture170.png
2) The whole notion of station “dropping” is something of a misnomer, since those stations were never actively reporting. GHCN was put together in the early 1990s from station data collected via various projects; only the 1300 or so Global Surface Network stations ever provided regular monthly updates via CLIMAT reports. However, GHCN version 3 is being developed as we speak, and should hopefully be released next year. It will collect station records from those stations not in the GSN system that had contributed data to GHCN version 2, as well as new stations established in the interim.
3) Anomalies are essential to properly calculating trends in global temps. The methods used (SAM, CAM, RSM, and FDM) each have their small differences (see Chad’s posts related to them, and Lucia’s recent analysis of the RSM used by GISS). However, all anomaly methods give almost the same trend globally; the major difference is in dealing with fractional station records.
4) The importance of gridding depends largely on how big and representative a set of temp data you are looking at. For GHCN raw data globally, it’s actually not that important, since GHCN by its nature attempts to select a geographically distributed and proportionate set of stations. If you were to add all the USHCN stations to the GHCN network, the effects of gridding would be much more obvious.
Here are two graphs I made over at Lucia’s place (see the posts there for the details of the CAM and gridding processes used):
Gridded vs. non-gridded global anomalies via GHCN v2.mean (raw): http://i81.photobucket.com/albums/j237/hausfath/Picture169.png
Gridded vs. non-gridded absolute temps via GHCN v2.mean (raw):
http://i81.photobucket.com/albums/j237/hausfath/Picture168.png
If you plot the absolute temps with the number of stations available, you can see obvious step-changes in absolute temps associated with changes in station number:
http://i81.photobucket.com/albums/j237/hausfath/Picture166.png
However, anomalies appear largely insensitive to these changes.

Bill S
March 8, 2010 1:40 pm

Re Nick Stokes (13:04:02) :

Yes, removing thermometers that are colder than the grid cell average, as I said, will make the grid cell average temperature colder. Mountain sites are likely to be in that category. But, as I also said, there are two protections against bias. Gridding doesn’t work so well in that case, but anomalies will.

Chiefio has also maintained in other posts that all of the cold temperatures are in the baseline period, though, so removing them from the later record but not from the baseline does create a warm bias regardless of whether you use anomalies and gridding or not.

Bill S
March 8, 2010 1:42 pm

I didn’t do that last post very well–didn’t indent. Nick’s comments are the first paragraph, my reply is the second. Sorry about that.
[ Fixed. “Friendly -mod.” ]

latitude
March 8, 2010 1:55 pm

geo
#5 No one has proven AGW in the first place, so temperatures mean nothing.
Figure out everything else first, then see if any trace gases have any effect.

March 8, 2010 1:55 pm

Re: Nick Stokes (Mar 8 02:22),
Thanks Nick, well put.
There are a few things that people need to realize. Every ounce of effort you expend on problems that are not problems is a waste of effort.
Here is a list of non-problems:
1. Station dropout (except WRT overall certainty, if the station count goes too low)
2. Rounding (please see Lucia’s site)
3. Thermometer accuracy (please see the law of large numbers)
4. Using (Tmax+Tmin)/2 (go download data from CRN and see for yourself)
5. The notion that “averaging” results in the loss of data
6. The selection of “base periods”
7. The colors used in charts
Here is a list of open questions (not problems, but open questions):
1. What is the provenance of the data being used?
2. What adjustments are made, and what are the exact calculations?
3. Is there UHI contamination in the signal?
4. Does microsite bias matter? How much?
5. How should the uncertainty due to spatial coverage be computed?
6. What is the optimal method for station combining and area averaging? (see RomanM)
5 & 6 are methodological questions not specific to climate science. 1 & 2 should have been addressed long ago; they are record-keeping and statistical questions. 3 & 4 touch on climate science. Without 1 & 2 and 5 & 6 being put to bed, I would question work on 3 & 4. By question I mean hold open.
So, just to set priorities. Also, 5 & 6 may depend upon the characteristics of 1 & 2: understanding the quality of your data and the various kinds of holes.
What do you think, Nick?

supercritical
March 8, 2010 1:58 pm

All this mathematical finagling is dodgy, and is in the process of being revealed as mere cleverness.
And so certain Climate Scientists ought to dwell on what Francis Bacon had to say about the uses of mathematics:
mathematics …. ought only to give definiteness to natural philosophy, not to generate or give it birth. From a natural philosophy pure and unmixed, better things are to be expected.
……. inquiries into nature have the best result when they begin with physics and end in mathematics.

Tim Clark
March 8, 2010 2:11 pm

Nick Stokes (13:04:02) :
Re: PeterB in Indainapolis (Mar 8 11:43),
Yes, removing thermometers that are colder than the grid cell average, as I said, will make the grid cellaverage temperature colder. Mountain sites are likely to be in that category. But, as I also said, there are two protections against bias. Gridding doesn’t work so well in that case, but anomalies will.
Do you believe what you write????
Hypothetical three stations in the same grid:
1. ave temp 25
2. ave temp 26
3. ave temp 27
Average annual temp in grid = 26.
Remove station 1:
Average annual temp = 26.5.

Keith W.
March 8, 2010 2:12 pm

Nick Stokes – the other thing one has to remember with regard to the way GHCN uses grid cells is that the temperature numbers are adjusted based upon the temperatures of any other stations within 1200 kilometers. This means that the grid cell containing Boston, Massachusetts, is affected by the temperature of the grid cell containing Atlanta, Georgia. While there would be a mitigating factor if there were temperatures from more northerly grid cells, the higher-latitude sites are decreasing in number.
Which means the lower (and warmer) latitude sites are affecting the higher latitude sites more than the reverse. If you ever check an anomaly map, the areas with the greatest warming anomalies have been the higher latitudes over the past decade.

March 8, 2010 2:16 pm

Re: Jerker Andersson (Mar 8 13:03),
In fact, a couple of people have done this test over at Lucia’s:
1. Zeke
2. Lucia
3. the CCC folks.
They all confirm what the basic math suggests: the methods are robust to station removal. I know it seems counterintuitive. Back in 2008, when this issue was first raised, I thought the same thing: that dropping cold stations would change everything. But the guys at CA quickly smacked me down (thanks, guys). It doesn’t. Let’s see if I can do a very simple example.
Heck, maybe I can even find the comment at CA where I did it.
OK, let’s make two stations, one hot and one cold, and let’s make them cool over time. Ready?
10,9,8,7,6,5,4,3,2,1
20,19,18,17,16,15,14,13,12,11
Now lets do the average in RAW TEMP:
15,14,13,12,11,10,9,8,7,6
Ok notice something? ( look at the rate )
Now Lets lose some data! pretend Phil jones is in charge of things
and we lose the cooler.
10,9,8,7,6,5,4
20,19,18,17,16,15,14,13,12,11
Now average:
15,14,13,12,11,10,9,13,12,11
DANG! If I work in raw temps and lose stations, my average gets messed up.
Spherical cow to the rescue: let’s use the dreaded anomaly.
10,9,8,7,6,5,4,3,2,1
20,19,18,17,16,15,14,13,12,11
base period for series 1 {6,5,4} average = 5
base period for series 2 {16,15,14} average = 15
Anomalize
5,4,3,2,1,0,-1,-2,-3,-4
5,4,3,2,1,0,-1,-2,-3,-4
Now, have Dr Jones lose data
5,4,3,2,1,0,
5,4,3,2,1,0,-1,-2,-3,-4
Average the anomalies.
You can do that math; I’ll leave the example for you.
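For readers who would rather run the example than do the subtraction by hand, here is a small Python sketch of the two-station illustration above. The station values and base periods are the ones in the comment; nothing here is any group's actual code:

```python
# The two cooling stations from the example above.
cold = [10, 9, 8, 7, 6, 5, 4, 3, 2, 1]
hot = [20, 19, 18, 17, 16, 15, 14, 13, 12, 11]

def mean(xs):
    return sum(xs) / len(xs)

def anomalies(series, base):
    """Subtract the station's own base-period mean from every reading."""
    b = mean(base)
    return [x - b for x in series]

# Base periods as in the comment: {6,5,4} (mean 5) and {16,15,14} (mean 15).
cold_anom = anomalies(cold, cold[4:7])
hot_anom = anomalies(hot, hot[4:7])

# Dr Jones loses the tail of the cold station's record...
cold_anom = cold_anom[:6]

# ...then average whatever anomalies are available at each time step.
avg = [mean([s[t] for s in (cold_anom, hot_anom) if t < len(s)])
       for t in range(len(hot_anom))]
print(avg)  # the full cooling trend 5, 4, ..., -4 survives the dropout
```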

David Alan Evans
March 8, 2010 2:18 pm

For those of you who actually think temperature is a good metric of energy, try this experiment that many women the world over do every day. (As a single parent I did too.)
1) Pre-heat an oven to 200-250ºC.
2) Without protection, place the item you want to cook in the oven.
See! No damage!
Now carry out the following experiment.
Place your hand in the stream of 100ºC steam venting from the pot on the hotplate.
These experiments must be performed in this order: if performed in the wrong order, the delay caused by the visit to the burns unit may render the results invalid.
DaveE.

Gary Hladik
March 8, 2010 2:37 pm

Re Nick Stokes (02:22:28), having read E.M. Smith’s article, I see now that Stokes is referring to the “hypothetical cow” of how GisTemp is supposedly produced, whereas Smith discusses the “real cow” of how it’s actually produced. Smith hasn’t completely dissected this process, but is far enough along to see that:
1) The input data are biased
2) Homogenization, correction, etc. are performed on the input data before anomaly calculation
Later steps in the GisTemp process may or may not offset the second problem, and that’s what he’s still looking at.
Fair summary?

March 8, 2010 2:40 pm

Re: Steve Keohane (Mar 8 12:30),
Steve.
Just to help you understand anomalies.
5,5,5,4,4,4,5,5,5
Now let’s pick a base period {4,4,4}; average = 4
Anomalize!
1,1,1,0,0,0,1,1,1
Note I picked a cool base. See how the pattern of the data doesn’t
change: the early trend is zero, the mid trend is zero, the late trend is zero.
the overall trend from start to finish is the same too!
Now pick a different base, the first period:
0,0,0,-1,-1,-1,0,0,0
What do you see? Anomalies change nothing, and picking a cool period
isn’t any great evil deed. I can always shift to a warm baseline. It’s just
addition.
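The baseline-shift point above can be demonstrated the same way. This Python sketch, using the comment's numbers, shows the two anomaly series differ only by a constant, so every trend is identical:

```python
# The nine readings from the example above.
data = [5, 5, 5, 4, 4, 4, 5, 5, 5]

def anomalize(series, base):
    b = sum(base) / len(base)
    return [x - b for x in series]

cool_base = anomalize(data, data[3:6])  # base {4,4,4}: 1,1,1,0,0,0,1,1,1
warm_base = anomalize(data, data[0:3])  # base {5,5,5}: 0,0,0,-1,-1,-1,0,0,0

# The two anomaly series differ only by a constant offset, so every
# step-to-step difference (and hence every trend) is identical.
print([a - b for a, b in zip(cool_base, warm_base)])  # constant 1.0 throughout
```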

March 8, 2010 2:48 pm

Re: carrot eater (Mar 8 07:50),
Indeed you would. Which is exactly the point of the analysis of Zeke, clear climate code, and Tamino. Before the time of the station number drop, the global trends calculated from the dropped stations and the surviving stations are the same.
Actually, I think Zeke’s analysis showed a minor difference.

Anticlimactic
March 8, 2010 2:49 pm

I have just watched a film called ‘The End Of The Line’, about overfishing round the world. Researchers were puzzled that although fishermen in all areas reported lower catches, the annual total was going up, which made no sense. They traced the anomaly to China where they found that the Chinese government were paying their officials based on the amount of fish caught, so these officials were making the figures up! Human nature.
The CRU, NOAA and GISS were formed to measure global warming. If there is no global warming then they cease to have a purpose, their functions could be absorbed elsewhere [eg. CRU in to the UK Met Office, and NOAA in to the real NASA to augment the satellite data.]
We of course expect the people in these organisations to be honorable, and to dutifully report global cooling if it occurs, even if it means them losing their jobs!
[……but, human nature!]

March 8, 2010 2:50 pm

Re: Medic1532 (Mar 8 04:46),
Nick Stokes could do a fair explanation of why the number of stations dropped.
Nick?

supercritical
March 8, 2010 2:59 pm

Near-surface air temperature is but one of the met. records. Why not use the others?
Would it be useful to know if the ‘global average’ windspeed has changed? Wind direction? And then how about rainfall? Cloud-cover? Insolation?
It seems such a ‘no-brainer’ to use these other indicators to see if the climate is changing, so can any climatologist tell me why they are not being used?

March 8, 2010 3:01 pm

Yo Ant-nee ( best south jersey lingo)
Get some glutamine ( amino acid for the immune system) for that cold
and then meet in the gym for some benches
Get well,
JB

March 8, 2010 3:07 pm

Re: Keith W. (Mar 8 14:12),
I presume you mean GISS, not GHCN. They use rural stations within 500 km, or 1000 if they have to, to do their UHI adjustments. They use stations within 1200 km to calculate the anomaly base for a grid point (not an adjustment). In both cases the results are weighted inversely by distance, so the further stations have less effect.
While the derivation of the anomaly base should be as local as possible, even if it is biased by remote stations, that just provides a constant offset, and doesn’t affect trends.
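A rough illustration of the distance weighting described above: the sketch below assumes a weight that tapers linearly to zero at 1200 km, which is the commonly described GISTEMP scheme; it is illustrative only, not the production code.

```python
# A toy distance-weighted grid-point anomaly. The weight here tapers linearly
# to zero at 1200 km (the commonly described GISTEMP scheme); this is an
# illustration, not the production code.
def weight(distance_km, cutoff_km=1200.0):
    """Full weight at the grid point, falling to zero at the cutoff."""
    return max(0.0, 1.0 - distance_km / cutoff_km)

def gridpoint_anomaly(stations):
    """stations: list of (anomaly, distance_km) pairs near one grid point."""
    pairs = [(a, weight(d)) for a, d in stations]
    total = sum(w for _, w in pairs)
    return sum(a * w for a, w in pairs) / total

# A station 100 km away carries ~11x the weight of one 1100 km away,
# so the far station barely moves the result.
print(gridpoint_anomaly([(1.0, 100.0), (-1.0, 1100.0)]))  # ~0.83
```

This is Nick's point in miniature: the further stations still contribute, but with sharply reduced effect.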

Alex Heyworth
March 8, 2010 3:14 pm

Re: steven mosher (Mar 8 14:16),
Good couple of posts, Steven. Liked your first one identifying what the real issues are.
WRT your second post, I would be convinced there was no issue IF I thought what GISS does in calculating anomalies was as simple as your example. As it is, I am about 3/4 convinced, but would like to see what Chiefio comes up with.
On your first post, I’m inclined to suggest that the real issue isn’t to do with air temperature at all, but with heat content, as suggested by David Evans above. However, I’m a bit dubious that we have much of a handle on ocean heat content yet. For example, I have yet to see an estimate of ocean heat content change that has admitted what its error bars are.

DocMartyn
March 8, 2010 3:17 pm

The march of the thermometers would work as an explanation if the sites that were deleted had a smaller (Max + Min) range than the remainder. I was looking for the effect of altitude and urban heat Island effects on (max + min).
Does anyone know the effect of height on (max + min) ?

geo
March 8, 2010 3:22 pm

@steven mosher (14:16:25) :
Nice. What does “micro-site” mean in this context? I ask this, because the airport work leads me to believe there are sort of 3 levels of UHI, one of which is micro-site along the lines of what surfacestations is looking at with the French 1-5 scale, and classic UHI which is more of a “regional” phenomenon.
But the airport thing to me is a ‘tweener that does not comfortably fit in either. The 1-5 scale clearly has certain assumptions built into it at a very basic level about just how big a potentially contaminating nearby heat source might be –and large commercial jet engines are pretty clearly far outside those base assumptions.
Yet it isn’t necessarily “UHI” (in the classic sense) either, so far as being predictable by population density or some such. So, a ‘tweener. It would be nice to have language to describe and categorize that middle case. Airports are the classic example, but possibly not the only one, so “airport problem”, while understandable, doesn’t really satisfy my need to generalize taxonomies, y’know? I suppose I’m leaning toward UHI as the top-level categorization, with micro-site, whatever-the-general-term-for-the-airport-kind-of-thing is, and regional as the subsets.

March 8, 2010 4:27 pm

Of course, if you take this thing to a (ridiculous) limit, you don’t need any thermometers anywhere.

Jan Pompe
March 8, 2010 4:28 pm

steven mosher (13:55:32) :
“What do you think Nick?”
I’m not really sure that I care much for the opinion of a mathematician that thinks this:
“Nick Stokes (13:04:02) :
Re: PeterB in Indainapolis (Mar 8 11:43),
Yes, removing thermometers that are colder than the grid cell average, as I said, will make the grid cellaverage temperature colder. ”
Thank you for continuing the strawman here:
steven mosher (14:16:25) :
Sure, where the trends are the same across the stations, if you normalise (do we really need to invent new terms such as “anomalise” just for climate science?) before you average, the trend will remain unchanged.
I would have thought that patently obvious, so why waste time with it?
However, the real problem is that the trends from station to station are not the same. If the stations dropped are those with a cooling or steady trend, in favour of those with, say, UHI contamination, we have a problem that needs to be addressed.

E.M.Smith
Editor
March 8, 2010 4:30 pm

curious (03:45:07) : E. M. Smith, who seems to change his mind about the existence of the warming bias.
No, I have not changed my mind at all. There is a clear warming bias in the DATA. There are changes that put more warm thermometers in the present and with flatter seasonal profiles. That IS station bias. Period.
Somehow folks seem to take that and want to turn it into “I think the anomalies are warming”. They are two very different things. Please keep them apart. There are more kinds of warming bias than just rising anomalies.
There are at least 5 ways of cooking up an anomaly that I know of. Before you can assess how well they each work, and which one has the least problems with the particular “issues” in the data that you have, you must first understand that DATA.
So, for example, the First Difference method resets to zero on any data gaps. Well, if you look at the “Digging in the clay” article linked at the top, you find LOADS of data gaps. Hmmm… Maybe FD “has issues” here….
If you look at “Climatology” (i.e. what GIStemp does with things like making up values to fill in the blanks with “The reference station method”) you can avoid the damage from the blanks, but you get ‘leakage’ of the warming BIAS in another station into a missing station and into the anomaly maps (As Gavin said in his email from the foia batch… The GISS guys know this.)
You can go down the list of methods. They each have “issues”.
So to choose a “good one” (or as I am doing, to measure one) you simply MUST know what issues are in the DATA prior to the process.
So what we find are LOADS of bias in the DATA. Latitude, altitude, airport percentage, etc. That you might find a way to mitigate it does not mean the bias is not there, and does not mean you can ignore it. (And it does not answer why it is being put in in ever greater amounts when there is no need to do so… FWIW, my bias is to assume “stupidity” rather than “malice” in conformance with Hanlon’s Razor: Never attribute to malice that which is adequately explained by stupidity. [ my paraphrase ] But I’d really rather have much less bias in the data to begin with. )
For each method used, that added bias has the risk of reaching the result in significant amounts and has the probability of increasing the error bars. Neither is good.
So I cooked up my own “method” that I think is better than either of those other two (and much better than the “some of this, some of that, recursively” adjusting and in-filling done by the whole of GIStemp, not just the grid / box anomaly STEP3), since it avoids the “in-fill” errors but also carries forward the anomaly over data gaps, so it will preserve a trend better with holey data. And I found that the USA is basically FLAT in trend but with a rolling pattern roughly in step with the PDO. Hardly CO2 matching.
http://chiefio.files.wordpress.com/2010/02/usa-dt.gif
But hey, it uses anomalies so it can’t possibly have any error in it. So the USA ought to now get a complete free pass on any AGW issues. Right!?
/sarcoff>
And that is my whole point. We are asked to take on faith that “the anomaly will fix it” and there are many different ways to do anomalies with many different results and no clear way to know which one is right.
(Though I’m pretty sure the way GIStemp does it is wrong. Applied AFTER all the UHI, in-fill, homogenizing, USHCN.v2 / GHCN averaging, etc. )
Now look at that chart again. Notice how wide the swings get back in the very early years (on the far right). The error band is opening up with station reductions. Now tell me station drop outs don’t matter. I’m good with that. Happy as a clam. Because if station drop outs don’t matter, then we are now clearly in a 260 year cooling trend and we can all go home.
But if station drop outs DO matter, then they matter for all time, and not just the far past.

“Do you think this statement is supported by the ‘very detailed response’ of Smith? What about Roy Spencer’s results then?”

Take a look at that graph again. Notice the dip between the 30’s and now? That’s the GIStemp baseline. Yeah, I think setting your baseline in a dip matters. Make the ‘baseline’ from 1925 to 2005 and you get different (and cooler) anomalies. We are exactly at zero now, and that is where we were in the 190x and 181x eras. And it was all done with anomalies so it’s perfect… 😉
BTW, Spencer’s early result saw “nothing of interest” and my comment was “needs to look at more data”. Having now looked back to the ’70s he is finding divergence with Jones. I have no disagreement with that. It is just the kind of thing I would expect to find. I’d further speculate that the further back he looks, the more divergence he will find. All the ‘odd adjustments’ seem to be piled up in the older data (at least given what I’ve seen comparing ‘old really raw’ with ‘as in the data products’).
So I’d have to say I agree with Spencer. There is an unexplained bias that gets bigger the further back in time you look.

carrot eater
March 8, 2010 4:39 pm

Alex Heyworth (15:14:57) :
No, that isn’t what GISS does, but their method also works just fine.
Roughly, what GISS does:
Station A is hot. say it never changes. Readings are 30 30 30 30 30
Station B is cold. say it also never changes. Readings are 0 0 0 0 0.
Now say that station B stopped reporting its data. So we only have the first three readings: 0 0 0.
Using absolute temperatures, the average would be 15 15 15 30 30. That of course is wrong.
Using GISS, you start with the longer record. so you take the 30 30 30 30 30. Great.
Then you get the next longest record. That’s 0 0 0.
You find the period where they overlap, and find the mean of each station over that period. That’s 30 and 0. Find the difference of those means, and add it to each value of the second station.
So now station B becomes 30 30 30.
Now you combine A and B. And you get 30 30 30 30 30. Then, you subtract out the baseline, and you end up with the final anomalies 0 0 0 0 0. But that’s cosmetic; the trend doesn’t change by re-centering the anomalies on zero.
So in summary, you had two constant-temp stations, one hot and one cold. Dropping the cold made no difference; the combined average still had a constant temp.
Now, if A and B had different trends, then dropping one would make a difference. That’s true, no matter what method you use.
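The combining procedure carrot eater describes (offset the shorter record by the difference of means over the overlap, then average) can be sketched in a few lines. This is a toy version of the idea with the toy stations from the comment, not GISS's actual implementation:

```python
# Toy version of the combining step: offset the shorter record by the
# difference of the two stations' means over their overlap, then average.
def combine(long_rec, short_rec):
    n = len(short_rec)  # the overlap is simply the short record's span here
    offset = sum(long_rec[:n]) / n - sum(short_rec) / n
    adjusted = [x + offset for x in short_rec]
    # Average where both report; use the long record alone afterwards.
    return [(long_rec[t] + adjusted[t]) / 2 if t < n else long_rec[t]
            for t in range(len(long_rec))]

station_a = [30, 30, 30, 30, 30]  # hot, full record
station_b = [0, 0, 0]             # cold, stops reporting early

print(combine(station_a, station_b))  # constant 30 throughout: no spurious jump
```

Dropping the cold station partway through leaves the combined record flat, exactly as the comment argues.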

rbateman
March 8, 2010 4:41 pm

steven mosher (14:16:25) :
I must say that’s quite the clever trick.
Except they didn’t drop data from merely warm or cool stations.
Specifically, a warm rural station relates to a cool rural station in the same way as a warm urban station relates to a cool urban station.
In the real world, they dropped the rural stations, and kept the UHI affected urban stations.
Result: Unprecedented warming.
That’s how you get the gridded output to rise dramatically. You dump the vast majority of stations that don’t show appreciable UHI.

carrot eater
March 8, 2010 4:46 pm

Gary Hladik (14:37:42) :
The only adjustment GISS makes is a UHI adjustment. But why are you surprised that it comes before the stations are combined? It has to be, if you want it to actually be incorporated in the spatial averages.
As for actually using the GISS method, see ccc and Ron Broberg.
http://clearclimatecode.org/the-1990s-station-dropout-does-not-have-a-warming-effect/
http://rhinohide.wordpress.com/2010/03/08/gistemp-high-alt-high-lat-rural/

"Popping a Quiff"
March 8, 2010 5:07 pm

Nick Stokes (02:22:28) :
Your argument leaves too much room for tinkering with the method. And with the obvious bias of people like James Hansen, tinkering only leaves suspicions.
Let’s compare the data from stations that have been dropped to stations that have been retained. According to your method everything should be the same—shouldn’t it—even the actual temperatures?
But I bet a cup of coffee they aren’t.

E.M.Smith
Editor
March 8, 2010 5:08 pm

@Tony Rogers: You understand. Tears of joy… 😉
@J.P. Miller: Thanks. The code is public (though barely readable…) and I’ve gotten it to run. One of only a few folks on the planet to do so 8-0
The issue that bugs me is that they OUGHT to have a full test suite including neutral, warming, cooling, red – pink – whatever data patterns and they ought to have a full QA suite that feeds in broken data and bogus values and they ought to have a published set of benchmarks. And they don’t.
Instead you get really rough code, some pointers to papers that barely apply to what the code does (like one justifies The Reference Station Method in one place at one point in time; but does not show how that is going to work when, oh, the PDO flips and the Jet Stream goes loopy and your reference period is now in a quite different weather regime; nor why applying The Reference Station Method can be done recursively… Showing that a ‘fill in’ from 1000 km away ONCE works does not mean you can do it 3 times in a row and maybe be making fill ins from fill ins from a datum 3000 km away…). And a hearty “Trust Us, we ARE Rocket Scientists”…
That’s the problem…
David A (04:27:37) :
Re Mike McMillan (02:02:16) :
Your blink charts are very impressive and warrant more commentary (a great deal more). The lowering of the past appears the strongest.
Can you give a quick summary of how you know this is USHCN original raw data vs USHCN version 2 revised raw data, and what GISS does with this data?

The USHCN and USHCN.v2 are by definition ‘not raw’. They are constructed sets with adjustments. You know they are “as constructed” if you get them from the constructor, NCDC, at their site (and I do, as does GIStemp. Instructions under the GIStemp tab on my site.)
As for what GISS does with it, well, I’ve spent about a year working that out and there are a few dozen pages devoted to it… buy a bottle of Scotch first, though 😉
@Anna V.: You got it! Gozintas, Gozoutas, Transforms, and deltas. Prove validity or find the issues. Can’t tell the players without a benchmark…
BTW, the ‘mysterious property’ I’m working on as potentially publishable, that I can’t mention yet, is related to exactly the kinds of bias you get in the DATA from station selection bias… and the idea that the anomaly automatically fixes all is just wrong. It helps, sometimes a lot, but it does not fix. (And GIStemp does it at the end instead of the beginning… How something done in STEP3 will fix what was done in STEP0, STEP1, and STEP2 is still a bit of a mystery to me 😉

"Popping a Quiff"
March 8, 2010 5:13 pm

E.M.Smith (16:30:02) :
That IS station bias.
A warming bias which is their desired outcome. Otherwise they’d keep the rural and mountain stations in the record, if for nothing else to avoid the very real possibility of looking suspicious, a possibility which must have crossed their minds.
The fact that they dropped those particular stations makes anyone conclude they want a warming bias, no matter what argument they use to convince us otherwise.

"Popping a Quiff"
March 8, 2010 5:18 pm

Nick Stokes (02:22:28) :
E.M.Smith does not like anomalies
Did he actually say this? Or does he in fact like anomalies in some cases? Or are you trying to influence how people view E.M. Smith?

"Popping a Quiff"
March 8, 2010 5:24 pm

Nick Stokes
So what you are saying is ‘pay no attention to that actual temperature behind the curtain’.
I’ll stay awake in Kansas instead

David Alan Evans
March 8, 2010 5:46 pm

Alex Heyworth (15:14:57) :
A loss or gain of 1ºC in ocean temperature would mean an enormous energy loss/gain. You are right: we have no handle on this.
My personal view is that the atmosphere is a bit player.
DaveE

Harry Lu
March 8, 2010 5:53 pm

steven mosher (13:55:32) :
etc.
Climate4you (excellent web site!):
http://www.climate4you.com/GlobalTemperatures.htm#Temporal stability of global air temperature estimates
This section shows temporal changes in various temperature records.
Interestingly, it is CRU data that shows the least changes! It’s a pity no one HERE believes them any more.
/harry

E.M.Smith
Editor
March 8, 2010 6:05 pm

Tim Clark (06:01:55) :
Nick Stokes (02:22:28) :
E.M.Smith does not like anomalies, and likes to do his analysis with absolute temperatures. In that world, the “march of the thermometers” towards the Equator, or wherever, may have caused a real temperature bias.

Swing and a miss…
I have no distaste for anomalies. I have a distaste for the ASSUMPTION that you can throw an anomaly step into a program and then claim perfection.
Big Difference.
Also, different tools ought to be used for different things.
So be careful with the word ‘analysis’. Many folks in the climatology world seem to use that as shorthand for “Global Average Temperature Trend Analysis”. It isn’t.
So I’ve done an “analysis” of the STATION BIAS in the DATA. It’s still an analysis, but using anomalies on it would remove exactly the things you are trying to find: how big the station selection bias is, where it is located, what types of stations carry more of it, or less, and what regions of the planet show the most, or the least. For all of that kind of analysis, you want to avoid anomalies. That’s the “Gozinta” part.
I’ve also done an anomaly based analysis for the purpose of making a neutral and uncomplicated (by things like The Reference Station Method, data drop outs within a station, UHI “correction” etc.) analysis. This type of analysis does benefit greatly from anomaly based processes. It will be used for the “Delta” part.
I’ve got a first rough benchmark of GIStemp, but I need to make some code to turn the product of GIStemp into something that can be directly compared with dT/dt. That can either be from making them both use the same grid/boxes or by taking the GIStemp grid/boxes and turning them back into a “temperature” series. Still TBD. That’s the “Gozouta” part.
Then, and only then, I can compare the Gozinta to the Gozouta and the Delta and see where there are Variances … and those variances will tell an interesting tale, one way or the other.
And while many folks seem to think I was done at the “Gozinta” step, that was only a first step. I’m pretty much ready for a final approval on the “Delta” step (and then the code will be published with results). It already shows that “Global Warming” is not Global and that the pattern is more in line with Instrument Change and Airport Growth than CO2; but that’s an early conclusion so “more to come”.
The “Gozouta” is the one I dread, as I’ll have to go back into the GIStemp rats nest again… Oh Well, can’t be helped.

“But the climate scientists do it differently.”

I noticed…
“They do two things that prevent that bias. One is the use of anomalies.”
Nope. It does NOT “prevent” that bias. It CAN mitigate. If done right, it can mitigate a lot. To “prevent” would require that it not be in the data to begin with. So put the thermometers back. The whole point behind measuring the DATA is to measure the degree of MITIGATION done later. To leap to the conclusion that the mitigation is PERFECT is exactly the problem…
“That is, you form the global mean by averaging differences of station temps from their local mean over a fixed period.”
And, in GIStemp, they do not do that. They compare an average box of thermometers in time A to an average box of DIFFERENT THERMOMETERS in time B. And they do it AFTER calculating UHI, doing in-fill and homogenizing, etc, etc, etc.
Now maybe you are willing to just ASSUME that process is perfect. I’m not. I want to see benchmark results. What I’ve seen so far says it’s “Pretty good, but not perfect”. And when the STATION BIAS in the input for the Pacific Basin data are measured in 10 C over the life of the data, if you are 95% of perfect you get a 1/2 C warming that is bogus.
Are you really willing to bet the world economy on the HOPE that GIStemp is over 95% perfect? Really?
“… it scarcely matters whether stations being dropped are hot or cold.”
And that attitude, IMHO, is why The March Of The Thermometers happened.
Once you swallow the whopper that the individual stations used are not relevant, that “The Anomaly Fixes All”; then any old box of thermometers will do. Heck, put one on the BBQ. It may read 350 F, but if the climate is getting warmer, next decade it will read 351 F at the same fuel setting and “The Anomaly Will Fix It”…
I’m not willing to bet so much on so little with no evidence.

March 8, 2010 6:54 pm

“I have a distaste for the ASSUMPTION that you can throw an anomaly step into a program and then claim perfection… To leap to the conclusion that the mitigation is PERFECT is exactly the problem… you are willing to just ASSUME that process is perfect.”

I sense a disturbance in the Force… as if a million goal posts were moved, and then were still.
I don’t see where anyone in this thread, or at Lucia’s, or the CCC, or even Tamino has claimed “perfection.” What’s at issue is Watts and D’Aleo’s contention that a warming bias was introduced by the march of the thermometers. It’s been shown, by a number of different methods including GISTemp’s, that no bias was introduced. The same warming trend was shown with the dropped stations and without. QED, as far as the contention in the SPPI report goes.
That’s a falsifiable hypothesis, and we’ll all be paying attention if you falsify it, Chiefio, but the question was not whether the surface temperature record is “perfect.” That’s an entirely valid question that should be carefully investigated, but it’s a separate question.

E.M.Smith
Editor
March 8, 2010 7:23 pm

Tim Clark (06:17:00) :
Uh, Nick, I think you need to rethink that statement. We’re not talking absolute station temp here. If you drop stations that are showing no trend or a cooling trend and leave mostly stations that have a warming trend (airports), you bias the trend upward, regardless of gridding.

Good point. I think I got who’s who a bit lost in the prior comment. There are also longer term issues with station change. Drop one on one side of the jet stream, pick one up on the other, then have the PDO flip and change the relationship between them. Now your “in-fill” replacement may be out of phase with when the baseline established a relationship…

Tim Clark (06:30:50) : Forgive the double post, it’s a Monday.

It’s Monday already? When did that happen 😉

rbateman
March 8, 2010 8:09 pm

Paul Daniel Ash (18:54:01) :
I believe the temperature record has been left in a lesser state than actually exists.
They didn’t just drop stations, they left them with gaping holes. Holes that, I find, are sometimes plugged with forgotten data or by a process never independently checked.

E.M.Smith
Editor
March 8, 2010 8:15 pm

geo (07:25:50) : It seems to me the range of skeptics runs to three main lines of thought:
Then there is:
0). We just don’t know. And can’t. The temperature histories are too short and too full of holes to really say much useful at all (and being moth-eaten even more as we watch). We know it was much hotter and much colder in the past from natural causes. We know there are many profound cyclical events of long duration (see Bond Events). And there is just no way we can disambiguate those events (that we don’t understand) from a ‘normal’ that we have not measured well enough to predict, or even to report, with any accuracy.

1). The globe isn’t warming at all –we’re measuring it wrong (siting, land use, dropouts, whatever).
Reading Chiefio’s article, he seems to be firmly in camp #1.

Close. I agree with the part after the “-” but can’t really see a way to say there is “no warming” flat out.
So far all I can say is “no warming in the USA and some other places” with a modest error band, widening in the distant past. Some other continents also show little or no warming, or a bit of cooling, but the error bands are much wider. A couple of continents show some warming, but is it real or measurement error? (Like New Zealand where we can tease out a recent bit of warming trend, that is exactly what would be expected from putting all your thermometers at airports… so is it “real”? )
So I mostly sit in #0 with occasional bouts of #1 that I fight off with a dose of LIA exit says we’re warming from that point. Then I end up with:
http://chiefio.wordpress.com/2009/10/09/how-long-is-a-long-temperature-history/
where I settle on: It all depends on where you put your starting point.
Just like all fractal things, patterns repeat.
So we are warming, and cooling, each day.
And warming and cooling with each storm wave.
And warming and cooling each seasonal swing.
And warming and cooling with each El Nino / La Nina cycle.
And warming and cooling with each PDO flip (I’m on the Pacific, for folks elsewhere, substitute AMO, AO, etc.)
And warming and cooling as axial tilt shifts.
And warming and cooling as the solar “constant” changes ;-0
And warming and cooling as the ice ages come and go.
And it’s a fool’s errand to try and say if you “are warming” or “are cooling”.
The answer is “yes” to both, and at all times.
It’s a very satisfying thing, in a Zen-like Mu! sort of way:
“The question is ill formed”
But only a few Zen Heads really like that answer, so I usually keep it to myself… and just go with the flow of “it’s probably not warming, much” just because it avoids a long philosophy discussion and folks looking at you strangely when you say “Zen” is the answer 😉
( But it really is the answer, and O really is the form of it… the empty vessel… I do not know… )

March 8, 2010 8:37 pm

Re: Tim Clark (Mar 8 14:11),
I got mixed up with the colders – yes, removing a colder station will make the cell average warmer, as you’d expect. The point is that the average to refer to is that of the grid, not the globe.

Pamela Gray
March 8, 2010 8:39 pm

Here is one example of how station drop might affect the anomaly over time. Anomaly changes differ depending on climate zone and GPS address, just as deserts are more sensitive to atmospheric treatments than forests are. With the same degree of “treatment”, i.e. CO2 increase, you may have thermometers at one altitude bouncing up while thermometers at another altitude stay the same. Because cities tend to be near waterways, which tend to be at lower altitude, station drops from higher altitudes (which will have different anomaly responses to CO2 forcing) will affect your overall anomaly, bleeding out the thermometers more robust to climate warming forcing and leaving the overly sensitive lower-altitude thermometers to run with the ball.
Whether this is the case or not needs examination.

March 8, 2010 8:41 pm

rbateman (20:09:56) :
I believe the temperature record has been left in a lesser state than actually exists.
They didn’t just drop stations, they left them with gaping holes. Holes that I find out are sometimes plugged with data forgotten or a process never checked independently.

Don’t just believe.
http://clearclimatecode.org/the-1990s-station-dropout-does-not-have-a-warming-effect/
http://rankexploits.com/musings/2010/a-simple-model-for-spatially-weighted-temp-analysis/
http://www.drroyspencer.com/2010/02/new-work-on-the-recent-warming-of-northern-hemispheric-land-areas/
Check it.

March 8, 2010 8:53 pm

Re: Tim Clark (Mar 8 06:17),
“Uh, Nick, I think you need to rethink that statement. We’re not talking absolute station temp here. If you drop stations that are showing no trend or a cooling trend and leave mostly stations that have a warming trend (airports), you bias the trend upward, regardless of gridding.”
No, what I said was “It only matters whether they are rising relative to that local long term mean.” Which seems to me to be exactly the same as you are saying. Of course, if you select high or low trend stations, that will be reflected in the results. But the D’Aleo/Watts report is all about selecting stations from warm regions, not with a warming trend.
In fact, one of the ironies is that it is cold stations that have the greatest warming trend (“Blame it on Canada”). So eliminating high-lat stations should make things cooler.

Pamela Gray
March 8, 2010 9:00 pm

Nick: pre- or post-homogenization for those higher lats? Citation/link please.

anna v
March 8, 2010 9:08 pm

I have come to the conclusion that the basic problem in the status of climate calculations is that the people doing them live in a linear world. Unfortunately the physics behind climate is highly nonlinear.
All statements saying that “anomalies reflect trends and correct biases” depend on there being one mechanism affecting the average temperature curve in each location, and on that being linear in the relevant variables.
Example:
A varying heat source in a closed room. Thermometers next to the source, and at various places in the room will show different temperatures, but a time plot of the temperature will show similar, within errors, variations in tandem with the source variations. Hotter close to the source, cooler far away. An anomaly over the average of each thermometer may be defined and it will show the same trend revealing the variations of the source and the anomaly can be averaged.
Add a second gradually moving heat source in the room and the same thermometers. Each thermometer will show different trends depending on the location with respect to the moving source and to the varying immovable source and anomalies over the average for each thermometer will be every which way.
The planet is much more complicated than this simple example. There are enormous motions of sea currents, air currents and jet streams, nonlinear and chaotic. These motions take bulk air with temperatures characteristic of their points of origin and distribute and mix it planet-wide, nonlinearly. We then sit in a few thousand places and take temperatures while varying nonlinear sources with different time sequences go about their business.
Again I will refer to the February anomaly plot: http://nsidc.org/images/arcticseaicenews/20100303_Figure4.png
It is worth studying. There is a positive anomaly of 14 C in the Arctic and a negative one of 4 C in Russia, with everything else in between. What has happened? Huge masses of warm air moved to the Arctic, where there are few storm systems, displacing huge masses of frigid air at -45 C south into multiple churning storm systems that mixed this cooling potential until it ended up as a -4 C anomaly in Russia. The overall positive anomaly reported is an artifact of ignoring the nonlinearity of the heat distribution system in the atmosphere and taking an average of apples and oranges.
And as I stated badly yesterday night above, with this mismatched calculation it is quite possible for the heat content of the earth to be absolutely stable in time while the anomaly is positive.
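The closed-room example above can be checked numerically. This sketch (geometry and source strengths made up; a crude 1/(1+distance) falloff stands in for real heat transport) shows thermometer anomalies staying perfectly correlated with one varying source, and decorrelating once a second, moving source is added:

```python
import math

times = [i * 0.5 for i in range(20)]

def temp_one_source(pos, t):
    source = 20 + 2 * math.sin(t)      # one fixed source whose output varies
    return source / (1 + pos)          # cooler further from the source

def temp_two_sources(pos, t):
    moving = 10 / (1 + abs(pos - 0.3 * t))   # a second source drifting across
    return temp_one_source(pos, t) + moving

def anomalies(temp_fn, pos):
    series = [temp_fn(pos, t) for t in times]
    mean = sum(series) / len(series)
    return [x - mean for x in series]

def corr(x, y):
    # Pearson correlation of two zero-mean series
    sx = sum(u * u for u in x) ** 0.5
    sy = sum(v * v for v in y) ** 0.5
    return sum(u * v for u, v in zip(x, y)) / (sx * sy)

one = corr(anomalies(temp_one_source, 0.0), anomalies(temp_one_source, 3.0))
two = corr(anomalies(temp_two_sources, 0.0), anomalies(temp_two_sources, 3.0))
print(round(one, 3))   # 1.0: every thermometer carries the same signal
print(round(two, 3))   # far below 1: anomalies now go "every which way"
```

With one mechanism the anomalies carry the same signal everywhere, so averaging them is safe; with two interacting sources an average of the anomalies mixes apples and oranges, as the comment puts it.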

"Popping a Quiff"
March 8, 2010 9:11 pm

Nick Stokes (20:53:22) :
Your argument has room for exchanging the dropped stations for the retained?
The results will be the same?

E.M.Smith
Editor
March 8, 2010 9:16 pm

rbateman (09:43:28) : I have no idea how GISS manages to justify this:
http://gallery.surfacestations.org/main.php?g2_view=core.DownloadItem&g2_itemId=57846&g2_serialNumber=2
when the raw data for Ashland, Oregon looks like this:
http://www.robertb.darkhorizons.org/TempGr/AshOre1.GIF

Well, GISS never justifies anything 😉
(Think about it… I’ll wait… 2 meanings… )
But that one IS pretty bizarre …
I took the opportunity to use it as a ‘test case’ for my dT/dt method. Mine looks pretty darned good in comparison. I know, I ought to make a graph out of it (especially now that I’ve actually made graphs in a posting, so everyone knows I can do it and have the hardware 😉 ).
But that box is presently shut down, so it’s “reboot and all that” stuff… (I hope to be out of the ‘reboot to change what you do’ process Real Soon Now thanks to some donated resources – thanks guys, you know who you are 😉 ). But for right now, I’m going to post Yet Another Table Of Numbers…
The “dT” column is the ‘accumulated change of temperature’ while “count” will always be 1 for a single station. The dT/yr ought to match the change from one year to the next (so it ought to be easy to double check against the data chart). Be advised, the dT/yr is calculated from past to present while the ‘accumulated dT’ is shown present to past (puts the ‘baseline start of time’ at now).
Sure looks to me like a whole lot of nothing happening, right in sync with the base data you posted. Looks to me like GIStemp “screwed the pooch” somehow… But don’t worry, “The Anomaly Will Fix It” 😉

Thermometer Records, Average of Monthly dT/dt, Yearly running total
by Year Across Month, with a count of thermometer records in that year
-----------------------------------------------------------------------------------
YEAR     dT dT/yr  Count JAN  FEB  MAR  APR  MAY  JUN JULY  AUG SEPT  OCT  NOV  DEC
-----------------------------------------------------------------------------------
2006   0.15 -0.15    1   2.3 -1.1 -3.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0
2005   0.52 -0.37    1  -1.5  0.4 -0.8 -1.3  0.2 -1.9  0.2  0.5 -0.6 -0.3  0.8 -0.1
2004   0.68 -0.16    1  -2.2  0.8  1.7  3.0  0.6 -0.7 -0.3  0.6 -2.0 -1.0 -0.8 -1.6
2003  -0.13  0.81    1   3.6 -0.8  1.4 -2.5  0.4  0.7  0.8  1.9  1.4  3.0 -1.4  1.2
2002   0.29 -0.42    1  -0.7  1.9 -1.8  2.1 -3.4  1.0  1.1 -2.0 -1.1 -1.9 -0.6  0.3
2001  -0.31  0.60    1  -0.1 -3.4  2.3 -0.4  2.2 -1.3  0.7  0.5  1.1  0.4  4.0  1.2
2000  -0.32  0.01    1   0.4  2.7 -0.6  0.0  1.6  1.5  0.0  0.3 -0.1 -0.3 -5.3 -0.1
1999   0.36 -0.67    1  -2.7 -1.7 -1.4 -0.5  0.6 -0.6 -3.1 -1.4 -2.0  1.1  2.2  1.4
1998   0.42 -0.07    1   1.9  1.1 -0.8 -0.7 -4.4  0.7  2.5  0.4  1.8 -0.7 -1.5 -1.1
1997   0.61 -0.18    1  -0.2 -2.6  0.4 -0.5  3.1  0.0 -2.4 -0.1  1.6  0.0  1.0 -2.5
1996   0.77 -0.17    1  -3.2  0.3  0.6  1.5 -1.0  0.3  2.5  2.6 -3.3 -0.1 -1.2 -1.0
1995   0.06  0.72    1   3.3  2.7 -0.7 -1.8 -0.7 -0.6 -1.8 -1.2  0.4  1.0  5.7  2.3
1994   0.01  0.05    1   1.0  0.0 -1.6  0.6 -0.6 -0.1  4.9  1.0  0.4 -3.2 -1.5 -0.3
1993   1.54 -1.53    1  -1.5 -4.1  0.2 -2.0 -1.6 -2.3 -4.3 -2.7  1.0  0.5 -2.8  1.2
1992   0.07  1.47    1   2.5  1.2  3.4  3.8  5.5  4.0  0.6  0.7 -2.1 -0.7 -1.2 -0.1
1991  -0.20  0.27    1  -1.7  3.9 -1.6 -4.3 -1.5 -1.5 -1.0  0.4  0.6  2.5  3.6  3.9
1990  -0.60  0.40    1   1.8  1.6  0.7  0.3 -0.4 -1.0  2.9  1.2  1.6  0.1 -1.1 -2.9
1989   0.20 -0.80    1  -1.4 -3.4  0.0  1.8  1.0  1.3 -2.6 -1.0 -0.2 -3.3 -0.7 -1.1
1988   0.62 -0.42    1   0.6 -0.2 -0.1 -1.6 -2.8 -2.5  2.8 -0.4  0.4  0.3 -0.8 -0.8
1987   0.72 -0.09    1  -4.1 -1.8 -2.2  2.0  1.6 -0.6 -0.1 -1.6  2.2  1.9  0.6  1.0
1986  -0.69  1.41    1   3.9  3.3  3.9 -1.7  0.8  1.6 -3.3  3.0  0.1  1.6  3.7  0.0
1985  -0.73  0.03    1  -0.1 -1.2 -2.4  3.1 -0.8  2.3  1.2  0.3 -1.5  1.1 -2.9  1.3
1984  -0.15 -0.58    1  -1.3 -1.7 -0.7  0.1 -1.1 -0.4  3.0 -0.9  1.0 -1.5 -0.9 -2.5
1983  -0.28  0.13    1   3.5  2.1  2.5 -0.3  0.5 -2.3 -2.6 -0.8 -1.2 -1.2  1.1  0.2
1982   0.77 -1.05    1  -4.7 -0.4 -1.5 -1.7  0.6  0.8 -0.1 -1.6 -2.1  1.5 -1.8 -1.6
1981   0.17  0.61    1   1.6 -2.2  1.4  0.2  0.5  2.5 -0.6  3.3  0.5 -2.2  0.9  1.4
1980   0.65 -0.48    1   1.5  2.4 -2.9  0.6 -1.7 -2.8  0.1 -0.4 -1.5 -1.1  0.8 -0.8
1979   0.12  0.53    1  -3.8 -1.4 -0.5  1.2  1.5 -0.6 -0.3 -1.0  4.5  0.7  1.7  4.3
1978   0.42 -0.29    1   4.9 -0.6  3.6 -3.0  2.3 -1.4  0.6 -2.5 -2.2  1.8 -2.2 -4.8
1977  -0.56  0.98    1  -2.3  2.6  0.6  3.2 -3.2  4.0 -0.1  4.3 -1.2 -0.8 -0.8  5.4
1976  -0.78  0.22    1   2.4 -0.4 -0.1  2.0  0.5 -1.0  0.1 -0.3 -1.1  1.5  2.0 -3.0
1975   0.27 -1.05    1  -1.3  1.0 -1.5 -2.5  0.8 -2.3  0.2 -2.7 -1.0 -1.2 -1.7 -0.4
1974   0.65 -0.37    1  -0.9 -2.7  1.2 -0.9 -3.0  0.8 -1.6  1.4  2.4  1.3  0.0 -2.5
1973  -0.02  0.67    1   2.1  1.1 -3.3  1.6  1.1  0.1 -0.3 -1.3  1.7 -0.3  0.1  5.4
1972  -0.77  0.75    1  -0.4  1.6  3.1 -0.4  1.7  1.9  1.0 -0.5  0.0  1.0  0.9 -0.9
1971   0.74 -1.51    1  -4.2 -2.5 -1.6  1.5 -1.2 -3.4 -1.0  0.5 -0.1 -1.8 -3.1 -1.2
1970   0.04  0.70    1   4.9  2.2  0.3 -2.6 -2.0  1.2  1.9  1.7 -1.7  1.7  2.6 -1.8
1969   0.57 -0.53    1  -1.5 -4.4 -1.3  0.6  2.6  0.0 -1.8 -0.1 -0.1 -1.4 -0.5  1.6
1968   0.21  0.36    1  -0.8  4.3  2.7  2.9 -0.4 -0.3  0.1 -3.9 -1.7  0.3 -0.6  1.7
1967   0.39 -0.18    1   0.6 -0.2 -2.3 -5.1 -1.0  1.8  3.0  3.0  1.8 -0.6 -0.4 -2.8
1966  -0.22  0.61    1   0.2  0.4  0.4  1.0  2.6  0.8 -1.6  0.9  1.9 -1.4 -0.5  2.6
1965  -1.05  0.83    1   0.7  1.0  2.6  2.6  0.8  0.4  1.2  0.3  0.3 -0.1  3.0 -2.8
1964  -0.43 -0.62    1   1.2 -5.3 -1.8  0.2 -2.4 -0.2  1.8  0.1 -3.3  2.1 -0.9  1.0
1963  -0.78  0.35    1   0.5  4.3  1.3 -3.4  2.4 -0.3 -2.3  0.5  1.7  0.7 -0.7 -0.5
1962  -0.38 -0.39    1  -4.3 -1.8 -0.8  1.8 -0.5 -2.7 -0.5 -2.8  2.7  0.1  1.8  2.3
1961  -0.12 -0.27    1   1.4  1.4 -1.6 -0.1  0.2  0.8 -1.2  2.9 -2.9 -0.9 -1.3 -1.9
1960  -0.52  0.40    1  -1.2  0.7  1.5 -1.5  0.3  0.8  0.1 -0.3  2.3 -0.6  0.5  2.2
1959   0.85 -1.37    1   0.5 -4.1  1.3  1.3 -4.3 -0.3  0.0 -3.0 -0.9 -1.3 -1.1 -4.5
1958  -0.82  1.67    1   3.7  2.3 -1.7 -0.3  1.7 -0.1  2.9  4.3 -1.9  3.8  2.7  2.6
1957  -1.11  0.29    1  -4.3  3.5  1.3 -0.7  0.3  2.4 -2.0 -0.9  1.9 -0.1 -0.2  2.3
1956  -1.09 -0.02    1   2.7 -0.2 -0.1  3.6  1.7 -2.0  2.5 -1.4  0.3 -2.2 -1.2 -3.9
1955  -0.63 -0.46    1  -1.3 -3.7 -0.1 -3.7 -2.3  2.9 -1.2  2.7  1.1  0.8 -1.7  1.0
1954  -0.73  0.10    1  -1.6  1.7  0.0  1.7  3.9  0.9  0.2 -1.7 -3.4  0.8 -1.5  0.2
1953  -0.73  0.00    1   3.6  1.0  1.2 -1.8 -2.8 -1.1 -2.1 -0.7 -0.1 -3.4  5.6  0.6
1952  -0.41 -0.32    1  -1.5 -1.5 -0.5 -1.3  0.4 -3.4  1.0  0.6  0.3  3.5 -3.3  1.8
1951  -0.47  0.06    1   2.9  1.3 -0.6  3.0  0.1  1.7 -0.1 -1.3  1.4 -1.1 -1.4 -5.2
1950  -0.78  0.32    1   1.0  0.5 -1.2 -3.2 -1.5 -0.7  0.9  1.6 -0.9  2.1  0.9  4.3
1949  -1.40  0.62    1  -4.9  0.3  1.7  5.2  3.0  0.2  1.5  0.6  0.8 -3.5  1.8  0.7
1948   0.22 -1.62    1   1.6 -4.0 -3.7 -4.4 -4.6  1.6 -0.6 -0.3 -2.1  0.0 -0.7 -2.2
1947  -0.17  0.38    1  -0.8  1.4  2.4  0.9  1.0 -0.1 -1.8 -2.8  1.7  3.5  0.1 -0.9
1946   0.74 -0.91    1  -1.5 -0.5  0.4  0.5  0.4 -2.0 -1.5 -0.2 -0.8 -4.5 -1.1 -0.1
1945   0.64  0.10    1   0.1  1.2 -1.3  0.6 -0.1  1.0  1.2  1.2 -1.3 -1.1  0.7 -1.0
1944   0.88 -0.24    1   2.6 -3.5 -1.2 -2.9  1.0  0.7 -0.3  1.1 -2.1  2.1 -1.9  1.5
1943   0.79  0.09    1  -3.5  3.5  1.2  1.5  1.0 -0.7 -0.6 -2.9  2.2 -1.3  0.7  0.0
1942   0.98 -0.19    1  -1.3 -3.9 -2.6 -0.1 -2.0 -0.1 -0.3  2.5  3.9  2.9 -1.0 -0.3
1941   1.42 -0.44    1   0.5  2.3  0.8 -0.6 -1.7 -3.5  1.7 -2.3 -1.9 -1.9  2.0 -0.7
1940   0.93  0.49    1   3.4  3.1  0.5 -1.3  1.6  3.2 -1.3  0.0 -1.0  0.7 -1.7 -1.3
1939   0.37  0.57    1  -0.4 -1.3  2.6  2.0  0.1 -1.9 -1.1  2.6 -0.7 -0.3  2.9  2.3
1938  -0.17  0.53    1   5.4  0.8 -1.9  1.3  0.0  0.8  1.6  0.0  1.8 -0.1 -3.1 -0.2
1937   0.63 -0.80    1  -6.9 -1.5  0.6 -2.5 -0.6  0.4  1.1 -1.0 -0.5 -1.5  1.2  1.6
1936  -0.05  0.68    1   1.8 -1.6  2.0  1.6  1.6  0.0  0.5 -0.5 -1.6  3.5  2.9 -2.0
1935   1.77 -1.82    1  -2.6 -1.1 -6.8 -3.0 -2.4  0.3 -0.8 -0.8  1.4 -1.9 -3.9 -0.2
1934  -0.24  2.01    1   4.4  5.2  4.7  2.4  5.1  0.2 -0.8  0.4  2.5 -1.4  1.7 -0.3
1933  -0.13 -0.11    1  -1.0 -1.9 -1.0  1.5 -2.1 -1.2  2.4  1.9 -3.8  1.8 -1.4  3.5
1932   0.92 -1.06    1  -3.0 -1.9 -0.1 -2.8 -4.2  1.1 -3.5 -1.5  2.2  0.1  3.0 -2.1
1931   0.10  0.83    1   4.6 -0.5 -0.7  0.4  4.8  0.4  1.9  0.3  0.0  0.1 -1.9  0.5
1930  -0.18  0.27    1  -1.4  3.6  2.2  3.7 -1.8  0.2  0.0  0.0  0.3 -0.8 -0.6 -2.1
1929   0.39 -0.57    1  -3.0 -3.0 -1.8 -1.3 -1.9 -0.1 -0.2  0.8 -1.1  1.7  0.4  2.7
1928   0.07  0.32    1  -0.2  0.4  2.3 -0.1  3.5 -1.4 -0.2 -0.5  2.1 -1.2 -1.5  0.6
1927   1.62 -1.55    1   1.3 -1.8 -3.6 -4.8 -2.1 -2.2 -0.8 -0.1 -0.5 -1.6 -1.2 -1.2
1926   0.46  1.17    1  -1.1  1.2  1.7  3.0 -0.6  2.4 -0.5  0.5  1.8  2.7  2.9  0.0
1925   0.37  0.08    1   1.4  0.2  3.0 -0.4 -1.6 -0.7  2.9  0.4 -4.2 -0.9 -0.7  1.6
1924   0.07  0.31    1   0.3  1.8 -1.7  1.6  3.0  2.5 -1.8 -1.4 -0.2  0.9 -0.7 -0.6
1923  -0.40  0.47    1   2.4  1.3  1.3  2.0 -0.2 -2.4 -1.2  1.4  0.0 -1.1  2.8 -0.7
1922   0.23 -0.63    1  -2.4 -2.0 -2.3 -1.5  1.5  0.4  2.0 -0.8  2.8 -1.4 -3.6 -0.3
1921  -0.21  0.44    1  -0.6  0.6  2.2  1.0 -0.3  1.0  0.0 -1.4 -0.7  3.2  1.7 -1.4
1920  -0.34  0.13    1  -1.0  0.8 -0.9 -2.5 -1.4  0.6 -1.7  0.8  0.0  1.1  2.9  2.9
1919   0.27 -0.62    1   0.2 -1.0 -0.5  0.0  1.9 -3.9  1.9  1.3 -2.4 -3.2 -0.9 -0.8
1918  -0.35  0.63    1   3.3  2.1  4.2  2.3  0.7  3.4 -2.0 -1.3  1.3 -1.3 -3.1 -2.1
1917  -0.90  0.55    1   0.9 -5.2 -4.9 -1.4  0.3  0.4  4.3  1.5  0.7  3.2  2.7  4.1
1916   0.22 -1.12    1  -4.7  2.5 -0.9 -1.2 -0.5  0.0 -2.1 -2.1  1.2 -1.4 -0.6 -3.7
1915  -0.03  0.26    1  -0.5 -0.3 -0.8  0.8 -3.0  0.5 -1.3  2.6  1.1  0.2  0.6  3.2
1914  -0.75  0.72    1   5.4  1.8  3.3  2.0  1.0  0.1  2.4 -1.4 -2.5  0.5 -1.7 -2.3
1913  -0.42 -0.33    1  -4.6 -2.9  0.8  0.8  0.8 -1.0 -1.4  0.8  0.6  1.3  0.2  0.6
1912  -0.58  0.16    1   3.0  4.9 -3.9 -1.8  0.8  0.2 -2.8 -0.6  2.2 -1.9  0.9  0.9
1911   0.80 -1.38    1  -1.0 -2.3 -0.3 -3.4 -4.1  0.4  0.2 -0.1 -2.0 -1.2 -0.6 -2.1
1910   0.21  0.59    1  -2.1 -0.8  2.8  2.6  3.8 -1.2  3.6 -0.4 -2.0  0.5 -0.5  0.8
1909   0.26 -0.05    1  -0.7  2.8  0.6 -1.0  2.0  1.4 -4.5 -0.5  0.4  0.8 -1.3 -0.6
1908   0.61 -0.35    1   2.4 -6.3  1.3 -0.1 -3.5 -0.3  2.4  2.5  1.5 -3.5  1.7 -2.3
1907   0.97 -0.37    1  -0.8  0.8 -1.5 -0.3  1.1  1.0 -3.2 -3.2 -0.8  2.3 -0.2  0.4
1906   0.82  0.16    1  -1.8  0.7 -2.7  0.8  0.9 -0.9  1.3  1.2 -0.3  1.0  0.1  1.6
1905   0.41  0.41    1   3.1  2.7  3.6  0.2 -2.8 -1.1  3.1 -0.5 -0.1 -1.0 -2.7  0.4
1904  -0.37  0.77    1  -0.9  2.1 -1.1  2.2  0.8 -0.1  1.7  2.1  1.5 -0.6  1.6  0.0
1903  -0.27 -0.10    1   0.2 -4.6  0.9 -0.6  1.5  1.3 -0.1 -1.1 -1.4  0.9  2.1 -0.3
1902  -0.02 -0.25    1   0.8  1.9 -1.0  0.8 -1.0 -0.4 -0.8 -2.3  1.9 -1.2 -2.1  0.4
1901   0.52 -0.53    1  -3.9 -0.3 -3.2 -1.3 -0.2 -1.8 -1.1  4.2  0.3  3.1 -0.6 -1.6
1900  -0.49  1.01    1   2.2  1.9  4.1  0.3  3.0  2.0 -0.2  1.0 -3.2  0.6 -1.2  1.6
1899  -0.11 -0.38    1   2.4 -3.5  0.5 -1.9 -2.2 -0.4  0.0 -4.7  1.4 -0.3  4.2 -0.1
1898  -0.12  0.01    1  -0.9  2.1  2.2 -0.4 -3.5  0.7  1.3 -0.2  1.0 -1.2 -0.2 -0.8
1897   0.14 -0.26    1  -2.9 -1.3 -4.4  4.3  5.2 -1.0 -3.6  1.8 -0.1 -1.1  1.1 -1.1
1896  -0.47  0.61    1   2.5  0.0  0.9 -3.2 -0.8 -0.3  2.9  0.9  1.2 -0.8  0.7  3.3
1895  -0.44 -0.02    1  -0.2  4.1  0.2  1.1 -1.3  2.1 -0.5 -1.8 -0.8  1.0 -3.8 -0.4
1894  -1.67  1.23    1   3.2 -1.1  0.7  2.4  1.3  0.5  1.7  1.1  1.1  3.0  1.5 -0.7
1893  -0.21 -1.46    1  -3.3 -2.6 -2.2 -0.4 -1.2 -1.1 -0.1 -0.2 -2.4 -2.5 -0.7 -0.8
1892  -0.28  0.08    1  -0.8  2.9  1.2 -2.6 -0.9  1.1 -1.0 -1.6  1.5 -0.2 -0.6  1.9
1891  -0.59  0.31    1   5.2  0.4  1.8  0.5 -0.6 -1.2 -0.1  1.2 -2.5  1.2 -0.2 -2.0
1890   1.77 -2.37    1  -6.9 -4.2 -5.3 -4.4 -0.7 -5.3 -3.7  1.0  0.5 -2.7  1.5  1.8
1889   1.77  0.00    1   0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0
For Country Code 425725970070

Mostly I’ve passed in country codes, but individual station IDs can be used too. Maybe I need to change that “For Country Code” to “For Stations in” … Hopefully I got it right from squinting at the fuzzy number on the graph in your link ….
It would be very interesting to run this on more odd cases. So much to do…
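For anyone wanting to sanity-check the table’s bookkeeping, here is my reading of the two columns in a few lines of Python (a sketch of the arithmetic as described in the comment, not chiefio’s actual code): dT/yr is the year-over-year change computed past-to-present, and the accumulated dT is that change totted up present-to-past, so the “baseline” zero sits at the most recent year.

```python
# Toy yearly mean temperatures, oldest first (invented numbers).
temps = [10.0, 10.4, 10.1, 10.9, 10.6]

# dT/yr: change from the previous year, computed past-to-present
# (the first year has no predecessor, so it gets 0).
dt_yr = [0.0] + [b - a for a, b in zip(temps, temps[1:])]

# Accumulated dT, run present-to-past: the newest year's entry is minus its
# own dT/yr, and each earlier year subtracts its dT/yr from the running
# total, which puts the "baseline start of time" at now.
acc, total = [], 0.0
for d in reversed(dt_yr):
    total -= d
    acc.append(round(total, 2))   # newest-to-oldest, like the table above

print(acc)   # [0.3, -0.5, -0.2, -0.6, -0.6]
```

On that reading, a station with no real change shows accumulated dT hovering near zero, which is what the Ashland table above appears to show.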

Pamela Gray
March 8, 2010 9:31 pm

Ashland’s temps are not doing anything for one very good reason. Given that it is Ashland we are talking about, they don’t feel like doing anything ;>) cough cough.

March 8, 2010 9:40 pm

Re: Alex Heyworth (Mar 8 15:14),
Thanks:
“WRT your second post, I would be convinced there was no issue IF I thought what GISS does in calculating anomalies was as simple as your example. As it is, I am about 3/4 convinced, but would like to see what Chiefio comes up with.”
A few points: CCC have the ability to do this test with GISTEMP and their reimplemented version (which has been regression tested). I believe they have already done this. (carrot, can you fetch Nick Barnes?)
Lucia has implemented the Hansen method from A) the text and B) checking the code to make sure she implemented it correctly. Zeke has implemented a similar method, benchmarked against GISTEMP. Grant Foster has also done a similar approach. ChiefIo also did a limited test with GISTEMP; the difference was in the 1/100s place.
Like you I think the best test of the GISS method is to run the GISS code.
Failing that, you have a variety of approaches to the problem:
1) write code from scratch that matches the paper and the code (Lucia)
2) examine the math described in the paper (Nick Stokes at first)
3) toy examples of parts of the problem (see mine here and Lucia’s first toy)
4) independent solution with reference to the problem (Zeke & Foster)
5) independent database and approach (Spencer)
All these approaches, and the limited example run by the chief, show NO significant bias.
Let me explain how people could approach this problem from an even more conceptual standpoint. I think it helps frame the issue. Look at UAH from 1979-2010. You will see a trend; let’s say it’s about .13C/decade. If 1979 is zero then 2010 is .4C (trend-wise). If dropping thermometers on the land (1/3 of the total sample) lowered the land temps, you would see a departure from UAH only if the drop was huge. Let’s just say the sea was .4C and the land was .4C.
If dropping thermometers dropped the land to .2C, the total average would drop to .33C. You would see that (.11C per decade). If the sea was .4C and the land was .4C, and dropping thermometers reduced the land to .3C (a 25% cut), you see .366C, or a trend of .12C/decade. Let’s suppose instead that dropping thermometers hit the record by 10%. That’s a big error, right? Then the land would show .36 and the sea would still show .4. The average? About .39C.
Very simply: it takes really big errors in the land record to move the overall average (please don’t argue about the SST record, it’s not germane). Now this doesn’t mean that the land record should NOT be examined for improvements. But don’t expect to find much.
There are at MOST a few 1/10’s of a degree on the table (from 1979 on).
McKitrick and Michaels estimate perhaps .3C total from 1979 to present.
In light of UAH that seems a bit much to expect.
So when people tell me they
More interesting is the record prior to 1979. More interesting is the phenomenon Spencer described in moving from zero population to small rural. Think about the early record. If you are looking for UHI, and the Spencer phenomenon is right (UHI is most visible in the move from very low population to slightly bigger), then finding UHI in the post-1940 record is going to be increasingly difficult: signal thresholding. Finding it in the early record? Neat problem.
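The back-of-the-envelope blend a few paragraphs up fits in a few lines of Python (the 1/3 land weighting and the .4C figures are the comment’s own round numbers, nothing official):

```python
# Global mean as a land/sea blend, with land weighted at ~1/3 of the sample.
def global_mean(land, sea, land_frac=1.0 / 3.0):
    return land_frac * land + (1.0 - land_frac) * sea

sea = 0.4
print(round(global_mean(0.40, sea), 2))  # 0.4  : no land error
print(round(global_mean(0.20, sea), 2))  # 0.33 : land trend cut in half
print(round(global_mean(0.30, sea), 2))  # 0.37 : land trend cut by 25%
print(round(global_mean(0.36, sea), 2))  # 0.39 : land record biased by 10%
```

Even cutting the entire land trend in half only moves the blended global figure from .4 to .33, which is the comment’s point: a land-only error shows up in the global average at roughly one third of its size.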

"Popping a Quiff"
March 8, 2010 10:07 pm

so Nick,
I’ll make it easy for you, something you want to talk about:
is the anomaly for the dropped stations the same as for those retained?
I won’t bother to press you about the actual temps for both.

E.M.Smith
Editor
March 8, 2010 10:24 pm

Keith W. (14:12:13) : adjusted based upon the temperatures from any other stations within 1200 kilometers. This means that the gridcell containing Boston, Massachusetts, is affected by the temperature of the gridcell containing Atlanta, Georgia. While there would be a mitigating factor if there were temperatures from more Northernly gridcells, the higher latitude sites are decreasing.
BINGO!
There are some added bits of finesse to consider, but that is the basic issue and, IMHO, why GIStemp diverges from a “pure anomaly” result and why all the Hypothetical Cow “proofs” prove nothing. The Anomaly is a smoke screen behind which is hidden all the “other stuff” that STATION BIAS can influence. And with the Grid/box anomaly process coming AFTER all those other things, it can not “fix it”. No matter how hypothetically proven…
So we have, oh, a station in the high mountains 1000 km north of Boston that gets dropped. Now a missing month of data (or in some cases, a missing year…) gets “filled in” based on a Carolina beach. But in the baseline the cold mountain is used, and in the present the Carolina beach. “No Problem”, as The Reference Station Method will “fix it up”; but we know from the FOIA emails that Gavin, at least, recognizes that bias can leak through…
So let’s say that there was a Very Cold Year in the baseline (it IS planted in a “swoon”) and that mountain was cold by 4 degrees compared to the average (high cold places DO have more volatility…), but now the data come from Carolina at a nice water-moderated beach. You’re not going to get a -4 there; you’d be lucky to get a -2 even on a similarly cold day / weather pattern… (Note: I’m saying cold compared to the average offset that GIStemp has computed in the Reference Station Method.)
So we take these two “Boston” temperatures and find a -4 delta vs a -2 delta and we say “Boston” has warmed by +2 C anomaly. It doesn’t take many of these to get the average of them to be +1/2 C and discover “global warming”.
That is, IMHO, “how it works”.
Because the in-fill and the UHI adjustment and the homogenizing all happen before the GRID / box anomaly step, STATION BIAS can cause errors in the TEMPERATURE DATA (the errors we see in those bizarre GISS temperature graphs folks keep posting, like Ashland above) that then make it into the anomaly maps (which are produced in a step AFTER those biased data / temperature graphs, like the Ashland map, are made).
So I’m sorry, but the baseline DOES matter and the STATION BIAS does matter, and the station dropouts DO matter. Because they all feed into this error forming process. Prior to the grid / box anomaly production.
The ‘bits of finesse’:
As it does the UHI ‘correction’ it starts close and looks ever further out for a Reference Station to use. As you get further away, the correction gets worse. So while just down the coast from Pisa might work well, on the slopes of the German Alps works much less well. As stations are dropped, what UHI correction is done will become ever more dodgy due to looking ever further away for a ‘reference’.
Station drops matter.
A station with ‘too few rural neighbors’ will not get a UHI correction at all. So as the rural stations are dropped, there are an ever greater number of urban stations that have their temperatures simply ‘passed through’ unchanged. (Then a later bit of code drops them).
Station drops matter.
A USHCN station (with all those lovely adjustments in it) is “unadjusted” in STEP0 by comparing it with the GHCN record for the same station. If there is NO such station, the USHCN data is passed through AS IS (to be further ‘adjusted’ in the later GIStemp steps that are now “putting back in” the adjustments that were never taken out… a ‘double dip’). As GHCN stations plunged to just 134 in the USA, more USHCN stations are passed through “AS IS” and double dip adjusted.
Station drops matter.
A station will start looking for “nearby” stations for “in-fill” data via the reference station method. It starts nearby and circles out to 1000 km until it has “enough”. As stations are dropped, it must look ever further to find a reference station. The further away a station is, the less correctly it will provide fill-in and the more likely you have a bogosity (the more likely it is to be in a divergent micro-climate zone).
Station drops matter.
I could go on, but I won’t.
The point: The actual code is NOT a hypothetical model and it does do things that depend on the actual number of stations available to it. As that number drops, those processes produce ever more broken results. Some of these have nothing to do with the “peer reviewed” papers that purport to describe what is happening, but have a lot to do with programmer choices. (Such as that “just pass the USHCN through unchanged if the GHCN is missing”… I’m sure that was not part of the paper describing how to ‘unadjust’ USHCN vs GHCN… )
So I’m sorry, but Station drops matter and station bias matters.
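The Boston arithmetic in the middle of this comment can be spelled out directly (the -4/-2 departures and the one-cell-in-four ratio are the comment’s illustrative numbers, not measurements):

```python
# Baseline epoch: the cold-snap year registers at the high-volatility
# mountain station. Present epoch: the same weather pattern is infilled
# from a water-moderated beach station instead.
baseline_departure = -4.0   # mountain station, relative to its RSM offset
present_departure  = -2.0   # beach infill, relative to its RSM offset

spurious_anomaly = present_departure - baseline_departure
print(spurious_anomaly)          # 2.0 C of "warming" with no actual warming

# A handful of such cells is enough to shift the average: one affected
# cell out of four yields the +1/2 C figure mentioned above.
cells = [spurious_anomaly] + [0.0] * 3
print(sum(cells) / len(cells))   # 0.5
```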

Alex Heyworth
March 8, 2010 10:38 pm

Carrot eater and Steve Mosher, thanks for your responses. CE, I was aware of the basics of the RSM from reading (so far) the first half or so of Hansen 1997. If all subgrids only had two stations to worry about, the method would be uncontroversial. It is what happens as a result of reiterating the process dozens of times that is more of an issue to my mind.
Steve, I am not overly concerned that there is likely to be a huge error in the global record to date as a result of the reduction of station numbers. As a number of commenters have pointed out, any resulting error is likely to be a reduction of the actual global trend, not an increase.
Where it is a big issue is in regional variations from the global trend. This is important from the POV of assessing the actual impacts of GW. We are a long way from having any convincing picture of what the regional impacts will be.

E.M.Smith
Editor
March 8, 2010 10:43 pm

@ Anthony Watts:
One heck of a lot flatter than the GIStemp results!
And a heck of a lot closer to the real data, if I do say so myself.
Thanks, many many thanks. It’s always nice to have a tool tested and shown to produce good results. 😉
If you have any other stations, or groups of stations, where you would like such data for comparison, just send me an email. It’s about a minute a report to run them. Oh, and the code now actually produces a file with “Comma Separated Values” for Year, dT, dT/year, and Count. That makes it a lot easier to suck into a graphing program… I don’t include the monthly data in the csv file as they are mostly for my own R&D interests.
On my infinite “to do” list is to use it to look at regions where GISS have rosy patches on their Anomaly Map and see what is actually there 😉
E.M.Smith

steven mosher
March 8, 2010 11:10 pm

Many of us ex-programmers share a common approach: we want to see what happens to the numbers when GISTEMP is run. Other approaches (mathematical arguments, back-of-the-envelope theorizing) don’t cut it for us. We want to know what happens in GISTEMP.
It made sense to me to ask for this, so a while back I made this request:
http://chiefio.wordpress.com/2010/01/27/temperatures-now-compared-to-maintained-ghcn/#comment-2969
Run GISTEMP with two datasets.
The dataset with the dropout.
A dataset containing ONLY those stations that still remain.
This procedure should give us a solid idea of how dropping or adding stations matters. The only time it matters is when the number of stations becomes very small, at the beginning of the record.
As you can see from the comments above, EM also had this idea but could not make it work.
CCC was able to make this test work, and did it (CCC have GISTEMP running as well). For those of you who know Python and would like to mess around with GISTEMP, they have a great project.
Does station drop matter to the most important question? The most important question is not how an individual sub-box may get impacted (there are 8000 of them). If you look, you will find sub-boxes where the dropout can have an effect (especially if the trends differ between stations). The question is this: what happens to the global average? Does it change significantly?
Here is a nice link:
http://clearclimatecode.org/the-1990s-station-dropout-does-not-have-a-warming-effect/
You will see what the “difference” is. And remember this is for the land. The land is 1/3 of the total. If you want to change the GLOBAL average you have to move the land numbers by a lot.
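The shape of that two-dataset test can be sketched without GISTEMP itself, using a plain common-anomaly average over invented stations (everything here, the counts, shared trend, noise level, and baseline window, is assumed purely for illustration):

```python
import random

random.seed(0)
years = list(range(1950, 2010))

def make_station(trend_per_decade, offset):
    # invented station: shared linear trend plus weather noise
    return {y: offset + trend_per_decade * (y - 1950) / 10.0
               + random.gauss(0.0, 0.3) for y in years}

stations = [make_station(0.15, random.uniform(-5, 25)) for _ in range(50)]
retained = stations[:20]            # pretend the other 30 were dropped

def mean_anomaly(stns):
    # anomaly of each station vs. its own 1951-1980 mean, then average
    per_station = []
    for s in stns:
        base = sum(s[y] for y in range(1951, 1981)) / 30.0
        per_station.append({y: s[y] - base for y in years})
    return {y: sum(a[y] for a in per_station) / len(per_station)
            for y in years}

full, subset = mean_anomaly(stations), mean_anomaly(retained)
worst = max(abs(full[y] - subset[y]) for y in years)
print(round(worst, 2))   # stays small: the subset tracks the full network
```

Because every invented station shares the same underlying trend, the retained subset reproduces the full-network anomaly; the interesting case is when the dropped stations carry different trends, which is exactly what running GISTEMP both ways would expose.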

steven mosher
March 8, 2010 11:12 pm

Anthony Watts (23:08:26) :
Hi Anthony. Nick Stokes has implemented a process that will let you see how a station is adjusted, and he output a list of stations for me.
Have a look.

steven mosher
March 8, 2010 11:16 pm

Paul Daniel Ash (18:54:01) :
Yes, there is a disconnect in how two different communities talk about “significant difference”.

steven mosher
March 8, 2010 11:24 pm

Jan Pompe (16:28:40) :
steven mosher (13:55:32) :
“What do you think Nick?”
I’m not really sure that I care much for the opinion of a mathematician
******************
That’s ok. You know Nick and I disagree about plenty. But he has a blog, he writes code, and he shares it. I can see what he does mathematically. I don’t always agree with him, but his view is one that I would consider. You see, since he shows his math, his views on other things don’t matter to me.

steven mosher
March 8, 2010 11:28 pm

Sarnac (07:01:02) :
Simple airport temperature anomaly test …
That test was actually performed, to show the impact of contrails. We discussed this at CA some time ago, probably on unthreaded. From Wikipedia:
The grounding of planes for three days in the United States after September 11, 2001 provided a rare opportunity for scientists to study the effects of contrails on climate forcing. Measurements showed that without contrails, the local diurnal temperature range (difference of day and night temperatures) was about 1 degree Celsius higher than immediately before;[6] however, it has also been suggested that this was due to unusually clear weather during the period.[7]
Condensation trails have been suspect of causing “regional-scale surface temperature” changes for some time.[8][9] Researcher David J. Travis, an atmospheric scientist at the University of Wisconsin-Whitewater, has published and spoken on the measurable impacts of contrails on climate change in the science journal Nature and at the American Meteorological Society 10th Annual conference in Portland, Oregon. The effect of the change in aircraft contrail formation on the 3 days after the 11th was observed in surface temperature change, measured across over 4,000 reporting stations in the continental United States[8]. Travis’ research documented an “anomalous increase in the average diurnal temperature change”.[8] The diurnal temperature change (DTR) is the difference in the day’s highs and lows at any weather reporting station.[10] Travis observed a 1.8 degree Celsius departure from the two adjacent three-day periods to the 11th-14th.[8]. This increase was the largest recorded in 30 years, more than “2 standard deviations away from the mean DTR”.[8]

E.M.Smith
Editor
March 8, 2010 11:33 pm

anna v (21:08:02) : I have come to the conclusion that the basic problem in the status of climate calculations is that the people doing them live in a linear world. Unfortunately the physics behind climate is highly nonlinear.
All statements saying that “anomalies reflect trends and correct biases” depend on there being one mechanism affecting the average temperature curve in each location, and on that being linear in the relevant variables.

Anna, you are within a hair’s width of what I’ve figured out. I cast it as a specific, but you have the generalization down cold. Bravo.
And now I’ll be quiet on that point…

March 9, 2010 12:49 am

E.M.Smith (23:33:18) :
anna v (21:08:02) :
Ah haa! Nice one!

March 9, 2010 12:56 am

E M Smith
Exactly the same linear afflictions affect those calculating historic sea levels.
Reading the latter part of Chapter 5 of AR4 is a lesson in how many ways they can find to say “We have no idea what is happening, so we have just extrapolated the already thin data using a straight line.”
Tonyb

Editor
March 9, 2010 1:02 am

E.M.Smith (22:24:26) :
A station with ‘too few rural neighbors’ will not get a UHI correction at all. So as the rural stations are dropped, there are an ever greater number of urban stations that have their temperatures simply ‘passed through’ unchanged.
As I understand it, an urban station with no rural neighbours will not “pass through” unchanged but will simply not be used. There are also many instances of truncation of urban stations, where the urban record is long but the rural record adjusting it is short. The urban record is truncated to the date from which it can be corrected by the rural neighbour(s).

steven mosher
March 9, 2010 1:18 am

Alex Heyworth (22:38:10) :
On the regional trend, yes.
I came up with another way of demonstrating that there is no REAL problem here, basically by comparing UAH LAND with GISS LAND. It’s interesting, but I’ll leave it for another day.

Jan Pompe
March 9, 2010 2:00 am

steven mosher (23:24:23) :
perhaps I should have put a smiley somewhere there. I’m pretty sure he didn’t mean it quite the way it came out.

Tony Rogers
March 9, 2010 3:35 am

One of the most interesting figures in Hansen et al 2001 (http://pubs.giss.nasa.gov/docs/2001/2001_Hansen_etal.pdf) is the one titled “1900-1999 U.S. Temperature Change (deg C) – (A) USHCN Data”. It shows a matrix of images with unlit (rural), peri-urban and urban stations across the top and, down the side, the raw data, time of observation adjustment, Max/Min & SHAP data adjustment, and data fill and UHI adjustment.
Anyway, the point is that the raw station data for rural stations shows a cooling of -0.05 deg C over the century. It is only when you add in the adjustments that you get a warming of +0.39 deg C for rural stations. I am not saying that the raw data is correct in itself, but they are placing a lot of confidence in the accuracy of these adjustments.
It is my opinion that, if you wish to try to measure changes in the heat content of the atmosphere (which is what global warming actually is!) then you should use only data that is not locally contaminated by man’s influence. Consequently, you should ONLY use rural stations. Pretending that you can adjust for UHI and use contaminated stations is just fooling yourself.

carrot eater
March 9, 2010 4:05 am

Alex Heyworth (22:38:10) :
“Carrot eater and Steve Mosher, thanks for your responses. CE, I was aware of the basics of the RSM from reading (so far) the first half or so of Hansen 1997. If all subgrids only had two stations to worry about, the method would be uncontroversial. It is what happens as a result of reiterating the process dozens of times that is more of an issue to my mind.”
Try it out, and see what it does. If you repeat my exercise and give station B a trend of some sort, you’ll see some odd things happen; of course you lose information about that trend after the station drop, but you also get a weird little jump at that point. In that case, having multiple stations helps water down these problems.
The other unsatisfying thing about the RSM is that the order in which you add the stations matters to some extent, when records are of different lengths. That’s what Tamino tried to address with his modified RSM.
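A toy numerical illustration of that jump (my own made-up numbers, and plain averaging of absolute temperatures rather than the actual RSM, but it shows the same kind of step when a warm, trending station drops out):

```python
# Station A is flat; station B runs warmer, carries a (deliberately
# exaggerated) +1/yr trend, and stops reporting after index 9.
station_a = [10.0] * 20
station_b = [12.0 + i for i in range(20)]

def combined_mean(i):
    """Mean of whatever stations still report in year i."""
    readings = [station_a[i]]
    if i <= 9:                      # B drops out after this point
        readings.append(station_b[i])
    return sum(readings) / len(readings)

series = [combined_mean(i) for i in range(20)]
print(series[8], series[9], series[10])  # 15.0 15.5 10.0 -- a spurious step down
```

Anomaly methods avoid the step that comes from the level difference, but as carrot eater says, the information about B’s trend after the drop is simply gone.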

March 9, 2010 4:23 am

Re: steven mosher (Mar 8 13:55),
Steve,
Sorry I missed your query at the end there. Jan brought it to my attention 😉
First a comment on non-problem 5
5. The notion that “averaging” results in the loss of data.
Well, of course it does diminish information content. The neat thing about the GISS approach to anomaly is that they take advantage of this. Getting a correct anomaly for all stations individually is hard, because of missing values in the agreed period. But on gridding, you lose the information of the individual stations anyway. Therefore might as well calculate an anomaly for the smallest unit that remains, the grid cell (or its centre point). Then with no loss you get better coverage in the base period.
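A minimal sketch of that grid-cell-first approach (toy numbers and a single cell; the real GISTEMP gridding is far more involved):

```python
from statistics import mean

# Two stations share one grid cell; each has gaps in the base period
# (years 1-5 here), so neither could be anomalised cleanly on its own,
# but the cell mean can be.
years = range(1, 11)
obs = {
    "A": {y: 10.0 for y in years if y >= 3},   # missing years 1-2 of the base
    "B": {y: 12.0 for y in years if y <= 8},   # missing the end of the record
}

cell = {}                                       # year -> readings present
for station in obs.values():
    for y, t in station.items():
        cell.setdefault(y, []).append(t)

cell_series = {y: mean(v) for y, v in cell.items()}
base = mean(cell_series[y] for y in years if y <= 5)   # cell-level base mean
anomalies = {y: cell_series[y] - base for y in years}
```

Averaging into the cell first, then anomalising the cell series, is how you get base-period coverage without throwing away stations that have holes in the base years.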
So OK, the Open questions ( not problems, but open questions)
1. What is the provenance of the data being used.

Yes, an important question.
2. What adjustments are made and what are the exact calculations.
Yes, though fortunately with GISS this is no longer an open question. You can see the code.
3. Is there UHI contamination in the signal
And how much? And is it changing the trend (as opposed to a static bias)?
4. Does microsite bias matter? how much
Yes, but relate it to the trend. Stationary bias doesn’t matter much.
5. How should the uncertainty due to spatial coverage be computed?
I would have said estimated. But yes.
6. What is the optimal method for station combining and area averaging
( see romanM)

Yes – I haven’t got into that discussion, except to point out that no-one seems to refer to what GISS actually does. I’m not sure how much it differs from Tamino’s, but it does incorporate the monthly basis of Roman’s.

carrot eater
March 9, 2010 4:53 am

Nick Stokes (04:23:06) :
GISS uses monthly offsets, but calculates them one by one, as each station is added to the mean. So the ordering of the station can matter.
Tamino got rid of that ordering by calculating all the offsets simultaneously, but he used single offsets, not monthly. It’s unclear to me whether he knew he was diverging from GISS in that respect; he didn’t seem to have studied that particular issue much yet.
So Roman completed the loop by telling him to put the monthly offsets back in again. So now it preserves that feature of GISS, while having the innovation that Tamino was looking for in the first place.
All that said, we don’t actually know that the simultaneously calculated offsets actually work any better. In principle they should, but the improvement has not yet been demonstrated.
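To make the ordering point concrete, here is a toy version of the sequential combine (single annual offsets and invented numbers; the real GISS code works per calendar month and on gridded data):

```python
def combine(stations):
    """Combine station records (dicts of year -> temp) in the given order:
    each newcomer is shifted by its mean offset from the running series
    over their overlap, then averaged in."""
    ref = dict(stations[0])
    count = {y: 1 for y in ref}
    for st in stations[1:]:
        overlap = set(ref) & set(st)
        offset = sum(ref[y] - st[y] for y in overlap) / len(overlap)
        for y, t in st.items():
            if y in ref:
                ref[y] = (ref[y] * count[y] + t + offset) / (count[y] + 1)
                count[y] += 1
            else:
                ref[y] = t + offset
                count[y] = 1
    return ref

a = {1: 10.0, 2: 10.0, 3: 10.0}
b = {2: 12.0, 3: 12.0, 4: 12.0}
print(combine([a, b]))   # anchored to A's level: all values 10.0
print(combine([b, a]))   # anchored to B's level: all values 12.0
```

In this flat example the ordering only moves the arbitrary absolute level; with records of different lengths and different trends, the order can also change the shape slightly, which is the wrinkle the simultaneous fit removes.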

rbateman
March 9, 2010 5:11 am

E.M.Smith (22:43:21) :
Anthony Watts (23:08:26) :
Here’s a csv paste of year, high, low, (high+low)/2 for Ashland that I produced
from the raw data:
1888,67.35753425,40.42,53.88876712
1889,69.00821918,41.91780822,55.4630137
1890,64.56438356,38.24383562,51.40410959
1891,63.8739726,39.97534247,51.92465753
1892,64.1369863,39.6,51.86849315
1893,61.39726027,34.59452055,47.99589041
1894,64.05753425,39.19452055,51.6260274
1895,65.64931507,37.68493151,51.66712329
1896,65.10410959,40.26849315,52.68630137
1897,64.87123288,39.42739726,52.14931507
1898,65.3890411,38.9109589,52.15
1899,63.97260274,39.02465753,51.49863014
1900,64.70136986,39.81917808,52.26027397
1901,64.70136986,39.81917808,52.26027397
1902,64.09863014,39.58630137,51.84246575
1903,64.47671233,39.07123288,51.7739726
1904,65.57260274,40.6739726,53.12328767
1905,65.58356164,42.05205479,53.81780822
1906,65.21917808,43.34246575,54.28082192
1907,63.96438356,43.04383562,53.50410959
1908,64.4630137,42.02876712,53.24589041
1909,63.52876712,41.77808219,52.65342466
1910,65.04109589,42.6739726,53.85753425
1911,62.72876712,40.00821918,51.36849315
1912,61.80958904,41.4,51.60479452
1913,62.62465753,39.44383562,51.03424658
1914,65.44109589,39.19726027,52.31917808
1915,65.52054795,40.1260274,52.82328767
1916,64.02739726,37.49041096,50.75890411
1917,64.90410959,38.65479452,51.77945205
1918,66.6109589,39.2,52.90547945
1919,65.7369863,37.78082192,51.75890411
1920,65.77534247,38.24109589,52.00821918
1921,66.23013699,39.38356164,52.80684932
1922,65.36438356,37.96164384,51.6630137
1923,66.42191781,38.56438356,52.49315068
1924,67.93972603,38.99452055,53.46712329
1925,65.87123288,40.51506849,53.19315068
1926,68.87123288,41.64109589,55.25616438
1927,64.72328767,40.27671233,52.5
1928,66.25479452,40.05479452,53.15479452
1929,66.11506849,38.06575342,52.09041096
1930,65.7369863,39.29589041,52.51643836
1931,67.40821918,40.76986301,54.0890411
1932,64.47671233,39.77808219,52.12739726
1933,64.52054795,39.43835616,51.97945205
1934,68.48493151,42.67123288,55.57808219
1935,64.90958904,39.6630137,52.28630137
1936,66.44931507,40.63561644,53.54246575
1937,63.75068493,40.43150685,52.09109589
1938,64.54794521,41.61917808,53.08356164
1939,66.88493151,41.28767123,54.08630137
1940,67.24931507,42.75890411,55.00410959
1941,66.6109589,41.69041096,54.15068493
1942,66.85753425,40.70410959,53.78082192
1943,67.68219178,40.15342466,53.91780822
1944,67.24931507,39.90136986,53.57534247
1945,67.02465753,40.35342466,53.6890411
1946,62.96712329,41.21917808,52.09315068
1947,63.15616438,42.33150685,52.74383562
1948,60.07671233,39.63835616,49.85753425
1949,63.70958904,38.31780822,51.01369863
1950,64.03835616,39.0630137,51.55068493
1951,64.8109589,38.3260274,51.56849315
1952,63.97260274,38.20273973,51.08767123
1953,63.00547945,39.1369863,51.07123288
1954,63.98082192,38.40547945,51.19315068
1955,62.68219178,38.18082192,50.43150685
1956,62.90136986,37.93424658,50.41780822
1957,63.16438356,38.62739726,50.89589041
1958,66.51232877,41.28219178,53.89726027
1959,65.01369863,37.90958904,51.46164384
1960,65.48493151,38.90136986,52.19315068
1961,64.67945205,38.65205479,51.66575342
1962,63.85479452,38.12054795,50.98767123
1963,63.4630137,39.60547945,51.53424658
1964,63.38630137,37.89863014,50.64246575
1965,64.8,39.11506849,51.95753425
1966,66.21917808,39.9260274,53.07260274
1967,65.79452055,39.74246575,52.76849315
1968,66.1890411,40.59178082,53.39041096
1969,65.1260274,39.82191781,52.4739726
1970,66.67123288,40.77260274,53.72191781
1971,62.74520548,39.24383562,50.99452055
1972,65.22739726,39.5260274,52.37671233
1973,66.20547945,40.83835616,53.52191781
1974,65.90410959,39.91232877,52.90821918
1975,62.83287671,39.19863014,51.01575342
1976,64.07260274,38.68767123,51.38013699
1977,65.56164384,40.63013699,53.09589041
1978,64.50136986,40.76849315,52.63493151
1979,65.60273973,41.56164384,53.58219178
1980,65.10136986,40.27671233,52.6890411
1981,65.89726027,41.63150685,53.76438356
1982,63.81643836,39.98630137,51.90136986
1983,64.01643836,40.24383562,52.13013699
1984,63.8369863,38.34109589,51.0890411
1985,65.48767123,36.78630137,51.1369863
1986,67.23561644,40.03561644,53.63561644
1987,68.13150685,38.88493151,53.50821918
1988,67.52876712,38.06849315,52.79863014
1989,65.2,37.35342466,51.27671233
1990,65.98630137,38.1369863,52.06164384
1991,66.22465753,38.79178082,52.50821918
1992,69.70410959,40.58356164,55.14383562
1993,66.2,38.57534247,52.38767123
1994,67.00821918,38.01369863,52.5109589
1995,67.52876712,40.0109589,53.76986301
1996,67.24931507,39.65753425,53.45342466
1997,66.0739726,40.30410959,53.1890411
1998,66.04383562,40.01369863,53.02876712
1999,65.92876712,37.68767123,51.80821918
2000,66.10684932,38.15342466,52.13013699
2001,67.9369863,37.92054795,52.92876712
2002,67.99178082,36.2630137,52.12739726
2003,68.49041096,38.74794521,53.61917808
2004,68.02191781,38.67945205,53.35068493
2005,67.04383562,38.30958904,52.67671233
2006,67.57534247,38.35890411,52.96712329
2007,66.59452055,36.67945205,51.6369863
2008,66.57534247,36.60547945,51.59041096
2009,66.56438356,36.60547945,51.58493151
And I kept a record of any holes plugged from B-91 forms, nearest neighbor (Grant’s Pass), AMS Journal Monthly Weather Review, or preceding/following reading extrapolation.
What I would like to produce to go with my efforts is a way to assign a confidence level. For example: I fill in 2 days out of 365 using something other than an actual reading. How do I figure that?
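I don’t know of an official recipe for this, but one hedged back-of-envelope approach: estimate how wrong a typical plugged day is (say, by “filling” days whose true readings you actually have and checking the misses), then propagate that error into the annual mean:

```python
import math

def infill_uncertainty(n_filled, sigma_fill, n_days=365):
    """Extra 1-sigma uncertainty on an annual mean from n_filled estimated
    days, assuming each fill carries an independent error of sigma_fill."""
    return math.sqrt(n_filled) * sigma_fill / n_days

# e.g. 2 plugged days, each assumed good to about +/- 3 F:
print(round(infill_uncertainty(2, 3.0), 4))  # 0.0116 F on the annual mean
```

The point being: 2 estimated days out of 365 barely move the annual mean. The bigger question is whether the fill method is biased (systematically warm or cool), which this simple random-error model does not capture.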

Gail Combs
March 9, 2010 5:12 am

E.M.Smith (22:24:26) :
“A station with ‘too few rural neighbors’ will not get a UHI correction at all. So as the rural stations are dropped, there are an ever greater number of urban stations that have their temperatures simply ‘passed through’ unchanged.
vjones (01:02:47) :
“As I understand it, an urban station with no rural neighbours will not be “passed through” unchanged but will simply not be used. There are many instances of truncation of urban stations also, where the urban record is long but the rural record adjusting it is short. The urban record is truncated to the date where it can be corrected by the rural neighbour(s).”
Where is your verification that this is actually what is happening? Even the guys who do the programming find that what they THINK is happening in the program is not necessarily what IS happening. Also, WHO dropped the stations: the computer program or the human inputting the information?
There is also the problem with the definition of “rural”. The “UHI effect” can be caused “by any replacement of natural vegetation by man-made surfaces, structures and active sources of heat.” Dr. Spencer performed an analysis comparing International Hourly Surface data to population density. The study seems to indicate the effect of UHI on temperature anomalies is greatest as rural areas grow toward a population density of 1000 people per sq. km.
http://wattsupwiththat.com/2010/03/04/spencers-uhi-vs-population-project-an-update/
My other problem with station drop out is that not all stations are created equal. As Chiefio showed the Pacific without Australia and New Zealand is basically flat as a pancake.
As is Central Park in NYC
http://www.john-daly.com/stations/WestPoint-NY.gif
Washington DC as of 2000 was cooling slightly
http://tinypic.com/view.php?pic=k2ekh5&s=6
The temperature pattern in many cases is cyclical, and 30 yrs or even 100 yrs is much too short to show that cycle. So the biggest issues I have are, first, the short time period for both the baseline data and the observed data, and second, choosing the baseline at the bottom of the cycle and then comparing that to data gathered during the upswing of the cycle.
http://www.climate-movie.com/wordpress/wp-content/uploads/2008/12/temperature_adjustments1.gif
Longest running Siberian station
http://data.giss.nasa.gov/cgi-bin/gistemp/gistemp_station.py?id=222202920005&data_set=1&num_neighbors=1
I am sure the political pressure to DO SOMETHING NOW is because “they” are afraid we are going to head into a temperature downswing. That is why the chant changed from global warming to climate change. If a math student could back out the 60 yr cycle from the HadCRUT3 time series, I am sure it is not a deep dark secret except to the unwashed masses, no matter how often we hear “hottest …. ever”.
Math student’s analysis: http://dev-null.chu.cam.ac.uk/htm/soundandfury/220709-analysing_temps.htm

rbateman
March 9, 2010 5:33 am

I’ll do Medford next from raw data, just to see what GISS has been up to.

kim
March 9, 2010 6:48 am

Go look at Roman M’s blog.
==============

kim
March 9, 2010 7:14 am

Roman M’s blog is statpad.wordpress.com
You can also click to it through Jeff Id’s ‘The Air Vent’ in the sidebar.
===============

Tim Clark
March 9, 2010 11:15 am

steven mosher (23:10:55) :
Here is a nice link:
http://clearclimatecode.org/the-1990s-station-dropout-does-not-have-a-warming-effect/

Is that the graph that you are using to debate this point?
If so, IMHO then we may be debating apples and oranges (or I may be misinterpreting the debate myself).
I think E.M.’s position is;
1. the station drop-out is affecting the recent trend. The dropout station data in your graph is not displayed post ~1992. Complete the comparative analysis by continuing the data for the dropped out stations to date.
2. the dropped out station data is still being included in the determination of to-date anomalies even after being dropped out. In other words, current station anomalies are being averaged against a base period that includes the dropped out stations.

carrot eater
March 9, 2010 1:12 pm

Tim Clark (11:15:24) :
“the station drop-out is affecting the recent trend. The dropout station data in your graph is not displayed post ~1992.”
If it showed no signs of affecting the trend before 1992, then you have no strong reason to think it would greatly affect the trend after 1992. It’s possible, but you can’t say it’s likely or certain.
“Complete the comparative analysis by continuing the data for the dropped out stations to date.”
Until somebody goes and gets all that data, this will be difficult. Roy Spencer avoided this difficulty by using a different data set altogether; his entirely different data set, with no station drops, shows similar trends to anybody else’s. It sounds like NOAA is collecting some of the missing data from the individual countries as we speak, so some of it may show up this year.
“the dropped out station data is still being included in the determination of to-date anomalies even after being dropped out. In other words, current station anomalies are being averaged against a base period that includes the dropped out stations.”
The graph linked here http://clearclimatecode.org/the-1990s-station-dropout-does-not-have-a-warming-effect/
shows that this objection does not matter. On the global scale, the dropped out stations weren’t doing anything different from the current stations.

rbateman
March 9, 2010 1:27 pm

Comparing Medford, Ore to Ashland, Ore
http://www.robertb.darkhorizons.org/TempGr/Med_AshAv.GIF
Raw data.
It’s obvious that while Ashland cools at night, Medford holds in UHI.
But the real news is that the daytime highs don’t really change that much.
So much for UHI Barbecue Days in Medford.
Next to determine what time of the year is most affected, if it’s not universal across the year.

March 9, 2010 1:51 pm

Re: “Popping a Quiff” (Mar 8 22:07),
is the anomaly for the dropped stations the same as those retained?
Steven’s link shows that well. The answer is yes. Zeke has a new post looking at various breakdowns re associated factors.
And Tim , I think the CCC plot is the right one here. EMS’s claim is that the 1990’s dropout was selective. It chose stations known to perform differently. But that plot shows they weren’t performing differently.

carrot eater
March 9, 2010 2:31 pm

Nick Stokes (13:51:54) :
To be fair to EMS, I don’t know if he’s outright and explicitly claimed the dropout was selective. That claim is more clearly in the SPPI report.

Tim Clark
March 9, 2010 2:59 pm

carrot eater (13:12:19) :
Tim Clark (11:15:24) :
“the station drop-out is affecting the recent trend. The dropout station data in your graph is not displayed post ~1992.”
If it showed no signs of affecting the trend before 1992, then you have no strong reason to think it would greatly affect the trend after 1992. It’s possible, but you can’t say it’s likely or certain.

I’m not taking a position either way. But please don’t be condescending. Look at the adjustments on the GISTEMP page and you’ll notice they keep going up, up, up. So the effect of the adjustment is increasing. If the remaining stations have a greater adjustment value versus the dropped stations, then yes, I do have strong reason.
“Complete the comparative analysis by continuing the data for the dropped out stations to date.”
Until somebody goes and gets all that data, this will be difficult. Roy Spencer avoided this difficulty by using a different data set altogether; his entirely different data set, with no station drops, shows similar trends to anybody else’s. It sounds like NOAA is collecting some of the missing data from the individual countries as we speak, so some of it may show up this year.

So you are saying the data for the dropped stations post dropout is unavailable. Fair enough.
“the dropped out station data is still being included in the determination of to-date anomalies even after being dropped out. In other words, current station anomalies are being averaged against a base period that includes the dropped out stations.”
The graph linked here http://clearclimatecode.org/the-1990s-station-dropout-does-not-have-a-warming-effect/
shows that this objection does not matter. On the global scale, the dropped out stations weren’t doing anything different from the current stations.

That’s the same graph I looked at and linked to in my original posting. It does nothing to validate your contention post-dropout.

carrot eater
March 9, 2010 3:35 pm

Tim Clark (14:59:56) :
GISS only makes one adjustment, and it’s for UHI, and this is what it does:
http://clearclimatecode.org/gistemp-urban-adjustment/
“That’s the same graph I looked at and linked to in my original posting. It does nothing to validate your contention post-dropout.”
It rather puts a damper on the idea that the missing stations were intentionally made to be missing in order to introduce some sort of spurious warming trend. And again, it gives you no particular reason to think that finding those missing data and putting them back in would have a major effect.

carrot eater
March 9, 2010 3:38 pm

Tim Clark (14:59:56) :
And to clarify: if the two subsets add up to the same trends during the past, then there is nothing unfair about keeping the missing stations in the past. This directly puts away the concern expressed in
“In other words, current station anomalies are being averaged against a base period that includes the dropped out stations.”

Anticlimactic
March 9, 2010 3:57 pm

What about attaching weather stations to vehicles? I am assuming a fully automated system, possibly solar powered, and including GPS data, which either automatically transmits the data via wi-fi when they return to base, or via text or satellite links, depending on availability.
If, say, they were attached to mail vans then the urban deliveries would build up a good model of the UHI effect, and the more rural deliveries would provide an interesting picture of variability due to location.
In some of the remote areas of the world then attaching them to long distance lorries, buses or trains would provide information which could not possibly be delivered from a few scattered surface stations.
Just a thought.

rbateman
March 9, 2010 4:34 pm

carrot eater (15:38:12) :
Tim Clark (14:59:56) :
And to clarify: if the two subsets add up to the same trends during the past,
_________________________________
They most certainly do not add up to the same trends.
I just showed you a UHI influenced Medford, Ore vs a no-trend Ashland, Ore less than 20 miles apart on the map.

carrot eater
March 9, 2010 5:34 pm

rbateman (16:34:36) :
You’re telling me about two stations someplace.
Zeke, Clear Climate Code, Tamino and now Ron Broberg are showing what happens when you use all of them. All four find that if you calculate the global mean anomaly history using only the stations that continue past 1992, you get the same result (pre-1992) as when you use only the stations that dropped off by 1992.
And on top of that, it looks like both Medford and Ashland have data through to the present. Medford has data through at least 2009 in the GHCN. Ashland has data through 2009 in the USHCN. So what have they got to do with station drops, anyway? Neither were among the dropped. Maybe you’re making some other sort of point, I don’t know, but the topic here is whether the drop-off in stations around 1990 had any effect on the calculated global trends.
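The kind of check being described can be sketched in a few lines (toy data, not GHCN; the real analyses grid and area-weight, which this skips):

```python
from statistics import mean

def subset_mean_anomaly(stations, years, base_years):
    """Per-year mean anomaly over a set of station records
    (dicts of year -> temp), each anomalised against base_years."""
    out = {}
    for y in years:
        vals = [st[y] - mean(st[b] for b in base_years)
                for st in stations if y in st]
        if vals:
            out[y] = mean(vals)
    return out

years = range(1980, 1993)
base = range(1980, 1985)
# Toy stations: same 0.02/yr trend, very different absolute levels.
kept = [{y: 10.0 + 0.02 * (y - 1980) for y in years}]
dropped = [{y: 14.0 + 0.02 * (y - 1980) for y in years}]

a_kept = subset_mean_anomaly(kept, years, base)
a_dropped = subset_mean_anomaly(dropped, years, base)
# Despite the 4-degree level difference, the anomaly histories agree:
assert all(abs(a_kept[y] - a_dropped[y]) < 1e-9 for y in years)
```

If the dropped subset had a genuinely different anomaly history before 1992, a comparison like this is exactly where it would show up.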

Rattus Norvegicus
March 9, 2010 6:30 pm

A couple of points:
1) carrot eater: EMS made his claim about “colder” high latitude or high altitude stations in one of his earliest posts on this subject. I don’t feel like wading through the morass of junk on his site to get you an exact link, but you know where the site is. This does seem to be the root of the SPPI claims.
2) Tamino partitioned the two sets of sites, post 1992 and pre 1992, into two datasets and ran the analysis. They both showed essentially the same anomalies except in the earliest periods.

Amino Acids in Meteorites
March 9, 2010 9:13 pm

Nick Stokes (13:51:54) :
I’m sure it’s not exactly the same. But it doesn’t matter, since the stations retained could have been selected so that their anomaly would match the anomaly as if none had been dropped. There is room for no end of manipulation in deciding which stations to retain and which to drop. You should know this. And if you wanted to be unbiased it should matter to you. You should also be pointing out all the potential problems of dropping stations.
But the question that is vital is this:
is the temperature reading of the retained stations the same as those dropped?
Because I see only GIStemp is making claims about 2006 and 2009 that other data centers are not.

Amino Acids in Meteorites
March 9, 2010 9:15 pm

Rattus Norvegicus (18:30:21) :
So you are saying rural and mountain stations have not been dropped from GISS use?

anna v
March 9, 2010 9:16 pm

Re: Rattus Norvegicus (Mar 9 18:30),
I like the “except in the earliest periods”.

Rattus Norvegicus
March 9, 2010 9:21 pm

Amino,
There really is no end to paranoia and conspiracy theories, is there. All of the stations which do not report to GSN suddenly changed their behavior. Right. Give me a break.

Amino Acids in Meteorites
March 9, 2010 9:23 pm

Rattus Norvegicus (18:30:21) :
I see you are sticking to the playbook:
GLOBAL WARMING RELIGION RULES
1- NEVER discuss the Science—avoid talking temperature data
2- Attack the Man—“I don’t feel like wading through the morass of junk on his site”
3- Repeat the MANTRA until you feel you’ve won the argument —Tamino, anomaly, Tamino, anomaly, Tamino, anomaly, Tamino, anomaly, Tamino, anomaly

Amino Acids in Meteorites
March 9, 2010 9:33 pm

Rattus Norvegicus (21:21:02) :
I’m not interested in anomaly since anyone can play with data in that regard.
I am interested in this:
is the temperature reading of the retained stations the same as those dropped?

Rattus Norvegicus
March 9, 2010 9:33 pm

Amino,
Tamino’s analysis showed that urban stations were disproportionately dropped from the record.
And anna v, in the earliest periods the pre 1992 stations showed more warming than the post 1992 stations. No bias.

Amino Acids in Meteorites
March 9, 2010 9:34 pm

Rattus Norvegicus (21:21:02) :
It is asking a lot to tell me to trust anything passing through the hands of James Hansen.
Can you honestly tell me from looking at his environmental activism that he can be trusted to be unbiased in an environmental issue?

Amino Acids in Meteorites
March 9, 2010 9:35 pm

Rattus Norvegicus (21:33:46) :
Tamino…… huh, who exactly is ‘Tamino’?

Amino Acids in Meteorites
March 9, 2010 9:39 pm

Rattus Norvegicus (21:21:02) :
paranoia and conspiracy theories
What you call paranoia and conspiracy are really just legitimate questions that everyone should ask—especially since GIStemp looks so different than other data sets. and also because of James Hansen’s environmentalism and connections to politics. And also because of his failed prediction from 1988.
Science is supposed to ask all questions. Nothing is settled. Nothing is beyond debate.

Amino Acids in Meteorites
March 9, 2010 9:41 pm

Rattus Norvegicus (21:33:46) :
This video gives a quick review of James Hansen’s 1988 testimony. You will see it heavy laden with politics.

Rattus Norvegicus
March 9, 2010 10:14 pm

I’ll deal with a few of the issues here:
1) I really don’t feel like wading through EMS’s site. He makes some really silly accusations in his posts, things like (and this is a paraphrase) “why does GISS not use data prior to 1880?”. The answer is pretty easy: the GISS analysis begins in 1880 because that is the point where the initial analysis showed there to be adequate spatial coverage to do a valid analysis. CRU had a different opinion, but not that much different — they start in 1850. This is the sort of crap I was referring to.
2) Actually, I was discussing science. To expect a radical change in the behavior of the “dropout” stations post 1992 is silly. Yes, it might be possible, but please postulate a physical reason for it. The high latitude stations tend to show a greater warming trend than the rest of the globe; how would dropping high latitude stations result in a warming trend?
3) I am not skilled at statistics. Neither are you. However, I have been reading Tamino for several years and he strikes me as a competent statistician. He has several published papers in the literature on time series analysis, so he can at least get past peer review, something that Anthony cannot. The methods of analysis he uses are all fairly straightforward. You might learn something about statistical analysis from reading his blog, I know I have.
4) anna v.: if you looked at the charts you would see that since the 1880’s the dropped stations show slightly more warming than the post 1992 stations. Not much of a help for your position. After the 40’s or 50’s they are virtually identical to the stations which were retained after 1992.
5) GISS makes claims about 2005 because it was statistically significantly warmer than 1998. In their record 2009 was the second warmest year by a very small amount; however, that margin was not statistically significant. The reason for this difference is that GISS attempts to account for the arctic by estimating temperatures for the high arctic (north of 80) via interpolation. CRU does not, and the satellites do not measure above 82.5 north or south. Jones disagrees with Hansen’s methodology, surprise — he has his own way of analyzing the data, which does not include the high arctic.
I’m sure this won’t satisfy the hardcore…

Rattus Norvegicus
March 9, 2010 10:17 pm

Amino,
Tamino used the RAW, that is unadjusted data, for his analysis. It did not “pass through the hands of James Hansen”.

anna v
March 9, 2010 10:27 pm

Re: Rattus Norvegicus (Mar 9 21:33),
no bias?
Something that would make a trend flatter is dropped and it is not a bias?
!!!!

anna v
March 9, 2010 10:47 pm

Let me clarify:
looking at http://clearclimatecode.org/the-1990s-station-dropout-does-not-have-a-warming-effect/
The post-removal set has a hockey stick shape: flat/heat.
The pre-removal set shows cold/flat/heat.
Trends are not given in the link, so I cannot judge whether the anomaly trends are the same pre and post.
Maybe the program of removal was aimed at agreeing with the hockey stick, for all we know. Maybe it is the random result of economic constraints ( difficult stations dropped). A bias there is.
In any case as I have said several times, anomalies are removed from reality (heat content) by the non linearity of the underlying heat transport system of the planet’s atmosphere, as can be seen in the latest february anomalies.

Jan Pompe
March 10, 2010 1:14 am

anna v (22:47:46) :
Let me clarify:
looking at http://clearclimatecode.org/the-1990s-station-dropout-does-not-have-a-warming-effect/
The post removal has a hockey stick effect. flat/heat
The pre removal shows cold/flat/heat.
An interesting point Lucia makes is that using the anomaly method the two series will converge during the shared baseline and diverge as we move away from that baseline. This can be seen in the chart he publishes but only going back in time. Any claim that it will not also happen in the future is without visible means of support.
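Jan’s convergence point is easy to demonstrate with two made-up series that steadily drift apart:

```python
from statistics import mean

years = list(range(2000, 2020))
base = [y for y in years if 2005 <= y <= 2009]    # shared baseline window

# Two series warming at different rates -- they drift apart continuously.
s1 = {y: 10.0 + 0.05 * (y - 2000) for y in years}
s2 = {y: 10.0 + 0.10 * (y - 2000) for y in years}

def anomalise(s):
    """Subtract the series' own base-period mean from every year."""
    b = mean(s[y] for y in base)
    return {y: s[y] - b for y in years}

a1, a2 = anomalise(s1), anomalise(s2)
gap_in_base = mean(abs(a1[y] - a2[y]) for y in base)   # small inside the baseline
gap_at_end = abs(a1[2019] - a2[2019])                  # roughly ten times larger
```

By construction the anomalies are forced to average to the same thing inside the baseline; outside it, in either direction, the divergence reappears. So agreement within the base period is not, by itself, evidence about what happens after it.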

carrot eater
March 10, 2010 3:11 am

Amino Acids in Meteorites (21:34:51) :
Please practice what you preach. Many of these comments from you are not about science, but about attacking the man.
anna v (22:47:46) :
The whole point is that the dropped subset and surviving subset give the same results, prior to 1992. This means that before 1992 at least, dropping those stations would not bias the global results. And if that’s so, why would you think it would make a big difference afterwards?

carrot eater
March 10, 2010 3:12 am

Amino Acids in Meteorites (21:33:17) :
“is the temperature reading of the retained stations the same as those dropped?”
I don’t know how many times it’s been said now, but multiple analyses show that you get the same results from either subset of stations.

anna v
March 10, 2010 6:31 am

I have also read that each of these multiple analyses uses different but still manipulated data, not raw data.
A link was given above to a plot where one does not get the same results. How is it possible for cold/flat to have the same trend as flat/flat?

Tim Clark
March 10, 2010 7:10 am

Rattus Norvegicus (22:14:58) :
2) Actually, I was discussing science. To expect a radical change in the behavior of the “dropout” stations post 1992 is silly. Yes, it might be possible, but please postulate a physical reason for it. The high latitude stations tend to show a greater warming trend than the rest of the globe; how would dropping high latitude stations result in a warming trend?
carrot eater (15:35:52) :
GISS only makes one adjustment, and it’s for UHI, and this is what it does:
It rather puts a damper on the idea that the missing stations were intentionally made to be missing in order to introduce some sort of spurious warming trend. And again, it gives you no particular reason to think that finding those missing data and putting them back in would have a major effect.

Aren’t you forgetting TOBS? That’s the biggest single correction they make in terms of absolute value. And it escalates in a trend to-date fashion resembling the hockey stick. And the escalation really begins post-dropout. I’m not a conspiracy theorist. But the increasing trend value post-dropout is obviously greater than anything preceding. Also, you keep repeating the mantra that the pre trends are somewhat analogous and should equal the post trend ad infinitum. That presupposes your assumption that the trend is associated with a linearly (or logarithmically) increasing climate forcing and not a paradigm shift (PDO). Sound familiar?
Fill in the data for dropped stations post drop and I’ll look into it.

Amino Acids in Meteorites
March 10, 2010 7:16 am

carrot eater (03:11:01) :
I am showing who James Hansen is.
How can telling people he is an environmental activist be an attack?
Wasn’t he arrested at a coal protest? Didn’t he testify in court that vandalism is ok if it is in the name of environmentalism?
How is the truth an attack? Does telling the truth about him make him look bad? Is that why you call it an attack? Is it wrong that people know he is an environmental activist? You don’t want people to know that?
If I had compared him to scientists who say cigarette smoke doesn’t cause cancer, that would be a smear attack.
If I compared him to people who deny there was a holocaust, that would be a smear attack.
If I compared him to people who still claim the earth is flat, that would have been an attack.
But I didn’t do these things. I told the truth about him—that he is an environmental activist and therefore he should be questioned as to whether he can be trusted to be unbiased about an environmental issue.
…………………………………………………………………………………………………………
So tell me, is the truth an attack?

Tim Clark
March 10, 2010 7:24 am

Documentation:
GISS Surface Temperature Analysis
What’s New
Feb. 16, 2010: Urban adjustment is now based on global nightlights rather than population as discussed in a paper in preparation.
Nov. 14, 2009: USHCN_V2 is now used rather than the older version 1. The only visible effect is a slight increase of the US trend after year 2000 due to the fact that NOAA extended the TOBS and other adjustment to those years.
Sep. 11, 2009: NOAA NCDC provided an updated file on Sept. 9 of the GHCN data used in our analysis. The new file has increased data quality checks in the tropics. Beginning Sept. 11 the GISS analysis uses the new NOAA data set.

This is on top of previous TOBS added between 1990-2009. I’ve always questioned why they continue to add in TOBS when almost all station times of observation were adjusted by 1995, and the correction is a one-time event for each station. From the magnitude of the TOBS correction, it appears more stations are needing correction each year. WUWT.

rbateman
March 10, 2010 7:32 am

carrot eater (17:34:32) :
rbateman (16:34:36) :
You’re telling me about two stations someplace
_______________________________________
Oregon to be exact. Two places 18 miles apart in the same climatic valley.
One shows a warming UHI bias, one does not.
You have been going on for days about all stations do the same thing, and you are wrong.
There has been no evidence of warming in the cherry-picked stations that make up CRU for probably 15 years, and it’s a travesty. They cancelled each other out.
If we took ALL AVAILABLE stations we would see cooling the last 15 years, not the catastrophic and artificially sweetened GISS of death warming.
And what is even worse is the artificial holes in GISS data sets that are filled in with FILTEMP designed to artificially warm the trend. Holes that don’t exist in the raw data.
It was YOU who claimed that there is no difference between rural and urban data sets, they show the same thing. I just showed you Medford and Ashland, small town and big town right down the road from each other.
There is no difference before UHI sets in, but afterwards it’s as plain as day.
oh, btw…. where’s your graphs?

Tim Clark
March 10, 2010 8:06 am

carrot eater (03:12:56) :
Here’s a good graphic. It shows the number of stations (out of 6000+) classified as warm. See if you can get the data from them and determine the number that are no longer included in the GISTEMP determination.
Nothing like real data to prove a point with me!
http://data.giss.nasa.gov/gistemp/warm_stations/

Jan Pompe
March 10, 2010 10:18 am

Amino Acids in Meteorites (07:16:56) :
When using data sets we should always proceed with due diligence regardless of how we trust the source.
“So tell me, is the truth an attack?”
It can be; it can also support. It depends on what it tells us about a person’s work. What matters is whether it’s relevant: if it points to the possibility of bias it may be relevant, but nothing will beat directly testing the validity of, or bias in, the work under examination.

Paul Daniel Ash
March 10, 2010 10:48 am

he is an environmental activist and therefore he should be questioned as to whether he can be trusted to be unbiased about an environmental issue.
That’s not a scientific question. If he were a judge, then yes, his opinion would matter. As a scientist, all that matters is whether his work is verifiable and replicable. One should absolutely question his conclusions, but science should question all results regardless of what one thinks a particular scientist’s opinions are.
You were the one saying “Attack the Man” was a bad thing to do. Someone said Chiefio’s site is difficult to read: it is. If that is “Attack the Man,” then how is “It is asking a lot to tell me to trust anything passing through the hands of James Hansen.” and “Tamino…… huh, who exactly is ‘Tamino’?” not “Attack the Man?”

carrot eater
March 10, 2010 2:57 pm

Tim Clark (07:10:21) :
TOB is done in the USHCN, and is a pretty big deal there. And it does get passed through to GISS. But NOAA doesn’t do any TOB for the rest of the world, nor does GISS do it.
GISS does UHI. That’s it. And it uses raw data for all but the US.

carrot eater
March 10, 2010 3:32 pm

Tim Clark
“But the increasing trend value post-dropout is obviously greater than anything preceding.”
Obvious to whom? You see a marked difference in trend, 1975 to 1990, vs 1990 to 2010? I don’t.
“Also you keep repeating the mantra that the pre trends are somewhat analogous and should equal the post trend ad infinitum.”
Because it answers most of the objections being raised here, about baselines and whatever else.
“That presupposes your assumption that the trend is associated with a linearly (or logarithmic) increasing climate forcing and not a paradigm shift (PDO). Sound familiar?”
What has that got to do with anything? For the station drop to matter, the dropped stations have to have had different trends from their neighbors in the same grid box. What has that got to do with PDOs or anything?

carrot eater
March 10, 2010 3:53 pm

rbateman (07:32:06) :
“You have been going on for days about all stations do the same thing, and you are wrong.”
I’ve never said that. Please go up and down this thread. You will never see a time where I said that. This is important to understand, too. Trends differ somewhat as you go around the world. Arctic different from tropics, Antarctic Peninsula different from East Antarctica; there are regional differences. Just make a trend map at the GISS page and you can see it.
The point is, if you drop stations such that the average trends for that particular gridbox don’t change, *then* dropping stations doesn’t have much impact. From the multiple analyses, adding up all the grid boxes, we see no major difference in the global number; you can use the current ~1200 stations and get the same results, pre-1990, as using the full set. But if you had a gridbox that had stations of all different trends, and you somehow sat there and selected out all the fastest warmers or slowest warmers or whatever, then you could bias the results. But that simply isn’t what happened.
Amino Acids in Meteorites (07:16:56) :
You said to stick to science. Please do.
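[Editor’s note: the gridbox point above can be made concrete with a toy sketch. This is illustrative only, not GISTEMP’s actual algorithm; the station offsets, record length, and 0.02 °C/yr trend are made-up numbers. It shows why dropping stations whose trend matches their neighbours’ leaves the gridded anomaly trend unchanged, while absolute averages would shift.]

```python
# Toy sketch (not GISTEMP code): four synthetic stations in one grid box share
# the same underlying 0.02 deg/yr trend but have different absolute
# temperatures. Dropping stations leaves the gridded anomaly trend unchanged.

def slope(ys):
    """Ordinary least-squares slope of a series against 0, 1, 2, ..."""
    n = len(ys)
    xs = list(range(n))
    mx = sum(xs) / n
    my = sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

def anomaly(ys):
    """Express a station record relative to its own mean (the anomaly method)."""
    m = sum(ys) / len(ys)
    return [y - m for y in ys]

def grid_box_series(records):
    """Average the anomalies of all stations in the box, year by year."""
    anoms = [anomaly(r) for r in records]
    return [sum(col) / len(col) for col in zip(*anoms)]

years = range(61)  # 61 synthetic "years"
stations = [[offset + 0.02 * t for t in years]
            for offset in (10.0, 12.5, 8.3, 15.1)]

full_trend = slope(grid_box_series(stations))       # all four stations
kept_trend = slope(grid_box_series(stations[:2]))   # two stations "dropped"
# Both slopes come out at 0.02 deg/yr: the drop does not bias the box.
# Selectively dropping only the fastest or slowest warmers would.
```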

Amino Acids in Meteorites
March 10, 2010 6:34 pm

Paul Daniel Ash (10:48:20) :
He hides his work and resists FOI. Why would a scientist do that? Unless he has something to hide.

Amino Acids in Meteorites
March 10, 2010 6:37 pm

Paul Daniel Ash (10:48:20) :
I am not attacking the man. So it is an attack to tell people he is an environmental activist who doesn’t like people to know his methods with temperature data?
Am I lying? Am I exaggerating? No, I am not. He has made this bed for himself. Now he is sleeping in it. I did not do these things to him.

Amino Acids in Meteorites
March 10, 2010 6:39 pm

Paul Daniel Ash (10:48:20) :
“Tamino…… huh, who exactly is ‘Tamino’?”
……………………………………………………………………………………………
Well, no one knows who he is.
So, who is he?
I am asking who he is. I am not the one concealing his identity. Blame him. Makes sense to blame him, doesn’t it?

Amino Acids in Meteorites
March 10, 2010 6:52 pm

Jan Pompe (10:18:05) :
I agree with you that it doesn’t matter what the person does in their personal life. I don’t have a problem with that at all.
What is the problem is all the secrecy. The secrecy leaves one asking a lot of questions. If he were open to having everyone look at everything he does at his government-paid job, then he would avoid all these suspicions.
But since he is an environmental activist and global warming is a political and environmentalist issue it can make one wonder what is really going on behind the scenes.
It is not like he doesn’t have a history with politics and environmentalism. To say that is not an attack. I am sure these things about him have to be taken into account in trying to ascertain why he is secretive. How could they not be? If it turns out he has a legitimate reason in relation to science as to why he makes things so difficult then one could disregard his personal views. But his stubbornness makes me have serious doubts about the purity of his motives.
Am I being unfair?

Amino Acids in Meteorites
March 10, 2010 7:25 pm

carrot eater (15:53:55) :
Stick to the science. You are right.
The science shows that there has been no statistically significant warming since 1995. The science shows there has been cooling since 2005. It also shows the ‘manmade co2’ level continues to rise while no warming is happening. The science shows co2 is a small player in climate and does not control temperatures. Science also shows that there can be no runaway warming caused by co2.
The science shows climate models ‘perform poorly’.
http://www.scribd.com/doc/4364173/On-the-credibility-of-climate-predictions
‘….computer climate model outputs not matching observation…’
http://www.scribd.com/doc/904914/A-comparison-of-tropical-temperature-trends-with-model-predictions

Jan Pompe
March 10, 2010 9:48 pm

Amino Acids in Meteorites (18:52:57) :
“Am I being unfair?”
It’s not really a question of fairness; truth isn’t the slightest bit interested in whether it’s fair.
If a person who is responsible for data makes due diligence impossible with his behaviour, that of course needs to be addressed; the proverbial blowtorch to the feet might be appropriate.
We do, however, need to be careful: being an activist could be a result of what one believes he is seeing, or it can colour his vision. His data might be perfectly OK; if we toss it out because all we see is “activist”, are we not then also the losers?
My point is that where due diligence is applied, as it should be in all cases where 3rd-party data is used, then it is just an unnecessary distraction.

Amino Acids in Meteorites
March 10, 2010 10:56 pm

Jan Pompe (21:48:53) :
the proverbial blowtorch to the feet might be appropriate
LOL! Thanks for the laugh.
And I agree, due diligence is what matters.

Bernard J.
March 10, 2010 11:22 pm

AAiM.
With respect to your comment time-stamped 19:25:21, are you able to address this question:
Given the noise inherent in the temperature signal over the twentieth century after any long-term warming trend is removed, how rapid would a warming trend have to be in order for a statistically “significant” signal to be observed in less than a 15 year period?

Amino Acids in Meteorites
March 11, 2010 8:45 pm

Bernard J. (23:22:41) :
I’m not sure what you’re driving at.
If there is as much noise as you are talking about, then how can warming or cooling ever be detected?
There is clear cooling in the temp record since 2005.
Do you see it?
BTW, you won’t see it in GIStemp.

Amino Acids in Meteorites
March 11, 2010 9:08 pm

Bernard J. (23:22:41) :
No statistically significant warming since 1995
http://wattsupwiththat.com/2009/12/26/no-statistically-significant-warming-since-1995-a-quick-mathematical-proof/
…………………………………………………………………………………………………………..
which Phil Jones agrees with
http://wattsupwiththat.com/2010/02/14/daily-mail-the-jones-u-turn/

Jan Pompe
March 12, 2010 12:30 pm

Amino Acids in Meteorites (21:08:17) :
That’s hilarious: he actually does admit that there is no statistical significance in that warming
“it is difficult to establish the statistical significance of that warming”
while at the same time putting an “it’s still warming significantly” spin on it.

carrot eater
March 12, 2010 12:53 pm

Amino Acids in Meteorites (20:45:03) :
“If there is as much noise as you are talking about then how can warming or cooling ever be detected.”
In the NH, we are going from winter to spring, and then to summer.
This is what we expect to observe, over the months.
But if you look at temperature data for the last week where ever you live, could you find a statistically significant warming?

March 12, 2010 1:07 pm

carrot eater (12:53:33)
“…if you look at temperature data for the last week where ever you live, could you find a statistically significant warming?”
I see what you did there. We’re a week away from the vernal equinox. Ask that same question in the middle of June, or the middle of December in the S.H.

carrot eater
March 12, 2010 1:25 pm

Right. By the time it’s June, it will obviously have become warmer. But if you choose any one week between now and then, you will not see a statistically significant trend over that week. You’ll see some noise.
That’s the point.

Bernard J.
March 13, 2010 1:21 am

AAiM (and Jan Pompe too, it seems).
Carrot eater has given you a clue.
It seems that my fairly obvious point sailed over your head, so all I can suggest is that you read my post again and parse it carefully – it poses a very important question that seems to be missed by most.
I really am interested in a serious response.
Those who are unable to address it are obviously not qualified or competent to comment on the nature of the warming trend since 1995, and those who are able to answer should be able to present a very important caveat to the commentary about the ‘significance’ of recent warming.
I’m surprised that this presents such a difficult hurdle for the apparently statistically literate people who comment here.

Amino Acids in Meteorites
March 13, 2010 10:19 am

Bernard J. (01:21:44) :
I did understand your point. I don’t need to examine it more closely.
Should we start with the Medieval Warm Period instead of just the last 15 years, or just the 20th Century? Maybe then we could see the warming/cooling trend more accurately.

Amino Acids in Meteorites
March 13, 2010 10:21 am

Jan Pompe (12:30:45) :
I agree with you.
At least the spotlight he is now under since ClimateGate broke has this honesty, such as it is, coming out of him.

Jan Pompe
March 13, 2010 12:29 pm

Bernard J. (01:21:44) :
“I really am interested in a serious response.”
I don’t think that you are. You seem to want to believe the spin that Jones tried to put on it.
If carrot-eater hasn’t overdosed on carrots then he really has no excuse for trying to draw an irrelevant comparison between a week and 15 years.
I suggest you get some real perspective:
http://wattsupwiththat.com/2009/12/09/hockey-stick-observed-in-noaa-ice-core-data/
and look at the trend and the variations over a few thousand years, say back to the Holocene optimum. To get to this century, just add .7C.

Bernard J.
March 13, 2010 3:51 pm

AAiM and Jan Pompe.
Neither of you have answered the question, even if you yourselves believe that you have.
I asked a simple question: paraphrased, it was – given the noise in the contemporary temperature time series, what minimum period of time is required to discern a signal from this noise? The answer requires some basic mathematical processing… beyond “just add .7C”.
I suspect that the calculations are beyond you, which is a pity because I really am interested in an informed ‘sceptical’ opinion on the matter.
Tamino had this to say on the subject:
http://tamino.wordpress.com/2009/12/15/how-long/
I really would like to know if anyone can counter his work, and claim that any period less than 15 years can actually give a meaningful indication of what the global climatic system is doing.
If not, the promotion of the “no (statistically significant) warming since 1995” meme is a demonstration of the scientific/statistical illiteracy/innumeracy of those who present it.

Jan Pompe
March 13, 2010 5:11 pm

Bernard J. (15:51:56) :
You don’t get more basic than simple addition, but if you don’t understand why I made that remark, I don’t think I can help you.

Bernard J.
March 14, 2010 4:52 am

Jan Pompe.
I know exactly why you made that remark.
And now I’m wondering if you can answer my question.
More specifically, I’m wondering if you can refute Tamino’s demonstration that anything less than 15 years lacks the statistical power to rise above the inherent noise in the temperature signal.
I really am interested in your response to this. It will test the statistical validity relied upon by those who make the claim of “no warming since 1995” without further caveat, and in so doing it will indicate exactly who is able to comprehend what is noise, and what is signal.
If anyone can show that it is scientifically/statistically possible to make a claim that it is possible to identify, with statistical significance, a warming trend (or, more significantly, a lack thereof) using less than 15 years of data, the world seriously needs to know about it. Such a proof would turn climatology upside down, and would be a boon in advancing the sceptic cause.
As AAiM has gone conspicuously quiet on this, could you do this, Jan? Proof that less than 15 years of data suffice to definitively identify a warming/cooling/static trend would go a long way to trashing Phil Jones’s apparent prevarication about the matter, and conversely an inability to provide such proof goes a long way to vindicating the intent of his statement, in addition to supporting Tamino’s work.
And for what it’s worth, I really don’t care two bits about Phil Jones’s “spin” or whatever: all I want to know is the scientific truth. Surely this is easy to provide?
If the basic addition of “.7C” is all that is required to blow that smug so-and-so Tamino out of the water, it shouldn’t take you too long to show why, Jan. If it takes a bit more than adding “.7C” to a quantity, then we need to know about this too.
This is the perfect opportunity for you, or AAiM, or anyone else with the statistical competence, to make a major point – so why can it not be made in a few quick sentences, or a few paragraphs at the most?
At its most basic the matter boils down to this:
1) there is indisputable noise in the temperature ‘system’
2) this noise necessarily means that there is a minimum period of data that must be gathered before a signal can emerge
3) there are statistical methods for determining what the length of such a period is, dependent upon the magnitude of the noise
4) if Tamino is wrong, it should be a simple matter to demonstrate why he is, and what the actual minimum period required to discern signal from noise is.
This should be bread-and-butter to those here who wish to demolish the claims of the AGW crowd. So where’s the answer?
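[Editor’s note: point 3 above can be sketched as a back-of-envelope power calculation. This is not Tamino’s actual analysis; it assumes white noise, a fixed t-threshold of 2.0, and an illustrative 0.1 °C of year-to-year scatter on the annual anomalies, with Jones’s 0.12 °C/decade as the underlying trend.]

```python
import math

def years_needed(trend_per_year, sigma, t_crit=2.0):
    """Smallest record length (in years) at which a steady linear trend buried
    in white noise of standard deviation sigma reaches |t| > t_crit.
    The t statistic is trend / SE(trend), with SE = sigma / sqrt(Sxx)."""
    n = 3
    while True:
        # Sxx = sum of (x - mean(x))^2 for x = 0 .. n-1
        sxx = n * (n ** 2 - 1) / 12.0
        t = trend_per_year * math.sqrt(sxx) / sigma
        if t > t_crit:
            return n
        n += 1

# Assumed numbers, for illustration only: a 0.12 C/decade trend and roughly
# 0.1 C of year-to-year scatter in the annual global anomalies.
n = years_needed(0.012, 0.1)   # -> 15 years
```

Under those assumed numbers the trend first clears the noise at about 15 years, which is why a 1995 start date sits right on the edge of significance; a noisier series or a weaker trend pushes the required period out further.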

Bernard J.
March 14, 2010 5:46 am

AAiM (21:08:17, 11 Mar 2010).
I’ve been hoping that someone (and perhaps Luboš Motl himself) might follow up on your link to Motl’s post. Alas it seems that it’s not going to happen any time soon.
There are a number of (fatal) issues with Motl’s ‘analysis’, addressed in postings on that thread and also, by implication, in Tamino’s analysis. Whilst these flaws are interesting in and of themselves, insofar as they show what not to do, they (and Motl’s post) are actually not pertinent to the matter at hand.
Read my original post again. Motl’s naïve ‘analysis’ doesn’t actually answer my question.

Smokey
March 14, 2010 5:47 am

Bernard J. (04:52:40):

If anyone can show that it is scientifically/statistically possible to make a claim that it is possible to identify, with statistical significance, a warming trend (or, more significantly, a lack thereof) using less than 15 years of data, the world seriously needs to know about it. Such a proof would turn climatology upside down, and would be a boon in advancing the sceptic cause.

There is no skeptic “cause”. There is scientific skepticism, and it must be kept in mind that skeptics have nothing to prove. Even so, here is a cooling trend of less than 15 years data: click
Tamino cherry-picks 15 years because it supports his CAGW agenda. But real world observations show that thirty years is about a half cycle of global /warming/cooling: click
Grant Foster has a personal agenda: catastrophic anthropogenic global warming; runaway global warming caused by human activity. Yet there is zero empirical evidence of that fantasy: click
As we see, for every one molecule of CO2 emitted by human activity, the planet emits 33 molecules of CO2 naturally. Those figures come from the IPCC.
Tamino is a thoroughly deranged True Believer, nipping at the heels of the Big Dog: WUWT. He is tortured by his irrelevance. If people accepted tamino as credible, they would have designated him as the “Best Science” site. Instead, he didn’t even make the finals.
Please, don’t waste any more of our time with the irrelevant tamino.

Jan Pompe
March 14, 2010 5:51 am

Bernard J. (04:52:40) :
“Jan Pompe.
I know exactly why you made that remark.”
No, you don’t. You obviously have not understood anything.
“And now I’m wondering if you can answer my question.”
I have no interest in doing that because I don’t disagree with what Tamino has done.
I DO NOT DISAGREE WITH THIS:
“The simple fact is that short time spans don’t give enough data to establish what the trend is, they just exhibit the behavior of the noise”
http://tamino.wordpress.com/2009/12/15/how-long/
But that and this from you:
“More specifically, I’m wondering if you can refute Tamino’s demonstration that anything less than 15 years lacks the statistical power to rise above the inherent noise in the temperature signal.”
do not in any way say the same thing.
What I do disagree with (like AAiM) is that 15, or even 30, or maybe even 100 years might be long enough. Especially in a system where we see lags of 800 years between temperature change and CO2 level, Dansgaard cycles of ~1500 years during glacials, and Bond events every 1000–1500 years during interglacials.
Kapiche?

Jan Pompe
March 14, 2010 6:18 am

Bernard J. (04:52:40) :
I sympathise with what you are trying, but it’s not going to happen. Not everything the warmists say is wrong; some of it is just not relevant. Tamino’s article is just a distraction; he knows as well as anyone, I suppose, that 15 or 30 years is just a blip.
http://www.foresight.org/nanodot/wp-content/uploads/2009/12/histo3.png
Here is 8000 years of the Holocene. It’s obvious from this perspective that there has been a steady downward trend in temperature that became steeper 3000 years ago, and the recent warming is just another blip of “noise”. Tamino does not want us thinking of it like that, but just for us to look at the past 30 years and get very scared.

supercritical
March 14, 2010 6:32 am

Bernard
As a layman, it seems odd to talk about a series of thermometer readings in terms of signal processing.
Firstly, there is an assumption that a ‘signal’ is actually present, which implies an a-priori attitude. OK when you are talking about a deliberate human communication buried in static interference, but otherwise?
We all know that ‘signal’ has all the characteristics of ‘noise’ unless you know in advance what you are looking for, and what form it might take.
So what are the climatologists looking for? Changing climate? Yet, after fifteen years they detect ‘nothing significant’ in these strings of temperature readings.
Hm.

Amino Acids in Meteorites
March 14, 2010 9:05 pm

Bernard J. (15:51:56) :
Tamino had this to say on the subject:
Who is Tamino? And why does he hide who he is?

Amino Acids in Meteorites
March 14, 2010 9:18 pm

Bernard J. (15:51:56) :
I did answer your question. I just didn’t give the answer you wanted. And that answer isn’t needed. Sorry, I won’t be giving that. I won’t step into your trap. I don’t want to get into arguments over statistics since one can make statistics say anything they want them to.
If 15 years isn’t a good time period for you then let’s make it 1500 years. You won’t have to split hairs then over statistics on a time scale you—or should I say Tamino and not you—say is too short. You’ll find that it was warmer on earth 1000 years ago than now.
I don’t want to play your piddly game. Because no matter what answer I give you’ll go back to manmade co2 is causing dangerous global warming, and you have Tamino’s work to prove it—that is your mantra.
If there is any further problem you have with no statistically significant warming over the last 15 years then please contact Phil Jones about it since he also is saying that.

Amino Acids in Meteorites
March 14, 2010 9:21 pm

Bernard J. (04:52:40) :
As AAiM has gone conspicuously quiet on this, could you do this Jan?
Wrong.
I just came here to check tonight. I don’t sit in front of my computer with this page open on it 24 hours a day.

Amino Acids in Meteorites
March 14, 2010 9:49 pm

Jan Pompe (05:51:42) :
I have no interest in doing that because I don’t disagree with what Tamino has done.
I DO NOT DISAGREE WITH THIS:
“The simple fact is that short time spans don’t give enough data to establish what the trend is, they just exhibit the behavior of the noise”

…………………………………………………………………………………………………………….
You always have an interesting way of answering.
I am impressed, again.

Bernard J.
March 14, 2010 10:57 pm

It really is a simple question: given the noise in the modern temperature record, what is the shortest span of time required where one can say, with at least 95% confidence, that a signal might be expected to rise above said noise?
There has been an awful lot of noise from those who are responding to my question, but thus far there has been no signal at all rising above the clamour.
Let me put it this way: if it emerges at the end of this year that there is “significant” warming since 1995, but not from 1996, what does this mean?
If, for the next 40 years, the same pattern emerges, where warming can be “significantly” identified for any period longer than about the latest 15 years, what does this mean?
Will no-one seriously attempt to answer the question?

Stu
March 15, 2010 2:06 am

Bernard, did you miss Smokey’s good value reply?
“There is no skeptic “cause”. There is scientific skepticism, and it must be kept in mind that skeptics have nothing to prove. Even so, here is a cooling trend of less than 15 years data: click
Err, Smokey, you do know what statistical significance means, right? Sure, there are trends on timescales of less than 15 years, and indeed you’ll be able to find trends over almost all timescales. The question is, if a 15-year trend is not statistically significant, why should that 6-year trend be (the graph ends in 2008)?
If you plotted that graph from 1995, all the lines would be pointing up. Still wouldn’t be a significant trend though.
Besides, I don’t really know what the fuss is about, since ‘statistical significance’ should really read ‘statistical artefact’, in which the 95% level is fairly arbitrarily chosen. Even if a trend were statistically significant at the 95% level, there’s still a 1 in 20 probability that it happened by chance.
What’s more, beyond statistics there must be a physical reason for every fluctuation in the average temperature of the Earth, and statistical significance is just a tool for working out whether what you see matters on longer timescales.
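[Editor’s note: Stu’s 1-in-20 point can be checked with a small simulation. Illustrative only: pure Gaussian noise standing in for 20 “years” of anomalies, and 2.101 as the two-sided 95% t value for 18 degrees of freedom.]

```python
import math
import random

def trend_t(ys):
    """t statistic of the OLS trend of ys against 0, 1, 2, ..."""
    n = len(ys)
    mx = (n - 1) / 2.0
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in range(n))
    b = sum((x - mx) * (y - my) for x, y in zip(range(n), ys)) / sxx
    resid = [y - my - b * (x - mx) for x, y in zip(range(n), ys)]
    se = math.sqrt(sum(r * r for r in resid) / (n - 2) / sxx)
    return b / se

random.seed(0)
trials = 2000
# Trendless Gaussian noise, tested against the 95% threshold: about 1 trial
# in 20 still produces a "significant" trend purely by chance.
hits = sum(abs(trend_t([random.gauss(0.0, 1.0) for _ in range(20)])) > 2.101
           for _ in range(trials))
false_positive_rate = hits / trials   # close to 0.05
```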

Smokey
March 15, 2010 6:32 am

Stu (02:06:59) :
“Err, Smokey, you do know what statistical significance means, right?”
Ask Phil Jones. He made that statement regarding the flat temperatures over the last fifteen years. Or was Jones wrong?

Amino Acids in Meteorites
March 15, 2010 6:35 am

Bernard J. (22:57:00) :
It really is a simple question:
…………………………………………………………………………
It is so simple we have other more interesting things to do.
Bye now.

Amino Acids in Meteorites
March 15, 2010 6:41 am

Smokey (06:32:55) :
What these guys don’t want to say is that if the graph had been more aggressive in either direction, statistical significance would have been found. But since it wasn’t upward enough, they can’t claim global warming is happening. But if there had been statistical significance in the last 15 years, I think most global warming believers would be pointing out the warming in the last 15 years, regardless of whether it was a long enough time frame or not.
Why do I know this? Experience.
And also, look at how excited some of them are about the fairly high anomaly in UAH for the past 2 months. A 2 month time frame is enough for some of them.

Bernard J.
March 15, 2010 6:58 pm

AAIM (06:41:17, 15 Mar 2010).
As I said elsewhere, the magnitude of the signal and the magnitude of the noise operate together to determine the magnitude of the period required for the signal to be statistically discerned over the noise.
It’s a simple relationship, and I am simply asking you to determine what period of time is required in order to be able to say, with a degree of statistical confidence, that a temperature signal is discernible from the noise.
If the signal is such that it takes more than 15 years to pick from noise, this makes no difference to the long-term significance if the rise is relatively constant. Indeed, such a rise may still have profound implications for societies and for the biosphere within one human lifetime. But this brings us back to the original question of the noise and the signal, and I really am waiting to see what method the statisticians here believe is appropriate to quantify the time needed to discern one from the other.

Bernard J.
March 15, 2010 7:01 pm

To clarify, by “long term significance” I meant “significance” in an implication sense, not a statistical sense.

Stu
March 17, 2010 12:03 pm

Smokey said:
“Ask Phil Jones. He made that statement regarding the flat temperatures over the last fifteen years. Or was Jones wrong?”
Phil Jones said:
“I also calculated the trend for the period 1995 to 2009. This trend (0.12C per decade) is positive, but not significant at the 95% significance level.”
Phil Jones said the trend is 0.12C/decade. 0.12C/decade is not flat. Smokey is wrong.
By the way, Smokey also said:
“Tamino cherry-picks 15 years because it supports his CAGW agenda.”
No he didn’t. If you look, you’ll see that he analysed trends for start years going back to 1975, looking for the shortest period that gives statistical significance.
Again, Smokey is wrong. I know I said that already, but it bears repeating.