Before One Has Data

Guest Post by Willis Eschenbach

Anthony Watts has posted up an interesting article on the temperature at Laverton Airport (Laverton Aero), Australia. Unfortunately, he was moving too fast, plus he’s on the other side of the world, with his head pointed downwards and his luggage lost in Limbo (which in Australia is probably called something like Limbooloolarat), and as a result he posted up a Google Earth view of a different Australian Laverton. So let’s fix that for a start.

Figure 1. Laverton Aero. As you can see, it is in a developed area, on the outskirts of Melbourne, Australia.

Anthony discussed an interesting letter about the Laverton Aero temperature, so I thought I’d take a closer look at the data itself. As always, there are lots of interesting issues.

To begin with, GISS lists no fewer than five separate records for Laverton Aero. Four of the records are very similar. One is different from the others in the early years but then agrees with the other four after that. Here are the five records:

Figure 2. All raw records from the GISS database. Photo is of downtown Melbourne from Laverton Station.

This situation of multiple records is quite common. As always, the next part of the puzzle is how to combine the five different records to get one single “combined” record. In this case, for the latter part of the record it seems simple. Either a straight linear offset onto the longest record, or a first difference average of the records, will give a reasonable answer for the post-1965 part of the record. Heck, even a straight average would not be a problem, since the five records are quite close.
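For readers who want to try this at home, here is a minimal sketch of the two combining approaches just mentioned: a fixed offset onto a reference record, and a first-difference average. The function names and the NaN-for-missing convention are my own illustrative choices, not what GISS actually runs:

```python
import numpy as np

def combine_offset(records, ref_idx=0):
    """Offset each record onto the reference record over their common
    valid years, then average across records year by year.
    `records` is a 2-D array (stations x years), NaN for missing."""
    ref = records[ref_idx]
    adjusted = []
    for rec in records:
        both = ~np.isnan(rec) & ~np.isnan(ref)
        offset = np.nanmean(ref[both] - rec[both]) if both.any() else 0.0
        adjusted.append(rec + offset)
    return np.nanmean(adjusted, axis=0)

def combine_first_difference(records):
    """Average the year-to-year changes across records, then integrate.
    Anchored so the result's mean matches the mean of the inputs."""
    diffs = np.diff(records, axis=1)           # per-record annual changes
    mean_diff = np.nanmean(diffs, axis=0)      # average change each year
    series = np.concatenate([[0.0], np.nancumsum(mean_diff)])
    return series - np.nanmean(series) + np.nanmean(records)
```

Both assume the records are already aligned year by year. For the post-1965 portion, where the five records nearly coincide, either choice (or a straight average) gives nearly the same answer.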

For the early part of the record, given the good agreement between all records except record Raw2, I’d be tempted to throw out the early part of the Raw2 record entirely. Alternatively, one could consider the early and late parts of Raw2 as different records, and then use one of the two methods to average it back in.

GISS, however, has done none of those. Figure 3 shows the five raw records, plus the GISS “Combined” record:

Figure 3. Five GISS raw records, plus GISS record entitled “after combining sources at the same location”. Raw records are shown in shades of blue, with the Combined record in red. Photo is of Laverton Aero (bottom of picture) looking towards Melbourne.

Now, I have to admit that I don’t understand this “combined record” at all. It seems to me that no matter how one might choose to combine a group of records, the final combined temperature has to end up between the temperatures of the individual records. It can’t be warmer or colder than all of the records.

But in this case, the “combined” record is often colder than any of the individual records … how can that be?
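That expectation is easy to turn into an automated sanity check. Here is a sketch (my own illustrative code, not anything GISS runs) that flags years where a combined value escapes the min–max envelope of the raw records that have data that year:

```python
import numpy as np

def out_of_bounds_years(raw_records, combined):
    """Return indices of years where the combined value falls outside
    the min..max envelope of the raw records with data that year.
    `raw_records`: stations x years (NaN = missing); `combined`: years."""
    lo = np.nanmin(raw_records, axis=0)
    hi = np.nanmax(raw_records, axis=0)
    has_data = ~np.isnan(lo) & ~np.isnan(combined)
    bad = has_data & ((combined < lo) | (combined > hi))
    return np.flatnonzero(bad)
```

A check like this, run after any combining algorithm, would immediately flag a “combined” record that is colder than all of its inputs.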

Well, let’s set that question aside. The next thing that GISS does is to adjust the data. This adjustment is supposed to correct for inhomogeneities in the data, as well as adjust for the Urban Heat Island effect. Figure 4 shows the GISS Raw, Combined, and Adjusted data, along with the amount of the adjustment:

Figure 4. Raw, combined, and adjusted Laverton Aero records. Amount of the adjustment after combining the records is shown in yellow (right scale).

I didn’t understand the “combined” data in Fig. 3, but I really don’t understand this one. The adjustment increases the trend from 1944 to 1997, by which time the adjustment is half a degree. Then, from 1997 to 2009, the adjustment decreases the trend at a staggering rate, half a degree in 12 years. This is (theoretically) to adjust for things like the urban heat island effect … but it has increased the trend for most of the record.

But as they say on TV, “wait, there’s more”. We also have the Australian record. Now, theoretically the GISS data is based on the Australian data. However, the Aussies have put their own twist on the record. Figure 5 shows the GISS Combined and Adjusted data, along with the Australian data (station number 087031).

Figure 5. GISS Combined and Adjusted, plus Australian data.

Once again, perplexity roolz … why do the Australians have data in the 1999-2003 gap, while GISS has none? How come the Aussies say that 2007 was half a degree warmer than what GISS says? What’s up with the cold Australian data for 1949?

Now, I’m not saying that anything you see here is the result of deliberate alteration of the data. What it looks like to me is that GISS has applied some kind of “combining” algorithm that ends up with the combination being out-of-bounds. And it has applied an “adjustment” algorithm that has done curious things to the trend. What I don’t see is any indication that after running the computer program, anyone looked at the results and said “Is this reasonable?”

Does it make sense that after combining the data, the “combined” result is often colder than any of the five individual datasets?

Is it reasonable that when there is only one raw dataset for a period, like 1944–1948 and 1995–2009, the “combined” result is different from that single raw dataset?

Is it logical that the trend should be artificially increased from 1944 to 1997, then decreased from that point onwards?

Do we really believe that the observations from 1997 to 2009 showed an incorrect warming of half a degree in just over a decade?

That’s the huge missing link for me in all of the groups who are working with the temperature data, whether they are Australian, US, English, or whatever. They don’t seem to do any quality control, even the simplest “does this result seem right” kind of tests.

Finally, the letter in Anthony’s post says:

BOM [Australian Bureau of Meteorology] currently regards Laverton as a “High Quality” site and uses it as part of its climate monitoring network. BOM currently does not adjust station records at Laverton for UHI.

That being the case … why is the Australian data so different from the GISS data (whether raw, combined, or adjusted)? And how can a station at an airport near concrete and railroads and highways and surrounded by houses and businesses be “High Quality”?

It is astonishing to me that at this point in the study of the climate, we still do not have a single agreed upon set of temperature data to work from. In addition, we still do not have an agreed upon way to combine station records at a single location into a “combined” record. And finally, we still do not have an agreed upon way to turn a group of stations into an area average.

And folks claim that there is a “consensus” about the science? Man, we don’t have “consensus” about the data itself, much less what it means. And as Sherlock Holmes said:

I never guess. It is a capital mistake to theorize before one has data. Insensibly one begins to twist facts to suit theories, instead of theories to suit facts. — Sir Arthur Conan Doyle, A Scandal in Bohemia



83 Comments
Dr A Burns
June 24, 2010 3:03 pm

This claims there’s no missing data for the period for Laverton 087031:
http://www.bom.gov.au/clim_data/cdio/metadata/pdf/siteinfo/IDCJMD0040.087031.SiteInfo.pdf
You can calculate average temps from the max and mins for Laverton given here:
http://www.bom.gov.au/jsp/ncc/cdio/weatherData/av?p_display_type=dataGraph&p_stn_num=087031&p_nccObsCode=36&p_month=13

janama
June 24, 2010 3:11 pm

Willis – there’s more data for Laverton available here
http://www.bom.gov.au/cgi-bin/climate/hqsites/site_data.cgi?variable=meanT&area=aus&station=087031&period=annual&dtype=raw&ave_yr=0
this record goes back to 1910 and is the High Quality temperature record used for annual temperature analyses.
Ken at http://kenskingdom.wordpress.com/ has been going through the full Australian record state by state. He’s nearly finished.

Andrew Partington
June 24, 2010 3:17 pm

There is a town called Laverton in outback Western Australia as well. This could be the picture in Anthony Watts’s post. Perhaps the inconsistencies are due to a confusion between the two?
[reply] See update and Willis Eschenbach’s followup post. RT-mod

janama
June 24, 2010 3:22 pm

BTW – the data from 99 – 03 perfectly matches the data for Moorabbin airport 34 km away.

carrot eater
June 24, 2010 3:41 pm

I haven’t looked at their data for this location, but the Australian BoM does something that’s probably more akin to what Willis seems to want – painstaking adjustments with manual human judgment, guided by field notes about station moves, etc.
If that’s what you want, then that’s where you go. You don’t go to GISS. Different records are created for different purposes, so you have to know if the record you’re looking at is appropriate for whatever it is you are trying to do. The point of GISTEMP is to build up spatially averaged anomalies, to represent regional or global trends. They feel they can do this without making really detailed adjustments, so they don’t. As it is, those adjustments they do make do not have much impact on the global trends, as you can see.

Colin J Ely
June 24, 2010 3:55 pm

Why the missing data at Laverton RAAF? It was an RAAF base from before WWII; the ATC radar, at least until a few years ago, was still operational.

Mike Edwards
June 24, 2010 4:10 pm

“carrot eater says:
June 24, 2010 at 1:43 pm
The GISS adjustment does one thing. It’s very clear what it does, and why it does it”
Instead of throwing rocks at Willis here, perhaps you should do some work yourself and show how and why the adjustments made by GISS for THIS station make sense. If the adjustments do make sense, then it should be straightforward to show that to us all.
Willis is saying that the adjustments look strange. From what I’ve seen so far, I’d have to agree with his conclusions.
If the adjustment in Fig 4 is meant to be a compensation for the UHI effect, then its shape seems unjustifiable – UHI for this site is surely going to be increasing with time as more and more urbanization creeps around it. Any apparent drop in UHI effect is more likely to be a pointer to the fact that local comparison “rural” stations ain’t so rural any more…

June 24, 2010 4:31 pm

Carrot eater unfortunately BOM don’t make UHI adjustments for Laverton. It appears NASA do. Who should we believe?

Matt in Melbourne
June 24, 2010 4:36 pm

It may be interesting to watch the future data and possible (probable?) UHI at this site as it undergoes redevelopment.
http://www.williamslanding.com.au/About_Williams_Landing.asp

latitude
June 24, 2010 5:13 pm

This is funny.
So what they are saying is the oldest data is no good, not accurate, and is corrected down.
But the exact same newest data, from the same equipment in the same place, is very accurate, accurate enough that they can apply a computer generated correction to it.
Makes sense that you would correct for UHI by raising temperatures, right? and when raising temperatures doesn’t show enough increase, it must mean the oldest data is wrong and must be corrected down………………….

dixonstalbert
June 24, 2010 5:14 pm

thank you Willis for original post and thank you carrot eater for june24 /1:47pm link to clear climate code showing how GIS calculates adjustment
I thought it would be helpful for other newbies to post how GIStemp calculates the adjustment, so I have pasted the code comments from /code/step2.py (from carrot eaters link) that document the calculations at the bottom of this post. (My apologies to the oldies who already know this stuff for the length.)
I understand the code picks out all the stations classified as rural within a fixed radius (the snippet “d.cslat = math.cos(station.lat * pi180)” is part of the distance calculation), calculates a trend for these, does some data checks, then adjusts the “urban” station with this trend.
Presumably, the “rural stations” around Laverton Aero showed a stronger and inconsistent warming trend than the station already had, so in this case applying the adjustment for UHI causes an even greater warming trend.
Of course, this brings us back to the central theme of this blog: Are the “rural” classifications in the GISTemp accurate and valid? Or to answer Carrot Eater’s question: Would you be happy if GISS just tossed out all urban stations?
I would, Yes. That is the point of surfacestations.org : to find a large sample of stations with no UHI corruption and see if there really is a significant, widespread trend in temperatures.
Here are the code comments from step2.py:
def urban_adjustments(anomaly_stream):
"""Takes an iterator of station records and applies an adjustment
to urban stations to compensate for urban temperature effects.
Returns an iterator of station records. Rural stations are passed
unchanged. Urban stations which cannot be adjusted are discarded.
The adjustment follows a linear or two-part linear fit to the
difference in annual anomalies between the urban station and the
combined set of nearby rural stations. The linear fit is to allow
for a linear effect at the urban station. The two-part linear fit
is to allow for a model of urban effect which starts or stops at
some point during the time series.
The algorithm is essentially as follows:
For each urban station:
1. Find all the rural stations within a fixed radius;
2. Combine the annual anomaly series for those rural stations, in
order of valid-data count;
3. Calculate a two-part linear fit for the difference between
the urban annual anomalies and this combined rural annual anomaly;
4. If this fit is satisfactory, apply it; otherwise apply a linear fit.
If there are not enough nearby rural stations, or the combined
rural record does not have enough overlap with the urban
record, try a second time for this urban station, with a
larger radius. If there is still not enough data, discard the
urban station.
"""
def combine_neighbors(us, iyrm, iyoff, neighbors):
"""Combines the neighbor stations *neighbors*, weighted according
to their distances from the urban station *us*, to give a combined
annual anomaly series. Returns a tuple: (*counts*,
*urban_series*, *combined*), where *counts* is a per-year list of
the number of stations combined, *urban_series* is the series from
the urban station, re-based at *iyoff*, and *combined* is the
combined neighbor series, based at *iyoff*.
"""
def prepare_series(iy1, iyrm, combined, urban_series, counts, iyoff):
"""Prepares for the linearity fitting by returning a series of
data points *(x,f)*, where *x* is a year number and *f* is the
difference between the combined rural station anomaly series
*combined* and the urban station series *urban_series*. The
points only include valid years, from the first quorate year to
the last. A valid year is one in which both the urban station and
the combined rural series have valid data. A quorate year is a
valid year in which there are at least
*parameters.urban_adjustment_min_rural_stations* contributing
(obtained from the *counts* series).
Returns a 4-tuple: (*p*, *c*, *f*, *l*). *p* is the series of
points, *c* is a count of the valid quorate years. *f* is the
first such year. *l* is the last such year.
"""
def cmbine(combined, weights, counts, data, first, last, weight):
"""Adds the array *data* with weight *weight* into the array of
weighted averages *combined*, with total weights *weights* and
combined counts *counts* (that is, entry *combined[i]* is the
result of combining *counts[i]* values with total weights
*weights[i]*). Adds the computed bias between *combined* and
*data* before combining.
Only combines in the range [*first*, *last*); only combines valid
values from *data*, and if there are fewer than
*parameters.rural_station_min_overlap* entries valid in both
arrays then it doesn't combine at all.
Note: if *data[i]* is valid and *combined[i]* is not, the weighted
average code runs and still produces the right answer, because
*weights[i]* will be zero.
"""
"""Finds a fit to the data *points[]*, using regression analysis,
by a line with a change in slope at *xmid*. Returned is a 4-tuple
(*sl1*, *sl2*, *rms*, *sl*): the left-hand slope, the right-hand
slope, the RMS error, and the slope of an overall linear fit.
"""
"""Decide whether to apply a two-part fit.
If the two-part fit is not good, the linear fit is used instead.
The two-part fit is good if all of these conditions are true:
- left leg is longer than urban_adjustment_short_leg
- right leg is longer than urban_adjustment_short_leg
- left gradient is abs less than urban_adjustment_steep_leg
- right gradient is abs less than urban_adjustment_steep_leg
- difference between gradients is abs less than urban_adjustment_steep_leg
- either gradients have same sign or
at least one gradient is abs less than
urban_adjustment_reverse_gradient
"""
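The two-part (“broken stick”) fit those docstrings describe can be sketched as a brute-force search over candidate breakpoints. This is an illustrative reconstruction, not the actual GIStemp/ccc code; the function name and return values here are my own:

```python
import numpy as np

def two_part_fit(x, f):
    """Fit f as a piecewise-linear function of x with one change in
    slope, continuous at the knot. Brute-force over candidate knots;
    returns (knot, left_slope, right_slope, rms)."""
    best = None
    for xmid in x[2:-2]:                      # keep a few points in each leg
        # hinge basis: intercept, left-of-knot slope, right-of-knot slope
        A = np.column_stack([np.ones_like(x),
                             np.minimum(x - xmid, 0),
                             np.maximum(x - xmid, 0)])
        coef, *_ = np.linalg.lstsq(A, f, rcond=None)
        rms = np.sqrt(np.mean((A @ coef - f) ** 2))
        if best is None or rms < best[3]:
            best = (xmid, coef[1], coef[2], rms)
    return best
```

Given an urban-minus-rural difference series, a fit like this locates the year where the divergence changes slope; the real code then applies the inverse of that fit to the urban station.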

sky
June 24, 2010 5:27 pm

carrot eater:
I’m surprised that anyone with any analytic acumen would defend GISTEMP and their data suppliers. There is virtually no QC in the data analysis, which is done on time-series often patched together from short stretches of inconsistent data at the same station. Incisive QC routines find egregious offsets of decadal and longer duration in both the “raw” and the “adjusted” series. The basic premise of their homogenization is that a low night-lights station is “rural,” and for every grid-cell, one such station is designated as the “reference,” whose trend all other stations are then forced to mimic. This is trend management of the most obvious subjective kind. And the tendentiousness of their management technique is amply evident in comparing the two versions of the USA48 anomalies. They differ substantially only at the extreme ends of the series, in what is an obvious attempt to maintain a consistent trend throughout the decades, rather than a genuine methodological change, as is advertised.

carrot eater
June 24, 2010 5:34 pm

MikeEdwards:
If there was a difference in trends between the urban station and the rural neighbors, then the method will try to get rid of them, as dixonstalbert outlines.
Exactly why the trend was different: this does not come into play. Maybe it was UHI. Maybe it was something else. Maybe it was an artifact of a step change at a station move. Maybe the rural neighbors had, for whatever reason, a higher warming trend than the urban station, so the urban station is adjusted to warm faster.
The algorithm doesn’t know or care why this would be the case; it just puts its head down and makes the urban stations look like the rural stations.
So the real question is, is it a good idea to neuter the urban stations in this way?
Or, put another way, do you think it’s a good idea to just eliminate the urban stations from the sample?
Because that’s roughly what this method is doing; any long term trends unique to the urban stations are not allowed into the result.
This is the context that Willis’s posts always miss. Along with the context of how little the adjustment affects the overall result.

June 24, 2010 6:10 pm

BOM has several records for Laverton at http://www.bom.gov.au/climate/data/weather-data.shtml
Untick the box “Only interested in open stations” to see them-
087031 Laverton RAAF, 087177 Laverton Comparison, 087032 Laverton Salines, 087086 Laverton explosives, 087065 Werribee Research Farm (8.8km away). It’s only 18.4km from BOM Regional Office in Melbourne.
I’m currently analysing Victorian climate data and should have a post up in a couple of days at
kenskingdom.wordpress.com
which will include Laverton. On ya janama and Willis.
Ken

Ripper
June 24, 2010 6:17 pm

“Does it make sense that after combining the data, the ‘combined’ result is often colder than any of the five individual datasets?”
It makes no sense to me, but does somehow to the CRU team.
Here are just a couple of examples where not one but two stations have been adjusted (in favour of a warm bias and lowering the 1961-1990 baseline) in Australia by CRU.
http://members.westnet.com.au/rippersc/gerojones1999line.jpg
http://members.westnet.com.au/rippersc/gerojones1999.jpg
http://members.westnet.com.au/rippersc/kaljones1999line.jpg
http://members.westnet.com.au/rippersc/kaljones1999sg.jpg
The BOM quality site is a bit of a joke IMO.
Here they have turned a 100 year flat trend over two stations 12 km apart and 65-odd metres different in elevation into a 1.6 degree upward trend in the Maximums.
http://members.westnet.com.au/rippersc/hchomog.jpg
Yet the data shows the earlier station minimums were much colder in winter
http://members.westnet.com.au/rippersc/hcjanjulycomp.jpg
Compare them with a plot of the CRU 2010 data, released after Climategate, which for Australia the Met Office purports to be “Based on the original temperature observations sourced from records held by the Australian Bureau of Meteorology”.
http://members.westnet.com.au/rippersc/hccru2010.jpg
The CRU 2010 data shows a cooling trend from 1950.
Bear in mind that the Halls Creek station is extrapolated over at least 1 million square km, or 15% of Australia’s land mass.
More Aus info here
http://www.warwickhughes.com/blog/?p=510

Ripper
June 24, 2010 6:18 pm

P.S. Looking forward to Anthony’s presentation in Perth on the 29th.

janama
June 24, 2010 6:28 pm

just a tip for all – the Weatherzone site is the way to access where the sites are physically.
here’s Laverton
http://www.weatherzone.com.au/vic/melbourne/laverton
scroll down and click on “full climatology »” and scroll to bottom of page and you’ll see the exact location of the site 37.8565°S 144.7566°E – remove the degree signs and paste into google earth. It will take you directly to the Stevenson box site.

carrot eater
June 24, 2010 6:37 pm

Willis
“One of the “adjustments” that GISS has made is to throw away thirty years of Australian data. ”
Where is that? I must have missed something. GISS just takes what NOAA gives them. If a record is less than 20 years long (I think), they toss it out.
“The field of climate science needs to have one agreed upon method for selecting temperature data, for combining records at the same location, for adjusting for UHI, and for area-averaging records.”
NO they absolutely don’t. I couldn’t disagree more. Because there simply isn’t any obvious best way to do any of these things. That’s why there’s value in different groups using different methods with roughly the same data – CRU, GISS, NCDC, the individual countries, and now a whole slew of bloggers as well – you see the effects of different choices in processing, you see what matters, what doesn’t matter. When it comes to hemispheric or global means, it’s remarkable how little processing choices matter; you get about the same results. But it’s still useful to have different people trying different things.
“It is that we get very, very different answers if we use the GISS data (adjusted or unadjusted) and the Australian HQ data.”
1. So?
2. No, they aren’t very, very different.
3. Differences are going to come up when different groups use different adjustment methods. GISS takes the raw, and then applies its crude adjustment to make it look like the rural stations. The BoM will, on the other hand, sit there with both field notes and statistical methods and try to specifically adjust for each little thing that happened there – station moves, micro-site stuff, whatever. Again, why is there this need for everybody to come to the exact same results? That’s weirdly bureaucratic.
“Your link above shows that the GISS UHI adjustments make absolutely no difference to the post-1950 data … is that supposed to impress me?”
The reason I show you that is to show that the adjustments you are so suspicious of have very little impact to the big picture. It’s something to keep in mind when obsessing over each individual adjustment.
“PS – carrot eater, we know for a fact that UHI exists and is a couple of degrees in many cities. ”
Yes. But based on that, you can’t just eyeball a global graph and know how much UHI is in there. I’m sorry, but you just can’t. You have to do some analysis. One simple thing to do is exactly what GISS does: not let any urban station carry its own trend into the analysis. The trends in GISTEMP are driven by the rural stations.
“Yes, I’d be happy if GISS did that … but what does that have to do with adjustments that are improper?”
It has everything to do with it, because that’s essentially what the GISS adjustment accomplishes. I don’t understand why this is so difficult to grasp, but it’s fundamental to understanding what GISS does. If you would be happy with GISS dropping the urban stations, then you can’t be upset with them doing what they do now.
Anyway, if you want to see what happens, you can go use ccc code to see what GISTEMP does when you do eliminate all the urban stations. I think Ron Broberg has posted such results.
As for your point about using light/dark for urban/rural in poor countries: that’s a reasonable objection, on the face of it, but separate from the point here. I think they think nightlights scales with energy usage, and UHI scales with energy usage. I’m not convinced, since UHI has as much to do with materials of construction, obstructions to convection and changes in surface moisture, as it does anything else, and you can have all these things in a poor country that’s dark at night. So without doing or seeing further analysis on poor countries, I’m not convinced by their choice there.

June 24, 2010 8:03 pm

carrot eater says:
June 24, 2010 at 3:41 pm “I haven’t looked at their data for this location, but the Australian BoM does something that’s probably more akin to what Willis seems to want – painstaking adjustments with manual human judgment, guided by field notes about station moves, etc”.
Reply. (1). Your earlier assertion that trend is more important than accurate absolute value is simply wrong. Ultimately, there is a national average, then a global average calculated. These are independent of trend. They are entirely dependent upon the accuracy of the temperature measurement.
Reply (2). Yes, the BoM does the things you state. Here is one result derived from the 100 years of annual average data from Laverton. It deals with the number of times a value appears among those 100. Let’s start with the whole numbers and half numbers.
13 deg 1 time
13.5 deg 3
14 deg 3
14.5 deg 4
15 deg 2
But, looking at intermediate values, we have
13.8 deg 13 times
14.1 deg 10
14.3 deg 10
So, only 3 values account for a third of the numbers reported and there is a tendency to avoid whole and half numbers. Does this seem like a natural data set? Emphatically no. It looks “adjusted”. If the adjustment is this far askew, what else is askew?
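A tally like the one above takes only a few lines of Python. The sample values below are made up for illustration; they are not the actual BoM Laverton series:

```python
from collections import Counter

def value_frequencies(annual_means, ndigits=1):
    """Count how often each annual-mean value (rounded to `ndigits`
    decimal places) appears in the series."""
    return Counter(round(v, ndigits) for v in annual_means)

# hypothetical annual means, for illustration only
sample = [13.8, 14.1, 13.8, 14.3, 13.8, 14.1, 13.5]
freq = value_frequencies(sample)
top = freq.most_common(3)   # the few most-repeated values
```

Run against a real century of annual means, a distribution where three values account for a third of the years would indeed stand out from what one expects of unrounded averages.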
carrot eater, I’m calling you out. Let’s use Laverton as an example. Please give your ideas on why the original BoM data, as read from the primary records, are suppressed in favour of the adjusted values from 1910 as referenced above.
Please explain how one can recover from the BoM, the original data, plus the time periods when adjustments were made, plus the magnitude of those adjustments. Please state why there are missing values as I posted above, and how they were infilled.
Remember, this is before the data are exported for GISS to play with.

carrot eater
June 24, 2010 8:54 pm

“Reply. (1). Your earlier assertion that trend is more important than accurate absolute value is simply wrong. Ultimately, there is a national average, then a global average calculated. These are independent of trend. They are entirely dependent upon the accuracy of the temperature measurement.”
I don’t think you understood what I mean. A temperature series that goes
1 1 2 2 1
is, so far as GISS is concerned, the same as a temperature series that goes
2 2 3 3 2
That’s what I mean that absolutes don’t matter, when what you’re ultimately calculating are anomalies. Trends are what matter. And the process of GISTEMP doesn’t calculate national averages. The grid boxes don’t know or care about political boundaries.
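The toy example can be made concrete: subtract each series’ own baseline mean and the two become identical anomaly series. An illustrative sketch (the constant one-degree offset between the series drops out):

```python
import numpy as np

def to_anomalies(series, baseline=slice(None)):
    """Express a series as departures from its own baseline mean."""
    s = np.asarray(series, dtype=float)
    return s - s[baseline].mean()

a = to_anomalies([1, 1, 2, 2, 1])
b = to_anomalies([2, 2, 3, 3, 2])
# a and b are now the same series of anomalies
```

This is why a fixed error in a station’s absolute calibration does not affect an anomaly-based trend analysis, while a drift or step change in the record does.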
As for the rest of it, I’m not interested in numerology.

Faustino
June 24, 2010 9:28 pm

I’ve come across a letter of mine published in The Australian, our national newspaper, on 25 Sept 2007. It still seems pertinent, in the light of data concerns which have arisen since 2007, and particularly as the new Prime Minister Julia Gillard has acknowledged the lack of a popular consensus on AGW policies.
“Letter to the Editor, The Australian. Published 25/9/07 (lead letter)
Climate change is the natural condition of the Earth. Several questions must be answered before policy decisions are made to attempt to modify the rate of change.
Is the Earth really warming at an unusual rate? If so, is this a problem and what are the costs and benefits of climate change? What is the cause of any change and can policy-driven human actions significantly affect the future rate of change and the level of global temperature? What are the costs and benefits of such actions? Are they worthwhile?
Clearly, no one can decisively answer all of these questions. All policy proposals are based on a high degree of ignorance and uncertainty, which should be recognised. Yet none of the questions were addressed [in an op-ed piece by a government minister]. It would be better to focus our efforts on developing a clearer understanding of climate change than on pursuing ad hoc and disparate measures, many of which will clearly not be cost-effective.
Michael Cunningham, West End, Qld”
I think that Gillard should use this as a starting point.

AC of Adelaide
June 24, 2010 11:04 pm

Speaking as a scientist, just looking at the graph displayed at “Willis Eschenbach says: June 24, 2010 at 5:53 pm” above, I think that anyone who draws a straight line through that data set as displayed is either heroic or has rocks in their heads.
You can fiddle with it as much as you like, but until you have another hundred years of data or can explain the fluctuations, all you can reliably say is that temperature goes up and temperature goes down. I mean, just look at those huge drops. What are they all about? I doubt it’s the UHI effect.
Now you can give that data to NASA to have a fiddle with, but after you’ve read Case Study 12 of D’Aleo and Watts (“show this to Jim then hide it”), which shows that it is apparently standard practice to alter “raw data” and then delete the original, and when you’ve digested that, read http://wattsupwiththat.com/2010/03/30/nasa-data-worse-than-climate-gate-data-giss-admits/#more-17958 where NASA says its data is even worse than the Uni of East Anglia’s, for heaven’s sake – then any output, “homogenised” or otherwise, is just not credible, however you tart it up.
I guess I’m addressing this to various people out there who eat vegetables.

June 25, 2010 12:00 am

carrot eater says:
June 24, 2010 at 8:54 pm “As for the rest of it, I’m not interested in numerology.”
You are caught out badly on selective quotation. Whereas you state “And the process of GISTEMP doesn’t calculate national averages” my statement was “Ultimately, there is a national average, then a global average calculated.” I did not limit my comment to GISTEMP. Would you care to answer how an anomaly temperature is calculated if not from the mean of absolute numbers, whose accuracy is vitally important as a base, irrespective of trend?
Of course accuracy matters. Only a novice would argue otherwise.
Thank you for taking up the challenge. You do not score points for telling people to do the near impossible, then failing to show that you can do it.
I’m not dealing with numerology in the sense of predicting horse races from past patterns of winners. I’m talking about a natural number set that ought to have an explainable distribution. This one does not. Why, oh wise carrot eater?

carrot eater
June 25, 2010 4:48 am

Geoff Sherrington:
“Would you care to answer how an anomaly temperature is calculated if not from the mean of absolute numbers, whose accuracy is vitally important as a base, irrespective of trend?”
Seriously? Maybe I’m misunderstanding what you are saying, but this looks like an elementary misunderstanding of how anomalies are calculated. The only time the absolute numbers are averaged together are to find the monthly means at any given location from the daily observations – something that happens before the data gets to NOAA or GISS or CRU.
Regardless of how you combine the stations – RSM, CAM or FDM, you aren’t simply averaging together absolute values from different locations.
If I’m not misunderstanding you, and this is actually a point of contention, then I would have you work out a simple example of how you think anomalies are calculated. Start with my silly example of
1 1 2 2 1
and
2 2 3 3 2
