More on the NIWA New Zealand data adjustment story

NIWA issued a response statement regarding the charges leveled by The NZ Climate Science Coalition here:

http://www.niwa.co.nz/our-science/climate/news/all/niwa-confirms-temperature-rise

They say:

Warming over New Zealand through the past century is unequivocal.

NIWA’s analysis of measured temperatures uses internationally accepted techniques, including making adjustments for changes such as movement of measurement sites. For example, in Wellington, early temperature measurements were made near sea level, but in 1928 the measurement site was moved from Thorndon (3 metres above sea level) to Kelburn (125 m above sea level). The Kelburn site is on average 0.8°C cooler than Thorndon, because of the extra height above sea level.
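The size of the altitude adjustment NIWA describes can be sketched with a back-of-the-envelope calculation. This is a hypothetical illustration, not NIWA’s actual method: the 6.5°C/km figure below is the ICAO standard-atmosphere lapse rate, an assumption on my part, since NIWA does not say here how its 0.8°C figure was derived.

```python
# Hypothetical sketch: expected temperature offset from a station altitude
# change, using the ICAO standard environmental lapse rate. The real local
# lapse rate varies with humidity and weather, so this is only a first guess.
LAPSE_RATE_C_PER_M = 6.5 / 1000.0  # ICAO standard atmosphere: 6.5 C per 1000 m

def altitude_offset(alt_old_m: float, alt_new_m: float) -> float:
    """Expected cooling (in C) when moving a station from alt_old_m up to alt_new_m."""
    return (alt_new_m - alt_old_m) * LAPSE_RATE_C_PER_M

# Thorndon (3 m) to Kelburn (125 m): roughly 0.8 C cooler at the new site.
print(round(altitude_offset(3, 125), 2))  # → 0.79
```

Note that the standard lapse rate over the 122 m height difference lands almost exactly on NIWA’s quoted 0.8°C, which is a point taken up in the comments below.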

I’m not too impressed, especially when you see where the weather station for the National Institute of Water and Atmospheric Research (NIWA) is: right on the rooftop, next to the air conditioners:

Note also the anemometer mast, which identifies the weather station.

Here is the station survey: NIWA_station_survey (PDF) and the Google Earth KML file

Thanks to Dieuwe de Boer, who did a good portion of the station surveys in New Zealand last year.

The NZ Climate Science Coalition responds:

NIWA’s explanation raises major new questions

The NIWA climate controversy took a new twist tonight with the release of new data from the government-run climate agency.

Reeling from claims that it has massaged data to show a 150-year warming trend where there isn’t one, NIWA’s chief climate scientist David Wratt, an IPCC vice-chair on the 2007 AR4 report, issued a news release stating that adjustments had been made to compensate for changes in sensor locations over the years.

While such an adjustment is valid, it needs to be fully explained so other scientists can test the reasonableness of the adjustment.

Wratt is refusing to release data his organisation claims to have justifying adjustments on other weather stations, meaning the science cannot be reviewed. However, he has released information relating to Wellington temperature readings, and they make for interesting reading.

Here’s the rub. Up until 1927, temperatures for Wellington had been taken at Thorndon, only 3 m above sea level and an inner-city suburb. That station closed and, as I suspected in my earlier post, there is no overlap data allowing a comparison between Thorndon and Kelburn, where the gauge moved, at an altitude of 125 metres.

With no overlap of continuous temperature readings from both sites, there is no way to truly know how temperatures should be properly adjusted to compensate for the location shift.

Wratt told Investigate earlier there was international agreement on how to make temperature adjustments, and in the news release tonight he elaborates on that:

“Thus, if one measurement station is closed (or data missing for a period), it is acceptable to replace it with another nearby site provided an adjustment is made to the average temperature difference between the sites.”

Except that it all hinges on the quality of the reasoning that goes into making that adjustment. If it were me, I would have put a temperature station back up at the disused location and worked out the average offset between Thorndon and Kelburn over a year. It’s not perfect (after all, we are talking about a switch in 1928), but it would be something. But NIWA didn’t do that.

Instead, as their news release records, they simply guessed that the readings taken at Wellington Airport would be similar to Thorndon’s, because both sites are only a few metres above sea level.

The Airport records temperatures about 0.79°C above Kelburn on average, so NIWA simply said to themselves, “that’ll do”, and made the Airport/Kelburn offset the official offset for Thorndon/Kelburn as well, even though no comparison study of the latter pair has ever been done.

Here’s the raw data, from NIWA tonight, illustrating temp readings at their three Wellington locations since 1900:

What’s interesting is that if you leave Kelburn out of the equation, Thorndon in 1910 is not far below the Airport in 2010. Perhaps that gave NIWA some confidence that the two locations were equivalent, but I’m betting Thorndon a hundred years ago was very different from an international airport now.

Nonetheless, NIWA took its one-size-fits-all “adjustment” and altered Thorndon and the Airport to match Kelburn for the sake of the data on their website and for official climate purposes.

In their own words, NIWA describes its logic thus:

  • Where there is an overlap in time between two records (such as Wellington Airport and Kelburn), it is a simple matter to calculate the average offset and adjust one site relative to the other.
  • Wellington Airport is on average 0.79°C warmer than Kelburn, which matches well with measurements in many parts of the world of how rapidly temperature decreases with altitude.
  • Thorndon (closed 31 Dec 1927) has no overlap with Kelburn (opened 1 Jan 1928). For the purpose of illustration, we have applied the same offset to Thorndon as was calculated for the Airport.
  • The final “adjusted” temperature curve is used to draw inferences about Wellington temperature change over the 20th century. The records must be adjusted for the change to a different Wellington location.
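The overlap-offset step NIWA describes in the first bullet can be sketched as follows. The data here is made up for illustration; these are not the real Wellington series, and the function names are my own.

```python
# Sketch of the overlap-offset method: measure the average difference between
# two sites over their common dates, then shift one record onto the other's
# scale. Illustrative toy data only.
def mean_offset(site_a, site_b):
    """Average of (a - b) over dates present in both records."""
    common = set(site_a) & set(site_b)
    if not common:
        raise ValueError("no overlapping dates - offset cannot be measured")
    return sum(site_a[d] - site_b[d] for d in common) / len(common)

def adjust_to(site, offset):
    """Shift a whole record by a fixed offset so it is comparable to another site."""
    return {d: t - offset for d, t in site.items()}

# Toy monthly records keyed by date string.
airport = {"1972-01": 14.1, "1972-02": 14.5}
kelburn = {"1972-01": 13.3, "1972-02": 13.7}
offset = mean_offset(airport, kelburn)    # about +0.8 here
airport_adj = adjust_to(airport, offset)  # Airport expressed on the Kelburn scale
```

The contested step is the last bullet: Thorndon has no common dates with Kelburn, so `mean_offset` cannot be computed for that pair at all, and the Airport’s offset was substituted instead.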

Now, it may be that there was a good and obvious reason to adjust Wellington temps. My question remains, however: is applying a temperature offset from a site 15 km away, in a different climate zone, a valid way of adjusting historical data?

And my other question to David Wratt also remains: we’d all like to see the methodology and reasoning behind the adjustments at all the other sites as well.

December 9, 2009 5:39 pm

I think for the most part what you suggest is equivalent to what is being done.
Averaging the deltas is no different than averaging the temperature, and subtracting the mean. Where there are gaps, you also have gaps in the deltas, and you have to weight accordingly.
As for averaging to compute a grid value – that’s an attempt to account for the temperature of a volume of air (that in the grid). Then the global temp is the sum of all of those divided by the number of grids (or area in grids, or whatever).
In other words, creating grids is one way of dealing with the geographic sparsity and inhomogeneity of data. It seems like a relatively primitive approach – I’m sure there are much better ways of computing the average of a two-dimensional area with unevenly spaced data points than just putting them in grids.
My guess is the gridding is done for two reasons (and this really is a guess):
1) simplicity – especially before a lot of computing power became available
2) to feed and match models. GCMs use grids also, so having gridded temperature data lets you match them up.
As for trying to create a time series at one point, rather than using the disparate time series as they appear in the actual record… I would guess that’s also a simplicity sort of thing. Again, I’d bet a good inferential statistician would have a more general way of doing it. However, even then, you have to adjust the individual station records where possible for actual events (changing an instrument, nearby environment changes, etc), and also for longer term stuff like UHI.
So really, what they are doing is a simplistic approach to achieving both station temperature series (which are useful for local climatological data – which is used directly in weather forecasting, among other things) and global averages.
There is an advantage in the simplistic approach: it makes the operation more visible, which is useful for quality control. If you take all the series, feed them into some sophisticated statistical number cruncher, it’s hard to know whether what you got out was right or had artifacts or bugs in it.
Not being familiar with the literature, I would hope there are papers out there using more sophisticated techniques also.
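The grid-averaging idea described above can be sketched roughly like this. This is a guess at the general shape of the technique, not any agency’s actual code, and the coordinates are invented:

```python
# Minimal sketch of grid averaging: bin stations into lat/lon cells, average
# within each cell, then average the cell means. This stops a dense cluster of
# stations from dominating the large-area average.
from collections import defaultdict

def grid_average(stations, cell_deg=5.0):
    """stations: list of (lat, lon, temp). Returns the mean of per-cell means."""
    cells = defaultdict(list)
    for lat, lon, temp in stations:
        key = (int(lat // cell_deg), int(lon // cell_deg))
        cells[key].append(temp)
    cell_means = [sum(v) / len(v) for v in cells.values()]
    return sum(cell_means) / len(cell_means)

# Three stations crowded into one cell no longer outvote a lone station elsewhere.
data = [(10.1, 20.2, 15.0), (10.3, 20.4, 15.2), (10.2, 20.1, 15.4), (40.0, 60.0, 5.0)]
print(round(grid_average(data), 2))  # → 10.1
```

A plain mean of all four stations would be pulled toward the cluster; the gridded version weights the two regions equally, which is the crude area-weighting the comment describes.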

Nick
December 9, 2009 6:01 pm

Averaging the deltas is no different than averaging the temperature, and subtracting the mean. Where there are gaps, you also have gaps in the deltas, and you have to weight accordingly.
Explain more. It’s the filling in of the gaps that is the issue, because these are the adjustments, and that involves assumptions.
Here is some made up data. How would you join them up?
Day A B C D
1 11.92
2 10.24
3 10.59 10.38
4 10.08 11.14 11.85
5 11.12 11.52
6 10.97 10.97 11.8
7 11.19 10.19 11.32
8 11.69 11.83 10.12
9 11.48 11.32
10 10.34 11.67
11 11.02 11.55

Glenn
December 9, 2009 9:08 pm

John Moore (21:33:22) :
“You get better information because, *on average* the unsaturated adiabatic lapse rate is a pretty good estimate – because the atmosphere tends to be mixed (much more so farther above ground than the 1M thermometers, unfortunately). That does not mean that, for one station, you get better information. It means that, in a large average, you should. Not correcting for altitude, on average, is guaranteed to get you worse data, which is my whole point.”
Lapse rate is influenced by more than one factor, including temperature itself, and in some instances does not apply or is reversed. Lapse rate does not take into account air currents, geographical influences, etc.
There is no such thing as an “average” lapse rate, but if there were it would probably be seen to change as climate changes. Average implies “some more, some less”. But even were an “average” rate applied to a “large” number of stations there is no reason to assume that data would be accurate to any degree, unless the average rate of the “large” area was already known. And that comes back to the simple example of, if an average is known, then one station could suffice for the temperature of a whole region, or the whole earth.
Not correcting for altitude guarantees the best data that can be drawn from records.
Trying to string apples and oranges together doesn’t guarantee anything but garbage.

December 9, 2009 11:07 pm

Sorry, Glenn, but that’s just nonsense. There is such a thing as an average lapse rate. It would *not* change as climate changes, because it is an atmospheric constant that is not sensitive to CO2 changes (or much of anything else).
The average IS known. It is 9.8C/km. You don’t need to go around measuring things to know that, because it is a result of SIMPLE THERMODYNAMICS.
Hence applying that average will improve the average. Not applying it means you have one of two choices:
1) using uncorrected data, which is almost certainly wrong
2) not using the data at all, which reduces the information you have.
Again… we are talking about two things:
1) using averages to improve the signal in averages (in other words, I’m not claiming that using the average works for every case)
2) the thermodynamically derived constant which is the unsaturated adiabatic lapse rate of lifted air parcels.

Nick
December 10, 2009 3:13 am

Not correcting for altitude guarantees the best data that can be drawn from records.
===============
Why?
Why do you even need to stitch the two records together?
Why not just treat each station as completely separate stations?

Glenn
December 10, 2009 2:35 pm

John Moore (23:07:42) :
“Sorry, Glenn, but that’s just nonsense. There is such a thing as an average lapse rate. It would *not* change as climate changes, because it is an atmospheric constant that is not sensitive to CO2 changes (or much of anything else).
The average IS known. It is 9.8C/km. You don’t need to go around measuring things to know that, because it is a result of SIMPLE THERMODYNAMICS.”
The average adiabatic lapse rate is NOT the dry lapse rate, and varies considerably. The dry lapse rate is about 10C/1000m, the wet lapse rate is about 5C/1000m. The average lapse rate is 6.5C/1000m.
But this average is not an average lapse rate of the entire atmosphere of the earth, or of any area in any given period of time, say New Zealand 1912 to 1928. This is what I meant by there being NO “average” lapse rate and that actual rates change with climatic changes.
The claim that an average lapse rate is the dry lapse rate, and that either rate isn’t “sensitive to” other factors, indicates a profound misunderstanding of the subject.
These terms “simple thermodynamic” and “atmospheric constant” you use actually reinforce that inference. There are MANY factors, variables if you wish, that determine and affect actual lapse rates. These factors are the descriptors of a specific climate, or weather conditions. And the climate changes.
Thorndon was lowered by 0.79°C, which is a value that corresponds to an average lapse rate of 6.5°C/1000 m. Using the different rates:
Dry: 9.8°C/1000 m = 1.19°C / 121 metres
Average: 6.5°C/1000 m = 0.79°C / 121 metres
Wet: 5°C/1000 m = 0.61°C / 121 metres
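The arithmetic behind those three figures, made explicit (121 m is the height difference the comment uses; NIWA’s own station heights imply 122 m):

```python
# Offsets implied by each lapse rate over the 121 m height difference used above.
rates_c_per_km = {"dry": 9.8, "average": 6.5, "wet": 5.0}  # C per 1000 m
dz_m = 121
offsets = {name: rate * dz_m / 1000.0 for name, rate in rates_c_per_km.items()}
# dry ≈ 1.19, average ≈ 0.79, wet ≈ 0.61 (all in °C)
```

Only the "average" (ICAO) rate reproduces the 0.79°C adjustment, which is the commenter’s point.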
I believe WIki is correct:
“As an average, the International Civil Aviation Organization (ICAO) defines an international standard atmosphere (ISA) with a temperature lapse rate of 6.49 K(°C)/1,000 m”
“The standard atmosphere contains no moisture.”
No moisture??

nickleaton
December 10, 2009 2:48 pm

No moisture? Of course it’s a wrong assumption, and the problem is that the assumption has to be based on the location concerned.
You would have to adjust on a day by day basis using the observed humidity to adjust for the lapse rate.
However they haven’t done that.
However, I still think it’s the wrong approach. I see no need to stitch together temperature records from different sites and different equipment to make one temperature record for a particular site (process and adjust the data), and then use this record and process it further.
Nick

Glenn
December 10, 2009 2:51 pm

Nick (03:13:09) :
“Why do you even need to stitch the two records together?
Why not just treat each station as completely separate stations?”
I haven’t argued for stitching these records together as they have. The need to determine a trend is legitimate, but it must not override scientific discipline. I disagree completely with what John has said so far, but I don’t think the concept of adjusting temperature by altitude is completely invalid. It is likely not very accurate, though, and to be taken with a grain of salt when the adjustment is equal or close to the warming trend it produces.
NIWA could have looked at rainfall and sunshine data that existed concurrently with the temperature data and interpolated a difference for the elevation change. What strikes me, though, is the number they used (0.79°C for 121 metres, EXACTLY the ICAO rate), which leads me to suspect they “didn’t need no stinkin’ rainfall records”.

Nick
December 10, 2009 3:23 pm

On the use of 0.79, it’s plainly wrong.
You can show its wrong by taking the humidity figure for the station, looking up the appropriate lapse rate, and show that 0.79C isn’t correct.
On the stitching, it is a question of trend as you say, and it needs to be a justified scientific position.
What I’m asking is can you determine the trend without stitching?
Think about 4 stations, some with missing data, in different locations, with some overlapping records and some disparate records. Sketch a graph.
Ask yourself, without stitching them together can you determine a trend?
ie. Lets say we have one station. Can we compute the trend? Yes, we fit a line to the curve.
Now, if we have two stations with the same start and end date. What’s the trend?
Now if we have two stations. First one starting before the second. First ending before the second, with a period of overlap. What’s the trend?
What’s the trend if they don’t overlap?
ie. Start with the simple cases and make them more complex and ask yourself, what’s a reasonable and justified approach to working out a trend?
Can you do this without a lapse rate?
Nick

December 10, 2009 5:04 pm

You can show its wrong by taking the humidity figure for the station, looking up the appropriate lapse rate, and show that 0.79C isn’t correct.

I refer you to: http://en.wikipedia.org/wiki/Adiabatic_lapse_rate#Dry_adiabatic_lapse_rate for a discussion on lapse rates. Please tell me the appropriate lapse rate and humidity.

ie. Start with the simple cases and make them more complex and ask yourself, what’s a reasonable and justified approach to working out a trend?

There are lots of ways to make a trend. One does not have to stitch the stations together. However, one does have to adjust for differences in station conditions, except for where there is perfect overlap of time and nothing has changed.
One of those conditions, in the case of interest, where they don’t overlap, is altitude. So the answer to:
“Can you do this without a lapse rate?”
Is: no, not in this case, unless you have better information from some other source.

Nick
December 10, 2009 5:29 pm

You are asking the wrong question John.
Take two stations A and B. A is near B. A is lower than B. We have an old record for A, we have a newer record for B. That’s the example in question.
What you are saying is that even though we don’t have a recent record for A, we can reconstruct it by subtracting the appropriate temperature from the record for B. (Standard environmental lapse rate * altitude) You could of course do it the other way round.
1. It’s wrong because lapse rate depends on humidity. The 0.79C is for completely dry air.
2. The lapse rate isn’t constant; it’s a variable, since it depends on humidity, and that varies over time.
3. It’s the wrong question. You’re saying we have to answer the question, what was the absolute temperature in the past. Why not just ask the question, has it warmed?
So take two sites.
A with a record from 1900 to 1960
B with a record from 1950 to 2000.
If A warms 1900 to 1960, and B warms 1950 to 2000, we can safely assume that 1900 to 2000 things have warmed. Note that there is no need what so ever to assume lapse rates.
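Nick’s point here can be sketched in code: ask “has each record warmed?” separately, instead of splicing them into one absolute series. A minimal sketch with made-up numbers (the simple least-squares slope is my choice of trend estimator, not something Nick specifies):

```python
# Per-station trend test: fit an ordinary least-squares slope to each record
# independently. No lapse rate and no splicing is involved.
def slope(years, temps):
    """Ordinary least-squares trend in C per year."""
    n = len(years)
    my, mt = sum(years) / n, sum(temps) / n
    num = sum((y - my) * (t - mt) for y, t in zip(years, temps))
    den = sum((y - my) ** 2 for y in years)
    return num / den

# Illustrative records: A covers 1900-1960, B covers 1950-2000.
a_years, a_temps = [1900, 1920, 1940, 1960], [12.0, 12.1, 12.3, 12.4]
b_years, b_temps = [1950, 1970, 1990, 2000], [11.0, 11.2, 11.3, 11.5]

# Both slopes positive -> warming in both periods, with no assumption about
# the absolute offset between the two sites.
print(slope(a_years, a_temps) > 0 and slope(b_years, b_temps) > 0)  # → True
```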
Nick

December 10, 2009 5:44 pm

The unsaturated adiabatic lapse rate is correct under the conditions of perfect mixing – i.e. it is the lapse rate you get from non-condensing convection. A real atmosphere has a variable lapse rate that is usually somewhat less than the dry adiabat trajectory. This is because the temperature is not determined solely by convective processes. See below. I goofed on which lapse rate to use, but not on the overall question (at least by this criticism).
————-
I wrote “The average IS known. It is 9.8C/km. You don’t need to go around measuring things to know that, because it is a result of SIMPLE THERMODYNAMICS.”
Glenn wrote:
“The average adiabatic lapse rate is NOT the dry lapse rate, and varies considerably. The dry lapse rate is about 10C/1000m, the wet lapse rate is about 5C/1000m. The average lapse rate is 6.5C/1000m.”
Glenn is correct. The *dry* lapse rate (9.8C) is what you get from purely convective processes – I had confused the average with the thermodynamic ideal, because I use the thermodynamic lapse rate more often – for computing convective instability, where it is appropriate.
I would again re-iterate, however, that the dry lapse rate is NOT dependent on the humidity. The reason it was incorrect to use is not because the air is not dry, but because processes other than pure non-condensing lifting actually go into making up an atmospheric column’s temperature trajectory.
Notice, however, that the average lapse rate gives you approximately their adjustment, which is the point. In the absence of better information (comparable stations, specific weather information such as which days had cloud bases below 135 meters but above sea level, etc.), adjusting by an average lapse rate is a reasonable thing to do.

Nick
December 10, 2009 5:53 pm

OK. Let’s say the station has moved up the hill.
Does adiabatic expansion apply?
Not really. The air will be in contact with the ground, so the adiabatic assumption doesn’t apply.
See the mechanism behind katabatic and anabatic (in particular) winds.
Nick

December 10, 2009 7:09 pm

Nick,
Adiabatic applies to the air which rises (through convective processes) to the level of the station. This sets the environmental temperature at that level in the general area. If the air at the station “ignores” that, it will rise or sink until the air at that station is at the appropriate temperature. The fact that the air right at the station is in contact with the ground would only be the determining factor if that air was isolated from all the air around it. The real atmosphere doesn’t work that way.
Now, as was properly pointed out above, the dry adiabatic is not the correct lapse rate to use (contrary to my original assertion), because the environmental lapse rate is normally less than that. The correct lapse rate to use is the exact environmental lapse rate at the point at that time, which, of course, we don’t actually know (and if we narrow it to that point, is irrelevant anyway because of the ground contact).
However, if we want to make a time series from two nearby stations which are not at the same altitude, we have to do something to adjust for the fact that, normally, the higher station will be cooler. So what can we do other than make some estimate of the lapse rate?
There are a number of answers to this:
1) don’t join the two series. Just use the differences within the individual series. That, however, leaves a gap around the time when the two stations didn’t have concurrent records. For global temperatures, that might still be okay, though, since not all stations would have gaps at the same time.
2) use the earth’s average environmental lapse rate
3) try to get an even better estimate of an average lapse rate
4) use other stations that overlap the gap and thus give us a clue about the temperature trends at the time of the gap. This is part of the approach used by the GHCN.
Got other ideas? I doubt I’ve identified every approach.

Nick
December 10, 2009 7:25 pm

Please read up on katabatic and anabatic processes. I can’t put it subtly.
Both apply in mountainous regions.
Instead of air behaving adiabatically (no external heating), the air stays in contact with the ground and so is hotter than you would expect.
You can see this in mountainous regions where the cloudbase (transition from dry to wet) occurs at higher altitudes over the peaks compared to the valleys.
A constant lapse rate leads to an overestimation of the temperature record.
Nick

December 10, 2009 7:50 pm

I am well aware of katabatic and anabatic processes. Katabatic winds are rather well known if you study meteorology in the US intermountain region where I live. My home is about 400ft above the NWS station in the Phoenix area, in steep hills, and I have felt the katabatic winds in the early evening (it’s really pretty neat – it blows down this canyon near my house, crossing the road in a stream only about 3 meters across, at about 2 m/s). My home is sometimes warmer than the NWS station in the evenings, and cooler during the day.
There are other effects also. I have never said that adjusting station data is either easy or exact. Rather, I said that using a lapse rate in this case is better than doing nothing at all, and more importantly, is certainly not a sign of bad faith data meddling.
“A constant lapse rate leads to an over estimation of the temperature record.”
I have no idea what you mean by that. Do you mean that 6.5C gives an overestimation? Or 10C does? or 1C does? Will it do this for all stations?
Here is my challenge to you:
What would YOU do with the temperature record, given two stations, separated by about 130 meters in altitude, near each other, with no other information, and with an adjoining but not overlapping temperature time series?

Ripper
December 10, 2009 8:16 pm

Check out the Halls Creek graphs
http://members.westnet.com.au/rippersc/hallscreek.jpg
http://members.westnet.com.au/rippersc/hallsmin.jpg
http://members.westnet.com.au/rippersc/hallsmean.jpg
The new station is 62 metres higher and 12 km away, but actually reads hotter minimums, although the maximums are almost identical over the overlap.
That is the exact opposite of what I thought would happen.
It really highlights the problem of trying to merge stations

December 10, 2009 8:22 pm

I agree that merging stations is tough.
I think it highlights the problem that surface temp data in general is really crappy stuff.

Glenn
December 10, 2009 9:25 pm

John Moore (17:44:53) :
“Notice, however, that the average lapse rate gives you approximately their adjustment, however, which is the point. In the absence of better information (comparable stations, specific weather information such as which days had clouds bases below 135 meters but above sea level, etc), adjusting by an average lapse rate is a reasonable thing to do.”
Why is that the point? What evidence is there that the difference between Kelburn and the Airport is a result of the adiabatic lapse rate? I’ll remind you that there are stations nearby at the same altitude with substantial (relative to the trend) average temperature differences, and that stations close by at different altitudes often have their temperatures reversed.
I don’t see anything reasonable about blindly adjusting a hundred year old station without taking into consideration at least some of the “etceteras”, or doing so at all if not. And I fail to see why you don’t understand that changes in weather can affect local temperatures and behaviors.

Glenn
December 10, 2009 9:35 pm

John Moore (19:50:16) :
“What would YOU do with the temperature record, given two stations, separated by about 130 meters in altitude, near each other, with no other information, and with an adjoining but not overlapping temperature time series?”
Wanting something doesn’t justify the use of any trick when you got no information. What would you do if all you had is that the one station opened the same day the other closed, and the temp was .2C different?
“Thorndon (closed 31 Dec 1927) has no overlap with Kelburn (opened 1 Jan 1928).”
http://www.niwa.co.nz/our-science/climate/news/all/niwa-confirms-temperature-rise/combining-temperature-data-from-multiple-sites-in-wellington

Bob
December 10, 2009 10:27 pm

Glenn:
“Thorndon (closed 31 Dec 1927) has no overlap with Kelburn (opened 1 Jan 1928).”
At the risk of being repetitive, this is not correct. There was overlap – data was collected at Kelburn in 1927. I have no idea what happened to the data, but the commentary from NIWA on this point is deceptive. Is it really likely that it was lost?

December 10, 2009 10:30 pm

“I don’t see anything reasonable about blindly adjusting a hundred year old station without taking into consideration at least some of the “etceteras”, or doing so at all if not.”

If you know the etceteras, you take them into account. I don’t know any etceteras other than the altitude change. If there are others, bring them out and we can add them to the discussion.

“And I fail to see why you don’t understand that changes in weather can affect local temperatures and behaviors.”

As I have stated over and over again, lapse rate is not the only difference – I even gave an example from my own house of how station data is affected by other factors. Hence I can only conclude that you are not paying attention to what I have been saying.
Glenn… you didn’t answer my question, but rather responded with one of your own.
As for the .2C delta-T – they weren’t on the same day as far as I can tell, so I don’t know how we could use that information. Do you?
As for some trick “when you got no information” – they did have information. They had the altitude difference. That’s not very good information, but it’s not zero information and it is definitely relevant. When you use crappy data, you have to be satisfied with “not very good.”
Notice I am not defending the use of this sort of data for making extravagant global warming claims.
So let me be absolutely clear: I think surface temperature data is noisy, messy stuff. It isn’t very good data. This means that the massaged result isn’t very good either, although aggregates can be better than individual station records for statistical reasons.
It does not mean that an attempt to join two station records is some sort of scientific misconduct or fraud when they use a reasonable temperature adjustment for the differences in altitude – even given everything we know about how that adjustment may be wrong. Nothing I have seen on this thread is evidence of misconduct or fraud – unless I am missing some critical point – which is always possible given limited time and the amount of information flying around.
I am a skeptic of AGW. I just don’t believe in crying wolf when we are really seeing a puppy! It’s more like the behavior of the alarmists.

Glenn
December 10, 2009 11:34 pm

John Moore (22:30:57) :
“As for the .2C delta-T – they weren’t on the same day as far as I can tell, so I don’t know how we could use that information. Do you?”
Well I don’t know about you but if the thermometers were .2C different tomorrow from what it was today, and some guy came along and said no, because of the average lapse rate, the thermometers were really .6C different (and in a different direction), I’d be wondering whether to trust the thermometers or the guy.
Had the temps overlapped by one day, what would you think could be done with that?
A week? A month? The temps do not show a divergence of 0.8°C over the four years around the changeover (two either side), according to either the adjusted or unadjusted graph.

Nick
December 11, 2009 4:46 am

What would I do?
OK. We have two temperature records, different sites, with an overlap.
Doesn’t matter that there is a height difference. From what I can see there is no argument about either record. [Or for the moment assume they are quality stations]
You have three periods.
Site A only
Site A and B
Site B only.
Trend is then trend for site A, joined to average Trend for A and B, joined to trend for B for each period respectively.
That gives a reasonable estimate of the increase and decrease.
1. Lapse rate? What lapse rate? Doesn’t come into it at all, so no assumptions are needed.
2. All data has an equal weight
3. All data is used.
4. The question is asked is what’s the temperature increase and that’s what it calculates. It’s not asking what’s the absolute temperature.
5. There is no joining of dots. It even works for missing records.
6. Time of observation bias adjustment? Not needed. If the time of observation changes, split into two series, just like a site moving or a temperature device changing.
7. No need to average averages.
8. No need to select your favourite stations.
What doesn’t it deal with?
Doesn’t deal with the UHI.
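The three-period scheme above can be sketched as follows. This is my reading of the proposal, not Nick’s code; the least-squares slope and the dict-of-years representation are my own choices:

```python
# Piecewise trend: slope of A alone before the overlap, the average of the two
# stations' slopes during the overlap, and slope of B alone after it.
def slope(pts):
    """Least-squares slope of [(year, temp), ...] pairs, in C per year."""
    n = len(pts)
    mx = sum(x for x, _ in pts) / n
    my = sum(y for _, y in pts) / n
    num = sum((x - mx) * (y - my) for x, y in pts)
    den = sum((x - mx) ** 2 for x, _ in pts)
    return num / den

def piecewise_trend(a, b):
    """a, b: dicts year -> temp. Returns (trend A-only, overlap average, trend B-only)."""
    overlap = sorted(set(a) & set(b))
    a_only = sorted(set(a) - set(b))
    b_only = sorted(set(b) - set(a))
    t1 = slope([(y, a[y]) for y in a_only])
    t2 = (slope([(y, a[y]) for y in overlap]) + slope([(y, b[y]) for y in overlap])) / 2
    t3 = slope([(y, b[y]) for y in b_only])
    return t1, t2, t3

# Toy data: A 1900-1960, B 1950-2000, both warming at 0.005 C/year.
a = {y: 12 + 0.005 * (y - 1900) for y in range(1900, 1961, 10)}
b = {y: 11 + 0.005 * (y - 1950) for y in range(1950, 2001, 10)}
trends = piecewise_trend(a, b)
```

Note that the 1 °C offset between the two toy records never enters the answer: each piece of the trend is computed within a single station’s own scale, which is the feature being claimed.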
Nick

December 11, 2009 9:14 am

I think we are talking past each other. I have no information that contains an overlap (see: Bob (22:27:21) : ).
Nick:
My question was: ““What would YOU do with the temperature record, given two stations, separated by about 130 meters in altitude, near each other, with no other information, and with an adjoining but not overlapping temperature time series?” Hence your response is not to my question. I have no difficulty with your method for the case that you assert – it is obviously the reasonable way to do things.
However, with no overlap data between Thorndon and Kelburn, we can’t use it.
Glenn, you write “if the thermometers were .2C different tomorrow from what it was today, and some guy came along and said no, because of the average lapse rate, the thermometers were really .6C different” – yes, if that were the case. But we are dealing with a .2C difference between two thermometers, at different locations, on different days. I don’t see any way to use that information.

All that being said, I find the statement “Warming over New Zealand through the past century is unequivocal.” to contain way too much certainty and they shouldn’t have said it.