A New System for Determining Global Temperature

Guest essay by Mike Jonas

Introduction

There are a number of organisations that produce estimates of global temperature from surface measurements. They include the UK Met Office Hadley Centre, the Goddard Institute for Space Studies (GISS) and Berkeley Earth, but there are others.

They all suffer from a number of problems. Here, an alternative method of deriving global temperature from surface measurements is proposed, which addresses some of those problems.

Note: The terms global temperature and regional temperature will be used here to refer to some kind of averaged surface temperature for the globe or for a region. It could be claimed that these would not be real temperatures, but I think that they would still be useful indicators.

The Problems

Some of the problems of the existing systems are:

· Some systems use temperature measurements from surrounding weather stations (or equivalent) to adjust a station’s temperature measurements or to replace missing temperature measurements. Those adjusted temperatures are then used like measured temperatures in ongoing calculations.

· The problem with this method is that surrounding stations are often a significant distance away and/or in very different locations, and their temperatures may be a poor guide to the missing temperatures.

· Some systems use a station’s temperature and/or the temperatures of surrounding stations over time to adjust a station’s temperature measurements, so that they appear to be consistent. (I refer to these as trend-based adjustments).

· There is a similar problem with this method. For example, higher-trending urban stations, which are unreliable because of the Urban Heat Effect (UHE), can be used to adjust more reliable lower-trending rural stations.

· Some systems do not make allowances for changes in a station, for example new equipment, a move to a nearby location, or re-painting. Such changes can cause a step-change in measured temperatures. Other systems treat such a change as creating a new station.

· Both these methods have problems. Systems that do not make allowance can make inappropriate trend-based adjustments, because the step-change is not identified. Systems that create a new station can also make inappropriate trend-based adjustments: for example, if a station’s paint deteriorates, its measurements may acquire an invalid trend. On re-painting, the error is rectified, but by regarding the repainted station as a new station the system then incorporates the invalid trend into its calculations.

There are other problems, of course, but a common theme is that individual temperature measurements are adjusted or estimated from other stations and/or other dates before they are used in the ongoing calculations. In other words, the set of temperature measurements is changed to fit an expected model before it is used. [“model” in this sense refers to certain expectations of consistency between neighbouring stations or of temperature trends. It does not mean “computer model” or “computer climate model”.]

The Proposed New System

The proposed new system uses the set of all temperature measurements and a model. It adjusts the model to fit the temperature measurements. [As before, “model” here refers to a temperature pattern. It does not mean “computer model” or “computer climate model”.].

Over time, the model can be refined and the calculations can be re-run to achieve (hopefully) better results.

The proposed system does not on its own solve all problems. For example, there will be some temperature measurements that are incorrect or unreliable in some significant way and will genuinely need to be adjusted or deleted. This issue is addressed later in this article.

For the purpose of describing the system, I will begin by assuming that the basic time unit is one day. I will also not specify which temperature I mean by the temperature, but the entire system could for example be run separately for daily minimum and maximum temperatures. Other variations would be possible but are not covered here.

The basic system is described below under the two subheadings “The Model” and “The System”.

The Model

The model takes into account those factors which affect the overall pattern of temperature. A very simple initial model could use for example time of year, latitude, altitude and urban density, with simple factors being applied to each, eg. x degrees C per metre of altitude.

The model can then be used to generate a temperature pattern across the globe for any given day. Note that the pattern has a shape but it doesn’t have any temperatures.

So, using a summer day in the UK as an example, the model is likely to show Scottish lowlands as being warmer than the same-latitude Scottish highlands but cooler than the further-south English lowlands, which in turn would be cooler than urban London.
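As a rough illustration only, a very simple model of this kind might look like the sketch below. The function and every coefficient in it (seasonal amplitude, lapse rate, urban offset) are made-up assumptions, not values proposed here; the only point is that the model returns a pattern value for any location and day, which the measurements will later override.

```python
import math

def model_pattern(day_of_year, latitude_deg, altitude_m, urban_density):
    """Return a relative temperature (deg C) for a location on a given day.
    Only the shape of the pattern matters; measurements override it later."""
    # Seasonal cycle, coldest around mid-January in the north (sign flipped in the south)
    seasonal = -10.0 * math.cos(2.0 * math.pi * (day_of_year - 15) / 365.25)
    if latitude_deg < 0:
        seasonal = -seasonal
    latitudinal = 30.0 * math.cos(math.radians(latitude_deg))  # warmer towards the equator
    lapse = -0.0065 * altitude_m   # roughly 6.5 deg C cooler per 1000 m of altitude
    urban = 2.0 * urban_density    # urban_density assumed to lie in [0, 1]
    return seasonal + latitudinal + lapse + urban
```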

The System

On any given day, there is one temperature measurement for each weather station (or equivalent) active on that day. ie, there is a set of points (locations) each of which has one temperature measurement.

These points are then triangulated. That is, a set of triangles is fitted to the points, like this:

[Figure: the station points joined into a network of triangles]

Note: the triangulation is optimised to minimise total line length. So, for example, line GH is used, not FJ, because GH is shorter.

The model is then fitted to all the points. The triangles are used to estimate the temperatures at all other points by reference to the three corners of the triangle in which they are located. In simple terms, within each triangle the model retains its shape while its three corners are each moved up or down to match their measured temperatures. (For points on one of the lines, it doesn’t matter which triangle is used, the result is the same).
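The sketch below shows one way the triangulate-and-fit step for a single day might be coded. It assumes station coordinates in a locally flat x-y projection, and it uses scipy’s Delaunay triangulation as a convenient stand-in for the minimum-total-line-length triangulation described above; model_at stands for whatever model function is in use (for example the illustrative one sketched earlier).

```python
import numpy as np
from scipy.spatial import Delaunay

def estimate_temperature(query_xy, station_xy, measured, model_at):
    """Estimate the temperature at query_xy from the stations active on one day.

    station_xy : (n, 2) array of station locations (a locally flat projection)
    measured   : (n,) array of measured temperatures at those stations
    model_at   : function mapping an (x, y) location to the model's pattern value
    """
    station_xy = np.asarray(station_xy, dtype=float)
    measured = np.asarray(measured, dtype=float)
    query_xy = np.asarray(query_xy, dtype=float)

    tri = Delaunay(station_xy)
    simplex = int(tri.find_simplex(query_xy))
    if simplex < 0:
        raise ValueError("point lies outside the triangulated network")

    corners = tri.simplices[simplex]          # indices of the triangle's three stations
    # Offset at each corner = measured temperature minus the model's value there.
    offsets = measured[corners] - np.array([model_at(station_xy[i]) for i in corners])

    # Barycentric coordinates of the query point within its triangle.
    t = tri.transform[simplex]
    b = t[:2].dot(query_xy - t[2])
    bary = np.append(b, 1.0 - b.sum())

    # The model keeps its shape inside the triangle; its three corners are shifted
    # up or down to match the measurements, and the shifts are blended linearly.
    return float(model_at(query_xy) + bary.dot(offsets))
```

Because the offsets at the three corners are exactly the measured-minus-model differences, the estimate at any station location reproduces its measurement unchanged.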

I can illustrate the system with a simple 1D example (ie. along a line). On a given day, suppose that along the line between two points the model looks like:

[Figure: the model’s temperature profile along the line between the two points]

If the measured temperatures at the two points on that day were say 12 and 17 deg C, then the system’s estimated temperatures would use the model with its ends shifted up or down to match the start and end points:

[Figure: the same profile with its two ends shifted to match the measured 12 and 17 deg C]
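In numbers, assuming purely for illustration that the model’s pattern values at the two points are 14 and 16 (the measured 12 and 17 are from the example above, everything else is made up):

```python
model_a, model_b = 14.0, 16.0   # model's pattern values at the two points (assumed)
meas_a, meas_b = 12.0, 17.0     # measured temperatures that day (from the example)
off_a, off_b = meas_a - model_a, meas_b - model_b   # shifts of -2.0 and +1.0

def estimate(w, model_value):
    """w = fractional distance from the first point to the second;
    model_value = the model's pattern value at that point."""
    return model_value + (1 - w) * off_a + w * off_b

print(estimate(0.0, model_a))   # 12.0 -> reproduces the measurement at the first point
print(estimate(1.0, model_b))   # 17.0 -> reproduces the measurement at the second point
print(estimate(0.5, 15.0))      # 14.5 -> the model's mid-point value of 15 shifted by -0.5
```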

Advantages

There are a number of advantages to this approach:

· All temperature measurements are used unadjusted. (But see below re adjustments).

· The system takes no notice of any temperature trends and has no preconceived ideas about trends. Trends can be obtained later, as required, from the final results. (There may be some kinds of trend in the model, for example seasonal trends, but they are all “overruled” at every measured temperature.).

· The system does not care which stations have gaps in their record. Even if a station only has a single temperature measurement in its lifetime, it is used just like every other temperature measurement.

· No estimated temperature is used to estimate the temperature anywhere else. So, for example, when there is a day missing in a station’s temperature record then that station is not involved in the triangulation that day. The system can provide an estimate for that station’s location on that day, but it is not used in any calculation for any other temperature.

· No temperature measurement affects any estimated temperature outside its own triangles. Within those triangles, its effect decreases with distance.

· No temperature measurement affects any temperature on any other day.

· The system can use moving temperature measurement devices, eg. on ships, provided the model or the device caters for things like time of day.

· The system can “learn”, ie. its results can be used to refine the model, which in turn can improve the system (more on this later). In particular, its treatment of UHE can be validated and re-tuned if necessary.

Disadvantages

Disadvantages include:

· Substantial computer power may be needed.

· There may be significant local distortions on a day-to-day basis. For example, the presence or absence of a single measurement from one remote station could significantly affect a substantial area on that day.

· The proposed system does not solve all the problems of existing systems.

· The proposed system does not completely remove the need for adjustments to measured temperatures (more on this later).

System Design

There are a number of ways in which the system could be designed. For example, it could use a regular grid of points around the globe, and estimate the temperature for each point each day, then average the grid points for global and regional temperatures. Testing would show which grid spacings gave the best results for the least computer power.
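A minimal sketch of that grid-and-average step, assuming a regular latitude-longitude grid in which each grid point is weighted by the cosine of its latitude so that the smaller polar cells count for less; the spacing and the estimate_at function are placeholders:

```python
import numpy as np

def global_mean(estimate_at, spacing_deg=2.0):
    """estimate_at(lat, lon) -> estimated temperature for that grid point (deg C)."""
    lats = np.arange(-90 + spacing_deg / 2, 90, spacing_deg)
    lons = np.arange(-180 + spacing_deg / 2, 180, spacing_deg)
    total, weight_sum = 0.0, 0.0
    for lat in lats:
        w = np.cos(np.radians(lat))      # area weight for this latitude band
        for lon in lons:
            total += w * estimate_at(lat, lon)
            weight_sum += w
    return total / weight_sum
```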

Better and simpler designs may well be possible.

Note: whenever long distances are involved in the triangulation process, Earth’s surface curvature could matter.

Discussion

One of the early objectives of the new system would be to refine the model so that it better matched the measured temperatures, thus giving better estimated temperatures. Most model changes are expected to make very little difference to the global temperature, because measured temperatures override the model. After a while, the principal objective for improving the model would not be a better global temperature, it would be … a better model. Eventually, the model might contribute to the development of real climate models, that is, models that work with climate rather than with weather (see Inside the Climate Computer Models).

Oceans would be a significant issue, since data is very sparse over significant ocean areas. The model for ocean areas is likely to affect global averages much more than the model for land areas. Note that ocean or land areas with sparse temperature data will always add to uncertainty, regardless of the method used.

I stated above (“Disadvantages”) that the proposed system does not completely remove the need for adjustments to measured temperatures. In general, individual station errors don’t matter provided they are reasonably random and not systemic, because they will average out over time and because each error impacts only a limited area (its own triangles) on one day only. So, for example, although it would be tempting to delete obviously wrong measurements, it is better to leave them in if there are not too many of them: they have little impact, and leaving them in avoids having to justify and document their removal. The end result would be a simpler system, easier to follow, to check and to replicate, and less open to misuse (see “Misuse” below), although there would be more day-to-day variation. Systemic errors do matter because they can introduce a bias, so adjustments to these should be made, and the adjustments should be justified and documented. An example of a systemic error could be a widespread change to the time of day at which max-min thermometers are read. Many of the systemic errors have already been analysed by the various temperature organisations. It would be very important to retain all original data so that all runs of the system using adjusted measurements can be compared to runs with the original data, in order to quantify the effect of the adjustments and to assist in detecting bias.

Some stations may be so unreliable or poorly sited that they are best omitted. For example, stations near air-conditioner outlets, or at airports where they receive blasts from aircraft engines.

The issue of “significant local distortions on a day-to-day basis” should simply be accepted as a feature of the system. It is really only an artefact of the sparseness and variability of the temperature measurement coverage. The first aim of the system is to provide regional and global temperatures and their trends. Even a change to a station that caused a step change in its data (such as new equipment, a move to a nearby location, or re-painting) would not matter much, because each station influences only its own triangles. It would matter, however, if such step-changes were consistent and widespread, ie. they would matter if they could introduce a significant bias at a regional or global level.

It wouldn’t even matter if at a given location on a particular day the estimated maximum temperature was lower than the estimated minimum temperature. This could happen if, for example, among the nearby stations some had maximum temperatures missing while some other stations had minimum temperatures missing. (With a perfect model, it couldn’t happen, but of course the model can never be perfect.).

All the usual testing methods would be used, like using subsets of the data. For example, the representation of UHE in the model can be tested by calculating with and without temperature measurements on the outskirts of urban areas, and then comparing the results at those locations.
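As a rough sketch of that kind of subset test (run_system, estimate_at and the station attributes below are hypothetical placeholders, not part of any existing implementation): withhold the urban-outskirt stations, re-run the system, and compare its estimates at the withheld locations with what was actually measured there. Because a full run reproduces each measurement exactly at its own station, this is equivalent to comparing the two runs at those locations.

```python
def uhe_holdout_check(stations, run_system, is_outskirt, day):
    """Mean estimation error at urban-outskirt stations when they are withheld."""
    held_out = [s for s in stations if is_outskirt(s)]
    kept = [s for s in stations if not is_outskirt(s)]

    reduced = run_system(kept, day)   # estimates computed without the outskirt stations

    # A persistently positive mean would suggest the model spreads urban warmth
    # too far outwards; persistently negative, not far enough; near zero suggests
    # UHE is represented about right.
    errors = [reduced.estimate_at(s.location) - s.measurement(day) for s in held_out]
    return sum(errors) / len(errors)
```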

All sorts of other factors can be built into the model, some of which may change over time – eg. proximity to ocean, ocean currents, average cloud cover, actual hours of sunshine, ENSO and other ocean oscillations, and many more. Assuming that the necessary data is available, of course.

Evaluation

Each run of the system can produce ratings that give some indication of how reliable the results are:

· How much the model had to be adjusted to fit the temperature measurements.

· How well the temperature measurements covered the globe.

The ratings could be summarised globally, by region, and by period; a rough sketch of how they might be computed is given below.

Some station records include information about measurement reliability; this could be incorporated into the ratings.
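The sketch below assumes two deliberately simple definitions, purely for illustration: the “adjustment” rating is the mean absolute difference between the day’s measurements and the model’s values at the stations, and the “coverage” rating is the mean distance from a set of reference grid points to the nearest station (smaller is better in both cases). Neither definition is prescribed above.

```python
import numpy as np
from scipy.spatial import cKDTree

def adjustment_rating(measured, model_values):
    """How much the model had to be shifted to fit the day's measurements."""
    return float(np.mean(np.abs(np.asarray(measured) - np.asarray(model_values))))

def coverage_rating(station_xy, grid_xy):
    """How well the stations cover the globe: mean distance to the nearest station."""
    tree = cKDTree(station_xy)
    distances, _ = tree.query(grid_xy)
    return float(distances.mean())
```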

Misuse

Like all systems, the proposed system would be open to misuse, but perhaps not as much as existing systems.

Bias could still be introduced into the system by adjusting historical temperature measurements – eg. to increase the rate of warming by lowering past temperatures. The proposed system does make this a bit more difficult, because it removes some of the reasons for adjusting past temperature measurements. In particular, temperature measurements cannot be adjusted to fit surrounding measurements, and they cannot be adjusted to fit a model (deletion of “outliers” is an example of this). If such a bias was introduced, the ratings (see “Evaluation” above) would not be affected, so they would not be able to assist in detecting the bias. The bias could be detected by comparing results against results from unadjusted data, but proving it against a determined defence could be very difficult.

Bias could still be introduced into the system by exploiting large areas with no temperature measurements, such as the Arctic, but the proposed system also makes this a bit more difficult. In order to exploit such areas, the model would need to be designed to generate change within the unmeasured area. So, for example, a corrupt model could make the centre of the Arctic warmer over time relative to the outer regions where the weather stations are. It would be possible to detect this type of corruption via the ratings (a high proportion of the temperature trend would come from a region with a low coverage rating), but again proof would be difficult.

NB. In talking about misuse, I am not in any way suggesting that misuse does or would occur. I am simply checking the proposed system for weaknesses. There may be other weaknesses that I have not identified.

Conclusion

It would be very interesting to implement such a system, because it would operate very differently to current systems and would therefore provide a genuine alternative to and check against the current systems. Raising the necessary funding could be a major hurdle.

The system could also, I think, be used to check some weather and climate theories against historical temperature data, because (a) it handles incomplete temperature data, (b) it provides a structure (the model) in which such theories can be represented, and (c) it provides ratings for evaluation of the theories.

Provided that the system could be used without needing too much computer power, it could be suitable for open-source/cooperative environments where users could check each other’s results and develop cooperatively. The fact that the system is relatively easy to understand, unlike the current set of climate models, would be a big advantage.

Footnotes

1. I hope that the use of the word “model” does not cause confusion. I tried to make it clear that the model I refer to in this document is a temperature pattern, not a computer model. I tried some other words, but they didn’t really work.

2. I assume that unadjusted temperature data is available from the temperature organisations (weather bureaux etc). To any reasonable person, it is surely inconceivable that these organisations would not retain their original data.

###

Mike Jonas (MA Maths Oxford UK) retired some years ago after nearly 40 years in I.T.

Abbreviations

C – Centigrade or Celsius

ENSO – El Niño Southern Oscillation

GISS – Goddard Institute for Space Studies

UHE – Urban Heat Effect

1D – 1-dimensional



Comments
ntesdorf
January 10, 2016 2:07 pm

The Warmistas will never accept this new System of measurement. If temperature measurements are used unadjusted then there will be no ability to show the required rise in Global Temperature demanded by CAGW.

Auto
Reply to  ntesdorf
January 10, 2016 2:42 pm

ntes
Plus a lot
Auto

paul
Reply to  ntesdorf
January 11, 2016 4:28 am

what about a sensor on the moon pointing back at earth

gnomish
Reply to  paul
January 11, 2016 10:26 am

brilliant.
ne plus ultra.
but it does not justify stealing.
there is never any justification for violating a person’s rights, period.

george e. smith
Reply to  ntesdorf
January 11, 2016 9:53 am

“””””….. Abbreviations
C – Centigrade or Celsius …..”””””
So make up your mind; which is it.
Centigrade and Celsius are NOT the same.
C is Celsius in the SI system of units.
A centigrade scale is simply any linear scale that goes from zero to 100.
It could be simply the percentage of people who believe in catastrophic calamitous man made global warming climate change (CCMMGWCC) ;
izzat 97% of all scientists; or it could simply be your score on a school term paper.
The Celsius scale IS a centigrade scale; but C does not mean centigrade.
g

george e. smith
Reply to  ntesdorf
January 11, 2016 9:58 am

OK, so I’ll bite.
Just where can I find a reference to this new system of global temperature measurement.
Whatever happened to the concept of simply measuring the temperature at times and places on the earth surface that conform to the sampling requirements of standard sampled data system theory; i.e. Nyquist.
Seems to me that would work.
I think that might be the only thing that would work. Anything less is just BS.
G

Editor
Reply to  george e. smith
January 11, 2016 10:55 am

G – point taken re Centigrade. re sampling requirements, we can’t go back in time and re-sample.

David Riser
January 10, 2016 2:10 pm

The real issue with either idea, how we do it now or your suggestion, is that the mathematical model of temperature differences fails to distinguish problems from reality. A weather front can come through and look like a step change no matter how you measure it, invalidating any attempt to mash the numbers. A single day can contain more than one high and low. Highs and lows can be vastly different over short time frames. Finally, clouds have a huge impact on what the local temperature is at any given time, and models don’t do clouds. So the raw data is really the best we have. The raw data of ideal stations is even better. But adjustments as a whole are just bs.

Auto
Reply to  David Riser
January 10, 2016 2:45 pm

David R
Broadly agree.
Weather is variable – even chaotic.
Your
“But adjustments as a whole are just bs.” is noted, appreciated and mightily agreed with. In spades.
Auto – as self-effacing as ever!

Editor
Reply to  David Riser
January 10, 2016 4:15 pm

re weather fronts, please see my comment below http://wattsupwiththat.com/2016/01/10/a-new-system-for-determining-global-temperature/comment-page-1/#comment-2117126 (the same applies to clouds, etc).
“So the raw data is really the best we have.” Absolutely.

Samuel C Cogar
Reply to  David Riser
January 11, 2016 3:03 am

David R,
Most any type of thermometer or thermocouple that is immersed in a gallon of -40 F antifreeze will do clouds with ease

Samuel C Cogar
Reply to  Samuel C Cogar
January 11, 2016 3:26 am

And ps, ….. auto manufacturers solved their “average temperature” problem many, many years ago by the placement of a simple spring-loaded thermostat in their vehicle’s radiator.

Duster
Reply to  David Riser
January 11, 2016 10:13 am

Another “problem” is not in the station but in the reported location over time. That can yield pseudo moves that are due to rounding and to datum shifts that are not reported or accounted for. For instance, using standard USGS 7.5-minute paper topographic maps in the continental US generally means using the NAD27 (North American 1927 Datum) datum for any map older than about 1980. Later terrain maps employ WGS 84 (same datum used by a GPS unit) or NAD83, which is to most intents the same as WGS84. The difference between NAD27 and WGS84 can be well over 100 meters. This can appear to automated data evaluation systems as a “move.” This can have significant “effects” on mapped locations if the individual doing the locating does not note and record the datum used for the fix. Additional error can be introduced if the location reported is corrected to a newer datum and the processing software corrects the location again because the programmer made an assumption about reported locations.
Another major “effect” is simple rounding of latitude and longitude figures. A second of longitude is roughly 30 meters at 45-degrees North (A bit over 100 feet). If a lat-lon location is only reported to a minute of accuracy that error is presumably +/- 30 seconds. Some of these errors are systematic rather than random.
IIRC, Anthony wrote up a station in New York state some years ago that appeared in the data to have been “moved” several times, but had been moved once, less than 100 feet, when a new system replaced the older installation.

tomwys1
January 10, 2016 2:21 pm

I do not see any marked improvement over the current RSS data set, whose only “deficiency” is that Urban Heat Islands won’t overwhelm it. That being said, since most people actually live in/on these UHI affected areas, perhaps a UHI bias needs to be taken into account??? If not, then I’ll go with RSS!!!

Auto
Reply to  tomwys1
January 10, 2016 2:48 pm

tomwy
A variety of ways of measuring – even if one or more may be imperfect – at least allows dispassionate observers to look – carefully – at the data.
This may – or may not – be an improvement ( I think it probably is, if not hugely so) – but the re-evaluation more than justifies the effort, I suggest.
Auto


Reply to  tomwys1
January 10, 2016 3:01 pm

I would agree, so long as we place a weather station inside every shopping mall.

Reply to  tomwys1
January 10, 2016 6:58 pm

Radiosonde data presented in figure 7 of http://www.drroyspencer.com/2015/04/version-6-0-of-the-uah-temperature-dataset-released-new-lt-trend-0-11-cdecade indicates that the surface-adjacent troposphere warmed .02, maybe .03 degree/decade more than the satellite-measured lower troposphere. One reason I suspect this happened is because decreasing ice and snow cover increased the lapse rate overall in the lowest ~1-1.5 kilometers of the lower troposphere.

Dale Hartz
Reply to  tomwys1
January 10, 2016 7:01 pm

We actually have three perfectly good temperature systems; RSS and UAH on a 5 degree grid, and Radiosonde(70,000 balloons annually). The UHI effect is very small as metropolitan areas only represent about 1% of the earth’s surface. The oceans represent almost 75%, and sparsely-populated/ unoccupied areas cover the rest of the planet.
The surface station system should be abolished and most scientists know it. So it would remove CAGW from climate science. The temperature differences are the primary contention between mainstream scientists and skeptical scientists. Removing this barrier would rove climate science ages forward.

richardscourtney
Reply to  Dale Hartz
January 11, 2016 6:18 am

Dale Hartz:
You say

We actually have three perfectly good temperature systems; RSS and UAH on a 5 degree grid, and Radiosonde(70,000 balloons annually). The UHI effect is very small as metropolitan areas only represent about 1% of the earth’s surface. The oceans represent almost 75%, and sparsely-populated/ unoccupied areas cover the rest of the planet.
The surface station system should be abolished and most scientists know it. So it would remove CAGW from climate science. The temperature differences are the primary contention between mainstream scientists and skeptical scientists. Removing this barrier would rove (sic) climate science ages forward.

I very strongly agree.
Although there is no possibility of a calibration standard for global temperature, the MSU (i.e. RSS and UAH) measurements are almost global in coverage and the radiosondes provide independent measurements for comparison.
The major problem with the surface station data is not removed by the proposal in the above essay. Indeed, the essay says it proposes to adopt the problem when it says

The proposed new system uses the set of all temperature measurements and a model. It adjusts the model to fit the temperature measurements. [As before, “model” here refers to a temperature pattern. It does not mean “computer model” or “computer climate model”.].
Over time, the model can be refined and the calculations can be re-run to achieve (hopefully) better results.

That is what is done by ALL the existing teams who use station data to compute global temperature.
The “stations” are the sites where actual measurements are taken.
When the measurement sites are considered as being the measurement equipment, then the non-uniform distribution of these sites is an imperfection in the measurement equipment. Some measurement sites show warming trends and others cooling trends and, therefore, the non-uniform distribution of measurement sites may provide a preponderance of measurement sites in warming (or cooling) regions. Also, large areas of the Earth’s surface contain no measurement sites, and temperatures for these areas require interpolation.
Accordingly, the measurement procedure to obtain the global temperature for e.g. a year requires compensation for the imperfections in the measurement equipment. A model of the imperfections is needed to enable the compensation, and the teams who provide values of global temperature each use a different model for the imperfections (i.e. they make different selections of which points to use, they provide different weightings for e.g. effects over ocean and land, and so on). So, though each team provides a compensation to correct for the imperfection in the measurement equipment, each also uses a different and unique compensation model.
The essay proposes adoption of an additional unique compensation model.
Refining the model to obtain “better” results can only be directed by prejudice concerning what is “better” because there is no measurement standard for global temperature.

As you say, Dale Hartz, “The surface station system should be abolished”. Another version of it is not needed.
Richard

Editor
Reply to  Dale Hartz
January 11, 2016 11:29 am

Richard – I strongly agree with using the satellite systems from now on, for global and regional temperature, but in another comment I have given reasons for continuing the surface stations.
You miss the point when you say “That is what is done by all the existing teams …”. It isn’t. All the existing teams use their model to manipulate actual measurements before they are used. Mine won’t change any actual measurement [I did make provision for correcting systemic errors, but following a comment by Nick Stokes I now feel that it is best not to make any changes at all.]. One of the significant differences, for example, is that while UHE gets spread out by existing systems, mine would confine it to urban areas.
I agree that the historical measurements are heavily imperfect, and that any results from them need to be seen in that light. But I am seriously unimpressed with the mindset of the people doing the current systems, and I’m trying to get people to see things from a different angle – one in which measurement trumps model. I think the exercise would be interesting, and I think there would be greater benefit than just a better “global temperature” and better error bars (there’s another comment on that).
While everything is being distorted by ideologues it is easy to get very cynical about the surface station measurement system, but when sanity is re-established then I think there is merit in continuing with it.

Reply to  Dale Hartz
January 11, 2016 2:27 pm

Mike Jonas,
I followed your path part of the way creating my process to read the data.
https://micro6500blog.wordpress.com/2015/11/18/evidence-against-warming-from-carbon-dioxide/
I had 2 different goals, that led me a different way. I was more interested in daily variability (min to max to min), as I am very impressed how quickly it cools at sunset. That led me to generating the day to day change/rate of change over the year.
So I don’t calculate a temperature, I look at the derivative of temps. While I think this allows some flexibility on what stations/data to include, I’ve elected to include any station that has a full year of data. No data adjustments, strictly measurements. When I process a full year for a station, I then average all stations for that year for the area being calculated (I do various areas 1×1, 10×10, latitudinal bands, continents, global). For daily rate of change, you can include stations that take partial year samples (again adjustable), as you’re looking mostly for the derivative from warming and cooling peaks.
What I’ve come up with is that there’s no loss of night time cooling since 1940; in fact it’s slightly cooler tomorrow morning than it was today.
So, I think there’s good data in the surface record, just not what’s published from it.

provoter
Reply to  tomwys1
January 10, 2016 10:46 pm

“…since most people actually live in/on these UHI affected areas, perhaps a UHI bias needs to be taken into account? If not, then I’ll go with RSS!”
Agreed. As a very long aside, though (and probably nothing that is news to you):
The way to take UHI bias properly into account is to completely separate urban temp measurements from the rural measurements – problem solved. Urban areas occupy well under 1% of the earth’s total surface of about 5 million km² (see, for example, here and here). If one’s goal is to understand global temp trends, and if non-urban trends say one thing, and urban trends say another – what does simple math tell him he should pay attention to?
Obviously this is assuming there actually existed 1) a well-sited, well-distributed, global, non-urban network of stations, that were 2) well equipped, well maintained, well calibrated, well measured, and whose raw readings were available to all. Such a thing doesn’t exist, of course (no bonus points for guessing why ;^> ), but the basic point that urban readings do nothing but contaminate all others remains. It’s like determining the temperature of the air in your house by consulting the thermometer sitting just above your stove — while you’re cooking dinner.
Lastly, if a person would like to know what urban temps are doing just for the sake of knowing such a thing, this is all well and good. Let’s just make sure we don’t fool ourselves into thinking they have anything to do with what the other 99+ % of the earth’s surface temperatures are doing.

JohnWho
January 10, 2016 2:22 pm

” It would be very important to retain all original data…”
Just that statement alone makes your proposed new system superior to what we have now.

Auto
Reply to  JohnWho
January 10, 2016 2:52 pm

John
Agree.
But –
I am remiss – I don’t see (in this thread) the source of your quote –
” It would be very important to retain all original data…”
As noted – I agree.
Auto – late at night, so probably in error.

JohnWho
Reply to  Auto
January 10, 2016 3:19 pm

Toward the end of the paragraph that starts with “I stated above (“Disadvantages”) …” in the “Discussion” section.

RoHa
Reply to  JohnWho
January 10, 2016 3:53 pm

I thought the original data had been damaged by the floods of 2011/destroyed in the storeroom fire/mislaid in the move from the old building/rendered unreadable when the data storage format was updated/eaten by mice.

JohnWho
Reply to  RoHa
January 10, 2016 7:19 pm

You appear to be confused with the Hillary Clinton emails.
/grin

DD More
Reply to  RoHa
January 11, 2016 2:18 pm

No RoHa, they are right next to the Bypass Plans
“But the plans were on display…”
“On display? I eventually had to go down to the cellar to find them.”
“That’s the display department.”
“With a flashlight.”
“Ah, well, the lights had probably gone.”
“So had the stairs.”
“But look, you found the notice, didn’t you?”
“Yes,” said Arthur, “yes I did. It was on display in the bottom of a locked filing cabinet stuck in a disused lavatory with a sign on the door saying ‘Beware of the Leopard.”
― Douglas Adams, The Hitchhiker’s Guide to the Galaxy

Kev-in-Uk
January 10, 2016 2:23 pm

Interesting! – but may I make an immediate observation? In order to create a ‘model’ one needs to have historical (and preferably correct (i.e. raw!)) data to set ‘trends’ between stations within reasonable proximity (I note your last statements though!). Also, station height (e.g. above mean sea level) would probably be an important factor to try and include. The passage of weather fronts over the UK (for example) causes significant differences over fairly short distances, probably within 10’s of miles – I’m guessing this would be completely different to the variation across large desert plains for example! I am on the coast, and 25 miles inland the temperatures are extremely different in summer/winter as you can imagine. Hence, such difference needs to be noted (historically) and accounted for in any ‘model’ in order to make a regional ‘average’ assessment? I can see the objective, but not sure of the method. What I would suggest is a direct historical comparison of (nearby) stations to determine some measure of synchronicity, which would likely throw up various oddities. If the agreement is within reason (in the majority) it would follow to assume agreement in general and deduce ‘regional’ temp values from the various station pair(s) data. As for UK metoffice raw data being available, that might not be possible, as Mr Jones himself has said!

Editor
Reply to  Kev-in-Uk
January 10, 2016 3:25 pm

Hi Kev-in-Uk – I tried to make it clear that I’m looking for a whole new way of looking at temperatures. Conceptually, you take temperatures around the globe to find out what the temperature is, and you then average them to get a global average temperature. My system does that, whereas other existing systems average something else. My system can also use every temperature measurement as is, on an equal footing with all other temperature measurements. We have all got so used to thinking about temperature trends that it is hard to think of a system that ignores them – but my system does. In effect, it has no interest in trends at any point in the entire process. Only after it has finished would you look at the results to see what the temperature trends have been.
So – what do you do about things like weather fronts, that can’t be predicted more than a few days ahead but have significant local effect? The answer is that you can’t put them into the model, so you don’t try to. You simply accept their effect as part of “natural variability” and just go on measuring the temperatures – ie, it’s just weather, and weather is what you are measuring! Over time, things like that average out, and if they don’t then you can probably work out how to put them into the model.

bobl
Reply to  Mike Jonas
January 10, 2016 5:53 pm

I think this has merit even as a cross check BUT, any system where temperatures are estimated from surrounding sites is affected by time lags – for example, there is no relationship between Adelaide and Melbourne on any given day, but there IS a relationship between Melbourne and Adelaide lagged by one day, because of the predominant west-to-east motion of weather systems in this part of the world.
At different times of the year the dominant weather systems motion may alter, so the model needs to account for that effect. That is the “Pattern” will likely have seasonal variations.
A much better approach to all of this is to NOT model at all but rather acquire data from crowdsourced domestic weather stations

Kev-in-Uk
Reply to  Mike Jonas
January 10, 2016 10:45 pm

Hi Mike, at the end of the day, I feel that the temp data records have been tampered with enough and used to present/support an agenda. The mere fact that anyone with half a brain knows that central London is a few degrees warmer (due to UHE) than surrounding countryside and yet the Metoffice/MSM still use ‘high’ temps recorded at Heathrow to browbeat us with strongly suggests that UHE is not taken seriously. I mean, why are the London station data not adjusted DOWN by the obvious few degrees of UHE? I guess when someone answers that satisfactorily, or demonstrates that that is the case (in the Hadcrut data for example), I might sit up and take the surface dataset more seriously. As it is, I strongly believe the current ‘treatment’ is more likely to adjust surrounding stations UP to match Heathrow (or similar) or at least to allow the obvious UHE affected values to remain in situ and bump up the spatial temperature ‘average’! In short, there is no genuine human ‘interpretation’ of data anymore – it’s all programmed adjustment and averaging. I’m not really sure how you feel your system could ‘learn’ UHE without being historically measured in direct comparison to non-UHE stations? – and even then, wouldn’t a large degree of human data interpretation be needed? regards, Kev

richardscourtney
Reply to  Mike Jonas
January 11, 2016 6:35 am

Mike Jonas:
You say

I tried to make it clear that I’m looking for a whole new way of looking at temperatures. Conceptually, you take temperatures around the globe to find out what the temperature is, and you then average them to get a global average temperature. My system does that, whereas other existing systems average something else. My system can also use every temperature measurement as is, on an equal footing with all other temperature measurements. We have all got so used to thinking about temperature trends that it is hard to think of a system that ignores them – but my system does. In effect, it has no interest in trends at any point in the entire process. Only after it has finished would you look at the results to see what the temperature trends have been.

I am sorry to be the bearer of bad news, but as my above post explains, your proposed method is in principle THE SAME as all the existing derivations of global temperature from surface stations. And, therefore, if it were developed then your method would not provide any advantage(s) over any of the existing methods for deriving global temperature from surface stations.
Richard

Duster
Reply to  Mike Jonas
January 11, 2016 10:43 am

bobl – As I read Mike’s proposal, his suggestion rather cuts to the heart of the “global average” question. While a global average is not much use for things like weather forecasting, it would be quite useful in determining “trends” if there are such. As the data is accumulated each 24-hour period, a temperature surface is calculated using a triangulated irregular network (a TIN to folks who use GIS systems). Employing such a system you could develop a detailed, global temperature “geoid” or simpler ovoid each day. An annual average of each station could be employed to calculate an annual average temperature surface, and the annuals and longer terms could be summarized the same way. A global trend would show up as systematic drift in ALL stations over time without the need for data correction, gridding, or any of the extraneous effort involved in producing current “climate” summary data. That is, if there is a real, global trend, then it is present in all temperature data collected regardless of any instrument issues or any other side tracks. “Correcting” or adjusting the data has never been necessary to detect such a trend, IF IT IS REAL. Processes like increasing UHI would appear as growing “peaks” on the global surface; regional changes (increasing or decreasing forest, conversion to agricultural use etc.) would appear as local or regional “topographic” patterns that impose a change in the local topography and then stabilize. Trend would affect all of these universally IF it is global.

corporateshots
January 10, 2016 2:24 pm

surely a better system would be to ditch temperature altogether and just use changes in net energy emitted and absorbed by the earth ?

Editor
Reply to  corporateshots
January 10, 2016 4:03 pm

I agree absolutely – if you want to find out what the energy balance is. But if you want to know what the temperature is, a thermometer is quite useful. ie, it’s not an “either-or”, let’s do both.

Owen in GA
Reply to  corporateshots
January 11, 2016 12:04 pm

Great idea!
The long pole in that tent is that the current on orbit experiments doing that measurement have admitted error bars larger than the effect we want to measure. I suspect there may be more errors than the admitted ones, but have not been able to look that closely. I know calibrating sensors to give a flat response across a broad band of radiation (far IR to far UV in this case) is very difficult even in a laboratory. I can’t imagine how difficult it must be when the experiment is in a high Earth orbit. Keeping the amplifiers and references aligned to see this couple of watts per square meter difference between incoming visible to far UV, and outgoing far IR to mid IR must be a nearly impossible task. I am amazed they get to it as well as they do.

BioBob
January 10, 2016 2:46 pm

Sorry, but how about a system that actually uses replicated random sampling that the rest of field science has required to estimate variance for decades now ? Then, of course, you would need to actually use samples to verify the sample size needed within and between sites. A cursory examination would reveal that the best stations with only 3 samples are woefully inadequate and not randomly located in any case.
The reality is that accurate global temperature estimates are some fantasy produced by some fevered wannabe hack scientist.

Auto
Reply to  BioBob
January 10, 2016 2:54 pm

BioB
Absolutely.
It is a purely (impurely??) political concept.
Auto

Richard Keen
January 10, 2016 2:54 pm

I’ll read this idea in more detail, but to date it’s my conclusion that you simply cannot get a valid global temperature from a sparse and intermittent surface station network, and that meaningful global temperatures do not exist before 1979 (MSU et al.). But the surface station network does allow for regional and local time series of sufficient accuracy to detect regional and local climate change.
For example, the consensus IPCC models predict the fastest warming states in the US to be Alaska (due to latitude) and Colorado (due to altitude, in the tropospheric “hot spot”). Both places have enough long term stations to say something about what is really happening. And that is that Alaska follows the PDO and little else (little trend warming), and Colorado is more complicated (a mix of PDO, AMO, and who knows what else) but also very little trend warming. At my own co-op station at 8950 feet in Colorado the IPCC predicts 1 degree F of warming every 15 years. Since 2000 it’s cooled 1 degree F. (all of this has been published and/or posted on WUWT and elsewhere). If the models can’t get a grip on regional climate change, there’s no point in using them for global change (which, after all, is the average of the regional changes).
But again, I’ll give this idea more of the attention it deserves after dinner and a beer tonight.

Robert B
Reply to  Richard Keen
January 10, 2016 3:47 pm

You can only get trends at stations and use that as some sort of indicator of a global trend.
My suggestion is to take the decadal trend for max and min separately at each station. Take the mean for grids of so-many degrees using stations that have more than 90% of data for the period. Redo it changing the starting month and shifting grids by a degree, and take the mean of all grids.
I suspect that the results will show such a massive uncertainty that everyone will say “let’s just stick with the satellite data since 1979”.

Richard Keen
Reply to  Robert B
January 10, 2016 5:31 pm

Robert, since your conclusion is inevitable, I suggest we skip the intermediate steps and fire Gavin, Jones, NCDC et al. at 8 am Monday and have each and every climate observer figure out their own climate, and publish that. Of course, Mike Jonas can keep his job, which I’m sure is not in the climate-industrial complex.

Robert B
Reply to  Robert B
January 10, 2016 10:11 pm

It’s not inevitable if you believe that you really can collect temperature readings from a sparse and intermittent surface station network, correct for events never documented, and get something meaningful from the average.
Looking at two stations in my city, the old city site up to 1979 and the airport which is only 6 km away, the SD for the difference between the monthly mean max for the two is 0.46°C for the overlapping period of 25 years. So half the time, the difference is greater than a third of a degree from the average (0.18). RSS shows a total of a third of a degree rise from 1979! That’s on top of temperature not being something like density, where even the average for this straightforward example of an intrinsic property is not straightforward.

Robert B
Reply to  Robert B
January 10, 2016 11:42 pm

I’ll add that there is a trend in the differences as UHI affected the airport more than the city site (in parklands next to the city centre) for the first 15 years. There was no trend for the last 10 with the differences randomly spread and the SD was still 0.21°C. This suggests that a region’s anomaly is within ±half a degree of what the local station records. How can you possibly homogenise using stations tens of kilometers away let alone hundreds as is the case for remote regions?
The moving 10 year trends differ by 2°C/century at the start, go to 10 and then down to -2°C/century, but it’s a smooth curve with noise of 0.2°/century (spread due to different starting months). The global average shows a trend of less than 1 degree per century! There is too much happening to use an average temperature without a perfect record with evenly spread stations and a lot more of them.

ShrNfr
January 10, 2016 3:01 pm

I am a fan of using an extended Kalman-Bucy filter to combine the observations. The data vector would be the combined microwave spectrometer measurements, radiosonde data, and surface measurements. The trick is to get the noise vector and the state transition matrix correct. When radiosonde measurements or surface measurements are absent, the observation matrix has a zero for them. The state transition matrix would be tricky because of the need to have the entire state vector be all points on the entire world grid at any given time. In the “old days” you would not even think of something like this, but these days, there is enough horsepower in even a Mac Pro with a graphics unit to perhaps pull it off a bit. Lots of junk to work out.
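For readers unfamiliar with the machinery, here is a minimal sketch of a single predict/update step of a plain linear Kalman filter. It is not the extended, continuous-time Kalman-Bucy form described in the comment, it sidesteps the enormous global state vector envisaged there, and absent measurements are handled by dropping their rows from the observation matrix rather than zeroing them.

```python
import numpy as np

def kalman_step(x, P, F, Q, H, R, z, available):
    """x, P: state mean and covariance; F, Q: state transition and process noise;
    H, R: observation operator and noise; z: measurements; available: boolean mask
    marking which elements of z were actually observed this step."""
    # Predict
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q

    # Keep only the rows for measurements actually made this step
    Ha = H[available]
    Ra = R[np.ix_(available, available)]
    za = z[available]

    # Update
    S = Ha @ P_pred @ Ha.T + Ra            # innovation covariance
    K = P_pred @ Ha.T @ np.linalg.inv(S)   # Kalman gain
    x_new = x_pred + K @ (za - Ha @ x_pred)
    P_new = (np.eye(len(x)) - K @ Ha) @ P_pred
    return x_new, P_new
```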

jmarshs
January 10, 2016 3:02 pm

Building a model of a system that has no global temperature (the Earth) is something completely different from building a model to maintain a desired, average temperature of a system (ex. an engine or HVAC system) over which you can exert control relative to timely feedback.
Engineers do the latter all the time. But discussions about “global temperatures” always leave me shaking and scratching my head….

Firey
January 10, 2016 3:09 pm

What is wrong with using satellite data? They circle the globe every 90 minutes, cover both land and sea & have been doing so for 30 odd years. If it can’t be used what was the point of launching them in the first place?

Editor
Reply to  Firey
January 10, 2016 3:38 pm

I agree that satellites give us the best global data. There are four reasons I can think of immediately for using a system based on surface temperatures:
1. There is no satellite data before 1979.
2. Surface temperature measurements provide a cross-check for the satellite data.
3. Local measurements provide a level of detail that satellites currently cannot.
4. Weather and climate studies may be able to take advantage of the extra detail.
[count 3 and 4 as one reason if you like]

porscheman
Reply to  Mike Jonas
January 10, 2016 6:31 pm

You are incorrect. The NEMS flew on Nimbus E and the SCAMS flew on Nimbus F. I once shared an office with 4,000 tapes from Nimbus E extracting the NEMS data. It was nadir only, while SCAMS was a scanning instrument. Lord knows where the data is today, but if they could find it and a tape drive to read it, you could analyze it in a couple of weeks with a Mac Pro. That would bring the record back to earlier than 1973.

Marcus
January 10, 2016 3:10 pm

Why not just use unadjusted satellite and balloon data ?

davidmhoffer
January 10, 2016 3:29 pm

In general, individual station errors don’t matter provided they are reasonably random and not systemic, because they will average out over time
I regard this as an incorrect assumption. The problem is that most errors are NOT random, they ARE systemic. You noted the effect of aging paint in your comments. Well, that’s a systemic error. As the sensors age, they too will drift, and all the sensors of a given type will drift in the same direction. I could go on, but I think that illustrates the point. Systemic errors will (I believe) heavily outweigh random errors. Assuming they will all cancel out is, I think, one of the biggest errors made in calculating global temperatures. It is simply a bad assumption.

Editor
Reply to  davidmhoffer
January 10, 2016 3:51 pm

I don’t assume they will ALL cancel out, and I do state that systemic errors need to be dealt with. But I do think that there is data, currently being adjusted to fit a perceived ‘model’, which is best left unchanged. Making lots of adjustments always risks inadvertently introducing bias. Our temperature measurements have all sorts of gaps and errors, and we should get away from the idea that we can adjust them in order to get improved results. Instead, we should accept that the measurements are all that we have and that any system using them cannot be more accurate than the measurements.

Reply to  Mike Jonas
January 10, 2016 3:57 pm

we should accept that the measurements are all that we have and that any system using them cannot be more accurate than the measurements.
If you were to display error bars on the end result, I think a considerable number of people would be delighted.

Editor
Reply to  Mike Jonas
January 10, 2016 4:08 pm

I should have mentioned error bars under “Evaluation”. Yes, very important.

JohnKnight
Reply to  Mike Jonas
January 10, 2016 8:03 pm

Mike,
Along the lines mentioned here, and this;
“Note: The terms global temperature and regional temperature will be used here to refer to some kind of averaged surface temperature for the globe or for a region. It could be claimed that these would not be real temperatures, but I think that they would still be useful indicators.”
Sure, but why call them global or regional this or that, rather than weather station system this or that? Why “pretend” the globe or whole region is being measured, rather than keeping it real, and speak of “stations in X region show…, stations in all regions show… etc?

Duster
Reply to  davidmhoffer
January 11, 2016 1:20 pm

Local trends would appear in Mike’s proposed triangulated system as a systematic shift in the “z” value of local points. So would regional effects like development and reforestation. The idea is quite elegant and eliminates a great deal of wasted time and argument by simply ignoring problems with data quality. The quality issue is only important if the problems are directional and systematic, and contrary to theoretical expectations. Any real trend would actually appear globally and could only “vanish” if an equal and opposite “correction” were applied to all data. But doesn’t NOAA apply just such a “correction”? Even problems like TOBS cease to be relevant.

davidmhoffer
January 10, 2016 3:38 pm

corporateshots January 10, 2016 at 2:24 pm
surely a better system would be to ditch temperature altogether and just use changes in net energy emitted and absorbed by the earth ?

I think this bears repeating. What we’re trying to understand is the energy balance change due to CO2 and other factors. Temperature is a lousy proxy for energy balance. A change of one degree at -40 is about 2.9 w/m2. A change of 1 degree at +40 is 7.0 w/m2. These two values can not be averaged! Any system that doesn’t take this into account winds up over representing changes at high latitude, high altitude and winter seasons, and under representing changes in low latitude, low altitude and summer seasons.
The patient has three broken ribs, a fractured elbow, a broken leg and a deep cut on his forehead. Putting three stitches in the deep cut may well be called for, but it hardly treats the patients fatal injuries.
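A quick check of those two numbers, treating the surface as an ideal blackbody so that the change in emitted flux per degree of temperature is 4σT³:

```python
sigma = 5.67e-8                    # Stefan-Boltzmann constant, W m^-2 K^-4
for t_celsius in (-40.0, 40.0):
    T = t_celsius + 273.15
    print(t_celsius, round(4 * sigma * T**3, 1))   # ~2.9 W/m^2 at -40 C, ~7.0 at +40 C
```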

gnomish
Reply to  davidmhoffer
January 10, 2016 4:04 pm

The only meaningful measurement is TOTAL. (because it’s an actual measurement – what a concept!)
Averaging antarctica with death valley is purest nonsense.
Somebody needs a boot averaged with his butt.

Crispin in Waterloo
Reply to  davidmhoffer
January 10, 2016 7:39 pm

David M H
I am with you on this energy thing. If this method proposed is used, what is it telling us? Is it answering the right question?
If the high is 10 C for ten minutes and the rest of the 24 hrs is -5, what does this tell us? My inclusion of the element of time tells you that the time-weighted value should be a hair above -5, not 2.5.
Heat in the system is a combination of temperature, time, altitude and humidity. Is it impossible to have a clock and a hygrometer at the stations? Each station should produce a ‘heat content’ measure that reflects what the enthalpy of the atmosphere was during the day. While we are transfixed by highs and lows, they are not telling us much about climate.
Quantifying the energy in the air on a daily basis has meaning when discussing climate issues.
An advantage to doing this is it would force discussion about climate to use system energy instead of transient maxes or mins that contain so little information.
The oceans are analysed on the basis of heat content. Why not do that same thing for the atmosphere? Then the two can be summed in a meaningful way.

poitsplace
January 10, 2016 4:06 pm

How about we adjust NOTHING…except documented irregularities based on things like equipment and time of day. We come up with a daily, monthly, and/or yearly temperature based on the data we have, spatially accounted for as well as possible. We re-run the routine several times using various different spatial practices to figure out what the “global temperature” was…and make note of how wildly that changes the results. We note that since we’re not forcing it into a continuous record, it’s even more erratic.
We total up the obvious, overall error between the different spatial accounting practices (because they can’t all be right and maybe none are). We add in the measurement error. We add in the error for the various adjustments that are needed for equipment changes, time of day, etc. AaaaAAAaaand then we leave UHI calculations out of the adjustments and explain to people that thanks to irregular station moves to avoid UHI, often multiple moves per station, we have no freaking clue how much UHI there actually is in the record but that it’s likely present making our already far more erratic results “too high” by some difficult to fathom amount.
…and after all that, we’ll notice it’s virtually impossible to even know with certainty that there’s been warming since the 1940s

Steamboat McGoo
January 10, 2016 4:25 pm

” There are a number of organisations that produce estimates of global temperature from surface measurements. ”
There are also a number of organisations that produce astrological predictions/estimates from birth dates & star/constellation positions.
I don’t have much use for either set of “estimates”.

Alexander Carpenter
January 10, 2016 4:35 pm

How about “template” instead of “model”? Or “local template”, “regional template” and “global template”? Talk about requiring computer power! We could develop budgets as large as the GCM modellers get. More green jobs!
But do we really need to know this “value” to such precision and accuracy? Or are we just falling into the trap of trying to counter every warmist factoid, when the fundamentals of their “settled science” rob their claims of all validity? The burden of proof lies on the claimant, especially for extreme claims where the fundamentals don’t work out.
Beyond that, is it a number that has any real meaning (outside of political manipulation)? What are the climate parameters of a tempest in a teapot? Shall we develop a template for tempests?

ScienceABC123
January 10, 2016 4:52 pm

The use of all temperature data sets is problematic. Some of those data sets are of poor quality, and others have been ‘adjusted’ using unknown methodology. The old saying of “garbage in, garbage out” is very much in play if all data sets are used.

RCS
January 10, 2016 5:02 pm

” To any reasonable person, it is surely inconceivable that these organisations would not retain their original data.”
Tell that to the CRU.
This is simply a method of interpolation and you do not show why it is optimal.
” No temperature measurement affects any estimated temperature outside its own triangles. Within those triangles, its effect decreases with distance.”
Really? Most interpolation schemes require derivatives at the data points, since they follow from Taylor’s theorem. That requires data from outside the triangle in question to determine the derivative, and so data from outside will influence the values within the triangle.
If I have understood you correctly, your scheme is piece-wise discontinuous, which certainly isn’t physically realistic. To obtain continuity through any point, knowledge of the surrounding data is required.

Editor
Reply to  RCS
January 10, 2016 6:08 pm

Yes, it is piece-wise discontinuous – just like actual temperature measurements are discontinuous. The whole point is to have a system that uses every surface temperature measurement unchanged. As I said, the proposed system adjusts the model to fit the temperature measurements. And when I said “fit”, I meant fit exactly – in the final result for each day every temperature measurement is still there and it hasn’t been changed. No matter what you put into the model, it can’t override any measured temperature.
Yes, it is a 2-dimensional interpolation scheme that interpolates within triangles, but no, it doesn’t require derivatives, because when it interpolates within a triangle it doesn’t look outside that triangle. The only temperatures it has are those at the three corners. Instead, it uses an expected temperature pattern (the “model”) for the shape of the field within the triangle. So the model’s pattern survives within the triangle, but the triangle’s corners don’t move. The null model is “all temperatures are the same everywhere every day”. Even that would give respectable results, but a more intelligent model will give better results – how much better can be determined by comparing their ratings. In the end, the system and the model help each other (the system “learns”), but the measured temperatures still reign supreme.
I mention UHE a few times. If the model gets UHE right, then in the results UHE will appear only in urban areas. These are a very small fraction of the globe, so UHE will have very little effect on the global figure. The UHE will still be there, because those places really do have those temperatures. Not all of the urban temperature is natural, but it won’t be worth trying to remove it, because it will be a trivial part of the global whole. One of the major benefits of the proposed system is that once the model “gets” UHE, it won’t let UHE spread its influence beyond the urban areas – and the system itself helps the model to “get” UHE. Once it does, the model can tell you how to remove it.
This system is probably a bit different from anything anyone is used to, so I hope people will clear their minds before trying to understand it. It’s not that it’s difficult – it isn’t – it’s just that I am trying to think outside the box.
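To make the “corners don’t move” idea concrete, here is a minimal Python sketch of one plausible reading (not necessarily the author’s exact formulation): blend the corner residuals against the model with barycentric weights and add them to the model’s own pattern inside the triangle. With the null model (the same value everywhere) this collapses to plain linear interpolation of the three corner measurements.

```python
def barycentric_weights(p, a, b, c):
    """Barycentric coordinates of point p within triangle (a, b, c); points are (x, y)."""
    (x, y), (x1, y1), (x2, y2), (x3, y3) = p, a, b, c
    det = (y2 - y3) * (x1 - x3) + (x3 - x2) * (y1 - y3)
    w1 = ((y2 - y3) * (x - x3) + (x3 - x2) * (y - y3)) / det
    w2 = ((y3 - y1) * (x - x3) + (x1 - x3) * (y - y3)) / det
    return w1, w2, 1.0 - w1 - w2

def estimate(p, corners, measured, model):
    """Model pattern plus a barycentric blend of the corner residuals.
    At each corner the estimate equals that corner's measurement exactly."""
    residuals = [measured[i] - model(corners[i]) for i in range(3)]
    w = barycentric_weights(p, *corners)
    return model(p) + sum(wi * ri for wi, ri in zip(w, residuals))

# Hypothetical triangle with measured corner temperatures (deg C).
corners = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
measured = [10.0, 12.0, 11.0]
null_model = lambda p: 0.0  # "all temperatures are the same everywhere"

print(estimate((0.0, 0.0), corners, measured, null_model))  # 10.0 - corner value preserved
print(estimate((0.3, 0.3), corners, measured, null_model))  # 10.9 - linear blend of 10, 12, 11
```

The point being illustrated is only that, whatever pattern the model supplies inside the triangle, the three measured corner values come through unchanged.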

Nick Stokes
Reply to  Mike Jonas
January 10, 2016 7:22 pm

“Yes, it is piece-wise discontinuous – just like actual temperature measurements are discontinuous.”
Actually, it isn’t. You are just linearly interpolating within each triangle – as if you pulled a membrane tightly over the values. It is just the kind of interpolation done in finite elements. The derivatives are discontinuous.
I draw plots of temperature anomalies interpolated in this way, e.g. here. The node values are exact, and color shading shows the values in between. There is a version here where the shading isn’t as good, but you can show the mesh.

RCS
Reply to  Mike Jonas
January 11, 2016 5:16 am

Of course a proper interpolation scheme can be used to preserve data points and make the temperature continuous.
The problem with your post is that you don’t specify the model in a meaningful way apart from saying that it is intelligent.
As regards “thinking outside the box”, surely you mean thinking outside the triangle!

Editor
Reply to  Mike Jonas
January 11, 2016 12:47 pm

Nick – re discontinuous / derivatives. You are correct, of course.

higley7
January 10, 2016 5:45 pm

An interesting test would be to take a set of four sites where one site lies in the middle of the triangle formed by the other three. Calculate the average temperature from all four, then delete the centre site and calculate the average from the outer three alone. It would be very interesting to see how similar the answers are.
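A quick Python sketch of that test with invented numbers (real station data would be needed for it to mean anything); for a site at the centroid, linear interpolation over the triangle assigns it simply the mean of the three corner values:

```python
# Hypothetical stations: three corners of a triangle plus one site at its centroid.
outer = [14.2, 15.1, 13.7]   # measured temperatures at the outer sites (deg C)
centre_measured = 16.0       # measured temperature at the central site

avg_all_four = (sum(outer) + centre_measured) / 4
avg_outer_only = sum(outer) / 3

# At the centroid, linear interpolation over the triangle gives equal (1/3) weights,
# i.e. the interpolated centre value is just the mean of the three corners.
centre_interpolated = sum(outer) / 3

print(f"average of all four sites   : {avg_all_four:.2f} C")
print(f"average of outer three only : {avg_outer_only:.2f} C")
print(f"centre: measured {centre_measured:.1f} C vs interpolated {centre_interpolated:.2f} C")
```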

fahutex
January 10, 2016 6:10 pm

A caveat up front: I am not a climate scientist but a simple physicist, for which I thank the circumstances of my entry into science. That said, I have long thought that the notion of a “global average temperature” (GAT), constructed from a sparse set of mixed-quality data statistically infilled (and outfilled) spatially and temporally to simulate global coverage, is poorly suited to discerning trends that are presumably rooted in the thermodynamics of the global climate system (GCS). Concerns have frequently been voiced here and elsewhere about the utility, from the perspective of physics, of extensive versus intensive variables, and about the difficulty of properly defining and constraining the thermodynamics of the GCS. Couple that with the cumbersome, intricate and in many cases arcane business of continually adjusting mixed-quality point measurements so as to average across space and time (and hence across thermodynamic regimes), and I am unconvinced that such GATs represent much more than the evolution of the adjustment process. Thankfully it is not my job to worry about such things. The only downside is that I have already broken my 2016 resolution to stop spending valuable time reading and thinking about climate science.
All that said, I have long thought a much more useful approach would be to treat the temperature records the way the stock market is treated, and simply construct an index (or indices) with as consistent a definition of its components as possible. This would mean abandoning the notion of total global coverage, since that is already of dubious thermodynamic quality. Just as the Dow Jones average selects a certain set of companies based on size, longevity, quality of financials and so on, weighted simply by market price, one could select a set of temperature records based on the precision and accuracy of the instrument, the transparency of its record keeping and the suitability of its measurement protocols, perhaps weighted by instrument accuracy.
One could attempt to obtain as extensive a global set of high-quality measurements as possible, but with no attempt to be complete. Rather, the goal would be a consistently high-quality set of measurements sampling the globe. Sea surface temperature would be tough, because many of those sources move about, so perhaps satellite sea surface temperatures could be used. Sites could be selected to be as free as possible from complicating issues such as changes in land use. I think I have read of analyses of some subset of the continental U.S. stations that are of high quality, called GHCN or something like that.
The result would admittedly be only an index, and it would likely be difficult to construct a quality index historically, but one starting now or within the modern era would seem to be an improvement. Tracking the trends in such an index would seem to me to be as useful as constantly picking apart attempts to fiddle with all the mixed data to obtain a GAT. I presume global climate model output could just as easily be extracted for comparison with such an extensive-but-incomplete set of data as is done now against the existing GATs. The behaviour of such an index would not be easily relatable to the complete thermodynamics of the GCS, but neither is the behaviour of the existing GATs. It would, however, be more free of adjustment-induced variations and more directly related to temperature trends, at least for the set of points sampled.
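A toy Python sketch of the index idea; every station name, quality score and anomaly below is invented, and a real index would need documented selection criteria and long, unbroken records:

```python
# Hypothetical records: (name, quality weight in [0, 1], temperature anomaly in C).
# Stations failing the quality threshold are simply excluded rather than "adjusted".
stations = [
    ("rural_A",   0.9, 0.31),
    ("rural_B",   0.8, 0.27),
    ("airport_C", 0.4, 0.55),   # below threshold: left out of the index entirely
    ("island_D",  0.7, 0.22),
]

QUALITY_THRESHOLD = 0.6

selected = [(w, a) for _, w, a in stations if w >= QUALITY_THRESHOLD]
index = sum(w * a for w, a in selected) / sum(w for w, _ in selected)
print(f"quality-weighted temperature index: {index:.3f} C "
      f"({len(selected)} of {len(stations)} stations used)")
```

The stock-market analogy carries over directly: the index tracks a fixed, documented basket of stations rather than attempting complete coverage.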

Reply to  fahutex
January 10, 2016 8:28 pm

fahutex says: “One could attempt to obtain as extensive a global set of high quality measurements as possible, but with no attempt to be complete. Rather the goal would be to have a consistently high quality set of measurements sampling the globe.”
This is just so sensible! I waste so much time wondering why GISS et al. use such a convoluted process. Take high-quality stations only, and get as many around the world as possible. Bad data is just that: bad. Why are we mixing clean and dirty water together?

Richard Keen
Reply to  fahutex
January 10, 2016 10:03 pm

fahutex says: January 10, 2016 at 6:10 pm: I think I have read of analyses of some subset of the continental U.S. stations that are of high quality …
… Yes, it’s called the CRN, the Climate Reference Network. Anthony has written about it several times and shown the data. Unfortunately, it shows no warming over the past ten years or so, so NCDC sticks to its analysis of less suitable stations, because it can find ways to adjust those to provide the required warming.

Reply to  Richard Keen
January 10, 2016 10:30 pm

“NCDC sticks to their analysis of less suitable stations”
In fact, anomalies from USHCN stations and USCRN are virtually identical:
http://www.moyhu.org.s3.amazonaws.com/2016/1/uscrn.png

Richard Keen
Reply to  fahutex
January 10, 2016 10:21 pm

fahutex also says: January 10, 2016 at 6:10 pm: I am unconvinced that such GATs represent much more than the adjustment process evolution…
… I like to compare it to cheese. You start with milk, and with a little effort make great cheddar. More processing and you get Velv**ta, known more as “processed cheese product” than as cheese. In this case, just as with “global temperatures”, the result says more about the processing than the initial input (observations and/or milk). I suspect that with Velv**ta, you could start with motor oil and end up with the same thing. With climate data, you can have observations that show warming, cooling, or cycles, but 2015 will always come up as the warmest year (until 2016).

aussiepete
Reply to  fahutex
January 10, 2016 11:16 pm

Thank you fahutex.
I am not a scientist in any shape or form, but I believe I am blessed with a goodly amount of common sense and I have been a voracious reader, particularly of WUWT. Your contribution to this subject is the best I’ve read anywhere, and it gives me confidence that one day common sense may return to climate science.

richardscourtney
Reply to  fahutex
January 11, 2016 6:48 am

fahutex:
You say

All that said, I have long thought a much more useful approach would be to treat the temperature records as the stock market is treated, and simply construct an index (or indices) with as consistent a definition of its components as possible.

Yes, that is one of the options stated in Appendix B of this item which I suspect you may want to read.
Richard

nankerphelge
January 10, 2016 6:17 pm

A novel and interesting approach, but it would be absolutely unacceptable to GISS, NOAA, etc., as they need to adjust figures to get the required outcome.

dp
January 10, 2016 6:21 pm

“Over time, the model can be refined and the calculations can be re-run to achieve (hopefully) better results.”
Wait – that is exactly what goes on now. Who will decide how this is implemented, and what criteria would apply before the NextGen temperature tracking system gets fiddled with?

JohnWho
Reply to  dp
January 10, 2016 6:28 pm

Isn’t what is being done now more along the lines of data being altered and calculations re-run to achieve pre-determined results?

dp
Reply to  JohnWho
January 10, 2016 8:01 pm

I think it is more about intent than about the method of getting the desired result. The goal is to get a desired result; otherwise they would just leave things alone. I don’t see anything in the NextGen methodology to prevent that, and the opportunity seems to be built into the process (see the quote).

Michael C
January 10, 2016 6:36 pm

OK – you have been given an unlimited budget to set up a temperature data system that you KNOW will give annual global averages for land, sea and atmosphere to within a margin of error of 1 °C. What would it be?
Instead of the rhetoric, bitterness and blame, this is what the world should be doing. But first we must get the various institutions to publicly admit their estimated (and audited) margins of error.
The key to exposing the truth (or lack of it) over this issue is to shout, loud and clear, over and over, to all involved parties: “WHAT ARE YOUR MARGINS OF ERROR?” They should not be permitted to slide out from under this question, and it needs to become the catch-cry of anyone concerned about this issue.
Once it becomes clear that the MOEs are larger than, for example, the degree to which records are being broken, the public and governments will begin to understand (or be made to).

myslewski
January 10, 2016 6:39 pm

Any honest commenter must have noticed that the contrarian side of the “Warming? Not Warming!” argument has switched from arguments against the basic physics (lost those), to hiatus assertions (lost those), to the current attacks on temperature-assessment methodologies.
There must be some Latinate rhetorical term for “searching for an argumentative technique that supports a pre-decided conclusion”, but I’m afraid I’m unaware of it …

Ron House
Reply to  myslewski
January 10, 2016 7:43 pm

Troll writes: “… the contrarian side of the “Warming? Not Warming!” argument has switched from arguments against the basic physics (lost those) to …”
Sorry. The CAGW hypothesis predicts:
1) A warming trend in the troposphere (“hot spot”) – sorry, not there.
2) More H2O in the upper troposphere – sorry, there’s less.
3) Less radiation to space – sorry, there’s more.
Hypothesis disproved 3 times over.
“…to hiatus-assertion (lost those)…”
The existence of an hiatus was never a part of the reason for disbelieving this theory – its proven failure to get core predictions right is the reason. But since you mention it, the “hiatus” is alleged to have stopped because the ground is warming. But the troposphere isn’t, and the theory predicts:
4) The troposphere will warm faster than the ground – sorry, the atmosphere is still flat while the ground is warming – theory disproved for a fourth time.
“… to current attacks on temperature-assessment methodologies.”
Like the date of the month corrections in Australia that go up on the first of the month, down on the first of the next, up, down, and all around? Like those “attacks”? Well YEAH! The temperature products are demonstrably corrupt, no shadow of a doubt.
The fact that you and those like you believe in a theory that is four times disproved and based on proven malpractice and/or incompetence isn’t a criticism of US, it’s a criticism of YOU.

AndyG55
Reply to  myslewski
January 11, 2016 1:23 am

“arguments against the basic physics (lost those)”
Seriously?
Without any knowledge of basic physics, how the heck would you have any idea about anything?
All the “physics” arguments are well and truly on the ANTI-AGW side of reality.

richardscourtney
Reply to  myslewski
January 11, 2016 7:01 am

myslewski:
In addition to Ron House’s excellent rebuttal of your untrue assertions, I address your claim about “hiatus-assertion (lost those)”.
The IPCC says you are plain wrong.
Box 9.2 on page 769 of Chapter 9 of the IPCC AR5 Working Group 1 report (i.e. the most recent IPCC so-called science report) is here, and it says

Figure 9.8 demonstrates that 15-year-long hiatus periods are common in both the observed and CMIP5 historical GMST time series (see also Section 2.4.3, Figure 2.20; Easterling and Wehner, 2009; Liebmann et al., 2010). However, an analysis of the full suite of CMIP5 historical simulations (augmented for the period 2006–2012 by RCP4.5 simulations, Section 9.3.2) reveals that 111 out of 114 realizations show a GMST trend over 1998–2012 that is higher than the entire HadCRUT4 trend ensemble (Box 9.2 Figure 1a; CMIP5 ensemble mean trend is 0.21ºC per decade). This difference between simulated and observed trends could be caused by some combination of (a) internal climate variability, (b) missing or incorrect radiative forcing and (c) model response error. These potential sources of the difference, which are not mutually exclusive, are assessed below, as is the cause of the observed GMST trend hiatus.

GMST trend is global mean surface temperature trend.
and
A “hiatus” is a stop.
So, the quoted IPCC Box provides two definitions of the ‘hiatus’; viz.
(a) The ‘pause’ is “a GMST trend over 1998–2012 that is higher than the entire HadCRUT4 trend ensemble”.
And
(b) The ‘pause’ is “the observed GMST trend hiatus”.
And the IPCC says the ‘hiatus’ exists whichever definition of the ‘hiatus’ is used.
How do you define “lost those”?
Richard
