A New System for Determining Global Temperature

Guest essay by Mike Jonas

Introduction

There are a number of organisations that produce estimates of global temperature from surface measurements. They include the UK Met Office Hadley Centre, the Goddard Institute for Space Studies (GISS) and Berkeley Earth, among others.

They all suffer from a number of problems. Here, an alternative method of deriving global temperature from surface measurements is proposed, which addresses some of those problems.

Note: The terms global temperature and regional temperature will be used here to refer to some kind of averaged surface temperature for the globe or for a region. It could be claimed that these would not be real temperatures, but I think that they would still be useful indicators.

The Problems

Some of the problems of the existing systems are:

· Some systems use temperature measurements from surrounding weather stations (or equivalent) to adjust a station’s temperature measurements or to replace missing temperature measurements. Those adjusted temperatures are then used like measured temperatures in ongoing calculations.

· The problem with this method is that surrounding stations are often a significant distance away and/or in very different locations, and their temperatures may be a poor guide to the missing temperatures.

· Some systems use a station’s temperature and/or the temperatures of surrounding stations over time to adjust a station’s temperature measurements, so that they appear to be consistent. (I refer to these as trend-based adjustments).

· There is a similar problem with this method. For example, higher-trending urban stations, which are unreliable because of the Urban Heat Effect (UHE), can be used to adjust more reliable lower-trending rural stations.

· Some systems do not make allowances for changes in a station, for example new equipment, a move to a nearby location, or re-painting. Such changes can cause a step-change in measured temperatures. Other systems treat such a change as creating a new station.

· Both these methods have problems. Systems that make no allowance can make inappropriate trend-based adjustments, because the step-change is not identified. Systems that create a new station can also make inappropriate trend-based adjustments. For example, if a station’s paint deteriorates, its measurements may acquire an invalid trend. Re-painting rectifies the error, but by treating the repainted station as a new station the system then incorporates the invalid trend into its calculations.

There are other problems, of course, but a common theme is that individual temperature measurements are adjusted or estimated from other stations and/or other dates before they are used in the ongoing calculations. In other words, the set of temperature measurements is changed to fit an expected model before it is used. [“Model” in this sense refers to certain expectations of consistency between neighbouring stations or of temperature trends. It does not mean “computer model” or “computer climate model”.]

The Proposed New System

The proposed new system uses the set of all temperature measurements and a model. It adjusts the model to fit the temperature measurements. [As before, “model” here refers to a temperature pattern. It does not mean “computer model” or “computer climate model”.]

Over time, the model can be refined and the calculations can be re-run to achieve (hopefully) better results.

The proposed system does not on its own solve all problems. For example, there will be some temperature measurements that are incorrect or unreliable in some significant way and will genuinely need to be adjusted or deleted. This issue is addressed later in this article.

For the purpose of describing the system, I will begin by assuming that the basic time unit is one day. I will also not specify which temperature I mean by the temperature, but the entire system could for example be run separately for daily minimum and maximum temperatures. Other variations would be possible but are not covered here.

The basic system is described below under the two subheadings: “The Model” and “The System”.

The Model

The model takes into account those factors which affect the overall pattern of temperature. A very simple initial model could use, for example, time of year, latitude, altitude and urban density, with a simple factor applied to each, eg. x degrees C per metre of altitude.

The model can then be used to generate a temperature pattern across the globe for any given day. Note that the pattern has a shape but it doesn’t have any temperatures.

So, using a summer day in the UK as an example, the model is likely to show Scottish lowlands as being warmer than the same-latitude Scottish highlands but cooler than the further-south English lowlands, which in turn would be cooler than urban London.
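
To make this concrete, here is a minimal sketch in Python of what such a pattern function might look like. The function name, inputs and coefficients are illustrative assumptions rather than part of the proposal; only the roughly 6.5 deg C per km lapse rate is a standard figure.

```python
# Minimal sketch of a "model" in the sense used above: a relative temperature
# pattern, not a computer climate model. All coefficients except the ~6.5 C/km
# lapse rate are illustrative placeholders.
import math

def model_pattern(day_of_year, latitude_deg, altitude_m, urban_density):
    """Relative temperature (deg C) for a location on a given day.

    Only the shape matters: the system later shifts the pattern up or down
    so that it passes exactly through the measured temperatures.
    """
    hemisphere = 1.0 if latitude_deg >= 0 else -1.0
    seasonal = -10.0 * hemisphere * math.cos(2 * math.pi * (day_of_year - 15) / 365.25)
    latitudinal = -0.6 * abs(latitude_deg)      # cooler towards the poles
    lapse = -0.0065 * altitude_m                # roughly 6.5 deg C per km of altitude
    urban = 2.0 * urban_density                 # urban_density assumed to lie in [0, 1]
    return seasonal + latitudinal + lapse + urban

# e.g. mid-July, lowland Scotland vs. urban London
print(model_pattern(196, 56.0, 100.0, 0.1), model_pattern(196, 51.5, 20.0, 0.9))
```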

The System

On any given day, there is one temperature measurement for each weather station (or equivalent) active on that day; ie. there is a set of points (locations), each of which has one temperature measurement.

These points are then triangulated. That is, a set of triangles is fitted to the points, like this:

[Figure: example triangulation of station locations]

Note: the triangulation is optimised to minimise total line length. So, for example, line GH is used, not FJ, because GH is shorter.
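
In practice, a standard library routine can build the triangulation and locate the triangle containing any other point. The sketch below uses a Delaunay triangulation, a common and well-behaved choice; strictly it maximises the minimum angle rather than minimising total line length, so it is close in spirit to, but not identical with, the criterion described above. The station coordinates are made-up placeholders.

```python
# Sketch: triangulate one day's station locations and find which triangle
# contains a query point, using scipy's Delaunay triangulation. Coordinates
# are placeholders in some projected (x, y) system.
import numpy as np
from scipy.spatial import Delaunay

stations = np.array([[0.0, 0.0], [10.0, 0.0], [5.0, 8.0], [15.0, 7.0], [2.0, 12.0]])
tri = Delaunay(stations)

query = np.array([[6.0, 4.0]])
simplex = tri.find_simplex(query)       # index of the containing triangle (-1 if outside all of them)
corners = tri.simplices[simplex[0]]     # the three corner stations of that triangle
print("query point lies in the triangle formed by stations", corners)
```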

The model is then fitted to all the points. The triangles are used to estimate the temperatures at all other points by reference to the three corners of the triangle in which they are located. In simple terms, within each triangle the model retains its shape while its three corners are each moved up or down to match their measured temperatures. (For points on one of the lines, it doesn’t matter which triangle is used, the result is the same).

I can illustrate the system with a simple 1D example (ie. along a line). On a given day, suppose that along the line between two points the model looks like:

[Figure: the model’s temperature shape along the line between the two points]

If the measured temperatures at the two points on that day were say 12 and 17 deg C, then the system’s estimated temperatures would use the model with its ends shifted up or down to match the start and end points:

[Figure: the model shape with its ends shifted to match the measured 12 and 17 deg C]
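
In code, this end-shifting might look like the following sketch, where only the 12 and 17 deg C end measurements come from the example and the model shape is an arbitrary placeholder. In 2D the same idea applies within each triangle, with the three corner offsets blended by barycentric weights instead of the linear blend used here.

```python
# Sketch of the 1D example: keep the model's shape between two stations while
# shifting its ends so it passes exactly through the measured 12 and 17 deg C.
import numpy as np

x = np.linspace(0.0, 1.0, 11)                # 0 = first station, 1 = second station
model = 14.0 + 3.0 * np.sin(np.pi * x)       # hypothetical model shape along the line
measured_a, measured_b = 12.0, 17.0

offset_a = measured_a - model[0]             # shift required at the first station
offset_b = measured_b - model[-1]            # shift required at the second station
estimate = model + (1.0 - x) * offset_a + x * offset_b   # blend the shifts linearly in between

assert abs(estimate[0] - measured_a) < 1e-9
assert abs(estimate[-1] - measured_b) < 1e-9
```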

Advantages

There are a number of advantages to this approach:

· All temperature measurements are used unadjusted. (But see below re adjustments).

· The system takes no notice of any temperature trends and has no preconceived ideas about trends. Trends can be obtained later, as required, from the final results. (There may be some kinds of trend in the model, for example seasonal trends, but they are all “overruled” at every measured temperature.)

· The system does not care which stations have gaps in their record. Even if a station only has a single temperature measurement in its lifetime, it is used just like every other temperature measurement.

· No estimated temperature is used to estimate the temperature anywhere else. So, for example, when there is a day missing in a station’s temperature record then that station is not involved in the triangulation that day. The system can provide an estimate for that station’s location on that day, but it is not used in any calculation for any other temperature.

· No temperature measurement affects any estimated temperature outside its own triangles. Within those triangles, its effect decreases with distance.

· No temperature measurement affects any temperature on any other day.

· The system can use moving temperature measurement devices, eg. on ships, provided the model or the device caters for things like time of day.

· The system can “learn”, ie. its results can be used to refine the model, which in turn can improve the system (more on this later). In particular, its treatment of UHE can be validated and re-tuned if necessary.

Disadvantages

Disadvantages include:

· Substantial computer power may be needed.

· There may be significant local distortions on a day-to-day basis. For example, the presence or absence of a single measurement from one remote station could significantly affect a substantial area on that day.

· The proposed system does not solve all the problems of existing systems.

· The proposed system does not completely remove the need for adjustments to measured temperatures (more on this later).

System Design

There are a number of ways in which the system could be designed. For example, it could use a regular grid of points around the globe, and estimate the temperature for each point each day, then average the grid points for global and regional temperatures. Testing would show which grid spacings gave the best results for the least computer power.
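
As one illustration of the averaging step, the sketch below weights a regular latitude-longitude grid by cos(latitude) so that grid points near the poles, where the meridians converge, are not over-counted. The 5-degree spacing and the placeholder temperatures are arbitrary choices.

```python
# Sketch of the grid-averaging step: weight each grid point by cos(latitude)
# so that converging meridians near the poles are not over-counted.
import numpy as np

lat = np.arange(-87.5, 90.0, 5.0)            # grid-cell centres
lon = np.arange(-177.5, 180.0, 5.0)
temps = np.random.default_rng(0).normal(14.0, 10.0, (lat.size, lon.size))  # stand-in estimates

weights = np.cos(np.radians(lat))[:, None] * np.ones((1, lon.size))
print("area-weighted global mean:", np.average(temps, weights=weights))
```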

Better and simpler designs may well be possible.

Note: Whenever long distances are involved in the triangulation process, the Earth’s surface curvature could matter.
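
For example, distances between widely separated stations would need to be great-circle distances rather than straight lines; a minimal haversine sketch:

```python
# Great-circle (haversine) distance between two stations given in degrees.
import math

def haversine_km(lat1, lon1, lat2, lon2, radius_km=6371.0):
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    return 2.0 * radius_km * math.asin(math.sqrt(a))

print(haversine_km(51.5, -0.1, 55.9, -3.2))   # London to Edinburgh, roughly 530 km
```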

Discussion

One of the early objectives of the new system would be to refine the model so that it better matched the measured temperatures, thus giving better estimated temperatures. Most model changes are expected to make very little difference to the global temperature, because measured temperatures override the model. After a while, the principal objective for improving the model would not be a better global temperature, it would be … a better model. Eventually, the model might contribute to the development of real climate models, that is, models that work with climate rather than with weather (see Inside the Climate Computer Models).

Oceans would be a significant issue, since data is very sparse over significant ocean areas. The model for ocean areas is likely to affect global averages much more than the model for land areas. Note that ocean or land areas with sparse temperature data will always add to uncertainty, regardless of the method used.

I stated above (“Disadvantages”) that the proposed system does not completely remove the need for adjustments to measured temperatures. In general, individual station errors don’t matter provided they are reasonably random and not systemic, because they will average out over time and because each error affects only a limited area (its own triangles) on one day only. So, for example, although it would be tempting to delete obviously wrong measurements, it is better to leave them in if there are not too many of them: they have little impact, and leaving them in avoids having to justify and document their removal. The end result would be a simpler system, easier to follow, to check and to replicate, and less open to misuse (see “Misuse” below), although there would be more day-to-day variation.

Systemic errors do matter, because they can introduce a bias, so adjustments for these should be made, and the adjustments should be justified and documented. An example of a systemic error could be a widespread change to the time of day at which max-min thermometers are read. Many of the systemic errors have already been analysed by the various temperature organisations.

It would be very important to retain all original data, so that all runs of the system using adjusted measurements can be compared with runs using the original data, in order to quantify the effect of the adjustments and to assist in detecting bias.

Some stations may be so unreliable or poorly sited that they are best omitted. For example, stations near air-conditioner outlets, or at airports where they receive blasts from aircraft engines.

The issue of “significant local distortions on a day-to-day basis” should simply be accepted as a feature of the system. It is really only an artefact of the sparseness and variability of the temperature measurement coverage. The first aim of the system is to provide regional and global temperatures and their trends. Even a change to a station that caused a step change in its data (such as new equipment, a move to a nearby location, or re-painting) would not matter much, because each station influences only its own triangles. It would matter, however, if such step-changes were consistent and widespread, ie. they would matter if they could introduce a significant bias at a regional or global level.

It wouldn’t even matter if at a given location on a particular day the estimated maximum temperature was lower than the estimated minimum temperature. This could happen if, for example, among the nearby stations some had maximum temperatures missing while some other stations had minimum temperatures missing. (With a perfect model, it couldn’t happen, but of course the model can never be perfect.)

All the usual testing methods would be used, like using subsets of the data. For example, the representation of UHE in the model can be tested by calculating with and without temperature measurements on the outskirts of urban areas, and then comparing the results at those locations.
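
Such a with-and-without-subset test might look like the sketch below, which uses an off-the-shelf piecewise-linear interpolator as a stand-in for the full system and compares its estimates at the withheld station locations against the measurements actually made there. All station data here are made-up placeholders.

```python
# Sketch of a with/without-subset test, using scipy's piecewise-linear
# interpolator as a stand-in for the full system.
import numpy as np
from scipy.interpolate import LinearNDInterpolator

rng = np.random.default_rng(1)
locs = rng.uniform(0.0, 100.0, size=(40, 2))                  # hypothetical station locations
temps = 15.0 + 0.05 * locs[:, 0] + rng.normal(0.0, 0.5, 40)   # hypothetical measurements

held_out = np.arange(0, 40, 5)                                # e.g. urban-fringe stations
kept = np.setdiff1d(np.arange(40), held_out)

reduced = LinearNDInterpolator(locs[kept], temps[kept])       # run without the withheld stations
diff = temps[held_out] - reduced(locs[held_out])              # NaN where a point falls outside the reduced hull
print("mean |measured - estimated| at withheld stations:", np.nanmean(np.abs(diff)))
```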

All sorts of other factors can be built into the model, some of which may change over time – eg. proximity to ocean, ocean currents, average cloud cover, actual hours of sunshine, ENSO and other ocean oscillations, and many more. Assuming that the necessary data is available, of course.

Evaluation

Each run of the system can produce ratings that give some indication of how reliable the results are:

· How much the model had to be adjusted to fit the temperature measurements.

· How well the temperature measurements covered the globe.

The ratings could be summarised globally, by region, and by period.

Some station records include an indication of measurement reliability; this could be incorporated into the ratings. A sketch of how such ratings might be computed is given below.
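
One possible form for the two ratings, sketched in Python; the coverage metric used here (mean distance from grid points to the nearest station) is just one of several reasonable choices, and all inputs are placeholders:

```python
# Sketch of the two ratings: (a) how much the model had to be shifted to fit
# the measurements, (b) how well the stations cover the region of interest.
import numpy as np
from scipy.spatial import cKDTree

def run_ratings(model_at_stations, measured, station_locs, grid_locs):
    fit_rating = float(np.mean(np.abs(measured - model_at_stations)))   # mean shift applied to the model
    dist, _ = cKDTree(station_locs).query(grid_locs)                    # distance from grid points to nearest station
    coverage_rating = float(np.mean(dist))                              # smaller = better coverage
    return fit_rating, coverage_rating

print(run_ratings(np.array([10.0, 12.0]), np.array([11.0, 12.5]),
                  np.array([[0.0, 0.0], [10.0, 10.0]]),
                  np.array([[5.0, 5.0], [20.0, 0.0]])))
```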

Misuse

Like all systems, the proposed system would be open to misuse, but perhaps not as much as existing systems.

Bias could still be introduced into the system by adjusting historical temperature measurements – eg. to increase the rate of warming by lowering past temperatures. The proposed system does make this a bit more difficult, because it removes some of the reasons for adjusting past temperature measurements. In particular, temperature measurements cannot be adjusted to fit surrounding measurements, and they cannot be adjusted to fit a model (deletion of “outliers” is an example of this). If such a bias was introduced, the ratings (see “Evaluation” above) would not be affected, so they would not be able to assist in detecting the bias. The bias could be detected by comparing results against results from unadjusted data, but proving it against a determined defence could be very difficult.

Bias could still be introduced into the system by exploiting large areas with no temperature measurements, such as the Arctic, but the proposed system also makes this a bit more difficult. In order to exploit such areas, the model would need to be designed to generate change within the unmeasured area. So, for example, a corrupt model could make the centre of the Arctic warmer over time relative to the outer regions where the weather stations are. It would be possible to detect this type of corruption via the ratings (a high proportion of the temperature trend would come from a region with a low coverage rating), but again proof would be difficult.

NB. In talking about misuse, I am not in any way suggesting that misuse does or would occur. I am simply checking the proposed system for weaknesses. There may be other weaknesses that I have not identified.

Conclusion

It would be very interesting to implement such a system, because it would operate very differently to current systems and would therefore provide a genuine alternative to and check against the current systems. Raising the necessary funding could be a major hurdle.

The system could also, I think, be used to check some weather and climate theories against historical temperature data, because (a) it handles incomplete temperature data, (b) it provides a structure (the model) in which such theories can be represented, and (c) it provides ratings for evaluation of the theories.

Provided that the system could be used without needing too much computer power, it could be suitable for open-source/cooperative environments where users could check each other’s results and develop cooperatively. The fact that the system is relatively easy to understand, unlike the current set of climate models, would be a big advantage.

Footnotes

1. I hope that the use of the word “model” does not cause confusion. I tried to make it clear that the model I refer to in this document is a temperature pattern, not a computer model. I tried some other words, but they didn’t really work.

2. I assume that unadjusted temperature data is available from the temperature organisations (weather bureaux etc). To any reasonable person, it is surely inconceivable that these organisations would not retain their original data.

###

Mike Jonas (MA Maths Oxford UK) retired some years ago after nearly 40 years in I.T.

Abbreviations

C – Centigrade or Celsius

ENSO – El Niño Southern Oscillation

GISS – Goddard Institute for Space Studies

UHE – Urban Heat Effect

1D – 1-dimensional

Comments

ntesdorf
January 10, 2016 2:07 pm

The Warmistas will never accept this new System of measurement. If temperature measurements are used unadjusted then there will be no ability to show the required rise in Global Temperature demanded by CAGW.

Auto
Reply to  ntesdorf
January 10, 2016 2:42 pm

ntes
Plus a lot
Auto

paul
Reply to  ntesdorf
January 11, 2016 4:28 am

what about a sensor on the moon pointing back at earth

gnomish
Reply to  paul
January 11, 2016 10:26 am

brilliant.
ne plus ultra.
but it does not justify stealing.
there is never any justification for violating a person’s rights, period.

george e. smith
Reply to  ntesdorf
January 11, 2016 9:53 am

“””””….. Abbreviations
C – Centigrade or Celsius …..”””””
So make up your mind; which is it.
Centigrade and Celsius are NOT the same.
C is Celsius in the SI system of units.
A centigrade scale is simply any linear scale that goes from zero to 100.
It could be simply the percentage of people who believe in catastrophic calamitous man made global warming climate change (CCMMGWCC) ;
izzat 97% of all scientists; or it could simply be your score on a school term paper.
The Celsius scale IS a centigrade scale; but C does not mean centigrade.
g

george e. smith
Reply to  ntesdorf
January 11, 2016 9:58 am

OK, so I’ll bite.
Just where can I find a reference to this new system of global temperature measurement.
Whatever happened to the concept of simply measuring the temperature at times and places on the earth surface that conform to the sampling requirements of standard sampled data system theory; i.e. Nyquist.
Seems to me that would work.
I think that might be the only thing that would work. Anything less is just BS.
G

Editor
Reply to  george e. smith
January 11, 2016 10:55 am

G – point taken re Centigrade. re sampling requirements, we can’t go back in time and re-sample.

David Riser
January 10, 2016 2:10 pm

The real issue with either idea – how we do it now or your suggestion – is that the mathematical model of temperature differences fails to distinguish problems from reality. A weather front can come through and look like a step change no matter how you measure it, invalidating any attempt to mash the numbers. A single day can contain more than one high and low. Highs and lows can be vastly different over short time frames. Finally, clouds have a huge impact on what the local temperature is at any given time, and models don’t do clouds. So the raw data is really the best we have. The raw data of ideal stations is even better. But adjustments as a whole are just bs.

Auto
Reply to  David Riser
January 10, 2016 2:45 pm

David R
Broadly agree.
Weather is variable – even chaotic.
Your
“But adjustments as a whole are just bs.” is noted, appreciated and mightily agreed with. In spades.
Auto – as self-effacing as ever!

Editor
Reply to  David Riser
January 10, 2016 4:15 pm

re weather fronts, please see my comment below http://wattsupwiththat.com/2016/01/10/a-new-system-for-determining-global-temperature/comment-page-1/#comment-2117126 (the same applies to clouds, etc).
“So the raw data is really the best we have.” Absolutely.

Samuel C Cogar
Reply to  David Riser
January 11, 2016 3:03 am

David R,
Most any type of thermometer or thermocouple that is immersed in a gallon of -40 F antifreeze will do clouds with ease

Samuel C Cogar
Reply to  Samuel C Cogar
January 11, 2016 3:26 am

And ps, ….. auto manufacturers solved their “average temperature” problem many, many years ago by the placement of a simple spring-loaded thermostat in their vehicle’s radiator.

Duster
Reply to  David Riser
January 11, 2016 10:13 am

Another “problem” is not in the station but in the reported location over time. That can yield pseudo moves that are due to rounding and to datum shifts that are not reported or accounted for. For instance using standard USGS 7.5-minute paper topographic maps in the continental US generally means using the NAD27 (North American 1927 Datum) datum for any map older than about 1980. Later terrain maps employ WGS 84 (same datum used by a GPS unit) or NAD83, which is to most intents the same as WGS84. The difference between NAD27 and WGS84 can be well over 100 meters. This can appear to automated data evaluation systems as a “move.” This can have significant “effects” on mapped locations if the individual doing the locating does not note and record the datum used for the fix. Additional error can be introduced if the location reported is corrected to a newer datum and the processing software corrects the location again because the programmer made an assumption about reported locations.
Another major “effect” is simple rounding of latitude and longitude figures. A second of longitude is roughly 30 meters at 45-degrees North (A bit over 100 feet). If a lat-lon location is only reported to a minute of accuracy that error is presumably +/- 30 seconds. Some of these errors are systematic rather than random.
IIRC, Anthony wrote up a station in New York state some years ago that appeared in the data to have been “moved” several times, but had been moved once, less than 100 feet, when a new system replaced the older installation.

tomwys1
January 10, 2016 2:21 pm

I do not see any marked improvement over the current RSS data set, whose only “deficiency” is that Urban Heat Islands won’t overwhelm it. That being said, since most people actually live in/on these UHI affected areas, perhaps a UHI bias needs to be taken into account??? If not, then I’ll go with RSS!!!

Auto
Reply to  tomwys1
January 10, 2016 2:48 pm

tomwy
A variety of ways of measuring – even if one or more may be imperfect – at least allows dispassionate observers to look – carefully – at the data.
This may – or may not – be an improvement ( I think it probably is, if not hugely so) – but the re-evaluation more than justifies the effort, I suggest.
Auto

Dahlquist
Reply to  Auto
January 11, 2016 1:19 am

Reply to  tomwys1
January 10, 2016 3:01 pm

I would agree, so long as we place a weather station inside every shopping mall.

Reply to  tomwys1
January 10, 2016 6:58 pm

Radiosonde data presented in figure 7 of http://www.drroyspencer.com/2015/04/version-6-0-of-the-uah-temperature-dataset-released-new-lt-trend-0-11-cdecade indicates that the surface-adjacent troposphere warmed .02, maybe .03 degree/decade more than the satellite-measured lower troposphere. One reason I suspect this happened is because decreasing ice and snow cover increased the lapse rate overall in the lowest ~1-1.5 kilometers of the lower troposphere.

Dale Hartz
Reply to  tomwys1
January 10, 2016 7:01 pm

We actually have three perfectly good temperature systems; RSS and UAH on a 5 degree grid, and Radiosonde(70,000 balloons annually). The UHI effect is very small as metropolitan areas only represent about 1% of the earth’s surface. The oceans represent almost 75%, and sparsely-populated/ unoccupied areas cover the rest of the planet.
The surface station system should be abolished and most scientists know it. So it would remove CAGW from climate science. The temperature differences are the primary contention between mainstream scientists and skeptical scientists. Removing this barrier would rove climate science ages forward.

Reply to  Dale Hartz
January 11, 2016 6:18 am

Dale Hartz:
You say

We actually have three perfectly good temperature systems; RSS and UAH on a 5 degree grid, and Radiosonde(70,000 balloons annually). The UHI effect is very small as metropolitan areas only represent about 1% of the earth’s surface. The oceans represent almost 75%, and sparsely-populated/ unoccupied areas cover the rest of the planet.
The surface station system should be abolished and most scientists know it. So it would remove CAGW from climate science. The temperature differences are the primary contention between mainstream scientists and skeptical scientists. Removing this barrier would rove (sic) climate science ages forward.

I very strongly agree.
Although there is no possibility of a calibration standard for global temperature, the MSU (i.e. RSS and UAH) measurements are almost global in coverage and the radiosondes provide independent measurements for comparison.
The major problem with the surface station data is not removed by the proposal in the above essay. Indeed, the essay says it proposes to adopt the problem when it says

The proposed new system uses the set of all temperature measurements and a model. It adjusts the model to fit the temperature measurements. [As before, “model” here refers to a temperature pattern. It does not mean “computer model” or “computer climate model”.].
Over time, the model can be refined and the calculations can be re-run to achieve (hopefully) better results.

That is what is done by ALL the existing teams who use station data to compute global temperature.
The “stations” are the sites where actual measurements are taken.
When the measurement sites are considered as being the measurement equipment, then the non-uniform distribution of these sites is an imperfection in the measurement equipment. Some measurement sites show warming trends and others cooling trends and, therefore, the non-uniform distribution of measurement sites may provide a preponderance of measurement sites in warming (or cooling) regions. Also, large areas of the Earth’s surface contain no measurement sites, and temperatures for these areas require interpolation.
Accordingly, the measurement procedure to obtain the global temperature for e.g. a year requires compensation for the imperfections in the measurement equipment. A model of the imperfections is needed to enable the compensation, and the teams who provide values of global temperature each use a different model for the imperfections (i.e. they make different selections of which points to use, they provide different weightings for e.g. effects over ocean and land, and so on). So, though each team provides a compensation to correct for the imperfection in the measurement equipment, each also uses a different and unique compensation model.
The essay proposes adoption of an additional unique compensation model.
Refining the model to obtain “better” results can only be directed by prejudice concerning what is “better” because there is no measurement standard for global temperature.

As you say, Dale Hartz, “The surface station system should be abolished”. Another version of it is not needed.
Richard

Editor
Reply to  Dale Hartz
January 11, 2016 11:29 am

Richard – I strongly agree with using the satellite systems from now on, for global and regional temperature, but in another comment I have given reasons for continuing the surface stations.
You miss the point when you say “That is what is done by all the existing teams …”. It isn’t. All the existing teams use their model to manipulate actual measurements before they are used. Mine won’t change any actual measurement [I did make provision for correcting systemic errors, but following a comment by Nick Stokes I now feel that it is best not to make any changes at all.]. One of the significant differences, for example, is that while UHE gets spread out by existing systems, mine would confine it to urban areas.
I agree that the historical measurements are heavily imperfect, and that any results from them need to be seen in that light. But I am seriously unimpressed with the mindset of the people doing the current systems, and I’m trying to get people to see things from a different angle – one in which measurement trumps model. I think the exercise would be interesting, and I think there would be greater benefit than just a better “global temperature” and better error bars (there’s another comment on that).
While everything is being distorted by ideologues it is easy to get very cynical about the surface station measurement system, but when sanity is re-established then I think there is merit in continuing with it.

Reply to  Dale Hartz
January 11, 2016 2:27 pm

Mike Jonas,
I followed your path part of the way creating my process to read the data.
https://micro6500blog.wordpress.com/2015/11/18/evidence-against-warming-from-carbon-dioxide/
I had 2 different goals, that led me a different way. I was more interested in daily variability (min to max to min), as I am very impressed how quickly it cools at sunset. That led me to generating the day to day change/rate of change over the year.
So I don’t calculate a temperature, I look at the derivative of temps. While I think this allows some flexibility on what stations/data to include, I’ve elected to include any station that has a full year of data. No data adjustments, strictly measurements. When I process a full year for a station, I then average all stations for that year for the area being calculated (I do various areas 1×1, 10×10, latitudinal bands, continents, global). For daily rate of change, you can include stations that take partial year samples (again adjustable), as you’re looking mostly for the derivative from warming and cooling peaks.
What I’ve come up with is there’s no loss of night time cooling since 1940, in fact it’s slightly cooler tomorrow morning than it was today.
So, I think there’s good data in the surface record, just not what’s published from it.

provoter
Reply to  tomwys1
January 10, 2016 10:46 pm

“…since most people actually live in/on these UHI affected areas, perhaps a UHI bias needs to be taken into account? If not, then I’ll go with RSS!”
Agreed. As a very long aside, though (and probably nothing that is news to you):
The way to take UHI bias properly into account is to completely separate urban temp measurements from the rural measurements – problem solved. Urban areas occupy well under 1% of the earth’s total surface of about 510 million km² (see, for example, here and here). If one’s goal is to understand global temp trends, and if non-urban trends say one thing, and urban trends say another – what does simple math tell him he should pay attention to?
Obviously this is assuming there actually existed 1) a well-sited, well-distributed, global, non-urban network of stations, that were 2) well equipped, well maintained, well calibrated, well measured, and whose raw readings were available to all. Such a thing doesn’t exist, of course (no bonus points for guessing why ;^> ), but the basic point that urban readings do nothing but contaminate all others remains. It’s like determining the temperature of the air in your house by consulting the thermometer sitting just above your stove — while you’re cooking dinner.
Lastly, if a person would like to know what urban temps are doing just for the sake of knowing such a thing, this is all well and good. Let’s just make sure we don’t fool ourselves into thinking they have anything to do with what the other 99+ % of the earth’s surface temperatures are doing.

JohnWho
January 10, 2016 2:22 pm

” It would be very important to retain all original data…”
Just that statement alone makes your proposed new system superior to what we have now.

Auto
Reply to  JohnWho
January 10, 2016 2:52 pm

John
Agree.
But –
I am remiss – I don’t see (in this thread) the source of your quote –
” It would be very important to retain all original data…”
As noted – I agree.
Auto – late at night, so probably in error.

JohnWho
Reply to  Auto
January 10, 2016 3:19 pm

Toward the end of the paragraph that starts with “I stated above (“Disadvantages”) …” in the “Discussion” section.

RoHa
Reply to  JohnWho
January 10, 2016 3:53 pm

I thought the original data had been damaged by the floods of 2011/destroyed in the storeroom fire/mislaid in the move from the old building/rendered unreadable when the data storage format was updated/eaten by mice.

JohnWho
Reply to  RoHa
January 10, 2016 7:19 pm

You appear to be confused with the Hillary Clinton emails.
/grin

DD More
Reply to  RoHa
January 11, 2016 2:18 pm

No RoHa, they are right next to the Bypass Plans
“But the plans were on display…”
“On display? I eventually had to go down to the cellar to find them.”
“That’s the display department.”
“With a flashlight.”
“Ah, well, the lights had probably gone.”
“So had the stairs.”
“But look, you found the notice, didn’t you?”
“Yes,” said Arthur, “yes I did. It was on display in the bottom of a locked filing cabinet stuck in a disused lavatory with a sign on the door saying ‘Beware of the Leopard.”
― Douglas Adams, The Hitchhiker’s Guide to the Galaxy

Kev-in-Uk
January 10, 2016 2:23 pm

Interesting! – but may I make an immediate observation? In order to create a ‘model’ one needs to have historical (and preferably correct (i.e. raw!)) data to set ‘trends’ between stations within reasonable proximity (I note your last statements though!). Also, station height (e.g. above mean sea level) would probably be an important factor to try and include. The passage of weather fronts over the UK (for example) causes significant differences over fairly short distances, probably within 10’s of miles – I’m guessing this would be completely different to the variation across large desert plains for example! I am on the coast, and 25 miles inland the temperatures are extremely different in summer/winter as you can imagine. Hence, such difference needs to be noted (historically) and accounted for in any ‘model’ in order to make a regional ‘average’ assessment? I can see the objective, but not sure of the method. What I would suggest is a direct historical comparison of (nearby) stations to determine some measure of synchronicity, which would likely throw up various oddities. If the agreement is within reason (in the majority) it would follow to assume agreement in general and deduce ‘regional’ temp values from the various station pair(s) data. As for UK metoffice raw data being available, that might not be possible, as Mr Jones himself has said!

Editor
Reply to  Kev-in-Uk
January 10, 2016 3:25 pm

Hi Kev-in-Uk – I tried to make it clear that I’m looking for a whole new way of looking at temperatures. Conceptually, you take temperatures around the globe to find out what the temperature is, and you then average them to get a global average temperature. My system does that, whereas other existing systems average something else. My system can also use every temperature measurement as is, on an equal footing with all other temperature measurements. We have all got so used to thinking about temperature trends that it is hard to think of a system that ignores them – but my system does. In effect, it has no interest in trends at any point in the entire process. Only after it has finished would you look at the results to see what the temperature trends have been.
So – what do you do about things like weather fronts, that can’t be predicted more than a few days ahead but have significant local effect? The answer is that you can’t put them into the model, so you don’t try to. You simply accept their effect as part of “natural variability” and just go on measuring the temperatures – ie, it’s just weather, and weather is what you are measuring! Over time, things like that average out, and if they don’t then you can probably work out how to put them into the model.

bobl
Reply to  Mike Jonas
January 10, 2016 5:53 pm

I think this has merit even as a cross check BUT, any system where temperatures are estimated from surrounding sites is affected by time lags – for example, there is no relationship between Adelaide and Melbourne on any given day, but there IS a relationship between Melbourne and Adelaide lagged by one day, because of the predominant west-to-east motion of weather systems in this part of the world.
At different times of the year the dominant weather systems motion may alter, so the model needs to account for that effect. That is the “Pattern” will likely have seasonal variations.
A much better approach to all of this is to NOT model at all but rather acquire data from crowdsourced domestic weather stations

Kev-in-Uk
Reply to  Mike Jonas
January 10, 2016 10:45 pm

Hi Mike, at the end of the day, I feel that the temp data records have been tampered with enough and used to present/support an agenda. The mere fact that anyone with half a brain knows that central London is a few degrees warmer (due to UHE) than surrounding countryside and yet the Metoffice/MSM still use ‘high’ temps recorded at Heathrow to browbeat us with strongly suggests that UHE is not taken seriously. I mean, why are the London station data not adjusted DOWN by the obvious few degrees of UHE. I guess when someone answers that satisfactorily, or demonstrates that that is the case (in the Hadcrut data for example), I might sit up and take the surface dataset more seriously. As it is, I strongly believe the current ‘treatment’ is more likely to adjust surrounding stations UP to match Heathrow (or similar) or at least to allow the obvious UHE affected values to remain in situ and bump up the spatial temperature ‘average’! In short, there is no genuine human ‘interpretation’ of data anymore – it’s all programmed adjustment and averaging. I’m not really sure how you feel your system could ‘learn’ UHE without being historically measured in direct comparison to non-UHE stations? – and even then, wouldn’t a large degree of human data interpretation be needed? regards, Kev

Reply to  Mike Jonas
January 11, 2016 6:35 am

Mike Jonas:
You say

I tried to make it clear that I’m looking for a whole new way of looking at temperatures. Conceptually, you take temperatures around the globe to find out what the temperature is, and you then average them to get a global average temperature. My system does that, whereas other existing systems average something else. My system can also use every temperature measurement as is, on an equal footing with all other temperature measurements. We have all got so used to thinking about temperature trends that it is hard to think of a system that ignores them – but my system does. In effect, it has no interest in trends at any point in the entire process. Only after it has finished would you look at the results to see what the temperature trends have been.

I am sorry to be the bearer of bad news, but as my above post explains, your proposed method is in principle THE SAME as all the existing derivations of global temperature from surface stations. And, therefore, if it were developed then your method would not provide any advantage(s) over any of the existing methods for deriving global temperature from surface stations.
Richard

Duster
Reply to  Mike Jonas
January 11, 2016 10:43 am

bobl – As I read Mike’s proposal, his suggestion rather cuts to the heart of the “global average” question. While a global average is not much use for things like weather forecasting, it would be quite useful in determining “trends” if there are such. As the data is accumulated each 24-hour period, a temperature surface is calculated using a triangulated irregular network (a TIN to folks who use GIS systems). Employing such a system you could develop a detailed, global temperature “geoid” or simpler ovoid each day. An annual average of each station could be employed to calculate an annual average temperature surface, and the annuals and longer terms could be summarized the same way. A global trend would show up as systematic drift in ALL stations over time without the need for data correction, gridding, or any of the extraneous effort involved in producing current “climate” summary data. That is, if there is a real, global trend, then it is present in all temperature data collected regardless of any instrument issues or any other side tracks. “Correcting” or adjusting the data has never been necessary to detect such a trend, IF IT IS REAL. Processes like increasing UHI would appear as growing “peaks” on the global surface; regional changes (increasing or decreasing forest, conversion to agricultural use etc.) would appear as local or regional “topographic” patterns that impose a change into the local topography and then stabilize. Trend would affect all of these universally IF it is global.

corporateshots
January 10, 2016 2:24 pm

surely a better system would be to ditch temperature altogether and just use changes in net energy emitted and absorbed by the earth ?

Editor
Reply to  corporateshots
January 10, 2016 4:03 pm

I agree absolutely – if you want to find out what the energy balance is. But if you want to know what the temperature is, a thermometer is quite useful. ie, it’s not an “either-or”, let’s do both.

Owen in GA
Reply to  corporateshots
January 11, 2016 12:04 pm

Great idea!
The long pole in that tent is that the current on orbit experiments doing that measurement have admitted error bars larger than the effect we want to measure. I suspect there may be more errors than the admitted ones, but have not been able to look that closely. I know calibrating sensors to give a flat response across a broad band of radiation (far IR to far UV in this case) is very difficult even in a laboratory. I can’t imagine how difficult it must be when the experiment is in a high Earth orbit. Keeping the amplifiers and references aligned to see this couple of watts per square meter difference between incoming visible to far UV, and outgoing far IR to mid IR must be a nearly impossible task. I am amazed they get to it as well as they do.

BioBob
January 10, 2016 2:46 pm

Sorry, but how about a system that actually uses replicated random sampling that the rest of field science has required to estimate variance for decades now ? Then, of course, you would need to actually use samples to verify the sample size needed within and between sites. A cursory examination would reveal that the best stations with only 3 samples are woefully inadequate and not randomly located in any case.
The reality is that accurate global temperature estimates are some fantasy produced by some fevered wannabe hack scientist.

Auto
Reply to  BioBob
January 10, 2016 2:54 pm

BioB
Absolutely.
It is a purely (impurely??) political concept.
Auto

Richard Keen
January 10, 2016 2:54 pm

I’ll read this idea in more detail, but to date it’s my conclusion that you simply cannot get a valid global temperature from a sparse and intermittent surface station network, and that meaningful global temperatures do not exist before 1979 (MSU et al.). But the surface station network does allow for regional and local time series of sufficient accuracy to detect regional and local climate change.
For example, the consensus IPCC models predict the fastest warming states in the US to be Alaska (due to latitude) and Colorado (due to altitude, in the tropospheric “hot spot”). Both places have enough long term stations to say something about what is really happening. And that is that Alaska follows the PDO and little else (little trend warming), and Colorado is more complicated (a mix of PDO, AMO, and who knows what else) but also very little trend warming. At my own co-op station at 8950 feet in Colorado the IPCC predicts 1 degree F of warming every 15 years. Since 2000 it’s cooled 1 degree F. (all of this has been published and/or posted on WUWT and elsewhere). If the models can’t get a grip on regional climate change, there’s no point in using them for global change (which, after all, is the average of the regional changes).
But again, I’ll give this idea more of the attention it deserves after dinner and a beer tonight.

Robert B
Reply to  Richard Keen
January 10, 2016 3:47 pm

You can only get trends at stations and use that as some sort of indicator of a global trend.
My suggestion is to take the decadal trend for max and min separately at each station. Take the mean for grids of so-many degrees using stations that have more than 90% of data for the period. Redo it changing the starting month and shifting grids by a degree, and take the mean of all grids.
I suspect that the results will show such a massive uncertainty that everyone will say “let’s just stick with the satellite data since 1979”.

Richard Keen
Reply to  Robert B
January 10, 2016 5:31 pm

Robert, since your conclusion is inevitable, I suggest we skip the intermediate steps and fire Gavin, Jones, NCDC et al. at 8 am Monday and have each and every climate observer figure out their own climate, and publish that. Of course, Mike Jonas can keep his job, which I’m sure is not in the climate-industrial complex.

Robert B
Reply to  Robert B
January 10, 2016 10:11 pm

It’s not inevitable if you believe that you really can collect temperature readings from a sparse and intermittent surface station network, correct for events never documented, and get something meaningful from the average.
Looking at two stations in my city, the old city site up to 1979 and the airport which is only 6 km away, the SD for the difference between the monthly mean max for the two is 0.46°C for the overlapping period of 25 years. So half the time, the difference is greater than a third of a degree from the average (0.18). RSS shows a total of a third of a degree rise from 1979! That’s on top of temperature not being something like density, where even the average for this straightforward example of an intrinsic property is not straightforward.

Robert B
Reply to  Robert B
January 10, 2016 11:42 pm

I’ll add that there is a trend in the differences as UHI affected the airport more than the city site (in parklands next to the city centre) for the first 15 years. There was no trend for the last 10, with the differences randomly spread, and the SD was still 0.21°C. This suggests that a region’s anomaly is within ±half a degree of what the local station records. How can you possibly homogenise using stations tens of kilometers away, let alone hundreds, as is the case for remote regions?
The moving 10 year trends differ by 2°C/century at the start, go to 10 and then down to -2°C/century, but it’s a smooth curve with noise of 0.2°/century (spread due to different starting months). The global average shows a trend of less than 1 degree per century! There is too much happening to use an average temperature without a perfect record with evenly spread stations and a lot more of them.

ShrNfr
January 10, 2016 3:01 pm

I am a fan of using an extended Kalman-Bucy filter to combine the observations. The data vector would be the combined microwave spectrometer measurements, radiosonde data, and surface measurements. The trick is to get the noise vector and the state transition matrix correct. When radiosonde measurements or surface measurements are absent, the observation matrix has a zero for them. The state transition matrix would be tricky because the state vector would need to comprise all points on the entire world grid at any given time. In the “old days” you would not even think of something like this, but these days, there is enough horsepower in even a Mac Pro with a graphics unit to perhaps pull it off a bit. Lots of junk to work out.

jmarshs
January 10, 2016 3:02 pm

Building a model of a system that has no global temperature (the Earth) is something completely different from building a model to maintain a desired, average temperature of a system (e.g. an engine or HVAC system) over which you can exert control relative to timely feedback.
Engineers do the latter all the time. But discussions about “global temperatures” always leave me shaking and scratching my head….

Firey
January 10, 2016 3:09 pm

What is wrong with using satellite data? They circle the globe every 90 minutes, cover both land and sea & have been doing so for 30 odd years. If it can’t be used what was the point of launching them in the first place?

Editor
Reply to  Firey
January 10, 2016 3:38 pm

I agree that satellites give us the best global data. There are four reasons I can think of immediately for using a system based on surface temperatures:
1. There is no satellite data before 1979.
2. Surface temperature measurements provide a cross-check for the satellite data.
3. Local measurements provide a level of detail that satellites currently cannot.
4. Weather and climate studies may be able to take advantage of the extra detail.
[count 3 and 4 as one reason if you like]

porscheman
Reply to  Mike Jonas
January 10, 2016 6:31 pm

You are incorrect. The NEMS flew on Nimbus E and the SCAMS flew on Nimbus F. I once shared an office with 4,000 tapes from Nimbus E extracting the NEMS data. It was nadir only, while SCAMS was a scanning instrument. Lord knows where the data is today, but if they could find it and a tape drive to read it, you could analyze it in a couple of weeks with a Mac Pro. That would bring the record back to earlier than 1973.

Marcus
January 10, 2016 3:10 pm

Why not just use unadjusted satellite and balloon data ?

davidmhoffer
January 10, 2016 3:29 pm

In general, individual station errors don’t matter provided they are reasonably random and not systemic, because they will average out over time
I regard this as an incorrect assumption. The problem is that most errors are NOT random, they ARE systemic. You noted the effect of aging paint in your comments. Well, that’s a systemic error. As the sensors age, they too will drift, and all the sensors of a given type will drift in the same direction. I could go on, but I think that illustrates the point. Systemic errors will (I believe) heavily outweigh random errors. Assuming they will all cancel out is, I think, one of the biggest errors made in calculating global temperatures. It is simply a bad assumption.

Editor
Reply to  davidmhoffer
January 10, 2016 3:51 pm

I don’t assume they will ALL cancel out, and I do state that systemic errors need to be dealt with. But I do think that there is data, currently being adjusted to fit a perceived ‘model’, which is best left unchanged. Making lots of adjustments always risks inadvertently introducing bias. Our temperature measurements have all sorts of gaps and errors, and we should get away from the idea that we can adjust them in order to get improved results. Instead, we should accept that the measurements are all that we have and that any system using them cannot be more accurate than the measurements.

Reply to  Mike Jonas
January 10, 2016 3:57 pm

we should accept that the measurements are all that we have and that any system using them cannot be more accurate than the measurements.
If you were to display error bars on the end result, I think a considerable number of people would be delighted.

Editor
Reply to  Mike Jonas
January 10, 2016 4:08 pm

I should have mentioned error bars under “Evaluation”. Yes, very important.

JohnKnight
Reply to  Mike Jonas
January 10, 2016 8:03 pm

Mike,
Along the lines mentioned here, and this;
“Note: The terms global temperature and regional temperature will be used here to refer to some kind of averaged surface temperature for the globe or for a region. It could be claimed that these would not be real temperatures, but I think that they would still be useful indicators.”
Sure, but why call them global or regional this or that, rather than weather station system this or that? Why “pretend” the globe or whole region is being measured, rather than keeping it real, and speak of “stations in X region show…, stations in all regions show… etc?

Duster
Reply to  davidmhoffer
January 11, 2016 1:20 pm

Local trends would appear in Mike’s proposed triangulated system as a systematic shift in the “z” value of local points. So would regional effects like development and reforestation. The idea is quite elegant and eliminates a great deal of wasted time and argument by simply ignoring problems with data quality. The quality issue is only important if the problems are directional and systematic, and contrary to theoretical expectations. Any real trend would actually appear globally and could only “vanish” if an equal and opposite “correction” were applied to all data. But doesn’t NOAA apply just such a “correction”? Even problems like TOBS cease to be relevant.

davidmhoffer
January 10, 2016 3:38 pm

corporateshots January 10, 2016 at 2:24 pm
surely a better system would be to ditch temperature altogether and just use changes in net energy emitted and absorbed by the earth ?

I think this bears repeating. What we’re trying to understand is the energy balance change due to CO2 and other factors. Temperature is a lousy proxy for energy balance. A change of one degree at -40 is about 2.9 w/m2. A change of 1 degree at +40 is 7.0 w/m2. These two values cannot be averaged! Any system that doesn’t take this into account winds up over-representing changes at high latitude, high altitude and winter seasons, and under-representing changes at low latitude, low altitude and summer seasons.
The patient has three broken ribs, a fractured elbow, a broken leg and a deep cut on his forehead. Putting three stitches in the deep cut may well be called for, but it hardly treats the patient’s fatal injuries.
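
For reference, the per-degree figures quoted above follow from the derivative of the Stefan-Boltzmann law, dP/dT = 4σT³; a quick numerical check under a blackbody assumption:

```python
# Check of the per-degree figures using dP/dT = 4*sigma*T^3 (blackbody assumption).
SIGMA = 5.67e-8  # W m^-2 K^-4

for t_celsius in (-40.0, 40.0):
    t_kelvin = t_celsius + 273.15
    print(f"{t_celsius:+.0f} C: {4 * SIGMA * t_kelvin ** 3:.1f} W/m2 per degree")
```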

gnomish
Reply to  davidmhoffer
January 10, 2016 4:04 pm

The only meaningful measurement is TOTAL. (because it’s an actual measurement – what a concept!)
Averaging antarctica with death valley is purest nonsense.
Somebody needs a boot averaged with his butt.

Crispin in Waterloo
Reply to  davidmhoffer
January 10, 2016 7:39 pm

David M H
I am with you on this energy thing. If this method proposed is used, what is it telling us? Is it answering the right question?
If the high is 10 C for ten minutes and the rest of the 24 hrs is -5, what does this tell us? My inclusion of the element of time tells you that the time-weighted value should be a hair above -5, not 2.5.
Heat in the system is a combination of temperature, time, altitude and humidity. Is it impossible to have a clock and a hygrometer at the stations? Each station should produce a ‘heat content’ measure that reflects what the enthalpy of the atmosphere was during the day. While we are transfixed by highs and lows, they are not telling us much about climate.
Quantifying the energy in the air on a daily basis has meaning when discussing climate issues.
An advantage to doing this is it would force discussion about climate to use system energy instead of transient maxes or mins that contain so little information.
The oceans are analysed on the basis of heat content. Why not do the same thing for the atmosphere? Then the two can be summed in a meaningful way.

poitsplace
January 10, 2016 4:06 pm

How about we adjust NOTHING…except documented irregularities based on things like equipment and time of day. We come up with a daily, monthly, and/or yearly temperature based on the data we have and as well spatially accounted for as possible. We re-run the routine several times using various different spatial practices to figure out what the “global temperature” was…and make note of how wildly that changes the results. We note that since we’re not forcing it into a continuous record, its even more erratic.
We total up the obvious, overall error between the different spatial accounting practices (because they can’t all be right and maybe none are). We add in the measurement error. We add in the error for the various adjustments that are needed for equipment changes, time of day, etc. AaaaAAAaaand then we leave UHI calculations out of the adjustments and explain to people that thanks to irregular station moves to avoid UHI, often multiple moves per station, we have no freaking clue how much UHI there actually is in the record but that it’s likely present, making our already far more erratic results “too high” by some difficult to fathom amount.
…and after all that, we’ll notice it’s virtually impossible to even know with certainty that there’s been warming since the 1940s

Steamboat McGoo
January 10, 2016 4:25 pm

” There are a number of organisations that produce estimates of global temperature from surface measurements. ”
There are also a number of organisations that produce astrological predictions/estimates from birth dates & star/constellation positions.
I don’t have much use for either set of “estimates”.

Alexander Carpenter
January 10, 2016 4:35 pm

How about “template” instead of “model”? Or “local template,” “regional template,” and “global template”? Talk about requiring computer power! We could develop budgets as large as the GC modelers get. More green jobs!
But do we really need to know this “value” to such precision and accuracy? Or are we just falling into the trap of efforting to counter every warmist factoid when the settled fundamentals of their “science” rob their claims of all validity? The burden of proof lies on the claimant, especially for extreme claims when the fundamentals don’t work out.
Beyond that, is it a number that has any real meaning (outside of political manipulation)? What are the climate parameters of a tempest in a teapot? Shall we develop a template for tempests?

ScienceABC123
January 10, 2016 4:52 pm

The use of all temperature data sets is problematic. Some of those data sets are of poor quality, and others have been ‘adjusted’ using unknown methodology. The old saying of “garbage in, garbage out” is very much in play if all data sets are used.

RCS
January 10, 2016 5:02 pm

” To any reasonable person, it is surely inconceivable that these organisations would not retain their original data.”
Tell that to the CRU.
This is simply a method of interpolation and you do not show why it is optimal.
” No temperature measurement affects any estimated temperature outside its own triangles. Within those triangles, its effect decreases with distance.”
Really? Most interpolation schemes require the use of derivatives at points, as they are a result of Taylor’s theorem. This requires the use of data outside the triangle in question to determine the derivative, and so will influence the value within the triangle.
If I have understood you correctly, your scheme is piece-wise discontinuous, which certainly isn’t physically realistic. To obtain continuity through any point, knowledge of the surrounding data is required.

Editor
Reply to  RCS
January 10, 2016 6:08 pm

Yes, it is piece-wise discontinuous – just like actual temperature measurements are discontinuous. The whole point is to have a system that uses every surface temperature measurement unchanged. As I said, the proposed system adjusts the model to fit the temperature measurements. And when I said “fit”, I meant fit exactly – in the final result for each day every temperature measurement is still there and it hasn’t been changed. No matter what you put into the model, it can’t override any measured temperature.
Yes, it is a 2-dimensional interpolation scheme which interpolates within triangles, but no it doesn’t require the use of derivatives etc because when it interpolates within a triangle, it doesn’t look outside the triangle. The only temperatures it has are at the three corners of the triangle. Instead, it uses an expected temperature pattern (the “model”) for its pattern within the triangle. So the temperature pattern survives within the triangle, but the triangle’s corners don’t move. The null model is “all temperatures are the same everywhere every day”. Even that would give respectable results, but a more intelligent model will give better results – how much better can be determined by comparing their ratings. In the end, the system and the model help each other (the system “learns”), but the measured temperatures still reign supreme.
I mention UHE a few times. If the model gets UHE right, then in the results UHE will appear only in urban areas. These are a very small fraction of the globe, so UHE will then have very little effect on the global figure. The UHE will still be there, because those places really do have those temperatures. Not all the urban temperature is natural, but it won’t be worth trying to remove it because it will be a trivial part of the global whole. One of the major benefits of the proposed system is that it won’t let UHE spread its influence beyond the urban areas once the model “gets” UHE. And the system itself helps the model to “get” UHE. Once the model “gets” UHE, it can tell you how to remove it!
This system is probably a bit different to anything anyone is used to, so hopefully people will clear their minds before trying to understand it. It’s not that it’s difficult – it isn’t – it is just that I am trying to think outside the box.

Reply to  Mike Jonas
January 10, 2016 7:22 pm

“Yes, it is piece-wise discontinuous – just like actual temperature measurements are discontinuous.”
Actually, it isn’t. You are just linearly interpolating within each triangle – as if you pulled a membrane tightly over the values. It is just the kind of interpolation done in finite elements. The derivatives are discontinuous.
I draw plots of temperature anomalies interpolated in this way, eg here. The node values are exact, and color shading shows in between. There is a version here where the shading isn’t as good, but you can show the mesh.

RCS
Reply to  Mike Jonas
January 11, 2016 5:16 am

Of course a proper interpolation scheme can be used to preserve data points and make the temperature continuous.
The problem with your post is that you don’t specify the model in a meaningful way apart from saying that it is intelligent.
As regards “thinking outside the box”, surely you mean thinking outside the triangle!

Editor
Reply to  Mike Jonas
January 11, 2016 12:47 pm

Nick – re discontinuous / derivatives. You are correct, of course.

higley7
January 10, 2016 5:45 pm

An interesting test would be to take a set of four sites in which one site is in the middle of the triangle of the other three. Calculate the average temperature from them and then delete the center site and use the outer three to calculate the average temperature. It would be very interesting to see how similar the answers would be.
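Once any averaging scheme is fixed, that test is only a few lines of code. A toy sketch with invented coordinates and temperatures, using a plain unweighted mean just to show the shape of the comparison:

```python
# Four stations: three forming a triangle and one near its centre.
# Compare the mean of all four with the mean after dropping the centre site.
stations = {
    "A": ((0.0, 0.0), 14.2),
    "B": ((1.0, 0.0), 15.1),
    "C": ((0.5, 0.9), 13.7),
    "centre": ((0.5, 0.3), 16.4),  # e.g. a warmer valley or urban site
}

def mean_temp(names):
    return sum(stations[name][1] for name in names) / len(names)

print("all four   :", round(mean_temp(stations), 2))
print("outer three:", round(mean_temp(["A", "B", "C"]), 2))
# The gap between the two numbers (and between the outer-three mean and the real
# centre reading) is exactly what the proposed test would measure, site by site.
```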

January 10, 2016 6:10 pm

A caveat up front. I am not a climate scientist but a simple physicist, for which I thank the circumstances of my entry into science. That given, I have long thought that the notion of a “global average temperature” (GAT) constructed from a sparse set of mixed quality data, statistically infilled (and outfilled) spatially and temporally to try to simulate global coverage is poorly suited to discerning trends presumably based on thermodynamics of the global climate system (GCS). I think concerns have been frequently voiced here and elsewhere about the utility, from the perspective of physics, of extensive versus intensive variables and the difficulty of properly defining and constraining the thermodynamics of the GCS. Couple with that the apparently cumbersome, intricate, and in many cases arcane nuances of continually adjusting mixed quality point measurements so as to average across space and time (and hence across thermodynamic regimes) and I am unconvinced that such GATs represent much more than the adjustment process evolution. Thankfully it is not my job to worry about such things. The only downside to this whole issue is that I have already broken my 2016 resolution to quit spending valuable time reading and thinking about climate science.
All that said, I have long thought a much more useful approach would be to treat the temperature records as the stock market is treated, and simply construct an index (or indices) with as consistent a definition of its components as possible. This would mean abandoning the notion of total global coverage, since that is already of dubious thermodynamic quality. As the Dow Jones average selects a certain set of industries based on qualities related to company size, longevity, quality of financials, weighted simply by market price, etc. one could select a set of temperature records based on the precision and accuracy of the instrument, the transparency of its record keeping, the suitability of its measurement protocols, perhaps weighted by the accuracy of the instrument, etc. One could attempt to obtain as extensive a global set of high quality measurements as possible, but with no attempt to be complete. Rather the goal would be to have a consistently high quality set of measurements sampling the globe. It seems sea surface temperature would be tough because many of those sources appear to move about, so perhaps satellite sea surface temperatures could be used. Sites could be selected to be as free from complicating issues such as changes in land use. I think I have read of analyses of some subset of the continental U.S. stations that are of high quality, called GHCN or something like that. The result would be admittedly only an index, and it would likely be difficult to find an index of quality historically, but one starting now or within the modern era would seem to be an improvement. Tracking the trends in such an index would seem to me to be as useful as constantly picking apart attempts to fiddle with all the mixed data to try to obtain a GAT. I presume global climate model outputs could just as easily be extracted to compare to such an extensive but incomplete set of data as is done now to compare to existing GATs. The behavior of such an index would not be easily relatable to the complete thermodynamics of the GCS, but neither is the behavior of the GATs in existence. It seems such an index would be more free of adjustment induced variations and more directly related to temperature trends, at least for the set of points sampled.
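As a rough illustration of the index idea, here is a hedged Python sketch. The station names, quality scores, screening threshold and weighting rule are all invented; the only point is that membership is fixed by published criteria and the index is then computed without infilling or adjustment:

```python
# A fixed-membership temperature index, Dow Jones style.
stations = [
    # (name, quality_score 0..1, monthly anomaly in C)
    ("rural_site_1", 0.95,  0.31),
    ("rural_site_2", 0.90,  0.12),
    ("airport_site", 0.40,  0.55),   # fails the screen below, never enters the index
    ("mountain_obs", 0.85, -0.08),
]

QUALITY_CUTOFF = 0.8  # membership rule fixed in advance, like an index prospectus

members = [s for s in stations if s[1] >= QUALITY_CUTOFF]
index_value = sum(q * a for _, q, a in members) / sum(q for _, q, _ in members)
print(f"{len(members)} member stations, index anomaly = {index_value:+.2f} C")
```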

Dave in Canmore
Reply to  fahutex
January 10, 2016 8:28 pm

fahutex says: “One could attempt to obtain as extensive a global set of high quality measurements as possible, but with no attempt to be complete. Rather the goal would be to have a consistently high quality set of measurements sampling the globe.”
This is just so sensible! I waste so much time wondering why GISS et al use such a convoluted process! Take high quality stations only and get as many around the world as possible. Bad data is just that: bad. Why are we mixing clean and dirty water together?

Richard Keen
Reply to  fahutex
January 10, 2016 10:03 pm

fahutex says: January 10, 2016 at 6:10 pm: I think I have read of analyses of some subset of the continental U.S. stations that are of high quality …
… Yes, it’s called the CRN, Climate Reference Network. Anthony has written several times about it and showed the data. Unfortunately, it shows no warming over the past ten years or so, so NCDC sticks to their analysis of less suitable stations because they can find ways to adjust it to provide the required warming.

Reply to  Richard Keen
January 10, 2016 10:30 pm

“NCDC sticks to their analysis of less suitable stations”
In fact, anomalies from USHCN stations and USCRN are virtually identical:
http://www.moyhu.org.s3.amazonaws.com/2016/1/uscrn.png

Richard Keen
Reply to  fahutex
January 10, 2016 10:21 pm

fahutex also says: January 10, 2016 at 6:10 pm: I am unconvinced that such GATs represent much more than the adjustment process evolution…
… I like to compare it to cheese. You start with milk, and with a little effort make great cheddar. More processing and you get Velv**ta, known more as “processed cheese product” than as cheese. In this case, just as with “global temperatures”, the result says more about the processing than the initial input (observations and/or milk). I suspect that with Velv**ta, you could start with motor oil and end up with the same thing. With climate data, you can have observations that show warming, cooling, or cycles, but 2015 will always come up as the warmest year (until 2016).

aussiepete
Reply to  fahutex
January 10, 2016 11:16 pm

Thank you fahutex.
I am not a scientist in any shape or form but I believe I am blessed with a goodly amount of common sense and have been a voracious reader, particularly of WUWT. Your contribution to this subject is the best I’ve read anywhere and gives me confidence that one day common sense may return to climate science.

Reply to  fahutex
January 11, 2016 6:48 am

fahutex:
You say

All that said, I have long thought a much more useful approach would be to treat the temperature records as the stock market is treated, and simply construct an index (or indices) with as consistent a definition of its components as possible.

Yes, that is one of the options stated in Appendix B of this item which I suspect you may want to read.
Richard

nankerphelge
January 10, 2016 6:17 pm

Novel and interesting approach, but it would be absolutely unacceptable to GISS, NOAA, etc., as they need to adjust figures to get the required outcome.

dp
January 10, 2016 6:21 pm

“Over time, the model can be refined and the calculations can be re-run to achieve (hopefully) better results.”
Wait – that is exactly what goes on now. Who will decide how this will be implemented, and what criterion would apply before the NextGen temperature tracking system gets fiddled with?

JohnWho
Reply to  dp
January 10, 2016 6:28 pm

Isn’t what is being done now more along the lines of data being altered and calculations re-run to achieve pre-determined results?

dp
Reply to  JohnWho
January 10, 2016 8:01 pm

I think it is more about intent than the method of getting the desired result. The goal is to get a desired result else they would just leave things alone. Don’t see anything to prevent that in NextGen methodology, and the opportunity seems to be built into the process (see the quote).

Michael C
January 10, 2016 6:36 pm

OK – you have been given an unlimited budget to set up a temperature data system that you KNOW will give annual global averages for land, sea and atmosphere with a margin of error within 1 C. What would it be?
Instead of the rhetoric, bitterness, and blame, this is what the World should be doing. But first we must get the various institutions to publicly admit their estimated (and audited) margins of error.
The key to exposing the truth (or lack of it) over this issue is to shout loud and clear, over and over, to all involved parties “WHAT ARE YOUR MARGINS OF ERROR?” They should not be permitted to slide out from under this question and it needs to become the catch-cry of anyone concerned over this issue.
Once it becomes clear that the MOEs are larger than, e.g., the “degree to which records are being broken”, the public and Governments will begin to understand (or be made to).

January 10, 2016 6:39 pm

Any honest commenter must have noticed that the contrarian side of the “Warming? Not Warming!” argument has switched from arguments against the basic physics (lost those) to hiatus-assertion (lost those) to current attacks on temperature-assessment methodologies.
There must be some Latinate rhetorical definition for “searching for an argumentative technique that supports a pre-decided conclusion,” but I’m afraid I’m unaware of it …

Reply to  myslewski
January 10, 2016 7:43 pm

Troll writes: “… the contrarian side of the “Warming? Not Warming!” argument has switched from arguments against the basic physics (lost those) to …”
Sorry. The CAGW hypothesis predicts:
1) A warming trend in the troposphere (“hot spot”) – sorry, not there.
2) More H2O in the upper troposphere – sorry, there’s less.
3) Less radiation to space – sorry, there’s more.
Hypothesis disproved 3 times over.
“…to hiatus-assertion (lost those)…”
The existence of an hiatus was never a part of the reason for disbelieving this theory – its proven failure to get core predictions right is the reason. But since you mention it, the “hiatus” is alleged to have stopped because the ground is warming. But the troposphere isn’t, and the theory predicts:
4) The troposphere will warm faster than the ground – sorry, the atmosphere is still flat while the ground is warming – theory disproved for a fourth time.
“… to current attacks on temperature-assessment methodologies.”
Like the date of the month corrections in Australia that go up on the first of the month, down on the first of the next, up, down, and all around? Like those “attacks”? Well YEAH! The temperature products are demonstrably corrupt, no shadow of a doubt.
The fact that you and those like you believe in a theory that is four times disproved and based on proven malpractice and/or incompetence isn’t a criticism of US, it’s a criticism of YOU.

AndyG55
Reply to  myslewski
January 11, 2016 1:23 am

“arguments against the basic physics (lost those)”
Seriously?
Without any knowledge of basic physics, how the heck would you have any idea about anything?
All the “physics” arguments are well and truly on the ANTI-AGW side of reality.

Reply to  myslewski
January 11, 2016 7:01 am

myslewski:
In addition to the excellent rebuttal from Ron House of untrue assertions from you, I address your claim about “hiatus-assertion (lost those)”.
The IPCC says you are plain wrong.
Box 9.2 on page 769 of Chapter 9 of the IPCC AR5 Working Group 1 report (i.e. the most recent IPCC so-called science report) is here and says

Figure 9.8 demonstrates that 15-year-long hiatus periods are common in both the observed and CMIP5 historical GMST time series (see also Section 2.4.3, Figure 2.20; Easterling and Wehner, 2009; Liebmann et al., 2010). However, an analysis of the full suite of CMIP5 historical simulations (augmented for the period 2006–2012 by RCP4.5 simulations, Section 9.3.2) reveals that 111 out of 114 realizations show a GMST trend over 1998–2012 that is higher than the entire HadCRUT4 trend ensemble (Box 9.2 Figure 1a; CMIP5 ensemble mean trend is 0.21ºC per decade). This difference between simulated and observed trends could be caused by some combination of (a) internal climate variability, (b) missing or incorrect radiative forcing and (c) model response error. These potential sources of the difference, which are not mutually exclusive, are assessed below, as is the cause of the observed GMST trend hiatus.

GMST trend is global mean surface temperature trend.
and
A “hiatus” is a stop.
So, the quoted IPCC Box provides two definitions of the ‘hiatus’; viz.
(a) The ‘pause’ is “a GMST trend over 1998–2012 that is higher than the entire HadCRUT4 trend ensemble”.
And
(b) The ‘pause’ is “the observed GMST trend hiatus”.
And the IPCC says the ‘hiatus’ exists whichever definition of the ‘hiatus’ is used.
How do you define “lost those”?
Richard

thingadonta
January 10, 2016 6:42 pm

How about just always requiring that the raw versus adjusted data be plotted on any graph or in any paper? People can then see for themselves the adjusted ‘opinion’, whatever method is used.

January 10, 2016 6:44 pm

How about we stop trying to calculate global average temperatures from surface records altogether? They clearly are not up to the task. Let’s leave these measurements in their raw form and use them only for trivia purposes for local weather segments after the news each day!

Michael C
Reply to  wickedwenchfan
January 10, 2016 6:57 pm

wicked – Yes I have wondered that myself. By all means report them but really concentrate on marine temperatures with a coordinated, standardised system. With the degree of money now being committed to climate one could build a system from hell. These are the sort of issues that the IPCC should be addressing.

January 10, 2016 6:52 pm

I have been using a system somewhat like this, which I describe here, with links there and many earlier articles. For over four years I have been posting reports, latest for December here. My main method (mesh) does involve triangulation – I use the convex hull of the points, which is effectively Delaunay – better than minimising length.
I don’t attempt to predict by latitude or whatever – there is no need. The historic record gives a better predictor. In effect, monthly values are fitted with a linear model by least squares, and the result integrated by linear interpolation within the triangles. I use unadjusted GHCN, although adjusted makes little difference.
I don’t use daily values – then the effort of triangulation would be prohibitive, with little gain (and not much data readily available). Monthly is well within the capability of my PC.
You can run the system yourself – the R code is provided and described.
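For readers who want to see the shape of this kind of calculation without running the R code, here is a minimal Python sketch of area-weighted integration over a Delaunay mesh. It is not Nick’s code: the points and anomalies are random, it works on a flat plane rather than a sphere, and it skips the least-squares fitting step.

```python
# Area-weighted mean of station anomalies over a Delaunay triangulation.
# For a piecewise-linear field, the integral over each triangle equals
# (triangle area) * (mean of its three vertex values).
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(0)
points = rng.uniform(0.0, 10.0, size=(30, 2))   # fake station coordinates
values = rng.normal(0.0, 1.0, size=30)          # fake anomalies at those stations

tri = Delaunay(points)
total_area = weighted_sum = 0.0
for simplex in tri.simplices:
    a, b, c = points[simplex]
    area = 0.5 * abs((b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0]))
    weighted_sum += area * values[simplex].mean()
    total_area += area

print("mesh-weighted mean anomaly:", round(weighted_sum / total_area, 3))
```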

Reply to  Nick Stokes
January 10, 2016 8:33 pm

Nick,
So as a favour, could you modify the code to calculate the 4th root of temperature, trend that, and then compare to the temperature trend?

Reply to  davidmhoffer
January 10, 2016 9:20 pm

David,
“4th root of temperature”
I presume you mean fourth power. I could, easily, but people overrate the non-linearity produced by fourth power. Most monthly means lie between about 260 and 305 K. The ratio of derivatives of T^4 would be about 1.6:1, but that is for fairly rare extremes.
The nonlinearity for anomalies would be quite negligible. So the implementation would simply be to vary the weighting by that derivative (of T^4) for each reading.
Of course, you would probably want to average the days within the month by T^4. That I’m afraid we can’t.

Reply to  davidmhoffer
January 11, 2016 8:43 am

Of course, you would probably want to average the days within the month by T^4. That I’m afraid we can’t.
Why?
As for it being a small ratio, 1.6:1 is not a small ratio when one is trying to resolve global trends out to 2 decimal places. 1.6:1 is huge! As for it being only for rare extremes, that’s the whole point, they aren’t rare.

Reply to  davidmhoffer
January 11, 2016 9:17 am

“Why?”
Because for many locations we only have monthly averaged data. This is changing somewhat with GHCN Daily, but not completely. People don’t appreciate what an enormous effort has been required to simply digitise the temperature record that we have. Transfer it reliably from paper to bits.
But I should push back against the idea that T^4 is even appropriate. It is no doubt based on the idea that you really want average flux, not T. That isn’t necessarily true. But insofar as it is, T^4 dependence applies only to the relatively small proportion of flux that exits via the atmospheric window. For flux that interacts with the air (and GHG) ordinary diffusion (Rosseland) is a much better model, and that is linear.
To put it another way, while the surface itself radiates approx as T^4, the almost balancing back radiation also increases as T^4. The difference is close to linear.
Or yet another way, as the Earth warms with GHG, its apparent T seen from space doesn’t increase.

Reply to  davidmhoffer
January 11, 2016 10:15 am

To put it another way, while the surface itself radiates approx as T^4, the almost balancing back radiation also increases as T^4. The difference is close to linear.
Or yet another way, as the Earth warms with GHG, its apparent T seen from space doesn’t increase.

Correct. The Effective Black Body temperature of earth doesn’t change by one bit due to the addition of GHG’s. However, that doesn’t mean that the system as a whole has a linear response. In fact, if it DID have a linear response, you would get a NON linear result by averaging temperatures across the globe. Get it? The back balancing radiation is not in near balance from an altitude/lapse-rate perspective, nor from a latitude perspective. The notion that everything just stays roughly linear due to the addition of GHG’s is to completely ignore all the physical processes that create everything from weather to trade winds to ocean currents, to processes like la nina and el nino.
That is a ridiculous assumption and one that is without merit unless and until it has been put to the test.

Reply to  davidmhoffer
January 11, 2016 12:52 pm

“Why?”
Because for many locations we only have monthly averaged data.

Actually, now that I think it through, that would probably be sufficient. As long as the values at each location could be raised to the 4th power, and then averaged in time and space, the over-weighting of cold regimes (high lat, high alt, winter) and the under-weighting of warm regimes (low lat, low alt, summer) could probably become apparent.
Of course that is just my belief.

Editor
Reply to  Nick Stokes
January 10, 2016 8:58 pm

Thanks, Nick. I’m not familiar with Delaunay, but any consistent triangulation system would do.
re predicting latitude etc – by using the historical record as a predictor, you are in fact using a model to predict (the model is the interpretation of the historical record), and you are using the system itself to “learn” as I indicated. What you start with doesn’t matter, though it would be nice to keep the null latitude model for comparison. I suspect that the only difference would be in the ratings. UHE is more interesting than latitude, and it would be nice to see its influence in temperature estimates reduced to where it belongs.
I illustrated using daily values, but monthly would be valid (“I will begin by assuming that the basic time unit is one day. … Other variations would be possible …“), provided the whole month is available at each location. I agree that daily triangulation could well be prohibitively expensive, and a monthly system might have to be used for practical purposes. However, since monthly data is presumably constructed from daily data and some months can’t be constructed, data would be lost. I would be reluctant to lose anything until proved necessary.
I looked at your “little difference” page, and the graph seemed to me to be out of kilter with the explanation : the difference between the two plots went all one way while the explanation said it changed direction. I suspect that what I was looking at wasn’t what I thought it was.

Reply to  Mike Jonas
January 10, 2016 9:34 pm

Mike,
What is shown on that page is a back-trend plot. It shows the trend from the x-axis time to present. So starting before about 1960 the trends of adjusted to present are higher, but only by less than 0.1°C/Century. For trends from later than about 1970 to present, the adjusted is actually less.
There is a post here showing the regional breakdown, and also global, in the more familiar time series plot.

Editor
Reply to  Mike Jonas
January 11, 2016 12:41 pm

Thanks, Nick. I have thought quite a lot about what you are saying, and have come to the conclusion that all data should be used unadjusted no matter what. It makes everything a lot easier, and it makes everything a lot less open to manipulation. This ties in to something I said in another comment, namely that adjustments can introduce bias – even adjustments aimed at removing bias.
If there is something so dramatically important that it simply has to be dealt with, then its proper place is in the model – so for example if it was desperately needed then aircraft traffic could be in the model. I should have stuck to my own principle: measurement trumps model.

Tony
January 10, 2016 7:21 pm

Why not start with something “easy” like a house. What’s the average temperature of your home. Good luck!

JohnWho
Reply to  Tony
January 10, 2016 7:29 pm

Question:
Is it important to determine the actual “average” temperature or is it more important to determine a trend?

Jeff Alberts
Reply to  JohnWho
January 10, 2016 8:02 pm

If the average of intensive properties is physically meaningless, then the trend will be equally meaningless.

Editor
Reply to  JohnWho
January 10, 2016 9:02 pm

The average price of the shares in a portfolio is physically meaningless, but the trend matters a lot.

Jeff Alberts
Reply to  JohnWho
January 11, 2016 7:05 am

Really? The trend of a physically meaningless average is somehow meaningful? Share prices aren’t physical things, they’re monetary concepts.
I’d suggest you read this.

Reply to  Tony
January 10, 2016 9:48 pm

Mike Jonas January 10, 2016 at 9:02 pm
The average price of the shares in a portfolio is physically meaningless, but the trend matters a lot.

The ratio of “price” and “shares” is not governed by a non-linear physical law. The analogy is not apt.
I get why you are trying to do what you are trying to do. But unless and until the data is first converted to 4th root of T and then averaged and trended, you’ve got nothing meaningful in terms of the earth’s energy balance. Every physicist I have ever talked to about this issue agrees to one extent or another. The reluctance of people with the processing power, access to data, and programming skills to do that just baffles me. They’ll spend countless hours figuring out how to interpolate data via some uber sophisticated math, but this simple ask gets ignored. If someone were to explain to me why I am wrong, I’d be happy to listen. But no one does.
My suspicion (my hypothesis, if you will) is that the resulting trend will be even more muted than the temperature trend.
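The comparison being asked for is simple to state in code. A hedged sketch with just two invented station temperatures, showing an ordinary mean against a “flux-equivalent” mean (average σT⁴, then convert back to a temperature); a real test would apply the same idea to whole gridded series before trending:

```python
# Ordinary mean of temperatures vs the flux-equivalent mean.
SIGMA = 5.670374419e-8  # W m^-2 K^-4

def flux_equivalent_mean(temps_kelvin):
    mean_flux = sum(SIGMA * t ** 4 for t in temps_kelvin) / len(temps_kelvin)
    return (mean_flux / SIGMA) ** 0.25

temps = [233.15, 313.15]  # -40 C and +40 C
print(f"arithmetic mean : {sum(temps) / len(temps) - 273.15:+.1f} C")
print(f"flux-equivalent : {flux_equivalent_mean(temps) - 273.15:+.1f} C")
# About 0.0 C vs +8.4 C: the hot site dominates the T^4 average, so trends
# computed the two ways need not agree either.
```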

Editor
Reply to  Tony
January 10, 2016 11:06 pm

davidmhoffer – I hadn’t considered using the 4th root, but of course it does make some sense. Please note that I don’t rule anything out (“The terms global temperature and regional temperature will be used here to refer to some kind of averaged surface temperature for the globe or for a region.“), so the system could, for example, average T^4 and then take the 4th root. At least it could get people thinking …..

Jeff Alberts
January 10, 2016 7:57 pm

You can derive a “global temperature” all you want, but it will still be physically meaningless.

Nicholas Schroeder
January 10, 2016 8:39 pm

Why does it even matter?
Because the surface temperature is being abused/adjusted to fabricate a “See, we told you so.” retrograde justification of CAGW. I’m sure there is some logical fallacy that applies.
Broken record time.
1) Anthro’s net 4 Gt/y CO2 contribution to the natural 45,000 Gt of reservoirs and hundreds of Gt/y fluxes is trivial, the uncertainties in natural fluxes are substantial, IPCC AR5 Fig 6.1 and Table 6.1.
2) The 2 W/m^2 RF of the net anthro CO2 accumulated 1750 to 2011 is trivial in the magnitude and uncertainties of the total atmospheric power flux balance. Fig 10 Trenberth et al 2011. The water vapor cycle, albedo, ice, evaporation could absorb this amount in the 3rd or 4th decimal point.
3) The GCMs are worse than useless. IPCC AR5 Text Box 9.2

feliksch
Reply to  Nicholas Schroeder
January 11, 2016 4:13 am

Do you mean AR5 WG1 ALL FINAL Box 9.1 | Climate Model Development and Tuning (p. 745)?

Nicholas Schroeder
Reply to  feliksch
January 11, 2016 6:33 am

feliksch
No, Box 9.2 beginning page 769. FAQ 8.1 is another interesting read.

Rob
January 10, 2016 8:40 pm

Yeah. I saw the term “model”. The world’s best dynamical forecast weather models
consistently “bomb” after only 7 days. And “these people” want to talk Climate Models???

Jeff Alberts
Reply to  Rob
January 11, 2016 7:08 am

Seven days? You’re being WAY too generous. Yesterday they weren’t forecasting rain in my area for a few days. Today they’re now forecasting rain. Total fail.

AndyG55
January 10, 2016 10:09 pm

There is one pristine, evenly spaced, unadjusted surface temperature compilation, USCRN
and one that also adds balloon atmospheric samples, ClimDiv.
Over the area that these cover, the satellite data trends in UAH and RSS are pretty much the same, thus verifying the data extraction algorithms of the satellite temp series.
So what is wrong with using data derived from satellites?

January 10, 2016 10:27 pm

Min/max is too often a measure of how high or low the temp was allowed to go by cloud, or the absence thereof, at the potential peak or trough times of a day. I am staggered that this simple glaring fact is ignored.
“Coolest” years in my region (mid-coast NSW) were 1929 and 1974…both years of much rain, hence cloud. (Not that rain tells the full story of cloud at potential min/max time of day, but it’s the only indicator available that means at least a bit.) Is it surprising that our driest year after legendary 1902 was also our “hottest” year on record (1915 – sorry warmies)? Is it surprising that “record” high minima occurred in the massive wet of 1950 in regions of northern NSW where the cloud usually clears at night in winter – but didn’t in 1950?
The problem with global temp is not that it is hard to determine by min/max records. The problem is that it is utterly without point. Because cloud.

Editor
January 10, 2016 11:46 pm

Well, a couple of comments. First, Mike, thanks for an interesting proposal. It is not a whole lot different from what the Berkeley Earth folks do. They define a global “temperature field” which is a function of (from memory) latitude, altitude, and day of the year. In any case, much like your system.
For me, I’d have to go with the sentiment expressed by Anthony and his co-authors. Their thought is that if you have a huge pile of data, some of which is good, some bad, and some ugly, you should just use the good data rather than busting your head figuring out how to deal with the bad and the ugly.
That of course brings up a whole other discussion about what is “good data”, but in my eyes that discussion is much better than trying to put lipstick on a pig.
w.

Editor
Reply to  Willis Eschenbach
January 11, 2016 12:26 pm

Hi Willis – yes, if your sole objective is to get global and regional temperature trends using the surface station history, then go for quality. No question. But I see the model itself as an objective too. [I doubt the model can ever get implemented, so think of this as a thought experiment].
I do expect that much of what I am proposing is already being done, as you say re BEST. But I am trying to get the mindset turned around so that there is no attempt to fit the temperature measurements to the model, Only the model can change.
We have become obsessed by temperature trends, and the BEST mindset is full of them. I see a trend as something that you interpret from data. First comes the data, and trends play no role in collecting the data. So when you are putting together the global picture from station and other data, trends should play no role. Only after you have done it would you look at it to see what the trends were.
We have also become obsessed by UHE. The way I deal with it is not to try to remove it from the record, but to use it and quantify it. I see a parallel with the isostatic (hope I got the term right) adjustments to sea level. The powers that be decided to report sea level adjusted up to eliminate the isostatic effect. But my view of that is that they should report the real sea level first and address the reasons later. So with UHE. The temperatures in urbs are higher – it’s what they are – so we should record and use those higher temperatures and address the reasons later. My system I think is way better for doing that. And it also ends up telling you how to remove UHE from the results if you want to.

Reply to  Willis Eschenbach
January 11, 2016 1:58 pm

Mike Jonas January 11, 2016 at 12:26 pm

Hi Willis – yes, if your sole objective is to get global and regional temperature trends using the surface station history, then go for quality. No question. But I see the model itself as an objective too. [I doubt the model can ever get implemented, so think of this as a thought experiment].

Thanks, Mike. To me, Berkeley Earth does a good job in that their method is transparent and they maintain all of the raw data, so I have access to both the adjusted and unadjusted information. And as I mentioned, they (like you) establish a “temperature field” that gives the expected temperature given the month, altitude, and latitude of the station.
They have also used what to me is a good method, called “kriging”, to establish the field. From Wiki:

In statistics, originally in geostatistics, Kriging or Gaussian process regression is a method of interpolation for which the interpolated values are modeled by a Gaussian process governed by prior covariances, as opposed to a piecewise-polynomial spline chosen to optimize smoothness of the fitted values

Is this the right way to do it? One thing I learned from Steven Mosher is that there is no “right” way to average temperatures. There are only different ways, each with their own pluses and minuses.
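For the curious, the quoted definition translates into very little code with an off-the-shelf Gaussian-process library. The sketch below uses scikit-learn with made-up station coordinates and anomalies and a simple RBF-plus-noise kernel; it is only a loose stand-in for the technique, not Berkeley Earth’s actual implementation.

```python
# Kriging / Gaussian-process interpolation of scattered station anomalies.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)
X = rng.uniform(0.0, 10.0, size=(40, 2))               # fake station coordinates
y = np.sin(X[:, 0] / 2.0) + 0.1 * rng.normal(size=40)  # fake anomalies with noise

kernel = RBF(length_scale=2.0) + WhiteKernel(noise_level=0.01)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

grid = np.column_stack([np.linspace(0.0, 10.0, 5), np.full(5, 5.0)])
mean, std = gp.predict(grid, return_std=True)  # interpolated field plus its uncertainty
print(np.round(mean, 2), np.round(std, 2))
```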
Regarding your method, I fear I don’t understand it. You have fitted a triangulated mesh to the individual station points. So far, so good. But I don’t understand how that relates to the model. You say that you use the model to determine the shape of the temperature line between two stations … but how do you use that data about the shape of those lines?
In any case, I think I’ll go mess about using CERES data and DEM data to see what the temperature field looks like …
Regards,
w.

Editor
Reply to  Willis Eschenbach
January 11, 2016 4:42 pm

OK, here’s a simple 1D example of how the “shape” works. Suppose that somewhere between two points there is an urb so that the model between the two points looks like this:
http://members.iinet.net.au/~jonas1@westnet.com.au/UHEModel.JPG
Then suppose that on a given day the temperatures as measured at the two points are 12 and 13 deg C and that there are no measurements from the urb or anywhere else in between the two points. The estimated temperatures then look like this:
http://members.iinet.net.au/~jonas1@westnet.com.au/UHEEstimated.JPG
Hope that helps.
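In code, the 1-D case amounts to drawing a straight line between the two measured endpoints and adding the model’s departure from its own straight line. A sketch, where the 12 °C and 13 °C endpoints come from the example above and the urban “bump” profile is invented:

```python
# 1-D version: the model supplies the shape between two stations,
# the measured endpoint temperatures pin it down exactly.
def estimate(x, x0, t0, x1, t1, model):
    frac = (x - x0) / (x1 - x0)
    measured_line = t0 + frac * (t1 - t0)                    # line through measurements
    model_line = model(x0) + frac * (model(x1) - model(x0))  # line through model endpoints
    return measured_line + (model(x) - model_line)           # add the model's local shape

def model(x):
    """Invented model: flat background with a +2 C urban bump near x = 0.6."""
    return 10.0 + (2.0 if 0.55 <= x <= 0.65 else 0.0)

for x in (0.0, 0.3, 0.6, 1.0):
    print(x, round(estimate(x, 0.0, 12.0, 1.0, 13.0, model), 2))
# The endpoints reproduce 12.0 and 13.0 exactly; the +2 C bump appears only at the urb.
```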

Editor
Reply to  Willis Eschenbach
January 11, 2016 6:03 pm

Thinks … maybe you meant what formula is used. I like to keep things simple, so you take the three points of the triangle in the model as representing a flat triangle, move the triangle corners to match the three measured temperatures, then place each point within the triangle at the same distance above or below the triangle as it is in the model. Other methods are possible of course, but that one is simple.
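A rough sketch of that recipe (not the author’s code): barycentric weights give the flat triangle through the three measured corner temperatures, and the model’s offset from the flat triangle through its own corner values is added on top. The corner coordinates, temperatures and model below are invented for illustration.

```python
# Estimate a temperature inside a triangle from its three measured corners plus a model shape.
def barycentric(p, a, b, c):
    """Barycentric weights of point p with respect to triangle a, b, c (2-D tuples)."""
    (x, y), (x1, y1), (x2, y2), (x3, y3) = p, a, b, c
    det = (y2 - y3) * (x1 - x3) + (x3 - x2) * (y1 - y3)
    w1 = ((y2 - y3) * (x - x3) + (x3 - x2) * (y - y3)) / det
    w2 = ((y3 - y1) * (x - x3) + (x1 - x3) * (y - y3)) / det
    return w1, w2, 1.0 - w1 - w2

def estimate(p, corners, measured, model):
    w = barycentric(p, *corners)
    measured_plane = sum(wi * ti for wi, ti in zip(w, measured))   # flat triangle through data
    model_plane = sum(wi * model(c) for wi, c in zip(w, corners))  # flat triangle through model
    return measured_plane + (model(p) - model_plane)               # add model's local offset

corners = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
measured = [12.0, 13.0, 11.5]  # the corner estimates reproduce these exactly
model = lambda q: 10.0 + (1.5 if (q[0] - 0.4) ** 2 + (q[1] - 0.3) ** 2 < 0.01 else 0.0)
print(round(estimate((0.4, 0.3), corners, measured, model), 2))  # 13.75 = plane 12.25 + bump 1.5
```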

u.k(us)
Reply to  Willis Eschenbach
January 14, 2016 5:11 pm

When Willis leaves the exit door this widely open, he must like you.

Michael C
January 10, 2016 11:53 pm

I am not a physicist. Can the total energy emission from earth of the appropriate form (IR?) be measured accurately by current satellites? Is it theoretically possible to simply measure or accurately calculate energy in vs energy out on an annual basis?

TonyN
January 11, 2016 1:44 am

Interesting use of the triangulation mapping technique. How about creating maps including the other weather-station data such as precipitation, local pressure, wind vector and cloud coverage? Could they produce sets of weather-maps that, if somehow integrated over 30 years, could produce a ‘supermap’ showing actual climate change in terms of e.g. average windspeeds, rainfall, cloud-cover, pressure and so on?

Hivemind
January 11, 2016 2:11 am

Thank you for your very interesting article. I think that it looks like a much improved method over what is currently being used.
However I feel that it suffers from the deficiency that there is no such thing as a global temperature. The temperature at Reykjavík, Iceland right now is very different from the temperature at Brisbane, Australia right now. Worse, the temperature at Brisbane right now (about 8:00 PM), is quite different from the temperature it will be at noon tomorrow. And the noon temperature at Brisbane today is quite different from the noon temperature in 6 months. We are looking at tens of degrees centigrade change every day, as well as summer to winter.
Now for climate change work, we don’t care so much about the actual temperature, but do want to know about the trend, so it is possible to create an alternative algorithm that is free from the systemic biases caused by attempts to merge thousands of low grade temperature records together. The method I suggest involves taking each site and breaking it up into uninterrupted segments. Any change in a site creates a new segment – relocation, painting, new equipment, etc. Instead of lowering the quality of the data, these changes now simply create new segments.
Each segment will be much shorter, but relatively high quality. You can rely on the segment to be internally consistent. In other words if it shows a trend, there is probably a trend there. The trends of a region can be combined (nothing smaller than a continent) to show the overall trend of that region. I, personally, wouldn’t combine regions to try to get the trend for the whole planet, but everybody knows climate scientists are crazy.
I still prefer to use raw satellite data. Always start with the highest quality data. Make it a rule.

Reply to  Hivemind
January 12, 2016 8:18 am

Trend analysis based on actual measurements, no homogenization or infilling, just station trends.
https://micro6500blog.wordpress.com/2015/11/18/evidence-against-warming-from-carbon-dioxide/

4TimesAYear
January 11, 2016 3:38 am

“The system does not care which stations have gaps in their record. Even if a station only has a single temperature measurement in its lifetime, it is used just like every other temperature measurement.
· No estimated temperature is used to estimate the temperature anywhere else.”
If this means no proxy temps, I’m all for it; even a few miles can make a huge difference in temperature or weather conditions.

Nikola Milovic
January 11, 2016 3:57 am

Establish this model for measuring the surface temperature of the Earth.
The Sun and the Earth lie in the plane of the ecliptic, so the Sun’s rays are parallel to that plane and strike the Earth along circular cross-sections parallel to the ecliptic. Choose a number of such planes in both the northern and southern hemispheres (say every 5 degrees, measured from the Earth’s centre relative to the ecliptic, at the point where the radius pierces the surface).
On each of these cross-sectional circles, take measurements at fixed intervals (say every 5 degrees from a chosen starting point), all at the same moment of London time. That is one day’s measurement; specify how many times to repeat it during the 24 hours.
For the annual record, measure at steps of the true anomaly (angle phi), for example once per day, which is roughly phi = 1 degree.
Then you have a completely accurate, coordinated and properly sampled natural temperature of the Earth.

January 11, 2016 4:26 am

If we use well-sited stations and find that the temperature record is quite flat and agrees well with satellite data,
one could imagine that selecting only the very best-sited station and using just that would give similar trend performance.
This could be continued worldwide: each country selects its best-sited station and publishes that data unchanged.
Would the overall trend in temperature be different from the current land-based systems currently in use?
An extreme version of this would be to have just one well-sited sensor – no more – and report that trend.
Unfortunately, no one would trust data from only one national source…..
Before people suggest that just one sensor could not relate to worldwide temperature: as mentioned above, heat content matters (+20 degrees, -5 degrees: average is ???), so perhaps one sensor, or one per country, could be much better.

BioBob
Reply to  steverichards1984
January 11, 2016 10:33 am

The variance of a sample size of one is infinite / unknown. The unfortunate truth is that anything man does is impermanent and results (and errors) change over time. We have no standard sensors that do not fail, no stations without non-random error, no satellites without measurement drift, error, or infinite lifespan, nor adequate numbers of satellite replicates. Error is us.
There are limits to what we are able to do and we have to know our limitations. The reality is that humans are not currently able to gather consistent, unbiased or error free information over centuries or even portions thereof. It is certain that the current changes over a century are well within our gross measurement error and that supposed ‘global climate change’ of .8 C per century is a farce.
We should start by gathering statistically valid estimates based on random, replicated samples of adequate size so we at least measure what sorts of variance our methods produce. Since we have not even done that, it certainly would be a good start so that we more accurately know our limitations. But we do not because it is inconvenient for those with agendas to explain away data that do not agree with their viewpoints.

January 11, 2016 8:49 am

Thanks, Mike Jonas. Your New System makes a lot of sense, but I’m afraid it is way too much for humans right now.
We will have to keep on using just MSU LT temperatures until we become of age, if we ever do.

Ben smith
January 11, 2016 9:12 am

In the end, your approach like all the others except satellite based measurement provides an imperfect fiction which is overwhelmed by inaccuracy and variability. You really cannot logically average temperatures across the globe with such poor distribution of stations and such variability of accuracy in local measurement capability. When you consider the lack of precision, error bars expand and your final product provides no greater utility than current systems, the value of which is already oversold.

Editor
January 11, 2016 10:22 am

george e. smith January 11, 2016 at 9:53 am

“””””….. Abbreviations
C – Centigrade or Celsius …..”””””
So make up your mind; which is it.
Centigrade and Celsius are NOT the same.
C is Celsius in the SI system of units.
A centigrade scale is simply any linear scale that goes from zero to 100.

Well, that’s not entirely true, as “Centigrade” was also the accepted and common name for the Celsius scale when I was a kid, and nobody used “Celsius”. Like the song “Fever” said back in the mid-20th century,

“Chicks were born to give you fever
Be it Fahrenheit or Centigrade”.

Next, here’s the result of asking for the definition in Google:

cen·ti·grade
ˈsen(t)əˌɡrād/
adjective
another term for Celsius.

See that part that says “another term for Celsius”? … who would have guessed? Well, actually, I and most everyone would have guessed that Centigrade is another term for Celsius. Except you. I guess.
Then we have Websters, which says:

centigrade
: relating to, conforming to, or having a thermometric scale on which the interval between the freezing point of water and the boiling point of water is divided into 100 degrees with 0° representing the freezing point and 100° the boiling point

So Websters specifically disagrees with your claim that “centigrade” means ANY scale going from zero to 100. Instead, Webster says a centigrade scale is a scale which specifically goes from 0 AT FREEZING to 100 AT BOILING.
In other words, George, your attempt at being a scientific grammar nazi is a total fail. If you are going to snark at people about their choice of words, you should first do your homework and make sure of your own facts … so here are some more facts for you
The Centigrade scale was invented in 1744, and for the next two hundred years it was called the Centigrade scale. In 1948 the CGPM (Conférence Générale des Poids et Mesures) decided to change the name to the Celsius scale.
So despite your bogus claims, Celsius is nothing more than the new name for the Centigrade scale, and as the author you so decry states, indeed both terms continue to be used, and the abbreviation “C” is the same for both.
This use of both terms should not be surprising, since the scale was originally called “Centigrade” for over two hundred years, and the change was made recently, by fiat, and by a bunch of scientists in France. As a result, the term centigrade is still well established in common parlance.
Regards,
w.
PS—Don’t get me started on the tardigrade scale …

pete mack
January 11, 2016 11:09 am

Your model leads to a “step function” temperature map – that is, at the line equidistant between two nearest neighbors, there are different temperatures on each side. This is counterintuitive, hence the usual choice of weighted smoothing over points across some kernel.

Editor
Reply to  pete mack
January 11, 2016 4:55 pm

Not sure that I get what you are saying, but take a look at my comment http://wattsupwiththat.com/2016/01/10/a-new-system-for-determining-global-temperature/comment-page-1/#comment-2117872 – that shows no step function at the mid-point.

Allen63
January 11, 2016 11:14 am

A thoughtful essay and thoughtful back and forth comments.
I too think we need to start with “some method” that uses “only the raw temperatures” with minimum “interpretation”. Then, see where that takes us.
It may allow us to see that, at least before the satellite record, world temperatures are unknowable with sufficient coverage, accuracy, and precision to support the current “CAGW exists and is due to CO2” claims.

jose
January 11, 2016 11:14 am

Why go through all of the contortions; why not just use satellite data and be done with it.

Reply to  jose
January 11, 2016 1:01 pm

It seems to me that our government agencies and mainstream scientists accept and use satellite records except for the temperature measurements.
For example: sea level and temperature, ice mass, hurricane (typhoon) wind and rain, polar ice mass, ice extent, CO2 concentration, readings from some ocean buoys, readings from some tidal gauges, and much more are accepted from the several hundred meteorological satellites.
Why do the government agencies refuse to use RSS, UAH and radiosonde data for temperature?

January 11, 2016 1:01 pm

This makes very good sense.
Can this new system be applied to historic data?

Warren Latham
January 11, 2016 2:40 pm

Dear Mike (Jonas),
You can NOT determine “global temperature”.
Regards,
WL

January 11, 2016 2:59 pm

The proposed new system uses the set of all temperature measurements and a model.

…”and a model” I stopped reading right there.

Editor
Reply to  Roy Denio
January 11, 2016 4:48 pm

It isn’t that kind of model, as I tried to make clear. You only had to read one more sentence (or pick up the clue in the previous para).

Rolf Hammarling
January 11, 2016 3:16 pm

I read an article some years ago (the article was written in 2006) by Christopher Essex from University of Western Ontario, Dept of Applied Mathematics (he also had 2 co-authors), that stated that there is no such thing as a global temperature. He argued that averages of the Earth’s temperature are devoid of a physical context which would indicate how they should be interpreted or what meaning can be attached to changes in global temperatures. How should one understand this? Are we trying to figure out better ways to measure something that doesn’t exist?

Editor
Reply to  Rolf Hammarling
January 11, 2016 4:52 pm

How about a global temperature index? Then you can tell how to interpret it and when it is appropriate to use it, by seeing how the index is constructed.

AlexS
January 11, 2016 3:19 pm

Why does measuring temperature matter?
We don’t even know what makes it.

Dave in Canmore
Reply to  AlexS
January 12, 2016 8:30 am

+1

Dan
January 11, 2016 5:03 pm

I have two problems.
One. When I search for the scientific definition of global temperature, nothing comes up. If global temperature has no agreed scientific meaning, it is meaningless, and measuring it seems pointless.
Two. I assume global temperature is in some way related to the temperature of the atmosphere or, more precisely, the energy of the atmosphere. Neither of these can exist independently of the atmospheric pressure. Measuring the atmospheric temperature without standardising the pressure gives little information that would help to assess the global energy of the atmosphere.
As far as I can make out, one bar of pressure equates to roughly 3 degrees C, so the range of variation in either would significantly relate to the other. That is to say, if the energy of the system remains constant but the temperature rises by 2 degrees, the pressure would then drop by about 0.6 bar, which could happen anytime with no drama.
Dan

HankHenry
January 11, 2016 5:12 pm

This isn’t really about surface temperature. It’s about surface air temperature.

Reply to  HankHenry
January 12, 2016 8:17 am

This isn’t really about surface temperature. It’s about surface air temperature.

While this is true, in reality it really should be about the surface too, as the air cools quite rapidly at night, except it is warmed by the surface until it also cools down.

HankHenry
Reply to  micro6500
January 12, 2016 1:24 pm

I would argue that if you want an uncomplicated measure of surface temperature trends for the purposes of gauging climate you should be burying your thermometers. Furthermore, there is a huge pool of cold water in the ocean abyss that one needs to be mindful of when thinking about surface temperature. Kevin Trenberth is now arguing that the reason observed air temperature trends don’t match modeled trends is because of “missing heat” in the oceans. I don’t think he’s entirely wrong to point this out. The weight of a column of the atmosphere is represented by a similar column of water only 33 feet high.

garymount
January 12, 2016 8:13 am

Hi Mike. I like the idea of your project suggestion. I am a software engineer looking for new challenges. I have plenty of current challenges already and have enough projects to last me two lifetimes.
I have a massive climate science software project code-named Wattson that I briefly mentioned in a comment here a couple of years ago. It is extremely comprehensive, but not ready to be publicly detailed. Your idea is like a subset of the Wattson project, however your particular method is different from any existing sub-project of Wattson.
Anyway, I thought I would mention some of the modern technologies being used in the software development domain.
————–
First is how open source software is distributed today. Here are some resources on that topic:
https://channel9.msdn.com/Search?term=github#ch9Search
One example from the above search results:
Open Source projects best practices
What are good practices for committing new features to your product’s repository? How can I know, or how can I prevent, my build from being broken by a PR by a project team member, or the community?
In this videos, Phil Haack and MVPs walk through GitHub and discuss best practices on how to take the best from GitHub and its repositories.
https://channel9.msdn.com/events/MVP-RD-Americas/GitHub–Microsoft-Partnership/GH2-Open-Source-projects-best-practices
——————–
To greatly increase the computing speed of your project I would consider using C++ AMP.
Here is a resource for further information:
http://blogs.msdn.com/b/nativeconcurrency/archive/2013/06/28/what-s-new-for-c-amp-in-visual-studio-2013.aspx
And as mentioned at that link, a video explaining the technology:
BUILD 2013 day 2 keynote demo
————–
If you use Visual Studio, of which you can use the free very comprehensive Community edition, this allows up to 5 users to collaborate on a project using Visual Studio Team Services (formerly Visual Studio Online) :
https://www.visualstudio.com/products/visual-studio-team-services-vs
Visual Studio Team Services
Services for teams to share code, track work, and ship software – for any language, all in a single package.
It’s the perfect complement to your IDE.
From <https://www.visualstudio.com/products/visual-studio-team-services-vs>
Free for up to 5 users.
————-
Database: I am using a technology called Entity Framework that automagically creates the database infrastructure (tables etc.) from my code. More information here:
https://channel9.msdn.com/Search?term=ef7#ch9Search
One Example from above search:
A Lap Around, Over, and Under EF7 – Ignite Australia, November 19, 2015 – A deep dive into the layers of Entity Framework 7, Migrations, Model building, Seeding Data, Dependency Injection, Verbose Logging, Unit Tests and the new In-Memory Provider
From <https://channel9.msdn.com/Search?term=ef7>
————-
Another thing to consider is using JSON if data is to be exchanged :
“JSON is an open, text-based data exchange format (see RFC 4627). Like XML, it is human-readable, platform independent, and enjoys a wide availability of implementations. Data formatted according to the JSON standard is lightweight and can be parsed by JavaScript implementations with incredible ease, making it an ideal data exchange format for Ajax web applications. Since it is primarily a data format, JSON is not limited to just Ajax web applications, and can be used in virtually any scenario where applications need to exchange or store structured information as text.”
From <https://msdn.microsoft.com/en-us/library/bb299886.aspx>
————-
This project might make for several future blog posts about modern methods of software development as readers follow along during the journey of developing this project, and later the slow reveal of the massive Code Named Wattson Project.
————–
Personally, I do Test Driven Development (TDD) for all of my projects now, where a test is written before any domain or production code is produced.
This means that if I were to work on this project, there will be unit tests that drive the development forward.
————–
Lastly, could you make up a name for this project. Preferably 8 or 9 letters or less to keep the namespaces short in the software code. Note, I am pretty rusty with C++ as I switched to C# around the year 2000. Been itching to resume working with it after I finish a current project I am working on that has nothing to do with climate.
GGM

Editor
Reply to  garymount
January 12, 2016 9:57 am

Many thanks, that is very helpful information. I think it is going to be much more practical to leverage an existing system than to write one from scratch. Maybe Wattson is that system. I’m preparing another post which explores the system further, so please keep an eye out for it (assuming WUWT will publish it).

BobG
January 12, 2016 10:10 am

What is needed is more data. It is something that the Watts team that took pictures of various temperature stations could do.
For example, look for several stations that need to be repainted and where there is room to put more equipment. Two or three in different areas (regions of the country) would do. Next, build duplicate temperature stations in good condition with new paint and put them close to the original stations, but still far enough away that the duplicates won’t affect the originals. Collect temperature data from the duplicate and original stations for at least one year (two would be better). Next, repaint the old stations as they would normally be repainted. Collect two more years of data from both sets of stations.
Using this type of data, it would then be possible to show the impact of painting temperature stations, as sketched below. NOAA should have done this type of experiment several times as a basis for their calculations.
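If such an experiment were run, the paint effect could be estimated as a simple difference-in-differences: how much the (original minus duplicate) gap changes after repainting. A rough C++ sketch, assuming paired daily readings from before and after the repaint (the function names and data layout are invented for illustration):

#include <vector>

// Mean of (original - duplicate) over paired daily readings.
double mean_gap(const std::vector<double>& original, const std::vector<double>& duplicate)
{
    double sum = 0.0;
    for (std::size_t i = 0; i < original.size(); ++i)
        sum += original[i] - duplicate[i];
    return sum / original.size();
}

// Estimated repainting effect: change in the original-vs-duplicate gap.
double paint_effect(const std::vector<double>& origBefore, const std::vector<double>& dupBefore,
                    const std::vector<double>& origAfter,  const std::vector<double>& dupAfter)
{
    return mean_gap(origAfter, dupAfter) - mean_gap(origBefore, dupBefore);
}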

Nicholas Schroeder
January 12, 2016 3:40 pm

The “missing” heat is in the expanding ice caps.

Evan Jones
Editor
January 13, 2016 4:57 am

· There may be significant local distortions on a day-to-day basis. For example, the making or missing of one measurement from one remote station could significantly affect a substantial area on that day.
One word: Anomalize.
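For readers unfamiliar with the term: anomalizing means expressing each reading as a departure from that station’s own baseline, for example its long-term mean for the same calendar month, which greatly reduces the impact of a single station dropping in or out on a given day. A rough C++ sketch of the idea follows; the baseline definition and data layout are assumptions for illustration only.

#include <map>
#include <utility>
#include <vector>

// One reading: station id, calendar month (1-12), temperature in degrees C.
struct Reading { int station; int month; double tempC; };

// Baseline: mean temperature for each (station, month) over a reference period.
using Baseline = std::map<std::pair<int, int>, double>;

Baseline build_baseline(const std::vector<Reading>& reference)
{
    std::map<std::pair<int, int>, std::pair<double, int>> sums; // sum of temps, count
    for (const auto& r : reference) {
        auto& s = sums[{r.station, r.month}];
        s.first += r.tempC;
        s.second += 1;
    }
    Baseline b;
    for (const auto& kv : sums)
        b[kv.first] = kv.second.first / kv.second.second;
    return b;
}

// Express each reading as an anomaly against its own station's baseline.
std::vector<double> anomalize(const std::vector<Reading>& readings, const Baseline& b)
{
    std::vector<double> anomalies;
    for (const auto& r : readings) {
        auto it = b.find({r.station, r.month});
        if (it != b.end())
            anomalies.push_back(r.tempC - it->second); // departure from this station's normal
    }
    return anomalies;
}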

Nikola Milovic
January 13, 2016 7:48 am

Garymount
I saw a lot of instructions there on how you could make software for studying climate. I am not sufficiently versed in your field, but I am very much interested in the underlying cause of climate change.
Do you think (not only you, but almost all the scientists of the world) that you can come up with results when your combinatorics may hide assumptions that have no true foundation in nature, and when you do not know what causes climate change and all the other phenomena on the planet?
I respect your knowledge of programming, but it is all in vain without knowledge of the true causes of the phenomena. Has any scientist ever attempted to decipher this enigma using the logic within it? If so, I suppose something would have been found by now. The way everyone is searching for it is like a blind chicken looking for grain.
Come down for a moment to the level of what is still unknown in science and try to make a program based on the logic that we do have.
Here is my offer:
Climate change on the planet depends on the mutual relations of the planets and the sun, where there are many cycles that have run since the beginning of our solar system. You all ignore the fact that we human beings are the latest patent of the Creator, for whom science has no respect.
Let me help you:
The sunspot cycle of 11.2 years is the result of 4 planets, and the other cycles depend on the actions of the other planets.
Are you willing to make a program for unraveling this problem, according to my data?
This requires a lot of astronomical data, strong software and a lot of diagrams and formulas. Try to involve NASA and the American government, but before that they need to abandon their wrong attitude about the causes of climate change.
HERE YOU GO!!

Capt Karl
January 13, 2016 4:53 pm

My head is spinning.
Is the purpose of all these temperature measurements to somehow arrive at some globally averaged daily temperature of planet Earth? A single number? For a tilted and spinning planet that has two frozen poles, is three quarters covered in oceans and the rest in continents, mountains, valleys, deserts, cities and 7 billion people, who all live at the bottom of a 50-mile-high global ocean of air with constant and chaotic weather systems? Not to mention a capricious star, 7 light minutes away, that we circle every 365+ days in the only prime Goldilocks orbit.
What is the Earth’s average daily temperature? Does this number make any sense? (the average temperature of my whole car is probably 150F when running, but I am comfortable…)