A way forward for BEST et al and their surface temperature record

Guest essay by Mike Jonas

Introduction

On 10 January, WUWT published my post A new system for determining global temperature. In that post, I described a proposed system for determining global temperature from the historical temperature record.

I also indicated that another objective of the proposed system was to develop an improved model of global temperature: “After a while, the principal objective for improving the model would not be a better global temperature, it would be … a better model.”

In this post, I explore that objective of the proposed system – a better model – and how the BEST project in particular could contribute to it. [BEST – Berkeley Earth Surface Temperatures].

In order to understand this post, it may be necessary to first read the original post. However, it will not be necessary to understand how the proposed triangulation process works because that is unimportant here (I use a 1-dimensional (“1D”) trial run to illustrate the idea).

The system would still provide global and regional temperature indexes, but these just become by-products (and I think that the size of their error bars might shock some people). The principal aim becomes learning about Earth’s temperatures, weather and climate.

In an earlier post (here) I stated that the climate models were all upside down because they tried to construct climate bottom-up from weather, and that instead the models need to work first and directly with climate. In my 10 Jan post I said that the temperature organisations had their logic upside down because they tried to adjust temperatures to match a model when they should have been adjusting a model to match the temperatures. This post is another step towards changing the mindset.

About BEST

There is a lot to like about the BEST project.

BEST’s project goals (here) include: “To provide an open platform for further analysis by publishing our complete data and software code as well as tools to aid both professional and amateur exploration of the data”. This is a brilliant goal, which offers a very positive way forward, as I will attempt to explain in this article.

BEST have also, very commendably, put together a much larger set of temperature measurements than is used by other organisations.

What is less good is their mindset, which needs changing. They are using their own notions of temperature trends and consistency to fill in missing temperature measurements, and to adjust temperature measurements, which are subsequently used as if they were real temperature measurements. This is a very dangerous approach, because of its circular nature: if they adjust measured temperatures to match their pre-conceived notions of how temperatures work, then their final results are very likely to match their pre-conceived notions.

My Objective

We (collectively) should make much better use of the historical surface temperature record (“temperature history”) than simply trying to work out the “global temperature” and its trend. With all its faults, the record is a significant asset, and it would be a shame not to try to make maximum use of it.

The main objective should be to use the temperature history to learn more about how Earth’s temperature/weather/climate system (“TWC”) operates.

The basic idea is simple:

• Build a model of how mankind’s activities and Earth’s weather and climate systems drive surface temperatures.

• Test the model against the temperature history.

• Use the results to improve the model.

Because the model contains both natural and man-made factors, it will be possible – after the model has been refined to be reasonably accurate – to determine the contribution of each man-made and each natural factor to Earth’s temperatures. [NB. Factors may combine non-linearly.]

As the model improves, so we learn more about the TWC. And maybe, along the way, we also get a better “global average temperature” index.
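To make the test-and-improve loop concrete, here is a minimal sketch in code. The model form, the parameter grid and the station data are all invented for illustration; they are not the proposed system’s actual model:

```python
# A minimal sketch of the build/test/improve loop. The model form, parameter
# grid and station data are invented for illustration only.

def model_temperature(params, station):
    """Predicted temperature (deg C) at a station from two factor parameters."""
    lapse_rate, lat_gradient = params        # deg C per km, deg C per deg latitude
    return 45.0 - lapse_rate * station["alt_km"] - lat_gradient * station["lat_deg"]

def misfit(params, stations):
    """Average absolute difference between modelled and measured temperatures."""
    diffs = [abs(model_temperature(params, s) - s["measured"]) for s in stations]
    return sum(diffs) / len(diffs)

# Toy stations: altitude (km), latitude (deg S), measured January temperature.
stations = [
    {"alt_km": 0.0, "lat_deg": 37.2, "measured": 26.0},
    {"alt_km": 0.3, "lat_deg": 35.1, "measured": 26.0},
    {"alt_km": 0.9, "lat_deg": 34.5, "measured": 22.0},
]

# "Improve the model": a crude grid search over the two parameters.
best = min(
    ((lapse, grad) for lapse in (4, 5, 6, 7) for grad in (0.3, 0.5, 0.7)),
    key=lambda p: misfit(p, stations),
)
print("best parameters:", best)                      # (6, 0.5) for this toy data
print("misfit:", round(misfit(best, stations), 2))   # 0.37 deg C
```

A real implementation would have many more factors and a proper optimiser, but the shape of the loop – propose, test against measurements, refine – stays the same.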

Illustration

I will illustrate the ideas using a simple 1D trial.

I use January (mid-summer) temperature measurements from Australian stations roughly in a line from Robe on the SE coast of South Australia across Sydney to Lord Howe Island in the Tasman Sea [Bankstown Airport is in Sydney]:

Figure 1. Line of weather stations (not all are placemarked).
Willis Eschenbach in his recent post used an Altitude and Latitude temperature field to model expected temperatures. If I align Willis’s factors with the measured temperatures along the line in Figure 1 so that their weighted averages are the same, the average difference between the individual modelled and measured temperatures is something like 2.5 deg C over the period 1965 to 2014:

Figure 2. Temperature field (model) using Altitude and Latitude only, compared to measured temperatures along the line from Robe to Lord Howe Island.
But if I add in two more factors – for coastal/continental effect and urban heat effect (UHE) – then the average difference between temperature field (model) and measured temperature comes down to less than 1.5 deg C:

Figure 3. Temperature field (model) using four factors, compared to measured temperatures along the line from Robe to Lord Howe Island.
The two additional factors have been formulated very simply, to demonstrate how much difference it can make to use a better model. I don’t claim that these are the correct formulae for SE Australia in January, or that they can be applied elsewhere, or that others are not using them. They are purely for illustrative purposes. I have also simply combined the factors linearly where a real model might need to be non-linear. The formulae used were:

• Coastal/continental effect: Max temperature rises by an extra 8 deg C over the first 100km from the coast (inland Australia gets pretty hot in summer!). Min temperature falls by 2 deg C over the same distance. There is no further change farther inland.

• UHE: Both Max and Min temperatures are higher by 1 deg C in urban areas. They both rise by a bit more towards the centre of the urban area, with the additional rise being larger for larger urban areas. The formula used for the additional rise at the urban centre is 0.5*ln(1+W) deg C, where W is the approximate diameter of the urban area in km.
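For readers who prefer code, here is a minimal sketch of these two illustrative factors. The linear ramp for the coastal effect and the straight-line taper inside the urban area are my own assumptions; the bullets above specify only the endpoint values:

```python
import math

# Sketch of the two illustrative factors. The linear coastal ramp and the
# straight-line urban taper are assumptions; only the endpoints are given above.

def coastal_effect(dist_from_coast_km):
    """Returns (max_adj, min_adj) in deg C at a given distance inland."""
    f = min(dist_from_coast_km, 100.0) / 100.0   # full effect by 100 km inland
    return 8.0 * f, -2.0 * f

def uhe_effect(dist_from_centre_km, width_km):
    """Urban heat effect, applied to both Max and Min temperatures."""
    if dist_from_centre_km > width_km / 2.0:
        return 0.0                               # outside the urban area
    base = 1.0                                   # +1 deg C anywhere urban
    peak = 0.5 * math.log(1.0 + width_km)        # extra rise at the centre
    taper = 1.0 - 2.0 * dist_from_centre_km / width_km
    return base + peak * taper

# Example: centre of an urban area ~25 km across.
print(round(uhe_effect(0.0, 25.0), 2))   # 2.63 deg C
```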

NB. Figures 2 and 3 are actually for 1985, which is reasonably representative of the whole period. I can’t easily use station averages, because of missing temperature measurements:

Figure 4. A sample of Max temperature measurements. Some temperature measurements are missing.
Applying the Model

The model for Altitude and Latitude only, along the line from Robe to Lord Howe Island, looks like this:

Figure 5. Altitude, Latitude only model, unadjusted – i.e., not matched to measured temperatures.
The orange dots are the measured temperatures. It is easily seen that the measured temperatures and the model have different shapes. At this stage, it’s the shape that matters, not the absolute value. Under my proposed system, measured temperatures reign supreme, so the model has to be adjusted to match them:

Figure 6. Altitude, Latitude only model, adjusted to match measured temperatures.
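As a 1D stand-in for that matching step (the real system would use the triangulation process from the original post), the model curve can be shifted by linearly interpolated station residuals so that it passes through every measured temperature. All numbers in this sketch are invented:

```python
# 1D stand-in for matching the model to the measurements: shift the model
# curve by linearly interpolated station residuals. Numbers are invented.

def adjust_model(grid_x, model_t, station_x, station_t, station_model_t):
    residuals = [m - p for m, p in zip(station_t, station_model_t)]

    def residual_at(x):
        if x <= station_x[0]:
            return residuals[0]
        if x >= station_x[-1]:
            return residuals[-1]
        for i in range(len(station_x) - 1):
            if station_x[i] <= x <= station_x[i + 1]:
                w = (x - station_x[i]) / (station_x[i + 1] - station_x[i])
                return residuals[i] * (1 - w) + residuals[i + 1] * w

    return [t + residual_at(x) for x, t in zip(grid_x, model_t)]

grid_x = [0, 50, 100, 150, 200]              # km along the line
model_t = [24.0, 26.0, 28.0, 28.5, 27.0]     # unadjusted model on the grid
station_x = [0, 100, 200]                    # station positions
station_t = [25.0, 27.0, 27.5]               # measured temperatures
station_model_t = [24.0, 28.0, 27.0]         # model evaluated at the stations

print(adjust_model(grid_x, model_t, station_x, station_t, station_model_t))
# [25.0, 26.0, 27.0, 28.25, 27.5] – passes through all three measurements
```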
Because the adjusted model must go through all measured temperatures, you can see that changing the model isn’t going to make much difference to the final average temperature …..

….. or can you??

Here is the same graph, but using the two extra factors – coastal/continental effect and UHE:

Figure 7. Four-factors model, adjusted to match measured temperatures.

Note: The 5th dot from the left is above the line because it is a small urban area (Warracknabeal) that falls between the graphed grid points (5km spacing).

The following graph focuses just on the centre section of Figure 7, so that UHE is easier to see:

Figure 8. Centre section from Figure 7, with urban areas by approx width (Cootamundra, Sydney).
Now you can see just how much effect urban stations can have on average temperatures if you don’t treat UHE properly. BEST and others have come in for a lot of criticism (rightly so in my opinion) for the way they treat UHE. The various adjustments, homogenisations and in-fillings that the various organisations have made to temperatures have typically ended up spreading UHE into non-urban areas, thus corrupting their global average temperature index.

In my system, as illustrated above, UHE is catered for in the model, and is clearly confined to urban areas only. The average temperature along the Robe – Lord Howe line in Figure 6 is 28.3 deg C. In Figure 7 it is 27.1 deg C – that’s 1.2 deg C lower. The UHE contribution to the average temperature in Figure 7 is less than 0.1 deg C.
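The arithmetic behind those line averages might look like the sketch below: average the adjusted model over the 5km grid, then rerun with the UHE factor zeroed and difference the two averages. The grid values here are placeholders, not the post’s actual data:

```python
# Sketch of isolating the UHE contribution to the line average: run the
# adjusted model with and without the UHE factor. Grid values are placeholders.

def line_average(temps):
    return sum(temps) / len(temps)

with_uhe = [27.0] * 290 + [29.0] * 10     # 300 grid points; 10 urban ones
without_uhe = [27.0] * 300                # same line, UHE factor zeroed

print(round(line_average(with_uhe) - line_average(without_uhe), 2))  # 0.07 deg C
```

Because UHE is confined to the handful of urban grid points, its effect on the whole-line average stays small, consistent with the figures quoted above.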

The model can and should be refined and extended to cover all sorts of additional factors – R Pielke Snr for example might like to add land clearing (I suspect that it would be significant between Robe and Cootamundra, where there is a lot of farm/grazing land).

Conclusion

Hopefully, the point is now successfully made that a change in mindset is needed, to get us away from the idea that temperatures must be adjusted, homogenised and mucked around with in order to make them match some idea of what we think they should be. Instead, we need to work with all temperature measurements unchanged. When we have developed a model that can get close to them it will tell us more about how Earth’s temperature/weather/climate system actually works.

Further supporting information is given below, after “Acknowledgements”.

Acknowledgements

Comments on the original post – especially the critical comments – have been very helpful. Some specific comments are referenced here, but many others were helpful, so I’ll just say “Thanks” to all commenters.

• richardscourtney said of my original proposed system, “That is what is done by ALL the existing teams who use station data to compute global temperature”. Maybe so, but I think not. For example, as demonstrated above, my system prevents UHE from being notionally spread into non-urban areas. I am not aware of any existing system that does that. As always, I’m happy to be proved wrong.

• Nick Stokes said that temperature adjustments made little difference. Since adjustments add to error bars (see below) it may be best to eliminate adjustments altogether. The proposed system makes this easy to do, and to evaluate, but it’s only an option. Note that below I completely rule out some kinds of temperature adjustments.

• Willis Eschenbach likened obtaining a global temperature from the temperature history to putting lipstick on a pig. The same comment may well apply here, but there is still the option of using only the higher quality data (if it can be identified). That would further eliminate any perceived need for adjustments.

• garymount described the open source Wattson project, which may well already be doing everything I describe here and more, but it hasn’t been documented yet.

• It will be interesting to see whether this article is seen as introducing anything new or different.

I say below that the proposed system is suitable for cooperative development. The point is that it is very difficult for anyone to identify the effect of any one factor in the temperature record in isolation. The temperature record is the end result of all factors, so all possible factors need to be taken into account. In a cooperative model, a researcher can be working on one factor but testing it in combination with all other factors. BEST, and maybe Wattson, could usefully act as central collectors, distributors and coordinators.

Changes from the previous post

The proposed system is essentially the proposed system in the previous post, but for those who insist that some temperature adjustments are needed I have included a section for them. The triangulation process is not needed in this post because I am using a 1D example, but it or something like it would be needed in the real system to obtain weightings and ratings and to match the model to the measured temperatures.

The system has three parts:

• The set of all surface temperature measurements, unadjusted.

• Adjustments to temperature measurements (please understand the process before complaining!!)

• Expected temperature patterns (“the model”).

These three parts are dealt with below under the subheadings “Temperature”, “Adjustments” and “The Model”.

Ideally, the new system will actually be implemented and it will be made 100% public for open-source cooperative development. Everything I say below can be interpreted in that light, i.e. in a context where everything in the system can be replicated by others, tested, improved, etc.

Temperatures

As before, the actual temperature measurements reign supreme. They must be retained and used unchanged.

Associated with the surface temperature measurements, there needs to be accuracy data (error bars), e.g.:

• The accuracy of the thermometers themselves. Automated thermometers may be very accurate, but others have limited accuracy plus the risk of human error in reading them.

• Fitness for purpose: the risk that the temperature at the thermometer is not truly representative of what it purports to measure, i.e. of the temperature outside. Poorly sited stations in particular would have large error bars.

• Provision for weather station deterioration, e.g. failing paint.

• Instances where the measurements themselves are identified as unreliable. For example, some records show if a daily temperature is unreliable because it is estimated from a multiple-day measurement.

There could be blanket error ranges for the various types of equipment over time. So a super-accurate well-sited well-maintained automated thermometer would still be given error bars of say a few tenths of a degree C for “fitness for purpose”. An unautomated thermometer would have larger error bars, and an older thermometer would have even larger error bars.
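A minimal sketch of how such blanket error ranges might be combined per measurement follows. The component values and the root-sum-square combination rule are illustrative assumptions, not part of the proposal:

```python
import math

# Sketch of blanket error ranges combined per measurement. The component
# values and the root-sum-square rule are illustrative assumptions.

ERROR_SOURCES = {
    "instrument":  {"automated": 0.1, "manual": 0.3, "pre_1900": 0.5},
    "fitness":     {"well_sited": 0.3, "poorly_sited": 1.0},
    "maintenance": {"good": 0.1, "deteriorated": 0.5},
}

def error_bar(instrument, siting, upkeep):
    parts = [
        ERROR_SOURCES["instrument"][instrument],
        ERROR_SOURCES["fitness"][siting],
        ERROR_SOURCES["maintenance"][upkeep],
    ]
    return math.sqrt(sum(p * p for p in parts))   # root-sum-square

print(round(error_bar("automated", "well_sited", "good"), 2))          # 0.33
print(round(error_bar("manual", "poorly_sited", "deteriorated"), 2))   # 1.16
```

Note that even the best-case station keeps a few tenths of a degree of “fitness for purpose” error, as described above.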

Sorry, but I didn’t put error bars into my illustration above. It is only an illustration aimed at getting the basic idea across, and I felt that error bars at this stage would not assist understanding.

Adjustments

As before, the system probably cannot completely remove the need for adjustments to measured temperatures. With any adjustments, however, whether they are to fix obviously incorrect individual temperatures or to fix systemic errors such as time-of-day bias, there is always the risk that the adjustments themselves can introduce errors. In the proposed system, adjustments are held separately from the measured temperatures, so that they can be seen, understood, tested, quantified, and if required rejected.

All adjustments are perceived errors in the measurements, so for every adjustment the error bars should be increased. The best general rule is: don’t do them!
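One way of holding adjustments separately from the raw measurements, so that each can be seen, tested and if required rejected, is sketched below. The structure and field names are illustrative only:

```python
from dataclasses import dataclass, field

# Sketch: the raw measurement is never changed; adjustments are held
# separately so a user can accept or reject them. Names are illustrative.

@dataclass
class Measurement:
    raw_deg_c: float                 # reigns supreme; never changed
    error_bar: float
    adjustments: list = field(default_factory=list)   # (label, delta, extra_error)

    def add_adjustment(self, label, delta, extra_error):
        # Every adjustment is a perceived error, so it also widens the error bar.
        self.adjustments.append((label, delta, extra_error))

    def value(self, accept=lambda label: True):
        """Adjusted value using only the adjustments the caller accepts."""
        return self.raw_deg_c + sum(d for lbl, d, _ in self.adjustments if accept(lbl))

m = Measurement(raw_deg_c=31.4, error_bar=0.3)
m.add_adjustment("time-of-day bias", -0.2, 0.1)
print(m.value())                            # 31.2, with the adjustment applied
print(m.value(accept=lambda lbl: False))    # 31.4, all adjustments rejected
```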

Some types of adjustment are not permitted, including many of the adjustments currently being done by BEST and others: infilling, station moves, “outliers” from some expected pattern, adjustment from surrounding stations, etc.

People are so used to station moves being adjusted for, that it may seem odd that adjustments for station moves cannot be allowed. The reason is that every temperature measurement has a location, so when a station moves, the only change to its known data is its (x,y,z) location; and if that doesn’t change then there’s nothing to change. To single out the station for special treatment might itself introduce a bias. In my system, the “before” station, the “after” station, and all other stations are treated identically. In fact, all temperature measurements are treated identically. As I said in my previous post, a station with only a single temperature measurement in its lifetime is used on an equal footing with all other temperature measurements, and even moving temperature measurement devices could be used.

Here’s a really significant thought: If a particular station move is regarded as being significant even though it is a trivial location change, then it would be reasonable to estimate the amount of temperature change that the move could generate and then ensure that the “fitness for purpose” error bars are at least this large on all stations. The rationale is that there must be this much potential error at other locations.

Figure 9. If moving Station X a few metres changes its temperature, what is the temperature a few metres from Station Y?
The Model

The really important part of the system is the model. The set of temperatures is there for the model to be tested against.

The model contains all factors affecting temperature that anyone wants to test. The factors are combined to predict a pattern of temperature around the world’s surface over time. The pattern is then tested against the measured temperatures. Rather than try to describe it all in words, I hope that the simple 1D illustration above will suffice to get the ideas across.
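In the open-source spirit of the proposal, the model might be structured as an open registry of factor functions, so that a researcher can contribute one factor and test it in combination with all the others (the cooperative-development point made earlier). This is purely a sketch under my own assumptions, with a linear combination standing in for what might need to be non-linear:

```python
# Sketch of the model as an open registry of factor functions. Names,
# coefficients and the linear combination are all illustrative.

FACTORS = []

def factor(fn):
    """Register a factor: fn(station) -> contribution in deg C."""
    FACTORS.append(fn)
    return fn

@factor
def altitude(station):
    return -6.5 * station.get("alt_km", 0.0)     # simple lapse-rate term

@factor
def latitude(station):
    return -0.6 * station.get("lat_deg", 0.0)    # cooler at higher latitude

def predicted_temperature(base_deg_c, station):
    # Linear combination for the sketch; a real model might be non-linear.
    return base_deg_c + sum(f(station) for f in FACTORS)

print(round(predicted_temperature(45.0, {"alt_km": 0.3, "lat_deg": 34.0}), 2))  # 22.65
```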

###

Mike Jonas (MA Maths Oxford UK) retired some years ago after nearly 40 years in I.T.

Data and workings

All data and workings are available in a spreadsheet here: ModelTrial.xlsx (Excel spreadsheet).

Abbreviations

BEST – Berkeley Earth Surface Temperatures

C – Centigrade or Celsius [If you don’t like “Centigrade” please see Willis’s comment on the original post]

TWC – Earth’s temperature / weather /climate system

UHE – Urban Heat Effect

1D – 1-dimensional

Comments
Ken Gray
January 29, 2016 6:33 am

Here is something that may be germane to your excellent project. Wegman, I believe, pointed out that there are advanced statistical techniques for handling incomplete data sets, i.e., sets of data with missing data points. Wouldn’t this be preferable to infilling? Perhaps it offers another mathematically consistent tool for avoiding the deprecated adjustment procedures. Thanks.

Editor
Reply to  Ken Gray
January 29, 2016 5:29 pm

Ken Gray – thanks, I’ll take a look, but I think it might be barking up the wrong tree. The point of my proposed system is that it simply doesn’t care if data is missing: it works with the data that it has, treating all data points equally.

M Seward
January 29, 2016 8:16 am

Hmmm… then why did NOAA ‘adjust’ their data and eliminate the “pause” and then trumpet that fact as loudly and as widely as they could in the lead up to Paris?
The basic truth is that the thermometer record is only useful as local data about what the temperature was at a location at a time. Using it for a global, indicative temperature is just risible as there are so many variables that affect the eventual value. As MJ perhaps unwittingly illustrates, all these ‘global temperatures’ actually are is the construct of a model of one sort or another.
I’ll stick with the satellite or balloon data sets thanks. Still some adjustments but nothing like that required for the surface thermometer record.

Reply to  M Seward
January 29, 2016 12:31 pm

There can be no scientific reason for trumpeting results of a modified process as substantially different from the prior measurements. NOAA’s bombastic pronouncements have to be political. I am reminded of what the CIA provided for Bush when they were looking for a reason to invade Iraq – exactly what the administration wanted! The tops of these organizations are political appointees who are there to control the output to suit their political masters. It is scientifically corrupt from the top down. Even Nick can’t bring himself to come out and say the ship intake data is valid. It is, in fact, laughable!

Editor
Reply to  M Seward
January 29, 2016 5:46 pm

I think ‘wittingly’ would be more accurate!

ferd berple
January 29, 2016 9:01 am

Isn’t trying to build records by station a fool’s enterprise? Since the stations change over time due to uncontrolled siting variables, there will be drift due to the station environment that has nothing to do with climate.
Thus, the only realistic approach is to reject the notion that you can build an uncontaminated history by stations, and instead use random sampling similar to what is done for ship data.

Editor
Reply to  ferd berple
January 29, 2016 5:33 pm

ferd berple – maybe you are right, and obviously one will never deal completely with all small-scale variables like siting, station/location contamination, etc. So there will always be error bars. But I think that what I am proposing would at least be better than anything anyone else is doing (that I know of). It would also allow these unknowns to be quantified.

ferd berple
January 29, 2016 9:09 am

Wouldn’t this be preferable to infilling?
===================
how do you infill the North Pole with surrounding data, where all the surrounding stations may well be warmer simply because they are further south? Don’t you need to correct for latitude, elevation, humidity, ice coverage, time of day, etc. etc. etc.
It is a fool’s errand. You are a slave to the grid or the station. The problem is the sampling technique relies on data you do not have.
If you are using gridding, as you reduce the grid size to increase accuracy you increase the chance of empty grid cells, reducing accuracy.
If you rely on stations, as conditions around the stations change, non-climatic variables will appear to be climate change, contaminating the result.
Either way you cannot win. So change the methodology.

Reply to  ferd berple
January 29, 2016 1:55 pm

“how do you infill the North Pole with surrounding data, where all the surrounding stations may well be warmer simply because they are further south?”
Any infilling will be done with anomalies. There is no reason to expect anomalies to be warmer at lower latitudes.

Reply to  Nick Stokes
January 29, 2016 6:01 pm

Apologies, that answer related to the existing system, not Mike’s proposal.

Editor
Reply to  ferd berple
January 29, 2016 5:39 pm

ferd berple – The North Pole, like the South Pole, the oceans and the major wildernesses, will always be an issue because they have so few thermometers. My proposed system does handle them, but necessarily the error bars increase over those areas, and the reduction in reliability of the overall results can be quantified too (see what I say about model rating in the article – actually I think it was in the previous article).

Donald L. Klipstein
January 29, 2016 9:34 am

Regarding: ” UHE: Both Max and Min temperatures are higher by 1 deg C in urban areas.”
Other articles at WUWT have said that UHE increases minimum temperatures far more than maximum temperatures. Some of these articles proposed using max instead of mean temperatures because UHE raises mainly the minimum temps.

Editor
Reply to  Donald L. Klipstein
January 29, 2016 4:56 pm

I was keeping it simple because it was for illustration only, and I tried to stay on the conservative side. You are quite right, in a real system the Min figure might well be higher than the Max, and both might well be higher than my simple test. If my system gets implemented, we should be able to tell.

Kev-in-Uk
January 29, 2016 9:38 am

I’ve read a good many comments on this and the original post and I have not noticed anyone stating the bleeding obvious (as it seems to me, anyway!).
It strikes me that the whole premise of doing some kind of statistical analysis, modeling, or whatever data treatment you want to do – is avoiding the necessary hard work that is actually required to make sense of local/regional spatial temperature information in any meaningful way. I think this is kinda what Mike Jonas is trying to promote? (albeit by creating yet another ‘model’)
To a certain degree, I suspect this is just pure laziness or avoidance of ‘human’ workload on the part of the current dataset keepers? As many have said, it is somewhat crazy to deduce a global temperature, let alone a global temperature anomaly, etc, from flawed or incomplete data, and especially to silly levels of degrees, whilst not quantifying the real/true uncertainties. However, in my opinion, it should be possible to undertake humanized assessment of local/regional areas using raw temperature data. And yes, this may mean somebody actually making real world ‘decisions’ based on experience and rational judgement! (instead of some computerised GIGO system)
In simple terms, a human (appropriately trained!) needs to be able to look at local datasets independently and correspondingly – in order to ‘see’ and evaluate or question why differences are observed. This takes time and skill, and requires careful analysis of all the factors (as are obvious). Whatever the likes of the statisticians want to say – if this is not being done from the outset, writing some treatment ‘model’ to do the essential ‘human’ part is nigh on impossible in my opinion – and as we see, justifying such methodology is even harder! If I suggested to commenters here that a crowd sourced project to do this would be a good idea – I suspect there would be many takers/volunteers.
Basically, if someone gave you a map (of a given area, say 100x100km, but whatever creates a reasonable workable number of stations), with a list of all the stations, and excel spreadsheet(s) and plots of the data for say, the last 50 years – I reckon human analysis could be undertaken by direct comparison of records with each other in relatively short order. Indeed, I’m sure it would be a piece of coding ‘pi$$’ to have them displayed as overlays in some kind of video/gif for a human to review quite quickly and pick out ‘odd’ ones?
The point being that if you see, say Heathrow airport showing a steady rise, but a few other stations nearby showing no rise, you would highlight Heathrow as suspect. You would then cross check the several stations with each other, and if they all show similar trends, etc – you could be reasonably confident that THEY were the REAL situation and you would DISCARD Heathrow completely, yes? Now, obviously, it’s not as simple as that, but the basic premise is that this is what is required in order to flush out and actually analyse good data from bad data. I personally do not see how it can be done any other way (the statisticians can argue about how ‘little’ it affects the net result and all that, but that’s not really a scientific approach).
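[A rough sketch of the cross-check described here, with invented stations, data and threshold, purely to illustrate the idea – the flag is for a human to review, not an automatic deletion:]

```python
from statistics import median

# Fit a trend to each station and flag any station whose trend is far from
# its neighbours' median trend. Stations, data and threshold are invented.

def trend_per_decade(series):
    """Least-squares slope of an annual series, in deg C per decade."""
    n = len(series)
    mx = (n - 1) / 2
    my = sum(series) / n
    num = sum((i - mx) * (y - my) for i, y in enumerate(series))
    den = sum((i - mx) ** 2 for i in range(n))
    return 10 * num / den

series = {
    "Heathrow":  [14.0 + 0.040 * i for i in range(50)],   # steady rise
    "Station B": [13.8 + 0.005 * i for i in range(50)],
    "Station C": [14.1 + 0.008 * i for i in range(50)],
    "Station D": [13.9 + 0.006 * i for i in range(50)],
}
trends = {name: trend_per_decade(s) for name, s in series.items()}
med = median(trends.values())
suspects = [name for name, t in trends.items() if abs(t - med) > 0.15]
print(suspects)   # ['Heathrow'] – flagged for human review, not auto-deleted
```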
I realise that the current datasets have some form of QC, but I do not know (does anyone?) exactly what that entails and whether it involves proper human cross checking? As far as I can tell, the likes of BEST tend to simply discard poor QC data – but in fact, just because it is intermittent or whatever, it may actually be extremely useful to confirm other station data. For example, even if a temp is indicated to be reading 5 degrees high everyday, the plots should still ‘match’ nearby stations, ergo, a simple adjustment could perhaps be made to enable such data to be used – even if not used, it would still be helping to confirm nearby station data. Like I said, I don’t know how detailed such analysis may or may not be undertaken in all these QC procedures. However, I strongly suspect that the required level of human assessment is not undertaken – instead simple coded ‘rules’ are formulated to reduce/remove data from further computation. (I’m sure the likes of Mosher can advise what is truly undertaken?)
If crowd sourcing SETI, pulsar searches, etc, is considered desirable and helpful to science – why not station data checking? Of course, there is a question of what data ‘they’ might want to give out, especially ‘raw’ stuff, but then again, if ‘they’ have nothing to hide……?
My overall point being that with all the uncertainties being bandied about, surely we can actually identify (and remove?) a good many of them via detailed and meaningful human analysis instead of computerised modeling (based on numerous assumptions, etc).
Obviously, this comment is not particularly helpful but it strikes me that many seem to forget that such analysis is essential if we are to consider any dataset as ‘good’.
I strongly doubt NASA, GISS, CRU, etc have thousands of people doing this kind of data checking (as clearly, to do it right, you would need that number of people!) so this really ought to be considered as a starting point?

Editor
Reply to  Kev-in-Uk
January 29, 2016 5:21 pm

Kev-in-UK – I would like to get this going on an open source basis, but first I need to talk to a few people. If I get anywhere, I’m sure you will see it on WUWT!

Gary Pearse
January 29, 2016 9:39 am

Mike Jonas, don’t be discouraged by naysayers, especially those in the industry of world temperatures. The strength of your approach lies in the very nature of the errors in the old record. There will have been just as many positive as negative errors in such a system. I was bowled over by your figure 9. Indeed, if you can get different temperatures with a move of 100 metres, what the heck is the temperature a hundred metres away from the ones you didn’t move!! This is the biggest contribution of your work. It makes the adjustments and homogenizations clearly non-scientific.
Okay, if you have one reading in a series that is 100C, yeah, we could ignore this one as impossible! And there is some merit to Tobs adjustments. Outside of that, the adjustments are not useful. One thing I have been wanting to say for a long time is: don’t adjust because a station needs painting. Paint the gol darn thing and maintain them that way! If these temperatures are so critically important, put 3 or 4 of them within a hundred feet of each other at each site and maintain them. It costs us billions a year to keep the climate crocodile fed; take one percent of that, at least, and make a good reliable network. Run the new network for 5 years along with the old one and see what you get.
Adjusting records up that are showing downtrends from Northern Canada, across Greenland, Iceland and Siberia is totally wrong. Icelandic meteorologists strongly argue against what the Anglo-Saxon world record keepers are doing. This vast area shows the same patterns! Even showing that the ’30s and early ’40s were warmer than now. Ditto South American records (as shown by the “not a lot of people know” blog). Here are some examples, all over Greenland:
http://www.worldclimatereport.com/index.php/2007/10/16/greenland-climate-now-vs-then-part-i-temperatures/
All over Iceland:
http://www.euanmearns.com/wp-content/uploads/2015/03/IMO_dT_7stations.png
And in New Zealand they added on up to 1C to the raw record and reversed trend upwards- scroll down the gallery of chicanery (and note that mid 20th Century temps were warmer than now – like iceland:
http://www.climatescience.org.nz/images/PDFs/global_warming_nz2.pdf

nobodysknowledge
Reply to  Gary Pearse
January 29, 2016 1:34 pm

People working with temperatures can see the need for adjustments. I think all adjustments should be affirmed by local meteorologists, as a quality control. They have the local knowledge that is necessary to make real judgements. Perhaps they could themselves make the necessary estimates based on scientific rules.

Editor
Reply to  Gary Pearse
January 29, 2016 5:23 pm

Gary Pearse – thanks for picking up on Figure 9. I thought it was pretty important, as I tried to indicate. I wonder how many others understood its full implications.

Kev-in-Uk
Reply to  Mike Jonas
January 30, 2016 2:10 am

Mike, I think most folk will understand that error bars are important, and technically, should be the same for all stations. There have been discussions where folk like myself have said that raw data is preferable (after removing obvious errors) in all stations, as unless there is a gradual or ‘drifting’ instrumental error, it is reasonable to assume that the underlying trend (IF THERE IS ONE!) will be the same for all stations.
However, over several decades and various instrumental changes, etc, there will be differences in stations’ errors, and, of course, if we simply kept the raw data and kept widening the error bars we would end up with temperatures of +/- several degrees. Curiously enough, this is exactly how I (a geoscientist) view the temperature data in any event, e.g. a typical UK summer day: 22 deg C +/- several degrees; a typical UK winter day: 5 deg C +/- several degrees! In my humble opinion, therefore, this is what most folk don’t like about the global temperature metric – it’s basically a construct of innumerable assumptions and an averaging on nigh on the fewest data points imaginable compared to the size of the spatial task, and produces a meaningless ‘number’. Ignoring the CO2 trace gas issue – this is what most skeptics cannot tolerate: the thought of a supposed number being used to support a supposed theory, when in practice neither can be reasonably demonstrated (i.e. the real measurable effect of CO2 in the atmosphere, nor the actual ‘global’ temperature).
In my opinion, therefore, local or at best ‘regional’ data should be used to create independent ‘verified’ and corrected (as required) datasets, from which general trends may be observed. I would like to bet that if such local data were developed it would likely reflect, or correlate to, the years of satellite data available? Even more, I’d bet that such a procedure would also confirm the greatly increased UHI/UHE in the last few decades! I used Heathrow as an example before because we all know that Heathrow has had vastly increased air traffic over the last few decades, but I have yet to see an assessment of how much the temps should be reduced to compensate for this (and, of course, the increased population/sprawl of London generally).
At least if we had good local and/or regional datasets, we might be able to take temperature trends more seriously?

William Astley
January 29, 2016 9:43 am

This is surreal. There has been a cottage industry of CAGW manipulation of the land-based temperature record to attempt to create a better hockey stick (reducing temperatures in the past and increasing recent temperatures), ignoring the fact that the land-based temperature record is contaminated due to the urban effect, which explains the roughly 0.3C difference between it and the satellite data (all the satellite temperature databases more or less agree with each other).
The pathetic manipulations to the GISS temperature ‘data’ base were done to create a cult-of-CAGW propaganda display, as was Mann’s elimination of the medieval warm period.
The cult of CAGW do not understand/comprehend that manipulating current and past temperature data to create propaganda displays does not change the logical implications of 18 years without warming. Observation A leads to conclusion B. Changing observation A will not change the physical implications of conclusion B. In addition, there are dozens and dozens of independent analysis results that support conclusion B.
The purposeless, idiotic climate wars are a distraction from what is going to happen next to the earth’s climate. The red in this picture is going to be replaced by blue as surely as night follows day. If and when there is significant cooling this entire conversation will change.
http://www.ospo.noaa.gov/data/sst/anomaly/2016/anomnight.1.28.2016.gif
Based on the 18 years without warming and dozens and dozens of other observations/analysis results there is no CAGW issue, and no measurable AGW issue. If that assertion/conclusion is correct, there are fundamental errors in the most basic analysis/assumptions of the effect of ‘greenhouse’ gas molecules on surface temperature. The ‘error’ is: an increase in any greenhouse gas will cause an increase in convection cooling (an increase in convection cooling reduces the lapse rate). The reduction in the lapse rate due to the increase in convection cooling causes less surface warming, which offsets the majority of the greenhouse effect from the molecule in question. In addition, the warming due to the increase in CO2 – ignoring the offsetting increase in convection – was overestimated by a factor of 4, as the overlap of the spectral absorption of water vapor and CO2 was ignored.
The recent tropical warming and high latitude warming were caused by an increase in solar wind bursts, which in turn was caused by massive weird persistent coronal holes on the surface of the sun. The massive coronal holes are disappearing, as are the sunspots. There is a major change occurring to the sun.
Comments:
1) The unexplained disappearing sunspots – disappearing sunspots are different from a reduction in the number of sunspots – have resulted in a weakening of the solar heliosphere, which is the reason why the flux of galactic cosmic rays (GCR; see below for an explanation) striking the earth is the highest ever recorded for this time in a solar cycle.
2) The solar heliosphere is the name for a tenuous solar-created plasma bubble or solar ‘atmosphere’ that extends past the orbit of Pluto. The solar heliosphere is made up of hot ionized gas (plasma) that is ejected from the sun and pieces of magnetic flux that are also ejected from the sun. The pieces of magnetic flux in the solar heliosphere block and deflect high speed cosmic protons. The high speed cosmic protons are, for historic reasons, called galactic cosmic ‘rays’ (GCR) or galactic cosmic ‘flux’ (GCF) (the discoverers of the high speed cosmic protons thought they were observing a ‘ray’, and the name ray, as opposed to particle, stuck).
3) The high speed cosmic protons strike the earth’s atmosphere and create cloud forming ions in the earth’s atmosphere. The earth’s magnetic field also blocks and deflects GCR, so the majority of the change in GCR striking the earth due to changes in the strength and extent of the solar heliosphere is in the latitude region 40 to 60 degrees.
4) There is a second solar mechanism that has a big effect on the earth’s climate. Solar wind bursts from the sun (the majority of the solar wind bursts are created by ‘coronal holes’ on the surface of the sun; coronal holes are not accounted for in the sunspot counting number and can and do appear even when there are no or few sunspots on the surface of the sun) create a space charge differential in the earth’s ionosphere, which in turn causes current flow from high latitude regions to the equator of the planet. This current flow in turn causes a change in cloud amount – the mechanism whereby solar wind bursts remove cloud forming ions from the earth’s atmosphere is called electroscavenging – and cloud properties, causing warming at high latitude regions and at the equator. The electroscavenging mechanism is one of the key physical reasons why there are specific patterns of regional warming and cooling on the earth.
5) The electroscavenging effect from solar wind bursts overrides/inhibits the normal cooling that would occur due to higher GCR.
Coronal holes on the surface of the sun are caused by an unknown process deep within the sun. Sudden abrupt changes to the coronal holes indicate that there are significant unexplained changes deep within the sun.
The following paper discusses the fact that changes in planetary temperature closely correlate with changes in the number of solar wind bursts. The other paper explains the electroscavenging mechanism.
http://gacc.nifc.gov/sacc/predictive/SOLAR_WEATHER-CLIMATE_STUDIES/GEC-Solar%20Effects%20on%20Global%20Electric%20Circuit%20on%20clouds%20and%20climate%20Tinsley%202007.pdf
The role of the global electric circuit in solar and internal forcing of clouds and climate
http://sait.oat.ts.astro.it/MmSAI/76/PDF/969.pdf

Once again about global warming and solar activity
Solar activity, together with human activity, is considered a possible factor for the global warming observed in the last century. However, in the last decades solar activity has remained more or less constant while surface air temperature has continued to increase, which is interpreted as an evidence that in this period human activity is the main factor for global warming. We show that the index commonly used for quantifying long-term changes in solar activity, the sunspot number, accounts for only one part of solar activity and using this index leads to the underestimation of the role of solar activity in the global warming in the recent decades.
A more suitable index is the geomagnetic activity which reflects all solar activity (William: The geomagnetic field rings when there are solar wind bursts.), and it is highly correlated to global temperature variations in the whole period for which we have data.
…The geomagnetic activity reflects the impact of solar activity originating from both closed and open magnetic field regions, so it is a better indicator of solar activity than the sunspot number which is related to only closed magnetic field regions. It has been noted that in the last century the correlation between sunspot number and geomagnetic activity has been steadily decreasing from 0.76 in the period 1868-1890, to 0.35 in the period 1960-1982, while the lag has increased from 0 to 3 years (Vieira et al. 2001). (William: This is the reason sunspot number no longer correlates with planetary temperature. There is another mechanism, electroscavenging from solar wind bursts, which is modulating planetary temperature.)
In Figure 6 the long-term variations in global temperature are compared to the long-term variations in geomagnetic activity as expressed by the ak-index (Nevanlinna and Kataja 2003). The correlation between the two quantities is 0.85 with p<0.01 for the whole period studied. It could therefore be concluded that both the decreasing correlation between sunspot number and geomagnetic activity, and the deviation of the global temperature long-term trend from solar activity as expressed by sunspot index are due to the increased number of high-speed streams of solar wind on the declining phase and in the minimum of sunspot cycle in the last decades.

matthewrmarler
January 29, 2016 11:07 am

They are using their own notions of temperature trends and consistency to fill in missing temperature measurements, and to adjust temperature measurements, which are subsequently used as if they were real temperature measurements.
You need to study up. BEST do not “use” anything “as if” they were real temperature measurements. Using a hierarchical model which they subject to lots of testing of accuracy, they use estimates from the data to impute values that are missing or dramatically (based on explicit criteria written into the code and the descriptions) different from their neighbors.

Editor
Reply to  matthewrmarler
January 29, 2016 5:11 pm

impute: to calculate something when you do not have exact information, by comparing it to something similar (http://dictionary.cambridge.org/dictionary/english/impute). That is exactly what BEST is doing, and what I say they should not be doing.

matthewrmarler
January 29, 2016 11:09 am

• Build a model of how mankind’s activities and Earth’s weather and climate systems drive surface temperatures.
• Test the model against the temperature history.
• Use the results to improve the model.

Have you spent any time studying the development of GCMs and other models whose results have been widely presented in the peer-reviewed literature and popular press?

Reply to  matthewrmarler
January 29, 2016 12:59 pm

This seems like a reasonable approach. The models have been built. Tests against reality have shown that the models are incorrect. They are now using the results to change the inputs. What’s wrong with that picture?

Editor
Reply to  matthewrmarler
January 29, 2016 5:13 pm

Have I spent any time studying [..] GCMs?
Yes. http://wattsupwiththat.com/2015/11/08/inside-the-climate-computer-models/

January 29, 2016 2:07 pm

One wonders how the variations are so manipulated and confused just so they can demonstrate a “warming planet”. When one looks at “models” one has to wonder just how they pervert the temperature. Locally, over a one hundred mile distance my iPhone temps range from 17deg to 19degC, where one has rain and the other does not. But as demonstrated on “models” those temps would be all lifted to 19degC to show “warming”, and that method is a lie.

Editor
Kip Hansen
January 29, 2016 4:45 pm

My latest drum to beat is to ask: If we were to start over, would we be focusing on the air temperature 2 meters above the ground and the sea temperatures at the surface or in the top few meters?
Is that a physically correct metric to measure hypothesized CO2-induced atmospheric or climatic warming?
Is CO2-induced warming expected to appear primarily within 2 meters of the surface (up in the air or down in the sea)?
If surface temperatures (air and sea) are not the best-indicator metric, proposing new ways to massage or torture the existing (questionable) data into a doubtfully-useful average-of-averages-of-averages single number may be a colossal waste of time and effort.
Why did we spend all that money to loft atmospheric temperature sensing satellites if we are going to insist on using transmogrified surface thermometer records?
Did the whole subject get off on the wrong foot down the wrong path?

Editor
Reply to  Kip Hansen
January 29, 2016 5:18 pm

If our concern is the Earth’s heat content, we should undoubtedly be looking in the oceans not the atmosphere. So yes, the whole subject of global warming did get off on the wrong foot down the wrong path. But if we want to learn about the weather and climate that we live in, then it is appropriate to look also at data from the surface and from the atmosphere.

Editor
Reply to  Mike Jonas
February 3, 2016 8:55 am

Reply to Mike Jonas ==> “But if we want to learn about the weather and climate that we live in, then it is appropriate to look also at data from the surface and from the atmosphere.” Quite right — that’s where weather happens and weather is important to us. As a sailor, I know the sea surface also experiences its own weather — coupled to the atmosphere — and it is of great concern to me when I am voyaging.
How much these local surface phenomena reflect the state of the climate is one of the unanswered questions. Some believe that you can add them all together, get an average, and have climate! Me, not so much.

nobodysknowledge
Reply to  Kip Hansen
January 30, 2016 4:34 am

Some good questions. If we measure air temperatures at 2m plus/minus 4cm, I think we measure 0.01% of the air. Air has 3% of the heat uptake, so it is 0.003% of the warming we can measure. When we measure over land area, this will be less than 0.001% of the heating. How can this give a good picture of global warming? Perhaps the energy budget is more interesting when it comes to global warming. Still, I think it is interesting to look at surface temperatures, but what conclusions can we draw?

nobodysknowledge
Reply to  nobodysknowledge
January 30, 2016 4:35 am

It was meant as an answer to Kip Hansen.

1sky1
January 29, 2016 5:18 pm

Unless first principles provide a well-validated theory or law, models should always be constructed ex post, not ex ante. Otherwise we risk deforming the data, instead of being informed by it. That is exactly the case with BEST’s presumptions of red-noise temporal variability and spatial homogeneity that lead to the manufacture of long regional time series from mere snippets of often UHI-afflicted record. Kriging – often from afar – then furthers the fiction by filling in geographic gaps in data coverage. The only real scientific value of their much-touted exercise is not in their unrealistic modelling, but the access provided to the raw data.

garymount
February 5, 2016 7:01 am

I haven’t had time to review and comment as I have an important software project I need to dedicate my mental faculties towards as it nears completion, to a point where it can get up and running. A project years in the making, off and on.
In a couple of weeks I hope to free up some time to deeply look into this latest post, so more later.
However, for now I leave you with some thoughts I had while reading this post when I finally had a chance, and reading one of Willis’s posts about global temperatures.
Note, though I don’t comment often, I do read almost every post and a large quantity of comments every day.
—————————————-
A day in the life of an average global temperature estimate.
Our day begins, as most days do not, on the Greenwich Meridian. There are few temperature stations located here, if any (I haven’t checked), but there are many within Time Zone 0 or Zulu Time. The local times throughout a time zone are all different from the time zone time, except at one meridian line located within the time zone. For example, a few miles to the east of my location the local time is one minute earlier and to the west, one minute later.
So our temperature reading for the calculated average temperature day begins at midnight GMT, some 8 hours (time zone standardized – but not true local time) before my temperature station in Port Coquitlam – some 30 miles East and inland from Vancouver – starts its day.
The sun is always spiralling around the globe, either heading away from the north pole (towards the south pole) or heading towards it. As the day progresses, and the sun spirals, more temperature stations get collected into the day’s tally. All with different local times because of time zone modifications, so that, for example, a day might start half an hour before the true start of a day, or half an hour later.
Some of the last stations have their final reading some 11 hours after the final reading of a GMT station. However I don’t know what to make of the stations just a little further west where their time zones are minus or behind GMT by 13 hours.
This is a very odd day.
What would the average temperature of the globe look like at an instant of time? Not for a day but for a moment in time. What would the average day look like if you integrated these instants, say, every minute, across a 24 hour time period? And whose 24 hour period, as there are thousands of local days across the planet, though you can Time Zone and restrict your choices to 24 + Newfoundland 🙂
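[A toy version of that instantaneous-average idea, with three synthetic stations whose diurnal cycles are offset by longitude; all station values, amplitudes and longitudes are invented:]

```python
import math

# Compute the "global" mean at each UTC minute, then integrate those
# instants over one UTC day. All station values are synthetic.

def station_temp(mean_c, amplitude_c, longitude_deg, utc_hour):
    local_solar_hour = (utc_hour + longitude_deg / 15.0) % 24
    # Peak mid-afternoon (~15:00 local solar time), trough before dawn.
    return mean_c + amplitude_c * math.cos(2 * math.pi * (local_solar_hour - 15) / 24)

stations = [(25.0, 6.0, 151.2), (10.0, 4.0, -0.1), (18.0, 5.0, -122.8)]

snapshots = []
for minute in range(24 * 60):
    utc_hour = minute / 60.0
    snapshots.append(
        sum(station_temp(m, a, lon, utc_hour) for m, a, lon in stations) / len(stations)
    )

print(round(sum(snapshots) / len(snapshots), 3))  # integrated daily mean: 17.667
print(round(max(snapshots) - min(snapshots), 3))  # the "instant" average swings by a few deg C
```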