A way forward for BEST et al and their surface temperature record

Guest essay by Mike Jonas

Introduction

On 10 January, WUWT published my post A new system for determining global temperature. In that post, I described a proposed system for determining global temperature from the historical temperature record.

In that post I also indicated that another objective of the proposed system was to develop an improved model of global temperature: "After a while, the principal objective for improving the model would not be a better global temperature, it would be … a better model."

In this post, I explore that objective of the proposed system – a better model – and how the BEST project in particular could contribute to it. [BEST – Berkeley Earth Surface Temperatures].

In order to understand this post, it may be necessary to first read the original post. However, it will not be necessary to understand how the proposed triangulation process works because that is unimportant here (I use a 1-dimensional (“1D”) trial run to illustrate the idea).

The system would still provide global and regional temperature indexes, but these just become by-products (and I think that the size of their error bars might shock some people). The principal aim becomes learning about Earth’s temperatures, weather and climate.

In an earlier post (here) I stated that the climate models were all upside down because they tried to construct climate bottom-up from weather, and that instead the models need to work first and directly with climate. In my 10 Jan post I said that the temperature organisations had their logic upside down because they tried to adjust temperatures to match a model when they should have been adjusting a model to match the temperatures. This post is another step towards changing the mindset.

About BEST

There is a lot to like about the BEST project.

BEST’s project goals (here) include: “To provide an open platform for further analysis by publishing our complete data and software code as well as tools to aid both professional and amateur exploration of the data“. This is a brilliant goal, which offers a very positive way forward, as I will attempt to explain in this article.

BEST have also, very commendably, put together a much larger set of temperature measurements than is used by other organisations.

What is less good is their mindset, which needs changing. They use their own notions of temperature trends and consistency to fill in missing measurements and to adjust others, and the results are subsequently used as if they were real temperature measurements. This is a very dangerous approach because of its circular nature: if they adjust measured temperatures to match their pre-conceived notions of how temperatures work, then their final results are very likely to match those pre-conceived notions.

My Objective

We (collectively) should make much better use of the historical surface temperature record (“temperature history”) than simply trying to work out the “global temperature” and its trend. With all its faults, the record is a significant asset, and it would be a shame not to try to make maximum use of it.

The main objective should be to use the temperature history to learn more about how Earth's temperature/weather/climate system ("TWC") operates.

The basic idea is simple (a rough code sketch follows this list):

• Build a model of how mankind’s activities and Earth’s weather and climate systems drive surface temperatures.

• Test the model against the temperature history.

• Use the results to improve the model.
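
To make the loop concrete, here is a minimal sketch in Python (my own illustration, not code from the proposed system or from BEST) of the "test the model against the temperature history" step, using an ordinary least-squares fit of candidate factors and the mean absolute misfit as the score. A plain linear fit is an assumption for illustration only; as the note below says, factors may combine non-linearly.

```python
import numpy as np

def fit_and_score(factor_matrix, measured_temps):
    """Fit a linear combination of candidate factors (the columns of
    factor_matrix) to the measured temperatures and report the mean
    absolute misfit between model and measurements."""
    coeffs, *_ = np.linalg.lstsq(factor_matrix, measured_temps, rcond=None)
    misfit = np.mean(np.abs(measured_temps - factor_matrix @ coeffs))
    return coeffs, misfit

# Hypothetical usage: start with altitude and latitude, then add further
# factors and see whether the misfit shrinks.
# base = np.column_stack([np.ones(n), altitude_m, latitude_deg])
# _, misfit_base = fit_and_score(base, measured_temps)
# _, misfit_more = fit_and_score(np.column_stack([base, coastal, uhe]), measured_temps)
```

A smaller misfit after adding a factor is the signal that the model has improved; a misfit that refuses to shrink is itself information about the TWC.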

Because the model contains both natural and man-made factors, it will be possible – after the model has been refined to be reasonably accurate – to determine the contribution of each man-made and each natural factor to Earth's temperatures. [NB: Factors may combine non-linearly.]

As the model improves, so we learn more about the TWC. And maybe, along the way, we also get a better “global average temperature” index.

Illustration

I will illustrate the ideas using a simple 1D trial.

I use January (mid-summer) temperature measurements from Australian stations roughly in a line from Robe on the SE coast of South Australia across Sydney to Lord Howe Island in the Tasman Sea [Bankstown Airport is in Sydney]:

Figure 1. Line of weather stations (not all are placemarked).
Willis Eschenbach in his recent post used an Altitude and Latitude temperature field to model expected temperatures. If I align Willis's factors with the measured temperatures along the line in Figure 1 so that their weighted averages are the same, the average difference between the individual modelled and measured temperatures is something like 2.5 deg C over the period 1965 to 2014:

Figure 2. Temperature field (model) using Altitude and Latitude only, compared to measured temperatures along the line from Robe to Lord Howe Island.
But if I add in two more factors – for the coastal/continental effect and the urban heat effect (UHE) – then the average difference between the temperature field (model) and the measured temperatures comes down to less than 1.5 deg C:

Figure 3. Temperature field (model) using four factors, compared to measured temperatures along the line from Robe to Lord Howe Island.
The two additional factors have been formulated very simply, to demonstrate how much difference a better model can make. I don't claim that these are the correct formulae for SE Australia in January, or that they can be applied elsewhere, or that others are not using them. They are purely for illustrative purposes. I have also simply combined the factors linearly, where a real model might need to be non-linear. The formulae used were (a rough code sketch follows the list):

• Coastal/continental effect: Max temperature rises by an extra 8 deg C over the first 100km from the coast (inland Australia gets pretty hot in summer!). Min temperature falls by 2 deg C over the same distance. No further change further inland.

• UHE: Both Max and Min temperatures are higher by 1 deg C in urban areas. They both rise by a bit more towards the centre of the urban area, with the additional rise being larger for larger urban areas. Formula used for the additional rise at the urban centre is 0.5*ln(1+W) deg C where W is the approx diameter of the urban area in km.
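
In code, the two illustrative factors above might look something like this (Python; the function names and the linear taper of the centre bonus towards the urban edge are my own assumptions, used only to mirror the descriptions in the two bullets):

```python
import math

def coastal_effect(dist_from_coast_km):
    """Illustrative coastal/continental factor: Max rises by an extra 8 deg C
    and Min falls by 2 deg C over the first 100 km inland, no change beyond."""
    frac = min(dist_from_coast_km, 100.0) / 100.0
    return 8.0 * frac, -2.0 * frac          # (delta_Max_C, delta_Min_C)

def uhe_effect(dist_from_centre_km, urban_width_km):
    """Illustrative UHE factor: +1 deg C anywhere inside the urban area, plus
    up to 0.5*ln(1+W) deg C at the centre, where W is the approx diameter of
    the urban area in km. How the centre bonus falls away towards the edge is
    not specified in the text; a linear taper is assumed here."""
    half_width = urban_width_km / 2.0
    if half_width <= 0 or dist_from_centre_km > half_width:
        return 0.0                          # outside the urban area
    taper = 1.0 - dist_from_centre_km / half_width
    return 1.0 + 0.5 * math.log(1.0 + urban_width_km) * taper
```

For example, an urban area roughly 50 km across would get about 0.5*ln(51) ≈ 2 deg C extra at its centre on top of the 1 deg C urban offset, while a small town a few km wide gets only a fraction of a degree extra.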

NB. Figures 2 and 3 are actually for 1985, which is reasonably representative of the whole period. I can’t easily use station averages, because of missing temperature measurements:

Figure 4. A sample of Max temperature measurements. Some temperature measurements are missing.
Applying the Model

The model for Altitude and Latitude only, along the line from Robe to Lord Howe Island, looks like this:

Figure 5. Altitude and Latitude only model, unadjusted – i.e. not matched to measured temperatures.
The orange dots are the measured temperatures. It is easily seen that the measured temperatures and the model have different shapes. At this stage, it’s the shape that matters, not the absolute value. Under my proposed system, measured temperatures reign supreme, so the model has to be adjusted to match them:

Figure 6. Altitude, Latitude only model, adjusted to match measured temperatures.
Because the adjusted model must go through all measured temperatures, you can see that changing the model isn't going to make much difference to the final average temperature …

… or can you?

Here is the same graph, but using the two extra factors – coastal/continental effect and UHE:

Figure 7. Four-factors model, adjusted to match measured temperatures.

Note: The 5th dot from the left is above the line because it is a small urban area (Warracknabeal) that falls between the graphed grid points (5km spacing).

The following graph focuses just on the centre section of Figure 7, so that UHE is easier to see:

Figure 8. Centre section from Figure 7, with urban areas shown by approximate width (Cootamundra, Sydney).
Now you can see just how much effect urban stations can have on average temperatures if you don’t treat UHE properly. BEST and others have come in for a lot of criticism (rightly so in my opinion) for the way they treat UHE. The various adjustments, homogenisations and in-fillings that the various organisations have made to temperatures have typically ended up spreading UHE into non-urban areas, thus corrupting their global average temperature index.

In my system, as illustrated above, UHE is catered for in the model, and is clearly confined to urban areas only. The average temperature along the Robe – Lord Howe line in Figure 6 is 28.3 deg C. In Figure 7 it is 27.1 deg C – that’s 1.2 deg C lower. The UHE contribution to the average temperature in Figure 7 is less than 0.1 deg C.
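
For those who want the mechanics, the "adjust the model to match the measurements" step and the line average can be sketched for the 1D case roughly as follows (Python; linearly interpolating the misfit between stations is my own assumption, since the text only requires that the adjusted model pass through every measured temperature):

```python
import numpy as np

def adjust_model_to_measurements(grid_km, model_c, station_km, measured_c):
    """Shift the model so that it passes exactly through every measured
    temperature: take the misfit (measured minus model) at each station,
    interpolate that misfit along the line, and add it back everywhere.
    Station positions are assumed ordered along the line."""
    model_at_stations = np.interp(station_km, grid_km, model_c)
    misfit = measured_c - model_at_stations
    return model_c + np.interp(grid_km, station_km, misfit)

# Hypothetical usage on the 5 km grid along the Robe - Lord Howe line:
# adjusted = adjust_model_to_measurements(grid_km, model_c, station_km, measured_c)
# line_average = adjusted.mean()   # compare the 28.3 and 27.1 deg C averages above
```

The measured temperatures are never altered; only the model is shifted, which is the sense in which "measured temperatures reign supreme".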

The model can and should be refined and extended to cover all sorts of additional factors – R Pielke Snr for example might like to add land clearing (I suspect that it would be significant between Robe and Cootamundra, where there is a lot of farm/grazing land).

Conclusion

Hopefully, the point is now successfully made that a change in mindset is needed, to get us away from the idea that temperatures must be adjusted, homogenised and mucked around with in order to make them match some idea of what we think they should be. Instead, we need to work with all temperature measurements unchanged. When we have developed a model that can get close to them it will tell us more about how Earth’s temperature/weather/climate system actually works.

Further supporting information is given below, after “Acknowledgements”.

Acknowledgements

Comments on the original post – especially the critical comments – have been very helpful. Some specific comments are referenced here, but many others were helpful, so I’ll just say “Thanks” to all commenters.

• richardscourtney said of my original proposed system, “That is what is done by ALL the existing teams who use station data to compute global temperature“. Maybe so, but I think not. For example, as demonstrated above, my system prevents UHE from being notionally spread into non-urban areas. I am not aware of any existing system that does that. As always, I’m happy to be proved wrong.

• Nick Stokes said that temperature adjustments made little difference. Since adjustments add to error bars (see below) it may be best to eliminate adjustments altogether. The proposed system makes this easy to do, and to evaluate, but it’s only an option. Note that below I completely rule out some kinds of temperature adjustments.

• Willis Eschenbach likened obtaining a global temperature from the temperature history to putting lipstick on a pig. The same comment may well apply here, but there is still the option of using only the higher quality data (if it can be identified). That would further eliminate any perceived need for adjustments.

• garymount described the open source Wattson project, which may well already be doing everything I describe here and more, but it hasn’t been documented yet.

• It will be interesting to see whether this article is seen as introducing anything new or different.

I say below that the proposed system is suitable for cooperative development. The point is that it is very difficult for anyone to identify the effect of any one factor in the temperature record in isolation. The temperature record is the end result of all factors, so all possible factors need to be taken into account. In a cooperative model, a researcher can be working on one factor but testing it in combination with all other factors. BEST, and maybe Wattson, could usefully act as central collectors, distributors and coordinators.

Changes from the previous post

The proposed system is essentially the proposed system in the previous post, but for those who insist that some temperature adjustments are needed I have included a section for them. The triangulation process is not needed in this post because I am using a 1D example, but it or something like it would be needed in the real system to obtain weightings and ratings and to match the model to the measured temperatures.

The system has three parts:

• The set of all surface temperature measurements, unadjusted.

• Adjustments to temperature measurements (please understand the process before complaining!!)

• Expected temperature patterns (“the model”).

These three parts are dealt with below under the subheadings "Temperatures", "Adjustments" and "The Model".

Ideally, the new system will actually be implemented and made 100% public for open-source cooperative development. Everything I say below can be interpreted in that light, i.e. in a context where everything in the system can be replicated by others, tested, improved, etc.

Temperatures

As before, the actual temperature measurements reign supreme. They must be retained and used unchanged.

Associated with the surface temperature measurements, there needs to be accuracy data (error bars), e.g.:

• The accuracy of the thermometers themselves. Automated thermometers may be very accurate, but others have limited accuracy plus the risk of human error in reading them.

• Fitness for purpose: the risk that the temperature at the thermometer is not truly representative of what it purports to measure, i.e. of the temperature outside. Poorly sited stations in particular would have large error bars.

• Provision for weather station deterioration, e.g. failing paint.

• Instances where the measurements themselves are identified as unreliable. For example, some records show if a daily temperature is unreliable because it is estimated from a multiple-day measurement.

There could be blanket error ranges for the various types of equipment over time. So a super-accurate well-sited well-maintained automated thermometer would still be given error bars of say a few tenths of a degree C for “fitness for purpose”. An unautomated thermometer would have larger error bars, and an older thermometer would have even larger error bars.
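
One possible way of carrying this accuracy data alongside each unadjusted measurement is sketched below (Python; the field names and the simple sum of error components are my own illustrative choices, not part of any existing dataset):

```python
from dataclasses import dataclass

@dataclass
class TemperatureMeasurement:
    """One raw measurement, kept unchanged, with its own error components."""
    x: float                    # location, e.g. longitude (or km along a 1D line)
    y: float                    # latitude
    z: float                    # altitude in metres
    time: str                   # observation date/time
    value_c: float              # the measured temperature, never altered
    instrument_error_c: float   # thermometer accuracy and reading error
    fitness_error_c: float      # siting / representativeness ("fitness for purpose")
    condition_error_c: float    # deterioration, flagged-unreliable readings, etc.

    def total_error_c(self) -> float:
        # A simple sum of the components; combining them in quadrature would
        # be an equally defensible convention.
        return self.instrument_error_c + self.fitness_error_c + self.condition_error_c
```

Blanket values per equipment type over time could then be assigned to these fields, as described above.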

Sorry, but I didn’t put error bars into my illustration above. It is only an illustration aimed at getting the basic idea across, and I felt that error bars at this stage would not assist understanding.

Adjustments

As before, the system probably cannot completely remove the need for adjustments to measured temperatures. With any adjustments, however, whether they are to fix obviously incorrect individual temperatures or to fix systemic errors such as time-of-day bias, there is always the risk that the adjustments themselves can introduce errors. In the proposed system, adjustments are held separately from the measured temperatures, so that they can be seen, understood, tested, quantified, and if required rejected.

All adjustments are perceived errors in the measurements, so for every adjustment the error bars should be increased. The best general rule is: don’t do them!

Some types of adjustment are not permitted, and these include many of the adjustments currently being made by BEST and others: infilling, adjustments for station moves, removal of "outliers" from some expected pattern, adjustment by reference to surrounding stations, etc.

People are so used to station moves being adjusted for that it may seem odd that adjustments for station moves cannot be allowed. The reason is that every temperature measurement has a location, so when a station moves the only change to its known data is its (x,y,z) location; and if that doesn't change, then there is nothing to change. To single out the station for special treatment might itself introduce a bias. In my system, the "before" station, the "after" station, and all other stations are treated identically. In fact, all temperature measurements are treated identically. As I said in my previous post, a station with only a single temperature measurement in its lifetime is used on an equal footing with all other temperature measurements, and even moving temperature measurement devices could be used.

Here’s a really significant thought: If a particular station move is regarded as being significant even though it is a trivial location change, then it would be reasonable to estimate the amount of temperature change that the move could generate and then ensure that the “fitness for purpose” error bars are at least this large on all stations. The rationale is that there must be this much potential error at other locations.

Figure 9. If moving Station X a few metres changes its temperature, what is the temperature a few metres from Station Y?
The Model

The really important part of the system is the model. The set of temperatures is there for the model to be tested against.

The model contains all factors affecting temperature that anyone wants to test. The factors are combined to predict a pattern of temperature around the world’s surface over time. The pattern is then tested against the measured temperatures. Rather than try to describe it all in words, I hope that the simple 1D illustration above will suffice to get the ideas across.
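
As a final sketch of how the model part might be organised for open, cooperative development (again Python, and again only an assumption on my part): each factor is a separate function contributed by whoever is working on it, and "the model" is simply whatever combination of registered factors is currently being tested against the unchanged measurements.

```python
def combined_model(factor_functions):
    """Build a model from independently developed factor functions, each
    returning a temperature contribution in deg C for a location and time.
    A plain sum is used here; a real model might need to combine some
    factors non-linearly."""
    def predict(location, time):
        return sum(f(location, time) for f in factor_functions)
    return predict

# Hypothetical usage: swap factors in and out (altitude, latitude, coastal,
# UHE, land clearing, ...) and re-test against the measured temperatures.
# model = combined_model([altitude_factor, latitude_factor, coastal_factor, uhe_factor])
# predicted_c = model((x, y, z), time)
```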

###

Mike Jonas (MA Maths Oxford UK) retired some years ago after nearly 40 years in I.T.

Data and workings

All data and workings are available in a spreadsheet here: ModelTrial.xlsx (Excel spreadsheet).

Abbreviations

BEST – Berkeley Earth Surface Temperatures

C – Centigrade or Celsius [If you don’t like “Centigrade” please see Willis’s comment on the original post]

TWC – Earth's temperature/weather/climate system

UHE – Urban Heat Effect

1D – 1-dimensional

Comments
ntesdorf
January 28, 2016 1:59 pm

Temperatures in Sydney are always 1 deg C higher than in Newcastle, although the latter is 100 miles closer to the equator, because of Sydney's greater UHI. Temperature is where you measure it, not an average of anything.

Mike
Reply to  ntesdorf
January 29, 2016 2:38 am

It is impossible to have a physically meaningful "global average temperature" since temperature is not an additive quantity. (It is not an 'extensive' property, to use the technical term.)
If it is taken to be a proxy for thermal energy, this is legit. So one could (with some caveats) calculate an average SST to represent the thermal energy of the mixed layer.
You CANNOT add a land temperature to a sea temperature because they are very different media with different heat capacities. It is as obviously flawed as averaging deg F and deg C temperature records: it is physically meaningless.
No one would be dumb enough to do the latter, but mixing land and sea temps now seems to be a mainstay of climatology, a pseudo-science that does not care about being physically meaningful.

george e. smith
Reply to  Mike
January 29, 2016 1:15 pm

Well I don’t like “centigrade” no matter who said what about which.
A “Centigrade” scale is simply ANY scale which divides the difference between two calibration points into 100 equal (linear) increments. It could be Temperature, or it could be the railway line from New York to Washington DC.
The Celsius scale IS a centigrade scale, because it divides the difference between the two calibration (definition) points into 100 equal parts. Those two set points are the melting and boiling points of water, under standard conditions.
The size of the “degree” as derived for the Celsius scale, is then used to numerically define the difference between the triple point of water and absolute zero; which places zero kelvin at -273.16 Celsius degrees.
There’s a reason for having internationally agreed on units and terminology, so people can communicate with each other and all get the same information.
When we don’t all talk the same, we end up with expensive scientific packages crashing onto the surface of Mars.
G; yes I’m a nitpicker. I helped pay for that Mars snafu.

Henry Galt
January 28, 2016 2:14 pm

Nice. May as well lead them by the hand Mike. Saves them having to steal it.

ristvan
January 28, 2016 2:42 pm

MJ, as much as I can applaud your intent, I do not think it can succeed. I speak with some (admittedly rusty) chops in math stats and mathematical modeling (mostly biology/ ecology and economics/business).
There are several things you may wish to consider further.
1. Take UHE. It is not only an urban/non-urban issue. It is a microsite encroachment issue, which can also occur in rural locations with economic development, as AW’s surface stations project and results have vividly shown. No ‘0-1 indexed’ U/R model (what you use), or laughable NASA ‘nightlight’ categorization, can resolve this. You need actual station data over time, which simply does not exist for most stations. There are, admittedly, exceptions like Rutherglen in Australia.
2. Whether 3D (previous post) or 1D (this post), you have still constructed a logical and mathematical equivalent to classical regional expectations, whether the BEST version or NOAA's pairwise comparison (homogenization version 2). Nothing of the sort can ever work well, even ignoring problem 1. BEST station 166900 is a good illustrative example, which Mosher hates. Amundsen-Scott South Pole is arguably the best maintained, and certainly the most expensive, weather station on the planet. BEST rejects 26 months of record colds recorded there because of its statistical regional expectation field. Well, the next station that could contribute to a South Pole expectation is McMurdo, which is 1300 km away and 2700 meters lower, on the Antarctic coast. Frank Lansner's RUTI project shows the same thing for Europe, just distinguishing coastal influenced stations from those that are not.
3. 71% of the Earth’s surface is ocean. Until ARGO (and arguably even after) it was grossly under sampled. Method biases (canvas buckets, engine room inlet temps when engine rooms do not have a uniform depth), trade route sample biases,…. There is no way to fix–by any fitting model adjustment–historical data that simply does not exist.
Why not face the fact that until satellite coverage, the surface temperature data is simply not fit for climate purpose? Makes a much easier political sound bite. And objectively true.

Editor
Reply to  ristvan
January 28, 2016 5:40 pm

ristvan – you may well be right. But I thought that if we were trying to use the surface temperature history we should at least use it reasonably correctly and not pollute all uses of it with pre-conceived notions.
1. What you say is correct. Mine was a simple illustrative example. I think that as soon as anyone starts to think seriously about it, the task of modelling the historical surface temperature measurements becomes massive. But that’s why I say an open source cooperative approach is needed – there’s simply too much for any one organisation. If someone works out exactly what affects thermometers in Rutherglen, for example, then that can be built into the model at Rutherglen, and perhaps lessons learned from it can be used elsewhere too. I’m not expecting a perfect fit, but a better fit might be more useful than a bad fit, and there are provisions for assessing the model’s accuracy.
2. The South Pole seems to be a good example for what I am trying to say. OK, so some of the South Pole readings don't match expectations. Don't throw them away, leave them in. Then when the model is run, it will fail to match those readings. The regional temperature will still reflect the actual readings, the model will be recognised as being inaccurate at the South Pole, and any regional or global temperature index will have correspondingly higher error bars. One day, the reason for the low temperatures may be understood, and the model can then be fixed. Isn't that how any system ought to work?
3. I have to agree with you about the oceans. Well, almost. From before the days of Argo and satellites, we probably have virtually no useful ocean data (for purpose). We may one day start to understand much better how oceans behave and how they affect temperatures around the globe. Even if we throw away all past ocean temperature measurements on the grounds that they have no value for purpose, we might perhaps build a model of past ocean behaviour based on recent behaviour (AMO, PDO, IOD, ENSO, Gulf Stream, Humboldt Current, etc, etc) and apply that with suitably high error bars increasing back through time. We would also work out how ocean temperature behaviour affected land surface temperatures, and we could use that with past land temperatures to verify the model. In the meantime, it might be best to work on land-only.

george e. smith
Reply to  ristvan
January 29, 2016 1:19 pm

Maybe it is 73.1% that is ocean. And yes it’s just a typo, not a complaint. I don’t pick nits over what doesn’t really matter.
g

Steven Mosher
January 28, 2016 2:57 pm

Ahh
You don't understand that Willis actually credits me with his approach.
That is because we actually use a model to predict the temperature.
There isn't any infilling as you guys normally understand it.
The temperature at any given location in BE is defined as follows
T = C + W
C = f(latitude, elevation, season)
I will note that we have also looked at adding a coastal factor BUT it did not increase the R^2 much
as for UHE.. That regressor had no impact. sorry.
Finally I ABSOLUTELY LOVE Rud's example. Every model (c = f(altitude, lat, time)) will fail on corner cases. In his example… areas where there are seasonal temperature inversions. Given there are 40K stations, and given that the best "bug" finder has found over 400 bad stations, Rud has a long way to go before he wins a cookie.
Bottom line gents. USING elevation and latitude and season you can explain OVER 90% of the temperature. That leaves 10% for coastal effects, UHE, microsite effects, land cover effects, and weather.

u.k(us)
Reply to  Steven Mosher
January 28, 2016 3:30 pm

Never mind all the water sloshing around.

Marcus
Reply to  u.k(us)
January 28, 2016 4:24 pm

..He definitely has something sloshing around in his head !!

Editor
Reply to  Steven Mosher
January 28, 2016 5:44 pm

How about we get 100% of it into the open and have a look at it.

richardscourtney
Reply to  Mike Jonas
January 29, 2016 5:24 am

Mike Jonas:
I rarely agree with Steven Mosher but on this occasion he repeats a point that I put to you in the thread discussing your previous post.
I then said the following.
The major problem with the surface station data is not removed by the proposal in the above essay. Indeed, the essay says it proposes to adopt the problem when it says

The proposed new system uses the set of all temperature measurements and a model. It adjusts the model to fit the temperature measurements. [As before, “model” here refers to a temperature pattern. It does not mean “computer model” or “computer climate model”.].
Over time, the model can be refined and the calculations can be re-run to achieve (hopefully) better results.

That is what is done by ALL the existing teams who use station data to compute global temperature.
The “stations” are the sites where actual measurements are taken.
When the measurement sites are considered as being the measurement equipment, then the non-uniform distribution of these sites is an imperfection in the measurement equipment. Some measurement sites show warming trends and others cooling trends and, therefore, the non-uniform distribution of measurement sites may provide a preponderance of measurement sites in warming (or cooling) regions. Also, large areas of the Earth’s surface contain no measurement sites, and temperatures for these areas require interpolation.
Accordingly, the measurement procedure to obtain the global temperature for e.g. a year requires compensation for the imperfections in the measurement equipment. A model of the imperfections is needed to enable the compensation, and the teams who provide values of global temperature each use a different model for the imperfections (i.e. they make different selections of which points to use, they provide different weightings for e.g. effects over ocean and land, and so on). So, though each team provides a compensation to correct for the imperfection in the measurement equipment, each also uses a different and unique compensation model.
The essay proposes adoption of an additional unique compensation model.
Refining the model to obtain “better” results can only be directed by prejudice concerning what is “better” because there is no measurement standard for global temperature.

You replied to that saying to me

You miss the point when you say “That is what is done by all the existing teams …”. It isn’t. All the existing teams use their model to manipulate actual measurements before they are used. Mine won’t change any actual measurement [I did make provision for correcting systemic errors, but following a comment by Nick Stokes I now feel that it is best not to make any changes at all.]. One of the significant differences, for example, is that while UHE gets spread out by existing systems, mine would confine it to urban areas.
I agree that the historical measurements are heavily imperfect, and that any results from them need to be seen in that light. But I am seriously unimpressed with the mindset of the people doing the current systems, and I’m trying to get people to see things from a different angle – one in which measurement trumps model. I think the exercise would be interesting, and I think there would be greater benefit than just a better “global temperature” and better error bars (there’s another comment on that).
While everything is being distorted by ideologues it is easy to get very cynical about the surface station measurement system, but when sanity is re-established then I think there is merit in continuing with it.

That reply was – and is – nonsense. The effect is the same if
(a) a model is used “to manipulate actual measurements before they are used”
or
(b) a model is used “to manipulate” the indications of “actual measurements” after “they are used”.
It is not possible for your system to be such that “measurement trumps model” when you adopt a compensation model to correct for undetermined imperfections in the data (e.g. measurements not available from sites which do not exist).
And your prejudices about how to alter data to make it “better” are of no greater worth than the alterations to data made by those whom you call “ideologues”.
Richard

Editor
Reply to  Mike Jonas
January 29, 2016 2:30 pm

richardscourtney – I think you have missed the point, but without detailed knowledge of the inner workings of BEST I can’t be absolutely certain. My understanding is that BEST fits a model or models to the temperatures and then uses the models in places instead of the measured temperatures. So, for example, when it adjusts or in-fills a temperature it is using a model to change the set of temperatures that is used as the set of measured temperatures. NB. They might not think that they are using a model, for example when they fill in a missing temperature by reference to surrounding stations they might not think they are using a model but in fact they are. My system fits a model to the temperatures so that all measured temperatures are used unchanged and the model is only used where there are no temperature measurements. From everything that has been explained about BEST over the years, I am confident that my treatment of temperatures and of UHE in particular is better, but as always I’m happy to be proved wrong. So, for example, when Steven Mosher says “as for UHE.. That regressor had no impact. sorry.“, I would need to see exactly what the regression looked like.

richardscourtney
Reply to  Mike Jonas
January 30, 2016 12:12 am

Mike Jonas:
Thank you for responding to my post. I now write to ask you to address what I said in that post.
My post provided three criticisms, but the only part of your response which relates to any of those criticisms says

My system fits a model to the temperatures so that all measured temperatures are used unchanged and the model is only used where there are no temperature measurements.

Yes! I know! I said that!
But my first point rejected it saying

The effect is the same if
(a) a model is used “to manipulate actual measurements before they are used”
or
(b) a model is used “to manipulate” the indications of “actual measurements” after “they are used”.

Your response amounts to, “But I propose using fewer changes to the data”.
Any changes to the data are not acceptable.
And, importantly, my second point said

It is not possible for your system to be such that “measurement trumps model” when you adopt a compensation model to correct for undetermined imperfections in the data (e.g. measurements not available from sites which do not exist).

Most of the planet is covered with oceans that have few measurement sites. Therefore, your use of a “model” that “is only used where there are no temperature measurements” means most of the planet will be modeled.
The model “trumps” the measurements when most of the planet is modeled.
This dominance of the model over the measurements imposes on my first point. This is because the output of your system provides modeled data that affects the result indicated by the measurements (as I said, “a model is used “to manipulate” the indications of “actual measurements” after “they are used”.”)
And my third point remains true but unmentioned by you.
It was and is

And your prejudices about how to alter data to make it “better” are of no greater worth than the alterations to data made by those whom you call “ideologues”.

Richard

TimTheToolMan
Reply to  Steven Mosher
January 28, 2016 6:00 pm

Mosher writes

Bottom line gents.

But the grand total is that it's no good for looking for a global trend because the regional trends are all over the place.

AndyG55
Reply to  TimTheToolMan
January 29, 2016 1:35 am

“Bottom line gents”
Said like a true Dodgy Bros used car salesman.

george e. smith
Reply to  TimTheToolMan
January 29, 2016 1:42 pm

It can also help credibility if there is actually some physical phenomenon one can point to as a plausible cause for the effect of or on some model term.
Somebody concocted a “fitted” model to match the experimentally measured value of the fine structure constant; one of the fundamental constants of Physics.
Their model fitted fairly well to the experimental “data”.
To about eight (8) significant digits as I recall; which is somewhat better than Mike’s model here.
It fitted to within about 2/3rds of the standard deviation of the very best experimental measurement. Set the Physics world on its ears for a while.
Then some computer geek did a computer search of similar models that fitted to better than the standard deviation. Came up with a list of about eight such models, some better than the original.
Later on, a math whizz derived a mathematical theory of that class of models which described lattice points in an N-dimensional space, describing an N-dimensional sphere whose radius was the inverse of the fine structure constant. Any lattice point that lay in a shell between (N-dimensional) spheres of radii at +/- one standard deviation from 1/alpha was a valid member of this class of models.
He then published the complete list of such models which contained about 12 valid lattice sites which were family members.
A whole lot of crow pies were then consumed by one and all, once it was pointed out, that there was NO physical connection of ANY kind between the real physical universe, and this family of models. They were truly the results of numerical origami.
So you can fit almost anything to anything else to almost any degree; but without a physical universe connection, it is just crap.
G
The fine structure constant, alpha, has a history of guffoons diddling with it; including Sir Arthur Eddington for one.

Reply to  Steven Mosher
January 29, 2016 8:11 pm

What does “over 90% of the temperature” mean?

Gunga Din
January 28, 2016 2:57 pm

With respect, deriving a past or present GLOBAL temperature from past or present records and methods is impossible. A good guess? Sure. But something sure enough to change global energy and economies? No.
We have lots and lots of local numbers. We even have some global numbers for the atmosphere from satellites for a relatively short time period.
If this was just about science, I don’t think many would deny the limits of what we know. (I’m sure that pet theories would be defended until the owners realized it was time to “put them down”.)
But now political/philosophical beliefs have entered in. They fund and support their own agenda. In one sense, there’s nothing wrong with that. Where the “wrong” enters in is when “the end justifies the means” becomes their modus operandi.

Marcus
Reply to  Gunga Din
January 28, 2016 4:26 pm

…+ 1,000

indefatigablefrog
Reply to  Gunga Din
January 28, 2016 7:00 pm

Having invested so much in this project, nobody now seems willing to contemplate that the unknowns outweigh the knowns. And that applies to the known unknowns. Of course there are also Donald’s unknown unknowns to contemplate.
But, as we stand, more and more reckless, arrogant and absurdly over-confident buffoons are entering the fray, pretending to be scientific experts and confusing matters even further.
I suspect that we may need to wait for an entire generation of climate specialists to die out, before a new generation can dismiss modern climate science as heavily flawed and prone to over-confidence and bias.
All of which will involve an horrendous waste of time and money.
Well, never mind… 🙂

Hivemind
Reply to  indefatigablefrog
January 29, 2016 2:48 am

That is how Z-rays were finally put down. They were “discovered” by a French scientist shortly after X-rays. They became very popular in the scientific literature. There were a number of technical problems with them, but the worst was that many couldn’t reproduce the results. The beginning of the end came when an American scientist visited his laboratory and (in the darkened room) removed the prism while the experiment was still in progress and nobody noticed. The number of researchers publishing about it shrank year-by-year until only in France, then only by the original discoverer. After his death there were no more papers on the subject.
Unfortunately Wikipedia and Google have sadly let me down, so I can’t provide specifics, nor a link.

indefatigablefrog
Reply to  Hivemind
January 29, 2016 4:17 am

You may also find the rise and demise of the popular theory of female hysteria a rewarding topic of investigation.
Somewhere there is a graph of the number of papers published. It shows the hype-cycle over approximately one generation of scholars.

Gerry, England
Reply to  indefatigablefrog
January 29, 2016 6:47 am

Reality will start to bite in one of two ways – or even both at the same time. A noticeable cooling of the planet that they can't fiddle out of the surface temps, as the disconnect with the satellite records would get even more embarrassing than it already is, and people digging out their homes from under the snow or running their heating during the summer won't swallow the 'warmest year evah' claims. And cash: when faced with declining competitiveness in their manufacturing industries due to expensive electricity costs, countries such as those in the EU are shelving the green stuff and going for coal. Eventually people with the wisdom of King Canute will appear, knowing that you can't stop a changing climate from er… changing. My quote of the week, or maybe even month, from Christopher Booker in relation to UK government idiocy: 'there are a great many good people in the UK – it's just that none of them are running the country'.

WZmek
Reply to  indefatigablefrog
January 29, 2016 11:10 am

This is for Hivemind on the 29th at 2:48. Is it possible you meant N-rays, purportedly discovered by Blondlot?

indefatigablefrog
Reply to  WZmek
January 29, 2016 11:48 am

Yes, but a Z-ray is just an N-ray rotated anti-clockwise by 90 degrees… 🙂

Gunga Din
Reply to  indefatigablefrog
January 29, 2016 1:42 pm

indefatigablefrog
January 29, 2016 at 11:48 am
Yes, but a Z-ray is just an N-ray rotated anti-clockwise by 90 degrees… 🙂

So is a Z-ray an adjusted N-ray or the other way around? 😎

Andres Valencia
January 28, 2016 3:00 pm

Thanks, Mike Jonas.
“adjusting a model to match the temperatures” makes a lot of sense!
I think LT temperatures might be a good [starting] point as most of the problems associated with individual measurements from thermometers would not be present. This model, of course, would be for the lower troposphere, but that would be a lot to gain, I think.
Then again, isn’t this what the weather models are supposed to be doing, with some kind of success, unlike the unrelated climate models?

ShrNfr
Reply to  Andres Valencia
January 28, 2016 3:03 pm

The temperature of the lower troposphere would measure how much heat is being retained by the atmosphere (global somethingorother). It would be a vast improvement over these surface temperatures that mean little at best and are biased at worst.

Ian
January 28, 2016 3:03 pm

I like the treatment of UHE, especially the local rise in temperature and its exclusion from the adjacent land. That change alone should be implemented on ALL models. Universities and publications should have to state whether their models use this method to avoid errors, or whether they bludgeon the adjacent land values to the UHE ‘damaged’ data.
I also like the method of accepting ALL data points without modification of the temperatures measured due to relocation. By using the triangulation method to calculate the data between measured points, all the model has to do is re-triangulate once a station moves and use the new shape to work out the 'average' in the region. No need to adjust the mean, etc. A very similar system is used to calculate elevation contours from survey data. If a field is surveyed with a grid, the triangles form a neat mesh and give a good approximation of the surface; if later data points are added, the old data points are just appended to the new, a new triangulation grid is calculated and the resultant contours are drawn. A good link, using elevations, is at https://www.e-education.psu.edu/geog482spring2/c7_p6.html
For the UHE effect, you would have the city with one point raised in the middle and a circle of points at regular spacing around the city at the lower level, avoiding the ‘spread’ of the UHE affected point to the remainder of the model.

Reply to  Ian
January 31, 2016 3:36 pm

Ian: I think you are on the right track. I live on the Canadian prairies, which is described as "aces and spaces". Using homogenization over 1200 kms is ridiculous. In Manitoba, Winnipeg is an "ace". Owing to the flat topography, I doubt that Winnipeg's official temperature (including UHE) applies to more than 1% of the area within 300 km. Why, then, would anyone average temperatures in the region with Winnipeg? Brandon (pop. 50,000) is the only other major centre in the south. In Saskatchewan, Regina and Saskatoon are the only major centres. In Alberta, Calgary has the mountain influence (chinooks) as well as UHE, and Edmonton has a mix of mountain and prairie influences. Calgary and Winnipeg fall roughly within the 1200km zone. It is a joke to homogenize the prairie temperatures. Each "ace" is surrounded by hundreds of kms of "space". If anything, the temperatures of the major urban areas should be thrown out, and values for the other 99% of Western Canada would be a better reflection of reality.

Nick Stokes
January 28, 2016 3:15 pm

Mike,
“The main objective should be to use the temperature history to learn more about how Earth’s temperature / weather /climate system (“TWC”) operates.”
It seems to me that this is closer to the objective of reanalysis. You have a model in mind, and you assimilate data. Their model is much more elaborate, with GCM-like features. And they assimilate a much greater range of data. But I think study of their methods would help. And it might well even make sense to use their output.
“Fitness for purpose : the risk that the temperature at the thermometer is not truly representative of what it purports to measure, ie. of the temperature outside.”
The key thing is whether it represents the temperature of the region. That is what you actually use it for. That’s why it is necessary to adjust for moves etc. After a move there is a change in the air outside (at that new site, which the thermometer correctly measures) but not of the region.
“All adjustments are perceived errors in the measurements, so for every adjustment the error bars should be increased. The best general rule is: don’t do them!”
They increase error. Their function is to reduce bias. The trade-off is that error is then attenuated on averaging, but bias is not.
Nevertheless, as you quoted me saying, I find that adjustment makes little difference to the average. I think this means that there wasn’t much bias to be removed. But you don’t know that till you try.

Editor
Reply to  Nick Stokes
January 28, 2016 6:12 pm

Nick – It is a reanalysis (if I understand “reanalysis” correctly), provided you aren’t referring to the kind of reanalysis that purports to correct past data. IOW it is a kind of reanalysis that is indistinguishable from an analysis (if I understand “analysis” correctly). And of course it needs to be used with understanding of what it actually is.
But with regard to adjusting for moves, please refer to my really significant thought. It is really important not to adjust for station moves. If a station move affects temperature, then not only did the station not represent "the temperature of the region", but no-one can be sure that any other station represents the temperature of the region now, and they can't be sure that the moved station does either. See Figure 9. So adjusting for a station move is an exercise in futility which is every bit as likely to increase error as it is to reduce it, and will lead to a false feeling of security. Instead, just bump up the error bars everywhere. And anyway, my proposed system is completely uninterested in station continuity; it simply sees every temperature measurement as a temperature measurement in its own right, and every temperature measurement from every station is treated identically and without reference to any other temperature measurement. A station move is just an (x,y,z) location change.

Nick Stokes
Reply to  Mike Jonas
January 28, 2016 7:02 pm

Mike,
“If a station move affects temperature, then not only did the station not represent “the temperature of the region”, but no-one can be sure that any other station represents the temperature of the region now and they can’t be sure that the moved station does either.”
Well, it often does. So what then?
I should more carefully have said “represents the temperature anomaly of the region”. A classic station move was in Wellington in 1928, when they moved from Thorndon at sea level to Kelburn at about 160 m altitude. The temperature clearly drops about 1°C, as you’d expect. Should they adjust? Which represents the region?
In fact, the region is quite hilly. But the point is that either gives a good representation of the anomaly for the region. If it’s a warm day in one place, it’s warm in the other. What has changed, and needs adjustment, is the base for computing the anomaly.

Reply to  Mike Jonas
January 28, 2016 7:26 pm

+ many. I like your proposal. As an observer for the past 25 years of a station that has been moved twice in the past 75 years over a distance of less than 600 meters, we see differences, but that does not make our measurements invalid; just as you said, they are "different".

Reply to  Mike Jonas
January 28, 2016 7:28 pm

My comment is @ Mr. Jonas.

Editor
Reply to  Mike Jonas
January 28, 2016 10:49 pm

Nick, you ask : “A classic station move was in Wellington in 1928, when they moved from Thorndon at sea level to Kelburn at about 160 m altitude. The temperature clearly drops about 1°C, as you’d expect. Should they adjust?“.
Under my proposed system, the answer is very easy and very clear. No.
What they would do is to record the (x,y,z) location of every temperature measurement. That means that the ones before the move from Thorndon are at (x1,y1,z1) and after the move to Kelburn are at (x2,y2,z2) where x2 and y2 differ a bit from x1 and y1 resp, and z2=z1+160m. That’s it. Nothing else is needed. If temperatures do drop by the expected amount then the regional and global average temperatures will not change at all and the calculated model accuracy will not change, even though “Wellington” temperatures are now on average 1 deg C lower than they were, because at all times the model had Kelburn cooler than Thorndon by the formula for 160m of altitude. Now do you see how relevant my proposed system is?

Nick Stokes
Reply to  Mike Jonas
January 29, 2016 1:02 am

Mike,
In the NZ case, you can account for the change within your model, since it corresponds to metadata about altitude. Recently the Melbourne site moved a couple of km, to a site of greater tranquillity, and that seems to have had a cooling effect. But it’s still in central Melbourne, and I think from metadata you would find it very hard to quantify the change.
But I really don't see what you are trying to achieve by running a model surface through, you say, every unadjusted measurement. In fact, I bet you aren't. You'll be passing through a month average or some such. That is already a derived quantity, and there are choices about how to calculate it which some consider an adjustment. TOBS, for example, concerns the process whereby an average monthly max is calculated. There is no "raw" value, just different ways.
And I don’t see what you’ll learn. People have a healthy scepticism of curve fitting for fitting’s sake. Surface fitting has no greater intrinsic merit. And curve fitters rarely try to actually pass through every point. They recognise that if there really is underlying physics, it’s unlikely to be that wiggly.
Anyway, good luck. I’ll be interested to see what emerges.

Editor
Reply to  Mike Jonas
January 29, 2016 2:41 am

Nick – Ideally, my system really would work on individual temperature measurements, as per my previous post, not on monthly averages. But I accept that that would probably need too much computer power so some compromise could be needed.
I don’t “run a model through [] every unadjusted measurement”. I get a model as close as possible to the measured temperatures, then shift it the last bit to match. The amount it has to be shifted is the model error – quite simply it is the amount by which the model fails to match the measured temperatures. I wouldn’t expect a model ever to be perfect (the data is as you say wiggly), but getting the model closer could be an interesting learning process.
The whole process would I think have greater integrity than anything being done currently (that I know of).
The Melbourne example you give is interesting. I think it demonstrates that no model will ever be able to replicate all the intricacies of the temperatures. But that doesn't mean a model can never be useful. The hard part will be resisting the temptation to fiddle (a temptation to which all(?) the current operators have succumbed). So, for example, it would be tempting to tweak the model at Melbourne because you know that the thermometer moved into a bit of space within the city. But unless you have similar data for similar cities you shouldn't do this, because that is just as likely to introduce bias as it is to remove it. Again, the concept in Figure 9 applies. It may well be possible and useful to get very accurate data for a limited region, and to refine the model for that region, thus learning more about what data is important and how temperatures behave. Maybe such a model for the USA could use surfacestations.org data.

indefatigablefrog
January 28, 2016 3:34 pm

This is all very nice.
But, it seems to me that the biggest issue with BEST is the creation of a “Global Land + Ocean” temperature graph, which circulates widely. And is widely offered as clear confirmation that an “independent” analysis has confirmed the HADCRU analysis.
Whereas, the remarkable similarities between these two graphs seem like less of a coincidence once it is understood that the BEST Land + Ocean is mainly composed of the Hadley Sea Surface temperature analysis.
Forgive me if I have misunderstood this issue. But surely 2/3rds of the earth's surface is the vast watery component. And so, that popularly circulated BEST Land+Ocean is mostly representative of the HADCRU SST that it borrows.
And therefore mostly representative of the work of Hadley Centre and Climate Research and their brilliantly insightful determination of the precise thermal behaviours of a variety of types of bucket hauled from the salty brine on to the rolling deck of ships in the rolling seas of the Southern Ocean in the latter half of the 19th century and the first half of the 20th.
And on top of that brilliant exercise in precision instrumentation, collation and interpretation of data – we must also remember that using these miraculously precisely adjusted records – it was possible to then compute the temperature of vast regions of ocean not regularly visited by such bucket wielding sailors.
And – all of this precision scientific interpretation of “bucket/sailor/thermometer/ocean interaction modelling theory” seems to have been happily adopted wholesale by BEST and resold to a gullible audience – as confirmation of the mindblowing precision of the product that they have added into their graph.
I have asked on previous occasions if anybody can explain to me if this is what has occurred.
Because it looks like the worst kind of excuse for “science” to me.
And profoundly misleading to the target audience – who just look at that graph and are reassured by its similarities to HADCRU, GISS and NOAA. Not typically noticing the citation of Hadley SSTs.
Or the fact that the 95% confidence intervals are simply pulled from a magic hat!!!
I don’t know what to think. Maybe I’ve missed something big. For the sake of science and the people of the planet – I hope that I have.

Marcus
Reply to  indefatigablefrog
January 28, 2016 4:38 pm

…If a climate model cannot effectively model the past ( unadjusted ), it has no chance of modelling the future !!

indefatigablefrog
Reply to  Marcus
January 28, 2016 7:06 pm

And I was just talking about bucket modelling. As performed by Phil Jones et al.
In which the thermal characteristics and behaviour of a bucket on a rope were modelled in order to reconstruct the supposed SST at the point at which the bucket was plunged.
You can still find this critical work on the internet somewhere.
People, in general, have absolutely no idea how much “interpretation” goes into turning a written document from the past into reconstructed global mean temperatures. Every step is fraught with hazards, potential error and potential bias.

Scottish Sceptic
January 28, 2016 3:35 pm

I’m fed up of academics doing the job best suited to proper engineers.
Please stop fiddling with the data and lets get some real engineers who know about quality assurance and will focus on good reliable measurements instead of trying unsuccessfully to convince us they can post-process the data to make a silk purse out of a very bad pig’s ear.

u.k(us)
Reply to  Scottish Sceptic
January 28, 2016 3:50 pm

I’m sure that pigs are wondering how they got drug into yet another quagmire.

Marcus
Reply to  u.k(us)
January 28, 2016 4:39 pm

…Lipstick !!

Nick Stokes
Reply to  Scottish Sceptic
January 28, 2016 3:57 pm

“lets get some real engineers”
Who’s stopping you?

Marcus
Reply to  Nick Stokes
January 28, 2016 4:48 pm

“lets get some real engineers”
Who’s stopping you?
Payments … Only liberal doomsday seers get grants!! Very few have the balls of Anthony Watts (and Eric and Tim!!)

Reply to  Nick Stokes
January 28, 2016 10:53 pm

“Only liberal doomsday seers get grants”
I’ve been calculating global averages for years. I don’t get a grant (no funding at all). And surely a “real engineer” would be much more efficient 🙂

James Hein
January 28, 2016 3:44 pm

Adelaide Airport is about 4 miles from the city center.
At any given time on any given day the official record http://www.bom.gov.au/sa/observations/adelaide.shtml?ref=hdr shows anything up to an 8 degree difference between the two temperature stations with the airport always lower. The CBD station with no UHI effect adjustment is of course used for the “official” temperature.
I challenge the figure of 1 degree adjustment for UHE mentioned in the article.

Nick Stokes
Reply to  James Hein
January 28, 2016 4:08 pm

OK, here are average max temps, 1981-2010, for Adelaide (Kent Town) above and airport below. Airport is cooler, but is also nearer the sea. Nothing like 8°C.

      Jan   Feb   Mar   Apr   May   Jun   Jul   Aug   Sep   Oct   Nov   Dec   Annual
Ad    29.2  29.5  26.5  22.7  19.0  16.1  15.3  16.6  19.0  21.8  25.2  26.9  22.3
AA    28.1  28.4  25.6  22.2  18.7  15.8  14.9  16.0  18.3  21.1  24.2  25.9  21.6

Dr. S. Jeevananda Reddy
Reply to  Nick Stokes
January 28, 2016 5:40 pm

The so-called global warming is also not more than this difference! Micro-climate plays an important role in local-level adaptations.
Dr. S. Jeevananda Reddy

Editor
Reply to  James Hein
January 28, 2016 6:22 pm

1 deg C could well be on the low side, but it was only for illustration and I wanted to err on the side of caution. Another important thing to understand is that it isn’t an adjustment. It’s part of an attempt to understand how temperatures work, and it’s just one of many factors that could be taken into account.

Hivemind
Reply to  James Hein
January 29, 2016 2:52 am

Adelaide airport is the only piece of grass within kilometers. Unless the CBD site is in the parklands, it will be much more affected by the UHE.

macha
January 28, 2016 4:04 pm

How about getting the land-based thermometer siting sorted out before manipulating the existing garbage? Improve the basics before adjusting the raw results so that the future record is “BEST”. Temperature is primarily local weather. Global averaging is ridiculous, especially if it’s only from max/min values and three quarters of the Earth’s surface is water. But I guess everyone should have something to do….

Marcus
Reply to  macha
January 28, 2016 4:30 pm

…Way too logical !!

Warren Latham
January 28, 2016 4:18 pm

Dear Mr. Jonas,
Your simple, basic idea, to “Build a model of how mankind’s activities and Earth’s weather and climate systems drive surface temperatures”, is WAY OFF TRACK: such a notion is presumptuous in the extreme.
Mankind’s activities do not “drive” earth’s surface temperatures.
You are looking in the wrong place if you are seeking a “driver”. If you are truly seeking such a thing then I respectfully suggest you look upward.
Your article is “paralysis by analysis”; however, if in the process of your studies you do find any ingredient(s) more important than the bright thing in the sky, then you shall have my full attention.
Regards,
WL

Editor
Reply to  Warren Latham
January 28, 2016 5:15 pm

I think that man’s activities do affect temperatures as measured in spot locations, and therefore do need to be recognised when interpreting temperature measurements. For example, urban development (man’s activities) near or around a thermometer affects that thermometer’s reading. I don’t claim that man’s development contributes much to the global average temperature, but I do think that temperature measurements corrupted by man’s development have contributed too much to BEST’s and others’ “global temperature”. Hopefully, you can see how in my example a better global temperature index can be generated, and how you could then see how much of the index (probably very little) comes from man’s activities.

Marcus
January 28, 2016 4:19 pm

No model can ever predict the future of a chaotic system !

Dr. S. Jeevananda Reddy
January 28, 2016 4:28 pm

There are several other factors that contribute to the temperature of a location, namely [1] the heat-island effect in urban areas and the cold-island effect in rural areas [changes in water and agriculture], [2] advection (heat & cold waves), [3] elevation with changing vegetation, [4] deforestation and reforestation, [5] mining and road networks, etc.
Dr. S. Jeevananda Reddy

Marcus
Reply to  Dr. S. Jeevananda Reddy
January 28, 2016 4:34 pm

…Far too many variables to accurately use as a predictor!

Dr. S. Jeevananda Reddy
Reply to  Dr. S. Jeevananda Reddy
January 28, 2016 5:28 pm

(continued) precipitation [space and time], tides [cyclonic activity, phases of the Moon], and, after industrialization, the covering by filth on land and in oceans.
Dr. S. Jeevananda Reddy

Editor
Reply to  Dr. S. Jeevananda Reddy
January 28, 2016 6:33 pm

Exactly. And microclimates that you mentioned in an earlier comment. That’s why I say an open source cooperative approach is needed – there’s simply too much for any one organisation.
Is it actually worth doing? I don’t know, but surely it would be better than what is being done now.

ossqss
January 28, 2016 4:31 pm

It would be interesting to better understand the local/regional climate impacts of large scale wind and solar farms. I recall a study by MIT a few years back, but not much since.

JohnKnight
January 28, 2016 4:52 pm

Mike Jonas,
I find much to like about your approach, and perceive what to my mind is a helpful shift away from what your previous article’s title seemed to be implying (‘A New System for Determining Global Temperature’), which is to say away from the idea that such an estimated global temperature is a sort of “holy grail” of climate research/inquiry.
Treating such an estimate as the central issue plays into the hands of those who have made all manner of worrying/alarming predictions of “climate chaos” and the like should that number rise at all, it seems to me. Gaining a better understanding of climate systems is a far more rational “holy grail” at this point, I feel.

Marcus
January 28, 2016 4:53 pm

….Shhhhhhhh……..Nicky doesn’t like REALITY !!

ossqss
January 28, 2016 4:54 pm

Adjustments made the 18+ year pause disappear just last year at NOAA/NASA.
Nick Stokes, do you support that type of adjustment?

Nick Stokes
Reply to  ossqss
January 28, 2016 6:51 pm

“Adjustments made the 18+ year pause disappear just last year at NOAA/NASA.”
There has not in recent decades been an 18+ year pause in NOAA or NASA GISS, on any measure.
Here is Werner’s post from March 6 last year, well before ERSST V4. For GISS:
“The slope is not flat for any period that is worth mentioning.”
Same for HADCRUT 4, HADSST3 and HADCRUT 3. He doesn’t mention NOAA, but there was no pause there either.

ossqss
Reply to  ossqss
January 28, 2016 7:23 pm

My question was intended to understand whether you support the type of adjustments made via Karl et al., Nick.
Thank you for the clarification.

Chris Hanley
Reply to  ossqss
January 28, 2016 9:16 pm

“Werner’s post from March 6 last year, well before ERSST V4. For GISS:
“The slope is not flat for any period that is worth mentioning.”
Same for HADCRUT 4, HADSST3 and HADCRUT 3. He doesn’t mention NOAA, but there was no pause there either …”.
==========================
Werner didn’t mention HADCRUT 3 ’flat-wise’.
In fact the linear trend for HADCRUT 3, 1997–2014.4, was virtually flat, which, like some individual station data, was clearly wrong and had to be corrected.

Reply to  ossqss
January 28, 2016 10:46 pm

ossqss,
I support any endeavour to try to get the facts right. I think the ERSST V4 changes were a properly explained instance of science making progress. The chief complaint seems to be the period since 2000. As Karl et al explain, that is chiefly due to the allowance for ship-buoy bias. Buoys have, over 30 years, formed an increasing component of the data. Not only in coverage, but because they show less variability, they are assigned greater weight. So if there is a bias (difference between how ships and buoys read temperature), it will create a trend.
As Bob T explained in his latest post, an attempt to measure the bias was made in 2002. There was much less data available then, so they analysed it by comparing 1° grid-cell weekly averages of ships and buoys. This allows up to 110 km between locations, and does not discriminate by time of observation. So they established that the bias was small (0.14°C) but too uncertain to use, because of having to aggregate in cells.
Later, in 2011, with more data and probably improved spatial accuracy, they could pair individual ship/buoy readings, matched to within 50 km in space and within a few pre-dawn hours in time. You can do that sort of thing with more data. Now they got a rather similar estimate (0.12°C) but with much less uncertainty.
If you are trying to get it right (as scientists do) you just have to adjust for that. You can’t ignore it. And they did, and it had a small effect.
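For anyone who wants to see the shape of that pairing exercise, here is a rough sketch (this is not the ERSST/Karl et al. code; the data-frame layout, column names and thresholds are my own assumptions for illustration):

```python
import numpy as np
import pandas as pd

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between points given in degrees."""
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    a = (np.sin((lat2 - lat1) / 2) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * np.arcsin(np.sqrt(a))

def paired_ship_buoy_bias(ships, buoys, max_km=50.0, max_hours=3.0):
    """Mean ship-minus-buoy SST over individually matched pairs.

    Both arguments are pandas DataFrames assumed to have columns:
    time (datetime64), lat, lon, sst.  Purely illustrative.
    """
    diffs = []
    for _, s in ships.iterrows():
        # candidate buoy reports within the time window
        near = buoys[(buoys.time - s.time).abs() <= pd.Timedelta(hours=max_hours)]
        if near.empty:
            continue
        d = haversine_km(s.lat, s.lon, near.lat, near.lon)
        if d.min() <= max_km:
            diffs.append(s.sst - near.loc[d.idxmin()].sst)
    return float(np.mean(diffs)) if diffs else float("nan")
```

The point of matching individual reports rather than cell averages is simply that the pairs are closer in space and time, so the spread of the differences, and hence the uncertainty of the estimated bias, shrinks.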

ossqss
Reply to  ossqss
January 29, 2016 6:48 am

How does that process justify adjusting the buoy data upwards as part of the equation? Those buoys are specifically designed for the task vs. ship engine intakes. I would have thought the opposite would be done, to correct for contamination in sampling via ship engine intake channels. I understand the desire for homogeneous data, but there seems to be a logical issue with how it was done in this case.

Reply to  ossqss
January 29, 2016 7:31 am

It’s quite possible that both systems are accurately measuring the water temperature, at slightly different locations in the surface layer. It’s not helpful to argue about which is right. You establish a consistent process, and then find the anomaly, which is more meaningful, and won’t be affected by which way you do the adjustment.
As a practical matter they adjust the shorter record. You can’t adjust ship readings to buoys in the 50’s.
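The invariance of the anomalies to the direction of a constant adjustment is easy to check with toy numbers (nothing below is real SST data):

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1990, 2020)
true_sst = 15.0 + 0.01 * (years - 1990) + rng.normal(0, 0.1, years.size)
ships = true_sst + 0.12        # ships assumed to read warm by a constant bias
buoys = true_sst.copy()

# Option A: warm the buoys up to the ships; Option B: cool the ships to the buoys.
combined_a = np.mean([ships, buoys + 0.12], axis=0)
combined_b = np.mean([ships - 0.12, buoys], axis=0)

# Anomalies relative to each combined series' own 1991-2010 baseline.
base = (years >= 1991) & (years <= 2010)
anom_a = combined_a - combined_a[base].mean()
anom_b = combined_b - combined_b[base].mean()
print(np.allclose(anom_a, anom_b))   # True: the constant offset cancels
```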

ossqss
Reply to  ossqss
January 29, 2016 9:18 am

So how would you account for and adjust 2 ships running the exact same route with 2 different hull structures, 2 different surface compositions, and 2 different channel intake locations on the hull? How do you account for the contamination of the readings due to simple variations in ships? One would think that just a frictional bias from the hull surface itself could alter readings by 0.12 degrees, let alone the measurement locations in proximity to a passive internal ship heat source. As with most adjustments, it is plagued with an anthropogenic bias from the adjuster, who is a human in the end.

AJB
January 28, 2016 6:00 pm

You don’t need any of this crap. The following shows that any influence increased CO2 concentration may be having on temperature is completely and utterly swamped by natural variation. No ifs, no buts.
http://www.met.ie/weathermaps/monthly_climate/VALENTIA_OBSERVATORY_TEMP.png?0007
Note these plots use normal statistical measures against a 30-year record of daily min/max temperatures from a pristine site facing head on into the Atlantic that is free from UHI, TOBS arguments, etc. One station like that should be all you need; if you can’t discriminate a CO2 signal here, what chance have you got using the average of some spatially challenged hodgepodge?
Autumn this year was mild in the Emerald Isle; other months were the opposite. But the variance over a 30-year record is clearly so large that pointing to an imperceptible linear trend two orders of magnitude smaller is utterly risible – regardless of the fact that we’re dealing with a chaotic non-linear system in the first place. No need for trumped-up “confidence factors” and daft axis scaling; just simple, straightforward stats 101.
To think that huge numbers of the world’s supposed academics are engaged in groping for angels on a pin is more than a little worrying.
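For what it’s worth, that comparison is trivial to reproduce for any single station. A minimal sketch (the file name and column layout are hypothetical):

```python
import numpy as np
import pandas as pd

# Daily max temperatures for one station; columns assumed to be date, tmax.
df = pd.read_csv("valentia_daily_tmax.csv", parse_dates=["date"])
t_years = (df.date - df.date.min()).dt.days / 365.25

slope = np.polyfit(t_years, df.tmax, 1)[0]   # degrees C per year
print(f"linear trend: {slope * 30:+.2f} C per 30 years")
print(f"day-to-day standard deviation: {df.tmax.std():.2f} C")
```

If the first number is a couple of orders of magnitude smaller than the second, that is the point being made.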

Geoff Sherrington
January 28, 2016 7:15 pm

Mike Jonas,
There is more data available here, about that part of Australia that you use for illustration.
(The link is giving me problems; try a search for it.)
http://www.geoffstuff.com/pristine_feb_2015.xls
In short, rather than using raw temperatures, I have used the trends over the last 40 years for about 40 stations that should be free of UHI.
The trends show no correlation with latitude, but they do correlate with longitude.
In this sense these trend estimates need to be explained before you go too far with your latitude/altitude model, which is a neat concept that I rather like.
My suspicion is that you will be drowned out by noise, noise, noise.
Geoff.

Editor
Reply to  Geoff Sherrington
January 29, 2016 5:42 pm

Thanks, Geoff. I would like to take this forward to the next step. What data do you have for the ~45 stations? Your spreadsheet only has annual figures, but you refer to those figures being aggregated from finer data.

Dave Fair
January 28, 2016 7:32 pm

Man affects his local environment. Climate can only be reflected in the total atmosphere. Use the satellite/radiosonde information to set trends. Go home and have a brew.

Geoff Sherrington
January 28, 2016 7:38 pm

Mike Jonas,
Forgot to add that the data are as unadjusted as I could find, but there is no effect of significance shown for distance from a major ocean.
For a large majority of Australian sites, there was a change from LIG thermometry to small shelters with thermistors/thermocouples in the several years around 1993. There is abundant literature suggesting that this caused a step change that does not appear to have been corrected.
It might also be relevant that other work demonstrates strong statistical correlation of temperature with rainfall at a site. I do not know how you would deal with that. Rainfall could be a variable that changes T (at least on a first-pass look, without invoking secondary feedbacks). It could be unrelated to GHG and so sometimes ignored.
One demonstration of noise is that there is a weak correlation with the WMO station number, when none would intuitively be expected.
Geoff

Editor
Reply to  Geoff Sherrington
January 28, 2016 11:15 pm

Geoff – If I have understood your two comments correctly, you are working with station trends in the first place rather than station temperatures. I have no particular expectation of different trends at different latitudes, longitudes or distance from ocean. [That means my expectation is null, not zero.]
The system that I propose works with actual temperature measurements, not with anomalies and not with trends. The model aims to represent what the pattern of temperatures is, and if you look at my Figure 5 you can see a clear increase in measured temperature as you start going inland from Robe. By including a coastal/continental effect, the model can get a closer match to measured temperature. But as the model stands that won’t of itself affect any trends, because the coastal/continental allowance for each point does not change from year to year.
We have all got so used to everyone working with trends and/or anomalies instead of with temperatures, and mucking around with measured temperatures to make them fit some pre-conceived notions, that we have lost sight of what good practice looks like.

Editor
Reply to  Geoff Sherrington
January 29, 2016 3:02 am

The relationship between temperature and rain is very relevant. Cloud cover too, and wind direction and speed, vegetation/agriculture changes, bush fires (wildfires), etc etc. (Where I live, a small change in wind direction can deliver a large change in temperature). But we are very unlikely ever to have usable data for any of those factors, so they will have to remain in the error bars.
Is there also a correlation between station number and latitude?

Geoff Sherrington
Reply to  Mike Jonas
January 29, 2016 7:18 pm

Mike,
The daily data you ask about is in about 50 spread sheets of about 2 MB each, one for each station including some rejected for missing too much data. Too big to email, but I can send a USB stick.
If you would like these files and would like to talk about your main proposal, it might be best to use private emails rather than plug up Anthony’s space.
sherro1 at optusnet dot come dot au
As I said, it is an interesting proposal and I would like to help when I can. Geoff

Geoff Sherrington
January 28, 2016 11:59 pm

Mike,
The original temperature data are also on the spread sheet, if you prefer temperatures to trends.
I selected a trend format to normalise the figures to an extent.
If two sites near to each other and near to the coast show two different trends, all other factors being equal, then almost by definition they are each responding differently to distance from the ocean. Perhaps their microclimates dominate. If they do, it decreases the value of using temperature as the main metric.
Yes, I agree that trend is a less simple metric, but I elected to use it in an endeavour to combat some of the noise. It is worth a look. I have poked and prodded as much as the next guy, but it is noise that is an omnipresent frustration. Note that I do not work with anomalies at all.
Cheers Geoff.

Evan Jones
Editor
January 29, 2016 5:04 am

What is less good is their mindset, which needs changing. They are using their own notions of temperature trends and consistency to fill in missing temperature measurements, and to adjust temperature measurements, which are subsequently used as if they were real temperature measurements. This is a very dangerous approach, because of its circular nature: if they adjust measured temperatures to match their pre-conceived notions of how temperatures work, then their final results are very likely to match their pre-conceived notions.
You can’t infill before adjustment because you will be incorporating jump bias (for MMTS) and trend bias (for CRS and, arguably, also for MMTS) into your pairwise comparisons.
What you have to do first is get your adjustments right, and then infill. Unfortunately, infill is necessary because one must establish baselines for the inclusion of records that do not cover the timespan of the study (this is much more of an issue for BEST than for us on Anthony’s team).
But it is necessary to get the initial adjustment right. BEST adjusts for jumps but does not address the trend bias, and both are necessary. It also pairwises with inhomogeneous microsite classes. That is where BEST is going awry.
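A toy illustration of that ordering point (the numbers are invented, and this is not the BEST or USHCN algorithm): infilling from a neighbour that still carries an uncorrected equipment jump copies the jump straight into the infilled station.

```python
import numpy as np

years = np.arange(1980, 2000)
climate = 0.02 * (years - 1980)                # the shared "real" signal
neighbour = climate.copy()
neighbour[years >= 1990] -= 0.5                # uncorrected MMTS-style jump

target = climate.copy()
missing = (years >= 1992) & (years <= 1994)    # gap in the target record

infill_first = target.copy()
infill_first[missing] = neighbour[missing]     # infill before adjusting

adjusted = neighbour + 0.5 * (years >= 1990)   # correct the jump first
infill_after = target.copy()
infill_after[missing] = adjusted[missing]      # then infill

print(infill_first[missing] - climate[missing])  # [-0.5 -0.5 -0.5]: bias imported
print(infill_after[missing] - climate[missing])  # [ 0.   0.   0. ]
```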

Editor
Reply to  Evan Jones
January 29, 2016 5:27 pm

evanmjones – Your “one must establish baselines for the inclusion of records that do not cover the timespan of the study” is incorrect. My proposed system explicitly provides for such records to be used ‘as is’ on an equal basis with all others.

Ken Gray
January 29, 2016 6:33 am

Here is something that may be germane to your excellent project. Wegman, I believe, pointed out that there are advanced statistical techniques for handling incomplete data sets, i.e., sets of data with missing data points. Wouldn’t this be preferable to infilling? It could be another tool that is mathematically consistent and avoids the deprecated adjustment procedures. Thanks.

Editor
Reply to  Ken Gray
January 29, 2016 5:29 pm

Ken Gray – thanks, I’ll take a look, but I think it might be barking up the wrong tree. The point of my proposed system is that it simply doesn’t care if data is missing. It simply works with the data that it has, treating all data points equally.

M Seward
January 29, 2016 8:16 am

Hmmm… then why did NOAA ‘adjust’ their data and eliminate the “pause”, and then trumpet that fact as loudly and as widely as they could in the lead-up to Paris?
The basic truth is that the thermometer record is only useful as local data about what the temperature was at a location at a time. Using it for a global, indicative temperature is just risible, as there are so many variables that affect the eventual value. As MJ perhaps unwittingly illustrates, all these ‘global temperatures’ actually are is the construct of one model or another.
I’ll stick with the satellite or balloon data sets thanks. Still some adjustments but nothing like that required for the surface thermometer record.

Reply to  M Seward
January 29, 2016 12:31 pm

There can be no scientific reason for trumpeting the results of a modified process as substantially different from the prior measurements. NOAA’s bombastic pronouncements have to be political. I am reminded of what the CIA provided for Bush when they were looking for a reason to invade Iraq: exactly what the administration wanted! Those at the top of these organizations are political appointees who are there to control the output to suit their political masters. It is scientifically corrupt from the top down. Even Nick can’t bring himself to come out and say the ship intake data is valid. It is, in fact, laughable!

Editor
Reply to  M Seward
January 29, 2016 5:46 pm

I think ‘wittingly’ would be more accurate!

ferd berple
January 29, 2016 9:01 am

Isn’t trying to build records by station a fool’s enterprise? Since the stations change over time due to uncontrolled siting variables, there will be drift due to the station environment that has nothing to do with climate.
Thus, the only realistic approach is to reject the notion that you can build an uncontaminated history by stations, and instead use random sampling similar to what is done for ship data.

Editor
Reply to  ferd berple
January 29, 2016 5:33 pm

ferd berple – maybe you are right, and obviously one will never deal completely with all small-scale variables like siting, station/location contamination, etc. So there will always be error bars. But I think that what I am proposing would at least be better than anything anyone else is doing (that I know of). It would also allow these unknowns to be quantified.

ferd berple
January 29, 2016 9:09 am

Wouldn’t this be preferable to infilling?
===================
How do you infill the North Pole with surrounding data, where all the surrounding stations may well be warmer simply because they are further south? Don’t you need to correct for latitude, elevation, humidity, ice coverage, time of day, etc., etc.?
It is a fool’s errand. You are a slave to the grid or the station. The problem is that the sampling technique relies on data you do not have.
If you are using gridding, as you reduce the grid size to increase accuracy you increase the chance of empty grid cells, reducing accuracy.
If you rely on stations, then as conditions around the stations change, non-climatic variables will appear to be climate change, contaminating the result.
Either way you cannot win. So change the methodology.
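The grid-size trade-off is easy to demonstrate with a toy calculation (random station locations and equal-angle cells, purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n_stations = 5000
lats = rng.uniform(-90, 90, n_stations)
lons = rng.uniform(-180, 180, n_stations)

for cell_deg in (10, 5, 2, 1):
    n_cells = int(180 / cell_deg) * int(360 / cell_deg)
    # which (lat, lon) cell each station falls in
    occupied = set(zip(((lats + 90) // cell_deg).astype(int),
                       ((lons + 180) // cell_deg).astype(int)))
    print(f"{cell_deg:>2} deg cells: {1 - len(occupied) / n_cells:.0%} empty")
```

Real stations are far more clustered than a uniform scatter, so the real situation is worse than this.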

Reply to  ferd berple
January 29, 2016 1:55 pm

“how do you infill the North Pole with surrounding data, where all the surrounding stations may well be warmer simply because they are further south?”
Any infilling will be done with anomalies. There is no reason to expect warmer with latitude.

Reply to  Nick Stokes
January 29, 2016 6:01 pm

Apologies, that answer related to the existing system, not Mike’s proposal.

Editor
Reply to  ferd berple
January 29, 2016 5:39 pm

ferd berple – The North Pole, like the South Pole, the oceans and the major wildernesses, will always be an issue because they have so few thermometers. My proposed system does handle them, but necessarily the error bars increase over those areas, and the reduction in reliability of the overall results can be quantified too (see what I say about model rating in the article – actually I think it was in the previous article).

Donald L. Klipstein
January 29, 2016 9:34 am

Regarding: ” UHE: Both Max and Min temperatures are higher by 1 deg C in urban areas.”
Other articles at WUWT said that UHE increases minimum temperatures much more than maximum temperatures. Some of these articles proposed using max instead of mean temperatures because UHE raises mainly the minimum temps.

Editor
Reply to  Donald L. Klipstein
January 29, 2016 4:56 pm

I was keeping it simple because it was for illustration only, and I tried to stay on the conservative side. You are quite right, in a real system the Min figure might well be higher than the Max, and both might well be higher than my simple test. If my system gets implemented, we should be able to tell.

Kev-in-Uk
January 29, 2016 9:38 am

I’ve read a good many comments on this and the original post and I have not noticed anyone stating the bleeding obvious (as it seems to me, anyway!).
It strikes me that the whole premise of doing some kind of statistical analysis, modelling, or whatever data treatment you want to do is avoiding the necessary hard work that is actually required to make sense of local/regional spatial temperature information in any meaningful way. I think this is kinda what Mike Jonas is trying to promote? (albeit by creating yet another ‘model’)
To a certain degree, I suspect this is just pure laziness or avoidance of ‘human’ workload on the part of the current dataset keepers. As many have said, it is somewhat crazy to deduce a global temperature, let alone a global temperature anomaly, etc., from flawed or incomplete data, and especially to silly levels of degrees, whilst not quantifying the real/true uncertainties. However, in my opinion, it should be possible to undertake a humanised assessment of local/regional areas using raw temperature data. And yes, this may mean somebody actually making real-world ‘decisions’ based on experience and rational judgement! (instead of some computerised GIGO system)
In simple terms, a human (appropriately trained!) needs to be able to look at local datasets independently and correspondingly, in order to ‘see’ and evaluate or question why differences are observed. This takes time and skill, and requires careful analysis of all the factors (as are obvious). Whatever the likes of the statisticians want to say, if this is not being done from the outset, writing some treatment ‘model’ to do the essential ‘human’ part is nigh on impossible in my opinion – and as we see, justifying such methodology is even harder! If I suggested to commenters here that a crowd-sourced project to do this would be a good idea, I suspect there would be many takers/volunteers.
Basically, if someone gave you a map (of a given area, say 100x100 km, but whatever creates a reasonable workable number of stations), with a list of all the stations, and Excel spreadsheet(s) and plots of the data for, say, the last 50 years, I reckon human analysis could be undertaken by direct comparison of records with each other in relatively short order. Indeed, I’m sure it would be a piece of coding ‘pi$$’ to have them displayed as overlays in some kind of video/gif for a human to review quite quickly and pick out ‘odd’ ones.
The point being that if you see, say, Heathrow airport showing a steady rise, but a few other stations nearby showing no rise, you would highlight Heathrow as suspect. You would then cross-check the several stations with each other, and if they all show similar trends, etc., you could be reasonably confident that THEY were the REAL situation and you would DISCARD Heathrow completely, yes? Now, obviously, it’s not as simple as that, but the basic premise is that this is what is required in order to flush out and actually analyse good data from bad data. I personally do not see how it can be done any other way (the statisticians can argue about how ‘little’ it affects the net result and all that, but that’s not really a scientific approach).
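Purely as a sketch of the automated first pass that could feed that kind of human review (the column names are assumptions, and real screening would also weight by distance, record length and data quality):

```python
import numpy as np
import pandas as pd

def flag_suspect_stations(df: pd.DataFrame, threshold=0.2) -> pd.Series:
    """Flag stations whose linear trend departs from the local median.

    df is assumed to hold annual means with columns: station, year, tmean.
    threshold is in degrees C per decade.
    """
    trends = {}
    for name, g in df.groupby("station"):
        trends[name] = np.polyfit(g.year, g.tmean, 1)[0] * 10  # deg C / decade
    trends = pd.Series(trends)
    return trends[(trends - trends.median()).abs() > threshold]
```

Anything flagged would then go to a person with the local knowledge to decide whether it is a Heathrow-type problem or a genuine local signal.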
I realise that the current datasets have some form of QC, but I do not know (does anyone?) exactly what that entails and whether it involves proper human cross-checking. As far as I can tell, the likes of BEST tend to simply discard poor-QC data, but in fact, just because it is intermittent or whatever, it may actually be extremely useful to confirm other station data. For example, even if a thermometer is indicated to be reading 5 degrees high every day, the plots should still ‘match’ nearby stations; ergo, a simple adjustment could perhaps be made to enable such data to be used, and even if not used, it would still be helping to confirm nearby station data. Like I said, I don’t know how detailed such analysis may or may not be in all these QC procedures. However, I strongly suspect that the required level of human assessment is not undertaken; instead, simple coded ‘rules’ are formulated to reduce/remove data from further computation. (I’m sure the likes of Mosher can advise what is truly undertaken?)
If crowd-sourcing SETI, pulsar searches, etc., is considered desirable and helpful to science, why not station data checking? Of course, there is a question of what data ‘they’ might want to give out, especially ‘raw’ stuff, but then again, if ‘they’ have nothing to hide……?
My overall point being that with all the uncertainties being bandied about, surely we can actually identify (and remove?) a good many of them via detailed and meaningful human analysis instead of computerised modeling (based on numerous assumptions, etc).
Obviously, this comment is not particularly helpful but it strikes me that many seem to forget that such analysis is essential if we are to consider any dataset as ‘good’.
I strongly doubt NASA, GISS, CRU, etc. have thousands of people doing this kind of data checking (as clearly, to do it right, you would need that number of people!), so this really ought to be considered as a starting point.

Editor
Reply to  Kev-in-Uk
January 29, 2016 5:21 pm

Kev-in-UK – I would like to get this going on an open source basis, but first I need to talk to a few people. If I get anywhere, I’m sure you will see it on WUWT!

Gary Pearse
January 29, 2016 9:39 am

Mike Jonas, don’t be discouraged by naysayers, especially those in the industry of world temperatures. The strength of your approach lies in the very nature of the errors in the old record. There will have been just as many positive as negative errors in such a system. I was bowled over by your figure 9. Indeed, if you can get different temperatures with a move of 100 metres, what the heck is the temperature a hundred metres away from the ones you didn’t move!! This is the biggest contribution of your work. It makes the adjustments and homogenizations clearly non-scientific.
Okay, if you have one reading in a series that is 100°C, yeah, we could ignore that one as impossible, and there is some merit to Tobs adjustments. Outside of that, the adjustments are not useful. One thing I have been wanting to say for a long time is: don’t adjust because a station needs painting. Paint the gol darn thing and maintain it that way! If these temperatures are so critically important, put 3 or 4 of them within a hundred feet of each other at each site and maintain them. It costs us billions a year to keep the climate crocodile fed; take one percent of that, at least, and make a good reliable network. Run the new network for 5 years along with the old one and see what you get.
Adjusting records upward that are showing downtrends from Northern Canada, across Greenland, Iceland and Siberia is totally wrong. Icelandic meteorologists strongly argue against what the Anglo-Saxon world record keepers are doing. This vast area shows the same patterns! Even showing that the 30s and early 40s were warmer than now. Ditto South American records (as shown by the “not a lot of people know” blog). Here are some examples, all over Greenland:
http://www.worldclimatereport.com/index.php/2007/10/16/greenland-climate-now-vs-then-part-i-temperatures/
All over Iceland:
http://www.euanmearns.com/wp-content/uploads/2015/03/IMO_dT_7stations.png
And in New Zealand they added up to 1°C to the raw record and reversed the trend upwards – scroll down the gallery of chicanery (and note that mid-20th-century temps were warmer than now – like Iceland):
http://www.climatescience.org.nz/images/PDFs/global_warming_nz2.pdf

nobodysknowledge
Reply to  Gary Pearse
January 29, 2016 1:34 pm

People working with temperatures can see the need for adjustments. I think all adjustments should be affirmed by local meteorologists, as a quality control. They have the local knowledge that is necessary to make real judgements. Perhaps they could themselves make the necessary estimates based on scientific rules.

Editor
Reply to  Gary Pearse
January 29, 2016 5:23 pm

Gary Pearse – thanks for picking up on Figure 9. I thought it was pretty important, as I tried to indicate. I wonder how many others understood its full implications.

Kev-in-Uk
Reply to  Mike Jonas
January 30, 2016 2:10 am

Mike, I think most folk will understand that error bars are important and, technically, should be the same for all stations. There have been discussions where folk like myself have said that raw data is preferable (after removing obvious errors) in all stations, as unless there is a gradual or ‘drifting’ instrumental error, it is reasonable to assume that the underlying trend (IF THERE IS ONE!) will be the same for all stations.
However, over several decades, with various instrumental changes, etc., there will be differences in stations’ ‘errors’, and, of course, if we simply kept the raw data and kept widening the error bars we would end up with temperatures of +/- several degrees. Curiously enough, this is exactly how I (a geoscientist) view the temperature data in any event, e.g. a typical UK summer day is 22 degC +/- several degrees and a typical UK winter day 5 degC +/- several degrees! In my humble opinion, therefore, this is what most folk don’t like about the global temperature metric: it’s basically a construct of innumerable assumptions and an averaging over nigh on the fewest data points imaginable compared to the size of the spatial task, and it produces a meaningless ‘number’. Ignoring the CO2 trace gas issue, this is what most sceptics cannot tolerate: the thought of a supposed number being used to support a supposed theory, when in practice neither can be reasonably demonstrated (i.e. neither the real measurable effect of CO2 in the atmosphere, nor the actual ‘global’ temperature).
In my opinion, therefore, local or at best ‘regional’ data should be used to create independent ‘verified’ and corrected (as required) datasets, from which general trends may be observed. I would like to bet that if such local data were developed it would likely reflect, or correlate to, the years of satellite data available. Even more, I’d bet that such a procedure would also confirm the greatly increased UHI/UHE in the last few decades! I used Heathrow as an example before because we all know that Heathrow has had vastly increased air traffic over the last few decades, but I have yet to see an assessment of how much the temps should be reduced to compensate for this (and, of course, for the increased population/sprawl of London generally).
At least if we had good local and/or regional datasets, we might be able to take temperature trends more seriously?

William Astley
January 29, 2016 9:43 am

This is surreal. There has been a cottage industry of CAGW manipulation of the land-based temperature record to attempt to create a better hockey stick (reducing temperatures in the past and increasing recent temperatures), ignoring the fact that the land-based temperature record is contaminated by the urban effect, which explains the roughly 0.3 C difference from the satellite data (all the satellite temperature databases more or less agree with each other).
The pathetic manipulations of the GISS temperature ‘data’ base were done to create a cult-of-CAGW propaganda display, as was Mann’s elimination of the medieval warm period.
The cult of CAGW do not understand/comprehend that manipulating current and past temperature data to create propaganda displays does not change the logical implications of 18 years without warming. Observation A leads to conclusion B. Changing observation A will not change the physical implications of conclusion B. In addition, there are dozens and dozens of independent analysis results that support conclusion B.
The purposeless, idiotic climate wars are a distraction from what is going to happen next to the earth’s climate. The red in this picture is going to be replaced by blue as surely as night follows day. If and when there is significant cooling this entire conversation will change.
http://www.ospo.noaa.gov/data/sst/anomaly/2016/anomnight.1.28.2016.gif
Based on the 18 years without warming and dozens and dozens of other observations/analysis results there is no CAGW issue, and no measurable AGW issue. If that assertion/conclusion is correct, there are fundamental errors in the most basic analysis/assumptions of the effect of ‘greenhouse’ gas molecules on surface temperature. The ‘error’ is: an increase in any greenhouse gas will cause there to be an increase in convection cooling (an increase in convection cooling reduces the lapse rate). The reduction in the lapse rate due to the increase in convection cooling causes there to be less surface cooling, which offsets the majority of the greenhouse effect from the molecule in question. In addition, the warming due to the increase in CO2 – ignoring the offsetting increase in convection – was overestimated by a factor of 4, as the overlap of the spectral absorption of water vapor and CO2 was ignored.
The recent tropical warming and high-latitude warming were caused by an increase in solar wind bursts, which in turn was caused by massive, weird, persistent coronal holes on the surface of the sun. The massive coronal holes are disappearing, as are the sunspots. There is a major change occurring on the sun.
Comments:
1) The unexplained disappearing sunspots – disappearing sunspots are different from a reduction in the number of sunspots – have resulted in a weakening of the solar heliosphere, which is why the flux of galactic cosmic rays (GCR; see below for an explanation) striking the earth is the highest ever recorded for this point in a solar cycle.
2) The solar heliosphere is the name for a tenuous solar-created plasma bubble or solar ‘atmosphere’ that extends past the orbit of Pluto. The solar heliosphere is made up of hot ionized gas (plasma) that is ejected from the sun and pieces of magnetic flux that are also ejected from the sun. The pieces of magnetic flux in the solar heliosphere block and deflect high-speed cosmic protons. The high-speed cosmic protons are, for historic reasons (the discoverers thought they were observing a ‘ray’, and the name ray, as opposed to particle, stuck), called galactic cosmic ‘rays’ (GCR) or galactic cosmic ‘flux’ (GCF).
3) The high-speed cosmic protons strike the earth’s atmosphere and create cloud-forming ions. The earth’s magnetic field also blocks and deflects GCR, so the majority of the change in GCR striking the earth due to changes in the strength and extent of the solar heliosphere is in the latitude band 40 to 60 degrees.
4) There is a second solar mechanism that has a big effect on the earth’s climate. Solar wind bursts from the sun (the majority of which are created by ‘coronal holes’ on the surface of the sun; coronal holes are not accounted for in the sunspot count and can and do appear even when there are few or no sunspots) create a space-charge differential in the earth’s ionosphere, which in turn causes current flow from high-latitude regions to the equator. This current flow in turn causes a change in cloud amount – the mechanism by which solar wind bursts remove cloud-forming ions from the earth’s atmosphere is called electroscavenging – and in cloud properties, which causes warming at high latitudes and at the equator. The electroscavenging mechanism is one of the key physical reasons why there are specific patterns of regional warming and cooling on the earth.
5) The electroscavenging effect from solar wind bursts overrides/inhibits the normal cooling that would otherwise occur due to higher GCR.
Coronal holes on the surface of the sun are caused by an unknown process deep within the sun. Sudden, abrupt changes to the coronal holes indicate that there are significant unexplained changes deep within the sun.
The following paper discusses the fact that changes in planetary temperature closely correlate with changes in the number of solar wind bursts. The other paper explains the electroscavenging mechanism.
http://gacc.nifc.gov/sacc/predictive/SOLAR_WEATHER-CLIMATE_STUDIES/GEC-Solar%20Effects%20on%20Global%20Electric%20Circuit%20on%20clouds%20and%20climate%20Tinsley%202007.pdf
The role of the global electric circuit in solar and internal forcing of clouds and climate
http://sait.oat.ts.astro.it/MmSAI/76/PDF/969.pdf

Once again about global warming and solar activity
Solar activity, together with human activity, is considered a possible factor for the global warming observed in the last century. However, in the last decades solar activity has remained more or less constant while surface air temperature has continued to increase, which is interpreted as an evidence that in this period human activity is the main factor for global warming. We show that the index commonly used for quantifying long-term changes in solar activity, the sunspot number, accounts for only one part of solar activity and using this index leads to the underestimation of the role of solar activity in the global warming in the recent decades.
A more suitable index is the geomagnetic activity which reflects all solar activity (William: The geomagnetic field rings when there are solar wind bursts.), and it is highly correlated to global temperature variations in the whole period for which we have data.
…The geomagnetic activity reflects the impact of solar activity originating from both closed and open magnetic field regions, so it is a better indicator of solar activity than the sunspot number which is related to only closed magnetic field regions. It has been noted that in the last century the correlation between sunspot number and geomagnetic activity has been steadily decreasing from – 0.76 in the period 1868- 1890, to 0.35 in the period 1960-1982, while the lag has increased from 0 to 3 years (Vieira et al. 2001). (William: This is the reason sunspot number no longer correlates with planetary temperature. There is another mechanism electroscanvenging from solar wind bursts which is modulating planetary temperature.)
In Figure 6 the long-term variations in global temperature are compared to the long-term variations in geomagnetic activity as expressed by the ak-index (Nevanlinna and Kataja 2003). The correlation between the two quantities is 0.85 with p<0.01 for the whole period studied. It could therefore be concluded that both the decreasing correlation between sunspot number and geomagnetic activity, and the deviation of the global temperature long-term trend from solar activity as expressed by sunspot index are due to the increased number of high-speed streams of solar wind on the declining phase and in the minimum of sunspot cycle in the last decades.

matthewrmarler
January 29, 2016 11:07 am

They are using their own notions of temperature trends and consistency to fill in missing temperature measurements, and to adjust temperature measurements, which are subsequently used as if they were real temperature measurements.
You need to study up. BEST do not “use” anything “as if” they were real temperature measurements. Using a hierarchical model, which they subject to a lot of testing for accuracy, they use estimates from the data to impute values that are missing or dramatically different from their neighbors (based on explicit criteria written into the code and the descriptions).

Editor
Reply to  matthewrmarler
January 29, 2016 5:11 pm

impute: to calculate something when you do not have exact information, by comparing it to something similar (http://dictionary.cambridge.org/dictionary/english/impute). That is exactly what BEST is doing, and what I say they should not be doing.

matthewrmarler
January 29, 2016 11:09 am

• Build a model of how mankind’s activities and Earth’s weather and climate systems drive surface temperatures.
• Test the model against the temperature history.
• Use the results to improve the model.

Have you spent any time studying the development of GCMs and other models whose results have been widely presented in the peer-reviewed literature and popular press?

Reply to  matthewrmarler
January 29, 2016 12:59 pm

This seems like a reasonable approach. The models have been built. Tests against reality have shown that the models are incorrect. They are now using the results to change the inputs. What’s wrong with that picture?

Editor
Reply to  matthewrmarler
January 29, 2016 5:13 pm

Have I spent any time studying [..] GCMs?
Yes. http://wattsupwiththat.com/2015/11/08/inside-the-climate-computer-models/

January 29, 2016 2:07 pm

One wonders how the variations are so manipulated and confused just so they can demonstrate a “warming planet”. When one looks at “models” one has to wonder just how they pervert the temperature. Locally, over a one-hundred-mile distance my iPhone temps range from 17°C to 19°C, where one place has rain and the other does not. But as demonstrated in “models”, those temps would all be lifted to 19°C to show “warming”, and that method is a lie.

Editor
January 29, 2016 4:45 pm

My latest drum to beat is to ask: if we were to start over, would we be focusing on the air temperature 2 meters above the ground and the sea temperatures at the surface or in the top few meters?
Is that a physically correct metric to measure hypothesized CO2-induced atmospheric or climatic warming?
Is CO2-induced warming expected to appear primarily within 2 meters of the surface (up in the air or down in the sea)?
If surface temperatures (air and sea) are not the best indicator metric, proposing new ways to massage or torture the existing (questionable) data into a doubtfully useful average-of-averages-of-averages single number may be a colossal waste of time and effort.
Why did we spend all that money to loft atmospheric temperature sensing satellites if we are going to insist on using transmogrified surface thermometer records?
Did the whole subject get off on the wrong foot down the wrong path?

Editor
Reply to  Kip Hansen
January 29, 2016 5:18 pm

If our concern is the Earth’s heat content, we should undoubtedly be looking in the oceans not the atmosphere. So yes, the whole subject of global warming did get off on the wrong foot down the wrong path. But if we want to learn about the weather and climate that we live in, then it is appropriate to look also at data from the surface and from the atmosphere.

Editor
Reply to  Mike Jonas
February 3, 2016 8:55 am

Reply to Mike Jonas ==> “But if we want to learn about the weather and climate that we live in, then it is appropriate to look also at data from the surface and from the atmosphere.” Quite right: that’s where weather happens and weather is important to us. As a sailor, I know the sea surface also experiences its own weather, coupled to the atmosphere, and it is a great concern to me when I am voyaging.
How much these local surface phenomena reflect the state of the climate is one of the unanswered questions. Some believe that you can add them all together, get an average, and have climate! Me, not so much.

nobodysknowledge
Reply to  Kip Hansen
January 30, 2016 4:34 am

Some good questions. If we measure air temperatures at 2 m plus/minus 4 cm, I think we measure 0.01% of the air. Air has 3% of the heat uptake, so it is 0.003% of the warming we can measure. When we measure over land areas, this will be less than 0.001% of the heating. How can this give a good picture of global warming? Perhaps the energy budget is more interesting when it comes to global warming. Still, I think it is interesting to look at surface temperatures, but what conclusions can we draw?

nobodysknowledge
Reply to  nobodysknowledge
January 30, 2016 4:35 am

It was meant as an answer to Kip Hansen.

1sky1
January 29, 2016 5:18 pm

Unless first principles provide a well-validated theory or law, models should always be constructed ex post, not ex ante. Otherwise we risk deforming the data, instead of being informed by it. That is exactly the case with BEST’s presumptions of red-noise temporal variability and spatial homogeneity, which lead to the manufacture of long regional time series from mere snippets of often UHI-afflicted record. Kriging, often from afar, then furthers the fiction by filling in geographic gaps in data coverage. The only real scientific value of their much-touted exercise is not in their unrealistic modelling, but in the access provided to the raw data.

garymount
February 5, 2016 7:01 am

I haven’t had time to review and comment as I have an important software project I need to dedicate my mental faculties towards as it nears completion, to a point where it can get up and running. A project years in the making, off and on.
In a couple of weeks I hope to free up some time to deeply look into this latest post, so more later.
However, for now I leave you with some thoughts I had while reading this post when I finally had a chance, and reading one of Willis’s posts about global temperatures.
Note, though I don’t comment often, I do read almost every post and a large quantity of comments every day.
—————————————-
A day in the life of an average global temperature estimate.
Our day begins, as most days do not, on the Greenwich Meridian. There are few temperature stations located here, if any (I haven’t checked), but there are many within Time Zone 0, or Zulu Time. The local times throughout a time zone are all different from the time zone time, except along one meridian line located within the time zone. For example, a few miles to the east of my location the local time is one minute earlier and to the west, one minute later.
So our temperature reading for the calculated average temperature day begins at midnight GMT, some 8 hours (time zone standardized – but not true local time) before my temperature station in Port Coquitlam – some 30 miles East and inland from Vancouver – starts its day.
The sun is always spiralling around the globe, either heading away from the north pole (towards the south pole) or heading towards it. As the day progresses, and the sun spirals, more temperature stations get collected into the day’s tally, all with different local times because of time zone modifications, so that, for example, a day might start half an hour before the true start of a day, or half an hour later.
Some of the last stations have their final reading some 11 hours after the final reading of a GMT station. However, I don’t know what to make of the stations just a little further west, where their time zones are minus, or behind GMT, by 13 hours.
This is a very odd day.
What would the average temperature of the globe look like at an instant of time? Not for a day, but for a moment in time. What would the average day look like if you integrated these instants, say, every minute across a 24-hour period? And whose 24-hour period, as there are thousands of local days across the planet, though you can stick to time zones and restrict your choices to 24 + Newfoundland 🙂
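To make that thought experiment concrete, here is a sketch of one way to do it: compute an area-weighted instantaneous mean on a latitude/longitude grid every minute and then average the 1440 snapshots over one UTC day (the temperature field below is entirely synthetic):

```python
import numpy as np

def instantaneous_global_mean(field, lats_deg):
    """Area-weighted mean of a lat-lon snapshot (latitude along axis 0)."""
    w = np.cos(np.radians(lats_deg))[:, None]      # cell area shrinks toward the poles
    return float((field * w).sum() / (w.sum() * field.shape[1]))

lats = np.linspace(-89.5, 89.5, 180)
lons = np.linspace(-179.5, 179.5, 360)

minute_means = []
for minute in range(24 * 60):
    subsolar_lon = 180.0 - minute / 4.0            # the sun moves 15 degrees per hour
    diurnal = 5.0 * np.cos(np.radians(lons - subsolar_lon))[None, :]
    field = 288.0 + diurnal - 30.0 * np.abs(np.sin(np.radians(lats)))[:, None]
    minute_means.append(instantaneous_global_mean(field, lats))

print(f"average of 1440 instantaneous means: {np.mean(minute_means):.2f} K")
```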
ggm
