Calculating global temperature anomaly

By Nick Stokes

There is much criticism here of the estimates of global surface temperature anomaly provided by the majors – GISS, NOAA and HADCRUT. I try to answer these criticisms specifically, but I also point out that the source data is readily available, and it is not too difficult to do your own calculation. I do this monthly, and have done so for about eight years. My latest, for October, is here (it got warmer).

Last time CharlesTM was kind enough to suggest that I submit a post, I described how Australian data makes its way, visible at all stages, from the 30-minute readings (reported with about a 5-minute delay) to the collection point as a CLIMAT form, from where it goes unchanged into GHCN unadjusted (qcu). You can see the world’s CLIMAT forms here; countries vary in how they report the intermediate steps, but almost all the data comes from AWS and is reported soon after recording. So GHCN unadjusted, which is one of the data sources I use, can be verified. The other, ERSST v5, is not so easy, but a lot of its provenance is available.

My calculation is based on GHCN unadjusted. That isn’t because I think the adjustments are unjustified, but rather because I find adjustment makes little difference, and I think it is useful to show that.

I’ll describe the methods and results, but first I should address the much-argued question of why anomalies are used at all.

Anomalies

Anomalies are made by subtracting some expected value from the individual station readings, prior to any spatial averaging. That is an essential point of order. The calculation of a global average is inevitably an exercise in sampling, as is virtually any continuum study in science. You can only measure at a finite number of places. Reliable sampling is very much related to homogeneity. You don’t have to worry about sampling accuracy in coin tosses; they are homogeneous. But if you want to sample voting intentions in a group with men, women, country and city folk etc, you have inhomogeneity and have to be careful that the sample reflects the distribution.

Global temperature is very inhomogeneous – arctic, tropic, mountains etc. To average it you would have to make sure of getting the right proportions of each, and you don’t actually have much control of the sampling process. But fortunately, anomalies are much more homogeneous. If it is warmer than usual, it tends to be warm high and low.

I’ll illustrate with a crude calculation. Suppose we want the average land temperature for April 1988, and we do it just by simple averaging of GHCN V3 stations – no area weighting. The crudity doesn’t matter for the example; the difference between temperature and anomaly would be similar with better methods.

I’ll do this calculation with 1000 different samples, both for temperature and anomaly. 4759 GHCN stations reported that month. To get the subsamples, I draw 4759 random numbers between 0 and 1 and choose the stations for which the number is >0.5. For anomalies, I subtract from each station its average for April over 1951-1980.

The result for temperature is an average sample mean of 12.53°C and a standard deviation of those 1000 means of 0.13°C. These numbers vary slightly with the random choices.

But if I do the same with the anomalies, I get a mean of 0.33°C (a warm month), and a sd of 0.019°C. The sd for temperature was about seven times greater. I’ll illustrate this with a histogram, in which I have subtracted the means of both temperature and anomaly so they can be superimposed:

The big contributor to the uncertainty of the average temperature is the sampling error of the climatologies (normals), ie how often we chose a surplus of normally hot or cold places. It is large because these can vary by tens of degrees. But we know that, and don’t need it reinforced. The uncertainty in the anomaly relates directly to what we want to know – was it a hotter or cooler month than usual, and by how much?

You get this big reduction in uncertainty for any reasonable method of anomaly calculation. It matters little what base period you use, or even whether you use one at all. But there is a further issue of possible bias when stations report over different periods (see below).
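For readers who want to try this, here is a minimal sketch in R of the subsampling experiment. It uses synthetic data in the same spirit – the station normals, the 0.33°C anomaly and the noise level are made-up stand-ins, not the GHCN values:

set.seed(0)
nst     <- 4759
normals <- runif(nst, -10, 30)            # each station's 1951-1980 April mean
temps   <- normals + 0.33 + rnorm(nst)    # a warm April: common anomaly plus weather noise
anoms   <- temps - normals

# mean of a random half of the stations, repeated nsamp times
subsample_means <- function(x, nsamp = 1000) {
  sapply(seq_len(nsamp), function(i) mean(x[runif(length(x)) > 0.5]))
}

sd(subsample_means(temps))   # dominated by which hot or cold stations were drawn
sd(subsample_means(anoms))   # several times smaller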

Averaging

Once the anomalies are calculated, they have to be spatially averaged. This is a classic problem of numerical integration, usually solved by forming some approximating function and integrating that. Grid methods form a function that is constant on each cell, equal to the average of the stations in the cell. The integral is the sum of products of each cell area by that value. But then there is the problem of cells without data. Hadcrut, for example, just leaves them out, which sounds like a conservative thing to do. But it isn’t good. It has the effect of assigning to each empty cell the global average of cells with data, and sometimes that is clearly wrong, as when such a cell is surrounded by cells in a different range. This was the basis of the improvement by Cowtan and Way, in which they used estimates derived from kriging. In fact, any method that produces an estimate consistent with nearby values has to be better than using a global average.
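As a concrete illustration of the grid approach, and of what omitting empty cells does, here is a sketch in R. It is not the HADCRUT or TempLS code, just a plain 5° lat-lon grid with cos(latitude) area weights:

# lat, lon in degrees; anom is one anomaly value per station
grid_average <- function(lat, lon, anom, cell = 5) {
  nlat <- 180 / cell; nlon <- 360 / cell
  ilat <- pmin(floor((lat + 90) / cell) + 1, nlat)
  ilon <- pmin(floor((lon %% 360) / cell) + 1, nlon)
  cell_sum <- matrix(0, nlat, nlon); cell_n <- matrix(0, nlat, nlon)
  for (k in seq_along(anom)) {
    cell_sum[ilat[k], ilon[k]] <- cell_sum[ilat[k], ilon[k]] + anom[k]
    cell_n[ilat[k], ilon[k]]   <- cell_n[ilat[k], ilon[k]] + 1
  }
  cell_mean   <- cell_sum / cell_n                        # NaN where a cell is empty
  lat_centres <- -90 + cell * (seq_len(nlat) - 0.5)
  w  <- matrix(cos(lat_centres * pi / 180), nlat, nlon)   # cell area ~ cos(latitude)
  ok <- cell_n > 0
  # empty cells are dropped from the weighted sum, so they implicitly
  # get the average of the cells that do have data
  sum(w[ok] * cell_mean[ok]) / sum(w[ok])
}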

There are other and better ways. In finite elements, a standard way would be to create a mesh with nodes at the stations and use shape functions (probably piecewise linear). That is my preferred method. Clive Best, who has written articles at WUWT, is another enthusiast. Another method I use is a kind of Fourier analysis, fitting spherical harmonics. These, and my own variant of infilled grid, all give results in close agreement with each other; simple gridding is not as close, although overall it often tracks NOAA and HADCRUT quite closely.
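To give the flavour of the spherical harmonics method, here is a toy R version that fits real harmonics only up to degree 2 (TempLS uses much higher orders, and this is not its code). The useful property is that every harmonic except the constant integrates to zero over the sphere, so the global mean of the fitted surface is just the fitted constant:

sh_global_mean <- function(lat, lon, anom) {
  phi <- lat * pi / 180; lam <- lon * pi / 180
  # unit-sphere Cartesian coordinates of each station
  x <- cos(phi) * cos(lam); y <- cos(phi) * sin(lam); z <- sin(phi)
  # constant plus degree-1 and degree-2 real harmonics in Cartesian form
  fit <- lm(anom ~ x + y + z + I(x * y) + I(y * z) + I(z * x) +
              I(x^2 - y^2) + I(3 * z^2 - 1))
  unname(coef(fit)[1])    # intercept = area mean of the fitted surface
}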

Unbiased anomaly formation

I described the benefits of using anomalies in terms of reduction of sampling error, which just about any method will capture. But care is needed to avoid biasing the trend. Just using the average over the period of each station’s history is not good enough, as I showed here. I used the station reporting history of each GHCN station, but imagined that each returned the same, regularly rising (1°C/century) temperature. Identical for each station, so just averaging the absolute temperatures would be exactly right. But if you use anomalies, you get a lower trend, about 0.52°C/century. It is this kind of bias that causes the majors to use a fixed base period, like 1951-1980 (GISS). That does fix the problem, but it creates another: stations without enough data in that period. There are ways around that, but it is pesky, and HADCRUT just excludes such stations, which is a loss.

I showed the proper remedy with that example. If you calculate the (incorrect) global average, subtract it from each station’s series (adding it back later), and try again, you get a result with a smaller error. That is because the basic cause of error is that the global trend is bleeding into the anomalies, and if you remove it, that effect is reduced. If you iterate, then within six or so steps the anomaly is back close to the exactly correct value. That is a roundabout way of solving that artificial problem, but it works for the real one too.
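A small R sketch of that synthetic experiment (randomly generated reporting windows, not the real GHCN station histories) shows both the bias and the iterative fix:

set.seed(1)
years  <- 1900:2018
signal <- 0.01 * (years - 1900)              # 1 C/century, identical everywhere
nst    <- 500; ny <- length(years)

# each synthetic station reports the same signal over its own window of years
temps <- matrix(NA_real_, nst, ny)
for (i in seq_len(nst)) {
  a <- sample(1:(ny - 30), 1); b <- sample((a + 29):ny, 1)
  temps[i, a:b] <- signal[a:b]
}

trend <- function(g) 100 * coef(lm(g ~ years))[2]    # C per century

offsets <- rowMeans(temps, na.rm = TRUE)             # each station's own mean
G <- colMeans(temps - offsets, na.rm = TRUE)
trend(G)                                             # biased well below 1

for (it in 1:8) {                                    # subtract G, redo offsets, repeat
  offsets <- rowMeans(sweep(temps, 2, G), na.rm = TRUE)
  G <- colMeans(temps - offsets, na.rm = TRUE)
}
trend(G)                                             # back close to 1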

It is equivalent to least squares fitting, which was discussed eight years ago by Tamino, and followed up by Romanm. They proposed it just for single cells, but it seemed to me the way to go with the whole average, as I described here. It can be seen as fitting a statistical model
T(S,m,y) = G(y) + L(S,m) + ε(S,m,y)
where T is the temperature and S, m, y indicate dependence on station, month and year; G is the global anomaly, L the station offsets, and ε the random remainder, corresponding to the residuals. Later I allowed G to vary monthly as well. This scheme was later used by BEST.
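In R, the same model (with the month dependence dropped for brevity, and on made-up numbers rather than GHCN data) is just a regression on station and year factors. G and L are only determined up to a shared constant, so it is changes in G that matter:

set.seed(2)
nst <- 20; ny <- 30
G_true <- cumsum(rnorm(ny, 0.02, 0.1))          # a wandering "global" signal
L_true <- runif(nst, -15, 25)                   # big station-to-station offsets
d <- expand.grid(s = factor(1:nst), y = factor(1:ny))
d$temp <- G_true[as.integer(d$y)] + L_true[as.integer(d$s)] + rnorm(nrow(d), 0, 0.3)

fit   <- lm(temp ~ s + y, data = d)             # L(s) and G(y) as factor effects
G_fit <- c(0, coef(fit)[grep("^y", names(coef(fit)))])   # year 1 is the reference
max(abs(G_fit - (G_true - G_true[1])))          # small, comparable to the noise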

TempLS

So those are the ingredients of the program TempLS (details summarized here), which I have run almost every night since then, whenever GHCN Monthly comes out with an update. I typically post on about the 10th of the month for the previous month’s results (October 2018 is here; it was warm). But I keep a running report here, starting about the 3rd, when the ERSST results come in. When GISS comes out, usually about the 17th, I post a comparison. I make a map using a spherical harmonics fit, with the same levels and colors as GISS. Here is the map for October:

The comparison with GISS for September is here. I also keep a more detailed updated Google Earth-style map of monthly anomalies here.

Clive Best is now doing a regular similar analysis, using CRUTEM3 and HADSST3 instead of my GHCN and ERSST V5. We get very similar results. The following plot shows TempLS along with other measures over the last four years, set to a common anomaly base of 1981-2010. You can see that the satellite measures tend to be outliers (UAH below, RSS above, but less so). The surface measures, including TempLS, are pretty close. You can check other measures and time intervals here.
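Setting series to a common anomaly base is simple: subtract each series’ own 1981-2010 mean. A sketch in R, assuming a data frame of monthly values with a year column and one column per index (a hypothetical layout, not how any of the indices are actually distributed):

rebase <- function(d, from = 1981, to = 2010) {
  in_base <- d$year >= from & d$year <= to
  for (nm in setdiff(names(d), "year")) {
    d[[nm]] <- d[[nm]] - mean(d[[nm]][in_base], na.rm = TRUE)
  }
  d
}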

The R code for TempLS is set out and described in detail in three posts ending here. There is an overview here. You can get links to past monthly reports from the index here; the lilac button TempLS Monthly will bring it up. The next button shows the GISS comparisons that follow.



317 Comments
November 15, 2018 10:19 am

I think none of you guys are getting this. Looks to me you are all still stumbling around in the darkness.
The true average that can be summarized from a weather station’s data is given by the derivative of the least squares equation taken over ‘x’ years. [mind you: you have to have a certain procedure to fill in for any missing data]

x can be varied: say, take the last 11 years, 22 years, 33 years and 44 years. You must get at least 4 data points to try and look at a new function. THEN, you can plot the speed of warming/cooling in K/yr against time and get a reasonably true PRESENTATION of the station’s result (over the past 44 years).

You will be surprised what you get just looking at the averages of say ten stations around you…..Like the results I got. I am sure you will find a certain pattern with high correlation.

Then take a look at a global sample, balanced to zero latitude. You don’t even need a very large sample, as evident from my results.

Have you tried this Nick?

Click on my name to read my report on this.

Michael Jankowski
November 15, 2018 10:49 am

“…The sd for temperature was about seven times greater…”

Not this silliness again.

Editor
November 15, 2018 11:00 am

ADJUSTMENTS: It is not strictly true that GHCN data is UNADJUSTED, not even the “unadjusted” “QCU” data set. GHCN is VERY clear about this in their README file.

Quoting:

“The unadjusted data are often referred to as the “raw” data. It is important to note that the term “unadjusted” means that the developers of GHCNM have not made any adjustments to these received and/or collected data, but it is entirely possible that the source of these data (generally National Meteorological Services) may have made adjustments to these data prior to their inclusion within the GHCNM. Often it is difficult or impossible to know for sure, if these original sources have made adjustments, so users who desire truly “raw” data would need to directly contact the data source.”

Just because Australia [reportedly] doesn’t adjust data before it sends it to GHCN doesn’t mean the same is true for every data supplier.

MrZ
Reply to  Kip Hansen
November 15, 2018 11:45 am

Kip,

Very true, and there is also the question of which set of stations they report, and when.
We talked briefly a few weeks back. I now have an application almost ready visualizing this. I plan to share it with you and Nick for validation and comments before making it public.
If you are interested of course…

Reply to  Kip Hansen
November 15, 2018 12:57 pm

Kip,
“doesn’t mean it is true for every data supplier”
It doesn’t mean it isn’t, either. GHCN is just providing a standard caveat, saying that they take the data as supplied.

But national met offices don’t usually have a big focus on climate science anyway, and no real reason to try to adjust data. Nor much opportunity nowadays. Most countries send in their CLIMAT forms within a week of the end of month. And once it is entered into GHCN qcu, it doesn’t change thereafter.

MrZ
Reply to  Nick Stokes
November 15, 2018 2:58 pm

Nick,
Any comments on above? Hope the language is OK.

“I think it is OK to assume that if there are UHI effects in the GHCN record, they are introduced gradually and are smaller before 1950 than in 1980. A majority of the older station set depends on infills for a common baseline period. If we infill with data from a period that is artificially warmer, we will amplify this error. Since the baseline average is pushed higher, our historical anomaly will show cooler than it actually was. UHI is one example; average latitude is another that pushes in the same direction.
Wouldn’t setting a station’s “anomaly base” at the first overlapping years with neighbours in the same or nearby grid cells be less biased?”

Reply to  MrZ
November 15, 2018 4:06 pm

Matz,
I have now commented there. Generally I calculate anomalies for each site and then integrate, which infills implicitly, but not over different time periods. There may be some need to do what you describe for the method that uses fixed base period, although it shouldn’t be a major issue. There aren’t that many stations that need special treatment here. HADCRUT just discards them, which I think is a waste, but not a disaster.

Harry Passfield
November 15, 2018 11:27 am

The reason anomalies are used instead of real temps is this:

[image]

Don’t panic!!!

MrZ
Reply to  Harry Passfield
November 15, 2018 12:06 pm

😉
I started there two years ago. Now I do appreciate anomalies are good when merging data from multiple and constantly changing sources. I still agree though that anomalies can be deceptive when used for presentation. To me an increase from +35 to +40 is more alarming than from -35 to -30 even though they appear exactly the same when presented as anomalies.

Bruce of Newcastle
November 15, 2018 12:49 pm

The underlying data is corrupted for global temperature anomaly calculation. Roy Spencer has shown just how large UHIE is: it takes only about 60 people per square kilometre to increase local temperature by 1 C (see the last graph at the link).

Couple that with the overweighting of airport measurements in the terrestrial datasets and the story is even worse.

It is easy to see that all temperature anomaly datasets evince a warm bias, because there is another proxy: the Rutgers snow cover extent anomaly dataset. This has been flat since mid 1994.

Unlike temperature anomaly snow cover extent is easy to measure from orbit. Effectively the snow cover extent anomaly is showing that the area of the NH at or below 0 C has not changed on average for over two decades.

Now I don’t know how rising temperatures can somehow fail to melt snow. The freezing point of water is not subject to adjustments or computer models, it just is.

In other words, Nick, the data you are using is bollocks. Sorry.

Reply to  Bruce of Newcastle
November 15, 2018 1:05 pm

Bingo again. Try looking at more parameters. Snow. Ice? Why only the nh? Etc.

Bruce of Newcastle
Reply to  HenryP
November 15, 2018 8:59 pm

Um, because there isn’t much land in the SH with snow on it? Snow doesn’t seem to accumulate on the sea for some weird reason.

My field includes multivariate non-linear regression analysis and statistical model building. I use the HadCET dataset as the cleanest and longest one we have available. The indirect solar modulation of cloud cover and the oceanic ~60 year cycle, plus a 2XCO2 of 0.7 C/doubling (ie. Lindzen’s TCS number) fits the data like a glove. And those two significant variables have turned over in the last couple decades. Hence the Pause.

November 15, 2018 2:01 pm

One paragraph in Nick’s quite competently written article gave me pause to reflect:

“Global temperature is very inhomogeneous – arctic, tropic, mountains etc.”

Sometimes ignorance gives one a fresh perspective, so forgive me if I read the above as, “In other words, by definition, ‘global temperature’ does not exist.”

“To average it you would have to make sure of getting the right proportions of each,…”

It, thus, is a fantasy metric, glued together in fragments of a fantasized whole that lacks wholeness to start with.

“… and you don’t actually have much control of the sampling process.”

Sampling of a non-whole, then, makes no sense.

“But fortunately, anomalies are much more homogeneous. If it is warmer than usual, it tends to be warm high and low.”

An aesthetically pleasing extension of a magnificent misconception is still a magnificent misconception at the foundation of its reasoning.

Again, I speak from a fair amount of ignorance, compared to you statistics ninjas.

Gator
Reply to  Robert Kernodle
November 15, 2018 2:30 pm

I would say your “ignorance” is your strength, given that you have not been taught to believe in mathematical fantasies.

As was once said, an average global temperature is as useful as an average global phone number. Just because you can does not mean that you should, or that anyone else should pay you any attention if you do.

Geoff Sherrington
November 15, 2018 2:21 pm

Nick,
Once I chanced upon a scene in the toilets of a footy club, when a sailor was having his way with a tart who, mostly dressed, leaned against a wall eating a meat pie with tomato sauce dribbling down her arm. Mentally I contrasted this with the Mills and Boon romanticism, with the perfumes, the champagne, satin sheets, music. …
The first was a cheap and nasty.
The second was classical.

The anomaly method is a cheap and nasty. It sacrifices real, valuable data on the altar of some form of mathematical parsimony. Happiness becomes a tight p-statistic.
With the anomaly method, you seek to compensate for heterogeneity from effects on temperature like altitude, latitude, longitude. If you want to adjust for altitude, you can use a lapse rate. Then, you find that different lapse rates happen over time because of e.g. atmospheric moisture changes. Next you note that inversions affect the lapse rate. So, you might deduce that height corrections carry large errors.
You want to correct for latitude, hotter at equator than poles, etc. More sunlight each day. Well, you can use a cosine type transform to adjust for global irradiation variability, but when you compare theory to observation you find that the process has errors to consider.
Nick, you are proposing that where these adjustments carry large errors, you can do a reasonable job of error reduction by subtracting a 30 year average from the raw data.
Can you please explain to us all how this subtraction comprehends the troublesome physics of lapse rates and latitude corrections and somehow does away with much of the associated error?
Can you not see that this anomaly method might reduce a mathematical standard deviation but it cannot reduce the prior measurement uncertainty?
Good luck with the cheap and nasty. I’m off to listen to Beethoven.
Geoff.

November 15, 2018 2:46 pm

Epilogue
As I mentioned in the article, I post results three times a month. The first, early, is the reanalysis (NCEP/NCAR) average for the previous month. The second is TempLS, and the third is a comparison when GISS comes out. GISS is now out, and rose by 0.25°C, rather more than TempLS’ 0.16°C. My post with the comparative maps is here.

Scott W Bennett
Reply to  Nick Stokes
November 15, 2018 8:43 pm

Charming!

The monthly GISTEMP surface temperature analysis update has been posted. The global mean temperature anomaly for October 2018 was 0.99°C above the 1951-1980 October average.- Rocket Scientists

Oh come on!

Now they’re using Charm Values. I thought 97 was one hell of a marketing ploy, but 99! Come on, this is stretching credulity too far.

If you’re keeping up with the curve, charm values have been dropped when it comes to attracting customers, because people – including me – round up to the nearest whole number. So the use of 0.99 is brilliant in this day and age, because everyone is reading 1°C and I know I’m reading propaganda! ;-(

Reply to  Scott W Bennett
November 15, 2018 9:19 pm

“is stretching credulity too far”
I wonder what you would be saying if they had rounded up?
“Charm” is what you use when you want to make a number seem smaller. It’s still commonly used for that.

Scott W Bennett
Reply to  Nick Stokes
November 15, 2018 9:30 pm

I’m guessing it probably was one point something C but .99 sells better! 🙂

Geoff Sherrington
November 15, 2018 4:04 pm

Nick,
While you have the numbers up, would it be informative to recalculate your Monte Carlo example using (say) half a dozen different reference periods of 30 years each with plots showing the mean and sd in histogram form as you have done for 1951-80? The reason is for a quick look at sensitivity to choice of reference period. Geoff

Reply to  Geoff Sherrington
November 15, 2018 8:26 pm

Geoff,
I ran the same year (1988) using anomaly base 1921-50 instead of 1951-80. The results were:
Temp mean 11.57 sd 0.15
Anom mean 0.34 sd 0.029
Notable changes are:
1. A drop of about 1°C in mean temp. Another demo that one should never average temperatures, only anomalies. In this case, the removal of stations that do not have enough data (20 Aprils) in 1921-50 leaves a much colder set behind. The anomaly is not affected.
2. A larger sd for anomalies, but still much less than for temperature. Two causes
a. Fewer eligible stations – down to 3762
b. 1921-50 is a worse predictor of 1988. As I said, you get most of the effect for just about any reasonable expected value. But not all.

Clyde Spencer
Reply to  Nick Stokes
November 16, 2018 10:33 am

Nick,
You said, “2. A larger sd for anomalies, but still much less than for temperature.” That really is of little importance. The Empirical Rule in statistics provides an estimate of the SD by observing that the range of any normal distribution is approximately +/-3 SD. The range of an anomaly is much less than the original temperatures. Therefore, the SD will be smaller! But, that doesn’t mean that the uncertainties in the original temperatures have been removed, or that forecasts of future temperatures can be assigned the SD of the anomalies from which they are obtained.

Geoff Sherrington
Reply to  Clyde Spencer
November 16, 2018 6:57 pm

Clyde,
Additionally, in my book one should not compare the sd from these two sets of data because they are not comparable sets. It’s not correct to say, as Nick has, that one sd is seven times the other.
Geoff

Geoff Sherrington
Reply to  Nick Stokes
November 16, 2018 6:54 pm

Thank you for that, Nick.
(I am in and out of hospital, more to come, so it is hard to keep up with the pace of commenting, my apologies.)
I shall ponder the data you provided.
In the interim, I have been making some progress on the philosophy of using the anomaly method, using an approach of asking what would happen if all stations reported a flat temperature response over the long term. Just an intermediate way of holding some variables constant to see what shakes out, Geoff.

Paramenter
November 16, 2018 8:18 am

Jeff, if the temp in your kitchen is 73 degrees, if the temp in your bedroom is 71 degrees, and the temp in your living room is 72 degrees, and the temp in your bathroom is 70 degrees, what is the temperature inside your house?

I suspect the counterargument would be as follows: averaging ‘locally’ makes perfect sense. But what about averaging the temperature of all rooms in the town, including offices, basements and lofts? What does it tell us? Going even further: what about averaging all rooms in all houses around the planet? Bungalows in Texas, Victorian houses in London, wooden shacks in Siberia and the Amazon, igloos in Greenland?

November 16, 2018 8:59 am


@all

You poor sods. You have studied the art of seeing how the temperature goes up and you have neglected to look at how climate cycles actually work.
The Dust Bowl drought of the 1930s was one of the worst environmental disasters of the Twentieth Century anywhere in the world. Three million people left their farms on the Great Plains during the drought and half a million migrated to other states, almost all to the West. But the Dust Bowl drought was not meteorologically extreme by the standards of the Nineteenth and Twentieth Centuries. Indeed the 1856-65 drought may have involved a more severe drop in precipitation, never mind the drought from 1845 onward which wiped out much of the bison population [contrary to popular thought, it was not really man who was the main reason for the decimation of the bison population].

One more [dust bowl] drought coming up soon. It is already happening, is it not? Click on my name to read my final report. Have you figured out when?

November 16, 2018 9:13 am

Nick Stokes,

Given your particular expertise, I would really appreciate another article from you at WUWT that lays out the basics of actually calculating a global temperature anomaly — in recipe format.

In other words, write an article, say, with the title, Global Temperature Anomaly Calculation 101 for Dummies.

Go step by step, for example,

First, raw station data,
Second,
Third,
Fourth,
.
.
.
etc.

… you know, like a recipe.

This would be very helpful for us non-statistics ninjas, even for those of us who have objections on other, less technical grounds. I personally would really like to know more about how this stat is arrived at. Okay, it’s not a straight average. But then what? What are the different phases of the calculation?

If you and the mods could arrange for this, then thanks.

Reply to  Robert Kernodle
November 16, 2018 12:10 pm

I’d be glad to do that, if there is interest. I did write an article here trying to set out the basics of the integration part. But it doesn’t say that much about the actual anomaly calculation. For complete detail, there is the series of three starting here where I go through the code that I use. But that is probably too tangled with the programming technicalities.

Scott W Bennett
Reply to  Nick Stokes
November 16, 2018 7:20 pm

==> Nick

I’d be very keen to see that in simple English too! Most particularly so that you lay out the very basic assumptions that are implicit in your process.

I appreciate the work you’ve done so far but it does not get to the heart of the matters that concern me.

I’m reminded of the S. Harris cartoon with the two scientists discussing a complicated formula on a chalkboard: “I think you should be more explicit here in step two.”

Step two says “Then a miracle occurs”, and this is the magic of integration!

http://entersection.com/wp-content/uploads/2010/03/sidney_harris-the_new_yorker-2007-i_think_you_should_be_more_explicit_here_in_step_two.png

Your example of a sandpit assumes that the atmosphere is a homogeneous substance, i.e. that temperature in the atmospheric field is isotropic and that the samples therefore will have coherent and correlated length scales.*

*Though your triangular method does introduce at least some artificial form of correlation decay between data points. However, it is not just a simple matter of continuous v discrete or analogue/digital sampling either, because you have only point data from which you are attempting to represent a temperature field that is three-dimensional, dynamic, strongly anisotropic and certainly never homogeneous.

I’d be willing to bet that a simple random selection of station points using the current method v using mapped correlation decay would produce a different result. I also mean that your method will most probably produce the same result no matter what the sample variation is. But with a correlation map the result will be different depending on the choice of station.

The material volume of that sandpit would be more accurately characterised by including its rocks, puddles and wet sand-castles but I’d even include its beach balls and plastic spades! 😉

ScottM
November 16, 2018 11:23 am

I think the justification for anomalies can be understood through an example. If you look at the January temperature for a place and compare it to the July temperature of the same year, the difference tells you nothing about long-term trends. If you look at the January anomaly and compare it to the July anomaly, it does tell you something about the trend (not a lot, since it is just two samples out of thousands, but *something*).

Paramenter
November 16, 2018 2:28 pm

Hey Clyde,

That is not my understanding. It is my understanding that the monthly ‘average’ is obtained from averaging all the daily highs, averaging all the daily lows, and then calculating the mid-range value from those two numbers!

Does it make a difference as far as results are concerned? I’ve compared those two approaches for March 2015 Boulder, CO. Calculating daily (Tmin+Tmax)/2 first and then averaging it yields the same result (-0.9 C) as averaging all the daily highs, averaging all the daily lows, and then calculating the mid-range value from those two numbers.
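That agreement is expected when the month is complete. A quick synthetic check in R (made-up numbers, no missing days) shows the two orders of averaging give exactly the same result, by linearity:

set.seed(3)
tmax <- rnorm(31, 10, 5); tmin <- tmax - runif(31, 5, 15)
mean((tmax + tmin) / 2)            # daily mid-range first, then monthly mean
(mean(tmax) + mean(tmin)) / 2      # monthly mean max/min first, then mid-range
# the two differ only if some days are missing from one series but not the other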

NOAA specifications are a bit vague about how this is calculated:

“Monthly maximum/minimum/average temperatures are the average of all available daily max/min/averages. To be considered valid, there must be fewer than 4 consecutive daily values missing, and no more than 5 total values missing”.

ftp://ftp.ncdc.noaa.gov/pub/data/uscrn/products/monthly01/README.txt
point F

The situation gets worse! Those 12 monthly mid-range anomalies are then used to calculate an annual mean for the station.

Yeah, and I reckon that those yearly averages based on monthly ones are actually the basic units for constructing anomalies.

1sky1
November 16, 2018 3:57 pm

The big contributor to the uncertainty of the average temperature is the sampling error of the climatologies (normals), ie how often we chose a surplus of normally hot or cold places. It is large because these can vary by tens of degrees.

There’s subtle confusion evident here between mere sampling error and intrinsic variability of a highly heterogeneous global population. Inasmuch as the station “normals” are determined by exhaustive census of the temperature over the “base period” years, sampling error per se is no issue at all. Statistical concepts are not enough; knowledge of the physics of climate is required in order to characterize that intrinsic variability. As long as the globe is properly tessellated in a climatically representative manner and there are sufficient stations with long-enough records to fill all the cells, calculation of “anomalies” offers no scientific advantage. In fact, by suppressing actual temperature levels, physically meaningful information is lost.

The rub, of course, is that there’s insufficient intact, century-long, UHI-uncorrupted station records to determine the average global temperature via direct surface integration. In its desire to appear robust and authoritative, “climate science” dodges that issue by resorting to “anomalies” (often computed from inconsistent “normals”) to make the claim that thousands of stations went into the determination of GAST.

But the variability of those anomalies themselves is, at best, plainly heterogeneous, with oceanic tropical stations showing much lower standard deviations than at polar continental locations–and little coherence in between. At worst, the more elaborate computational schemes propagate data corruption at rapidly urbanizing locations much more widely, all under the cloak of “sophisticated” statistical arguments.

November 16, 2018 9:12 pm

Nick Stokes,

No one will ever vote yes to become poorer. Until some evidence of a problem with our climate becomes apparent, it will not happen.

Good luck with whatever you are trying to do now.

Moon

Philip Schaeffer
Reply to  Michael Moon
November 18, 2018 8:36 am

Michael Moon said:

“Good luck with whatever you are trying to do now.”

Math and science from what I can tell.

Pamela Gray
November 20, 2018 7:20 am

And how are these little wriggles up and down during a naturally warming period as seen by proxy over the past 800,000 years, alarming? We are in a warm period peak. Little jags up and down are exactly what one would expect. So why have such concerns over getting it exactly right?