The Meaning and Utility of Averages as it Applies to Climate

Guest essay by Clyde Spencer 2017

Introduction

I recently had a guest editorial published here on the topic of data error and precision. If you missed it, I suggest reading it before continuing with this article; it will make what follows clearer, and I won’t need to go back over the fundamentals. This article is prompted, in part, by some of the comments on the original, and discusses how the reported average global temperatures should be interpreted.

Averages

Averages can serve several purposes. A common one is to increase accuracy and precision of the determination of some fixed property, such as a physical dimension. This is accomplished by confining all the random error to the process of measurement. Under appropriate circumstances, such as determining the diameter of a ball bearing with a micrometer, multiple readings can provide a more precise average diameter. This is because the random errors in reading the micrometer will cancel out and the precision is provided by the Standard Error of the Mean, which is inversely related to the square root of the number of measurements.
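
As a minimal sketch of this idea (the ball-bearing diameter, the micrometer’s reading error, and the number of readings below are all hypothetical values chosen for illustration), the uncertainty of the mean shrinks with the square root of the number of readings only because the quantity being measured is fixed:

```python
import random
import statistics

random.seed(1)

# Hypothetical example: a ball bearing with a true diameter of 10.000 mm,
# measured with a micrometer whose random reading error has an SD of 0.005 mm.
TRUE_DIAMETER_MM = 10.000
READING_SD_MM = 0.005

readings = [random.gauss(TRUE_DIAMETER_MM, READING_SD_MM) for _ in range(25)]

mean = statistics.mean(readings)
sd = statistics.stdev(readings)          # spread of the individual readings
sem = sd / len(readings) ** 0.5          # Standard Error of the Mean

print(f"mean = {mean:.4f} mm, SD = {sd:.4f} mm, SEM = {sem:.4f} mm")
# The SEM is roughly SD / sqrt(25), i.e. about five times smaller than the
# single-reading uncertainty -- which is only legitimate because the diameter
# is a fixed quantity and only the measurement error is random.
```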

Another common purpose is to characterize a variable property by making multiple representative measurements and describing the frequency distribution of the measurements. This can be done graphically, or summarized with statistical parameters such as the mean, standard deviation (SD) and skewness/kurtosis (if appropriate). However, since the measured property is varying, it becomes problematic to separate measurement error from the property variability. Thus, we learn more about how the property varies than we do about the central value of the distribution. Yet, climatologists focus on the arithmetic means, and the anomalies calculated from them. Averages can obscure information, both unintentionally and intentionally.

With the above in mind, we need to examine whether taking numerous measurements of the temperatures of land, sea, and air can provide us with a precise value for the ‘temperature’ of Earth.

Earth’s ‘Temperature’

By convention, climate is usually defined as the average of meteorological parameters over a period of 30 years. How can we use the available temperature data, intended for weather monitoring and forecasting, to characterize climate? The approach currently used is to calculate the arithmetic mean for an arbitrary base period, and subtract that base-period mean from modern temperatures (either individual temperatures or averages) to determine what is called an anomaly. However, just what does it mean to collect all the temperature data and calculate the mean?

If Earth were in thermodynamic equilibrium, it would have one temperature, which would be relatively easy to measure. Earth does not have one temperature; it has an infinitude of temperatures, varying continuously laterally, vertically, and with time. The apparent record low temperature is -135.8° F and the highest recorded temperature is 159.3° F, for a maximum range of 295.1° F, giving an estimated standard deviation of about 74° F using the Empirical Rule. Changes over periods of less than a year are both random and seasonal; longer time series contain periodic changes. The question is whether sampling a few thousand locations over a period of years can provide an average with defensible value in demonstrating a small rate of change.
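
A worked version of that estimate, using the record extremes quoted above (the Empirical Rule’s divisor of 4 assumes a roughly normal distribution with essentially all values within about two standard deviations of the mean):

```python
# Empirical Rule estimate of the standard deviation from the observed range:
# for a roughly normal distribution, nearly all values fall within about
# two standard deviations of the mean, so SD is approximately range / 4.
record_low_f = -135.8
record_high_f = 159.3

temperature_range = record_high_f - record_low_f    # 295.1 F
estimated_sd = temperature_range / 4                # about 74 F

print(f"range = {temperature_range:.1f} F, estimated SD = {estimated_sd:.1f} F")
```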

One of the problems is that water temperatures tend to be stratified. Water surface-temperatures tend to be warmest, with temperatures declining with depth. Often, there is an abrupt change in temperature called a thermocline; alternatively, upwelling can bring cold water to the surface, particularly along coasts. Therefore, the location and depth of sampling is critical in determining so-called Sea Surface Temperatures (SST). Something else to consider is that because water has a specific heat that is 2 to 5 times higher than common solids, and more than 4 times that of air, it warms more slowly than land! It isn’t appropriate to average SSTs with air temperatures over land. It is a classic case of comparing apples and oranges! If one wants to detect trends in changing temperatures, they may be more obvious over land than in the oceans, although water-temperature changes will tend to suppress random fluctuations. It is probably best to plot SSTs with a scale 4-times that of land air-temperatures, and graphically display both at the same time for comparison.

Land air-temperatures have a similar problem in that there are often temperature inversions. What that means is that it is colder near the surface than it is higher up. This is the opposite of what the lapse rate predicts, namely that temperatures decline with elevation in the troposphere. But, that provides us with another problem. Temperatures are recorded over an elevation range from below sea level (Death Valley) to over 10,000 feet in elevation. Unlike the Universal Gas Law that defines the properties of a gas at a standard temperature and pressure, all the weather temperature-measurements are averaged together to define an arithmetic mean global-temperature without concern for standard pressures. This is important because the Universal Gas Law predicts that the temperature of a parcel of air will decrease with decreasing pressure, and this gives rise to the lapse rate.

Historical records (pre-20th Century) are particularly problematic because temperatures typically were only read to the nearest 1 degree Fahrenheit, by volunteers who were not professional meteorologists. In addition, the state of the technology of temperature measurements was not mature, particularly with respect to standardizing thermometers.

Climatologists have attempted to circumvent the above confounding factors by rationalizing that accuracy, and therefore precision, can be improved by averaging. Basically, they take 30-year averages of annual averages of monthly averages, thus smoothing the data and losing information! Indeed, the Law of Large Numbers predicts that the accuracy of sampled measurements can be improved (if systematic biases are not present!), particularly for probabilistic events such as the outcomes of coin tosses. However, if the annual averages are derived from the monthly averages, instead of the daily averages, then the months should be weighted according to the number of days in the month. It isn’t clear that this is being done. Furthermore, even daily averages will suppress (smooth) extreme high and low temperatures and reduce the apparent standard deviation.
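
A minimal sketch of the weighting issue (the monthly means below are invented, illustrative numbers): an unweighted mean of twelve monthly means is not the same as a mean weighted by the length of each month, which is what a true average of the daily values would give:

```python
# Hypothetical monthly mean temperatures (deg F) for one non-leap year.
monthly_means = [30.1, 33.4, 42.0, 52.7, 62.5, 71.8,
                 76.3, 74.9, 66.2, 54.8, 43.5, 33.0]
days_in_month = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]

# Unweighted annual mean: every month counts equally.
unweighted = sum(monthly_means) / 12

# Weighted annual mean: each month counts in proportion to its length,
# which is what averaging the daily values directly would give.
weighted = (sum(m * d for m, d in zip(monthly_means, days_in_month))
            / sum(days_in_month))

print(f"unweighted annual mean   = {unweighted:.3f} F")
print(f"day-weighted annual mean = {weighted:.3f} F")
# The two differ by roughly 0.1 F here -- not negligible when anomalies
# are reported to hundredths of a degree.
```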

However, even temporarily ignoring the problems raised above, there is a fundamental problem with attempting to increase the precision and accuracy of air-temperatures over the surface of the Earth. Unlike the ball bearing with essentially a single diameter (with minimal eccentricity), the temperature at any point on the surface of the Earth is changing all the time. There is no unique temperature for any place or any time, and one has only one opportunity to measure that ephemeral temperature. One cannot make multiple measurements to increase the precision of a particular surface air-temperature measurement!

Temperature Measurements

Caves are well known for having stable temperatures. Many vary by less than ±0.5° F annually. It is generally assumed that the cave temperatures reflect an average annual surface temperature for their locality. While the situation is a little more complex than that, it is a good first-order approximation. [Incidentally, there is an interesting article by Perrier et al. (2005) about some very early work done in France on underground temperatures.] For the sake of illustration, let’s assume that a researcher needs to determine the temperature of a cave during a particular season, say at a time that bats are hibernating, with greater precision than the thermometer they have carried through the passages is capable of. Let’s stipulate that the thermometer has been calibrated in the lab and is capable of being read to the nearest 0.1° F. This situation is a reasonably good candidate for using multiple readings to increase precision, because over a period of two or three months there should be little change in the temperature and there is a high likelihood that the readings will have a normal distribution. For a cave averaging about 50° F, the known annual range suggests that the standard deviation should be less than (50.5 – 49.5)/4, or about 0.25° F. Therefore, the expected standard deviation of the annual temperature variation is of the same order of magnitude as the resolution of the thermometer. Let’s further assume that, every day the site is visited, the first and last thing the researcher does is take the temperature. After accumulating 100 temperature readings, the mean, standard deviation, and standard error of the mean are calculated. Assuming no outlier readings, and that all the readings are within a few tenths of the mean, the researcher is confident that they are justified in reporting the mean with one more significant figure than the thermometer was capable of capturing directly.
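
A minimal simulation of the cave scenario (the 50° F mean, 0.25° F seasonal spread, 0.1° F thermometer resolution, and 100 visits are the illustrative assumptions from the paragraph above):

```python
import random
import statistics

random.seed(42)

TRUE_CAVE_TEMP_F = 50.03   # hypothetical "true" seasonal cave temperature
SEASONAL_SD_F = 0.25       # estimated from the known annual range
RESOLUTION_F = 0.1         # the thermometer can be read to the nearest 0.1 F

# 100 readings: a small real variation plus rounding to the thermometer's resolution.
readings = [round(random.gauss(TRUE_CAVE_TEMP_F, SEASONAL_SD_F) / RESOLUTION_F) * RESOLUTION_F
            for _ in range(100)]

mean = statistics.mean(readings)
sd = statistics.stdev(readings)
sem = sd / len(readings) ** 0.5

print(f"mean = {mean:.3f} F, SD = {sd:.2f} F, SEM = {sem:.3f} F")
# With ~100 readings of an essentially constant temperature, the SEM is of
# the order of a few hundredths of a degree, supporting a reported mean with
# one more significant figure than the 0.1 F thermometer resolution.
```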

Now, let’s contrast this with the common practice in climatology. Climatologists use meteorological temperatures that may have been read by individuals with less invested in diligent observations than the bat researcher probably has. Or temperatures, such as those from the automated ASOS, may be rounded to the nearest degree Fahrenheit and conflated with temperatures actually read to the nearest 0.1° F. (At the very least, the samples should be weighted according to their precision, that is, inversely to their uncertainty.) Additionally, because the data suffer averaging (smoothing) before the 30-year baseline-average is calculated, the data distribution appears less skewed and more normal, and the calculated standard deviation is smaller than what would be obtained if the raw data were used. It isn’t just the mean temperature that changes annually. The standard deviation and skewness (kurtosis) are certainly changing also, but this isn’t being reported. Are the changes in SD and skewness random, or is there a trend? If there is a trend, what is causing it? What, if anything, does it mean? There is information that isn’t being examined and reported that might provide insight into the system dynamics.
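
To illustrate, rather than reproduce, the effect of averaging before computing statistics, here is a sketch with invented, cold-skewed "daily" temperatures: calculating the spread and skewness from monthly means instead of the daily values shrinks both, even though the underlying data are unchanged:

```python
import random
import statistics

random.seed(0)

# Ten years of synthetic daily temperatures (deg F) with a long cold tail:
# a normal core, minus an occasional exponential cold excursion.
daily = [random.gauss(55, 12)
         - (random.expovariate(1 / 25) if random.random() < 0.15 else 0.0)
         for _ in range(3650)]

def skewness(xs):
    """Simple moment-based sample skewness."""
    m = statistics.mean(xs)
    s = statistics.pstdev(xs)
    return sum((x - m) ** 3 for x in xs) / (len(xs) * s ** 3)

# "Smooth" the daily values into 30-day (roughly monthly) means first.
monthly = [statistics.mean(daily[i:i + 30]) for i in range(0, len(daily), 30)]

print(f"daily:   SD = {statistics.stdev(daily):5.1f} F, skew = {skewness(daily):+.2f}")
print(f"monthly: SD = {statistics.stdev(monthly):5.1f} F, skew = {skewness(monthly):+.2f}")
# The averaged (smoothed) series shows a much smaller SD than the raw daily
# data it came from, and its skewness is pulled toward zero as well.
```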

Immediately, the known high and low temperature records (see above) suggest that the annual collection of data might have a range as high as 300° F, although something closer to 250° F is more likely. Using the Empirical Rule to estimate the standard deviation, a value of over 70° F would be predicted for the SD. Being more conservative, appealing to Chebyshev’s Theorem and dividing by 8 instead of 4, still gives an estimate of over 31° F. Additionally, there is good reason to believe that the frequency distribution of the temperatures is skewed, with a long tail on the cold side. The core of this argument is that temperatures colder than 50° F below zero are obviously more common than temperatures over 150° F, while the reported mean is near 50° F for global land temperatures.
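
The two range-based estimates, worked out explicitly (the 300° F and 250° F ranges are the figures discussed above; the divisors reflect how much of a distribution each rule places within a given number of standard deviations):

```python
# Two range-based estimates of the standard deviation of raw global temperatures.
# Empirical Rule: for a near-normal distribution, ~95% of values lie within
#   +/- 2 SD, so SD ~ range / 4.
# Chebyshev's Theorem: for ANY distribution, at least 15/16 of values lie
#   within +/- 4 SD, so a conservative estimate is SD ~ range / 8.
for annual_range_f in (300.0, 250.0):
    empirical_sd = annual_range_f / 4
    chebyshev_sd = annual_range_f / 8
    print(f"range = {annual_range_f:5.1f} F -> "
          f"Empirical Rule SD ~ {empirical_sd:5.1f} F, "
          f"Chebyshev SD ~ {chebyshev_sd:5.1f} F")
# With a 300 F range the Empirical Rule gives ~75 F; even the conservative
# Chebyshev divisor applied to a 250 F range still gives ~31 F.
```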

The following shows what I think the typical annual raw data should look like plotted as a frequency distribution, taking into account the known range, the estimated SD, and the published mean:

[Figure: a hypothetical frequency distribution of a typical year’s raw global temperature data, constructed from the known range, the estimated SD, and the published mean.]

The thick, red line represents a typical year’s temperatures, and the little stubby green line (approximately to scale) represents the cave-temperature scenario above. I’m confident that the cave-temperature mean is precise to about 1/100th of a degree Fahrenheit, but despite the huge number of measurements of Earth temperatures, the shape and spread of the global data do not instill the same confidence in me. The distribution obviously has a much larger standard deviation than the cave-temperature scenario, and dividing by the square root of the number of samples cannot be justified as removing random error when the parameter being measured is never twice the same value. The multiple averaging steps in handling the data reduce extreme values and the standard deviation. The question is, “Is the claimed precision an artifact of smoothing, or does the process of smoothing provide a more precise value?” I don’t know the answer to that. However, it is certainly something that those who maintain the temperature databases should be prepared to answer and justify!

Summary

The theory of Anthropogenic Global Warming predicts that the strongest effects should be observed in nighttime and wintertime lows. That is, the cold tail of the frequency distribution curve should become truncated and the distribution should become more symmetrical. That alone will increase the calculated global mean temperature even if the high or mid-range temperatures don’t change. The forecasts of future catastrophic heat waves are based on the unstated assumption that, as the global mean increases, the entire frequency distribution curve will shift to higher temperatures. That is not a warranted assumption, because the difference between the diurnal highs and lows has not been constant during the 20th Century; the highs and lows are not moving in step, probably because different factors influence them. In fact, some of the lowest low-temperatures have been recorded in modern times! In any event, a global mean temperature is not a good metric for what is happening to global temperatures. We should be looking at the trends in diurnal highs and lows for all the climatic zones defined by physical geographers. We should also be analyzing the shape of the frequency distribution curves for different time periods. Trying to characterize the behavior of Earth’s ‘climate’ with a single number is not good science, whether one believes in science or not!

Comments
mikebartnz
April 23, 2017 12:38 am

Very good article.
I always understood that in science you never quote a result to a precision greater than that of the least precise measurement, but that is exactly what the climatologists do.
In the past, would someone out in a blizzard have cared how precise his reading was?

Stephen Richards
Reply to  mikebartnz
April 23, 2017 1:15 am

Or you need one more decimal place of accuracy than the figure you use or quote.

Clyde Spencer
Reply to  Stephen Richards
April 23, 2017 10:11 am

Stephen, as I remarked in the article that preceded this, in performing multiplications/divisions, one more significant figure than the least precise measurement is sometimes retained in the final answer. However, to be conservative, the answer should be rounded off to the same number of significant figures as the least precise measurement. The number of significant figures to the right of the decimal point implies the resolution or precision, and says nothing about accuracy.

gnomish
Reply to  Stephen Richards
April 23, 2017 2:17 pm

so what do you do when you know a measurement is 3 1/3? those numbers are simple integers with one significant figure. the computer will not follow the rules, will it?
btw- this past autumn, there were red leaves and green leaves. what was the average color?
does it matter that you have more than twice the average number of testicles and fewer than the average number of ovaries?
as far as the number of arms and legs go- most people on the planet have more than the average number.
if my head is in death valley and my feet in vostok, do i feel just fine – on the average?

george e. smith
Reply to  Stephen Richards
April 24, 2017 3:54 pm

The random errors obtained in repetitive measures of the same item DO NOT cancel out.
That conclusion presupposes that positive errors are equally likely as negative errors.
A micrometer, for example, is far less likely to give too low a reading than too high a reading.
The average of many tests is more likely to converge on the RMS value of the measurements.
Practical measuring devices do not have infinite resolution so errors do not eventually tend towards zero. There also is always some systematic error that is not going to go away.
But I will give you one point. It IS necessary to be measuring the exact same thing, for statistics to have any meaning at all.
Averaging individual events that never repeat is numerical origami; they aren’t supposed to be the same number in the first place. So the result has no meaning other than it is the “average” or the “median”, or whatever algorithmic expression you get out of any standard textbook on statistical mathematics.
GISSTemp is GISSTemp, and nothing more. It has no meaning beyond the description of the algorithm used to compute it. Same goes for HADCrud; it is just HADCrud; nothing more.
G

george e. smith
Reply to  Stephen Richards
April 24, 2017 4:07 pm

“””””….. Unlike the Universal Gas Law that defines the properties of a gas at a standard temperature and pressure, …..”””””
Is this anything like the “Ideal Gas Law ” ??
If you have a gas at …… standard temperature and pressure ….. as you say; the only remaining variables are the number of moles of the gas and the occupied volume.
So what good is that; the occupied volume is a constant times the number of moles of the ideal gas. Whoopee ! what use is it to know that. Doesn’t seem to have anything to do with any lapse rates.
G

Paul Westhaver
Reply to  mikebartnz
April 23, 2017 8:19 am

Hello, My name is Error Bars. Nice to meet you. I am the offspring of Mrs Measurement Accuracy and Mr Instrument Precision. I know that I am an ugly tedious spud like my sister Uncertainty and cousin Confidence Level. My best friend is loved by everyone. His name is Outlier.
If you don’t know me your BSc. is a P.O.S. Go back to school.

Greg
Reply to  Paul Westhaver
April 23, 2017 11:46 am

Good topic for discussion. I published an article on this at Climate Etc. last year.

It is probably best to plot SSTs with a scale 4-times that of land air-temperatures, and graphically display both at the same time for comparison.

A factor of four would compare the SHC of water to that of dry rock. Most ground is more like wet rock; a somewhat smaller scaling factor seems about right when comparing BEST land temps to SST.
https://climategrog.wordpress.com/2016/02/09/are-land-sea-averages-meaningful-2/

It is a classic case of ‘apples and oranges’. If you take the average of an apple and an orange, the answer is a fruit salad. It is not a useful quantity for physics-based calculations such as the earth energy budget and the impact of radiative “forcings”.

BallBounces
Reply to  mikebartnz
April 23, 2017 11:51 am

In fact, reputable climatolonianologists always wait until they have at least two bristlecone pines before making paradigm-shifting prognostications.

Philip Mulholland
April 23, 2017 12:47 am

Superb article. I particularly like this:-

It is probably best to plot SSTs with a scale 4-times that of land air-temperatures, and graphically display both at the same time for comparison.

Never seen it done however.

Samuel C Cogar
Reply to  Philip Mulholland
April 23, 2017 7:17 am

“Yes”, I agree, a superb commentary.
And I thank you, Clyde Spencer, for posting it.
I was especially impressed because it contained many of the factual entities that I have been posting to these per se “news forums” and “blogs” for the past 20 years …… in what has seemed to me to be a “futile attempt” on my part to educate and/or convince the readers and posters of said “factual entities”.
Cheers, Sam C

Clyde Spencer
Reply to  Samuel C Cogar
April 23, 2017 10:16 am

Sam C,
Don’t feel like the Lone Ranger. Some of the things that I have published here in the past have elicited responses, both favorable and otherwise, but seem to diffuse out of the consciousness of the readers as quickly as a balloon popping when it runs up against a rose bush.

Ian H
Reply to  Philip Mulholland
April 23, 2017 8:24 am

Our everyday experience is that it takes a lot more energy to warm up water than it does air. So only 4 times the specific heat doesn’t seem nearly enough. The reason of course is that specific heat is defined per kilogram, not per cubic metre, and water is a lot more dense. A cubic metre of air weighs 1.2 kg while a cubic metre of water weighs 1000 kg.
Now it is important to note that while heat capacity is defined in terms of mass, there is no special reason for this. We just need a measure of how much stuff we are talking about to define heat capacity and mass happens to be convenient. But we could equally well have used volume and defined heat capacity in terms of how much energy it takes to heat a cubic meter of stuff instead of a kilogram. The heat capacity per cubic metre for water would then be roughly 3,300 times larger than the heat capacity per cubic metre for air.
So now the question is when it comes to averaging SST and air temperatures which is more reasonable.
The per volume approach is equivalent to allowing the top meter of water to reach thermal equilibrium with the top meter of air. The per mass approach is equivalent to allowing the top meter of water to reach thermal equilibrium with the top 830 meters of air. I find it hard to say that either method is correct, but to me the first, if anything, seems more reasonable. And if we did this using the per volume measure of heat capacity then we should be weighting SST and air temperatures not at 4:1 but in the ratio of 3,300:1.
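
A quick check of the arithmetic in the comment above, using rounded handbook values for specific heat and density (assumed here for illustration, not taken from the thread):

```python
# Rough comparison of heat capacity per unit volume for water vs. air,
# using rounded handbook values (assumed here for illustration).
SPECIFIC_HEAT_WATER = 4186.0   # J / (kg K)
SPECIFIC_HEAT_AIR = 1005.0     # J / (kg K), at constant pressure
DENSITY_WATER = 1000.0         # kg / m^3
DENSITY_AIR = 1.2              # kg / m^3, near sea level

per_kg_ratio = SPECIFIC_HEAT_WATER / SPECIFIC_HEAT_AIR
per_m3_ratio = (SPECIFIC_HEAT_WATER * DENSITY_WATER) / (SPECIFIC_HEAT_AIR * DENSITY_AIR)

print(f"per kilogram, water holds ~{per_kg_ratio:.1f}x the heat of air per degree")
print(f"per cubic metre, water holds ~{per_m3_ratio:,.0f}x the heat of air per degree")
# The per-mass ratio is ~4:1; the per-volume ratio is a few thousand to one,
# broadly consistent with the ~3,300:1 figure quoted above.
```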

Clyde Spencer
Reply to  Ian H
April 23, 2017 10:21 am

Ian H,
We can argue about the correct measurement procedure for air and SSTs, but the point I was trying to make is that the two data sets should be analyzed separately, and not conflated, because water and air respond quantitatively differently to heating.

Owen in GA
Reply to  Ian H
April 23, 2017 4:17 pm

Ian,
In order to use volume, you have to pin down at what pressure and temperature. The volume of things can change quite radically when pressure or temperature are changed.
Even worse, the values are determined independently and quite often change with the temperature and pressure of the material. So we say the heat capacity of something is X, but that is really only for standard temperature and pressure.

Samuel C Cogar
Reply to  Ian H
April 24, 2017 4:03 am

The only correct measurement procedure for air and SSTs would be to use “liquid immersed thermocouples”.
All Surface Temperature Stations, ……. be they on the near-surface of the land or in the near-surface waters of the oceans, ……. should consist of a round or oval container with a two (2) gallon capacity ….. that is filled with a -40 F anti-freeze …… and with two (2) thermocouples suspended in the center of the liquid interior that are connected to an electronic transmitting device.
The “liquid” eliminates all of the present day “fuzzy math averaging” of the recorded near-surface air temperature “spikes” that occur hourly/daily/weekly as a result of local changes in cloud cover, winds and/or precipitation events.

george e. smith
Reply to  Ian H
April 24, 2017 4:34 pm

“””””…..
Samuel C Cogar
April 24, 2017 at 4:03 am
The only correct measurement procedure for air and SSTs would be to use “liquid immersed thermocouples”. …..”””””
Well thermocouples are a totally crude way to measure Temperatures. They are highly non linear, and moreover they require a reference thermocouple held at some standard reference Temperature.
A “thermocouple” is a bimetallic electrical connection that generates a voltage difference between the two metals depending on the Temperature at that junction.
Unfortunately you cannot measure that voltage. To do so, you would have to connect those two metallic “wires” to some sort of “voltmeter”.
Well that voltmeter has two input terminals that are usually made from the same material usually brass or some other copper alloy metal.
So when you connect your two wires to the voltmeter’s two terminals, now you have a loop containing a total of three thermocouples, with the possibility of more thermocouples inside the voltmeter, that you don’t know anything about.
But now you have at least three thermocouples in series, each with its own Temperature/emf characteristic, all of them non linear, and for all you know each of the three thermocouples is at a different and probably unknown Temperature..
Thermocouples suck; but not in any useful way.
G

Samuel C Cogar
Reply to  Ian H
April 25, 2017 5:07 am

Thermocouples suck; but not in any useful way.

Well now, ……. george e. smith, …… back in the late 60’s those thermocouples were nifty devices for R&D environmental testing, …….. so excuse the hell out of me for even mentioning them.
Are you also going to find problems with “newer” devices, such as these, to wit:

An IC Temperature Sensor is a two terminal integrated circuit temperature transducer that produces an output current proportional to absolute temperature. The sensor package is small with a low thermal mass and a fast response time. The most common temperature range is 55 to 150°C (-58 to 302°F). The solid state sensor output can be analog or digital.
Immersion IC Sensors
An IC temperature probe consists of solid state sensor housed inside a metallic tube. The wall of the tube is referred to as the sheath of the probe. A common sheath material is stainless steel. The probe can come with or without a thread. Common applications are automotive/industrial engine oil temperature measurement, air intake temperature, HVAC, system and appliance temperature monitoring. http://www.omega.com/prodinfo/Integrated-Circuit-Sensors.html#choose

Most anything would be a great improvement over what is now being employed.

george e. smith
Reply to  Ian H
May 1, 2017 3:03 pm

Well Samuel, I guess you didn’t even address the issues I raised; the fact that a ” thermocouple ” is actually just one part of a closed circuit, that always has to have a minimum of two different materials, and at least two separate thermocouples, and more often than not there are three of each.
And each one, whether two or three of them generates a couple EMF that is a function of the Temperature at THAT point.
For a two material and two junction circuit, the response depends on the difference of the two temperatures, so you need one couple at some reference temperature. And back in the 60s when those thermocouples were nifty, things like Platinum resistance thermometers were well known and quite robust.
and many other techniques have developed since, including quartz crystal oscillators with a highly linear TC.
And yes today, modern semi-conductor ” bandgap ” sensors, can actually respond directly to absolute kelvin temperatures. Yes they also do need to be calibrated against certified standards, for accuracy, but for resolution and linearity they are about as good as it gets.
Thermo-couples are quite non-linear, as is the Platinum resistance, but for long term stability, PTRs are hard to beat.
G

mickyhcorbett75
April 23, 2017 12:47 am

I agree with this but I’d like to offer a different view.
There’s nothing inherently wrong with how temperature anomalies are calculated, as long as the result stays within the limits of its arguments.
The reason why, say, the Met Office’s estimate of SST uncertainty (0.1 degrees C) is fine scientifically is that science itself allows an argument to be made using a mix of assumptions and some measurement. You don’t have to always have data. If one day it can be shown that your assumptions have merit then your work will have more relevance.
I mean two words can demonstrate how pure theoretical musings can still be countenanced and funded: String Theory!
So with the NOAA or Met Office temperature anomaly calculations you have the following arguments (paraphrased) : “Assuming that the underlying temperature distribution follows a normal distribution and that measurement errors are random and independent, multiple measurements will produce less uncertainty” etc etc you know that type of thing.
And this is all fine as long as the paper states clearly the limits of the argument. So I can read the Met Office papers and say, okay I see where you have come to that conclusion and if it were true that’s interesting.
The problem is not the construction of a scientific argument. The problem is the application of the argument to the real world, bypassing the verification process.
Add that in with the theorists/modellers tending to believe their own ideas, mix it in with galactic mass proportions of funding, and pretty much you have the current God-cult of temperature anomalies varying to hundredths of a degree.
And for added bonus you have the same theorists trying to tell empirical scientists and engineers that they know better than them about temperature measurements. Or that they know better about the under-appreciated black art of a science: metrology.

Kaiser Derden
Reply to  mickyhcorbett75
April 23, 2017 10:16 am

any experiment (and I consider any attempt to “measure” the global average an experiment) that explains itself with the words “assuming that … ” has basically invalidated its use since they are openly admitting they have not locked down all the variables … any experiment which has variables with uncertain values (anything that you claim is based on an assumption is by definition “uncertain”) is no longer an experiment but simply a bias display …

Reply to  Kaiser Derden
April 23, 2017 1:30 pm

Yes for experiments. But science also allows for theoretical papers that present a possible scenario. There is nothing wrong with this per se as a hypothetical exercise but there is a very real problem if this is believed to hold any weight in the real world. Especially since everyday technology has been subjected to tough measurement criteria before being deemed “safe”.

Clyde Spencer
Reply to  mickyhcorbett75
April 23, 2017 10:25 am

MC75,
I don’t have a serious problem with the procedure of calculating anomalies because it is a reasonable approach to dealing with the issue of weather stations being at different elevations. However, I am questioning the precision with which the results are reported routinely.

Roger Graves
Reply to  Clyde Spencer
April 23, 2017 11:10 am

The only way in which a truly representative global temperature could be estimated is to set up an observatory on, say, Mars, and view our planet, in Carl Sagan’s words, as a ‘pale blue dot’. You could then measure an overall radiation temperature for the said pale blue dot. I’m not altogether sure what that radiation temperature would actually represent, but it is reasonable to assume that, if global warming did occur, it would be reflected in that radiation temperature.
Attempting to measure an overall global temperature on Earth is a classic case of not being able to see the forest for the trees. There are so many random local variables to consider that it would be almost impossible to take them all into consideration. Two independent groups undertaking the task, if there was no collusion between them, would almost certainly come up with significantly different values.

BJ in UK
Reply to  Clyde Spencer
April 23, 2017 11:34 am

As Roger Graves says, per Carl Sagan, view Earth from Mars as a “blue dot.”
This is not as way out as it sounds. Using Wien’s law, the temperature of Earth can be measured directly by dividing Wien’s constant (about 2898 μm·K) by the wavelength of Earth’s peak emission in micrometres.
Such a satellite could be put in a suitable Mars orbit in a year or so, at relatively low cost, given what’s at stake.
Then direct measurement with no anomalies, averages etc., to put these arguments to rest.
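
A minimal sketch of the Wien’s-law calculation described above (the peak-emission wavelength used below is an assumed, illustrative value, not a measurement):

```python
# Wien's displacement law: T = b / lambda_peak, with b ~ 2898 micrometre-kelvin.
WIEN_CONSTANT_UM_K = 2898.0

def brightness_temperature_k(peak_wavelength_um: float) -> float:
    """Temperature (K) of a blackbody whose emission peaks at the given wavelength (um)."""
    return WIEN_CONSTANT_UM_K / peak_wavelength_um

# Illustrative example only: a thermal emission spectrum peaking near 11.3 um
# would correspond to a brightness temperature of roughly 256 K.
peak_um = 11.3
t_kelvin = brightness_temperature_k(peak_um)
t_fahrenheit = (t_kelvin - 273.15) * 9 / 5 + 32

print(f"peak at {peak_um} um -> T ~ {t_kelvin:.0f} K ({t_fahrenheit:.0f} F)")
# Note: Earth's outgoing spectrum is not a single blackbody curve (see the
# reply below about the atmospheric window vs. high-altitude GHG emission),
# so this gives at best an effective radiating temperature.
```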

Reply to  Clyde Spencer
April 23, 2017 1:28 pm

I think we agree on this Clyde. As a scientific exercise I don’t see a problem with it. As a national standard to be used in policy there is a very big problem with this approach.

Owen in GA
Reply to  Clyde Spencer
April 23, 2017 5:10 pm

BJ,
I would place such a satellite in a leading or trailing Lagrange point (preferably one each). Then the satellite would always be the same approximate distance from the Earth.

Nick Stokes
Reply to  Clyde Spencer
April 23, 2017 5:39 pm

“I’m not altogether sure what that radiation temperature would actually represent, but it is reasonable to assume that, if global warming did occur, it would be reflected in that radiation temperature.”
It’s mixed. Part of the spectrum represents surface (the atmospheric window). And a large part is from GHG at high altitude. But it’s no guide to Earth temperature. Total IR out has to equal (long-term) total sunlight absorbed – it doesn’t change. The AW part increases with warming, and would give some kind of guide. The rest decreases.

Samuel C Cogar
Reply to  Clyde Spencer
April 24, 2017 4:23 am

And I am questioning just what the ell is so important about the world’s populations knowing what the Global Average Temperature is for any given day, month or year? Especially given the fact that it will be DIFFERENT every time it is re-calculated.

Clyde Spencer
Reply to  Samuel C Cogar
April 24, 2017 8:40 am

Samuel,
From my perspective, the importance of world temperatures is that they are being used to try to scare the populace by convincing them that we are becoming increasingly hotter and time is of the essence in preventing catastrophe. I’m attempting to put the claims into perspective by pointing out the unstated assumptions and unwarranted false precision in the public claims.

george e. smith
Reply to  Clyde Spencer
April 24, 2017 4:37 pm

The earth according to NASANOAA has an average cloud cover of 65%.
So earth is more likely to look like a white dot, than a blue dot, from Mars.
G

Clyde Spencer
Reply to  george e. smith
April 25, 2017 9:12 am

GES,
You said, “The earth according to NASANOAA has an average cloud cover of 65%. So earth is more likely to look like a white dot, than a blue dot, from Mars.”
Where did this come from? It is unrelated to the comment you are supposedly responding to. Also, with a generally accepted ‘albedo’ of about 30%, it would seem that your 65% figure is the complement of the actual average cloud coverage.

Samuel C Cogar
Reply to  Clyde Spencer
April 25, 2017 5:30 am

Samuel,
From my perspective, the importance of world temperatures is

Clyde S, …….. I understood what your “perspective” was …… and pretty much agreed with 100% of your posted commentary. My above comment was directed at those who think/believe that knowing what the Local, Regional or Global Average Temperatures are ……. is more important than food, sex or a cold beer.

george e. smith
Reply to  Clyde Spencer
May 1, 2017 3:16 pm

Well Clyde you struck out.
I said NASANOAA says that average earth cloud coverage is 65%. Nowhere did I say they claim that clouds have a 100% diffuse reflectance.
“””””…..
Roger Graves
April 23, 2017 at 11:10 am
The only way in which a truly representative global temperature could be estimated is to set up an observatory on, say, Mars, and view our planet, in Carl Sagan’s words, as a ‘pale blue dot’. …..”””””
I presume YOU didn’t see this.
I did, and pointed out that the extent of cloud cover (mostly over the oceans) makes earth more likely a white dot from Mars; not Sagan’s “blue dot”.
Sky blue is very pale, so over the polar ice regions, the blue sky scattering is quite invisible.
G

richard verney
April 23, 2017 12:47 am

A common one is to increase accuracy and precision of the determination of some fixed property, such as a physical dimension. This is accomplished by confining all the random error to the process of measurement. Under appropriate circumstances, such as determining the diameter of a ball bearing with a micrometer, multiple readings can provide a more precise average diameter. This is because the random errors in reading the micrometer will cancel out and the precision is provided by the Standard Error of the Mean, which is inversely related to the square root of the number of measurements.

This only works when what you measure is the same and does not change.
However, this does not work in climate since what we are measuring is in constant flux, and the samples being used for measurement are themselves constantly changing.
When compiling the global average temperature data sets, sometimes say a few hundred weather stations are used, none of which are at airports and are mainly rural; at other times it is say 1500, then 3000, then about 6000, then 4,500, etc. The siting of these stations constantly moves, the spatial latitude distribution continually changes, the ratio between rural and urban continually changes, the ratio of airport stations to non-airport stations constantly changes, and of course equipment changes and TOB.
We never measure the same thing twice, errors are not random so never cancel out.

Clyde Spencer
Reply to  richard verney
April 23, 2017 10:28 am

Richard,
You said, “This only works when what you measure is the same and does not change.” I believe that is essentially what I said.

richard verney
April 23, 2017 12:55 am

It is well worth looking at the McKitrick 2010 paper. For example, see the change in the number of airport stations used to make up the data set.
http://notrickszone.com/wp-content/uploads/2017/02/NOAA-Data-Manipulation-Urban-Bias-Airport-Temperature.jpg
When one considers this, it is important to bear in mind that airports in say 1920 or 1940 were often themselves quite rural, and are nothing like the airports of the jet travel era of the 1960s.
An airport in 1920 or 1940 would often have grass runways, and there would be just a handful of piston prop planes. Very different to the airport complexes of today.

mikebartnz
Reply to  richard verney
April 23, 2017 1:04 am

Recently in Masterton NZ they moved the weather station to the airport and I remember when I was with the glider club coming into the landing circuit with full air brakes on and still rising at a good rate of knots.

Reply to  mikebartnz
April 23, 2017 7:20 am

Just watch the birds. They know where the thermals are.

Duane
Reply to  richard verney
April 23, 2017 6:13 am

Airports aren’t necessarily the final word on weather data anyway. Keep in mind that nearly all airports are built and maintained by local government authorities, which should not and do not inspire confidence at the outset. Local airport authorities don’t hire PhD level meteorologists to install the weather sensors, nor do they hire MS degreed or better scientists to operate them. They hire local contractors and, well, people ranging from highly competent aviation professionals to people with GEDs, respectively.
Also, “airport temperature” is not a single fixed value anyway. The temperature where? Above an asphalt airport ramp? Or at the top of a control tower (most airports aren’t towered)? In between the runways at a busy commercial airport filled with multi-engined large turbine aircraft, or off in the back 40 of a rural area with cows grazing nearby and maybe one or a handful of airport operations a day?
The fact is that measurement precision and accuracy is not tightly quality controlled at nearly all weather stations on the planet, with a tiny handful of exceptions, whether airports or not.
The bottom line is, obviously, the earth has been warming on average for the last 15K years give or take, with various interludes of cooling here and there. The fact that the earth continues to warm as it has since long before human civilization sprouted between 6K and 10K years ago is what matters. Nobody can deny that.
The real argument is over “so what?”

Reply to  Duane
April 23, 2017 7:24 am

The real argument is over “so what?”
BINGO
I’ve been kicked off more than one left-wing Climate Change site for saying that.

Duane
Reply to  Duane
April 23, 2017 8:17 am

Steve – exactly … which is why the warmists refuse to acknowledge that nobody denies today that warming is occurring. Even most third graders understand that the Earth used to be much cooler, during the last “ice age” (we’re still in the “ice age” — just in the current inter-glacial period). The science-poseurs much prefer to pretend that their opponents deny any warming at all, so that they can pretend to be science literate while their opponents are all supposed to be scientific neanderthals who are “deniers”.
The only argument today is over the “so what” thing – which of course drives the warmists bonkers. They thought they had everybody who matters convinced that the “Climate of Fear” is deserved … rather than acknowledge that, as any non-imbecile knows who has studied history and archaeology at all, WARM IS GOOD … COLD IS BAD as far as humans are concerned.

Reply to  Duane
April 23, 2017 8:18 am

“obviously, the earth has been warming on average for the last 15K years give or take, with various interludes of cooling here and there. The fact that the earth continues to warm as it has since long before human civilization sprouted between 6K and 10K years ago is what matters. Nobody can deny that.”
Say what?
For one thing, we have had trend reversals in the past 15,000 years, so that is not a logical point of reference if one is speaking to long term trends in global temp regimes.
But more recently, over say the past 8000 years, the Earth has been cooling, after reaching its warmest values soon after the interglacial commenced.
The warming since the end of the LIA represents a recovery from one of the coldest periods of the past 8000 years.
So what?
Because cold is bad for humans, human interests, and life in general.

Clyde Spencer
Reply to  Duane
April 23, 2017 10:36 am

Duane,
Occam’s Razor dictates that we should adopt the simplest explanation possible for the apparent increase in world temperatures. Because we can’t state with certainty just what was causing or controlling the temperature increases before the Industrial Revolution, we can’t rule out natural causes for the apparent recent increase in the rate. At best, anthropogenic influences (of which CO2 is just one of many) are a reasonable working hypothesis. However, the burden of proof is on those promoting the hypothesis. We should be entertaining multiple working hypotheses, not just one.

David L. Fair
Reply to  Duane
April 23, 2017 1:40 pm

Uh, wrong-o, Duane.
Temperatures have been in decline the last half of the Holocene.

Duane
Reply to  Duane
April 24, 2017 9:33 am

Menicholas, and David Fair – your chart showing the last 15KY illustrates exactly what I wrote above – we are obviously in a warm, interglacial phase, much warmer than 15KYA, with various interludes of cooling but no reversal toward a new glaciation episode. Warming, on average, over the last 15KY; no cooling trend.

george e. smith
Reply to  Duane
April 24, 2017 4:18 pm

Airport Temperatures are preferably over the runway Temperatures, as their sole purpose is to inform pilots whether or not it is safe for them to take off on that runway, given the weight and balance of their aircraft at the time.
The pilots don’t give a rip about the global temperature anomalies; they just want to know the right now runway air Temperature.
G

David Chappell
Reply to  richard verney
April 23, 2017 7:50 am

I have a problem understanding the left hand side of those graphs. Just how many “airports” were there before, say, 1910?

Reply to  David Chappell
April 23, 2017 8:26 am

Yeah…what up wit’ dat?

Reply to  David Chappell
April 24, 2017 11:03 am

None, but some of those stations were eventually ones that had airports built near them. What’s interesting is that this is an artifact of having large numbers of stations added to areas where no airports were built then having many of those sites dropped.

george e. smith
Reply to  David Chappell
April 24, 2017 4:40 pm

Nearly the same number as in 1852.
g

MFKBoulder
Reply to  richard verney
May 4, 2017 12:57 am

I always wondered about the percentage of _airport_-stations at the end of the 19th / beginning of the 20th century.
These graphs are even better than the hockey-stick.
For the slow-witted: HAM has been operating since 1911. Other “airfields” are not much older….

April 23, 2017 12:56 am

About time that we got a statistical analysis. With the sun definitely in a grand minimum state and the consequent changes in the jet streams, polar vortex, etc., the distribution must be changing shape, with higher standard deviation, skewness increasing towards the colder side, and the kurtosis has definitely increased as there are increasingly cold and warm waves at various places on Earth.
I would like to see more about the made up data for the huge areas on the planet where there aren’t any measurements.

Reply to  Brent Walker
April 23, 2017 8:28 am

“I would like to see more about the made up data”
Just use your imagination.
That’s what the people making it up do, and I am gonna make a leap of faith and assume you are at LEAST as imaginative as those…um…”experts”.

george e. smith
Reply to  Brent Walker
April 24, 2017 4:41 pm

Why ?? it’s made up data just as you say; so it is meaningless.
G

richard verney
April 23, 2017 1:02 am

The number of NASA GISTemp stations (in thousands):
And with this change there is a significant loss of high latitude stations. The station drop out is not random, and not equally distributed.

Bindidon
Reply to  richard verney
April 23, 2017 4:46 am

richard verney on April 23, 2017 at 1:02 am
1. You seem to have carefully avoided presenting, in addition to this plot, the one placed immediately to its right.
It is perfectly visible there that despite an inevitable loss of stations that no longer meet modern requirements, the mean station data coverage was by far less than the loss!
2. And with this change there is a significant loss of high latitude stations. The station drop out is not random, and not equally distributed.
Could you please cite your sources?

RACookPE1978
Editor
Reply to  Bindidon
April 23, 2017 5:29 am

Bindidon (challenging an earlier quote)

2. And with this change there is a significant loss of high latitude stations. The station drop out is not random, and not equally distributed.

Could you please cite your sources?

Is that not obvious from the plot of the stations? Do you need an “officially approved, formerly and formally peer-reviewed by officially sanctified officials” piece of printed paper in an officially approved journal to learn from and use data immediately obvious from NASA’s own plot?

Bindidon
Reply to  Bindidon
April 23, 2017 6:26 am

It is perfectly visible there that despite an inevitable loss of stations that no longer meet modern requirements, the mean station data coverage was by far less than the loss!
Wow! What I wrote here is nonsense! I should have written “… the loss of data coverage was by far less compared with the loss of stations.”
My bad!

Bindidon
Reply to  Bindidon
April 23, 2017 6:35 am

RACookPE1978 on April 23, 2017 at 5:29 am
Is that not obvious from the plot of the stations?
No it is not obvious at all. The plot presented by verney tells us merely how many stations there were per decade, and not by latitude! Even the plot I added above doesn’t.
If I had the time, I would add a little stat function to my GHCN V3 processing software, showing you exactly what the record tells us about that in the 60N-82.5N area.

Reply to  Bindidon
April 23, 2017 8:45 am

You think that over 80% of the northern hemisphere has a thermometer on it? Really? Perhaps you can also tell us how close together those thermometers are. And since the number is constantly changing, how is it that they are not measuring something different every year?

MarkW
Reply to  Bindidon
April 24, 2017 7:39 am

All we have to do is redefine coverage and we can get whatever result we want.

Mindert Eiting
Reply to  richard verney
April 23, 2017 8:21 am

GHCN base. 1970: stations on duty 9644. During 1970-99: new stations included 2918, stations dropped 9434. Therefore 2000: stations on duty 3128. During 2000-2010 (end of my research) new stations included 14, stations dropped 1539. Therefore 2011: stations on duty 1603. This is not shown in the totals but demographers know that a group of people changes by birth/immigration and death/emigration.
I agree that the dropout was not random. The dropout rate depended on station characteristics and produced a great reduction of the variance in the sample. And the sample size decreased from 9644 to 1603. The standard error of the sample mean depends on population variance (not estimated any more by the sample variance) and sample size, on the assumption that the sample is taken randomly. Let’s not forget that all stations together are a sample from the population of all points on the earth surface. What a statistical nightmare this is.

Clyde Spencer
Reply to  Mindert Eiting
April 23, 2017 10:46 am

Mindert,
How do you propose to determine the population mean when it can’t be characterized theoretically?

Mindert Eiting
Reply to  Mindert Eiting
April 23, 2017 11:30 am

Clyde,
First moment of the temperature distribution over the earth’s surface at a certain point of time.

Nick Stokes
Reply to  Mindert Eiting
April 23, 2017 5:07 pm

” Let’s not forget that all stations together are a sample from the population of all points on the earth surface. What a statistical nightmare this is.”
It isn’t a nightmare. It’s just a spatial integration. You’re right that the population is points on Earth, not stations. The stations are a sample.
It’s wrong to talk about “stations on duty” etc. Many of the stations are still reporting. But GHCN had a historic period, pre 1995, when whole records were absorbed. Since then, it has been maintained with new data monthly, a very different practical proposition. Then you ask what you really need.
The key isn’t variability of sample average; that is good. It is coverage – how well can you estimate the areas unsampled? Packing in more stations which don’t improve coverage doesn’t help.

Reply to  richard verney
April 23, 2017 8:43 am

I am just wondering how it is that with more money than ever being spent, and by orders of magnitude at that, and more interest in finding out what is going on with the temp of the Earth, how is it that getting actual readings of what the temperature is, has gone by the wayside?
https://www.youtube.com/watch?v=gNMgqnUEMGM

Clyde Spencer
Reply to  Menicholas
April 23, 2017 10:54 am

Menicholas,
What happened after about 1985 -1990? This would seem to support the claim that there are fewer readings today at high latitudes than there were previously.

Nick Stokes
Reply to  Menicholas
April 23, 2017 10:54 am

” how is it that getting actual readings of what the temperature is”
Temperatures are measured better and in more places than ever before. You are focussing on GHCN V3, which is a collection of long record stations that is currently distributed within a few days of the end of the month, and is used for indices that come out a few days later. It’s a big effort to get so much data correct on that time scale, and it turns out that the data is enough. You can use a lot more, as BEST and ISTI do (and GHCN V4 will). It makes no difference.

Reply to  Menicholas
April 23, 2017 11:15 am

Clyde,
It sure does, doesn’t it?
Take a look at Canada in particular.
And we had more numbers coming out of Russia during the cold war than now.
So, if more readings in more places improves the degree of certainty of our knowledge of what the atmosphere is doing, why get rid of stations at all?
And why, of all places, in the parts of the world in which readings are already sparse, and in which the most dramatic changes are occurring?
It makes no sense, if the idea is really to get a better handle on objective reality.
It makes perfect sense, if the idea is to have better control over what numbers get published.
With billions with a B being spent annually by the US federal government alone, some $30 billion by credible accounts, should we seriously be expected to believe that recording basic weather data is just too onerous, difficult, inexact without upgraded equipment, or too expensive?
It defies credulity to claim anything of the sort.

Bindidon
Reply to  Menicholas
April 23, 2017 3:42 pm

What we need is a statistic giving us how many GHCN stations were present in each year, starting e.g. with 1880, i.e. with the beginning of the GISTEMP and NOAA records.
And even better would be to do the job a bit finer, in order to have these yearly stats for the five main latitude zones, i.e. NoPol, NH Extratropics, Tropics, SH Extratropics, SoPol.
Without these numbers, discussions remain meaningless and hence leave us as clueless as before.

MarkW
Reply to  Menicholas
April 24, 2017 7:42 am

The money is going into models. With the theory being that with good enough models, we don’t need to actually collect data.

Clyde Spencer
Reply to  richard verney
April 23, 2017 10:41 am

Richard,
You concluded with, “And with this change there is a significant loss of high latitude stations. The station drop out is not random, and not equally distributed.” This is an extremely important point! It is claimed that the high latitudes are warming two or three times faster than the global average. Yet, the availability of temperature measurements today are about the same as in the Great Depression.

Nick Stokes
Reply to  Clyde Spencer
April 23, 2017 12:39 pm

“Yet, the availability of temperature measurements today are about the same as in the Great Depression.”
Absolutely untrue. There are far more measurements now. The fallacy is that people look at GHCN Monthly, which has a historical and current component. When it was assembled about 25 years ago, people digitised old archives and put them in the record. There was no time pressure. But then it became a maintained record, updated every month, over a few days. Then you have to be selective.
There are far more records just in GHCN Daily alone.

Chimp
Reply to  Clyde Spencer
April 23, 2017 12:58 pm

Nick,
Did NOAA not close 600 of its 9000 stations, thanks to the efforts of our esteemed blog host, showing that they reported way too much warming?
What about the thousands to tens of thousands of stations once maintained in the former British Empire and Commonwealth and other jurisdictions around the world, which no longer report regularly or accurately, if at all?
See Menicholas April 23, 2017 at 8:43 am and richard verney April 23, 2017 at 1:02 am above, for instance.
IIRC, Gavin says he could produce good “data” with just fifty stations, ie one per more than 10,000,000 sq km on average, for instance one to represent all of Europe.

Nick Stokes
Reply to  Clyde Spencer
April 23, 2017 7:03 pm

Chimp,
“Did NOAA not close 600 of its 9000 stations, thanks to the efforts of our esteemed blog host”
No. GHCN V2, extant when WUWT started, had 7280 stations. That hasn’t changed.
“What about the thousands to tens of thousands of stations once maintained in the former British Empire and Commonwealth”
I’ve analysed the reductions in GHCN from the change to monthly maintenance here. When GHCN was just an archive, no-one cared much about distribution. Places were way over-represented. Turkey had 250, Brazil 50. Many such stations are not in GHCN, but they are still reporting. There is a long list of Australian stations here.
I don’t know about Gavin’s 50, but Santer said 60. And he’s right. The effect of reducing station numbers is analysed here.

Chimp
Reply to  Clyde Spencer
April 23, 2017 7:19 pm

Nick,
Sure, sixty are as good as 60,000 when not a single one of them samples actual surface temperature over the 71% of the earth that is ocean nor over most of the polar regions.
GASTA would be a joke were it not such a serious criminal activity threatening the lives of billions and treasure in the trillions.
Do you honestly believe that a reliable, accurate and precise GASTA can be calculated from one station per ten million square kilometers? That means one for Australia, New Guinea, the Coral and Tasman Seas, New Zealand and surrounding areas of Oceania and Indonesia, for instance. Maybe two for North America. One for Europe. Two for the Arctic. Two for the Antarctic.
Does that really make sense to you?

Nick Stokes
Reply to  Clyde Spencer
April 23, 2017 7:53 pm

“Does that really make sense to you?”
Yes. Again people just can’t get their heads around an anomaly average. I showed here (globe plot, scroll down) the effect for one month of plotting all the anomalies, and then reducing station numbers. With all stations, there are vast swathes where adjacent stations have very similar anomalies. It doesn’t matter which one you choose. As you reduce (radio buttons), the boundaries of these regions get fuzzier, but they are still there. And when you take the global average, fuzzy boundaries don’t matter. What I do show there, in the plots, is that the average changes very little as station numbers reduce, even as the error range rises.
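
A toy illustration of the trade-off being debated in this thread (entirely synthetic: a smooth, spatially correlated anomaly field is invented and then subsampled; this is not GHCN data or the linked analysis):

```python
import math
import random
import statistics

random.seed(3)

# Build a synthetic, spatially smooth "anomaly field" on a 72 x 36 grid
# as a sum of a few large-scale sinusoidal patterns plus small local noise.
NLON, NLAT = 72, 36
waves = [(random.uniform(-1, 1), random.randint(1, 3), random.randint(1, 3),
          random.uniform(0, 2 * math.pi)) for _ in range(6)]

def anomaly(i, j):
    x, y = 2 * math.pi * i / NLON, math.pi * j / NLAT
    large_scale = sum(a * math.sin(kx * x + ph) * math.cos(ky * y)
                      for a, kx, ky, ph in waves)
    return large_scale + random.gauss(0, 0.2)

field = [anomaly(i, j) for i in range(NLON) for j in range(NLAT)]
full_mean = statistics.mean(field)

# Now "cull the network": average many random subsamples of 60 cells.
sub_means = [statistics.mean(random.sample(field, 60)) for _ in range(200)]

print(f"full-field mean anomaly      : {full_mean:+.3f}")
print(f"60-cell subsample mean (avg) : {statistics.mean(sub_means):+.3f}")
print(f"60-cell subsample spread (SD): {statistics.stdev(sub_means):.3f}")
# The subsample means scatter around the full-field mean: little bias, but
# an appreciable sampling spread -- the trade-off described in the exchange
# above (a fairly stable mean, with expanded uncertainty).
```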

Clyde Spencer
Reply to  Nick Stokes
April 23, 2017 9:54 pm

Nick,
I just read your piece on culling the stations down to 60. I think that you need to consider what it means to have your SD expanding so dramatically, even if the mean appears stable. I would suggest that one interpretation would be that you are maintaining accuracy, but dramatically decreasing precision and certainty.

Nick Stokes
Reply to  Clyde Spencer
April 23, 2017 10:11 pm

Clyde,
The σ expands, and at 60 stations is about 0.2 °C. That is for a single month. But for a trend, say, it goes down again. Even for an annual average, it’s about 0.06 °C (that is the additional coverage component from culling). Yes, 60 stations isn’t ideal, but it isn’t meaningless.

Bindidon
Reply to  Clyde Spencer
April 24, 2017 1:38 pm

Chimp on April 23, 2017 at 12:58 pm
1. Did NOAA not close 600 of its 9000 stations…?
You misunderstand something here. Nick is talking about GHCN stations.
Your 9,000 stations refer to a different set (GSOD, Global Summary of the Day).
*
Chimp on April 23, 2017 at 7:19 pm
2. Sure, sixty are as good as 60,000 when not a single one of them samples actual surface temperature over the 71% of the earth that is ocean nor over most of the polar regions.
Below you see an interesting chart, constructed out of UAH’s 2.5° TLT monthly grid data (144 x 66 = 9,504 cells), by selecting evenly distributed subsets (32, 128, 512 cells, compared with the complete set).
The data source: from
http://www.nsstc.uah.edu/data/msu/v6.0/tlt/tltmonamg.1978_6.0
till
http://www.nsstc.uah.edu/data/msu/v6.0/tlt/tltmonamg.2016_6.0
I hope you understand that, due to the distribution, any mixture of the polar, extratropical and tropical regions, and any mixture of land and sea, can be generated that way:
http://fs5.directupload.net/images/170424/lxywwb3t.jpg
In black you see a 60-month running mean of the average of all 9,504 UAH TLT cells (which corresponds exactly to the Globe data published by Roy Spencer in
http://www.nsstc.uah.edu/data/msu/v6.0/tlt/uahncdc_lt_6.0.txt ).
In red is the average of 512 cells; in yellow, 128; and in green, 32!
So you can see how little data is necessary to accurately represent the entire Globe…
P.S. The 64-cell average (which is not plotted here) was manifestly contaminated by strongly warming sources (0.18 °C / decade instead of about 0.12). The warmest cells are found in the latitude zone 80-82.5° N… 🙂
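A rough Python sketch of this kind of subsetting (synthetic data standing in for the 144 x 66 UAH TLT grid, and no cos-latitude area weighting, so illustrative only, not the original calculation):

import numpy as np

rng = np.random.default_rng(1)
n_months, n_lat, n_lon = 456, 66, 144

# Common global signal (slow trend plus wiggle) with cell-level noise on top.
t = np.arange(n_months)
global_signal = 0.012 * t / 12 + 0.2 * np.sin(t / 20)
field = global_signal[:, None, None] + rng.normal(0.0, 1.0, (n_months, n_lat, n_lon))

full_series = field.reshape(n_months, -1).mean(axis=1)

for stride in (4, 8, 16):               # keep every 4th/8th/16th cell in each direction
    subset = field[:, ::stride, ::stride].reshape(n_months, -1)
    rmse = np.sqrt(np.mean((subset.mean(axis=1) - full_series) ** 2))
    print(f"stride {stride:2d}: {subset.shape[1]:4d} cells, RMSE vs full average {rmse:.3f} C")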
*
Clyde Spencer on April 23, 2017 at 9:54 pm
I think that you need to consider what it means to have your SD expanding so dramatically, even if the mean appears stable.
Why do you always exclusively concentrate on standard deviations you detect in surface records?
Look at the chart above, and at the peaks and drops visible in the different UAH plots.
And now imagine what would happen if I computed, for all 9,504 UAH grid cells, the highest deviations up and down, sorted the result, and plotted a monthly time series of the average of those cells where maximal and minimal deviations add instead of averaging out.
What, do you think, would that series’ SD look like?

Clyde Spencer
Reply to  Bindidon
April 25, 2017 8:41 am

Bindidon,
The standard deviation of the raw data is what it is! Smoothing the data and calculating the SD gives you the SD of smoothed data. It might be appropriate and useful to do that if you are trying to remove noise from a data set. However, since the defined operation is to try to estimate the precision of the mean, smoothing is not justified. Does that answer your question about “always?”

george e. smith
Reply to  richard verney
April 24, 2017 4:43 pm

So where is your plot from 1880 of the stations in the Arctic, i.e. >60° N?
G

Bindidon
Reply to  richard verney
April 25, 2017 5:58 am

george e. smith on April 24, 2017 at 4:43 pm
So where is your plot from 1880 of the stations in the Arctic, i.e. >60° N?
Sorry George for the late answer, I live at WUWT+9.
Here is a chart showing the plot obtained from the file
ftp://ftp.ncdc.noaa.gov/pub/data/ghcn/v3/ghcnm.tavg.latest.qcu.tar.gz
which contains the GHCN V3 unadjusted dataset.
http://fs5.directupload.net/images/170425/nbr4ztlp.jpg
As you can see, the GHCN (!) temperatures measured there from 1880 till 1930 were far higher than today’s.
The linear trends ± 2 σ for the Arctic (60-82.5N), 1880-2016 (monthly) in °C / decade:
– GHCN unadj: -0.343 ± 0.013
It is interesting to compare the GHCN time series with the GISTEMP series derived from them, as they differ considerably.
GISTEMP zonal data is available in txt format for 64-90° N
https://data.giss.nasa.gov/gistemp/tabledata_v3/ZonAnn.Ts.txt
but is annual, so I constructed an annual average of GHCN’s monthly data to compare.
http://fs5.directupload.net/images/170425/iwhgexi9.jpg
The linear trends ± 2 σ for the Arctic (64-82.5N), 1880-2015 (yearly) in °C / decade:
– GHCN unadj: -0.498 ± 0.033
– GISTEMP land: 0.183 ± 0.011
GHCN’s yearly trend here is lower than in the monthly data. Not only is 60-90N a bit warmer than 64-90N; the yearly averaging has also clearly overweighted the past.
But the incredible difference in plots and trends between GHCN and GISTEMP will inevitably feed the suspicion that GISTEMP “cools the past to warm the present”; it is therefore better to show a similar chart with Globe data:
http://fs5.directupload.net/images/170425/pwev9qpj.jpg
The linear trends ± 2 σ for the Globe, 1880-2015 (yearly) in °C / decade:
– GHCN unadj: 0.231 ± 0.010
– GISTEMP land: 0.099 ± 0.004
Here we see that for the Globe, GISTEMP land-only data has a much lower trend than its own GHCN origin.
We should not forget that GISTEMP automatically eliminates GHCN data showing anomalies above 10 °C for 6 consecutive months. And not only in the past 🙂
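The two operations behind these numbers, collapsing a monthly series to annual means and fitting a linear trend with a 2 σ range in °C per decade, can be sketched as below. The series is synthetic and the uncertainty is a plain OLS standard error with no autocorrelation correction, so treat it as the shape of the calculation rather than a reproduction of the GHCN or GISTEMP figures.

import numpy as np

# Hedged sketch: synthetic monthly anomalies, not the GHCN/GISTEMP files named above.
rng = np.random.default_rng(2)
years = np.arange(1880, 2016)
monthly_t = np.repeat(years, 12) + np.tile(np.arange(12) / 12.0, len(years))
monthly_anom = 0.01 * (monthly_t - 1880) + rng.normal(0.0, 0.3, monthly_t.size)

def trend_per_decade(t, y):
    # OLS slope and its 2-sigma, converted to deg C per decade.
    slope, intercept = np.polyfit(t, y, 1)
    resid = y - (slope * t + intercept)
    se = np.sqrt(resid.var(ddof=2) / np.sum((t - t.mean()) ** 2))
    return 10 * slope, 10 * 2 * se

annual_anom = monthly_anom.reshape(len(years), 12).mean(axis=1)   # monthly -> annual

print("monthly: %+.3f ± %.3f C/decade" % trend_per_decade(monthly_t, monthly_anom))
print("annual : %+.3f ± %.3f C/decade" % trend_per_decade(years + 0.5, annual_anom))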

Clyde Spencer
Reply to  Bindidon
April 25, 2017 9:39 am

Bindidon,
I see that you got nothing out of my articles. You are citing temperatures to the nearest thousandth of a degree.

Bindidon
Reply to  Bindidon
April 25, 2017 11:40 am

Clyde Spencer on April 25, 2017 at 9:39 am
I see that you got nothing out of my articles. You are citing temperatures to the nearest thousandth of a degree.
Clyde Spencer, I see that you are fixated on philosophical debates having little in common with the daily, real work of people processing datasets (including those who do it as a hobby).
If you did similar work every day, you would soon discover that when you keep no more than one digit after the decimal point, you simply run into the stupid problem of losing accuracy, because you have to average all the time.
And you have to average: the simplest example is just above, where a monthly series (GHCN unadjusted) had to be averaged into an annual one simply because it must be compared with data (GISTEMP zonal) that does not exist in a monthly variant in text format.
Do you really think I would change all my dozens of Excel tables in order to show my data in your guest posts such that it fits your obsessional narrative?
It’s so simple to read over what disturbs you.
And by the way: why don’t you write to e.g. Dr Roy Spencer? Doesn’t all the data he publishes for us also have, in your mind, one digit too many after the decimal point? Thanks for publishing his response…

Clyde Spencer
Reply to  Bindidon
April 25, 2017 2:50 pm

Bindidon,
Roy Spencer is not reading thermometers to the nearest tenth of a degree and he has a synoptic view of the entire global temperature field. He is almost certainly doing it correctly.
My “fixation” is not philosophical. It is pragmatic. As to my “daily real work,” besides my academic background and post-graduate work experience (I’m retired), I worked my way through college working for Lockheed MSC in the Data Processing and Analysis Group in the Agena Satellite Program. I have more than a passing acquaintance with numbers. I’m not doing it as a hobbyist. I would assume that a hobbyist would be interested in finding sources of knowledge to compensate for their lack of formal instruction in a field they have taken an interest in.

Dave Fair
Reply to  Clyde Spencer
April 25, 2017 2:58 pm

Not so, Clyde! You must have tenured mentors, such as Drs. Mann or Jones, to tutor you in the arcane sciences of global warming/climate change. Anything less invalidates everything you might have to say about CAGW.

Clyde Spencer
Reply to  Dave Fair
April 25, 2017 3:52 pm

Dave,
Congratulations! I had to read it twice before I realized that you were being sarcastic. I usually catch it right away. 🙂

Dave Fair
Reply to  Clyde Spencer
April 25, 2017 4:08 pm

People overuse “/sarc,” Clyde. I think people should engage their brains before responding to comments.
But, then again, I’ve always said one’s use of “should” indicates an unwarranted belief in the intelligence and/or goodwill of others.

Dave Fair
Reply to  Clyde Spencer
April 25, 2017 4:12 pm

Plus, do-gooders use it all the time. Save the Whales! Save the Snails! Save Mother Earth!

Bindidon
Reply to  Bindidon
April 26, 2017 4:36 am

Clyde Spencer on April 25, 2017 at 2:50 pm
I would assume that a hobbyist would be interested in finding sources of knowledge to compensate for their lack of formal instruction in a field they have taken an interest in.
What an arrogant statement, Clyde Spencer! You are not the only one with a working past. Mine was full of interesting matters (Bézier splines, algebraic specifications, etc., etc.).
The difference between us is that, with respect to climate, we are in my really humble opinion both no more than a couple of laymen, whereas you seem to think rather differently about that.
No problem for me!

April 23, 2017 1:03 am

Why use Fahrenheit?

Reply to  Hans Erren
April 23, 2017 1:17 am

It’s parochial but it doesn’t affect the argument.

richardscourtney
Reply to  Hans Erren
April 23, 2017 1:18 am

Why not use Fahrenheit when writing for a mostly American audience?

fretslider
Reply to  richardscourtney
April 23, 2017 4:01 am

Why not use scientific units?
Do Americans have problems with SI? If so, why?

Duncan
Reply to  richardscourtney
April 23, 2017 4:38 am

Fahrenheit is a scientific unit, as are BTUs, etc. As with all units, it expresses a ratio of quantities. It is just not part of the International System. There is nothing wrong with either (scientifically).

Janice The American Elder
Reply to  richardscourtney
April 23, 2017 4:47 am

fretslider, all measurement systems are based upon something arbitrary, therefore they are all equivalent.

R. Shearer
Reply to  richardscourtney
April 23, 2017 6:05 am

The Fahrenheit degree is more precise.

fretslider
Reply to  richardscourtney
April 23, 2017 6:25 am

“Do Americans have problems with SI?”
Ask a stupid question. Clearly they do.

Reply to  richardscourtney
April 23, 2017 8:46 am

Do non-Americans have a problem with Fahrenheit?
Clearly they do.
I have never heard anyone complaining about using Celsius, even though the numbers are far more coarse-grained.

Gary Pearse
Reply to  richardscourtney
April 23, 2017 11:53 am

Fretslider: Having won 50% of all Nobel prizes in the hard sciences and economics, I’d venture to say Americans have no trouble at all. Wouldn’t you say so on reconsideration? I have to ask who your guitar influences are, too.

george e. smith
Reply to  richardscourtney
April 24, 2017 4:57 pm

Some of us actually use SI units for everything. My GPS outputs all its data in SI units.

Clyde Spencer
Reply to  george e. smith
April 25, 2017 8:10 am

GES,
Is the thermostat in your house calibrated in C? How about the temperature indicator on the dash of your car? How about the oven in your kitchen? If the answer to those is “yes,” then you have a unique American lifestyle and I imagine it cost you a lot of money to get EVERYTHING to conform to SI. Or perhaps you exaggerated just a little.

Reply to  richardscourtney
April 25, 2017 9:22 am

I’m a units fanatic. If someone were to give an answer that was numerically correct but left the units off, I would say it was 100% wrong. Instead, if they gave an answer that was numerically wrong but had the right units, I’d give partial credit.
Just about any unit is valid if used properly. As engineers, we have had to use a lot of silly units. There’s a difference between using pounds force and pounds mass. If the pound is the force unit, then the mass unit is the slug. If the pound is the mass unit, the force unit is the poundal. Metric is easier but not without its traps. If you use cgs (centimeter-gram-second) as your standard, the arbitrary constant in Coulomb’s law is exactly 1. However, if you use MKS (meter-kilogram-second) as your standard, then the arbitrary constant in Coulomb’s law is more complex (the MKS electrical units were an attempt to clean up Maxwell’s equations). The units volt, ampere, ohm, and coulomb come from the MKS system.
Still, Fahrenheit is a perfectly valid unit for temperature. Does anyone know the standard unit for Hubble’s constant? It’s kilometers per second per megaparsec. Look up “parsec” in your SI list of units and then tell astronomers they shouldn’t use it, or lightyear for that matter.
Jim

Reply to  Hans Erren
April 23, 2017 7:31 am

Fahrenheit has smaller degrees, so it sort of inherently has more accuracy, and with a lower zero point you don’t have to deal with negative numbers as often. Celsius offers no advantage whatsoever. Why science doesn’t use Kelvin all the time, every time, is a mystery.

Reply to  Steve Case
April 23, 2017 7:33 am

“… accuracy…” – Uh I should have said precision

Clyde Spencer
Reply to  Steve Case
April 23, 2017 9:59 pm

Steve,
It should be noted that K has the same resolution as C, and that C is based on the arbitrary decision to use the phase changes of water and divide the interval between them into 100 divisions. There is nothing magic about K!

george e. smith
Reply to  Steve Case
April 24, 2017 4:57 pm

Some of us do.
G

Dinsdale
Reply to  Hans Erren
April 23, 2017 7:33 am

Why not use Kelvin, since temperature is related to energy states of matter? Then replace the so-called anomaly plots with absolute temperature plots. Then the size of recent changes would be near zero compared to long-term glaciation-cycle changes. But if you are in the business of getting funding for your chicken-little claims that just wouldn’t do.

Clyde Spencer
Reply to  Dinsdale
April 23, 2017 11:07 am

Fretslider and others,
I have a preference for Fahrenheit only because of a long familiarity with it. I know from personal experience how I should dress for a certain forecasted temperature. I know that at -10 F the moisture on nostril hairs will freeze immediately on my first breath. I know that at 110 F I will be uncomfortable in the desert and immobilized if I have to deal with the humidity of Georgia (USA) as well. If I’m doing scientific calculations I’ll probably convert whatever units are available to Kelvin. At this point in my life, I won’t live long enough to acquire the kind of intuitive understanding that is acquired by experience. Please bear with me until I die.

Clyde Spencer
Reply to  Hans Erren
April 23, 2017 10:56 am

Hans,
Historically, there was a preponderance of Fahrenheit readings. Even today, the ASOS automated readings are collected in degrees F and then improperly converted to Celsius. See my preceding article for the details.

Reply to  Clyde Spencer
April 23, 2017 1:34 pm

If you pretend to write scientific papers, you should use international standards.

Clyde Spencer
Reply to  Hans Erren
April 23, 2017 3:38 pm

Hans,
You said, “If you pretend to write scientific papers, you should use international standards.”
In case you hadn’t noticed, what I wrote was not a peer-reviewed scientific article. I chose to use a unit of measurement that is most common in the historical records and is still used in the US; the choice is really irrelevant to the argument, and if you are not up to making the conversion to Celsius I will be glad to do it for you. I’m sorry if you have difficulty with the mathematics of conversion.

george e. smith
Reply to  Clyde Spencer
April 24, 2017 5:01 pm

I’m with Hans. If you want to be informative to a diverse group of people you should use the recognized universal units.
And with kelvin, you don’t even need any negative signs.
G

Clyde Spencer
Reply to  george e. smith
April 25, 2017 9:03 am

GES,
When you write an article for WUWT, please do use K. And see how many people complain that they can’t relate to Kelvin! Despite having used the Celsius temperature scale for decades, I still don’t have an intuitive understanding of what anything other than 0, 20, and 100 degrees C means on an existential level. However, it is trivial to make a conversion if I need to, in order to better grasp the personal implications. The reason that different temperature scales are still in common usage is that they are more appropriate at certain times than the alternatives. You have hypocritically claimed that you use nothing but SI units; however, you have demonstrated in your own writing that it is only puffery to make you look superior. There are more important things to worry about than whether or not we make our colleagues across the pond happy with our choice of measurement units. Next thing you know they will be complaining that we drive on the wrong side of the road.

Reply to  Clyde Spencer
April 24, 2017 10:24 pm

>>
And with kelvin, you don’t even need any negative signs.
<<
Or degrees.
Jim

John M. Ware
Reply to  Hans Erren
April 23, 2017 11:19 am

F degrees are only 5/9 the size of C degrees, making F nearly twice as precise as C.

george e. smith
Reply to  Hans Erren
April 24, 2017 4:44 pm

Makes it sound warmer.

Phillip Bratby
April 23, 2017 1:09 am

Temperature is an intensive property and therefore there is no such thing as the “earth’s average temperature” or even “the earth’s temperature” (unless the earth is in thermal equilibrium, which it isn’t).

richardscourtney
Reply to  Phillip Bratby
April 23, 2017 1:21 am

Phillip Bratby:
Yes. If you have not seen it, I think you will want to read this, especially its Appendix B.
Richard

Nick Stokes
Reply to  Phillip Bratby
April 23, 2017 3:46 am

“Temperature is an intensive property and therefore there is no such thing as the “earth’s average temperature””
People are always saying this here, with no authority quoted. It just isn’t true. Scientists are constantly integrating and averaging intensive quantities; there isn’t much else you can do with them. The mass of a rock is the volume integral of its density; it is the volume times the average density. The heat content is the volume integral of the temperature multiplied by density and specific heat. There is no issue of equilibrium there (except LTE, so that temperature can be defined, but that is only an issue for rarefied gases). That is just how you work it out.
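As a small numerical illustration of that statement, the sketch below approximates those volume integrals as sums over cells. All the property values are invented for the example; the point is only that the extensive totals (mass, heat content) come from integrating intensive fields (density, temperature), and that the average temperature consistent with the heat content is the heat-capacity-weighted one.

import numpy as np

# Invented cell properties; a stand-in for any gridded body, not real data.
rng = np.random.default_rng(3)
n_cells = 1000
cell_volume = 1.0                                  # m^3 per cell (assumption)
density = rng.uniform(900.0, 1100.0, n_cells)      # kg/m^3
specific_heat = 4000.0                             # J/(kg K), water-like (assumption)
temperature = rng.uniform(275.0, 295.0, n_cells)   # K

mass = np.sum(density * cell_volume)                                         # kg
heat_content = np.sum(density * specific_heat * temperature * cell_volume)   # J

# The average temperature consistent with the heat content is weighted by
# each cell's heat capacity, not a plain arithmetic mean of the readings.
t_weighted = heat_content / np.sum(density * specific_heat * cell_volume)
print(f"mass {mass:.3e} kg, heat content {heat_content:.3e} J")
print(f"weighted mean T {t_weighted:.2f} K vs plain mean T {temperature.mean():.2f} K")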

Phillip Bratby
Reply to  Nick Stokes
April 23, 2017 3:59 am

Nonsense. You would need to know the temperature of every molecule in order to work out an average temperature of the earth.

Nick Stokes
Reply to  Nick Stokes
April 23, 2017 4:44 am

“You would need to know the temperature of every molecule in order to work out an average temperature of the earth.”
If that were true, then you’d need to know about every molecule to work out the temperature of anything. But it isn’t. How you know about any continuum field property in science is by sampling. And the uncertainty is generally sampling uncertainty.

Duncan
Reply to  Nick Stokes
April 23, 2017 4:57 am

I have to agree with Nick on this one. In any study of science, math, chemistry, physics, etc., averages must be defined and used. When engineers build, they rely on averages to perform calculations, for example. Saying scientists cannot use or try to calculate the average of Earth’s atmospheric temperature would make every other science null and void. Arguing this point, while it might sound convenient and truistic to us non-believers, is just playing semantics with what every other science attempts to do.

fah
Reply to  Nick Stokes
April 23, 2017 5:09 am

Mass and heat are both extensive properties. Temperature is intensive. Intensive properties are generally ratios of extensive properties, as with temperature, which is defined (differentially) as the ratio of a change in energy to the corresponding change in entropy. See any thermodynamics or stat mech textbook. If one likes Wiki, here is a reference
https://en.wikipedia.org/wiki/Intensive_and_extensive_properties

Duncan
Reply to  Nick Stokes
April 23, 2017 5:12 am

“You would need to know the temperature of every molecule in order to work out an average temperature of the earth.”
Phillip, you are confusing accuracy with averages. When calculating the circumference of a circle, do you use every decimal place of pi to do it? We can still calculate a circumference “close enough” for the purpose required.

fh
Reply to  Nick Stokes
April 23, 2017 5:21 am

There is nothing “wrong” about calculating the global average of local temperatures. As others point out, an average can be calculated for any set of numbers. One could ask successive people who enter a room for their favorite number and then compute an average. The average can be calculated and there is nothing wrong with it as an average. The question is, what does one want to do with the number. The confusion enters when one wants to use some discipline such as physics to describe phenomena. In that case temperature is a quantity that appears in dynamical equations governing the evolution of thermodynamic systems. It holds that role as an intensive property, i.e. it is the magnitude of a differential form. When one averages over its value, the average can be computed but it no longer plays the role played by temperature in thermodynamics – it is not connected to the states or evolution of the system. There is nothing wrong with computing an average of any set of global temperatures, at the same instant of global time, at the same instant of local time, at random instants of time, at daily local maxima or minima, etc. etc. Just so long as one understands that it says nothing explicit about the global thermodynamics, i.e. whether anything is “hotter” or “cooler” other than the non-physical average quantity.

Solomon Green
Reply to  Nick Stokes
April 23, 2017 5:26 am

If scientists are “constantly integrating and averaging intensive quantities”, why are they still only averaging two thermometer readings to get Tmean (Tmean = 0.5*(Tmax + Tmin))?
Why not derive the true mean temperature from the area under the T-curve? Even prior to electronic measuring equipment, more accurate approximations than 0.5*(Tmax + Tmin) were always possible, provided there were a sufficient number of daily readings.
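A hedged sketch of that comparison is below: it integrates an invented, asymmetric diurnal curve with the trapezoidal rule and compares the result with the traditional (Tmax + Tmin)/2. The curve shape is an assumption chosen only to show that the two daily means need not agree.

import numpy as np

# Invented diurnal cycle: a short warm afternoon peak over a long cool night.
hours = np.arange(25)                                        # 0..24 h, inclusive
temps = 15.0 + 8.0 * np.exp(-((hours - 15.0) / 4.0) ** 2)    # deg C

minmax_mean = 0.5 * (temps.max() + temps.min())

# "Area under the T-curve" daily mean via the trapezoidal rule.
area = np.sum(0.5 * (temps[1:] + temps[:-1]) * np.diff(hours))
integral_mean = area / (hours[-1] - hours[0])

print(f"(Tmax+Tmin)/2    = {minmax_mean:.2f} C")
print(f"trapezoidal mean = {integral_mean:.2f} C")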

Duncan
Reply to  Nick Stokes
April 23, 2017 5:37 am

Solomon, I think you answered your own question. Before electronic recording and data logging there was just not the need (or computing power) to record/sample temperatures often enough. I do agree though, the area under the curve would be a much better averaging method (vs Tmax/Tmin).

Samuel C Cogar
Reply to  Nick Stokes
April 23, 2017 6:19 am

Duncan – April 23, 2017 at 4:57 am

I have to agree with Nick on this one.

Duncan, given the remaining content of your post, …. your agreeing with Nick was not a compliment.
Duncan also saidith:

In any study of science, math, chemistry, physics, etc averages must be defined and used.

Don’t be talking “trash”. Math is used for calculating averages …….. but nowhere in mathematics is/are “averages” defined or mandated. Mathematics is a per se “exact” discipline or science.
Duncan also saidith:

When engineers build they rely on the averages to perform calculations as example.

“DUH”, silly is as silly claims.
Design engineers perform calculations to DETERMINE what is referred to as “worse case” average(s), …….. not vice versa. Only incompetent designers would use a “calculated average” as a specified/specific design “quantity”.
Duncan also saidith:

Saying scientist cannot use or try to calculate the average of earths atmospheric temperature would make every other science null and void.

I already told you, …….. Don’t be talking “trash”.

Bob boder
Reply to  Nick Stokes
April 23, 2017 6:26 am

Nick
You can average anything, that is true; the question is whether it means anything. In the case of global temperature it is not clear that it does.

Duncan
Reply to  Nick Stokes
April 23, 2017 7:10 am

Samuel, I thought my responses were respectful, not trash, maybe that is just me. I get the sense you have a bone to pick.
Math “uses” averages (mean) was my only point I did not say it was exacting or not. You can try to turn that into a big deal if you want but the point is meaningless.
Use “worse case” average(s) when designing – this is NOT correct all the time when designing. As an example, when building a spaceship, one does not design for the worst-case asteroid strike; the ship would be too heavy to launch. You may design for small, paint-chip-sized strikes, but that is it. This risk needs to be accepted and hopefully mitigated in other ways. My point was, when looking at, say, strength of materials, you do not examine every molecular bond in the material but take an average of a sample (like Earth’s temperature). Whether one chooses to use the worst case or not depends on the critical nature of the application and other factors such as cost, manufacturability, etc. Engineers do rely on other averages when designing, such as gravity; they don’t worry about the small variances across the globe. I will accept your apology on this one.
I did not understand your last “trash” comment so I will ignore it.

PiperPaul
Reply to  Nick Stokes
April 23, 2017 7:41 am

Even the Catholic church integrated and averaged when calculating angels dancing on heads of pins!

bruce
Reply to  Nick Stokes
April 23, 2017 8:38 am

Nick, and all,
It might be a good idea to start measuring the earth’s temperature by a more concrete method than is currently used. Try to imagine some method that would not be prone to degradation and error. Maybe “find” a dozen remote locations spaced around the globe, measure the surface, not even the air (as it is so prone to movement and inconsistencies). In this way we might hope to have a foolproof means of knowing what the conditions are going forward.
In the end there will be no final answer on how to create a device that will accurately, and without degradation, measure the temperature without recalibration or maintenance. And thus we introduce the same question about the correctness of our understanding of the inferred size of an angel’s girth.

Reply to  Nick Stokes
April 23, 2017 8:53 am

Hey…how about them satellites?
The ones that cost a bajillion dollars and, according to NASA, are the best way to determine global temperatures, as they measure the whole atmosphere.
Well, for a while at least, that was what NASA said.
Right up until those dang satellites just stopped co-operating with the warmista conclusion.

Nick Stokes
Reply to  Nick Stokes
April 23, 2017 10:39 am

“Nick, now tell us how you take a volume integral without knowing the value of the function at each point. “
All practical (numerical) integration calculates an integral from values at a finite number of points. It’s all you ever have. There is a whole science about getting it right.

Clyde Spencer
Reply to  Nick Stokes
April 23, 2017 11:29 am

Bruce,
You suggested, ”Maybe “find” a dozen remote locations spaced around the globe, measure the surface, not even the air (as it is so prone to movement and inconsistencies).” For purposes of climatology, we could probably get by with fewer stations (although, how do we make historical comparisons?). If optimized recording stations were located in each of the climate zones defined by physical geographers, and the number were proportional to the area of the zone, that should provide reasonably good sampling. As it is, our sampling protocol is not random, and it is highly biased to the less extreme regional climates where most people live.
We would, however, still need weather stations for airports and where people live so that they know how to dress for the weather.

Duncan
Reply to  Nick Stokes
April 23, 2017 3:54 pm

I am done with Samuel, but as far as Math “averages” go, mathematicians do ponder non-exact problems such as the milliseconds before the Big Bang or the mass of the Higgs boson. Math is not always an exact ‘number’ as he thinks but can be a non-number such as infinity. What is the average of two infinities added together? Just needed to complete my thoughts.

Samuel C Cogar
Reply to  Nick Stokes
April 24, 2017 5:03 am

Duncan – April 23, 2017 at at 7:10 am

Samuel, I thought my responses were respectful, not trash, maybe that is just me.
I get the sense you have a bone to pick.

Duncan, responses being “respectful” has nothing whatsoever to do with them being ….. trash, lies, half-truths, junk-science, Biblical truths or whatever.
And “Yes”, …….. I do have a per se “bone to pick” with anyone that talks or professes “junk science” or anyone that touts or exhibits a misnurturing and/or a miseducation in/of the natural or applied sciences.
Duncan, I spent 20+ years as a design engineer, systems programmer and manufacturing consultant for mini-computers and peripherals. Just what is your design “track record”?

Samuel C Cogar
Reply to  Nick Stokes
April 24, 2017 5:28 am

Duncan April 23, 2017 at 3:54 pm

I am done with Samuel, but as far as Math “averages”, …………….. Math is not always an exact ‘number’ as he thinks

That’s OK iffen you are “done with me” ……. as long as you are not done with trying to IMPROVE your reading comprehension skills.
Duncan, I made no mention, accusation or claim whatsoever about …. “math being an exact number”.
Tis always better when one puts brain in gear before putting mouth in motion.
Cheers

Bindidon
Reply to  Nick Stokes
April 24, 2017 2:19 pm

I would enjoy Roy Spencer telling us right here that, because temperatures aren’t extensive quantities, his whole averaging of averages of averages of averages, giving in the end a mean global temperature trend of 0.12 °C per decade for the satellite era, is nothing but pure trash.
Oh yes, I would enjoy that.

Clyde Spencer
Reply to  Phillip Bratby
April 23, 2017 11:14 am

Phillip,
I did mention that Earth is not in thermal equilibrium. If someone wants to define the Earth’s “Average Temperature” as being the arithmetic mean of all available recorded temperatures for a given period of time, I have no problem with that. However, I do have a problem with how precise they claim it to be and how they use that number. Oftentimes disagreements are the result of failing to carefully define something and then to get agreement on accepting that definition.

george e. smith
Reply to  Phillip Bratby
April 24, 2017 5:05 pm

Temperature is defined in terms of the mean KE per molecule (actually, per degree of freedom of such random motions), so there is no such thing as the temperature of a single molecule. You can postulate such an equivalence, which would be the time-averaged KE per degree of freedom of that molecule, but the problem is that, since it is a time average, you don’t know just exactly when the molecule had that temperature.
G

Pablo
April 23, 2017 1:10 am

If continents move away from the poles to allow warm tropical water better access to the frigid seas in the polar night of winter, ice caps tend to disappear. The tropical seas cool a little and the polar seas warm up. The average stays the same.

Reply to  Pablo
April 23, 2017 8:59 am

Yes, when they slide up them poles like that, it def warms things up down there.

richardscourtney
April 23, 2017 1:28 am

Pablo:
You mistakenly assert, “If continents move away from the poles to allow warm tropical water better access to the frigid seas in the polar night of winter, ice caps tend to disappear. The tropical seas cool a little and the polar seas warm up. The average stays the same.”
No!
The changes in regional temperatures alter the rates of heat losses from the regions with resulting change to rate of heat loss from the planet and, therefore, the average temperature changes until radiative balance is restored.
Radiative heat loss is proportional to the fourth power of temperature (T^4) and the planet only loses heat to space by radiation.
Richard

Lyndon
Reply to  richardscourtney
April 23, 2017 2:19 am

We also lose heat via loss of mass.
[??? By how much, compared to radiation losses? .mod]

Samuel C Cogar
Reply to  Lyndon
April 23, 2017 6:53 am

Lyndon – April 23, 2017 at 2:19 am

We also lose heat via loss of mass.

Lyndon, don’t be badmouthing Einstein’s equation of …… E = mc² …. or …. m = E/c²
And one shouldn’t be forgetting that photosynthesis uses solar (heat) energy to create bio-mass ……. and the oxidation of that bio-mass releases that (heat) energy back into the environment ….. and ONLY if that released (heat) energy gets radiated back into space can one confirm that …. “loss of (heat) energy = loss of mass”.

Lyndon
Reply to  Lyndon
April 23, 2017 4:40 pm

Almost nil.

george e. smith
Reply to  Lyndon
April 24, 2017 5:09 pm

What about all the mass of space dust we get every day?
Each day, the earth lands on literally millions of other planets/asteroids/space dust, and emsquishenates most of them, so how do you know we are losing heat through escape of renegade matter to space ?? We are probably gaining mass.
G

Dr. S. Jeevananda Reddy
Reply to  richardscourtney
April 23, 2017 5:34 am

The climate system plays the vital role at any given location. On top of this, the general circulation pattern prevailing at any given location or region [advective heat transfer] modifies the local temperatures. So, local temperature is not directly related to the Sun’s heat over space and time. In India, the northeastern and southwestern parts get cold waves in winter and heat waves in summer from Western Disturbances, associated with the six-month summer and six-month winter in the polar regions.
Dr. S. Jeevananda Reddy

Pablo
Reply to  richardscourtney
April 23, 2017 8:56 am

The ocean, a very good absorber of solar energy, warms less than land for the same input and so cools less than land by radiation at night.
It takes about 3000 times as much heat to warm a given volume of water 1 ºC as to warm an equal volume of air by the same amount. A layer of water a meter deep, on cooling 0.1 ºC, could warm a layer of air 33 metres thick by 10 ºC.
If more ocean warmth reaches the poles to lessen the tropical-to-polar gradient, then the global temperature range becomes less extreme but the average stays the same.
Earth’s atmosphere, having mass, moderates the extremes of temperature that we see on the surface of the Moon.
Earth’s oceans, and the water vapour the atmosphere allows to exist, lessen those extremes further.
The average stays the same.
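That heat-capacity arithmetic checks out roughly, as the short sketch below shows; the volumetric heat capacities used (about 4.19 MJ/m³K for water and 1.2 kJ/m³K for air) are round textbook values assumed here, not the commenter’s sources.

# Round textbook values (assumptions): density x specific heat, in J/(m^3 K).
rho_c_water = 1000.0 * 4186.0     # ~4.19 MJ/(m^3 K)
rho_c_air = 1.2 * 1005.0          # ~1.2 kJ/(m^3 K)

print("volumetric heat capacity ratio:", round(rho_c_water / rho_c_air))    # ~3500

# Heat released per m^2 by a 1 m layer of water cooling 0.1 C:
q = rho_c_water * 1.0 * 0.1                                                 # J/m^2
# Thickness of an air column that this much heat warms by 10 C:
print("air column warmed by 10 C:", round(q / (rho_c_air * 10.0), 1), "m")  # ~35 m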

Reply to  Pablo
April 23, 2017 9:11 am

Pablo,
Each polar region is continuously dark for six months of the year, and since it is so cold there, the air is very dry, mostly and on average.
Dry air radiates its heat into space very efficiently, and so any thermal energy arriving at the polar regions is lost to space far more quickly than is the case at lower latitudes.
Richard is exactly correct…the thermal equilibrium is altered and the averages are in no way constrained to remain the same, when changes are made to the arrangement of the continents or the net polar movement of energy from the sun.
If what you are saying is that it is possible to have less thermal gradient from the poles to the equator while having the same average temp, obviously that is true.
If you are saying that IS what happens, or IS what HAS to happen, you are obviously incorrect.
Obviously the averages can change… just look at any temperature record or historical reconstruction.
Flat lines are notably absent.

george e. smith
Reply to  Pablo
April 24, 2017 5:11 pm

The hottest midsummer dry deserts radiate more than twelve times as fast as the coldest Antarctic highlands midnight regions.
So the earth is NOT cooled at the poles; but in the equatorial dry deserts, in the middle of the day.
G

Retired Engineer John
Reply to  richardscourtney
April 23, 2017 9:04 am

Richard, your comment is interesting. Does this mean that when the Earth has significant temperature differences compared to average conditions, more energy is removed, i.e. radiated to space? Has anyone done actual calculations for different locations with extreme temperature differences?

richardscourtney
Reply to  Retired Engineer John
April 23, 2017 9:13 am

Retired Engineer John:
You ask me

Richard, your comment is interesting. Does this mean that when the Earth has significant temperature differences compared to average conditions, more energy is removed, i.e. radiated to space? Has anyone done actual calculations for different locations with extreme temperature differences?

Calculations are easy and I have reported some on WUWT, but actual simultaneous variations of temperature differences are difficult to obtain.
Richard Lindzen has said the effect I mentioned is sufficient to explain global temperature variations since the industrial revolution, but I have not seen a publication of his determinations that led him to this conclusion.
Richard

Reply to  Retired Engineer John
April 23, 2017 9:14 am

How would we go from interglacial to full glacial advance and then back again if there were not conditions and periods during which the amount of energy removed from the earth increased or decreased?

ferdberple
Reply to  richardscourtney
April 23, 2017 10:22 am

the planet only loses heat to space by radiation.
How would we go from interglacial to full glacial advance
=========================
We are not measuring the heat of the planet. Rather, we are measuring surface temperatures, which is quite a different animal.
All that is required to go from interglacial to glacial is to modify the overturning rate of the oceans. Increase the overturning rate and there is more than sufficient cold water in the deep oceans to plunge the earth into a full-on ice age; decrease the overturning rate and you have an interglacial.
Given the heat capacity of water, perhaps a change in the deep ocean overturning rate of just a few percent would take us from interglacial to full on glaciation. Perhaps resulting from some long term harmonic of the ocean currents, stirred up as the oceans are dragged north and south of the equator by the earth’s moon, along with the earth-sun orbital mechanics, combined with the 800 year deep ocean conveyor.

Reply to  ferdberple
April 23, 2017 12:43 pm

And how is somewhat colder water on some ocean surfaces cooling down the interiors of continents by the amounts required for miles of ice to form and never melt?
For this to happen without increased losses to space (like because, for instance, the air was drier), all of the difference in temp would have to be due to more energy going into heating water which is then carried under the sea, no?
Not saying I think that scenario is impossible.
But, absent changes in atmospheric circulation…

skorrent1
Reply to  richardscourtney
April 23, 2017 11:29 am

Thank you for your reminder that the earth’s energy balance depends primarily on radiation, i.e., T^4. Translating the author’s record temps to Kelvin gives us the range, roughly, 180 K to 344 K. The numerical average of the records is 262 K, and its fourth power is 4.7*10^9. The 4th powers of the records themselves are 1.0*10^9 and 14.0*10^9, with an “average” radiative effect of 7.5*10^9. With the constant, diurnal, and seasonal spread of temperatures across the Earth, variations in the “annual global average temperature” tell us f***all about changes in the radiative energy balance of the Earth.
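The gap between those two numbers is the point: because radiation goes as T^4, the flux implied by the average temperature is not the average of the fluxes. A small hedged sketch, using the two record temperatures discussed above and the Stefan-Boltzmann constant, with everything else an illustrative simplification (an equal-weight, two-point “average”):

# Illustrative only: two record temperatures, equal weights, no geometry.
SIGMA = 5.67e-8                          # Stefan-Boltzmann constant, W/(m^2 K^4)

def f_to_k(f):
    return (f - 32.0) * 5.0 / 9.0 + 273.15

t_low, t_high = f_to_k(-135.8), f_to_k(159.3)         # ~180 K and ~344 K

mean_t = 0.5 * (t_low + t_high)
flux_of_mean_t = SIGMA * mean_t ** 4                  # sigma * (mean T)^4
mean_of_fluxes = 0.5 * SIGMA * (t_low ** 4 + t_high ** 4)   # mean of sigma * T^4

print(f"range {t_low:.0f} K to {t_high:.0f} K, mean {mean_t:.0f} K")
print(f"sigma*(mean T)^4  = {flux_of_mean_t:.0f} W/m^2")
print(f"mean of sigma*T^4 = {mean_of_fluxes:.0f} W/m^2")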

PabloNH
April 23, 2017 1:29 am

“The apparent record low temperature is -135.8° F and the highest recorded temperature is 159.3° F”
Neither of these is an official measurement; both are surface (not atmospheric) temperatures measured by satellites. The referenced article is a remarkably poor one; besides falsely claiming that these measurements meet WMO standards (they clearly come nowhere close), it also (apparently) confuses the ecliptic with the sun’s equatorial plane.

DHR
Reply to  PabloNH
April 23, 2017 5:51 am

And your point is…?

Clyde Spencer
Reply to  PabloNH
April 23, 2017 11:37 am

PNH,
I provided you with a link so that you can see that I didn’t invent the temperatures. In any event, I didn’t use those extremes in constructing the frequency distribution curve. I used more conservative extremes. I suggest that you take your complaint to those who maintain the website at the link I provided.

george e. smith
Reply to  Clyde Spencer
April 24, 2017 5:19 pm

I think there is an actual North African official high air temperature (in the shade) of 136. x deg F, some time in the 1920s I think, and NASA/NOAA has reported large regions of the Antarctic highlands that quite often get down to -94 deg. C.
In any case I will adopt your +70 deg. C surface high temperature. I have used +60 deg. C but now I will use + 70.
It is SURFACE Temperatures that determine the radiative emittance; not the air Temperatures.
G

Clyde Spencer
Reply to  george e. smith
April 25, 2017 8:18 am

GES,
Would you please convert “136. x deg F” to degrees C so that the Fahrenheit-challenged people can understand? Are you going to have to do some sort of penance for breaking your vow of SI-units exclusivity?

Don K
April 23, 2017 1:34 am

“and the highest recorded temperature is 159.3° F”
Are you sure about that? I thought it is commonly held to be 56.7 C (134 F), at Greenland Ranch (aka Furnace Creek, sort of), CA, on 10 July 1913.
Other than that, a good article, I think. The point about measuring a constantly changing mix of stations and instruments seems to me to be valid. Focusing on the variance (OK, standard deviation) of the data seems shakier. What is at issue is the precision of the mean, not the wild swings in the data. Thought experiment: average vehicle velocity on a stretch of expressway where (most) vehicles zip through most of the time, but crawl at rush hour. Extremely bimodal distribution with a few outliers. Is the mean stable and precise? To me it seems likely to be stable with enough samples. Is that mean a useful number? Not so sure about that. Maybe useful for some things, not for others? And even if meaningful, not necessarily easy to interpret.

tty
Reply to  Don K
April 23, 2017 2:05 am

Those are surface temperatures, which is what satellites can measure. They can also measure average temperatures in deep swaths of the atmosphere (which is what UAH and RSS do). What they can’t do is measure the temperature five feet above ground, which is the meteorological standard.
Incidentally, the reason for the “5 foot” convention is that it is eye-level height for Europeans, so the first meteorologists back in the 17th-18th century found it to be the most convenient level to hang their thermometers.

Reply to  tty
April 23, 2017 9:27 am

Or maybe because the temp right at the ground is extremely high when the sun is shining on it, and very low during a night with a clear sky and low humidity, and bears little relation to what we feel (unless we are walking around barefoot or licking hot asphalt)?

April 23, 2017 1:40 am

The usage of the mean of temperatures was addressed nicely in “Does a Global Temperature Exist” http://www.uoguelph.ca/~rmckitri/research/globaltemp/globaltemp.html
What I did not see addressed is the use of a NON-representative sample of temperature measurements. If you look at the land measurement points (an example was linked here, generalizing from a single point to a huge area), you can quickly find that they are far from a representative sample. Generalizing from a convenience sample to the whole ‘population’ (that is, claiming to be representative of the whole Earth’s surface temperature field) is anti-scientific.
Despite this, it’s what they do in the climastrological pseudo science.

Reply to  Adrian Roman
April 23, 2017 9:29 am

An arbitrary grid pattern in three dimensions would seem to be the way to go about getting a real idea of what is going on.

Clyde Spencer
Reply to  Adrian Roman
April 23, 2017 11:43 am

Adrian,
As Anthony knows all too well, there are numerous problems with the temperature database. How about the ASOS system at airports, with a large percentage of impervious surfaces, recording temperatures to the nearest degree F, converting to degrees C and reporting to the nearest 0.1 degree?
What passes for good science is, unfortunately, too much like “Ignore that man behind the curtain!”

MarkW
Reply to  Adrian Roman
April 24, 2017 7:53 am

One problem at a time.
Once everyone has a good grasp of the issues surrounding averaging this kind of data, then we can move on to problems with the data itself.

tty
April 23, 2017 1:54 am

As for temperatures being normally distributed: they aren’t, nor are other climatological parameters. Many show Hurst-Kolmogorov behaviour; others are probably AR(N) or ARMA processes.
Note that this fact invalidates much (most?) of the inference based on climatological data. For example, the confidence levels based on standard deviations seen in almost all climatological papers are only valid for normally distributed data (yes, you can calculate confidence levels for other distributions, but you must know what the distribution is to do it).

Don K
Reply to  tty
April 23, 2017 3:53 am

“As for temperatures being normally distributed: they aren’t, nor are other climatological parameters….”
AFAIK, that’s correct. Moreover, I suspect that very few parameters of interest in any scientific field are actually normally distributed in practice. With one important exception.
The Central Limit Theorem says that so long as a number of mostly incomprehensible conditions are met, the computed mean of a large number of measurements will be close to the actual mean, and the discrepancy between the computed and actual mean will be normally distributed. Provided those mostly incomprehensible conditions are met, the underlying distribution doesn’t have to be normal. In this case one would suspect that the Standard Deviation of the data has little meaning, because the distribution of the values almost certainly isn’t remotely normal. But, as I understand it, because the error in estimating the mean is normally distributed, the Standard Error of the Mean could still be a meaningful measure of how good our estimate of the mean is.
However, I’m quite sure that does not say or suggest that the mean of a lot of flaky numbers is somehow transformed into a useful value by the magic of large numbers.
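A small numerical illustration of that distinction, using a deliberately skewed (exponential) synthetic parent distribution as an assumption: the individual values are nowhere near normal, but the sample mean tightens roughly as 1/sqrt(n) and its skewness shrinks toward the normal shape.

import numpy as np

rng = np.random.default_rng(4)

def sample_means(n, reps=20000):
    # Exponential parent: strongly skewed, clearly non-normal; mean 1, sd 1.
    return rng.exponential(scale=1.0, size=(reps, n)).mean(axis=1)

for n in (5, 50, 500):
    m = sample_means(n)
    skew = float(((m - m.mean()) ** 3).mean() / m.std() ** 3)
    print(f"n={n:4d}: sd of sample means {m.std():.3f} "
          f"(1/sqrt(n) = {1/np.sqrt(n):.3f}), skewness {skew:+.2f}")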

Nick Stokes
Reply to  Don K
April 23, 2017 4:03 am

None of this means much unless you come to terms with the fact that they are averaging anomalies. Fluctuations about a mean are much more likely to be approx normal than the quantities themselves.
But yes, the CLT does say that the sample mean will approach normality, whatever the sample distributions. Most of the theory about the distribution of the mean relies on the scaling and additivity of variance, which is not special to normal distributions.

Dr. S. Jeevananda Reddy
Reply to  Don K
April 23, 2017 5:26 am

Anomalies are derived from the absolute values only. Tampering with the absolute values carries through into the anomalies.
Dr. S. Jeevananda Reddy

Reply to  Don K
April 23, 2017 6:44 am

NS: “None of this means much unless you come to terms with the fact that they are averaging anomalies. Fluctuations about a mean are much more likely to be approx normal than the quantities themselves.”
If that is the case, why all the adjustments to past temperature readings? The absolute values of the anomalies shouldn’t change if you are right.

fh
Reply to  Don K
April 23, 2017 8:06 am

There seems to be a bit of inverse thinking about averages and normality. The act of taking a mean does not endow any normality on the underlying distribution or the distribution of deviations from the mean. Normality is a property of the underlying numbers, not of the act of taking a mean (or average). To see an example of a mean which does not have normal variance, take some numbers, say 10,000 of them, from a uniform random distribution from 0 to 1. In Matlab this would be x = rand(10000,1). Now take the mean of x, i.e. y = mean(x). Then calculate the deviations d = x - y, i.e. x - mean(x). Now look at the deviations d. First do a histogram, and see that it is distinctly non-normal. Perhaps do a qqplot and see that it is definitely not normal. If you want, do it with 10^6 numbers and see that it never approaches normality. The property of normality is a property of the distribution of numbers one is considering; it doesn’t magically appear by taking an average. To determine normality, one needs to look at the distribution of the numbers themselves, in this case presumably the distribution of global temperatures over which one wished to compute an average.
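The procedure described above can be re-expressed in Python as a hedged sketch (the Matlab calls are the commenter’s; everything below is an illustrative translation, not their code). It shows both halves of the story: deviations of uniform draws from their sample mean keep the flat, non-normal shape of the parent, while the sample mean itself, over repeated samples, does look close to normal.

import numpy as np

rng = np.random.default_rng(5)

def excess_kurtosis(v):
    z = (v - v.mean()) / v.std()
    return float((z ** 4).mean() - 3.0)     # ~0 for a normal, ~-1.2 for a uniform

x = rng.uniform(0.0, 1.0, 10_000)
d = x - x.mean()                            # deviations from the sample mean
print("deviations:   excess kurtosis", round(excess_kurtosis(d), 2))      # ~ -1.2, still uniform

# Distribution of the sample mean itself over many repeated samples:
means = rng.uniform(0.0, 1.0, (10_000, 1_000)).mean(axis=1)
print("sample means: excess kurtosis", round(excess_kurtosis(means), 2))  # ~ 0, near normal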

tty
Reply to  Don K
April 23, 2017 9:43 am

“But yes, the CLT does say that the sample mean will approach normality, whatever the sample distributions”
No it does not. It says that this is true for independent random variables with a finite expected value and a finite variance. The two latter conditions are fulfilled in climatology; the first emphatically is not.

Don K
Reply to  Don K
April 24, 2017 2:54 am

Nick. Yes, I think you’re correct that averaging anomalies does change the situation, especially since they seem to average monthly anomalies, not anomalies versus the complete data set. I’m not so sure about “Fluctuations about a mean are much more likely to be approx normal than the quantities themselves.” Could be so, but I think it’s possible to imagine pathological cases where the fluctuations about the mean are not normally distributed and will never converge toward a Gaussian distribution. I’ll have to think about whether that means anything even if it is true.

Nick Stokes
Reply to  Don K
April 24, 2017 3:54 am

Don,
” I’ll have to think about whether that means anything even it it is true.”
Yes, I don’t think anything really hangs on individual stations being normally distributed. But the reason I say more likely is mainly that anomaly takes out the big seasonal oscillation, which is an obvious departure.

Richard Saumarez
Reply to  tty
April 23, 2017 4:02 am

That is a very good point.
The Gaussian distribution is formed when you add many independent, randomly distributed variables (usually with the same mean).
The assumption that temperature follows a Gaussian distribution, and that there is a mean temperature, assumes random variation about that mean.
There is no a priori reason to assume this, especially when the observations are biased due to latitude, insolation, vegetation etc. What is more likely is that the errors in measurement are random, and so a local estimate of the temperature has a normal distribution.

Reply to  tty
April 23, 2017 9:39 am

The big takeaway from all of this, to anyone for whom the above is incomprehensible mumbo jumbo, is that the claimed accuracy and precision numbers used in so-called climate science are completely and 100% full of hot crap, and we all know it.
And anyone who does not know it is deluded or an ignoramus.
And the reasons that this is so include everything from bad data recording methods, oftentimes shoddy and incomplete records, questionable and unjustified analytical techniques, and just plain making stuff up.

Mindert Eiting
Reply to  tty
April 23, 2017 1:56 pm

Interesting point. May I add that in those skewed distributions mean and variance are dependent?

Clyde Spencer
Reply to  Mindert Eiting
April 23, 2017 4:01 pm

Mindert and tty,
I haven’t absorbed all of this yet, but I thought that you and others might find this link to be of interest: https://www.researchgate.net/publication/252741800_Hurst-Kolmogorov_dynamics_in_paleoclimate_reconstructions

tty
Reply to  Mindert Eiting
April 24, 2017 5:09 am

Clyde Spencer:
I am familiar with the paper and with the “Hurst-Kolmogorov phenomenon”. Unfortunately the same can’t be said of most “climate scientists”.

Geoff Sherrington
Reply to  tty
April 23, 2017 7:46 pm

When I first started my science career, in the 1960s I was helped by a wise and experienced statistician who said “First, work out the form of your distribution.”
There are many practical applications where the form does not matter because the outcome need only be approximate.
In climate work, a real problem arises, as Clyde notes, when you connect temperature observations to physics. As a simple example, it is common to raise T to the power 4. Small errors in the estimate of T lead to consequences such as large uncertainty in estimates of the radiation balance.
This balance is important to a fairly large amount of thinking about the whole global warming hypothesis.
An extended example: if one uses a global average temperature derived from a warmer geographic selection of observing stations and then compares it to one from a cooler selection, the magnitude of their difference is large enough to conclude that the globe is cooling rather than warming, or vice versa, thereby throwing all global warming mathematical model results into doubt. Or more doubt.
It is time for a proper, comprehensive, formal, authorised study of accuracy and precision in the global temperature record. It is easy to propose, semi-quantitatively, that half of the alleged official global warming to date is instrumental error. (Satellite T is a different case).
There has been far too much polish put on this temperature turd.
Geoff

April 23, 2017 2:09 am

Reporting global average temperature anomalies at 0.1 °C as settled proof of manmade weather extremes rockets the cAGW movement beyond parody.

DWR54
April 23, 2017 2:12 am

Surely any sampling error would be in both directions, not just one? A temperature is just as likely to be misread low as high. In that case, if there really were no underlying change, you would expect the low and high errors to cancel out. There would be no discernible trend.
Yet every producer of global temperatures, including satellite measurements of the lower troposphere, shows statistically significant warming over the past 30 years. The probability of that happening by chance alone, of each producer being wrong and all in the same direction, seems pretty remote.

Reg Nelson
Reply to  DWR54
April 23, 2017 2:43 am

Phil Jones was finally forced to admit that there was no significant warming during the pause. You are clearly wrong on this one.
The satellite data was first ignored and then attacked, because it did not fit the confirmation bias that was clearly evident in the Climategate emails. Likewise, USCRN and ARGO were ignored when they did not fit the Global Warming meme.
Scientists who conspire to dodge FOIA requests and to delete emails and data are not really scientists.

DWR54
Reply to  Reg Nelson
April 23, 2017 3:33 am

Reg
I’m referring to the past 3 decades, 30 years, which is the averaging period the above author specified. Every global temperature data set we have, including lower troposphere satellite sets, shows statistically significant warming over that period. There are differences in the slope, but these are relatively small over longer time scales. UAH is the smallest, but still statistically significant over that time (0.11 +/- 0.09 C/dec): http://www.ysbl.york.ac.uk/~cowtan/applets/trend/trend.html
The question is, if there really was no significant warming trend, how come all these different producers managed to make the same error?

Reply to  Reg Nelson
April 23, 2017 9:49 am

And for the thirty years before that it was cooling, while CO2 was rising, and before that it was warming, when CO2 was hardly changing at all.
And for the past nearly twenty years, during which time 30% of all global emissions of CO2 have occurred, temps have been largely flat, outside of a couple of big El Niño-caused spikes.
Looking at all proxy reconstructions and historical records, the temp of any place, and of all places combined, has always been rising or falling, in patterns with many distinct periods.
The reason we are all discussing this is not the if, but the why.

Reply to  DWR54
April 23, 2017 3:06 am

“Surely any sampling error would be in both directions, not just one?”
Not if there is a bias in the convenience sampling. Example: you interview rich people at a party for rich people about how much wealth they have. You conclude from your sample that everyone on Earth is rich, because ‘sampling error is in both directions’.
Not if there is a systematic error. Example: you use digital thermometers built on the same principle, with a sensor that degrades over time in a way that shows ‘warming’. They might not degrade at the same rate, but you can imagine what happens to the measurements.
Not if there is a confirmation bias in the ‘correcting’ of data. Do I need to give an example of this?
Do I need to go on?
There is no physical law that forces ‘errors to cancel out’. If that were true, you would not need science to find the truth. You could just look into goat entrails and average the results, hoping to ‘average out the errors’ in your prediction.
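[A minimal sketch of the systematic-error point above, in Python, using made-up drift rates and noise levels for a hypothetical sensor network; it shows only that a shared-sign systematic error survives averaging while zero-mean random noise does not.]

```python
import random

random.seed(0)

N_SENSORS = 100
TRUE_TEMP = 15.0   # hypothetical constant true temperature, deg C

# Each sensor drifts warm at its own random rate: a systematic, one-sided error.
drift_rates = [random.uniform(0.0, 0.02) for _ in range(N_SENSORS)]   # deg C per year

for year in (0, 10, 20):
    readings = [TRUE_TEMP + rate * year + random.gauss(0.0, 0.2)      # drift + random noise
                for rate in drift_rates]
    network_mean = sum(readings) / N_SENSORS
    print(f"year {year:2d}: network mean = {network_mean:.2f} C (truth {TRUE_TEMP} C)")

# The random +/-0.2 C noise largely cancels across 100 sensors; the drift,
# which has the same sign everywhere, accumulates into a spurious trend.
```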

DWR54
Reply to  Adrian Roman
April 23, 2017 3:46 am

Adrian Roman
As I understand it the temperature stations used to compile global surface temperature data are by no means universal but nevertheless are fairly widespread. This is especially the case in the BEST data, which uses around 30,000 different stations, I believe. Then there are the satellite measurements of the lower troposphere. UAH claims to provide global cover for all but the extreme high latitudes. The satellite data also show warming over the past 30 years. Less than the surface, but still statistically significant.
You mention possible systematic defects with instruments. Why should any such defect lead to an error in one direction only (warming in this case)? An instrument defect is surely just as likely to lead to a cool bias as a warm one. Which brings us back to the same question: why are all these supposed defects, sampling errors, etc. all pointing in roughly the same direction according to every global temperature data producer over the past 30 years of measurement?

Reply to  Adrian Roman
April 23, 2017 9:56 am

Hey Adrian mon,
Don’t you be givin’ no short shrift to the signs we be having from rollin’ dem chicken bones, you know mon.
Ay, and we got the Donald Trump voodoo dolls rolling over a hot bed o’ coals too, you know mon.
Goat entrails…pffftt!

Reply to  Adrian Roman
April 23, 2017 2:52 pm

DWR54, I gave some examples of how it can happen, so one could comprehend that errors do not necessarily ‘average out’. I did not claim anything specific to the particular methodology of the climastrological pseudo-science. I could go into that, too, but since you didn’t figure out the simple examples, I doubt you would figure out the more complex situations.

Reply to  Adrian Roman
April 23, 2017 2:55 pm

Yes, Menicholas, goat entrails. Or a pile of shit. Or dices, if you prefer. Just claim that by averaging the ‘answers’ the errors will be averaged out.

MarkW
Reply to  Adrian Roman
April 24, 2017 7:59 am

Adrian, die.
PS: I’m not attempting to insult Adrian, just reminding him that the plural of dice is die.

Reply to  Adrian Roman
April 24, 2017 10:41 am

Well, it is used sometimes: https://en.wiktionary.org/wiki/dices I’m not a native English speaker, so I do make quite a lot of mistakes.

MarkW
Reply to  Adrian Roman
April 24, 2017 2:07 pm

Adrian, it’s a slow morning and it looks like my attempts at humor are falling flat today.

Reply to  Adrian Roman
April 24, 2017 11:41 pm

>>
MarkW
April 24, 2017 at 7:59 am
Adrian, die.
PS: I’m not attempting to insult Adrian, just reminding him that the plural of dice is die.
<<
It must be an inside joke, because I thought dice was the plural of die.
Jim

Richard M
Reply to  DWR54
April 23, 2017 6:57 am

Same ridiculous nonsense from DWR54. You can continue your silly denial or accept reality. It doesn’t matter. It is clear there has been no warming in the past 20 years and smearing warming from the previous 20 years into the last 20 years only shows how desperate you are to create warming when it is clear none exists in recent satellite data.

ferdberple
Reply to  DWR54
April 23, 2017 10:35 am

The probability of that happening by chance alone
========================
then what caused temperatures to rise from 1910 to 1940? or what caused temperatures to drop during the LIA?
The answer is clear. WE DON’T KNOW. The problem is Human Superstition. Humans in general blame the Gods and Other Humans for anything in the Natural world that is not understood. However, the solution is always the same.
primitive culture – Climate is changing – the gods are angry – solution – human sacrifice
medieval culture – Climate is changing – witchcraft and sin – solution – human sacrifice
modern culture – Climate is changing – pollution and CO2 – solution – human sacrifice

Gary Pearse
Reply to  ferdberple
April 23, 2017 4:41 pm

Ferd, they burned witches during the LIA because it went with plague and crop failures. They actually blamed climate on people! Nowadays, they… er… blame climate on people and this is what they call PROGRESSIVE thought.

Clyde Spencer
Reply to  DWR54
April 23, 2017 11:57 am

DWR54,
There are some unstated assumptions in your statement. If something is being “misread”, then that is a systematic error. It is quite likely if there is an issue of parallax in reading a thermometer. The height of the observer will affect whether the systematic error is high or low. Also, the position of the mercury in the glass column will affect parallax-reading errors, so they will NOT cancel out.
Random errors will result from interpolation, which can also be affected by parallax.
Yes, the records seem to indicate a general warming trend, particularly at high latitudes and in urban areas. But what I’m fundamentally questioning is the statistical significance of temperatures commonly reported to two or three significant figures to the right of the decimal point.

Mindert Eiting
Reply to  Clyde Spencer
April 23, 2017 2:16 pm

And here we have, Clyde, the problem of the decision procedure once devised by Ronald Fisher: the significance (at a certain level) depends on sample size. With one million surface stations, almost every change of temperature would be significant. Let’s talk about the size of the changes.

Gary Pearse
Reply to  Clyde Spencer
April 23, 2017 5:02 pm

DWR is also unaware of a clear 60-year cycle, or at least he was until it was explained that the rise in temperature from 1970 to 1998 was preceded by 30 years of cooling that had the worriers projecting an unfolding ice age. After 1998 it flattened out again and was beginning to decline after 2005. There was no warming for 20 years, and the record keepers chiseled and pinched to alter this; finally a fellow at NOAA, just ready to retire, erased the Pause before he went.
The Pause didn’t definitively kill the CO2 idea, perhaps, but it definitely reduced its effect to a minor factor (perhaps one that slowed the natural down-cycle); and there has been so much fiddling with the temperature record that this is another issue on top of the error question.

Reply to  Clyde Spencer
April 23, 2017 10:47 pm

And I am sure I am not the only one who recalls that for many a year, warmistas argued vociferously that there never was any cooling scare, even going so far as to state that that whole notion was due to a Newsweek article.
Just another in a decades long series of warmista talking points, arguments, and predictions that have been completely disproven, falsified, or shown to be made up nonsense.

MarkW
Reply to  DWR54
April 24, 2017 7:56 am

Care to demonstrate that your supposition is the case? Many have presented evidence that most errors result in readings that are too warm.

tty
April 23, 2017 2:20 am

And how do you know it is “statistically significant”? Read my 1.54 am post above.

Dr. S. Jeevananda Reddy
April 23, 2017 3:00 am

Temperature is measured to the first place of decimal only. While averaging the data — daily, monthly, yearly, state, country, globe, etc. — the averaging process is repeated.
(maximum + minimum)/2 = average
(35.6 + 20.1)/2 = 27.85 → 27.9
(35.7 + 20.1)/2 = 27.9
(35.6 + 20.7)/2 = 28.15 → 28.1
(35.6 + 20.9)/2 = 28.25 → 28.3
Over the global average, this type of adjustment goes on and on.
Dr. S. Jeevananda Reddy
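[A minimal sketch of the rounding step Dr. Reddy describes, in Python with invented daily max/min values; whether the 0.05-degree nudges cancel or accumulate depends on the rounding rule and on the data, which is the point being raised.]

```python
import random

random.seed(1)

# Hypothetical daily max/min values, each already recorded to one decimal place.
days = [(round(random.uniform(30.0, 38.0), 1), round(random.uniform(18.0, 24.0), 1))
        for _ in range(365)]

exact_means   = [(tmax + tmin) / 2 for tmax, tmin in days]   # keep the trailing 0.05s
rounded_means = [round(m, 1) for m in exact_means]           # report each day to 0.1

annual_exact   = sum(exact_means) / len(exact_means)
annual_rounded = sum(rounded_means) / len(rounded_means)

print(f"annual mean from exact daily means  : {annual_exact:.3f}")
print(f"annual mean from rounded daily means: {annual_rounded:.3f}")
print(f"difference                          : {annual_rounded - annual_exact:+.3f}")
# Each daily value can shift by up to 0.05 C at this one stage alone, and the
# same rounding is then repeated at the monthly, regional, and global stages.
```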

Nick Stokes
April 23, 2017 3:13 am

“It is obvious that the distribution has a much larger standard deviation than the cave-temperature scenario and the rationalization of dividing by the square-root of the number of samples cannot be justified to remove random-error when the parameter being measured is never twice the same value.”
You are completely off the beam here. Scientists never average absolute temperatures. I’ve written endlessly on the reasons for this – here is my latest. You may think they shouldn’t use anomalies, but they do. So if you want to make any progress, that is what you have to analyse. It’s totally different.
Your whole idea of averaging is way off too. Temperature anomaly is a field variable. It is integrated over space, and then maybe averaged over time. The integral is a weighted average (by area). It is the sum of variables with approximately independent variation, and the variances sum. When you take the square root of that for the total, the range is much less than for individual readings. Deviations from independence can be allowed for. It is that reduction of variance (essentially sample averaging) that makes the mean less variable than the individual values – not this repeated measurement nonsense.
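[A minimal numerical sketch, in Python, of the variance reduction being claimed, under the idealised assumptions of equal weights and fully independent station anomalies; how well those assumptions hold for real stations is the point in dispute in this thread.]

```python
import random, statistics

random.seed(2)

STATION_SD = 2.0    # assumed spread of an individual station's monthly anomaly, deg C
N_STATIONS = 400
N_TRIALS   = 2000   # simulated months, just to estimate the spread of the mean

trial_means = []
for _ in range(N_TRIALS):
    anomalies = [random.gauss(0.0, STATION_SD) for _ in range(N_STATIONS)]
    trial_means.append(sum(anomalies) / N_STATIONS)    # equal-weight average

print(f"SD of individual anomalies       : {STATION_SD:.3f}")
print(f"SD of the {N_STATIONS}-station average  : {statistics.stdev(trial_means):.3f}")
print(f"theoretical sigma / sqrt(N)      : {STATION_SD / N_STATIONS ** 0.5:.3f}")
# About 0.1 deg C instead of 2 deg C, but only if the independence and
# equal-weighting assumptions actually hold for the real station network.
```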

Reg Nelson
Reply to  Nick Stokes
April 23, 2017 3:35 am

Pretty sure the Min/Max temps used in the pre-satellite era are “absolute”, averaged, and manipulated for political reasons.
Isn’t that completely apparent? Have you read the Climategate emails, Nick?
How can you ignore the corruption of Science?
Why split hairs over something so corrupt?

Reply to  Reg Nelson
April 23, 2017 4:04 am

“How can you ignore the corruption of Science?”
From my readings of his work here, it seems that is his job. It is hard to get a man to see a thing when his paycheck depends on his not seeing it. (the corruption that is)

Reply to  Nick Stokes
April 23, 2017 6:55 am

Ask yourself why past temperatures are adjusted at all if the anomalies are all that is important.

Reply to  Nick Stokes
April 23, 2017 8:29 am

Not if you break Identicality, Nick. That’s basic mathematical theory (CLT). Also there’s Nyquist to consider.
Temperature anomalies are a theoretical construct not a realistic one. So any conclusions are purely hypothetical.
Which means nothing in reality.

Reply to  Nick Stokes
April 23, 2017 10:00 am

Once data is adjusted, it is not properly called data anymore at all.
At that point, it is just someone’s idea or opinion.

ferdberple
Reply to  Nick Stokes
April 23, 2017 11:30 am

square root of that for the total the range is much less than for individual readings
=====================
precisely what everyone is complaining about. Just because the 30 year average of temperature on this day at my location is 19C doesn’t mean that a reading of 19C today is more accurate than a reading of 18C. Yet that is the statistical result of using anomalies, because they artificially reduce the variance.
Say for example I calculated the anomalies using yearly averages versus hourly averages. The anomalies would be further from zero using yearly average and closer to zero using hourly averages. You would artificially conclude that the anomalies calculated using the hourly average were more accurate, because their variance would be much, much smaller.
But in point of fact this would be false, because the original temperature readings were the same in both cases and thus the expected error is unchanged. The anomalies have merely made the error appear smaller. As such, the statistical basis for Climate Science lacks foundation.
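[A minimal sketch of the comparison described above, in Python with a synthetic daily series; the anomalies referenced to a finer baseline do have a much smaller spread, even though the simulated measurement noise in the underlying readings is identical in both cases.]

```python
import math, random, statistics

random.seed(3)

N_YEARS = 10
days = range(365 * N_YEARS)

# Synthetic daily record: seasonal cycle plus noise (made-up amplitudes).
temps = [10.0
         + 8.0 * math.sin(2 * math.pi * (d % 365) / 365)   # seasonal cycle
         + random.gauss(0.0, 0.5)                           # 'measurement' noise
         for d in days]

# (a) anomalies against a single long-term mean
long_term_mean = sum(temps) / len(temps)
anom_coarse = [t - long_term_mean for t in temps]

# (b) anomalies against a day-of-year climatology
clim = [statistics.mean(temps[doy::365]) for doy in range(365)]
anom_fine = [t - clim[d % 365] for d, t in zip(days, temps)]

print(f"SD of anomalies vs long-term mean      : {statistics.stdev(anom_coarse):.2f}")
print(f"SD of anomalies vs day-of-year baseline: {statistics.stdev(anom_fine):.2f}")
# The second spread is smaller only because the baseline absorbs the seasonal
# cycle; the 0.5 deg C of simulated measurement noise is the same in both series.
```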

Clyde Spencer
Reply to  Nick Stokes
April 23, 2017 12:11 pm

Nick,
You claimed, “Scientists never average absolute temperatures.” If that is so, then please explain how the baseline temperature is obtained. How can you obtain an anomaly if you don’t first compute an “average absolute temperature”?
I covered this before: You can arbitrarily define a baseline and state that it has the same precision as your current readings, or average of current readings. However, you still should observe the rules of subtraction when you calculate your anomalies. If you arbitrarily assign unlimited precision to your baseline average, you are still not justified in retaining more significant figures in the anomaly than was in your recent temperature reading, usually one-tenth of a degree.
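[A minimal sketch of the subtraction step, in Python, using the usual root-sum-of-squares propagation for independent uncertainties and invented numbers for the reading and the baseline; the point it illustrates is that the anomaly cannot end up more certain than the single reading that goes into it.]

```python
from math import sqrt

reading   = 14.6     # one monthly reading, resolved to 0.1 deg C
u_reading = 0.05     # assumed standard uncertainty implied by that resolution

baseline   = 13.87   # 30-year baseline for the same station and month
u_baseline = 0.01    # assumed (optimistically small) baseline uncertainty

anomaly   = reading - baseline
u_anomaly = sqrt(u_reading ** 2 + u_baseline ** 2)   # independent-error propagation

print(f"anomaly = {anomaly:.3f} +/- {u_anomaly:.3f} deg C")
print(f"reported at the reading's resolution: {round(anomaly, 1)} deg C")
# However precise the baseline is claimed to be, the anomaly's uncertainty is
# floored by the uncertainty of the current reading it contains.
```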

Nick Stokes
Reply to  Clyde Spencer
April 23, 2017 1:10 pm

” If that is so, then please explain how the baseline temperature is obtained.”
The baseline temperature is the mean for the individual site, usually over a fixed 30 year period. You form the anomaly first, then do the spatial averaging. These are essential steps, and it is a waste of time writing about this until you have understood it.
Subtracting a 30-year average adds very little to the uncertainty of the anomaly. The uncertainty of the global average is dominated by sampling issues. Your focus on measurement pedantry is misplaced.

Mindert Eiting
Reply to  Clyde Spencer
April 23, 2017 2:33 pm

Do you mean this, Nick?
t(ij) = a + b(i) + c(j) + d(ij),
in which t(ij) is the mean temperature as measured by station i in year j. The first term a is a grand mean. The effects b(i) are stations’ local temperatures as deviations from a. The effects c(j) represent the global annual temperatures as deviations from a. Finally, d(ij) is a residual.

RACookPE1978
Editor
Reply to  Mindert Eiting
April 23, 2017 3:48 pm

Mindert Eiting

The effects c(j) represent the global annual temperatures as deviations from a. Finally, d(ij) is a residual.

OK, so let me ask you specifically this question:
What is the “correct” hourly “weather” (2 meter air temperature, dewpoint, pressure, wind speed, wind direction) for a single specific latitude and longitude for every hour of the year if I have 4 years of hourly recorded data?
Now, I need the hourly information of all five measured data points over the year. What is the “correct” average for 22 Feb at 0200, at 0300, and at 0400, etc?
Do you require I average the 4 years of data for Feb 22 at 0200?
Repeat again for 0300, 0400, 0500, etc.
Do a time-weighted average of the previous day’s temperature and next day’s temperature at each hour to smooth variations?
Daily temperatures change each hour, but the information changes slowly over the period of a week – since most storms last less than 3 days. Do you average the previous 3 day and next three days together? Average the previous and next hour with the previous and next day’s hourly data?
I am NOT interested in a minimum and maximum of each day. I DO need to know what the “average” temperature is at 0200 every day of the year when I have only 4 years of data. (At 24 hours/day x 365 x 4, it’s 35,040 hours of “numbers” but no “information.”)
Now, what I really want to build is a theoretical “year” of “average” weather conditions at that single latitude and longitude. (The “real” imaginary year is a series of 24 curve-fit equations that one can determine FROM the list of 365 points for each hour.)

Nick Stokes
Reply to  Clyde Spencer
April 23, 2017 4:29 pm

Mindert,
I’ve set out a linear model here, more recent version here. I write it as
t_smy = L_sm + G_my + e_smy
where s, m, y index station, month, and year; L are the station offsets (climatologies), G is the global anomaly, and e is the random error (which becomes the residual). It’s what I solve every month.

Mindert Eiting
Reply to  Clyde Spencer
April 23, 2017 5:16 pm

RAC and Nick, thanks for the comments. I did something very unsophisticated. The matrix t(ij) has numerous missing values because stations exist (like human beings) for a limited number of years. So I wrote a little program estimating the station effects iteratively. As far as I remember, the solution converged in fifteen rounds when the estimates did not differ more than a small amount in two rounds. Next, I made pictures of the annual effects from about 1700 till 2010. I cannot guarantee their worth but it was a funny job to do.
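[A minimal sketch, in Python, of the sort of alternating estimation described above, on a made-up station-by-year table with gaps; it is not Nick Stokes’ or Mindert Eiting’s actual code, just the general idea of iterating station offsets and year effects until they settle.]

```python
import random

random.seed(4)

N_STATIONS, N_YEARS = 50, 40
true_station = [random.uniform(-5, 15) for _ in range(N_STATIONS)]   # local 'climatologies'
true_year    = [0.02 * j for j in range(N_YEARS)]                    # small synthetic trend

# Incomplete station-by-year table: most stations report only a sub-span of years.
data = {}
for i in range(N_STATIONS):
    if i == 0:
        start, end = 0, N_YEARS                         # one station spans the whole record
    else:
        start = random.randrange(0, N_YEARS // 2)
        end   = random.randrange(N_YEARS // 2, N_YEARS) + 1
    for j in range(start, end):
        data[(i, j)] = true_station[i] + true_year[j] + random.gauss(0, 0.3)

station = [0.0] * N_STATIONS    # estimated station offsets, b(i)
year    = [0.0] * N_YEARS       # estimated year effects, c(j)

for _ in range(15):             # alternate until the estimates stop moving
    for i in range(N_STATIONS):
        resid = [t - year[y] for (s, y), t in data.items() if s == i]
        station[i] = sum(resid) / len(resid)
    for j in range(N_YEARS):
        resid = [t - station[s] for (s, y), t in data.items() if y == j]
        year[j] = sum(resid) / len(resid)

def centred(v):
    m = sum(v) / len(v)
    return [x - m for x in v]

# Year effects are only defined up to a constant, so compare after centring both.
errors = [abs(a - b) for a, b in zip(centred(year), centred(true_year))]
print("largest error in a recovered year effect:", round(max(errors), 3))
```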

commieBob
April 23, 2017 3:19 am

I would be surprised if the cave temperature were the arithmetic average of the outside temperatures. My wild-ass guess is that rms would work better.
The temperature of the cave is the result of a flow of heat energy (calories, BTU). The flow of heat over time is power. Power is calculated using rms.
I have no idea how much difference it will make. It probably depends on local conditions.

commieBob
Reply to  commieBob
April 23, 2017 3:49 am

Oops. If negative values are involved, rms will give nonsensical results.

Duncan
Reply to  commieBob
April 23, 2017 4:25 am

That is why absolute temperature is (or should be) used and is a principal parameter in thermodynamics. Zero and 100 C are arbitrary points based on the freezing/boiling point of one type of matter at our average atmospheric pressure (i.e. 101 kPa). At different pressures these boundaries change dramatically.

Kaiser Derden
Reply to  commieBob
April 23, 2017 10:30 am

the air in the cave (which is measured) doesn’t move and is surrounded by heavy insulation … it really doesn’t matter what the temperature is outside … the energy just can’t transfer between the outside and the cave …

Clyde Spencer
Reply to  commieBob
April 23, 2017 12:16 pm

commieBob,
I acknowledged that the situation is more complex than a simple average of the annual above-ground temperatures. It was simply offered as an analogy for the difference in results for sampling on a parameter with very little variance versus sampling global temperatures.

Clyde Spencer
Reply to  commieBob
April 23, 2017 12:27 pm

Forrest,
I suspect that one will find that there is a depth in a mine at which the surface influence is swamped by the upwelling geothermal heat and there will cease to be a measurable annual variation. I’m not familiar with the technique of borehole temperature assessments of past surface temperatures, but I’m reasonably confident that there is a maximum depth and a maximum historical date for which it can be used.

Reply to  commieBob
April 23, 2017 12:49 pm

Ground water too seems to represent a rough average of the annual temp for a given location, although the deeper the source aquifer of the water, the longer ago the average that applies may be.
Similar for caves?
Very deep ones may be closer to what the temp was a long time ago.

Reply to  Menicholas
April 23, 2017 12:52 pm

It appears my recollection from my schoolin’ thirty some years ago is fairly close to what is thought to be the case:
“The geothermal gradient varies with location and is typically measured by determining the bottom open-hole temperature after borehole drilling. To achieve accuracy the drilling fluid needs time to reach the ambient temperature. This is not always achievable for practical reasons.
In stable tectonic areas in the tropics a temperature-depth plot will converge to the annual average surface temperature. However, in areas where deep permafrost developed during the Pleistocene a low temperature anomaly can be observed that persists down to several hundred metres.[18] The Suwałki cold anomaly in Poland has led to the recognition that similar thermal disturbances related to Pleistocene-Holocene climatic changes are recorded in boreholes throughout Poland, as well as in Alaska, northern Canada, and Siberia.”
https://en.wikipedia.org/wiki/Geothermal_gradient

Reply to  Menicholas
April 23, 2017 12:53 pm

This would seem to indicate that temps in the tropics changed little, if at all, during the glacial advances.

D B H
April 23, 2017 3:41 am

Hells bells…..six beers (literally) and having read this article….it actually makes sense, and I (maybe) actually understood the points being made.
Maybe, just maybe, some of those ‘march for science’ die hards, should also have a six pack and then read some of this….it worked for me!!!
Oh, go on, give me a hard time…. 🙂

D B H
April 23, 2017 4:22 am

I am a trader in the stock market, and there is NO BS in this arena….for you live or die upon the correct interpretation of correct and available DATA.
Information of and from different time scales in all matters is paramount…period.
Information and DETAIL are though, often thought to be the same…they need not be, and assuming they are, is fatal.
Ditto it would seem, with any sort of analysis of (global) temperatures.
Attempting to ‘average’ temperatures on a global scale, while appearing to be supported by science, is little more than ‘a best effort’.
Do that within my sphere of interest, and you’d end up in the poor house.
I agree with this article’s underlying rationale and conclusions, and despite being anything but scientifically trained, I would attempt to defend them (in general, if not in detail).
I would suggest that this approach is more likely to prove itself robust and viable than its counterpart, i.e. CAGW.
Sorry if this is a bit obtuse, but I’m working under some considerable limitations, as noted in my previous comment.
D B H

Reply to  D B H
April 23, 2017 6:07 am

D B H
Your comment works for me.
Break open another tinnie.

R. Shearer
Reply to  D B H
April 23, 2017 6:15 am

In your opinion then, would aspects of the situation be similar to companies being removed/inserted into the various stock market indices? For example, Sears was removed from the Dow in 1999.

D B H
Reply to  R. Shearer
April 23, 2017 1:42 pm

You are quite correct.
Index charts, such as the one you’ve used as an example, are biased upward – sound familiar? – simply by removing companies that have gone defunct, been taken over, or otherwise failed in the normal course of business.
The chart of an index therefore IS visual data that has been altered, and an average ‘investor’ might be excused for thinking trends are more significant than they really are.
My point (badly made) was that having data come from a single source, and only superficial data at that, does not and cannot allow the observer (non-scientific people like myself) to understand what is truth and what is fiction.
Explain it better (as in the article above) and we can gain a greater understanding of the matter, if only in general and to the limit of our ability to understand.
Superficial data (a stock price chart) can, by those trained, be read and understood… but that would be a rare person indeed.
Supply me a ‘yard stick’ by which I can read and compare the superficial data (like this article above) then I CAN make sense of the information being presented….all by my lonesome, non-scientific, self.
Can I be fooled by that data?
Heck yes, if it were stand alone and not corroborated….but that is not the case.
Robust data CAN be corroborated, supported and replicated.
Bad data…well….ends up creating the (insert expletive here) march of concerned scientists.

Richie
April 23, 2017 5:07 am

It is disingenuous in my view to stipulate 30 years as a climatologically significant period. They “gotcha” with that one. It’s an arbitrary parameter that obscures the inconvenient truth that the planet is well into an interglacial warm spell. The catastrophists’ claim that humans are responsible for the current, pleasantly life-sustaining climate is predicated on everyone’s taking the shortest possible view of climate. That short-sightedness is what led Hansen to predict, back in the ’70s, that global cooling would end civilization through famine and resource wars. This prediction was extremely poorly timed, as the ’70s marked the end of a 30-year cooling cycle. Hansen’s “science” was simply a projection of the recent cold past into the unknowable future. Sound familiar?
Statistics are just as useful for obscuration as for illumination. Perhaps more so. Nothing changes the fact that the temperature data must be massaged in order to produce “meaningful” inferences, and, to torture McLuhan, ultimately the meaning is in the massage.

Clyde Spencer
Reply to  Richie
April 23, 2017 12:22 pm

Richie,
You said, “Statistics are just as useful for obscuration as for illumination.” The last sentence in my third paragraph was, ” Averages can obscure information, both unintentionally and intentionally.”

azeeman
Reply to  Richie
April 23, 2017 7:24 pm

Thirty years is not arbitrary, it’s the length of a professor’s tenure. If the professor is a climatologist, then thirty years is climatologically significant. Any longer and it would cut into his retirement. Any shorter and he would be unemployed late in life.

April 23, 2017 5:36 am

Thank you Clyde Spencer for this important post.
Temperature and its measurement is fundamental to science and deserves far more attention than it gets.
I disagree strongly with Nick Stokes and assert that averaging temperatures is almost always wrong. I will explain in part what is right.
If I open the door of my wood stove and a hot coal with a temperature of roughly 1000 F falls on the floor, does that mean the average temperature of my room is now (68 + 1000)/2 = 534 F? How ridiculous, you say. However, I can calculate the temperature that might result if the coal loses its heat to the room from (total heat)/(total heat capacity). Since the mass of air is many times that of the coal, the temperature will change only by fractions of a degree. To average temperatures one must weight by the heat content.
The property that can be averaged and converted to an effective temperature for air is the moist enthalpy. When properly weighted by density it can usefully be averaged and then converted back to a temperature for the compressible fluid air.
More relevant for analysis of CO2 impact is to use the total energy, which includes kinetic and potential energies, again converted to an equivalent temperature. This latter measure is the one which might demonstrate some impact of CO2-induced change in temperature gradients, and should be immune to effects of interconversion of energy among internal energy, convection, and moisture content.
With the billions of dollars spent on computers and measurements, it is ridiculous to assert that we cannot calculate a global average energy-content-equivalent temperature.
A discussion which contains other references to Pielke and Massen is:
https://aea26.wordpress.com/2017/02/23/atmospheric-energy/
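[A rough sketch of the moist-enthalpy idea, in Python, using textbook constants and the simplest approximation h ≈ cp·T + Lv·q; the exact formulation the commenter has in mind may differ, but the contrast between thermometer temperature and energy content comes through either way.]

```python
CP_DRY = 1005.0    # J/(kg K), specific heat of dry air at constant pressure
LV     = 2.5e6     # J/kg, latent heat of vaporisation of water

def moist_enthalpy(temp_c, mixing_ratio):
    """Approximate specific moist enthalpy, J/kg (water-vapour heat capacity ignored)."""
    return CP_DRY * (temp_c + 273.15) + LV * mixing_ratio

def equivalent_temp_c(h):
    """Temperature a dry parcel would need in order to carry the same enthalpy."""
    return h / CP_DRY - 273.15

# Two parcels with the same thermometer reading but different humidity.
dry_parcel   = moist_enthalpy(30.0, 0.005)   # 30 C, 5 g/kg  (desert-like air)
humid_parcel = moist_enthalpy(30.0, 0.020)   # 30 C, 20 g/kg (tropical-like air)

print(f"dry parcel   : Te = {equivalent_temp_c(dry_parcel):.1f} C")
print(f"humid parcel : Te = {equivalent_temp_c(humid_parcel):.1f} C")
# Same 30 C on the thermometer, yet the humid parcel carries about 37 kJ/kg more
# energy, i.e. the equivalent of roughly 37 extra degrees of dry-air temperature.
```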

Nick Stokes
Reply to  4kx3
April 23, 2017 10:01 am

“does that mean the average temperature of my room is now”
No. As I said above, the appropriate average is a spatial integral – here volume-weighted. As I also said, you also have to multiply by density and specific heat.

Reply to  Nick Stokes
April 24, 2017 9:34 am

Stokes “you also have to multiply by density and specific heat”
Consider that since density = P/(Rm Ta), where Rm is the molecular-weighted gas constant R and Ta is absolute temperature, you are dividing T by Ta; weighting Cp (= 1) for added water leaves (T/Ta) * (1 - 0.863*Q) * Rm/P, where Q is the mixing ratio. Does this make more sense than just T? It does diminish Denver relative to Dallas.

Nick Stokes
Reply to  Nick Stokes
April 24, 2017 9:40 am

“Does this more sense than just T ? “
No. T has an important function. It determines which way heat will flow. It makes you feel hot. Heat content is important too, but I can’t see what could be done with the measure you propose.

R. Shearer
Reply to  4kx3
April 23, 2017 6:18 am

That would inevitably lead to the end of this scam, however.

Reply to  4kx3
April 23, 2017 6:45 am

You make a good argument against using average temperatures, or average anomalies, as a stand-in for global average energy content. However, we’re currently dealing with a shell game and not Texas hold’em, so we first have to concentrate on showing how this game is fixed.

Ron Clutz
April 23, 2017 6:10 am

“We should be looking at the trends in diurnal highs and lows for all the climatic zones defined by physical geographers.”
There is research that follows this direction, using the generally accepted Köppen zones as a basis to measure whether zonal boundaries are changing due to shifts in temperature and precipitation patterns.
The researchers concluded:
“The table and images show that most places have had at least one entire year with temperatures and/or precipitation atypical for that climate. It is much more unusual for abnormal weather to persist for ten years running. At 30 years and more the zones are quite stable, such that there is little movement at the boundaries with neighboring zones.”
https://rclutz.wordpress.com/2016/05/17/data-vs-models-4-climates-changing/

Thomas Graney
April 23, 2017 6:18 am

I think this thread amply demonstrates how poorly people, some people at least, understand statistics, measurements, and how they both relate to science. In their defense, it’s not a simple subject.

Reply to  Thomas Graney
April 23, 2017 6:47 am

Statistics are a way to analyze large amounts of data. What’s being demonstrated here is not how poorly some people understand the subject, but how differences of opinion in the appropriate application of statistical tools will result in vastly different results.

Clyde Spencer
Reply to  Thomas Graney
April 23, 2017 12:31 pm

Thomas,
Would you care to clarify your cryptic statement? Are you criticizing me, or some of the posters, or both?

April 23, 2017 6:19 am

The other point is that CO2 is a constant for any short period of time. CO2 is 400 ppm no matter the latitude, longitude and/or altitude. The temperature isn’t what needs to be measured; it is the impact of CO2 on temperature. Gathering mountainous amounts of corrupted data doesn’t answer that question. We shouldn’t be gathering more data, we should be gathering the right data. Gathering data that requires “adjustments” is a complete waste of time and money, and greatly reduces the validity of the conclusion. If we want to explore the impact of CO2 on temperature, collect data from areas that measure that relationship and don’t need “adjustments.” Antarctica and the oceans don’t suffer from the urban heat island effect. Antarctica and the oceans cover every latitude on earth. CO2 is 400 ppm over the oceans and Antarctica. Collecting all this corrupted data opens the door for corrupt bureaucrats to adjust the data in a manner that favors their desired position. Collecting more data isn’t beneficial if the data isn’t applicable to the hypothesis. The urban heat island isn’t applicable to CO2-caused warming. Don’t collect that data.

April 23, 2017 6:30 am

the difference between the diurnal highs and lows has not been constant during the 20th Century.
B I N G O !
The average of 49 and 51 is 50 and the average of 1 and 99 is 50.
And if you pay attention to the Max and Min temperatures some interesting things drop out of the data.
I’ve spammed this blog too many times with this US Map but it illustrates the point.

Kalifornia Kook
Reply to  Steve Case
April 23, 2017 9:10 pm

This is a really cool map, and contains some clues as to how to duplicate this result – but could you give a few more? This is great information, and combined with the info in the main article regarding how minimums have been going up, it really closes the loop on how temperatures have been rising without having been exposed to higher daytime temperatures. This was a great response!

Kalifornia Kook
Reply to  Steve Case
April 24, 2017 10:30 pm

Haven’t been able to replicate that map. Can someone help me?

April 23, 2017 6:39 am

Nick Stokes
April 23, 2017 at 3:13 am

It is obvious that the distribution has a much larger standard deviation than the cave-temperature scenario and the rationalization of dividing by the square-root of the number of samples cannot be justified to remove random-error when the parameter being measured is never twice the same value.

You are completely off the beam here. Scientists never average absolute temperatures. I’ve written endlessly on the reasons for this – here is my latest. You may think they shouldn’t use anomalies, but they do. So if you want to make any progress, that is what you have to analyse. It’s totally different.

I read your referenced article, and I have to disagree with your logic on the point of location uncertainty. You say “You measured in sampled locations – what if the sample changed?” However, in your example, you are not “changing the sample” — you are changing a sample of the sample. Your example compares that subset of data with the results from the full set of data, to “prove” your argument. In the real world you don’t have the “full set” of data of the Earth’s temperature; the data you have IS the full set, and I think applying any more statistical analyses to that data is unwarranted, and the results unprovable — without having a much larger data set with which to compare them.
In a response to another post, you stated that the author provided no authority for his argument. I see no authority for your argument, other than your own assertion it is so. In point of fact, it seems that most of the argument for using these particular adjustments in climate science is that the practitioners want to use them because they like the results they get. You like using anomalies because the SD is smaller. But where’s your evidence that this matches reality?

Your whole idea of averaging is way off too. Temperature anomaly is a field variable. It is integrated over space, and then maybe averaged over time. The integral is a weighted average (by area). It is the sum of variables with approximately independent variation, and the variances sum. When you take the square root of that for the total the range is much less than for individual readings.. Deviations from independence can be allowed for. It is that reduction of variance (essentially sample average) that makes the mean less variable than the individual values – not this repeated measurement nonsense.

The Law of Large Numbers (which I presume is the authority under which this anomaly averaging scheme is justified) is completely about multiple measurements and multiple samples. But the point of the exercise is to decrease the variance — it does nothing to increase the accuracy. One can use 1000 samples to reduce the variance to +/- 0.005 C, but the statement of the mean itself does not get more accurate. One can’t take 999 samples (I hate using factors of ten because the decimal point just ends up getting moved around; nines make for more interesting significant digits) of temperatures measured to 0.1 C and say the mean is 14.56 +/- 0.005C. The statement of the mean has to keep the significant digits of its measurement — in this case, 14.6 — with zeros added to pad out to the variance: 14.60 +/- 0.005. That is why expressing anomalies to three decimal points is invalid.
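[A minimal numerical sketch, in Python, of the two quantities at issue: the standard error of the mean of many readings recorded to 0.1, versus the resolution of any single reading. The code only computes both numbers; how many digits one is then entitled to report is exactly what the comment above disputes.]

```python
import random, statistics

random.seed(5)

TRUE_VALUE = 14.563     # hypothetical 'true' value being sampled
N = 999

# Simulated readings: true value plus noise, then recorded to one decimal place.
readings = [round(TRUE_VALUE + random.gauss(0.0, 0.3), 1) for _ in range(N)]

mean = statistics.mean(readings)
sem  = statistics.stdev(readings) / N ** 0.5

print(f"mean of {N} readings          : {mean:.4f}")
print(f"standard error of the mean    : {sem:.4f}")
print(f"mean at the reading resolution: {round(mean, 1)}")
# The SEM shrinks as 1/sqrt(N); the resolution of each reading does not.
```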

Reply to  James Schrumpf
April 23, 2017 7:15 am

Measuring Jello® cubes multiple times with multiple rubber yard sticks won’t get you 3 place accuracy.

Reply to  Steve Case
April 23, 2017 10:11 am

No, but the cleanup might be tasty and nutritious.

Nick Stokes
Reply to  James Schrumpf
April 23, 2017 10:18 am

“However, in your example, you are not “changing the sample” — you are changing a sample of the sample.”
Yes. We have to work with the record we have. From that you can get an estimate of the pattern of spatial variability, and make inferences about what would happen elsewhere. It’s somewhat analogous to bootstrapping. You can use part of the sample to make that inference, and test it on the remainder.
None of this is peculiar to climate science. In all of science, you have to infer from finite samples; it’s usually all we have. You make inferences about what you can’t see.

Reply to  Nick Stokes
April 23, 2017 3:34 pm

Two problems that I can see with that approach. First, bootstrapping works where a set of observations can be assumed to be from an independent and identically distributed population. I don’t think this can be said about global temperature data.
Secondly, the method works by treating inference of the true probability distribution, given the original data, as being analogous to inference of the empirical distribution of the sample.
However, if you don’t know how your original sample set represents your actual population, no amount of bootstrapping is going to improve the skewed sample to look more like the true population.
If the sample is from temperate regions, heavy on US and Europe, all bootstrapping is going to give one is a heavy dose of temperate regions/US/Europe. It seems pretty obvious that the presumption of bootstrapping is that the sample at least has the same distribution as the population — something that certainly can NOT be presumed about the global temperature record.
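[A minimal sketch of that objection, in Python, with a toy two-region ‘globe’ and a deliberately lopsided convenience sample; the resampling machinery is standard, but the 90/10 split and the region means are invented for illustration.]

```python
import random, statistics

random.seed(6)

# Toy 'globe': half temperate (mean anomaly +0.2), half tropical (mean anomaly +0.6).
population = ([random.gauss(0.2, 0.3) for _ in range(5000)] +
              [random.gauss(0.6, 0.3) for _ in range(5000)])

# Convenience sample: 90% temperate stations, 10% tropical.
sample = ([random.gauss(0.2, 0.3) for _ in range(900)] +
          [random.gauss(0.6, 0.3) for _ in range(100)])

boot_means = []
for _ in range(2000):
    resample = random.choices(sample, k=len(sample))   # bootstrap resample with replacement
    boot_means.append(statistics.mean(resample))

print(f"population mean        : {statistics.mean(population):.3f}")
print(f"biased sample mean     : {statistics.mean(sample):.3f}")
print(f"mean of bootstrap means: {statistics.mean(boot_means):.3f}")
# The bootstrap quantifies the variability of the sample it is handed; it cannot
# detect, let alone correct, the geographic bias built into that sample.
```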

April 23, 2017 6:47 am

Clyde, I like your ball bearing analogy. Perhaps you could expand on it later in the series.
1) Yes we know that when one micrometer is used to measure one ball bearing (293 mm in diameter), multiple measurements help reduce micrometer measurement errors.
2) Yes we know that when we measure many near identical ball bearings (293 mm in diameter) with one micrometer, the more ball bearings we measure the more accurate the average reading will be.
3) Climate: we measure random ball bearings, whose diameter varies between 250 mm up to 343 mm, and we use 100 micrometers.
4) If we measure 100 of the ball bearings chosen at random twice per day, how does averaging those random ball-bearing measurements increase the measurement accuracy?
5) Replace mm with K and you get the global average temperature problem.
6) Stats gives us many ways to judge a sample size but if the total population for this temperature measurement is 510,000,000 square kilometers, and each temperature sample represented the temperature of one square kilometer, how many thermometers do we need?
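[A rough sketch of the textbook calculation implied by point 6, in Python, under the very strong assumptions of simple random sampling, no systematic error, and a guessed standard deviation; spatial correlation and coverage problems, which dominate in practice, are not addressed by it.]

```python
from math import ceil

sigma = 20.0   # guessed SD of individual 1 km^2 'cell' temperatures, deg C
E     = 0.1    # desired margin of error for the global mean, deg C
z     = 1.96   # 95% confidence

n = ceil((z * sigma / E) ** 2)   # classic sample-size formula for a mean
print(f"simple-random-sample size for +/-{E} C at 95%: {n:,}")
# Roughly 154,000 independent, randomly placed readings; the finite-population
# correction for 510 million cells is negligible at this sample size.
```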

graphicconception
April 23, 2017 6:57 am

My concern is that average temperatures are then averaged again, sometimes many times before a global average is obtained.
First you have min and max, and you average them. Then you have daily figures and you average them; then there is the area weighting (kriging), etc.
OK you can do the math(s) but when you average an average you do not necessarily end up with a meaningful average.
Consider two (cricket) batsmen. A batting average is number of runs divided by number of times bowled out. In the first match, batsman A gets more runs than batsman B and both are bowled out. “A” has the higher average. A maintains that higher average all season and in the final match both batsmen are bowled out but A gets more runs than B. Who has the higher average over the season?
The answer is, You can’t tell.
For instance: A gets 51 runs in match 1 and B gets 50 runs. A is injured and only plays again in the last match. B gets 50 runs and is bowled out in each of the other 19 matches before the last one as well. So A’s average all year was 51 while B’s was 50. In the last match A gets 2 runs and B gets 1.
Average of averages for A gives (51 + 2)/2 = 26.5
Average of averages for B gives (50 + 1)/2 = 25.5
True average for A is 26.5
True average for B is (20*50 + 1)/21 = 47.666..
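[The cricket example above, worked as a short Python sketch: the ‘average of averages’ ignores how many innings sit behind each figure, while the weighted average (total runs over total dismissals) recovers the stated true values.]

```python
# (runs, dismissals) for the two blocks being averaged in the example:
# the season to date, then the final match.
A = [(51, 1), (2, 1)]       # A: one innings averaging 51, then the final (2 runs)
B = [(1000, 20), (1, 1)]    # B: twenty innings at 50 each, then the final (1 run)

def average_of_averages(blocks):
    """Naive: average the per-block averages, ignoring innings counts."""
    per_block = [runs / outs for runs, outs in blocks]
    return sum(per_block) / len(per_block)

def true_average(blocks):
    """Weighted: total runs divided by total dismissals."""
    return sum(r for r, _ in blocks) / sum(o for _, o in blocks)

print("A:", average_of_averages(A), true_average(A))            # 26.5  26.5
print("B:", average_of_averages(B), round(true_average(B), 3))  # 25.5  47.667
# Gridding and area weighting play the same role for temperatures: the weights
# must be carried through every stage of the averaging, not just the final one.
```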

Nick Stokes
Reply to  graphicconception
April 23, 2017 10:21 am

”True average for A is 26.5
True average for B… ”
Obviously, you can tell. You just have to weight it properly. This is elementary.

graphicconception
Reply to  Nick Stokes
April 24, 2017 6:20 am

Thanks Nick, I know how to do it. My concern is that with all the processing that the temperature data is subjected to, I would find it hard to believe that they apply all the necessary weightings – even if they calculate them.
For instance, taking the average of min and max temperatures by summing and dividing by two is already not the best system. An electrical engineer would not quote an average voltage that way.
Shouldn’t you also weight the readings by the local specific heat value? For example by taking into account local humidity? Basically, you need to perform an energy calculation not a temperature one.
Is it known how much area a temperature value represents? If not, where do the weightings come from?
It would be interesting to know just how much the global average surface temperature could be varied just by changing the averaging procedures.
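[A minimal sketch of the first point, in Python, comparing (Tmin+Tmax)/2 with a time-integrated daily mean for a made-up, deliberately asymmetric diurnal curve; the humidity and specific-heat weighting raised above would be a further step again.]

```python
import math

# Invented diurnal cycle: a narrow afternoon peak, long hours near the night-time
# minimum (the cube of the cosine term is what makes the shape asymmetric in time).
def temp(hour):
    return 10 + 15 * (0.5 + 0.5 * math.cos(2 * math.pi * (hour - 14) / 24)) ** 3

hours = [h / 10 for h in range(240)]           # sample every 6 minutes over 24 h
temps = [temp(h) for h in hours]

min_max_mean    = (min(temps) + max(temps)) / 2
integrated_mean = sum(temps) / len(temps)      # crude time integral divided by 24 h

print(f"(Tmin + Tmax) / 2 : {min_max_mean:.2f}")     # ~17.5
print(f"integrated mean   : {integrated_mean:.2f}")  # ~14.7
# For an asymmetric cycle the two 'daily means' differ by a couple of degrees,
# and historical records generally preserve only the first of them.
```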

Nick Stokes
Reply to  Nick Stokes
April 24, 2017 8:23 am

“Is it known how much area a temperature value represents? If not, where do the weightings come from?
It would be interesting to know just how much the global average surface temperature could be varied just by changing the averaging procedures.”

The first is a matter of numerical integration – geometry. I spend a lot of time trying to work it out. But yes, it can be worked out. On the second, here is a post doing that comparison for four different methods. Good ones agree well.

April 23, 2017 7:04 am

It is change in the earth’s energy content that is important — energy flow into and out of the earth system. This seems like a job for satellites rather than a randomly distributed network of thermometers in white boxes and floats bobbing in the oceans.

Reply to  rovingbroker
April 23, 2017 10:12 am

You have no future as a government paid political hac…I mean climate scientist.
That much is clear.

Rick C PE
April 23, 2017 7:11 am

What is global average temperature anyway? How is it defined and how is it measured? As a metrologist and statistician, if you asked me to propose a process for measuring the average global temperature, my first reaction would be that I need a lot more definition of what you mean. When dealing with the quite well defined discipline of Measurement Uncertainty analysis, the very first concern that leads to high uncertainty is “Incomplete definition of the measurand”. Here’s the list of factors that lead to uncertainty identified in the “ISO Guide to the Expression of Uncertainty in Measurement” (GUM).
• Incomplete definition of measurand.
• Imperfect realization of the definition of the measurand.
• Non-representative sampling – the sample measured may not represent the defined measurand.
• Inadequate knowledge of the effects of environmental conditions on the measurement or imperfect measurement of the environmental conditions.
• Personal bias in reading analogue instruments.
• Finite instrument resolution or discrimination threshold.
• Inexact values of measurement standards and reference materials.
• Approximations and assumptions incorporated into the measurement method and procedure.
• Variations in repeated observations of the measurand under apparently identical conditions.
• Inexact values of constants and other parameters obtained from external sources and used in the data-reduction algorithm.
As far as I can tell there is no such thing as a complete “definition of the measurand” (global average temperature). It seems that every researcher in this area defines it as the mean of the data that is selected for their analysis. Then most avoid the issue of properly analyzing and clearly stating the measurement uncertainty of their results.
This is, IMHO, bad measurement practice and poor science. One should start with a clear definition of the measurand and then design a system – including instrumentation, adequate sampling, measurement frequency, data collection, data reduction, etc. – such that the resulting MU will be below the level needed to produce a useful result. Trying to use data from observations not adequate for the purpose is trying to make a silk purse out of a sow’s ear.
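[A minimal GUM-style sketch, in Python, combining illustrative (made-up) standard-uncertainty components for a single air-temperature reading by root-sum-of-squares and expanding with a coverage factor of k = 2; the component values are placeholders, not an actual uncertainty budget for any real station.]

```python
from math import sqrt

# Illustrative standard-uncertainty components for one reading, deg C (invented values).
components = {
    "sensor calibration":       0.10,
    "instrument resolution":    0.03,
    "radiation/siting effects": 0.20,
    "observer/reading bias":    0.05,
    "data-reduction rounding":  0.03,
}

u_combined = sqrt(sum(u ** 2 for u in components.values()))   # root-sum-of-squares
U_expanded = 2 * u_combined                                   # coverage factor k = 2 (~95%)

for name, u in components.items():
    print(f"  {name:<25s} {u:.2f}")
print(f"combined standard uncertainty u_c: {u_combined:.2f} C")
print(f"expanded uncertainty (k = 2)     : {U_expanded:.2f} C")
# Roughly +/-0.5 C for this single reading, before any question of whether the
# station is representative of the region it is taken to stand for.
```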

Reply to  Rick C PE
April 23, 2017 8:29 am

Hear, hear! I have said the same thing, just not in as much detail as you. What is global temperature? Does comparing annual figures, as computed now, say more about weather phenomena in a particular region than about the actual temperature of the earth as a whole? I bet almost none of the authors of climate papers can adequately address the issues brought up here. Precision to 0.001 or even 0.01 when using averages of averages of averages is a joke.

Rick C PE
Reply to  Jim Gorman
April 23, 2017 9:39 am

Having spent an entire 40 yr career in the laboratory measuring temperatures of all kinds of things in many different situations, I can say that over a range of – 40 to + 40 C, it is extremely difficult to achieve uncertainties of less than +/- 0.2 C. I can’t buy the claim that global average temperature can be determined within anything close to even +/- 1 C.

Reply to  Jim Gorman
April 23, 2017 10:40 am

Rick C PE and Jim Gorman. It’s a pleasure to read your opinions on this topic.

Clyde Spencer
Reply to  Rick C PE
April 23, 2017 12:41 pm

Rick,
I agree completely with your advice. However, the problem is that in order for the alarmists to make the claim that it is warming, and doing so anomalously, they have to reference today’s temperatures to what is historically available. There is no way that the historical measurements can be transmogrified to agree with your recommended definition. We are captives of our past. Again, all I’m asking for is that climatologists recognize the limitations of their data and be honest in the presentation of what can be concluded.

Rick C PE
Reply to  Clyde Spencer
April 23, 2017 1:45 pm

Clyde: Yes, obviously we are stuck with historical records not fit for the purpose. I’m not opposed to trying to use this information to try to determine what trend may exist. I just don’t think it should be presented without acknowledgement of the substantial uncertainties that make reliance on this data to support a hypothesis highly questionable.

Clyde Spencer
Reply to  Rick C PE
April 23, 2017 3:46 pm

Rick,
You said, “I just don’t think it should be presented without acknowledgement of the substantial uncertainties that make reliance on this data to support a hypothesis highly questionable.”
And that is the point of this and the previous article!

April 23, 2017 7:28 am

IPCC AR5 glossary defines “surface” as 1.5 m above the ground, not the ground itself.
If the earth/atmosphere are NOT in thermodynamic equilibrium then S-B BB, Kirchhoff, upwelling/downwelling/”back” radiation go straight in the dumpster.

Clyde Spencer
Reply to  Nicholas Schroeder
April 23, 2017 12:45 pm

Nicholas,
We obviously can’t treat the entire globe as being in equilibrium. However, we might be able to treat patches of the surface as being in equilibrium for specified periods of time and integrate all the patches over time and area. Whether that is computationally feasible or not, I don’t know. However, I suspect it is beyond our capabilities.

Peta from Cumbria, now Newark
April 23, 2017 7:45 am

Exactly agree with all (most, will confess I’ve not read every little bit) of the above.
There are soooo many holes in this thing, not least
1. Temperature is not climate
2. Temperature per-se does not cause weather (averaged to make Climate somehow – I ain’t holding any breath) Temperature difference causes weather
This entire climate thing is just crazy.

April 23, 2017 8:02 am

This article completely focuses on data problems with current temperature measurements. It never ties back to CO2’s impact on temperature. 1) The urban heat island is an exogenous factor that requires “adjustments,” and that is just one exogenous factor. 2) CO2 is a constant 400 ppm; constants can’t explain variations. 3) CO2 doesn’t warm water. 4) CO2’s only way to affect climate change is through absorbing 13 to 18 microns, and its impact is largely saturated. 5) CO2 can’t cause record high daytime temperatures; CO2 only traps outgoing non-visible light. Climate science is a science; only data that helps isolate the impact of CO2 on temperature is relevant. The law of large numbers doesn’t apply to corrupt data. Only data that isolates the impact of CO2 on temperature should be used. All this other stuff is academic. Focus on the science, and on how a good experiment would be run. If I were doing an experiment, I wouldn’t be collecting data that needs to be adjusted. The issue isn’t whether we are warming; the issue is whether CO2 causes warming. Global temperatures won’t prove that; they don’t isolate the effect of CO2.

John Shotsky
April 23, 2017 8:05 am

Often, taking things to an extreme helps illustrate a point. If we had a temperature measuring device near Moscow, and another near Rio de Janeiro, both with zero measurement errors, and without urban heat effects, with readings taken hourly for 100 years, we could then average all these numbers and come up with a number. Exactly what would that average mean? Absolutely nothing, it is simply a number with no actual meaning. Throwing more such stations into the mix doesn’t change that. It is still a meaningless number.
Oh, did I forget to mention that the measurement stations are at different altitudes? And different hemispheres?
We could fix that with sea level thermometers, one on the US west coast, and another on the east coast. Same as above, average the readings, and what do you have? Let’s see, one is measuring the temperature of air from over thousands of miles of ocean, and the other thousands of miles of land. Exactly what would this average mean, then? The point is that these averages are meaningless to start with, and throwing more stations in does not add value, regardless of measurement errors. You do not get a ‘better’ average.

April 23, 2017 8:08 am

If climate science was applied to your automobile, and an average temperature of it was contrived, what would knowing that average temp be worth?
Andrew

Olen
April 23, 2017 8:14 am

Cannot read article because of page moving to share this.

John M Tyson
April 23, 2017 8:23 am

I am a pit bull latching onto political fake news about global warming and delighted to have found your site. However, I am neither a scientist nor a mathematician. Is it reasonable to ask you to “translate” important points into lay language? I would like to pass some of your points on to others in language they will understand, as will I if challenged. Thanks for sanity.

[Of course, pull in and ask away. There are many individuals here who know a great deal about a great deal and are always happy to pass their knowledge along to a genuine questioner. Just be polite and clear about what you are looking for, nobody likes ambiguity. . . . mod]

Reply to  John M Tyson
April 23, 2017 10:54 am

John there is much stuff you can use, written for laymen, in my ebook Blowing Smoke. All illustrated. Nothing as technical as this guest post.

Clyde Spencer
Reply to  John M Tyson
April 23, 2017 12:53 pm

John,
You will note that there are no mathematical formulas, much to the chagrin of some regulars such as Windchasers. Some WUWT articles range from just whining about alarmists, to others, say by Monckton, that have mathematical formulas and are fairly rigorous. It seems that there are a lot of retired engineers and scientists who frequent this blog. Therefore, I tried to strike a balance between having my ideas accepted by those with technical backgrounds and not losing intelligent laymen. Sort of a Scientific American for bloggers. 🙂 If you have any specific questions, I’ll try to respond over the next couple of days.

Tom Halla
April 23, 2017 8:26 am

Displaying temperature distributions should be done graphically. Once upon a time, such as when I was in college, that was involved and expensive to print. Graphing software is now cheap and widely available, and a graph will show any skewness of the distribution, and whether trying to use Gaussian statistical tests is reasonable.
I think trying to use as much of the data as possible is a good thing.

April 23, 2017 8:33 am

What is the standard deviation of the absorption of IR by CO2? Temperatures have a huge variation; the physics of CO2’s absorption doesn’t. CO2 at best can explain a parallel shift in temperatures; it can’t explain the huge variation. Once again, tie everything back to how CO2 can explain the observation. CO2 doesn’t cause the urban heat island effect, CO2 doesn’t cause record daytime temperatures, CO2 doesn’t warm water, etc., etc. Stay focused on the hypothesis. Evidence of warming isn’t evidence that CO2 is causing it.

April 23, 2017 8:47 am

Forget about how to represent “reported average temperatures.” There is no average. Forget about data error and precision; they don’t belong. Global temperature is not a random quantity and cannot be understood by statistics meant for analyzing random data.
Now plot the entire temperature curve from 1850 to 2017 on a graph. Use a data set that has not been edited to make it conform to global warming doctrine; HadCRUT3 would fit the bill. On the same graph, also plot the Keeling curve of global carbon dioxide and its ice core extension. Now sit back and contemplate all this. What do you see? The first thing you should notice is that the two curves are very different. The Keeling curve is smooth, no ups or downs. But the temperature curve is irregular: it has its ups and downs and is jagged. Two peaks on the temperature curve especially stand out. The first one is at year 1879, the second one at year 1940.
Warmists keep telling us that global temperature keeps going up because of the greenhouse effect of carbon dioxide. Take a close look at the part of the Keeling curve directly opposite these two temperature peaks. Where is that greenhouse effect? There is no sign that the Keeling curve had anything to do with these two global warm peaks. The problem is that from 1879 to 1910 temperature actually goes down, not up, completely contrary to global warming doctrine. That span is just over 30 years, the magic number that turns weather into climate. There is a corresponding warm spell from 1910 to 1940 that also qualifies as climate, warm climate in fact. But this is not the end. 1940 is a high temperature point and another cooling spell begins with it. That coolth is from the cold spell that ushered in World War II. It stayed cool until 1950, at which point a new warming set in. Global temperature by then was so low that it took until 1980 to reach the same temperature level that existed in 1940.
And in 1980 a hiatus began that lasted for 18 years. The powers that control temperature at IPCC decided, however, to change that stretch into global warming instead. That is a scientific fraud, but they apparently don’t care because they control what is published. By my calculation this act adds 0.06 degrees Celsius to every ten-year stretch of temperature from that point on to the present. I cannot see how statistics can have any use for interpreting such data. I have laid out the data. What we need is for someone to throw out the temperature pirates who spread misinformation about the real state of global temperature.

Peter Sable
April 23, 2017 9:11 am

By convention, climate is usually defined as the average of meteorological parameters over a period of 30 years

Do you have a reference for this?
Knowing that the multi-decadal oscillations (e.g. the PDO) are on the order of 65-80 years, anyone with a modicum of signal processing experience would understand how stupid it is to look at something over a mere 30 years. You need at least double the cycle length, roughly 130-160 years, and with the amount of error in the measurements and the number of overlapping cycles you need hundreds of years of data to make a call on anything.
Of all the metrics you can compute from time-series data, the trend has the widest confidence interval (i.e. the least confidence). It just doesn’t get worse than that. That’s not signal processing 101, but it’s definitely 501 (early grad school). There’s just very little useful information in the lowest-frequency portion of a time series.
reference: http://paos.colorado.edu/research/wavelets/bams_79_01_0061.pdf
Peter
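As a purely illustrative aside, a minimal sketch of that point with a synthetic, zero-trend 70-year oscillation (made-up numbers, not real temperature data): a straight line fitted to a single 30-year window still shows a sizeable apparent trend, while fitting over two full cycles does not.

```python
import numpy as np

# Synthetic, zero-trend "climate" signal: a 70-year oscillation of 0.3 C amplitude.
years = np.arange(140)                       # 140 years of annual values
signal = 0.3 * np.sin(2 * np.pi * years / 70.0)

# Fit a straight line to one 30-year window (years 20-49).
w = slice(20, 50)
slope_30, _ = np.polyfit(years[w], signal[w], 1)
print(f"Apparent trend over 30 years : {slope_30 * 10:+.3f} C/decade")

# Fit over the full record (two complete cycles): the trend collapses toward zero.
slope_140, _ = np.polyfit(years, signal, 1)
print(f"Trend over 140 years         : {slope_140 * 10:+.4f} C/decade")
```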

Clyde Spencer
Reply to  Peter Sable
April 23, 2017 12:57 pm

Peter,
NOAA, NASA, and other organizations use 30-year averages for their baselines, although they use different 30-year intervals. Go to their websites.

April 23, 2017 9:18 am

Jim Gorman
April 23, 2017 at 6:44 am
NS: “None of this means much unless you come to terms with the fact that they are averaging anomalies. Fluctuations about a mean are much more likely to be approx normal than the quantities themselves.”
”If that is the case, why all the adjustments to past temperature readings? The absolute values of the anomalies shouldn’t change if you are right.”

According to a February 9, 2015 Climate Etc. blog titled Berkeley Earth: raw versus adjusted temperature data, “The impact of adjustments on the global record are scientifically inconsequential.”
Adjustments to the data are one of the most frequent criticisms. Is there anything in the above post or comments that justifies data adjustments?

Nick Stokes
Reply to  pmhinsc
April 23, 2017 10:33 am

“Is there anything in the above post or comments that justifies data adjustments?”
No, nor should there be. It is a separate issue. You average temperature anomalies that are representative of regions. Sometimes the readings that you would like to use have some characteristic that makes you doubt that they are representative. So other data for that region is used. Arithmetically, that is most simply done as an adjustment.
The adjustments that are done make very little difference to the global average.

Editor
April 23, 2017 9:32 am

Clyde Spencer ==> Well done on this… the whole idea of LOTI temps (Land and Ocean Temperature Index) is absurd — I have been calling it The Fruit Salad Metric (apples and oranges and bananas).

Any number created by averaging averages of spatially and temporally diverse measurements creates an informational abomination – not a “more precise or accurate figure”. This is High School science and does not require any fancy statistical knowledge at all — my High School science teacher was able to explain this clearly, in twenty minutes and with a couple of everyday examples, to a class of 16-year-olds suffering from hormone-induced insanity. How today’s Climate Scientists can fool themselves into believing otherwise is a source of mystery and concern to me.

Editor
April 23, 2017 9:36 am

Clyde Spencer ==> My piece on Alaska is a good example of averages obscuring information.

Clyde Spencer
Reply to  Kip Hansen
April 23, 2017 4:38 pm

Kip,
When I was in the army, I was assigned to the Cold Regions Research and Engineering Laboratory in 1966. It didn’t snow in Vermont, where I was living, until Christmas Eve. We got 18″ overnight! The next year, we got snow in mid-October during deer hunting season.

Editor
Reply to  Clyde Spencer
April 23, 2017 7:42 pm

Clyde Spencer ==> And the same sort of variability exists at all scales, within the envelope of the boundaries of the weather/climate system. I have an essay in progress on the averaging issue — mathematically. (Been in progress for more than a year…:-)

Clyde Spencer
Reply to  Kip Hansen
April 23, 2017 8:42 pm

Kip,
I look forward to reading your essay.

Editor
Reply to  Clyde Spencer
April 24, 2017 7:51 am

Clyde Spencer ==> Somewhere above you mention Occam’s razor: “Occam’s Razor dictates that we should adopt the simplest explanation possible for the apparent increase in world temperatures.”
I’m pretty sure that you know that although that is the “popular science” version of Occam’s, it is not actually what he said nor what the concept really is.
Newton phrased it: “We are to admit no more causes of natural things than such as are both true and sufficient to explain their appearances.”
The misapplication of Occam’s results in such things as the false belief that increasing CO2 concentrations alone explain changing climate, justified by appeal to Occam’s as the “simplest explanation”.

Clyde Spencer
Reply to  Kip Hansen
April 25, 2017 8:33 am

Kip,
Yes, I’m aware that the original statement in archaic English was a lot more convoluted. But, I think that the “popular” form captures the essence of the advice. What we are dealing with is a failure to define “simplest.” I would say that if Earth were experiencing warming after the last glaciation, and then started to cool, we should look for an explanation. However, continued warming is most easily explained by ‘business as usual.’
However, the real crux of the problem is not knowing what typical climate variation is like, and averaging averages further hides that information. We don’t know what was changing climate before humans, so we aren’t in a strong position to state to what degree we are impacting it today.

Dave Fair
Reply to  Clyde Spencer
April 25, 2017 1:38 pm

Plus many!

Editor
Reply to  Clyde Spencer
April 25, 2017 11:25 am

Clyde ==> Occam’s calls for the fewest unsupported (not in evidence) prior assumptions.
Postulating that wind is caused by “Pegasus horses chasing Leprechauns who are in rebellion against the Fairy Queen” has too many unsupported priors.
The CO2 hypothesis fails Occam’s test because it relies on the unsupported, not in evidence (in fact, there is a great deal of contrary evidence) assumption that nothing else causes (or caused) the warming and cooling of the past and that the present is unique and that CO2 is the primary mover of climate. The number of assumptions — assumptions of absence of effect, past causes not present cause, etc — necessary to make the CO2 hypothesis “sufficient” is nearly uncountable. While it seems “simple”, it requires a huge number of unstated assumptions — and it is the necessity of those assumptions that result in the CO2 hypothesis’ failure to meet the requirements of Occam’s.
CO2 may nonetheless yet be found to be one of the true causes of recent warming, but not in such a simplistic formulation as is currently promoted by the Consensus.
I think we are both on the same general page on this topic.

Clyde Spencer
Reply to  Kip Hansen
April 25, 2017 12:56 pm

Yes, I think we are. I will go where the believable evidence leads me.

Andrew Kerber
April 23, 2017 10:29 am

A discussion of random error vs. systematic error would be good too. Any measurement prior to the advent of digital thermometers is sure to have random error.

Clyde Spencer
Reply to  Andrew Kerber
April 23, 2017 1:04 pm
April 23, 2017 11:27 am

Can anyone here provide a physical, not a mathematical, rationale for the notion that a whole bunch of readings from an instrument which is only graduated in units of x can yield a measurement that is ten times finer than x?
It is easy to understand how making numerous measurements can lead one to have confidence in measurements that approach the resolution of the device.
How exactly, physically, does that device, or what you do with it, give you information that the device itself is incapable of capturing?
Can a bunch of grainy photographs be processed in such a way as to give you a single photo with ten times the resolution of the pixels in each of the original photos?
Can a balance which is graduated in tenths of a gram allow you, by some means of repetition, to confidently declare the weight of a sample to within a hundredth of a gram?

Reply to  Menicholas
April 23, 2017 11:32 am

It seems to me that this idea can be tested by means of the scale example.
Have some people measure a sample of something with a scale that has a certain resolution.
Have them do this a large number of times with a large number of samples, and perhaps with a large number of scales.
Have them report their readings and the results of calculations using the statistical means being discussed here.
Then check the actual sample weights with a far more sensitive instrument and see if this method works.
Such an experiment can be done under closely controlled conditions…people in a sealed room, etc.
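That experiment is easy to simulate. A minimal sketch with made-up numbers, assuming each reading carries random noise comparable to the balance’s graduation: averaging many rounded readings of the same fixed weight recovers it to well within one graduation, precisely because the quantity is fixed and the noise dithers the rounding.

```python
import numpy as np

rng = np.random.default_rng(0)

true_weight = 12.3456     # grams -- a fixed quantity, unknown to the observer
graduation = 0.1          # the balance only reads to the nearest 0.1 g
noise_sd = 0.05           # random reading noise, comparable to the graduation

# Each reading: true value plus random error, then rounded to the balance's resolution.
readings = np.round((true_weight + rng.normal(0, noise_sd, 10_000)) / graduation) * graduation

print(f"Mean of 10,000 readings: {readings.mean():.4f} g")
print(f"Error of that mean:      {abs(readings.mean() - true_weight):.4f} g")  # well below 0.1 g
```

If the noise is removed (or the quantity being weighed keeps changing), the trick no longer works, which is the caveat raised further down in the thread.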

Reply to  Menicholas
April 23, 2017 12:18 pm

Assuming all of that is true, it sounds like the error in this method is a large fraction of the difference between the tallest and the shortest person in your sample.
IOW, a large fraction of the height anomaly.
But I would not assume any of that is necessarily so without seeing some data from someone who actually did it.
And, if the range of sizes of adult males were more like the range in temps all over the Earth and throughout the year, how would the numbers look then?
But are you sure about that?
The vast majority of adult men are between 5’6″ and 6’6″, and hence the vast majority of readings will be 6′.
Perhaps all of them in some samples.
How many men have you ever met taller than 6’6″ or shorter than 5’6″?
For me, in my actual personal life, I think the answer may be zero, or maybe one or two.

Reply to  Menicholas
April 23, 2017 12:23 pm

I think you are unlikely to have enough men in your sample who measure in at the 5′ line to bring down the average much.
http://www.fathersmanifesto.net/standarddeviationheight.htm

Clyde Spencer
Reply to  Menicholas
April 23, 2017 1:11 pm

Menicholas,
I think the simplest answer to your question is that, before the advent of digital laser theodolites, it was common procedure for surveyors to “accumulate” multiple readings on a transit when turning an angle. The proof is in the pudding, as the saying goes.
Before I retired I was a remote sensing scientist. I can assure you that the resolution of an image can be improved with the use of multiple images, through several techniques.
However, key to all of these is the requirement that the object being measured not change!

Reply to  Clyde Spencer
April 23, 2017 11:03 pm

“the object being measured not change!”
That is my understanding as well Clyde.
You have to be measuring the same thing in the same way.

Clyde Spencer
Reply to  Menicholas
April 23, 2017 1:17 pm

Menicholas,
The average height, without a stated precision or uncertainty, is similar to what climatologists do routinely. The ability to get a good estimate of the mean, and to report it within a stated range with high probability, depends on both the variance of the population and the number of samples taken.
The difference with the temperature case is that the population of American males is essentially fixed during the interval that the sampling takes place.

April 23, 2017 11:30 am

It is one thing to measure a temperature with even a very precise and accurate thermometer. It is quite another to attribute any change over time to the correct causal factor.
The IPCC likes to say they have correctly adjusted the temperature measurements to account for non-CO2 influences, or biases. Yet there are many, many such factors that certainly influence the temperature but (as far as I can determine) are not properly considered. Below is a list of ten such non-CO2 factors that are known to cause an upward temperature trend.
1. Increased population density in the local area, cities (more buildings in a small area)
2. Increased energy use per capita (each building uses more energy, and people use more)
3. Increased local humidity due to activities such as lawn watering, industry cooling towers
4. Prolonged drought (the opposite, regular rain, reduces temperatures in arid regions)
5. Reduced artificial aerosols via pollution laws being enforced – since 1973 in the US
6. Change in character of the measurement site, from rural to more urban with pavement and other artificial heating
7. Wind shadows from dense buildings prevent cooling winds from reaching thermometer
8. El Niño short-term heating effect in many areas (e.g. the US South and Southeast)
9. Increased sunspot activity, which allows fewer cloud-forming cosmic rays to reach Earth
10. Fewer large volcanoes erupting with natural aerosols flung high into the atmosphere
To have proper science to measure the impact of changes in CO2 on surface temperature, the data must exclude any sites that are affected by the above factors.
Instead, the IPCC and scientists that prepare the input to IPCC reports adjust the data, even though the data is known to be biased, not just by those ten factors, but probably others as well.
This is not science. It is false-alarmism.

Reply to  Roger Sowell
April 23, 2017 11:36 am

Yep, that is the point I’ve been making. There is data that largely isolates the impact of CO2. That is the data that counts.

Reply to  Roger Sowell
April 23, 2017 11:44 am

You do not even have to do such enumerations to spot the errors in judgement and the flaws in logic and thus the ridiculousness in the confidence of the conclusion that CO2 must be the cause of recent warming.
All you need to know is that it was warming and cooling on scales large and small prior to the advent of the industrial age and any additional CO2 in the air.
And that many of these warming and cooling events were both more rapid and of a higher magnitude than any recent warming.
Hence the hockey stick, and the “adjustments”, and the general rewriting of the relevant history.

ferdberple
April 23, 2017 11:50 am

The 30 year average temperature at my location is 19C. I measure the temperature today and get 19C. Is this any more likely correct than if I got a reading of 18C? How about -20C?
The problem with anomalies is that they statistically tell us that 19C is more likely correct than -20C, because the variance will be lower over multiple samples, which gives us a false confidence in the expected error. However, there is no reason to expect our reading of -20C is any less accurate than 19C.
The problem is similar to the gambler’s fallacy. We expect the highs and lows to average out, so a reading of 19C appears more likely correct than a reading of -20C. But in point of fact this is incorrect, because today’s temperature is for all practical purposes independent of the long-term average.

Clyde Spencer
Reply to  ferdberple
April 23, 2017 1:21 pm

ferdperple,
One must be careful in dealing with probabilistic events such as coin tosses, die tosses, and hands of cards. A very large number of trials is necessary for these probabilistic events to approach their theoretical distribution.

April 23, 2017 12:54 pm

“The approach currently used is to calculate the arithmetic mean for an arbitrary base period, and subtract modern temperatures (either individual temperatures or averages) to determine what is called an anomaly. However, just what does it mean to collect all the temperature data and calculate the mean?”
Wrong. That’s not what we do.
The other mistake you make is that averages in spatial stats are not what you think they are. And the precision is not what you think it is.
In spatial stats the area average is the PREDICTION of the unmeasured locations.
When we say the average is 9.8656 C, that MEANS this:
We predict that if you take a perfect thermometer and randomly place it at UNMEASURED locations, you will find that 9.8656 is the prediction that minimizes your error.
Measuring temperature is not doing repeated measurements of the same thing.
A simple example. You have a back yard pool.
You measure with a thermometer that records whole numbers. The shallow end is 70F. The deep end is 69F.
Estimate the temperature I will record if I take a perfect thermometer and jump into any random location of the pool?
THAT is the problem spatial stats solves.
What will we predict if you measure the temperature in the exact same location in the deep end? 69. Jump in the pool in a place we haven’t measured, a random location? We predict 69.5. That will be wrong, but it will be less wrong than other predictions.
You will be judged on minimizing the error of prediction. That’s the goal: minimize error.
So you use the measured data to predict the unmeasured.
That’s spatial stats. You might average 69 and 70 and say:
I predict that if you jump into a random spot the temperature will be 69.5. The precision of the prediction should never be confused with the precision of the data. In other words, 69.5 will minimize the error of prediction. It’s not about the precision of the measurements.
When you do spatial stats you must never forget that you are not measuring the same thing multiple times. You are not averaging the known. You are predicting the unmeasured locations based on the measured. And yes, you actually test your prediction as an inherent part of the process.
Here is a simple test you all can do.
Pretend the CRN stations don’t exist. Hide that data.
Then take all the bad stations in the USA. Round the measurements to whole degrees. Then, using those stations,
PREDICT the values for CRN.
When you do that you will understand what people are doing in spatial stats. And yes, your prediction will have more “precision” than the measurements, because it’s a prediction. It’s predicting what you will see when you look at the CRN data you hid. Go do that. Learn something.
So we don’t know the global average to several decimal points. We estimate or predict unmeasured locations; those predictions will always show more digits. The goal is to reduce the error in the prediction.
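As a purely illustrative aside, the pool example above can be put in code. A minimal sketch with the two hypothetical readings (70 F shallow, 69 F deep): among constant predictions for a randomly chosen unmeasured spot, the mean of the readings gives the smallest mean squared prediction error, even though the thermometer only ever reports whole degrees.

```python
import numpy as np

# Hypothetical pool: the true temperature varies smoothly from 70 F (shallow) to 69 F (deep).
true_temps = np.linspace(70.0, 69.0, 1_000)    # "every" location in the pool
measured = np.array([70.0, 69.0])              # the only two readings we actually have

# Compare constant predictions for a randomly chosen, unmeasured location.
for guess in (69.0, 69.5, 70.0):
    rmse = np.sqrt(np.mean((true_temps - guess) ** 2))
    print(f"predict {guess:4.1f} F -> RMS prediction error {rmse:.3f} F")

print(f"mean of the two measurements: {measured.mean():.1f} F")  # the least-wrong constant guess
```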

Clyde Spencer
Reply to  Steven Mosher
April 23, 2017 4:15 pm

Steven,
I applaud your attempt to estimate temperatures for locations that you don’t have data for. However, I see a number of problems. The Earth isn’t a smooth sphere and I don’t read anything about how you take into account changes in temperature for a varying elevation and unknown lapse rates. You also miss the reality of microclimates that are created by more than just the local topography, such as local water bodies. Lastly, because you are dealing with time series, you miss the abrupt changes that occur at the leading edge of a moving cold front. I have difficulty in putting much reliance in theoretical constructs when they aren’t firmly grounded in empirical data.
Now that I apparently have your attention, how does BEST justify listing anomalies in the late 19th Century with the same number of significant figures as modern temperature anomalies?

Clyde Spencer
Reply to  Steven Mosher
April 23, 2017 4:22 pm

Steven,
You said, “Learn something.” Your arrogance contributes nothing to the discussion.
You also said, “And yes your prediction will have more “precision” than the measurements. ..because it’s a prediction. It’s predicting what you will see when you look at the CRN data you hid.” You want me to believe that a distance-weighted interpolation is going to be more trustworthy than the original data? That is why we see the world differently. Just because you can calculate a number with a large number of digits does not mean that the numbers are useful or even realistic.

Nick Stokes
Reply to  Clyde Spencer
April 23, 2017 8:48 pm

Clyde,
“You want me to believe that a distance-weighted interpolation is going to be more trustworthy than the original data?”
Mosh’s advice may have been abrupt, but not without merit. You are persistently ignoring two major facts:
1. They average anomalies, not temperatures.
2. They are calculating a whole earth average, not a station average.
2 is relevant here. You can’t get a whole earth average without interpolating. It’s no use saying that the interpolates are less accurate. They are the only knowledge outside the samples that you have.
This is not just climate; it is universal in science and engineering. Building a skyscraper – you need to test the soil and rock. How? By testing samples. You can’t test it all. The strength of the base is calculated based on the strength of those few samples. The rest is inferred, probably by FEM, which includes fancy interpolation.

Clyde Spencer
Reply to  Nick Stokes
April 23, 2017 9:19 pm

Nick,
I think that you and Mosh miss the point of my last two articles. Even if a more complex or sophisticated algorithm is being used to determine anomalies than a simple average-and-subtraction process, the results of the calculations are going to be limited by the (unknown) accuracy and precision of the raw data used as input to those algorithms. Thus, with raw measurements that are only precise to the nearest whole or one-tenth degree, there is no justification for reporting either current global temperatures or anomalies to three or even two significant figures to the right of the decimal point. Any claim to the contrary is a claim that a way has been found to make a silk purse out of a sow’s ear.

Clyde Spencer
Reply to  Nick Stokes
April 23, 2017 9:43 pm

Nick,
With respect to point 1, the definition of an anomaly is the difference between some baseline temperature, and a temperature with which it is being compared. To come up with anomalies, it will be necessary to subtract the baseline from modern daily, monthly, and/or annual averages. That baseline will have to be either arbitrary, or more commonly a 30-year average. Thus, the claim that averages are not computed is false.
To address the problem of stations being at different elevations, it will be necessary to compute a baseline average for every station before station anomalies can be computed. If that is not being done, then things are even worse than I thought because lapse rates are not available for all stations for every reading.
Yes, uniform spatial coverage is necessary to compute a global anomaly average. However, as I have remarked before, problems with moving cold fronts, rain cooling the ground, clouds that are not uniformly distributed, topography, and microclimates introduce interpolation error that is larger than the error at individual stations, and the precision cannot be greater than what the stations provide. Again, my point is that claims for greater precision and confidence are being made than can be justified. I’m just asking for complete transparency in what is known and what is assumed.

afonzarelli
Reply to  Clyde Spencer
April 23, 2017 9:54 pm

But Nick, doesn’t a “whole earth average” equal a “station average”? You only need a certain random sampling to get your average, anything more than that being redundant. That’s why pollsters can get a fairly accurate picture of the nation as a whole, assuming the methodology is sound, with just a few hundred samples. (Rasmussen nailed it with Hillary up by two.) Is that what you’re talking about here or am I missing something?

afonzarelli
Reply to  Clyde Spencer
April 23, 2017 10:25 pm

And Clyde, does the fact that 300 stations give the same result as 3,000 stations render your concerns moot (or no)?

Clyde Spencer
Reply to  afonzarelli
April 24, 2017 8:51 am

afonzarelli,
I don’t think that Nick is making the claim that 300 stations are as good as 30,000. I believe he is claiming that they might be “good enough for government work.” However, I still have concerns about whether or not propagation of error is being given the rigorous attention that it deserves in such a convoluted attempt to use data for a purpose for which it was not intended.

Nick Stokes
Reply to  Clyde Spencer
April 23, 2017 11:24 pm

Clyde (and fonz),
People have asked what is the point of averaging. Mindert identified it above. It is usually to estimate a population mean. And the population here is that of anomalies at points on Earth. The stations are a sample – a means to an end. Ideally the estimate will be independent of the stations chosen. The extent to which that is not true is the location uncertainty.
To make that estimate, you do a spatial integration. That can be done in various ways – the primitive one is to put them in cells and area-average that. But the key thing is that you must average members of the population (anomalies) and weight them to be representative of points on Earth (by area).
The prior calculation of anomalies is done by station – subtracting the 30-year mean of that station. It involves no property of the station as a sample. Some methods fuzz this by using grids to embrace stations without enough data in the 30-year period. BEST and I use a better way.
“there is no justification for reporting either current global temperatures or anomalies to three or even two significant figures to the right of the decimal point”
That’s wrong. The global mean is not the temperature of a point. It is a calculated result, and has a precision determined by the statistics of sampling. An example is political polling. The data is often binary, 0 or 1. But the mean of 1000 is correctly quoted to 2 sig fig.
“it will be necessary to compute a baseline average for every station before station anomalies can be computed”
Yes, that is what is done.
” moving cold fronts”
This matters very little in a global average. For time, it’s on a day scale when the minimum unit is a monthly average. And for space, it doesn’t matter where the front is; it will still be included.
“I’m just asking for complete transparency”
There are plenty of scientific papers, Brohan 2006 is often quoted. BEST will tell you everything, and provide the code. For my part, the code is on the Web too, and the method is pretty simple. I do try to explain. And I get basically the same results as the others (using unadjusted GHCN).
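As a purely illustrative aside, here is a minimal sketch of the “primitive” cells-and-area-average approach Nick describes above, with made-up anomaly values (not any agency’s actual data or code); the only point is where the area weighting enters — lat-lon cells near the poles cover less area, so they get less weight.

```python
import numpy as np

# Made-up monthly anomalies (deg C) on a coarse 5-degree latitude x longitude grid.
rng = np.random.default_rng(42)
lats = np.arange(-87.5, 90, 5)                 # cell-centre latitudes
lons = np.arange(2.5, 360, 5)                  # cell-centre longitudes
anom = rng.normal(0.0, 1.0, (lats.size, lons.size))

# The area of a lat-lon cell is proportional to cos(latitude).
weights = np.cos(np.deg2rad(lats))[:, None] * np.ones_like(anom)

global_mean_anomaly = np.average(anom, weights=weights)
print(f"Area-weighted global mean anomaly: {global_mean_anomaly:.3f} C")

# An unweighted mean would over-count the polar cells:
print(f"Unweighted mean (for comparison):  {anom.mean():.3f} C")
```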

Clyde Spencer
Reply to  Nick Stokes
April 24, 2017 9:05 am

Nick,
I said, and you responded, “‘it will be necessary to compute a baseline average for every station before station anomalies can be computed’ Yes, that is what is done.”
However, [Nick Stokes April 23, 2017 at 3:13 am]: “Scientists NEVER average absolute temperatures.”
Which is it? Do ‘scientists’ NEVER average absolute temperatures or DO they average absolute temperatures?
Do you see why you might have a credibility problem with readers of this blog? It seems to me that you say things that support your claims when you want to try to shut off questioning, but then reverse yourself if backed into a logical corner. Is that transparency?

Editor
Reply to  Clyde Spencer
April 24, 2017 8:31 am

Clyde ==> Note that the denial that BEST etc. are “averaging averages” is simply false. Last I looked at the BEST methods paper, they deal with monthly averages from stations, krige values for unknown or questionable stations (with a known minimum error of 0.49 degrees C), etc. etc. Read the BEST Methods paper.
Note, BEST will have made “improvements” to the original methods described, but they are not substantially changed.

Nick Stokes
Reply to  Clyde Spencer
April 24, 2017 9:56 am

Clyde,
“Which is it? Do ‘scientists’ NEVER average absolute temperatures or DO they average absolute temperatures?”
Scientists never do a spatial average of absolute temperatures, which is what we were talking about. For all the reasons of inhomogeneity that you go on about in your post. There is no point in that discussion, because they have an answer (anomalies) and you need to come to terms with it. It’s what they use.
Of course temperatures can be averaged at a single station. Daily temperatures are averaged into months, months to years (need to be careful about seasonal inhomogeneity). And you average to get an anomaly base.
The fundamental point is that spatial averaging is sampling, and you need to get it right. Averaging days to get a month is usually not sampling; you have them all. Sometimes you don’t, and then you have to be careful.

Editor
Reply to  Clyde Spencer
April 24, 2017 4:19 pm

Clyde ==> What the averaging is doing is hiding, obscuring, obfuscating, overlaying, covering-up…I could go on…the real data about the state of the environment in order to make a basically desired politically-correct result appear — they need to have the Earth warming to support the CO2 warming hypothesis.
The fact that this is not strictly true — some places are warming (or getting less cold, really, like the Arctic) and some places are getting cooler — my piece on Alaska again — necessitates finding a way to be able to say (without outright lying) that “the Earth is Warming”.
Thus the dependence on averaging averages until the places that are warming make the average go up. When land values failed to provide enough “up”, sea values were added in.
None of this is a mystery — nor a conspiracy — just how it is.
The Earth warms and cools — all of these Numbers Guys believe that their derived numbers = truth = reality. They are, however, just numbers that may or may not have the meaning that is claimed for them.
This is what the attribution argument is all about — it is the attribution that is the important (and almost entirely unknown) part — not whether or not Climate Numbers Guys can produce an “up” number this year or not.

Reply to  Steven Mosher
April 23, 2017 4:43 pm

But you never do jump in that pool with a perfect thermometer. Then you use that highly precise, entirely theoretical, prediction as data???
I’ll retire to Bedlam.

Clyde Spencer
Reply to  James Schrumpf
April 24, 2017 9:39 am

Nick,
You said,”An example is political polling. The data is [sic] often binary, 0 or 1. But the mean of 1000 is correctly quoted to 2 sig fig.”
Two significant figures is not the correct precision! In a poll, individual humans are being questioned; they are either polled or not, and represented by an integer (counted) if polled. In the division of two integers to determine a fraction, there is infinite precision. That means the CORRECT number of significant figures is whatever the pollster feels is necessary to convey the information contained in the poll. A percentage to units (or even tens) may be appropriate if there is a preponderance for one position. However, if it is very close, it may be necessary to display more than two significant figures to differentiate between the two positions. The uncertainty in the polling is a probability issue, and is related to the size of the sample. Of course, this doesn’t take into account the accuracy, which can be influenced by how the question is worded and, in the case of controversial subjects, by the unwillingness of people to tell a stranger how they really feel.
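For reference on the sampling-uncertainty part of this exchange, the textbook margin of error for a simple random sample of n binary responses is z·sqrt(p(1−p)/n); a minimal worked example with n = 1000 (the case quoted above):

```python
import math

# Textbook margin of error for a simple random sample of binary responses.
n = 1000          # respondents
p = 0.50          # observed fraction answering "yes" (worst case for the margin)
z = 1.96          # 95% confidence

margin = z * math.sqrt(p * (1 - p) / n)
print(f"95% margin of error: +/- {margin * 100:.1f} percentage points")  # about +/- 3.1
```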

Reg Nelson
Reply to  Steven Mosher
April 23, 2017 8:00 pm

“So you use the measured data to predict the unmeasured.”
Wow, one of the most unhinged and unscientific things you have ever said.
75% of SST is made up. Phil Jones admitted as much.

Nick Stokes
Reply to  Reg Nelson
April 23, 2017 8:38 pm

“75% of SST is made up. Phil Jones admitted as much,”
He didn’t, and it isn’t.

AndyG55
Reply to  Reg Nelson
April 24, 2017 12:31 am

He did say it.
And they are.

Nick Stokes
Reply to  Reg Nelson
April 24, 2017 12:32 am

“He did say it.”
Quote, please.

pbweather
Reply to  Reg Nelson
April 24, 2017 4:44 am

date: Wed Apr 15 14:29:03 2009
from: Phil Jones
subject: Re: Fwd: Re: contribution to RealClimate.org
to: Thomas Crowley
Tom,
The issue Ray alludes to is that in addition to the issue
of many more drifters providing measurements over the last
5-10 years, the measurements are coming in from places where
we didn’t have much ship data in the past. For much of the SH between 40 and 60S the normals are mostly made up as there is very little ship data there.
Cheers
Phil

Nick Stokes
Reply to  Reg Nelson
April 24, 2017 5:12 am

pbweather,
Thank you. That is not saying that 75% of SST data is made up. It isn’t saying that any data was made up. He’s saying that normals were made up. He explains the history – we have a whole lot of new buoy data in a Southern Ocean region where there wasn’t much in the anomaly base period. Should we use it? Of course. Normals aren’t data – they are devices for making the anomaly set as homogeneous as possible. But it is better to allow a little inhomogeneity, with an estimated normal, than to throw away the data.
Normals are estimated for land stations too, when data in the base period is lacking. The methods have names like first difference method, reference station method. Zeke explains. Of course, the Moyhu/BEST method bypasses all this.
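As an illustrative aside, a minimal sketch of the anomaly arithmetic being discussed, with made-up station data (the 1961-1990 window and all numbers here are assumptions for illustration, not any agency’s actual procedure): the “normal” is the station’s own 30-year mean for each calendar month, and the anomaly is a reading minus that normal.

```python
import numpy as np

rng = np.random.default_rng(7)

# Made-up monthly mean temperatures (deg C) for one station, 1951-2010.
years = np.arange(1951, 2011)
seasonal = 10 + 8 * np.cos(2 * np.pi * (np.arange(12) - 6) / 12)   # a rough seasonal cycle
temps = seasonal[None, :] + rng.normal(0, 1.0, (years.size, 12))   # year-to-year scatter

# "Normals": the station's own 1961-1990 average, computed separately for each calendar month.
base = (years >= 1961) & (years <= 1990)
normals = temps[base].mean(axis=0)            # 12 values, one per calendar month

# Anomalies: each monthly reading minus the normal for that calendar month.
anomalies = temps - normals[None, :]
print("July 2010 reading :", round(temps[-1, 6], 2), "C")
print("July normal       :", round(normals[6], 2), "C")
print("July 2010 anomaly :", round(anomalies[-1, 6], 2), "C")
```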

Mark T
April 23, 2017 1:34 pm

Mosher needs to read this, apologize for his gross ignorance over the years regarding all things statistical, then shut up for the rest of forever. Every time he comments, a valid statistic somewhere dies.

Tom in Florida
April 23, 2017 1:44 pm

It seems to me that the only reason for trying to come up with a “justifiable” one single temperature for the Earth is to then be able to blame any change on one thing, namely CO2.

Reply to  Tom in Florida
April 23, 2017 10:06 pm

+many

April 23, 2017 2:16 pm

Clyde, I work with distributions all day long in a statistical sense. I may have missed your explanation some where above, but do you know what the distribution for temperatures is? It obviously has negative skewness, but I can invert a large number of distributions to get that figure (or stick with a bounded distribution with negative skewness.) Thanks.

Clyde Spencer
Reply to  John Mauer
April 23, 2017 4:34 pm

John,
I have not seen a histogram of the binned global temperatures. The frequency distribution I supplied for the article was my construction based on a mean of about 50 deg F, a maximum range of about 250 deg F, and an estimated SD of about 30 deg F. The construction provided about 70% of the samples within +/- 30 deg F and about 95% within +/- 60 deg F. Its primary purpose was to compare the distribution of temperature data for the globe versus the hypothetical case of improving the precision of temperature measurement in a system with much smaller variance.
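A minimal sketch of that kind of construction, using the same round numbers quoted above (mean ~50 deg F, SD ~30 deg F) and a normal distribution as a stand-in for the hand-built histogram:

```python
import numpy as np

rng = np.random.default_rng(3)

mean_f, sd_f = 50.0, 30.0
samples = rng.normal(mean_f, sd_f, 100_000)    # a normal stand-in for the constructed distribution

within_1sd = np.mean(np.abs(samples - mean_f) <= 30) * 100
within_2sd = np.mean(np.abs(samples - mean_f) <= 60) * 100
print(f"within +/-30 F of the mean: {within_1sd:.1f}%   (roughly 70%)")
print(f"within +/-60 F of the mean: {within_2sd:.1f}%   (roughly 95%)")
```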

Reply to  Clyde Spencer
April 23, 2017 6:31 pm

I tried to match those parameters with doubly bounded distributions (Beta and JohnsonSB). I’d include images but I’m not adept at inserting them. Send me an email at jmauer@geerms.com and I will mail them.

Clyde Spencer
Reply to  Clyde Spencer
April 24, 2017 8:12 am

John,
Except for the range of the horizontal scale, your constructions look very much like mine. Is there something that you wanted to point out?

April 23, 2017 5:17 pm

Clyde,
Minor quibble about the ball bearing example. In such a measurement there are the instrumental errors (randomness in the micrometer reading) and placement errors. Averaging will tend, in the limit, to cancel out the instrumental errors. The placement errors, however, have a different property: they yield, by definition, a number less than the actual diameter. This is because the definition of the diameter is the maximum possible distance between two points on the sphere. Every other position measures something less than the diameter. Hence, an average taken in the presence of placement errors will carry a bias that increases with the size of the placement error.
TGB

Clyde Spencer
Reply to  thomasbrown32000
April 23, 2017 7:19 pm

TB3200,
I would consider it a quibble because the purpose of the article was NOT to instruct the readers on best practices for determining the maximum diameter of a ball bearing. It was to demonstrate how precision can be increased with a fixed value being measured, versus what is done with a quantity that is always different.

Reply to  Clyde Spencer
April 24, 2017 5:41 am

But the diameter is fixed. I simply gave an example of how the measurement of a ‘fixed’ quantity may not fluctuate about the actual value in a way that the average converges to the actual diameter. For anyone thinking deeply about the meaning of the mean (apologies for the pun), it is important to think seriously about issues such as this.

afonzarelli
April 23, 2017 5:45 pm
afonzarelli
Reply to  afonzarelli
April 23, 2017 5:56 pm

Mosher once mentioned that sampling was not a problem: 300 stations give the same result as 3,000 stations. In the above graph (the blue question mark thingy), UAH land is compared with UAH grids at the temperature stations. And they look pretty close… Doesn’t this vouch for the accuracy as far as sampling goes?

Mark T
Reply to  afonzarelli
April 23, 2017 9:50 pm

No. If the complete set is invalid, any subset is likewise invalid.

Mark T
Reply to  afonzarelli
April 23, 2017 9:58 pm

Insufficient is probably a better word than invalid.

afonzarelli
Reply to  afonzarelli
April 23, 2017 9:59 pm

Right Mark, that’s why i said, “as far as sampling goes”…

Mark T
Reply to  afonzarelli
April 24, 2017 8:40 am

Um, that’s actually what I was referring to. If whatever data you are analyzing is not sampled properly, then no subset of the data is sampled properly, either. If there is a systemic (or systematic) error in the data, it will show up in the subsets as well. If the average is bad across the whole, it is bad across a subset of the whole. The only thing such an analysis vouches for is the consistency in its inaccuracy.

Bindidon
Reply to  afonzarelli
April 24, 2017 2:04 pm

Mark T on April 23, 2017 at 9:50 pm
No. If the complete set is invalid, any subset is likewise invalid.
Either you did not understand what afonzarelli presented, or you think that UAH is as invalid as is GHCN.
Maybe you could explain your ‘thought’ a bit more accurately? You are being carefully superficial here…

afonzarelli
Reply to  afonzarelli
April 24, 2017 5:39 pm

Hey there, Bindi… It seems to me that your comparison graph using the UAH land data renders Clyde’s concerns moot (with the possible exception of elevation when it comes to the land stations). UAH is like having thermometers EVERYWHERE. So when we use just the UAH data at the GHCN stations and get the same result, that essentially says the same thing as Mosher (regarding size of sample)…
Terrific graph, btw; was it your idea or did you get wind of it from someone else?

Bindidon
Reply to  afonzarelli
April 25, 2017 8:26 am

afonzarelli on April 24, 2017 at 5:39 pm
Hi again Fonzi,
Terrific graph, btw, was it your idea or did you get wind of it from someone else?
1. The very first reason to exploit UAH’s grid data was that I wanted to know exactly how UAH behaves above the mythic NINO3+4 region, whose SSTs are so determinant in computing ENSO signals. Roy Spencer gave me a hint in the readme file associated with that grid data.
I thought: well, if UAH’s Tropics plot shows higher deviations during ENSO activities than for the whole Globe, then maybe they are even higher in the ENSO region. Bad catch:
http://fs5.directupload.net/images/170425/wzfccr9o.jpg
At least for 1998 and 2016, the Tropics plot keeps way ahead.
2. Then I wanted to compute anomalies and trends for the 66 latitude zones in the UAH grid data:
http://fs5.directupload.net/images/161028/g25fmuo9.jpg
where you see that the more recent the trend period, the more it cools in the middle latitudes, and the more it warms at the poles, especially at SoPol.
It suddenly became interesting to compare 80-82.5N in UAH with the same zone in GHCN. And last but not least, I had the little idea of mixing the UAH grid software with the software I made for GHCN, in order to compare the trend for UAH’s 80-82.5N zone with the average for the three cells encompassing the 3 GHCN stations there. The fit was good (0.46 °C / decade for the 3 cells over GHCN vs. 0.42 for the 144 grid cells).
3. The idea of comparing small, evenly distributed subsets of UAH’s grid with the full average is from commenter ‘O R’ (maybe it’s Olof R we know from Nick’s moyhu):
https://wattsupwiththat.com/2017/01/18/berkeley-earth-record-temperature-in-2016-appears-to-come-from-a-strong-el-nino/#comment-2401985

Bindidon
Reply to  afonzarelli
April 24, 2017 3:41 pm

afonzarelli on April 23, 2017 at 5:45 pm
In the above graph … uah land is compared with uah grids at the temperature stations. And they look pretty close… Doesn’t this vouch for the accuracy as far as sampling goes?
Hello fonzi
Until last year I was quite convinced by the accuracy of such comparisons.
Simply because I thought that the UAH temperature record would have only a small degree of redundancy, and thus comparing UAH’s global land temperature average time series with one made out of those UAH grid cells encompassing the GHCN stations might give a hint on a correct GHCN station distribution over land surfaces.
But in the meantime I produced time series of e.g. 32, 128 or 512 evenly distributed cells out of the 9,504 cells of UAH’s 2.5° grid.
The agreement of monthly anomalies, linear estimates and long-term running means for the 512-cell selection is amazing. Steven Mosher is right: the Globe is heavily oversampled.
That however means that the above comparison no longer makes sense, as the 2,250 grid cells currently encompassing 5,750 GHCN V3 stations can be accurately represented by far fewer cells over land surfaces.

Clyde Spencer
Reply to  Bindidon
April 25, 2017 9:21 am

Bindidon,
You said, “Steven Mosher is right: the Globe is heavily oversampled.”
The problem is that the areas where people live are oversampled, and the areas where few people live are undersampled.

afonzarelli
Reply to  Bindidon
April 25, 2017 10:29 pm

Clyde, wouldn’t that be relatively easy to figure out? What I mean is, couldn’t the data from the most remote stations be compared with more typically located stations to see if there is a difference between the two? (Or, for that matter, a hundred or so stations could be placed in remote areas.) Bindidon did it using the UAH grids, comparing UAH land with the UAH grids at the GHCN stations, and got the same result…
http://fs5.directupload.net/images/170104/e74esgs9.jpg

Clyde Spencer
Reply to  afonzarelli
April 26, 2017 8:34 am

afonzarelli,
First, the whole point of my articles is that we have the means to quantify just how accurate and precise our data and calculations are. Saying that two things look similar is only qualitative. I’m arguing that the bad habit of mathematicians and physicists of often ignoring the kinds of measurement realities that engineers deal with has created a false belief in precision that isn’t there. Modern computers don’t help either, because people get into the habit of typing in numbers, seeing a long string of digits come out, and not questioning whether or not they are meaningful. When strongly-typed languages like FORTRAN were in vogue, one had to pay attention to the type of variable (integer versus floating-point) and the number of significant figures for input and output. Today, we have programming languages in which the variable type can change on the fly, and it becomes more difficult to follow the propagation of error.
One should be careful about the proverbial comparing apples with oranges. If you want to compare two stations then there are a lot of tests that should be defined to be sure that they actually make good comparison samples. For example, are they in the same climate zone, same elevation, same distance from large bodies of water, same distance from mountains and on the same side, are there confounding effects such as one of them being downwind from a major source of air pollutants, do the surrounding areas have similar land use, etc. Anecdotally, where I live in Ohio, it is commonly believed that most of the snow falls north of Interstate 70 in the Winter. Assuming that common wisdom is correct, that means a change of a few miles can mean a big difference in snow cover and temperatures.

Bindidon
Reply to  Bindidon
April 26, 2017 4:26 am

afonzarelli on April 25, 2017 at 10:29 pm
Please Fonzi: read again the end of the comment above. UAH’s grid data contains far too much redundancy, as you need far fewer grid cells than the 2,250 cells above GHCN stations in order to produce a time series showing e.g. a 98% fit to the entire 9,504-cell set.
The only valid statement here would be the inverse: if the set of UAH grid cells encompassing the GHCN stations of a given regional or latitudinal zone of the Globe gives a time series differing totally from the complete UAH subset for that zone, then it is likely that the GHCN set is not representative for that zone.

April 23, 2017 7:39 pm

I’ll just repeat my call for experimentally testable quantitative equations for even the mean temperature of a radiantly heated ball. I have yet to see that testable physics, and until I do, statistical estimates are of secondary interest because they have no theory against which to be judged.

anna v
April 23, 2017 8:42 pm

Minor comment: I do not see a “stubby green line”.

Nick Stokes
Reply to  anna v
April 23, 2017 8:50 pm

I think it’s on the x-axis above the 50. And it isn’t green.

Clyde Spencer
Reply to  Nick Stokes
April 23, 2017 9:25 pm

Nick and anna,
Yes, it is the short, thin line above 50 on the x-axis. I’m sorry, but it looks green on my monitor and was selected from a palette of colors where green should have been.

Reply to  Nick Stokes
April 23, 2017 10:05 pm

Green on my screen as well.

Editor
Reply to  Nick Stokes
April 24, 2017 7:19 am

Clyde ==> Don’t sweat it — I’m blue/green color blind (see the blue/green scale differently than others, apparently) and have several times directed readers to look at the green line when it was blue and vice versa.

April 23, 2017 8:54 pm

A fantastic series of two articles. Very understandable basic statistics explained simply.

April 24, 2017 6:23 am

Nick Stokes: “Sometimes the readings that you would like to use have some characteristic that makes you doubt that they are representative”
Like a Global Average Temperature? And not even a reading. It’s a concoction.
Andrew

Ian Macdonald
April 24, 2017 8:16 am

I think the other relevant point is the sheer extent of the averaging. Diurnal and seasonal variations are up to 50 times larger than the claimed change in the average due to human activities. Even under controlled lab conditions, could accurate measurements be made from a system where the noise is so much stronger than the signal? I doubt it. Not even with massive low-pass filtering. In most scientific circles it is considered poor practice to infer anything from measurements made below the noise floor of the system. Here, they are at least 30 dB below the noise floor.
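For what it’s worth, the decibel figure follows from the 50:1 ratio quoted above, treated as an amplitude ratio; a minimal worked check:

```python
import math

# Diurnal/seasonal swings quoted as up to ~50x the claimed change in the average.
ratio = 50.0
print(f"20*log10(50) = {20 * math.log10(ratio):.1f} dB")   # ~34 dB, consistent with "at least 30 dB"
```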

MarkW
Reply to  Ian Macdonald
April 24, 2017 2:14 pm

Even if we were to assume that diurnal and seasonal variations average out, there are still the decadal and longer variations in the data set that we can’t control for. Until you can demonstrate that whatever caused the Little Ice Age and all the warm periods over the last 8000 years isn’t also causing the current warming, then you can’t assume that the warming must be caused by CO2.

robinedwards36
April 24, 2017 9:01 am

Averaging is a very useful bit of technology, often somewhat misunderstood, and it has as its intention (so I believe) the partial simplification of otherwise horrendously complicated assemblies of data. Averaging over time is a great help to politicians and media persons, both groups tending not to be overly expert at arriving at reasonable conclusions from scattered original data, with the typical intent of providing some sort of prognosis. Inevitably, whatever form of averaging is used, valid information is disguised or hidden, leaving scope for alternative opinions. Of great importance is exactly what is averaged.
In the discussions above we’ve seen some eloquent defences of various choices, instructive to me and I suspect instructive to some persons other than me.
What seems not to have been discussed explicitly is exactly what types of values are being addressed, and why. I and the vast majority of ordinary folks exist at the surface of the earth at elevations between -50m and perhaps 1000m (I’m guessing). A very few (relatively) spend their time on the oceans at sea level. The conditions that control what we can grow exist approximately in these bands – plenty of substantial exceptions of course – but the important temperatures and moisture conditions are roughly in these regions.
So I ask, why are we so concerned about conditions remote from these bands?
Taking averages is the ultimate in data smoothing, with the linear fit next in severity.
When I see a plot like the 1979-2016 one above I really do despair. Who in their wildest imaginings can seriously propose that the straight line that has been “fitted” to the observations serves a useful purpose? A really worthwhile improvement would have been a pair of lines representing confidence intervals (95% level?) for the least squares line and for single observations from the same series. Even the simple inferential statistics (Quenouille correction omitted) would have been a help, but no, there are none. We have no idea whether the published line has any value.

Bindidon
April 24, 2017 3:13 pm

robinedwards36 on April 24, 2017 at 9:01 am
1. Inevitably, whatever form of averaging is used, valid information is disguised or hidden, leaving scope for alternative opinions. Of great importance is exactly what is averaged.
Feel free to have a closer look at e.g.
ftp://ftp.ncdc.noaa.gov/pub/data/ghcn/v3/ghcnm.tavg.latest.qcu.tar.gz
or at
http://www.nsstc.uah.edu/data/msu/v6.0/tlt/tltmonacg_6.0
There you’ll find all you need; but I guess you won’t enjoy it that much 🙂
2. I and the vast majority of ordinary folks exist at the surface of the earth at elevations between -50m and perhaps 1000m (I’m guessing)
Your guess isn’t so bad! The average GHCN land station height is around 410 m above sea level.
3. When I see a plot like the 1979-2016 one above I really do despair. Who in their wildest imaginings can seriously propose that the straight line that has been “fitted” to the observations serves a useful purpose?
Aha. Excel’s linear estimate function creates “straight lines “fitted” to the observations”.
Very interesting…
4. A really worthwhile improvement would have been a pairs of lines representing confidence intervals (95% level?)
Do you really think that we here don’t know about CI’s? I can’t imagine that.
Want such a chart Sah? Here is one.
http://fs5.directupload.net/images/170425/s7qhyvnn.png
The problem is not showing the CIs. The problem with doing that is:
– who is interested in such information (roughly 1% of the commenters, I guess);
– I, for example, often publish charts here comparing various plots. If I show them all with their CIs, you soon stop being able to read the information.

April 24, 2017 6:20 pm

I wonder about the very concept of ‘average’ as applied to temperatures… over the whole globe… what can it mean? Then I look at the distribution of temperatures: fairly fat-tailed. It looks somewhat like a Cauchy distribution, which, as we know, has no mean… if this is correct, then the ‘average’ temperature of the globe slips like sand through our fingers.
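As a purely illustrative aside on that point, a minimal sketch comparing running means of normal and Cauchy samples: the normal sample mean settles toward the true value, while the Cauchy sample mean typically never settles down, no matter how large the sample.

```python
import numpy as np

rng = np.random.default_rng(11)

n = 100_000
normal_samples = rng.normal(0, 1, n)
cauchy_samples = rng.standard_cauchy(n)

for k in (100, 10_000, 100_000):
    print(f"n={k:>7}: normal running mean {normal_samples[:k].mean():+.3f}   "
          f"cauchy running mean {cauchy_samples[:k].mean():+.3f}")
# The normal running mean converges toward 0; the Cauchy one keeps wandering,
# because the Cauchy distribution has no defined mean.
```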

don penman
April 24, 2017 6:46 pm

Something which I believe illustrates what is being said here is the relative nature of the meteorological measurements being taken. When we take measurements of length and weight, we have a standard weight or length to compare them with which does not change; but when we measure the temperature of air circulating over the point where we have placed our thermometer, we compare it with measurements at different places or at different times. Suppose we used the same relative convention when measuring the length of objects. Then we would say things like “the object we are measuring today is longer than the object we measured yesterday at this point,” or “the object you are measuring over there is longer than the object I am measuring here,” and then try to take the average of the lengths of all the objects being measured in order to get a clearer picture of the length of objects generally.

April 25, 2017 3:00 am

It seems to me that taking the average Tmax and Tmin would be a better use of the data than trying to generate the mythical Tavg. From the amount of statistical thrashing performed on the raw data, it’s obvious that a square peg is being forced into a round hole. There are 5,700-some temperature stations in the GHCN system, mostly in the temperate areas and heavily concentrated in the US and Europe. All kinds of shenanigans are performed with the data to create data for areas where no data exist, and then this data is used in the anomaly calculations, unless I’m misunderstanding what I’ve read to this point. I really hope I am, because using predictions as data violates pretty much every rule of measurement.
So rather than doing all of this work for dubious results, why not just use the data one has to generate an anomaly, or Tmax and Tmin, for those stations and call a duck a duck. “Here is the anomaly for the xxxx number of weather stations in the GHCN system for the month of March.”
Then one isn’t projecting temperatures for grid cells up to 1200 km away and calling it “data.”

Brian J in UK
Reply to  James Schrumpf
April 25, 2017 3:20 am

HEAR HEAR!!!! +1000
Why don’t we put a satellite orbiting Mars so it has a constant view of Earth, and use Wien’s Law to measure the temperature of the “Blue Dot” (Sagan) by dividing the constant 2.898 (in mm·K) by the peak wavelength of the energy radiated? This is one of the fundamental laws used by astronomers to measure the temperature of stars. This gives the temperature of the (black) body and is an “all over” reading – hence a proper “average” temperature.
This could be easily accomplished, and could be done in a few years. Then all the argy bargy about average temperatures, bias, not enough stations, projecting onto non station shells etc just falls away. Readings would soon accumulate to give us a definitive take on Earth’s “average” temperature and at a fraction of the annual cost of the climate alarmist establishment’s terrestrial measurements.
BJ in UK.
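For reference, Wien’s displacement law in the form appealed to above uses the peak wavelength: T = b / λ_max with b = 2.898 × 10⁻³ m·K (hence the 2.898 when the wavelength is expressed in millimetres). A minimal worked example, taking Earth’s thermal emission to peak near 10 μm:

```python
# Wien's displacement law: T = b / lambda_max
b = 2.898e-3            # Wien's displacement constant, m*K
lambda_max = 10e-6      # peak emission wavelength for Earth-like temperatures, ~10 micrometres

T = b / lambda_max
print(f"Brightness temperature: {T:.0f} K  (about {T - 273.15:.0f} C)")   # ~290 K
```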

Clyde Spencer
Reply to  Brian J in UK
April 25, 2017 9:32 am

How about just using one of the GOES weather satellites in geosynchronous orbit for a lot less money?

robinedwards36
Reply to  James Schrumpf
April 25, 2017 8:07 am

I’m not sure, but I think that the reported “temperature” is simply (Tmax + Tmin)/2. In effect, some information has already been discarded. Whenever I’ve decided to look at both independently, the grand-scale result turns out to be effectively the same, so I scarcely bother any more unless seasonal effects are what is under scrutiny. Normally I work with monthly data, reducing these to what I call monthly differences and what some would call monthly anomalies – which to me implies that something is wrong with them!

Clyde Spencer
Reply to  James Schrumpf
April 25, 2017 9:29 am

James,
Basically, I agree with you. Instead of trying to demonstrate what the average global temperature is, and how it has changed, I think that it would be preferable to just select the best and longest recording stations and state something to the effect, “Our analysis indicates that the most reliable temperature stations for the last xxx years have a trend in temperature change of x.x degrees C per century, with a 95% certainty of +/- x.x degrees C.”

Bindidon
Reply to  James Schrumpf
April 25, 2017 3:54 pm

James Schrumpf on April 25, 2017 at 3:00 am
It seems to me that taking the average Tmax and Tmin would be a better use of the data than trying to generate the mythical Tavg.
I can’t agree with you.
Firstly, because Tmax and Tmin measurements didn’t exist in earlier times. Moreover, taking their mean to construct the average would lead to errors.
Here you see for example some number columns
18.43 25.71 21.89 22.07 -0.18
18.86 26.10 22.27 22.48 -0.21
17.56 25.25 21.16 21.41 -0.25
15.40 24.07 19.49 19.74 -0.25
13.04 23.29 17.84 18.17 -0.32
10.63 21.44 15.95 16.04 -0.09
10.27 21.44 15.54 15.86 -0.32
11.19 21.38 16.23 16.29 -0.05
12.56 21.54 16.90 17.05 -0.15
14.14 21.90 17.81 18.02 -0.21
15.68 23.18 19.34 19.43 -0.09
17.28 24.57 20.80 20.93 -0.13
representing from left to right
– the monthly absolute value averages for Tmin, Tmax and Tavg of a randomly chosen GHCN station (EAST LONDON, SA) for the period 1981-2010 (i.e. their so called baselines)
– the mean of Tmin and Tmax
– the difference between that mean and Tavg.
The mean of these differences in turn is 0.19 °C. In 30 years! This means that choosing, for a recent period where Tmin and Tmax measurements exist, the mean of Tmin and Tmax to represent Tavg leads to an average error of 0.06 °C per decade.
That is as much as the trend for GISTEMP in the whole XXeth century, or half that of UAH during the satellite era.
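A minimal check of the arithmetic from the columns above (the fifth column is the difference between the Tmin/Tmax mean and Tavg; its average is the ~0.19 °C offset quoted):

```python
# Differences between (Tmin+Tmax)/2 and Tavg for the twelve monthly baselines listed above.
diffs = [-0.18, -0.21, -0.25, -0.25, -0.32, -0.09,
         -0.32, -0.05, -0.15, -0.21, -0.09, -0.13]

mean_offset = sum(diffs) / len(diffs)
print(f"Mean offset: {mean_offset:.2f} C")   # about -0.19 C
```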

Reply to  Bindidon
April 25, 2017 9:27 pm

My point was that we should not be generating a global average temp at all; however, if the urge to quantify just can’t be resisted, a Tmax and Tmin average would be better than Tavg.
It’s obvious that merely averaging the monthly high and low together is a very imprecise and inaccurate average. But so is generating an average monthly anomaly by itself. Without the Tmax and Tmin, one has no context for Tavg, no sense of what’s really happening. It’s also obvious that if one takes a sample of temps and calculates Tavg, and then raises the lowest of the temps for each month by 0.5 degree, the Tavg will increase without the Tmax changing at all. It got “warmer,” but not really.
All these problems are obvious. However, it seems that the preferred choice is the least informative value that could be calculated from a large data sample.
I have really enjoyed reading all of the comments in this thread. I’d love to see this pinned so that the discussion doesn’t get lost just because the thread falls further and further back in time.
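A tiny numerical illustration of the point about nudging only the coldest readings (the sample is invented):

```python
# Raising only the lowest reading increases the average without touching the maximum.
temps = [12.0, 14.5, 18.0, 21.0, 25.5]                      # hypothetical sample, deg C
warmed = [t + 0.5 if t == min(temps) else t for t in temps]

print(sum(temps) / len(temps))      # 18.2 -> original Tavg
print(sum(warmed) / len(warmed))    # 18.3 -> Tavg rose by 0.1
print(max(temps), max(warmed))      # 25.5 25.5 -> Tmax unchanged
```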

Bindidon
Reply to  Bindidon
April 26, 2017 11:14 am

James Schrumpf on April 25, 2017 at 9:27 pm
It’s also obvious that if one takes a sample of temps and calculates Tavg, and then raises the lowest of the temps for each month by 0.5 degree, the Tavg will increase without the Tmax changing at all. It got “warmer,” but not really.
Why do that? Who would do that? I’m afraid that’s no more than another of these ugly ‘realsclimatecience’ myths. And by the way: what has been increasing for a while now is not Tmax! It is Tmin:
http://fs5.directupload.net/images/170426/mcjprbl5.jpg
I have really enjoyed reading all of the comments in this thread.
Me too! I often enjoy the comments here far more than many of the guest posts. That’s in some sense Anthony’s secret: to let us have heavy, often controversial but mostly fruitful discussions about matters that sometimes have little to do with their “official” context 🙂

robinedwards36
April 25, 2017 7:59 am

Bindidon, thanks for your reply and references. I haven’t so far been able to look at the first of these, due to its format. The second seems to be text with a great many numbers but no key whatsoever as to what they are or their format, so I cannot yet say whether I’ll enjoy them!
3. Indeed, most people (Prof Jones excluded) can make Excel produce linear fits, judging from the numerous references to them in threads such as this one, though seldom does anyone see fit to display any of the inferential statistics. I never use spreadsheets, despite their power, preferring the much quicker route for stats/graphics of this sort offered by my own stats package. Thanks for your diagram showing that it can be done. I wondered if you might have omitted a “sarc” after the first line of your 4. If I were doing the fitting with my own stats package I would probably also have included the CIs for single observations from the same data source (in this case a time series), together with the Quenouille adjustment if it made a real-life difference. As you’ll have noticed, Quenouille has virtually no effect when applied to long time series, even if the serial correlation of the linear residuals is very substantial. I see from your annotation that the t-ratio for the regression coefficient is very close to 4.03, implying a probability of around 6E-5. I like to include this sort of thing, in the hope that at least some of my readers gain something from it!
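For readers unfamiliar with the adjustment being discussed, here is a hedged sketch of one common form of it: inflate the OLS trend uncertainty using the lag-1 autocorrelation of the residuals, i.e. an effective sample size of n(1 − r1)/(1 + r1). The series below is synthetic:

```python
# Sketch of a Quenouille-style (lag-1 autocorrelation) adjustment of the
# trend's standard error, applied to a synthetic AR(1) series with a trend.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n = 456                                   # e.g. monthly values over 38 years
x = np.arange(n)

# AR(1) noise around a small trend, so the residuals are serially correlated
noise = np.zeros(n)
for i in range(1, n):
    noise[i] = 0.6 * noise[i - 1] + rng.normal(0, 0.15)
y = 0.0012 * x + noise

fit = stats.linregress(x, y)
resid = y - (fit.intercept + fit.slope * x)
r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]        # lag-1 autocorrelation

n_eff = n * (1 - r1) / (1 + r1)                      # effective sample size
se_adj = fit.stderr * np.sqrt(n / n_eff)             # inflated standard error
print(f"r1={r1:.2f}  t(plain)={fit.slope / fit.stderr:.2f}  t(adjusted)={fit.slope / se_adj:.2f}")
```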
I’ll try to download the data from UAH – may already have it – but can’t post diagrams here. I have to use email.
Hope to see a further comment from you.
Robin

Bindidon
Reply to  robinedwards36
April 25, 2017 8:56 am

robinedwards36 on April 25, 2017 at 7:59 am
Thanks for the reply!
But please don’t try to use the data stored in the UAH reference: I deliberately chose the ugliest one, sorry. It was my sarc method 🙁
It is the UAH baseline (the 1981-2010 average of the absolute temperatures of each 2.5° grid cell in each month) and of no use except for people who want or need to reconstruct UAH absolute temperatures from their anomalies.
And the raw GHCN data looks quite off-putting as well: you have to write some software in R, C++ or whatever else to process it adequately. Please use data derived from it instead.
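If it helps, a minimal sketch of what such a baseline file is for, assuming you already have a monthly anomaly series: the absolute temperature is simply the anomaly plus the baseline value for that calendar month. All numbers below are invented:

```python
# Reconstruct absolute temperatures from anomalies plus a monthly baseline
# (hypothetical values for a single grid cell).
import numpy as np

baseline = np.array([  # invented 1981-2010 monthly means, deg C
    2.1, 2.8, 5.6, 9.4, 13.7, 17.2, 19.5, 19.1, 15.8, 11.0, 6.3, 3.2])

anomaly = np.array([   # invented anomaly series for Jan..Dec of one year
    0.42, -0.10, 0.25, 0.05, 0.31, -0.02, 0.18, 0.27, 0.09, 0.15, 0.40, 0.11])

absolute = anomaly + baseline      # anomaly + that month's baseline
print(absolute.round(2))
```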
*
If you want to start somewhere at UAH’s data in a meaningful way, take e.g.
http://www.nsstc.uah.edu/data/msu/v6.0/tlt/uahncdc_lt_6.0.txt
The file contains the anomalies and trends for 8 zones (each given as global, land and ocean) and 3 regions.
Similar data exist for RSS (UAH’s satellite competitor), for radiosonde balloons, and for the surface records (GISTEMP, NOAA, BEST, HadCRUT, JMA, etc.).
GISS land+ocean for example is in
https://data.giss.nasa.gov/gistemp/tabledata_v3/GLB.Ts+dSST.txt
I have no math or deep stats education either. Thus the Quenouille adjustment is unknown territory to me; I just know that Nick Stokes is quite aware of what it is for:
https://moyhu.blogspot.de/2013/09/adjusting-temperature-series-stats-for.html
I guess Nick is the right interlocutor for such things…

Bindidon
Reply to  robinedwards36
April 25, 2017 9:29 am

A little addendum:
… but can’t post diagrams here. I have to use email.
Why not?

mib8
April 25, 2017 9:46 am

“Averages can serve several purposes. A common one is to increase accuracy and precision of the determination of some fixed property, such as a physical dimension.”
I disagree that it “increases” accuracy or precision. It abandons accuracy and precision in an effort to summarize and simplify.
It throws out precise and accurate observations in an effort to see through both noise and volatility to try to model, elicit, and compare more general aggregate trends.
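To make the two positions concrete, a small sketch (with invented numbers) of what repeated measurement of a fixed quantity can and cannot do: the random scatter of the mean shrinks roughly as sigma/sqrt(N), while a systematic bias is untouched:

```python
# Averaging repeated readings of a fixed quantity shrinks the random scatter
# of the mean (standard error ~ sigma / sqrt(N)) but does nothing for a bias.
import numpy as np

rng = np.random.default_rng(3)
true_value = 10.000        # the fixed property being measured
bias = 0.050               # hypothetical calibration error in the instrument
sigma = 0.020              # random reading error

for n in (1, 10, 100, 1000):
    readings = true_value + bias + rng.normal(0, sigma, size=n)
    mean = readings.mean()
    sem = sigma / np.sqrt(n)
    print(f"N={n:4d}  mean={mean:.4f}  SEM~{sem:.4f}  error vs true={mean - true_value:+.4f}")
```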

Clyde Spencer
Reply to  mib8
April 25, 2017 12:55 pm

mib8,
Did you read my first essay with the citations?

Bindidon
Reply to  mib8
April 26, 2017 8:05 am

mib8 on April 25, 2017 at 9:46 am
I disagree that it “increases” accuracy or precision.
Firstly: to ‘disagree’ is not the same as to ‘falsify’, and the latter is, on a science site, what in principle you should be doing.
Please read the piece linked below, and… try NOT to draw the wrong conclusions.
http://www.ni.com/white-paper/3488/en/

jerry krause
Reply to  mib8
April 27, 2017 12:44 pm

Hi mib8 and Clyde,
When I wrote my earlier comment (jerry krause, April 26, 2017 at 1:51 pm) I had not read mib8’s comment or your (Clyde’s) reply.
We three are possibly each both right and wrong. I would say the averaging process referred to by Clyde in his first article, and in my comment, serves the purpose of showing the precision of direct measurements of variables (results). If the precision is not good, one cannot pretend that the accuracy is good. However, even if the precision is ‘good’, one still cannot claim that the accuracy is good. One must calibrate the instruments and the process used against some known standard or standards to begin to make the argument that the measurements are accurate.
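As a concrete sketch of the calibration point (all values invented): precise readings of a certified standard reveal, and allow correction of, an instrument bias that precision alone would never expose:

```python
# Good precision does not establish accuracy; measuring a known standard
# lets you estimate the bias and correct later measurements for it.
import numpy as np

rng = np.random.default_rng(4)
standard_true = 25.000                 # certified value of the standard, deg C
instrument_bias = -0.30                # unknown to the analyst
sigma = 0.05

# Repeated, very precise readings of the standard ...
readings = standard_true + instrument_bias + rng.normal(0, sigma, size=50)
print(round(readings.std(ddof=1), 3))          # small spread: "good precision"

# ... yet the mean is off; comparison with the standard reveals the bias
estimated_bias = readings.mean() - standard_true
print(round(estimated_bias, 3))                # close to -0.30: "poor accuracy"

# Subsequent measurements of unknowns can then be corrected for this bias
unknown_reading = 18.72
print(round(unknown_reading - estimated_bias, 2))
```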
In “Surely You’re Joking, Mr. Feynman!”, in the account “The 7 Percent Solution”, Richard Feynman judges his reasoning to be likely correct because his hypothesis consistently explained several different results, even though there was a 9 percent difference between his predicted results and the measurements he considered explained, simply because so many things ‘fit’.
Feynman wrote: “The next morning when I got to work I went to Wapstra, Boehm, and Jensen, and told them, “I’ve got it all worked out. Everything fits.” Christy, who was there, too, said, “What beta-decay constant did you use?” “The one from So-and-So’s book.” “But that’s been found out to be wrong. Recent measurements have shown it’s off by 7 percent.”
This created the possibility that Feynman’s hypothesis was off by 16 percent or by only 2 percent. So he and Christy went into separate rooms and pondered.
“Christy came out and I came out, and we both agreed. It’s 2 percent …. (Actually, it was wrong: it was off, really, by 1 percent, for a reason we hadn’t appreciated, which was only understood later by Nicola Cabibbo. So that 2 percent was not all experimental.)”
mib8, you wrote: “It throws out precise and accurate observations in an effort to see through both noise and volatility to try to model, elicit, and compare more general aggregate trends.” This refers to the averaging commonly done with climatic data, and as you can read above, I had reached the same conclusion before I read your statement.
And I am pretty sure Clyde would agree with you and me.
Have a good day, Jerry

robinedwards36
April 25, 2017 11:43 am

Thank you, Bindidon, for your replies and comments. The plot you provide is of course exactly what I would produce from this data set, although I would normally also supply the confidence ranges for single observations from the same data, which tend to be a nasty surprise for anyone expecting to be able to generate a useful (at the practical level) prognostication for the next available observation.
I suppose that I have reluctantly to agree more or less with your estimate of how many readers are interested in confidence intervals. This does not mean that they should not be interested in them! It simply shows just how little understanding readers have of even simple statistical concepts, and even more that they are unlikely to be able to compute the necessary stuff. The 1% that you guess may have some understanding of the background to statistical fitting are surely worth catering for. We have to try to get some of it across to those who regularly display their indifference to or ignorance of what may legitimately be construed from statistical analyses.
I don’t use Excel for any stats. Having written and sold a fairly general stats package some years ago, I find it more than adequate for anything in stats and graphics that could conceivably be useful at the levels we are talking about (and it is vastly simpler to use!).
I may already have posted this inadvertently! However, I find I do have several copies of the UAH assemblies of data for various regions, and have already, over the years, done many analyses based on them. What I’ll do now is to exit into RISC OS and put my latest version into FIRST – my regression package – and look at the Global Land data only. Where can I send this output as an email, please? You are likely to be a bit surprised by my take on the data.
Robin

Bindidon
Reply to  robinedwards36
April 25, 2017 1:22 pm

robinedwards36 on April 25, 2017 at 11:43 am
Where can I send this output as an email, please?
Please simply contact Anthony Watts via
https://wattsupwiththat.com/about-wuwt/contact-2/
and ask him for my email address: this comment will be my agreement for him to do so.
But I must confess that I still don’t understand why you can’t publish your data directly.
I can’t imagine your system being unable to produce graphics in png, jpeg or even pdf format, which you could easily upload using a web site like
http://www.directupload.net/index.php?mode=upload
which provides this little service from Germany for free.
I don’t use Excel for any stats.
If I weren’t so lazy, I would long since have been using R, Matlab and tools that process netCDF formats! But a hobby must remain a hobby.
Have you ever had a look at, e.g.,
https://moyhu.blogspot.de/p/temperature-trend-viewer.html
Maybe you appreciate this interface…

jerry krause
April 26, 2017 1:51 pm

Hi Clyde,
A problem with a blog site such as this one is: too many articles, too many comments. However, I see that you do continue to review the comments and respond as you see the need.
Your first article, which you asked readers to read, began the way the first couple of lectures of my Chemistry Quantitative Analysis course began in the fall of 1960, when I was a second-year university student. In that course we learned the need to do at least three sets of analyses to see what our precision might be, and we calculated the simple average of the three results so we could calculate the deviation of each result from that average. We did this so we could statistically determine whether the result with the greatest deviation from the average could be dismissed and the average of the two closer results used instead. Because the unknown we analysed quantitatively was a standard sample whose composition had been confirmed by ‘trained’ chemists, our results were graded by the deviation of our averaged result from this standardized value, which was considered to be the ‘accurate’ result.
So, I consider this to be the primary useful purpose of the averaging process in science.
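A hedged sketch of that replicate-rejection procedure, using Dixon’s Q test, a common choice in quantitative-analysis courses for three results; the replicate values below are invented:

```python
# Dixon's Q test for three replicates: reject the most deviant result if
# Q = (gap to its nearest neighbour) / (range) exceeds the critical value.
def q_test_three(results, q_crit=0.941):   # ~90% critical value for n = 3
    """Return (kept, rejected): reject the most deviant of 3 results if Q > Q_crit."""
    s = sorted(results)
    spread = s[-1] - s[0]
    gap_low, gap_high = s[1] - s[0], s[-1] - s[-2]
    if gap_low >= gap_high:
        q, suspect = gap_low / spread, s[0]
    else:
        q, suspect = gap_high / spread, s[-1]
    if q > q_crit:
        return [r for r in results if r != suspect], suspect
    return list(results), None

replicates = [20.15, 20.17, 20.68]          # e.g. % analyte found in the unknown
kept, rejected = q_test_three(replicates)
print(rejected)                              # 20.68 is dismissed here (Q ~ 0.96)
print(round(sum(kept) / len(kept), 3))       # average of the two closer results
```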
However, when the science is climatology, the average values of variables measured over a long period of years, like temperature and precipitation, are necessary to characterize the yearly ‘climate’ of a given location, commonly divided into months. However, given the long-term average, a monthly average of a given year can vary widely from the monthly value of the previous and/or following year and from the long-term average. The critical point in the use of the averaging process is that there is no accurate value for any of these monthly and yearly fundamental variables. Nor is one expected.
Hence, the common practice of averaging the temperature of a day, a month, or a year destroys the information about what is actually occurring during that day, month, or year. At most commercial airports, fundamental meteorological variables are measured (observed) and reported hourly. This practice quickly generates a lot of numbers, which can quickly become mind-boggling. But if we are ever to understand how one day can be greatly unlike the previous and/or following day, or unlike the same day of the previous year, we must study at least the hourly data, which is readily available if a scholar wants to make the effort to actually understand this.
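A small sketch of why the hourly record matters: the mean of 24 hourly readings and the (Tmax + Tmin)/2 shortcut for the same day generally differ, because the diurnal curve is not symmetric. The hourly profile below is invented:

```python
# Compare the true daily mean of hourly readings with the (Tmax + Tmin)/2 shortcut.
import numpy as np

hours = np.arange(24)
# Asymmetric diurnal cycle: cool nights, a sharp afternoon peak
hourly = 12 + 8 * np.exp(-((hours - 15) ** 2) / 18.0)

true_daily_mean = hourly.mean()
midrange = (hourly.max() + hourly.min()) / 2

print(round(true_daily_mean, 2))
print(round(midrange, 2))
print(round(midrange - true_daily_mean, 2))    # the information lost by the shortcut
```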
Have a good day, Jerry