The Meaning and Utility of Averages as it Applies to Climate

Guest essay by Clyde Spencer 2017

Introduction

I recently had a guest editorial published here on the topic of data error and precision. If you missed it, I suggest that you read it before continuing with this article; what follows will then make more sense, and I won't feel the need to go back over the fundamentals. What follows is prompted, in part, by some of the comments on the original article, and is a discussion of how the reported average global temperatures should be interpreted.

Averages

Averages can serve several purposes. A common one is to increase accuracy and precision of the determination of some fixed property, such as a physical dimension. This is accomplished by confining all the random error to the process of measurement. Under appropriate circumstances, such as determining the diameter of a ball bearing with a micrometer, multiple readings can provide a more precise average diameter. This is because the random errors in reading the micrometer will cancel out and the precision is provided by the Standard Error of the Mean, which is inversely related to the square root of the number of measurements.
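
For readers who want to see the arithmetic, the following is a minimal sketch (in Python, with invented numbers) of the ball-bearing case: repeated readings of a fixed quantity, where the Standard Error of the Mean shrinks as the square root of the number of measurements. The "true" diameter and the size of the reading error are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(42)

true_diameter = 10.000          # mm, fixed property (illustrative assumption)
read_error_sd = 0.005           # mm, random reading error (assumed)

readings = true_diameter + rng.normal(0.0, read_error_sd, size=100)

mean = readings.mean()
sd = readings.std(ddof=1)                 # spread of the individual readings
sem = sd / np.sqrt(len(readings))         # Standard Error of the Mean

print(f"mean = {mean:.4f} mm, SD = {sd:.4f} mm, SEM = {sem:.4f} mm")
# The SEM is about 10x smaller than the SD because sqrt(100) = 10;
# averaging helps here only because the diameter itself never changes.
```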

Another common purpose is to characterize a variable property by making multiple representative measurements and describing the frequency distribution of the measurements. This can be done graphically, or summarized with statistical parameters such as the mean, standard deviation (SD) and skewness/kurtosis (if appropriate). However, since the measured property is varying, it becomes problematic to separate measurement error from the property variability. Thus, we learn more about how the property varies than we do about the central value of the distribution. Yet, climatologists focus on the arithmetic means, and the anomalies calculated from them. Averages can obscure information, both unintentionally and intentionally.
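
To contrast the second use of averages, here is a similar sketch that summarizes a property that genuinely varies. The skewed sample below is synthetic; the point is only that the mean is one of several descriptors, and on its own it hides the shape of the distribution.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic, left-skewed "measurements" of a varying property (illustrative only)
samples = 50.0 - rng.gamma(shape=2.0, scale=10.0, size=10_000)

print(f"mean     = {samples.mean():.2f}")
print(f"SD       = {samples.std(ddof=1):.2f}")
print(f"skewness = {stats.skew(samples):.2f}")      # negative => long cold tail
print(f"kurtosis = {stats.kurtosis(samples):.2f}")
# The mean alone hides the asymmetry; the SD and skewness carry the
# information about variability that an "average" obscures.
```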

With the above in mind, we need to examine whether taking numerous measurements of the temperatures of land, sea, and air can provide us with a precise value for the ‘temperature’ of Earth.

Earth’s ‘Temperature’

By convention, climate is usually defined as the average of meteorological parameters over a period of 30 years. How can we use the available temperature data, intended for weather monitoring and forecasting, to characterize climate? The approach currently used is to calculate the arithmetic mean for an arbitrary base period, and to subtract that base-period mean from modern temperatures (either individual temperatures or averages) to obtain what is called an anomaly. However, just what does it mean to collect all the temperature data and calculate the mean?
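
For concreteness, a minimal sketch of how such an anomaly is typically formed, with invented numbers standing in for a base period and a modern value:

```python
import numpy as np

# Invented monthly means (deg C) for one station and one calendar month
base_period = np.array([14.2, 13.9, 14.5, 14.1, 14.3])   # stand-in for base-period values
current     = 15.0                                        # stand-in for a recent value

baseline = base_period.mean()
anomaly  = current - baseline    # modern value minus the base-period mean
print(f"baseline = {baseline:.2f} C, anomaly = {anomaly:+.2f} C")
```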

If Earth were in thermodynamic equilibrium, it would have one temperature, which would be relatively easy to measure. Earth does not have one temperature; it has an infinitude of them, varying continuously laterally, vertically, and with time. The apparent record low temperature is -135.8° F and the highest recorded temperature is 159.3° F, for a maximum range of 295.1° F, giving an estimated standard deviation of about 74° F using the Empirical Rule. Changes over periods of less than a year are both random and seasonal, while longer time series contain periodic changes as well. The question is whether sampling a few thousand locations over a period of years can provide an average with defensible value in demonstrating a small rate of change.

One of the problems is that water temperatures tend to be stratified. Water surface-temperatures tend to be warmest, with temperatures declining with depth. Often, there is an abrupt change in temperature called a thermocline; alternatively, upwelling can bring cold water to the surface, particularly along coasts. Therefore, the location and depth of sampling is critical in determining so-called Sea Surface Temperatures (SST). Something else to consider is that because water has a specific heat that is 2 to 5 times higher than common solids, and more than 4 times that of air, it warms more slowly than land! It isn’t appropriate to average SSTs with air temperatures over land. It is a classic case of comparing apples and oranges! If one wants to detect trends in changing temperatures, they may be more obvious over land than in the oceans, although water-temperature changes will tend to suppress random fluctuations. It is probably best to plot SSTs with a scale 4-times that of land air-temperatures, and graphically display both at the same time for comparison.

Land air-temperatures have a similar problem in that there are often temperature inversions, meaning it is colder near the surface than it is higher up. This is the opposite of what the lapse rate predicts, namely that temperatures decline with elevation in the troposphere. That raises another problem. Temperatures are recorded over an elevation range from below sea level (Death Valley) to over 10,000 feet. Unlike the Universal Gas Law that defines the properties of a gas at a standard temperature and pressure, all the weather temperature-measurements are averaged together to define an arithmetic mean global-temperature without concern for standard pressures. This is important because the gas law predicts that the temperature of a parcel of air will decrease with decreasing pressure, and this gives rise to the lapse rate.
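
For what it is worth, meteorology does have a standard way of putting temperatures measured at different pressures on a common footing: the potential temperature. I am not suggesting the surface indices use it; the sketch below simply shows the textbook conversion, with an assumed station pressure.

```python
def potential_temperature(T_kelvin, p_hPa, p0_hPa=1000.0, kappa=0.286):
    """Poisson's equation: the temperature a parcel would have if brought
    adiabatically to the reference pressure p0 (kappa = R/cp for dry air)."""
    return T_kelvin * (p0_hPa / p_hPa) ** kappa

# Illustrative values: a 20 C reading at a high-elevation station (~700 hPa assumed)
T = 293.15          # K
p = 700.0           # hPa
print(f"theta = {potential_temperature(T, p) - 273.15:.1f} C")
# ~51.5 C: referenced to 1000 hPa, the 20 C mountain reading represents
# much "warmer" air than a 20 C reading taken at sea level.
```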

Historical records (pre-20th Century) are particularly problematic because temperatures typically were only read to the nearest 1 degree Fahrenheit, by volunteers who were not professional meteorologists. In addition, the state of the technology of temperature measurements was not mature, particularly with respect to standardizing thermometers.

Climatologists have attempted to circumvent the above confounding factors by rationalizing that accuracy, and therefore precision, can be improved by averaging. Basically, they take 30-year averages of annual averages of monthly averages, thus smoothing the data and losing information! Indeed, the Law of Large Numbers predicts that the accuracy of sampled measurements can be improved (if systematic biases are not present!), particularly for probabilistic events such as the outcomes of coin tosses. However, if the annual averages are derived from the monthly averages, instead of the daily averages, then the months should be weighted according to the number of days in the month; it isn't clear that this is being done. Furthermore, even daily averages will suppress (smooth) extreme high and low temperatures and reduce the apparent standard deviation.
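
If annual means really are built from monthly means, the month lengths matter slightly. Here is a sketch of the day-weighted versus unweighted calculation, using invented monthly values:

```python
import numpy as np

days_in_month = np.array([31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31])
# Invented monthly mean temperatures (deg C), for illustration only
monthly_means = np.array([-2.0, 0.5, 5.0, 11.0, 16.5, 21.0,
                          24.0, 23.0, 18.0, 11.5, 5.0, -0.5])

unweighted = monthly_means.mean()
weighted   = np.average(monthly_means, weights=days_in_month)

print(f"unweighted annual mean   = {unweighted:.3f} C")
print(f"day-weighted annual mean = {weighted:.3f} C")
# The difference is small but systematic; it should be documented either way.
```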

However, even temporarily ignoring the problems that I have raised above, there is a fundamental problem with attempting to increase the precision and accuracy of air-temperatures over the surface of the Earth. Unlike the ball bearing with essentially a single diameter (with minimal eccentricity), the temperature at any point on the surface of the Earth is changing all the time. There is no unique temperature for any place or any time. And, one only has one opportunity to measure that ephemeral temperature. One cannot make multiple measurements to increase the precision of a particular surface air-temperature measurement!

Temperature Measurements

Caves are well known for having stable temperatures. Many vary by less than ±0.5° F annually. It is generally assumed that the cave temperatures reflect an average annual surface temperature for their locality. While the situation is a little more complex than that, it is a good first-order approximation. [Incidentally, there is an interesting article by Perrier et al. (2005) about some very early work done in France on underground temperatures.] For the sake of illustration, let’s assume that a researcher has a need to determine the temperature of a cave during a particular season, say at a time that bats are hibernating. The researcher wants to determine it with greater precision than the thermometer they have carried through the passages is capable of. Let’s stipulate that the thermometer has been calibrated in the lab and is capable of being read to the nearest 0.1° F. This situation is a reasonably good candidate for using multiple readings to increase precision because over a period of two or three months there should be little change in the temperature and there is high likelihood that the readings will have a normal distribution. The known annual range suggests that the standard deviation should be less than (50.5 – 49.5)/4, or about 0.3° F. Therefore, the expected standard deviation for the annual temperature change is of the same order of magnitude as the resolution of the thermometer. Let’s further assume that, every day when the site is visited, the first and last thing the researcher does is to take the temperature. After accumulating 100 temperature readings, the mean, standard deviation, and standard error of the mean are calculated. Assuming no outlier readings and that all the readings are within a few tenths of the mean, the researcher is confident that they are justified in reporting the mean with one more significant figure than the thermometer was capable of capturing directly.
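
A small simulation of this cave scenario, under the stated assumptions (an essentially constant true temperature, normally distributed readings, and a thermometer resolution of 0.1° F), shows why the extra significant figure is defensible here:

```python
import numpy as np

rng = np.random.default_rng(1)

true_temp = 50.03            # F, assumed essentially constant over the season
reading_sd = 0.25            # F, assumed spread of individual readings
n = 100

# Simulate readings, then round to the 0.1 F resolution of the thermometer
readings = np.round(rng.normal(true_temp, reading_sd, size=n), 1)

mean = readings.mean()
sem = readings.std(ddof=1) / np.sqrt(n)
print(f"mean = {mean:.2f} F  (SEM ~ {sem:.3f} F)")
# With ~100 readings the SEM is of order 0.02 to 0.03 F, which is why reporting
# one extra significant figure beyond the 0.1 F resolution is defensible here.
```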

Now, let's contrast this with common practice in climatology. Climatologists use meteorological temperatures that may have been read by individuals with less invested in diligent observations than the bat researcher probably has. Or temperatures, such as those from the automated ASOS stations, may be rounded to the nearest degree Fahrenheit and conflated with temperatures actually read to the nearest 0.1° F. (At the very least, the samples should be weighted inversely to their precision, as sketched below.) Additionally, because the data suffer averaging (smoothing) before the 30-year baseline-average is calculated, the data distribution appears less skewed and more normal, and the calculated standard deviation is smaller than what would be obtained if the raw data were used. It isn't just the mean temperature that changes annually. The standard deviation and skewness (kurtosis) are certainly changing also, but this isn't being reported. Are the changes in SD and skewness random, or is there a trend? If there is a trend, what is causing it? What, if anything, does it mean? There is information that isn't being examined and reported that might provide insight into the system dynamics.
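
The parenthetical point about weighting can be made concrete with inverse-variance weights. The uncertainties in the sketch below are assumptions chosen for illustration (roughly ±0.5° F for values rounded to the nearest degree, ±0.05° F for values read to 0.1° F), not official figures.

```python
import numpy as np

# Two groups of readings of the "same" quantity, with different resolutions
coarse = np.array([72.0, 72.0, 71.0])      # rounded to the nearest degree F
fine   = np.array([71.3, 71.4, 71.2])      # read to the nearest 0.1 F

# Assumed 1-sigma uncertainties for each reading (illustrative values)
sigma  = np.array([0.5, 0.5, 0.5, 0.05, 0.05, 0.05])
values = np.concatenate([coarse, fine])

weights = 1.0 / sigma**2                   # inverse-variance weighting
weighted_mean = np.sum(weights * values) / np.sum(weights)

print(f"plain mean    = {values.mean():.2f} F")
print(f"weighted mean = {weighted_mean:.2f} F")
# The precise readings dominate the weighted mean; a plain mean lets the
# coarse, rounded values pull the result around as if equally trustworthy.
```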

Immediately, the known high and low temperature records (see above) suggest that the annual collection of data might have a range as high as 300° F, although something closer to 250° F is more likely. Using the Empirical Rule to estimate the standard deviation, a value of over 70° F would be predicted for the SD. Being more conservative, appealing to Chebyshev's Theorem, and dividing by 8 instead of 4, still gives an estimate of over 31° F. Additionally, there is good reason to believe that the frequency distribution of the temperatures is skewed, with a long tail on the cold side. The core of this argument is that temperatures colder than 50° F below zero are obviously more common than temperatures over 150° F, while the reported mean is near 50° F for global land temperatures.
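
The two rule-of-thumb estimates in that paragraph amount to the following arithmetic (using the 250° F figure described above as "more likely"):

```python
# Rough SD estimates from a known range (illustrative arithmetic only)
temp_range = 250.0   # deg F, the "more likely" annual range quoted above

sd_empirical = temp_range / 4.0   # Empirical Rule: ~95% of a normal lies within +/-2 SD
sd_chebyshev = temp_range / 8.0   # Chebyshev-style bound: ~94% within +/-4 SD, any shape

print(f"Empirical-Rule estimate : {sd_empirical:.1f} F")
print(f"Chebyshev-style estimate: {sd_chebyshev:.1f} F")
# Either way the implied spread is tens of degrees, vastly larger than the
# hundredths-of-a-degree precision claimed for the global mean.
```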

The following shows what I think the typical annual raw data should look like plotted as a frequency distribution, taking into account the known range, the estimated SD, and the published mean:

[Figure: hypothesized frequency distribution of a typical year's raw global temperatures (thick red line), with the cave-temperature distribution (short green line) shown for comparison.]

The thick, red line represents a typical year's temperatures, and the little stubby green line (approximately to scale) represents the cave-temperature scenario above. I'm confident that the cave-temperature mean is precise to about 1/100th of a degree Fahrenheit, but despite the huge number of measurements of Earth temperatures, the shape and spread of the global data do not instill the same confidence in me for global temperatures! The distribution obviously has a much larger standard deviation than the cave-temperature scenario, and dividing by the square root of the number of samples cannot be justified as a way to remove random error when the parameter being measured is never twice the same value. The multiple averaging steps in handling the data reduce extreme values and the standard deviation. The question is, "Is the claimed precision an artifact of smoothing, or does the process of smoothing provide a more precise value?" I don't know the answer to that. However, it is certainly something that those who maintain the temperature databases should be prepared to answer and justify!
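
The claim that successive averaging shrinks the apparent spread is easy to demonstrate with synthetic data. The sketch below uses an invented seasonal cycle plus noise, not real station data:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic "daily" temperatures: a seasonal cycle plus noise (not real data)
years, days_per_year = 30, 365
t = np.arange(years * days_per_year)
daily = 50 + 30 * np.sin(2 * np.pi * t / days_per_year) + rng.normal(0, 10, t.size)

annual_means = daily.reshape(years, days_per_year).mean(axis=1)

print(f"SD of raw daily values : {daily.std(ddof=1):6.2f}")
print(f"SD of annual means     : {annual_means.std(ddof=1):6.2f}")
print(f"grand mean             : {annual_means.mean():6.2f}")
# The annual means cluster tightly around 50 even though individual days
# range over more than a hundred degrees; a precision quoted from the final
# averaging stage says nothing about the spread of the raw readings.
```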

Summary

The theory of Anthropogenic Global Warming predicts that the strongest effects should be observed during nighttime and wintertime lows. That is, the cold-tail on the frequency distribution curve should become truncated and the distribution should become more symmetrical. That will increase the calculated global mean temperature even if the high or mid-range temperatures don’t change. The forecasts of future catastrophic heat waves are based on the unstated assumption that as the global mean increases, the entire frequency distribution curve will shift to higher temperatures. That is not a warranted assumption because the difference between the diurnal highs and lows has not been constant during the 20th Century. They are not moving in step, probably because there are different factors influencing the high and low temperatures. In fact, some of the lowest low-temperatures have been recorded in modern times! In any event, a global mean temperature is not a good metric for what is happening to global temperatures. We should be looking at the trends in diurnal highs and lows for all the climatic zones defined by physical geographers. We should also be analyzing the shape of the frequency distribution curves for different time periods. Trying to characterize the behavior of Earth’s ‘climate’ with a single number is not good science, whether one believes in science or not!

mikebartnz

Very good article.
I always understood that in science you never use a precision greater than that of the least precise measurement, but that is exactly what the climatologists do.
In the past, would someone out in a blizzard have cared how precise he was?

Stephen Richards

Or you need one more decimal place of accuracy than the figure you use or quote.

Clyde Spencer

Stephen, as I remarked in the article that preceded this, in performing multiplications/divisions, one more significant figure than the least precise measurement is sometimes retained in the final answer. However, to be conservative, the answer should be rounded off to the same number of significant figures as the least precise measurement. The number of significant figures to the right of the decimal point implies the resolution or precision, and says nothing about accuracy.

gnomish

so what do you do when you know a measurement is 3 1/3? those numbers are simple integers with one significant figure. the computer will not follow the rules, will it?
btw- this past autumn, there were red leaves and green leaves. what was the average color?
does it matter that you have more than twice the average number of testicles and fewer than the average number of ovaries?
as far as the number of arms and legs go- most people on the planet have more than the average number.
if my head is in death valley and my feet in vostok, do i feel just fine – on the average?

george e. smith

The random errors obtained in repetitive measures of the same item DO NOT cancel out.
That conclusion presupposes that positive errors are equally likely as negative errors.
A micrometer, for example, is far less likely to give too low a reading than too high a reading.
The average of many tests is more likely to converge on the RMS value of the measurements.
Practical measuring devices do not have infinite resolution so errors do not eventually tend towards zero. There also is always some systematic error that is not going to go away.
But I will give you one point. It IS necessary to be measuring the exact same thing, for statistics to have any meaning at all.
Averaging individual events, that never repeat is numerical origami; they aren’t supposed to be the same number in the first place. So the result has no meaning other than it is the “average” or the “median”, or what ever algorithmic expression you get out of any standard text book on statistical mathematics.
GISSTemp is GISSTemp, and nothing more. It has no meaning beyond the description of the algorithm used to compute it. Same goes for HADCrud; it is just HADCrud; nothing more.
G

george e. smith

“””””….. Unlike the Universal Gas Law that defines the properties of a gas at a standard temperature and pressure, …..”””””
Is this anything like the “Ideal Gas Law ” ??
If you have a gas at …… standard temperature and pressure ….. as you say; the only remaining variables are the number of moles of the gas and the occupied volume.
So what good is that; the occupied volume is a constant times the number of moles of the ideal gas. Whoopee ! what use is it to know that. Doesn’t seem to have anything to do with any lapse rates.
G

Paul Westhaver

Hello, My name is Error Bars. Nice to meet you. I am the offspring of Mrs Measurement Accuracy and Mr Instrument Precision. I know that I am an ugly tedious spud like my sister Uncertainty and cousin Confidence Level. My best friend is loved by everyone. His name is Outlier.
If you don’t know me your BSc. is a P.O.S. Go back to school.

Greg

Good topic for discussion. I published an article on this at Climate Etc. last year.

It is probably best to plot SSTs with a scale 4-times that of land air-temperatures, and graphically display both at the same time for comparison.

A factor of four would compare the SHC of water to that of dry rock. Most ground is more like wet rock, so a somewhat smaller scaling factor seems about right when comparing BEST land temps to SST.
https://climategrog.wordpress.com/2016/02/09/are-land-sea-averages-meaningful-2/

It is a classic case of 'apples and oranges'. If you take the average of an apple and an orange, the answer is a fruit salad. It is not a useful quantity for physics-based calculations such as the earth energy budget and the impact of radiative "forcings".

BallBounces

In fact, reputable climatolonianologists always wait until they have at least two bristlecone pines before making paradigm-shifting prognostications.

Philip Mulholland

Superb article. I particularly like this:-

It is probably best to plot SSTs with a scale 4-times that of land air-temperatures, and graphically display both at the same time for comparison.

Never seen it done however.

Samuel C Cogar

“Yes”, I agree, a superb commentary.
And I thank you, Clyde Spencer, for posting it.
I was especially impressed because it contained many of the factual entities that I have been posting to these per se "news forums" and "blogs" for the past 20 years …… in what has seemed to me to be a "futile attempt" on my part to educate and/or convince the readers and posters of said "factual entities".
Cheers, Sam C

Clyde Spencer

Sam C,
Don’t feel like the Lone Ranger. Some of the things that I have published here in the past have elicited responses, both favorable and otherwise, but seem to diffuse out of the consciousness of the readers as quickly as a balloon popping when it runs up against a rose bush.

Ian H

Our everyday experience is that it takes a lot more energy to warm up water than it does air, so only 4 times the specific heat doesn't seem nearly enough. The reason, of course, is that specific heat is defined per kilogram, not per cubic metre, and water is a lot more dense. A cubic metre of air weighs 1.2 kg while a cubic metre of water weighs 1000 kg.
Now it is important to note that while heat capacity is defined in terms of mass, there is no special reason for this. We just need a measure of how much stuff we are talking about to define heat capacity and mass happens to be convenient. But we could equally well have used volume and defined heat capacity in terms of how much energy it takes to heat a cubic meter of stuff instead of a kilogram. The heat capacity per cubic metre for water would then be roughly 3,300 times larger than the heat capacity per cubic metre for air.
So now the question is when it comes to averaging SST and air temperatures which is more reasonable.
The per-volume approach is equivalent to allowing the top metre of water to reach thermal equilibrium with the top metre of air. The per-mass approach is equivalent to allowing the top metre of water to reach thermal equilibrium with the top 830 metres of air. I find it hard to say that either method is correct, but to me the first, if anything, seems more reasonable. And if we did this using the per-volume measure of heat capacity, then we should be weighting SST and air temperatures not at 4:1 but in the ratio of 3,300:1.
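
The arithmetic behind the 3,300:1 and 830 m figures above can be roughly reproduced with round-number property values (assumed, not measured); a quick sketch:

```python
# Rough property values (assumed round numbers, not measurements)
rho_water, cp_water = 1000.0, 4186.0    # kg/m3, J/(kg K)
rho_air,   cp_air   = 1.2,    1005.0    # kg/m3, J/(kg K)

per_kg_ratio = cp_water / cp_air                             # ~4.2
per_m3_ratio = (rho_water * cp_water) / (rho_air * cp_air)   # ~3,500
air_column_m = rho_water / rho_air                           # ~830 m of air per 1 m of water, by mass

print(f"per-kg heat capacity ratio  ~ {per_kg_ratio:.1f}")
print(f"per-m3 heat capacity ratio  ~ {per_m3_ratio:.0f}")
print(f"equivalent air column       ~ {air_column_m:.0f} m")
```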

Clyde Spencer

Ian H,
We can argue about the correct measurement procedure for air and SSTs, but the point I was trying to make is that the two data sets should be analyzed separately, and not conflated, because water and air respond quantitatively differently to heating.

Owen in GA

Ian,
In order to use volume, you have to pin down at what pressure and temperature. The volume of things can change quite radically when pressure or temperature changes.
Even worse, the values are determined independently and quite often change with the temperature and pressure of the material. So we say the heat capacity of something is X, but that is really for standard temperature and pressure.

Samuel C Cogar

The only correct measurement procedure for air and SSTs would be to use “liquid immersed thermocouples”.
All Surface Temperature Stations, ……. be they on the near-surface of the land or in the near-surface waters of the oceans, ……. should consist of a round or oval container with a two (2) gallon capacity ….. that is filled with a -40 F anti-freeze …… and with two (2) thermocouples suspended in the center of the liquid interior that are connected to an electronic transmitting device.
The “liquid” eliminates all of the present day “fuzzy math averaging” of the recorded near-surface air temperature “spikes” that occur hourly/daily/weekly as a result of local changes in cloud cover, winds and/or precipitation events.

george e. smith

“””””…..
Samuel C Cogar
April 24, 2017 at 4:03 am
The only correct measurement procedure for air and SSTs would be to use “liquid immersed thermocouples”. …..”””””
Well thermocouples are a totally crude way to measure Temperatures. They are highly non linear, and moreover they require a reference thermocouple held at some standard reference Temperature.
A "thermocouple" is a bimetallic electrical connection that generates a voltage difference between the two metals depending on the Temperature at that junction.
Unfortunately you cannot measure that voltage. To do so, you would have to connect those two metallic “wires” to some sort of “voltmeter”.
Well that voltmeter has two input terminals that are usually made from the same material usually brass or some other copper alloy metal.
So when you connect your two wires to the voltmeter’s two terminals, now you have a loop containing a total of three thermocouples, with the possibility of more thermocouples inside the voltmeter, that you don’t know anything about.
But now you have at least three thermocouples in series, each with its own Temperature/emf characteristic, all of them non-linear, and for all you know each of the three thermocouples is at a different and probably unknown Temperature.
Thermocouples suck; but not in any useful way.
G

Samuel C Cogar

Thermocouples suck; but not in any useful way.

Well now, ……. george e. smith, …… back in the late 60's those thermocouples were nifty devices for R&D environmental testing, …….. so excuse the hell out of me for even mentioning them.
Are you also going to find problems with “newer” devices, such as these, to wit:

An IC Temperature Sensor is a two terminal integrated circuit temperature transducer that produces an output current proportional to absolute temperature. The sensor package is small with a low thermal mass and a fast response time. The most common temperature range is 55 to 150°C (-58 to 302°F). The solid state sensor output can be analog or digital.
Immersion IC Sensors
An IC temperature probe consists of solid state sensor housed inside a metallic tube. The wall of the tube is referred to as the sheath of the probe. A common sheath material is stainless steel. The probe can come with or without a thread. Common applications are automotive/industrial engine oil temperature measurement, air intake temperature, HVAC, system and appliance temperature monitoring. http://www.omega.com/prodinfo/Integrated-Circuit-Sensors.html#choose

Most anything would be a great improvement over what is now being employed.

george e. smith

Well Samuel, I guess you didn't even address the issues I raised: the fact that a "thermocouple" is actually just one part of a closed circuit that always has to have a minimum of two different materials, and at least two separate thermocouples, and more often than not there are three of each.
And each one, whether two or three of them generates a couple EMF that is a function of the Temperature at THAT point.
For a two material and two junction circuit, the response depends on the difference of the two temperatures, so you need one couple at some reference temperature. And back in the 60s when those thermocouples were nifty, things like Platinum resistance thermometers were well known and quite robust.
And many other techniques have been developed since, including quartz crystal oscillators with a highly linear TC.
And yes today, modern semi-conductor ” bandgap ” sensors, can actually respond directly to absolute kelvin temperatures. Yes they also do need to be calibrated against certified standards, for accuracy, but for resolution and linearity they are about as good as it gets.
Thermo-couples are quite non-linear, as is the Platinum resistance, but for long term stability, PTRs are hard to beat.
G

I agree with this but I’d like to offer a different view.
There’s nothing inherently wrong with how temperature anomalies are calculated. As long as it stays within the limits of its arguments.
The reason why, say, the Met Office's estimate of SST uncertainty (0.1 degrees C) is fine scientifically is that science itself allows an argument to be made using a mix of assumptions with some measurement. You don't have to always have data. If one day it can be shown that your assumptions have merit then your work will have more relevance.
I mean two words can demonstrate how pure theoretical musings can still be countenanced and funded: String Theory!
So with the NOAA or Met Office temperature anomaly calculations you have the following arguments (paraphrased) : “Assuming that the underlying temperature distribution follows a normal distribution and that measurement errors are random and independent, multiple measurements will produce less uncertainty” etc etc you know that type of thing.
And this is all fine as long as the paper states clearly the limits of the argument. So I can read the Met Office papers and say, okay I see where you have come to that conclusion and if it were true that’s interesting.
The problem is not the construction of a scientific argument. The problem is the application of the argument to the real world, bypassing the verification process.
Add that in with the theorists/modellers tending to believe their own ideas, mix it in with galactic mass proportions of funding, and pretty much you have the current God-cult of temperature anomalies varying to hundredths of a degree.
And for added bonus you have the same theorists trying to tell empirical scientists and engineers that they know better than them about temperature measurements. Or that they know better about the under-appreciated black art of a science: metrology.

Kaiser Derden

any experiment (and I consider any attempt to “measure” the global average an experiment) that explains itself with the words “assuming that … ” has basically invalidated its use since they are openly admitting they have not locked down all the variables … any experiment which has variables with uncertain values (anything that you claim is based on an assumption is by definition “uncertain”) is no longer an experiment but simply a bias display …

Yes for experiments. But science also allows for theoretical papers that present a possible scenario. There is nothing wrong with this per se as a hypothetical exercise but there is a very real problem if this is believed to hold any weight in the real world. Especially since everyday technology has been subjected to tough measurement criteria before being deemed “safe”.

Clyde Spencer

MC75,
I don’t have a serious problem with the procedure of calculating anomalies because it is a reasonable approach to dealing with the issue of weather stations being at different elevations. However, I am questioning the precision with which the results are reported routinely.

Roger Graves

The only way in which a truly representative global temperature could be estimated is to set up an observatory on, say, Mars, and view our planet, in Carl Sagan’s words, as a ‘pale blue dot’. You could then measure an overall radiation temperature for the said pale blue dot. I’m not altogether sure what that radiation temperature would actually represent, but it is reasonable to assume that, if global warming did occur, it would be reflected in that radiation temperature.
Attempting to measure an overall global temperature on Earth is a classic case of not being able to see the forest for the trees. There are so many random local variables to consider that it would be almost impossible to take them all into consideration. Two independent groups undertaking the task, if there was no collusion between them, would almost certainly come up with significantly different values.

BJ in UK

As Roger Graves says, per Carl Sagan, view Earth from Mars as a “blue dot.”
This is not as way out as it sounds. Using Wien's law, the temperature of Earth can be measured directly by dividing 2.898 by the wavelength of Earth's peak emission.
Such a satellite could be put in a suitable Mars orbit in a year or so, at relatively low cost, given what’s at stake.
Then direct measurement with no anomalies, averages etc., to put these arguments to rest.
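
Wien's displacement law referenced here is simply T = b / λ_max, with b ≈ 2898 μm·K. A one-line sketch, where the peak wavelength is an assumed, typical value for Earth's outgoing infrared:

```python
# Wien's displacement law: T = b / lambda_max
b = 2898.0            # micrometre-kelvin
lambda_peak = 10.0    # micrometres, an assumed typical peak for Earth's emission

T = b / lambda_peak
print(f"brightness temperature ~ {T:.0f} K ({T - 273.15:.0f} C)")
# ~290 K; as later comments note, what this "temperature" represents
# (surface vs. high-altitude emission) is itself debatable.
```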

I think we agree on this Clyde. As a scientific exercise I don’t see a problem with it. As a national standard to be used in policy there is a very big problem with this approach.

Owen in GA

BJ,
I would place such a satellite in a leading or trailing Lagrange point (preferably one each). Then the satellite would always be the same approximate distance from the Earth.

“I’m not altogether sure what that radiation temperature would actually represent, but it is reasonable to assume that, if global warming did occur, it would be reflected in that radiation temperature.”
It's mixed. Part of the spectrum represents the surface (the atmospheric window). And a large part is from GHG at high altitude. But it's no guide to Earth temperature. Total IR out has to equal (long-term) total sunlight absorbed; it doesn't change. The AW part increases with warming, and would give some kind of guide. The rest decreases.

Samuel C Cogar

And I am questioning just what the hell is so important about the world's populations knowing what the Global Average Temperature is for any given day, month or year? Especially given the fact that it will be DIFFERENT every time it is re-calculated.

Clyde Spencer

Samuel,
From my perspective, the importance of world temperatures is that they are being used to try to scare the populace by convincing them that we are becoming increasingly hotter and time is of the essence in preventing catastrophe. I’m attempting to put the claims into perspective by pointing out the unstated assumptions and unwarranted false precision in the public claims.

george e. smith

The earth according to NASANOAA has an average cloud cover of 65%.
So earth is more likely to look like a white dot, than a blue dot, from Mars.
G

Clyde Spencer

GES,
You said, “The earth according to NASANOAA has an average cloud cover of 65%. So earth is more likely to look like a white dot, than a blue dot, from Mars.”
Where did this come from? It is unrelated to the comment you are supposedly responding to. Also, with a generally accepted ‘albedo’ of about 30%, it would seem that your 65% figure is the complement of the actual average cloud coverage.

Samuel C Cogar

Samuel,
From my perspective, the importance of world temperatures is

Clyde S, …….. I understood what your “perspective” was …… and pretty much agreed with 100% of your posted commentary. My above comment was directed at those who think/believe that knowing what the Local, Regional or Global Average Temperatures are ……. is more important than food, sex or a cold beer.

george e. smith

Well Clyde you struck out.
I said NASANOAA says that average earth cloud coverage is 65%. Nowhere did I say they claim that clouds have a 100% diffuse reflectance.
“””””…..
Roger Graves
April 23, 2017 at 11:10 am
The only way in which a truly representative global temperature could be estimated is to set up an observatory on, say, Mars, and view our planet, in Carl Sagan’s words, as a ‘pale blue dot’. …..”””””
I presume YOU didn’t see this.
I did, and pointed out that the extent of cloud cover (mostly over the oceans) makes earth more likely a white dot from Mars; not Sagan's "blue dot".
Sky blue is very pale, so over the polar ice regions, the blue sky scattering is quite invisible.
G

richard verney

A common one is to increase accuracy and precision of the determination of some fixed property, such as a physical dimension. This is accomplished by confining all the random error to the process of measurement. Under appropriate circumstances, such as determining the diameter of a ball bearing with a micrometer, multiple readings can provide a more precise average diameter. This is because the random errors in reading the micrometer will cancel out and the precision is provided by the Standard Error of the Mean, which is inversely related to the square root of the number of measurements.

This only works when what you measure is the same and does not change.
However, this does not work in climate since what we are measuring is in constant flux, and the samples being used for measurement are themselves constantly changing.
When compiling the global average temperature data sets, sometimes only a few hundred weather stations are used, none of which are at airports and are mainly rural; at other times it is, say, 1,500, then 3,000, then about 6,000, then 4,500, etc. The siting of these stations constantly moves, the spatial latitude distribution continually changes, the ratio between rural and urban continually changes, the ratio of airport stations to non-airport stations constantly changes, and of course there are equipment changes and TOB.
We never measure the same thing twice, and the errors are not random, so they never cancel out.

Clyde Spencer

Richard,
You said, “This only works when what you measure is the same and does not change.” I believe that is essentially what I said.

richard verney

It is well worth looking at the McKitrick 2010 paper. For example, see the change in the number of airport stations used to make up the data set.
http://notrickszone.com/wp-content/uploads/2017/02/NOAA-Data-Manipulation-Urban-Bias-Airport-Temperature.jpg
When one considers this, it is important to bear in mind that airports in say 1920 or 1940 were often themselves quite rural, and are nothing like the airports of the jet travel era of the 1960s.
An airport in 1920 or 1940 would often have grass runways, and there would be just a handful of piston prop planes. Very different to the airport complexes of today.

mikebartnz

Recently in Masterton NZ they moved the weather station to the airport and I remember when I was with the glider club coming into the landing circuit with full air brakes on and still rising at a good rate of knots.

Steve Case

Just watch the birds. They know where the thermals are.

Duane

Airports aren't necessarily the final word on weather data anyway. Keep in mind that nearly all airports are built and maintained by local government authorities, which should not and do not inspire confidence at the outset. Local airport authorities don't hire PhD level meteorologists to install the weather sensors, nor do they hire MS degreed or better scientists to operate them. They hire local contractors and, well, people ranging from highly competent aviation professionals to people with GEDs, respectively.
Also, “airport temperature” is not a single fixed value anyway. The temperature where? Above an asphalt airport ramp? Or at the top of a control tower (most airports aren’t towered)? In between the runways at a busy commercial airport filled with multi-engined large turbine aircraft, or off in the back 40 of a rural area with cows grazing nearby and maybe one or a handful of airport operations a day?
The fact is that measurement precision and accuracy is not tightly quality controlled at nearly all weather stations on the planet, with a tiny handful of exceptions, whether airports or not.
The bottom line is, obviously, the earth has been warming on average for the last 15K years give or take, with various interludes of cooling here and there. The fact that the earth continues to warm as it has since long before human civilization sprouted between 6K and 10K years ago is what matters. Nobody can deny that.
The real argument is over “so what?”

Steve Case

The real argument is over “so what?”
BINGO
I’ve been kicked off more than one left-wing Climate Change site for saying that.

Duane

Steve – exactly … which is why the warmists refuse to acknowledge that nobody denies today that warming is occurring. Even most third graders understand that the Earth used to be much cooler, during the last "ice age" (we're still in the "ice age" – just in the current inter-glacial period). The science-poseurs much prefer to pretend that their opponents deny any warming at all, so that they can pretend to be science literate while their opponents are all supposed to be scientific neanderthals who are "deniers".
The only argument today is over the "so what" thing – which of course drives the warmists bonkers. They thought they had everybody who matters convinced that the "Climate of Fear" is deserved … rather than acknowledge, as any non-imbecile who has studied history and archaeology at all knows, that WARM IS GOOD … COLD IS BAD as far as humans are concerned.

Menicholas

“obviously, the earth has been warming on average for the last 15K years give or take, with various interludes of cooling here and there. The fact that the earth continues to warm as it has since long before human civilization sprouted between 6K and 10K years ago is what matters. Nobody can deny that.”
Say what?
For one thing, we have had trend reversals in the past 15,000 years, so that is not a logical point of reference if one is speaking to long term trends in global temp regimes.
But more recently, over say the past 8000 years, the Earth has been cooling, after reaching its warmest values soon after the interglacial commenced.
The warming since the end of the LIA represents a recovery from one of the coldest periods of the past 8000 years.
So what?
Because cold is bad for humans, human interests, and life in general.

Clyde Spencer

Duane,
Occam’s Razor dictates that we should adopt the simplest explanation possible for the apparent increase in world temperatures. Because we can’t state with certainty just what was causing or controlling the temperature increases before the Industrial Revolution, we can’t rule out natural causes for the apparent recent increase in the rate. At best, anthropogenic influences (of which CO2 is just one of many) are a reasonable working hypothesis. However, the burden of the proof is on those promoting the hypothesis. We should be entertaining multiple working hypotheses, not just one.

David L. Fair

Uh, wrong-o, Duane.
Temperatures have been in decline the last half of the Holocene.

Duane

Menicholas and David Fair – your chart showing the last 15KY illustrates exactly what I wrote above – we are obviously in a warm, interglacial phase, much warmer than 15KYA, with various interludes of cooling but no reversal toward a new glaciation episode. Warming, on average, over the last 15KY; no cooling trend.

george e. smith

Airport Temperatures are preferably taken over the runway, as their sole purpose is to inform pilots whether or not it is safe for them to take off on that runway, given the weight and balance of their aircraft at the time.
The pilots don’t give a rip about the global temperature anomalies; they just want to know the right now runway air Temperature.
G

David Chappell

I have a problem understanding the left hand side of those graphs. Just how many “airports” were there before, say, 1910?

Menicholas

Yeah…what up wit’ dat?

None, but some of those stations were eventually ones that had airports built near them. What's interesting is that this is an artifact of having large numbers of stations added to areas where no airports were built, then having many of those sites dropped.

george e. smith

Nearly the same number as in 1852.
g

MFKBoulder

I always wondered about the percentage of _airport_ stations at the end of the 19th / beginning of the 20th century.
These graphs are even better than the hockey-stick.
For the slow-witted: HAM has been operated since 1911. Other "airfields" are not much older….

About time that we got a statistical analysis. With the sun definitely in a grand minimum state and the consequent changes in the jet streams, polar vortex, etc., the distribution must be changing shape, with a higher standard deviation, skewness increasing towards the colder side, and a kurtosis that has definitely increased as there are increasingly cold and warm waves at various places on Earth.
I would like to see more about the made-up data for the huge areas of the planet where there aren't any measurements.

Menicholas

“I would like to see more about the made up data”
Just use your imagination.
That’s what the people making it up do, and I am gonna make a leap of faith and assume you are at LEAST as imaginative as those…um…”experts”.

george e. smith

Why?? It's made up data just as you say; so it is meaningless.
G

richard verney

The number of NASA GISTemp stations (in thousands):
And with this change there is a significant loss of high latitude stations. The station drop out is not random, and not equally distributed.

Bindidon

richard verney on April 23, 2017 at 1:02 am
1. You seem to have carefully avoided presenting, in addition to this plot, the one placed immediately to its right:
It is perfectly visible there that despite an inevitable loss of stations lacking inbetween modernst requirements, the mean station data coverage was by far less than the loss!
2. And with this change there is a significant loss of high latitude stations. The station drop out is not random, and not equally distributed.
Could you please cite your sources?

Bindidon (challenging an earlier quote)

2. And with this change there is a significant loss of high latitude stations. The station drop out is not random, and not equally distributed.

Could you please cite your sources?

Is that not obvious from the plot of the stations? Do you need an “officially approved, formerly and formally peer-reviewed by officially sanctified officials” piece of printed paper in an officially approved journal to learn from and use data immediately obvious from NASA’s own plot?

Bindidon

It is perfectly visible there that despite an inevitable loss of stations lacking inbetween modernst requirements, the mean station data coverage was by far less than the loss!
Wow! What I wrote here is nonsense! I should have written “… the loss of data coverage was by far less compared with the loss of stations.”
My bad!

Bindidon

RACookPE1978 on April 23, 2017 at 5:29 am
Is that not obvious from the plot of the stations?
No, it is not obvious at all. The plot presented by verney tells us merely how many stations there were per decade, and not by latitude! Even the plot I added above doesn't.
If I had the time, I would add a little stat function to my GHCN V3 processing software, showing you exactly what the record tells us about that in the 60N-82.5N area.

You think that over 80% of the northern hemisphere has a thermometer on it? Really? Perhaps you can also tell us how close together those thermometers are. And since the number is constantly changing, how is it that they are not measuring something different every year?

MarkW

All we have to do is redefine coverage and we can get whatever result we want.

Mindert Eiting

GHCN base. 1970: stations on duty 9644. During 1970-99: new stations included 2918, stations dropped 9434. Therefore 2000: stations on duty 3128. During 2000-2010 (end of my research) new stations included 14, stations dropped 1539. Therefore 2011: stations on duty 1603. This is not shown in the totals but demographers know that a group of people changes by birth/immigration and death/emigration.
I agree that the dropout was not random. The dropout rate depended on station characteristics and produced a great reduction of the variance in the sample. And the sample size decreased from 9644 to 1603. The standard error of the sample mean depends on population variance (not estimated any more by the sample variance) and sample size, on the assumption that the sample is taken randomly. Let’s not forget that all stations together are a sample from the population of all points on the earth surface. What a statistical nightmare this is.

Clyde Spencer

Mindert,
How do you propose to determine the population mean when it can’t be characterized theoretically?

Mindert Eiting

Clyde,
First moment of the temperature distribution over the earth’s surface at a certain point of time.

” Let’s not forget that all stations together are a sample from the population of all points on the earth surface. What a statistical nightmare this is.”
It isn’t a nightmare. It’s just a spatial integration. You’re right that the population is points on Earth, not stations. The stations are a sample.
It’s wrong to talk about “stations on duty” etc. Many of the stations are still reporting. But GHCN had a historic period, pre 1995, when whole records were absorbed. Since then, it has been maintained with new data monthly, a very different practical proposition. Then you ask what you really need.
The key isn’t variability of sample average; that is good. It is coverage – how well can you estimate the areas unsampled? Packing in more stations which don’t improve coverage doesn’t help.
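
The "spatial integration" idea above can be sketched minimally as an area-weighted (cosine-of-latitude) average over a gridded anomaly field; the grid values below are invented, and the open question in the thread is coverage, not the formula:

```python
import numpy as np

rng = np.random.default_rng(3)

# Invented anomaly field on a coarse lat/lon grid (deg C), for illustration
lats = np.arange(-87.5, 90, 5.0)            # 36 latitude bands
lons = np.arange(2.5, 360, 5.0)             # 72 longitude bands
anomalies = rng.normal(0.5, 1.0, size=(lats.size, lons.size))

# Cosine-of-latitude weights approximate the area of each grid cell
weights = np.cos(np.radians(lats))[:, None] * np.ones_like(anomalies)

global_mean = np.average(anomalies, weights=weights)
print(f"area-weighted global anomaly ~ {global_mean:+.2f} C")
# The formula is straightforward; the debated issue is how well sparse,
# shifting station samples estimate the unsampled areas.
```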

Menicholas

I am just wondering how it is that, with more money than ever being spent (and by orders of magnitude at that) and more interest in finding out what is going on with the temp of the Earth, getting actual readings of what the temperature is has gone by the wayside?
https://www.youtube.com/watch?v=gNMgqnUEMGM

Clyde Spencer

Menicholas,
What happened after about 1985 -1990? This would seem to support the claim that there are fewer readings today at high latitudes than there were previously.

” how is it that getting actual readings of what the temperature is”
Temperatures are measured better and in more places than ever before. You are focussing on GHCN V3, which is a collection of long record stations that is currently distributed within a few days of the end of the month, and is used for indices that come out a few days later. It’s a big effort to get so much data correct on that time scale, and it turns out that the data is enough. You can use a lot more, as BEST and ISTI do (and GHCN V4 will). It makes no difference.

Menicholas

Clyde,
It sure does, doesn’t it?
Take a look at Canada in particular.
And we had more numbers coming out of Russia during the cold war than now.
So, if more readings in more places improves the degree of certainty of our knowledge of what the atmosphere is doing, why get rid of stations at all?
And why, of all places, in the parts of the world in which readings are already sparse, and in which the most dramatic changes are occurring?
It makes no sense, if the idea is really to get a better handle on objective reality.
It makes perfect sense, if the idea is to have better control over what numbers get published.
With billions with a B being spent annually by the US federal government alone, some $30 billion by credible accounts, should we seriously be expected to believe that recording basic weather data is just too onerous, difficult, inexact without upgraded equipment, or too expensive?
It defies credulity to claim anything of the sort.

Bindidon

What we need is a statistic giving us how many GHCN stations were present in each year, starting e.g. with 1880, i.e. with the beginning of the GISTEMP and NOAA records.
And even better would be to do the job a bit finer, in order to have these yearly stats for the five main latitude zones, i.e. NoPol, NH Extratropics, Tropics, SH Extratropics, SoPol.
Without these numbers, discussions remain meaningless and hence leave us as clueless as before.

MarkW

The money is going into models. With the theory being that with good enough models, we don’t need to actually collect data.

Clyde Spencer

Richard,
You concluded with, "And with this change there is a significant loss of high latitude stations. The station drop out is not random, and not equally distributed." This is an extremely important point! It is claimed that the high latitudes are warming two or three times faster than the global average. Yet, the availability of temperature measurements today is about the same as in the Great Depression.

"Yet, the availability of temperature measurements today is about the same as in the Great Depression."
Absolutely untrue. There are far more measurements now. The fallacy is that people look at GHCN Monthly, which has a historical and current component. When it was assembled about 25 years ago, people digitised old archives and put them in the record. There was no time pressure. But then it became a maintained record, updated every month, over a few days. Then you have to be selective.
There are far more records just in GHCN Daily alone.

Chimp

Nick,
Did NOAA not close 600 of its 9000 stations, thanks to the efforts of our esteemed blog host, showing that they reported way too much warming?
What about the thousands to tens of thousands of stations once maintained in the former British Empire and Commonwealth and other jurisdictions around the world, which no longer report regularly or accurately, if at all?
See Menicholas April 23, 2017 at 8:43 am and richard verney April 23, 2017 at 1:02 am above, for instance.
IIRC, Gavin says he could produce good “data” with just fifty stations, ie one per more than 10,000,000 sq km on average, for instance one to represent all of Europe.

Chimp,
“Did NOAA not close 600 of its 9000 stations, thanks to the efforts of our esteemed blog host”
No. GHCN V2, extant when WUWT started, had 7280 stations. That hasn’t changed.
“What about the thousands to tens of thousands of stations once maintained in the former British Empire and Commonwealth”
I've analysed the reductions in GHCN from the change to monthly maintenance here. When GHCN was just an archive, no-one cared much about distribution. Places were way over-represented. Turkey had 250, Brazil 50. Many such stations are not in GHCN, but they are still reporting. There is a long list of Australian stations here.
I don’t know about Gavin’s 50, but Santer said 60. And he’s right. The effect of reducing station numbers is analysed here.

Chimp

Nick,
Sure, sixty are as good as 60,000 when not a single one of them samples actual surface temperature over the 71% of the earth that is ocean nor over most of the polar regions.
GASTA would be a joke were it not such a serious criminal activity threatening the lives of billions and treasure in the trillions.
Do you honestly believe that a reliable, accurate and precise GASTA can be calculated from one station per ten million square kilometers? That means one for Australia, New Guinea, the Coral and Tasman Seas, New Zealand and surrounding areas of Oceania and Indonesia, for instance. Maybe two for North America. One for Europe. Two for the Arctic. Two for the Antarctic.
Does that really make sense to you?

“Does that really make sense to you?”
Yes. Again people just can’t get their heads around an anomaly average. I showed here (globe plot, scroll down) the effect for one month of plotting all the anomalies, and then reducing station numbers. With all stations, there are vast swathes where adjacent stations have very similar anomalies. It doesn’t matter which one you choose. As you reduce (radio buttons), the boundaries of these regions get fuzzier, but they are still there. And when you take the global average, fuzzy boundaries don’t matter. What I do show there, in the plots, is that the average changes very little as station numbers reduce, even as the error range rises.

Clyde Spencer

Nick,
I just read your piece on culling the stations down to 60. I think that you need to consider what it means to have your SD expanding so dramatically, even if the mean appears stable. I would suggest that one interpretation would be that you are maintaining accuracy, but dramatically decreasing precision and certainty.

Clyde,
The σ expands, and at 60 stations is about 0.2C. That is for a single month. But for a trend, say, that goes down again. Even for annual, it’s about 0.06°C (that is the additional coverage component from culling). Yes, 60 stations isn’t ideal, but it isn’t meaningless.

Bindidon

Chimp on April 23, 2017 at 12:58 pm
1. Did NOAA not close 600 of its 9000 stations…?
You misunderstand something here. Nick is talking about GHCN stations.
Your 9,000 stations refer to a different set (GSOD, Global Summary of the Day).
*
Chimp on April 23, 2017 at 7:19 pm
2. Sure, sixty are as good as 60,000 when not a single one of them samples actual surface temperature over the 71% of the earth that is ocean nor over most of the polar regions.
Below you see an interesting chart, constructed out of UAH’s 2.5° TLT monthly grid data (144 x 66 = 9,504 cells), by selecting evenly distributed subsets (32, 128, 512 cells, compared with the complete set).
The data source: from
http://www.nsstc.uah.edu/data/msu/v6.0/tlt/tltmonamg.1978_6.0
till
http://www.nsstc.uah.edu/data/msu/v6.0/tlt/tltmonamg.2016_6.0
I hope you understand that due to the distribution, any mixture of the polar, extratropic and tropic regions, and any mixture of land and sea can be generated that way:
http://fs5.directupload.net/images/170424/lxywwb3t.jpg
In black you see a 60-month running mean of the average of all 9,504 UAH TLT cells (which exactly corresponds to the Globe data published by Roy Spencer in http://www.nsstc.uah.edu/data/msu/v6.0/tlt/uahncdc_lt_6.0.txt).
In red the average of 512 cells; in yellow, 128; and in green, 32!
So you can see how little data is necessary to accurately represent the entire Globe…
P.S. The 64 cell average (which is not plotted here) was manifestly contaminated by heavily warming sources (0.18 °C / decade instead of about 0.12). The warmest cells you find in the latitude zone 80-82.5° N… 🙂
*
Clyde Spencer on April 23, 2017 at 9:54 pm
I think that you need to consider what it means to have your SD expanding so dramatically, even if the mean appears stable.
Why do you always exclusively concentrate on standard deviations you detect in surface records?
Look at the chart above, and at the peaks and drops visible in the different UAH plots.
And now imagine what would happen if I were to compute, for all 9,504 UAH grid cells, the highest deviations up and down, sort the result, and plot a monthly time series of the average of those cells where maximal and minimal deviations add instead of being averaged out.
How, do you think, would that series' SD look?

Clyde Spencer

Bindidon,
The standard deviation of the raw data is what it is! Smoothing the data and calculating the SD gives you the SD of smoothed data. It might be appropriate and useful to do that if you are trying to remove noise from a data set. However, since the defined operation is to try to estimate the precision of the mean, smoothing is not justified. Does that answer your question about “always?”

george e. smith

So where is your plot from 1880 of the stations in the Arctic, i.e. >60° N?
G

Bindidon

george e. smith on April 24, 2017 at 4:43 pm
So where is your plot from 1880 of the stations in the Arctic, i.e. >60° N?
Sorry George for the late answer, I live at WUWT+9.
Here is a chart showing the plot obtained from the file
ftp://ftp.ncdc.noaa.gov/pub/data/ghcn/v3/ghcnm.tavg.latest.qcu.tar.gz
which contains the GHCN V3 unadjusted dataset.
http://fs5.directupload.net/images/170425/nbr4ztlp.jpg
As you can see, the GHCN (!) temperatures measured there from 1880 till 1930 were far higher than the present-day ones.
The linear trends ± 2 σ for the Arctic (60-82.5N), 1880-2016 (monthly) in °C / decade:
– GHCN unadj: -0.343 ± 0.013
It is interesting to compare the GHCN time series with the GISTEMP series derived from it, as they differ by a lot.
GISTEMP zonal data is available in txt format for 64-90° N
https://data.GISTEMP.nasa.gov/gistemp/tabledata_v3/ZonAnn.Ts.txt
but is annual, so I constructed an annual average of GHCN’s monthly data to compare.
http://fs5.directupload.net/images/170425/iwhgexi9.jpg
The linear trends ± 2 σ for the Arctic (64-82.5N), 1880-2015 (yearly) in °C / decade:
– GHCN unadj: -0.498 ± 0.033
– GISTEMP land: 0.183 ± 0.011
GHCN's yearly trend here is lower than in the monthly data. Not only is 60-90N a bit warmer than 64-90N; the yearly averaging has also clearly overweighted the past.
But the incredible difference in plots and trends between GHCN and GISTEMP will inevitably feed the suspicion that GISTEMP “cools the past to warm the present”; it is therefore better to show a similar chart with Globe data:
http://fs5.directupload.net/images/170425/pwev9qpj.jpg
The linear trends ± 2 σ for the Globe, 1880-2015 (yearly) in °C / decade:
– GHCN unadj: 0.231 ± 0.010
– GISTEMP land: 0.099 ± 0.004
Here we see that for the Globe, GISTEMP land-only data has a much lower trend than its own GHCN origin.
We should not forget that GISTEMP automatically eliminates GHCN data showing anomalies above 10 °C for 6 consecutive months. Not only in the past 🙂
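For readers who want to replicate this kind of processing, here is a rough sketch of annual averaging of a monthly series followed by an ordinary-least-squares trend with a naive 2σ uncertainty. The monthly series is synthetic, not GHCN or GISTEMP, and real-world uncertainties would be wider once autocorrelation is accounted for.

```python
import numpy as np

rng = np.random.default_rng(2)
years = np.arange(1880, 2016)

# Synthetic monthly anomaly series: a small imposed trend plus noise (°C).
months = np.arange(len(years) * 12)
monthly = 0.005 / 12 * months + rng.normal(0, 0.5, size=months.size)

# Annual averaging of the monthly series.
annual = monthly.reshape(len(years), 12).mean(axis=1)

# Ordinary least squares trend and a naive standard error of the slope.
A = np.vstack([years, np.ones_like(years)]).T
coef, *_ = np.linalg.lstsq(A, annual, rcond=None)
slope = coef[0]
resid = annual - A @ coef
se_slope = np.sqrt(resid.var(ddof=2) / ((years - years.mean()) ** 2).sum())

print(f"trend: {slope * 10:.3f} ± {2 * se_slope * 10:.3f} °C/decade (2σ, ignoring autocorrelation)")
```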

Clyde Spencer

Bindidon,
I see that you got nothing out of my articles. You are citing temperatures to the nearest thousandth of a degree.

Bindidon

Clyde Spencer on April 25, 2017 at 9:39 am
I see that you got nothing out of my articles. You are citing temperatures to the nearest thousandth of a degree.
Clyde Spencer, I see that you are fixated on philosophical debates that have little in common with the daily, real work of people processing datasets (including those who do it as a hobby).
If you were busy daily with similar work, you would soon discover that when you keep no more than one digit after the decimal point, you quickly run into the problem of losing accuracy, because you have to average all the time.
And you do have to average: the simplest example is just above, where a monthly series (GHCN unadjusted) had to be averaged into an annual one simply because it must be compared with data (GISTEMP zonal) that does not exist in a monthly variant in text format.
Do you really think I would change all my dozens of Excel tables in order to show my data in your guest posts in a form that fits your obsessional narrative?
It's simple enough to read past what disturbs you.
And by the way: why don't you write to, e.g., Dr Roy Spencer? Does all the data he publishes for us not also have, in your view, one digit too many after the decimal point? Thanks for publishing his response…

Clyde Spencer

Bindidon,
Roy Spencer is not reading thermometers to the nearest tenth of a degree and he has a synoptic view of the entire global temperature field. He is almost certainly doing it correctly.
My “fixation” is not philosophical. It is pragmatic. As to my “daily real work,” besides my academic background and post-graduate work experience (I’m retired), I worked my way through college working for Lockheed MSC in the Data Processing and Analysis Group in the Agena Satellite Program. I have more than a passing acquaintance with numbers. I’m not doing it as a hobbyist. I would assume that a hobbyist would be interested in finding sources of knowledge to compensate for their lack of formal instruction in a field they have taken an interest in.

Dave Fair

Not so, Clyde! You must have tenured mentors, such as Drs. Mann or Jones, to tutor you in the arcane sciences of global warming/climate change. Anything less invalidates everything you might have to say about CAGW.

Clyde Spencer

Dave,
Congratulations! I had to read it twice before I realized that you were being sarcastic. I usually catch it right away. 🙂

Dave Fair

People overuse “/sarc,” Clyde. I think people should engage their brains before responding to comments.
But, then again, I’ve always said one’s use of “should” indicates an unwarranted belief in the intelligence and/or goodwill of others.

Dave Fair

Plus, do-gooders use it all the time. Save the Whales! Save the Snails! Save Mother Earth!

Bindidon

Clyde Spencer on April 25, 2017 at 2:50 pm
I would assume that a hobbyist would be interested in finding sources of knowledge to compensate for their lack of formal instruction in a field they have taken an interest in.
What an arrogant statement, Clyde Spencer! You are not the only one with a working past. Mine was full of interesting matters (Bézier splines, algebraic specifications, etc. etc.).
The difference between us is that, while with respect to climate we are, in my really humble opinion, both no more than a couple of laymen, you seem to have a thoroughly different opinion about that.
No problem for me!

Why use Fahrenheit?

M Courtney

It’s parochial but it doesn’t affect the argument.

richardscourtney

Why not use Fahrenheit when writing for a mostly American audience?

fretslider

Why not use scientific units?
Do Americans have problems with SI? If so, why?

Duncan

Fahrenheit is a scientific unit; so are BTUs, etc. As with all units, it expresses a ratio of quantities. It is just not part of the International System. There is nothing wrong with either (scientifically).

fretslider, all measurement systems are based upon something arbitrary, therefore they are all equivalent.

R. Shearer

The Fahrenheit degree is more precise.

fretslider

Do Americans have problems with SI?<
Ask a stupid question. Clearly they do.

Menicholas

Do not-Americans have a problem with Fahrenheit?
Clearly they do.
I have never heard anyone complaining about using Celsius, even though the numbers are far more coarse-grained.

Gary Pearse

Fretslider: Americans having won 50% of all Nobel prizes in the hard sciences and economics, I'd venture to say they have no trouble at all; wouldn't you say so on reconsideration? I have to ask who your guitar influences are, too.

george e. smith

Some of us actually use SI units for everything. My GPS outputs all its data in SI units.

Clyde Spencer

GES,
Is the thermostat in your house calibrated in C? How about the temperature indicator on the dash of your car? How about the oven in your kitchen? If the answer to those is “yes,” then you have a unique American lifestyle and I imagine it cost you a lot of money to get EVERYTHING to conform to SI. Or perhaps you exaggerated just a little.

I'm a units fanatic. If someone were to give an answer that was numerically correct but left the units off, I would say it was 100% wrong. If instead they gave an answer that was numerically wrong but had the right units, I'd give partial credit.
Just about any unit is valid if used properly. As engineers, we have had to use a lot of silly units. There's a difference between pounds force and pounds mass: if the pound is the force unit, then the mass unit is the slug; if the pound is the mass unit, the force unit is the poundal. Metric is easier but not without its traps. If you use cgs (centimeter-gram-second) as your standard, the arbitrary constant in Coulomb's law is exactly 1. However, if you use MKS (meter-kilogram-second), that constant is more complicated (the choice was an attempt to clean up Maxwell's equations). The units volts, amperes, ohms, and coulombs come from the MKS system.
Still, Fahrenheit is a perfectly valid unit for temperature. Does anyone know the standard unit for Hubble's constant? It's kilometers per second per megaparsec. Look up "parsec" in your SI list of units and then tell astronomers they shouldn't use it, or the lightyear for that matter.
Jim

Steve Case

Fahrenheit has smaller degrees, so it sort of inherently has more accuracy, and with a lower zero point you don't have to deal with negative numbers as often. Celsius offers no advantage whatsoever. Why science doesn't use Kelvin all the time, every time, is a mystery.

Steve Case

“… accuracy…” – Uh I should have said precision

Clyde Spencer

Steve,
It should be noted that the kelvin uses the same increment as the Celsius degree, and that Celsius is based on the arbitrary decision to use the phase changes of water and divide that interval into 100 divisions. There is nothing magic about K!
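For the Fahrenheit-challenged and Celsius-challenged alike, the conversions are one-liners; the helper names below are just illustrative.

```python
def f_to_c(temp_f):
    """Convert degrees Fahrenheit to degrees Celsius."""
    return (temp_f - 32.0) * 5.0 / 9.0

def c_to_k(temp_c):
    """Convert degrees Celsius to kelvins (same increment, shifted zero)."""
    return temp_c + 273.15

print(round(f_to_c(136.0), 1))   # ≈ 57.8 °C
print(c_to_k(0.0))               # 273.15 K: a 1 °C step equals a 1 K step
```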

george e. smith

Some of us do.
G

Dinsdale

Why not use Kelvin, since temperature is related to energy states of matter? Then replace the so-called anomaly plots with absolute temperature plots. Then the size of recent changes would be near zero compared to long-term glaciation-cycle changes. But if you are in the business of getting funding for your chicken-little claims that just wouldn’t do.

Clyde Spencer

Fretslider and others,
I have a preference for Fahrenheit only because of a long familiarity with it. I know from personal experience how I should dress for a certain forecasted temperature. I know that at -10 F the moisture on nostril hairs will freeze immediately on my first breath. I know that at 110 F I will be uncomfortable in the desert and immobilized if I have to deal with the humidity of Georgia (USA) as well. If I’m doing scientific calculations I’ll probably convert whatever units are available to Kelvin. At this point in my life, I won’t live long enough to acquire the kind of intuitive understanding that is acquired by experience. Please bear with me until I die.

Clyde Spencer

Hans,
Historically, there was a preponderance of Fahrenheit readings. Even today, the ASOS automated readings are collected in Deg F, and then improperly converted to Celsius. See my preceding article for the details.

If you pretend to write scientific papers, you should use international standards.

Clyde Spencer

Hans,
You said, “if you pretend to write scientific papers, you should use international standards.”
In case you hadn't noticed, what I wrote was not a peer-reviewed scientific article. I chose to use the unit of measurement that is most common in the historical records and is still used in the US. The choice of unit is really irrelevant to the argument; if you are not up to making the conversion to Celsius, I will be glad to do it for you. I'm sorry if you have difficulty with the mathematics of conversion.

george e. smith

I’m with Hans. If you want to be informative to a diverse group of people you should use the recognized universal units.
And with kelvin, you don’t even need any negative signs.
G

Clyde Spencer

GES,
When you write an article for WUWT, please do use K, and see how many people complain that they can't relate to kelvins! Despite having used the Celsius temperature scale for decades, I still don't have an intuitive understanding of what anything other than 0, 20, and 100 degrees C means on an existential level. However, it is trivial to make a conversion if I need to in order to better grasp the personal implications. The reason that different temperature scales are still in common usage is that they are more appropriate at certain times than the alternatives. You have hypocritically claimed that you use nothing but SI units; however, you have demonstrated in your own writing that it is only puffery to make you look superior. There are more important things to worry about than whether or not we make our colleagues across the pond happy with our choice of measurement units. Next thing you know, they will be complaining that we drive on the wrong side of the road.

>>
And with kelvin, you don’t even need any negative signs.
<<
Or degrees.
Jim

John M. Ware

F degrees are only 5/9 the size of C degrees, making F nearly twice as precise as C.

george e. smith

Makes it sound warmer.

Temperature is an intensive property and therefore there is no such thing as the “earth’s average temperature” or even “the earth’s temperature” (unless the earth is in thermal equilibrium, which it isn’t).

richardscourtney

Phillip Bratby:
Yes. If you have not seen it, I think you will want to read this, especially its Appendix B.
Richard

“Temperature is an intensive property and therefore there is no such thing as the “earth’s average temperature””
People are always saying this here, with no authority quoted. It just isn't true. Scientists are constantly integrating and averaging intensive quantities; there isn't much else you can do with them. The mass of a rock is the volume integral of its density; it is the volume times the average density. The heat content is the volume integral of the temperature multiplied by density and specific heat. There is no issue of equilibrium there (except local thermodynamic equilibrium, so that temperature can be defined, but that is an issue only for rarefied gases). That is just how you work it out.
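The volume-integral point can be made concrete with a small numerical sketch; the density, specific heat, and temperature fields below are invented for illustration, and the grid sum is a plain Riemann approximation rather than anything sophisticated.

```python
import numpy as np

# Discretise a 1 m cube into cells and assign made-up density (kg/m^3),
# specific heat (J/kg/K) and temperature (K) fields.
n = 50
dV = (1.0 / n) ** 3
rng = np.random.default_rng(3)
rho = rng.uniform(2500, 2700, size=(n, n, n))
cp = 800.0
T = rng.uniform(280, 300, size=(n, n, n))

mass = (rho * dV).sum()              # volume integral of density
heat = (rho * cp * T * dV).sum()     # volume integral of rho * cp * T
T_mean = heat / (mass * cp)          # heat-content-weighted mean temperature

print(round(mass, 1), "kg,", round(T_mean, 2), "K")
```

The average that comes out is a weighted one (weighted by heat capacity per cell), which is the sense in which an intensive field can still be integrated and averaged.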

Nonsense. You would need to know the temperature of every molecule in order to work out an average temperature of the earth.

“You would need to know the temperature of every molecule in order to work out an average temperature of the earth.”
If that were true, then you’d need to know about every molecule to work out the temperature of anything. But it isn’t. How you know about any continuum field property in science is by sampling. And the uncertainty is generally sampling uncertainty.

Duncan

I have to agree with Nick on this one. In any study of science, math, chemistry, physics, etc., averages must be defined and used. When engineers build something, they rely on averages to perform calculations, for example. Saying scientists cannot use or try to calculate the average of Earth's atmospheric temperature would make every other science null and void. Arguing this point, while it might sound convenient and truistic to us non-believers, is just playing semantics with what every other science attempts to do.

fah

Mass and heat are both extensive properties. Temperature is intensive. Intensive properties generally are ratios of extensive properties, as is the case with temperature, which is defined (differentially) as the ratio of a change in energy to a change in entropy. See any thermodynamics or stat mech textbook. If one likes Wiki, here is a reference
https://en.wikipedia.org/wiki/Intensive_and_extensive_properties

Duncan

“You would need to know the temperature of every molecule in order to work out an average temperature of the earth.”
Phillip, you are confusing accuracy with averages. When calculating the circumference of a circle, do you use every decimal place of pi to do it? We can still calculate a circumference that is "close enough" for the purpose required.

There is nothing “wrong” about calculating the global average of local temperatures. As others point out, an average can be calculated for any set of numbers. One could ask successive people who enter a room for their favorite number and then compute an average. The average can be calculated and there is nothing wrong with it as an average. The question is, what does one want to do with the number. The confusion enters when one wants to use some discipline such as physics to describe phenomena. In that case temperature is a quantity that appears in dynamical equations governing the evolution of thermodynamic systems. It holds that role as an intensive property, i.e. it is the magnitude of a differential form. When one averages over its value, the average can be computed but it no longer plays the role played by temperature in thermodynamics – it is not connected to the states or evolution of the system. There is nothing wrong with computing an average of any set of global temperatures, at the same instant of global time, at the same instant of local time, at random instants of time, at daily local maxima or minima, etc. etc. Just so long as one understands that it says nothing explicit about the global thermodynamics, i.e. whether anything is “hotter” or “cooler” other than the non-physical average quantity.

Solomon Green

If scientists are "constantly integrating and averaging intensive quantities", why are they still only averaging two thermometer readings to get Tmean (Tmean = 0.5*(Tmax + Tmin))?
Why not derive the true mean temperature from the area under the temperature curve? Even prior to electronic measuring equipment, more accurate approximations than 0.5*(Tmax + Tmin) were always possible, provided that there were a sufficient number of daily readings.

Duncan

Solomon, I think you answered your own question. Before electronic recording and data logging there was just not the need (or computing power) to record/sample temperatures often enough. I do agree though, the area under the curve would be a much better averaging method (vs Tmax/Tmin).
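Since this exchange turns on how the min/max midrange and the integrated daily mean differ, here is a small sketch under an assumed (synthetic, skewed) diurnal cycle; it is not based on any real station record, and the 24 hourly samples are an arbitrary choice.

```python
import numpy as np

# Synthetic, skewed diurnal cycle sampled hourly (°C); real days are rarely
# symmetric, which is why the midrange and the time-weighted mean differ.
hours = np.arange(24)
temps = 15 + 8 * np.exp(-((hours - 15) ** 2) / 18.0)  # cool night, sharp afternoon peak

midrange = 0.5 * (temps.max() + temps.min())

# Trapezoidal integration by hand, then divide by the time span covered.
dt = np.diff(hours)
integral = np.sum(0.5 * (temps[:-1] + temps[1:]) * dt)
integrated = integral / (hours[-1] - hours[0])

print("0.5*(Tmax + Tmin)       :", round(midrange, 2))
print("area under curve / span :", round(integrated, 2))
```

For the skewed cycle above the two numbers differ by well over a degree, which is the kind of bias the midrange convention can carry.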

Samuel C Cogar

Duncan – April 23, 2017 at 4:57 am

I have to agree with Nick on this one.

Duncan, given the remaining content of your post, …. your agreeing with Nick was not a compliment.
Duncan also saidith:

In any study of science, math, chemistry, physics, etc averages must be defined and used.

Don’t be talking “trash”. Math is used for calculating averages …….. but nowhere in mathematics is/are “averages” defined or mandated. Mathematics is a per se “exact” discipline or science.
Duncan also saidith:

When engineers build they rely on the averages to perform calculations as example.

“DUH”, silly is as silly claims.
Design engineers perform calculations to DETERMINE what is referred to as the "worst case" average(s), …….. not vice versa. Only incompetent designers would use a "calculated average" as a specified/specific design quantity.
Duncan also saidith:

Saying scientist cannot use or try to calculate the average of earths atmospheric temperature would make every other science null and void.

I already told you, …….. Don’t be talking “trash”.

Bob boder

Nick
You can average anything, that is true; the question is whether it means anything. In the case of global temperature it is not clear that it does.

Duncan

Samuel, I thought my responses were respectful, not trash, but maybe that is just me. I get the sense you have a bone to pick.
That math "uses" averages (means) was my only point; I did not say whether it was exacting or not. You can try to turn that into a big deal if you want, but the point is meaningless.
On using "worst case" average(s) when designing: this is NOT correct all the time. For example, when building a spaceship, one does not design for the worst-case asteroid strike; the ship would be too heavy to launch. You may design for small, paint-chip-sized strikes, but that is it. The risk needs to be accepted and hopefully mitigated in other ways. My point was that when looking at, say, strength of materials, you do not examine every molecular bond in the material; you take an average of a sample (like Earth's temperature). Whether one chooses to use the worst case or not depends on the critical nature of the application and other factors such as cost, manufacturability, etc. Engineers do rely on other averages when designing, such as gravity; they don't worry about the small variances across the globe. I will accept your apology on this one.
I did not understand your last "trash" comment, so I will ignore it.

PiperPaul

Even the Catholic church integrated and averaged when calculating angels dancing on heads of pins!

bruce

Nick, and all,
It might be a good idea to start measuring the earth's temperature by a more concrete method than is currently used. Try to imagine some method that would not be prone to degradation and error. Maybe "find" a dozen remote locations spaced around the globe and measure the surface, not even the air (as it is so prone to movement and inconsistencies), in the hope of having a foolproof means of knowing what the conditions are going forward.
In the end there will be no final answer on how to create a device that will accurately, and without degradation, measure the temperature without recalibration or maintenance. And thus we introduce the same question about the correctness of our understanding of the inferred size of an angel's girth.

Menicholas

Hey…how about them satellites?
The ones that cost a bajillion dollars and, according to NASA, are the best way to determine global temperatures, as they measure the whole atmosphere.
Well, for a while at least, that was what NASA said.
Right up until those dang satellites just stopped co-operating with the warmista conclusion.

“Nick, now tell us how you take a volume integral without knowing the value of the function at each point. “
All practical (numerical) integration calculates an integral from values at a finite number of points. It’s all you ever have. There is a whole science about getting it right.

Clyde Spencer

Bruce,
You suggested, "Maybe 'find' a dozen remote locations spaced around the globe, measure the surface, not even the air (as it is so prone to movement and inconsistencies)." For purposes of climatology, we could probably get by with fewer stations (although, how do we make historical comparisons?). If optimized recording stations were located in each of the climate zones defined by physical geographers, and their number were proportional to the area of each zone, that should provide reasonably good sampling. As it is, our sampling protocol is not random, and it is highly biased toward the less extreme regional climates where most people live.
We would, however, still need weather stations for airports and where people live so that they know how to dress for the weather.

Duncan

I am done with Samuel, but as far as math "averages" go, mathematicians do ponder non-exacting problems such as the milliseconds before the Big Bang or the mass of the Higgs boson. Math is not always an exact 'number' as he thinks, but can involve a non-number such as infinity. What is the average of two infinities added together? Just needed to complete my thoughts.

Samuel C Cogar

Duncan – April 23, 2017 at 7:10 am

Samuel, I thought my responses were respectful, not trash, maybe that is just me.
I get the sense you have a bone to pick.

Duncan, responses being “respectful” has nothing whatsoever to do with them being ….. trash, lies, half-truths, junk-science, Biblical truths or whatever.
And “Yes”, …….. I do have a per se “bone to pick” with anyone that talks or professes “junk science” or anyone that touts or exhibits a misnurturing and/or a miseducation in/of the natural or applied sciences.
Duncan, I spent 20+ years as a design engineer, systems programmer and manufacturing consultant for mini-computers and peripherals. Just what is your designing "track record" of experience?

Samuel C Cogar

Duncan April 23, 2017 at 3:54 pm

I am done with Samuel, but as far as Math “averages”, …………….. Math is not always an exact ‘number’ as he thinks

That’s OK iffen you are “done with me” ……. as long as you are not done with trying to IMPROVE your reading comprehension skills.
Duncan, I made no mention, accusation or claim whatsoever about …. “math being an exact number”.
Tis always better when one puts brain in gear before putting mouth in motion.
Cheers

Bindidon

I would enjoy Roy Spencer telling us right here that, because temperatures aren't extensive quantities, his whole averaging of averages of averages of averages, giving in the end a mean global temperature trend of 0.12 °C per decade for the satellite era, is nothing but pure trash.
Oh yes, I would enjoy that.

Clyde Spencer

Phillip,
I did mention that Earth is not in thermal equilibrium. If someone wants to define the Earth's "Average Temperature" as the arithmetic mean of all available recorded temperatures for a given period of time, I have no problem with that. However, I do have a problem with how precise they claim it to be and how they use that number. Disagreements are often the result of not carefully defining something, and then not getting agreement on accepting that definition.

george e. smith

Temperature is defined in terms of the mean KE per molecule (actually, per degree of freedom of such random motions), so there is no such thing as the temperature of a single molecule. You can postulate such an equivalence, which would be the time-averaged KE per degree of freedom of that molecule, but the problem would be that, since it is a time average, you don't know just exactly when the molecule had that temperature.
G

Pablo

If continents move away from the poles to allow warm tropical water better access to the frigid seas in the polar night of winter, ice caps tend to disappear. The tropical seas cool a little and the polar seas warm up. The average stays the same.

Menicholas

Yes, when they slide up them poles like that, it def warms things up down there.

richardscourtney

Pablo:
You mistakenly assert: "If continents move away from the poles to allow warm tropical water better access to the frigid seas in the polar night of winter, ice caps tend to disappear. The tropical seas cool a little and the polar seas warm up. The average stays the same."
No!
The changes in regional temperatures alter the rates of heat losses from the regions with resulting change to rate of heat loss from the planet and, therefore, the average temperature changes until radiative balance is restored.
Radiative heat loss is proportional to the fourth power of temperature (T^4) and the planet only loses heat to space by radiation.
Richard

Lyndon

We also lose heat via loss of mass.
[??? By how much, compared to radiation losses? .mod]

Samuel C Cogar

Lyndon – April 23, 2017 at 2:19 am

We also lose heat via loss of mass.

Lyndon, don't be badmouthing Einstein's equation of …… E = mc² …. or …. m = E/c²
And one shouldn’t be forgetting that photosynthesis uses solar (heat) energy to create bio-mass ……. and the oxidation of that bio-mass releases that (heat) energy back into the environment ….. and ONLY if that released (heat) energy gets radiated back into space can one confirm that …. “loss of (heat) energy = loss of mass”.

Lyndon

Almost nil.

george e. smith

What about all the mass of space dust we get every day?
Each day, the earth lands on literally millions of other planets/asteroids/specks of space dust, and emsquishenates most of them, so how do you know we are losing heat through the escape of renegade matter to space?? We are probably gaining mass.
G

Dr. S. Jeevananda Reddy

The climate system plays the vital role at any given location. On top of this, the general circulation pattern prevailing at any given location or region [advective heat transfer] modifies the local temperatures. So local temperature is not directly related to the Sun's heat over space and time. In India, the northeastern and southwestern parts get cold waves in winter and heat waves in summer from Western Disturbances, which are associated with the six-month summer and six-month winter in the polar regions.
Dr. S. Jeevananda Reddy

Pablo

The ocean, a very good absorber of solar energy, warms less than land for the same input and so cools less than land by radiation at night.
It takes about 3000 times as much heat to warm a given volume of water by 1 ºC as to warm an equal volume of air by the same amount. A layer of water a meter deep, on cooling 0.1 ºC, could warm a layer of air 33 metres thick by 10 ºC.
If more ocean warmth reaches the poles to lessen the tropical to polar gradient then the global temperature range becomes less extreme but the average stays the same.
Earth’s atmosphere with mass moderates the extremes of temperature that we see on the surface of the Moon.
Earth’s oceans and the water vapour it allows to exist, lessen those extremes further.
The average stays the same.

Menicholas

Pablo,
Each polar region is continuously dark for six months of the year, and since it is so cold there, the air is very dry, mostly and on average.
Dry air radiates its heat into space very efficiently, and so any thermal energy arriving at the polar regions is lost to space far more quickly than is the case at lower latitudes.
Richard is exactly correct…the thermal equilibrium is altered and the averages are in no way constrained to remain the same, when changes are made to the arrangement of the continents or the net polar movement of energy from the sun.
If what you are saying is, that it is possible to have less thermal gradient from poles to the equator while having the same average temp, obviously that is true.
if you are saying that IS what happens, or IS what HAS to happen, you are obviously incorrect.
Obviously the averages can change…just look at any temperature record or historical reconstruction.
Flat lines are notably absent.

george e. smith

The hottest midsummer dry deserts radiate more than twelve times as fast as the coldest Antarctic highlands midnight regions.
So the earth is NOT cooled at the poles; but in the equatorial dry deserts, in the middle of the day.
G

Retired Engineer John

Richard, your comment is interesting. Does this mean that when the Earth has significant temperature differences compared to average conditions, more energy is removed (radiated to space)? Has anyone done actual calculations for different locations with extreme temperature differences?

richardscourtney

Retired Engineer John:
You ask me

Richard, your comment is interesting. Does this mean that when the Earth has significant temperature differences compared to average conditions, more energy is removed (radiated to space)? Has anyone done actual calculations for different locations with extreme temperature differences?

Calculations are easy and I have reported some on WUWT, but actual simultaneous variations of temperature differences are difficult to obtain.
Richard Lindzen has said the effect I mentioned is sufficient to explain global temperature variations since the industrial revolution, but I have not seen a publication of his determinations that led him to this conclusion.
Richard

Menicholas

How would we go from interglacial to full glacial advance and then back again if there were not conditions and periods during which the amount of energy removed from the earth increased or decreased?

the planet only loses heat to space by radiation.
How would we go from interglacial to full glacial advance
=========================
We are not measuring the heat of the planet; rather, we are measuring surface temperatures, which is quite a different animal.
All that is required to go from interglacial to glacial is to modify the overturning rate of the oceans. Increase the overturning rate and there is more than sufficient cold water in the deep oceans to plunge the earth into a full-on ice age; decrease the overturning rate and you have an interglacial.
Given the heat capacity of water, perhaps a change in the deep ocean overturning rate of just a few percent would take us from interglacial to full on glaciation. Perhaps resulting from some long term harmonic of the ocean currents, stirred up as the oceans are dragged north and south of the equator by the earth’s moon, along with the earth-sun orbital mechanics, combined with the 800 year deep ocean conveyor.

Menicholas

And how is somewhat colder water on some ocean surfaces cooling down the interiors of continents by the amounts required for miles of ice to form and never melt?
For this to happen without increased losses to space (because, for instance, the air was drier), all of the difference in temperature would have to be due to more energy going into heating water which is then carried under the sea, no?
Not saying I think that scenario is impossible.
But, absent changes in atmospheric circulation…

skorrent1

Thank you for your reminder that the earth's energy balance depends primarily on radiation, i.e., T^4. Translating the author's record temps to kelvins gives the range, roughly, 180 K to 344 K. The numerical midpoint of the records is 262 K, whose fourth power is about 4.7*10^9. The fourth powers of the records themselves are about 1.05*10^9 and 14.0*10^9, with an "average" of about 7.5*10^9. With the constant, diurnal, and seasonal spread of temperatures across the Earth, variations in the "annual global average temperature" tell us f***all about changes in the radiative energy balance of the Earth.
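The comment's point, that the average of T^4 is not the fourth power of the average T, can be checked in a couple of lines; the sketch below uses only approximate kelvin conversions of the two record extremes and the Stefan-Boltzmann constant.

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

t_low, t_high = 180.0, 344.0          # record extremes converted to kelvins (approx.)
t_mid = 0.5 * (t_low + t_high)

fourth_power_of_mean = t_mid ** 4
mean_of_fourth_powers = 0.5 * (t_low ** 4 + t_high ** 4)

print("T^4 of the midpoint temperature:", f"{fourth_power_of_mean:.2e}")
print("mean of the two T^4 values     :", f"{mean_of_fourth_powers:.2e}")
print("corresponding emission (W/m^2) :",
      f"{SIGMA * fourth_power_of_mean:.0f} vs {SIGMA * mean_of_fourth_powers:.0f}")
```

The two emission figures differ by roughly 60%, which is why an average temperature by itself says little about the radiative balance.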

PabloNH

“The apparent record low temperature is -135.8° F and the highest recorded temperature is 159.3° F”
Neither of these is an official measurement; both are surface (not atmospheric) temperatures measured by satellites. The referenced article is a remarkably poor one; besides falsely claiming that these measurements meet WMO standards (they clearly come nowhere close), it also (apparently) confuses the ecliptic with the sun’s equatorial plane.

DHR

And your point is…?

Clyde Spencer

PNH,
I provided you with a link so that you can see that I didn’t invent the temperatures. In any event, I didn’t use those extremes in constructing the frequency distribution curve. I used more conservative extremes. I suggest that you take your complaint to those who maintain the website at the link I provided.

george e. smith

I think there is an actual North African official high air temperature (in the shade) of 136. x deg F, some time in the 1920s I think, and NASA/NOAA has reported large regions of the Antarctic highlands that quite often get down to -94 deg. C.
In any case I will adopt your +70 deg. C surface high temperature. I have used +60 deg. C but now I will use + 70.
It is SURFACE Temperatures that determine the radiative emittance; not the air Temperatures.
G

Clyde Spencer

GES,
Would you please convert “136. x deg F” to degrees C so that the Fahrenheit-challenged people can understand? Are you going to have to do some sort of penance for breaking your vow of SI-units exclusivity?

Don K

“and the highest recorded temperature is 159.3° F”
Are you sure about that? I thought it was commonly held to be 56.7 C (134 F), at Greenland Ranch (aka Furnace Creek, sort of), CA, on 10 July 1913.
Other than that, a good article, I think. The point about measuring a constantly changing mix of stations and instruments seems to me to be valid. Focusing on the variance (OK, standard deviation) of the data seems shakier. What is at issue is the precision of the mean, not the wild swings in the data. Thought experiment: average vehicle velocity on a stretch of expressway where (most) vehicles zip through most of the time, but crawl at rush hour. Extremely bimodal distribution with a few outliers. Is the mean stable and precise? To me it seems likely to be stable with enough samples. Is that mean a useful number? Not so sure about that. Maybe useful for some things, not for others? And even if meaningful, not necessarily easy to interpret.

tty

Those are surface temperatures, which is what satellites can measure. They can also measure average temperatures in deep swaths of the atmosphere (which is what UAH and RSS do). What they can't do is measure the temperature five feet above ground, which is the meteorological standard.
Incidentally, the reason for the "5 foot" convention is that it is eye-level height for Europeans, so the first meteorologists back in the 17th-18th century found it to be the most convenient level to hang their thermometers.

Menicholas

Or maybe because the temp right at the ground is extremely high when the sun is shining on it, and very low during a nighttime with a clear sky and low humidity, and bearing little relation to what we feel (unless we are walking around barefooted or lick some hot asphalt)?

The usage of the mean of temperatures was addressed nicely in "Does a Global Temperature Exist?" http://www.uoguelph.ca/~rmckitri/research/globaltemp/globaltemp.html
What I did not see addressed is the usage of a NON-representative sample of temperature measurements. If you look over the land measurement points (an example, with its generalization from a single point to a huge area, was in an image attached to this comment), you can quickly find out that it's far from being a representative sample. Generalizing from a convenience sample to the whole 'population' (that is, claiming it is representative of the whole Earth's surface temperature field) is anti-scientific.
Despite this, it's what they do in the climastrological pseudoscience.

Menicholas

An arbitrary grid pattern in three dimensions would seem to be the way to go about getting a real idea of what is going on.

Clyde Spencer

Adrian,
As Anthony knows all too well, there are numerous problems with the temperature database. How about the ASOS system at airports, with a large percentage of impervious surfaces, recording temperatures to the nearest degree F, converting to degrees C and reporting to the nearest 0.1 degree?
What passes for good science is, unfortunately, too much like “Ignore that man behind the curtain!”

MarkW

One problem at a time.
Once everyone has a good grasp of the issues surrounding averaging this kind of data, then we can move on to problems with the data itself.

tty

As for temperatures being normally distributed: they aren't, nor are other climatological parameters. Many show Hurst-Kolmogorov (long-term persistence) behavior; others are probably AR(n) or ARMA processes.
Note that this fact invalidates much (most?) of the inference based on climatological data. For example, the confidence levels based on standard deviations seen in almost all climatological papers are only valid for normally distributed data (yes, you can calculate confidence levels for other distributions, but you must know what the distribution is to do it).

Don K

“As for temperatures being normally distributed. They aren’t, nor are other climatological parameters….”
AFAIK, that’s correct. Moreover, I suspect that very few parameters of interest in any scientific field are actually normally distributed in practice. With one important exception.
The Central Limit Theorem says that, so long as a number of mostly incomprehensible conditions are met, the computed mean of a large number of measurements will be close to the actual mean, and the discrepancy between the computed and actual mean will be normally distributed. Provided those mostly incomprehensible conditions are met, the underlying distribution doesn't have to be normal. In this case one would suspect that the standard deviation of the data has little meaning, because the distribution of the values almost certainly isn't remotely normal. But, as I understand it, because the error in estimating the mean is normally distributed, the Standard Error of the Mean could still be a meaningful measure of how good our estimate of the mean is.
However, I'm quite sure that does not say or suggest that the mean of a lot of flaky numbers is somehow transformed into a useful value by the magic of large numbers.

None of this means much unless you come to terms with the fact that they are averaging anomalies. Fluctuations about a mean are much more likely to be approximately normal than the quantities themselves.
But yes, the CLT does say that the sample mean will approach normality, whatever the sampled distributions. Most of the theory about the distribution of the mean relies on the scaling and additivity of variance, which is not special to normal distributions.
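A quick numerical check of the point being argued here, using uniformly distributed values (about as non-normal as it gets); the sample size and number of trials are arbitrary choices for the illustration.

```python
import numpy as np

rng = np.random.default_rng(4)
n, trials = 100, 20000

samples = rng.uniform(0.0, 1.0, size=(trials, n))   # decidedly non-normal data
means = samples.mean(axis=1)                         # one sample mean per trial

print("SD of individual values :", round(samples.std(), 4))   # ≈ 0.289
print("SD of the sample means  :", round(means.std(), 4))     # ≈ 0.289 / sqrt(100) ≈ 0.029
print("skew of the sample means:", round(
    ((means - means.mean()) ** 3).mean() / means.std() ** 3, 4))  # ≈ 0, i.e. near-normal
```

The individual values stay uniform, but the distribution of their sample means is close to normal with a much smaller spread, which is all the CLT promises.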

Dr. S. Jeevananda Reddy

Anomalies are derived from the absolute values only. Tampering with the absolute values therefore becomes part of the anomalies.
Dr. S. Jeevananda Reddy

Jim Gorman

NS: “None of this means much unless you come to terms with the fact that they are averaging anomalies. Fluctuations about a mean are much more likely to be approx normal than the quantities themselves.”
If that is the case, why all the adjustments to past temperature readings? The absolute values of the anomalies shouldn’t change if you are right.

There seems to be a bit of inverse thinking about averages and normality. The act of taking a mean does not endow any normality on the underlying distribution or on the distribution of deviations from the mean. Normality is a property of the underlying numbers, not of the act of taking a mean (or average). To see an example of a mean whose deviations are not normal, take some numbers, say 10,000 of them, from a uniform random distribution on 0 to 1. In Matlab this would be x = rand(10000,1). Now take the mean of x, i.e. y = mean(x). Then calculate the deviations d = x – y, i.e. x – mean(x). Now look at the deviations d. First do a histogram, and see that it is distinctly non-normal. Perhaps do a qqplot and see that it is definitely not normal. If you want, do it with 10^6 numbers and see that it never approaches normality. The property of normality is a property of the distribution of numbers one is considering; it doesn't magically appear by taking an average. To determine normality, one needs to look at the distribution of the numbers themselves; in this case it would presumably be the distribution of global temperatures over which one wished to compute an average.

tty

"But yes, the CLT does say that the sample mean will approach normality, whatever the sampled distributions"
No it does not. It says that this is true for independent random variables with a finite expected value and a finite variance. The two latter conditions are fulfilled in climatology; the first emphatically is not.

Don K

Nick. Yes, I think you're correct that averaging anomalies does change the situation, especially since they seem to average monthly anomalies, not anomalies versus the complete data set. I'm not so sure about "Fluctuations about a mean are much more likely to be approx normal than the quantities themselves." Could be so, but I think it's possible to imagine pathological cases where the fluctuations about the mean are not normally distributed and will never converge toward a Gaussian distribution. I'll have to think about whether that means anything even if it is true.

Don,
"I'll have to think about whether that means anything even if it is true."
Yes, I don't think anything really hangs on individual stations being normally distributed. But the reason I say "more likely" is mainly that the anomaly takes out the big seasonal oscillation, which is an obvious departure.

Richard Saumarez

That is a very good point.
The Gaussian distribution arises when you add many independent, randomly distributed variables (usually with the same mean).
The assumption that the temperature follows a Gaussian distribution, and that there is a mean temperature, assumes random variation about that mean.
There is no a priori reason to assume this, especially when the observations are biased by latitude, insolation, vegetation, etc. What is more likely is that the errors in measurement are random, so that a local estimate of the temperature has a normal distribution.

Menicholas

The big takeaway from all of this, to anyone for whom the above is incomprehensible mumbo jumbo, is that the claimed accuracy and precision numbers used in so-called climate science are completely and 100% full of hot crap, and we all know it.
And anyone who does not know it is deluded or an ignoramus.
And the reasons that this is so include everything from bad data recording methods, oftentimes shoddy and incomplete records, questionable and unjustified analytical techniques, and just plain making stuff up.

Mindert Eiting

Interesting point. May I add that in those skewed distributions mean and variance are dependent?

Clyde Spencer

Mindert and tty,
I haven’t absorbed all of this yet, but I thought that you and others might find this link to be of interest: https://www.researchgate.net/publication/252741800_Hurst-Kolmogorov_dynamics_in_paleoclimate_reconstructions

tty

Clyde Spencer:
I am familiar with the paper and with the “Hurst-Kolmogorov phenomenon”. Unfortunately the same can’t be said of most “climate scientists”.

Geoff Sherrington

When I first started my science career in the 1960s, I was helped by a wise and experienced statistician who said, "First, work out the form of your distribution."
There are many practical applications where the form does not matter because the outcome need only be approximate.
In climate work, a real problem arises as Clyde notes, when you connect temperature observations to physics. As a simple example, it is common to raise T to the power 4. Small errors in the estimate of T lead to consequences such as large uncertainty in estimates of radiation balance.
This balance is important to a fairly large amount of thinking about the whole global warming hypothesis.
As an extended example: if one uses a global average temperature derived from a warmer geographic selection of observing stations and then compares it to one from a cooler selection, the magnitude of the difference is large enough to make the globe appear to be cooling rather than warming, or vice versa, thereby throwing all global warming mathematical model results into doubt. Or more doubt.
It is time for a proper, comprehensive, formal, authorised study of accuracy and precision in the global temperature record. It is easy to propose, semi-quantitatively, that half of the alleged official global warming to date is instrumental error. (Satellite T is a different case).
There has been far too much polish put on this temperature turd.
Geoff

Reporting global average temperature anomalies at 0.1 °C as settled proof of manmade weather extremes rockets the cAGW movement beyond parody.

DWR54

Surely any sampling error would be in both directions, not just one? A temperature is just as likely to be misread low as it is high. In that case, if there really was no underlying change, then you would expect the low and high errors to cancel out. There would be no discernible trend.
Yet every producer of global temperatures, including satellite measurements of the lower troposphere, show statistically significant warming over the past 30 years. The probability of that happening by chance alone, of each producer being wrong and all in the same direction, seem pretty remote.

Reg Nelson

Phil Jones was finally forced to admit that there was no significant warming during the pause. You are clearly wrong on this one.
The satellite data were first ignored and then attacked, because they did not fit the confirmation bias that was clearly evident in the Climategate emails. Likewise, USCRN and ARGO were ignored when they did not fit the global warming meme.
Scientists who conspire to dodge FOIA requests and to delete emails and data are not really scientists.

DWR54

Reg
I’m referring to the past 3 decades, 30 years, which is the averaging period the above author specified. Every global temperature data set we have, including lower troposphere satellite sets, shows statistically significant warming over that period. There are differences in the slope, but these are relatively small over longer time scales. UAH is the smallest, but still statistically significant over that time (0.11 +/- 0.09 C/dec): http://www.ysbl.york.ac.uk/~cowtan/applets/trend/trend.html
The question is, if there really was no significant warming trend, how come all these different producers managed to make the same error?

Menicholas

And for the thirty years before that it was cooling, while CO2 was rising, and before that it was warming, when CO2 was hardly changing at all.
And for the past nearly twenty years, during which time 30% of all global emissions of CO2 have occurred, temps have been largely flat, outside of a couple of big El Niño-caused spikes.
Looking at all proxy reconstructions and historical records, the temp of any place and of all places combined has always been rising or falling, in patterns with many distinct periods.
The reason we are all discussing this is not the if, but the why.

"Surely any sampling error would be in both directions, not just one?"
Not if there is a bias in the convenience sampling. Example: you interview rich people at a rich people's party about how much of a fortune they have. You conclude, based on your sample, that all people on Earth are rich, because 'sampling error is in both directions'.
Not if there is a systematic error. Example: you use digital thermometers built on the same principle, with a sensor that degrades over time in a way that shows 'warming'. They might not all degrade at the same rate, but you can imagine what happens to the measurements.
Not if there is confirmation bias in the 'correcting' of data. Do I need to give an example of this?
Do I need to go on?
There is no physical law that forces 'errors to cancel out'. If that were true, you would not need science to find the truth. You would just look into goat entrails and average the results, hoping to 'average out the errors' in your prediction.

DWR54

Adrian Roman
As I understand it, the temperature stations used to compile global surface temperature data are by no means universal, but they are nevertheless fairly widespread. This is especially the case in the BEST data, which uses around 30,000 different stations, I believe. Then there are the satellite measurements of the lower troposphere. UAH claims to provide global cover for all but the extreme high latitudes. The satellite data also show warming over the past 30 years: less than the surface, but still statistically significant.
You mention possible systematic defects in instruments. Why should any such defect lead to an error in one direction only (warming in this case)? An instrument defect is surely just as likely to lead to a cool bias as a warm one. Which brings us back to the same question: why are all these supposed defects, sampling errors, etc. pointing in roughly the same direction according to every global temperature data producer over the past 30 years of measurement?

Menicholas

Hey Adrian mon,
Don’t you be givin’ no short shrift to the signs we be having from rollin’ dem chicken bones, you know mon.
Ay, and we got the Donald Trump voodoo dolls rolling over a hot bed o’ coals too, you know mon.
Goat entrails…pffftt!

DWR54, I gave some examples of how it can happen, so that one could comprehend that errors do not necessarily 'average out'. I did not claim anything specific about the particular methodology of the climastrological pseudoscience. I could go into that, too, but since you didn't follow the simple examples, I doubt you would follow the more complex situations.

Yes, Menicholas, goat entrails. Or a pile of shit. Or dices, if you prefer. Just claim that by averaging the ‘answers’ the errors will be averaged out.

MarkW

Adrian, die.
PS: I’m not attempting to insult Adrian, just reminding him that the plural of dice is die.

Well, it is used sometimes: https://en.wiktionary.org/wiki/dices I’m not a native English speaker, so I do make quite a lot of mistakes.

MarkW

Adrian, it’s a slow morning and it looks like my attempts at humor are falling flat today.

>>
MarkW
April 24, 2017 at 7:59 am
Adrian, die.
PS: I’m not attempting to insult Adrian, just reminding him that the plural of dice is die.
<<
It must be an inside joke, because I thought dice was the plural of die.
Jim

Richard M

Same ridiculous nonsense from DWR54. You can continue your silly denial or accept reality; it doesn't matter. It is clear there has been no warming in the past 20 years, and smearing warming from the previous 20 years into the last 20 only shows how desperate you are to create warming when it is clear none exists in the recent satellite data.

The probability of that happening by chance alone
========================
Then what caused temperatures to rise from 1910 to 1940? Or what caused temperatures to drop during the LIA?
The answer is clear: WE DON'T KNOW. The problem is Human Superstition. Humans in general blame the Gods and Other Humans for anything in the Natural world that is not understood. However, the solution is always the same.
primitive culture – Climate is changing – the gods are angry – solution – human sacrifice
medieval culture – Climate is changing – witchcraft and sin – solution – human sacrifice
modern culture – Climate is changing – pollution and CO2 – solution – human sacrifice

Gary Pearse

Ferd, they burned witches during the LIA because it went with plague and crop failures. They actually blamed climate on people! Nowadays, they… er… blame climate on people and this is what they call PROGRESSIVE thought.

Clyde Spencer

DWR54,
There are some unstated assumptions in your statement. If something is being consistently "misread", then that is a systematic error, which is quite likely if there is an issue of parallax in reading a thermometer. The height of the observer will affect whether the systematic error is high or low. Also, the position of the mercury in the glass column will affect parallax reading errors, so they will NOT cancel out.
Random errors will be the result of interpolation, which can also be affected by parallax.
Yes, the records seem to indicate a general warming trend, particularly at high latitudes and in urban areas. But what I'm questioning is the statistical significance of temperatures commonly reported to two or three significant figures to the right of the decimal point. I'm fundamentally questioning the claimed precision.

Mindert Eiting

And here we have, Clyde, the problem with the decision procedure as once devised by Ronald Fisher: the significance (at a given level) depends on sample size. With one million surface stations, almost every change of temperature would be significant. Let's talk about the size of the changes.

Gary Pearse

DWR is also unaware of a clear 60-year cycle, or at least he was until it was explained that the rise in temperature from 1970 to 1998 was preceded by 30 years of cooling that had the worriers projecting an unfolding ice age. After 1998 it flattened out again and was beginning to decline after 2005. There was no warming for 20 years, and the record keepers chiseled and pinched to alter this; finally a fellow at NOAA, just ready to retire, erased the Pause before he went.
The pause didn't definitively kill the CO2 idea, perhaps, but it definitely reduced its effect to a minor factor (perhaps it slowed the natural down-cycle); and there has been so much fiddling with the temperature record that this is yet another issue beyond that of error.

Menicholas

And I am sure I am not the only one who recalls that for many a year, warmistas argued vociferously that there never was any cooling scare, even going so far as to state that that whole notion was due to a Newsweek article.
Just another in a decades long series of warmista talking points, arguments, and predictions that have been completely disproven, falsified, or shown to be made up nonsense.

MarkW

Care to demonstrate that your supposition is the case? Many have presented evidence that most errors result in readings that are too warm.

tty

And how do you know it is “statistically significant”? Read my 1.54 am post above.

Dr. S. Jeevananda Reddy

Temperature is measured to one decimal place only. In averaging the data (daily, monthly, yearly, state, country, globe, etc.) the averaging and rounding process is repeated.
(maximum + minimum) / 2 = average
(35.6 + 20.1) / 2 = 27.85 = 27.9
(35.7 + 20.1) / 2 = 27.9
(35.6 + 20.7) / 2 = 28.15 = 28.2
(35.6 + 20.9) / 2 = 28.25 = 28.3
Over the global average, this type of adjustment goes on.
Dr. S. Jeevananda Reddy
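A sketch of the rounding drift being described; it assumes the round-half-up convention, replays the four example pairs above, and uses Python's decimal module so floating-point artefacts don't muddy the comparison. The helper name is illustrative.

```python
from decimal import Decimal, ROUND_HALF_UP

def daily_mean_rounded(tmax, tmin):
    """Midrange daily mean, exact and then rounded half-up to one decimal place."""
    exact = (Decimal(tmax) + Decimal(tmin)) / 2
    stored = exact.quantize(Decimal("0.1"), rounding=ROUND_HALF_UP)
    return exact, stored

for tmax, tmin in [("35.6", "20.1"), ("35.7", "20.1"), ("35.6", "20.7"), ("35.6", "20.9")]:
    exact, stored = daily_mean_rounded(tmax, tmin)
    print(f"({tmax} + {tmin})/2 = {exact} -> stored as {stored}")
```

Every time the exact midrange falls on a half, the stored value moves by 0.05 °C; repeat the round-then-average step at each level of aggregation and those small shifts propagate.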

“It is obvious that the distribution has a much larger standard deviation than the cave-temperature scenario and the rationalization of dividing by the square-root of the number of samples cannot be justified to remove random-error when the parameter being measured is never twice the same value.”
You are completely off the beam here. Scientists never average absolute temperatures. I’ve written endlessly on the reasons for this – here is my latest. You may think they shouldn’t use anomalies, but they do. So if you want to make any progress, that is what you have to analyse. It’s totally different.
Your whole idea of averaging is way off too. Temperature anomaly is a field variable. It is integrated over space, and then maybe averaged over time. The integral is a weighted average (weighted by area). It is the sum of variables with approximately independent variation, and the variances sum. When you take the square root of that sum for the total, the spread is much less than for individual readings. Deviations from independence can be allowed for. It is that reduction of variance (essentially a sample average) that makes the mean less variable than the individual values, not this repeated-measurement nonsense.
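What the area weighting looks like in practice can be sketched briefly; the anomaly field below is synthetic, and cosine-of-latitude weights are the standard choice for a regular latitude-longitude grid, not necessarily what any particular index uses.

```python
import numpy as np

rng = np.random.default_rng(5)

# Regular 2.5-degree grid of monthly anomalies (synthetic values, °C).
lats = np.arange(-88.75, 90, 2.5)
lons = np.arange(0, 360, 2.5)
anom = rng.normal(0.3, 1.2, size=(lats.size, lons.size))

# Grid-cell area is proportional to cos(latitude) on a regular lat-lon grid.
weights = np.cos(np.radians(lats))[:, None] * np.ones_like(anom)

global_mean = np.average(anom, weights=weights)
naive_mean = anom.mean()   # unweighted: over-counts the polar rows

print("area-weighted mean:", round(global_mean, 3))
print("unweighted mean   :", round(naive_mean, 3))
```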

Reg Nelson

Pretty sure the Min/Max temps used in the pre-satellite era are “absolute”, averaged, and manipulated for political reasons.
Isn’t that completely apparent? Have you read the Climategate emails, Nick?
How can you ignore the corruption of Science?
Why split hairs over something so corrupt?

“How can you ignore the corruption of Science?”
From my readings of his work here, it seems that is his job. It is hard to get a man to see a thing when his paycheck depends on his not seeing it. (the corruption that is)

Jim Gorman

Ask yourself why past temperatures are adjusted at all if the anomalies are all that is important.

Not if you break Identicality, Nick. That’s basic mathematical theory (CLT). Also there’s Nyquist to consider.
Temperature anomalies are a theoretical construct not a realistic one. So any conclusions are purely hypothetical.
Which means nothing in reality.

Menicholas

Once data is adjusted, it is not properly called data anymore at all.
At that point, it is just someone’s idea or opinion.

square root of that for the total the range is much less than for individual readings
=====================
precisely what everyone is complaining about. Just because the 30 year average of temperature on this day at my location is 19C doesn’t mean that a reading of 19C today is more accurate than a reading of 18C. Yet that is the statistical result of using anomalies, because they artificially reduce the variance.
Say, for example, I calculated the anomalies using yearly averages versus hourly averages. The anomalies would be further from zero using the yearly average and closer to zero using the hourly averages. You would artificially conclude that the anomalies calculated using the hourly average were more accurate, because their variance would be much, much smaller.
But in point of fact this would be false, because the original temperature readings were the same in both cases and thus the expected error is unchanged. Thus, the anomalies have artificially reduced the expected error. As such, the statistical basis for Climate Science lacks foundation.
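
A sketch of the comparison described above, using a synthetic hourly series: anomalies taken against a single long-term mean retain the seasonal and diurnal cycles, while anomalies against an hour-of-year climatology remove them, so their spread is smaller. Whether that smaller spread says anything about measurement error is precisely the point in dispute.

```python
import numpy as np

rng = np.random.default_rng(0)
hours = np.arange(2 * 8760)                       # two synthetic years, hourly
seasonal = 10 * np.sin(2 * np.pi * hours / 8760)  # annual cycle, deg C
diurnal  = 4 * np.sin(2 * np.pi * hours / 24)     # daily cycle
temps = 15 + seasonal + diurnal + rng.normal(0, 1, hours.size)

# Anomaly against one overall mean keeps the cycles in the spread.
anom_yearly = temps - temps.mean()

# Anomaly against an hour-of-year climatology removes the cycles.
climatology = temps.reshape(2, 8760).mean(axis=0)
anom_hourly = temps - np.tile(climatology, 2)

print(anom_yearly.std(), anom_hourly.std())       # the second is much smaller
```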

Clyde Spencer

Nick,
You claimed, “Scientists never average absolute temperatures.” If that is so, then please explain how the baseline temperature is obtained. How can you obtain an anomaly if you don’t first compute an “average absolute temperature”?
I covered this before: You can arbitrarily define a baseline and state that it has the same precision as your current readings, or average of current readings. However, you still should observe the rules of subtraction when you calculate your anomalies. If you arbitrarily assign unlimited precision to your baseline average, you are still not justified in retaining more significant figures in the anomaly than was in your recent temperature reading, usually one-tenth of a degree.

” If that is so, then please explain how the baseline temperature is obtained.”
The baseline temperature is the mean for the individual site, usually over a fixed 30 year period. You form the anomaly first, then do the spatial averaging. These are essential steps, and it is a waste of time writing about this until you have understood it.
Subtracting a 30-year average adds very little to the uncertainty of the anomaly. The uncertainty of the global average is dominated by sampling issues. Your focus on measurement pedantry is misplaced.

Mindert Eiting

Do you mean this, Nick?
t(ij) = a + b(i) + c(j) + d(ij),
in which t(ij) is the mean temperature as measured by station i in year j. The first term a is a grand mean. The effects b(i) are stations’ local temperatures as deviations from a. The effects c(j) represent the global annual temperatures as deviations from a. Finally, d(ij) is a residual.

Mindert Eiting

The effects c(j) represent the global annual temperatures as deviations from a. Finally, d(ij) is a residual.

OK, so let me ask you specifically this question:
What is the “correct” hourly “weather” (2 meter air temperature, dewpoint, pressure, wind speed, wind direction) for a single specific latitude and longitude for every hour of the year if I have 4 years of hourly recorded data?
Now, I need the hourly information of all five measured data points over the year. What is the “correct” average for 22 Feb at 0200, at 0300, and at 0400, etc?
Do you require I average the 4 years of data for Feb 22 at 0200?
Repeat again for 0300, 0400, 0500, etc.
Do a time-weighted average of the previous day’s temperature and next day’s temperature at each hour to smooth variations?
Daily temperatures change each hour, but the information changes slowly over the period of a week, since most storms last less than 3 days. Do you average the previous 3 days and next 3 days together? Average the previous and next hour with the previous and next day’s hourly data?
I am NOT interested in a minimum and maximum of each day. I DO need to know what the “average” temperature is at 0200 every day of the year when I have only 4 years of data. (At 24 hours/day x 365 x 4, it’s 35,040 hours of “numbers” but no “information.”)
Now, what I really want to build is a theoretical “year” of “average” weather conditions at that single latitude and longitude. (The “real” imaginary year is a series of 24 curve-fit equations that one can determine FROM the list of 365 points for each hour.)
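
One way to answer the question posed above, sketched in Python with synthetic hourly data (four hypothetical 365-day years, no gaps, no leap days): average the same hour-of-year across the four years, then optionally smooth over a window of a few days, as the commenter suggests.

```python
import numpy as np

rng = np.random.default_rng(1)
hours_per_year = 365 * 24
t = np.arange(4 * hours_per_year)
temps = (15 + 10 * np.sin(2 * np.pi * t / hours_per_year)
         + rng.normal(0, 3, t.size))               # synthetic hourly readings

# Average the same hour-of-year across the 4 years (e.g. 22 Feb, 02:00).
hourly_climatology = temps.reshape(4, hours_per_year).mean(axis=0)

# Optional smoothing: a centered running mean over +/- 3 days of hours,
# one way to damp the residual year-to-year noise mentioned above.
window = 7 * 24
kernel = np.ones(window) / window
smoothed = np.convolve(hourly_climatology, kernel, mode="same")

feb22_0200 = 24 * (31 + 21) + 2   # hour index of 22 Feb, 02:00 (non-leap year)
print(hourly_climatology[feb22_0200], smoothed[feb22_0200])
```

With only four values per hour-of-year the unsmoothed estimate is noisy; the smoothing window trades that noise against blurring of real day-to-day structure, which is exactly the judgment call the comment is asking about.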

Mindert,
I’ve set out a linear model here, more recent version here. I write it as
t_smy = L_sm + G_my + e_smy
station, month, year, where L are the station offsets (climatologies), G the global anomaly and e the random (becomes residual). It’s what I solve every month.

Mindert Eiting

RAC and Nick, thanks for the comments. I did something very unsophisticated. The matrix t(ij) has numerous missing values because stations exist (like human beings) for a limited number of years. So I wrote a little program estimating the station effects iteratively. As far as I remember, the solution converged in fifteen rounds, when the estimates did not differ by more than a small amount between two successive rounds. Next, I made pictures of the annual effects from about 1700 till 2010. I cannot guarantee their worth, but it was a fun job to do.
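
A minimal sketch of the kind of iteration described above: a station-by-year matrix with missing values, fitted to an additive model of station and year effects by alternately updating the two sets of effects until they stop changing. All numbers are synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)
n_stations, n_years = 50, 40
b_true = rng.normal(0, 5, n_stations)            # station offsets
c_true = np.linspace(-0.5, 0.5, n_years)         # slow "annual effect"
t = 14 + b_true[:, None] + c_true[None, :] + rng.normal(0, 0.5, (n_stations, n_years))
t[rng.random(t.shape) < 0.3] = np.nan            # ~30% of station-years missing

b = np.zeros(n_stations)                         # station effects
c = np.zeros(n_years)                            # year effects
for _ in range(100):
    c_prev = c.copy()
    c = np.nanmean(t - b[:, None], axis=0)       # year effects given stations
    b = np.nanmean(t - c[None, :], axis=1)       # station effects given years
    if np.max(np.abs(c - c_prev)) < 1e-6:        # stop when estimates settle
        break

annual_anomalies = c - c.mean()                  # annual effects about their mean
print(np.round(annual_anomalies[:5], 2))
```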

commieBob

I would be surprised if the cave temperature were the arithmetic average of the outside temperatures. My wild-ass guess is that rms would work better.
The temperature of the cave is the result of a flow of heat energy (calories, BTU). The flow of heat over time is power. Power is calculated using rms.
I have no idea how much difference it will make. It probably depends on local conditions.

commieBob

Oops. If negative values are involved, rms will give nonsensical results.

Duncan

That is why absolute temperature is (or should be) used and is a principal parameter in thermodynamics. Zero and 100 °C are arbitrary points based on the freezing/boiling points of one type of matter at our average atmospheric pressure (i.e., 101 kPa). At different pressures these boundaries change dramatically.

Kaiser Derden

the air in the cave (which is measured) doesn’t move and is surrounded by heavy insulation … it really doesn’t matter what the temperature is outside … the energy just can’t transfer between the outside and the cave …

Clyde Spencer

commieBob,
I acknowledged that the situation is more complex than a simple average of the annual above-ground temperatures. It was simply offered as an analogy for the difference between sampling a parameter with very little variance and sampling global temperatures.

Clyde Spencer

Forrest,
I suspect that one will find that there is a depth in a mine at which the surface influence is swamped by the upwelling geothermal heat, and there will cease to be a measurable annual variation. I’m not familiar with the technique of borehole temperature assessments of past surface temperatures, but I’m reasonably confident that there is a maximum depth and a maximum historical date for which it can be used.

Menicholas

Ground water too seems to represent a rough average of the annual temp for a given location, although the deeper the source aquifer of the water, the longer ago the average that applies may be.
Similar for caves?
Very deep ones may be closer to what the temp was a long time ago.

Menicholas

It appears my recollection from my schoolin’ thirty some years ago is fairly close to what is thought to be the case:
“The geothermal gradient varies with location and is typically measured by determining the bottom open-hole temperature after borehole drilling. To achieve accuracy the drilling fluid needs time to reach the ambient temperature. This is not always achievable for practical reasons.
In stable tectonic areas in the tropics a temperature-depth plot will converge to the annual average surface temperature. However, in areas where deep permafrost developed during the Pleistocene a low temperature anomaly can be observed that persists down to several hundred metres.[18] The Suwałki cold anomaly in Poland has led to the recognition that similar thermal disturbances related to Pleistocene-Holocene climatic changes are recorded in boreholes throughout Poland, as well as in Alaska, northern Canada, and Siberia.”
https://en.wikipedia.org/wiki/Geothermal_gradient

Menicholas

This would seem to indicate that temps in the tropics changed little, if at all, during the glacial advances.

D B H

Hells bells…..six beers (literally) and having read this article….it actually makes sense, and I (maybe) actually understood the points being made.
Maybe, just maybe, some of those ‘march for science’ die hards, should also have a six pack and then read some of this….it worked for me!!!
Oh, go on, give me a hard time…. 🙂

D B H

I am a trader in the stock market, and there is NO BS in this arena….for you live or die upon the correct interpretation of correct and available DATA.
Information of and from different time scales in all matters is paramount…period.
Information and DETAIL are though, often thought to be the same…they need not be, and assuming they are, is fatal.
Ditto it would seem, with any sort of analysis of (global) temperatures.
Attempting to ‘average’ temperatures on a global scale, while appearing that science is supporting this attempt, is little more than ‘a best effort’.
Do that within my sphere of interest, and you’d end up in the poor house.
I agree with this article’s underlying rationale and conclusions, and despite being anything but scientifically trained, I would attempt to defend its conclusions (in general, if not in detail).
I would suggest that this approach is more likely to prove itself robust and viable than its counterpart, i.e., CAGW.
Sorry if this is a bit obtuse, but I’m working under some considerable limitations, as noted in my previous comment.
D B H

Philip Mulholland

D B H
Your comment works for me.
Break open another tinnie.

R. Shearer

In your opinion then, would aspects of the situation be similar to companies being removed/inserted into the various stock market indices? For example, Sears was removed from the Dow in 1999.

D B H

You are quite correct.
Index charts, such as the one you’ve used as an example, are biased upward (sound familiar?) simply by removing companies that have become defunct, been taken over, or failed in the normal course of business.
The chart of an index therefore IS visual data that has been altered, and an average ‘investor’ might be excused for thinking trends are more significant than they really are.
My point (badly made) was that having only superficial data from a single source does not and cannot allow the observer (non-scientific people like myself) to understand what is truth and what is fiction.
Explain it better, (as in the above article) and we can gain greater understanding of the matter, if only in general and to our ability to understand.
Superficial data (a stock price chart) can to those trained, be read and understood….but that would be a rare person indeed.
Supply me a ‘yard stick’ by which I can read and compare the superficial data (like this article above) then I CAN make sense of the information being presented….all by my lonesome, non-scientific, self.
Can I be fooled by that data?
Heck yes, if it were stand alone and not corroborated….but that is not the case.
Robust data CAN be corroborated, supported and replicated.
Bad data…well….ends up creating the (insert expletive here) march of concerned scientists.

Richie

It is disingenuous in my view to stipulate 30 years as a climatologically significant period. They “gotcha” with that one. It’s an arbitrary parameter that obscures the inconvenient truth that the planet is well into an interglacial warm spell. The catastrophists’ claim that humans are responsible for the current, pleasantly life-sustaining climate is predicated on everyone’s taking the shortest possible view of climate. That short-sightedness is what led Hansen to predict, back in the ’70s, that global cooling would end civilization through famine and resource wars. This prediction was extremely poorly timed, as the ’70s marked the end of a 30-year cooling cycle. Hansen’s “science” was simply a projection of the recent cold past into the unknowable future. Sound familiar?
Statistics are just as useful for obscuration as for illumination. Perhaps more so. Nothing changes the fact that the temperature data must be massaged in order to produce “meaningful” inferences, and, to torture McLuhan, ultimately the meaning is in the massage.

Clyde Spencer

Richie,
You said, “Statistics are just as useful for obscuration as for illumination.” The last sentence in my third paragraph was, ” Averages can obscure information, both unintentionally and intentionally.”

azeeman

Thirty years is not arbitrary, it’s the length of a professor’s tenure. If the professor is a climatologist, then thirty years is climatologically significant. Any longer and it would cut into his retirement. Any shorter and he would be unemployed late in life.

Thank you Clyde Spencer for this important post.
Temperature and its measurement is fundamental to science and deserves far more attention than it gets.
I disagree strongly with Nick Stokes and assert that averaging temperatures is almost always wrong. I will explain in part what is right.
If I open the door of my wood stove and a hot coal at roughly 1000 °F falls on the floor, does that mean the average temperature of my room is now (68 + 1000)/2 = 534 °F? How ridiculous, you say. However, I can calculate the temperature that might result if the coal loses its heat to the room from (total heat)/(total heat capacity). Since the mass of air is many times that of the coal, the temperature will change only by fractions of a degree. To average temperatures, one must weight by the heat content.
The property that can be averaged and converted to an effective temperature for air is the moist enthalpy. When properly weighted by density, it can usefully be averaged and then converted back to a temperature for the compressible fluid air.
More relevant for analysis of the CO2 impact is to use the total energy, which includes kinetic and potential energies, again converted to an equivalent temperature. This latter measure is the one which might demonstrate some impact of CO2-induced change in temperature gradients, and should be immune to the effects of interconversion of energy among internal energy, convection, and moisture content.
With the billions of dollars spent on computers and measurements, it is ridiculous to assert that we cannot calculate a global average energy-content-equivalent temperature.
A discussion which contains other references to Pielke and Massen is:
https://aea26.wordpress.com/2017/02/23/atmospheric-energy/
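
A minimal sketch of the enthalpy-weighting idea in the comment above, with made-up surface values, a constant specific heat, and the kinetic and potential terms ignored: the moist enthalpy per unit mass is approximately cp·T + Lv·q, and dividing the mass-weighted mean enthalpy by cp gives an “equivalent temperature” that credits humid air with the heat it actually carries.

```python
# Constants (approximate): specific heat of dry air and latent heat of vaporization.
CP = 1005.0      # J/(kg K)
LV = 2.5e6       # J/kg

# (temperature K, specific humidity kg/kg, air density kg/m^3) -- hypothetical samples.
samples = [
    (303.0, 0.018, 1.16),   # hot and humid
    (303.0, 0.005, 1.16),   # hot and dry
    (288.0, 0.008, 1.22),   # mild
]

# Density-weighted mean moist enthalpy, converted back to an equivalent temperature.
total_weight = sum(rho for _, _, rho in samples)
mean_h = sum(rho * (CP * t + LV * q) for t, q, rho in samples) / total_weight
te_mean = mean_h / CP

# Plain density-weighted temperature mean, for comparison.
mean_t = sum(rho * t for t, q, rho in samples) / total_weight
print(round(te_mean, 1), round(mean_t, 1))   # enthalpy-based mean vs. temperature mean
```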

“does that mean the average temperature of my room is now”
No. As I said above, the appropriate average is a spatial integral – here volume-weighted. As I also said, you also have to multiply by density and specific heat.

@Nick Stokes “you also have to multiply by density and specific heat”
Consider that since density = P/(Rm·Ta), where Rm is the molecular-weighted R and Ta is absolute temperature, you are dividing T by Ta; weighting Cp (=1) for added water leaves (T/Ta)·(1 − 0.863·Q)·Rm/P, where Q is the mixing ratio. Does this make more sense than just T? It does diminish Denver relative to Dallas.

“Does this make more sense than just T?”
No. T has an important function. It determines which way heat will flow. It makes you feel hot. Heat content is important too, but I can’t see what could be done with the measure you propose.


R. Shearer

That would inevitably lead to the end of this scam, however.

You make a good argument against using average temperatures, or average anomalies, as a stand-in for global average energy content. However, we’re currently dealing with a shell game and not Texas hold’em, so we first have to concentrate on showing how this game is fixed.

“We should be looking at the trends in diurnal highs and lows for all the climatic zones defined by physical geographers.”
There is research that follows this direction, using the generally accepted Köppen zones as a basis to measure whether zonal boundaries are changing due to shifts in temperature and precipitation patterns.
The researchers concluded:
“The table and images show that most places have had at least one entire year with temperatures and/or precipitation atypical for that climate. It is much more unusual for abnormal weather to persist for ten years running. At 30 years and more the zones are quite stable, such that there is little movement at the boundaries with neighboring zones.”
https://rclutz.wordpress.com/2016/05/17/data-vs-models-4-climates-changing/

Thomas Graney

I think this thread amply demonstrates how poorly people, some people at least, understand statistics, measurements, and how they both relate to science. In their defense, it’s not a simple subject.

Statistics are a way to analyze large amounts of data. What’s being demonstrated here is not how poorly some people understand the subject, but how differences of opinion in the appropriate application of statistical tools will result in vastly different results.

Clyde Spencer

Thomas,
Would you care to clarify your cryptic statement? Are you criticizing me, or some of the posters, or both?

co2islife

The other point is that CO2 is a constant for any short period of time. CO2 is 400 ppm no matter the latitude, longitude, and/or altitude. The temperature isn’t what needs to be measured; the impact of CO2 on temperature is. Gathering mountainous amounts of corrupted data doesn’t answer that question. We shouldn’t be gathering more data, we should be gathering the right data. Gathering data that requires “adjustments” is a complete waste of time and money, and greatly reduces the validity of the conclusion. If we want to explore the impact of CO2 on temperature, collect data from areas that measure that relationship and don’t need “adjustments.” Antarctica and the oceans don’t suffer from the urban heat island effect. Antarctica and the oceans cover every latitude on earth. CO2 is 400 ppm over the oceans and Antarctica. Collecting all this corrupted data opens the door for corrupt bureaucrats to adjust the data in a manner that favors their desired position. Collecting more data isn’t beneficial if the data isn’t applicable to the hypothesis. The urban heat island isn’t applicable to CO2-caused warming. Don’t collect that data.

Steve Case

the difference between the diurnal highs and lows has not been constant during the 20th Century.
B I N G O !
The average of 49 and 51 is 50 and the average of 1 and 99 is 50.
And if you pay attention to the Max and Min temperatures some interesting things drop out of the data.
I’ve spammed this blog too many times with this US Map but it illustrates the point.

Kalifornia Kook

This is a really cool map, and it contains some clues as to how to duplicate this result – but could you give a few more clues? This is great information, and combined with the info in the main article regarding how minimums have been going up, it really closes the loop on how temperatures have been rising without having been exposed to higher daytime temperatures. This was a great response!

Kalifornia Kook

Haven’t been able to replicate that map. Can someone help me?

Nick Stokes
April 23, 2017 at 3:13 am

It is obvious that the distribution has a much larger standard deviation than the cave-temperature scenario and the rationalization of dividing by the square-root of the number of samples cannot be justified to remove random-error when the parameter being measured is never twice the same value.

You are completely off the beam here. Scientists never average absolute temperatures. I’ve written endlessly on the reasons for this – here is my latest. You may think they shouldn’t use anomalies, but they do. So if you want to make any progress, that is what you have to analyse. It’s totally different.

I read your referenced article, and I have to disagree with your logic on the point of location uncertainty. You say “You measured in sampled locations – what if the sample changed?” However, in your example, you are not “changing the sample” — you are changing a sample of the sample. Your example compares that subset of data with the results from the full set of data, to “prove” your argument. In the real world you don’t have the “full set” of data of the Earth’s temperature; the data you have IS the full set, and I think applying any more statistical analyses to that data is unwarranted, and the results unprovable — without having a much larger data set with which to compare them.
In a response to another post, you stated that the author provided no authority for his argument. I see no authority for your argument, other than your own assertion it is so. In point of fact, it seems that most of the argument for using these particular adjustments in climate science is that the practitioners want to use them because they like the results they get. You like using anomalies because the SD is smaller. But where’s your evidence that this matches reality?

Your whole idea of averaging is way off too. Temperature anomaly is a field variable. It is integrated over space, and then maybe averaged over time. The integral is a weighted average (by area). It is the sum of variables with approximately independent variation, and the variances sum. When you take the square root of that for the total, the range is much less than for individual readings. Deviations from independence can be allowed for. It is that reduction of variance (essentially sample average) that makes the mean less variable than the individual values – not this repeated measurement nonsense.

The Law of Large Numbers (which I presume is the authority under which this anomaly averaging scheme is justified) is completely about multiple measurements and multiple samples. But the point of the exercise is to decrease the variance — it does nothing to increase the accuracy. One can use 1000 samples to reduce the variance to +/- 0.005 C, but the statement of the mean itself does not get more accurate. One can’t take 999 samples (I hate using factors of ten because the decimal point just ends up getting moved around; nines make for more interesting significant digits) of temperatures measured to 0.1 C and say the mean is 14.56 +/- 0.005C. The statement of the mean has to keep the significant digits of its measurement — in this case, 14.6 — with zeros added to pad out to the variance: 14.60 +/- 0.005. That is why expressing anomalies to three decimal points is invalid.

Steve Case

Measuring Jello® cubes multiple times with multiple rubber yard sticks won’t get you 3 place accuracy.

Menicholas

No, but the cleanup might be tasty and nutritious.

“However, in your example, you are not “changing the sample” — you are changing a sample of the sample.”
Yes. We have to work with the record we have. From that you can get an estimate of the pattern of spatial variability, and make inferences about what would happen elsewhere. It’s somewhat analogous to bootstrapping. You can use part of the sample to make that inference, and test it on the remainder.
None of this is peculiar to climate science. In all of science, you have to infer from finite samples; it’s usually all we have. You make inferences about what you can’t see.

Two problems that I can see with that approach. First, bootstrapping works where a set of observations can be assumed to be from an independent and identically distributed population. I don’t think this can be said about global temperature data.
Secondly, the method works by treating inference of the true probability distribution, given the original data, as being analogous to inference of the empirical distribution of the sample.
However, if you don’t know how your original sample set represents your actual population, no amount of bootstrapping is going to improve the skewed sample to look more like the true population.
If the sample is from temperate regions, heavy on US and Europe, all bootstrapping is going to give one is a heavy dose of temperate regions/US/Europe. It seems pretty obvious that the presumption of bootstrapping is that the sample at least has the same distribution as the population — something that certainly can NOT be presumed about the global temperature record.
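
A sketch of the caveat raised above, with synthetic numbers: bootstrap resampling quantifies the spread of a statistic given the sample in hand, but it cannot correct a sample that over-represents one region, so the bootstrap interval clusters around the biased sample mean rather than the population mean.

```python
import numpy as np

rng = np.random.default_rng(3)
temperate = rng.normal(0.6, 0.3, 5000)      # "anomalies" in an over-sampled region
tropics   = rng.normal(0.2, 0.3, 5000)      # under-sampled region
population = np.concatenate([temperate, tropics])

# Biased sample: 90% temperate stations, 10% tropical.
sample = np.concatenate([rng.choice(temperate, 180), rng.choice(tropics, 20)])

boot_means = [rng.choice(sample, sample.size, replace=True).mean()
              for _ in range(2000)]
print(population.mean())                       # ~0.4, the "true" mean
print(np.percentile(boot_means, [2.5, 97.5]))  # brackets ~0.56, well away from 0.4
```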

Clyde, I like your ball bearing analogy. Perhaps you could expand on later in the series.
1) Yes we know that when one micrometer is used to measure one ball bearing (293 mm in diameter), multiple measurements help reduce micrometer measurement errors.
2) Yes we know that when we measure many near identical ball bearings (293 mm in diameter) with one micrometer, the more ball bearings we measure the more accurate the average reading will be.
3) Climate: we measure random ball bearings, whose diameter varies between 250 mm up to 343 mm, and we use 100 micrometers.
4) If we measure 100 of the ball bearings, chosen at random, twice per day, how does averaging those measurements increase their accuracy? (See the sketch after this list.)
5) Replace mm with K and you get the global average temperature problem.
6) Stats gives us many ways to judge a sample size but if the total population for this temperature measurement is 510,000,000 square kilometers, and each temperature sample represented the temperature of one square kilometer, how many thermometers do we need?
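
A Monte Carlo sketch of the ball-bearing scenario in the list above (made-up numbers): bearings whose true diameters vary widely, each measured once with micrometers that add a small random error. The mean of the readings estimates the population mean, and its standard error does shrink with the square root of the sample size, but the spread is dominated by real bearing-to-bearing variation rather than instrument error, and the mean says nothing about any individual bearing.

```python
import numpy as np

rng = np.random.default_rng(4)
true_diameters = rng.uniform(250, 343, 100)    # mm, one per randomly chosen bearing
instrument_error = rng.normal(0, 0.02, 100)    # mm, per micrometer reading
readings = true_diameters + instrument_error

mean_reading = readings.mean()                        # estimates the population mean
sem = readings.std(ddof=1) / np.sqrt(readings.size)   # standard error of the mean
print(round(mean_reading, 1), round(sem, 1))
```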

graphicconception

My concern is that average temperatures are then averaged again, sometimes many times before a global average is obtained.
First you have min and max, and you average them. Then you have daily figures and you average them; then there is the area weighting (kriging), etc.
OK you can do the math(s) but when you average an average you do not necessarily end up with a meaningful average.
Consider two (cricket) batsmen. A batting average is number of runs divided by number of times bowled out. In the first match, batsman A gets more runs than batsman B and both are bowled out. “A” has the higher average. A maintains that higher average all season and in the final match both batsmen are bowled out but A gets more runs than B. Who has the higher average over the season?
The answer is, You can’t tell.
For instance: A gets 51 runs in match 1 and B gets 50 runs. A is injured and only plays again in the last match. B gets 50 runs and is out in each of the other 19 matches before the last one. So A’s average all year was 51 while B’s was 50. In the last match A gets 2 runs and B gets 1.
Average of averages for A gives (51 + 2)/2 = 26.5
Average of averages for B gives (50 + 1)/2 = 25.5
True average for A is 26.5
True average for B is (20*50 + 1)/21 = 47.666..
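
A minimal sketch of the batting example above: the properly weighted season average (total runs over total dismissals) ranks B well ahead of A, while the unweighted average of the two sub-averages ranks A ahead, which is the pitfall being described.

```python
# Batsman A: 51 runs in match 1, injured, then 2 runs in the last match.
# Batsman B: 50 runs in each of 20 matches, then 1 run in the last match.
a_innings = [51, 2]
b_innings = [50] * 20 + [1]

def season_average(innings):
    # Proper batting average: total runs / total dismissals (out every innings here).
    return sum(innings) / len(innings)

def average_of_averages(pre_final_average, final_score):
    # The unweighted average of the two sub-averages -- the pitfall in question.
    return (pre_final_average + final_score) / 2

print(season_average(a_innings), average_of_averages(51, 2))   # 26.5 and 26.5
print(season_average(b_innings), average_of_averages(50, 1))   # ~47.67 and 25.5
```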

”True average for A is 26.5
True average for B… ”
Obviously, you can tell. You just have to weight it properly. This is elementary.

graphicconception

Thanks Nick, I know how to do it. My concern is that with all the processing that the temperature data is subjected to, I would find it hard to believe that they apply all the necessary weightings – even if they calculate them.
For instance, taking the average of min and max temperatures by summing and dividing by two is already not the best system. An electrical engineer would not quote an average voltage that way.
Shouldn’t you also weight the readings by the local specific heat value? For example by taking into account local humidity? Basically, you need to perform an energy calculation not a temperature one.
Is it known how much area a temperature value represents? If not, where do the weightings come from?
It would be interesting to know just how much the global average surface temperature could be varied just by changing the averaging procedures.
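
A small sketch of the min/max point above, using a synthetic, asymmetric diurnal profile (a short warm afternoon on a long cool night): the midrange (Tmin + Tmax)/2 can sit noticeably above the true time-averaged temperature, which is the sense in which an engineer would not quote an “average” that way.

```python
import numpy as np

# Synthetic 24-hour profile: cool baseline with a narrow afternoon peak.
hours = np.arange(24)
temps = 10 + 8 * np.exp(-((hours - 14) ** 2) / (2 * 2.5 ** 2))   # deg C

midrange = (temps.min() + temps.max()) / 2   # the (Tmin + Tmax)/2 convention
hourly_mean = temps.mean()                   # time-averaged value
print(round(midrange, 2), round(hourly_mean, 2))   # midrange overstates the mean here
```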

“Is it known how much area a temperature value represents? If not, where do the weightings come from?
It would be interesting to know just how much the global average surface temperature could be varied just by changing the averaging procedures.”

The first is a matter of numerical integration – geometry. I spend a lot of time trying to work it out. But yes, it can be worked out. On the second, here is a post doing that comparison for four different methods. Good ones agree well.

It is change in the earth’s energy content that is important — energy flow into and out of the earth system. This seems like a job for satellites rather than a randomly distributed network of thermometers in white boxes and floats bobbing in the oceans.

Menicholas

You have no future as a government paid political hac…I mean climate scientist.
That much is clear.

Rick C PE

What is global average temperature anyway? How is it defined and how is it measured? As a metrologist and statistician, if you asked me to propose a process of measuring the average global temperature, my first reaction would be I need a lot more definition of what you mean. When dealing with the quite well defined discipline of Measurement Uncertainty analysis, the very first concern that leads to high uncertainty is “Incomplete definition of the measurand”. Here’s the list of factors that lead to uncertainty identified in the “ISO Guide to the Expression of Uncertainty in Measurement” (GUM).
• Incomplete definition of measurand.
• Imperfect realization of the definition of the measurand.
• Non-representative sampling – the sample measured may not represent the defined measurand.
• Inadequate knowledge of the effects of environmental conditions on the measurement or imperfect measurement of the environmental conditions.
• Personal bias in reading analogue instruments.
• Finite instrument resolution or discrimination threshold.
• Inexact values of measurement standards and reference materials.
• Approximations and assumptions incorporated into the measurement method and procedure.
• Variations in repeated observations of the measurand under apparently identical conditions.
• Inexact values of constants and other parameters obtained from external sources and used in the data-reduction algorithm.
As far as I can tell there is no such thing as a complete “definition of the measurand” (global average temperature). It seems that every researcher in this area defines it as the mean of the data that is selected for their analysis. Then most avoid the issue of properly analyzing and clearly stating the measurement uncertainty of their results.
This is, IMHO, bad measurement practice and poor science. One should start with a clear definition of the measurand and then design a system – including instrumentation, adequate sampling, measurement frequency, data collection, data reduction, etc. – such that the resulting MU will be below the level needed to produce a useful result. Trying to use data from observations not adequate for the purpose is trying to make a silk purse out of a sow’s ear.

Jim Gorman

Hear, hear! I have said the same thing, just not in as much detail as you. What is global temperature? Does comparing annual figures, as computed now, say more about weather phenomena in a particular region than about the actual temperature of the earth as a whole? I bet almost none of the authors of climate papers can adequately address the issues brought up here. Precision to 0.001 or even 0.01 when using averages of averages of averages is a joke.

Rick C PE

Having spent an entire 40 yr career in the laboratory measuring temperatures of all kinds of things in many different situations, I can say that over a range of – 40 to + 40 C, it is extremely difficult to achieve uncertainties of less than +/- 0.2 C. I can’t buy the claim that global average temperature can be determined within anything close to even +/- 1 C.

Rick C PE and Jim Gorman. It’s a pleasure to read your opinions on this topic.

Clyde Spencer

Rick,
I agree completely with your advice. However, the problem is that in order for the alarmists to make the claim that it is warming, and doing so anomalously, they have to reference today’s temperatures to what is historically available. There is no way that the historical measurements can be transmogrified to agree with your recommended definition. We are captives of our past. Again, all I’m asking for is that climatologists recognize the limitations of their data and be honest in the presentation of what can be concluded.

Rick C PE

Clyde: Yes, obviously we are stuck with historical records not fit for the purpose. I’m not opposed to trying to use this information to try to determine what trend may exist. I just don’t think it should be presented without acknowledgement of the substantial uncertainties that make reliance on this data to support a hypothesis highly questionable.

Clyde Spencer

Rick,
You said, “I just don’t think it should be presented without acknowledgement of the substantial uncertainties that make reliance on this data to support a hypothesis highly questionable.”
And that is the point of this and the previous article!

IPCC AR5 glossary defines “surface” as 1.5 m above the ground, not the ground itself.
If the earth/atmosphere are NOT in thermodynamic equilibrium then S-B BB, Kirchhoff, upwelling/downwelling/”back” radiation go straight in the dumpster.

Clyde Spencer

Nicholas,
We obviously can’t treat the entire globe as being in equilibrium. However, we might be able to treat patches of the surface as being in equilibrium for specified periods of time and integrate all the patches over time and area. Whether that is computationally feasible or not, I don’t know. However, I suspect it is beyond our capabilities.

Peta from Cumbria, now Newark

Exactly agree with all (most, will confess I’ve not read every little bit) of the above.
There are soooo many holes in this thing, not least
1. Temperature is not climate
2. Temperature per se does not cause weather (averaged to make Climate somehow – I ain’t holding my breath). Temperature difference causes weather.
This entire climate thing is just crazy.

co2islife

This article focuses entirely on data problems with current temperature measurements. It never ties back to CO2’s impact on temperature. 1) The urban heat island is an exogenous factor that requires “adjustments,” and that is just one exogenous factor. 2) CO2 is a constant 400 ppm; constants can’t explain variations. 3) CO2 doesn’t warm water. 4) CO2’s only way to affect climate change is through absorbing 13 to 18 microns, and its impact is largely saturated. 5) CO2 can’t cause record high daytime temperatures; CO2 only traps outgoing non-visible light. Climate science is a science; only data collected that helps isolate the impact of CO2 on temperature is relevant. The law of large numbers doesn’t apply when applied to corrupt data. Only data that isolates the impact of CO2 on temperature should be used. All this other stuff is academic. Focus on the science, and how a good experiment would be run. If I were doing an experiment, I wouldn’t be collecting data that needs to be adjusted. The issue isn’t whether we are warming; the issue is whether CO2 causes warming. Global temperatures won’t prove that; they don’t isolate the effect of CO2.

John Shotsky

Often, taking things to an extreme helps illustrate a point. If we had a temperature measuring device near Moscow, and another near Rio de Janeiro, both with zero measurement errors, and without urban heat effects, with readings taken hourly for 100 years, we could then average all these numbers and come up with a number. Exactly what would that average mean? Absolutely nothing, it is simply a number with no actual meaning. Throwing more such stations into the mix doesn’t change that. It is still a meaningless number.
Oh, did I forget to mention that the measurement stations are at different altitudes? And different hemispheres?
We could fix that with sea level thermometers, one on the US west coast, and another on the east coast. Same as above, average the readings, and what do you have? Let’s see, one is measuring the temperature of air from over thousands of miles of ocean, and the other thousands of miles of land. Exactly what would this average mean, then? The point is that these averages are meaningless to start with, and throwing more stations in does not add value, regardless of measurement errors. You do not get a ‘better’ average.

If climate science was applied to your automobile, and an average temperature of it was contrived, what would knowing that average temp be worth?
Andrew

Olen

Cannot read article because of page moving to share this.

John M Tyson

I am a pit bull latching onto political fake news about global warming and delighted to have found your site. However, I am neither a scientist nor a mathematician. Is it reasonable to ask you to “translate” important points into lay language? I would like to pass some of your points on to others in language they will understand, as will I if challenged. Thanks for sanity.

[Of course, pull in and ask away. There are many individuals here who know a great deal about a great deal and are always happy to pass their knowledge along to a genuine questioner. Just be polite and clear about what you are looking for, nobody likes ambiguity. . . . mod]

John, there is much stuff you can use, written for laymen, in my ebook Blowing Smoke. All illustrated. Nothing as technical as this guest post.

Clyde Spencer

John,
You will note that there are no mathematical formulas, much to the chagrin of some regulars such as Windchasers. WUWT articles range from pieces that just whine about alarmists to others, say by Monckton, that have mathematical formulas and are fairly rigorous. It seems that there are a lot of retired engineers and scientists who frequent this blog. Therefore, I tried to strike a balance between having my ideas accepted by those with technical backgrounds and not losing the intelligent layman. Sort of a Scientific American for bloggers. 🙂 If you have any specific questions, I’ll try to respond over the next couple of days.

Tom Halla

Displaying temperature distributions should be done graphically. Once upon a time, such as when I was in college, that was involved and expensive to print. Graphing software is now cheap and available, and a graph will show any skewness of the distribution and whether trying to use Gaussian statistical tests is reasonable.
I think trying to use as much of the data as possible is a good thing.

co2islife

What is the standard deviation of the absorption of IR by CO2? Temperatures have a huge variation; the physics of CO2’s absorption doesn’t. CO2 at best can explain a parallel shift in temperatures; it can’t explain the huge variation. Once again, tie everything back to how CO2 can explain the observation. CO2 doesn’t cause the urban heat island effect, CO2 doesn’t cause record daytime temperatures, CO2 doesn’t warm water, etc., etc. Stay focused on the hypothesis. Evidence of warming isn’t evidence CO2 is causing it.

Forget about how to represent “…reported, average temperatures.” There is no average. Forget about data error and precision; they don’t belong. Global temperature is not a random quantity and cannot be understood with statistics meant for analyzing random data. Now plot the entire temperature curve from 1850 to 2017 on a graph. Use a data set that has not been edited to make it conform to global warming doctrine; HadCRUT3 would fit the bill. On the same graph, also plot the Keeling curve of global carbon dioxide and its ice core extension. Now sit back and contemplate all this. What do you see? The first thing you should notice is that the two curves are very different. The Keeling curve is smooth, with no ups or downs. But the temperature curve is irregular: it has its ups and downs and is jagged. Two peaks on the temperature curve especially stand out, the first at year 1879, the second at year 1940.
Warmists keep telling us that global temperature keeps going up because of the greenhouse effect of carbon dioxide. Take a close look at the part of the Keeling curve directly opposite these two temperature peaks. Where is that greenhouse effect? There is no sign that the Keeling curve had anything to do with these two global warm peaks. The problem is that from 1879 to 1910 temperature actually goes down, not up, completely contrary to global warming doctrine. That span is just over 30 years, the magic number that turns weather into climate. There is a corresponding warm spell from 1910 to 1940 that also qualifies as climate, warm climate in fact. But this is not the end. 1940 is a high temperature point and another cooling spell begins with it. That coolth is from the cold spell that ushered in World War II. It stayed cool until 1950, at which point a new warming set in. Global temperature by then was so low that it took until 1980 to reach the same level that existed in 1940. And in 1980 a hiatus began that lasted for 18 years.
The powers that control temperature at the IPCC decided, however, to change that stretch into global warming instead. That is a scientific fraud, but they apparently don’t care because they control what is published. By my calculation this act adds 0.06 degrees Celsius to every ten-year stretch of temperature from that point on to the present. I cannot see how statistics can have any use for interpreting such data. I have laid out the data. What we need is for someone to throw out the temperature pirates who spread misinformation about the real state of global temperature.

Peter Sable

By convention, climate is usually defined as the average of meteorological parameters over a period of 30 years

Do you have a reference for this?
Knowing that the multi-decadal oscillations (e.g. the PDO) are on the order of 65-80 years, anyone with a modicum of signal processing experience would understand how stupid it is to look at something over a mere 30 years. You need at least double the cycle length, or about 140 years, and with the amount of error in the measurements and the number of overlapping cycles you need hundreds of years of data to make a call on anything.
The metric with the widest confidence interval (i.e., the least confidence) one could come up with for any time-series data is the trend of the data. It just doesn’t get worse than that. That’s not signal processing 101, but it’s definitely 501 (early grad school). There’s just very little useful information in the lowest-frequency portion of a time series.
reference: http://paos.colorado.edu/research/wavelets/bams_79_01_0061.pdf
Peter

Clyde Spencer

Peter,
NOAA, NASA, and other organizations use 30-year averages for their baselines, although they use different 30-year intervals. Go to their websites.

Jim Gorman
April 23, 2017 at 6:44 am
NS: “None of this means much unless you come to terms with the fact that they are averaging anomalies. Fluctuations about a mean are much more likely to be approx normal than the quantities themselves.”
”If that is the case, why all the adjustments to past temperature readings? The absolute values of the anomalies shouldn’t change if you are right.”

According to a February 9, 2015 Climate Etc. blog titled Berkeley Earth: raw versus adjusted temperature data, “The impact of adjustments on the global record are scientifically inconsequential.”
Adjustments to the data are one of the most frequent criticisms. Is there anything in the above post or comments that justifies data adjustments?

“Is there anything in the above post or comments that justifies data adjustments?”
No, nor should there be. It is a separate issue. You average temperature anomalies that are representative of regions. Sometimes the readings that you would like to use have some characteristic that makes you doubt that they are representative. So other data for that region is used. Arithmetically, that is most simply done as an adjustment.
The adjustments that are done make very little difference to the global average.

Clyde Spencer ==> Well done on this….. the whole idea of LOTI temps (Land and Ocean) is absurd — I have been calling it The Fruit Salad Metric (apples and oranges and bananas).

Any number created by averaging averages of spatially and temporally diverse measurements is an informational abomination – not a “more precise or accurate figure”. This is High School science and does not require any fancy statistical knowledge at all; my High School science teacher was able to explain this clearly to a class of 16-year-olds suffering from hormone-induced insanity in twenty minutes, with a couple of everyday examples. How today’s Climate Scientists can fool themselves into believing otherwise is a source of mystery and concern to me.

Clyde Spencer ==> My piece on Alaska is a good example of averages obscuring information.

Clyde Spencer

Kip,
When I was in the army, I was assigned to the Cold Regions Research and Engineering Laboratory in 1966. It didn’t snow in Vermont, where I was living, until Christmas Eve. We got 18″ overnight! The next year, we got snow in mid-October during deer hunting season.

Clyde Spencer ==> And the same sort of variability exists at all scales, within the envelope of the boundaries of the weather/climate system. I have an essay in progress on the averaging issue — mathematically. (Been in progress for more than a year…:-)

Clyde Spencer

Kip,
I look forward to reading your essay.

Clyde Spencer ==> Somewhere above you mention Occam’s razor….”Occam’s Razor dictates that we should adopt the simplest explanation possible for the apparent increase in world temperatures. ”
I’m pretty sure that you know that although that is the “popular science” version of Occam’s, it is not actually what he said nor what the concept really is.
Newton phrased it: “We are to admit no more causes of natural things than such as are both true and sufficient to explain their appearances.”
The misapplication of Occam’s results in such things as the false belief that increasing CO2 concentrations alone explain changing climate, justified by Occam’s as the “simplest explanation”.

Clyde Spencer

Kip,
Yes, I’m aware that the original statement in archaic English was a lot more convoluted. But, I think that the “popular” form captures the essence of the advice. What we are dealing with is a failure to define “simplest.” I would say that if Earth were experiencing warming after the last glaciation, and then started to cool, we should look for an explanation. However, continued warming is most easily explained by ‘business as usual.’
However, the real crux of the problem is not knowing what typical climate variation is like, and averaging averages further hides that information. We don’t know what was changing climate before humans, so we aren’t in a strong position to state to what degree we are impacting it today.

Dave Fair

Plus many!

Clyde ==> Occam’s calls for the fewest unsupported (not in evidence) prior assumptions.
Postulating that wind is caused by “Pegasus horses chasing Leprechauns who are in rebellion against the Fairy Queen” has too many unsupported priors.
The CO2 hypothesis fails Occam’s test because it relies on the unsupported, not in evidence (in fact, there is a great deal of contrary evidence) assumption that nothing else causes (or caused) the warming and cooling of the past and that the present is unique and that CO2 is the primary mover of climate. The number of assumptions — assumptions of absence of effect, past causes not present cause, etc — necessary to make the CO2 hypothesis “sufficient” is nearly uncountable. While it seems “simple”, it requires a huge number of unstated assumptions — and it is the necessity of those assumptions that result in the CO2 hypothesis’ failure to meet the requirements of Occam’s.
CO2 may yet be found to be one of the true causes of recent warming, none the less, but not in such a simplistic formulation as is currently promoted by the Consensus.
I think we are both on the same general page on this topic.

Clyde Spencer

Yes, I think we are. I will go where the believable evidence leads me.

Andrew Kerber

A discussion of random error vs. systematic error would be good too. Any measurement prior to the advent of digital thermometers is sure to have random error.

Clyde Spencer