Climate Science Double-Speak: Update

Update by Kip Hansen

 

Last week I wrote about UCAR/NCAR’s very interesting discussion on “What is the average global temperature now?”.

[Adding link to previous post mentioned.]

Part of that discussion revolved around the question of why current practitioners of Climate Science insist on using Temperature Anomalies — the difference between the current average temperature of a station, region, nation, or the globe and its long-term, 30-year base period, average — instead of simply showing us a graph of the Absolute Global Average Temperature in degrees Fahrenheit or Celsius or Kelvin.

Gavin Schmidt, Director of the NASA Goddard Institute for Space Studies (GISS) in New York, and co-founder of the award-winning climate science blog RealClimate, has come to our rescue to help us sort this out.

In a recent blog essay at RealClimate titled “Observations, Reanalyses and the Elusive Absolute Global Mean Temperature”, Dr. Schmidt gives us the real answer to this difficult question:

“But think about what happens when we try and estimate the absolute global mean temperature for, say, 2016. The climatology for 1981-2010 is 287.4±0.5K, and the anomaly for 2016 is (from GISTEMP w.r.t. that baseline) 0.56±0.05ºC. So our estimate for the absolute value is (using the first rule shown above) is 287.96±0.502K, and then using the second [the first and second rules have to do with estimating the uncertainties – see Gavin’s post], that reduces to 288.0±0.5K [2016]. The same approach for 2015 gives 287.8±0.5K, and for 2014 it is 287.7±0.5K. All of which appear to be the same within the uncertainty. Thus we lose the ability to judge which year was the warmest if we only look at the absolute numbers.”
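
The arithmetic in that quote is easy to check: treating the two uncertainties as independent and adding them in quadrature reproduces Gavin’s figures. A minimal sketch in Python, with the numbers taken from the quote (the final rounding is his “second rule”):

    import math

    climatology, u_clim = 287.4, 0.5    # 1981-2010 baseline and its uncertainty, K
    anomaly, u_anom = 0.56, 0.05        # GISTEMP 2016 anomaly w.r.t. that baseline

    absolute = climatology + anomaly                # 287.96 K
    u_abs = math.sqrt(u_clim**2 + u_anom**2)        # quadrature sum: ~0.502 K

    print(f"{absolute:.2f} ± {u_abs:.3f} K")              # 287.96 ± 0.502 K
    print(f"{round(absolute, 1)} ± {round(u_abs, 1)} K")  # 288.0 ± 0.5 K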

You see, as Dr. Schmidt carefully explains for us non-climate-scientists, if they use Absolute Temperatures the recent years are all the same — no way to say this year is the warmest ever — and, of course, that just won’t do — not in “RealClimate Science”.

# # # # #

Author’s Comment Policy:

Same as always — and again, this is intended just as it sounds — a little tongue-in-cheek but serious as to the point being made.

Readers not sure why I make this point might read my more general earlier post:  What Are They Really Counting?

# # # # #

 



289 Comments
Sweet Old Bob
August 19, 2017 3:36 pm

What a tangled web we weave…

wayne Job
Reply to  Sweet Old Bob
August 20, 2017 3:36 am

Odd is it not that some fifty years ago the accepted standard for the world was 14.7C 1313 Mb.
I just converted Mr Schmidt’s Kelvin that he calculates as the average 287.8K = 14.650C so in the last fifty years there has been virtually no change. I want my warming, it is as cold as a witches tit where I live.

Reply to  wayne Job
August 20, 2017 4:44 am

It is not odd.
It is an embarrassment.

“Gavin Schmidt, Director of the NASA Goddard Institute for Space Studies (GISS) in New York, and co-founder of the award winning climate science blog RealClimate, has come to our rescue to help us sort this out.
In a recent blog essay at RealClimate titled “Observations, Reanalyses and the Elusive Absolute Global Mean Temperature”, Dr. Schmidt gives us the real answer to this difficult question:”

None of the titles Schmidt claims can disguise the fact that Gavin Schmidt is an elitist who believes himself so superior that he will not meet others as equals.
It is a lack of quality that Schmidt proclaims loudly and displays smugly when facing scientists; one can imagine how far superior he considers himself above normal people.
Further proof of Schmidt’s total lack of honest, forthright science is Gavin’s latest snake-oil sales pitch, the “climate science double-speak” discussed above.

“wayne Job August 20, 2017 at 3:36 am
Odd is it not that some fifty years ago the accepted standard for the world was 14.7C 1313 Mb.
I just converted Mr Schmidt’s Kelvin that he calculates as the average 287.8K = 14.650C so in the last fifty years there has been virtually no change. I want my warming, it is as cold as a witches tit where I live.”

Wayne Job demonstrates superlatively that no matter how Gavin and his obedient goons adjust temperatures, they are unable to hide current temperatures from historical or common-sense comparisons.
Gavin should be permanently and directly assigned to Antarctica, where he can await his dreaded “global warming” as the Antarctic witch.

Sceptical lefty
Reply to  wayne Job
August 20, 2017 5:24 am

Sorry to be pedantic, but I believe that the pressure should have been 1013mb.
As an aside, it’s a real bitch when the inclusion of realistic error figures undermines one’s whole argument. This sort of subversive behaviour must be stopped!

PiperPaul
Reply to  wayne Job
August 20, 2017 7:02 am

14.7 is also air pressure in PSI at sea level! I’m 97% sure there’s some kind of conspiracy here…

Reply to  wayne Job
August 20, 2017 8:11 am

Good point about the errors. Gavin shows the usual consensus abhorrence of tracking error.
If the climatology is known only to ±0.5 K and the measured absolute temperature is known to ±0.5 K, then the uncertainty in the anomaly is their root-sum-square = ±0.7 K.
There’s no avoidance of uncertainty by taking anomalies. It’s just that consensus climate scientists, apparently Gavin included, don’t know what they’re doing.
The anomalies will inevitably have a greater uncertainty than either of the entering temperatures.
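
A quick Monte Carlo sketch of the propagation being described here, assuming the two ±0.5 K figures are independent 1-sigma errors (illustrative values only):

    import numpy as np

    rng = np.random.default_rng(42)
    n = 1_000_000
    absolute = 288.0 + rng.normal(0, 0.5, n)   # measured absolute temperature, ±0.5 K
    baseline = 287.4 + rng.normal(0, 0.5, n)   # climatology, ±0.5 K

    anomaly = absolute - baseline
    print(anomaly.std())   # ~0.707 K, i.e. sqrt(0.5**2 + 0.5**2), not ±0.5

Whether the two errors really are independent is of course the crux; if they share a common component, part of it cancels in the subtraction, which is the counter-argument made further down the thread.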

wayne Job
Reply to  wayne Job
August 21, 2017 5:41 am

Sorry, the Mb should read 1013. I do know that the temp was right; as an old flight engineer, those were the standard figures for engine and take-off performance.

Reply to  Sweet Old Bob
August 20, 2017 7:30 am

…when we practice to receive – grants, lots and lots of taxpayer funded grants!

Bill Hanson
August 19, 2017 3:38 pm

Stunning.

Donald Kasper
August 19, 2017 3:46 pm

We live in a world of absolute temperature numbers, not long term averages. Averages have no social meaning.

NW sage
Reply to  Donald Kasper
August 19, 2017 4:22 pm

Averages are a statistical method of trying to detect meaning when there is none.

Reply to  NW sage
August 20, 2017 8:31 am

Climatology is about averages. To know, for example, the 30-year average temperature at a given location is useful for some purposes. Climatologists erred when they began to try to predict these averages without identifying the statistical populations underlying their models, for to predict without identifying this population is impossible.

george e. smith
Reply to  NW sage
August 20, 2017 7:26 pm

NOTHING ever happens twice; something else happens instead. So any observation creates a data set with one element: the observation itself.
And the average value of a data set containing a single element is ALWAYS the value of that one element. So stick with the observed values; they are automatically the correct numbers to use.
G

Reply to  NW sage
August 29, 2017 12:30 pm

Gavin should learn a little math – specifically significant digits. If the climatology is stated to a precision of 0.1, then the anomaly MAY NOT BE calculated to a precision greater than 0.1 degree. Absolute or anomaly – both ought to show that the temperatures are the same.
I always wonder: if the Alarmists’ case is so strong, then why do they need to lie?
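
The significant-digits rule being invoked can be stated in one line: a sum or difference keeps the decimal places of its least precise term. A sketch (whether that rule should override the formal uncertainty propagation in Gavin’s post is exactly what is in dispute here):

    climatology = 287.4   # stated to 1 decimal place
    anomaly = 0.56        # stated to 2 decimal places
    absolute = climatology + anomaly
    print(round(absolute, 1))   # 288.0 -- reported to 1 decimal, like the least precise input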

Santa Baby
Reply to  Donald Kasper
August 19, 2017 11:19 pm

In postmodernism nothing is truth. Except postmodern consensus policy based science?

Auto
Reply to  Santa Baby
August 20, 2017 2:51 pm

Santa
“postmodern consensus policy based science” is the revealed and frighteningly enforceable truth.
Disagree and – no tenure.
Out on your ear.
Never mind scientific method.
Sad that science has descended into a belief system, isn’t it??
Auto

Bill Powers
August 19, 2017 4:01 pm

On a somewhat different tack, check your local TV channel’s weather meteorologists. I detected a pattern in the markets I have lived in: when the temperature is above the average over time, they almost always say that the “temperature was above NORMAL today,” but when it is below, they say that the “temperature was below the AVERAGE” for this date.
Subliminally we receive a bad-news message when the temperature is not “normal,” but it comes across as rather non-newsworthy to be innocuously below an average. Do they teach them this in meteorology courses?
CAGW Hidden Persuaders? Check it out. Maybe it’s just my imagination.

Michael Smith
Reply to  Kip Hansen
August 19, 2017 9:27 pm

In parts of Australia I have heard TV weather persons say that monthly rainfall was “less than what we should have received” as if it were some sort of entitlement rather than just a calculated average of widely fluctuating numbers. I grimace when I hear it.

Crispin in Waterloo but really in Bishkek
Reply to  Bill Powers
August 19, 2017 8:30 pm

What I hear is a continuous reference to the ‘average’ temperature with no bounds as to what the range of ‘average’ is.
It is not nearly enough to say ‘average’ temperature for today is 25 C and not mention that the thirty years which contributed to that number had a range of 19-31. The CBC will happily say the temperature today is 2 degrees ‘above average’ but not say that it is well within the normal range experienced over the calibration period.
The use of an ‘anomaly’ number hides reality by pretending there is a ‘norm’ that ‘ought to be experienced’ were it not for the ‘influence’ of human activities.
All this is quite separate from the ridiculous precision claimed for Gavin’s numbers which are marketed to the public as ‘real’. These numbers are from measurements and the error propagation is not being done and reported properly.

crackers345
Reply to  Crispin in Waterloo but really in Bishkek
August 19, 2017 9:00 pm

crispin, the baseline is not the “norm.” it’s just an arbitrary choice to compare temperatures against. it can be changed at will. it hides nothing

george e. smith
Reply to  Crispin in Waterloo but really in Bishkek
August 20, 2017 7:28 pm

Well nuts! The observed value IS the norm; it can never be anything else.
G

Patrick MJD
Reply to  Bill Powers
August 19, 2017 11:03 pm

No, not your imagination. It’s to scare people, i.e., the warm/cold is abnormal (somehow) when it is perfectly normal. I am seeing this in Australian weather broadcasts more and more now.

tom s
Reply to  Bill Powers
August 20, 2017 9:52 am

I am a meteorologist…30 yrs now. I cannot stand TV weather. I never watch it anymore, as I do all my own forecasting. It’s catered to 7-yr-olds. It’s painful to watch. I need not listen to any of these dopes. No, I am not a TV weatherman.

AGW is not Science
Reply to  Bill Powers
August 21, 2017 12:31 pm

I actually haven’t taken notice of the differences between how “above” and “below” average temps are referenced, but I have always abhorred the (frequent, and seemingly prevailing) use of the word “normal” in that respect.
As I like to say, “There IS no ‘normal’ temperature – it is whatever it is.” What they are calling “normal” is an average temperature over a (fairly arbitrarily selected) 30-year period. (And at one point they weren’t moving the reference period forward as they were supposed to, because they knew doing so would raise the “average” temps and thereby shrink the “anomalies,” undermining – they felt – the “belief” in man-made climate catastrophe.)
I object to the word “anomaly” as well, because it once again suggests that there is something “abnormal” about any temperature that is higher or lower than a 30-year average. There IS NOTHING “ANOMALOUS” about a temperature that is not equal to ANY “average” of prior temperatures, which itself is nothing more than a midpoint of extremes. “Anomalies” are complete BS.
Great, revealing OP.

JohnWho
August 19, 2017 4:01 pm

Wait, does that mean all the years are the “hottest ever” or none of them?
I note that Gavin states with certainty that it is uncertain and it is somewhat surprising that he does so.

Louis
Reply to  Kip Hansen
August 19, 2017 6:51 pm

If absolute temperatures carry uncertainties, why don’t anomalies? It seems to me that anomalies are usually less than the uncertainty and therefore are virtually equivalent to zero. So why are they allowed to use anomalies without revealing their corresponding uncertainties?

richard verney
Reply to  Kip Hansen
August 20, 2017 12:51 am

If absolute temperatures carry uncertainties, why don’t anomalies?

They do. Gavin states:

The climatology for 1981-2010 is 287.4±0.5K, and the anomaly for 2016 is (from GISTEMP w.r.t. that baseline) 0.56±0.05ºC.

So he suggests that the error bounds of the anomalies are very small, only +/- 0.05 °C. Whether one considers that small error bound reasonable is a different matter.

Stephen Richards
Reply to  Kip Hansen
August 20, 2017 1:18 am

I wrote to Realclimate many years ago about this stupidity. I got back the usual bile. One and only time I looked at that site.

Reply to  Kip Hansen
August 20, 2017 1:53 am

Sorry folks, but probably a dumb question from an ill educated oaf.
Being that Stevenson screens with thermometers were probably still being used in 1981, and for some time after, with, presumably, a conventional thermometer, surely observations of the temperature couldn’t possibly be accurate to 0.5K, i.e. 287.4±0.5K.
Nor do I believe it credible that every Stevenson screen was well maintained, and we know about the siting controversy. And I suspect not all were properly monitored, with the office tea boy being sent out into the snow to take the measurements, myopic technicians wiping rain off their specs, or the days when someone forgets, and just has a guess.
And I don’t suppose for a moment every Stevenson screen, at every location, was checked once every hour, possibly four times in 24 hours, or perhaps 8 times, in which case there are numerous periods when temperatures can spike (up or down) before declining or rising.
It therefore doesn’t surprise me one bit that with continual electronic monitoring we are seeing ‘hottest temperatures evah’ simply because they were missed in the past.
Sorry, a bit of a waffle.

richard verney
Reply to  Kip Hansen
August 20, 2017 2:33 am

I wrote to Realclimate many years ago about this stupidity. I got back the usual bile. One and only time I looked at that site.

I do from time to time look at the site, but I understand that comments are often censored or dismissed without proper explanation. I have posted a comment (awaiting moderation) inquiring about the time series data set and what the anomaly really represents. It will be interesting to see whether it gets posted and answered.

I must confess that I am having difficulty in understanding what this anomaly truly represents, given that the sample set is constantly changing over time.
If the sample set were to remain true and the same throughout the time series, then it would be possible to have an anomaly across that data set, but that is not what is or has happened with the time series land based thermometer data set.
The sample set of data used in say 1880 is not the same sample set used in 1900 which in turn is not the same sample set used in 1920, which in turn is not the same sample set used in 1940, which in turn is not the same sample set used in 1960, which in turn is not the same sample set used in 1980, which in turn is not the same sample set used in 2000, which in turn is not the same sample set used in 2016.
You mention the climatology reference of 1981 to 2010 against which the anomaly is assessed; however, the data source that constitutes the sample set for the period 1981 to 2010 is not the same sample set used to ascertain the 1880 or 1920 or 1940 ‘data’. We do not know whether any calculated anomaly is anything more than a variation in the sample set, as opposed to a true and real variation from that set.
When the sample set is constantly changing over time, any comparison becomes meaningless. For example, if I wanted to assess whether the average height of Americans has changed over time, I cannot ascertain this by using the statistics of 200 American men measured in 1920 and finding the average, then using the statistics of 200 Finnish men who speak English measured in 1940 and finding the average, then using the statistics of 100 American women and 100 Spanish men who speak English as measured in 1960, etc., etc.
It is not even as if we can claim that the sample set is representative, since we all know that there is all but no data for the Southern Hemisphere going back to say 1880 or 1900. In fact, there are relatively few stations that have continuous records going back 60 years, still less about 140 years. Maybe it is possible to do something with the Northern Hemisphere, particularly the United States, which is well sampled and possesses historic data, but outside that, I do not see how any meaningful comparisons can be made.
Your further thoughts would be welcome.

Reply to  Kip Hansen
August 20, 2017 5:18 am

“HotScot August 20, 2017 at 1:53 am
Sorry folks, but probably a dumb question from an ill educated oaf.
Being that Stevenson screens with thermometers were probably still being used in 1981, and for some time after, with, presumably, a conventional thermometer, surely observations of the temperature couldn’t possibly be accurate to 0.5K, i.e. 287.4±0.5K.
Nor do I believe it credible that every Stevenson screen was well maintained, and we know about the siting controversy. And I suspect not all were properly monitored, with the office tea boy being sent out into the snow to take the measurements, myopic technicians wiping rain off their specs, or the days when someone forgets, and just has a guess.
And I don’t suppose for a moment every Stevenson screen, at every location, was checked once every hour, possibly four times in 24 hours, or perhaps 8 times, in which case there are numerous periods when temperatures can spike (up or down) before declining or rising.
It therefore doesn’t surprise me one bit that with continual electronic monitoring we are seeing ‘hottest temperatures evah’ simply because they were missed in the past.
Sorry, a bit of a waffle.”

No apologies necessary. Nor is your question unreasonable and it is certainly not “dumb”; except to CAGW alarmists hiding the truth.
Everyone should read USA temperature station maintenance staff writings!

What’s in that MMTS Beehive Anyway?
– By Michael McAllister OPL, NWS Jacksonville, FL,
If you’re not involved with cleaning a Maximum/Minimum Temperature Sensor (MMTS) sensor unit, you probably have not seen inside it. The white louvered “beehive” contains a thermistor in its center with two white wires. The wires connect it to the plug on the base of the unit. It’s really a very basic instrument. So what else is there to be discovered in the disassembly of the unit?
I cannot vouch for the rest of the country, but here in northeast Florida and southeast Georgia, we regularly find various critters making their home inside the beehive. At the Jacksonville, FL, NWS office, we usually replace the beehive on our annual visits. After getting the dirty beehive back to the office, and before carefully taking it apart for cleaning, we leave it in a secure outside area for a day to let any “residents” inside vacate, then we dunk it in a bucket of water to flush out any reluctant squatters…”

N.B.:
At no point do the maintenance or NOAA staff ever conduct side by side measurements to determine before/after impacts to data.
Stations are moved,
sensor housings are replaced,
sensors are replaced and even “upgraded”,
data transmission lines and connections are replaced, lengthened, shortened, crimped, bent, etc.,
data handling methods and code are changed,
etc.
None of these potential “temperature impacts” is ever quantified, verified, or introduced into Gavin’s mystical error-bounds theology.

Latitude
August 19, 2017 4:02 pm

why current practitioners of Climate Science insist on using Temperature Anomalies….
…it’s easier to hide their cheating

Reply to  Latitude
August 19, 2017 7:10 pm

Also, it becomes obvious that the amounts of difference they are screaming about are below the limits of detection to a person without instrumentation.

AGW is not Science
Reply to  Latitude
August 21, 2017 12:40 pm

BINGO!

Tom in Florida
August 19, 2017 4:08 pm

“Thus we lose the ability to judge which year was the warmest if we only look at the absolute numbers.”
And of course, you lose the ability to scare people into parting with their money.
Snake Oil Salesman: The phrase conjures up images of seedy profiteers trying to exploit an unsuspecting public by selling it fake cures.

August 19, 2017 4:11 pm

“Thus we lose the ability to judge which year was the warmest if we only look at the absolute numbers.”

So…in other words, if the actual temperatures won’t make it “warmest year ever!”, we’ll use something else to make it the “swarmiest year ever!”.
(http://www.urbandictionary.com/define.php?term=Swarmy)

TonyL
August 19, 2017 4:13 pm

The proper use of anomalies is well known and the reasons are sound. I would have thought that the use of anomalies would be entirely uncontroversial to the fairly astute readership at WUWT.
This appears to be attempting to make an issue where there is none.
It’s a Nothingburger.
Fake News.

Greg
Reply to  TonyL
August 19, 2017 4:30 pm

Agreed.

Greg
Reply to  Greg
August 19, 2017 4:35 pm

“The proper use of anomalies is well known and the reasons are sound. ”
Agreed.

— a little tongue-in-cheek but serious as to the point being made.

So what is the serious point being made? That you don’t understand why anomalies are used?

Latitude
Reply to  Greg
August 19, 2017 4:47 pm

” All of which appear to be the same within the uncertainty”

Greg
Reply to  Greg
August 19, 2017 4:50 pm

Gav would do better to try to explain why he is averaging (i.e. adding) temperatures of land and sea, which are totally different physical media and thus not additive:
https://climategrog.wordpress.com/category/bad-methods/

seaice1
Reply to  Greg
August 19, 2017 5:24 pm

“So what is the serious point being made? That you don’t understand why anomalies are used?”
That appears to be the case. I suggest anyone who finds this amusing go and read the article at realclimate with an open mind and you may then understand why anomalies are used. Ho ho. As if that will happen! We can all share in the joke.

bobl
Reply to  Greg
August 19, 2017 5:35 pm

Actually, the whole of climate science would do well to explain why they use the unreliable, almost nonphysical concept of temperature to do anything useful, since the actual physical parameter is energy. Temperatures represent vastly different energies depending on the phase of matter and the medium being measured: for example, between a dry day and a humid day, between smog and clear air, between ozone and oxygen. The assumption of constant relative humidity alone makes the whole thing a pseudoscience.

KTM
Reply to  Greg
August 19, 2017 8:12 pm

Bobl it is so they can take a high energy maximum daily temperature and directly add it to a low energy minimum temperature, then divide that value in half as if they are both equivalent to arrive at an average temperature without proper weighting.
When is the last time you heard a Warmist talking about maximum temperatures? It’s taboo to discuss those in polite society.

blcjr
Editor
Reply to  Greg
August 20, 2017 5:11 am

In terms of statistics, the point is valid. To compare a “spot” temperature against an “average” (like a 30 year norm) ignores the uncertainty in the “average.” This is similar to the difference between a “confidence interval” and a “prediction interval” in regression analysis. The latter is much greater than the former. In the first case one is trying to predict the “average.” In the second case one is trying to predict a specific (“spot” in the jargon of stock prices) observation.
Implicitly, an anomaly is trying to measure changes in the average temperature, not changes in the actual temperature at the time the measurement is taken. If the anomaly in June of this year is higher than the anomaly in June of last year, that does not mean that the June temperature this year was necessarily higher than the June temperature last year. It means that there is some probability that the average temperature for June has increased, relative to the (usually) 30-year norm. But in absolute terms that does not mean we are certain that June this year was warmer than June last year.
Anomalies are okay, if understood and presented for what they are: a means of tracking changes in average temperature. But that is not how they are used by the warmistas. The ideologues use them to make claims about “warmest month ever,” and that is statistical malpractice.
Basil
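
Basil’s confidence-interval vs prediction-interval distinction can be made concrete with an ordinary least-squares fit; all numbers below are invented purely for illustration:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    x = np.arange(30, dtype=float)                  # 30 "years"
    y = 14.0 + 0.01 * x + rng.normal(0, 0.3, 30)    # slow trend plus weather noise

    n = x.size
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (intercept + slope * x)
    s = np.sqrt(resid @ resid / (n - 2))            # residual standard error
    Sxx = ((x - x.mean()) ** 2).sum()

    x0 = 30.0                                       # the next "year"
    t = stats.t.ppf(0.975, n - 2)
    se_ci = s * np.sqrt(1/n + (x0 - x.mean())**2 / Sxx)       # for the *mean*
    se_pi = s * np.sqrt(1 + 1/n + (x0 - x.mean())**2 / Sxx)   # for one *observation*

    print(f"95% CI half-width: {t * se_ci:.2f}")    # narrow: where the average lies
    print(f"95% PI half-width: {t * se_pi:.2f}")    # much wider: where a single year may land

The prediction interval is always wider, which is Basil’s point: anomaly statistics constrain the average, not any individual year.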

Reply to  Greg
August 20, 2017 7:32 am

blcjr: [anomalies are] “a means of tracking changes in average temperature”. This is exactly what the CAGW crowd quotes. You are feeding their assumption. I know you are aware of the difference, but the normal person is not; they simply read your text and say, “Oh, the normal temperature is going up or down.”
I usually try to explain the anomalies as a differential, that is, an infinitely small section of a line with the magnitude and direction of the change. The width of the change is no wider than a dot on the graph. This seems to make more sense to most people.

rd50
Reply to  TonyL
August 19, 2017 4:31 pm

Give us a link.

Editor
Reply to  rd50
August 21, 2017 12:41 pm

rd50 ==> Sorry — who? give you a link to what?

HAS
Reply to  TonyL
August 19, 2017 4:41 pm

Actually it isn’t uncontroversial. One problem does lie with the uncertainty and its distribution. Another with working with linear transformations of variables in non-linear systems.

Reply to  TonyL
August 19, 2017 4:53 pm

TonyL
It gets better-
“[If we knew the absolute truth, we would use that instead of any estimates. So, your question seems a little difficult to answer in the real world. How do you know what the error on anything is if this is what you require? In reality, we model the errors – most usually these days with some kind of monte carlo simulation that takes into account all known sources of uncertainty. But there is always the possibility of unknown sources of error, but methods for accounting for those are somewhat unclear. The best paper on these issues is Morice et al (2012) and references therein. The Berkeley Earth discussion on this is also useful. – gavin]” (Dec 23, 2014 same thread)
If we KNEW the truth (but we don’t) we’d use that. So we model the KNOWN errors, but we have no idea if we’ve got all of the errors at all, and how we account for the unknown errors isn’t clear.
BUT NOAA said “Average surface temperatures in 2016, according to the National Oceanic and Atmospheric Administration, were 0.07 degrees Fahrenheit warmer than 2015 and featured eight successive months (January through August) that were individually the warmest since the agency’s records began in 1880.”
Not even a HINT that it’s an “estimate”, or that it’s not the absolute truth, or that the margin of error…+/- 0.5K is WAYYYY bigger than the 0.07 F ESTIMATE.
Perhaps this is why the “fairly astute” readership at WUWT has never viewed the use of “anomalies” in a positive manner or “absolutely” agreed with the idea that they are even a close approximation to Earth’s actual temperature.


Robert of Ottawa
Reply to  Aphan
August 19, 2017 5:32 pm

Yes indeed, 0.07 +/- 0.5 doesn’t appear to be very significant, does it 🙂

jorgekafkazar
Reply to  TonyL
August 19, 2017 5:14 pm

Just think of it as a statistical rug under which to sweep tangled web weaving.

Reply to  jorgekafkazar
August 19, 2017 5:28 pm

jorgekafkazar-
Right!
And yet they say “the Earth’s temperature is increasing” instead of “the Earth’s anomalies are increasingly warmer,” etc. Al Gore says “the Earth has a temperature” instead of “the Earth has a higher anomaly.” And since Gav and the boys ALL ADMIT that it’s virtually impossible to know “exactly” what Earth’s actual global average temperature is, that Earth is not adequately covered with thermometers, and that the thermometers we DO have are not all properly sited, maintained, and accurate… why in the crap do we let them get away with stating that “average surface temperatures were 0.07 F warmer” than a prior year? Why would any serious “scientist” with any integrity use that kind of language when he’s really talking about something else??
Oh yeah…..rug weaving. 🙂

Sheri
Reply to  jorgekafkazar
August 20, 2017 9:04 am

Aphan: that “average surface temperatures were 0.07 F warmer” than a prior year
If only they did actually say that. They don’t even say that. It’s just “hottest year ever” with no quantification, usually.

Clyde Spencer
Reply to  TonyL
August 19, 2017 6:45 pm

TonyL,
Yes, at least some of us are aware of the ‘proper’ use of anomalies. At issue is whether anomalies are being used properly. Gavin even admits that frequently they are not: “This means we need to very careful in combining these two analyses – and unfortunately, historically, we haven’t been and that is a continuing problem.”

TonyL
Reply to  Clyde Spencer
August 19, 2017 7:09 pm

At issue is whether anomalies are being used properly.

Very True.
A closely related issue:
The ongoing story of the use, misuse, and abuse of statistics in ClimateScience! is the longest running soap opera in modern science.
The saga continues.

Rick C PE
Reply to  TonyL
August 19, 2017 8:49 pm

TonyL: I disagree that the use of anomalies is well known.

Anomaly
NOUN
Something that deviates from what is standard, normal, or expected:
“there are a number of anomalies in the present system”
Synonyms: oddity, peculiarity, abnormality, irregularity, inconsistency

My objection is that the reporting of data as anomalies, like reporting averages without the variance, standard deviation or other measure of dispersion, simply reduces the value of the information conveyed. It eliminates the context. It is not a common practice in statistical analysis in engineering or most scientific fields. None of my statistics textbooks even mentions the term. It simply reduces a data set to the noise component.
While it seems to be common in climate science, the use of the term anomaly implies abnormal, irregular or inconsistent results. But, as has been extensively argued here and elsewhere, variation in the temperature of our planet seems to be entirely normal.
That said, I do get that when analyzing temperature records it is useful to look at temperatures for individual stations as deviations from some long term average. E.g. if the average annual temp. in Minneapolis has gone from 10 C (long term average) to 11 C and the temp. in Miami has gone from 20 to 21 C, we can say both have warmed by 1 C.
Of course, if one averages all the station anomalies and all the station baseline temperatures, the sum of the two averages would be identical to the average of all the actual measured temperatures.
But it is another thing to only report the average of the ‘anomalies’ over hundreds or thousands of stations without including any information about the dispersion of the input data. Presenting charts showing only average annual anomalies by year for 50, 120, 1000 years is pretty meaningless.
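
Rick’s identity is easy to verify numerically; a short check with made-up station data:

    import numpy as np

    baselines = np.array([10.0, 20.0, 5.0])    # per-station long-term averages
    actuals = np.array([11.0, 21.0, 5.5])      # this year's measured temperatures
    anomalies = actuals - baselines

    # mean anomaly + mean baseline equals the mean of the actual temperatures
    print(anomalies.mean() + baselines.mean(), actuals.mean())   # 12.5 12.5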

Reply to  TonyL
August 20, 2017 6:50 am

“TonyL August 19, 2017 at 4:13 pm
The proper use of anomalies is well known and the reasons are sound. I would have thought that the use of anomalies would be entirely uncontroversial to the fairly astute readership at WUWT.
This appears to be attempting to make an issue where there is none.
It’s a Nothingburger.
Fake News.”

The “Fake news and nothingburger” starts right with Gavin: his mouth, his writing, and his foul treatment of others.

“TonyL August 19, 2017 at 4:13 pm
The proper use of anomalies is well known and the reasons are sound.”

What absurd usage of “well known” and “the reasons are sound”, TonyL.
Just another fake consensus Argumentum ad Populum fallacy.
Use of anomalies can be proper under controlled conditions for specific measurements,
• When all data is kept and presented unsullied,
• When equipment is fully certified and verified,
• When measurements are parallel recorded before and after installation and impacts noted,
• When temperature equipment is properly installed everywhere,
• When temperature equipment installation represents all Latitudes, Longitudes, elevations, rural, suburban and urban environments,
• When temperatures and only temperatures are represented, not some edited version of data, data fill-in, smudged or other data imitation method is used.
Isn’t it astonishing that “adjustments”, substitutions, deletions, and data creation based on distant stations introduce obvious error bounds into temperature records, yet 0.5K is the alleged total error range?
Error bounds are not properly tracked, determined, applied or fully represented in end charts.
Gavin and his religious pals fail to track, qualify or quantify error rates, making the official NOAA approach anti-science, anti-mathematical and anti-anomaly. NOAA far prefers displaying “snake oil”, derision, elitism, egotism and utter disdain for America and Americans.
“Double speak” is far too nice a description for Gavin’s and NOAA’s misrepresented temperatures. Climastrologists’ abuse of measurements, data keeping, error bounds and data presentation would bring criminal charges and civil suits if used in any industry producing real goods Americans depend upon.

NW sage
August 19, 2017 4:19 pm

Kip – good post!
The REAL answer of course is what’s normally called ‘success testing’. Using this philosophy, the test protocol – in this case the way the raw data is treated/analyzed – is chosen in order to produce the kind of result desired: NOT an analysis to find out whether the temperatures are warmer, colder, or the same, but one that produces results showing a warming trend.
The usual way of detecting this success-testing phenomenon is to read the protocol and see just how much scientific technobabble is there (think of the Stargate TV series). The more technobabble, the less credible the result.

Bill Illis
Reply to  NW sage
August 19, 2017 5:18 pm

This is what is really going on. Station selection, data selection, and methodology selection allow the gate-keepers of the temperature record and the global warming religion the ability to produce the number they want.
Think of it as someone standing over the shoulder of a data analyst in the basement of the NCDC each month saying “Well, what happens if we pull out the 5 Africa stations on the eastern side? How about we just add in that station with all the warming errors? Let’s adjust the buoys up and pretend it is because of ship engine intakes that nobody can/will check? Why don’t we bump up the time-of-observation bias adjustment and make a new adjustment for the MMTS sensors? Show me all the stations that have the highest warming. Let’s just drop those 1500 stations that show no warming. The South American stations are obviously too low by 1.0C. Just change them and call it an error.
We’ll call it version 4.4.3.2.”

David A
Reply to  Bill Illis
August 19, 2017 6:24 pm

…which explains why 50 percent of the data is often not used, made up, extrapolated.

Nick Stokes
August 19, 2017 4:23 pm

Gavin had an analogy. If you’re measuring a bunch of kids to see who’s the tallest, running a ruler head to foot, you can get a good answer. If you measure the height of their heads above sea level, there is a lot more uncertainty. So which would you do?

Latitude
Reply to  Kip Hansen
August 19, 2017 6:00 pm

elevation above sea level of the classroom floor….
…and then make adjustments for the weight of each child…..because they are making the floor sink

D. Cohen
Reply to  Nick Stokes
August 19, 2017 6:19 pm

To continue the analogy, what people want to know is ***not*** which kid is tallest, but rather which kid is highest above sea level, allowing for the possibility that the “sea level” — that is, the global absolute temperature — may be changing over time (day by day and year by year) in a way that is very difficult to measure accurately.

Greg
Reply to  D. Cohen
August 20, 2017 1:17 am

No, the best way is to measure their height using low orbit satellite range finding, whilst getting the kids to jump up and down on a trampoline and measure the reflection off the surface of the trampoline at the bottom of the movement. This is accurate to within +/- 1mm as has been established for sea level measurements.

Reply to  D. Cohen
August 20, 2017 12:14 pm

And yet actual absolute measurements are better than statistical output, which is pure fantasy. It’s not a temperature anomaly, it’s a statistical anomaly, which requires a “leap of faith” to accept as a temperature anomaly when talking GISS GAMTA.

Clyde Spencer
Reply to  Nick Stokes
August 19, 2017 6:53 pm

NS,
The primary uncertainty is introduced by adding in the elevation above sea level. Neither sea level nor the ground they are standing on is known with the same accuracy or precision as the distance between their feet and hair. Therein lies the problem with temperature anomalies. We aren’t measuring the anomalies directly (height) but obtaining them indirectly from an imperfectly known temperature baseline!

Nick Stokes
Reply to  Clyde Spencer
August 20, 2017 8:43 am

“Neither sea level nor the ground they are standing on is known with the same accuracy or precision as the distance between their feet and hair.”
Exactly. And that is the case here, because we are talking not about individual locations, but about the anomaly average vs the absolute average. And we can calculate the anomaly average much better, just as we can measure better top to toe.
The analogy has another useful feature. Although we are uncertain of the altitude, that uncertainty does not actually affect relative differences, although that isn’t obvious if you just write it as a±b. The uncertainty of the absolute average doesn’t affect our knowledge of one year vs another, say, because that component of error is the same for both. So if you unwisely say that 2016 was 14.7±1, and 2015 was 14.5±1 (numbers made up for this example), then you still know that 2016 was warmer than 2015. The reason is that you took the same number, 14.0±1 (abs normal), and added the anomalies of 0.7±0.1 and 0.5±0.1. The normal might have been 13 or 15, but 2016 will still be warmer than 2015.
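
A small simulation of the cancellation Nick is describing, using his made-up numbers: the absolute normal is uncertain, but the same draw of that normal enters both years, so it drops out of the 2016−2015 difference:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000
    normal = 14.0 + rng.normal(0, 1.0, n)     # uncertain absolute normal, ±1
    a2016 = 0.7 + rng.normal(0, 0.1, n)       # anomalies, ±0.1 each
    a2015 = 0.5 + rng.normal(0, 0.1, n)

    t2016, t2015 = normal + a2016, normal + a2015
    print(t2016.std(), t2015.std())           # ~1.0 each: absolutes are fuzzy
    print((t2016 - t2015).std())              # ~0.14: the difference is far tighter
    print(((t2016 - t2015) > 0).mean())       # ~0.92: 2016 warmer in most draws

Note the cancellation holds only for the error component that is genuinely shared; the independent per-year errors (the ±0.1 here) still add in quadrature, which is TheOtherBob’s objection below.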

TheOtherBobFromOttawa
Reply to  Nick Stokes
August 20, 2017 12:45 pm

You clearly have a different understanding of “error” than I do, Nick.
You wrote: “So if you unwisely say that 2016 was 14.7±1, and 2015 was 14.5±1 (numbers made up for this example), then you still know that 2016 was warmer than 2015.”
I would say that the “real value” of the 2016 temperature could be anywhere from 13.7 to 15.7 and “real value” of the 2015 temperature could be anywhere from 13.5 to 15.5. Since the temperature difference between 2015 & 2016 is well within the error range of both temperatures it’s impossible to know which year is warmer or cooler.
That’s what I remember from my first year Physics Prof, some 50 years ago. But maybe Physics has “evolved” since then. :))

TheOtherBobFromOttawa
Reply to  Kip Hansen
August 20, 2017 5:16 pm

Thanks Kip. Yes, my thoughts exactly. I didn’t want to repeat the point I made in my first post about adding the errors to get the anomaly error but you covered it most eloquently. Thanks for starting a very interesting discussion.

Nick Stokes
Reply to  Clyde Spencer
August 21, 2017 12:25 pm

Kip,
“if your ancestors are from Devon”
None from Devon, AFAIK. Lots from Wilts, Glos.

Nick Stokes
Reply to  Clyde Spencer
August 21, 2017 12:31 pm

“I would say that the “real value” of the 2016 temperature could be anywhere from 13.7 to 15.7 and “real value” of the 2015 temperature could be anywhere from 13.5 to 15.5”
But not independently. If 2016 was at 13.7 because the estimate of normal was wrong on the low side (around 13), then that estimate is common to 2015, so there is no way that it could be 15+.
There are many things that can’t be explained by what you learnt in first year physics.

TheOtherBobFromOttawa
Reply to  Nick Stokes
August 21, 2017 3:06 pm

I don’t know what point you’re making in your comment.
And there are many things that Gavin & Co. do that can’t be explained by anyone – at least in a way that makes sense to most people. :))

Reply to  Nick Stokes
August 19, 2017 7:09 pm

No problem if all 5 boys are standing on the same level platform … but WE know that the platform is not level !

Urederra
Reply to  Streetcred
August 20, 2017 6:59 am

One of the kids puts his hair in a bun.

P. Berberich
Reply to  Nick Stokes
August 20, 2017 12:45 am

There is another analogy. This morning my wife asks: what’s the outside temperature today? My answer is: the temperature anomaly is 0.5 K. If I add that she needs no new clothes, I will run into problems that day.

Reply to  P. Berberich
August 20, 2017 6:58 am

Nor will she nicely ask what the outside temperature is, again.
NOAA should reap equal amounts of derision for their abuse of anomalies.

Reply to  Nick Stokes
August 20, 2017 12:11 pm

What if 60% of the kids are not measured, Nick? Does Gavin just make it up?

commieBob
August 19, 2017 4:26 pm

Suppose that we have a data set: 511, 512, 513, 510, 512, 514, 512 and the accuracy is +/- 3. The average is 512. The anomalies are: -1, 0, +1, -2, 0, +2, 0 and the accuracy is still +/- 3.
I don’t understand how using anomalies lets us determine the maximum any differently than using the absolute values. There has to be some mathematical bogusness going on in CAGW land. I suspect they think that if you have enough data it averages out and gives you greater accuracy. I can tell you from bitter experience that it doesn’t always work that way.
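
commieBob’s point, that subtracting a constant baseline shifts the numbers but leaves each one’s ±3 untouched, can be shown directly:

    import numpy as np

    rng = np.random.default_rng(3)
    true_vals = np.array([511.0, 512, 513, 510, 512, 514, 512])
    measured = true_vals + rng.normal(0, 3, true_vals.size)   # each reading ±3

    anomalies = measured - 512.0                # subtract the baseline average
    print(measured - true_vals)                 # the errors in the measurements
    print(anomalies - (true_vals - 512.0))      # identical errors in the anomalies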

Pat Lane
Reply to  commieBob
August 19, 2017 5:24 pm

But if you ADD the uncertainties together, you get zero!
Here’s the appropriate “world’s best practice” algorithm:
1. Pick a mathematical operator (+, -, /, *, sin, cos, tan, sinh, Chebychev polynomial etc.)
2. Set uncertainty = 0
2a. Have press conference announcing climate is “worse than originally thought”, “science is settled” and “more funding required.”
3. Calculate uncertainty after applying operator to (homogenised) temperature records
4. Is uncertainty still zero?
5. No, try another operator.
6. go back to 3 or, better yet, 2a.

Pat Lane
Reply to  Pat Lane
August 19, 2017 5:37 pm

The sharp-eyed will note the above algorithm has no end. As climate projects are funded on a per-year basis, this ensures the climate scientist will receive infinite funding.

Reply to  commieBob
August 19, 2017 5:32 pm

Thank you Bob!
My math courses in Engineering and grad studies (stats, linear programming, economic modelling, and, surprising to me, the toughest of all, something called “Math Theory”) were 50 years ago. But the reasoning that somehow anomalies are more precise or have less uncertainty than the absolute values upon which they were based set off bells and whistles in my old noggin. I was very hesitant though to raise any question for fear of displaying my ig’nance.
Maybe both of us are wrong, but now I know I’m in good company. 🙂

Rolf
Reply to  George Daddis
August 19, 2017 11:03 pm

Me too !

Nick Stokes
Reply to  commieBob
August 20, 2017 8:57 am

“The average is 512. The anomalies are: -1, 0, +1, -2, 0 +2, 0”
But you don’t form the anomalies by subtracting a common average. You do it by subtracting the expected value for each site.
“how using anomalies lets us determine the maximum”
You don’t use anomalies to determine the maximum. You use them to determine the anomaly average. And you are interested in the average as representing a population mean, not just the numbers you sampled. The analogy figures here might be
521±3, 411±3, 598±3. Obviously it is an inhomogeneous population, and the average will depend far more on how you sample than how you measure. But if you can subtract out something that determines the big differences, then it can work.
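
Nick’s inhomogeneous-population point, sketched with his example figures: three “sites” with very different expected values all run 0.5 warm in the same year. The average of absolutes swings with whichever subset happens to report; the average of anomalies does not (illustrative numbers only):

    import numpy as np

    normals = np.array([521.0, 411.0, 598.0])   # per-site expected values
    this_year = normals + 0.5                   # every site is 0.5 above its normal

    for sample in ([0, 1], [1, 2], [0, 1, 2]):  # different subsets of sites reporting
        abs_avg = this_year[sample].mean()
        anom_avg = (this_year[sample] - normals[sample]).mean()
        print(sample, round(abs_avg, 2), anom_avg)
    # [0, 1] 466.5 0.5 / [1, 2] 505.0 0.5 / [0, 1, 2] 510.5 0.5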

commieBob
Reply to  Nick Stokes
August 20, 2017 6:35 pm

That’s what you say. Here’s what Dr. Schmidt said:

But think about what happens when we try and estimate the absolute global mean temperature for, say, 2016. The climatology for 1981-2010 is 287.4±0.5K, and the anomaly for 2016 is (from GISTEMP w.r.t. that baseline) 0.56±0.05ºC. So our estimate for the absolute value is (using the first rule shown above) is 287.96±0.502K, and then using the second [the first and second rules have to do with estimating the uncertainties – see Gavin’s post], that reduces to 288.0±0.5K [2016]. The same approach for 2015 gives 287.8±0.5K, and for 2014 it is 287.7±0.5K. All of which appear to be the same within the uncertainty. Thus we lose the ability to judge which year was the warmest if we only look at the absolute numbers.

My example is a simplified version of the above. If you think Dr. Schmidt erred, that’s between you and him.

ferdberple
Reply to  commieBob
August 20, 2017 11:10 am

the accuracy is still +/- 3.
=======
Of course it is. But what climate science does is to re-calculate the error statistically from the anomaly and come to the absurd conclusion that the error changed from 0.5 to 0.05. The nonsense is that averaging reduces the variance and gives the misleading impression that it provides a quick way to reduce error. And it does in very specific circumstances. Of which this is not one.

August 19, 2017 4:27 pm

Extra! EXTRA! Read all about it! Gavin Schmidt of NASA ADMITS that there has been NO statistically significant CHANGE IN EARTH’S ABSOLUTE TEMPERATURE in the last 30 years!!!

SMC
Reply to  Aphan
August 19, 2017 4:36 pm

I’m in denial. A climate scientist actually told the truth… kind’a… sort’a… maybe… in a convoluted way? I don’t believe it. 🙂

Reply to  SMC
August 19, 2017 5:33 pm

He told the truth, and then rationalized why that truth is completely unimportant to the actual “science” involved in climate science. Because we ALL know that science is about approximations, estimates, conjectures, ideology, variety, inclusiveness, personal interpretations, pizza parties, casual Fridays (or should I say “causal” Fridays….harharhar), unicorns, pink fuzzy bunny slippers, the flying spaghetti monster and The Wheel of Climate. And if you don’t like unicorns or pizza parties, you’re a hating-hate-hater-denier and should be put to death.
ISIS is more tolerant.

SMC
Reply to  SMC
August 19, 2017 9:44 pm

“Because we ALL know that science is about approximations, estimates, conjectures, ideology, variety, inclusiveness, personal interpretations, pizza parties, casual Fridays (or should I say “causal” Fridays….harharhar), unicorns, pink fuzzy bunny slippers, the flying spaghetti monster and The Wheel of Climate.”
What happened to the rainbows, fairy dust and hockey sticks?
“…hating-hate-hater-denier…”
You forgot lying, hypocritical, sexist, egotistical, homophobic, misogynist, deplorable bigot. :))

Reply to  SMC
August 19, 2017 10:33 pm

Thanks SMC….I knew I was forgetting something… 🙂

Reply to  Aphan
August 20, 2017 4:31 am

“NO statistically significant CHANGE IN EARTH’S ABSOLUTE TEMPERATURE in the last 30 years”
Earth’s absolute temperature has changed by roughly 4°C within every one of those last 30 years.
Surely that is statistically significant. 🙂

Cold in Wisconsin
August 19, 2017 4:44 pm

What is the sensitivity of the measuring device, and what are the significant figures? Can an average of thousands of measurements accurate to a tenth of a degree be more accurate than each individual measuring device? I am asking an honest question that someone here can answer accurately. We learned significant figures in chemistry, but wouldn’t they also apply to these examples? How accurate are land based temp records versus the satellite measuring devices? This has been a central question for me in all of this “warmest ever” hoopla, and I would appreciate a good explanation.

Reply to  Kip Hansen
August 19, 2017 6:12 pm

Kip,
To compound that, in the sixties I was taught that, at least in Engineering, there existed MANY decision rules about whether to round a “5” up or down if it was the last significant digit, and that those recording data often failed to specify which rule they used. We were instructed to allow for that.
I don’t think Wiley Post or Will Rogers gave two hoots about how to round up or down fractional temperatures at their airstrips in the ’20s or early ’30s.
Why modern “Climate Scientists” assume that those who recorded temperatures at airports or agricultural stations in 1930 were aware that those figures would eventually be used to direct the economies of the world is typical of the “history is now” generation.

Clyde Spencer
Reply to  Kip Hansen
August 19, 2017 7:00 pm

Kip,
The automated weather stations (ASOS) are STILL reading to the nearest degree F, and then converting to the nearest 0.1 deg C.

Walter Sobchak
Reply to  Kip Hansen
August 19, 2017 7:08 pm

Those numbers were not anywhere near that good. How often were thermometers calibrated? Were they read with verniers or magnifiers? What did they use to illuminate thermometers for night-time readings? Open flames? And don’t forget all of the issues that Anthony identified with his work on modern weather observation equipment.

Dr. S. Jeevananda Reddy
Reply to  Kip Hansen
August 19, 2017 9:58 pm

The temperature data was and is recorded to the first place of decimal. The adjustment is carried out as: 33.15 [0.01 to 0.05] as 33.1, 33.16 as 33.2, 33.25 [0.05 to 0.09] as 33.3. This is also followed in averaging.
Dr.S. Jeevananda Reddy

Clyde Spencer
Reply to  Kip Hansen
August 20, 2017 8:25 am
EE_Dan
Reply to  Kip Hansen
August 20, 2017 10:56 am

Interesting specification from the ASOS description:
http://www.nws.noaa.gov/asos/aum-toc.pdf
Temperature measurement: From -58F to +122F RMS error=0.9F, max error 1.8F.
“Once each minute the ACU calculates the 5-minute average ambient temperature and dew point temperature from the 1-minute average observations (provided at least 4 valid 1-minute averages are available). These 5-minute averages are rounded to the nearest degree Fahrenheit, converted to the nearest 0.1 degree Celsius, and reported once each minute as the 5-minute average ambient and dew point temperatures. All mid-point temperature values are rounded up (e.g., +3.5°F rounds up to +4.0°F; -3.5°F rounds up to –3.0°F; while -3.6°F rounds to -4.0°F).”
This is presumably adequate for most meteorological work. I’m not sure how we get to a point where we know the climate is warming when the change is within the error band of the instruments. Forgive me, I’m only a retired EE with 40+ years designing instrumentation systems (etc.).
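
The mid-point rule quoted above is “round half up” toward +infinity, which is not what Python’s default banker’s rounding does; a sketch of the documented ASOS behaviour:

    import math

    def asos_round(temp_f):
        # Nearest whole degree F, exact halves rounded up (toward +infinity)
        return math.floor(temp_f + 0.5)

    for t in (3.5, -3.5, -3.6):
        f = asos_round(t)
        c = round((f - 32) * 5 / 9, 1)          # then converted to nearest 0.1 C
        print(t, "->", f, "F =", c, "C")        # 3.5->4, -3.5->-3, -3.6->-4

That final conversion step is why a whole-degree-Fahrenheit reading shows up as 0.1 °C precision in the archived data, without any gain in actual accuracy.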

Reply to  Kip Hansen
August 20, 2017 12:17 pm

Can, Kip, but you can’t know “if” it is.

Greg
Reply to  Cold in Wisconsin
August 19, 2017 5:04 pm

If you have one thermometer with a 1-degree scale, you would attribute ±0.5 degrees to a measurement. If it is scientific equipment, it will be made to ensure it is at least as accurate as the scale.
There is a rounding error when you read the scale and there is the instrumental error.
If you have many readings on different days, the rounding errors will average out. If you have thousands of observation stations, the calibration errors of the individual thermometers will average out.
That is the logic of averages being more accurate than the basic uncertainty of one reading.
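
Greg’s averaging-out claim is easy to demonstrate for the well-behaved case, a sketch:

    import numpy as np

    rng = np.random.default_rng(7)
    true_temps = rng.uniform(10, 20, 10_000)   # many independent true readings
    recorded = np.round(true_temps)            # read to the nearest whole degree

    print(abs(recorded - true_temps).max())    # individual error: up to 0.5
    print(recorded.mean() - true_temps.mean()) # error of the mean: a few thousandths

The catch, raised by George and commieBob below, is that this works only when the rounding residues behave like independent random draws; a systematic reading rule or a shared bias does not average away.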

Reply to  Greg
August 19, 2017 6:34 pm

Accuracy of scale: if the thermometers from 1880 through the early 20th century read in whole-degree increments (which was “good enough” for their purposes), then how does one justify declaring this year the hottest year ever, by tenths of a degree?
Rounding errors will only “average out” if everyone recording temps used a flip of the coin (figuratively) to determine what to record. The reality is some may have used a decision rule to go to the next HIGHEST temp and some the LOWER. Then there’s the dilemma about what to do with “5 tenths”; there were “rules” about that too. You cannot assume the “logic of averages” unless we know how those rules of thumb were applied.

commieBob
Reply to  Greg
August 19, 2017 6:39 pm

Suppose that we have a sine wave of known frequency buried under twenty db of Gaussian noise. We can detect and reconstruct that signal even if our detector can only tell us if the signal plus noise is above or below zero volts (ie. it’s a comparator). By running the process for long enough we can get whatever accuracy we need. link
The problem is that Gaussian noise is a fiction. It’s physically impossible because it would have infinite bandwidth and therefore infinite power. Once the noise is non-Gaussian, our elegant experiment doesn’t work any more. It’s more difficult to extract signals from pink or red noise. link If we can’t accurately describe the noise, we can’t say anything about our accuracy.
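
commieBob’s comparator thought experiment reproduces in a few lines: a unit sine buried ~20 dB under Gaussian noise, detected only as a sign bit, re-emerges after phase-folded averaging (a sketch; the dead “link” references above are left as-is):

    import numpy as np

    rng = np.random.default_rng(0)
    cycles, per_cycle = 20_000, 100
    phase = np.tile(np.arange(per_cycle) / per_cycle, cycles)
    signal = np.sin(2 * np.pi * phase)                 # amplitude 1
    noise = rng.normal(0, 7.07, signal.size)           # ~20 dB above the signal
    detected = np.sign(signal + noise)                 # 1-bit comparator output

    folded = detected.reshape(cycles, per_cycle).mean(axis=0)   # average by phase
    ref = np.sin(2 * np.pi * np.arange(per_cycle) / per_cycle)
    print(round(np.corrcoef(folded, ref)[0, 1], 3))    # ~0.99: the sine shape is back

Swap the Gaussian noise for something strongly autocorrelated (red noise) and the recovery degrades, because the effective number of independent samples drops, which is the second half of his point.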

crackers345
Reply to  Greg
August 19, 2017 7:53 pm

kip, if there are n stations and if the error of the individual readings is s, the error of the average will be s/squareroot(n). small

tty
Reply to  Greg
August 20, 2017 12:59 am

“if there are n stations and if the error of the individual readings is s, the error of the average will be s/squareroot(n).”
Ah, “the Law of Large Numbers”. Somebody always drags that up. Sorry, but no: that only applies to independent, identically distributed random variables.
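
tty’s caveat in one simulation: with independent errors the s/sqrt(n) rule works as advertised, but add one error component shared by all stations (a common calibration or adjustment bias, assumed here at the same ±0.5) and the average stops improving:

    import numpy as np

    rng = np.random.default_rng(11)
    trials, stations = 2_000, 1_000

    indep = rng.normal(0, 0.5, (trials, stations))   # per-station random error
    shared = rng.normal(0, 0.5, (trials, 1))         # one bias common to every station

    print(indep.mean(axis=1).std())              # ~0.016 = 0.5/sqrt(1000): rule holds
    print((indep + shared).mean(axis=1).std())   # ~0.50: the shared bias never averages out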

Urederra
Reply to  Greg
August 20, 2017 7:31 am

Following the child-height example:
First case: if you take one child and measure his/her height 10 times, the average is more accurate.
Second case: if you have 10 children and you measure their height once per child, the average height is not more accurate than the individual accuracy.
The temperature in Minneapolis is different from the temperature in Miami. The Earth average temperature belongs to the second case. That is my understanding.
It does not matter, anyway, since the Earth is not in thermal equilibrium or even in thermodynamic equilibrium, and therefore the term average temperature is meaningless.

catweazle666
Reply to  Greg
August 20, 2017 6:14 pm

“the error of the average will be s/squareroot(n). small”

No it won’t.

Philo
Reply to  Cold in Wisconsin
August 19, 2017 5:59 pm

Cold (what else?) in Wisconsin: temperature is an intensive property, the speed of the moving/vibrating atoms and molecules. For climate purposes it is measured by a physical averaging process: the amount the temperature being measured changes the resistance of (usually now) some sort of calibrated resistor, which can be very precise (to hundredths of a degree) but only as accurate as its calibration over a specific range. Averaging temperatures is pretty meaningless. You can average the temperature of the water in a pot and the temperature of the couple of cubic feet of gas heating it and learn nothing. Measuring how the temperature of the water changes tells you something about the amount of energy released by the burning gas, but it’s a very crude calorimeter.
As that example suggests, the climate is driven by energy movements, not primarily by temperatures.
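[A toy illustration of the pot-and-gas example, with round-number masses and specific heats assumed for the sketch: the simple average of the two temperatures is meaningless, while the heat-capacity-weighted figure at least tracks the energy content:]

```python
# Illustrative numbers only: a pot of water vs the hot gas around it.
c_water, c_air = 4186.0, 1005.0      # specific heats, J/(kg*K)
m_water, m_air = 2.0, 0.07           # kg (0.07 kg ~ a couple of cubic feet of air)
T_water, T_air = 30.0, 600.0         # deg C

simple_avg = (T_water + T_air) / 2
# heat-capacity-weighted "temperature", i.e. what actually tracks the energy content
weighted = (m_water * c_water * T_water + m_air * c_air * T_air) / \
           (m_water * c_water + m_air * c_air)
print(simple_avg, weighted)          # -> 315.0 vs ~34.8
```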

Reply to  Philo
August 19, 2017 6:53 pm

I’m not a climate scientist (but I did see one on TV), but why aren’t those far more educated than I am pointing out Philo’s point, which should be obvious to anyone with a basic science education?

You can average the temperature of the water in a pot and the temperature of the couple of cubic feet of gas heating it and learn nothing.

In discussions with my academic son, I point out that I can take the temperature at the blue flame of a match stick and then the temperature of a comfortable bath tub, and the average of the two has no meaning.
The response of course is 97% of scientists say I’m deluded. (Argument from Authority).

Mick
Reply to  Philo
August 19, 2017 6:58 pm

I have the Environment Canada weather app on my phone. I noticed this summer they report what it “feels like” rather than the measured number. Or they use the inland numbers, which are a few degrees higher, rather than the coastal number from the same airport station they have been using for the last 80 years.
They especially do this on the radio weather reports. It feels like… 30 degrees.

crackers345
Reply to  Philo
August 19, 2017 7:55 pm

george – scientists have made it very clear that no one should expect the global average change at their locale.
but the global avg is good for spotting the earth’s energy imbalance. not perfect, but good

tty
Reply to  Philo
August 20, 2017 1:03 am

“but the global avg is good for spotting the earth’s energy imbalance. not perfect, but good”
Actually it is almost completely useless given the very low heat capacity of the atmosphere compared to the ocean (remember that it is the ocean that absorbs and emits the vast majority of solar energy).
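[A back-of-envelope check of that ratio, using round textbook figures for the masses and specific heats (assumptions, not measurements):]

```python
# Rough heat capacity of the ocean vs the atmosphere.
m_atm   = 5.1e18      # kg, mass of the atmosphere
cp_air  = 1005.0      # J/(kg*K)
m_ocean = 1.4e21      # kg, mass of the ocean
cp_sea  = 3990.0      # J/(kg*K), seawater
ratio = (m_ocean * cp_sea) / (m_atm * cp_air)
print(f"ocean/atmosphere heat capacity ~ {ratio:.0f}")   # -> roughly 1000
```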

TA
Reply to  Cold in Wisconsin
August 20, 2017 6:10 am

https://science.nasa.gov/science-news/science-at-nasa/1997/essd06oct97_1
Accurate “Thermometers” in Space
“An incredible amount of work has been done to make sure that the satellite data are the best quality possible. Recent claims to the contrary by Hurrell and Trenberth have been shown to be false for a number of reasons, and are laid to rest in the September 25th edition of Nature (page 342). The temperature measurements from space are verified by two direct and independent methods. The first involves actual in-situ measurements of the lower atmosphere made by balloon-borne observations around the world. The second uses intercalibration and comparison among identical experiments on different orbiting platforms. The result is that the satellite temperature measurements are accurate to within three one-hundredths of a degree Centigrade (0.03 C) when compared to ground-launched balloons taking measurements of the same region of the atmosphere at the same time. ”
The satellite measurements have been confirmed by the balloon measurements. Nothing confirms the bastardized surface temperature record.
And this:
http://www.breitbart.com/big-government/2016/01/15/climate-alarmists-invent-new-excuse-the-satellites-are-lying/
“This [satellite] accuracy was acknowledged 25 years ago by NASA, which said that “satellite analysis of the upper atmosphere is more accurate, and should be adopted as the standard way to monitor temperature change.”
end excerpts
Hope that helps.

Tony
August 19, 2017 4:52 pm

Watch me pull a rabbit out of my hat “±0.05ºC” … what utter rubbish!

August 19, 2017 4:55 pm

I am puzzled as to how, over a period of 30 years, temperatures can be established only to ±0.5K, while for the GISTEMP 2016 anomaly the uncertainty is only ±0.05ºC. How is the latter more precise? Is it that different measuring techniques are in use?

Greg
Reply to  Kip Hansen
August 19, 2017 5:17 pm

The order of magnitude is not necessarily wrong, because they are different things: there is no reason why they should be the same. But I don’t believe either the 0.5 or the 0.05 figure.

Greg
Reply to  Kip Hansen
August 19, 2017 5:38 pm

The problem is, while the instrumental and reading errors are random and will average out, allowing a sqrt(N) error reduction, you cannot apply the same logic to the number of stations, and this is exactly what they do to get the silly uncertainties.
They try to argue that they have N-thousand measurements of the same thing: the mean temperature. This is not true, because you cannot measure a mean temperature; it is not physical, it is a statistic of individual measurements. Neither does the world have A temperature which you can try to measure at a thousand different places.
So all you have is thousands of measurements, each with a fixed uncertainty. That does not get more accurate. Otherwise you could do a thousand measurements on Mars and then claim that you know the mean temperature of the inner planets more accurately than you know the temperature of Earth.
The temperatures at different places really are different. You don’t get a more accurate answer by measuring more different things.
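[A toy numerical version of the Mars argument, with invented station numbers: adding more measurements of a different thing changes the statistic rather than sharpening it:]

```python
import numpy as np

rng = np.random.default_rng(5)
earth = rng.normal(15, 12, 1000)    # 1000 stations with genuinely different local temps (deg C)
mars = rng.normal(-63, 10, 1000)    # 1000 more readings... of a different planet
print(earth.mean())                             # ~15: a statistic of this sample of Earth
print(np.concatenate([earth, mars]).mean())     # ~-24: more data, but a different statistic,
                                                # not a sharper estimate of Earth's temperature
```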

Bob boder
Reply to  Kip Hansen
August 19, 2017 6:34 pm

There is no evidence, if the error is mechanical in nature, that it would average out with more samples anyway. Devices of the same type tend to drift or fail in the same direction.

Reply to  Kip Hansen
August 19, 2017 7:10 pm

But they are NOT “different things”.
If one is defined as a deviation from another, you can’t separate them, no matter how many statistical tricks you apply.

tty
Reply to  Kip Hansen
August 20, 2017 1:11 am

“while the instrumental and reading errors are random and will average out allowing a sqrt(N) error reduction”
Just what makes you believe that?

Reply to  Kip Hansen
August 20, 2017 5:01 am

Greg, you are moving from verifiable to hypothetical with the statement about errors averaging out. The mathematics is based on exact elements of a set having precise properties (IID).
Also, one of the pillars of the scientific method is the method of making measurements: you design the tools to achieve the resolution you want. Were the temperature measurement stations set up to measure repeatably with sub-0.1K uncertainty? No, they weren’t. Neither were the bucket measurements of SST.
And that is the fundamental problem with climate scientists. They are dealing in hypotheticals but believing that it is real. They have crossed into a different area.

wyzelli
August 19, 2017 5:07 pm

It is also well worth remembering (or learning) the difference between MEAN and MEDIAN, and paying close attention to which one is used where in information sources.
So many reports say “the temperature is above the long-term MEAN”, when in a Normal Distribution exactly half of the samples are higher than the mean!
It’s an interesting and worthwhile exercise to evaluate whether the temperature series at any particular station resembles a Normal Distribution…
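[A minimal sketch of that exercise, using SciPy’s D’Agostino–Pearson test on a synthetic, right-skewed series standing in for real station data:]

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# synthetic stand-in for a station's daily maxima: right-skewed, not Gaussian
temps = 10 + rng.gamma(shape=4, scale=2, size=10_000)
print(stats.skew(temps))                # ~1.0: noticeably skewed
print(stats.normaltest(temps).pvalue)   # ~0: the normality test rejects decisively
```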

wyzelli
Reply to  wyzelli
August 19, 2017 5:11 pm

For comparison purposes, note that sea ice extent is usually referenced to the MEDIAN.

Stephen Greene
August 19, 2017 5:14 pm

I was looking at temp. and CO2 data last week to see if NASA, NOAA and GISS would pass FDA scrutiny if approval were sought. There is a lot to it, but from acquisition to security to analysis, as well as quality checks for sampling biases, missing data, and not to mention changing historical data etc., the answer is no. NOT EVEN CLOSE! Blinding is a big deal. So, ethically, I believe any climate scientist who is also an activist must blind ALL PARTS of a study to ensure quality. What about asking to audit all marchers on Washington who received federal grants but do not employ FDA-level or greater quality standards? Considering Michael Mann would not turn over his data to the Canadian courts last month, this might be a hoot, and REALLY VALUABLE!

Rick C PE
August 19, 2017 5:15 pm

TonyL: I disagree that the use of the “anomalies” is well known.

a·nom·a·ly
[əˈnäməlē]
NOUN
something that deviates from what is standard, normal, or expected:
“there are a number of anomalies in the present system”
synonyms: oddity · peculiarity · abnormality · irregularity · inconsistency

While it is used extensively in climate science these days, it is a very uncommon approach in statistical analysis, engineering and many scientific fields. The term or process is not mentioned or described in any of my statistics textbooks. I have spent 40 years in the business of collecting and analyzing all kinds of measurements and have never seen the need to convert data to ‘anomalies’. It can be viewed as simply reducing a data set to its noise component. My main objection is that, like an average without an estimate of dispersion such as the variance or standard deviation, it serves to reduce the information conveyed. Also, as the definition indicates, ‘anomaly’ implies abnormality, irregularity, etc. As has been widely argued here and elsewhere, significant variability in the temperature of our planet seems quite normal.
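[For readers unfamiliar with the mechanics: a minimal sketch of how an anomaly series is computed, on synthetic monthly data with an invented base period:]

```python
import numpy as np

rng = np.random.default_rng(7)
years = np.arange(1951, 2021)                               # 70 years of synthetic monthly data
seasonal = 15 + 10 * np.cos(2 * np.pi * (np.arange(12) - 6) / 12)
data = seasonal + 0.01 * (years[:, None] - 1951) + rng.normal(0, 1, (years.size, 12))

base = (years >= 1981) & (years <= 2010)                    # a 30-year base period
climatology = data[base].mean(axis=0)                       # one "normal" per calendar month
anomaly = data - climatology                                # the anomaly: deviation from normal
print(anomaly[-1].round(2))                                 # last year's 12 monthly anomalies
```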

Robert of Ottawa
August 19, 2017 5:17 pm

I think this is a fine demonstration of the fallacy of false precision. Also of statistical fraud.
We can’t let the proles think, “Hey, guess what, the temperature hasn’t changed!”

August 19, 2017 5:21 pm

KH, I am of two minds about your interesting guest post.
On the one hand, because of latitudinal (temperate zone) and altitudinal (lapse rate) differences, a global average temp is meaningless, while a global average stationary-station anomaly (correctly calculated) is meaningful, especially for climate trends. So it is useful if the stations are reliable (most aren’t).
On the other hand, useful anomalies hide a multitude of other climate sins. Not the least of these is the gross discrepancy between absolute and ‘anomaly’ temperatures in the CMIP5 archive of the most recent AR5 climate models: they get the absolute temperature wrong by +/-3C! So CMIP5 is not at all useful. Essay ‘Models All the Way Down’ in ebook Blowing Smoke covers the details of that, and more. See also the previous guest post here, ‘The Trouble with Models’.

Greg
Reply to  ristvan
August 19, 2017 6:00 pm

I agree that anomalies make more sense in principle, if you want to look at whether the earth has warmed due to changing radiation, for example.
The problem is that the “climatology” for each month is the mean of 30 days of that month over 30 years: 900 data points. They will have a range of 5 to 10 deg C for any given station, with a distribution. You can take 2 std dev as the uncertainty of how representative that mean is, and I’ll bet that is more than 0.05 deg C. So the uncertainty on your anomaly can never be lower than that.
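[Taking Greg’s 900-sample climatology at face value and assuming an illustrative 3 deg C day-to-day spread, with IID as the generous assumption:]

```python
import numpy as np

sigma, n = 3.0, 900             # assumed day-to-day spread (deg C); 30 days x 30 years
sem = sigma / np.sqrt(n)        # standard error of the climatological mean, if readings were IID
print(2 * sem)                  # -> 0.2, already four times the quoted 0.05
# day-to-day autocorrelation shrinks the effective n, so the honest figure is larger still
```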

Reply to  ristvan
August 19, 2017 7:16 pm

For anomalies to be useful in any respect, the original data should not be tampered with.

David Chappell
Reply to  ristvan
August 20, 2017 4:25 am

Ristvan: “On the one hand, because of latitudinal (temperate zone) and altitudinal (lapse rate) differences, a global average temp is meaningless.”
What you are saying in simple terms is that a global average temperature is also a crock of fecal matter.

Tom Halla
August 19, 2017 6:14 pm

This is like the rules for stage psychics doing cold readings ==> do not be specific on anything checkable.

Greg
August 19, 2017 6:18 pm

Another error they usually ignore is sampling error: is the sample a true and accurate representation of the whole? In the case of SST, almost certainly not.
Sampling patterns and methods have been horrendously variable and erratic over the years. The whole engine-room/buckets fiasco is largely undocumented and is “corrected” based on guesswork, often blatantly ignoring the written records.
What uncertainty needs to be added due to incomplete sampling?
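[A toy illustration of pure sampling bias, with an invented pole-to-equator gradient and a “shipping lanes only” mask; no instrument error at all is needed to produce the bias:]

```python
import numpy as np

rng = np.random.default_rng(9)
lat = np.degrees(np.arcsin(rng.uniform(-1, 1, 100_000)))    # area-uniform latitudes on a sphere
temp = 30 - 0.5 * np.abs(lat) + rng.normal(0, 2, lat.size)  # toy pole-to-equator gradient
print(temp.mean())                     # ~13.7: the true area-weighted mean of this toy world
lanes = (lat > 20) & (lat < 60)        # sample only NH mid-latitude "shipping lanes"
print(temp[lanes].mean())              # ~11.0: biased simply by where the samples are
```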

Clyde Spencer
August 19, 2017 6:24 pm

KIP,
Something buried in the comments section of Gavin’s post is important and probably overlooked by most:
“…Whether it converges to a true value depends on whether there are systematic variations affecting the whole data set, but given a random component more measurements will converge to a more precise value.
[Response: Yes of course. I wasn’t thinking of this in my statement, so you are correct – it isn’t generally true. But in this instance, I’m not averaging the same variable multiple times, just adding two different random variables – no division by N, and no decrease in variance as sqrt(N).]”
Gavin is putting to rest the claim by some that taking large numbers of temperature readings allows greater precision to be assigned to the mean value. To put it another way, the systematic seasonal variations swamp the random errors that might allow an increase in precision.
Another issue is that, by convention, the uncertainty represents +/- one (or sometimes two) standard deviations. He doesn’t explicitly state whether he is using one or two SD. Nor does he explain how the uncertainty is derived. I made a case in a recent post ( https://wattsupwiththat.com/2017/04/23/the-meaning-and-utility-of-averages-as-it-applies-to-climate/ ) that the actual standard deviation for the global temperature readings for a year might be about two orders of magnitude greater than what Gavin is citing.
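[The “adding two different random variables” rule Gavin invokes is addition in quadrature, which assumes the two errors are independent; with the ±0.5 K and ±0.05 ºC figures discussed in this thread it works out as:]

```latex
\sigma_{X+Y} \;=\; \sqrt{\sigma_X^2 + \sigma_Y^2}
            \;=\; \sqrt{0.5^2 + 0.05^2}
            \;\approx\; 0.502\ \mathrm{K}
```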

Gary Kerkin
August 19, 2017 6:38 pm

Schmidt cites two references as to why anomalies are preferred, one from NASA and one from NOAA. The latter is singularly useless as to why anomalies should be used. The opening paragraph of the NASA reference states:

The reason to work with anomalies, rather than absolute temperature is that absolute temperature varies markedly in short distances, while monthly or annual temperature anomalies are representative of a much larger region. Indeed, we have shown (Hansen and Lebedeff, 1987) that temperature anomalies are strongly correlated out to distances of the order of 1000 km.

Two factors are at work here. One is that the data is smoothed. The other is that the anomalies of two different geographical locations can be compared whilst the absolute temperatures cannot.
Is smoothed data useful? I guess that is moot, but it is true to say that any smoothing process loses fine detail, the most obvious of which is diurnal variation. Fine detail includes higher-frequency information, and removing it makes the analysis of natural processes more difficult.
Is a comparison of anomalies at geographically remote locations valid? I would think it would be, provided the statistics of the data from both locations are approximately the same. For example, since most analysis is based on unimodal Gaussian distributions (and normally distributed at that), if the temperature distributions at the two locations are not normal, can a valid comparison be made? Having looked at distributions at several locations in New Zealand, I know that they are not normal. Diurnal variation would suggest at least a bimodal distribution, but several stations exhibit at least trimodal distributions. The more smoothing applied to the data set, the more closely the distribution will display normal, unimodal behaviour.
I suspect that smoothing the data is the primary objective, hiding the inconvenient truth that air temperature is a natural variable, subject to a host of influences, many of which are not easily described and are incapable of successful, verifiable modeling.
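[A minimal demonstration of that smoothing effect on synthetic data: a bimodal “hourly” series fails a normality test, while its monthly means pass, purely as a consequence of averaging:]

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# bimodal "hourly" series: cool nights and warm days, 720,000 values
hourly = np.concatenate([rng.normal(8, 2, 360_000), rng.normal(22, 3, 360_000)])
rng.shuffle(hourly)
monthly = hourly.reshape(-1, 720).mean(axis=1)   # smooth to ~monthly means (720 h per month)
print(stats.normaltest(hourly).pvalue)    # ~0: the raw series is nowhere near normal
print(stats.normaltest(monthly).pvalue)   # typically > 0.05: the smoothed series looks normal
```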

Scott Wilmot Bennett
Reply to  Gary Kerkin
August 19, 2017 8:41 pm

Re: Gary Kerkin (August 19, 2017 at 6:38 pm)
[James] Hansen is quoting himself again; it’s all very inbred when you start reading the supporting (or not) literature!
However, the literature doesn’t agree, and he knows that he is dissembling.
In [James] Hansen’s analysis, the isotropic component of the covariance of temperature assumes a constant correlation decay* in all directions. However, “It has long been established that spatial scale of climate variables varies geographically and depends on the choice of directions” (Chen, D. et al. 2016).
In the paper The spatial structure of monthly temperature anomalies over Australia, the BOM definitively demonstrated the inappropriateness of Hansen’s assumptions about correlation of temperature anomalies:

In reality atmospheric fields are rarely isotropic, and indeed the maintenance of westerly flow in the southern extratropics against frictional dissipation is only possible due to the northwest-southeast elongation of transient eddy activity (Peixoto and Oort 1993). Seaman (1982a) provides a graphic illustration of this anisotropy on weather time-scales for the Australian region…This observation of considerable anisotropy is in contrast with Hansen and Lebedeff (1987) for North America and Europe.. We also note the inappropriateness of the function used by P.D. Jones et al. (1997) for describing anisotropy (at least for Australian temperature), which limits the major and minor axes of the correlation ellipse to the zonal and meridional direction (see Seaman 1982b).
Clearly, anisotropy represents an important characteristic of Australian temperature anomalies, which should be accommodated in analyses of Australian climate variability.(Jones, D.A. & Trewin, B. 2000)

*Correlations decrease exponentially with spatial distance; the spatial scales are quantified using the e-folding decay constant.
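[The isotropic assumption in question, rendered as a formula: a single e-folding length d_0 regardless of direction, versus the direction-dependent form the BOM result calls for:]

```latex
r(d) \;=\; e^{-d/d_0}
\quad \text{(isotropic: one decay length } d_0 \text{ in all directions)}
\qquad\text{vs.}\qquad
r(d,\theta) \;=\; e^{-d/d_0(\theta)}
\quad \text{(anisotropic: the decay length depends on direction } \theta\text{)}
```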

Scott Wilmot Bennett
Reply to  Scott Wilmot Bennett
August 19, 2017 11:01 pm

Mod or Mods! Whoops! I just realised that my comment above was directed at James Hansen of NASA but might be confused with the Author of the post, Kip Hansen!
To be clear, Gavin Schmidt(NASA), references James Hansen(NASA) quoting J.Hansen who references NASA(J.Hansen)! It’s turtles all the way down 😉

wyzelli
Reply to  Gary Kerkin
August 20, 2017 4:02 pm

It is true that temperature data is not Normally Distributed; at the very least, most sets I have looked at are noticeably skewed. The problem is that the deviation from Normal at each station differs from other stations, and comparing, specifically averaging, non-homogeneous data presents a whole other set of difficulties (i.e. it shouldn’t be done).
