Climate Science Double-Speak: Update

Update by Kip Hansen

Last week I wrote about UCAR/NCAR’s very interesting discussion on “What is the average global temperature now?”.

Part of that discussion revolved around the question of why current practitioners of Climate Science insist on using Temperature Anomalies — the difference between the current average temperature of a station, region, nation, or the globe and its long-term, 30-year base period, average — instead of simply showing us a graph of the Absolute Global Average Temperature in degrees Fahrenheit or Celsius or Kelvin.

Gavin Schmidt, Director of the NASA Goddard Institute for Space Studies (GISS) in New York, and co-founder of the award-winning climate science blog RealClimate, has come to our rescue to help us sort this out.

In a recent blog essay at RealClimate titled “Observations, Reanalyses and the Elusive Absolute Global Mean Temperature”, Dr. Schmidt gives us the real answer to this difficult question:

“But think about what happens when we try and estimate the absolute global mean temperature for, say, 2016. The climatology for 1981-2010 is 287.4±0.5K, and the anomaly for 2016 is (from GISTEMP w.r.t. that baseline) 0.56±0.05ºC. So our estimate for the absolute value is (using the first rule shown above) is 287.96±0.502K, and then using the second [the first and second rules have to do with estimating the uncertainties – see Gavin’s post], that reduces to 288.0±0.5K [2016]. The same approach for 2015 gives 287.8±0.5K, and for 2014 it is 287.7±0.5K. All of which appear to be the same within the uncertainty. Thus we lose the ability to judge which year was the warmest if we only look at the absolute numbers.”
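For readers who want to check the arithmetic in the quote, here is a minimal sketch in Python, using only the figures quoted above (this is not GISTEMP’s actual code):

```python
import math

# Figures quoted above (GISTEMP w.r.t. the 1981-2010 baseline)
climatology, u_clim = 287.4, 0.5   # K: 1981-2010 mean and its uncertainty
anomaly_2016, u_anom = 0.56, 0.05  # K (a 1 degC change equals a 1 K change)

absolute = climatology + anomaly_2016
# Uncertainties of independent quantities combine in quadrature
u_abs = math.sqrt(u_clim**2 + u_anom**2)

print(f"{absolute:.2f} +/- {u_abs:.3f} K")  # 287.96 +/- 0.502 K
print(f"{absolute:.1f} +/- {u_abs:.1f} K")  # 288.0 +/- 0.5 K
```

Rounded to one decimal, 2016 (288.0±0.5 K) overlaps the quoted 2015 (287.8±0.5 K) and 2014 (287.7±0.5 K) values, which is the overlap Dr. Schmidt describes.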

You see, as Dr. Schmidt carefully explains for us non-climate-scientists, if they use Absolute Temperatures the recent years are all the same — no way to say this year is the warmest ever — and, of course, that just won’t do — not in “RealClimate Science”.

# # # # #

Author’s Comment Policy:

Same as always — and again, this is intended just as it sounds — a little tongue-in-cheek but serious as to the point being made.

Readers not sure why I make this point might read my more general earlier post:  What Are They Really Counting?

# # # # #

Sweet Old Bob
August 19, 2017 3:36 pm

What a tangled web we weave …….

wayne Job
August 20, 2017 3:36 am

Odd is it not that some fifty years ago the accepted standard for the world was 14.7C @ 1313 Mb.
I just converted Mr Schmidt’s Kelvin that he calculates as the average 287.8K = 14.650C so in the last fifty years there has been virtually no change. I want my warming, it is as cold as a witches tit where I live.

ATheoK
August 20, 2017 4:44 am

It is not odd.
It is an embarrassment.

“Gavin Schmidt, Director of the NASA Goddard Institute for Space Studies (GISS) in New York, and co-founder of the award winning climate science blog RealClimate, has come to our rescue to help us sort this out.
In a recent blog essay at RealClimate titled “Observations, Reanalyses and the Elusive Absolute Global Mean Temperature”, Dr. Schmidt gives us the real answer to this difficult question:”

None of the titles Schmidt claims disguises the fact that Gavin Schmidt is an elitist who believes himself so superior that he will not meet others as equals.
It is a lack of quality that Gavin Schmidt proclaims loudly and displays smugly when facing scientists; one can imagine how far superior Schmidt considers himself to normal people.
Further proof of Schmidt’s total lack of honest, forthright science is his latest snake-oil sales pitch, the “climate science double-speak” described above.

“wayne Job August 20, 2017 at 3:36 am
Odd is it not that some fifty years ago the accepted standard for the world was 14.7C @ 1313 Mb.
I just converted Mr Schmidt’s Kelvin that he calculates as the average 287.8K = 14.650C so in the last fifty years there has been virtually no change. I want my warming, it is as cold as a witches tit where I live.”

Wayne Job demonstrates superlatively that no matter how Gavin and his obedient goons adjust temperatures, they are unable to hide current temperatures from historical or common-sense comparisons.
Gavin should be permanently and directly assigned to Antarctica, where he can await his dreaded “global warming” as the Antarctic witch.

Sceptical lefty
August 20, 2017 5:24 am

Sorry to be pedantic, but I believe that the pressure should have been 1013mb.
As an aside, it’s a real bitch when the inclusion of realistic error figures undermines one’s whole argument. This sort of subversive behaviour must be stopped!

PiperPaul
August 20, 2017 7:02 am

14.7 is also air pressure in PSI at sea level! I’m 97% sure there’s some kind of conspiracy here…

Pat Frank
August 20, 2017 8:11 am

Good point about the errors. Gavin shows the usual consensus abhorrence of tracking error.
If the climatology is known only to ±0.5 K and the measured absolute temperature is known to ±0.5 K, then the uncertainty in the anomaly is their root-sum-square = ±0.7 K.
There’s no avoidance of uncertainty by taking anomalies. It’s just that consensus climate scientists, apparently Gavin included, don’t know what they’re doing.
The anomalies will inevitably have a greater uncertainty than either of the entering temperatures.
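Pat Frank’s root-sum-square arithmetic, assuming the two ±0.5 K uncertainties are independent, can be sketched as:

```python
import math

u_climatology = 0.5  # K, uncertainty in the 1981-2010 climatology
u_absolute = 0.5     # K, uncertainty in the measured absolute temperature

# Independent errors combine in quadrature (root-sum-square)
u_anomaly = math.sqrt(u_climatology**2 + u_absolute**2)
print(f"anomaly uncertainty: +/- {u_anomaly:.1f} K")  # +/- 0.7 K
```

Note the contrast with the ±0.05 °C anomaly uncertainty Gavin quotes; which figure is right is exactly the dispute running through this thread.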

wayne Job
August 21, 2017 5:41 am

Sorry, the Mb should read 1013. I do know that the temp was right; as an old flight engineer, those were the standard figures for engine and take-off performance.

August 20, 2017 7:30 am

…when we practice to receive – grants, lots and lots of taxpayer funded grants!

Bill Hanson
August 19, 2017 3:38 pm

Stunning.

August 19, 2017 3:46 pm

We live in a world of absolute temperature numbers, not long term averages. Averages have no social meaning.

NW sage
August 19, 2017 4:22 pm

Averages are a statistical method of trying to detect meaning when there is none.

August 20, 2017 8:31 am

Climatology is about averages. To know, for example, the 30-year average temperature at a given location is useful for some purposes. Climatologists erred when they began trying to predict these averages without identifying the statistical populations underlying their models, for to predict without identifying this population is impossible.

george e. smith
August 20, 2017 7:26 pm

NOTHING ever happens twice; something else happens instead. So any observation creates a data set with one element; the observation itself.
And the average value of a data set containing a single element is ALWAYS the value of that one element. So stick with the observed values they are automatically the correct numbers to use.
G

August 29, 2017 12:30 pm

Gavin should learn a little math – specifically Significant Digits. If the climatology is known to a precision of 0.1, then the anomaly MAY NOT BE calculated to a precision greater than 0.1 degree. Absolute or anomaly – both ought to show that the temperatures are the same.
I always wonder: if the Alarmists’ case is so strong, then why do they need to lie?
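The significant-digits point can be illustrated with a toy calculation (hypothetical numbers chosen to match the quoted climatology):

```python
# If the climatology is only known to 0.1 K, a derived anomaly quoted to
# 0.01 K overstates the precision actually available.
climatology = 287.4  # K, known only to 0.1 K
measured = 287.96    # K, a hypothetical absolute estimate

anomaly = measured - climatology
print(round(anomaly, 2))  # 0.56 -- the precision the comment objects to
print(round(anomaly, 1))  # 0.6  -- precision consistent with the inputs
```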

Santa Baby
August 19, 2017 11:19 pm

In postmodernism nothing is truth. Except postmodern consensus policy based science?

Auto
August 20, 2017 2:51 pm

Santa
“postmodern consensus policy based science” is the revealed and frighteningly enforceable truth.
Disagree and – no tenure.
Never mind scientific method.
Sad that science has descended into a belief system, isn’t it??
Auto

Bill Powers
August 19, 2017 4:01 pm

On a somewhat different tack, check your local TV channel’s weather meteorologists. I have detected a pattern in the markets I have lived in: when the temperature is above the long-term average, they almost always say that the “temperature was above NORMAL today,” but when it is below, they say that the “temperature was below the AVERAGE” for this date.
Subliminally we receive a bad-news message when the temperature is not “normal,” but it comes across as non-newsworthy to be innocuously below an average. Do they teach them this in meteorology courses?
CAGW Hidden Persuaders? Check it out. Maybe it’s just my imagination.

Michael Smith
August 19, 2017 9:27 pm

In parts of Australia I have heard TV weather persons say that monthly rainfall was “less than what we should have received” as if it were some sort of entitlement rather than just a calculated average of widely fluctuating numbers. I grimace when I hear it.

Crispin in Waterloo but really in Bishkek
August 19, 2017 8:30 pm

What I hear is a continuous reference to the ‘average’ temperature with no bounds as to what the range of ‘average’ is.
It is not nearly enough to say ‘average’ temperature for today is 25 C and not mention that the thirty years which contributed to that number had a range of 19-31. The CBC will happily say the temperature today is 2 degrees ‘above average’ but not say that it is well within the normal range experienced over the calibration period.
The use of an ‘anomaly’ number hides reality by pretending there is a ‘norm’ that ‘ought to be experienced’ were it not for the ‘influence’ of human activities.
All this is quite separate from the ridiculous precision claimed for Gavin’s numbers which are marketed to the public as ‘real’. These numbers are from measurements and the error propagation is not being done and reported properly.

crackers345
Reply to  Crispin in Waterloo but really in Bishkek
August 19, 2017 9:00 pm

crispin, the baseline is not the “norm.” it’s just an arbitrary choice to compare temperatures against. it can be changed at will. it hides nothing

george e. smith
Reply to  Crispin in Waterloo but really in Bishkek
August 20, 2017 7:28 pm

Well nuts ! the observed value IS the norm; it can never be anything else.
G

Patrick MJD
August 19, 2017 11:03 pm

No, not your imagination. It’s to scare people, ie, the warm/cold is abnormal (Somehow) when it is perfectly normal. I am seeing this in Australian weather broadcasts more and more now.

tom s
August 20, 2017 9:52 am

I am a meteorologist…30yrs now. I cannot stand TV weather. I never watch it anymore as I do all my own forecasting myself. It’s catered to 7yr olds. It’s painful to watch. I need not listen to any of these dopes. No, I am not a TV weatherman.

AGW is not Science
August 21, 2017 12:31 pm

I actually haven’t taken notice of the differences between how “above” and “below” average temps are referenced, but I have always abhorred the (frequent, and seemingly prevailing) use of the word “normal” in that respect.
As I like to say, “There IS no “normal” temperature – it is whatever it is.” What they are calling “normal” is an average temperature of a (fairly arbitrarily selected) 30-year period (and at one point they weren’t moving the reference period forward as they were supposed to, because they knew that was going to raise the “average” temps and thereby shrink the “anomalies,” thereby undermining (they felt) the “belief” in man-made climate catastrophe).
I object to the word “anomaly” as well, because it once again suggests that there is something “abnormal” about any temperature that is higher or lower than a 30-year average. There IS NOTHING “ANOMALOUS” about a temperature that is not equal to ANY “average” of prior temperatures, which itself is nothing more than a midpoint of extremes. “Anomalies” are complete BS.
Great, revealing OP.

JohnWho
August 19, 2017 4:01 pm

Wait, does that mean all the years are the “hottest ever” or none of them?
I note that Gavin states with certainty that it is uncertain and it is somewhat surprising that he does so.

Louis
August 19, 2017 6:51 pm

If absolute temperatures carry uncertainties, why don’t anomalies? It seems to me that anomalies are usually less than the uncertainty and therefore are virtually equivalent to zero. So why are they allowed to use anomalies without revealing their corresponding uncertainties?

richard verney
August 20, 2017 12:51 am

If absolute temperatures carry uncertainties, why don’t anomalies?

They do. Gavin states:

The climatology for 1981-2010 is 287.4±0.5K, and the anomaly for 2016 is (from GISTEMP w.r.t. that baseline) 0.56±0.05ºC.

So he suggests that the error bounds of the anomalies are very small, only ±0.05 °C. Whether one considers that small error bound reasonable is a different matter.
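Taking Gavin’s quoted ±0.05 °C at face value, and assuming the same uncertainty for 2015 (whose anomaly against the 287.4 K baseline is implied to be about 0.4 K), the year-to-year comparison in anomaly space looks like this:

```python
import math

a_2016, u_2016 = 0.56, 0.05  # K, anomaly and uncertainty quoted above
a_2015, u_2015 = 0.40, 0.05  # K, implied by 287.8 K; same uncertainty assumed

diff = a_2016 - a_2015
u_diff = math.sqrt(u_2016**2 + u_2015**2)  # independent errors in quadrature
print(f"2016 minus 2015: {diff:.2f} +/- {u_diff:.2f} K")  # 0.16 +/- 0.07 K
```

On those (disputed) error bounds, the years separate in anomaly space but not in absolute space, which is Gavin’s stated reason for preferring anomalies.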

Stephen Richards
August 20, 2017 1:18 am

I wrote to Realclimate many years ago about this stupidity. I got back the usual bile. One and only time I looked at that site.

HotScot
August 20, 2017 1:53 am

Sorry folks, but probably a dumb question from an ill educated oaf.
Being that Stevenson screens with thermometers were probably still being used in 1981, and for some time after, with, presumably, a conventional thermometer, surely observations of the temperature couldn’t possibly be accurate to 0.5K i.e. 287.4±0.5K.
Nor do I believe it credible that every Stevenson screen was well maintained, and we know about the siting controversy. And I suspect not all were properly monitored, with the office tea boy being sent out into the snow to take the measurements, myopic technicians wiping rain off their specs. or the days when someone forgets, and just has a guess.
And I don’t suppose for a moment every Stevenson screen, at every location, was checked once every hour, possibly four times in 24 hours, or perhaps 8 times, in which case there are numerous periods when temperatures can spike (up or down) before declining or rising.
It therefore doesn’t surprise me one bit that with continual electronic monitoring we are seeing ‘hottest temperatures evah’ simply because they were missed in the past.
Sorry, a bit of a waffle.

richard verney
August 20, 2017 2:33 am

I wrote to Realclimate many years ago about this stupidity. I got back the usual bile. One and only time I looked at that site.

I do from time to time look at the site, but I understand that comments are often censored or dismissed without proper explanation. I have posted a comment (awaiting moderation) inquiring about the time series data set and what the anomaly really represents. It will be interesting to see whether it gets posted and answered.

I must confess that I am having difficulty in understanding what this anomaly truly represents, given that the sample set is constantly changing over time.
If the sample set were to remain true and the same throughout the time series, then it would be possible to have an anomaly across that data set, but that is not what is or has happened with the time series land based thermometer data set.
The sample set of data used in say 1880 is not the same sample set used in 1900 which in turn is not the same sample set used in 1920, which in turn is not the same sample set used in 1940, which in turn is not the same sample set used in 1960, which in turn is not the same sample set used in 1980, which in turn is not the same sample set used in 2000, which in turn is not the same sample set used in 2016.
You mention the climatology reference of 1981 to 2010 against which the anomaly is assessed, however, the data source that constitutes the sample set for the period 1981 to 2010, is not the same sample set used to ascertain the 1880 or 1920 or 1940 ‘data’. We do not know whether any calculated anomaly is no more than a variation in the sample set, as opposed to a true and real variation from that set.
When the sample set is constantly changing over time, any comparison becomes meaningless. For example, if I wanted to assess whether the average height of Americans has changed over time, I cannot ascertain this by say using the statistic of 200 American men measured in 1920 and finding the average, then using the statistics of 200 Finnish men who speak English measured in 1940 and finding the average, then using the statistics of 100 American women and 100 Spanish men who speak English as measured in 1960 etc. etc
It is not even as if we can claim that the sample set is representative since we all know that there is all but no data of the Southern hemisphere going back to say 1880 or 1900. In fact, there are relative few stations that have continuous records going back 60 years, still less about 140 years. Maybe it is possible to do something with the Northern Hemisphere, particularly the United States which is well sampled and which possesses historic data, but outside that, I do not see how any meaningful comparisons can be made.
Your further thoughts would be welcome.
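richard verney’s changing-sample-set worry can be illustrated with a toy simulation (entirely hypothetical numbers): two groups of stations with flat, unchanging climates, but a network whose composition shifts from cold-region to warm-region stations over time.

```python
import random
random.seed(0)

# Both groups have FLAT true climates -- no station ever warms.
cold_mean, warm_mean = 5.0, 25.0  # degC

def global_average(n_cold, n_warm):
    temps = [random.gauss(cold_mean, 1) for _ in range(n_cold)] + \
            [random.gauss(warm_mean, 1) for _ in range(n_warm)]
    return sum(temps) / len(temps)

early = global_average(n_cold=80, n_warm=20)  # early network: mostly cold-region
late  = global_average(n_cold=20, n_warm=80)  # later network: mostly warm-region
print(f"early {early:.1f} C, late {late:.1f} C")  # apparent 'warming', no real change
```

The apparent warming here is entirely an artifact of the changing station mix, which is the point of the height-of-Americans analogy above. (In practice, anomaly methods are the standard attempt to remove exactly this artifact; whether they succeed is what the comment questions.)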

ATheoK
August 20, 2017 5:18 am

“HotScot August 20, 2017 at 1:53 am
Sorry folks, but probably a dumb question from an ill educated oaf.
Being that Stevenson screens with thermometers were probably still being used in 1981, and for some time after, with, presumably, a conventional thermometer, surely observations of the temperature couldn’t possibly be accurate to 0.5K i.e. 287.4±0.5K.
Nor do I believe it credible that every Stevenson screen was well maintained, and we know about the siting controversy. And I suspect not all were properly monitored, with the office tea boy being sent out into the snow to take the measurements, myopic technicians wiping rain off their specs. or the days when someone forgets, and just has a guess.
And I don’t suppose for a moment every Stevenson screen, at every location, was checked once every hour, possibly four times in 24 hours, or perhaps 8 times, in which case there are numerous periods when temperatures can spike (up or down) before declining or rising.
It therefore doesn’t surprise me one bit that with continual electronic monitoring we are seeing ‘hottest temperatures evah’ simply because they were missed in the past.
Sorry, a bit of a waffle.”

No apologies necessary. Nor is your question unreasonable and it is certainly not “dumb”; except to CAGW alarmists hiding the truth.
Everyone should read USA temperature station maintenance staff writings!

What’s in that MMTS Beehive Anyway?
– By Michael McAllister OPL, NWS Jacksonville, FL
“If you’re not involved with cleaning a Maximum/Minimum Temperature Sensor (MMTS) sensor unit, you probably have not seen inside it. The white louvered “beehive” contains a thermistor in its center with two white wires. The wires connect it to the plug on the base of the unit. It’s really a very basic instrument. So what else is there to be discovered in the disassembly of the unit?
I cannot vouch for the rest of the country, but here in northeast Florida and southeast Georgia, we regularly find various critters making their home inside the beehive. At the Jacksonville, FL, NWS office, we usually replace the beehive on our annual visits. After getting the dirty beehive back to the office, and before carefully taking it apart for cleaning, we leave it in a secure outside area for a day to let any “residents” inside vacate, then we dunk it in a bucket of water to flush out any reluctant squatters…”

N.B.;
At no point do the maintenance or NOAA staff ever conduct side by side measurements to determine before/after impacts to data.
Stations are moved,
sensor housings are replaced,
sensors are replaced and even “upgraded”,
data transmission lines and connections are replaced, lengthened, shortened, crimped, bent, etc.,
data handling methods and code are changed,
etc.
None of these potential “temperature impacts” are ever quantified, verified, or introduced into Gavin’s mystical error-bounds theology.

Latitude
August 19, 2017 4:02 pm

why current practitioners of Climate Science insist on using Temperature Anomalies….
…it’s easier to hide their cheating

Menicholas
August 19, 2017 7:10 pm

Also, it becomes obvious that the amounts of difference they are screaming about are below the limits of detection to a person without instrumentation.

AGW is not Science
August 21, 2017 12:40 pm

BINGO!

Tom in Florida
August 19, 2017 4:08 pm

“Thus we lose the ability to judge which year was the warmest if we only look at the absolute numbers.”
And of course, you lose the ability to scare people into parting with their money.
Snake Oil Salesman: The phrase conjures up images of seedy profiteers trying to exploit an unsuspecting public by selling it fake cures.

Gunga Din
August 19, 2017 4:11 pm

“Thus we lose the ability to judge which year was the warmest if we only look at the absolute numbers.”

So…in other words, if the actual temperatures won’t make it “warmest year ever!”, we’ll use something else to make it the “swarmiest year ever!”.
(http://www.urbandictionary.com/define.php?term=Swarmy)

TonyL
August 19, 2017 4:13 pm

The proper use of anomalies is well known and the reasons are sound. I would have thought that the use of anomalies would be entirely uncontroversial to the fairly astute readership at WUWT.
This appears to be attempting to make an issue where there is none.
It’s a Nothingburger.
Fake News.
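For what it’s worth, the standard rationale TonyL is alluding to can be sketched with hypothetical numbers: nearby stations at different elevations disagree in absolute temperature but track each other closely in anomaly space, which is why anomalies interpolate better across a sparse network.

```python
# Hypothetical monthly means for two nearby stations, one at altitude (degC).
valley  = [10.2, 11.1, 12.0, 11.5]
hilltop = [ 5.3,  6.1,  7.1,  6.5]  # systematically ~5 degC colder

def anomalies(series):
    """Deviations of each value from the series' own mean."""
    base = sum(series) / len(series)
    return [round(x - base, 2) for x in series]

print(anomalies(valley))   # [-1.0, -0.1, 0.8, 0.3]
print(anomalies(hilltop))  # nearly the same pattern despite the 5 degC offset
```

Whether that rationale is applied honestly in practice is the question the rest of the thread argues about.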

Greg
August 19, 2017 4:30 pm

Agreed.

Greg
August 19, 2017 4:35 pm

“The proper use of anomalies is well known and the reasons are sound. ”
Agreed.

— a little tongue-in-cheek but serious as to the point being made.

So what is the serious point being made? That you don’t understand why anomalies are used?

Latitude
August 19, 2017 4:47 pm

” All of which appear to be the same within the uncertainty”

Greg
August 19, 2017 4:50 pm

Gav would do better to try to explain why he is averaging (i.e. adding) temperatures of land and sea, which are totally different physical media and thus not additive.

seaice1
August 19, 2017 5:24 pm

“So what is the serious point being made? That you don’t understand why anomalies are used?”
That appears to be the case. I suggest anyone who finds this amusing go and read the article at realclimate with an open mind and you may then understand why anomalies are used. Ho ho. As if that will happen! We can all share in the joke.

bobl
August 19, 2017 5:35 pm

Actually, the whole of climate science would do well to explain why they use the unreliable, almost nonphysical concept of temperature to do anything useful, since the actual physical parameter is energy. Temperatures represent vastly different energies depending on the phase of matter and the medium being measured in – for example, between a dry day and a humid day, between smog and clear air, or between ozone and oxygen. The assumption of constant relative humidity alone makes the whole thing a pseudoscience.
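The point about equal temperatures carrying different energies can be illustrated with the standard moist-air enthalpy approximation, h ≈ 1.006·T + w·(2501 + 1.86·T) kJ per kg of dry air, where w is the humidity ratio (the moisture values below are illustrative):

```python
def moist_air_enthalpy(t_c, w):
    """Approximate specific enthalpy of moist air, kJ per kg of dry air.
    t_c: dry-bulb temperature in degC; w: humidity ratio, kg water / kg dry air."""
    return 1.006 * t_c + w * (2501.0 + 1.86 * t_c)

# The same 30 degC thermometer reading, very different energy content:
dry   = moist_air_enthalpy(30.0, 0.005)  # a dry day
humid = moist_air_enthalpy(30.0, 0.025)  # a humid day
print(f"dry: {dry:.1f} kJ/kg, humid: {humid:.1f} kJ/kg")  # roughly 43 vs 94
```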

KTM
August 19, 2017 8:12 pm

Bobl, it is so they can take a high-energy maximum daily temperature, add it directly to a low-energy minimum temperature, then divide that sum in half as if the two were equivalent, arriving at an average temperature without proper weighting.
When is the last time you heard a Warmist talking about maximum temperatures? It’s taboo to discuss those in polite society.

blcjr
Editor
August 20, 2017 5:11 am

In terms of statistics, the point is valid. To compare a “spot” temperature against an “average” (like a 30 year norm) ignores the uncertainty in the “average.” This is similar to the difference between a “confidence interval” and a “prediction interval” in regression analysis. The latter is much greater than the former. In the first case one is trying to predict the “average.” In the second case one is trying to predict a specific (“spot” in the jargon of stock prices) observation.
Implicitly, an anomaly is trying to measure changes in the average temperature, not changes in the actual temperature at the time the measurement is taken. If the anomaly in June of this year is higher than the anomaly in June of last year, that does not mean that the June temperature this year was necessarily higher than the June temperature last year. It means that there is some probability that the average temperature for June has increased, relative to the (usually) 30 year norm. But in absolute terms that does not mean we are certain that June this year was warmer than June last year.
Anomalies are okay, if understood and presented for what they are: a means of tracking changes in average temperature. But that is not how they are used by the warmistas. The ideologues use them to make claims about “warmest month ever,” and that is statistical malpractice.
Basil

Jim Gorman
August 20, 2017 7:32 am

blcjr: [anomalies are] “a means of tracking changes in average temperature.” This is exactly what the CAGW crowd quotes. You are feeding their assumption. I know you are aware of the difference, but the normal person is not; they simply read your text and say, “Oh, the normal temperature is going up or down.”
I usually try to explain anomalies as a differential, that is, an infinitesimally small section of a line with the magnitude and direction of the change. The width of the change is no wider than a dot on the graph. This seems to make more sense to most people.

rd50
August 19, 2017 4:31 pm

Editor
August 21, 2017 12:41 pm

rd50 ==> Sorry — who? give you a link to what?

HAS
August 19, 2017 4:41 pm

Actually it isn’t uncontroversial. One problem does lie with the uncertainty and its distribution. Another with working with linear transformations of variables in non-linear systems.

Aphan
August 19, 2017 4:53 pm

TonyL
It gets better-
“[If we knew the absolute truth, we would use that instead of any estimates. So, your question seems a little difficult to answer in the real world. How do you know what the error on anything is if this is what you require? In reality, we model the errors – most usually these days with some kind of monte carlo simulation that takes into account all known sources of uncertainty. But there is always the possibility of unknown sources of error, but methods for accounting for those are somewhat unclear. The best paper on these issues is Morice et al (2012) and references therein. The Berkeley Earth discussion on this is also useful. – gavin]” (Dec 23, 2014 same thread)
If we KNEW the truth (but we don’t) we’d use that. So we model the KNOWN errors, but we have no idea if we’ve got all of the errors at all, and how we account for the unknown errors isn’t clear.
BUT NOAA said “Average surface temperatures in 2016, according to the National Oceanic and Atmospheric Administration, were 0.07 degrees Fahrenheit warmer than 2015 and featured eight successive months (January through August) that were individually the warmest since the agency’s records began in 1880.”
Not even a HINT that it’s an “estimate”, or that it’s not the absolute truth, or that the margin of error…+/- 0.5K is WAYYYY bigger than the 0.07 F ESTIMATE.
Perhaps this is why the “fairly astute” readership at WUWT has never viewed the use of “anomalies” in a positive manner or “absolutely” agreed with the idea that they are even a close approximation to Earths actual temperature.

Robert of Ottawa
August 19, 2017 5:32 pm

Yes indeed, 0.07 ± 0.5 doesn’t appear to be very significant, does it 🙂

jorgekafkazar
August 19, 2017 5:14 pm

Just think of it as a statistical rug under which to sweep tangled web weaving.

Aphan
August 19, 2017 5:28 pm

jorgekafkazar-
Right!
And yet they say “the Earth’s temperature is increasing” instead of “the Earth’s anomalies are increasingly warmer,” etc. Al Gore says “the Earth has a temperature” instead of “the Earth has a higher anomaly.” And since Gav and the boys ALL ADMIT that it’s virtually impossible to know “exactly” what Earth’s actual global average temperature is, that Earth is not adequately covered with thermometers, and that the thermometers we DO have are not all properly sited, maintained, and accurate… why in the crap do we let them get away with stating that “average surface temperatures were 0.07 F warmer” than a prior year? Why would any serious “scientist” with any integrity use that kind of language when he’s really talking about something else??
Oh yeah… rug weaving. 🙂

Sheri
August 20, 2017 9:04 am

Aphan: that “average surface temperatures were 0.07 F warmer” than a prior year
If only they did actually say that. They don’t even say that. It’s just “hottest year ever” with no quantification, usually.

Clyde Spencer
August 19, 2017 6:45 pm

TonyL,
Yes, at least some of us are aware of the ‘proper’ use of anomalies. At issue is whether anomalies are being used properly. Gavin even admits that frequently they are not: “This means we need to very careful in combining these two analyses – and unfortunately, historically, we haven’t been and that is a continuing problem.”

TonyL
August 19, 2017 7:09 pm

At issue is whether anomalies are being used properly.

Very True.
A closely related issue:
The ongoing story of the use, misuse, and abuse of statistics in ClimateScience! is the longest running soap opera in modern science.
The saga continues.

Rick C PE
August 19, 2017 8:49 pm

TonyL: I disagree that the use of anomalies is well known.

Anomaly
NOUN
Something that deviates from what is standard, normal, or expected:
“there are a number of anomalies in the present system”
Synonyms: oddity, peculiarity, abnormality, irregularity, inconsistency

My objection is that the reporting of data as anomalies, like reporting averages without the variance, standard deviation or other measure of dispersion, simply reduces the value of the information conveyed. It eliminates the context. It is not a common practice in statistical analysis in engineering or most scientific fields. None of my statistics textbooks even mentions the term. It simply reduces a data set to the noise component.
While it seems to be common in climate science, the use of the term anomaly implies abnormal, irregular or inconsistent results. But, as has been extensively argued here and elsewhere, variation in the temperature of our planet seems to be entirely normal.
That said, I do get that when analyzing temperature records it is useful to look at temperatures for individual stations as deviations from some long term average. E.g. if the average annual temp. in Minneapolis has gone from 10 C (long term average) to 11 C and the temp. in Miami has gone from 20 to 21 C, we can say both have warmed by 1 C.
Of course, if one averages all the station anomalies and all the station baseline temperatures, the sum of those two averages would be identical to the average of all the actual measured temperatures.
But it is another thing to only report the average of the ‘anomalies’ over hundreds or thousands of stations without including any information about the dispersion of the input data. Presenting charts showing only average annual anomalies by year for 50, 120, 1000 years is pretty meaningless.
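Rick C PE’s identity above is easy to verify with made-up station data: the mean of the anomalies plus the mean of the baselines equals the mean of the raw temperatures, exactly, whenever every station contributes to both averages.

```python
baselines = [10.0, 20.0, 15.0]  # hypothetical long-term station means, degC
temps     = [11.0, 21.0, 15.5]  # hypothetical current readings, degC

anomalies = [t - b for t, b in zip(temps, baselines)]
lhs = sum(anomalies) / len(anomalies) + sum(baselines) / len(baselines)
rhs = sum(temps) / len(temps)
print(lhs, rhs)  # identical (up to float rounding)
```

The identity breaks, of course, as soon as the set of reporting stations differs between the anomaly average and the baseline average, which is his point about dispersion and context being discarded.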

ATheoK
August 20, 2017 6:50 am

“TonyL August 19, 2017 at 4:13 pm
The proper use of anomalies is well known and the reasons are sound. I would have thought that the use of anomalies would be entirely uncontroversial to the fairly astute readership at WUWT.
This appears to be attempting to make an issue where there is none.
It’s a Nothingburger.
Fake News.”

The “fake news and nothingburger” starts right with Gavin, his mouth, his writing and Gavin’s foul treatment of others.

“TonyL August 19, 2017 at 4:13 pm
The proper use of anomalies is well known and the reasons are sound.”

What absurd usage of “well known” and “the reasons are sound”, TonyL.
Just another fake consensus Argumentum ad Populum fallacy.
Use of anomalies can be proper under controlled conditions for specific measurements,
• When all data is kept and presented unsullied,
• When equipment is fully certified and verified,
• When measurements are parallel recorded before and after installation and impacts noted,
• When temperature equipment is properly installed everywhere,
• When temperature equipment installation represents all Latitudes, Longitudes, elevations, rural, suburban and urban environments,
• When temperatures and only temperatures are represented: not some edited version of the data, and no fill-in, smudging, or other data-imitation method.
Isn’t it astonishing that “adjustments”, substitutions, deletions, and data creation based on distant stations introduce obvious error bounds into temperature records, yet 0.5K is the alleged total error range?
Error bounds are not properly tracked, determined, applied or fully represented in end charts.
Gavin and his religious pals fail to track, qualify or quantify error rates making the official NOAA approach anti-science, anti-mathematical and anti-anomaly. NOAA far prefers displaying “snake oil”, derision, elitism, egotism and utter disdain for America and Americans.
“Double speak” is far too nice a description for Gavin and NOAA misrepresented temperatures. Climastrologists’ abuse of measurements, data keeping, error bounds and data presentation would bring criminal charges and civil suits if used in any industry producing real goods Americans depend upon.

NW sage
August 19, 2017 4:19 pm

Kip – good post!
The REAL answer of course is normally called ‘success testing’. Using this philosophy the test protocol – in this case the way the raw data is treated/analyzed – is chosen in order to produce the kind of result desired. NOT an analysis to find out if the temperatures are warmer, colder, or the same but to produce results that show there is a warming trend.
The usual way of detecting this success-testing phenomenon is to read the protocol and see just how much scientific technobabble is there (think of the Stargate TV series). The more technobabble, the less credible the result.

August 19, 2017 5:18 pm

This is what is really going on. Station selection, data selection, and methodology selection allow the gate-keepers of the temperature record and the global-warming religion to produce the number they want.
Think of it as someone standing over the shoulder of a data analyst in the basement of the NCDC each month saying “Well, what happens if we pull out the 5 African stations on the eastern side? How about we just add in that station with all the warming errors? Let’s adjust the buoys up and pretend it is because of ship engine intakes that nobody can/will check? Why don’t we bump up the time-of-observation bias adjustment and make a new adjustment for the MMTS sensors? Show me all the stations that have the highest warming. Let’s just drop those 1500 stations that show no warming. The South American stations are obviously too low by 1.0C. Just change them and call it an error.
We’ll call it version 4.4.3.2.”

David A
August 19, 2017 6:24 pm

…which explains why 50 percent of the data is often not used, or is made up or extrapolated.

Nick Stokes
August 19, 2017 4:23 pm

Gavin had an analogy. If you’re measuring a bunch of kids to see who’s the tallest, running a ruler head to foot, you can get a good answer. If you measure the height of their heads above sea level, there is a lot more uncertainty. So which would you do?

Latitude
August 19, 2017 6:00 pm

elevation above sea level of the classroom floor….
…and then make adjustments for the weight of each child…..because they are making the floor sink

D. Cohen
August 19, 2017 6:19 pm

To continue the analogy, what people want to know is ***not*** which kid is tallest, but rather which kid is highest above sea level, allowing for the possibility that the “sea level” — that is, the global absolute temperature — may be changing over time (day by day and year by year) in a way that is very difficult to measure accurately.

Greg
August 20, 2017 1:17 am

No, the best way is to measure their height using low orbit satellite range finding, whilst getting the kids to jump up and down on a trampoline and measure the reflection off the surface of the trampoline at the bottom of the movement. This is accurate to within +/- 1mm as has been established for sea level measurements.

Mark - Helsinki
August 20, 2017 12:14 pm

And yet actual absolute measurements are better than statistical output, which is pure fantasy. It’s not a temperature anomaly, it’s a statistical anomaly, which requires a “leap of faith” to accept as a temperature anomaly when talking GISS GAMTA.

Clyde Spencer
August 19, 2017 6:53 pm

NS,
The primary uncertainty is introduced by adding in the elevation above sea level. Neither sea level nor the ground they are standing on is known with the same accuracy or precision as the distance between their feet and hair. Therein lies the problem with temperature anomalies. We aren’t measuring the anomalies directly (height) but obtaining them indirectly from an imperfectly known temperature baseline!

Nick Stokes
August 20, 2017 8:43 am

“Neither sea level or the ground they are standing on is known with the same accuracy or precision as the distance between their feet and hair.”
Exactly. And that is the case here, because we are talking not about individual locations, but about the anomaly average vs the absolute average. And we can calculate the anomaly average much better, just as we can measure better top to toe.
The analogy has another useful feature. Although we are uncertain of the altitude, that uncertainty does not actually affect relative differences, although that isn’t obvious if you just write it as a±b. The uncertainty of the absolute average doesn’t affect our knowledge of one year vs another, say, because that component of error is the same for both. So if you unwisely say that 2016 was 14.7±1, and 2015 was 14.5±1 (numbers made up for this example), then you still know that 2016 was warmer than 2015. The reason is that you took the same number 14.0±1 (abs normal), and added the anomalies of 0.7±0.1 and 0.5±0.1. The normal might have been 13 or 15, but 2016 will still be warmer than 2015.
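Nick Stokes’s cancellation argument can be illustrated with a toy Monte Carlo, using his own made-up numbers: an error shared by both years shifts the absolute values but drops out of their difference. Everything here is invented for illustration.

```python
import random

random.seed(0)
true_normal = 14.0
anom_2015, anom_2016 = 0.5, 0.7          # the made-up anomalies from the comment

diffs = []
for _ in range(20000):
    baseline_err = random.gauss(0.0, 1.0)   # the shared +/-1 baseline error
    t2015 = true_normal + baseline_err + anom_2015 + random.gauss(0.0, 0.05)
    t2016 = true_normal + baseline_err + anom_2016 + random.gauss(0.0, 0.05)
    diffs.append(t2016 - t2015)

mean_diff = sum(diffs) / len(diffs)
spread = (sum((d - mean_diff) ** 2 for d in diffs) / len(diffs)) ** 0.5

# the difference recovers the 0.2 anomaly gap; its spread reflects only
# the small independent +/-0.05 errors, not the shared +/-1 baseline error
assert abs(mean_diff - 0.2) < 0.01
assert spread < 0.2
```

This is precisely the structure TheOtherBobFromOttawa disputes below: it holds only if the ±1 component really is common to both years, which is the assumption under debate.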

TheOtherBobFromOttawa
August 20, 2017 12:45 pm

You clearly have a different understanding of “error” than I do, Nick.
You wrote: “So if you unwisely say that 2016 was 14.7±1, and 2015 was 14.5±1 (numbers made up for this example), then you still know that 2016 was warmer than 2015.”
I would say that the “real value” of the 2016 temperature could be anywhere from 13.7 to 15.7 and “real value” of the 2015 temperature could be anywhere from 13.5 to 15.5. Since the temperature difference between 2015 & 2016 is well within the error range of both temperatures it’s impossible to know which year is warmer or cooler.
That’s what I remember from my first year Physics Prof, some 50 years ago. But maybe Physics has “evolved” since then. :))

TheOtherBobFromOttawa
August 20, 2017 5:16 pm

Thanks Kip. Yes, my thoughts exactly. I didn’t want to repeat the point I made in my first post about adding the errors to get the anomaly error but you covered it most eloquently. Thanks for starting a very interesting discussion.

Nick Stokes
August 21, 2017 12:25 pm

Kip,
“if your ancestors are from Devon”
None from Devon, AFAIK. Lots from Wilts, Glos.

Nick Stokes
August 21, 2017 12:31 pm

“I would say that the “real value” of the 2016 temperature could be anywhere from 13.7 to 15.7 and “real value” of the 2015 temperature could be anywhere from 13.5 to 15.5”
But not independently. If 2016 was at 13.7 because the estimate of normal was wrong on the low side (around 13), then that estimate is common to 2015, so there is no way that it could be 15+.
There are many things that can’t be explained by what you learnt in first year physics.

TheOtherBobFromOttawa
August 21, 2017 3:06 pm

I don’t know what point you’re making in your comment.
And there are many things that Gavin & Co. do that can’t be explained by anyone – at least in a way that makes sense to most people. :))

Streetcred
August 19, 2017 7:09 pm

No problem if all 5 boys are standing on the same level platform … but WE know that the platform is not level !

Urederra
August 20, 2017 6:59 am

One of the kids puts his hair in a bun.

P. Berberich
August 20, 2017 12:45 am

There is another analogy. This morning my wife asks: what’s the outside temperature today? If my answer is “the temperature anomaly is 0.5 K”, and I add that she needs no new clothes, I will run into problems that day.

ATheoK
August 20, 2017 6:58 am

Nor will she nicely ask what the outside temperature is, again.
NOAA should reap equal amounts of derision for their abuse of anomalies.

Mark - Helsinki
August 20, 2017 12:11 pm

what if 60% of the kids are not measured Nick, does Gavin just make it up?

commieBob
August 19, 2017 4:26 pm

Suppose that we have a data set: 511, 512, 513, 510, 512, 514, 512 and the accuracy is +/- 3. The average is 512. The anomalies are: -1, 0, +1, -2, 0 +2, 0 and the accuracy is still +/- 3.
I don’t understand how using anomalies lets us determine the maximum any differently than using the absolute values. There has to be some mathematical bogusness going on in CAGW land. I suspect they think that if you have enough data it averages out and gives you greater accuracy. I can tell you from bitter experience that it doesn’t always work that way.
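commieBob’s arithmetic can be written out directly: converting readings to anomalies is just subtraction of a constant, which cannot shrink the per-reading uncertainty. A minimal sketch of his example:

```python
# Subtracting a constant (the average) shifts the values but leaves
# the +/-3 uncertainty of each individual reading untouched.
data = [511, 512, 513, 510, 512, 514, 512]
avg = sum(data) / len(data)                 # 512.0 exactly

anomalies = [x - avg for x in data]
assert anomalies == [-1.0, 0.0, 1.0, -2.0, 0.0, 2.0, 0.0]

# (x +/- 3) - 512 = (x - 512) +/- 3 : the uncertainty tags along unchanged
reading_uncertainty = 3
anomaly_uncertainty = reading_uncertainty
```

Whether averaging many such anomalies can then beat ±3 depends on the error structure, which is the argument the rest of this thread has.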

Pat Lane
August 19, 2017 5:24 pm

But if you ADD the uncertainties together, you get zero!
Here’s the appropriate “world’s best practice” algorithm:
1. Pick a mathematical operator (+, -, /, *, sin, cos, tan, sinh, Chebychev polynomial etc.)
2. Set uncertainty = 0
2a. Have press conference announcing climate is “worse than originally thought”, “science is settled” and “more funding required.”
3. Calculate uncertainty after applying operator to (homogenised) temperature records
4. Is uncertainty still zero?
5. No, try another operator.
6. go back to 3 or, better yet, 2a.

Pat Lane
August 19, 2017 5:37 pm

The sharp-eyed will note the above algorithm has no end. As climate projects are funded on a per-year basis, this ensures the climate scientist will receive infinite funding.

August 19, 2017 5:32 pm

Thank you Bob!
My math courses in Engineering and grad studies (stats, linear programming, economic modelling, and surprising to me the toughest of all, something called “Math Theory”) were 50 years ago. But the reasoning that somehow anomalies are more precise or have less uncertainty than the absolute values upon which they were based set off bells and whistles in my old noggin. I was very hesitant though to raise any question for fear of displaying my ig’nance..
Maybe both of us are wrong, but now I know I’m in good company. 🙂

Rolf
August 19, 2017 11:03 pm

Me too !

Nick Stokes
August 20, 2017 8:57 am

“The average is 512. The anomalies are: -1, 0, +1, -2, 0 +2, 0”
But you don’t form the anomalies by subtracting a common average. You do it by subtracting the expected value for each site.
“how using anomalies lets us determine the maximum”
You don’t use anomalies to determine the maximum. You use it to determine the anomaly average. And you are interested in the average as representing a population mean, not just the numbers you sampled. The analogy figures here might be
521±3, 411±3, 598±3. Obviously it is an inhomogeneous population, and the average will depend far more on how you sample than how you measure. But if you can subtract out something that determines the big differences, then it can work.
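Nick Stokes’s inhomogeneous-population point can be sketched with invented figures in the spirit of his 521/411/598 example: the raw average swings with which sites you sample, while the anomaly average barely moves. All numbers below are hypothetical.

```python
climatology = {"A": 520, "B": 410, "C": 600}   # invented long-term site means
readings    = {"A": 521, "B": 411, "C": 598}   # invented current readings

# subtract each site's own expected value, not a common average
anomalies = {k: readings[k] - climatology[k] for k in readings}

# raw averages depend heavily on site selection...
raw_ab = (readings["A"] + readings["B"]) / 2    # 466.0
raw_ac = (readings["A"] + readings["C"]) / 2    # 559.5
# ...anomaly averages much less so
anom_ab = (anomalies["A"] + anomalies["B"]) / 2  # 1.0
anom_ac = (anomalies["A"] + anomalies["C"]) / 2  # -0.5

assert abs(raw_ab - raw_ac) > 90    # sampling dominates the raw mean
assert abs(anom_ab - anom_ac) < 2   # but not the anomaly mean
```

The whole scheme stands or falls on how well each site’s climatology is known, which is what the skeptical replies in this thread press on.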

commieBob
August 20, 2017 6:35 pm

That’s what you say. Here’s what Dr. Schmidt said:

But think about what happens when we try and estimate the absolute global mean temperature for, say, 2016. The climatology for 1981-2010 is 287.4±0.5K, and the anomaly for 2016 is (from GISTEMP w.r.t. that baseline) 0.56±0.05ºC. So our estimate for the absolute value is (using the first rule shown above) is 287.96±0.502K, and then using the second [the first and second rules have to do with estimating the uncertainties – see Gavin’s post], that reduces to 288.0±0.5K [2016]. The same approach for 2015 gives 287.8±0.5K, and for 2014 it is 287.7±0.5K. All of which appear to be the same within the uncertainty. Thus we lose the ability to judge which year was the warmest if we only look at the absolute numbers.

My example is a simplified version of the above. If you think Dr. Schmidt erred, that’s between you and him.

ferdberple
August 20, 2017 11:10 am

“the accuracy is still +/- 3.”
Of course it is. But what climate science does is to re-calculate the error statistically from the anomaly and come to the absurd conclusion that the error changed from 0.5 to 0.05. The nonsense is that averaging reduces the variance and gives the misleading impression that it provides a quick way to reduce error. And it does in very specific circumstances. Of which this is not one.

Aphan
August 19, 2017 4:27 pm

Extra! EXTRA! Read all about it! Gavin Schmidt of NASA ADMITS that there has been NO statistically significant CHANGE IN EARTH’S ABSOLUTE TEMPERATURE in the last 30 years!!!

SMC
August 19, 2017 4:36 pm

I’m in denial. A climate scientist actually told the truth… kind’a… sort’a… maybe… in a convoluted way? I don’t believe it. 🙂

Aphan
August 19, 2017 5:33 pm

He told the truth, and then rationalized why that truth is completely unimportant to the actual “science” involved in climate science. Because we ALL know that science is about approximations, estimates, conjectures, ideology, variety, inclusiveness, personal interpretations, pizza parties, casual Fridays (or should I say “causal” Fridays….harharhar), unicorns, pink fuzzy bunny slippers, the flying spaghetti monster and The Wheel of Climate. And if you don’t like unicorns or pizza parties, you’re a hating-hate-hater-denier and should be put to death.
ISIS is more tolerant.

SMC
August 19, 2017 9:44 pm

“Because we ALL know that science is about approximations, estimates, conjectures, ideology, variety, inclusiveness, personal interpretations, pizza parties, casual Fridays (or should I say “causal” Fridays….harharhar), unicorns, pink fuzzy bunny slippers, the flying spaghetti monster and The Wheel of Climate.”
What happened to the rainbows, fairy dust and hockey sticks?
“…hating-hate-hater-denier…”
You forgot lying, hypocritical, sexist, egotistical, homophobic, misogynist, deplorable bigot. :))

Aphan
August 19, 2017 10:33 pm

Thanks SMC….I knew I was forgetting something… 🙂

StuM
August 20, 2017 4:31 am

“NO statistically significant CHANGE IN EARTH’S ABSOLUTE TEMPERATURE in the last 30 years”
Earth’s Absolute Temperature has changed by roughly 4°C in every one of those last 30 years.
Surely that is statistically significant. 🙂

Cold in Wisconsin
August 19, 2017 4:44 pm

What is the sensitivity of the measuring device, and what are the significant figures? Can an average of thousands of measurements accurate to a tenth of a degree be more accurate than each individual measuring device? I am asking an honest question that someone here can answer accurately. We learned significant figures in chemistry, but wouldn’t they also apply to these examples? How accurate are land based temp records versus the satellite measuring devices? This has been a central question for me in all of this “warmest ever” hoopla, and I would appreciate a good explanation.

August 19, 2017 6:12 pm

Kip,
To compound that, in the sixties I was taught that, at least in Engineering, there existed MANY decision rules about whether to round a “5” up or down if it was the last significant digit, and that those recording data often failed to specify which rule they used. We were instructed to allow for that.
I don’t think Wiley Post or Will Rogers gave two hoots about how to round up or down fractional temperatures at their airstrips in the 20’s or early 30’s.
Why modern “Climate Scientists” assume that those who recorded temperature at airports or agricultural stations in 1930 were aware that those figures would eventually be used to direct the economies of the world is typical of the “history is now” generation.

Clyde Spencer
August 19, 2017 7:00 pm

Kip,
The automated weather stations (ASOS) are STILL reading to the nearest degree F, and then converting to the nearest 0.1 deg C.

Walter Sobchak
August 19, 2017 7:08 pm

Those numbers were not anywhere near that good. How often were thermometers calibrated? Were they read with verniers or magnifiers? What did they use to illuminate thermometers for night-time readings? Open flames? And don’t forget all of the issues that Anthony identified with his work on modern weather observation equipment.

Dr. S. Jeevananda Reddy
August 19, 2017 9:58 pm

The temperature data was and is recorded to the first place of decimal. The adjustment is carried out as: 33.15 [0.01 to 0.05] as 33.1, 33.16 as 33.2, 33.25 [0.05 to 0.09] as 33.3. This is also followed in averaging.
Dr.S. Jeevananda Reddy

Clyde Spencer
August 20, 2017 8:25 am
EE_Dan
August 20, 2017 10:56 am

Interesting specification from the ASOS description:
http://www.nws.noaa.gov/asos/aum-toc.pdf
Temperature measurement: From -58F to +122F RMS error=0.9F, max error 1.8F.
“Once each minute the ACU calculates the 5-minute average ambient temperature and dew point temperature from the 1-minute average observations (provided at least 4 valid 1-minute averages are available). These 5-minute averages are rounded to the nearest degree Fahrenheit, converted to the nearest 0.1 degree Celsius, and reported once each minute as the 5-minute average ambient and dew point temperatures. All mid-point temperature values are rounded up (e.g., +3.5°F rounds up to +4.0°F; -3.5°F rounds up to -3.0°F; while -3.6°F rounds to -4.0°F).”
This is presumably adequate for most meteorological work. I’m not sure how we get to a point where we know the climate is warming but it is within the error band of the instruments. Forgive me I’m only a retired EE with 40+ years designing instrumentation systems (etc).
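The mid-point rule quoted from the ASOS manual (halves rounded up, toward positive infinity) can be sketched as follows; the function names are mine, not NOAA’s, and the Celsius step assumes ordinary round-to-nearest on the converted value.

```python
import math

def asos_round_f(deg_f):
    """Nearest whole degree F, with mid-points rounded up (toward +inf),
    per the ASOS rule quoted above. floor(x + 0.5) implements exactly that."""
    return math.floor(deg_f + 0.5)

# the manual's own examples
assert asos_round_f(3.5) == 4       # +3.5F -> +4.0F
assert asos_round_f(-3.5) == -3     # -3.5F -> -3.0F
assert asos_round_f(-3.6) == -4     # -3.6F -> -4.0F

def report_c(deg_f):
    """Convert the rounded whole-degree F value to the nearest 0.1 C,
    mirroring the reporting chain described in the quote."""
    return round((asos_round_f(deg_f) - 32) * 5.0 / 9.0, 1)
```

Note the mismatch EE_Dan highlights: the reported 0.1 C precision is far finer than the 0.9 F RMS error the same specification allows the sensor.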

Mark - Helsinki
August 20, 2017 12:17 pm

Can, Kip, but you can’t know “if” it is.

Greg
August 19, 2017 5:04 pm

If you have one thermometer with a 1 degree scale you would attribute +/-0.5 degrees to a measurement. If it is scientific equipment, it will be made to ensure it is at least as accurate as the scale.
There is a rounding error when you read the scale, and there is the instrumental error.
If you have many readings on different days, the rounding errors will average out. If you have thousands of observation stations, the calibration errors of the individual thermometers will average out.
That is the logic of averages being more accurate than the basic uncertainty of one reading.

August 19, 2017 6:34 pm

Accuracy of scale: If the thermometers from 1880 through early 20th century read in whole degree increments (which was “good enough” for their purposes) then how does one justify declaring this year was the hottest year ever, by tenths of a degree?
Rounding errors will only “average out” if everyone recording temps used a flip of the coin (figuratively) to determine what to record. The reality is some may have used a decision rule to go to the next HIGHEST temp and some the LOWER. Then there’s the dilemma about what to do with “5 tenths”; there were “rules” about that too. You cannot assume the “logic of averages” unless we know how those rules of thumb were applied.

commieBob
August 19, 2017 6:39 pm

Suppose that we have a sine wave of known frequency buried under twenty dB of Gaussian noise. We can detect and reconstruct that signal even if our detector can only tell us whether the signal plus noise is above or below zero volts (i.e. it’s a comparator). By running the process for long enough we can get whatever accuracy we need. link
The problem is that Gaussian noise is a fiction. It’s physically impossible because it would have infinite bandwidth and therefore infinite power. Once the noise is non-Gaussian, our elegant experiment doesn’t work any more. It’s more difficult to extract signals from pink or red noise. link If we can’t accurately describe the noise, we can’t say anything about our accuracy.
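commieBob’s comparator experiment can be run as a toy simulation: a sine with amplitude one-tenth of the Gaussian noise’s standard deviation is still recovered by correlating the sign-only (1-bit) output against a reference sine. All parameters are invented for illustration.

```python
import math
import random

random.seed(1)
n, cycles, amp = 200_000, 5, 0.1     # tone amplitude = 0.1 x noise sigma

corr = 0.0
for i in range(n):
    phase = 2 * math.pi * cycles * i / n
    tone = amp * math.sin(phase)                   # signal buried in noise
    comparator = 1.0 if tone + random.gauss(0.0, 1.0) > 0 else -1.0
    corr += comparator * math.sin(phase)           # correlate with reference

corr /= n
# the 1-bit detector still finds the buried tone: clearly nonzero correlation
assert corr > 0.02
```

This works because the Gaussian noise is independent sample to sample; with correlated (pink/red) or otherwise non-Gaussian noise, the averaging guarantee weakens, which is exactly the caveat above.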

crackers345
August 19, 2017 7:53 pm

kip, if there are n stations and
if the error of the individual
readings is s, the error of the
average will be s/squareroot(n).
small

tty
August 20, 2017 12:59 am

“if there are n stations and if the error of the individual readings is s, the error of the average will be s/squareroot(n).”
Ah, “the Law of large number”. Somebody always drags that up. Sorry but no, that only applies to independent identically distributed random variables.
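tty’s caveat can be shown in a short simulation, with all parameters invented: the s/sqrt(n) shrinkage holds for independent identically distributed errors, but a single systematic error component shared by all stations defeats it entirely.

```python
import random

random.seed(42)
s, n, trials = 1.0, 100, 4000

def spread(xs):
    """Standard deviation of a list of sample means."""
    m = sum(xs) / len(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

# Case 1: independent errors at every station
iid_means = []
for _ in range(trials):
    errs = [random.gauss(0.0, s) for _ in range(n)]
    iid_means.append(sum(errs) / n)

# Case 2: one systematic error shared by all stations, plus small noise
sys_means = []
for _ in range(trials):
    common = random.gauss(0.0, s)
    errs = [common + random.gauss(0.0, 0.1) for _ in range(n)]
    sys_means.append(sum(errs) / n)

assert spread(iid_means) < 0.15    # ~ s / sqrt(n) = 0.1
assert spread(sys_means) > 0.7     # stuck near s: no sqrt(n) rescue
```

Which case real station networks resemble is the substantive disagreement in this thread.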

Urederra
August 20, 2017 7:31 am

Following the child height example:
First case: if you take one child and measure his/her height 10 times, the average is more accurate than a single measurement.
Second case: if you have 10 children and you measure their height once per child, the average height is not more accurate than the individual accuracy.
The temperature in Minneapolis is different from the temperature in Miami. The Earth average temperature belongs to the second case. That is my understanding.
It does not matter, anyway, since the Earth is not in thermal equilibrium or even in thermodynamic equilibrium and therefore the term average temperature is meaningless.

catweazle666
August 20, 2017 6:14 pm

“the error of the
average will be s/squareroot(n).
small”

No it won’t.

Philo
August 19, 2017 5:59 pm

Cold (what else?) in Wisconsin: temperature is an intensive property, the speed of the moving/vibrating atoms and molecules. For climate purposes it is measured by a physical averaging process: the amount the temperature being measured changes the resistance of (usually now) some sort of calibrated resistor, which can be very precise (to hundredths of a degree) but only as accurate as its calibration over a specific range. Averaging temperatures is pretty meaningless. You can average the temperature of the water in a pot and the temperature of the couple of cubic feet of gas heating it and learn nothing. Measuring how the temperature of the water changes tells you something about the amount of energy released by the burning gas, but it’s a very crude calorimeter.
Like that example, the climate is driven by energy movements, not primarily by temperatures.

August 19, 2017 6:53 pm

I’m not a climate scientist (but I did see one on TV) but why aren’t those far more educated than me pointing out Phil’s point which should be obvious to anyone with a basic science education.

You can average the temperature of the water in a pot and the temperature of the couple of cubic feet of gas heating it and learn nothing.

In discussions with my academic son, I point out that I can take the temperature at the blue flame of a match stick and then the temperature of a comfortable bath tub, and the average of the two has no meaning.
The response of course is 97% of scientists say I’m deluded. (Argument from Authority).

Mick
August 19, 2017 6:58 pm

I have environment canada weather app on my phone. I noticed this summer they reported what it feels like rather than the measured number. Or, they use the inland numbers which are a few degrees higher, rather than the coastal number that they have been using at the same airport station for the last 80 years.
They especially do this on the radio weather reports. It feels like…30 degrees

crackers345
August 19, 2017 7:55 pm

george – scientists have
made it very clear that no one
should expect to see the change of the
global average at their
locale.
but the global avg is good
for spotting the earth’s energy
imbalance. not perfect, but
good

tty
August 20, 2017 1:03 am

“but the global avg is good for spotting the earth’s energy imbalance. not perfect, but good”
Actually it is almost completely useless given the very low heat capacity of the atmosphere compared to the ocean (remember that it is the ocean that absorbs and emits the vast majority of solar energy).

TA
August 20, 2017 6:10 am

https://science.nasa.gov/science-news/science-at-nasa/1997/essd06oct97_1
Accurate “Thermometers” in Space
“An incredible amount of work has been done to make sure that the satellite data are the best quality possible. Recent claims to the contrary by Hurrell and Trenberth have been shown to be false for a number of reasons, and are laid to rest in the September 25th edition of Nature (page 342). The temperature measurements from space are verified by two direct and independent methods. The first involves actual in-situ measurements of the lower atmosphere made by balloon-borne observations around the world. The second uses intercalibration and comparison among identical experiments on different orbiting platforms. The result is that the satellite temperature measurements are accurate to within three one-hundredths of a degree Centigrade (0.03 C) when compared to ground-launched balloons taking measurements of the same region of the atmosphere at the same time. ”
The satellite measurements have been confirmed by the balloon measurements. Nothing confirms the bastardized surface temperature record.
And this:
http://www.breitbart.com/big-government/2016/01/15/climate-alarmists-invent-new-excuse-the-satellites-are-lying/
“This [satellite] accuracy was acknowledged 25 years ago by NASA, which said that “satellite analysis of the upper atmosphere is more accurate, and should be adopted as the standard way to monitor temperature change.”
end excerpts
Hope that helps.

Tony
August 19, 2017 4:52 pm

Watch me pull a rabbit out of my hat “±0.05ºC” … what utter rubbish!

Eric Stevens
August 19, 2017 4:55 pm

I am puzzled as to how, over a period of 30 years, temperatures can be established only to ±0.5K, yet the GISTEMP 2016 anomaly against that baseline has an uncertainty of only ±0.05ºC. How is the latter more precise? Is it that different measuring techniques are in use?

Greg
August 19, 2017 5:17 pm

The order of magnitude is not necessarily wrong, because they are different things; there is no reason why they should be the same. But I don’t believe either the 0.5 or the 0.05 figures.

Greg
August 19, 2017 5:38 pm

The problem is that, while the instrumental and reading errors are random and will average out, allowing a sqrt(N) error reduction, you cannot apply the same logic to the number of stations, and this is exactly what they do to get the silly uncertainties.
They try to argue that they have N-thousand measurements of the same thing: the mean temperature. This is not true, because you cannot measure a mean temperature; it is not physical, it is a statistic of individual measurements. Neither does the world have A temperature which you can try to measure at a thousand different places.
So all you have is thousands of measurements, each with a fixed uncertainty. That does not get more accurate, any more than doing a thousand measurements on Mars would let you claim that you know the mean temperature of the inner planets more accurately than you know the temperature of Earth.
The temperatures at different places are really different. You don’t get a more accurate answer by measuring more different things.

Bob boder
August 19, 2017 6:34 pm

There is no evidence that an error mechanical in nature would average out with more samples anyway. Devices of the same type tend to drift or fail in the same direction.

August 19, 2017 7:10 pm

But they are NOT “different things”.
If one is defined as a deviation from another, you can’t separate them, no matter how many statistical tricks you apply.

tty
August 20, 2017 1:11 am

“while the instrumental and reading errors are random and will average out allowing a sqrt(N) error reduction”
Just what makes you believe that?

August 20, 2017 5:01 am

Greg, you are moving from the verifiable to the hypothetical with the statement about errors averaging out. The mathematics is based on the elements of a set having precise properties (IID).
Also, one of the pillars of the scientific method is the method of making measurements: you design the tools to achieve the resolution you want. Were the temperature measurement stations set up to measure repeatably with sub-0.1K uncertainty? No, they weren’t. Neither were the bucket measurements of SST.
And that is the fundamental problem with climate scientists. They are dealing in hypotheticals but believing that it is real. They have crossed into a different area.

wyzelli
August 19, 2017 5:07 pm

It is also well worth remembering (or learning) the difference between MEAN and MEDIAN and paying close attention to which one is used where in information sources.
So many reports that “the temperature is above the long term MEAN”, when in a Normal Distribution exactly half of the samples are higher than the mean!
It’s an interesting and worthwhile exercise to evaluate whether the temperature series at any particular station resembles a Normal Distribution…

wyzelli
August 19, 2017 5:11 pm

For comparison purposes, note that sea ice extent is usually referenced to the MEDIAN.

Stephen Greene
August 19, 2017 5:14 pm

I was looking at temp. and CO2 data last week to see if NASA, NOAA and GISS would pass FDA scrutiny if approval were sought. There is a lot to it, but from acquisition to security to analysis, as well as quality checks for biases in sampling, missing data, and not to mention changing historical data, etc., the answer is no. NOT EVEN CLOSE! Blinding is a big deal. So, ethically, I believe any climate scientist who is also an activist must blind ALL PARTS of a study to ensure quality. What about asking to audit all marchers on Washington who received federal grants but do not employ FDA-level or greater quality standards? Considering Michael Mann would not turn over his data to the Canadian courts last month, this might be a hoot, and REALLY VALUABLE!

Rick C PE
August 19, 2017 5:15 pm

TonyL: I disagree that the use of the “anomalies” is well known.

a·nom·a·ly
[əˈnäməlē]
NOUN
something that deviates from what is standard, normal, or expected:
“there are a number of anomalies in the present system”
synonyms: oddity · peculiarity · abnormality · irregularity · inconsistency

While it is used extensively in climate science these days, it is a very uncommon approach in statistical analysis, engineering and many scientific fields. The term or process is not mentioned or described in any of my statistics text books. I have spent 40 years in the business of collecting and analyzing all kinds of measurements and have never seen the need to convert data to ‘anomalies’. It can be viewed as simply reducing a data set to the noise component. My main objection is that, like an average without an estimate of dispersion such as the variance or standard deviation, it serves to reduce the information conveyed. Also, as the definition of anomaly indicates, it implies abnormality, irregularity, etc. As has been widely argued here and elsewhere, significant variability in the temperature of our planet seems quite normal.

Robert of Ottawa
August 19, 2017 5:17 pm

I think this is a fine demonstration of the fallacy of false precision. Also of statistical fraud.
We can’t let the proles think, “Hey, guess what, the temperature hasn’t changed!”

August 19, 2017 5:21 pm

On the one hand, because of latitudinal (temperate zone) and altitudinal (lapse rate) differences, a global average temperature is meaningless. On the other hand, a global average stationary-station anomaly (correctly calculated) is meaningful, especially for climate trends. So it is useful if the stations are reliable (most aren’t).
On the other hand, useful anomalies hide a multitude of other climate sins, not the least of which is the gross difference between absolute and ‘anomaly’ discrepancies in the CMIP5 archive of the most recent AR5 climate models. They get 0 C wrong by +/-3 C! So CMIP5 is not at all useful. The essay ‘Models all the way Down’ in the ebook Blowing Smoke covers the details of that, and more. See also the previous guest post here, ‘The Trouble with Models’.

Greg
August 19, 2017 6:00 pm

I agree that anomalies make more sense in principle, if you want to look at whether the earth has warmed due to changing radiation, for example.
The problem is that the “climatology” for each month is the mean of about 30 days of that month over 30 years: roughly 900 data points. For any given station they will have a range of perhaps 5-10 deg C, with a distribution. You can take 2 standard deviations as the uncertainty of how representative that mean is, and I’ll bet that is more than 0.05 deg C. So the uncertainty on your anomaly can never be lower than that.
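Greg’s estimate can be checked with a quick numerical sketch. All numbers below are synthetic and purely illustrative (a hypothetical station with ~900 base-period readings and an assumed 2.5 deg C day-to-day spread); this is not any dataset’s actual method:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical station: ~900 daily readings for one calendar month
# over a 30-year base period, with an assumed 2.5 deg C day-to-day
# spread around a 12 deg C monthly mean (all numbers illustrative).
readings = rng.normal(loc=12.0, scale=2.5, size=900)

climatology = readings.mean()       # the 30-year monthly "normal"
sd = readings.std(ddof=1)           # spread of individual days
sem = sd / np.sqrt(len(readings))   # standard error of that mean
two_sigma = 2 * sem                 # Greg's 2-std-dev criterion

print(f"climatology: {climatology:.2f} C")
print(f"daily spread (1 SD): {sd:.2f} C")
print(f"2 SD of the mean: {two_sigma:.3f} C")
```

With these assumptions the 2 SD figure comes out near 0.17 deg C, well above 0.05 deg C, as Greg suspects; and since consecutive days are not independent, the effective sample size is smaller than 900 and the true uncertainty larger still.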

Streetcred
August 19, 2017 7:16 pm

For anomalies to be useful in any respect , the original data should not be tampered with.

David Chappell
August 20, 2017 4:25 am

Ristvan: “On the one hand, because of latitudinal (temperate zone) and altitudinal (lapse rate) differences, a global average temp is meaningless.”
What you are saying in simple terms is that a global average temperature is also a crock of fecal matter.

Tom Halla
August 19, 2017 6:14 pm

This is like the rules for stage psychics doing cold readings: do not be specific about anything checkable.

Greg
August 19, 2017 6:18 pm

Another error they usually ignore is sampling error: is the sample a true and accurate representation of the whole? In the case of SST, almost certainly not.
Sampling patterns and methods have been horrendously variable and erratic over the years. The whole engine-room/buckets fiasco is largely undocumented and is “corrected” based on guesswork, often blatantly ignoring the written records.
What uncertainty needs to be added due to incomplete sampling?

Clyde Spencer
August 19, 2017 6:24 pm

KIP,
Something buried in the comments section of Gavin’s post is important and probably overlooked by most:
“…Whether it converges to a true value depends on whether there are systematic variations affecting the whole data set, but given a random component more measurements will converge to a more precise value.
[Response: Yes of course. I wasn’t thinking of this in my statement, so you are correct – it isn’t generally true. But in this instance, I’m not averaging the same variable multiple times, just adding two different random variables – no division by N, and no decrease in variance as sqrt(N).”
Gavin is putting to rest the claim by some that taking large numbers of temperature readings allows greater precision to be assigned to the mean value. To put it another way, the systematic seasonal variations swamp the random errors that might allow an increase in precision.
Another issue is that, by convention, the uncertainty represents +/- one (or sometimes two) standard deviations. He doesn’t explicitly state whether he is using one or two SD. Nor does he explain how the uncertainty is derived. I made a case in a recent post ( https://wattsupwiththat.com/2017/04/23/the-meaning-and-utility-of-averages-as-it-applies-to-climate/ ) that the actual standard deviation for the global temperature readings for a year might be about two orders of magnitude greater than what Gavin is citing.
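Gavin’s point in that exchange (summing two independent random variables adds their variances, with no sqrt(N) shrinkage because nothing is being averaged) can be illustrated with a minimal numerical sketch using the numbers from his example; this demonstrates the statistical rule only, not GISTEMP’s processing:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Two independent random variables, as in Gavin's example:
# a climatology with sigma = 0.5 K and an anomaly with sigma = 0.05 K.
climatology = rng.normal(287.4, 0.5, n)
anomaly = rng.normal(0.56, 0.05, n)

# The absolute estimate is their sum; variances add, so the combined
# sigma is sqrt(0.5^2 + 0.05^2) ~ 0.502 -- no division by N anywhere.
absolute = climatology + anomaly
expected_sigma = np.sqrt(0.5**2 + 0.05**2)

print(f"empirical sigma: {absolute.std():.3f}")
print(f"expected sigma:  {expected_sigma:.3f}")
```

The combined uncertainty stays essentially at 0.5 K, which is exactly why the absolute values for 2014–2016 overlap while the anomalies do not.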

Gary Kerkin
August 19, 2017 6:38 pm

Schmidt cites two references as to why anomalies are preferred, one from NASA and one from NOAA. The latter is singularly useless as to why anomalies should be used. The opening paragraph of the NASA reference states:

The reason to work with anomalies, rather than absolute temperature is that absolute temperature varies markedly in short distances, while monthly or annual temperature anomalies are representative of a much larger region. Indeed, we have shown (Hansen and Lebedeff, 1987) that temperature anomalies are strongly correlated out to distances of the order of 1000 km.

Two factors are at work here. One is that the data is smoothed. The other is that the anomalies of two different geographical locations can be compared whilst the absolute temperatures cannot.
Is smoothed data useful? I guess that is moot, but it is true to say that any smoothing process loses fine detail, the most obvious of which is diurnal variation. Fine detail includes higher-frequency information, and removing it makes the analysis of natural processes more difficult.
Is a comparison of anomalies at geographically remote locations valid? I would think it would be, provided the statistics of the data from both locations are approximately the same. For example, since most analysis is based on unimodal gaussian distributions (and normally distributed at that), if the temperature distributions at the two locations are not normal, can a valid comparison be made? Having looked at distributions in several locations in New Zealand, I know that the distributions are not normal. Diurnal variation would suggest at least a bimodal distribution, but several stations exhibit at least trimodal distributions. The more smoothing applied to the data set the more closely the distribution will display normal, unimodal behaviour.
I suspect that smoothing the data is the primary objective, hiding the inconvenient truth that air temperature is a natural variable and is subject to a host of influences, many of which are not easily described, and incapable of successful, verifiable modeling.

August 19, 2017 8:41 pm

Re: Gary Kerkin (August 19, 2017 at 6:38 pm)
[James] Hansen is quoting himself again, it’s all very inbred when you start reading the supporting – or not – literature!
However the literature doesn’t agree and he knows that he is dissembling.
In [James] Hansen’s analysis, the isotropic component of the covariance of temperature assumes a constant correlation decay* in all directions. However, “It has long been established that spatial scale of climate variables varies geographically and depends on the choice of directions” (Chen, D. et al. 2016).
In the paper The spatial structure of monthly temperature anomalies over Australia, the BOM definitively demonstrated the inappropriateness of Hansen’s assumptions about correlation of temperature anomalies:

In reality atmospheric fields are rarely isotropic, and indeed the maintenance of westerly flow in the southern extratropics against frictional dissipation is only possible due to the northwest-southeast elongation of transient eddy activity (Peixoto and Oort 1993). Seaman (1982a) provides a graphic illustration of this anisotropy on weather time-scales for the Australian region…This observation of considerable anisotropy is in contrast with Hansen and Lebedeff (1987) for North America and Europe.. We also note the inappropriateness of the function used by P.D. Jones et al. (1997) for describing anisotropy (at least for Australian temperature), which limits the major and minor axes of the correlation ellipse to the zonal and meridional direction (see Seaman 1982b).
Clearly, anisotropy represents an important characteristic of Australian temperature anomalies, which should be accommodated in analyses of Australian climate variability.(Jones, D.A. & Trewin, B. 2000)

*Decreasing exponentially with their spatial distance, spatial scales are quantified using the e-folding decay constant.

August 19, 2017 11:01 pm

Mod or Mods! Whoops! I just realised that my comment above was directed at James Hansen of NASA but might be confused with the Author of the post, Kip Hansen!
To be clear, Gavin Schmidt(NASA), references James Hansen(NASA) quoting J.Hansen who references NASA(J.Hansen)! It’s turtles all the way down 😉

wyzelli
August 20, 2017 4:02 pm

It is true that temperature data is not normally distributed. At the very least, most sets I have looked at are noticeably skewed. The problem is that the variation from normal at each station differs from other stations, and comparing, specifically averaging, non-homogeneous data presents a whole other set of difficulties (i.e., it shouldn’t be done).

Gunga Din
August 19, 2017 7:41 pm

Why use anomalies instead of actual temperatures?
They produce swingier trends?

Walter Sobchak
August 19, 2017 8:22 pm

Another reason to use anomalies instead of temperatures is that the graph of anomalies can be centered at zero and show increments of 0.1°, which can make noise movements look significant. If you use temperatures, any graph should show kelvin from absolute zero. Construct a graph using those parameters, and the “warming” of the past 30 years looks like noise, which is what it is. A 1 K movement is only ~0.35%; not much. It is just not clear why we should panic over a variation of that magnitude.

Greg
August 20, 2017 12:57 am

” If you use temperatures, any graph should show Kelvin with absolute zero. ”
Nonsense; you scale the graph to show the data in the clearest way, with appropriately labelled axes.

hunter
August 20, 2017 4:36 am

“Clearest”? or “most dramatic for our sales goals?”
There is a fine line between the two.
If 0.1 degree actually made an important difference to anything at all, then maybe scales used today would be informative.
Instead they are manipulative, giving the illusion of huge change when that is not the case.

hunter
August 20, 2017 4:40 am

If the scale simply reflected the range of global temperatures, the graph would represent the changes honestly and people could make informed decisions.
That is counter to the goals of the consensus.

Alan Davidson
August 19, 2017 8:22 pm

Isn’t the real answer that if actual temperatures were used, graphical representations of temperature vs time would be nice non-scary horizontal lines?

hunter
August 20, 2017 4:37 am

Yep.
“Keep the fear alive” is an important tool in the climate consensus tool kit.

BigBubba
August 19, 2017 8:53 pm

From a management perspective it always pays to hire staff that give you 10 good reasons why something CAN be done rather than 10 good reasons why something CAN’T be done:
So the question is: Why has the temperature data not been presented in BOTH formats? Anomaly AND Absolute.

crackers345
August 19, 2017 8:59 pm

it has been presented
in both formats.
see karl et al’s 2015 paper in
Science.

hunter
August 20, 2017 4:41 am

But only the manipulative, fear-inducing scary scale is used in public discussions.

crackers345
August 22, 2017 9:27 pm

hunter – conclusions are independent of scale.
obviously.

crackers345
August 22, 2017 9:35 pm

kip – giss doesn’t quote an
absolute temperature
their site has a long faq answer
https://data.giss.nasa.gov/gistemp/faq/abs_temp.html

TheOtherBobFromOttawa
August 19, 2017 9:12 pm

This is a very interesting discussion. I’ve been thinking about this for some time. Consider the following.
The temperature anomaly for a particular year, as I understand it, is obtained by subtracting the 30-year average temperature from the temperature for that year. Assuming both temperatures have an error of +/- 0.5 C, the calculated anomaly will have an error of +/- 1.0 C. When adding or subtracting numbers that have associated errors, one must ADD the errors of the numbers.
So the anomaly’s “real value” is even less certain than either of the two numbers it’s derived from.

Greg
August 20, 2017 1:00 am

If you can argue that the errors are independent and uncorrelated you can use the RMS error but yes, always larger than either individual uncertainty figure.
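Greg’s distinction can be written down in a couple of lines. A minimal sketch (the 0.5 C figures are taken from the comment above; the functions are illustrative, not any dataset’s actual code):

```python
import math

def linear_sum(e1: float, e2: float) -> float:
    """Worst case: fully correlated errors add linearly."""
    return e1 + e2

def quadrature_sum(e1: float, e2: float) -> float:
    """Independent, uncorrelated errors add in quadrature (RMS)."""
    return math.hypot(e1, e2)

print(linear_sum(0.5, 0.5))                 # worst case: 1.0
print(round(quadrature_sum(0.5, 0.5), 3))   # independent: 0.707
```

Either way the combined error is larger than either individual 0.5 C figure, which is the point of both comments above.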

richard verney
August 20, 2017 1:31 am

Let me correct that: it is not whether you can argue that the errors are independent and uncorrelated, but whether they truly are.
Yet in the climate field it would appear that the errors are neither independent nor uncorrelated. There would appear to be systematic biases, such that uncertainty is not reduced.

crackers345
August 22, 2017 9:37 pm

+/- 0.5 C is way too high, especially with modern equipment.

Mark Johnson
August 19, 2017 9:40 pm

The take-away, and this is to be found in other disciplines as well, is “never let the facts get in the way of a good story.” The Left and the media just love to apply it.

August 19, 2017 10:00 pm

The desperation is to try to get some sort of important signal to show something important with climate, so they are looking at sample noise as data these days.

Phillip Bratby
August 19, 2017 10:53 pm

There is nothing anomalous about the global average temperature as it changes from year to year (well there wouldn’t be if such a thing as global average temperature existed). The global average temperature has always varied from year to year, without there being any anomalies.

crackers345
August 22, 2017 9:37 pm

explain the long-term trend

Aristoxenous
August 19, 2017 11:28 pm

The alarmists [NASA, NOAA, UK Met Office] cannot even agree among themselves what ‘average global temperature’ means. Freeman Dyson has stated that it is meaningless and impossible to calculate; he suggests that a reading would be needed for every square km. Like an isohyet, is the measurement going to be reduced/increased to a given density altitude? What lapse rates: ambient or ISO?
The satellite observations are comparable because they relate to the same altitude with each measurement, but anything measuring temperatures near the ground is a waste of time and prohibitively expensive at one station per square km, or even per 100 square km.

crackers345
August 22, 2017 9:39 pm

why every sq km?
temperature stations aren’t free.
so the question is, what station density gives
the desired accuracy?

Aristoxenous
August 19, 2017 11:30 pm

ISA not ISO.

Mark - Helsinki
August 19, 2017 11:34 pm

As Dr Ball says, and I agree, averages destroy the accuracy of data points.
Given we need accuracy for science (and we do), absolute temperatures would need to be used. Science is numbers: actual numbers, not averaged numbers. If science worked with averages we’d never have had steam engines.
Take model runs.
Out of 100 runs, one is the most accurate (no two runs are the same, so one must be closest), but we can’t know which one, and its accuracy is really just luck given the instability of the output.
Because we do not understand why, or which run is the accurate one, we destroy that accuracy by averaging it with the other 99 runs.
Probability is useless in this context, as the averages and probabilities conceal the problem: we don’t know how accurate each run is.
This is then made worse by using multiple model ensembles, which dilute the unknown accuracy even further, to the point where we have a range of 2 C to 4.5 C or above. This is not science, and it is not probability; it is guessing.
The only use of loads of model ensembles is to increase the range of “probability”, and this probability does not relate to the real physical world; it’s a logical fallacy.
The ranges between different temperature anomaly data sets perform the same function as the wide-cast net of model ensembles.
Now you know why they don’t use absolute temperatures: using those would increase accuracy, narrow the “probabilities” and remove the averages that allow for the wide-cast net of non-validated “probabilities”.
The uncertainty calculations are rubbish. We are given uncertainty from models, not the real world; the uncertainty exists only in averages and probabilities, not in the climate and actual real-world temperatures.

Mark - Helsinki
August 19, 2017 11:42 pm

NOAA’s instability and wildly different runs prove my point. An average of garbage is garbage.
If NOAA performs 100 runs, take the two that vary most, and that is your evidence that they have no idea.

richard verney
August 20, 2017 1:27 am

Or at any rate, it gives an insight into the extent of error bounds.

August 20, 2017 4:10 am

Speaking of models and the breathtaking circularity inherent in the reasoning of much contemporary Climate Science!
The assessment of the reliability of sampling-error estimates (in the application of anomalies to large-scale temperature averages in the real world) is tested using temperature data from 1000-year control runs of GCMs! (Jones et al., 1997a)
And that is a real problem, because the models have the same inbuilt flaw: they only output gridded areal averages!
Thus the tainting of raw data occurs in the initial development of the station data set, because spatial coherence is assumed for nearby series in the homogenisation techniques applied at this stage (where many stations are adjusted and some omitted because of “anomalous” trends and/or “non-climatic” jumps).
The aggregation of the “raw” data (gridding, in the final stage) yet again fundamentally changes its distribution, as well as adding further sampling errors and uncertainties. Several different methods are used to interpolate the station data to a regular grid, but all assume omnidirectional spatial correlation, due to the use of anomalies.

Mark - Helsinki
August 20, 2017 7:39 am

Grids set to a preferred size and position only serve to fool people.
We need a scientifically justified radius for each data point, grounded in topography and site conditions (Anthony’s site survey would be critical for such).
Mountains, hills and all manner of topography matter, as do large local water bodies, as well as the usual suspects of urbanisation etc.
This is a massive task, and we are better off investing everything into satellites and developing that network further to solve some temporal issues for better clarity.
Still, satellites are good for anomalies if they pass the same location at the same time each day, but we should depart from anomalies because they are transient and explaining why is nigh impossible.
A 50 km-deep chunk of the atmosphere is infinitely better than the surface station network, for more reasons than not.
Defenders of the surface data sets are harming science.

Mark - Helsinki
August 20, 2017 7:42 am

With regard to surface data sets, a station with a local lake, hills and a town nearby: all of that needs to be accounted for and solved. Wind speed data also needs to be incorporated to improve the data.
This is not happening. It is never going to happen.

Mark - Helsinki
August 20, 2017 12:30 pm

I agree, Kip, that was my point about it all: models are not for accuracy. But still, out of 100 runs one is the most accurate, and the other 99 destroy that lucky accuracy.
My point also is that they don’t want accuracy (as they see it), because what if a really good model ran cool?
That won’t do.
They need a wide cast net to catch a wide range of outcomes in order to stay relevant.

Mark - Helsinki
August 20, 2017 12:32 pm

and to say, oh look, the models predicted that.
Furthermore, NOAA’s model output is an utter joke. If, as I said, you take the difference between the two most different runs from an ensemble, they vary widely, which shows the model is casting such a wide net that it is hard to actually say it’s wrong (or way off the mark).
Of course, we can’t model chaos. 🙂
Giving an average of chaos is what they are doing, and it’s nonsense.

crackers345
August 22, 2017 9:41 pm

Mark – Helsinki –
>> With regards to surface data sets, a station with a local lake hills and town, all of that needs to be accounted for and solved. Wind speed data also needs to be incorporated to improve the data. <<
not if the station
hasn't moved.
no one is interested in
absolute T.

Mark - Helsinki
August 19, 2017 11:44 pm

you can simply calculate the uncertainty for real in the 100 model runs by measuring the difference between the two most contrary runs. Given the difference in output per run at NOAA, that means real uncertainty in that respect is well in excess of 50%.

crackers345
August 22, 2017 9:42 pm

no.
that’s like saying you can flip a coin 100 times, and do this 100 times, and the
uncertainty is the max of the max and min counts.
that’s simply not how it’s done — the standard deviation
is easily calculated.
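The two proposals in this exchange (Mark’s max-minus-min range and crackers345’s standard deviation) measure different things, which a short sketch on synthetic ensemble output makes concrete (the run values below are invented, not NOAA’s):

```python
import numpy as np

rng = np.random.default_rng(7)

# 100 hypothetical model runs of some end-of-run anomaly (invented numbers).
runs = rng.normal(loc=1.0, scale=0.3, size=100)

spread_range = runs.max() - runs.min()   # max-minus-min, per Mark
spread_sd = runs.std(ddof=1)             # standard deviation, per crackers345

# For ~100 draws from a normal distribution the full range is
# typically about 5 standard deviations, so the range will always
# look far larger than the SD without carrying more information.
print(f"range: {spread_range:.2f}, sd: {spread_sd:.2f}")
```

The range grows with ensemble size while the standard deviation stabilizes, which is why the two commenters reach such different conclusions from the same runs.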

Tom Halla
August 22, 2017 9:52 pm

crackers, that is an invalid use of statistics. It is more analogous to shooting at 100 different targets with the same error in aim than to measuring the same thing 100 times. The error remains the same, and does not even out.

crackers345
August 22, 2017 9:58 pm

no tom. shooting isn’t random; its results
contain several biases.
a true coin, when flipped sufficiently, does not

Tom Halla
August 22, 2017 11:33 pm

With both shooting and taking a temperature reading multiple times over a span of time, one is doing or measuring different things multiple times, not measuring the same thing multiple times. Coin tosses are not equivalent.

Mark - Helsinki
August 19, 2017 11:48 pm

As in, take 100 runs and calculate how far the model can swing in either direction; for this you only need the two most different runs. There is your uncertainty.

August 22, 2017 2:37 am

Kip ==> This following part of my comment was about data collection in the real world:

Thus the tainting of raw data occurs in the initial development of the station data set, because spatial coherence is assumed for nearby series in the homogenisation techniques applied at this stage (where many stations are adjusted and some omitted because of “anomalous” trends and/or “non-climatic” jumps).
The aggregation of the “raw” data (gridding, in the final stage) yet again fundamentally changes its distribution, as well as adding further sampling errors and uncertainties. Several different methods are used to interpolate the station data to a regular grid, but all assume omnidirectional spatial correlation, due to the use of anomalies.

I was trying to show how the “fudge” is achieved in the collection of raw data and how circular it is to then use gridded model outputs to estimate the sampling errors of that very methodology! 😉

August 19, 2017 11:48 pm

For your readers not familiar with physics: Gavin Schmidt says “…The climatology for 1981-2010 is 287.4+/-0.5K, and the anomaly for 2016 is (from GISTEMP w.r.t. that baseline) 0.56+/-0.05ºC.”
C stands for Celsius or Centigrade. One degree C is also one kelvin (K), except that zero degrees C = 273.15 K. (In theory no temperature can be less than 0 K, absolute zero.)
Why this is important for climate is that the equation used is the Stefan-Boltzmann law, where temperature (T) is expressed in kelvin, in fact T to the power of 4 (T^4, or T*T*T*T).
https://en.wikipedia.org/wiki/Stefan–Boltzmann_law
You can argue that the error can be fixed by using 14.2 degrees Celsius (287.4 minus 273.2) in the equation, because all the temperatures can be converted by adding 273.2 to the measurements.
But then you have to argue that the error in 0.56+/-0.05 is acceptable. An error of 5 parts in 56 is about nine per cent in relative terms, but only 0.05 K in absolute terms, ten times smaller than the 0.5 K error on the climatology. So it seems that Gavin Schmidt has won his argument: using only temperature anomalies gives a more precise and accurate result.
But hold on a minute: can Dr Schmidt really estimate the temperature anomaly with that accuracy from pole to pole and all the way around the globe?
Richard Lindzen has addressed this question by reference to a study by Stanley Grotch published by the AMO.
You will find the reference here and in Richard Lindzen’s YouTube lecture, Global Warming, Lysenkoism, Eugenics, at the 30:37 minute mark.
Grotch’s paper claimed that the land (CRU) and ocean (COADS) datasets pass his tests of normality and freedom from bias. His presentation is reasonable.
However, his Figure 1 shows that the 26,000 datapoints range between plus and minus 2 degrees Celsius, while the signal (the mean temperature) ranges from approximately -0.2 C to +0.2 C over a period of 130 years, a rate of about 0.3 C per century. The signal is swamped by noise.
Dr Schmidt is basing his claims on spurious precision in the processing of the data.
https://geoscienceenvironment.wordpress.com/2016/06/12/temperature-anomalies-1851-1980/
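The T^4 point can be made concrete with a short sketch: pushing Gavin’s 287.4±0.5 K absolute figure through the Stefan-Boltzmann law shows how the absolute uncertainty translates into emitted flux (the constant is standard; interpreting the global mean as a simple grey-body emitter is, of course, a gross simplification of the real climate system):

```python
# Stefan-Boltzmann law: j = sigma * T^4, with T in kelvin.
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def flux(t_kelvin: float) -> float:
    """Black-body radiative flux for a given temperature."""
    return SIGMA * t_kelvin ** 4

# The +/-0.5 K uncertainty on the absolute mean maps to a flux band:
lo, mid, hi = flux(286.9), flux(287.4), flux(287.9)
print(f"{lo:.1f}  {mid:.1f}  {hi:.1f} W/m^2")
```

The 1 K-wide band spans roughly 5.4 W/m² of flux (since dj/dT = 4·sigma·T³), which is one way to see why a ±0.5 K absolute uncertainty matters when feeding temperatures into radiative formulas.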


Mark - Helsinki
August 20, 2017 1:11 am

yeah, where is the CRU raw data?
What have they done with the data in the last 20 years?
Were they not caught cooling the ’40s intentionally just to reduce anomalies? Yes, they were caught removing the blip from the data, something NASA, JMA, BEST etc. have also done.
The level of agreement between these data sets over 130 years shows either (1) collusion or (2) reliance on the same bad data.
Nonsense.

Mark - Helsinki
August 20, 2017 1:13 am

As you probably already know, they are using revised history to assess current data sets. As such, any assessments are useless.
We need all of the pure raw data, most of which no longer exists.

Mark - Helsinki
August 20, 2017 7:29 am

Good post tbh.
“However, his Figure 1 shows that the 26,000 datapoints range between plus and minus 2 degrees Celsius , while the signal (the mean temperature) ranges from approximately -0.2 C to +0.2 C over a period of 130 years, a rate of about 0.3 C per century. The signal is swamped by noise.
Dr Schmidt is basing his claims on spurious precision in the processing of the data.”
The logical fallacy is real-world temperature anomalies vs what GISS says they are.
The certainty that GISS is accurate is actually unknown, which means the uncertainty is closer to 90% than 5%.

Mark - Helsinki
August 20, 2017 7:30 am

Schmidt must keep the discussion within the confines of GISS output, avoiding the real world at every stage in terms of equipment accuracy and lack of coverage.

crackers345
August 22, 2017 9:45 pm

all the groups get essentially the same surface trend: giss, noaa, hadcrut, jma, best.
so clearly giss is not an outlier. this isn’t rocket science.

Mark - Helsinki
August 20, 2017 12:27 pm

Indeed, data processing. It produces GISS’s GAMTA, and funnily it also produces cosmic background radiation for NASA.

Dan Davis
August 19, 2017 11:59 pm

Possible new source for temperature data: River water quality daily sets.
Graphs of temp. data across the regions and the globe would be quite interesting.
Probably a much more reliable daily set of records…

crackers345
August 22, 2017 10:05 pm

why more reliable?

knr
August 20, 2017 12:24 am

Gavin Schmidt, who was, let us not forget, hand-picked by Dr Doom to carry on his ‘good work’.
Given we simply lack the ability to take any such measurements in a scientifically meaningful way, all we have is a ‘guess’. Therefore, no matter what the approach, what is being said is ‘we think it’s this, but we cannot be sure’.

crackers345
August 22, 2017 9:46 pm

so why can’t temperature be measured, in your opinion?

C.K. Moore
August 20, 2017 1:19 am

Over the years I’ve noticed one thing about Gavin Schmidt’s explanations in RealClimate–they are excessively thorough and generally cast much darkness on the subject. If he was describing a cotter pin to you, you’d picture the engine room of the Queen Mary.

richard verney
August 20, 2017 1:20 am

This article raises a more fundamental issue and problem besetting the time-series land-based thermometer record: how do you calculate an anomaly when the sample set is never the same over time but is instead constantly changing?
I emphasise that the sample set used to create the anomaly in 1880 is not the same sample set used in 1900, which in turn is not the same as that used in 1920, 1940, 1960, 1980, 2000 or 2016.
If one is not using the same sample set, the anomaly does not represent anything of meaning.
Gavin claims that “The climatology for 1981-2010 is 287.4±0.5K”. However, the sample set (the reporting stations) in, say, 1940 is not the set of stations reporting data in the climatology period 1981 to 2010, so we have no idea what anomaly applies to the data coming from the stations used in 1940. We do not know whether the temperature is more or less than in 1940, since we are not measuring the same thing.
The time-series land-based thermometer data set needs complete re-evaluation. If one wants to know whether there may have been any change in temperature since, say, 1880, one should identify the stations that reported data in 1880, ascertain which of these have continuous records through to 2016, and then use only those stations (i.e., the ones with continuous records) to assess the time series from 1880 to 2016.
If one wants to know whether there has been any change in temperature from, say, 1940, one performs a similar task: identify the stations that reported data in 1940, ascertain which have continuous records through to 2016, and use only those.
So one would end up with a series of time series, perhaps one for every 5-year interlude. Of course, there would still be problems with such series because of station moves, encroachment of UHI, changes in nearby land use, equipment changes etc., but at least one of the fundamental issues with the time-series set would be overcome. Theoretically a valid comparison over time could be made, but error bounds would be large due to siting issues, changes in nearby land use, changes of equipment, maintenance etc.

August 20, 2017 3:46 am

Re: richard verney (August 20, 2017 at 1:20 am)
To your charge, Richard: James Hansen “doth protest too much” for my liking.

…a charge that has been bruited about frequently in the past year, specifically the claim that GISS has systematically reduced the number of stations used in its temperature analysis so as to introduce an artificial global warming. GISS uses all of the GHCN stations that are available, but the number of reporting meteorological stations in 2009 was only 2490, compared to [circa]6300 usable stations in the entire 130 year GHCN record. (Hansen et al. 2010)

He doesn’t address the problem (in that paper) to my satisfaction, because elsewhere in the literature it is made clear that the change in number and spatial distribution of station data is a source of error larger than the reported (or purported!) trends.

Nick Stokes
August 20, 2017 9:02 am

“how do you calculate an anomaly when the sample set is never the same over time”
Because you don’t calculate the anomaly using a sample set. That is basic. You calculate each station anomaly from the average (1981-2010 or whatever) for that station alone. Then you can combine them in an average, which is the first point at which you have to deal with the sample set.
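[The per-station procedure Nick describes can be illustrated with a minimal sketch. The station names and values are invented toy numbers: each station is differenced against its own baseline average first, and only then are the anomalies combined. -- Ed.]

```python
from statistics import mean

# Toy annual means (deg C); the first three years stand in for a
# baseline period such as 1981-2010, the last entry for the target year.
station_temps = {
    "alpha": [10.0, 10.2, 10.1, 10.6],
    "beta":  [15.0, 15.1, 14.9, 15.4],
}

anomalies = {}
for name, temps in station_temps.items():
    baseline = mean(temps[:3])             # each station's own baseline average
    anomalies[name] = temps[-1] - baseline

# Only at this step does the sample set enter: combining across stations.
global_anomaly = mean(anomalies.values())
print(round(global_anomaly, 2))            # 0.45
```

Note that the absolute offsets between the two stations (roughly 5 deg C here) never enter the combined figure; only each station's departure from its own history does.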

crackers345
August 22, 2017 9:48 pm

richard verney – >> how do you calculate an anomaly when the sample set is never the same over time but instead it is constantly changing? <<
You take an average. GISS uses '51-'80, but the choice is arbitrary.

AndyG55
August 20, 2017 1:34 am

“0.56±0.05ºC”
RUBBISH. No way is the GISS error anywhere near that level.

Clyde Spencer
August 20, 2017 8:37 am

AndyG55,
Yes, I have read the Real Climate page that Kip linked, and the links that Gavin provides to explain why anomalies are used, and nowhere do I see an explanation for how the stated uncertainty is derived or an explanation of how it can be an order of magnitude greater precision than the absolute temperatures. My suspicion is that it is an artifact of averaging, which removes the extreme values and thus makes it appear that the variance is lower than it really is.

crackers345
August 22, 2017 9:49 pm

August 20, 2017 1:53 am

The postulate that global temperatures have not increased in the last 100 years is easily supported after a proper error analysis is applied.
People driven by a wish to find danger in temperature almost universally fail at proper error analysis. There is a good deal of scattered, incomplete literature about using statistical approaches, two standard deviations and all that type of talk; but this addresses the precision variable more than the accuracy variable. These two variables act on the data, and both have to be estimated in the search for proper confidence limits to bound the total error uncertainty.
This is not the place to discuss accuracy in the estimation of global temperature guesses, because that takes pages. Instead, I will raise but one ‘new’ form of error and note the need to investigate this type of error elsewhere than here in Australia. It deals with the transition from ‘liquid in glass’ (LIG) thermometry to the electronic devices whose Aussie shorthand is ‘AWS’, for Automatic Weather Station. These largely replaced LIG here in the 1990s.
The crux is in an email from the Bureau of Meteorology to one of our little investigatory group.
“Firstly, we receive AWS data every minute. There are 3 temperature values:
1. Most recent one second measurement
2. Highest one second measurement (for the previous 60 secs)
3. Lowest one second measurement (for the previous 60 secs)
Relating this to the 30 minute observations page: For an observation taken at 0600, the values are for the one minute 0559-0600”
When data captured at one-second intervals are studied, there is a lot of noise. Tmax, for example, could be a degree or so higher than the one-minute values around it. They seem to be recording (signal + noise) when the more valid variable is just ‘signal’. One effect of this method of capture is to enhance the difference between high and low temperatures on the same day, adding to the meme of ‘extreme variability’, for what that is worth.
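[The effect Geoff describes is easy to demonstrate with synthetic numbers. This is a toy simulation, not BOM data; the signal level and noise spread are assumptions. Take 60 noisy one-second samples of a steady signal, and the highest one-second value sits well above the one-minute mean. -- Ed.]

```python
import random

random.seed(1)
true_signal = 25.0     # steady 'true' air temperature, deg C (assumed)
noise_sd = 0.3         # assumed sensor/turbulence noise, deg C
samples = [true_signal + random.gauss(0, noise_sd) for _ in range(60)]

minute_mean = sum(samples) / len(samples)   # stays close to the signal
minute_max = max(samples)                   # what a highest-one-second scheme keeps

print(f"mean {minute_mean:.2f}, max {minute_max:.2f}")
# The reported Tmax carries the noise peak, not just the signal.
```

The larger the sample count per minute, the further the recorded extreme drifts above the underlying signal, which is the (signal + noise) point exactly.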
A more detailed description is at
https://kenskingdom.wordpress.com/2017/08/07/garbage-in-garbage-out/
https://kenskingdom.wordpress.com/2017/03/01/how-temperature-is-measured-in-australia-part-1/
https://kenskingdom.wordpress.com/2017/03/21/how-temperature-is-measured-in-australia-part-2/
This procedure differs in other countries, so other countries are not collecting temperature in a way that will match ours here. That is an error of accuracy. It is large and it needs attention. Until it is fixed, there is no point to claims of a global temperature increase of 0.8 deg C/century, or whatever the latest trendy guess is. Accuracy problems like this and others combine to put a more realistic +/- 2 deg C error bound on the global average, whatever that means.
Geoff.

richard verney
August 20, 2017 2:37 am

But think of the very different response of an LIG thermometer, which could easily miss such temperature highs if they are of such short-lived duration.
This is why retro-fitting with the same type of equipment used in the 1930s/1940s is so important if we are to assess whether there has truly been a change in temperature since the historic highs of the 1930s/1940s.

hunter
August 20, 2017 4:29 am

Yes. This is a reasonable and low cost way to test the current vs. the past instruments. It also tests the justifications of those who change the past.
I pointed this out a few months ago but got nowhere with it. If you can think of a way to push the idea forward, Godspeed.

August 21, 2017 5:33 am

RV,
Experienced metrologists would agree with your test scheme. The puzzle is why it was not done before, officially. Maybe it was; I do not know. Thank you for raising it again.
As you know, it remains rather difficult to get officials to adopt such suggestions. If you can help that way, that is where effort could be well invested. Geoff

August 20, 2017 5:10 am

Geoff,
I have said this on here before. The best post by far was Pat Frank’s about calibration of instruments. All that needed to be said was in that. A lot of us, including yourself, are all talking about the same idiocy. It’s nice to have it demonstrated.

August 21, 2017 5:39 am

Nc75,
Where have you seen this raised before? Have you commented before on the methods different countries use to treat this signal noise problem with AWS systems? It is possible that the BOM procedure, if we read it correctly, could have raised Australian Tmax by one or two tenths of a degree C compared with USA since the mid 1990s. Geoff

August 22, 2017 11:45 am

Geoff
If I recall, Pat Frank’s paper looked at drift of electronic thermometers. It may be similar, at least in approach, to what you are talking about, but the general idea is that whatever techniques are used, they have to be seen in the broader context of repeatability and microsite characterisation: effects that appear to swamp tenths-of-a-degree signals and approach full-degree variations.

Clyde Spencer
August 20, 2017 8:43 am

Geof,
Indeed, it is done slightly differently in the US. Our ASOS system collects 1-minute average temperatures in deg F, averages five of these into a 5-minute value rounded to the nearest deg F, converts that to the nearest 0.1 deg C, and sends the result to the data center. http://www.nws.noaa.gov/asos/pdfs/aum-toc.pdf
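[A sketch of that rounding chain, as one reading of the steps Clyde lists; the input values are made up. Note that rounding to a whole deg F before converting quantises the result in steps of about 0.56 deg C, even though a 0.1 deg C figure is transmitted. -- Ed.]

```python
def asos_report(one_minute_f):
    """Average five 1-minute deg F values, round to the nearest whole deg F,
    then convert and round to the nearest 0.1 deg C."""
    avg_f = sum(one_minute_f) / len(one_minute_f)
    rounded_f = round(avg_f)        # nearest whole deg F
    deg_c = (rounded_f - 32) * 5 / 9
    return round(deg_c, 1)          # nearest 0.1 deg C

# Five hypothetical 1-minute values:
print(asos_report([71.2, 71.4, 71.6, 71.3, 71.8]))   # 21.7
```

Any set of inputs averaging between 70.5 and 71.5 deg F lands on the same 21.7 deg C report, so the transmitted 0.1 deg C resolution overstates what the method actually preserves.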

August 22, 2017 1:03 am

Clyde,
Can we please swap some email notes on this. sherro1 at optusnet dot com dot au
A project in prep is urgent if that is OK with you. Geoff

Clyde Spencer
August 22, 2017 9:05 pm

Kip,
You said, “BTW — The °F recorder temps are thus +/- 0.5 °F and the °C are all +/- 0.278°C — just by the method.”
Strictly speaking, 0.5 °F is equivalent to 0.3 °C: when multiplying an infinitely precise constant (5/9) by a number with only one significant figure, one is only justified in retaining as many significant figures as the factor with the fewest!

August 20, 2017 3:52 pm

Kip,
I merely noted that the proposition of zero change fits between properly-constructed error bounds and gave an example of a large newish error. The plea is to fix the error calculations, not to fix the state of fear about minor changes in T. Geoff

August 21, 2017 5:48 am

Kip
I do not want to draw this out, but I see stuff-all physical symptoms of temperature rise. What are the top 3 indicators that make you think that way? Remember that in Australia there is no permanent snow or ice, no glaciers, few trees tested for dendrothermometry, no sea-level rise evidence above the longer-term normal, and Antarctic territory showing next to no instrumental rise and a number of falls; so it is a good stage to conclude that the players are mainly acting fiction. Geoff.

August 22, 2017 1:00 am

Kip
I would be delighted to develop some ideas with you.
But not here. Do send me an opening email at sherro1 at optusnet dot com dot au
Geoff

crackers345
August 22, 2017 9:50 pm

Temp is obviously increasing, because (macro) ice is melting and sea level is rising.

John Soldier
August 20, 2017 2:01 am

Off topic somewhat:
Are you, like me, continuously annoyed by the way the media (especially TV weather reporters) refer to the plural of maximum and minimum temperatures as “maximums” and “minimums”?
This shows an ignorance of the English language as any decent dictionary will confirm.
The correct terms are of course maxima and minima.
The various editors and producers should get their acts into gear and correct this usage.

August 20, 2017 2:40 am

±0.05ºC

While the atmospheric temperature profile varies by some 80 K within the troposphere alone at any given time, Gavin’s precision is high-quality entertainment:
http://rfscientific.eu/sites/default/files/imagecache/article_first_photo/articleimage/compressed_termo_untitled_cut_rot__1.jpg

Nik
August 20, 2017 2:47 am

As archived 21 June 2017: 2017 90 108 112 88 88
Today: 2017 98 113 114 94 89 68 83
https://web.archive.org/web/20170621154326/https://data.giss.nasa.gov/gistemp/tabledata_v3/GLB.Ts+dSST.txt

DWR54
August 20, 2017 3:28 am

August 15, 2017: Starting with today’s update, the standard GISS analysis is no longer based on ERSST v4 but on the newer ERSST v5.

crackers345
August 22, 2017 9:51 pm

You should know this.

DWR54
August 20, 2017 2:57 am

This article raises a more fundamental issue and problem that besets the time series land based thermometer record, namely how do you calculate an anomaly when the sample set is never the same over time but instead it is constantly changing?

In the case of estimating temperature change over time, surely that’s an argument in favour of using anomalies rather than absolute temperatures?
Absolute temperatures at two or more stations in a region might differ by, say, 2 °C or more, depending on elevation and exposure. That matters if absolute temperatures are what you’re interested in (at an airport, for example); but if you’re interested in how temperatures at each station differ from their respective long-term averages for a given date or period, then anomalies are preferable.
Absolute temperatures might differ considerably between stations in the same region, but their anomalies are likely to be similar.

hunter