Climate Science Double-Speak: Update

Update by Kip Hansen

 

Last week I wrote about UCAR/NCAR’s very interesting discussion on “What is the average global temperature now?”.

[Adding link to previous post mentioned.]

Part of that discussion revolved around the question of why current practitioners of Climate Science insist on using Temperature Anomalies — the difference between the current average temperature of a station, region, nation, or the globe and its long-term, 30-year base period, average — instead of simply showing us a graph of the Absolute Global Average Temperature in degrees Fahrenheit or Celsius or Kelvin.

Gavin Schmidt, Director of the NASA Goddard Institute for Space Studies (GISS) in New York, and co-founder of the award-winning climate science blog RealClimate, has come to our rescue to help us sort this out.

In a recent blog essay at RealClimate titled “Observations, Reanalyses and the Elusive Absolute Global Mean Temperature”, Dr. Schmidt gives us the real answer to this difficult question:

“But think about what happens when we try and estimate the absolute global mean temperature for, say, 2016. The climatology for 1981-2010 is 287.4±0.5K, and the anomaly for 2016 is (from GISTEMP w.r.t. that baseline) 0.56±0.05ºC. So our estimate for the absolute value is (using the first rule shown above) is 287.96±0.502K, and then using the second [the first and second rules have to do with estimating the uncertainties – see Gavin’s post], that reduces to 288.0±0.5K [2016]. The same approach for 2015 gives 287.8±0.5K, and for 2014 it is 287.7±0.5K. All of which appear to be the same within the uncertainty. Thus we lose the ability to judge which year was the warmest if we only look at the absolute numbers.”

You see, as Dr. Schmidt carefully explains for us non-climate-scientists, if they use Absolute Temperatures the recent years are all the same — no way to say this year is the warmest ever — and, of course, that just won’t do — not in “RealClimate Science”.
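Dr. Schmidt’s arithmetic is easy to check. A minimal Python sketch follows: the climatology and the 2016 anomaly come straight from the quote, while the 2015 and 2014 anomalies (0.4 and 0.3) are inferred here by subtracting the climatology from the absolute values he reports.

```python
import math

# Sketch of the arithmetic in the quoted passage. The climatology and 2016
# anomaly are from the quote; the 2015 and 2014 anomalies are inferred by
# subtracting the climatology from the absolute values quoted.
clim, clim_err = 287.4, 0.5          # 1981-2010 climatology, K
anom_err = 0.05                      # GISTEMP anomaly uncertainty, K
anoms = {2016: 0.56, 2015: 0.4, 2014: 0.3}

for year in sorted(anoms):
    absolute = clim + anoms[year]
    # uncertainties of independent terms add in quadrature (the "first rule")
    err = math.sqrt(clim_err**2 + anom_err**2)
    print(f"{year}: {absolute:.1f} ± {err:.1f} K")
# 2014: 287.7 ± 0.5 K
# 2015: 287.8 ± 0.5 K
# 2016: 288.0 ± 0.5 K
```

All three years land inside a shared ±0.5 K band, which is exactly the overlap Schmidt describes.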

# # # # #

Author’s Comment Policy:

Same as always — and again, this is intended just as it sounds — a little tongue-in-cheek but serious as to the point being made.

Readers not sure why I make this point might read my more general earlier post:  What Are They Really Counting?

# # # # #

 

Sweet Old Bob

What a tangled web we weave …….

Odd is it not that some fifty years ago the accepted standard for the world was 14.7C @ 1313 Mb.
I just converted Mr Schmidt’s Kelvin that he calculates as the average 287.8K = 14.650C so in the last fifty years there has been virtually no change. I want my warming, it is as cold as a witches tit where I live.

It is not odd.
It is an embarrassment.

“Gavin Schmidt, Director of the NASA Goddard Institute for Space Studies (GISS) in New York, and co-founder of the award winning climate science blog RealClimate, has come to our rescue to help us sort this out.
In a recent blog essay at RealClimate titled “Observations, Reanalyses and the Elusive Absolute Global Mean Temperature”, Dr. Schmidt gives us the real answer to this difficult question:”

None of the titles claimed by Schmidt disguises the fact that Gavin Schmidt is an elitist who believes himself so superior that he will not meet others as equals.
It is a lack of quality that Gavin Schmidt proclaims loudly and displays smugly when facing scientists; one can imagine how far superior Schmidt considers himself above normal people.
Further proof of Schmidt’s total lack of honest, forthright science is Gavin’s latest snake-oil sales pitch, “climate science double-speak”.

“wayne Job August 20, 2017 at 3:36 am
Odd is it not that some fifty years ago the accepted standard for the world was 14.7C @ 1313 Mb.
I just converted Mr Schmidt’s Kelvin that he calculates as the average 287.8K = 14.650C so in the last fifty years there has been virtually no change. I want my warming, it is as cold as a witches tit where I live.”

Wayne Job demonstrates superlatively that no matter how Gavin and his obedient goons adjust temperatures, they are unable to hide current temperatures from historical or common-sense comparisons.
Gavin should be permanently and directly assigned to Antarctica, where he can await his dreaded “global warming” as the Antarctic witch.

Sceptical lefty

Sorry to be pedantic, but I believe that the pressure should have been 1013mb.
As an aside, it’s a real bitch when the inclusion of realistic error figures undermines one’s whole argument. This sort of subversive behaviour must be stopped!

PiperPaul

14.7 is also air pressure in PSI at sea level! I’m 97% sure there’s some kind of conspiracy here…

Pat Frank

Good point about the errors. Gavin shows the usual consensus abhorrence of tracking error.
If the climatology is known only to ±0.5 K and the measured absolute temperature is known to ±0.5 K, then the uncertainty in the anomaly is their root-sum-square = ±0.7 K.
There’s no avoidance of uncertainty by taking anomalies. It’s just that consensus climate scientists, apparently Gavin included, don’t know what they’re doing.
The anomalies will inevitably have a greater uncertainty than either of the entering temperatures.
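The root-sum-square figure in the comment above is a one-line check, assuming (as the comment does) that the two ±0.5 K uncertainties are independent:

```python
import math

# Combining two independent ±0.5 K uncertainties in quadrature,
# as described in the comment above.
u_climatology = 0.5   # K
u_measurement = 0.5   # K
u_anomaly = math.sqrt(u_climatology**2 + u_measurement**2)
print(round(u_anomaly, 2))   # 0.71, i.e. roughly ±0.7 K
```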

Sorry, the Mb should read 1013. I do know that the temp was right; as an old flight engineer, those were the standard figures for engine and take-off performance.

…when we practice to receive – grants, lots and lots of taxpayer funded grants!

Bill Hanson

Stunning.

We live in a world of absolute temperature numbers, not long term averages. Averages have no social meaning.

NW sage

Averages are a statistical method of trying to detect meaning when there is none.

Climatology is about averages. Knowing, for example, the 30-year average temperature at a given location is useful for some purposes. Climatologists erred when they began to try to predict these averages without identifying the statistical populations underlying their models, for to predict without identifying this population is impossible.

george e. smith

NOTHING ever happens twice; something else happens instead. So any observation creates a data set with one element; the observation itself.
And the average value of a data set containing a single element is ALWAYS the value of that one element. So stick with the observed values; they are automatically the correct numbers to use.
G

Gavin should learn a little Math – specifically, Significant Digits. If the climatology is to a precision of 0.1, then the Anomaly MAY NOT BE calculated to a precision greater than 0.1 degree. Absolute or Anomaly – both ought to show that the temperatures are the same.
I always wonder: if the Alarmists’ case is so strong, then why do they need to lie?

Santa Baby

In postmodernism nothing is truth. Except postmodern consensus policy based science?

Auto

Santa
“postmodern consensus policy based science” is the revealed and frighteningly enforceable truth.
Disagree and – no tenure.
Out on your ear.
Never mind scientific method.
Sad that science has descended into a belief system, isn’t it??
Auto

Bill Powers

On a somewhat different tack, check your local TV channel weather meteorologists. I have detected a pattern in the markets where I have lived: when the temperature is above the average over time, they almost always say that the “Temperature was above NORMAL today,” but when it is below, they say that the “Temperature was below the AVERAGE” for this date.
Subliminally we receive a bad-news message when the temperature is not “normal,” but it comes across as somewhat non-newsworthy to be innocuously below an “average.” Do they teach them this in meteorology courses?
CAGW Hidden Persuaders? Check it out. Maybe it’s just my imagination.

Crispin in Waterloo but really in Bishkek

What I hear is a continuous reference to the ‘average’ temperature with no bounds as to what the range of ‘average’ is.
It is not nearly enough to say ‘average’ temperature for today is 25 C and not mention that the thirty years which contributed to that number had a range of 19-31. The CBC will happily say the temperature today is 2 degrees ‘above average’ but not say that it is well within the normal range experienced over the calibration period.
The use of an ‘anomaly’ number hides reality by pretending there is a ‘norm’ that ‘ought to be experienced’ were it not for the ‘influence’ of human activities.
All this is quite separate from the ridiculous precision claimed for Gavin’s numbers which are marketed to the public as ‘real’. These numbers are from measurements and the error propagation is not being done and reported properly.

crispin, the baseline is not the “norm.” it’s
just an arbitrary choice to compare temperatures
against. it can be changed at will. it
hides nothing

george e. smith

Well nuts ! the observed value IS the norm; it can never be anything else.
G

Patrick MJD

No, not your imagination. It’s to scare people, ie, the warm/cold is abnormal (Somehow) when it is perfectly normal. I am seeing this in Australian weather broadcasts more and more now.

tom s

I am a meteorologist…30 yrs now. I cannot stand TV weather. I never watch it anymore as I do all my own forecasting. It’s catered to 7-yr-olds. It’s painful to watch. I need not listen to any of these dopes. No, I am not a TV weatherman.

AGW is not Science

I actually haven’t taken notice of the differences between how “above” and “below” average temps are referenced, but I have always abhorred the (frequent, and seemingly prevailing) use of the word “normal” in that respect.
As I like to say, “There IS no “normal” temperature – it is whatever it is.” What they are calling “normal” is an average temperature of a (fairly arbitrarily selected) 30-year period (and at one point they weren’t moving the reference period forward as they were supposed to, because they knew that was going to raise the “average” temps and thereby shrink the “anomalies,” thereby undermining (they felt) the “belief” in man-made climate catastrophe).
I object to the word “anomaly” as well, because it once again suggests that there is something “abnormal” about any temperature that is higher or lower than a 30-year average, which itself is nothing more than a midpoint of extremes. There IS NOTHING “ANOMALOUS” about a temperature that is not equal to ANY “average” of prior temperatures, which itself is nothing more than a midpoint of extremes. “Anomalies” are complete BS.
Great, revealing OP.

JohnWho

Wait, does that mean all the years are the “hottest ever” or none of them?
I note that Gavin states with certainty that it is uncertain and it is somewhat surprising that he does so.

Latitude

why current practitioners of Climate Science insist on using Temperature Anomalies….
…it’s easier to hide their cheating

Menicholas

Also, it becomes obvious that the amounts of difference they are screaming about are below the limits of detection to a person without instrumentation.

AGW is not Science

BINGO!

Tom in Florida

“Thus we lose the ability to judge which year was the warmest if we only look at the absolute numbers.”
And of course, you lose the ability to scare people into parting with their money.
Snake Oil Salesman: The phrase conjures up images of seedy profiteers trying to exploit an unsuspecting public by selling it fake cures.

Gunga Din

“Thus we lose the ability to judge which year was the warmest if we only look at the absolute numbers.”

So…in other words, if the actual temperatures won’t make it “warmest year ever!”, we’ll use something else to make it the “swarmiest year ever!”.
(http://www.urbandictionary.com/define.php?term=Swarmy)

TonyL

The proper use of anomalies is well known and the reasons are sound. I would have thought that the use of anomalies would be entirely uncontroversial to the fairly astute readership at WUWT.
This appears to be attempting to make an issue where there is none.
It’s a Nothingburger.
Fake News.

Greg

Agreed.

Greg

“The proper use of anomalies is well known and the reasons are sound. ”
Agreed.

— a little tongue-in-cheek but serious as to the point being made.

So what is the serious point being made? That you don’t understand why anomalies are used?

Latitude

“All of which appear to be the same within the uncertainty”

Greg

Gav would do better to try to explain why he is averaging (i.e. adding) temperatures of land and sea, which are totally different physical media and thus not additive:
https://climategrog.wordpress.com/category/bad-methods/

seaice1

“So what is the serious point being made? That you don’t understand why anomalies are used?”
That appears to be the case. I suggest anyone who finds this amusing go and read the article at realclimate with an open mind and you may then understand why anomalies are used. Ho ho. As if that will happen! We can all share in the joke.

bobl

Actually, the whole of climate science would do well to explain why they use the unreliable, almost nonphysical concept of temperature to do anything useful, since the actual physical parameter is energy. Temperatures represent vastly different energies depending on the phase of matter and the medium in which they are measured: for example, between a dry day and a humid day, between smog and air, or between ozone and oxygen. The assumption of constant relative humidity alone makes the whole thing a pseudoscience.

KTM

Bobl, it is so they can take a high-energy maximum daily temperature and directly add it to a low-energy minimum temperature, then divide that value in half as if they are both equivalent, to arrive at an average temperature without proper weighting.
When is the last time you heard a Warmist talking about maximum temperatures? It’s taboo to discuss those in polite society.

blcjr

In terms of statistics, the point is valid. To compare a “spot” temperature against an “average” (like a 30 year norm) ignores the uncertainty in the “average.” This is similar to the difference between a “confidence interval” and a “prediction interval” in regression analysis. The latter is much greater than the former. In the first case one is trying to predict the “average.” In the second case one is trying to predict a specific (“spot” in the jargon of stock prices) observation.
Implicitly, an anomaly is trying to measure changes in the average temperature, not changes in the actual temperature at the time the measurement is taken. If the anomaly in June of this year is higher than the anomaly in June of last year, that does not mean that the June temperature this year was necessarily higher than the June temperature last year. It means that there is some probability that the average temperature for June has increased, relative to the (usually) 30 year norm. But in absolute terms that does not mean we are certain that June this year was warmer than June last year.
Anomalies are okay, if understood and presented for what they are: a means of tracking changes in average temperature. But that is not how they are used by the warmistas. The ideologues use them to make claims about “warmest month ever,” and that is statistical malpractice.
Basil
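The confidence-vs-prediction-interval distinction in the comment above can be made concrete with a plain sample mean rather than a regression; the numbers below are illustrative, not from the thread.

```python
import math

# Sketch: the interval for an *average* shrinks with sample size, but the
# interval for a single new *observation* does not (prediction interval).
s = 1.0        # sample standard deviation of the observations (illustrative)
n = 30         # sample size (illustrative)
z = 1.96       # approximate 95% multiplier

ci_half = z * s / math.sqrt(n)         # half-width for the average
pi_half = z * s * math.sqrt(1 + 1/n)   # half-width for one new observation
print(round(ci_half, 2), round(pi_half, 2))   # 0.36 1.99
```

The prediction interval is several times wider, which is the comment’s point: a statement about an average is far more precise than a statement about any single “spot” value.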

Jim Gorman

blcjr: [anomalies are] ”a means of tracking changes in average temperature”. This is exactly what the CAGW crowd quotes. You are feeding their assumption. I know you are aware of the difference, but the normal person is not; they simply read your text and say, “Oh, the normal temperature is going up or down.”
I usually try to explain anomalies as a differential, that is, an infinitesimally small section of a line with the magnitude and direction of the change. The width of the change is no wider than a dot on the graph. This seems to make more sense to most people.

rd50

Give us a link.

Kip Hansen

rd50 ==> Sorry — who? give you a link to what?

HAS

Actually it isn’t uncontroversial. One problem does lie with the uncertainty and its distribution. Another with working with linear transformations of variables in non-linear systems.

Aphan

TonyL
It gets better-
“[If we knew the absolute truth, we would use that instead of any estimates. So, your question seems a little difficult to answer in the real world. How do you know what the error on anything is if this is what you require? In reality, we model the errors – most usually these days with some kind of monte carlo simulation that takes into account all known sources of uncertainty. But there is always the possibility of unknown sources of error, but methods for accounting for those are somewhat unclear. The best paper on these issues is Morice et al (2012) and references therein. The Berkeley Earth discussion on this is also useful. – gavin]” (Dec 23, 2014 same thread)
If we KNEW the truth (but we don’t) we’d use that. So we model the KNOWN errors, but we have no idea if we’ve got all of the errors at all, and how we account for the unknown errors isn’t clear.
BUT NOAA said “Average surface temperatures in 2016, according to the National Oceanic and Atmospheric Administration, were 0.07 degrees Fahrenheit warmer than 2015 and featured eight successive months (January through August) that were individually the warmest since the agency’s records began in 1880.”
Not even a HINT that it’s an “estimate”, or that it’s not the absolute truth, or that the margin of error…+/- 0.5K is WAYYYY bigger than the 0.07 F ESTIMATE.
Perhaps this is why the “fairly astute” readership at WUWT has never viewed the use of “anomalies” in a positive manner or “absolutely” agreed with the idea that they are even a close approximation to Earth’s actual temperature.


Robert of Ottawa

Yes indeed, 0.07 ± 0.5 doesn’t appear to be very significant, does it 🙂

jorgekafkazar

Just think of it as a statistical rug under which to sweep tangled web weaving.

jorgekafkazar-
Right!
And yet they say “the Earth’s temperature is increasing” instead of “the Earth’s anomalies are increasingly warmer” etc. Al Gore says “the Earth has a temperature” instead of “The Earth has a higher anomaly”. And since Gav and the boys ALL ADMIT that it’s virtually impossible to know “exactly” what Earth’s actual global average temperature is, and that Earth is not adequately covered with thermometers, and that the thermometers we DO have are not in any way all properly sited and maintained and accurate… why in the crap do we let them get away with stating that “average surface temperatures were 0.07 F warmer” than a prior year? Why would any serious “Scientist” with any integrity use that kind of language when he’s really talking about something else??
Oh yeah…..rug weaving. 🙂

Sheri

Aphan: that “average surface temperatures were 0.07 F warmer” than a prior year
If only they did actually say that. They don’t even say that. It’s just “hottest year ever” with no quantification, usually.

Clyde Spencer

TonyL,
Yes, at least some of us are aware of the ‘proper’ use of anomalies. At issue is whether anomalies are being used properly. Gavin even admits that frequently they are not: “This means we need to very careful in combining these two analyses – and unfortunately, historically, we haven’t been and that is a continuing problem.”

TonyL

At issue is whether anomalies are being used properly.

Very True.
A closely related issue:
The ongoing story of the use, misuse, and abuse of statistics in ClimateScience! is the longest running soap opera in modern science.
The saga continues.

Rick C PE

TonyL: I disagree that the use of anomalies is well known.

Anomaly
NOUN
Something that deviates from what is standard, normal, or expected:
“there are a number of anomalies in the present system”
Synonyms: oddity, peculiarity, abnormality, irregularity, inconsistency

My objection is that the reporting of data as anomalies, like reporting averages without the variance, standard deviation or other measure of dispersion, simply reduces the value of the information conveyed. It eliminates the context. It is not a common practice in statistical analysis in engineering or most scientific fields. None of my statistics textbooks even mentions the term. It simply reduces a data set to the noise component.
While it seems to be common in climate science, the use of the term anomaly implies abnormal, irregular or inconsistent results. But, as has been extensively argued here and elsewhere, variation in the temperature of our planet seems to be entirely normal.
That said, I do get that when analyzing temperature records it is useful to look at temperatures for individual stations as deviations from some long term average. E.g. if the average annual temp. in Minneapolis has gone from 10 C (long term average) to 11 C and the temp. in Miami has gone from 20 to 21 C, we can say both have warmed by 1 C.
Of course, if one averages all the station anomalies and all the station baseline temperatures the sum would be identical to the average of all the actual measured temperatures.
But it is another thing to only report the average of the ‘anomalies’ over hundreds or thousands of stations without including any information about the dispersion of the input data. Presenting charts showing only average annual anomalies by year for 50, 120, 1000 years is pretty meaningless.
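The per-station arithmetic in the comment above is simple to sketch; the station names and temperatures below come from the Minneapolis/Miami example.

```python
# Station anomalies put stations with very different climates on a common
# scale; values are from the Minneapolis/Miami example above.
baselines = {"Minneapolis": 10.0, "Miami": 20.0}   # long-term averages, deg C
current   = {"Minneapolis": 11.0, "Miami": 21.0}   # current annual means, deg C

anomalies = {s: current[s] - baselines[s] for s in baselines}
print(anomalies)   # both stations show +1.0 C despite very different climates

# As the comment notes, averaging (anomalies + baselines) recovers the average
# of the actual measured temperatures exactly:
n = len(current)
mean_raw = sum(current.values()) / n
mean_via_anoms = (sum(anomalies.values()) + sum(baselines.values())) / n
assert mean_raw == mean_via_anoms
```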

“TonyL August 19, 2017 at 4:13 pm
The proper use of anomalies is well known and the reasons are sound. I would have thought that the use of anomalies would be entirely uncontroversial to the fairly astute readership at WUWT.
This appears to be attempting to make an issue where there is none.
It’s a Nothingburger.
Fake News.”

The “Fake news and nothingburger” start right with Gavin, his mouth, his writing and Gavin’s foul treatment of others.

“TonyL August 19, 2017 at 4:13 pm
The proper use of anomalies is well known and the reasons are sound.”

What absurd usage of “well known” and “the reasons are sound”, TonyL.
Just another fake consensus Argumentum ad Populum fallacy.
Use of anomalies can be proper under controlled conditions for specific measurements,
• When all data is kept and presented unsullied,
• When equipment is fully certified and verified,
• When measurements are parallel recorded before and after installation and impacts noted,
• When temperature equipment is properly installed everywhere,
• When temperature equipment installation represents all Latitudes, Longitudes, elevations, rural, suburban and urban environments,
• When temperatures and only temperatures are represented, not some edited version of data, data fill-in, smudged or other data imitation method is used.
Isn’t it astonishing that “adjustments”, substitutions, deletions, or data creation based on distant stations introduce obvious error bounds into temperature records; yet 0.5K is the alleged total error range?
Error bounds are not properly tracked, determined, applied or fully represented in end charts.
Gavin and his religious pals fail to track, qualify or quantify error rates, making the official NOAA approach anti-science, anti-mathematical and anti-anomaly. NOAA far prefers displaying “snake oil”, derision, elitism, egotism and utter disdain for America and Americans.
“Double speak” is far too nice a description for Gavin and NOAA misrepresented temperatures. Climastrologists’ abuse of measurements, data keeping, error bounds and data presentation would bring criminal charges and civil suits if used in any industry producing real goods Americans depend upon.

NW sage

Kip – good post!
The REAL answer of course is normally called ‘success testing’. Using this philosophy the test protocol – in this case the way the raw data is treated/analyzed – is chosen in order to produce the kind of result desired. NOT an analysis to find out if the temperatures are warmer, colder, or the same but to produce results that show there is a warming trend.
The usual way of detecting this success-testing phenomenon is to read the protocol and see just how much scientific technobabble is there (think of the Stargate TV series). The more technobabble, the less credible the result.

This is what is really going on. Station selection, data selection, and methodology selection give the gate-keepers of the temperature record and the global warming religion the ability to produce the number they want.
Think of it as someone standing over the shoulder of a data analyst in the basement of the NCDC each month saying “Well, what happens if we pull out the 5 African stations on the eastern side? How about we just add in that station with all the warming errors? Let’s adjust the buoys up and pretend it is because of ship engine intakes that nobody can/will check? Why don’t we bump up the time-of-observation bias adjustment and make a new adjustment for the MMTS sensors? Show me all the stations that have the highest warming. Let’s just drop those 1500 stations that show no warming. The South American stations are obviously too low by 1.0C. Just change them and call it an error.
We’ll call it version 4.4.3.2.”

David A

…which explains why 50 percent of the data is often not used, made up, extrapolated.

Gavin had an analogy. If you’re measuring a bunch of kids to see who’s the tallest, running a ruler head to foot, you can get a good answer. If you measure the height of their heads above sea level, there is a lot more uncertainty. So which would you do?

D. Cohen

To continue the analogy, what people want to know is ***not*** which kid is tallest, but rather which kid is highest above sea level, allowing for the possibility that the “sea level” — that is, the global absolute temperature — may be changing over time (day by day and year by year) in a way that is very difficult to measure accurately.

Greg

No, the best way is to measure their height using low orbit satellite range finding, whilst getting the kids to jump up and down on a trampoline and measure the reflection off the surface of the trampoline at the bottom of the movement. This is accurate to within +/- 1mm as has been established for sea level measurements.

Mark - Helsinki

and yet actual absolute measurements are better than statistical output, which is pure fantasy. It’s not a temperature anomaly, it’s a statistical anomaly, which requires a “leap of faith” to accept as a temperature anomaly when talking GISS GAMTA

Clyde Spencer

NS,
The primary uncertainty is introduced by adding in the elevation above sea level. Neither sea level or the ground they are standing on is known with the same accuracy or precision as the distance between their feet and hair. Therein lies the problem with temperature anomalies. We aren’t measuring the anomalies directly (height) but obtaining them indirectly from an imperfectly known temperature baseline!

“Neither sea level or the ground they are standing on is known with the same accuracy or precision as the distance between their feet and hair.”
Exactly. And that is the case here, because we are talking not about individual locations, but the anomaly average vs absolute average. And we can calculate the anomaly average much better, just as we can measure better top to toe.
It has another useful analogue feature. Although we are uncertain of the altitude, that uncertainty does not actually affect relative differences, although that isn’t obvious if you just write it as a±b. The uncertainty of the absolute average doesn’t affect our knowledge of one year vs another, say. Because that component of error is the same for both. So if you unwisely say that 2016 was 14.7±1, and 2015 was 14.5±1 (numbers made up for this example), then you still know that 2016 was warmer than 2015. The reason is that you took the same number 14.0±1 (abs normal), and added the anomalies of 0.7±0.1 and 0.5±0.1. The normal might have been 13 or 15, but 2016 will still be warmer than 2015.
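The cancellation argument above can be illustrated with a small Monte Carlo sketch. The numbers follow the made-up example in the comment (normal 14.0±1, anomalies 0.7±0.1 and 0.5±0.1); the simulation itself is an illustration, not from the thread.

```python
import random

# The error in the absolute "normal" is drawn once and shared by both years,
# so it drops out of the year-to-year difference.
random.seed(0)
trials = 10_000
warmer = 0
for _ in range(trials):
    normal = 14.0 + random.gauss(0, 1.0)        # same draw for both years
    t2016 = normal + 0.7 + random.gauss(0, 0.1)
    t2015 = normal + 0.5 + random.gauss(0, 0.1)
    if t2016 > t2015:
        warmer += 1
print(warmer / trials)   # roughly 0.92: despite the ±1 on the normal,
                         # 2016 comes out warmer in the large majority of draws
```

Only the two small ±0.1 anomaly uncertainties remain in the difference, so the ordering of the years survives the large shared uncertainty in the baseline.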

TheOtherBobFromOttawa

You clearly have a different understanding of “error” than I do, Nick.
You wrote: “So if you unwisely say that 2016 was 14.7±1, and 2015 was 14.5±1 (numbers made up for this example), then you still know that 2016 was warmer than 2015.”
I would say that the “real value” of the 2016 temperature could be anywhere from 13.7 to 15.7 and “real value” of the 2015 temperature could be anywhere from 13.5 to 15.5. Since the temperature difference between 2015 & 2016 is well within the error range of both temperatures it’s impossible to know which year is warmer or cooler.
That’s what I remember from my first year Physics Prof, some 50 years ago. But maybe Physics has “evolved” since then. :))

Kip,
“if your ancestors are from Devon”
None from Devon, AFAIK. Lots from Wilts, Glos.

“I would say that the “real value” of the 2016 temperature could be anywhere from 13.7 to 15.7 and “real value” of the 2015 temperature could be anywhere from 13.5 to 15.5”
But not independently. If 2016 was at 13.7 because the estimate of normal was wrong on the low side (around 13), then that estimate is common to 2015, so there is no way that it could be 15+.
There are many things that can’t be explained by what you learnt in first year physics.

TheOtherBobFromOttawa

I don’t know what point you’re making in your comment.
And there are many things that Gavin & Co. do that can’t be explained by anyone – at least in a way that makes sense to most people. :))

Streetcred

No problem if all 5 boys are standing on the same level platform … but WE know that the platform is not level !

Urederra

One of the kids puts his hair in a bun.

P. Berberich

There is another analogy. This morning my wife asks: “What’s the outside temperature today?” My answer: “The temperature anomaly is 0.5 K.” When I add “you need no new clothes,” I will run into problems that day.

Nor will she nicely ask what the outside temperature is, again.
NOAA should reap equal amounts of derision for their abuse of anomalies.

Mark - Helsinki

what if 60% of the kids are not measured, Nick? Does Gavin just make it up?

commieBob

Suppose that we have a data set: 511, 512, 513, 510, 512, 514, 512 and the accuracy is +/- 3. The average is 512. The anomalies are: -1, 0, +1, -2, 0, +2, 0 and the accuracy is still +/- 3.
I don’t understand how using anomalies lets us determine the maximum any differently than using the absolute values. There has to be some mathematical bogusness going on in CAGW land. I suspect they think that if you have enough data it averages out and gives you greater accuracy. I can tell you from bitter experience that it doesn’t always work that way.
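The data set in the comment above is small enough to work through directly:

```python
# commieBob's data set, measurement accuracy ±3.
data = [511, 512, 513, 510, 512, 514, 512]
avg = sum(data) / len(data)
anoms = [x - avg for x in data]
print(avg)     # 512.0
print(anoms)   # [-1.0, 0.0, 1.0, -2.0, 0.0, 2.0, 0.0]
# Subtracting one common average shifts every value by the same constant, so
# each anomaly inherits the full ±3 uncertainty of its raw value, and the
# question of which value is largest is unchanged, as argued above.
assert data.index(max(data)) == anoms.index(max(anoms))
```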

Pat Lane

But if you ADD the uncertainties together, you get zero!
Here’s the appropriate “world’s best practice” algorithm:
1. Pick a mathematical operator (+, -, /, *, sin, cos, tan, sinh, Chebychev polynomial etc.)
2. Set uncertainty = 0
2a. Have press conference announcing climate is “worse than originally thought”, “science is settled” and “more funding required.”
3. Calculate uncertainty after applying operator to (homogenised) temperature records
4. Is uncertainty still zero?
5. No? Try another operator.
6. Go back to 3 or, better yet, 2a.

Pat Lane

The sharp-eyed will note the above algorithm has no end. As climate projects are funded on a per-year basis, this ensures the climate scientist will receive infinite funding.

Thank you Bob!
My math courses in Engineering and grad studies (stats, linear programming, economic modelling, and surprising to me the toughest of all, something called “Math Theory”) were 50 years ago. But the reasoning that somehow anomalies are more precise or have less uncertainty than the absolute values upon which they were based set off bells and whistles in my old noggin. I was very hesitant though to raise any question for fear of displaying my ig’nance..
Maybe both of us are wrong, but now I know I’m in good company. 🙂

Rolf

Me too !

“The average is 512. The anomalies are: -1, 0, +1, -2, 0 +2, 0”
But you don’t form the anomalies by subtracting a common average. You do it by subtracting the expected value for each site.
“how using anomalies lets us determine the maximum”
You don’t use anomalies to determine the maximum. You use it to determine the anomaly average. And you are interested in the average as representing a population mean, not just the numbers you sampled. The analogy figures here might be
521±3, 411±3, 598±3. Obviously it is an inhomogeneous population, and the average will depend far more on how you sample than how you measure. But if you can subtract out something that determines the big differences, then it can work.

commieBob

That’s what you say. Here’s what Dr. Schmidt said:

But think about what happens when we try and estimate the absolute global mean temperature for, say, 2016. The climatology for 1981-2010 is 287.4±0.5K, and the anomaly for 2016 is (from GISTEMP w.r.t. that baseline) 0.56±0.05ºC. So our estimate for the absolute value is (using the first rule shown above) is 287.96±0.502K, and then using the second [the first and second rules have to do with estimating the uncertainties – see Gavin’s post], that reduces to 288.0±0.5K [2016]. The same approach for 2015 gives 287.8±0.5K, and for 2014 it is 287.7±0.5K. All of which appear to be the same within the uncertainty. Thus we lose the ability to judge which year was the warmest if we only look at the absolute numbers.

My example is a simplified version of the above. If you think Dr. Schmidt erred, that’s between you and him.
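For what it's worth, the arithmetic in Dr. Schmidt's quote is reproducible in a few lines of Python, assuming his "first rule" means adding independent uncertainties in quadrature (which is what matches his 0.502 figure):

```python
import math

# Schmidt's numbers: climatology 287.4 +/- 0.5 K, 2016 anomaly 0.56 +/- 0.05 C.
clim, u_clim = 287.4, 0.5
anom, u_anom = 0.56, 0.05

absolute = clim + anom                      # 287.96 K
u_abs = math.sqrt(u_clim**2 + u_anom**2)    # quadrature sum of the uncertainties

print(round(absolute, 2), round(u_abs, 3))  # 287.96 0.502
```

Note that the large climatology uncertainty completely dominates: rounding gives back 288.0±0.5 K, which is exactly his point.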

“the accuracy is still +/- 3.”
Of course it is. But what climate science does is to re-calculate the error statistically from the anomaly and come to the absurd conclusion that the error shrank from 0.5 to 0.05. The nonsense is that averaging reduces the variance, giving the misleading impression that it provides a quick way to reduce error. It does, but only in very specific circumstances, of which this is not one.

Extra! EXTRA! Read all about it! Gavin Schmidt of NASA ADMITS that there has been NO statistically significant CHANGE IN EARTH’S ABSOLUTE TEMPERATURE in the last 30 years!!!

SMC

I’m in denial. A climate scientist actually told the truth… kind’a… sort’a… maybe… in a convoluted way? I don’t believe it. 🙂

He told the truth, and then rationalized why that truth is completely unimportant to the actual “science” involved in climate science. Because we ALL know that science is about approximations, estimates, conjectures, ideology, variety, inclusiveness, personal interpretations, pizza parties, casual Fridays (or should I say “causal” Fridays….harharhar), unicorns, pink fuzzy bunny slippers, the flying spaghetti monster and The Wheel of Climate. And if you don’t like unicorns or pizza parties, you’re a hating-hate-hater-denier and should be put to death.
ISIS is more tolerant.

SMC

“Because we ALL know that science is about approximations, estimates, conjectures, ideology, variety, inclusiveness, personal interpretations, pizza parties, casual Fridays (or should I say “causal” Fridays….harharhar), unicorns, pink fuzzy bunny slippers, the flying spaghetti monster and The Wheel of Climate.”
What happened to the rainbows, fairy dust and hockey sticks?
“…hating-hate-hater-denier…”
You forgot lying, hypocritical, sexist, egotistical, homophobic, misogynist, deplorable bigot. :))

Thanks SMC….I knew I was forgetting something… 🙂

StuM

“NO statistically significant CHANGE IN EARTH’S ABSOLUTE TEMPERATURE in the last 30 years”
Earth’s absolute temperature has changed by roughly 4°C within every one of those last 30 years.
Surely that is statistically significant. 🙂

Cold in Wisconsin

What is the sensitivity of the measuring device, and what are the significant figures? Can an average of thousands of measurements accurate to a tenth of a degree be more accurate than each individual measuring device? I am asking an honest question that someone here can answer accurately. We learned significant figures in chemistry, but wouldn’t they also apply to these examples? How accurate are land based temp records versus the satellite measuring devices? This has been a central question for me in all of this “warmest ever” hoopla, and I would appreciate a good explanation.

Greg

If you have one thermometer with a 1-degree scale you would attribute ±0.5 degrees to a measurement. If it is scientific equipment, it will be made to ensure it is at least as accurate as the scale.
There is a rounding error when you read the scale, and there is the instrumental error.
If you have many readings on different days, the rounding errors will average out. If you have thousands of observation stations, the calibration errors of the individual thermometers will average out.
That is the logic of averages being more accurate than the basic uncertainty of one reading.
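Greg's logic can be illustrated with a toy simulation. Caveat: it assumes the rounding errors really are independent and unbiased, which is exactly what later comments question:

```python
import random
import statistics

# Simulate many "true" temperatures, record each only to the nearest
# whole degree, and compare the error of a single reading with the
# error of the average over all readings.
random.seed(1)
true = [15 + random.uniform(-3, 3) for _ in range(10_000)]
rounded = [round(t) for t in true]

worst_single = max(abs(r - t) for r, t in zip(rounded, true))        # up to 0.5
mean_error = abs(statistics.mean(rounded) - statistics.mean(true))   # far smaller

print(worst_single, mean_error)
```

Under these idealized assumptions the error of the mean is tiny compared with the ±0.5 of any one reading; drop the independence assumption and the improvement evaporates.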

Accuracy of scale: if the thermometers from 1880 through the early 20th century read in whole-degree increments (which was “good enough” for their purposes), then how does one justify declaring this year the hottest year ever, by tenths of a degree?
Rounding errors will only “average out” if everyone recording temps used a flip of the coin (figuratively) to determine what to record. The reality is some may have used a decision rule to go to the next HIGHEST temp and some the LOWER. Then there’s the dilemma about what to do with “5 tenths”; there were “rules” about that too. You cannot assume the “logic of averages” unless you know how those rules of thumb were applied.

commieBob

Suppose that we have a sine wave of known frequency buried under twenty dB of Gaussian noise. We can detect and reconstruct that signal even if our detector can only tell us whether the signal plus noise is above or below zero volts (i.e. it’s a comparator). By running the process for long enough we can get whatever accuracy we need.
The problem is that Gaussian noise is a fiction. It’s physically impossible because it would have infinite bandwidth and therefore infinite power. Once the noise is non-Gaussian, our elegant experiment doesn’t work any more. It’s more difficult to extract signals from pink or red noise. If we can’t accurately describe the noise, we can’t say anything about our accuracy.
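For the curious, here is a toy version of that comparator experiment in Python (the parameters are illustrative, not taken from commieBob's source): a weak sine roughly 20 dB below Gaussian noise, observed only as a 1-bit sign, still shows up clearly when the bit stream is correlated against a reference sinusoid.

```python
import math
import random

random.seed(2)
N = 200_000
amp, sigma = 0.1, 1.0          # weak sine, strong Gaussian noise

corr = 0.0
for i in range(N):
    ref = math.sin(2 * math.pi * i / 100)   # reference at the known frequency
    x = amp * ref + random.gauss(0, sigma)  # signal buried in noise
    bit = 1.0 if x >= 0 else -1.0           # comparator: sign only
    corr += bit * ref                       # 1-bit lock-in correlation

corr /= N
print(corr)  # clearly positive, even though each sample is just +/-1
```

As commieBob says, the trick depends entirely on the noise model; with non-Gaussian (pink or red) noise the same averaging no longer buys the claimed accuracy.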

kip, if there are n stations and if the error of the individual readings is s, the error of the average will be s/squareroot(n). Small.

tty

“if there are n stations and if the error of the individual readings is s, the error of the average will be s/squareroot(n).”
Ah, “the Law of Large Numbers”. Somebody always drags that up. Sorry, but no: that only applies to independent, identically distributed random variables.
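tty's caveat is easy to demonstrate. In the sketch below (illustrative numbers only), every station shares one calibration bias; the random part of the error averages away as s/sqrt(n), but the shared part does not, no matter how many stations are added:

```python
import random
import statistics

random.seed(3)
true_temp = 20.0
shared_bias = 0.4   # the same systematic error at every station

for n in (10, 1_000, 100_000):
    readings = [true_temp + shared_bias + random.gauss(0, 0.5) for _ in range(n)]
    err = statistics.mean(readings) - true_temp
    print(n, round(err, 3))   # converges to ~0.4, not to 0
```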

Urederra

Following the child height example:
First case: if you take one child and you measure his/her height 10 times, the average is more accurate.
Second case: if you have 10 children and you measure their height once per child, the average height is not more accurate than the individual accuracy.
The temperature in Minneapolis is different from the temperature in Miami. The Earth average temperature belongs to the second case. That is my understanding.
It does not matter, anyway, since the Earth is not in thermal equilibrium or even in thermodynamic equilibrium and therefore the term average temperature is meaningless.

catweazle666

“the error of the
average will be s/squareroot(n).
small”

No it won’t.

Philo

Cold (what else?) in Wisconsin: temperature is an intensive property, the speed of the moving/vibrating atoms and molecules. For climate purposes it is measured by a physical averaging process: the amount the temperature being measured changes the resistance of (usually, now) some sort of calibrated resistor, which can be very precise (to hundredths of a degree) but only as accurate as its calibration over a specific range. Averaging temperatures is pretty meaningless. You can average the temperature of the water in a pot and the temperature of the couple of cubic feet of gas heating it and learn nothing. Measuring how the temperature of the water changes tells you something about the amount of energy released by the burning gas, but it’s a very crude calorimeter.
Like that example, the climate is driven by energy movements, not primarily by temperatures.

I’m not a climate scientist (but I did see one on TV) but why aren’t those far more educated than me pointing out Phil’s point which should be obvious to anyone with a basic science education.

You can average the temperature of the water in a pot and the temperature of the couple of cubic feet of gas heating it and learn nothing.

In discussions with my academic son, I point out that I can take the temperature at the blue flame of a matchstick and then the temperature of a comfortable bathtub, and the average of the two has no meaning.
The response, of course, is that 97% of scientists say I’m deluded. (Argument from Authority.)

Mick

I have the Environment Canada weather app on my phone. I noticed this summer they report what it feels like rather than the measured number. Or they use the inland numbers, which are a few degrees higher, rather than the coastal number from the same airport station they have been using for the last 80 years.
They especially do this on the radio weather reports: “It feels like… 30 degrees.”

george – scientists have made it very clear that anyone should expect a change of the global average at their locale.
but the global avg is good for spotting the earth’s energy imbalance. not perfect, but good

tty

“but the global avg is good for spotting the earth’s energy imbalance. not perfect, but good”
Actually it is almost completely useless given the very low heat capacity of the atmosphere compared to the ocean (remember that it is the ocean that absorbs and emits the vast majority of solar energy).

TA

https://science.nasa.gov/science-news/science-at-nasa/1997/essd06oct97_1
Accurate “Thermometers” in Space
“An incredible amount of work has been done to make sure that the satellite data are the best quality possible. Recent claims to the contrary by Hurrell and Trenberth have been shown to be false for a number of reasons, and are laid to rest in the September 25th edition of Nature (page 342). The temperature measurements from space are verified by two direct and independent methods. The first involves actual in-situ measurements of the lower atmosphere made by balloon-borne observations around the world. The second uses intercalibration and comparison among identical experiments on different orbiting platforms. The result is that the satellite temperature measurements are accurate to within three one-hundredths of a degree Centigrade (0.03 C) when compared to ground-launched balloons taking measurements of the same region of the atmosphere at the same time. ”
The satellite measurements have been confirmed by the balloon measurements. Nothing confirms the bastardized surface temperature record.
And this:
http://www.breitbart.com/big-government/2016/01/15/climate-alarmists-invent-new-excuse-the-satellites-are-lying/
“This [satellite] accuracy was acknowledged 25 years ago by NASA, which said that “satellite analysis of the upper atmosphere is more accurate, and should be adopted as the standard way to monitor temperature change.”
end excerpts
Hope that helps.

Tony

Watch me pull a rabbit out of my hat “±0.05ºC” … what utter rubbish!

Eric Stevens

I am puzzled as to how, over a period of 30 years, temperatures can be established only to ±0.5K, yet the GISTEMP 2016 anomaly is given to ±0.05°C. How is the latter more precise? Is it that different measuring techniques are in use?

wyzelli

It is also well worth remembering (or learning) the difference between MEAN and MEDIAN, and paying close attention to which one is used where in information sources.
So many reports say “the temperature is above the long-term MEAN”, when in a Normal Distribution exactly half of the samples are higher than the mean!
It’s an interesting and worthwhile exercise to evaluate whether the temperature series at any particular station resembles a Normal Distribution…

wyzelli

For comparison purposes, note that sea ice extent is usually referenced to the MEDIAN.

Stephen Greene

I was looking at temp. and CO2 data last week to see if NASA, NOAA and GISS would pass FDA scrutiny if approval were sought. There is a lot to it, but from acquisition to security to analysis, as well as quality checks for sampling biases, missing data, and changing historical data, the answer is no. NOT EVEN CLOSE! Blinding is a big deal. So, ethically, I believe any climate scientist who is also an activist must blind ALL PARTS of a study to ensure quality. What about asking to audit all marchers on Washington who received federal grants but do not employ FDA-level or greater quality standards? Considering Michael Mann would not turn over his data to the Canadian courts last month, this might be a hoot, and REALLY VALUABLE!

Rick C PE

TonyL: I disagree that the use of the “anomalies” is well known.

a·nom·a·ly
[əˈnäməlē]
NOUN
something that deviates from what is standard, normal, or expected:
“there are a number of anomalies in the present system”
synonyms: oddity · peculiarity · abnormality · irregularity · inconsistency

While it is used extensively in climate science these days, it is a very uncommon approach in statistical analysis, engineering and many scientific fields. The term and process are not mentioned or described in any of my statistics textbooks. I have spent 40 years in the business of collecting and analyzing all kinds of measurements and have never seen the need to convert data to “anomalies”. It can be viewed as simply reducing a data set to its noise component. My main objection is that, like an average without an estimate of dispersion such as the variance or standard deviation, it serves to reduce the information conveyed. Also, as the definition of anomaly indicates, it implies abnormality, irregularity, etc. As has been widely argued here and elsewhere, significant variability in the temperature of our planet seems quite normal.

Robert of Ottawa

I think this is a fine demonstration of the fallacy of false precision. Also of statistical fraud.
We can’t let the proles think, “Hey, guess what, the temperature hasn’t changed!”

KH, I am of two minds about your interesting guest post.
On the one hand, because of latitudinal (temperate zone) and altitudinal (lapse rate) differences, a global average temp is meaningless. On the other hand, a global average stationary-station anomaly (correctly calculated) is meaningful, especially for climate trends. So, useful if the stations are reliable (most aren’t).
Then again, useful anomalies hide a multitude of other climate sins, not the least of which is the gross difference between absolute and ‘anomaly’ discrepancies in the CMIP5 archive of the most recent AR5 climate models. They get 0C wrong by +/-3C! So CMIP5 is not at all useful. Essay “Models all the way Down” in ebook Blowing Smoke covers the details of that, and more. See also the previous guest post here, ‘The Trouble with Models’.

Greg

I agree that anomalies make more sense in principle, if you want to look at whether the earth has warmed due to changing radiation, for example.
The problem is that the “climatology” for each month is the mean of 30 days of that month over 30 years: 900 data points. For any given station they will have a range of perhaps 5–10 deg C, with a distribution. You can take 2 standard deviations as the uncertainty of how representative that mean is, and I’ll bet it is more than 0.05 deg C. So the uncertainty on your anomaly can never be lower than that.

Streetcred

For anomalies to be useful in any respect , the original data should not be tampered with.

David Chappell

Ristvan: “On the one hand, because of latitudinal (temperate zone) and altitudinal (lapse rate) differences, a global average temp is meaningless.”
What you are saying in simple terms is that a global average temperature is also a crock of fecal matter.

Tom Halla

This is like the rules for stage psychics doing cold readings==>do not be specific on anything checkable.

Greg

Another error they usually ignore is sampling error: is the sample a true and accurate representation of the whole? In the case of SST, almost certainly not.
Sampling patterns and methods have been horrendously variable and erratic over the years. The whole engine-room/buckets fiasco is largely undocumented and is “corrected” based on guesswork, often blatantly ignoring the written records.
What uncertainty needs to be added due to incomplete sampling?

Clyde Spencer

KIP,
Something buried in the comments section of Gavin’s post is important and probably overlooked by most:
“…Whether it converges to a true value depends on whether there are systematic variations affecting the whole data set, but given a random component more measurements will converge to a more precise value.
[Response: Yes of course. I wasn’t thinking of this in my statement, so you are correct – it isn’t generally true. But in this instance, I’m not averaging the same variable multiple times, just adding two different random variables – no division by N, and no decrease in variance as sqrt(N).”
Gavin is putting to rest the claim by some that taking large numbers of temperature readings allows greater precision to be assigned to the mean value. To put it another way, the systematic seasonal variations swamp the random errors that might allow an increase in precision.
Another issue is that, by convention, the uncertainty represents +/- one (or sometimes two) standard deviations. He doesn’t explicitly state whether he is using one or two SD. Nor does he explain how the uncertainty is derived. I made a case in a recent post ( https://wattsupwiththat.com/2017/04/23/the-meaning-and-utility-of-averages-as-it-applies-to-climate/ ) that the actual standard deviation for the global temperature readings for a year might be about two orders of magnitude greater than what Gavin is citing.

Gary Kerkin

Schmidt cites two references as to why anomalies are preferred, one from NASA and one from NOAA. The latter is singularly useless as to why anomalies should be used. The opening paragraph of the NASA reference states:

The reason to work with anomalies, rather than absolute temperature is that absolute temperature varies markedly in short distances, while monthly or annual temperature anomalies are representative of a much larger region. Indeed, we have shown (Hansen and Lebedeff, 1987) that temperature anomalies are strongly correlated out to distances of the order of 1000 km.

Two factors are at work here. One is that the data is smoothed. The other is that the anomalies of two different geographical locations can be compared whilst the absolute temperatures cannot.
Is smoothed data useful? I guess that is moot, but it is true to say that any smoothing process loses fine detail, the most obvious of which is diurnal variation. Fine detail includes higher-frequency information, and removing it makes the analysis of natural processes more difficult.
Is a comparison of anomalies at geographically remote locations valid? I would think it would be, provided the statistics of the data from both locations are approximately the same. For example, since most analysis is based on unimodal gaussian distributions (and normally distributed at that), if the temperature distributions at the two locations are not normal, can a valid comparison be made? Having looked at distributions in several locations in New Zealand, I know that the distributions are not normal. Diurnal variation would suggest at least a bimodal distribution, but several stations exhibit at least trimodal distributions. The more smoothing applied to the data set the more closely the distribution will display normal, unimodal behaviour.
I suspect that smoothing the data is the primary objective, hiding the inconvenient truth that air temperature is a natural variable and is subject to a host of influences, many of which are not easily described, and incapable of successful, verifiable modeling.

Re: Gary Kerkin (August 19, 2017 at 6:38 pm)
[James] Hansen is quoting himself again, it’s all very inbred when you start reading the supporting – or not – literature!
However the literature doesn’t agree and he knows that he is dissembling.
In [James] Hansen’s analysis, the isotropic component of the covariance of temperature, assumes a constant correlation decay* in all directions. However, “It has long been established that spatial scale of climate variables varies geographically and depends on the choice of directions” (Chen, D. et al.2016).
In the paper The spatial structure of monthly temperature anomalies over Australia, the BOM definitively demonstrated the inappropriateness of Hansen’s assumptions about correlation of temperature anomalies:

In reality atmospheric fields are rarely isotropic, and indeed the maintenance of westerly flow in the southern extratropics against frictional dissipation is only possible due to the northwest-southeast elongation of transient eddy activity (Peixoto and Oort 1993). Seaman (1982a) provides a graphic illustration of this anisotropy on weather time-scales for the Australian region…This observation of considerable anisotropy is in contrast with Hansen and Lebedeff (1987) for North America and Europe.. We also note the inappropriateness of the function used by P.D. Jones et al. (1997) for describing anisotropy (at least for Australian temperature), which limits the major and minor axes of the correlation ellipse to the zonal and meridional direction (see Seaman 1982b).
Clearly, anisotropy represents an important characteristic of Australian temperature anomalies, which should be accommodated in analyses of Australian climate variability.(Jones, D.A. & Trewin, B. 2000)

*Decreasing exponentially with their spatial distance, spatial scales are quantified using the e-folding decay constant.

Mod or Mods! Whoops! I just realised that my comment above was directed at James Hansen of NASA but might be confused with the Author of the post, Kip Hansen!
To be clear, Gavin Schmidt(NASA), references James Hansen(NASA) quoting J.Hansen who references NASA(J.Hansen)! It’s turtles all the way down 😉

wyzelli

It is true that temperature data are not Normally Distributed. At the very least, most sets I have looked at are noticeably skewed. The problem is that the deviation from Normal at each station differs from that at other stations, and comparing, and specifically averaging, non-homogeneous data presents a whole other set of difficulties (i.e. it shouldn’t be done).

Gunga Din

Why use anomalies instead of actual temperatures?
They produce swingier trends?

Walter Sobchak

Another reason to use anomalies instead of temperatures is that the graph of anomalies can be centered at zero and show increments of 0.1°. It can make noise movements look significant. If you use temperatures, any graph should show Kelvin with absolute zero. Construct a graph using those parameters, and the “warming” of the past 30 years looks like noise, which is what it is. A 1°K movement is only ~0.34% not much. It is just not clear why we should panic over a variation of that magnitude.

Greg

” If you use temperatures, any graph should show Kelvin with absolute zero. ”
nonsense, you scale the graph to show the data in the clearest way with appropriately labelled axes.

hunter

“Clearest”? or “most dramatic for our sales goals?”
There is a fine line between the two.
If 0.1 degree actually made an important difference to anything at all, then maybe scales used today would be informative.
Instead they are manipulative, giving the illusion of huge change when that is not the case.

hunter

If the scale simply reflected the reality of the range of global temps, the graph would represent the changes honestly and people could make informed decisions.
That is counter to the goals of the consensus.

Alan Davidson

Isn’t the real answer that if actual temperatures were used, graphical representations of temperature vs time would be nice non-scary horizontal lines?

hunter

Yep.
“Keep the fear alive” is an important tool in the climate consensus tool kit.

BigBubba

From a management perspective it always pays to hire staff that give you 10 good reasons why something CAN be done rather than 10 good reasons why something CAN’T be done:
So the question is: Why has the temperature data not been presented in BOTH formats? Anomaly AND Absolute.

it has been presented in both formats.
see Karl et al’s 2015 paper in Science.

hunter

But only the manipulative, fear-inducing scary scale is used in public discussions.

hunter – conclusions are independent of scale. obviously.

kip – link?

TheOtherBobFromOttawa

This is a very interesting discussion. I’ve been thinking about this for some time. Consider the following.
The temperature anomaly for a particular year, as I understand it, is obtained by subtracting the 30-year average temperature from the temperature for that year. Assuming both temperatures have an error of +/- 0.5C, the calculated anomaly will have an error of +/- 1.0C. When adding or subtracting numbers that have associated errors, at worst one must ADD the errors of the numbers.
So the anomaly’s “real value” is even less certain than either of the two numbers it’s derived from.

Greg

If you can argue that the errors are independent and uncorrelated you can use the RMS error but yes, always larger than either individual uncertainty figure.
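In numbers: for two independent ±0.5 uncertainties, the root-sum-square is about ±0.71, versus ±1.0 for straight addition. A one-liner check:

```python
import math

u1 = u2 = 0.5                       # uncertainty of each temperature
rss = math.sqrt(u1**2 + u2**2)      # RMS combination, if errors are independent
worst = u1 + u2                     # worst case: the errors simply add

print(round(rss, 3), worst)         # 0.707 1.0
```

Either way, the anomaly's uncertainty is larger than that of either input, never smaller.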

richard verney

Let me correct that: it is not whether you can argue that the errors are independent and uncorrelated, but whether they truly are independent and uncorrelated.
Yet in the climate field, it would appear that the errors are neither independent nor uncorrelated. There would appear to be systematic biases, such that the uncertainty is not reduced.

+/- 0.5 C is way too high, especially with modern equipment

Mark Johnson

The take-away, and this is to be found in other disciplines as well, is “never let the facts get in the way of a good story.” The Left and the media just love to apply it.

The desperation to find some sort of important climate signal means they are treating sample noise as data these days.

There is nothing anomalous about the global average temperature as it changes from year to year (well there wouldn’t be if such a thing as global average temperature existed). The global average temperature has always varied from year to year, without there being any anomalies.

explain the long-term trend

Aristoxenous

The alarmists [NASA, NOAA, UK MET Office] cannot even agree among themselves what ‘average global temp.’ means. Freeman Dyson has stated that it is meaningless and impossible to calculate; he suggests that a reading would be needed for every square km. Like an isohyet, is the measurement going to be reduced/increased to a given density altitude? What lapse rates – ambient or ISO?
The satellite observations are comparable because they relate to the same altitude with each measurement, but anything measuring temps near the ground is a waste of time and prohibitively expensive at one station per square km, or even per 100 square km.

why every sq km? temperature stations aren’t free. so the question is, what station density gives the desired accuracy? and your answer is? show your math

Aristoxenous

ISA not ISO.

Mark - Helsinki

As Dr Ball says, and I agree, averages destroy the accuracy of data points.
Given that we need accuracy for science (and we do), absolute temperatures would need to be used: science is numbers, actual numbers, not averaged numbers. If science worked with averages we’d never have had steam engines.
Take model runs.
100 model runs. Out of those 100 runs, one is the most accurate (no two runs are the same, so one must be closest), but we can’t know which one, and its accuracy is really just luck given the instability of the produced output.
Because we do not understand why, or which run is the accurate one, we destroy that accuracy by averaging in the other 99 runs.
Probability is useless in this context, as the averages and probabilities conceal the problem: we don’t know how accurate each run is.
This is then made worse by using multiple model ensembles, which serve to dilute the unknown accuracy even more to the point where we have a range of 2c to 4.5c or above, this is not science, it is guessing, it’s not probability, it is guessing.
The only use for using loads of model ensembles is to increase the range of “probability” and this probability does not relate to the real physical world, it’s a logical fallacy.
The range between different temperature anomaly data sets are performing the same function as the wide cast net of model ensembles.
Now you know why they don’t use absolute temperatures, because using those increases accuracy and reduces the “probabilities” and removes the averages which allow for the wide cast net of non-validated “probabilities”.
The uncertainty calculations are rubbish. We are given uncertainty from models, not the real world, the uncertainty only exists in averages and probabilities not in climate and actual real world temperatures.

Mark - Helsinki

NOAA’s instability and wildly different runs prove my point. An average of garbage is garbage.
If NOAA perform 100 runs, take the two that vary most, and that is your evidence that they have no idea.

richard verney

Or at any rate, it gives an insight into the extent of error bounds.

Speaking of models and the breathtaking circularity inherent in the reasoning of much contemporary Climate Science!
The assessment of the reliability of sampling-error estimates (in the application of anomalies to large-scale temperature averages, in the real world) is tested using temperature data from 1000-year control runs of general circulation models (GCMs)! (Jones et al., 1997a)
And that is a real problem, because the models have the same inbuilt flaw; they only output gridded areal averages!
Thus, the tainting of raw data occurs in the initial development of the station data set, because spatial coherence is assumed for nearby series in the homogenisation techniques applied at this stage (Where many stations are adjusted and some omitted because of “anomalous” trends and/or “non climatic” jumps).
The aggregation of the “raw” data (Gridding in the final stage.) yet again fundamentally changes its distribution as well as adding further sampling errors and uncertainties. Several different methods are used to interpolate the station data to a regular grid but all assume omnidirectional spatial correlation, due to the use of anomalies.

Mark - Helsinki

Grids set to a preferred size and position serve only to fool people.
We need a scientifically justified radius for each data point, grounded in topography and site location conditions (Anthony’s site survey would be critical for such).
Mountains, hills, and all manner of topography matter, as do large local water bodies, along with the usual suspects such as urbanisation, etc.
This is a massive task, and we are better off investing everything into satellites and developing that network further to solve some temporal issues for better clarity.
Still, sats are good for anomalies if they pass over the same location at the same time each day, but we should depart from anomalies because they are transient and explaining why is nigh impossible.
A 50 km deep chunk of the atmosphere is infinitely better than the surface station network, for more reasons than not.
Defenders of the surface data sets are harming science.

Mark - Helsinki

With regards to surface data sets, a station with a local lake, hills and a town: all of that needs to be accounted for and solved. Wind speed data also needs to be incorporated to improve the data.
This is not happening. It is never going to happen.

Mark - Helsinki

I agree Kip, that was my point about it all: models are not for accuracy. But still, out of 100 runs one is the most accurate, and the other 99 destroy that lucky accuracy.
My point also is that they don’t want accuracy (as they see it), because what if a really good model ran cool?
That won’t do.
They need a wide cast net to catch a wide range of outcomes in order to stay relevant.

Mark - Helsinki

and to say, oh look, the models predicted that.
Furthermore, NOAA’s model output is an utter joke. If, as I said, you take the difference between the two most different runs from an ensemble, they vary widely, which shows the model is casting such a wide net that it is hard to actually say it is wrong (or way off the mark).
Of course, we can’t model chaos. 🙂
Giving an average of chaos is what they are doing, and it’s nonsense.

Mark – Helsinki –
>> With regards to surface data sets, a station with a local lake hills and town, all of that needs to be accounted for and solved. Wind speed data also needs to be incorporated to improve the data. <<
not if the station hasn’t moved.
no one is interested in absolute T.

Mark - Helsinki

you can simply calculate the uncertainty for real in the 100 model runs by measuring the difference between the two most contrary runs. Given the difference in output per run at NOAA, that means real uncertainty in that respect is well in excess of 50%.

no.
that’s like saying you can flip a coin 100 times, do this 100 times, and the uncertainty is the max of the max and min counts.
that’s simply not how it’s done; the standard deviation is easily calculated.
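For what it’s worth, the two notions of spread being argued over here can be put side by side in a few lines of Python. The ensemble below is purely synthetic (random draws standing in for model runs, not real NOAA output):

```python
import random
import statistics

random.seed(42)

# Purely synthetic ensemble: 100 "runs", each a final anomaly drawn
# from a normal distribution (a stand-in, not real model output).
runs = [random.gauss(0.6, 0.15) for _ in range(100)]

# Spread as proposed above: difference between the two most contrary runs.
spread_range = max(runs) - min(runs)

# Spread as conventionally reported: the ensemble standard deviation.
spread_sd = statistics.stdev(runs)

# For ~100 roughly normal draws, the full range is typically about five
# standard deviations, so the two measures answer different questions.
print(f"range = {spread_range:.3f}, sd = {spread_sd:.3f}")
```

Neither number is “the” uncertainty on its own; the point of contention is which statistic a reader should take as the spread of the ensemble.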

Tom Halla

crackers, that is an invalid use of statistics. It is more analogous to shooting at 100 different targets with the same error in aim, not to measuring the same thing 100 times. The error remains the same, and does not even out.

no tom. shooting isn’t random; its results contain several biases.
a true coin, when flipped sufficiently, does not.

Tom Halla

With both shooting and taking a temperature reading multiple times over a span of time, one is doing or measuring different things multiple times, not measuring the same thing multiple times. Coin tosses are not equivalent.

Mark - Helsinki

As in, take 100 runs and calculate how far the model can swing in either direction; for this you only need the two most different runs, and there is your uncertainty.

Kip ==> The following part of my comment was about data collection in the real world:

Thus, the tainting of raw data occurs in the initial development of the station data set, because spatial coherence is assumed for nearby series in the homogenisation techniques applied at this stage (where many stations are adjusted and some omitted because of “anomalous” trends and/or “non-climatic” jumps).
The aggregation of the “raw” data (gridding, in the final stage) yet again fundamentally changes its distribution, as well as adding further sampling errors and uncertainties. Several different methods are used to interpolate the station data to a regular grid, but all assume omnidirectional spatial correlation, due to the use of anomalies.

I was trying to show how the “fudge” is achieved in the collection of raw data and how circular it is to then use gridded model outputs to estimate the sampling errors of that very methodology! 😉

For your readers not familiar with physics. Gavin Schmidt says “…The climatology for 1981-2010 is 287.4+/-0.5K, and the anomaly for 2016 is (from GISTEMP w.r.t. that baseline) 0.56+/-0.05ºC.”
C stands for Celsius or Centigrade. One degree C is also one kelvin (K), except that zero degrees C = 273.15 K. (In theory no temperature can be less than zero kelvin, absolute zero.)
Why this is important for climate is that the governing equation is the Stefan-Boltzmann law, in which temperature (T) is expressed in kelvin, in fact T to the power of 4 (T^4, or T*T*T*T).
https://en.wikipedia.org/wiki/Stefan–Boltzmann_law
You can argue that the error can be fixed by using 14.2 degrees Celsius (287.4 minus 273.2) in the equation, because all the temperatures can be converted by adding 273.2 to the measurements.
But then you have to argue that the error in 0.56±0.05 is acceptable. An error of 5 parts in 56 is about nine per cent, yet even a one per cent error in 273.3 would be 2.7 degrees C or K. So it seems that Gavin Schmidt has won his argument: using only temperature anomalies gives a more precise and accurate result.
But hold on a minute. Can Dr Schmidt really estimate the temperature anomaly with an accuracy of one per cent from pole to pole and all the way around the globe?
Richard Lindzen has addressed this question by reference to a study by Stanley Grotch published by the AMS.
You will find the reference here and in Richard Lindzen’s YouTube lecture, “Global Warming, Lysenkoism, Eugenics”, at the 30:37 mark.
Grotch’s paper claimed that the land (CRU) and ocean (COADS) datasets pass his tests of normality and freedom from bias. His presentation is reasonable.
However, his Figure 1 shows that the 26,000 data points range between plus and minus 2 degrees Celsius, while the signal (the mean temperature) ranges from approximately -0.2 C to +0.2 C over a period of 130 years, a rate of about 0.3 C per century. The signal is swamped by noise.
Dr Schmidt is basing his claims on spurious precision in the processing of the data.
https://geoscienceenvironment.wordpress.com/2016/06/12/temperature-anomalies-1851-1980/
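Backing up to the Stefan-Boltzmann point above: a quick sketch of how the quoted ±0.5 K propagates through the T^4 law. The constant is the standard value; the numbers are purely illustrative, not a claim about GISS processing.

```python
# Standard value of the Stefan-Boltzmann constant, W m^-2 K^-4.
SIGMA = 5.670374419e-8

def flux(t_kelvin):
    """Black-body radiative flux F = sigma * T^4; T must be in kelvin."""
    return SIGMA * t_kelvin ** 4

T = 287.4    # the quoted 1981-2010 climatology, in kelvin
DT = 0.5     # its quoted uncertainty

# Because F scales as T^4, a small relative error in T is amplified
# roughly fourfold in F: dF/F ~ 4 * dT/T.
rel_flux_err = 4 * DT / T

print(f"F({T} K) = {flux(T):.1f} W/m^2")
print(f"relative flux uncertainty ~ {rel_flux_err:.2%}")
```

The fourfold amplification is why the kelvin scale, not Celsius, is the natural one for any radiative argument.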

link/cite to schmidt’s quote?


Mark - Helsinki

yeah, where is the CRU raw data?
What have they done with the data in the last 20 years?
Were they not caught cooling the 1940s intentionally just to reduce anomalies? Yes, they were caught removing the blip from the data, something NASA, JMA, BEST etc. have also done.
The level of agreement between these data sets over 130 years shows either (1) collusion or (2) reliance on the same bad data.
Nonsense.

Mark - Helsinki

as you probably already know, they are using revised history to assess current data sets. As such, any assessments are useless.
We need all of the pure raw data, most of which does not exist any more.

Mark - Helsinki

Good post tbh.
“However, his Figure 1 shows that the 26,000 datapoints range between plus and minus 2 degrees Celsius , while the signal (the mean temperature) ranges from approximately -0.2 C to +0.2 C over a period of 130 years, a rate of about 0.3 C per century. The signal is swamped by noise.
Dr Schmidt is basing his claims on spurious precision in the processing of the data.”
The logical fallacy is real-world temperature anomalies vs. what GISS says they are.
The certainty that GISS is accurate is actually unknown, which means the uncertainty is closer to 90% than 5%.

Mark - Helsinki

Schmidt must keep the discussion within the confines of GISS output.
Avoid bringing in the real world at every stage, in terms of equipment accuracy and lack of coverage.

all the groups get essentially the same surface trend: giss, noaa, hadcrut, jma, best.
so clearly giss is not an outlier. this isn’t rocket science.

Dan Davis

Possible new source for temperature data: River water quality daily sets.
Graphs of temp. data across the regions and the globe would be quite interesting.
Probably a much more reliable daily set of records…

why more reliable?

knr

Gavin Schmidt, who was, let us not forget, hand-picked by Dr Doom to carry on his ‘good work’.
Given we simply lack the ability to take any such measurements in a scientifically meaningful way, all we have is a ‘guess’. Therefore, no matter the approach, what is being said is ‘we think it’s this, but we cannot be sure’.

so why can’t temperature be measured, in your opinion?

C.K. Moore

Over the years I’ve noticed one thing about Gavin Schmidt’s explanations at RealClimate: they are excessively thorough and generally cast much darkness on the subject. If he were describing a cotter pin to you, you’d picture the engine room of the Queen Mary.

richard verney

This article raises a more fundamental issue and problem that besets the time-series land-based thermometer record, namely: how do you calculate an anomaly when the sample set is never the same over time but is instead constantly changing?
I emphasise that
the sample set used to create the anomaly in 1880 is not the same sample set used to calculate the anomaly in 1900, which in turn is not the one used in 1920, nor the one used in 1940, 1960, 1980, 2000 or 2016.
If one is not using the same sample set, the anomaly does not represent anything meaningful.
Gavin claims that “The climatology for 1981-2010 is 287.4±0.5K”; however, the sample set (the reporting stations) in, say, 1940 is not the set of stations reporting data in the climatology period 1981 to 2010, so we have no idea what anomaly attaches to the data coming from the stations used in 1940. We do not know whether the temperature is more or less than in 1940, since we are not measuring the same thing.
The time-series land-based thermometer data set needs complete re-evaluation. If one wants to know whether there may have been any change in temperature since, say, 1880, one should identify the stations that reported data in 1880, ascertain which of these have continuous records through to 2016, and then use only those stations (i.e., the ones with continuous records) to assess the time series from 1880 to 2016.
If one wants to know whether there has been any change in temperature from, say, 1940, one performs a similar task: identify the stations that reported data in 1940, ascertain which have continuous records through to 2016, and use only those to assess the time series from 1940 to 2016.
So one would end up with a series of time series, perhaps one for every 5-year interval. Of course, there would still be problems with such a series because of station moves, encroachment of UHI, changes in nearby land use, equipment changes etc., but at least one of the fundamental issues with the time-series set would be overcome. Theoretically a valid comparison over time could be made, though error bounds would be large due to siting issues, changes in nearby land use, change of equipment, maintenance etc.
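The continuous-record selection described above can be sketched in a few lines. Station names and readings are invented for illustration; real work would draw on GHCN station records.

```python
# Invented station records keyed by year; only stations with a complete
# record across all chosen years survive the filter.
records = {
    "StationA": {1880: 10.1, 1950: 10.4, 2016: 10.9},
    "StationB": {1880: 14.8, 1950: 15.0, 2016: 15.3},
    "StationC": {1950: 8.2, 2016: 8.6},   # opened after 1880: excluded
}
years = [1880, 1950, 2016]

# Keep only stations reporting in every year of the interval.
continuous = [name for name, rec in records.items()
              if all(y in rec for y in years)]

# Average the surviving stations, year by year, to build the time series.
series = {y: sum(records[s][y] for s in continuous) / len(continuous)
          for y in years}

print(continuous)   # StationC is dropped
print(series)
```

The same filter re-run from each later start year would produce the family of time series the comment proposes.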

Re: richard verney (August 20, 2017 at 1:20 am)
To your charge Richard, James Hansen “doth protest too much” for my liking.

…a charge that has been bruited about frequently in the past year, specifically the claim that GISS has systematically reduced the number of stations used in its temperature analysis so as to introduce an artificial global warming. GISS uses all of the GHCN stations that are available, but the number of reporting meteorological stations in 2009 was only 2490, compared to [circa]6300 usable stations in the entire 130 year GHCN record. (Hansen et al. 2010)

He doesn’t address the problem (In that paper) to my satisfaction, because elsewhere in the literature it is made clear that the change in number and spatial distribution of station data is a source of error larger than the reported (Or purported!) trends.

“how do you calculate an anomaly when the sample set is never the same over time”
Because you don’t calculate the anomaly using a sample set. That is basic. You calculate each station anomaly from the average (1981-2010 or whatever) for that station alone. Then you can combine in an average, which is when you first have to deal with the sample set.
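The two-step procedure described here can be sketched as follows; station names, baselines and observations are invented for illustration:

```python
# Invented stations: a warm valley site and a cold summit site. Each
# anomaly is taken against that station's own baseline average first.
stations = {
    "valley": {"baseline": 15.0, "obs": 15.6},
    "summit": {"baseline": 5.0, "obs": 5.5},
}

# Step 1: per-station anomalies (no cross-station sample set involved yet).
anomalies = {name: d["obs"] - d["baseline"] for name, d in stations.items()}

# Step 2: only now are the anomalies combined into a regional average.
regional_anomaly = sum(anomalies.values()) / len(anomalies)

# Absolute temperatures differ by 10 degrees; anomalies are comparable.
print(anomalies)
print(regional_anomaly)
```

The sample-set question only enters at step 2, which is the distinction being drawn.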

richard verney – >> how do you calculate an anomaly when the sample set is never the same over time but instead it is constantly changing? <<
you take an average. giss uses ’51-’80, but the choice is arbitrary.

AndyG55

“0.56±0.05ºC”
RUBBISH! No way is the GISS error anywhere near that level.

Clyde Spencer

AndyG55,
Yes, I have read the Real Climate page that Kip linked, and the links that Gavin provides to explain why anomalies are used, and nowhere do I see an explanation for how the stated uncertainty is derived or an explanation of how it can be an order of magnitude greater precision than the absolute temperatures. My suspicion is that it is an artifact of averaging, which removes the extreme values and thus makes it appear that the variance is lower than it really is.

so write GS and ask

Geoff Sherrington

The postulate that global temperatures have not increased in the last 100 years is easily supported after a proper error analysis is applied.
People driven by a wish to find danger in temperature almost universally fail at proper error analysis. There is a good deal of scattered, incomplete literature using statistical approaches, 2 standard deviations and that type of talk, but this addresses the precision variable more than the accuracy variable. These two variables both act on the data, and both have to be estimated in the search for proper confidence limits to bound the total uncertainty.
This is not the place to discuss accuracy in the estimation of global temperature guesses, because that takes pages. Instead, I will raise but one ‘new’ form of error and note the need to investigate this type of error elsewhere than here in Australia. It deals with the transition from ‘liquid in glass’ (LIG) thermometry to the electronic devices whose Aussie shorthand is ‘AWS’, for Automatic Weather Station. These largely replaced LIG here in the 1990s.
The crux is in an email from the Bureau of Meteorology to one of our little investigatory group.
“Firstly, we receive AWS data every minute. There are 3 temperature values:
1. Most recent one second measurement
2. Highest one second measurement (for the previous 60 secs)
3. Lowest one second measurement (for the previous 60 secs)
Relating this to the 30 minute observations page: For an observation taken at 0600, the values are for the one minute 0559-0600”
When data captured at one-second intervals are studied, there is a lot of noise. Tmax, for example, could be a degree or so higher than the one-minute values around it. They seem to be recording (signal + noise) when the more valid variable is just signal. One effect of this method of capture is to enhance the difference between the high and low temperatures of the same day, adding to the meme of ‘extreme variability’, for what that is worth.
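The (signal + noise) point can be illustrated with a toy simulation. The 0.3-degree noise level is an assumption for illustration, not the BOM’s actual instrument specification:

```python
import random
import statistics

random.seed(1)

# Toy simulation: a steady "true" air temperature sampled once per second
# with instrument noise for one minute.
true_temp = 25.0
one_second = [true_temp + random.gauss(0.0, 0.3) for _ in range(60)]

t_max = max(one_second)               # what a one-second Tmax capture books
t_mean = statistics.mean(one_second)  # much closer to the underlying signal

# The max of 60 noisy samples sits above the mean: recording it as Tmax
# books (signal + noise) rather than signal alone.
print(f"1-s max = {t_max:.2f}, 1-min mean = {t_mean:.2f}")
```

The gap between the two printed values is entirely noise, since the simulated true temperature never changes.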
A more detailed description is at
https://kenskingdom.wordpress.com/2017/08/07/garbage-in-garbage-out/
https://kenskingdom.wordpress.com/2017/03/01/how-temperature-is-measured-in-australia-part-1/
https://kenskingdom.wordpress.com/2017/03/21/how-temperature-is-measured-in-australia-part-2/
This procedure is different in other countries, which means other countries are not collecting temperature in a way that will match ours here. That is an error of accuracy; it is large and it needs attention. Until it is fixed, there is no point to claims of a global temperature increase of 0.8 deg C/century, or whatever the latest trendy guess is. Accuracy problems like this and others combine to put a more realistic ±2 deg C error bound on the global average, whatever that means.
Geoff.

richard verney

But think of the very different response of a LIG thermometer, which could easily miss such temperature highs of such short-lived duration.
This is why retro-fitting with the same type of equipment used in the 1930s/1940s is so important if we are to assess whether there has truly been a change in temperature since the historic highs of that era.

hunter

Yes. This is a reasonable and low cost way to test the current vs. the past instruments. It also tests the justifications of those who change the past.
I pointed this out a few months ago but got nowhere with it. If you can think of a way to push the idea forward, Godspeed.

Geoff Sherrington

RV,
Experienced metrologists would agree with your test scheme. The puzzle is why it was not done before, officially. Maybe it was; I do not know. Thank you for raising it again.
As you know, it remains rather difficult to get officials to adopt such suggestions. If you can help that way, that is where effort could be well invested. Geoff

Geoff,
I have said this on here before. The best post by far was Pat Frank’s about calibration of instruments. All that needed to be said was in that. A lot of us, including yourself, are all talking about the same idiocy. It’s nice to have it demonstrated.

Geoff Sherrington

Nc75,
Where have you seen this raised before? Have you commented before on the methods different countries use to treat this signal noise problem with AWS systems? It is possible that the BOM procedure, if we read it correctly, could have raised Australian Tmax by one or two tenths of a degree C compared with USA since the mid 1990s. Geoff

Geoff
If I recall, Pat Frank’s paper looked at drift in electronic thermometers. It may be similar, at least in approach, to what you are talking about, but the general idea is that whatever techniques are used, they have to be seen in a broader context of repeatability and microsite characterisation: effects that appear to swamp tenths of a degree and approach full-degree variations.

Clyde Spencer

Geof,
Indeed, it is done slightly differently in the US. Our ASOS system collects 1-minute average temperatures in deg F, averages the five sets in the 5-minute record to the nearest deg F, converts to the nearest 0.1 deg C, and sends that information to the data center. http://www.nws.noaa.gov/asos/pdfs/aum-toc.pdf
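That reporting chain can be sketched as below; this is my reading of the summary above rather than the ASOS manual itself:

```python
from statistics import mean

# Assumed chain: five 1-minute averages in deg F -> 5-minute average ->
# nearest whole deg F -> converted and rounded to the nearest 0.1 deg C.
def asos_report(one_minute_f):
    five_min_avg = mean(one_minute_f)
    whole_f = round(five_min_avg)    # nearest whole degree F
    celsius = (whole_f - 32) * 5.0 / 9.0
    return round(celsius, 1)         # nearest 0.1 degree C

# Rounding to whole degrees F discards up to ~0.5 deg F before the
# finer-looking 0.1 deg C figure is ever produced.
print(asos_report([70.4, 70.4, 70.4, 70.4, 70.4]))  # -> 21.1
```

The tenth-of-a-degree Celsius output thus carries more apparent precision than the whole-degree Fahrenheit value it was derived from.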

Geoff Sherrington

Clyde,
Can we please swap some email notes on this: sherro1 at optusnet dot com dot au
A project in preparation is urgent, if that is OK with you. Geoff

Clyde Spencer

Kip,
You said, “BTW — The °F recorded temps are thus +/- 0.5 °F and the °C are all +/- 0.278 °C — just by the method.”
Strictly speaking, 0.5 °F is equivalent to 0.3 °C: when multiplying a constant of infinite precision (5/9) by a number with only one (1) significant figure, one is only justified in retaining as many significant figures as the factor with the fewest.
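The arithmetic behind this can be checked directly (rounding to one decimal place stands in for one significant figure in this particular case):

```python
# Exact conversion of the 0.5 deg F interval versus the figure one is
# justified in quoting under the significant-figures rule.
exact = 0.5 * 5.0 / 9.0      # 0.2777... deg C
quoted = round(exact, 1)     # one sig fig -> 0.3 deg C

print(exact, quoted)
```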

temp is obviously increasing, because (macro) ice is melting and sea level is rising.

John Soldier

Off topic somewhat:
Are you, like me, continually annoyed by the way the media (especially TV weather reporters) refer to the plurals of maximum and minimum temperatures as “maximums” and “minimums”?
This shows an ignorance of the English language, as any decent dictionary will confirm.
The correct terms are of course maxima and minima.
The various editors and producers should get their acts into gear and correct this usage.

±0.05ºC

While the atmospheric temperature profile varies across some 80 K in the troposphere alone at any given time, Gavin’s precision is high-quality entertainment.
http://rfscientific.eu/sites/default/files/imagecache/article_first_photo/articleimage/compressed_termo_untitled_cut_rot__1.jpg

Nik

Oh! We just love adjustments, and 2017 has just been adjusted upwards, ready for this year’s “Hottest Ever” headlines.
21 June 2017: 2017 90 108 112 88 88
Today: 2017 98 113 114 94 89 68 83
https://web.archive.org/web/20170621154326/https://data.giss.nasa.gov/gistemp/tabledata_v3/GLB.Ts+dSST.txt

DWR54

See the GISS updates page: https://data.giss.nasa.gov/gistemp/updates_v3/

August 15, 2017: Starting with today’s update, the standard GISS analysis is no longer based on ERSST v4 but on the newer ERSST v5.

adjustments lead to a lower trend. you should know this.

DWR54

This article raises a more fundamental issue and problem that besets the time series land based thermometer record, namely how do you calculate an anomaly when the sample set is never the same over time but instead it is constantly changing?

In the case of estimating temperature change over time, surely that’s an argument in favour of using anomalies rather than absolute temperatures?
Absolute temperatures at 2 or more stations or in a region might differ in absolute terms by, say, 2 degrees C or more, depending on elevation and exposure. That’s important if absolute temperatures are what you’re interested in (at an airport for example); but if you’re interested in how temperatures at each station differ from their respective long term averages for a given date or period, then anomalies are preferable.
Absolute temperatures might differ considerably between stations in the same region, but their anomalies are likely to be similar.
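A toy illustration of this robustness, with invented numbers: dropping a cold mountain station from the mix shifts the absolute mean by several degrees but barely moves the anomaly mean.

```python
# Invented example: three stations with (absolute temp, anomaly vs. each
# station's own baseline).
obs = {
    "coast":    (14.0, 0.5),
    "plain":    (12.0, 0.6),
    "mountain": (2.0, 0.5),   # cold high-elevation site
}

def means(names):
    temps = [obs[n][0] for n in names]
    anoms = [obs[n][1] for n in names]
    return sum(temps) / len(temps), sum(anoms) / len(anoms)

full_mean, full_anom = means(["coast", "plain", "mountain"])
sub_mean, sub_anom = means(["coast", "plain"])   # mountain station lost

# Absolute mean jumps by several degrees; anomaly mean moves by ~0.02.
print(f"absolute: {full_mean:.2f} -> {sub_mean:.2f}")
print(f"anomaly:  {full_anom:.2f} -> {sub_anom:.2f}")
```

This is the usual argument for anomalies when the station mix changes over time, though it does not by itself settle the uncertainty questions raised elsewhere in the thread.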

hunter

Well clearly the consensus solution is to change and discard past data that does not fit the present desired result.

Clyde Spencer

I think the point being made is that a standard baseline should be established (say 30 years before the influence of industrialization) and then that should be used as the standard by everyone, and not changed over time.

Peta of Newark

The BBC are telling us all about Cassini and its adventures at Saturn.
Nice.
But they (strictly the European Space Agency whose sputnik it is) have come up with this line:

“It’s expected that the heavier helium is sinking down,” he told BBC News. “Saturn radiates more energy than it’s absorbing from the Sun, meaning there’s gravitational energy which is being lost.

From here: http://www.bbc.co.uk/news/science-environment-40902774
Presumably this means Saturn is collapsing, or possibly falling into the Sun?
(Maybe the other way round innit, like how the Moon is going away as tides on Earth pull energy out of it)

omg, no

old construction worker

“…the difference between the current average temperature of a station, region, nation, or the globe and its long-term, 30-year base period, average…..”
‘…the difference between this current warm period average temperature of a station, region, nation, or the globe and its long-term, 30-year base period, average….’
There fixed