Update by Kip Hansen
Last week I wrote about UCAR/NCAR’s very interesting discussion on “What is the average global temperature now?”.
[Adding link to previous post mentioned.]
Part of that discussion revolved around the question of why current practitioners of Climate Science insist on using Temperature Anomalies — the difference between the current average temperature of a station, region, nation, or the globe and its long-term, 30-year base period, average — instead of simply showing us a graph of the Absolute Global Average Temperature in degrees Fahrenheit or Celsius or Kelvin.
Gavin Schmidt, Director of the NASA Goddard Institute for Space Studies (GISS) in New York, and co-founder of the award-winning climate science blog RealClimate, has come to our rescue to help us sort this out.
In a recent blog essay at RealClimate titled “Observations, Reanalyses and the Elusive Absolute Global Mean Temperature”, Dr. Schmidt gives us the real answer to this difficult question:
“But think about what happens when we try and estimate the absolute global mean temperature for, say, 2016. The climatology for 1981-2010 is 287.4±0.5K, and the anomaly for 2016 is (from GISTEMP w.r.t. that baseline) 0.56±0.05ºC. So our estimate for the absolute value is (using the first rule shown above) is 287.96±0.502K, and then using the second [the first and second rules have to do with estimating the uncertainties – see Gavin’s post], that reduces to 288.0±0.5K [2016]. The same approach for 2015 gives 287.8±0.5K, and for 2014 it is 287.7±0.5K. All of which appear to be the same within the uncertainty. Thus we lose the ability to judge which year was the warmest if we only look at the absolute numbers.”
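For readers who want to follow the arithmetic in that quote, here is a minimal sketch (my own, not Dr. Schmidt's code) of the root-sum-square combination that his "first rule" appears to describe, using the figures he quotes:

```python
import math

def combine_independent(u1, u2):
    """Root-sum-square combination of two uncertainties, assuming independence."""
    return math.sqrt(u1 ** 2 + u2 ** 2)

climatology, u_clim = 287.4, 0.5    # 1981-2010 baseline, K
anomaly_2016, u_anom = 0.56, 0.05   # 2016 GISTEMP anomaly w.r.t. that baseline

absolute_2016 = climatology + anomaly_2016
u_abs = combine_independent(u_clim, u_anom)

print(f"{absolute_2016:.2f} +/- {u_abs:.3f} K")  # 287.96 +/- 0.502 K
# Rounded to the precision of the dominant uncertainty (the "second rule"):
print(f"{absolute_2016:.1f} +/- {u_abs:.1f} K")  # 288.0 +/- 0.5 K
```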
You see, as Dr. Schmidt carefully explains for us non-climate-scientists, if they use Absolute Temperatures the recent years are all the same — no way to say this year is the warmest ever — and, of course, that just won’t do — not in “RealClimate Science”.
# # # # #
Author’s Comment Policy:
Same as always — and again, this is intended just as it sounds — a little tongue-in-cheek but serious as to the point being made.
Readers not sure why I make this point might read my more general earlier post: What Are They Really Counting?
# # # # #
What a tangled web we weave …….
Odd is it not that some fifty years ago the accepted standard for the world was 14.7C at 1313 Mb.
I just converted Mr Schmidt’s Kelvin figure that he calculates as the average, 287.8K = 14.65C, so in the last fifty years there has been virtually no change. I want my warming; it is as cold as a witch’s tit where I live.
It is not odd.
It is an embarrassment.
None of those titles claimed by Schmidt disguise the fact that Gavin Schmidt is an elitist who believes himself so superior that Gavin will not meet others as equals.
A lack of quality that Gavin Schmidt proclaims loudly and displays smugly when facing scientists; one can imagine how far superior Schmidt considers himself above normal people.
As further proof of Schmidt’s total lack of honest forthright science is Gavin’s latest snake oil sales pitch “climate science double-speak”.
Wayne Job demonstrates superlatively that no matter how Gavin and his obedient goons adjust temperatures, they are unable to hide current temperatures from historical or common sense comparisons.
Gavin should be permanently and directly assigned to Antarctica where Gavin can await his dreaded “global warming” as the Antarctica witch.
Sorry to be pedantic, but I believe that the pressure should have been 1013mb.
As an aside, it’s a real bitch when the inclusion of realistic error figures undermines one’s whole argument. This sort of subversive behaviour must be stopped!
14.7 is also air pressure in PSI at sea level! I’m 97% sure there’s some kind of conspiracy here…
Good point about the errors. Gavin shows the usual consensus abhorrence of tracking error.
If the climatology is known only to ±0.5 K and the measured absolute temperature is known to ±0.5 K, then the uncertainty in the anomaly is their root-sum-square = ±0.7 K.
There’s no avoidance of uncertainty by taking anomalies. It’s just that consensus climate scientists, apparently Gavin included, don’t know what they’re doing.
The anomalies will inevitably have a greater uncertainty than either of the entering temperatures.
Sorry, the Mb should read 1013. I do know that the temp was right; as an old flight engineer, those were the standard figures for engine and take-off performance.
…when we practice to receive – grants, lots and lots of taxpayer funded grants!
Stunning.
We live in a world of absolute temperature numbers, not long term averages. Averages have no social meaning.
Averages are a statistical method of trying to detect meaning when there is none.
….or to hide meanings that are …ahem…. inconvenient.
Climatology is about averages. To know, for example, the 30 year average temperature at a given location is useful for some purposes. Climatologists erred when they began to try to predict these averages without identifying the statistical populations underlying their models, for to predict without identifying this population is impossible.
NOTHING ever happens twice; something else happens instead. So any observation creates a data set with one element; the observation itself.
And the average value of a data set containing a single element is ALWAYS the value of that one element. So stick with the observed values; they are automatically the correct numbers to use.
G
Gavin should learn a little Math – specifically Significant Digits. If the climatology is to a precision of 0.1, then the Anomaly MAY NOT BE calculated to a precision greater than 0.1 degree. Absolute or Anomaly – both ought to show that the temperatures are the same.
I always wonder, if the Alarmists’ case is so strong, then why do they need to lie?
In postmodernism nothing is truth. Except postmodern consensus policy based science?
Santa
“postmodern consensus policy based science” is the revealed and frighteningly enforceable truth.
Disagree and – no tenure.
Out on your ear.
Never mind scientific method.
Sad that science has descended into a belief system, isn’t it??
Auto
On a somewhat different tack, check your local TV channel’s weather meteorologists. I detected a pattern in markets where I have lived: when the temperature is above the average over time, they almost always say that the “Temperature was above NORMAL today”, but when it is below they say that the “Temperature was below the AVERAGE” for this date.
Now subliminally we are receiving a bad news message when the temperature is not normal, but it comes across as somewhat non-newsworthy to be innocuously below an average. Do they teach them this in meteorology courses?
CAGW Hidden Persuaders? Check it out. Maybe it’s just my imagination.
Bill ==> I don’t do TV News or Weather — I live on the ‘Net (boats move around too much for regular TV watching). Maybe some TV Weather followers will chime in on this.
In parts of Australia I have heard TV weather persons say that monthly rainfall was “less than what we should have received” as if it were some sort of entitlement rather than just a calculated average of widely fluctuating numbers. I grimace when I hear it.
What I hear is a continuous reference to the ‘average’ temperature with no bounds as to what the range of ‘average’ is.
It is not nearly enough to say ‘average’ temperature for today is 25 C and not mention that the thirty years which contributed to that number had a range of 19-31. The CBC will happily say the temperature today is 2 degrees ‘above average’ but not say that it is well within the normal range experienced over the calibration period.
The use of an ‘anomaly’ number hides reality by pretending there is a ‘norm’ that ‘ought to be experienced’ were it not for the ‘influence’ of human activities.
All this is quite separate from the ridiculous precision claimed for Gavin’s numbers which are marketed to the public as ‘real’. These numbers are from measurements and the error propagation is not being done and reported properly.
crispin, the baseline is not the “norm.” it’s just an arbitrary choice to compare temperatures against. it can be changed at will. it hides nothing
crackers ==> “the baseline is not the “norm.” it’s just an arbitrary choice to compare temperatures against. it can be changed at will.” That, you see, is part of the problem — it is changed at will, often without making it clear that it has been changed or that differing baselines have been used. The MSM almost always confuses the baselines with the “norm” when communicating to the general public.
Well nuts ! the observed value IS the norm; it can never be anything else.
G
No, not your imagination. It’s to scare people, ie, the warm/cold is abnormal (Somehow) when it is perfectly normal. I am seeing this in Australian weather broadcasts more and more now.
I am a meteorologist…30yrs now. I cannot stand TV weather. I never watch it anymore as I do all my own forecasting myself. It’s catered to 7yr olds. It’s painful to watch. I need not listen to any of these dopes. No, I am not a TV weatherman.
I actually haven’t taken notice of the differences between how “above” and “below” average temps are referenced, but I have always abhorred the (frequent, and seemingly prevailing) use of the word “normal” in that respect.
As I like to say, “There IS no “normal” temperature – it is whatever it is.” What they are calling “normal” is an average temperature of a (fairly arbitrarily selected) 30-year period (and at one point they weren’t moving the reference period forward as they were supposed to, because they knew that doing so would raise the “average” temps, shrink the “anomalies,” and thereby undermine (they felt) the “belief” in man-made climate catastrophe).
I object to the word “anomaly” as well, because it once again suggests that there is something “abnormal” about any temperature that is higher or lower than a 30-year average. There IS NOTHING “ANOMALOUS” about a temperature that is not equal to ANY “average” of prior temperatures, which itself is nothing more than a midpoint of extremes. “Anomalies” are complete BS.
Great, revealing OP.
Wait, does that mean all the years are the “hottest ever” or none of them?
I note that Gavin states with certainty that it is uncertain and it is somewhat surprising that he does so.
JohnWho ==> If one reads the RC post carefully, it emerges that uncertainty only significantly affects Absolute Temperature — but anomalies can be magically calculated to a high degree of precision (even though the base periods are absolutes….)
If absolute temperatures carry uncertainties, why don’t anomalies? It seems to me that anomalies are usually less than the uncertainty and therefore are virtually equivalent to zero. So why are they allowed to use anomalies without revealing their corresponding uncertainties?
They do. Gavin states:
So he suggests that the error bounds of the anomalies are very small, only +/- 0.05°C. Whether one considers that small error bound reasonable is a different matter.
I wrote to Realclimate many years ago about this stupidity. I got back the usual bile. One and only time I looked at that site.
Sorry folks, but probably a dumb question from an ill educated oaf.
Being that Stevenson screens with thermometers were probably still being used in 1981, and for some time after, with, presumably, a conventional thermometer, surely observations of the temperature couldn’t possibly be accurate to 0.5K i.e. 287.4±0.5K.
Nor do I believe it credible that every Stevenson screen was well maintained, and we know about the siting controversy. And I suspect not all were properly monitored, with the office tea boy being sent out into the snow to take the measurements, myopic technicians wiping rain off their specs, or the days when someone forgets and just has a guess.
And I don’t suppose for a moment every Stevenson screen, at every location, was checked once every hour, possibly four times in 24 hours, or perhaps 8 times, in which case there are numerous periods when temperatures can spike (up or down) before declining or rising.
It therefore doesn’t surprise me one bit that with continual electronic monitoring we are seeing ‘hottest temperatures evah’ simply because they were missed in the past.
Sorry, a bit of a waffle.
I do from time to time look at the site, but I understand that comments are often censored or dismissed without proper explanation. I have posted a comment (awaiting moderation) inquiring about the time series data set and what the anomaly really represents. It will be interesting to see whether it gets posted and answered.
No apologies necessary. Nor is your question unreasonable and it is certainly not “dumb”; except to CAGW alarmists hiding the truth.
Everyone should read USA temperature station maintenance staff writings!
N.B.;
At no point do the maintenance or NOAA staff ever conduct side by side measurements to determine before/after impacts to data.
Stations are moved,
sensor housings are replaced,
sensors are replaced and even “upgraded”,
data transmission lines and connections are replaced, lengthened, shortened, crimped, bent, etc.,
data handling methods and code are changed,
etc.
None of these potential “temperature impacts” are ever quantified, verified, or introduced into Gavin’s mystical error bounds theology.
why current practitioners of Climate Science insist on using Temperature Anomalies….
…it’s easier to hide their cheating
Also, it becomes obvious that the amounts of difference they are screaming about are below the limits of detection to a person without instrumentation.
BINGO!
“Thus we lose the ability to judge which year was the warmest if we only look at the absolute numbers.”
And of course, you lose the ability to scare people into parting with their money.
Snake Oil Salesman: The phrase conjures up images of seedy profiteers trying to exploit an unsuspecting public by selling it fake cures.
So…in other words, if the actual temperatures won’t make it “warmest year ever!”, we’ll use something else to make it the “swarmiest year ever!”.
(http://www.urbandictionary.com/define.php?term=Swarmy)
The proper use of anomalies is well known and the reasons are sound. I would have thought that the use of anomalies would be entirely uncontroversial to the fairly astute readership at WUWT.
This appears to be attempting to make an issue where there is none.
It’s a Nothingburger.
Fake News.
TonyL ==> I am having a little fun with this — freely admitted in both posts. Almost the entire post is made up of Dr. Schmidt’s exactly quoted words from the RealClimate site — which I paraphrase once more at the end.
Hardly fake anything — really maybe too real.
If you have questions about why Dr. Schmidt says what he says, his post is still active at RC here — you can ask him in comments there.
Agreed.
“The proper use of anomalies is well known and the reasons are sound. ”
Agreed.
So what is the serious point being made? That you don’t understand why anomalies are used?
” All of which appear to be the same within the uncertainty”
Gav would do better to try to explain why he is averaging (i.e., adding) temperatures of land and sea, which are totally different physical media and thus not additive:
https://climategrog.wordpress.com/category/bad-methods/
“So what is the serious point being made? That you don’t understand why anomalies are used?”
That appears to be the case. I suggest anyone who finds this amusing go and read the article at realclimate with an open mind and you may then understand why anomalies are used. Ho ho. As if that will happen! We can all share in the joke.
Actually the whole of climate science would do well to explain why they use the unreliable, almost nonphysical concept of temperature to do anything useful, since the actual physical parameter is energy. Temperatures represent vastly different energies depending on the phase of matter and the medium in which they are measured: for example, between a dry day and a humid day, between smog and clear air, or between ozone and oxygen. The assumption of constant relative humidity alone makes the whole thing a pseudoscience.
Bobl, it is so they can take a high-energy maximum daily temperature and directly add it to a low-energy minimum temperature, then divide by two as if the two were equivalent, to arrive at an average temperature without proper weighting.
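To put a rough number on Bobl’s point that the same air temperature can represent very different energy content, here is a small sketch using the textbook approximation for the specific enthalpy of moist air (the formula and constants are standard psychrometrics, not taken from the post; the 30 °C day with 10% or 90% humidity is invented for illustration):

```python
import math

def moist_enthalpy(T_c, rel_humidity, pressure_hpa=1013.25):
    """Approximate specific enthalpy of moist air in kJ per kg of dry air:
    h ~= cp_dry*T + w*(L_v + cp_vap*T), with T in deg C and w the mixing ratio."""
    e_sat = 6.112 * math.exp(17.62 * T_c / (243.12 + T_c))   # Magnus formula, hPa
    e = rel_humidity * e_sat
    w = 0.622 * e / (pressure_hpa - e)                       # kg vapour per kg dry air
    return 1.006 * T_c + w * (2501.0 + 1.86 * T_c)

print(round(moist_enthalpy(30.0, 0.10), 1), "kJ/kg")   # dry 30 C day,   roughly 37
print(round(moist_enthalpy(30.0, 0.90), 1), "kJ/kg")   # humid 30 C day, roughly 92
```

Same thermometer reading, roughly two and a half times the energy content per kilogram of air.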
When is the last time you heard a Warmist talking about maximum temperatures? It’s taboo to discuss those in polite society.
In terms of statistics, the point is valid. To compare a “spot” temperature against an “average” (like a 30 year norm) ignores the uncertainty in the “average.” This is similar to the difference between a “confidence interval” and a “prediction interval” in regression analysis. The latter is much greater than the former. In the first case one is trying to predict the “average.” In the second case one is trying to predict a specific (“spot” in the jargon of stock prices) observation.
Implicitly, an anomaly is trying to measure changes in the average temperature, not changes in the actual temperature at the time the measurement is taken. If the anomaly in June of this year is higher than the anomaly in June of last year, that does not mean that the June temperature this year was necessarily higher than the June temperature last year. It means that there is some probability that the average temperature for June has increased, relative to the (usually) 30 year norm. But in absolute terms that does not mean we are certain that June this year was warmer than June last year.
Anomalies are okay, if understood and presented for what they are: a means of tracking changes in average temperature. But that is not how they are used by the warmistas. The ideologues use them to make claims about “warmest month ever,” and that is statistical malpractice.
Basil
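blcjr’s distinction between a confidence interval (for the average) and a prediction interval (for a single observation) can be seen numerically. Below is a hedged sketch using ordinary least squares on made-up data; statsmodels’ get_prediction returns both intervals, and nothing here comes from the actual temperature record:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
x = np.arange(30, dtype=float)                  # thirty invented "years"
y = 0.02 * x + rng.normal(0.0, 0.15, x.size)    # small trend plus weather-like noise

fit = sm.OLS(y, sm.add_constant(x)).fit()
pred = fit.get_prediction(sm.add_constant(np.array([30.0, 31.0])))
frame = pred.summary_frame(alpha=0.05)

# Interval for the *mean* response (narrow) vs. a *single new observation* (wide):
print(frame[["mean_ci_lower", "mean_ci_upper"]])
print(frame[["obs_ci_lower", "obs_ci_upper"]])
```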
blcjr: [anomalies are] “a means of tracking changes in average temperature”. This is exactly what the CAGW crowd quotes. You are feeding their assumption. I know you are aware of the difference, but the normal person is not; they simply read your text and say, “Oh, the normal temperature is going up or down”.
I usually try to explain the anomalies as a differential, that is, an infinitely small section of a line with the magnitude and direction of the change. The width of the change is no wider than a dot on the graph. This seems to make more sense to most people.
Give us a link.
rd50 ==> Sorry — who? give you a link to what?
Actually it isn’t uncontroversial. One problem does lie with the uncertainty and its distribution. Another with working with linear transformations of variables in non-linear systems.
TonyL
It gets better-
“[If we knew the absolute truth, we would use that instead of any estimates. So, your question seems a little difficult to answer in the real world. How do you know what the error on anything is if this is what you require? In reality, we model the errors – most usually these days with some kind of monte carlo simulation that takes into account all known sources of uncertainty. But there is always the possibility of unknown sources of error, but methods for accounting for those are somewhat unclear. The best paper on these issues is Morice et al (2012) and references therein. The Berkeley Earth discussion on this is also useful. – gavin]” (Dec 23, 2014 same thread)
If we KNEW the truth (but we don’t) we’d use that. So we model the KNOWN errors, but we have no idea if we’ve got all of the errors at all, and how we account for the unknown errors isn’t clear.
BUT NOAA said “Average surface temperatures in 2016, according to the National Oceanic and Atmospheric Administration, were 0.07 degrees Fahrenheit warmer than 2015 and featured eight successive months (January through August) that were individually the warmest since the agency’s records began in 1880.”
Not even a HINT that it’s an “estimate”, or that it’s not the absolute truth, or that the margin of error…+/- 0.5K is WAYYYY bigger than the 0.07 F ESTIMATE.
Perhaps this is why the “fairly astute” readership at WUWT has never viewed the use of “anomalies” in a positive manner or “absolutely” agreed with the idea that they are even a close approximation to Earth’s actual temperature.
Yes indeed, 0.07 +/- 0.5 doesn’t appear to be very significant does it 🙂
Just think of it as a statistical rug under which to sweep tangled web weaving.
jorgekafkazar-
Right!
And yet they say “the Earth’s temperature is increasing” instead of “the Earth’s anomalies are increasingly warmer” etc. Al Gore says “the Earth has a temperature” instead of “The Earth has a higher anomaly”. And since Gav and the boys ALL ADMIT that it’s virtually impossible to know “exactly” what Earth’s actual global average temperature is, and that Earth is not adequately covered with thermometers, and that the thermometers we DO have are not in any way all properly sited and maintained and accurate… why in the crap do we let them get away with stating that “average surface temperatures were 0.07 F warmer” than a prior year? Why would any serious “Scientist” with any integrity use that kind of language when he’s really talking about something else??
Oh yeah…..rug weaving. 🙂
Aphan: that “average surface temperatures were 0.07 F warmer” than a prior year
If only they did actually say that. They don’t even say that. It’s just “hottest year ever” with no quantification, usually.
TonyL,
Yes, at least some of us are aware of the ‘proper’ use of anomalies. At issue is whether anomalies are being used properly. Gavin even admits that frequently they are not: “This means we need to very careful in combining these two analyses – and unfortunately, historically, we haven’t been and that is a continuing problem.”
Very True.
A closely related issue:
The ongoing story of the use, misuse, and abuse of statistics in ClimateScience! is the longest running soap opera in modern science.
The saga continues.
TonyL: I disagree that the use of anomalies is well known.
My objection is that the reporting of data as anomalies, like reporting averages without the variance, standard deviation or other measure of dispersion, simply reduces the value of the information conveyed. It eliminates the context. It is not a common practice in statistical analysis in engineering or most scientific fields. None of my statistics textbooks even mentions the term. It simply reduces a data set to the noise component.
While it seems to be common in climate science, the use of the term anomaly implies abnormal, irregular or inconsistent results. But, as has been extensively argued here and elsewhere, variation in the temperature of our planet seems to be entirely normal.
That said, I do get that when analyzing temperature records it is useful to look at temperatures for individual stations as deviations from some long term average. E.g. if the average annual temp. in Minneapolis has gone from 10 C (long term average) to 11 C and the temp. in Miami has gone from 20 to 21 C, we can say both have warmed by 1 C.
Of course, if one averages all the station anomalies and all the station baseline temperatures the sum would be identical to the average of all the actual measured temperatures.
But it is another thing to only report the average of the ‘anomalies’ over hundreds or thousands of stations without including any information about the dispersion of the input data. Presenting charts showing only average annual anomalies by year for 50, 120, 1000 years is pretty meaningless.
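To make that last point concrete, here is a small sketch with invented station numbers: the average anomaly by itself says nothing about how widely the stations scatter around it.

```python
import numpy as np

rng = np.random.default_rng(1)
baselines = rng.normal(14.0, 8.0, 1000)               # invented 30-yr station means, deg C
this_year = baselines + rng.normal(0.3, 1.5, 1000)    # invented single-year station values

anomalies = this_year - baselines
print(f"mean anomaly:            {anomalies.mean():+.2f} C")
print(f"std dev across stations: {anomalies.std(ddof=1):.2f} C")
# A headline quoting only the first number hides the fact that the
# second is several times larger.
```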
The “Fake news and nothingburger” start right with Gavin, his mouth, his writing and Gavin’s foul treatment of others.
What absurd usage of “well known” and “the reasons are sound”, TonyL.
Just another fake consensus Argumentum ad Populum fallacy.
Use of anomalies can be proper under controlled conditions for specific measurements,
• When all data is kept and presented unsullied,
• When equipment is fully certified and verified,
• When measurements are parallel recorded before and after installation and impacts noted,
• When temperature equipment is properly installed everywhere,
• When temperature equipment installation represents all Latitudes, Longitudes, elevations, rural, suburban and urban environments,
• When temperatures and only temperatures are represented, not some edited version of the data; no fill-in, smudging or other data imitation method is used.
Isn’t it astonishing that “adjustments”, substitutions, deletions, or data creation based on distant stations introduce obvious error bounds into temperature records; yet 0.5K is the alleged total error range?
Error bounds are not properly tracked, determined, applied or fully represented in end charts.
Gavin and his religious pals fail to track, qualify or quantify error rates, making the official NOAA approach anti-science, anti-mathematical and anti-anomaly. NOAA far prefers displaying “snake oil”, derision, elitism, egotism and utter disdain for America and Americans.
“Double speak” is far too nice a description for Gavin and NOAA misrepresented temperatures. Climastrologists’ abuse of measurements, data keeping, error bounds and data presentation would bring criminal charges and civil suits if used in any industry producing real goods Americans depend upon.
Kip – good post!
The REAL answer of course is normally called ‘success testing’. Using this philosophy the test protocol – in this case the way the raw data is treated/analyzed – is chosen in order to produce the kind of result desired. NOT an analysis to find out if the temperatures are warmer, colder, or the same but to produce results that show there is a warming trend.
The usual way of detecting this success-testing phenomenon is to read the protocol and see just how much scientific technobabble is there (think of the Stargate TV series). The more technobabble the less credible the result.
This is what is really going on. Station selection, data selection, methodology selection allows the gate-keepers of the temperature record and the global warming religion, the ability to produce the number they want.
Think of it as someone standing over the shoulder of a data analyst in the basement of the NCDC each month saying “Well, what happens if we pull out the 5 Africa stations in the eastern side? How about we just add in that station with all the warming errors? Let’s adjust the buoys up and pretend it is because of ship engine intakes that nobody can/will check? Why don’t we bump up the time of observation bias adjustment and make a new adjustment for the MMTS sensors? Show me all the stations that have the highest warming? Let’s just drop those 1500 stations that show no warming. The South American stations are obviously too low by 1.0C. Just change them and call it an error.
We’ll call it version 4.4.3.2.”
…which explains why 50 percent of the data is often not used, made up, extrapolated.
Gavin had an analogy. If you’re measuring a bunch of kids to see who’s the tallest, running a ruler head to foot, you can get a good answer. If you measure the height of their heads above sea level, there is a lot more uncertainty. So which would you do?
Nick ==> I’m afraid Dr. Schmidt’s “analogy” is crackers.
We want to find the difference between the heights (lengths, really) of the five boys in the class — which represent average temperatures of five years.
The current Anomaly Method is as follows — they measure all the tops of the heads of the five boys as “elevations above sea level”, then subtract from those elevations of the top of each kid’s head the common “base elevation” for the room, which is the elevation above sea level of the classroom floor, report these remainders as the “anomalies from the floor” of each “kid”, then compare the kids’ anomalies.
That is, of course, nutty.
If they want to know the difference in the length of each kid (the proper biological term for “height” of children), they only need have them lie down on an exam table, feet against the foot rest, and run the measuring rule down to the top of their head, arriving at each kid’s length, then compare them. Length of children, boys or girls, has nothing to do with sea level or elevations — it is discernible from direct measurement — as is, of course, surface air temperature at a weather station — discernible by direct measurements — which can be compared with one another, year to year.
If the years are then all the same, within the uncertainties, then the years can and should be considered all the same.
elevation above sea level of the classroom floor….
…and then make adjustments for the weight of each child…..because they are making the floor sink
To continue the analogy, what people want to know is ***not*** which kid is tallest, but rather which kid is highest above sea level, allowing for the possibility that the “sea level” — that is, the global absolute temperature — may be changing over time (day by day and year by year) in a way that is very difficult to measure accurately.
No, the best way is to measure their height using low orbit satellite range finding, whilst getting the kids to jump up and down on a trampoline and measure the reflection off the surface of the trampoline at the bottom of the movement. This is accurate to within +/- 1mm as has been established for sea level measurements.
Cohen & Greg ==> Yes, that’s the method used to measure Global Average Sea Level. Weird, huh?
And yet actual absolute measurements are better than statistical output, which is pure fantasy. It’s not a temperature anomaly, it’s a statistical anomaly, which requires a “leap of faith” to accept as a temperature anomaly when talking GISS GAMTA.
NS,
The primary uncertainty is introduced by adding in the elevation above sea level. Neither sea level nor the ground they are standing on is known with the same accuracy or precision as the distance between their feet and hair. Therein lies the problem with temperature anomalies. We aren’t measuring the anomalies directly (height) but obtaining them indirectly from an imperfectly known temperature baseline!
“Neither sea level or the ground they are standing on is known with the same accuracy or precision as the distance between their feet and hair.”
Exactly. And that is the case here, because we are talking not about individual locations, but the anomaly average vs absolute average. And we can calculate the anomaly average much better, just as we can measure better top to toe.
It has another useful analogue feature. Although we are uncertain of the altitude, that uncertainty does not actually affect relative differences, although that isn’t obvious if you just write it as a±b. The uncertainty of the absolute average doesn’t affect our knowledge of one year vs another, say, because that component of error is the same for both. So if you unwisely say that 2016 was 14.7±1, and 2015 was 14.5±1 (numbers made up for this example), then you still know that 2016 was warmer than 2015. The reason is that you took the same number 14.0±1 (abs normal), and added the anomalies of 0.7±0.1 and 0.5±0.1. The normal might have been 13 or 15, but 2016 will still be warmer than 2015.
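Nick’s claim, that an error shared by both years drops out of the comparison, can be checked with a toy Monte Carlo using the made-up numbers in his comment. This illustrates his argument as stated; whether the baseline error really is identical for both years is exactly what other commenters here dispute.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000
shared_normal_error = rng.normal(0.0, 1.0, n)   # error in the absolute "normal"
anom_2016 = 0.7 + rng.normal(0.0, 0.1, n)       # independent anomaly errors
anom_2015 = 0.5 + rng.normal(0.0, 0.1, n)

abs_2016 = 14.0 + shared_normal_error + anom_2016
abs_2015 = 14.0 + shared_normal_error + anom_2015

print(round(abs_2016.std(), 2))                 # ~1.0, dominated by the shared term
print(round((abs_2016 - abs_2015).std(), 2))    # ~0.14, the shared term cancels
```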
You clearly have a different understanding of “error” than I do, Nick.
You wrote: “So if you unwisely say that 2016 was 14.7±1, and 2015 was 14.5±1 (numbers made up for this example), then you still know that 2016 was warmer than 2015.”
I would say that the “real value” of the 2016 temperature could be anywhere from 13.7 to 15.7 and “real value” of the 2015 temperature could be anywhere from 13.5 to 15.5. Since the temperature difference between 2015 & 2016 is well within the error range of both temperatures it’s impossible to know which year is warmer or cooler.
That’s what I remember from my first year Physics Prof, some 50 years ago. But maybe Physics has “evolved” since then. :))
TheOtherBob ==> I am afraid that you are right — though you may be misunderstanding what Nick means to say here. See my reply to him just below.
Nick Stokes ==> (if your ancestors are from Devon, England, we may be related).
“we can calculate the anomaly average much better, just as we can measure better top to toe.” My objection to this assertion is that no one is measuring an anomaly — the ‘toe to head’ number is an actual physical measurement, with known/knowable original measurement error margins. The ‘anomaly average’ is a calculated/derived number based on two uncertain measurements — the uncertainty of the long-term average of the base period and the uncertainty of the measurement of today’s/this years’s average temperature. If the uncertainty of the base period figure is +/- 0.5°C and the uncertainty of this years average temperature is +/- 0.5°C, then the uncertainties must be ADDED to one another to get the real uncertainty of the anomaly. This gives anomalies with a known uncertainty of +/- 1.0°C each. Averaging these anomalies does not remove the uncertainty — it remains +/- 1.0°C for the resultant average.
The idea that an average of anomalies has a smaller original measurement error than the original measurements is a fallacy.
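For reference, the two conventions being argued over in this sub-thread give different numbers. A minimal sketch, assuming the stated ±0.5 °C values are either hard worst-case bounds (Kip’s linear addition) or independent one-sigma errors (the root-sum-square rule quoted earlier in the thread):

```python
import math

u_base, u_year = 0.5, 0.5    # stated uncertainties of baseline and yearly value, deg C

worst_case = u_base + u_year                       # linear / interval arithmetic
quadrature = math.sqrt(u_base ** 2 + u_year ** 2)  # root-sum-square, independent errors

print(f"worst-case bound on the anomaly: +/- {worst_case:.1f} C")   # +/- 1.0
print(f"quadrature (independent errors): +/- {quadrature:.2f} C")   # +/- 0.71
# Which convention applies depends on whether the two errors are independent
# and on whether the stated +/- values are bounds or standard errors.
```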
Thanks Kip. Yes, my thoughts exactly. I didn’t want to repeat the point I made in my first post about adding the errors to get the anomaly error but you covered it most eloquently. Thanks for starting a very interesting discussion.
Kip,
“if your ancestors are from Devon”
None from Devon, AFAIK. Lots from Wilts, Glos.
Nick ==> If you want you can email me your oldest generation information and I’ll see if I can find any common ancestors. my first name at the domain i4 decimal net
“I would say that the “real value” of the 2016 temperature could be anywhere from 13.7 to 15.7 and “real value” of the 2015 temperature could be anywhere from 13.5 to 15.5”
But not independently. If 2016 was at 13.7 because the estimate of normal was wrong on the low side (around 13), then that estimate is common to 2015, so there is no way that it could be 15+.
There are many things that can’t be explained by what you learnt in first year physics.
I don’t know what point you’re making in your comment.
And there are many things that Gavin & Co. do that can’t be explained by anyone – at least in a way that makes sense to most people. :))
No problem if all 5 boys are standing on the same level platform … but WE know that the platform is not level !
One of the kids puts his hair in a bun.
There is another analogy. This morning my wife asks: What’s the outside temperature today? My answer is: the temperature anomaly is 0.5 K. When I add that she needs no new clothes, I will run into problems that day.
Nor will she nicely ask what the outside temperature is, again.
NOAA should reap equal amounts of derision for their abuse of anomalies.
What if 60% of the kids are not measured, Nick? Does Gavin just make it up?
Suppose that we have a data set: 511, 512, 513, 510, 512, 514, 512 and the accuracy is +/- 3. The average is 512. The anomalies are: -1, 0, +1, -2, 0, +2, 0 and the accuracy is still +/- 3.
I don’t understand how using anomalies lets us determine the maximum any differently than using the absolute values. There has to be some mathematical bogusness going on in CAGW land. I suspect they think that if you have enough data it averages out and gives you greater accuracy. I can tell you from bitter experience that it doesn’t always work that way.
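The example above written out directly (the commenter’s numbers; the baseline is treated here as if it were exactly known, which is itself one of the points in dispute elsewhere in this thread):

```python
data = [511, 512, 513, 510, 512, 514, 512]
baseline = sum(data) / len(data)              # 512.0
anomalies = [x - baseline for x in data]      # [-1.0, 0.0, 1.0, -2.0, 0.0, 2.0, 0.0]

# Each reading carries a +/- 3 uncertainty; subtracting a constant shifts the
# values but does nothing to that +/- 3.
print(anomalies, "each +/- 3")
```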
But if you ADD the uncertainties together, you get zero!
Here’s the appropriate “world’s best practice” algorithm:
1. Pick a mathematical operator (+, -, /, *, sin, cos, tan, sinh, Chebychev polynomial etc.)
2. Set uncertainty = 0
2a. Have press conference announcing climate is “worse than originally thought”, “science is settled” and “more funding required.”
3. Calculate uncertainty after applying operator to (homogenised) temperature records
4. Is uncertainty still zero?
5. No, try another operator.
6. go back to 3 or, better yet, 2a.
The sharp-eyed will note the above algorithm has no end. As climate projects are funded on a per-year basis, this ensures the climate scientist will receive infinite funding.
Thank you Bob!
My math courses in Engineering and grad studies (stats, linear programming, economic modelling, and surprising to me the toughest of all, something called “Math Theory”) were 50 years ago. But the reasoning that somehow anomalies are more precise or have less uncertainty than the absolute values upon which they were based set off bells and whistles in my old noggin. I was very hesitant though to raise any question for fear of displaying my ig’nance..
Maybe both of us are wrong, but now I know I’m in good company. 🙂
Me too !
“The average is 512. The anomalies are: -1, 0, +1, -2, 0 +2, 0”
But you don’t form the anomalies by subtracting a common average. You do it by subtracting the expected value for each site.
“how using anomalies lets us determine the maximum”
You don’t use anomalies to determine the maximum. You use it to determine the anomaly average. And you are interested in the average as representing a population mean, not just the numbers you sampled. The analogy figures here might be
521±3, 411±3, 598±3. Obviously it is an inhomogeneous population, and the average will depend far more on how you sample than how you measure. But if you can subtract out something that determines the big differences, then it can work.
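A toy version of Nick’s inhomogeneous-population point, with invented numbers: if the large known site-to-site differences are subtracted first, the average of what remains depends much less on which sites happen to be sampled.

```python
import numpy as np

rng = np.random.default_rng(3)
site_means = np.array([521.0, 411.0, 598.0, 470.0, 550.0, 430.0])  # invented "climatologies"
true_shift = 2.0                                                   # common change to detect

def sample_average(use_anomaly, k=3):
    """Average over a random subset of k sites, each reading with +/-3 noise."""
    idx = rng.choice(site_means.size, size=k, replace=False)
    readings = site_means[idx] + true_shift + rng.normal(0.0, 3.0, k)
    return (readings - site_means[idx]).mean() if use_anomaly else readings.mean()

print([round(sample_average(False), 1) for _ in range(5)])  # swings by tens with the sample
print([round(sample_average(True), 1) for _ in range(5)])   # scatters only a few degrees around 2.0
```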
That’s what you say. Here’s what Dr. Schmidt said:
My example is a simplified version of the above. If you think Dr. Schmidt erred, that’s between you and him.
“the accuracy is still +/- 3.”
Of course it is. But what climate science does is to re-calculate the error statistically from the anomaly and come to the absurd conclusion that the error changed from 0.5 to 0.05. The nonsense is that averaging reduces the variance and gives the misleading impression that it provides a quick way to reduce error. And it does in very specific circumstances. Of which this is not one.
Extra! EXTRA! Read all about it! Gavin Schmidt of NASA ADMITS that there has been NO statistically significant CHANGE IN EARTH’S ABSOLUTE TEMPERATURE in the last 30 years!!!
I’m in denial. A climate scientist actually told the truth… kind’a… sort’a… maybe… in a convoluted way? I don’t believe it. 🙂
He told the truth, and then rationalized why that truth is completely unimportant to the actual “science” involved in climate science. Because we ALL know that science is about approximations, estimates, conjectures, ideology, variety, inclusiveness, personal interpretations, pizza parties, casual Fridays (or should I say “causal” Fridays….harharhar), unicorns, pink fuzzy bunny slippers, the flying spaghetti monster and The Wheel of Climate. And if you don’t like unicorns or pizza parties, you’re a hating-hate-hater-denier and should be put to death.
ISIS is more tolerant.
“Because we ALL know that science is about approximations, estimates, conjectures, ideology, variety, inclusiveness, personal interpretations, pizza parties, casual Fridays (or should I say “causal” Fridays….harharhar), unicorns, pink fuzzy bunny slippers, the flying spaghetti monster and The Wheel of Climate.”
What happened to the rainbows, fairy dust and hockey sticks?
“…hating-hate-hater-denier…”
You forgot lying, hypocritical, sexist, egotistical, homophobic, misogynist, deplorable bigot. :))
Thanks SMC….I knew I was forgetting something… 🙂
“NO statistically significant CHANGE IN EARTH’S ABSOLUTE TEMPERATURE in the last 30 years”
Earth’s Absolute Temperature has changed by roughly 4°C in every one of those last 30 years.
Surely that is statistically significant. 🙂
What is the sensitivity of the measuring device, and what are the significant figures? Can an average of thousands of measurements accurate to a tenth of a degree be more accurate than each individual measuring device? I am asking an honest question that someone here can answer accurately. We learned significant figures in chemistry, but wouldn’t they also apply to these examples? How accurate are land based temp records versus the satellite measuring devices? This has been a central question for me in all of this “warmest ever” hoopla, and I would appreciate a good explanation.
Cold ==> There are answers to those questions…but not in this essay. One hint about air temperatures is that before the digital age, they were measured to “the nearest whole degree”. That means that each recorded temperature represented all the possible temperatures (say we recorded 72) between 71.5 and 72.5 (one of the .5s would be excluded) — a range. No amount of averaging gets rid of the +/- 0.5 of those measurements. ALL subsequent calculations using those measurements must have the +/- 0.5 attached when the calculations are done. Results can be no more accurate/precise than the original measurements. One must ADD to the minimum uncertainty of the original measurement range the uncertainties involved in thermometers that were seldom (if ever) re-calibrated, not standardized in the first place, etc.
Kip,
To compound that, in the sixties I was taught that, at least in Engineering, there existed MANY decision rules about whether to round a “5” up or down if it was the last significant digit, and that those recording data often failed to specify which rule they used. We were instructed to allow for that.
I don’t think Wiley Post or Will Rogers gave two hoots about how to round up or down fractional temperatures at their airstrips in the 20’s or early 30’s.
Why modern “Climate Scientists” assume that those who recorded temperature at airports or agricultural stations in 1930 were aware that those figures would eventually be used to direct the economies of the world is typical of the “history is now” generation.
Kip,
The automated weather stations (ASOS) are STILL reading to the nearest degree F, and then converting to the nearest 0.1 deg C.
Those numbers were not anywhere near that good. How often were thermometers calibrated? Were they read with verniers or magnifiers? What did they use to illuminate thermometers for night time readings? Open flames? And don’t forget all of the issues that Anthony identified with his work on modern weather observation equipment.
Clyde ==> Give that to me again? You are saying that the digital automatic weather stations read the temperature to the nearest whole degree (F or C?) and then do what with it? Convert it to 0.1 of a degree? How the heck do they do that?
Walter ==> I have stood at one of the old style Stevenson screen weather stations (Santo Domingo, Dominican Republic).. The Automated digital stations kept getting blown away by hurricanes and storms, but they keep up the screen and the glass thermometer inside of it.
The elderly meteorologist explained exactly how they took the readings. Open the box, look at the thermometer, write down to the nearest degree. He explained that the shorter men were supposed to stand on the conveniently located concrete block so that their eye would be more or less level with the mercury in the thermometer (angle of viewing is somewhat important) but that they seldom did, for reasons of pride. Thus readings by short guys tended high.
The temperature data was and is recorded to the first place of decimal. The adjustment is carried out as: 33.15 [0.01 to 0.05] as 33.1, 33.16 as 33.2, 33.25 [0.05 to 0.09] as 33.3. This is also followed in averaging.
Dr.S. Jeevananda Reddy
Kip,
See the ASOS user’s guide link here: https://wattsupwiththat.com/2017/04/12/are-claimed-global-record-temperatures-valid/
Interesting specification from the ASOS description:
http://www.nws.noaa.gov/asos/aum-toc.pdf
Temperature measurement: From -58F to +122F RMS error=0.9F, max error 1.8F.
“Once each minute the ACU calculates the 5-minute average ambient temperature and dew point temperature from the 1-minute average observations (provided at least 4 valid 1-minute averages are available). These 5-minute averages are rounded to the nearest degree Fahrenheit, converted to the nearest 0.1 degree Celsius, and reported once each minute as the 5-minute average ambient and dew point temperatures. All mid-point temperature values are rounded up (e.g., +3.5°F rounds up to +4.0°F; -3.5°F rounds up to -3.0°F; while -3.6°F rounds to -4.0°F).”
This is presumably adequate for most meteorological work. I’m not sure how we get to a point where we know the climate is warming but it is within the error band of the instruments. Forgive me I’m only a retired EE with 40+ years designing instrumentation systems (etc).
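For what it’s worth, here is one reading of the ASOS rounding rule quoted above, as a sketch (not NOAA’s code): the 5-minute average is rounded to the nearest whole °F with midpoints rounded up, then converted and reported to the nearest 0.1 °C.

```python
import math

def asos_style_report(avg_f):
    """Round to the nearest whole deg F with ties rounded toward +infinity,
    then convert and round to the nearest 0.1 deg C, per the quoted text."""
    whole_f = math.floor(avg_f + 0.5)          # +3.5 -> 4, -3.5 -> -3, -3.6 -> -4
    celsius = (whole_f - 32.0) * 5.0 / 9.0
    return whole_f, round(celsius, 1)

for t in (3.5, -3.5, -3.6, 71.7):
    print(t, "->", asos_style_report(t))       # e.g. 71.7 -> (72, 22.2)
```

Reporting in tenths of a degree Celsius after rounding to a whole degree Fahrenheit implies a display precision finer than the underlying resolution, which seems to be the point being made in this sub-thread.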
Clyde and others ==> I think I have this figured out. Last time I downloaded a data set of station reports (WBAN:64756, Millbrook, NY) the 5 minute recordings were in whole degrees Fahrenheit and whole plus tenths Celsius. I used this example in my essay on Average Averages.
I will check out the manual Clyde offers and see if they are converting to tenths of C from whole degrees F or what.
Can, Kip, but you can’t know “if” it is.
Mark ==> I usually find that if I put in the time and effort — like reading the manual for the whole automagic weather station system, then somewhere in all that verbiage the penny drops and “Wow, there it is!”
Occasionally, I have to write to the government employee in charge of the system and ask my question — almost always get a polite and helpful answer. Did that with the NOAA CORS data.
If you have one thermometer with a 1 degree scale you would attribute +/-0.5 degrees to a measurement. If it is scientific equipment, it will be made to ensure it is at least as accurate as the scale.
There is a rounding error when you read the scale and there is the instrumental error.
If you have many readings on different days, the rounding errors will average out. If you have thousands of observation stations, the calibration errors of the individual thermometers will average out.
That is the logic of averages being more accurate than the basic uncertainty of one reading.
Greg ==> Yes, you correctly state the Fallacy of Averages. There is no reason to believe that the “original measurement errors average out”. That is just a convenient way of brushing them under the rug.
Thermometer readings taken to the nearest degree are properly reported as “72°F +/- 0.5°F” — the average of two or a thousand different thermometer readings originally taken to the nearest degree are properly reported as “72.45°F +/- 0.5°F”. The rounding factor does not disappear by long division.
Accuracy of scale: If the thermometers from 1880 through early 20th century read in whole degree increments (which was “good enough” for their purposes) then how does one justify declaring this year was the hottest year ever, by tenths of a degree?
Rounding errors will only “average out” if everyone recording temps used a flip of the coin (figuratively) to determine about what to record. The reality is some may have used a decision rule to go to the next HIGHEST temp and some the LOWER. Then there’s the dilemma about what to do with “5 tenths”; there were “rules” about that too. You cannot assume the “logic of averages” unless we know how those rules of thumb were applied.
Suppose that we have a sine wave of known frequency buried under twenty db of Gaussian noise. We can detect and reconstruct that signal even if our detector can only tell us if the signal plus noise is above or below zero volts (ie. it’s a comparator). By running the process for long enough we can get whatever accuracy we need. link
The problem is that Gaussian noise is a fiction. It’s physically impossible because it would have infinite bandwidth and therefore infinite power. Once the noise is non-Gaussian, our elegant experiment doesn’t work any more. It’s more difficult to extract signals from pink or red noise. link If we can’t accurately describe the noise, we can’t say anything about our accuracy.
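The comparator example above, sketched numerically with made-up parameters: a 0.1-amplitude sine buried in unit-variance Gaussian noise (about 20 dB down), detected from nothing but the sign of the samples by correlating against a reference sine. This illustrates the detection claim under idealised noise, not anything about the temperature record itself.

```python
import numpy as np

rng = np.random.default_rng(0)
fs, f0, seconds = 1000.0, 5.0, 600.0
t = np.arange(0.0, seconds, 1.0 / fs)

signal = 0.1 * np.sin(2 * np.pi * f0 * t)      # amplitude 0.1
noise = rng.normal(0.0, 1.0, t.size)           # sigma 1.0, so the tone is ~20 dB down
bits = np.sign(signal + noise)                 # 1-bit "comparator" output

detect = 2.0 * np.mean(bits * np.sin(2 * np.pi * f0 * t))    # lock-in style correlation
control = 2.0 * np.mean(bits * np.sin(2 * np.pi * 7.0 * t))  # same test, wrong frequency

print(round(detect, 3))    # clearly non-zero, roughly 0.08
print(round(control, 3))   # indistinguishable from zero
```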
kip, if there are n stations and if the error of the individual readings is s, the error of the average will be s/squareroot(n). small
“if there are n stations and if the error of the individual readings is s, the error of the average will be s/squareroot(n).”
Ah, “the Law of Large Numbers”. Somebody always drags that up. Sorry, but no, that only applies to independent, identically distributed random variables.
Following the child height example:
First case: If you take one child and you measure his/her height 10 times, the average is more accurate.
Second case: If you have 10 children and you measure their height once per child, the average height is not more accurate than the individual accuracy.
The temperature in Minneapolis is different from the temperature in Miami. The Earth average temperature belongs to the second case. That is my understanding.
It does not matter, anyway, since the Earth is not in thermal equilibrium or even in thermodynamic equilibrium and therefore the term average temperature is meaningless.
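The two cases above can be simulated with invented numbers; the spread of the average behaves very differently in each (a sketch under the assumption of independent, well-behaved instrument noise):

```python
import numpy as np

rng = np.random.default_rng(5)
ruler_sigma = 0.5          # cm per single measurement (invented)
class_spread = 6.0         # cm spread of true heights in the class (invented)
trials = 20_000

# Case 1: one child measured 10 times, readings averaged.
one_child = 140.0 + rng.normal(0.0, ruler_sigma, (trials, 10))
print(round(one_child.mean(axis=1).std(), 2))     # ~0.16 = 0.5/sqrt(10): the ruler error shrinks

# Case 2: ten different children measured once each, readings averaged.
true_heights = rng.normal(140.0, class_spread, (trials, 10))
ten_children = true_heights + rng.normal(0.0, ruler_sigma, (trials, 10))
print(round(ten_children.mean(axis=1).std(), 2))  # ~1.9: dominated by how much the
                                                  # children differ, not by the ruler
```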
“the error of the average will be s/squareroot(n). small”
No it won’t.
Cold (what else?) in Wisconsin: temperature is an intensive property, the speed of the moving/vibrating atoms and molecules. Which for climate purposes is measured by a physical averaging process: the amount the temperature being measured changes the resistance of (usually now) some sort of calibrated resistor, which can be very precise (to hundredths of a degree) but only as accurate as its calibration over a specific range. Averaging temperatures is pretty meaningless. You can average the temperature of the water in a pot and the temperature of the couple of cubic feet of gas heating it and learn nothing. Measuring how the temperature of the water changes tells you something about the amount of energy released by the burning gas, but it’s a very crude calorimeter.
Like that example, the climate is driven by energy movements, not primarily by temperatures.
I’m not a climate scientist (but I did see one on TV) but why aren’t those far more educated than me pointing out Phil’s point which should be obvious to anyone with a basic science education.
In discussions with my academic son, I point out that I can take the temperature at the blue flame of a match stick and then the temperature of a comfortable bath tub, and the average of the two has no meaning.
The response of course is 97% of scientists say I’m deluded. (Argument from Authority).
I have the Environment Canada weather app on my phone. I noticed this summer they reported what it feels like rather than the measured number. Or, they use the inland numbers, which are a few degrees higher, rather than the coastal number that they have been using at the same airport station for the last 80 years.
They especially do this on the radio weather reports. It feels like…30 degrees
george – scientists have made it very clear that anyone should expect a change of the global average at their locale.
but the global avg is good for spotting the earth’s energy imbalance. not perfect, but good
“but the global avg is good for spotting the earth’s energy imbalance. not perfect, but good”
Actually it is almost completely useless given the very low heat capacity of the atmosphere compared to the ocean (remember that it is the ocean that absorbs and emits the vast majority of solar energy).
crackers ==> Global Average Surface Air temperature (or its odd cousin Land and Sea Surface temperature) is good for something — just not spotting Earth’s “energy imbalance”. The long-term temperature record shows that temperature rises and falls without regard to energy in/out balance (or the Sun has been really variable and plays an enormous role).
https://science.nasa.gov/science-news/science-at-nasa/1997/essd06oct97_1
Accurate “Thermometers” in Space
“An incredible amount of work has been done to make sure that the satellite data are the best quality possible. Recent claims to the contrary by Hurrell and Trenberth have been shown to be false for a number of reasons, and are laid to rest in the September 25th edition of Nature (page 342). The temperature measurements from space are verified by two direct and independent methods. The first involves actual in-situ measurements of the lower atmosphere made by balloon-borne observations around the world. The second uses intercalibration and comparison among identical experiments on different orbiting platforms. The result is that the satellite temperature measurements are accurate to within three one-hundredths of a degree Centigrade (0.03 C) when compared to ground-launched balloons taking measurements of the same region of the atmosphere at the same time. ”
The satellite measurements have been confirmed by the balloon measurements. Nothing confirms the bastardized surface temperature record.
And this:
http://www.breitbart.com/big-government/2016/01/15/climate-alarmists-invent-new-excuse-the-satellites-are-lying/
“This [satellite] accuracy was acknowledged 25 years ago by NASA, which said that “satellite analysis of the upper atmosphere is more accurate, and should be adopted as the standard way to monitor temperature change.”
end excerpts
Hope that helps.
Watch me pull a rabbit out of my hat “±0.05ºC” … what utter rubbish!
I am puzzled as to how it is that, over a period of 30 years, temperatures can be established to only ±0.5K, yet the GISTEMP anomaly for 2016 carries an uncertainty of only ±0.05ºC. How is the latter more precise? Is it that different measuring techniques are in use?
Greg ==> Reducing the uncertainty by an order of magnitude between “direct measurement” and “anomaly” is a magic trick…..
The order of magnitude is not necessarily wrong, because they are different things; there is no reason why they should be the same. But I don’t believe either the 0.5 or the 0.05 figures.
The problem is, while the instrumental and reading errors are random and will average out allowing a sqrt(N) error reduction, you cannot apply the same logic to the number of stations, and this is exactly what they do to get the silly uncertainties.
They try to argue that they have N-thousand measurements of the same thing: the mean temperature. This is not true, because you cannot measure a mean temperature; it is not physical, it is a statistic of individual measurements. Neither does the world have A temperature which you can try to measure at a thousand different places.
So all you have is thousands of measurements, each with a fixed uncertainty. That does not get more accurate; it is like doing a thousand measurements on Mars and then claiming that you know the mean temperature of the inner planets more accurately than you know the temperature of Earth.
The temperatures at different places are really different. You don’t get a more accurate answer by measuring more different things.
There is no evidence, if the error is mechanical in nature, that it would average out with more samples anyway. Devices of the same type tend to drift or fail all in the same direction.
But they are NOT “different things”.
If one is defined as a deviation from another, you can’t separate them, no matter how many statistical tricks you apply.
“while the instrumental and reading errors are random and will average out allowing a sqrt(N) error reduction”
Just what makes you believe that?
Greg, you are moving from the verifiable to the hypothetical with the statement about errors averaging out. The mathematics assumes the elements of the set have precise properties: that they are independent and identically distributed (IID).
Also, one of the pillars of the scientific method is the method of making measurements: you design the tools to achieve the resolution you want. Were the temperature measurement stations set up to measure repeatably with sub-0.1K uncertainty? No, they weren’t. Neither were the bucket measurements of SST.
And that is the fundamental problem with climate scientists. They are dealing in hypotheticals but believing they are real. They have crossed into a different area.
It is also well worth remembering (or learning) the difference between MEAN and MEDIAN and paying close attention to which one is used where in information sources.
So many reports state that “the temperature is above the long-term MEAN”, when in a Normal Distribution exactly half of the samples are higher than the mean!
It’s an interesting and worthwhile exercise to evaluate whether the temperature series at any particular station resembles a Normal Distribution…
For comparison purposes, note that sea ice extent is usually referenced to the MEDIAN.
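Checking a station record against a Normal Distribution takes only a few lines. A minimal sketch, using a synthetic skewed series in place of real station data (the array and the numbers in it are placeholders):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Stand-in for ten years of daily temperatures: a deliberately skewed distribution
temps = 12.0 + rng.gamma(shape=2.0, scale=3.0, size=3650)

print("mean   :", round(float(np.mean(temps)), 2))
print("median :", round(float(np.median(temps)), 2))   # differs from the mean when skewed
print("skew   :", round(float(stats.skew(temps)), 2))
stat, p = stats.normaltest(temps)                       # D'Agostino-Pearson normality test
print("normality p-value:", p)                          # tiny p -> not plausibly Normal
```

Run against real station data, the interesting question is how often the mean and median actually coincide.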
I was looking at temp. and CO2 data last week to see if NASA, NOAA and GIST would pass FDA scrutiny if approval were sought. There is a lot to it, but from acquisition to security to analysis, as well as quality checks for sampling bias, missing data, and changing historical data, the answer is no. NOT EVEN CLOSE! Blinding is a big deal, so ethically I believe any climate scientist who is also an activist must blind ALL PARTS of a study to ensure quality. What about asking to audit all marchers on Washington who received federal grants but do not employ FDA-level or greater quality standards? Considering Michael Mann would not turn over his data to the Canadian courts last month, this might be a hoot, and REALLY VALUABLE!
TonyL: I disagree that the use of the “anomalies” is well known.
While it is used extensively in climate science these days, it is a very uncommon approach in statistical analysis, engineering and many scientific fields. The term and the process are not mentioned or described in any of my statistics textbooks. I have spent 40 years in the business of collecting and analysing all kinds of measurements and have never seen the need to convert data to ‘anomalies’. It can be viewed as simply reducing a data set to its noise component. My main objection is that, like an average given without an estimate of dispersion such as the variance or standard deviation, it serves to reduce the information conveyed. Also, as the definition of anomaly indicates, it implies abnormality, irregularity, etc. As has been widely argued here and elsewhere, significant variability in the temperature of our planet seems quite normal.
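For anyone who has not met the procedure, the ‘anomaly’ conversion itself is mechanically simple: subtract each calendar month’s baseline-period mean from the observations. A minimal sketch with synthetic monthly data; the dates, the 1981-2010 window and the numbers are illustrative, and this is not GISTEMP’s actual processing:

```python
import numpy as np
import pandas as pd

# Synthetic monthly mean temperatures, 1951-2020: seasonal cycle plus noise
idx = pd.date_range("1951-01-01", "2020-12-01", freq="MS")
rng = np.random.default_rng(1)
temps = pd.Series(10 + 8 * np.sin(2 * np.pi * (idx.month - 1) / 12)
                  + rng.normal(0, 1, len(idx)), index=idx)

# 30-year baseline climatology: one mean per calendar month over 1981-2010
base = temps.loc["1981":"2010"]
climatology = base.groupby(base.index.month).mean()

# Anomaly = each observation minus its month's baseline mean
anomalies = temps - climatology.reindex(temps.index.month).to_numpy()
print(anomalies.loc["2016"].round(2))
```

The argument in this thread is not about that subtraction, but about what uncertainty honestly attaches to the baseline being subtracted and to the result.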
I think this is a fine demonstration of the fallacy of false precision. Also of statistical fraud.
We can’t let the proles think “Hey, guess what, the temperature hasn’t changed!”
KH, I am of two minds about your interesting guest post.
On the one hand, because of latitudinal (temperate zone) and altitudinal (lapse rate) differences, a global average temperature is meaningless, whereas a global average stationary-station anomaly (correctly calculated) is meaningful, especially for climate trends. So it is useful if the stations are reliable (most aren’t).
On the other hand, useful anomalies hide a multitude of other climate sins, not the least of which is the gross difference between the absolute and the ‘anomaly’ discrepancies in the CMIP5 archive of the most recent AR5 climate models. They get 0°C wrong by +/- 3°C! So CMIP5 is not at all useful. Essay ‘Models all the way Down’ in ebook Blowing Smoke covers the details of that, and more. See also the previous guest post here, ‘The Trouble with Models’.
Ristvan ==> I am admittedly having a little fun by playing the explanations of the use of temperature anomalies back at the originators in their own words. It is a revealing exercise — if not very scientific.
I agree that anomalies make more sense in principle, if you want to look at whether the earth has warmed due to changing radiation, for example.
The problem is that the “climatology” for each month is the mean of 30 days of that month over 30 years: 900 data points. They will have a range of perhaps 5-10 deg C for any given station, with a distribution. You can take 2 standard deviations as the uncertainty of how representative that mean is, and I’ll bet that is more than 0.05 deg C. So the uncertainty on your anomaly can never be lower than that.
For anomalies to be useful in any respect, the original data should not be tampered with.
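To put rough numbers on the point above, a minimal sketch with synthetic data: 900 daily values (30 days x 30 years) with a spread of a few degrees, comparing the 2-standard-deviation spread of the individual days with the much smaller formal standard error of their mean. Which of those two is the honest uncertainty for the baseline is exactly the disagreement running through this thread.

```python
import numpy as np

rng = np.random.default_rng(7)
# 30 days x 30 years = 900 synthetic daily means for one calendar month at one station
daily = rng.normal(loc=15.0, scale=3.0, size=900)    # spread of a few deg C, illustrative only

climatology = daily.mean()
spread_2sd = 2 * daily.std(ddof=1)                   # variability of individual days, ~6 C
sem = daily.std(ddof=1) / np.sqrt(daily.size)        # formal standard error of the mean, ~0.1 C

print(round(float(climatology), 2), round(float(spread_2sd), 2), round(float(sem), 3))
```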
Ristvan: “On the one hand, because of latitudinal (temperate zone) and altitudinal (lapse rate) differences, a global average temp is meaningless.”
What you are saying in simple terms is that a global average temperature is also a crock of fecal matter.
This is like the rules for stage psychics doing cold readings ==> do not be specific on anything checkable.
Another error they usually ignore is sampling error: is the sample a true and accurate representation of the whole? In the case of SST, almost certainly not.
Sampling patterns and methods have been horrendously variable and erratic over the years. The whole engine-room/bucket fiasco is largely undocumented and is “corrected” based on guesswork, often blatantly ignoring the written records.
What uncertainty needs to be added due to incomplete sampling?
KIP,
Something buried in the comments section of Gavin’s post is important and probably overlooked by most:
“…Whether it converges to a true value depends on whether there are systematic variations affecting the whole data set, but given a random component more measurements will converge to a more precise value.
[Response: Yes of course. I wasn’t thinking of this in my statement, so you are correct – it isn’t generally true. But in this instance, I’m not averaging the same variable multiple times, just adding two different random variables – no division by N, and no decrease in variance as sqrt(N).]”
Gavin is putting to rest the claim by some that taking large numbers of temperature readings allows greater precision to be assigned to the mean value. To put it another way, the systematic seasonal variations swamp the random errors that might allow an increase in precision.
Another issue is that, by convention, the uncertainty represents +/- one (or sometimes two) standard deviations. He doesn’t explicitly state whether he is using one or two SD. Nor does he explain how the uncertainty is derived. I made a case in a recent post ( https://wattsupwiththat.com/2017/04/23/the-meaning-and-utility-of-averages-as-it-applies-to-climate/ ) that the actual standard deviation for the global temperature readings for a year might be about two orders of magnitude greater than what Gavin is citing.
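The “two orders of magnitude” remark comes down to which statistic is quoted: the standard deviation of the individual readings, or the standard error of their mean, which is smaller by sqrt(N). A tiny illustration with made-up numbers (both values are assumptions for illustration, not anyone’s published figures):

```python
import math

sd_readings = 5.0    # assumed spread of individual readings, deg C
n_readings = 10_000  # assumed number of readings behind a yearly mean

se_mean = sd_readings / math.sqrt(n_readings)
print(sd_readings, se_mean)   # 5.0 vs 0.05: a factor of 100
```

Whether that sqrt(N) division is legitimate for readings taken at thousands of different places is, of course, precisely what the commenters above dispute.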
Schmidt cites two references as to why anomalies are preferred, one from NASA and one from NOAA. The latter is singularly useless as to why anomalies should be used. The opening paragraph of the NASA reference states:
Two factors are at work here. One is that the data is smoothed. The other is that the anomalies of two different geographical locations can be compared whilst the absolute temperatures cannot.
Is smoothed data useful? I guess that is moot, but it is true to say that any smoothing process loses fine detail, the most obvious of which is diurnal variation. Fine detail includes higher-frequency information, and removing it makes the analysis of natural processes more difficult.
Is a comparison of anomalies at geographically remote locations valid? I would think it would be, provided the statistics of the data from both locations are approximately the same. For example, since most analysis is based on unimodal Gaussian distributions (and normally distributed at that), if the temperature distributions at the two locations are not normal, can a valid comparison be made? Having looked at distributions at several locations in New Zealand, I know that they are not normal. Diurnal variation would suggest at least a bimodal distribution, but several stations exhibit at least trimodal distributions. The more smoothing is applied to the data set, the more closely the distribution will approach normal, unimodal behaviour.
I suspect that smoothing the data is the primary objective, hiding the inconvenient truth that air temperature is a natural variable and is subject to a host of influences, many of which are not easily described, and incapable of successful, verifiable modeling.
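Gary’s point about smoothing can be shown directly: hourly readings dominated by a diurnal cycle give a flattened, bimodal-looking distribution, while their daily means look much more “Normal”. A minimal synthetic sketch; the cycle amplitude and noise level are invented, not taken from any station:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
hours = np.arange(365 * 24)
# Synthetic hourly temperatures: +/-6 C diurnal cycle around 12 C, plus noise
hourly = 12 + 6 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 1, hours.size)
daily = hourly.reshape(365, 24).mean(axis=1)          # the "smoothed" series

print("hourly normality p:", stats.normaltest(hourly).pvalue)   # effectively zero
print("daily  normality p:", stats.normaltest(daily).pvalue)    # far less conclusive
print("hourly excess kurtosis:", round(float(stats.kurtosis(hourly)), 2))  # negative: flattened shape
```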
Re: Gary Kerkin (August 19, 2017 at 6:38 pm)
[James] Hansen is quoting himself again, it’s all very inbred when you start reading the supporting – or not – literature!
However the literature doesn’t agree and he knows that he is dissembling.
In [James] Hansen’s analysis, the isotropic component of the covariance of temperature assumes a constant correlation decay* in all directions. However, “It has long been established that spatial scale of climate variables varies geographically and depends on the choice of directions” (Chen, D. et al. 2016).
In the paper ‘The spatial structure of monthly temperature anomalies over Australia’, the BOM definitively demonstrated the inappropriateness of Hansen’s assumptions about the correlation of temperature anomalies.
*Correlations are assumed to decrease exponentially with spatial distance; the spatial scale is quantified using the e-folding decay constant.
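For readers unfamiliar with the jargon: an e-folding decay constant d0 means the correlation between two stations is modelled as exp(-d/d0), where d is their separation, so it falls to about 37% at d = d0; the isotropy assumption is that a single d0 applies in every direction. A tiny illustration (the 1200 km scale is only a placeholder):

```python
import numpy as np

d0 = 1200.0                                 # assumed e-folding length scale, km
d = np.array([0.0, 300.0, 1200.0, 2400.0])  # station separations, km
print(np.round(np.exp(-d / d0), 2))         # [1.   0.78 0.37 0.14]
```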
Mod or Mods! Whoops! I just realised that my comment above was directed at James Hansen of NASA but might be confused with the Author of the post, Kip Hansen!
To be clear, Gavin Schmidt (NASA) references James Hansen (NASA) quoting J. Hansen, who references NASA (J. Hansen)! It’s turtles all the way down 😉
SWB ==> Fixed that for you. — Kip
It is true that temperature data is not Normally Distributed. At the very least most sets I have looked at are relatively skewed. The problem is that the variation from Normal in each station is different from other stations, and comparing, specifically averaging, non homogeneous data presents a whole other set of difficulties (i.e. shouldn’t be done).
wyzelli ==> “specifically averaging, non homogeneous data presents a whole other set of difficulties (i.e. shouldn’t be done).” That is correct — it should not be done and when it is done, the results do not necessarily mean what one might think.
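To make that concrete, a minimal sketch with two synthetic “stations” skewed in opposite directions: their naive average comes out roughly symmetric, a shape that belongs to neither original record (all numbers invented):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
station_a = 10 + rng.gamma(2.0, 2.0, 5000)   # right-skewed record
station_b = 20 - rng.gamma(2.0, 2.0, 5000)   # left-skewed record
averaged = (station_a + station_b) / 2       # naive pairwise average

for name, series in (("station A", station_a), ("station B", station_b), ("average", averaged)):
    print(name, round(float(stats.skew(series)), 2))
# A comes out strongly positive, B strongly negative, and the average near zero,
# masking the shape of both originals
```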