Climate Science Double-Speak: Update

Update by Kip Hansen

 

Last week I wrote about UCAR/NCAR’s very interesting discussion of “What is the average global temperature now?”.

[Adding link to previous post mentioned.]

Part of that discussion revolved around the question of why current practitioners of Climate Science insist on using Temperature Anomalies — the difference between the current average temperature of a station, region, nation, or the globe and its long-term, 30-year base period, average — instead of simply showing us a graph of the Absolute Global Average Temperature in degrees Fahrenheit or Celsius or Kelvin.

Gavin Schmidt, Director of the NASA Goddard Institute for Space Studies (GISS) in New York and co-founder of the award-winning climate science blog RealClimate, has come to our rescue to help us sort this out.

In a recent blog essay at RealClimate titled “Observations, Reanalyses and the Elusive Absolute Global Mean Temperature”, Dr. Schmidt gives us the real answer to this difficult question:

“But think about what happens when we try and estimate the absolute global mean temperature for, say, 2016. The climatology for 1981-2010 is 287.4±0.5K, and the anomaly for 2016 is (from GISTEMP w.r.t. that baseline) 0.56±0.05ºC. So our estimate for the absolute value is (using the first rule shown above) is 287.96±0.502K, and then using the second [the first and second rules have to do with estimating the uncertainties – see Gavin’s post], that reduces to 288.0±0.5K [2016]. The same approach for 2015 gives 287.8±0.5K, and for 2014 it is 287.7±0.5K. All of which appear to be the same within the uncertainty. Thus we lose the ability to judge which year was the warmest if we only look at the absolute numbers.”
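The arithmetic in the quoted passage can be sketched in a few lines of Python. This is a hedged illustration only: the numbers are Gavin’s quoted figures, and I am assuming his “first rule” amounts to adding the values and combining independent uncertainties in quadrature, with the “second rule” rounding the result to the precision of the larger uncertainty (see his post for the actual rules).

```python
import math

# Illustrative sketch of the quoted arithmetic: baseline climatology plus
# anomaly, with independent 1-sigma errors combined in quadrature.

def combine(value_a, err_a, value_b, err_b):
    """Sum two quantities; combine independent uncertainties in quadrature."""
    return value_a + value_b, math.sqrt(err_a**2 + err_b**2)

climatology, clim_err = 287.4, 0.5   # 1981-2010 baseline, K (as quoted)
anomaly, anom_err = 0.56, 0.05       # 2016 GISTEMP anomaly, K (as quoted)

absolute, abs_err = combine(climatology, clim_err, anomaly, anom_err)
print(f"{absolute:.2f} ± {abs_err:.3f} K")   # 287.96 ± 0.502 K
print(f"{absolute:.1f} ± {abs_err:.1f} K")   # 288.0 ± 0.5 K
```

Note that the ±0.5 K climatology uncertainty dominates: the tiny anomaly error barely moves the combined figure, which is why the rounded absolute values for 2014-2016 all overlap.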

You see, as Dr. Schmidt carefully explains for us non-climate-scientists, if they use Absolute Temperatures the recent years are all the same — no way to say this year is the warmest ever — and, of course, that just won’t do — not in “RealClimate Science”.

# # # # #

Author’s Comment Policy:

Same as always — and again, this is intended just as it sounds — a little tongue-in-cheek but serious as to the point being made.

Readers not sure why I make this point might read my more general earlier post:  What Are They Really Counting?

# # # # #

 

289 thoughts on “Climate Science Double-Speak: Update”

    • Odd is it not that some fifty years ago the accepted standard for the world was 14.7C @ 1313 Mb.
      I just converted Mr Schmidt’s Kelvin that he calculates as the average 287.8K = 14.650C so in the last fifty years there has been virtually no change. I want my warming, it is as cold as a witches tit where I live.

      • It is not odd.
        It is an embarrassment.

        “Gavin Schmidt, Director of the NASA Goddard Institute for Space Studies (GISS) in New York, and co-founder of the award winning climate science blog RealClimate, has come to our rescue to help us sort this out.

        In a recent blog essay at RealClimate titled “Observations, Reanalyses and the Elusive Absolute Global Mean Temperature”, Dr. Schmidt gives us the real answer to this difficult question:”

        None of those titles claimed by Schmidt disguise the facts that Gavin Schmidt is an elitist who believes himself so superior, that Gavin will not meet others as equals.

        A lack of quality that Gavin Schmidt proclaims loudly and displays smugly when facing scientists; one can imagine how far superior Schmidt considers himself above normal people.

        Further proof of Schmidt’s total lack of honest, forthright science is Gavin’s latest snake-oil sales pitch: “climate science double-speak”.

        “wayne Job August 20, 2017 at 3:36 am
        Odd is it not that some fifty years ago the accepted standard for the world was 14.7C @ 1313 Mb.
        I just converted Mr Schmidt’s Kelvin that he calculates as the average 287.8K = 14.650C so in the last fifty years there has been virtually no change. I want my warming, it is as cold as a witches tit where I live.”

        Wayne Job demonstrates superlatively that no matter how Gavin and his obedient goons adjust temperatures, they are unable to hide current temperatures from historical or common-sense comparisons.

        Gavin should be permanently and directly assigned to Antarctica where Gavin can await his dreaded “global warming” as the Antarctica witch.

      • Sorry to be pedantic, but I believe that the pressure should have been 1013mb.

        As an aside, it’s a real bitch when the inclusion of realistic error figures undermines one’s whole argument. This sort of subversive behaviour must be stopped!

      • 14.7 is also air pressure in PSI at sea level! I’m 97% sure there’s some kind of conspiracy here…

      • Good point about the errors. Gavin shows the usual consensus abhorrence of tracking error.

        If the climatology is known only to ±0.5 K and the measured absolute temperature is known to ±0.5 K, then the uncertainty in the anomaly is their root-sum-square = ±0.7 K.

        There’s no avoidance of uncertainty by taking anomalies. It’s just that consensus climate scientists, apparently Gavin included, don’t know what they’re doing.

        The anomalies will inevitably have a greater uncertainty than either of the entering temperatures.
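The root-sum-square combination this comment describes is a one-liner. A sketch of the commenter’s argument only, with the ±0.5 K figures taken from Gavin’s quoted climatology; whether GISTEMP’s anomalies actually inherit the measurement error this way is exactly the point in dispute:

```python
import math

# The comment's claim: an anomaly is (measurement - climatology), so if both
# carry independent ±0.5 K uncertainties, the difference carries both in
# quadrature. Inputs are illustrative, not GISTEMP's published error budget.
measurement_err = 0.5   # K
climatology_err = 0.5   # K

anomaly_err = math.sqrt(measurement_err**2 + climatology_err**2)
print(f"±{anomaly_err:.2f} K")   # ±0.71 K, i.e. the comment's ±0.7 K
```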

      • Sorry the Mb should read a 1013, I do know that the temp was right as an old flight engineer they were the standard figures for engine and take off performance.

      • Climatology is about averages. To know, for example, the 30-year average temperature at a given location is useful for some purposes. Climatologists erred when they began to try to predict these averages without identifying the statistical populations underlying their models, for to predict without identifying this population is impossible.

      • NOTHING ever happens twice; something else happens instead. So any observation creates a data set with one element; the observation itself.

        And the average value of a data set containing a single element is ALWAYS the value of that one element. So stick with the observed values they are automatically the correct numbers to use.

        G

      • Gavin should learn a little Math – specifically, Significant Digits. If the climatology is given to a precision of 0.1, then the Anomaly MAY NOT BE calculated to a precision greater than 0.1 degree. Absolute or Anomaly – both ought to show that the temperatures are the same.

        I always wonder: if the Alarmists’ case is so strong, then why do they need to lie?
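The significant-digits point can be sketched as follows. The helper is hypothetical, not anyone’s published method; the rule applied is simply that a derived value should carry no more decimal places than its least precise input:

```python
# Sketch of the significant-digits argument: round a derived value to the
# fewest decimal places among its inputs. Hypothetical helper for illustration.

def to_least_precise(value, *input_decimals):
    """Round `value` to the fewest decimal places among the inputs."""
    return round(value, min(input_decimals))

# If the climatology is known only to 0.1 K (1 decimal place), an anomaly
# computed to 2 decimal places should be reported to 1 at most.
print(to_least_precise(0.56, 1, 2))   # 0.56 rounds to 0.6
```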

      • Santa
        “postmodern consensus policy based science” is the revealed and frighteningly enforceable truth.
        Disagree and – no tenure.
        Out on your ear.
        Never mind scientific method.

        Sad that science has descended into a belief system, isn’t it??

        Auto

  1. On a somewhat different tack, check your local TV channel’s weather meteorologists. I have detected a pattern in the markets where I have lived: when the temperature is above the long-term average, they almost always say the “temperature was above NORMAL today,” but when it is below, they say the “temperature was below the AVERAGE” for this date.
    Now, subliminally, we are receiving a bad-news message when the temperature is not “normal,” but it comes across as somewhat non-newsworthy to be innocuously below an average. Do they teach them this in meteorology courses?
    CAGW Hidden Persuaders? Check it out. Maybe it’s just my imagination.

    • Bill ==> I don’t do TV News or Weather — I live on the ‘Net (boats move around too much for regular TV watching). Maybe some TV Weather followers will chime in on this.

      • In parts of Australia I have heard TV weather persons say that monthly rainfall was “less than what we should have received” as if it were some sort of entitlement rather than just a calculated average of widely fluctuating numbers. I grimace when I hear it.

    • What I hear is a continuous reference to the ‘average’ temperature with no bounds as to what the range of ‘average’ is.

      It is not nearly enough to say ‘average’ temperature for today is 25 C and not mention that the thirty years which contributed to that number had a range of 19-31. The CBC will happily say the temperature today is 2 degrees ‘above average’ but not say that it is well within the normal range experienced over the calibration period.

      The use of an ‘anomaly’ number hides reality by pretending there is a ‘norm’ that ‘ought to be experienced’ were it not for the ‘influence’ of human activities.

      All this is quite separate from the ridiculous precision claimed for Gavin’s numbers which are marketed to the public as ‘real’. These numbers are from measurements and the error propagation is not being done and reported properly.

      • crispin, the baseline is not the “norm.” it’s just an arbitrary choice to compare temperatures against. it can be changed at will. it hides nothing

      • crackers ==> “the baseline is not the “norm.” it’s just an arbitrary choice to compare temperatures against. it can be changed at will.” That, you see, is part of the problem — it is changed at will, often without making it clear that it has been changed or that differing baselines have been used. The MSM almost always confuses the baselines with the “norm” when communicating to the general public.

    • No, not your imagination. It’s to scare people, ie, the warm/cold is abnormal (Somehow) when it is perfectly normal. I am seeing this in Australian weather broadcasts more and more now.

    • I am a meteorologist… 30 years now. I cannot stand TV weather. I never watch it anymore, as I do all my own forecasting. It caters to 7-year-olds. It’s painful to watch. I need not listen to any of these dopes. No, I am not a TV weatherman.

    • I actually haven’t taken notice of the differences between how “above” and “below” average temps are referenced, but I have always abhorred the (frequent, and seemingly prevailing) use of the word “normal” in that respect.

      As I like to say, “There IS no “normal” temperature – it is whatever it is.” What they are calling “normal” is an average temperature of a (fairly arbitrarily selected) 30-year period (and at one point they weren’t moving the reference period forward as they were supposed to, because they knew that was going to raise the “average” temps and thereby shrink the “anomalies,” thereby undermining (they felt) the “belief” in man-made climate catastrophe).

      I object to the word “anomaly” as well, because it once again suggests that there is something “abnormal” about any temperature that is higher or lower than a 30-year average, which itself is nothing more than a midpoint of extremes. There IS NOTHING “ANOMALOUS” about a temperature that is not equal to ANY “average” of prior temperatures. “Anomalies” are complete BS.

      Great, revealing OP.

  2. Wait, does that mean all the years are the “hottest ever” or none of them?

    I note that Gavin states with certainty that it is uncertain and it is somewhat surprising that he does so.

    • JohnWho ==> If one reads the RC post carefully, it emerges that uncertainty only importantly affects Absolute Temperature — but anomalies can be magically calculated to a high degree of precision (even though the base periods are absolutes….)

      • If absolute temperatures carry uncertainties, why don’t anomalies? It seems to me that anomalies are usually less than the uncertainty and therefore are virtually equivalent to zero. So why are they allowed to use anomalies without revealing their corresponding uncertainties?

      • If absolute temperatures carry uncertainties, why don’t anomalies?

        They do. Gavin states:

        The climatology for 1981-2010 is 287.4±0.5K, and the anomaly for 2016 is (from GISTEMP w.r.t. that baseline) 0.56±0.05ºC.

        So he suggests that the error bounds of the anomalies are very small, only ±0.05 °C. Whether one considers that small error bound reasonable is a different matter.

      • I wrote to Realclimate many years ago about this stupidity. I got back the usual bile. One and only time I looked at that site.

      • Sorry folks, but probably a dumb question from an ill educated oaf.

        Given that Stevenson screens with thermometers were probably still being used in 1981, and for some time after, with, presumably, a conventional thermometer, surely observations of the temperature couldn’t possibly be accurate to 0.5 K, i.e. 287.4±0.5K.

        Nor do I believe it credible that every Stevenson screen was well maintained, and we know about the siting controversy. And I suspect not all were properly monitored, with the office tea boy being sent out into the snow to take the measurements, myopic technicians wiping rain off their specs, or the days when someone forgets and just has a guess.

        And I don’t suppose for a moment every Stevenson screen, at every location, was checked once every hour; possibly four times in 24 hours, or perhaps 8 times, in which case there are numerous periods when temperatures can spike (up or down) before declining or rising.

        It therefore doesn’t surprise me one bit that with continual electronic monitoring we are seeing ‘hottest temperatures evah’ simply because they were missed in the past.

        Sorry, a bit of a waffle.

      • I wrote to Realclimate many years ago about this stupidity. I got back the usual bile. One and only time I looked at that site.

        I do from time to time look at the site, but I understand that comments are often censored or dismissed without proper explanation. I have posted a comment (awaiting moderation) inquiring about the time series data set and what the anomaly really represents. It will be interesting to see whether it gets posted and answered.

        I must confess that I am having difficulty in understanding what this anomaly truly represents, given that the sample set is constantly changing over time.

        If the sample set were to remain true and the same throughout the time series, then it would be possible to have an anomaly across that data set, but that is not what is or has happened with the time series land based thermometer data set.

        The sample set of data used in say 1880 is not the same sample set used in 1900 which in turn is not the same sample set used in 1920, which in turn is not the same sample set used in 1940, which in turn is not the same sample set used in 1960, which in turn is not the same sample set used in 1980, which in turn is not the same sample set used in 2000, which in turn is not the same sample set used in 2016.

        You mention the climatology reference of 1981 to 2010 against which the anomaly is assessed, however, the data source that constitutes the sample set for the period 1981 to 2010, is not the same sample set used to ascertain the 1880 or 1920 or 1940 ‘data’. We do not know whether any calculated anomaly is no more than a variation in the sample set, as opposed to a true and real variation from that set.

        When the sample set is constantly changing over time, any comparison becomes meaningless. For example, if I wanted to assess whether the average height of Americans has changed over time, I cannot ascertain this by say using the statistic of 200 American men measured in 1920 and finding the average, then using the statistics of 200 Finnish men who speak English measured in 1940 and finding the average, then using the statistics of 100 American women and 100 Spanish men who speak English as measured in 1960 etc. etc

        It is not even as if we can claim that the sample set is representative, since we all know that there is all but no data for the Southern Hemisphere going back to say 1880 or 1900. In fact, there are relatively few stations that have continuous records going back 60 years, still less about 140 years. Maybe it is possible to do something with the Northern Hemisphere, particularly the United States, which is well sampled and which possesses historic data, but outside that, I do not see how any meaningful comparisons can be made.

        Your further thoughts would be welcome.

      • “HotScot August 20, 2017 at 1:53 am
        Sorry folks, but probably a dumb question from an ill educated oaf.

        Given that Stevenson screens with thermometers were probably still being used in 1981, and for some time after, with, presumably, a conventional thermometer, surely observations of the temperature couldn’t possibly be accurate to 0.5 K, i.e. 287.4±0.5K.

        Nor do I believe it credible that every Stevenson screen was well maintained, and we know about the siting controversy. And I suspect not all were properly monitored, with the office tea boy being sent out into the snow to take the measurements, myopic technicians wiping rain off their specs, or the days when someone forgets and just has a guess.

        And I don’t suppose for a moment every Stevenson screen, at every location, was checked once every hour; possibly four times in 24 hours, or perhaps 8 times, in which case there are numerous periods when temperatures can spike (up or down) before declining or rising.

        It therefore doesn’t surprise me one bit that with continual electronic monitoring we are seeing ‘hottest temperatures evah’ simply because they were missed in the past.

        Sorry, a bit of a waffle.”

        No apologies necessary. Nor is your question unreasonable and it is certainly not “dumb”; except to CAGW alarmists hiding the truth.

        Everyone should read USA temperature station maintenance staff writings!

        What’s in that MMTS Beehive Anyway?
        – By Michael McAllister, OPL, NWS Jacksonville, FL

        “If you’re not involved with cleaning a Maximum/Minimum Temperature Sensor (MMTS) sensor unit, you probably have not seen inside it. The white louvered “beehive” contains a thermistor in its center with two white wires. The wires connect it to the plug on the base of the unit. It’s really a very basic instrument. So what else is there to be discovered in the disassembly of the unit?

        I cannot vouch for the rest of the country, but here in northeast Florida and southeast Georgia, we regularly find various critters making their home inside the beehive. At the Jacksonville, FL, NWS office, we usually replace the beehive on our annual visits. After getting the dirty beehive back to the office, and before carefully taking it apart for cleaning, we leave it in a secure outside area for a day to let any “residents” inside vacate, then we dunk it in a bucket of water to flush out any reluctant squatters…”

        N.B.;
        At no point do the maintenance or NOAA staff ever conduct side by side measurements to determine before/after impacts to data.
        Stations are moved,
        sensor housings are replaced,
        sensors are replaced and even “upgraded”,
        data transmission lines and connections are replaced, lengthened, shortened, crimped, bent, etc.,
        data handling methods and code are changed,
        etc.

        None of these potential “temperature impacts” are ever quantified, verified, or introduced into Gavin’s mystical error-bounds theology.

  3. why current practitioners of Climate Science insist on using Temperature Anomalies….
    …it’s easier to hide their cheating

  4. “Thus we lose the ability to judge which year was the warmest if we only look at the absolute numbers.”

    And of course, you lose the ability to scare people into parting with their money.

    Snake Oil Salesman: The phrase conjures up images of seedy profiteers trying to exploit an unsuspecting public by selling it fake cures.

  5. The proper use of anomalies is well known and the reasons are sound. I would have thought that the use of anomalies would be entirely uncontroversial to the fairly astute readership at WUWT.

    This appears to be attempting to make an issue where there is none.
    It’s a Nothingburger.
    Fake News.

    • TonyL ==> I am having a little fun with this — freely admitted in both posts. Almost the entire post is made up of Dr. Schmidt’s exactly quoted words from the RealClimate site — which I paraphrase once more at the end.

      Hardly fake anything — really maybe too real.

      If you have questions about why Dr. Schmidt says what he says, his post is still active at RC here — you can ask him in comments there.

      • “The proper use of anomalies is well known and the reasons are sound. ”
        Agreed.

        — a little tongue-in-cheek but serious as to the point being made.

        So what is the serious point being made? That you don’t understand why anomalies are used?

      • “So what is the serious point being made? That you don’t understand why anomalies are used?”

        That appears to be the case. I suggest anyone who finds this amusing go and read the article at realclimate with an open mind and you may then understand why anomalies are used. Ho ho. As if that will happen! We can all share in the joke.

      • Actually, the whole of climate science would do well to explain why they use the unreliable, almost nonphysical concept of temperature to do anything useful, since the actual physical parameter is energy. Temperatures represent vastly different energies depending on the phase of matter and the medium being measured — for example, between a dry day and a humid day, between smog and air, or between ozone and oxygen. The assumption of constant relative humidity alone makes the whole thing a pseudoscience.

      • Bobl, it is so they can take a high-energy maximum daily temperature and directly add it to a low-energy minimum temperature, then divide that value in half, as if the two are equivalent, to arrive at an average temperature without proper weighting.

        When is the last time you heard a Warmist talking about maximum temperatures? It’s taboo to discuss those in polite society.

      • In terms of statistics, the point is valid. To compare a “spot” temperature against an “average” (like a 30 year norm) ignores the uncertainty in the “average.” This is similar to the difference between a “confidence interval” and a “prediction interval” in regression analysis. The latter is much greater than the former. In the first case one is trying to predict the “average.” In the second case one is trying to predict a specific (“spot” in the jargon of stock prices) observation.

        Implicitly, an anomaly is trying to measure changes in the average temperature, not changes in the actual temperature at which time the measurement is taken. If the anomaly in June of this year is higher than the anomaly in June of last year, that does not mean that the June temperature this year was necessarily higher than the June temperature last year. It means that there is some probability that the average temperature for June has increased, relative to the (usually) 30 year norm. But in absolute terms that does not mean we are certain that June this year was warmer than June last year.

        Anomalies are okay, if understood and presented for what they are: a means of tracking changes in average temperature. But that is not how they are used by the warmistas. The ideologues use them to make claims about “warmest month ever,” and that is statistical malpractice.

        Basil
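The confidence-vs-prediction-interval distinction the comment draws can be sketched numerically. The sample is made up, and 1.96 is used as a rough 95% normal multiplier standing in for the proper Student-t value; the point is only the inequality:

```python
import math
import statistics

# Made-up monthly readings. The interval for a single new ("spot") observation
# is always wider than the interval for the mean of the sample.
sample = [14.2, 14.8, 15.1, 14.5, 14.9, 14.6, 15.0, 14.4]
n = len(sample)
s = statistics.stdev(sample)

ci_half = 1.96 * s / math.sqrt(n)          # half-width for the *mean*
pi_half = 1.96 * s * math.sqrt(1 + 1 / n)  # half-width for a *new single value*

print(ci_half < pi_half)   # True: a spot value is far less certain than the mean
```

With n = 8 the prediction half-width is exactly sqrt(n + 1) = 3 times the confidence half-width, which is the commenter’s point: judging a single observation against an average ignores most of the uncertainty.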

      • blcjr: [anomalies are] “a means of tracking changes in average temperature”. This is exactly what the CAGW crowd quotes. You are feeding their assumption. I know you are aware of the difference, but the normal person is not; they simply read your text and say, “Oh, the normal temperature is going up or down.”

        I usually try to explain the anomalies as a differential, that is, an infinitely small section of a line with the magnitude and direction of the change. The width of the change is no wider than a dot on the graph. This seems to make more sense to most people.

    • Actually it isn’t uncontroversial. One problem does lie with the uncertainty and its distribution. Another with working with linear transformations of variables in non-linear systems.

    • TonyL

      It gets better-

      “[If we knew the absolute truth, we would use that instead of any estimates. So, your question seems a little difficult to answer in the real world. How do you know what the error on anything is if this is what you require? In reality, we model the errors – most usually these days with some kind of monte carlo simulation that takes into account all known sources of uncertainty. But there is always the possibility of unknown sources of error, but methods for accounting for those are somewhat unclear. The best paper on these issues is Morice et al (2012) and references therein. The Berkeley Earth discussion on this is also useful. – gavin]” (Dec 23, 2014, same thread)

      If we KNEW the truth (but we don’t) we’d use that. So we model the KNOWN errors, but we have no idea if we’ve got all of the errors at all, and how we account for the unknown errors isn’t clear.

      BUT NOAA said “Average surface temperatures in 2016, according to the National Oceanic and Atmospheric Administration, were 0.07 degrees Fahrenheit warmer than 2015 and featured eight successive months (January through August) that were individually the warmest since the agency’s records began in 1880.”

      Not even a HINT that it’s an “estimate”, or that it’s not the absolute truth, or that the margin of error…+/- 0.5K is WAYYYY bigger than the 0.07 F ESTIMATE.

      Perhaps this is why the “fairly astute” readership at WUWT has never viewed the use of “anomalies” in a positive manner or “absolutely” agreed with the idea that they are even a close approximation to Earth’s actual temperature.

      • jorgekafkazar-
        Right!
        And yet they say “the Earth’s temperature is increasing” instead of “the Earth’s anomalies are increasingly warmer” etc. Al Gore says “the Earth has a temperature” instead of “the Earth has a higher anomaly”. And since Gav and the boys ALL ADMIT that it’s virtually impossible to know “exactly” what Earth’s actual global average temperature is, and that Earth is not adequately covered with thermometers, and that the thermometers we DO have are not in any way all properly sited and maintained and accurate… why in the crap do we let them get away with stating that “average surface temperatures were 0.07 F warmer” than a prior year? Why would any serious “Scientist” with any integrity use that kind of language when he’s really talking about something else??

        Oh yeah…..rug weaving. :)

      • Aphan: that “average surface temperatures were 0.07 F warmer” than a prior year

        If only they did actually say that. They don’t even say that. It’s just “hottest year ever” with no quantification, usually.

    • TonyL,
      Yes, at least some of us are aware of the ‘proper’ use of anomalies. At issue is whether anomalies are being used properly. Gavin even admits that frequently they are not: “This means we need to very careful in combining these two analyses – and unfortunately, historically, we haven’t been and that is a continuing problem.”

      • At issue is whether anomalies are being used properly.

        Very True.
        A closely related issue:
        The ongoing story of the use, misuse, and abuse of statistics in ClimateScience! is the longest running soap opera in modern science.
        The saga continues.

    • TonyL: I disagree that the use of anomalies is well known.

      Anomaly
      NOUN
      Something that deviates from what is standard, normal, or expected:
      “there are a number of anomalies in the present system”
      Synonyms: oddity, peculiarity, abnormality, irregularity, inconsistency

      My objection is that the reporting of data as anomalies, like reporting averages without the variance, standard deviation or other measure of dispersion, simply reduces the value of the information conveyed. It eliminates the context. It is not a common practice in statistical analysis in engineering or most scientific fields. None of my statistics textbooks even mentions the term. It simply reduces a data set to the noise component.
      While it seems to be common in climate science, the use of the term anomaly implies abnormal, irregular or inconsistent results. But, as has been extensively argued here and elsewhere, variation in the temperature of our planet seems to be entirely normal.
      That said, I do get that when analyzing temperature records it is useful to look at temperatures for individual stations as deviations from some long term average. E.g. if the average annual temp. in Minneapolis has gone from 10 C (long term average) to 11 C and the temp. in Miami has gone from 20 to 21 C, we can say both have warmed by 1 C.
      Of course, if one averages all the station anomalies and all the station baseline temperatures, the sum of those two averages would be identical to the average of all the actual measured temperatures.
      But it is another thing to only report the average of the ‘anomalies’ over hundreds or thousands of stations without including any information about the dispersion of the input data. Presenting charts showing only average annual anomalies by year for 50, 120, 1000 years is pretty meaningless.
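The identity asserted above — average anomaly plus average baseline equals the average of the actual readings, given equal weighting and a fixed station set — can be checked with a toy example. The numbers are made up:

```python
# Toy check: averaging station anomalies and station baselines separately,
# then summing, recovers the average of the actual measured temperatures.
# Made-up numbers; equal station weighting and a fixed station set assumed.

baselines = [10.0, 20.0, 5.0]    # long-term station averages, °C
actuals   = [11.0, 21.0, 5.5]    # measured temperatures, °C

anomalies = [a - b for a, b in zip(actuals, baselines)]

mean = lambda xs: sum(xs) / len(xs)
lhs = mean(anomalies) + mean(baselines)
rhs = mean(actuals)
print(lhs, rhs)   # both ≈ 12.5 (up to float rounding)
```

The identity breaks down as soon as the station set changes between periods, which is precisely the sampling objection raised earlier in the thread.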

    • “TonyL August 19, 2017 at 4:13 pm
      The proper use of anomalies is well known and the reasons are sound. I would have thought that the use of anomalies would be entirely uncontroversial to the fairly astute readership at WUWT.

      This appears to be attempting to make an issue where there is none.
      It’s a Nothingburger.
      Fake News.”

      The “Fake news and nothingburger” start right with Gavin, his mouth, his writing and Gavin’s foul treatment of others.

      “TonyL August 19, 2017 at 4:13 pm
      The proper use of anomalies is well known and the reasons are sound.”

      What absurd usage of “well known” and “the reasons are sound”, TonyL.
      Just another fake consensus Argumentum ad Populum fallacy.

      Use of anomalies can be proper under controlled conditions for specific measurements:
      • When all data is kept and presented unsullied,
      • When equipment is fully certified and verified,
      • When measurements are parallel recorded before and after installation and impacts noted,
      • When temperature equipment is properly installed everywhere,
      • When temperature equipment installation represents all Latitudes, Longitudes, elevations, rural, suburban and urban environments,
      • When temperatures and only temperatures are represented, not some edited version of the data; no fill-ins, smudging or other data-imitation methods.

      Isn’t it astonishing that “adjustments”, substitutions, deletions and data creation based on distant stations introduce obvious error bounds into temperature records, yet 0.5K is the alleged total error range?

      Error bounds are not properly tracked, determined, applied or fully represented in end charts.
      Gavin and his religious pals fail to track, qualify or quantify error rates, making the official NOAA approach anti-science, anti-mathematical and anti-anomaly. NOAA far prefers displaying “snake oil”, derision, elitism, egotism and utter disdain for America and Americans.

      “Double speak” is far too nice a description for Gavin and NOAA misrepresented temperatures. Climastrologists’ abuse of measurements, data keeping, error bounds and data presentation would bring criminal charges and civil suits if used in any industry producing real goods Americans depend upon.

  6. Kip – good post!
    The REAL answer of course is normally called ‘success testing’. Using this philosophy the test protocol — in this case the way the raw data is treated/analyzed — is chosen in order to produce the kind of result desired: NOT an analysis to find out whether the temperatures are warmer, colder, or the same, but one that produces results showing a warming trend.
    The usual way of detecting this success-testing phenomenon is to read the protocol and see just how much scientific technobabble is there (think of the Stargate TV series). The more technobabble, the less credible the result.

    • This is what is really going on. Station selection, data selection and methodology selection give the gate-keepers of the temperature record and the global warming religion the ability to produce the number they want.

      Think of it as someone standing over the shoulder of a data analyst in the basement of the NCDC each month saying “Well, what happens if we pull out the 5 Africa stations on the eastern side? How about we just add in that station with all the warming errors? Let’s adjust the buoys up and pretend it is because of ship engine intakes that nobody can or will check? Why don’t we bump up the time-of-observation bias adjustment and make a new adjustment for the MMTS sensors? Show me all the stations that have the highest warming? Let’s just drop those 1500 stations that show no warming. The South American stations are obviously too low by 1.0C. Just change them and call it an error.

      We’ll call it version 4.4.3.2.”

  7. Gavin had an analogy. If you’re measuring a bunch of kids to see who’s the tallest, running a ruler head to foot, you can get a good answer. If you measure the height of their heads above sea level, there is a lot more uncertainty. So which would you do?

    • Nick ==> I’m afraid Dr. Schmidt’s “analogy” is crackers.

      We want to find the difference between the heights (lengths, really) of the five boys in the class — which represent average temperatures of five years.

      The current Anomaly Method works like this: they measure the tops of the heads of the five boys as “elevations above sea level”, subtract from each the common “base elevation” of the room (the elevation above sea level of the classroom floor), report the remainders as each kid’s “anomaly from the floor”, and then compare the kids’ anomalies.

      That is, of course, nutty.

      If they want to know the difference in the length of each kid (the proper biological term for the “height” of children), they need only have them lie down on an exam table, feet against the foot rest, and run the measuring rule down to the top of the head, arriving at each kid’s length, then compare them. Length of children, boys or girls, has nothing to do with sea level or elevations — it is discernible from direct measurement — as is, of course, surface air temperature at a weather station — discernible by direct measurement — which can be compared year to year.

      If the years are then all the same, within the uncertainties, then the years can and should be considered all the same.

      • elevation above sea level of the classroom floor….

        …and then make adjustments for the weight of each child…..because they are making the floor sink

    • To continue the analogy, what people want to know is ***not*** which kid is tallest, but rather which kid is highest above sea level, allowing for the possibility that the “sea level” — that is, the global absolute temperature — may be changing over time (day by day and year by year) in a way that is very difficult to measure accurately.

      • No, the best way is to measure their height using low orbit satellite range finding, whilst getting the kids to jump up and down on a trampoline and measure the reflection off the surface of the trampoline at the bottom of the movement. This is accurate to within +/- 1mm as has been established for sea level measurements.

      • And yet actual absolute measurements are better than statistical output, which is pure fantasy. It’s not a temperature anomaly, it’s a statistical anomaly, and it requires a “leap of faith” to accept it as a temperature anomaly when talking about the GISS GAMTA.

    • NS,
      The primary uncertainty is introduced by adding in the elevation above sea level. Neither sea level nor the ground they are standing on is known with the same accuracy or precision as the distance between their feet and hair. Therein lies the problem with temperature anomalies. We aren’t measuring the anomalies directly (height) but obtaining them indirectly from an imperfectly known temperature baseline!

      • “Neither sea level nor the ground they are standing on is known with the same accuracy or precision as the distance between their feet and hair.”
        Exactly. And that is the case here, because we are talking not about individual locations, but the anomaly average vs absolute average. And we can calculate the anomaly average much better, just as we can measure better top to toe.

        It has another useful feature in the analogy. Although we are uncertain of the altitude, that uncertainty does not actually affect relative differences, although that isn’t obvious if you just write it as a±b. The uncertainty of the absolute average doesn’t affect our knowledge of one year vs another, say, because that component of error is the same for both. So if you unwisely say that 2016 was 14.7±1, and 2015 was 14.5±1 (numbers made up for this example), then you still know that 2016 was warmer than 2015. The reason is that you took the same number 14.0±1 (abs normal), and added the anomalies of 0.7±0.1 and 0.5±0.1. The normal might have been 13 or 15, but 2016 will still be warmer than 2015.
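This cancellation is mechanical and can be checked in a few lines (using the made-up numbers from the comment): however wrong the estimated normal is, the same error enters both years, so the 2016-minus-2015 difference is always just the difference of the anomalies.

```python
import random

random.seed(0)
anom_2016, anom_2015 = 0.7, 0.5    # the comment's made-up anomalies

diffs = []
for _ in range(1000):
    # The estimate of the absolute normal may be wrong by up to +/- 1,
    # but the SAME (wrong) normal is added to both years.
    normal = 14.0 + random.uniform(-1.0, 1.0)
    diffs.append((normal + anom_2016) - (normal + anom_2015))

# Whatever the error in the normal, it cancels in the year-to-year difference.
print(min(diffs), max(diffs))   # both ~0.2
```

The sketch illustrates only the shared-baseline component of the error; the independent per-year anomaly uncertainties (±0.1 above) do not cancel.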

      • You clearly have a different understanding of “error” than I do, Nick.

        You wrote: “So if you unwisely say that 2016 was 14.7±1, and 2015 was 14.5±1 (numbers made up for this example), then you still know that 2016 was warmer than 2015.”

        I would say that the “real value” of the 2016 temperature could be anywhere from 13.7 to 15.7 and “real value” of the 2015 temperature could be anywhere from 13.5 to 15.5. Since the temperature difference between 2015 & 2016 is well within the error range of both temperatures it’s impossible to know which year is warmer or cooler.

        That’s what I remember from my first year Physics Prof, some 50 years ago. But maybe Physics has “evolved” since then. :))

      • TheOtherBob ==> I am afraid that you are right — though you may be misunderstanding what Nick means to say here. See my reply to him just below.

      • Nick Stokes ==> (if your ancestors are from Devon, England, we may be related).

        “we can calculate the anomaly average much better, just as we can measure better top to toe.” My objection to this assertion is that no one is measuring an anomaly — the ‘toe to head’ number is an actual physical measurement, with known/knowable original measurement error margins. The ‘anomaly average’ is a calculated/derived number based on two uncertain measurements — the uncertainty of the long-term average of the base period and the uncertainty of the measurement of today’s/this year’s average temperature. If the uncertainty of the base period figure is +/- 0.5°C and the uncertainty of this year’s average temperature is +/- 0.5°C, then the uncertainties must be ADDED to one another to get the real uncertainty of the anomaly. This gives anomalies with a known uncertainty of +/- 1.0°C each. Averaging these anomalies does not remove the uncertainty — it remains +/- 1.0°C for the resultant average.

        The idea that an average of anomalies has a smaller original measurement error than the original measurements is a fallacy.
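A minimal interval-arithmetic sketch of the bookkeeping the comment describes, with invented numbers: if each input carries a worst-case half-width of ±0.5°C, subtracting one from the other gives a worst-case half-width of ±1.0°C. (Standard statistical practice instead adds *independent* uncertainties in quadrature, which yields a smaller figure; this sketch shows only the worst-case interval bound argued for above.)

```python
def interval_sub(a, da, b, db):
    """Subtract b±db from a±da in worst-case (interval) arithmetic,
    returning (midpoint, half_width). The half-widths simply add."""
    lo = (a - da) - (b + db)
    hi = (a + da) - (b - db)
    return (lo + hi) / 2, (hi - lo) / 2

# E.g. this year's average 15.3±0.5 minus a baseline of 14.8±0.5 (numbers
# invented): the anomaly is 0.5 with a worst-case band of ±1.0.
value, half_width = interval_sub(15.3, 0.5, 14.8, 0.5)
print(value, half_width)
```

Which of the two conventions is appropriate depends on whether the two errors can be treated as independent random variables or as unconstrained bounds.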

      • Thanks Kip. Yes, my thoughts exactly. I didn’t want to repeat the point I made in my first post about adding the errors to get the anomaly error but you covered it most eloquently. Thanks for starting a very interesting discussion.

      • Nick ==> If you want you can email me your oldest generation information and I’ll see if I can find any common ancestors. my first name at the domain i4 decimal net

      • “I would say that the “real value” of the 2016 temperature could be anywhere from 13.7 to 15.7 and “real value” of the 2015 temperature could be anywhere from 13.5 to 15.5”
        But not independently. If 2016 was at 13.7 because the estimate of normal was wrong on the low side (around 13), then that estimate is common to 2015, so there is no way that it could be 15+.
        There are many things that can’t be explained by what you learnt in first year physics.

      • I don’t know what point you’re making in your comment.

        And there are many things that Gavin & Co. do that can’t be explained by anyone – at least in a way that makes sense to most people. :))

    • No problem if all 5 boys are standing on the same level platform … but WE know that the platform is not level !

      There is another analogy. This morning my wife asks: what’s the outside temperature today? My answer: the temperature anomaly is 0.5 K. When I add that she needs no new clothes, I will run into problems that day.

      • Nor will she nicely ask what the outside temperature is, again.

        NOAA should reap equal amounts of derision for their abuse of anomalies.

  8. Suppose that we have a data set: 511, 512, 513, 510, 512, 514, 512 and the accuracy is +/- 3. The average is 512. The anomalies are: -1, 0, +1, -2, 0, +2, 0 and the accuracy is still +/- 3.

    I don’t understand how using anomalies lets us determine the maximum any differently than using the absolute values. There has to be some mathematical bogusness going on in CAGW land. I suspect they think that if you have enough data it averages out and gives you greater accuracy. I can tell you from bitter experience that it doesn’t always work that way.
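The comment's numbers, worked through in a few lines: forming anomalies just shifts every reading by the common average, so the dispersion (and the ±3 accuracy attached to each reading) is untouched.

```python
data = [511, 512, 513, 510, 512, 514, 512]
accuracy = 3                                 # +/- 3 on every reading

avg = sum(data) / len(data)                  # 512.0
anomalies = [x - avg for x in data]          # [-1, 0, +1, -2, 0, +2, 0]

# Subtracting a common constant changes neither the spread of the set nor
# the +/- 3 uncertainty attached to each individual reading.
spread = max(data) - min(data)
anom_spread = max(anomalies) - min(anomalies)
print(avg, anomalies, spread == anom_spread)
```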

    • But if you ADD the uncertainties together, you get zero!

      Here’s the appropriate “world’s best practice” algorithm:

      1. Pick a mathematical operator (+, -, /, *, sin, cos, tan, sinh, Chebyshev polynomial etc.)
      2. Set uncertainty = 0
      2a. Have press conference announcing climate is “worse than originally thought”, “science is settled” and “more funding required.”
      3. Calculate uncertainty after applying operator to (homogenised) temperature records
      4. Is uncertainty still zero?
      5. No, try another operator.
      6. Go back to 3 or, better yet, 2a.

      • The sharp-eyed will note the above algorithm has no end. As climate projects are funded on a per-year basis, this ensures the climate scientist will receive infinite funding.

    • Thank you Bob!
      My math courses in Engineering and grad studies (stats, linear programming, economic modelling, and surprising to me the toughest of all, something called “Math Theory”) were 50 years ago. But the reasoning that somehow anomalies are more precise or have less uncertainty than the absolute values upon which they were based set off bells and whistles in my old noggin. I was very hesitant though to raise any question for fear of displaying my ig’nance..
      Maybe both of us are wrong, but now I know I’m in good company. :)

    • “The average is 512. The anomalies are: -1, 0, +1, -2, 0, +2, 0”
      But you don’t form the anomalies by subtracting a common average. You do it by subtracting the expected value for each site.

      “how using anomalies lets us determine the maximum”
      You don’t use anomalies to determine the maximum. You use them to determine the anomaly average. And you are interested in the average as representing a population mean, not just the numbers you sampled. The analogy figures here might be
      521±3, 411±3, 598±3. Obviously it is an inhomogeneous population, and the average will depend far more on how you sample than how you measure. But if you can subtract out something that determines the big differences, then it can work.
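A sketch of this sampling point (the site normals 521, 411, 598 are taken from the comment; everything else is invented): the raw average depends mostly on which sites happen to be sampled, while subtracting each site's expected value first isolates a change shared by all sites.

```python
import random
from statistics import mean

random.seed(3)
site_normals = [521, 411, 598]   # the comment's inhomogeneous population
shared_signal = 2.0              # a common change we would like to detect

raw, anoms = [], []
for _ in range(300):
    normal = random.choice(site_normals)     # which site gets sampled varies
    reading = normal + shared_signal + random.gauss(0, 3)
    raw.append(reading)
    anoms.append(reading - normal)           # anomaly w.r.t. that site's normal

print(round(mean(anoms), 2))   # near 2.0: the shared change
print(round(mean(raw), 2))     # mostly reflects which normals were sampled
```

The sketch assumes the per-site normals are known exactly, which is of course the point in dispute elsewhere in this thread.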

      • That’s what you say. Here’s what Dr. Schmidt said:

        But think about what happens when we try and estimate the absolute global mean temperature for, say, 2016. The climatology for 1981-2010 is 287.4±0.5K, and the anomaly for 2016 is (from GISTEMP w.r.t. that baseline) 0.56±0.05ºC. So our estimate for the absolute value is (using the first rule shown above) is 287.96±0.502K, and then using the second [the first and second rules have to do with estimating the uncertainties – see Gavin’s post], that reduces to 288.0±0.5K [2016]. The same approach for 2015 gives 287.8±0.5K, and for 2014 it is 287.7±0.5K. All of which appear to be the same within the uncertainty. Thus we lose the ability to judge which year was the warmest if we only look at the absolute numbers.

        My example is a simplified version of the above. If you think Dr. Schmidt erred, that’s between you and him.

    • “the accuracy is still +/- 3.”

      Of course it is. But what climate science does is to re-calculate the error statistically from the anomaly and come to the absurd conclusion that the error changed from 0.5 to 0.05. The nonsense is that averaging reduces the variance and gives the misleading impression that it provides a quick way to reduce error. And it does, in very specific circumstances, of which this is not one.

  9. Extra! EXTRA! Read all about it! Gavin Schmidt of NASA ADMITS that there has been NO statistically significant CHANGE IN EARTH’S ABSOLUTE TEMPERATURE in the last 30 years!!!

    • I’m in denial. A climate scientist actually told the truth… kind’a… sort’a… maybe… in a convoluted way? I don’t believe it. :)

      • He told the truth, and then rationalized why that truth is completely unimportant to the actual “science” involved in climate science. Because we ALL know that science is about approximations, estimates, conjectures, ideology, variety, inclusiveness, personal interpretations, pizza parties, casual Fridays (or should I say “causal” Fridays….harharhar), unicorns, pink fuzzy bunny slippers, the flying spaghetti monster and The Wheel of Climate. And if you don’t like unicorns or pizza parties, you’re a hating-hate-hater-denier and should be put to death.

        ISIS is more tolerant.

      • “Because we ALL know that science is about approximations, estimates, conjectures, ideology, variety, inclusiveness, personal interpretations, pizza parties, casual Fridays (or should I say “causal” Fridays….harharhar), unicorns, pink fuzzy bunny slippers, the flying spaghetti monster and The Wheel of Climate.”
        What happened to the rainbows, fairy dust and hockey sticks?

        “…hating-hate-hater-denier…”
        You forgot lying, hypocritical, sexist, egotistical, homophobic, misogynist, deplorable bigot. :))

    • “NO statistically significant CHANGE IN EARTH’S ABSOLUTE TEMPERATURE in the last 30 years”

      Earth’s Absolute Temperature has changed by roughly 4°C in every one of those last 30 years.

      Surely that is statistically significant. :)

  10. What is the sensitivity of the measuring device, and what are the significant figures? Can an average of thousands of measurements accurate to a tenth of a degree be more accurate than each individual measuring device? I am asking an honest question that someone here can answer accurately. We learned significant figures in chemistry, but wouldn’t they also apply to these examples? How accurate are land based temp records versus the satellite measuring devices? This has been a central question for me in all of this “warmest ever” hoopla, and I would appreciate a good explanation.

    • Cold ==> There are answers to those questions…but not in this essay. One hint about air temperatures is that before the digital age, they were measured to “the nearest whole degree”. That means that each recorded temperature represented all the possible temperatures (say we recorded 72) between 71.5 and 72.5 (one of the .5s would be excluded) — a range. No amount of averaging gets rid of the +/- 0.5 of those measurements. ALL subsequent calculations using those measurements must have the +/- 0.5 attached when the calculations are done. Results can be no more accurate/precise than the original measurements. One must ADD to the minimum uncertainty of the original measurement range the uncertainties involved in thermometers that were seldom (if ever) re-calibrated, not standardized in the first place, etc.

      • Kip,
        To compound that, in the sixties I was taught that, at least in Engineering, there existed MANY decision rules about whether to round a “5” up or down if it was the last significant digit, and that those recording data often failed to specify which rule they used. We were instructed to allow for that.

        I don’t think Wiley Post or Will Rogers gave two hoots about how to round up or down fractional temperatures at their airstrips in the 20’s or early 30’s.

        That modern “Climate Scientists” assume that those who recorded temperatures at airports or agricultural stations in 1930 were aware that those figures would eventually be used to direct the economies of the world is typical of the “history is now” generation.

      • Kip,
        The automated weather stations (ASOS) are STILL reading to the nearest degree F, and then converting to the nearest 0.1 deg C.

      • Those numbers were not anywhere near that good. How often were thermometers calibrated? Were they read with verniers or magnifiers? What did they use to illuminate thermometers for night-time readings? Open flames? And don’t forget all of the issues that Anthony identified with his work on modern weather observation equipment.

      • Clyde ==> Give that to me again? You are saying that the digital automatic weather stations read the temperature to the nearest whole degree (F or C?) and then do what with it? Convert it to tenths of a degree? How the heck do they do that?

      • Walter ==> I have stood at one of the old-style Stevenson screen weather stations (Santo Domingo, Dominican Republic). The automated digital stations kept getting blown away by hurricanes and storms, but they keep up the screen and the glass thermometer inside of it.

        The elderly meteorologist explained exactly how they took the readings. Open the box, look at the thermometer, write down to the nearest degree. He explained that the shorter men were supposed to stand on the conveniently located concrete block so that their eye would be more or less level with the mercury in the thermometer (angle of viewing is somewhat important) but that they seldom did, for reasons of pride. Thus readings by short guys tended high.

      • The temperature data was and is recorded to the first place of decimal. The adjustment is carried out as: 33.15 [0.01 to 0.05] as 33.1, 33.16 as 33.2, 33.25 [0.05 to 0.09] as 33.3. This is also followed in averaging.

        Dr.S. Jeevananda Reddy

      • Interesting specification from the ASOS description:
        http://www.nws.noaa.gov/asos/aum-toc.pdf
        Temperature measurement: From -58F to +122F RMS error=0.9F, max error 1.8F.

        “Once each minute the ACU calculates the 5-minute average ambient temperature and dew point temperature from the 1-minute average observations (provided at least 4 valid 1-minute averages are available). These 5-minute averages are rounded to the nearest degree Fahrenheit, converted to the nearest 0.1 degree Celsius, and reported once each minute as the 5-minute average ambient and dew point temperatures. All mid-point temperature values are rounded up (e.g., +3.5°F rounds up to +4.0°F; -3.5°F rounds up to –3.0°F; while -3.6°F rounds to -4.0°F).”

        This is presumably adequate for most meteorological work. I’m not sure how we get to a point where we know the climate is warming when the warming is within the error band of the instruments. Forgive me, I’m only a retired EE with 40+ years designing instrumentation systems (etc).
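The mid-point rule quoted from the ASOS manual can be sketched in a few lines (the function name is mine): round to the nearest whole degree Fahrenheit with ties going up, i.e. toward positive infinity, then convert to the nearest 0.1 degree Celsius.

```python
import math

def asos_round(temp_f):
    """Round a 5-minute average as the ASOS manual describes: nearest whole
    degree F with mid-points rounded UP (toward +infinity), then converted
    to the nearest 0.1 degree C."""
    whole_f = math.floor(temp_f + 0.5)       # +3.5 -> 4, -3.5 -> -3, -3.6 -> -4
    temp_c = round((whole_f - 32) * 5 / 9, 1)
    return whole_f, temp_c

print(asos_round(3.5))    # whole degree F part is 4
print(asos_round(-3.5))   # whole degree F part is -3
print(asos_round(-3.6))   # whole degree F part is -4
```

Note that `floor(x + 0.5)` reproduces the manual's asymmetric treatment of negative mid-points exactly.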

      • Clyde and others ==> I think I have this figured out. Last time I downloaded a data set of station reports (WBAN:64756, Millbrook, NY) the 5 minute recordings were in whole degrees Fahrenheit and whole plus tenths Celsius. I used this example in my essay on Average Averages.

        I will check out the manual Clyde offers and see if they are converting to tenths of C from whole degrees F or what.

      • Mark ==> I usually find that if I put in the time and effort — like reading the manual for the whole automagic weather station system, then somewhere in all that verbiage the penny drops and “Wow, there it is!”

        Occasionally, I have to write to the government employee in charge of the system and ask my question — almost always get a polite and helpful answer. Did that with the NOAA CORS data.

    • If you have one thermometer with a 1 degree scale you would attribute +/-0.5 degrees to a measurement. If it is scientific equipment, it will be made to ensure it is at least as accurate as the scale.

      There is a rounding error when you read the scale and there is the instrumental error.

      If you have many readings on different days, the rounding errors will average out. If you have thousands of observation stations, the calibration errors of the individual thermometers will average out.

      That is the logic of averages being more accurate than the basic uncertainty of one reading.

      • Greg ==> Yes, you correctly state the Fallacy of Averages. There is no reason to believe that the original measurement errors “average out”. That is just a convenient way of brushing them under the rug.

        Thermometer readings taken to the nearest degree are properly reported as “72°F +/- 0.5°F” — the average of two or a thousand different thermometer readings originally taken to the nearest degree are properly reported as “72.45°F +/- 0.5°F”. The rounding factor does not disappear by long division.

      • Accuracy of scale: If the thermometers from 1880 through early 20th century read in whole degree increments (which was “good enough” for their purposes) then how does one justify declaring this year was the hottest year ever, by tenths of a degree?

        Rounding errors will only “average out” if everyone recording temps used a flip of the coin (figuratively) to determine what to record. The reality is some may have used a decision rule to go to the next HIGHEST temp and some the LOWER. Then there’s the dilemma about what to do with “5 tenths”; there were “rules” about that too. You cannot assume the “logic of averages” unless we know how those rules of thumb were applied.
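The decision-rule point can be illustrated with synthetic readings (all numbers invented): if every observer pushed exact halves to the next higher degree, the recorded mean picks up a systematic warm shift, relative to the true mean, that averaging never removes.

```python
import math
from statistics import mean

# Synthetic readings: every tenth of a degree from 70.0 to 70.9, equally often.
readings = [70 + tenth / 10 for tenth in range(10) for _ in range(100)]

# Decision rule: always push an exact half to the next HIGHER degree
# (round-half-up), then record only whole degrees.
recorded = [math.floor(r + 0.5) for r in readings]

bias = mean(recorded) - mean(readings)
print(bias)   # ~ +0.05: a systematic shift that more averaging cannot remove
```

Had half the observers rounded halves down instead, the shift would partially cancel; the point is that without knowing the rules used, neither outcome can be assumed.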

      • Suppose that we have a sine wave of known frequency buried under twenty dB of Gaussian noise. We can detect and reconstruct that signal even if our detector can only tell us whether the signal plus noise is above or below zero volts (i.e. it’s a comparator). By running the process for long enough we can get whatever accuracy we need. link

        The problem is that Gaussian noise is a fiction. It’s physically impossible because it would have infinite bandwidth and therefore infinite power. Once the noise is non-Gaussian, our elegant experiment doesn’t work any more. It’s more difficult to extract signals from pink or red noise. link If we can’t accurately describe the noise, we can’t say anything about our accuracy.
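The comparator experiment described above can be sketched numerically (parameters invented: a sine with amplitude 20 dB below the noise, 50 samples per period): observing only the sign of signal-plus-noise and correlating against a reference sine of the known frequency recovers the buried amplitude, given enough samples.

```python
import math
import random

random.seed(42)
amp, sigma = 0.1, 1.0        # signal amplitude 20 dB below the noise level
n = 200_000                  # 4000 full periods of 50 samples each

acc = 0.0
for i in range(n):
    phase = 2 * math.pi * i / 50
    sample = amp * math.sin(phase) + random.gauss(0, sigma)
    detector = 1.0 if sample >= 0 else -1.0      # 1-bit comparator output
    acc += detector * math.sin(phase)            # correlate with reference

# For small amp, E[sign(s + noise)] ~ sqrt(2/pi) * s / sigma, and correlating
# with the reference averages sin^2 to 1/2; invert that to estimate amplitude.
estimate = (acc / n) / (math.sqrt(2 / math.pi) * 0.5 / sigma)
print(round(estimate, 3))   # close to amp = 0.1
```

As the follow-up comment notes, the recovery formula leans on the noise being Gaussian; with colored or non-Gaussian noise the scale factor, and the error estimate, no longer hold.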

      • “if there are n stations and if the error of the individual readings is s, the error of the average will be s/squareroot(n).”

        Ah, “the Law of Large Numbers”. Somebody always drags that up. Sorry, but no: that only applies to independent, identically distributed random variables.

      • Following the child height example:

        First case: If you take one child and you measure his/her height 10 times, the average is more accurate.

        Second case: If you have 10 children and you measure their height once per child, the average height is not more accurate than the individual accuracy.

        The temperature in Minneapolis is different from the temperature in Miami. The Earth average temperature belongs to the second case. That is my understanding.

        It does not matter, anyway, since the Earth is not in thermal equilibrium or even in thermodynamic equilibrium and therefore the term average temperature is meaningless.

    • Cold (what else?) in Wisconsin: temperature is an Intensive property, the speed of the moving/vibrating atoms and molecules. For climate purposes it is measured by a physical averaging process: the amount the temperature being measured changes the resistance of (usually now) some sort of calibrated resistor, which can be very precise (to hundredths of a degree) but only as accurate as its calibration over a specific range. Averaging temperatures is pretty meaningless. You can average the temperature of the water in a pot and the temperature of the couple of cubic feet of gas heating it and learn nothing. Measuring how the temperature of the water changes tells you something about the amount of energy released by the burning gas, but it’s a very crude calorimeter.

      Like that example, the climate is driven by energy movements, not primarily by temperatures.

      • I’m not a climate scientist (but I did see one on TV), but why aren’t those far more educated than me pointing out Phil’s point, which should be obvious to anyone with a basic science education?

        You can average the temperature of the water in a pot and the temperature of the couple of cubic feet of gas heating it and learn nothing.

        In discussions with my academic son, I point out that I can take the temperature at the blue flame of a match stick and then the temperature of a comfortable bath tub, and the average of the two has no meaning.
        The response, of course, is that 97% of scientists say I’m deluded. (Argument from Authority.)

      • I have environment canada weather app on my phone. I noticed this summer they reported what it feels like rather than the measured number. Or, they use the inland numbers which are a few degrees higher, rather than the coastal number that they have been using at the same airport station for the last 80 years.
        They especially do this on the radio weather reports. It feels like…30 degrees

      • george – scientists have made it very clear that anyone should expect a change of the global average at their locale.

        but the global avg is good for spotting the earth’s energy imbalance. not perfect, but good

      • “but the global avg is good for spotting the earth’s energy imbalance. not perfect, but good”

        Actually it is almost completely useless given the very low heat capacity of the atmosphere compared to the ocean (remember that it is the ocean that absorbs and emits the vast majority of solar energy).

      • crackers ==> Global Average Surface Air Temperature (or its odd cousin, Land and Sea Surface Temperature) is good for something — just not spotting Earth’s “energy imbalance”. The long-term temperature record shows that temperature rises and falls without regard to energy in/out balance (or the Sun has been really variable and plays an enormous role).

    • https://science.nasa.gov/science-news/science-at-nasa/1997/essd06oct97_1

      Accurate “Thermometers” in Space

      “An incredible amount of work has been done to make sure that the satellite data are the best quality possible. Recent claims to the contrary by Hurrell and Trenberth have been shown to be false for a number of reasons, and are laid to rest in the September 25th edition of Nature (page 342). The temperature measurements from space are verified by two direct and independent methods. The first involves actual in-situ measurements of the lower atmosphere made by balloon-borne observations around the world. The second uses intercalibration and comparison among identical experiments on different orbiting platforms. The result is that the satellite temperature measurements are accurate to within three one-hundredths of a degree Centigrade (0.03 C) when compared to ground-launched balloons taking measurements of the same region of the atmosphere at the same time. ”

      The satellite measurements have been confirmed by the balloon measurements. Nothing confirms the bastardized surface temperature record.

      And this:

      http://www.breitbart.com/big-government/2016/01/15/climate-alarmists-invent-new-excuse-the-satellites-are-lying/

      “This [satellite] accuracy was acknowledged 25 years ago by NASA, which said that “satellite analysis of the upper atmosphere is more accurate, and should be adopted as the standard way to monitor temperature change.”

      end excerpts

      Hope that helps.

  11. I am puzzled as to how, over a period of 30 years, temperatures can be established only to ±0.5K, yet for the GISTEMP 2016 anomaly against that baseline the uncertainty is only ±0.05ºC. How is the latter more precise? Is it that different measuring techniques are in use?

    • Greg ==> Reducing the uncertainty by an order of magnitude between “direct measurement” and “anomaly” is a magic trick…..

      • The order of magnitude is not necessarily wrong, because they are different things; there is no reason why they should be the same. But I don’t believe either the 0.5 or the 0.05 figure.

      • The problem is that, while the instrumental and reading errors are random and will average out, allowing a sqrt(N) error reduction, you cannot apply the same logic to the number of stations, and this is exactly what they do to get the silly uncertainties.

        They try to argue that they have N-thousand measurements of the same thing: the mean temperature. This is not true, because you cannot measure a mean temperature; it is not physical, it is a statistic of individual measurements. Neither does the world have A temperature which you can try to measure at a thousand different places.

        So all you have is thousands of measurements, each with a fixed uncertainty. That does not get more accurate, any more than you could take a thousand measurements on Mars and then claim that you know the mean temperature of the inner planets more accurately than you know the temperature of Earth.

        The temperature at different places are really different. You don’t get a more accurate answer by measuring more different things.

      • There is no evidence that an error which is mechanical in nature would average out with more samples anyway. Devices of the same type tend to drift or fail in the same direction.

      • But they are NOT “different things”.
        If one is defined as a deviation from another, you can’t separate them, no matter how many statistical tricks you apply.

      • “while the instrumental and reading errors are random and will average out allowing a sqrt(N) error reduction”

        Just what makes you believe that?

      • Greg, you are moving from the verifiable to the hypothetical with the statement about errors averaging out. The mathematics assumes the elements of the set have precise properties (that they are IID: independent and identically distributed).

        Also, one of the pillars of the scientific method is the method of making measurements: you design the tools to achieve the resolution you want. Were the temperature measurement stations set up to measure repeatably with sub-0.1K uncertainty? No, they weren’t. Neither were the bucket measurements of SST.

        And that is the fundamental problem with climate scientists. They are dealing in hypotheticals while believing they are real. They have crossed into a different area.
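The distinction argued in this sub-thread, that averaging beats down random error but cannot touch a shared systematic bias, can be sketched with a toy simulation (all numbers below are invented for illustration):

```python
import random

random.seed(0)

N = 10_000
SIGMA = 0.5    # per-reading instrument uncertainty, deg C
TRUE_T = 15.0  # "true" temperature being measured

# Purely random errors: the mean of N readings converges on the true
# value, with standard error SIGMA / sqrt(N).
random_only = [TRUE_T + random.gauss(0.0, SIGMA) for _ in range(N)]
mean_random = sum(random_only) / N

# A shared systematic bias (e.g. every sensor drifting warm by 0.3 C)
# survives averaging completely: no sqrt(N) reduction applies to it.
BIAS = 0.3
biased = [TRUE_T + BIAS + random.gauss(0.0, SIGMA) for _ in range(N)]
mean_biased = sum(biased) / N

print(f"mean, random errors only: {mean_random:.3f}")
print(f"mean, with shared bias:   {mean_biased:.3f}")
```

The sqrt(N) reduction is real, but only for the random, uncorrelated component; any shared drift passes straight through into the mean.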

  12. It is also well worth remembering (or learning) the difference between MEAN and MEDIAN and paying close attention to which one is used where in information sources.

    So many reports state that “the temperature is above the long-term MEAN”, when in a Normal Distribution exactly half of the samples are higher than the mean!

    It’s an interesting and worthwhile exercise to evaluate whether the temperature series at any particular station resembles a Normal Distribution…
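The mean/median point can be illustrated with synthetic, right-skewed “temperature” data (the values are invented, not from any station):

```python
import random
import statistics

random.seed(1)

# Synthetic daily temperatures: mostly mild days plus a few hot spikes,
# giving a right-skewed (non-normal) distribution.
temps = [random.gauss(20.0, 3.0) for _ in range(970)]
temps += [random.gauss(35.0, 2.0) for _ in range(30)]

mean_t = statistics.mean(temps)
median_t = statistics.median(temps)
frac_above_mean = sum(1 for t in temps if t > mean_t) / len(temps)

print(f"mean            = {mean_t:.2f}")
print(f"median          = {median_t:.2f}")
# "Exactly half above the mean" only holds for symmetric distributions;
# here the hot tail drags the mean above the median.
print(f"fraction > mean = {frac_above_mean:.2f}")
```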

  13. I was looking at temperature and CO2 data last week to see whether NASA, NOAA and GISS would pass FDA scrutiny if approval were sought. There is a lot to it, but from acquisition to security to analysis, along with quality checks for sampling biases and missing data, not to mention the changing of historical data, the answer is no. NOT EVEN CLOSE! Blinding is a big deal, so ethically I believe any climate scientist who is also an activist must blind ALL PARTS of a study to ensure quality. What about asking to audit all marchers on Washington who received federal grants but do not employ FDA-level or greater quality standards? Considering Michael Mann would not turn over his data to the Canadian courts last month, this might be a hoot, and REALLY VALUABLE!

  14. TonyL: I disagree that the use of the “anomalies” is well known.

    a·nom·a·ly
    [əˈnäməlē]

    NOUN
    something that deviates from what is standard, normal, or expected:
    “there are a number of anomalies in the present system”
    synonyms: oddity · peculiarity · abnormality · irregularity · inconsistency

    While it is used extensively in climate science these days, it is a very uncommon approach in statistical analysis, engineering and many scientific fields. The term and the process are not mentioned or described in any of my statistics textbooks. I have spent 40 years in the business of collecting and analyzing all kinds of measurements and have never seen the need to convert data to ‘anomalies’. It can be viewed as simply reducing a data set to its noise component. My main objection is that, like an average without an estimate of dispersion such as the variance or standard deviation, it serves to reduce the information conveyed. Also, as the definition of anomaly indicates, it implies abnormality, irregularity, etc. As has been widely argued here and elsewhere, significant variability in the temperature of our planet seems quite normal.

  15. I think this is a fine demonstration of the fallacy of false precision. Also of statistical fraud.

    We can’t let the proles think “Hey, guess what, the temperature hasn’t changed!”

  16. KH, I am of two minds about your interesting guest post.
    On the one hand, because of latitudinal (temperate zone) and altitudinal (lapse rate) differences, a global average temperature is meaningless. On the other hand, a global average stationary-station anomaly (correctly calculated) is meaningful, especially for climate trends, and so useful if the stations are reliable (most aren’t).
    At the same time, useful anomalies hide a multitude of other climate sins, not least the gross discrepancy between absolute temperatures and anomalies in the CMIP5 archive of the most recent AR5 climate models: they get 0C wrong by +/-3C! So CMIP5 is not at all useful. The essay ‘Models all the way Down’ in the ebook Blowing Smoke covers the details of that, and more. See also the previous guest post here, ‘The Trouble with Models’.

    • Ristvan ==> I am admittedly having a little fun by playing the explanations of the use of temperature anomalies back at the originators in their own words. It is a revealing exercise — if not very scientific.

    • I agree that anomalies make more sense in principle, if you want to look at whether the earth has warmed due to changing radiation, for example.

      The problem is that the “climatology” for each month is the mean of 30 days of that month over 30 years: 900 data points. For any given station they will have a range of perhaps 5 to 10 deg C, with some distribution. You can take 2 std dev as the uncertainty of how representative that mean is, and I’ll bet that is more than 0.05 deg C. So the uncertainty on your anomaly can never be lower than that.

    • Ristvan: “On the one hand, because of latitudinal (temperate zone) and altitudinal (lapse rate) differences, a global average temp is meaningless.”
      What you are saying in simple terms is that a global average temperature is also a crock of fecal matter.
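The climatology-uncertainty point raised in this thread can be made concrete. Assuming roughly 900 daily values for one calendar month over 30 years, with a spread of a few degrees (synthetic data, purely illustrative), the spread of the distribution and the standard error of its mean are very different quantities:

```python
import math
import random
import statistics

random.seed(2)

# Synthetic climatology input: 30 years x 30 days of one calendar month,
# daily means scattered around 10 C with a spread of a few degrees.
vals = [random.gauss(10.0, 2.5) for _ in range(900)]

sd = statistics.pstdev(vals)        # spread of the daily values
sem = sd / math.sqrt(len(vals))     # standard error of the 900-value mean

print(f"std dev of daily values : {sd:.2f} C")   # how variable the month is
print(f"std error of the mean   : {sem:.3f} C")  # precision of the mean itself
print(f"2 x std dev             : {2 * sd:.2f} C")
```

The small standard error is what a tight climatology uncertainty rests on, but it only applies under the IID assumptions debated elsewhere in this thread; two standard deviations of the distribution describe how representative the mean is of any given day.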

  17. This is like the rules for stage psychics doing cold readings ==> do not be specific on anything checkable.

  18. Another error they usually ignore is sampling error. Is the sample a true and accurate representation of the whole? In the case of SST, almost certainly not.

    Sampling patterns and methods have been horrendously variable and erratic over the years. The whole engine-room/buckets fiasco is largely undocumented and is “corrected” based on guesswork, often blatantly ignoring the written records.

    What uncertainty needs to be added due to incomplete sampling?

  19. KIP,

    Something buried in the comments section of Gavin’s post is important and probably overlooked by most:

    “…Whether it converges to a true value depends on whether there are systematic variations affecting the whole data set, but given a random component more measurements will converge to a more precise value.
    [Response: Yes of course. I wasn’t thinking of this in my statement, so you are correct – it isn’t generally true. But in this instance, I’m not averaging the same variable multiple times, just adding two different random variables – no division by N, and no decrease in variance as sqrt(N).]”

    Gavin is putting to rest the claim by some that taking large numbers of temperature readings allows greater precision to be assigned to the mean value. To put it another way, the systematic seasonal variations swamp the random errors that might allow an increase in precision.

    Another issue is that, by convention, the uncertainty represents +/- one (or sometimes two) standard deviations. He doesn’t explicitly state whether he is using one or two SD. Nor does he explain how the uncertainty is derived. I made a case in a recent post ( https://wattsupwiththat.com/2017/04/23/the-meaning-and-utility-of-averages-as-it-applies-to-climate/ ) that the actual standard deviation for the global temperature readings for a year might be about two orders of magnitude greater than what Gavin is citing.

  20. Schmidt cites two references as to why anomalies are preferred, one from NASA and one from NOAA. The latter is singularly useless as to why anomalies should be used. The opening paragraph of the NASA reference states:

    The reason to work with anomalies, rather than absolute temperature is that absolute temperature varies markedly in short distances, while monthly or annual temperature anomalies are representative of a much larger region. Indeed, we have shown (Hansen and Lebedeff, 1987) that temperature anomalies are strongly correlated out to distances of the order of 1000 km.

    Two factors are at work here. One is that the data is smoothed. The other is that the anomalies of two different geographical locations can be compared whilst the absolute temperatures cannot.

    Is smoothed data useful? I guess that is moot, but it is true to say that any smoothing process loses fine detail, the most obvious of which is diurnal variation. Fine detail includes higher-frequency information, and removing it makes the analysis of natural processes more difficult.

    Is a comparison of anomalies at geographically remote locations valid? I would think it would be, provided the statistics of the data from both locations are approximately the same. For example, since most analysis is based on unimodal gaussian distributions (and normally distributed at that), if the temperature distributions at the two locations are not normal, can a valid comparison be made? Having looked at distributions in several locations in New Zealand, I know that the distributions are not normal. Diurnal variation would suggest at least a bimodal distribution, but several stations exhibit at least trimodal distributions. The more smoothing applied to the data set the more closely the distribution will display normal, unimodal behaviour.

    I suspect that smoothing the data is the primary objective, hiding the inconvenient truth that air temperature is a natural variable and is subject to a host of influences, many of which are not easily described, and incapable of successful, verifiable modeling.

    • Re: Gary Kerkin (August 19, 2017 at 6:38 pm)

      [James] Hansen is quoting himself again, it’s all very inbred when you start reading the supporting – or not – literature!

      However the literature doesn’t agree and he knows that he is dissembling.

      In [James] Hansen’s analysis, the isotropic component of the covariance of temperature assumes a constant correlation decay* in all directions. However, “It has long been established that spatial scale of climate variables varies geographically and depends on the choice of directions” (Chen, D. et al. 2016).

      In the paper The spatial structure of monthly temperature anomalies over Australia, the BOM definitively demonstrated the inappropriateness of Hansen’s assumptions about correlation of temperature anomalies:

      In reality atmospheric fields are rarely isotropic, and indeed the maintenance of westerly flow in the southern extratropics against frictional dissipation is only possible due to the northwest-southeast elongation of transient eddy activity (Peixoto and Oort 1993). Seaman (1982a) provides a graphic illustration of this anisotropy on weather time-scales for the Australian region…This observation of considerable anisotropy is in contrast with Hansen and Lebedeff (1987) for North America and Europe.. We also note the inappropriateness of the function used by P.D. Jones et al. (1997) for describing anisotropy (at least for Australian temperature), which limits the major and minor axes of the correlation ellipse to the zonal and meridional direction (see Seaman 1982b).

      Clearly, anisotropy represents an important characteristic of Australian temperature anomalies, which should be accommodated in analyses of Australian climate variability.(Jones, D.A. & Trewin, B. 2000)

      *Decreasing exponentially with their spatial distance, spatial scales are quantified using the e-folding decay constant.

      • Mod or Mods! Whoops! I just realised that my comment above was directed at James Hansen of NASA but might be confused with the Author of the post, Kip Hansen!

        To be clear, Gavin Schmidt(NASA), references James Hansen(NASA) quoting J.Hansen who references NASA(J.Hansen)! It’s turtles all the way down ;-)

    • It is true that temperature data is not Normally Distributed. At the very least, most sets I have looked at are relatively skewed. The problem is that the variation from Normal at each station differs from that at other stations, and comparing, specifically averaging, non-homogeneous data presents a whole other set of difficulties (i.e. it shouldn’t be done).

      • wyzelli ==> “specifically averaging, non homogeneous data presents a whole other set of difficulties (i.e. shouldn’t be done).” That is correct — it should not be done and when it is done, the results do not necessarily mean what one might think.
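The smoothing effect discussed in this thread, block-averaging pulling a distinctly non-normal (here bimodal, diurnal-style) distribution toward normality, can be demonstrated with synthetic data:

```python
import random
import statistics

random.seed(3)

# Bimodal "diurnal" raw readings: cool night values and warm day values.
raw = [random.gauss(8.0, 1.5) for _ in range(5000)]
raw += [random.gauss(22.0, 1.5) for _ in range(5000)]
random.shuffle(raw)

# Monthly-style smoothing: average non-overlapping blocks of 30 readings.
smoothed = [statistics.mean(raw[i:i + 30]) for i in range(0, len(raw), 30)]

def excess_kurtosis(xs):
    """Excess kurtosis: ~0 for a normal distribution, strongly negative
    for a flat or two-humped distribution."""
    m = statistics.mean(xs)
    s = statistics.pstdev(xs)
    return sum(((x - m) / s) ** 4 for x in xs) / len(xs) - 3.0

print(f"raw      excess kurtosis: {excess_kurtosis(raw):.2f}")      # far below 0
print(f"smoothed excess kurtosis: {excess_kurtosis(smoothed):.2f}") # near 0
```

By the central limit theorem the block means look nearly normal even though the underlying readings are strongly two-humped, which is exactly why heavily averaged series can pass normality checks that the raw data would fail.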

  21. Another reason to use anomalies instead of temperatures is that a graph of anomalies can be centered at zero and show increments of 0.1°, which can make noise movements look significant. If you use temperatures, any graph should show kelvins starting from absolute zero. Construct a graph using those parameters, and the “warming” of the past 30 years looks like noise, which is what it is. A 1K movement is only ~0.35%, not much. It is just not clear why we should panic over a variation of that magnitude.

    • ” If you use temperatures, any graph should show Kelvin with absolute zero. ”

      Nonsense: you scale the graph to show the data in the clearest way, with appropriately labelled axes.

      • “Clearest”? or “most dramatic for our sales goals?”
        There is a fine line between the two.
        If 0.1 degree actually made an important difference to anything at all, then maybe scales used today would be informative.
        Instead they are manipulative, giving the illusion of huge change when that is not the case.

    • If the scale simply reflected the range of global temperatures, the graph would represent the changes honestly and people could make informed decisions.
      That is counter to the goals of the consensus.
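The axis-scaling disagreement above comes down to simple arithmetic about what fraction of a plot's height a given change occupies. The axis ranges below are illustrative assumptions, not taken from any particular published chart:

```python
# What fraction of a plot's vertical extent does a 1 K change occupy
# under the two axis conventions?
change_k = 1.0

anomaly_axis_range = 1.5 - (-1.0)   # typical anomaly plot span, deg C
kelvin_axis_range = 300.0 - 0.0     # absolute scale starting at 0 K

frac_anomaly = change_k / anomaly_axis_range
frac_kelvin = change_k / kelvin_axis_range

print(f"share of plot height on an anomaly axis: {frac_anomaly:.0%}")
print(f"share of plot height on a 0-300 K axis : {frac_kelvin:.2%}")
```

The same 1 K change fills 40% of the first plot and about a third of one per cent of the second; which choice is “clearest” and which is “manipulative” is precisely what the commenters dispute.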

  22. Isn’t the real answer that if actual temperatures were used, graphical representations of temperature vs time would be nice non-scary horizontal lines?

    • Alan ==> It is difficult to impossible to judge the motivations of others. Better just to ask them and look at their answers; then if it’s wrong, at least it’s on them. That’s what I have done in this mini-series. If UCAR/NCAR doesn’t like their answer when they see it here, they can change their web site. If Dr. Schmidt doesn’t like his answer, he can change his web site.

  23. From a management perspective it always pays to hire staff that give you 10 good reasons why something CAN be done rather than 10 good reasons why something CAN’T be done:
    So the question is: Why has the temperature data not been presented in BOTH formats? Anomaly AND Absolute.

  24. This is a very interesting discussion. I’ve been thinking about this for some time. Consider the following.

    The temperature anomaly for a particular year, as I understand it, is obtained by subtracting the 30-year average temperature from the temperature for that year. Assuming both temperatures have an error of +/- 0.5C, the calculated anomaly will have an error of +/- 1.0C. When adding or subtracting numbers that have associated errors, one must ADD the errors of the numbers.

    So the anomaly’s “real value” is even less certain than either of the 2 numbers it’s derived from.

    • If you can argue that the errors are independent and uncorrelated you can use the RMS error but yes, always larger than either individual uncertainty figure.

      • Let me correct. It is not whether you can argue that the errors are independent and uncorrelated, but rather whether they truly are independent and uncorrelated.

        Yet in the climate field, it would appear that the errors are neither independent nor uncorrelated. There would appear to be systematic biases, such that the uncertainty is not reduced.
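For reference, both combination rules mentioned in this exchange can be applied to the figures quoted from the RealClimate post: quadrature (root-sum-square) if the two uncertainties are independent and uncorrelated, simple addition in the fully correlated worst case.

```python
import math

# Figures quoted from the RealClimate post.
climatology, u_climatology = 287.4, 0.5   # K
anomaly, u_anomaly = 0.56, 0.05           # K

absolute = climatology + anomaly

# Independent, uncorrelated errors: combine in quadrature.
u_quadrature = math.sqrt(u_climatology ** 2 + u_anomaly ** 2)

# Worst case (fully correlated errors): the uncertainties simply add.
u_worst = u_climatology + u_anomaly

print(f"absolute estimate: {absolute:.2f} K")
print(f"quadrature error : +/- {u_quadrature:.3f} K")  # ~0.502
print(f"worst-case error : +/- {u_worst:.2f} K")       # 0.55
```

The quadrature figure reproduces the 287.96±0.502 K arithmetic quoted in the head of this post; whether the independence assumption behind it holds is the commenters' dispute.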

  25. The take-away, and this is to be found in other disciplines as well, is “never let the facts get in the way of a good story.” The Left and the media just love to apply it.

  26. The desperation to get some sort of signal showing something important about climate means they are treating sample noise as data these days.

  27. There is nothing anomalous about the global average temperature as it changes from year to year (well there wouldn’t be if such a thing as global average temperature existed). The global average temperature has always varied from year to year, without there being any anomalies.

  28. The alarmists [NASA, NOAA, UK Met Office] cannot even agree among themselves what ‘average global temp.’ means. Freeman Dyson has stated that it is meaningless and impossible to calculate; he suggests that a reading would be needed for every square km. Like an isohyet, is the measurement going to be reduced/increased to a given density altitude? With what lapse rate, ambient or ISO?

    The satellite observations are comparable because they relate to the same altitude with each measurement, but anything measuring temps near the ground is a waste of time and prohibitively expensive at one station per square km, or even per 100 square km.

    • why every sq km?
      temperature stations aren’t free.
      so the question is, what station density gives
      the desired accuracy?

      and your answer is?
      show your math

  29. As Dr Ball says and I agree, averages destroy accuracy of data points.

    Given that we need accuracy for science (no? we do), absolute temperatures would be used and need to be used. Science is numbers: actual numbers, not averaged numbers. If science worked with averages we’d never have had steam engines.

    Take model runs.
    100 model runs. Out of those 100 runs, one is the most accurate (no two runs are the same, so one must be the most accurate), though we cannot know which one, and its accuracy is really just luck given the instability of the produced output.

    Because we do not understand why, and which run is the accurate one, we destroy that accuracy with the other 99 runs.
    Probability is useless in this context, as the averages and probabilities conceal the problem: we don’t know how accurate each run is.

    This is then made worse by using multiple model ensembles, which serve to dilute the unknown accuracy even more, to the point where we have a range of 2C to 4.5C or above. This is not science, it is guessing; it is not probability, it is guessing.

    The only use for using loads of model ensembles is to increase the range of “probability” and this probability does not relate to the real physical world, it’s a logical fallacy.

    The range between different temperature anomaly data sets are performing the same function as the wide cast net of model ensembles.

    Now you know why they don’t use absolute temperatures: using those increases accuracy, reduces the “probabilities”, and removes the averages which allow for the wide-cast net of non-validated “probabilities”.

    The uncertainty calculations are rubbish. We are given uncertainty from models, not the real world, the uncertainty only exists in averages and probabilities not in climate and actual real world temperatures.

    • NOAA’s instability and wildly different runs prove my point: an average of garbage is garbage.
      If NOAA performs 100 runs, take the two that vary most, and that is your evidence that they have no idea.

      • Speaking of models and the breathtaking circularity inherent in the reasoning of much contemporary Climate Science!

        The assessment of the reliability of sampling-error estimates (in the application of anomalies to large-scale temperature averages, in the real world) is tested using temperature data from 1000-year control runs of GCMs! (Jones et al., 1997a)

        And that is a real problem, because the models have the same inbuilt flaw; they only output gridded areal averages!

        Thus, the tainting of raw data occurs in the initial development of the station data set, because spatial coherence is assumed for nearby series in the homogenisation techniques applied at this stage (Where many stations are adjusted and some omitted because of “anomalous” trends and/or “non climatic” jumps).

        The aggregation of the “raw” data (Gridding in the final stage.) yet again fundamentally changes its distribution as well as adding further sampling errors and uncertainties. Several different methods are used to interpolate the station data to a regular grid but all assume omnidirectional spatial correlation, due to the use of anomalies.

      • Grids set to a preferred size and position only serve to fool people.

        We need a scientifically justified radius of validity for each data point, grounded in topography and site-location conditions (Anthony’s site survey would be critical for such).

        Mountains, hills and all manner of topography matter, as do large local water bodies, as well as the usual suspects such as urbanisation etc.

        This is a massive task and we are better investing everything into satellites and developing that network further to solve some temporal issues for better clarity.

        Still, satellites are good for anomalies if they pass over the same location at the same time each day, but we should depart from anomalies because they are transient and explaining them is nigh impossible.

        A 50km depth chunk of the atmosphere is infinitely better than the surface station network for more reasons than not.

        Defenders of the surface data sets are harming science

      • With regard to the surface data sets: a station with a local lake, hills and a town needs all of that accounted for and solved. Wind-speed data also needs to be incorporated to improve the data.

        This is not happening. It is never going to happen.

      • Mark, Scott, Richard ==> Models do not have accuracy, at all. They are not reporting on something real, so there can be no accuracy. Accuracy is a measurement compared to the actuality. Models are graded on how close they come to the reality they are meant to model.

        Climate and weather model run outputs are chaotic in that the output is extremely sensitive to initial conditions (starting inputs). The differences in model run outputs demonstrate this: all the runs are started with almost identical inputs but result in wildly differing outputs.

        See mine Chaos and Models.

      • I agree, Kip, that was my point about it all: models are not for accuracy, but still, out of 100 runs one is the most accurate and the other 99 destroy that lucky accuracy.

        My point also is that they don’t want accuracy (as they see it), because what if a really good model ran cool?
        That won’t do.

        They need a wide cast net to catch a wide range of outcomes in order to stay relevant.

      • and to say, oh look, the models predicted that.

        Furthermore, NOAA’s model output is an utter joke: if, as I said, you take the difference between the two most different runs in an ensemble, they vary widely, which shows the model is casting such a wide net that it is hard to actually say it is wrong (or way off the mark).

        Of course, we can’t model chaos. :)

        Giving an average of chaos is what they are doing, and it’s nonsense.

      • Mark – Helsinki –
        >> With regards to surface data sets, a station with a local lake hills and town, all of that needs to be accounted for and solved. Wind speed data also needs to be incorporated to improve the data. <<

        not if the station
        hasn't moved.
        no one is interested in
        absolute T.

    • you can simply calculate the real uncertainty in the 100 model runs by measuring the difference between the two most contrary runs. Given the difference in output per run at NOAA, that means the real uncertainty in that respect is well in excess of 50%.

      • no.
        that’s like saying you can flip a coin 100 times, and do this 100 times, and the
        uncertainty is the max of the max and min counts.
        that’s simply not how it’s done — the standard deviation
        is easily calculated.

      • crackers, that is an invalid use of statistics. It is more analogous to shooting at 100 different targets with the same error in aim, not like measuring the same thing 100 times. The error remains the same, and does not average out.

      • With both shooting and taking a temperature reading multiple times over a span of time, one is doing or measuring different things multiple times, not measuring the same thing multiple times. Coin tosses are not equivalent.

    • As in: take 100 runs and calculate how far the model can swing in either direction. For this you only need the two most different runs; there is your uncertainty.
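Both positions in this exchange can be put side by side with a synthetic 100-member ensemble (the values are invented): the max-minus-min range of the two extreme runs versus the conventional standard deviation.

```python
import random
import statistics

random.seed(4)

# 100 hypothetical model runs of a temperature anomaly (synthetic values).
runs = [random.gauss(0.6, 0.15) for _ in range(100)]

spread_range = max(runs) - min(runs)   # the two most contrary runs
spread_sd = statistics.stdev(runs)     # conventional ensemble spread

print(f"max - min of extreme pair: {spread_range:.3f}")
print(f"standard deviation       : {spread_sd:.3f}")
# The extreme-pair range grows as the ensemble gets larger and always
# exceeds the standard deviation; note that neither statistic says
# anything about whether the ensemble brackets the real climate.
```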

      • Kip ==> This following part of my comment was about data collection in the real world:

        Thus, the tainting of raw data occurs in the initial development of the station data set, because spatial coherence is assumed for nearby series in the homogenisation techniques applied at this stage (Where many stations are adjusted and some omitted because of “anomalous” trends and/or “non climatic” jumps).

        The aggregation of the “raw” data (Gridding in the final stage.) yet again fundamentally changes its distribution as well as adding further sampling errors and uncertainties. Several different methods are used to interpolate the station data to a regular grid but all assume omnidirectional spatial correlation, due to the use of anomalies.

        I was trying to show how the “fudge” is achieved in the collection of raw data and how circular it is to then use gridded model outputs to estimate the sampling errors of that very methodology! ;-)

      • SWB ==> Read my comment to Geoff on automatic weather station data. Data is rounded to the nearest whole degree F, then converted to the nearest tenth of a degree C as it is being collected, even though the automatic weather unit has a resolution of 0.1 degree F.
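A sketch of that rounding path (round to the nearest whole degree F at collection, then convert and round to the nearest 0.1 °C); the station readings below are hypothetical:

```python
def recorded_celsius(true_f: float) -> float:
    """Mimic the data path described above: round the reading to the
    nearest whole degree F, convert to C, then round to the nearest 0.1 C."""
    rounded_f = round(true_f)
    celsius = (rounded_f - 32.0) * 5.0 / 9.0
    return round(celsius, 1)

# A whole degree F of distinct true readings collapses onto one archived value.
for true_f in (71.6, 71.9, 72.0, 72.4):
    print(f"{true_f:5.1f} F -> {recorded_celsius(true_f):5.1f} C")
```

Nearly a degree Fahrenheit of genuinely different readings is archived as the identical tenth-of-a-degree Celsius value, so the archived resolution overstates what was actually measured.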

  30. For your readers not familiar with physics: Gavin Schmidt says “…The climatology for 1981-2010 is 287.4+/-0.5K, and the anomaly for 2016 is (from GISTEMP w.r.t. that baseline) 0.56+/-0.05ºC.”

    C stands for Celsius or Centigrade. A change of one degree C is also a change of one kelvin (K), but zero degrees C = 273.15 K. (In theory no temperature can be less than zero kelvin, absolute zero.)

    Why this is important for climate is that the relevant equation, the Stefan-Boltzmann law, expresses temperature (T) in kelvins, and in fact uses T to the power of 4 (T^4, or T*T*T*T).
    https://en.wikipedia.org/wiki/Stefan–Boltzmann_law

    You can argue that the error can be fixed by using 14.2 degrees Celsius (287.4 minus 273.2) in the equation, because all the temperatures can be converted by adding 273.2 to the temperature measurements.

    But then you have to argue that the error in 0.56+/-0.05 is acceptable. An error of 5 parts in 56 is about nine per cent, yet an error of nine per cent in an absolute temperature near 273K would be about 25 degrees C or K. So it seems that Gavin Schmidt has won his argument: using only temperature anomalies gives a more precise and accurate result.

    But hold on a minute. Can Dr Schmidt really estimate the temperature anomaly with an accuracy of one per cent from pole to pole and all the way around the globe?

    Richard Lindzen has addressed this question by reference to a study by Stanley Grotch published by the AMO.

    You will find the reference here and in Richard Lindzen’s YouTube lecture, Global Warming, Lysenkoism, Eugenics, at the 30:37 mark.

    Grotch’s paper claimed that the land (CRU) and ocean (COADS) datasets pass his tests of normality and freedom from bias. His presentation is reasonable.

    However, his Figure 1 shows that the 26,000 data points range between plus and minus 2 degrees Celsius, while the signal (the mean temperature) ranges from approximately -0.2 C to +0.2 C over a period of 130 years, a rate of about 0.3 C per century. The signal is swamped by noise.

    Dr Schmidt is basing his claims on spurious precision in the processing of the data.

    https://geoscienceenvironment.wordpress.com/2016/06/12/temperature-anomalies-1851-1980/
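The kelvin point in this comment can be checked against the Stefan-Boltzmann law directly, using the figures quoted from the post (the constant is the standard CODATA value; the flux here is idealized black-body emission, not a full climate calculation):

```python
# Stefan-Boltzmann: radiated flux goes as T^4, with T in kelvins,
# which is why the absolute scale matters in this equation.
SB_SIGMA = 5.670374419e-8  # W m^-2 K^-4

def flux(t_kelvin: float) -> float:
    return SB_SIGMA * t_kelvin ** 4

t_base = 287.4          # 1981-2010 climatology quoted above, K
t_2016 = 287.4 + 0.56   # plus the quoted 2016 anomaly

print(f"flux at {t_base:.2f} K : {flux(t_base):.1f} W/m^2")
print(f"flux at {t_2016:.2f} K : {flux(t_2016):.1f} W/m^2")
print(f"difference             : {flux(t_2016) - flux(t_base):.2f} W/m^2")
```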

    • yeah, where is the CRU raw?

      What have they done with the data in the last 20 years? Were they not caught intentionally cooling the 1940s just to reduce anomalies? Yes, they were caught removing the blip from the data, something NASA, JMA, BEST etc. have done.

      The level of agreement between these data sets over 130 years shows either (1) collusion or (2) reliance on the same bad data.

      Nonsense.

    • as you probably already know, they are using revised history to assess current data sets. As such any assessments are useless.

      We need all of the pure raw data, most of which does not exist any more.

    • Good post tbh.

      “However, his Figure 1 shows that the 26,000 data points range between plus and minus 2 degrees Celsius, while the signal (the mean temperature) ranges from approximately -0.2 C to +0.2 C over a period of 130 years, a rate of about 0.3 C per century. The signal is swamped by noise.

      Dr Schmidt is basing his claims on spurious precision in the processing of the data.”

      The logical fallacy is real world temperature anomalies vs what GISS says they are.

      The certainty that GISS is accurate is actually unknown, which means the uncertainty is closer to 90% than 5%.

    • Schmidt must keep the discussion within the confines of GISS output.

      Avoid bringing in the real world at every stage, in terms of equipment accuracy and lack of coverage.

      • all the groups get essentially the same surface trend: giss, noaa, hadcrut, jma, best

        so clearly giss is not an outlier. this isn’t rocket science

    • Colbourne ==> “Dr Schmidt is basing his claims on spurious [apparent, seeming, but not real] precision in [resulting from] the processing of the data.”

      • Indeed, data processing. It produces the GISS GAMTA, and funnily enough it also produces the cosmic background radiation for NASA.

  32. Possible new source for temperature data: River water quality daily sets.
    Graphs of temp. data across the regions and the globe would be quite interesting.
    Probably a much more reliable daily set of records…

  33. Gavin Schmidt was, let us not forget, hand-picked by Dr Doom to carry on his ‘good work’.
    Given we simply lack the ability to take any such measurements in a scientifically meaningful way, all we have is a ‘guess’; therefore, no matter what the approach, what is being said is ‘we think it’s this but we cannot be sure’.

  34. Over the years I’ve noticed one thing about Gavin Schmidt’s explanations in RealClimate–they are excessively thorough and generally cast much darkness on the subject. If he was describing a cotter pin to you, you’d picture the engine room of the Queen Mary.

  35. This article raises a more fundamental issue and problem that besets the time series land based thermometer record, namely how do you calculate an anomaly when the sample set is never the same over time but instead it is constantly changing?

    I emphasise that the sample set used to create the anomaly in 1880 is not the same sample set used to calculate the anomaly in 1900, which in turn is not the one used in 1920, nor the ones used in 1940, 1960, 1980, 2000, or 2016.

    If one is not using the same sample set, the anomaly does not represent anything meaningful.

    Gavin claims that “The climatology for 1981-2010 is 287.4±0.5K” however the sample set (the reporting stations) in say 1940 are not the stations reporting data in the climatology period 1981 to 2010 so we have no idea whether there is any anomaly to the data coming from the stations used in 1940. We do not know whether the temperature is more or less than 1940 since we are not measuring the same thing.

    The time series land based thermometer data set needs complete re-evaluation. If one wants to know whether there may have been any change in temperature since say 1880, one should identify the stations that reported data in 1880, ascertain which of these have continuous records through to 2016, and then use only those stations (i.e., the ones with continuous records) to assess the time series from 1880 to 2016.

    If one wants to know whether there has been any change in temperature say as from 1940, one performs a similar task, one should identify the stations that reported data in 1940 and then ascertain which of these have continuous records through to 2016 and then use only those stations (ie., the ones with continuous records) to assess the time series from 1940 to 2016.

    So one would end up with a series of time series, perhaps a series for every 5-year interval. Of course, there would still be problems with such a series because of station moves, encroachment of UHI, changes in nearby land use, equipment changes etc, but at least one of the fundamental issues with the time series set would be overcome. Theoretically a valid comparison over time could be made, but error bounds would be large due to siting issues/changes in nearby land use, change of equipment, maintenance etc.

    • Re: richard verney (August 20, 2017 at 1:20 am)

      To your charge Richard, James Hansen “doth protest too much” for my liking.

      …a charge that has been bruited about frequently in the past year, specifically the claim that GISS has systematically reduced the number of stations used in its temperature analysis so as to introduce an artificial global warming. GISS uses all of the GHCN stations that are available, but the number of reporting meteorological stations in 2009 was only 2490, compared to [circa]6300 usable stations in the entire 130 year GHCN record. (Hansen et al. 2010)

      He doesn’t address the problem (In that paper) to my satisfaction, because elsewhere in the literature it is made clear that the change in number and spatial distribution of station data is a source of error larger than the reported (Or purported!) trends.

    • “how do you calculate an anomaly when the sample set is never the same over time”
      Because you don’t calculate the anomaly using a sample set. That is basic. You calculate each station anomaly from the average (1981-2010 or whatever) for that station alone. Then you can combine in an average, which is when you first have to deal with the sample set.

    • richard verney – >> how do you calculate an anomaly when the sample set is never the same over time but instead it is constantly changing? <<

      you take an average.

      giss uses '51-'80.

      but the choice is arbitrary

    • AndyG55,
      Yes, I have read the Real Climate page that Kip linked, and the links that Gavin provides to explain why anomalies are used, and nowhere do I see an explanation for how the stated uncertainty is derived or an explanation of how it can be an order of magnitude greater precision than the absolute temperatures. My suspicion is that it is an artifact of averaging, which removes the extreme values and thus makes it appear that the variance is lower than it really is.
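Clyde's suspicion about averaging can be made concrete with a minimal sketch (synthetic readings, all numbers illustrative): under the textbook assumption of independent errors, the standard error of a mean of N readings shrinks as 1/sqrt(N), which is precisely the statistical effect that lets a network of coarse ±0.5 °C instruments yield a formally tiny uncertainty on the average. Whether that independence assumption holds for real station data is the point in dispute.

```python
import math
import random

random.seed(42)

# Simulate N independent station readings, each carrying +/-0.5 C of
# measurement noise around a true value of 15.0 C.
N = 10_000
sigma_single = 0.5
readings = [random.gauss(15.0, sigma_single) for _ in range(N)]

mean = sum(readings) / N

# Under the independence assumption, the standard error of the mean
# shrinks as 1/sqrt(N) -- the purely statistical effect that makes an
# average of coarse readings *look* very precise.
sem = sigma_single / math.sqrt(N)

print(f"mean = {mean:.3f} C, standard error of mean = {sem:.4f} C")
```

With 10,000 readings the formal standard error drops to ±0.005 °C, two orders of magnitude below the single-reading uncertainty, without any individual reading getting better.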

  36. The postulate that global temperatures have not increased in the last 100 years is easily supported after a proper error analysis is applied.
    People driven by a wish to find danger in temperature almost universally fail proper error analysis. There is a deal of scattered, incomplete literature about using statistical approaches and 2 standard deviations and all that type of talk; but this addresses the precision variable more than the accuracy variable. These two variables act on the data and both have to be estimated in the search for proper confidence limits to bound the total error uncertainty.
    This is not the place to discuss accuracy in the estimation of global temperature guesses, because that takes pages. Instead, I will raise but one ‘new’ form of error and note the need to investigate this type of error elsewhere than here in Australia. It deals with the transition from ‘liquid in glass’ (LIG) thermometry to the electronic thermocouple devices whose Aussie shorthand is ‘AWS’, for Automatic Weather Station. These largely replaced LIG in the 1990s here.
    The crux is in an email from the Bureau of Meteorology to one of our little investigatory group.

    “Firstly, we receive AWS data every minute. There are 3 temperature values:
    1. Most recent one second measurement
    2. Highest one second measurement (for the previous 60 secs)
    3. Lowest one second measurement (for the previous 60 secs)
    Relating this to the 30 minute observations page: For an observation taken at 0600, the values are for the one minute 0559-0600”

    When data captured at one second intervals are studied, there is a lot of noise. Tmax, for example, could be a degree or so higher than the one minute value around it. They seem to be recording a (signal+noise) when the more valid variable is just ‘signal’. One effect of this method of capture is to enhance the difference between high and low temperatures from the same day, adding to the meme of ‘extreme variability’ for what that is worth.
    A more detailed description is at
    https://kenskingdom.wordpress.com/2017/08/07/garbage-in-garbage-out/
    https://kenskingdom.wordpress.com/2017/03/01/how-temperature-is-measured-in-australia-part-1/
    https://kenskingdom.wordpress.com/2017/03/21/how-temperature-is-measured-in-australia-part-2/

    This procedure is different in other countries, which are therefore not collecting temperature in a way that will match ours here. There is an error of accuracy. It is large and it needs attention. Until it is fixed, there is no point to claims of a global temperature increase of 0.8 deg C/century, or whatever the latest trendy guess is. Accuracy problems like this and others combine to put a more realistic +/- 2 deg C error bound on the global average, whatever that means.
    Geoff.
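Geoff's point about 1-second capture can be sketched with synthetic data (all numbers are illustrative, not BoM data): recording the highest 1-second value in each minute systematically captures (signal + noise), placing the reported Tmax well above the 1-minute mean.

```python
import random

random.seed(0)

# One synthetic minute of 1-second temperatures: a steady 'true' value
# plus short-lived noise. Values are illustrative only.
true_temp = 30.0   # hypothetical true air temperature, deg C
noise_sd = 0.4     # hypothetical 1-second noise level, deg C
samples = [true_temp + random.gauss(0.0, noise_sd) for _ in range(60)]

one_minute_mean = sum(samples) / len(samples)
recorded_tmax = max(samples)  # highest 1-second value in the minute

# The max of 60 noisy samples sits well above the minute mean: it
# records (signal + noise), not the signal alone.
print(f"1-minute mean = {one_minute_mean:.2f} C, recorded Tmax = {recorded_tmax:.2f} C")
```

For Gaussian noise the expected maximum of 60 samples sits roughly 2 to 2.5 noise standard deviations above the mean, which is the mechanism behind the inflated Tmax described above.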

    • But think of the very different response of a LIG thermometer, which could easily miss such T highs if they are of such short-lived duration.

      This is why retro-fitting with the same type of equipment used in the 1930s/1940s is so important if we are to assess whether there has truly been a change in temperature since the historic highs of the 1930s/1940s.

      • Yes. This is a reasonable and low cost way to test the current vs. the past instruments. It also tests the justifications of those who change the past.
        I pointed this out a few months ago but got nowhere with it. If you can think of a way to push the idea forward, God speed.

      • RV,
        Experienced metrologists would agree with your test scheme. The puzzle is why it was not done before, officially. Maybe it was; I do not know. Thank you for raising it again.
        As you know, it remains rather difficult to get officials to adopt such suggestions. If you can help that way, that is where effort could be well invested. Geoff

    • Geoff,

      I have said this on here before. The best post by far was Pat Frank’s about calibration of instruments. All that needed to be said was in that. A lot of us, including yourself, are all talking about the same idiocy. It’s nice to have it demonstrated.

      • Nc75,
        Where have you seen this raised before? Have you commented before on the methods different countries use to treat this signal noise problem with AWS systems? It is possible that the BOM procedure, if we read it correctly, could have raised Australian Tmax by one or two tenths of a degree C compared with USA since the mid 1990s. Geoff

      • Geoff

        If I recall Pat Frank’s paper looked at drift of electronic thermometers. It may be similar at least in approach to what you are talking about, but the general idea is that whatever techniques are used they have to be seen in a broader context of repeatability and microsite characterisation. Effects that appear to swamp any tenths of degrees and approach full degree variations.

    • Geoff,
      Indeed, it is done slightly differently in the US. Our ASOS system collects 1-minute average temperatures, computes a 5-minute average from them, rounds that to the nearest deg F, converts it to the nearest 0.1 deg C, and sends that information to the data center. http://www.nws.noaa.gov/asos/pdfs/aum-toc.pdf

      • Clyde,
        Can we please swap some email notes on this. sherro1 at optusnet dot com dot au
        A project in prep is urgent if that is OK with you. Geoff

      • Geoff ==> at the link provided by Clyde, pdf page 19, internal number page 11.

        “Once each minute the ACU calculates the 5-minute
        average ambient temperature and dew point temperature
        from the 1-minute average observations (provided at least
        4 valid 1-minute averages are available). These 5-minute
        averages are rounded to the nearest degree Fahrenheit, converted
        to the nearest 0.1 degree Celsius, and reported once
        each minute as the 5-minute average ambient and dew point
        temperatures. All mid-point temperature values are rounded
        up (e.g., +3.5°F rounds up to +4.0°F; -3.5°F rounds up to –
        3.0°F; while -3.6 °F rounds to -4.0 °F).”

        BTW — The °F recorded temps are thus +/- 0.5 °F and the °C are all +/- 0.278 °C — just by the method.

        In the chart above that text, ambient temps show:
        Temperature in the range −58°F to +122°F: RMSE (root mean square error) 0.9°F, max error ±1.8°F, but a resolution of 0.1°F (which is then rounded to whole °F per the above).
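The quoted rounding rule can be sketched in code. This is a hedged reading of the manual's examples, under the assumption that "mid-point temperature values are rounded up" means rounded toward positive infinity:

```python
import math

def asos_round_f(temp_f: float) -> int:
    """Round to the nearest whole deg F, with mid-points rounded UP
    (toward +infinity), matching the quoted examples."""
    return math.floor(temp_f + 0.5)

def asos_report_c(temp_f: float) -> float:
    """Convert the rounded whole-degree F value to the nearest 0.1 deg C."""
    return round((asos_round_f(temp_f) - 32.0) * 5.0 / 9.0, 1)

# The manual's own examples:
print(asos_round_f(3.5))   # 4
print(asos_round_f(-3.5))  # -3
print(asos_round_f(-3.6))  # -4
print(asos_report_c(70.4)) # 70 F reported as 21.1 C
```

The `floor(x + 0.5)` trick reproduces all three quoted mid-point cases, and shows why the reported values carry a built-in ±0.5 °F quantization before any sensor error is considered.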

      • Kip,

        You said, “BTW — The °F recorder temps are thus +/- 0.5 °F and the °C are all +/- 0.278°C — just by the method.”

        Strictly speaking, 0.5 °F is equivalent to 0.3 °C: when multiplying a constant (5/9) of infinite precision by a number with only one (1) significant figure, one is only justified in retaining the same number of significant figures as the factor with the least number of significant figures!

    • Geoff ==> Actually, we (the humans — even me) are “pretty sure” that the average air temperature has increased over the last 100-150 years — we are “pretty darned sure” that things are generally warmer now than they were during the widespread Little Ice Age.

      There are a lot of other questions about GAST not answered by that statement, though, and that’s what CliSci is supposed to be answering (among other things).

      • Kip,
        I merely noted that the proposition of zero change fits between properly-constructed error bounds and gave an example of a large newish error. The plea is to fix the error calculations, not to fix the state of fear about minor changes in T. Geoff

      • Geoff ==> Not arguing with you, really. Your “The postulate that global temperatures have not increased in the last 100 years is easily supported after a proper error analysis is applied.” is true-ish, but the point I make has to be noted — even if the postulate of “no change” could be supported by error analysis, we are nonetheless “pretty danged sure” that that postulate is not true. I just state the obvious: in CliSci there has been far too much relying on maths principles and statistical principles and too much ignoring the pragmatic physical facts.

      • Kip
        I do not want to draw this out, but I see stuff-all physical symptoms of temperature rise. What are the top 3 indicators that make you think that way? Remember that in Australia there is no permanent snow or ice, no glaciers, few trees tested for dendrothermometry, no sea level rise evidence above longer-term normal, and Antarctic territory showing next to no instrumental rise and a number of falls, so it is a good stage on which to conclude that the players are mainly acting fiction. Geoff.

      • Geoff ==> Your question raises an opportunity for an in-depth essay.

        The quick answer is that the literature is full of studies supporting the idea that there was a world-wide, or at least widespread, Little Ice Age — more and more studies have come to the fore as certain elements in CliSci have tried to downplay the Little Ice Age.

        With the Little Ice Age now well-supported, its absence now is probably the best supporting evidence for the “warmer now” concept — which, as I have said, is the major reason we are “pretty sure”.

        I’ll put the idea on my list for future essays.

      • Kip
        I would be delighted to develop some ideas with you.
        But not here. Do send me an opening email at sherro1 at optusnet dot com dot au
        Geoff

  37. Off topic somewhat:
    Are you, like me, continuously annoyed by the way the media (especially TV weather reporters) refer to the plural of maximum and minimum temperatures as maximums and minimums.
    This shows an ignorance of the English language as any decent dictionary will confirm.
    The correct terms are of course maxima and minima.
    The various editors and producers should get their acts into gear and correct this usage.

  38. This article raises a more fundamental issue and problem that besets the time series land based thermometer record, namely how do you calculate an anomaly when the sample set is never the same over time but instead it is constantly changing?

    In the case of estimating temperature change over time, surely that’s an argument in favour of using anomalies rather than absolute temperatures?

    Absolute temperatures at 2 or more stations or in a region might differ in absolute terms by, say, 2 degrees C or more, depending on elevation and exposure. That’s important if absolute temperatures are what you’re interested in (at an airport for example); but if you’re interested in how temperatures at each station differ from their respective long term averages for a given date or period, then anomalies are preferable.

    Absolute temperatures might differ considerably between stations in the same region, but their anomalies are likely to be similar.
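DWR54's point can be illustrated with a minimal sketch (hypothetical station values): two nearby stations whose absolute temperatures differ by a fixed offset produce identical anomaly series.

```python
# Hypothetical monthly means for two nearby stations; station B sits at
# higher elevation and runs a constant 2 C cooler in absolute terms.
station_a = [14.8, 15.1, 15.4, 15.0, 15.3, 15.6]
station_b = [t - 2.0 for t in station_a]

def anomalies(series):
    """Departure of each value from the series' own long-term mean."""
    base = sum(series) / len(series)
    return [round(t - base, 2) for t in series]

print(anomalies(station_a))
print(anomalies(station_b))
# Both stations yield the same anomaly series even though their
# absolute temperatures differ by 2 C throughout.
```

Any constant offset (elevation, exposure, siting) cancels out in the subtraction, which is the standard argument for averaging anomalies rather than absolutes across dissimilar stations.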

    • Well clearly the consensus solution is to change and discard past data that does not fit the present desired result.

    • I think the point being made is that a standard baseline should be established (say 30 years before the influence of industrialization) and then that should be used as the standard by everyone, and not changed over time.

    • DWR54 ==> Give us a clear explanation on why we might be “interested in how temperatures at each station differ from their respective long term averages for a given date or period, then anomalies are preferable.” (Reasonable for a single station — but a planet wide average?)

  39. The BBC are telling us all about Cassini and its adventures at Saturn.
    Nice.

    But they (strictly the European Space Agency whose sputnik it is) have come up with this line:

    “It’s expected that the heavier helium is sinking down,” he told BBC News. “Saturn radiates more energy than it’s absorbing from the Sun, meaning there’s gravitational energy which is being lost.

    From here: http://www.bbc.co.uk/news/science-environment-40902774

    Presumably this means Saturn is collapsing – or – possibly falling into the sun?
    (Maybe the other way round innit, like how the Moon is going away as tides on Earth pull energy out of it)

  40. “…the difference between the current average temperature of a station, region, nation, or the globe and its long-term, 30-year base period, average…..”
    ‘…the difference between this current warm period average temperature of a station, region, nation, or the globe and its long-term, 30-year base period, average….’
    There, fixed.

    • hunter ==> Hardly a critique — I am just quoting Dr. Schmidt on the subject – read his whole post (always a good idea).

  41. It would be informative for Steve Mosher to explain, in a non hostile way, what he thinks of this.

  42. The question I have is: “Is the surface temperature meaningful in any way other than weather?” We live in the boundary layer at the surface, so surface temperature will affect us. I usually get that effect in Boston in February when I have to run the snowblower and in August when I have to run the air conditioner. Obviously, the length of the growing season matters. So yes, surface temperature does matter. The next question is one of the relation of the surface temperature to things like CO2. The earth is a dynamic process. The transport of enthalpy/entropy causes massive changes on a very short term basis. Stuff like CO2 and solar effects are gradual at most. Yes, all things being equal, a bit more CO2 will cause a bit more stored enthalpy over the long term, but only the long term.

    A lot of the hysteria here is caused by a total lack of ability of our NEA edcratered (sic) public to frame temperature in a way that is meaningful in terms of physics. If you go up to the moron on the street and ask the question: “Yesterday it was 70 degrees F, today it is 71 degrees F. In percent, how much warmer is it today than yesterday?” you will get the answer of over 1% almost all the time. Truth be known, the correct answer is around 0.2%. With that sort of massive framing error out of the block, trying to get any sort of rational action out of the populace at large is a lost effort.
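The "around 0.2%" arithmetic above checks out once the temperatures are put on an absolute scale; a quick sketch:

```python
def f_to_kelvin(temp_f: float) -> float:
    """Convert deg F to kelvins (the absolute scale percentages require)."""
    return (temp_f - 32.0) * 5.0 / 9.0 + 273.15

t_yesterday = f_to_kelvin(70.0)  # ~294.26 K
t_today = f_to_kelvin(71.0)      # ~294.82 K

# A 1 F rise is tiny relative to the absolute temperature of the air.
pct_warmer = 100.0 * (t_today - t_yesterday) / t_yesterday
print(f"{pct_warmer:.2f}% warmer")  # ~0.19%, i.e. 'around 0.2%'
```

Computing the percentage on the Fahrenheit scale (1/70 ≈ 1.4%) is the framing error described: percentages of temperature are only physically meaningful relative to absolute zero.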

    • Local temperatures are meaningful. Global average temperature is meaningless. It’s like determining if any car violated the speed limit by averaging the speeds of all the cars that passed the highway. Notice weather news don’t report global average temperatures. Who cares?

      • Local temperatures are only meaningful if you can explain why the temperature is a certain value. This requires factoring in the environment around that measurement.

        A station next to a lake, the water temperature of that lake has a bearing on the temperature read by the station as does wind speeds and what the air is doing.

        If you cannot explain and calculate the impact of those elements, then your reading is useless for attributing causes.

      • Water temperature of lake, wind speed, etc. are local effects. You measure the effects before you attribute causes. Are you saying global temperature is your Ultimate Cause?

      • I mean only meaningful to assess sensitivity if you know all the other factors, or as many as we can know, the actual measurement alone without the other data is useless for sensitivity studies

  43. In addition to making the uncertainty appear smaller, there’s the added benefit that a one degree change all by itself may seem large (when compared to the baseline anomaly of zero), but a 1 degree change over a baseline value of 288 appears ridiculous.

    • David L.

      … a one degree change all by itself may seem large (when compared to the baseline anomaly of zero) but a 1 degree change over a baseline value of 288 appears ridiculous.

      Consider though that the ‘baseline anomaly of zero’ is itself arbitrarily set. Zero celsius is 273.15 K. We set it there for a good reason though – the range from ~273 – ~ 373 K is pretty much the zone we operate in as human beings. Water freezes and boils within that range. These are things that are important to us as a species; so that is why we arbitrarily set ‘zero’ to ~ 273 K for everyday use.

      In terms of climate, the range is much closer. By most estimates, global average surface temperature during the entire Holocene period, over the past ~10,000 years, has varied by as little as +/- 0.5 C of ‘average’, meaning about 14C, +/- 0.5C (or 287 +/- 0.5 K). It’s been a very steady system.

      Far from being ‘ridiculous’, a +1 C change in global average surface temperature, if indeed it has occurred in recent decades, could actually be a very big deal in the grand scheme of things.

      • DWR54 ==> The serious point of this post (and its predecessor) is that the Global Temperature change between years is really indiscernible — all the years are within the known margins of the original measurement error, which we know for most of the thermometer era to be at least +/-0.5°C, possibly as much as +/-2.0°C (certainly of this magnitude for the 1930s and before).
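The indistinguishability claim can be checked directly against the absolute estimates quoted from Gavin Schmidt's post at the top of this article:

```python
# Absolute estimates quoted from Gavin Schmidt's RealClimate essay:
# (value, uncertainty) in kelvins.
years = {2016: (288.0, 0.5), 2015: (287.8, 0.5), 2014: (287.7, 0.5)}

def intervals_overlap(a, b):
    """True if two value +/- uncertainty intervals intersect."""
    (va, ua), (vb, ub) = a, b
    return abs(va - vb) <= ua + ub

for y1, y2 in [(2016, 2015), (2016, 2014), (2015, 2014)]:
    print(y1, "vs", y2, "overlap:", intervals_overlap(years[y1], years[y2]))
# All three pairs overlap: in absolute terms the years cannot be
# ranked, which is exactly what the quoted passage concedes.
```

The largest gap between any two central values is 0.3 K, well inside the combined ±0.5 K bands, so no pair of recent years is distinguishable in absolute terms.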

  44. Using anomalies it is also possible to state that this year is warmer than last year every single year. No matter that if the temperature went up say, 0.1 C for 30 years then it would be an unbelievable 3.0 C hotter. Nobody would be able to tell unless they make the direct comparison with previous absolute temperatures. Using anomalies makes it easy to avoid the difficulties of this compounding of errors, or lies (take your pick).

    Additionally, it also helps to continuously change past years to lower temperature. With this technique you really can make temperatures continue up. Forever. Maybe it’s just coincidence that that is what seems to happen. Maybe not. But it is indeed very convenient for those who have staked their reputation on continuously rising temperatures.
    There is, or will be, probably some physical location in the temperature record where the warming has ostensibly risen from below zero Celsius average to above zero, but there is still ice on the ground.

  45. I have a little green paperback book entitled “How To Lie With Statistics.” Still available, lord knows how many reprints, mine from decades ago was in the 20 something reprint range. Problem is, it was designed to allow you to understand how statistics and things like the OP are used to lie and obfuscate, but today apparently most people, especially “climate scientists” use it as a how to manual.

  46. I’m still scratching my head as to why the supposed genius Stephen Hawking thinks global warming is going to make Earth uninhabitable, but somehow the Mars environment is adaptable. WTF Einstein?

    • Sometimes, an exceptional ability in one intellectual area is accompanied by a deficiency in another area. Calculating idiot savants are an extreme example.

  47. Epilogue:

    Thank you all for reading and engaging in civil conversation. It has been rather surprising to me that simply quoting the Climate Consensus’ explanation for the use of Anomalies over Absolute Temperatures, once from UCAR and once from GISS, has resulted in so much discussion.

    The main points from supporters of the use of anomalies are two: 1. “We’ve explained all this before” and 2. “Anomalies have much smaller error bars — they are much more precise.”

    I’m afraid Point #1 fails, as all I have done is quote their explanations — and they say what they say — they (the Climate Consensus supporters) are free to change their explanations if they don’t like them — but they can’t blame me.

    Point #2 is highly questionable — the facts are that the creation of averages of anomalies involves the same uncertain measurements as the absolute temperatures, only twice as many, equally uncertain measurements. Subtracting the one from the other (this years average from base period average) DOUBLES the uncertainty of the resultant anomaly. Averaging these more-uncertain anomalies does not, can not, result in a more precise answer.
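How much the differencing step inflates the uncertainty depends on the propagation convention assumed; a minimal sketch of the two standard conventions, with illustrative numbers:

```python
import math

# Illustrative numbers: a single-period average and a 30-year base-period
# average, each taken to carry +/-0.5 C of uncertainty.
u_year = 0.5
u_base = 0.5

# Worst-case (fully correlated) propagation: the errors simply add, so
# the anomaly's uncertainty is double that of either input.
u_linear = u_year + u_base

# Quadrature propagation, which assumes independent errors (the rule
# referenced in Gavin's post): a factor of sqrt(2) instead.
u_quadrature = math.sqrt(u_year**2 + u_base**2)

print(f"linear: +/-{u_linear:.2f} C, quadrature: +/-{u_quadrature:.2f} C")
```

Which convention applies turns on whether the measurement errors in the two averages are independent, and that assumption is the crux of the disagreement described above.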

    As Randy Newman says in the theme song to the TV series “MONK” — “I could be wrong now, but I don’t think so”.

    Let me point out once more that I was intentionally having some fun by quoting UCAR and Schmidt “explaining” their use of Anomalies, playing their own words back at them.

    The first essay https://wattsupwiththat.com/2017/08/16/climate-science-double-speak/ pointed out that the actual Absolute Global Temperature Average was — surprise! — the very same 15°C that it has always been, and that CliSci has no agreed-upon definition of Global Average Surface Temperature and no agreed-upon method of calculating that metric.

    This, the second essay in the mini-series, pointed out that Dr. Schmidt says that they should avoid Absolute Temperatures because if they use them all the recent years fall within their own error bars and thus must be considered the same. That won’t do, of course, because they just know that the temperature is rising — even though it stays the same…..

    My best to you all, apologies if I’ve failed to answer your question or concern. You may email me at my first name at the domain i4 decimal net.

  48. I am confused about Gavin’s entire argument. One must compare apples to apples or oranges to oranges. He, instead, compares apples to oranges.

    If one is going to compare absolutes, then one needs to compare absolutes. If one is going to compare averages (30 years in this case), then one must compare 30 year averages.

    The absolute temperature Gavin cites for a baseline is 287.4 to 287.5±0.5°K. 2016 came in at 288.0±0.5°K. So, the hottest year on record 2016 (in the middle of an El Nino) was at the high end of the error range (due to a rounding error) or just 0.1°C outside the error range. Hardly impressive, in my opinion, especially given diurnal temperature fluctuations which are generally at least two orders of magnitude higher than the minute increase (or possible increase) in absolute temperature which occurred.

    As stated, if one is going to compare averages, one must compare averages. I do not know how one compares a year (the weather of an El Nino year like 2016 for example) to a climate data set of 30 years. Gavin uses 1981-2010 for his 30 year average.

    The 1981-2010 anomaly was 0.56±0.05°C according to Gavin. The 1987-2016 anomaly is 0.54°C (I guess ±0.05°C). Not only is 1987-2016 climate data set inside the error ratio, but it is actually 0.02°C less than the baseline average he uses.

    So Gavin argues one cannot use absolutes because “we lose the ability to judge which year was the warmest”, but using the absolute actually makes 2016 the hottest year, albeit just in/outside the margin of error. But this argument would have actually shown an increase in absolute temperature. He instead compares a year to a 30 year average to the hottest year 2016, but when looking at the 30 year average up to 2016, that average is less than the average he uses.

    In my opinion, Gavin would have been better off using 287.1°K as his absolute temperature, from the NOAA NCEI average of 13.9°C he cites. The problem is that this 0.9°K/C temperature increase is still minuscule compared to:

    A) Diurnal temperature fluctuations which are often about 10°C (or more in drier areas),

    B) Temperature fluctuations due to uncontrollable variables, like cloud cover (water vapor), El Nino’s/La Nina’s, Milankovitch cycles, etc which produce much more error (fluctuations) into the overall climate system,

    C) The 287-288°K/C overall temperature of the Earth, of which 0.9°K/C is only a 0.3% change in overall temperature.

    While I agree it has probably gotten warmer (maybe 0.9°C/K) over the last century (plus), this temperature increase is very minuscule in the grand scheme of things and is generally inside of margins of error of instruments used to measure temperature over this same period or may be caused from human error looking at a thermometer (short guy versus tall guy) as noted in above comments.

  49. The use of anomalies rather than absolutes has a number of justifiable reasons…and then I have seen some hilarious ones, like claiming that anomalies are used because climate change is an anomaly.

    In any case, the failure of absolute temperature to discriminate from one year to the next certainly calls into question how “robust” any claims of any such year being warmer than another may be based on the anomalies.

  50. I understand that Global climate models do not work in absolute temperature, only anomalies. So output runs have to be base-lined.

    How a GCM can be accurate working in anomalies and not absolute temperatures is beyond me. What about phase changes from solid to liquid to vapour, and what about snow cover?

    • My error, I was looking at both Highest and Lowest. There are no High Temps in this century in any state, but there was one in 1898 in Oregon.

  51. Ahem… Current policy generation in action. Speak now.

    =============================
    Quote: “Help AIChE Craft Its Climate Policy

    by AIChE’s Public Affairs & Information Committee (PAIC)

    As evidenced by the many recent threads on Engage, climate change and climate policy are top-of-mind for many AIChE members. Like other professional societies, AIChE is finding increased member interest in public policy and advocacy. With this in mind, AIChE’s Public Affairs and Information Committee (PAIC) Climate Change Task Force, specifically its Climate Change Policy Review Project Team (PRP Team), is spearheading a broad effort to review and revise the Institute’s existing climate policy through communication with, and input from, members and the standard Board review and approval process. The PAIC PRP Team welcomes AIChE members to our discussion with this “Welcome Blog.”

    Communication and Participation

    The policy development process will take place throughout 2017 and will include:
    •A series of blog post on AIChE.org/ChEnected authored by members of PAIC’s Climate Change Task Force and others.
    •A series of discussions on AIChE Engage related to the aforementioned blog posts
    •CEP articles written by PAIC representatives

    PAIC Process and Substantive Scope

    Examine Contribution of Anthropogenic Climate Change

    a. Vet the Science Underlying US Regulation of Anthropogenic Climate Change: The first step in PAIC’s review and revision process will be to vet the body of science regarding data validity and attribution at issue in the regulation of anthropogenic emission sources of greenhouse gases (GHGs) to mitigate climate change. See Topics 1-4 below.

    b. Review the Uncertainties Regarding Models Projecting Climate Change Impacts: The second step is to review and summarize the scope of uncertainties in future projections and implications regarding overall risk. See Topics 5 and 6 below.

    Examine Mitigation Approaches

    The third step is to review and summarize climate change mitigation methods and implementation considerations, ranging from GHG regulation to utilization of non-fossil-carbon-based fuels, including renewables and nuclear; GHG emission control, including carbon capture and sequestration; and carbon emission avoidance methods, including decarbonization, energy efficiency, and bulk energy storage. See Topic 7 below.

    Examine Adaptation and Resilience

    Finally, PAIC will invite discussion regarding climate-change adaptation and resilience methods and implementation considerations regarding response to rising sea levels, storm surges and other flooding, increased storm events, drought, and water-source limitations, including subsidence, intense heat, and resulting drain on power grids. The response should focus on protecting manufacturing facilities and community infrastructure to ensure resilient production, social resilience, chemical process safety, employee safety and regulatory compliance. See Topic 8 below.

    Revisit AIChE Climate Change Policy

    Following the development of AIChE member discussion in Engage and publication of the CEP articles, PAIC will draft a revised AIChE Climate Change Policy for review by the Board. With this body of AIChE member input, PAIC believes it has the best chance of proposing an updated AIChE Climate Change Policy that balances member concerns while promoting chemical engineers’ contributions to addressing these topics.

    Organizational Approach

    For PAIC’s organizational purposes, topical categories below are currently planned for discussion sequentially as shown, with each topic announced upon opening for discussion with an opening ChEnected blog post:

    1. General Approaches and Project Principles; appropriate level of scientific certainty; status of popular opinion on climate change; international context of IPCC and the Paris Accord; (Wednesday, August 9, 2017)

    2. Validity of Observed/Measured Data I: temperatures, ice coverage, sea level rise, weather patterns, species;

    3. Validity of Observed/Measured Data II: greenhouse gas concentrations;

    4. Attribution of Observed Climate Change: causes of warming and increased greenhouse gas concentrations;

    5. Validity of Future Projections I: temperatures, ice coverage, sea level rise, weather patterns, species;

    6. Validity of Future Projections II: greenhouse gas concentrations;

    7. Climate Change Mitigation Approaches and Implementation;

    8. Adaptation and Resilience Approaches and Implementation.

    To assist in preparing to participate in this discussion, participants might turn for reference to the science supporting current U.S. law, specifically the factual issues (body of science and decision-making) considered by EPA in adopting its GHG Endangerment Findings and supporting its decision to deny reconsideration of the Findings, in Resources and Tools, as well as the D.C. Circuit Court of Appeals 2012 decision upholding the Endangerment Finding in Coalition for Responsible Regulation v. EPA (D.C. Cir. Index No. 09-1322), at Arnold & Porter’s Climate Case Chart.

    After publishing ChEnected blog posts on each of these topics, followed by moderated Engage discussion, the full scope of points articulated regarding each issue will be summarized and used as the basis for an article in CEP.

    Join The Conversation – and Remember the Code of Conduct

    PAIC invites you to join the discussions on Engage over the next several months. Note that there will be separately moderated discussions based on the above-mentioned technical topics. The goal is to stay on the topic at hand. The discussions are not meant to be political in nature. Member posts to the Climate Statement discussion threads must conform to AIChE Engage’s Code of Conduct; this is required of all posts to Discussion Central. Posts to the Climate Statement discussion threads must also meet these additional requirements:

    • Posts must pertain to the specific Climate Policy topic currently under discussion

    • Factual posts must be evidence-based and provide links to evidence, e.g., links to data, links to analysis of data, or citations of peer-reviewed literature whose authors conduct research in the topic under discussion as their profession

    • Factual posts must introduce any link they provide

    • A factual post must not be a repeat of a previous factual post

    • Posts that are declined can be edited to correct deficiencies and resubmitted

    • Most valuable will be posts updating previously accepted scientific positions and/or that legitimately rebut with new science an EPA conclusion supporting previous rulemaking

    • Each topic will be discussed for two weeks and when it is closed, the next topic will be introduced

    Posts that do not meet these requirements will be moderated and will not appear in the Climate Statement thread. If a member’s post is moderated, the member will be given the opportunity to edit the post so that it meets the requirements for the Climate Statement threads and resubmit it. If a post is moderated, AIChE staff will provide a specific explanation of the reason it was moderated.

    Climate Policy topics will be open for a two-week period. Once a topic is closed, posts to that topic will not be accepted and a new topic will be introduced.

    Threads will be monitored closely and any posts that stray off the topic at hand or make factual statements without providing data from reputable sources will be flagged for moderation. Posters will have an opportunity to re-post with appropriate modifications of the original post.”

    http://www.pressreleasepoint.com/help-aiche-craft-its-climate-policy

Comments are closed.