Almost Earth-like, We’re Certain

Guest Essay by Kip Hansen

There has been a lot of news recently about exoplanets. An extrasolar planet, or exoplanet, is a planet outside our solar system. The Wiki article has a list of them. I only mention exoplanets because there is a set of criteria specifying what could turn out to be an “Earth-like planet”, of interest to space scientists, I suppose, because such planets might harbor “life-as-we-know-it” and/or be potential colonization targets.

One of those specifications for an Earth-like planet is an appropriate average surface temperature, usually said to be 15°C. In fact, our planet, Sol 3 or simply Earth, is very close to qualifying as Earth-like as far as surface temperature goes. Here’s that featured image full-sized:

[Featured image: the Chemical Society planetary temperature chart]

This chart from the Chemical Society shows that Earth’s average temperature should be about 15°C, and notes that our atmosphere consists mostly of Nitrogen (78%), Oxygen (21%) and Argon (0.9%), which together make up 99.9% of the total — leaving about one-tenth of one percent for the trace gases, water vapor (H2O) and CO2.

Let’s look at the thermometer:

[Figure: planetary thermometer]


We see the temperatures believed to exist on the surfaces of the eight planets and Pluto (poor Pluto…).

The ideal Earth is right there at 15°C (59°F).

Mercury and Venus are up at the top, one due to proximity to the Sun and the other due to a crushingly dense atmosphere; both are well out of range for Earth-like planets.

Mars is down below the freezing temperature of water, due to its distance from the Sun and, mostly, its lack of atmosphere: coming in at about 70°F (20°C) near the equator by day but plummeting at night to about minus 100°F (minus 73°C), with an estimated average of about -28°C. The average is a little low, but mankind lives on Earth in places with a similar temperature range, at least on an annual basis, so with adequate shelter and clothing (modified for the lack of a breathable atmosphere), it might do.

The other four planets and Pluto (poor Pluto) don’t have a chance of being Earth-like.

This next thermometer shows that Earth provides a temperature range suitable for human and Earth-type life, ranging from 56.7°C (134°F) at the high end down to -89.2°C (-128.6°F), with an average of 15°C (59°F).


[Figure: Earth’s record high/low thermometer]


Like Mars, Earth has an average that falls easily within what most people would consider a comfortable range, avoiding extremes, if one is properly dressed for the weather. For me, a southern California surfer boy by birth, 59°F (15°C) is sweater weather – or more properly, Pendleton wool shirt weather. 59°F (15°C) is the average Fall/Winter temperature of the surf at Malibu, and most of us required wetsuits to keep us warm in the water.


Taking a closer peep at the middle of our little graphic, we see that the IDEAL Earth-like planet would have an average surface temperature of 15°C. But from the 1880s through 1910, we were running a bit cool — 13.7°C. Luckily, after the mid-century point of 1950, we started to warm up a little and got all the way up to 14°C, just 1°C short of the ideal.

So, how have we done since then?

There is good news.  Since the middle of the last century, when Earth was running a little cool compared to the ideal Earth-like temperature expected of it, we have made some gains.

[Figure: 21st century global average temperatures]

By 2014, Earth had warmed up to an almost-there 14.55°C (with an uncertainty of +/- 0.5°C).

With the uncertainty in mind, we can see how close we came to the target of 15°C.  The uncertainty bracket on the left for 2014 almost reaches 15°C.

2016 was a banner year, at 14.85°C, and, uncertainty taken into account, could have been a tiny bit over 15°C!

The numbers used in this image come from NASA GISS’s Director (and co-founder of the private climate blog that he and his pals work on while being paid by the government with your taxes), Gavin Schmidt. They are from his blog post in August 2017 — and, as always, have since been adjusted to be a bit higher. The current, adjusted-higher numbers show 2017 as 0.1°C lower than 2016, which is what I have used.

That RealClimate blog post is quite a wonderful thing — it reveals several things, some of which I have written about in the past, which is the part I quote under the image. Dr. Schmidt kindly informs us about one of the miracles of modern climate science. This miracle involves taking data that has a rather wide uncertainty — a full degree Centigrade wide, being plus 0.5°C or minus 0.5°C — and turning it into accurate and precise data with almost no uncertainty at all!

Dr. Schmidt explained to us why GISS uses “anomalies” instead of “absolute global mean temperature” (in degrees) in the blog post (repeating the link):

“But think about what happens when we try and estimate the absolute global mean temperature for, say, 2016. The climatology for 1981-2010 is 287.4±0.5K, and the anomaly for 2016 is (from GISTEMP w.r.t. that baseline) 0.56±0.05ºC. So our estimate for the absolute value is (using the first rule shown above) is 287.96±0.502K, and then using the second, that reduces to 288.0±0.5K. The same approach for 2015 gives 287.8±0.5K, and for 2014 it is 287.7±0.5K. All of which appear to be the same within the uncertainty. Thus we lose the ability to judge which year was the warmest if we only look at the absolute numbers.”

So, by changing the annual temperatures to “anomalies” they get rid of that nasty uncertainty and produce near certainty!
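For readers who want to check Schmidt’s arithmetic in that quote, the “first rule” he refers to appears to be the usual one for combining independent uncertainties in quadrature (root-sum-square), and the “second” is just rounding to a sensible number of figures. A minimal Python sketch, assuming that is indeed what he means:

```python
import math

def add_with_uncertainty(value_a, unc_a, value_b, unc_b):
    """Add two quantities, combining independent uncertainties in quadrature."""
    return value_a + value_b, math.sqrt(unc_a**2 + unc_b**2)

# Numbers from the quote: the 1981-2010 climatology and the 2016 anomaly
climatology, clim_unc = 287.4, 0.5   # K
anomaly, anom_unc = 0.56, 0.05       # K

absolute, abs_unc = add_with_uncertainty(climatology, clim_unc, anomaly, anom_unc)
print(f"2016 absolute estimate: {absolute:.2f} ± {abs_unc:.3f} K")  # 287.96 ± 0.502 K
print(f"Rounded:                {absolute:.1f} ± {abs_unc:.1f} K")  # 288.0 ± 0.5 K
```

Note that the combined uncertainty comes out essentially unchanged, at +/- 0.5 K: the wide uncertainty of the absolute climatology dominates.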

[Figure: “the miracle”, annotated by KH]  Source: https://data.giss.nasa.gov/gistemp/graphs_v3/

Dr. Schmidt and the ClimateTeam have managed to take very uncertain data, so uncertain that the last four years of Global Average Surface Temperature data, when straightforwardly presented as degrees Centigrade with their proper +/- 0.5°C uncertainty, cannot be distinguished from one another, and, through the miracle of “anomalization”, have turned it into a new, improved sort of data: an anomaly so precise that they don’t even bother to mention its uncertainty, except to add (at least on the above graph) a single uncertainty bar for the modern data which is 0.1°C wide, or, in the language used in science, +/- 0.05°C. The uncertainty in the Global Average Surface Temperature has magically become a whole order of magnitude less uncertain…. And all that without a single new measurement being made.

The miracle is accomplished by the marvel of subtraction! That is, one simply takes the current temperature in degrees, which has an uncertainty of +/- 0.5°C, subtracts from it the climatic-period mean (currently 1981-2010), and, voila, out comes the anomaly with a wee tiny uncertainty of only +/- 0.1°C.

Let’s see if that really works:

miracle_grid_flat

Here’s the grid of all the possibilities, with a range of +/- 0.5°C for the 2015 temperature average in absolute degrees, and the 1981-2010 climatic mean in degrees with the same +/- 0.5°C uncertainty range, both of which are given by Dr. Schmidt in his blog post. One still gets the +/- 0.5°C, or 1°C wide, uncertainty range. It did not magically reduce to a range one-tenth of that — it didn’t turn out to be 0.1°C wide as shown in Dr. Schmidt’s graph. I’m pretty sure of my arithmetic, so what happened?
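If you would like to redo that check yourself, here is a minimal Python sketch that enumerates combinations of the two quantities across their stated +/- 0.5°C ranges (using the 2015 absolute value and the climatology from the blog post) and reports the spread of the resulting anomaly, alongside the root-sum-square combination:

```python
import math
import numpy as np

absolute_2015, abs_unc = 287.8, 0.5   # K, 2015 absolute estimate from the blog post
climatology, clim_unc = 287.4, 0.5    # K, 1981-2010 climatology

# Enumerate possible "true" values within each stated range, in 0.1 °C steps
abs_grid = np.arange(absolute_2015 - abs_unc, absolute_2015 + abs_unc + 1e-9, 0.1)
clim_grid = np.arange(climatology - clim_unc, climatology + clim_unc + 1e-9, 0.1)

anomalies = abs_grid[:, None] - clim_grid[None, :]   # every combination in the grid
print(f"anomaly spans {anomalies.min():.2f} to {anomalies.max():.2f} °C "
      f"({anomalies.max() - anomalies.min():.2f} °C wide)")

# Treating the two ±0.5 °C uncertainties as independent and combining in quadrature:
print(f"root-sum-square combination: ±{math.sqrt(abs_unc**2 + clim_unc**2):.2f} °C")
```

However you prefer to treat the two ranges (worst-case corners of the grid, or independent errors combined in quadrature), the spread that comes out of the subtraction is on the order of the original +/- 0.5°C or wider, nowhere near the +/- 0.05°C shown on the anomaly graph.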

How does GISS justify the new, improved, wee-tiny uncertainty?  Ah — they use statistics!  They ignore the actual uncertainty in the data itself, and shift to using the “95% uncertainties on the estimate of the mean.”  Truthfully, they fudge on that a little bit as well, which you can see in their original data. [In their monthly figures, the statistical uncertainty (+/- 2 Standard Deviations) is a bit wider than the illustrated “0.1°C”.]

So rather than use the actual original measurement uncertainty, they use subtraction to find the difference from the climatic mean and then pretend that this allows the uncertainty to be reduced to the statistical construct of the “uncertainties” of the mean — standard deviations.
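The distinction is worth spelling out. The “uncertainty on the estimate of the mean” is the familiar standard error, which shrinks as the number of values averaged grows, no matter how wide the range attached to each individual value. A toy Python sketch (all of the numbers here are invented purely for illustration, and the readings are treated as independent scatter around a single true value, which is itself a generous assumption):

```python
import numpy as np

rng = np.random.default_rng(0)

n_readings = 2500              # hypothetical number of gridded values being averaged
per_reading_uncertainty = 0.5  # each value is only known to within ±0.5 °C
true_anomaly = 0.56            # an assumed "true" value, for illustration only

# Each reading scatters somewhere within its ±0.5 °C range around the true value
readings = true_anomaly + rng.uniform(-per_reading_uncertainty,
                                      per_reading_uncertainty, n_readings)

mean = readings.mean()
sem_95 = 2 * readings.std(ddof=1) / np.sqrt(n_readings)  # ~95% uncertainty of the mean

print(f"mean of the readings:        {mean:.3f} °C")
print(f"standard error (95%):        ±{sem_95:.3f} °C   <- shrinks as n grows")
print(f"uncertainty of each reading: ±{per_reading_uncertainty} °C  <- unchanged")
```

Whether that ever-shrinking standard error is a legitimate description of the uncertainty in a global average, or a statistical sleight of hand, is exactly what is argued back and forth in the comments below.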

This is a fine example of what Charles Manski is talking about in his new paper:

“The Lure of Incredible Certitude”, a paper recently highlighted at Judith Curry’s Climate Etc. While Dr. Curry accepts Manski’s compliment to climate science, based on his perception that many “published articles on climate science often make considerable effort to quantify uncertainty,” we see here the purposeful obfuscation of the real uncertainty of Global Average Surface Temperature annual data, replacing the admitted wide uncertainty ranges with the narrow “uncertainties on the estimate of the mean”.

Graphically, it looks like this:

[Animation: GISS Temp in absolute degrees with uncertainty vs. anomalies]

Although I was a semi-professional magician in my youth, I have nothing that compares to the magic trick shown above — a totally scientifically spurious transformation of Uncertain Data into Certain Anomalies, reducing the uncertainty of annual Global Average Surface Temperatures by a whole order of magnitude using only subtraction and a statistical definition. Note that the data and its original uncertainty are not affected by this magical transformation at all — like all stage magic, it’s just a trick.

A trick to fulfill the need of the science field we call Climate Science to hide the uncertainty in global temperatures — an act of “disregard of uncertainty when reporting research findings” that will “harm formation of public policy.” (quotes from Manski).

The true answer to “Why does Climate Science report temperature anomalies instead of just showing us the temperature graphs in absolute degrees?” is exactly as Gavin Schmidt has stated: if Climate Science used absolute numbers, the annual temperatures would “all … appear to be the same within the uncertainty. Thus we lose the ability to judge which year was the warmest if we only look at the absolute numbers.”

Thus, they use anomalies and pretend that the uncertainty has been reduced.

It is nothing other than a pretense. It is a trick to cover up known large uncertainty.

# # # # #


While I was preparing this essay, I thought to attempt to illustrate the true magnitude of the recent warming (since 1880) in a way that would satisfy the requirements of a Climatically Important Difference. Below we see the UAH Lower Troposphere temperatures (even these as errorless anomalies) graphed at the scale of temperatures allowed in my personal living quarters before our family resorts to either heating or cooling for comfort: about 8°C (15°F, as in 79°F down to 64°F). Overlaid in light blue is a 2°C range, into which the entire recent satellite record fits comfortably, and in purple is the prescribed 3-to-3.5°C comfort range from the Canadian Centre for Occupational Health & Safety for an office setting. If the office temperature varies more than this, the HVAC system is meant to correct it by heating or cooling. As we can see, the Global Average lower troposphere is very well regulated by this standard.

[Figure: UAH Lower Troposphere anomalies with the comfort ranges overlaid]

An interesting aside is that the Canadian COHS allows an extra 0.5°C in the winter, increasing the comfort range to 3.5°C, to account for the differences in perception of temperature during the colder months.
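For anyone who would like to redraw that figure, a rough matplotlib sketch follows. The file name and column names are hypothetical stand-ins for wherever you keep the UAH monthly anomaly series, and the bands are drawn at the 2°C and 3.5°C widths described above:

```python
import matplotlib.pyplot as plt
import pandas as pd

# Hypothetical CSV of UAH lower-troposphere monthly anomalies: columns "date", "anomaly"
uah = pd.read_csv("uah_lt_monthly.csv", parse_dates=["date"])

fig, ax = plt.subplots(figsize=(10, 4))
ax.plot(uah["date"], uah["anomaly"], lw=0.8, color="red")

# Y axis spanning ~8 °C (about 15 °F), the "living quarters" range described above
ax.set_ylim(-4, 4)
ax.axhspan(-1.0, 1.0, color="lightblue", alpha=0.5, label="2 °C range")
ax.axhspan(-1.75, 1.75, color="purple", alpha=0.15, label="3.5 °C office comfort range")

ax.set_ylabel("Anomaly (°C)")
ax.legend(loc="upper left")
plt.tight_layout()
plt.show()
```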

# # # # #

Author’s Comment Policy:

Hope you enjoyed this rather light reading.

I am not so convinced by the hopeful thinking of astronomers regarding exoplanets. I believe they are out there (the planets, not the astronomers), and for the record I believe there are other intelligent beings out there as well; I just have serious doubts that our rather primitive instruments can identify them at such great distances.

For you budding scientists out there, Gavin Schmidt has given you a useful tool for turning your sloppy, uncertain data into highly certain data using nothing more complicated than subtraction and a dictionary. Good luck to you!

Feel free to leave your interpretations of what the GISS Temp global graph in absolute temperatures (degrees) really tells us based on the two uncertainty ranges shown.

Oh, and against all odds, some things are better than we thought.  The Earth, if we allow her to warm up just a tiny bit more, will finally be at the expected, ideal temperature for an Earth-like planet.  Who could ask for more?

# # # # #


279 thoughts on “Almost Earth-like, We’re Certain”

  1. Good comments Kip. The people who want to get rid of that nasty uncertainty probably think denial is a river in Egypt. Another characteristic for an earth-like exoplanet would be a magnetosphere, wherein the gases that form an atmosphere are protected from solar winds.

    • Ron ==> Magnetosphere….Mars? It was my understanding that Mars is thought to have once had a robust atmosphere but has lost it for unknown reasons.

      • The books that I’ve read say that Mars lost its atmosphere when its magnetosphere became weak enough that it could no longer protect the atmosphere from the solar wind.

        • Exactly. By the way our magnetic field strength is weakening substantially, but it may be a lead-in to a reversal and not something more alarming.

          • The Earth’s magnetic field has been reversing about every 100K to 120K years for at least as long as the Atlantic has been growing.
            The current reversal is nothing unusual.

      • Mars is slowly losing its atmosphere even today, as H (from H2O dissociation), N2, He, Ne, and Ar are not fully bound.
        And Venus has lost its H2O from dissociation and H escape.

        • Also, Mars does not have sufficient mass to create enough gravity to hold these gases in its atmosphere, hence most of what is left is CO2 – a ‘heavier’ gas (higher molecular weight). Mars was never going to hold its atmosphere long term because of this.

      • Tom,

        Good question.

        Venus doesn’t presently have an internally-generated magnetosphere, for whatever reasons. It does however sport an external magnetosphere, thanks (ironically?) to the solar wind.

        Venus’ small magnetic field is created by interaction of its ionosphere with the solar wind. This weak field differs from the common intrinsic magnetic fields (generated by planetary cores).

        It’s possible that Venus has an intrinsic field, but has been in a polarity reversal since its magnetism has been observed. But probably it simply lacks a core-generated magnetosphere.

        Whether the small field generated in its ionosphere is sufficient to protect its atmosphere from the solar wind, I don’t know.

        • Thanks for that, John. It was also my understanding that the Sun’s magnetic field was possibly involved in protecting the atmosphere of Venus. I haven’t heard about any other theories to explain it. That was the reason for my question, to see if there were other theories out there.

  2. our atmosphere consists mostly of Nitrogen (78%), Oxygen (21%) and Argon (0.9%), which together make up 99.9% of the total — leaving about one-tenth of one percent for the trace gases, water vapor (H2O) and CO2.

    I think this is wrong. Water vapour makes up (ON AVERAGE) about 1% of our atmosphere (this was the case last time I checked). The other percentages add up to 99.9% because those values are their concentrations IN THE ABSENCE OF water vapour. They are generally very well mixed, which is totally not the case for water vapour; that’s why the water vapour concentration is provided separately.

      • Mass yes, but volume, roughly 1% (H2O is quite light compared to O2, N2 and Ar). In the end it is the volume that tells you about the number of molecules, which is also what matters for the GH effect.

    • Nylo & Henry ==> I am not sure what the Chemical Society used as their standard for “%”…mass or volume. I think it is only meant to be “illustrative”, not strictly quantitative.

      • Kip
        Either way
        Mass/mass
        or
        Vols/vols
        your Society is wrong?
        Important to note is that mass / mass, H2O is 10 x higher than CO2.
        That makes nuclear not more beneficial than e.g. burning gas (since it produces a lot more water vapor) in respect of GH gases

        I.e if you believe there is some man made warming caused by GH gases.

        • henry ==> Nuclear cooling water (steam) that goes up the stack can easily be recaptured as clean fresh water — and if salt water is used for cooling, the plant becomes a desal plant as well.

          • yes, Kip,
            one of the reasons I dislike nuclear is that the one plant here in the Cape (South Africa) killed all the fish in the surrounding ocean where the warmer water is being dumped. If one plant can do so much damage to the [local] climate, we do not want to build any more plants?

            Warmer ocean/river water ultimately leads to more H2O (g) in the atmosphere?
            It is simple arithmetic?

            Not that I believe there even is a GH effect but if there were, then thinking that nuclear is the solution seems to me like not such a bright idea.

            Since the Cape here has such a shortage of water I am certainly interested to hear your plan in changing this warmer water to de-ionized / distilled water. If it were that easy I am sure the powers that be here would be making a plan?

          • “one of the reasons I dislike nuclear is that the one plant here in the Cape (South Africa) killed all the fish in the surrounding ocean where the warmer water is being dumped. ”

            I have no doubt that the local enviro’s blamed the power plant, but I doubt that is what happened.

            BTW, pretty much all power plants require cooling, the amount needed for nuclear is the same that’s needed for an equally sized fossil fuel plant.

          • MarkW

            BTW, pretty much all power plants require cooling, the amount needed for nuclear is the same that’s needed for an equally sized fossil fuel plant.

            No, I will disagree with you there. Because of their more conservative departure-from-nucleate-boiling criteria, most nuclear plants do not use superheated steam in their secondary cycle process through the turbines, so an equally-sized (electric delivery) nuclear plant will discharge slightly hotter water into its cooling tower, cooling ponds, or cooling lake, or through-pass river. But BECAUSE the difference in heat energy is easily calculated, is prevented by extra cooling towers or lower heat output in very hot weather – depending on the locally approved mitigation process – and is much smaller in any case than “life threatening” cases, there is great doubt that the story spread by the enviro claims is correct.

          • Thx. To stop a nuclear reaction needs a lot of cooling water. Just to switch off the gas needs how much cooling water?

          • HenryP

            To stop a nuclear reaction needs a lot of cooling water. Just to switch off the gas needs how much cooling water?

            Sort of. To “stop” a nuclear reactor only requires that the control rods be inserted. Reaction stops, and the reactor (and steam generators and turbines and pipes and condensers) begins cooling down, because the primary heat source is, indeed, shut down. A reactor does continue to generate decay heat from the core (which begins at around 7% of the previous power level), then goes down exponentially with time. This is the decay heat that must continually be removed after shutdown. But 7%-3%-1.5%-0.75% are small amounts of the previous 100% cooling water flow needed at 100% power levels.

            Now, in sharp contrast, a gas turbine combined cycle plant (3 x 250 Megawatt for example) is very different. The two gas turbines generate 500 Megs of power with almost 0.0 cooling water: they want their blades and burners running as hot as possible, and only the lube oil and secondary air must be cooled a little bit. So, on shutdown and while running, there is almost no cooling water needed at all. The tertiary steam generator runs on the steam generated from the waste heat from the two GTs, so it must reject all of its waste energy to the cooling water-condenser water just like any other steam plant. But only 250 Megs’ worth of cooling water is needed for a 750 Meg GT+GT+ST combined cycle plant. The actual numbers are a bit more complex to calculate, but I hope you get the point.

          • Thx for the explanation. But I can see the results [of more nuclear]. Some have reported an increase in growth, both in size and numbers, of the fish around the plant when it is using river water as cooling water….

          • Well, yes, henryp – I believe that there IS a shortage of fish around The Cape. But maybe all of the uncontrolled (mainly Korean) fishing offshore, and the poaching inshore, just MIGHT have some effect on the populations?

    • We have plenty of time. We still have to finish the search for intelligent life on this planet… /Sarc

      • “We still have to finish the search for intelligent life on this planet.”

        News flash . . . we found it, but it is dying off rapidly.

    • fretslider ==> Yeah, I don’t think we’re dropping in on any exoplanets for tea this afternoon…..

      If we figure out fusion power sometime soon, it may lead to a drive source for interplanetary exploration. Interstellar will take a new understanding of basic physics…

  3. This is a very interesting article and demonstrates the miss use of statistics. But the problems are even deeper rooted, since it is absurd to claim that there is GLOBAL data going back to the 19th century. Even in the mid-1950s the Southern Hemisphere data, at any rate that south of the tropics, is largely made up, as Phil Jones so candidly noted in the Climategate emails. In truth, we only have worthwhile data covering the Northern Hemisphere.

    Then the data has been so heavily massaged, with adjustments exceeding 1 degC, as to render it worthless for scientific scrutiny and study. An example: a couple of months back Willis reviewed the BEST data set to see whether the 20 largest volcano eruptions could be found in the data sets, despite the scientific consensus that volcanoes have a material impact on temperatures. I suggested that the reason one could discern the impact of the largest 20 volcanic eruptions on temperature was due to the adjustments/homogenisation of the data rendering it worthless.

    At the moment Tony Heller is running an article on “Close enough for Government work”. It is well worth reading, since it is right on point:
    https://realclimatescience.com/2018/09/close-enough-for-government-work/

    The National Climate Assessment (https://science2017.globalchange.gov/chapter/6/) has the below graph, which shows how much hotter the US used to be.
    https://realclimatescience.com/wp-content/uploads/2018/09/2018-09-03201341_shadow.png

    I will set out a couple more of his plots (but read the article for more detail).
    https://realclimatescience.com/wp-content/uploads/2018/09/1900-2018-At-All-US-Historical-Climatology-Network-Stations-Red-Line-Is-0-Year-Mean-PercentOfDaysAbove90_shadow.png

    And to show the impact of adjustments:
    https://realclimatescience.com/wp-content/uploads/2018/09/2018-09-03225121_shadow.png

    The truth of the matter is that we have no idea as to the temperature of the planet, such that all we can say is that it has warmed since the depth of the LIA and that there are large amounts of multidecadal variation, but we do not know whether the planet today is any warmer than it was in the 1940s, or for that matter the 1980s.

    • Even claiming we have useful information for the Northern Hemisphere is a bit much.

      It’s more like we have useful information for the Eastern US and most of Northern Europe.
      It gets pretty spotty outside those regions.

    • Richard ==> Thanks for the link to Heller’s essay. The whole CliSci field is polluted with “hypothesis confirming” statistical manipulations of messy data sets.
      Using “cute tricks” as proofs….

    • “…the miss use of statistics…”
      I thought that was more on the lines of 39-24-36.

      [The mods point out that example may be a miss using statistics, and mrs’ing her target. .mod] …

    • >>
      I suggested that the reason one could discern the impact . . . .
      <<

      I think Mr. Verney meant “could not discern.” His statement would then make more sense in context. I agree with the points he’s making.

      Jim

  4. The true answer to “Why does Climate Science report temperature anomalies instead of just showing us the temperature graphs in absolute degrees?” is…

    Thermometers aren’t scary enough.

    https://debunkhouse.files.wordpress.com/2018/05/context2.png

    That said, there is a scientific basis for using anomalies rather than absolute temperature values. It’s the best way to evaluate tiny variations in a highly variable time series, with a baseline 15 to 30 times the size of the anomalies. It’s similar to the rationale for logarithmic scales.

    However, the Climatariat clearly do it to make a minuscule number look very big. If geology was done by the Climatariat, no photos would ever include a lens cap for scale… 😎

    • David ==> Oh yes, anomalies are quite proper where they are used honestly to discover something about a deep, highly variable data set….what they DON’T DO is get rid of the original measurement uncertainty. Used to disguise uncertainty, it is just yet another way of scientists fooling themselves….

    • David ==> I did a series of digital photos of flowers in the Caribbean….I took at least two shots of each subject — one macro of the flower and one that included my left thumb for a size reference.

      • Top Ten Signs You Might Be A Geologist:

        10. You have ever had to respond “yes” to the question, “What have you got in here, rocks?”

        9. You have ever taken a 15-passenger van over “roads” that were really intended only for cattle

        8. You have ever found yourself trying to explain to airport security that a rock hammer isn’t really a weapon

        7. Your rock garden is located inside your house

        6. You have ever hung a picture using a Brunton as a level, and your rock hammer as your hammer

        5. Your collection of beer cans and/or bottles rivals the size of your rock collection

        4. You consider a “recent event” to be anything that has happened in the last hundred thousand years

        3. Your photos include people only for scale and you have more pictures of your rock hammer and lens cap than of your family

        2. You have ever been on a field trip that included scheduled stops at a gravel pit and/or a liquor store

        And the #1 sign you might be a geologist:

        1. You have ever uttered the phrase “have you tried licking it” with no sexual connotations involved

        http://groups.colgate.edu/geologicalsociety/Features/Geology%20Jokes.htm

        😉

        • OMG – guilty as charged and I left geology related employment decades ago. Amateur geology highlights include having to hand over a carefully packed box of rock samples to the Irish army at a roadblock near the Northern Ireland border (they thought the heavy box could contain armaments) and, much more recently, being prevented from taking a cracking layered igneous pebble (destined for inclusion in my garden wall) in my hand luggage on a flight from Jersey as the officers believed I could use it as a weapon on board.

  5. I had desired to post something much more detailed but if one sets out a number of citations then the comment is chucked into moderation.

    It is well worth having a look at the recent National Climate Assessment Report, and in particular Chapter 6.1.2 Temperature Extremes. I will set out some of their plots which demonstrate the misuse of statistics.

    See figure 6.4:
    https://science2017.globalchange.gov/img/styles/figure6_4-1200@2x.png

    Note the large number of warm days and hot spells in the 1930s, and also the large number of cold days in the 1930s.

    Now look at how they combine this data (in figure 6.5) to show the number of extremes to give the impression that extremes were very minimal in the 1930s and the climate is far more extreme today:
    https://science2017.globalchange.gov/img/styles/figure6_5-1200@2x.png

    Tony Heller has done a very good video discussing the extremes of 1936.

    • Richard ==> Yes, propaganda is the art of showing the people only what you want them to see about a topic. If I could teach a Critical Thinking class at Uni, I would include my essay on What Are They Really Counting?.
      You show Hot Days, Cold Days and they show “Ratio of Daily Temperature Records” — as if that were a measure of whether or not the country is warming.

  6. I don’t wish to put words into Gavin’s mouth but I’m pretty certain that he’d say that if the uncertainty was wider, it’d mean that things could be worse than we thought. He does, after all, like to look on the gloomy side.

  7. Good work Kip, I enjoy your writing particularly.

    I’ve long been struck by the fact that the reason given in Meteorology for the use of anomalies is that they provide more useful information about a particular place – against the background of its local climate – than absolute temperature, which does not indicate that information directly.

    That this “usefulness” now extends to the globe* has always seemed a rather anomalous usage to me! 😉

    *The use of local anomalies from vastly different climates to calculate a single global temperature.

    • Scott ==> Thanks … Anomalies don’t provide more useful information…a properly scaled graph, maybe with a bit of smoothing, shows the data, and if informed with the true uncertainty, lets us see what is going on with that metric. If the data has a 1°C wide uncertainty, then the public MUST be shown that, and it must be explained so they understand what the uncertainty means in practical terms.
      Hiding uncertainty is a Bad Thing — it is Bad for everyone.

      • Kip Hansen, replying to Scott Bennett (Adding Joe Bastardi to the conversation search)

        Scott ==> Thanks … Anomalies don’t provide more useful information…a properly scaled graph, maybe with a bit of smoothing, shows the data, and if informed with the true uncertainty, lets us see what is going on with that metric. If the data has a 1°C wide uncertainty, then the public MUST be shown that, and it must be explained so they understand what the uncertainty means in practical terms.

        No, I will politely disagree with you there.

        Anomalies can be very, very useful. But the entire calculation of the anomaly MUST BE considered, the purpose of using the anomaly (instead of the actual temperature itself, and the temperature error analysis and its std deviation) and the anomaly’s error analysis and its std deviation.

        This generalization will be true for all anomalies, not just temperature – but the CAGW climate community has seized on the difference of a Single Temperature Anomaly from what they have chosen as the “Global Average Temperature” (for a flat plate earth irradiating a uniform average atmosphere in a constant orbit around a constant average sun), so that classroom environment is their world. So let’s use the Global Average Temperature as they do. More accurately, the Global Average Temperature Anomaly (difference).

        If all of the world’s surface (air) temperatures were accurately known for each hour of each day of the year.
        If those surface temperatures were accurately recorded for a sufficient number of years so each hour’s “weather” could be averaged for a sufficient number of seasons.
        If those seasonal average daily temperatures did accurately reflect the local climate (not local ever-changing urban-suburban-asphalt-concrete-farming-forests-fields-brush and woodlands and deserts and beach conditions and ocean and sea conditions).
        Then you could average together sufficient local hourly records to calculate that thermometer’s average daily (seasonal) changing temperature.
        Then you could subtract the hourly measured temperature from the long-term seasonal hourly average and claim you have generated one anomaly. For one place, for one location of that specific thermometer – assuming (as above) that the local environment around the thermometer has not affected your recent temperature measurements.
        Given enough accurate hourly temperature anomalies from enough locations worldwide, theoretically you now have a global average temperature ANOMALY. (Not global average temperature for that hour, but the theoretical hourly temperature difference from your assumed local standard hourly temperature. )

        But that’s the problem – not just all of the assumptions about measuring each hour’s temperature accurately, but the assumptions about what that entire process requires. Including the fundamental assumption that the earth (globally) was in a stable thermal equilibrium at some global average temperature at some time in the past.

        The earth has never been at thermal equilibrium – it can’t be because there is no natural “thermostat” set by an Infinite Mother Nature. Rather, the earth continuously cycles from a “too hot” condition (when it loses more heat to space than is gained for that period of time and thus cools), through an unstable transient period somewhere near the average of “too hot” and “too cool”, towards a period when too little heat energy is being lost to space (and thus the received energy is more than is being radiated to space.)

        Like a swing whose “average speed” is only momentarily ever measured “at average” – but whose “average speed” can be constantly measured at always changing values; whose height is always changing but which only momentarily is ever at its “average height”; and whose average potential energy is thus also always changing; and whose average kinetic energy is also ever-changing – you can best discuss its state by using the anomaly (position, speed, mass, velocity) of that swing from the “instantaneous expected perfect state.”

        Go in the classroom, and you can perfectly calculate every perfect theoretical piece of information you wish. Then determine a difference from that perfect theoretical state, then write a paper about the anomaly, the value of that anomaly, and the trend of that anomaly into the infinite future.

        But the actual state of even the edge of that swing in the real world? Can’t predict it even above the molecular (much less atomic level.) Too many real world interferences such as wind, air friction, pivoting friction, motion and rocking of the impulse pushing the swing, movement of the body on the swing, changes in mass of the swing, its chain, the body on the swing, and the friction between every link of every point on the chain changing due to wear and air friction.

        The myth of the climatologists is that they CAN predict the far future of the earth’s weather by ignoring all of the small parts of each of these events and concentrating on the global averages of events, yet they pretend they are calculating every effect by focusing on the individual dust particles everywhere, and the individual CO2 concentrations, as they control the global average clouds and global average humidity and global average pressure.

        • “but the CAGW climate community has seized on the difference of a Single Temperature Anomaly from what they have chosen as the “Global Average Temperature” (for a flat plate earth irradiating a uniform average atmosphere in a constant orbit around a constant average sun)”

          No, that is exactly what they don’t do, and I keep explaining why. What you describe would be quite wrong. Instead they form anomalies locally, by subtracting local climatology (averaged from that environment) from each region, before any spatial averaging. They avoid computing a global average temperature.
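In code form, that local-anomaly-first procedure looks roughly like the sketch below: a toy with made-up station data, not GISTEMP’s actual algorithm, which also grids the stations and area-weights the cells.

```python
import numpy as np

# Made-up annual temperatures: rows = stations, columns = years (illustration only)
rng = np.random.default_rng(1)
n_stations, n_years = 5, 40
station_means = rng.uniform(-10, 25, n_stations)   # very different local climates
temps = station_means[:, None] + rng.normal(0, 1.0, (n_stations, n_years))

baseline = slice(0, 30)   # e.g. a 30-year base period

# 1. Subtract each station's own climatology first ...
climatology = temps[:, baseline].mean(axis=1, keepdims=True)
anomalies = temps - climatology

# 2. ... then average spatially (real products area-weight gridded cells here)
global_anomaly = anomalies.mean(axis=0)

print(np.round(global_anomaly[-5:], 2))   # last five years of the toy "global" anomaly
```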

      • Kip ==> Sure, I don’t disagree because I see what you are saying… it is even more complicated than has been explicated in comments here and the errors are large even at the local climate stage of anomaly preparation/homogenisation.

        I also have many more concerns about the use of anomalies but one thing that hasn’t been discussed here yet, is their unequal application.

        To restate, the problem with the use of anomalies globally, is that their relevance is unequally represented.

        The greatest variation – in terms of anomalous temperature – occurs in the temperate zones – which just happens to be where the vast majority of the world’s population resides (Especially in the Northern Hemisphere, due to its greater land mass). While the least anomalous zones,** the oceanic (Surrounded by sea), the tropical/subtropical and the cold/polar regions are often also the most sparsely measured!

        To repeat: the most thoroughly and carefully monitored places on Earth also happen to be the most anomalous!

        I fear that this weighting is not being accounted for adequately and therefore any result* will carry significant bias.

        *Global average
        **Particularly in the Southern Hemisphere

        • Scott ==> Well, no one is trying to be “fair” — they are trying to find a way to discover if “the Earth is warming because of CO2”. In the last 20 years, they have been scrambling to find even the tiniest changes, as long as they are generally UP. (See Cowtan and Way.)
          One accurate way of illustrating the magnitude of the problem is the animated gif in the essay “GISS Temps”. When scaled in absolute degrees with the 0.5°C uncertainty range shown, the first 100 years (1880-1980) all fall inside the uncertainty range of 1880, and 1980-Present all fall into their own uncertainty range.

  8. “reducing the uncertainty of annual Global Average Surface Temperatures by a whole order of magnitude”
    Well, we’ve been through all that before. Yes, if you subtract the variable that incorporates most of the uncertainty (location/seasonal mean), you know the remainder much better.

    But there is a clear contradiction in the broad fuzz that is supposed to be the alternative fact. Expected variability actually means something. It means that you expect to see randomness of that amplitude. And you simply don’t see it. It isn’t there. There is nothing like variability of the order of ±0.5°C (sd). The graph is self-disproving.

    • Maybe you just need to remove your special Warmunist goggles, which filter what you don’t want to see.

    • The absolute errors in any measurement do (almost always) follow a statistical curve, with the true value being in the most probable region of that curve.

      But you CANNOT change the shape of that curve with ANY mathematical operation. If, as Schmidt claims, his anomalies are within the 95% region at 0.1C – this is identical to a claim that his absolute measurements are also within the 95% region at 0.1C.

      Does he claim this? No. Which leads to a simple binary conclusion – statistical incompetency, or pseudo-statistical fraud.

      • Writing Observer ==> Nick just mixes up statistical concepts with measurement concepts. The whole trick relies on overlaying a statistical construct on top of a measurement to hide the actual measurement uncertainty.

        • “overlaying a statistical construct on top of a measurement”
          You don’t measure a global average. You calculate it. And you calculate the effect of the measurement variability on the calculated average. That is statistics.

          • >>
            That is statistics.
            <<

            That may be statistics, but there’s no way to calculate a global average using physics. Temperature is an intensive thermodynamic property of a system. It applies to systems that are in equilibrium.

            Jim

          • Jim,
            “That may be statistics, but there’s no way to calculate a global average using physics.”
            It’s the only way to calculate an average of anything. Temperature applies to any system with local thermodynamic equilibrium. If you don’t have that, you can’t get a temperature in the first place. If you do (and we do), you can average.

          • >>
            If you don’t have that, you can’t get a temperature in the first place. If you do (and we do), you can average.
            <<

            Okay. I have two beakers of hot water. One’s at 50 degrees Celsius and the other is at 80 degrees Celsius. The beakers each contain different amounts of water. I pour both beakers into a third beaker that can contain all the water. What is the final temperature of the water after the temperature equalizes (assuming no heat loss or gain)? And just for grins, what’s the temperature of the water mix halfway through the process?

            Jim

          • Jim
            “The beakers each contain different amounts of water.”
            That then is the sampling issue. If you want to say that the final temperature is the average of the water before (reasonable) you have to decide how to sample that, so the sample average reflects the final. Each beaker is presumably homogeneous, so you can get those averages easily. But the between beakers is not homogeneous, so you have to sample in correct proportions – ie weighted according to mass in each beaker. Then you’ll get the right answer. Standard stats. Or, you could say, averaging by volume integration.

            And if you have combined them with limited mixing, again the average is the same, subject to proper sampling (which would be hard, since it is changing). The sampling issue diminishes as you mix, decreasing inhomogeneity.

          • >>
            That then is the sampling issue.
            <<

            Heh, and where is my answer? You got two temperatures with an LTE exact temperature. I even gave you a final LTE condition for the answer. The in-between value resembles the atmosphere. The atmosphere is never in equilibrium.

            >>
            The sampling issue diminishes as you mix, decreasing inhomogeneity.
            <<

            So no temperature with hundredths of a degree precision. How about tenths of a degree? Maybe even within a degree? I’m flexible here.

            Jim

          • Jim
            “Heh, and where is my answer?”
            As I said, more information needed, namely, the masses of the two beaker contents. If it’s 200 gm at 50C, and 100 gm at 80C, then the answer is (50*200+80*100)/300 = 60C. That is just properly weighted sampling, or, equivalently, volume integration.

            The in-between average is also 60C. We know that from conservation. Sampling needs care then, and will introduce some error. But not much, and diminishing as mixing proceeds. A few well-placed thermometers would get it very accurately, as on Earth.
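            A quick check of that mass-weighted average (the 200 g / 100 g split assumed above, and equal specific heats):

```python
masses = [200, 100]   # grams, as assumed above
temps = [50.0, 80.0]  # °C

# Mass-weighted average, valid when both parcels share the same specific heat
mixed = sum(m * t for m, t in zip(masses, temps)) / sum(masses)
print(mixed)  # 60.0
```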

          • >>
            As I said, more information needed . . . .
            <<

            Finally we get to the crux of the situation. Yes, you need more information, as the rest of us knew from the beginning. But when doing the Earth’s atmosphere, you’re fine with the information I gave you. There is no way you can “calculate” a global temperature from the few thermometers we have access to. AND it is impossible–to boot.

            >>
            We know that from conservation.
            <<

            No we don’t. You don’t know how I mixed the water. I could have poured all the water from one beaker in first and then all the water from the other beaker–or any combination and order from the two beakers. You’re assuming I poured both in simultaneously. The midway case is totally unknown and totally incalculable.

            >>
            A few well-placed thermometers would get it very accurately, as on Earth.
            <<

            Phooey!

            Jim

          • “The midway case is totally unknown and totally incalculable.”
            No. There are x Joules in the water before mixing, and density and specific heat are assumed uniform (OK, you can correct for temperature if you want). The average T before, during and after mixing is x/ρcₚ. Confirming that by measure is a matter of sampling. Simple before, with correct mass weighting. Simple after, if well mixed. Needs care in between, but can be done with good sampling.

          • >>
            Simple before, with correct mass weighting. Simple after, if well mixed. Needs care in between, but can be done with good sampling.
            <<

            You don’t know how much water is in either beaker, so you can’t calculate the final temperature. There’s no sampling during or after–we’re supposed to calculate all that. The mixing action is obviously chaotic. There’s not enough sampling in the Universe to capture all of that. What a dream-world you live in.

            Jim

          • A few missing pieces of information:
            What is room temperature, and air velocity in the room?
            Is the room substantially larger than the three containers?
            What are the material coefficients of the containers, their three masses, and each surface area and wall thickness of the three containers?
            Is the wall thickness constant across all three surfaces of all three containers?
            Are the three containers insulated on the bottom from the table? (If so, i will ignore conduction losses to the (unnamed, unspecified table top.)
            Are the two beakers poured into the third from a height that generates a spray and droplets, or are they poured in a continuous steady stream?
            Is the top of the containers closed, open, or insulated?
            What is the time from start to stop of the evolution? (If short, I will ignore evaporation losses from the liquids. If not, what is the relative humidity in the room?)
            Are the room walls at “room temperature”? (If so, I will ignore radiation losses.)

            Unless otherwise notified, I will assume pure water, if that’s an adequate assumption.

          • RACook, you forgot about altitude above MSL, barometric pressure, and the orientation of the room with respect to the magnetic field of the earth. The presence or absence of Leprechaun farts is not important.

          • Nah. Covered relative humidity, so that will pick up the pressure effect on temperature coefficients of the assumed pure water convection and evaporation, won’t it? if we’ve got relative humidity, would the evaporation effects of water at 80 deg C change with absolute pressure of the atmosphere?

            Yes, I will have to assume outside cosmic radiation influenced by the regional magnetic field and latest solar flares is too small to measure. Good point! Thank you -> Prompt, Proper, Polite, Public Peer review is essential.

            Leprechaun farts? Got to think about that. Those do flow with the ley lines above the pot of gold, ebbing as the exchange rate varies.

            And Stokes was only worried about the mass of the two water volumes, and the thermal mass of the thermometers! Guess he didn’t think of remote IR thermometers = no thermal influence on the object measured, if the proper emissivity is chosen for calibration beforehand.

          • “And Stokes was only worried about mass of the two water volumes, thermal mass of the thermometers!”
            It’s nothing to do with thermal mass of a thermometer. And all your nonsense about humidity etc is irrelevant. Surface temperature measurements on earth have nothing to do with possible variations during a physical mixing experiment. The question only makes sense in this context in terms of assessing the average temperature of a mass of water, as expressed after mixing. And I describe what you have to do to calculate that average temperature.

          • >>
            RACookPE1978

            A few missing pieces of information:
            <<

            Did you really miss the point of my example? It’s obvious Mr. Stokes did (or pretends he did).

            Jim

          • Yes, I most likely did – Taking it near-seriously at one point. Sorry about that.

            Then again, it IS easier to “measure the real world” rather than ASSUME you have accounted for all of the approximations needed to “calculate” the approximations necessary in even beginning the calculations for dynamic heat transfer. For one example, on a separate engineering web site, we are debating the change in heat transfer for a 100 deg C fluid in a vertical tank with round corners, or square corners – which does not even take into account the extra cost of fabricating the rounded corners and insulating them, compared to a simpler/cheaper square corner. (That Original Poster still has not answered what the air flow through the room is – another part of the problem.)

            In another example, I DID “measure” the surface temperature at each 50 mm from the end of a horizontal mild steel bar 25 mm x 25 mm suspended in mid-air in the 23 deg C workshop with still air (natural convection only) with one end heated to 1350 deg F by an oxy-acetylene torch. The measurements over 45 minutes by IR thermometer at one minute intervals down the steel bar did correspond closely (+/- 15 deg) with the theoretical dynamic values for the same points over the same period. Calculations “can be” correct, but ONLY if all of the conditions are known correctly, if ALL of the correct equations for the physical coefficients for the physical conditions are correctly approximated, and if the margin of error between the real world and calculated approximations are acceptable.

            Your “beakers”, for example, almost certainly have rounded corners, are most likely made of un-insulated glass or Pyrex, and most likely are not receiving solar IR if they are indoors in a “room temperature” lab; but are the walls cylindrical, or do they have a conical shape? How far up the walls of the three beakers does the initial water go? 8<)

          • >>
            [Y]es, I most likely did – Taking it near-seriously at one point.
            . . .
            Your “beakers” for example . . . .
            <<

            I did state that no heat was gained or lost–and if you assume that no matter was gained or lost then we are dealing with an isolated system.

            In any case, I apologize for being snippy.

            It was obvious to you, to me, and even to Mr. Stokes that more was needed to answer that problem correctly. Yet Mr. Stokes claims that with a few randomly placed thermometers we can calculate the average temperature of the entire surface (atmosphere?) of the Earth. And that we can start with a set of temperatures with one-degree precision and obtain hundredths of a degree accuracy. It does boggle the mind.

            Jim

          • Jim Masterson

            Ah, but dear sir! Mass IS lost (in a real-world basis) if no cover/lid is assumed on the beakers!

            And Energy IS LOST from the moment the “experiment” begins, even if the original two beakers are plugged up and have no evaporative/convective losses as the vapors exchange:

            Heat energy is lost from the 80 deg C water through the water-film barrier to the first beaker wall (probably Pyrex) to the outer air-film wall to the (probably natural convection) heat transfer to the (near-infinite) room air and room walls (include radiation losses here to the room walls-floor-ceiling; each has a different view factor, probably a different emissivity as well), and conduction losses to the table (probably at room temperature at t=0) through the beaker base plate (if not perfectly insulated).

            OK, so now you have the dynamic heat exchange for beaker_80 deg C (at t=0) to the room environment.
            Repeat for beaker-50 deg C.
            Assume the beaker-final started at room temperature on a table at room temperature … And that is just some of the initial conditions!

            IF you are going to “calculate the final temperature of …” then you CANNOT remove ANY simplification to the process, UNLESS you also remove your calculation from any relevance to the real world! That is what I was trying to say to the engineer asking about the rounded edges of his “tank holding 100 deg C water”: What difference does it make?
            What is important? (Energy lost, energy potentially saved?)
            Total cost?
            “Keep the water at 100 deg C regardless of expense and material!!!”
            “NEVER let the water heat up to 100 deg C regardless of cost, time, money, material, investment, instruments!!!!”
            Or even, what difference does it make if I make the tank out of a round pipe with flat ends? (As long as I keep a vent on the tank so it can never turn into an unlicensed, deadly pressure vessel.)

          • Nick ==> Calculating is not statistics. Statistics is “Branch of mathematics concerned with collection, classification, analysis, and interpretation of numerical facts, for drawing inferences on the basis of their quantifiable likelihood (probability)”.
            Averages, means, etc have calculations from measurements — they are not inferences on the basis of probability.
            Mixing the two fields is where scientists get into trouble.
            The average of 2, 8, and 10 is 10. There are no probabilities involved.
            When Gavin Schmidt says the GAST for 2016 was 288.0±0.5K , he is talking an average of measurements (a complicated average, but an average none the less). He is not talking the probability of the GAST. His GAST figure has an uncertainty range, because the measurement natively had that range — it must remain. He gives the same range for the Climatic Mean — which is a simple arithmetic average of 30 years of annual data. None of these are probabilities.
            Pretending that uncertain measurement data can be magically reduced to precise anomalies is FALLACY – and a misuse of statistics to produce unjustifiable certainty about uncertain data.

          • “The average of 2, 8, and 10 is 10. There are no probabilities involved.”
            Not statistics as I learnt it. An average is a statistic. But the point is that the dependence of the mean on the variation of the data is certainly a matter of statistics. And probability is basic to the notion of error.

          • Nick ==> That’s alright then — the average of 12, 8 and 10 is 10 though.
            I have always suspected that you see everything as statistics…..

          • Kip,
            “His GAST figure has an uncertainty range, because the measurement natively had that range — it must remain.”
            Not true. The uncertainty of GAST comes mainly from the sampling error (choice of locations) amplified by the great inhomogeneity of absolute temperature (altitude etc). It doesn’t come from the measurement error at an individual location. The reason the anomaly average error is so much lower is that the anomalies are much more homogeneous.

          • Kip Hansen

            Averages, means, etc have calculations from measurements — they are not inferences on the basis of probability.
            Mixing the the two fields is where scientists get into trouble.
            The average of 2, 8, and 10 is 10. There are no probabilities involved. …

            Pretending that uncertain measurement data can be magically reduced to precise anomalies is FALLACY – and a misuse of statistics to produce unjustifiable certainty about uncertain data.

            Check the arithmetic in your example, please: (2+8+10)/3 = 6.66 Not 10. 8<) Ask the mods to edit, if you wish.

          • RA ==> Quite right, typed and calculated mentally in haste…I meant to type 12, 8 and 10…..somehow that little sneaky “1” got away.

            [A 20 (vice 10) could work as well. .mod]

          • Kip you are flat out wrong. The average of all the measurements is a statistical estimator of the GAST, and being an estimator there is a “probability” of it being correct. As you well know, the probability of its “correctness” can be increased with increasing numbers of observations that go into calculating the estimator. I hope everyone is aware that you cannot measure the Earth’s surface temperature; we can only estimate it based on a multitude of distinct individual thermometer readings in space and time. Please tell me, Kip, that I don’t need to go into a discussion with you about the relationship between population means and sample means.

          • David ==> All temperature data used in all temperature calculations are RANGES, one degree F wide in the US, auto-converted to degrees C rounded to 1/10 degree C. The uncertainty range is 1 degree F from the get-go. (That range widens a bit due to all the other uncertainties involved in measuring temperature.) A daily average temperature is not the average temperature of the day, but the arithmetical mid-point between the high and low for the day. The monthly average for a station is the arithmetical mean of all the days’ averages (add all the days, divide by the number of days). Nothing so far is anything other than elementary school arithmetic, some of it technically wrong (the daily average). Nothing in this involves the probability of anything — it is arithmetic.

            There are no population means or sample means — there is no sample (and where there is a sample — daily mean – it is the wrong sample).

            GAST has almost nothing to do with the actual temperature records of actual stations over time — it is a smeared and leveled hodge-podge of a metric. Read the BEST Methods paper. According to Stokes and Mosher, they don’t even try to find the Global Average of anything — they do something quite different, and then just call it the GAST.

            Had they actually been trying to find the average, it would be an arithmetical MEAN of all the points — not a probability. (Granted, they would have fudged a majority of the data filling the grid by kriging, smoothing, or just outright guessing, but in the end they find the arithmetical mean of all the means of all the grids.)

            Arithmetical means are just that — there is no probability figured anywhere.
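
            For readers who want the arithmetic chain spelled out, here is a minimal Python sketch of the procedure described above (illustrative numbers only, not any agency's actual code): a daily "average" taken as the high/low midpoint, a monthly station mean of those midpoints, and the conversion to Celsius.

            # Sketch of the arithmetic chain described above (illustrative values only).
            # Daily "average" = midpoint of the recorded high and low, not a true time-average.
            daily_highs = [72, 75, 71, 68, 70]   # whole degrees F, as recorded
            daily_lows = [55, 57, 54, 50, 52]

            daily_means = [(hi + lo) / 2 for hi, lo in zip(daily_highs, daily_lows)]

            # Monthly station "average" = arithmetic mean of the daily midpoints.
            monthly_mean_f = sum(daily_means) / len(daily_means)

            # Conversion to Celsius, as the records are reported downstream.
            monthly_mean_c = (monthly_mean_f - 32) * 5 / 9

            print(daily_means)                                           # [63.5, 66.0, 62.5, 59.0, 61.0]
            print(round(monthly_mean_f, 2), round(monthly_mean_c, 2))    # 62.4 16.89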

          • And IF you are NOAA you can assume accuracy to two decimal places. Despite climate dot gov saying this –

            “Across inaccessible areas that have few measurements, scientists use surrounding temperatures and other information to estimate the missing values.”

    • I agree that what we are measuring, within uncertainty, corresponds to what is happening. Many proxies agree with the periods of warming and cooling for the past 170 years indicating that they are real.

      And all that debate about the right or wrong databases is just silly. They are all measuring essentially the same thing.

      https://oz4caster.files.wordpress.com/2018/09/m0-gta-2014-2018-08a.gif

      What people have trouble understanding is that a difference of ±0.5°C is very little on a daily or monthly scale, but it is huge on a millennial scale. The 6000-year Neoglacial cooling has taken place at a rate of ~ 0.2°C/millennium.

      • Javier ==> Thus, my last graph with bounds….
        In measurement of Global Temperature, an UNCERTAINTY of 0.5°C is HUGE — and yet is purposefully hidden in modern CliSci.

        • I disagree that the uncertainty is as high as you represent it. If that were true we should see much bigger oscillations and a much bigger spread in the measurements. The uncertainty is probably half that.

          • Javier ==> Here we are talking Original Measurement Uncertainty — which Dr. Gavin Schmidt and I agree is properly represented as +/- 0.5°C for all of the following: individual station temperature measurements, monthly and annual station averages, climatic period averages, and annual global averages.
            Thus, the original measurement uncertainty also properly devolves to the ANOMALY of those last two metrics if we want to show the REAL uncertainty of the “thing” (global temperature) that the anomalies are meant to represent.
            This is the truth nugget of the Manski paper — there are “statistically valid” ways to show data that totally misrepresent the uncertainty in the data itself.
            In my re-draw of the graph, I added 0.5°C to the top-most trace and subtracted 0.5°C from the bottom-most trace, so the extra width comes from the inclusion of the spread of the data traces selected as well …. this adds about 0.55°C to each set of uncertainty ranges….it would have been about 2/3 the width if I had used the mean of the traces, so it does look a little exaggerated. Good eye!

          • The problem with the global uncertainty is that we have several orders of magnitude too few stations to accurately portray the planet’s temperature, even if we had a way to perfectly measure the temperature at the stations we do have.

          • “Here we are talking Original Measurement Uncertainty — which Dr. Gavin Schmidt and I agree is properly represented as +/- 0.5°C”
            You are misrepresenting Gavin. He does not say (there) that 0.5 is Original Measurement Uncertainty. He says it is the expected error of a global average of climatology (time averaged temperature). It isn’t the error in individual temperature readings. The expected error of a global anomaly average is something quite different again.

          • Nick ==> The data given by Schmidt is what he gives. I can’t read his mind — so you might be right — he may think that it is merely a coincidence that the uncertainty in the GAST and Climatic Mean are the same as the uncertainty in the original measurements. But when one averages any data that is most correctly represented as a range, the average — the mean — of the data maintains the same uncertainty range.
            It can’t be otherwise.

          • Kip,
            “I can’t read his mind”
            One can read the link, though. He says:
            “However, and this is important, because of the biases and the difficulty in interpolating, the estimates of the global mean absolute temperature are not as accurate as the year to year changes.”
            He does not mention thermometer error. Biases relate to things like latitudinal differences, and interpolation to variations on the interpolation scale, such as topography and the land/sea boundary.

          • You mean like reader error? Like the Australia post employee who was reading temperatures from the wrong end of the slide? Reported in The Australian.

          • Nick ==> They are “estimates” because of the process which is not statistical in nature, but simply arithmetical — starting with the original individual measurements we have uncertainties (much wider than the eventual global anomalies). The year-to-year changes are only considered more accurate because the uncertainties are dropped off and ignored.

    • Nick ==> As you well know, we don’t see “variability of the order of ±0.5°C (sd)” in the annual GAST because it has been smoothed out by prior rounding of daily then monthly then annual averages. Original measurement uncertainty is not, and never was, a “standard deviation”. That is a statistical concept, not a measurement concept.

      You’ll have to take this up with Dr. Schmidt — the +/- 0.5°C is his (and I thoroughly agree). It is also his (correct) claim that using the actual uncertainty in Absolute GAST in degrees means that we cannot distinguish between the reported temperatures of recent years (see the last graph in the essay proper).
      I’d love to see you argue the point with Gavin…..

      • Kip,
        “it has been smoothed out by prior rounding of daily then monthly then annual averages”
        Well, yes, that is the point. Averaging, whether over time or space, reduces variability. That is often the reason for seeking an average. It isn’t added smoothing; it is intrinsic.

        “You’ll have to take this up with Dr. Schmidt — the +/- 0.5°C is his (and I thoroughly agree).”
        I agree too. The 0.5 is the error you would expect in an average of absolute temperature. It comes from the lack of knowledge of the average climatology – the spatial average of the time averages. What GISS, and everyone else with sense, calculate is the average anomaly. That doesn’t have that uncertainty. You don’t calculate the average absolute, then subtract the average climatology – that can’t work. You subtract the local climatology (not subject to the 0.5 uncertainty) from the local temperature (==> anomaly), then average. That is Gavin’s point, long made by GISS and others. One way has big errors, the other doesn’t. They use the right one.
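
        A minimal sketch, with made-up numbers, of the order-of-operations point being made here (not any product's actual code): subtracting the local climatology first gives the same answer as subtracting the average climatology when the station set is fixed, but is far less sensitive to a station dropping out.

        # Made-up numbers: three stations with very different climatologies.
        climatology = {"A": 25.0, "B": 5.0, "C": -10.0}   # long-term local means, deg C
        this_month = {"A": 25.4, "B": 5.3, "C": -9.6}     # this month's absolute temps

        # Subtract the LOCAL climatology first, then average the anomalies.
        anom_avg = sum(this_month[s] - climatology[s] for s in this_month) / 3

        # Average the absolutes, then subtract the average climatology.
        abs_avg = sum(this_month.values()) / 3
        clim_avg = sum(climatology.values()) / 3
        print(round(anom_avg, 3), round(abs_avg - clim_avg, 3))   # 0.367 0.367 (same for a fixed set)

        # If the cold station drops out, the absolute average jumps by several degrees
        # with no real change in climate, while the anomaly average barely moves.
        reporting = {"A": 25.4, "B": 5.3}
        print(round(sum(reporting.values()) / 2, 2))                                 # 15.35 (was about 7.03 with all three)
        print(round(sum(reporting[s] - climatology[s] for s in reporting) / 2, 2))   # 0.35 (was 0.37)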

        • Nick ==> Well, we’ll have to disagree. You cannot get rid of the original uncertainty by simply claiming that the “local climatology” is not subject to uncertainty. Of course it is — there is no way to remove it that is not just PRETENSE.

          By the way, read Gavin’s post — he subtracts the Annual GAST from the Climatic Mean — not some local climatology statistic — to find the Annual Anomaly.

          Making public policy depends on a proper reporting of the uncertainty in scientific reports. The insistence that GAST Anomalies are [nearly] uncertainty free is just a way of covering up that we can’t really be sure of any difference between recent years. This “problem” is a feature — the real representation of scientific uncertainty regarding Global Temperature annual averages. It is not a “bug” to be overcome through shifting to anomalies with wee-tiny SDs pretending to represent our uncertainty.

          • Kip,
            “he subtracts the Annual GAST from the Climatic Mean — not some local climatology statistic — to find the Annual Anomaly.”
            He may do that for working with reanalysis data. You can do that there because the values are on a regular grid which never changes, so whether you subtract the local or global climatology gives exactly the same result. But with GISS and other GAST products (including TempLS), it isn’t the same. There is always a different mix of stations each month. That means the climatology average (spatial) would change each month. You could calculate that, but simpler and equivalent is to subtract the climatology locally (station-wise) to get a local anomaly before averaging. This is fundamental, and all products do it. Well, almost all – USHCN used to average absolute temperatures, but to make that work they had to interpolate missing values, so that they could fulfil the alternative requirement of having an unchanging set of sample locations. Not optimal.

            There is no claim that local climatology is free of uncertainty. What matters is that the spatial anomaly average is much less uncertain than spatial temperature average, as Gavin is explaining.

          • Nick ==> Data with uncertainty is a RANGE. All subsequent calculations must use the entire range.

            You are using a trick of definitions to reduce the uncertainty about Global Temperature and its changes.

            The GAST is uncertain to 0.5….thus a range from GAST+0.5 down to GAST-0.5 — one degree. The Climatic Mean is uncertain to 0.5 …thus is a range from CM+0.5 to CM-0.5. Subtract CM from GAST. The correct answer is a range from (GAST+0.5)-(CM-0.5) down to (GAST-0.5)-(CM+0.5). The new range — the range of the anomaly — is 2 degrees top to bottom.

            One only gets your tiny nearly-uncertainty-free anomalies by dropping the uncertainty of both the GASTannual and CM.
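
            A minimal sketch, with illustrative values, of the worst-case interval arithmetic described in the comment above; the last two comment lines show the standard quadrature rule for independent errors, added only for comparison.

            # Worst-case (interval) subtraction as described above; illustrative values only.
            gast, cm, u = 14.8, 14.0, 0.5

            anomaly_hi = (gast + u) - (cm - u)
            anomaly_lo = (gast - u) - (cm + u)
            print(round(anomaly_lo, 2), round(anomaly_hi, 2))   # -0.2 1.8: a 2.0-degree-wide range, as stated

            # For comparison, treating the two errors as independent and adding them in
            # quadrature gives (0.5**2 + 0.5**2) ** 0.5, about 0.71, i.e. a ~1.4-degree-wide range.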

  9. Why poor Pluto? If he had not kidnapped Proserpine and fed her pomegranates there would be no winter, just balmy comfortable temperatures and continuously growing crops. I feel it is Pluto we have to blame for all of our problems.

    /grins

    • Richard ==> I have pity on poor Pluto — having been demoted by the high-minded autocrats in astronomy, simply because he is too small, an obvious case of iotaphobia.

      • Pluto wasn’t demoted because it was too small, it was demoted because it hasn’t cleared its orbit of other objects.

        The Earth hasn’t cleared its orbit of other objects either, yet we call it a planet. Poor Pluto is right! As far as I’m concerned, Pluto is still the Ninth planet. 🙂

  10. It seems that most scientists think that if you average enough samples, you will reduce uncertainty and get a more accurate result.

    It’s common to call our measurements the signal and to call the errors noise. If the errors are truly random then the assumption, that averaging reduces uncertainty, is correct. One of my favorite demonstrations is to extract a small signal from data where the noise is a hundred times as strong as the signal. It’s a powerful technique.

    The problem comes when the noise is not truly random. Then we can have red noise. If you play red noise and random noise over a speaker you can hear the difference. The red noise has more low frequencies and fewer high frequencies.

    Natural processes tend to produce red noise. Averaging does not reduce uncertainty where there is red noise. link

    I’m not a statistician but I have lots of experience processing signals. I strongly suspect that the uncertainty of the global temperature is understated. The idea that any kind of processing can reduce that uncertainty doesn’t pass the smell test.

    Just as James Hansen, with no rigorous mathematical justification, threw Bode feedback analysis at climate sensitivity, it seems that climate scientists have done the same with averaging. In both cases the method is unjustified. There should be a rule that, where statistics is invoked, a statistician should be involved.
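
    A small Monte Carlo sketch of the white-noise versus red-noise point (purely illustrative, not a model of the temperature record):

    # How the spread of an average shrinks with sample size for white vs red (AR(1)) noise.
    import random
    import statistics

    def spread_of_mean(n, make_noise, trials=1000):
        # Standard deviation of many trial means: the "uncertainty" of the average.
        return statistics.pstdev([sum(make_noise(n)) / n for _ in range(trials)])

    def white(n):
        return [random.gauss(0, 1) for _ in range(n)]

    def red(n, phi=0.9):
        x, out = 0.0, []
        for _ in range(n):
            x = phi * x + random.gauss(0, 1)
            out.append(x)
        return out

    for n in (10, 100, 1000):
        print(n, round(spread_of_mean(n, white), 3), round(spread_of_mean(n, red), 3))
    # The white-noise spread falls roughly as 1/sqrt(n); the red-noise spread is much
    # larger at every n, because correlated samples carry less independent information.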

    • commie ==> As you might remember, I wrote a whole series on averages. I think the whole signal/noise thing is as misplaced as Bode feedback.
      Anomalies are an easy way to look at a changing metric, but it is not scientifically proper to remove the Original Measurement Uncertainty — that is not valid and masks, intentionally or not, the true uncertainty. Masking uncertainty leads to the weird beliefs we see in CliSci today — where tiny changes in long-term averages are considered not only significant (scientifically) but Important and Dangerous.
      I urge all readers here to read Manski’s paper, which is under discussion over at Climate Etc.

      • I think the whole signal/noise thing is as misplaced as Bode feedback.

        A lot of statistics was developed in the context of electronic communications, starting with the telegraph. That’s why the math texts refer to signal and noise. You can think of signal as error free data and noise as errors.

        The main point is that it is very easy, and super tempting to misapply math, be it boxcar averaging or feedback analysis.

        Early in my education, a friend’s thesis advisor told me that students tend to apply any old formula in situations where it totally doesn’t apply. By the end of my career, I realized that that unfortunate habit lasts the whole of some scientists’ careers. The good thing is that it gets crushed out of engineers’ souls within about a year of graduation. 🙂

    • With a background in mechanical, not electrical, engineering, I am willing to state categorically that I cannot measure an object with a ruler denominated in eighths of an inch (0.125in) 1000 times and pronounce its length to an accuracy of 0.001 in. Nor can I measure 1000 similar objects with that ruler and pronounce their difference to an accuracy of 0.001 in. I leave it to the readers to guess whether “averaging” thousands of readings of thousands of thermometers by thousands of different people, recorded to the nearest 0.5 K, can produce a meaningful “anomaly” of 0.01K. The characteristics of “signal” and “noise” be damned.
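
      The ruler example as a minimal sketch (made-up length, purely illustrative):

      # Measuring one fixed length many times with a ruler graduated in 1/8 inch.
      true_length = 3.472   # inches, illustrative

      def read_with_ruler(x, grad=0.125):
          return round(x / grad) * grad   # every reading lands on the same 1/8" mark

      readings = [read_with_ruler(true_length) for _ in range(1000)]
      print(sum(readings) / len(readings))   # 3.5 every time; no extra digits appear

      # Averaging only sharpens an estimate when readings scatter randomly about the
      # true value; a thousand identical quantized readings carry no new information.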

      • skorrent1 ==> Thousands of different thermometers measuring millions of different temperatures in thousands of different places read by thousands of different people (or electronics) then rounded to the nearest degree F — then automagically converted to degrees C to the nearest 0.5 — then hundreds of measurements used to find the daily mean of Highest and Lowest, then averaged over a month, then averaged over a year…….eventually reduced to an “annual anomaly” (sans uncertainty) to the nearest 1/100th of a degree C.
        It is, really, much worse than we thought….

          • If you follow that logic, your cell phone can’t possibly work.

            As an example, consider the problem of extracting the tiny signal from the Voyager 1 spacecraft from the background noise. link

            The reason folks think they can improve their uncertainty is that such things are possible given the right circumstances. My beef is with the scientists who don’t understand the ‘circumstances’ part.

    • The problem most are seeing is the obvious one: statistics are pointless without understanding what is underneath them. The classic is the average number of children per couple, which is some fraction like 1.5 or 2.5 … everyone knows you can’t have a fractional child.

      The problem with what they are doing is along the same lines. The system has feedbacks and delays and the force/driver they are measuring is a Quantum process (it is what makes radiative transfer difficult).

      They may have convinced themselves that the statistics mean something but the laws of nature have a habit of making statisticians look stupid.

    • “Averaging does not reduce uncertainty where there is red noise.”
      Just not true, and your link does not say that. There are degrees of redness. If numbers are correlated, the uncertainty of the mean does not reduce by the simple OLS quadrature, but it does reduce. In fact, the covariance matrix simply enters the quadratic sum.
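
      For what it is worth, a minimal sketch of that covariance-sum point, using an illustrative AR(1) correlation rather than any real station data:

      # Variance of a mean of n correlated values: (1/n^2) * sum of all covariance entries.
      import math

      n, sigma, phi = 100, 0.5, 0.7   # illustrative AR(1) correlation between neighbours

      cov_sum = sum(sigma**2 * phi**abs(i - j) for i in range(n) for j in range(n))
      sd_mean_correlated = math.sqrt(cov_sum / n**2)
      sd_mean_independent = math.sqrt(sigma**2 / n)

      print(round(sd_mean_independent, 3))   # 0.05
      print(round(sd_mean_correlated, 3))    # ~0.117: reduced, but not by the full 1/sqrt(n)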

      • Red noise usually implies a slow drift. In that case the uncertainty/error of the individual measurements may well be better (ie. lower) than that of the average of all the samples. Been there, done that, got the t-shirt.

  11. Just a retired engineer and amateur astronomer, but my confidence in the minuscule star wobbles in defining the very existence of a planet and ‘habitable zones’ definitions is low. Too many other variables affect habitability: stellar eruptions, solar wind, planet rotation, magnetic field, atmosphere and its density and composition, and on and on. Most of the “earthlike” planet hype is just that, hype. But the actual search is great for expanding science. Much too much certainty is implied, however, in most of the hype.

    • “In astronomy and astrobiology, the circumstellar habitable zone (CHZ or sometimes “ecosphere”, “liquid-water belt”, “HZ”, “life zone” or “Goldilocks zone”) is the region around a star where a planet with sufficient atmospheric pressure can maintain liquid water on its surface.”

      A distant planet is considered a good place to colonize by above definition once we ruin Earth by raising the temperature by a few degrees?

      Earth would still have water on the surface if temperatures rose 10 degrees.

      SR

      • How would Earth get ruined even with a 10 C average increase? Most of the ice sheets would still not melt with that temperature rise. Inland Antarctica is way below -10 C only 10 metres under the surface, even in the summer time.

  12. Christy and Spencer report UAH results as anomalies as well. How does that figure into this analysis?

    • Mark ==> The use of anomalies is just so “standard” in CliSci that it may be impossible to rectify — part of the problem, as Dr. Schmidt admitted in his blog post, is that they keep changing the absolute data points….they are forever and ever “adjusting” the annual mean GAST (for myriad reasons, some valid, some spurious), as everyone notices….and advancing the climatic period mean. What they want to show is “it is getting hotter” — and “we’re sure”.

      The reality is: “It may be getting a little warmer, we’re pretty sure on the century scale, not so sure on the decadal scale”.

  13. I made a very similar case in my papers examining uncertainty due to systematic measurement error in the global air temperature record, here (900 kb pdf), and here (1 mb pdf).

    Everyone in the field, GISS, UKMet/UEA, BEST, and RSS and even UA Huntsville for the satellite temps, assume measurement error is a constant that subtracts away in an anomaly.

    That assumption is untested and unverified. As you directly imply, Kip, the entire field rests on false precision.

    • Unfortunately the UAH data is the only temp data that both sides trust. Nick Stokes has said he doesn’t even trust that data set, but he has inadvertently argued countless times that it represents his alarmist viewpoint. That is because some skeptic will point out some UAH decrease and Nick will jump in arguing that the decrease is meaningless when you look at the overall trend of the UAH data. That proves that Nick Stokes is willing to accept the UAH figures. All the other alarmists implicitly accept that data as well, even though they are loath to admit it.

      Unfortunately for us skeptics, the satellite era of temp measurement began in 1979. That was a low point for actual global temp. We will have to live with that cherry-picked starting point. For now there is a definite short-term upward trend. However, if there is no long-term trend, then the data will sort itself out eventually.

      The alarmists cannot really argue against the UAH dataset, because that would be saying that they think that Christy and Spencer are fudging the figures to make it look as if there will never be any warming. However, if there really was CAGW and Christy and Spencer were fudging the figures, then the alarmists would have to argue that those 2 scientists would be putting all humanity at risk by doing that. What possible gain to Christy and Spencer would there be, to put all of humanity at risk? That assertion is ludicrous in the extreme.

      The other side of the coin is not true however. When an alarmist argues for CAGW he is not putting all of humanity at risk by him being wrong. He is only condemning humanity to poverty by carbon taxes.

      • Well, if Spencer and Christy are fudging the figures, then so are the operators of the weather balloons since both sets of temperature data are in agreement.

        http://www.cgd.ucar.edu/cas/catalog/satellite/msu/comments.html

        “A recent comparison (1) of temperature readings from two major climate monitoring systems – microwave sounding units on satellites and thermometers suspended below helium balloons – found a “remarkable” level of agreement between the two.

        To verify the accuracy of temperature data collected by microwave sounding units, John Christy compared temperature readings recorded by “radiosonde” thermometers to temperatures reported by the satellites as they orbited over the balloon launch sites.

        He found a 97 percent correlation over the 16-year period of the study. The overall composite temperature trends at those sites agreed to within 0.03 degrees Celsius (about 0.054° Fahrenheit) per decade. The same results were found when considering only stations in the polar or arctic regions.”

        end excerpt

        • I asked Roy Spencer about systematic error in the satellite measurements when I met him at the Heartland Institute. He agreed the systematic measurement error was ±0.3 C, but then said it subtracted away in an anomaly. It’s an untested assumption.

          Agreement between measurements doesn’t dismiss uncertainty. Radiosondes have problems of their own.

          • No Frank, Spencer is correct…….when anomalies are used, the systematic error is eliminated. It’s not an assumption, it can be shown to be true mathematically.

          • If you use a broken ruler to measure the height of your growing child, you can easily see the growth, even if you don’t know how many centimeters tall your child is.

          • David Dirkse

            If you use a broken ruler to measure the height of your growing child, you can easily see the growth, even if you don’t know how many centimeters tall your child is.

            No, I cannot. Unless I scratch a line in the wall at uniform periods using the sharpened tip of the broken ruler at exactly the same relative height from the child’s head. This assumes the broken ruler is long enough to reach the wall from the top of the child’s head. Otherwise I cannot use the broken ruler at all.

          • You missed the point RACook. If the thermometer used to collect temperature data at a location was off by -3 degrees for every reading it took in its 30 years of data collection, the anomalies calculated from it would be correct.

          • off by -3 degrees for every reading

            And how do you know that’s true, David? It’s never been ascertained to be true for satellite temperatures.

          • Pat Frank needs to learn how to read. My statement was: ” IF the thermometer used to collect temperature data at a location was off by -3 degrees”

            Look up how the word “if” is used in logic Mr. Frank. You of all people should understand what “hypotheses” means.

          • Dear Mr Frank. I guess I will have to school you in propositional logic. Here is the truth table for implication https://appliedsentience.files.wordpress.com/2014/02/p-then-q1.png

            ..
            You will note that even if the “assumption” (hypothesis) is false, the implication is true irrespective of the truth value of the consequence. So your claim that my assumption is wrong holds no weight.
            ..
            Is that “adult” enough for you? ( I guess you missed the fact that the “if” in my statement was both capitalized and bold faced )

            What you have to show to object to my logic is that a true hypothesis leads to a false conclusion.

            [???? .mod]

          • Pretty simple, Mr “mod”. The only time an implication is false is when the hypothesis is true and the consequence is false. See the truth table I posted. Frank makes no mention of the consequence in my argument.

          • We’re talking science, not logic David. Your rules of logic are irrelevant.

            In science, a false hypothesis predicts incorrect physical states. Such a hypothesis can be logically sound and internally coherent. But it’s false, nevertheless.

            On the other hand, and again in science, a hypothesis is physically true when its predictions (your implications) are found physically correct.

            Your constant negative-3-C-off thermometer was wrong in concept and, as you later interpreted it, your revised “IF” statement carried your argument into fatuity-land.

          • What makes you think science follows your linear logic, David?

            Science is deductive, falsifiable, and causal. If your assumption is false, your theory is wrong, and the deduced physical state is incorrect.

            It doesn’t matter if a value predicted by your wrong theory matches some observation. It tells you nothing about physical reality.

            Next, and specifically with respect to your vaunted “IF“-then: if you propose a non-existent and substantively irrelevant thermometer (one off by a constant -3 C), then your argument is both a non-sequitur and fatuous.

          • WRONG Frank, any implication deduced from a false hypothesis is TRUE. The truth value of the conclusion/consequence of a false hypothesis in said deduction is indeterminate, You fail Logic 101.
            ..
            In science (which you should be familiar with) you cannot deduce a false conclusion/consequence from a true hypothesis. You should also know that you can deduce ANYTHING from a false hypothesis. Furthermore, all you need to do to prove a hypothesis false is to deduce a contradiction from it, which everyone knows as reductio ad absurdum.

            Now Frank, please prove the following statement false: ” If the thermometer used to collect temperature data at a location was off by -3 degrees for every reading it took in its 30 years of data collection, the anomalies calculated from it would be correct.”
            ..
            Thank you in advance.

          • Now, if you find trouble in this Frank, I suggest you do the following.
            .
            1) Get a dataset for a temperature measuring site that has data for 30+ years.
            2) Calculate the 30-year average, then calculate the anomaly for a given time period.
            3) Take the dataset, and subtract 3 degrees from every single reading in it
            4) Re-calculate the 30-year average, then recalculate the anomaly for the same time period.
            5) You’ll notice that the anomalies in both calculations are identical.

          • Here ya go Franky:
            .
            Dataset A: { 3, 4, 8, 1, 2, 4, 6, 11, 9}
            Dataset B: { 0, 1, 5, -2, -1, 1, 3, 8, 6}
            Average A: 48/9 = 5.333
            Average B: 21/9 = 2.333

            Anomaly A: { -2.333, -1.333, 2.667, -4.333, -3.333, -1.333, 0.667, 5.667, 3.667}
            Anomaly B: { -2.333, -1.333, 2.667, -4.333, -3.333, -1.333, 0.667, 5.667, 3.667}
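
            The same demonstration in a few lines of Python, reproducing the arithmetic above:

            # Dataset B is dataset A shifted by a constant -3; the anomalies are identical.
            a = [3, 4, 8, 1, 2, 4, 6, 11, 9]
            b = [x - 3 for x in a]

            def anomalies(data):
                mean = sum(data) / len(data)
                return [round(x - mean, 3) for x in data]

            print(anomalies(a))
            print(anomalies(b))
            print(anomalies(a) == anomalies(b))   # True: a constant offset cancels exactly

            # The dispute in the thread is whether real measurement error is in fact a
            # constant offset; an error that drifts or varies does not cancel this way.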

          • Irrelevant, David. The point is that systematic measurement error is known to be inconstant for surface temperatures and is not known to be constant for satellite temperatures.

            In such cases, anomalies not only retain the measurement uncertainty but increase it by the root-mean-square of the error in the measurements entering the difference.

            In your example, your data values neglect the ±uncertainty present in all real measurements. That makes your example a set-piece error.

            You have also violated the limit of significant figures. Your values are represented as good to integer accuracy, but you quote the averages and the anomalies to three significant figures past the decimal. Very wrong.

            The uncertainty in integer value measurements can be approximated as ±0.40, which in every case rounds to the given integer. The root-mean-square (rms) uncertainty in your averages is then sqrt[(9*(0.4)^2)/8] = ±0.42

            When you take the anomaly, the uncertainty is the rms of the uncertainty in the value and in the average, which is then ±0.58.

            Your anomalies A and B are then -2±0.58, -2±0.58; -1±0.58, -1±0.58; … etc. Now how do you know they’re identical?

            Your continued focus on false demonstrations indicates you plain don’t know what you’re talking about. Evidenced also by your increasing recourse to derision.
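
            The propagation arithmetic in the comment above, as a short sketch that follows the formulas exactly as stated there (the +/-0.40 resolution figure and the divisors are taken from that comment, not from any standard reference):

            # Nine integer-resolution values, each taken as uncertain by +/-0.40.
            import math

            n, u = 9, 0.40
            u_mean = math.sqrt(n * u**2 / (n - 1))    # ~0.42, as quoted above
            u_anomaly = math.sqrt(u**2 + u_mean**2)   # ~0.58, rms of value and mean
            print(round(u_mean, 2), round(u_anomaly, 2))   # 0.42 0.58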

          • The definition of “true” in the context of your reply is, “follows from the premise”.

            The definition of true in science is correctly predicted using a falsifiable physical theory.

            Your entire argument is an exercise in the equivocation fallacy. Therein, different meanings applied to the same word are used to leverage a false argument into the guise of truth.

            Your argument fails. Your logic is irrelevant and wrongly applied.

            You are wrong to suppose one can deduce anything from a false hypothesis. Were that correct, falsification would be impossible. A hypothesis of valid standing in science, whether true or false, predicts one outcome.

            Falsification in science is not deduction of a contradiction. It is an incorrect prediction of an observation or experiment.

            Your application of logic is one long malaprop.

            Finally, the point at issue is that your claim of constant systematic error is wrong. Your request for proof of a substantive non-sequitur is a specious attempt to evade facing your failure.

          • I’m not a mean guy, Tom, honest. 🙂

            I’m just trying to defend science from those who are trying to destroy it. David has been misled by the real culprits.

          • Frank fails Logic 101 “Falsification in science is not deduction of a contradiction.”

            When the observation of an experiment contradicts the prediction derived from the hypothesis, you have a contradiction.
            The reasoning goes like this:
            A=hypothesis
            B=consequence
            A—>B is the prediction
            A—>not-B is the experimental results.
            ….
            Since B and not-B is a contradiction, the truth table for implication only allows this if A IS FALSE
            ….
            See Frank? That is how science “works.” That is how a hypothesis is falsified.
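
            The truth table being argued over, enumerated in a few lines of Python (sketch only):

            # Material implication: A -> B is "not A or B".
            def implies(a, b):
                return (not a) or b

            for a in (True, False):
                for b in (True, False):
                    print(a, b, implies(a, b))
            # Only (A=True, B=False) makes A -> B false.  If both A -> B and A -> not B
            # are to hold, A must be False, which is the falsification step described above.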

          • Here’s what you wrote in your truth-table post, David: “You will note that even if the “assumption” (hypothesis) is false, the implication is true irrespective of the truth value of the consequence. (my bold)”

            Now you’re writing, “the truth table for implication only allows [a contradiction] if A IS FALSE.”

            Your “implication” is equivalent to a prediction. But now you’ve moved on to consequence, i.e., to an experimental or observational result.

            Your truth table doesn’t account for consequence at all. It merely relates the logic of a hypothesis and its prediction.

            You’re shifting your ground, as you did in the discussion of systematic error and anomalies.

            You’re just leveraging the equivocation fallacy again.

          • I’m not shifting any ground, you just don’t understand simple things. The prediction is the implication that the consequence follows from the hypothesis. The experiment is the implication that the observation follows from the SAME hypothesis. I’m surprised that a professional scientist such as yourself is ignorant of basic logic. You need to understand how THIS SIMPLE EXAMPLE works before you can even think about the logic of statistical hypothesis testing.

          • When the prediction and the observation contradict each other, the only way this is possible is if the hypothesis (singular) that generated both the prediction and the experimental observation is FALSE.

          • An experimental observation cannot be “False” -> Provided no errors were made and the experiment is a valid test, the results are “Fact” – And cannot be labelled “True” or “False” to create a logical argument/paradox/thought experiment.

          • The truth value of the observation is irrelevant, all that is necessary is for the observation to contradict the prediction.
            ..
            Now, RACook, you have introduced an additional hypothesis, namely IF no errors were made and the experiment is a valid test……

          • Additionally Mr RACook, in the event that the prediction matches the observation you cannot conclude the hypothesis is TRUE. This is an additional result from the logic/truth table, and it shows us why Science cannot PROVE anything true; it can only falsify stuff.

          • Physical theories are non-provable because any such proof requires an unbounded infinity of data. The non-provability of a physical theory does not follow from your logic table.

            The ‘proof’ distinction between science and logic (and mathematics) arises from the fact that physical theories are not axiomatic.

            No deductive logic can prove physical theory, any new observation/experiment can disprove it. Physical science is categorically unlike logic or mathematics.

          • WRONG Frank, a FALSE hypothesis can imply a TRUE conclusion, and the implication is valid. That is why you can never “prove” a statement TRUE.

          • Deductions from physical theory describe the physical state of the system under study.

            A false theory will describe an incorrect physical state. The implication consisting of that incorrect state is never, ever true.

          • Observations must be independently validated as physically accurate before they have standing to challenge a prediction from a falsifiable physical theory.

            RACook is correct. You’re not, David.

            You continue to think of ‘logical truth’ as strictly relevant to a scientific context. Big mistake.

          • Logic provides the mechanism that guarantees a hypothesis is FALSE when observation contradicts prediction. The pairwise implications of observation and prediction can only both be true when the hypothesis is FALSE.

            This is basic philosophy of science Frank, you seem not to be aware of how it all works.

          • My discussion concerns your original claim, David. Not your slippery and evasive revisions of your false initial proposition.

            Observations were no part of your original argument.

            The philosophy of science is not science.

            Science is theory and result. Logical coherence is a necessary tool.

          • My original claim stands, as do all subsequent statements I have made. You don’t understand the first thing about logic, and even less about implication.
            A sequence of If A, and if B, and if C and if D, and if E, then we should observe X is an implication. The observation is then made in an experiment. The prediction and the observation are both consequences of the conjoined hypothesis. You know the rest.

          • But that wasn’t your original argument, David.

            Your original argument was “even if the “assumption” (hypothesis) is false, the implication is true.”

            You’re now in the position of contradicting yourself. Oh, what tangled webs … and all that.

          • Logic is logic. The logic of science is the same as the logic of mathematics, is the same as the logic of philosophy. No Equivocation, you are trying to make a distinction where one doesn’t exist. You’ve already displayed your ignorance of basic logic when you crossed out “consequence.” Stick to chemistry buddy………you’re in over your head otherwise.

          • Let’s see how long it takes you to realize that your question inheres your original two-value logic, David.

            You have once again contradicted your own later attempts to speciously include consequence.

            Everyone here who notes your tactic of personal disparagement will understand it as the recourse of someone who has lost the substantive part of the debate.

          • No Frank, your inability to provide a concrete example of an implication that is false and contains a false hypothesis shows EVERYONE here that you don’t know logic. You can’t even explain to us why such an example does not exist.

          • Straw man argument, David.

            I’m sure you learned about those in your logic classes. Isn’t it nice to apply in practice what you learned in theory.

            Your case fails on your argumentative shift of ground to falsely bring observation into your argument where it did not exist a priori. E.g., here: September 6, 2018 11:13 am.

            I didn’t dispute your lovely logic table. I disputed the relevance of your logic to science, e.g., here: September 5, 2018 9:53 pm.

            You’ve been wrong in every science question you’ve taken on.

          • Not a “strawman.” You cannot provide an example of an implication that is false that has a false hypothesis. Do you even know what a “strawman” is?

          • The logic of science is deduction from an analytical surmise about physical reality. Physical result is independent of the surmise and can refute the surmise.

            The logic of mathematics and the logic of philosophy is deduction from asserted or assumed axioms. All results are deduced from the axioms, no result is independent of the axioms, and nothing in the system can refute the axioms.

            You continue to be wrong, David.

          • I have already shown you that Quantum Mechanics is an axiomatic system……give it up while you are ahead Frank.

          • You’re leveraging your standard equivocation fallacy fall-back tactic again, David.

            Axioms that can be disproved (QM) are not axioms that are kept forever (philosophy and religion).

            Same word, categorically different meaning. Equivocation fallacy, writ obvious.

            That difference is fatal to your case. The fact you can’t see that, indicates either a refractory death-grip on a face-saving fatuity or just plain incompetence.

          • Your original logic table doesn’t have “consequences,” David. It doesn’t even really have predictions.

            I.e., your opening: “Here is the truth table for implication (my bold)” It has only implications following from hypotheses.

            You subsequently grafted on consequences, perhaps when you realized the poverty of your original post.

            Let me fix your follow-up for you, to bring it into coherence with your actual position:
            A=hypothesis
            B=~~consequence~~ implication
            A—>B is the ~~prediction~~ logical inference
            A—>not-B is the ~~experimental results~~ logical contradiction.

            That was your original argument. Consequence is nowhere to be found.

            Your approach to a losing debate is clearly to shift your ground every time your prior argument is defeated.

          • No Frank, you are way off base and grasping at straws.

            In Logic, implication is the relationship between two statements, A, and B. Crossing out “consequence” and substituting “implication” shows you don’t know what you are talking about.

            There’s no point to talking to you anymore because you are clueless. You get an “F” in Logic 101.

          • Right. So, when you wrote, “Here is the truth table for implication (my bold),” you didn’t mean implication.

            You meant implication and subsequent cohering observation. But forgot to include that latter part until your initial argument proved false.

            You’re a great piece of work, David.

          • Just to show you how clueless you are, my original table is the definition of implication, and the first statement is considered a “hypothesis” and the second statement is considered the “consequence”.
            ….
            Now although a lot of people here will object to a link from wikipedia, I’m going to post one here for you because you really really need to take a look at it before making any more grossly inaccurate statements. https://en.wikipedia.org/wiki/Material_conditional

          • You’re making the same mistake as with Wiki, David. The logic table in philosophy.lander.edu still has only two elements, not three.

            Whether you call the second element implication or consequence, the latter of those words does not mean, and never means, observation. It always means prediction.

            Observation is not part of your original argument.

            Your continual attempts to slide between definitions in an attempt to establish your incorrect claim by false means (the equivocation fallacy) will avail you nothing.

            You make the same mistake over and over again.

            Were it anyone but you, David, what with your reputation for towering integrity and all, the persistent repetition of an obviously false argument would lead me to wonder about the honesty of my disputant.

          • The Wiki article merely expresses a synonym relationship: “The material conditional (also known as material implication, material consequence, or simply implication, implies, or conditional)

            The Wiki Table includes only two elements, hypothesis and implication, which latter Wiki also expresses as consequence.

            That is, for Wiki, consequence = implication.
            And for you, consequence = observation.

            Your recourse to that confusion of definitions is a very clear example of your continual abuse of the equivocation fallacy.

            Your source defines consequence as implication. You then turn around and redefine consequence as observation.

            Equivocation fallacy. You redefine the word in mid-stream, and then claim victory.

            Nice try, David. Wrong again.

          • Surprising that someone like you can’t understand a simple Wiki article. There are three distinct items, namely 1) hypothesis, 2) conclusion/consequence and 3) implication
            ..
            An implication is a compound statement of the relationship between a hypothesis and a consequence.

            There…..one sentence. Should be simple enough for you to understand.
            .
            Wiki does not define the consequence as an implication.

            Not good enough? How’s this: In an implication the hypothesis implies the consequence. Does that clear up your fog?

          • Maybe I should have quoted the Wiki article a little further to help you along, David.

            Here it is: “The material conditional (also known as material implication, material consequence, or simply implication, implies, or conditional) is a logical connective (or a binary operator)… (my bold)”

            Guess how many elements there are in a binary operator, David.
            Hint: not three.

            I have already pointed out that Wiki defines consequence as a synonym of implication. It has no independent standing.

            Your continued and specious attempts to include consequence as a third item, as in hypothesis, implication, consequence (=observation), is truly fatuous.

          • Frank, I stated: “There are three distinct items”

            One is the hypothesis, the second the consequent and the third is combination of the two with the operator.
            ..
            Reading is fundamental .

          • Glad you referenced reading skills, David.

            You gave two, not three, items in your original logic table. Read it here: September 5, 2018 1:52 pm

            Two items here: September 5, 2018 5:49 pm

            I brought up the necessity of empirical observations here: September 5, 2018 9:53 pm

            After that, you shifted your ground. Initially, your truth-table “consequence” followed from the logical coherence of the implication, as I already showed here: September 6, 2018 11:13 am and; here: September 6, 2018 3:45 pm.

            The proof of your mistake is in your claim that, “You should also know that you can deduce ANYTHING from a false hypothesis.”

            I showed that claim to be wrong here: September 5, 2018 9:53 pm and; here: September 6, 2018 6:46 pm.

            In science, your claim is categorically wrong. A valid hypothesis in science, right or wrong, is logically self-coherent and deduces a single prediction.

            It is tested by independent observation. It is falsified by a failed prediction, not by a deduced logical contradiction.

            After I showed your formulation to be irrelevant to science, you tried to save yourself by shifting the meaning of consequence from implication (item 2 in your two-value logic) to observation, which appeared nowhere in your original argument.

            This is shown by your own statement right under your truth table, to wit: “You will note that even if the “assumption” (hypothesis) is false, the implication is true irrespective of the truth value of the consequence,” referring to rows three and four in your table.

            Your “consequence” there is “p→q,” where hypothesis p implies condition q.

            Nothing in your table indicates the method of science, in which p→q, followed by both p and q negated (or verified) by ‘o,” observation, and ‘o’ is independent of p and q.

            You subsequently and falsely grafted observation onto your argument, after I raised the necessity.

            Squirm and falsely revise as you like, David. Your logic reveals nothing about the method or mechanism of science.

            You were wrong from the outset, and insistently wrong ever since.

          • Frank says: “You gave two, not three, items in your original logic table.”

            Frank cannot count. There are THREE columns in the table

          • Table items: hypothesis (1), implication (2), hypothesis yields implication [(1) & (2)]. Three columns, two items.

            Observation is nowhere to be found.

          • >>
            Now although a lot of people here will object to a link from wikipedia . . . .
            <<

            I think you need to reread that link. I will quote:

            The material conditional is used to form statements of the form p → q (termed a conditional statement) which is read as “if p then q”. Unlike the English construction “if… then…”, the material conditional statement p → q does not specify a causal relationship between p and q. It is merely to be understood to mean “if p is true, then q is also true” such that the statement p → q is false only when p is true and q is false.  The material conditional only states that q is true when (but not necessarily only when) p is true, and makes no claim that p causes q.

            That supports my original statement from yesterday. You are confusing the definition of implication with its logical effect. If p is true (and p → q), then q must be true. If p is false, then q can have any value.

            Jim

          • LOL @ Pat Frank.
            ..
            Pat says: “What makes you think science follows your linear logic”
            ..
            Here is a clue Pat. Mathematics follows linear logic, and science would fall apart without the linear logic of mathematics.

          • Mathematics is the language of science, David. Mathematics does not govern the content of science or the logic of science.

            “LOL,” by the way, does not improve your arguments.

          • Frank, there is “logic”……when you say “logic of science” it is not different. Both are identical.

          • The structure of logic depends from its axioms. Logic is axiomatic. Science is not. They are not identical.

            Quantum Mechanics upended the logic of classical electromagnetic theory. Such upending would never happen in an axiomatic system.

          • Quantum Mechanics can be falsified by experiment, David.

            That means what MIT calls its axioms are conditional on disproof. Disproof requires their abandonment.

            If there was anything that separated science from philosophy, that mortal threat is it.

            You’re once again leveraging the equivocation fallacy. “Axiom” as used at MIT is not “axiom” as used in philosophy.

            If you understood anything about science, you’d not continue to embarrass yourself like this.

          • >>
            Here is the truth table for implication . . . .
            <<

            I don't know what you are arguing about, but you're misusing the logic of implication. Here is a Venn diagram of implication:
            https://jamesbat.files.wordpress.com/2018/09/venn.jpg

            As you can see, P is contained and inside of Q. Implication places a restriction on the values that P and Q can take. When P is true, then Q must be true. It says nothing about the value of Q when P is false. Since P cannot exist outside of Q the following Boolean expression holds:

            P and not Q = false

            If we apply De Morgan's Law, we get:

            not P or Q = true

            This leads to your truth table above. The mistake you're making is trying to change the values of independent variables P and Q. When P is true and implication holds, then Q must also be true. When P is false it says nothing about the truth value of Q–which may be either true or false.

            Notice that when P implies Q, then not Q implies not P.

            Also if P implies Q and Q implies P then P is identical to Q.

            Lastly, if A implies B and B implies C then A implies C.

            Jim

          • Jim, if the premise (hypothesis) of an implication is false, the implication is true irrespective of the truth value of the conclusion. There are four distinct cases for the truth table which your Venn diagram does not display.

          • >>
            Jim, if the premise (hypothesis) of an implication is false, the implication is true irrespective of the truth value of the conclusion.
            <<

            Your statement is true, but you’re using it wrong. For example:

            A + ¬A = 1,

            that is, A or not A is always true. That doesn’t mean that A is always true. Likewise for implication:

            ¬P + Q = 1.

            In other words, not P or Q is always true. That doesn’t mean that not P is always true or Q is always true.

            >>
            There are four distinct cases for the truth table which your Venn diagram does not display.
            <<

            Actually, it displays all that it needs to display. The region inside P represents P and Q, or P·Q. The region outside of P but inside of Q represents not P and Q, or ¬P·Q. The region outside of Q is also outside of P and represents not P and not Q, or ¬P·¬Q. Taken together they represent the universe, so we can “or” them and set it equal to 1.

            P·Q + ¬P·Q + ¬P·¬Q = 1

            Now we can apply some Boolean identities to simplify the expression. The first is to use the following identity to add another term:

            A = A + A

            P·Q + ¬P·Q + ¬P·Q + ¬P·¬Q = 1

            Factoring we get:

            Q·(P + ¬P) + ¬P·(Q + ¬Q) = 1

            Because we know that A + ¬A = 1 we can reduce to:

            Q·(1) + ¬P·(1) = 1

            And since A·1 = A, we have:

            Q + ¬P = 1.

            The commutative law holds for the Boolean operator “or” and “and” so we can rearrange:

            ¬P + Q = 1

            Applying De Morgan’s law we get:

            P·¬Q = 0,

            and there’s our missing fourth term. It’s not included in our Venn diagram because it’s not part of the universe for implication.

            Jim
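
            A brute-force check, by enumeration, of the Boolean simplification worked through above (sketch only):

            # Verify P·Q + ¬P·Q + ¬P·¬Q == ¬P + Q, and De Morgan: ¬P + Q == ¬(P·¬Q).
            cases = [(p, q) for p in (False, True) for q in (False, True)]

            simplification = all(
                ((p and q) or ((not p) and q) or ((not p) and (not q))) == ((not p) or q)
                for p, q in cases)
            de_morgan = all(((not p) or q) == (not (p and (not q))) for p, q in cases)

            print(simplification, de_morgan)   # True True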

          • Jim says: “When P is false it says nothing about the truth value of Q–which may be either true or false.”
            ..
            That is true, but when P is false the implication is true. Pat Frank states that P is “false,” so anything implied from his statement is true.

          • >>
            No I am not.
            <<

            I figured out what you are doing wrong. You’re assuming P implies Q is always true. P implies Q is one of the things you need to prove. There are a couple of ways to do that. One is to prove the intersection of P and not Q is always false. Another is to prove that not P or’ed with Q is always true. After that, you must prove that P is true. Those two conditions will make Q true.

            So, yes, you’re misusing the logic of implication.

            Jim

          • For example Frank, if you visit the home I grew up in and look at the back of the door to the closet in the living room, you’ll see scratch marks with dates next to them. Never measured the absolute height of the marks, but it’s pretty easy to see how rapidly I grew up as a child, since the marks were a measurement of my height on every birthday I had.

          • ±(how many inches), David?

            How do you know the error is identical with every scratch?

            Your assumption of constant systematic error has never been tested for satellite temperatures and is not known to be true.

            It has been tested for surface air temperatures, and is known to be false.

          • For a class 1/2 station, the measurement taken MAY be a relatively accurate representation of the true temperature of the area it represents.

            For a class 4 station, it’s just not an accurate representation of the true temperature, regardless of how precise the measuring instrument claims to be.

          • Calibration experiments done using well-sited and well-maintained platinum resistance surface air temperature sensors show about ±0.35 C systematic measurement error due to the impacts of environmental variables.

            This ±0.35 C would be the very minimum accuracy limit of any class 1 or class 2 station.

          • … and, of course, every single scratch-mark is infinitely precise, incorrect by the utterly identical offset, and its height from the floor (which hasn’t settled an angstrom) can now be evaluated with perfect accuracy.

            That’s your argument, David.

            By now, I’d expect even you can see it’s wrong. If not, well, then, there’s no help for you.

          • It’s an assumption that the systematic error is a constant, David. That’s not known to be true.

            And when systematic error is due to uncontrolled variables, as is almost certainly the case in satellite and radiosonde temperatures, and is indeed certainly the case in surface air temperatures, then it will never be constant.

          • Pat & David => This is a specious argument.

            If everyone was using the same broken ruler, and all measurements were of one thing with one broken ruler, then we would be able to judge the change on its own.

            Temperature measurements from the get-go are acknowledged to be RANGES — that is how we record them, 72 +/- 0.5 — because we mark down 72 for any thermometer reading between 71.5 and 72.5. This is not a broken ruler, this is a notation of a temperature range. The true temperature could be 72.4 for ten days running. There is no reason to assume that the actual temperatures are “normally distributed” — the true expectation is that they are evenly distributed. Thus, every subsequent calculation using temperatures MUST calculate them as ranges, not as discrete numbers.

            Ignoring this is how CliSci gets false certainty.
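
            A rough illustration of the range view above, as a short Python sketch with made-up readings; under this interval treatment the averaged value keeps its full +/- 0.5 width (a statistical treatment of independent errors would shrink it, which is exactly the point in dispute):

            # Hypothetical whole-degree readings; each recorded value really stands for a 1-degree-wide range
            readings = [72, 71, 73, 72, 70]

            # Treat each reading as the interval [r - 0.5, r + 0.5]
            intervals = [(r - 0.5, r + 0.5) for r in readings]

            # Interval arithmetic: the mean of the intervals keeps the full +/- 0.5 width
            lo = sum(a for a, _ in intervals) / len(intervals)
            hi = sum(b for _, b in intervals) / len(intervals)
            print(f"the average lies somewhere in [{lo:.2f}, {hi:.2f}]")   # width is still 1.0 degree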

          • Hi Kip – you’re apparently talking about instrumental resolution (the limit of instrumental accuracy). You’re right that the consensus people utterly ignore it.

            In a direct confrontation over resolution after my Erice talk three years ago, Richard Muller of BEST pretty much said it’s unimportant. He ignored it then, he still ignores it, and he’s wrong.

            The other measurement issue is systematic error from uncontrolled environmental variables, primarily wind speed and solar irradiance. They both put time-variable errors into the measurements.

            This is the issue with David Dirkse, who clearly and incorrectly thinks that systematic error is constant in time and space.

          • “Systematic error from any given singular instrument is eliminated by the use of anomalies. Spencer was correct.”

            So when the error goes from 0.5 to 0.05 by changing from absolute temps to anomalies, and you argue this is because systematic measurement errors have been removed, you are saying that the error introduced by systematic instrument measurement is 10x larger than the errors introduced by all other influences combined.

            Does that sound reasonable to you?

          • No, it is not, David, because the systematic measurement error varies in time and space. Calibration experiments show this.
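
            A hedged numerical sketch of the disagreement, with invented numbers: a constant offset cancels when anomalies are taken against a baseline built from the same instrument, but an offset that varies in time does not:

            import math

            months = range(360)
            true_temps = [15 + 0.01 * t for t in months]              # invented "true" monthly series

            constant_bias = [T + 0.5 for T in true_temps]             # systematic error fixed in time
            varying_bias  = [T + 0.5 + 0.3 * math.sin(2 * math.pi * t / 12)  # error varies with season
                             for t, T in zip(months, true_temps)]

            def anomalies(series, base_n=120):
                baseline = sum(series[:base_n]) / base_n              # baseline from the first 10 years
                return [x - baseline for x in series]

            true_anom = anomalies(true_temps)
            err_const = max(abs(a - b) for a, b in zip(anomalies(constant_bias), true_anom))
            err_vary  = max(abs(a - b) for a, b in zip(anomalies(varying_bias),  true_anom))
            print(f"residual from the constant offset: {err_const:.3f}")   # ~0.000: it cancels
            print(f"residual from the varying offset:  {err_vary:.3f}")    # ~0.300: it does not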

          • Yes it is, and the example datasets I provided show how using anomalies removes systematic error. Spencer is right. And Spencer should know, because he has been doing this kind of work and analysis for his entire career, as opposed to a chemist that thinks he knows something about data analysis.

          • You didn’t even know about significant figures, David; something taught in a freshman chemistry class. How are you able to decide who is correct?

            Your sample data set, apart from being wrong, was also irrelevant because you deliberately departed from the point at issue, namely the inconstancy of systematic measurement error due to the impact of uncontrolled environmental variables.

            I’m a physical methods experimental chemist of long-standing, David. My work principally involves X-ray absorption spectroscopy. I sweat measurement error all the time. Here is a recent paper, for your critical examination. It is explicit and complete about experimental error and uncertainty.

            Your disparagement of my professional competence ranks right up there with your wrong anomaly set: proof you don’t know what you’re talking about.

          • “I’m a physical methods experimental chemist of long-standing” who doesn’t even know that Quantum Mechanics is axiomatic.

          • Appealing to authority is a logical fallacy, and appealing to your own authority is super hilarious.

          • You wrote, “as opposed to a chemist that thinks he knows something about data analysis,” David.

            You’ll note I included a link to a recent paper that demonstrates my expertise in data analysis, including analysis of physical error.

            You ignored the evidence that you’re wrong, just as you have ignored every prior demonstration that your thinking is incorrect.

            Demonstration is the opposite of an appeal to authority. You’ve conflated two opposed categories — on a par with your other examples of logical integrity.

          • QM is not axiomatic, in the sense you mean axiom, David.

            QM can be falsified. The axioms you love so much cannot.

            The difference could not be greater.

          • Quantum Mechanics is subject to falsification. So, then, are its axioms. QM is not axiomatic in the sense of philosophy and religion.

            Your argument is equivocation all the way down.

          • Any axiomatic system is subject to falsification.
            For example, in Euclidean geometry, the parallel postulate (axiom) can be jettisoned, and one can get spherical or hyperbolic geometries. Now tell me, Mr. Frank, how does one “falsify” the Euclidean parallel postulate? Are hyperbolic and spherical geometries “false,” or is Euclidean geometry “false”?

            Do you have a clue what you are talking about?

          • Quantum Mechanics is an axiomatic system. When you post “QM is not axiomatic,”

            YOU ARE WRONG.

            Now rein in your oversized ego and admit it.

          • Tell all of us, Frank: if one day someone discovers concrete proof that in fact Steve Janus was crucified 2000 years ago on the cross instead of Jesus Christ, does that “falsify” Christianity?

          • Christianity rests on the teachings of a man that walked on the face of the Earth about 2000 years ago. If one discovers the physical remains of said person, it kinda destroys the whole gig Franky boy. Seems to me that not only do you not know that QM is axiomatic, your understanding of Christianity is lacking also.

          • Which of the resurrection stories do you like best, David? Is it the one where dead people emerge from their graves and walk around?

            That’s a startling “Hi honey! I’m home!”, isn’t it?

          • Purported to have walked the earth 2000 years ago, David. There is no indisputable historical record supporting your claim.

            I’ve pointed out twice now that the so-called axioms of QM are under mortal threat of falsification. That makes them categorically separate from the axioms of logic that are under no mortal threat.

            Axioms_QM ≢ Axioms_logic.

            Once again you’re leveraging a word using the equivocation fallacy.

            Your tactic has become tediously predictable.

            Although it seems you are incapable of grasping that fundamental distinction, I expect your apparently refractory ignorance is just a tactic of denial.

            If you admitted the obvious, you’d lose the point.

          • >>
            Your tactic has become tediously predictable.
            <<

            David’s also stopped replying to my comments, because he knows his interpretation of implication logic was wrong.

            Jim

          • Among all those debating here, only you, David, evidently think that insulting diminutives improves an argument.

            That makes you special.

          • >>
            Wrong Jimmy-boi, the truth table is correct. If you think it is not correct, please tell us all why.
            <<

            Well, I was trying to be kind and educate you on the logic of implication, but you are too naive and ignorant of logic to even know when you’re wrong.

            I’ll try again, but I doubt it’ll work.

            The truth table is correct for implication. The main point about implication is that when P → Q, then P is a logical subset of Q. If P → Q and P is true, then Q is also true. That’s all.

            Notice that the truth table has one false term and three true terms. The false term, P \cdot \overline{Q}, can be set to false (or zero). That gives us:

            P \cdot \overline{Q} = 0

            If we “not” both sides and use De Morgan’s law we get:

            \overline{P \cdot \overline{Q}} = \overline{0}

            and

            \overline{P} + Q = 1.

            This last expression is the one that defines implication. We can also get the same result by or’ing all the true terms, setting them equal to true (or one) and simplifying. I’ve already done this to prove that my Venn diagram represents implication correctly.

            Now, I don’t understand why you keep trying to prove implication true. (It’s probably because you don’t really know what you’re doing.) Most would just say: “P implies Q” or “if P implies Q.” That is sufficient to assume that implication is true. Then you must show that P is true. If P implies Q AND P is true, then Q is also true. If P implies Q AND P is false, then Q can be either true or false–there’s no restriction on Q’s value in that case.

            That David, is the correct way to use implication. And I didn’t violate the truth table for implication.

            Jim
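
            For completeness, a small Python check (illustrative only, not from either commenter) that the derived expression, not-P OR Q, agrees with De Morgan’s rearrangement of the single false term and with the standard implication truth table:

            from itertools import product

            NOT = lambda x: 1 - x   # Boolean complement on {0, 1}

            for P, Q in product([0, 1], repeat=2):
                false_term = P & NOT(Q)               # the single false row of the table, P * not-Q
                lhs = NOT(false_term)                 # NOT(P * not-Q), before De Morgan
                rhs = NOT(P) | Q                      # not-P + Q, after De Morgan
                table = 0 if (P, Q) == (1, 0) else 1  # standard implication truth table
                assert lhs == rhs == table
                print(P, Q, "->", table)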

          • Axioms are axioms, logic is an axiomatic system, and QM is an axiomatic system even though you posted:

            1) “the fact that physical theories are not axiomatic.”
            2) “QM is not axiomatic,”
            3) “QM is not axiomatic in the sense of philosophy”

            (All of these quotes are visible in this thread.)

            Plenty of evidence that Pat Frank doesn’t know much about QM.

          • Apparently you don’t understand the word, ‘falsifiable,’ David. QM is falsifiable, formal logic is not.

            As I noted above, Axioms_QM ≢ Axioms_Logic.

            I’ve pointed out that inequivalence several different ways, but you’re clearly determined to reject the obvious.

          • All your example shows, David, is that different axioms produce a different mathematics.

            Nothing about Riemann Geometry falsifies Euclidean Geometry because they start from different axiomatic premises.

            Physics is non-axiomatic in the sense of mathematics and philosophy. Unlike those two, all of physics is provisional, and is subject to falsification and discard, right down to bed-rock.

            You have once again employed the equivocation fallacy in your example from geometry, because you equate observational falsification with alternative formalisms.

            You have equated orthogonal categories. Good job.

            You clearly love to boast about your expertise in logic, and then go on to repeatedly and insistently make sloppy errors of definition.

            Your argument has invariably relied on opportunistic slip-slidery.

          • Also Frank, you do realize that parts of QM are not falsifiable due to issues related to the Church-Turing thesis.

          • All physical theories are under-determined. Physical proofs from mathematics are axiomatically limited, and cannot anticipate the diagnoses from future independent and presently unpredictable observations.

            Meaning, in short, your comment is grounded in ignorance.

          • Axioms_QM ≢ Axioms_logic.

            Equivocation fallacy. As noted three times now.

            At least you’re consistent in your mistake, David.

            What logical inference can one derive about someone (you) who insists on repeatedly making the same obvious mistake?

          • Pat ==> Yes, instrumental resolution, and, more simply, the fact that even with modern electronic stations, temperatures are recorded as whole degrees F regardless of the resolution of the measuring instrument — to agree with past methods.

        • “Well, if Spencer and Christy are fudging the figures, then so are the operators of the weather balloons since both sets of temperature data are in agreement.”

          Not true
          UAH is a clear cold outlier and a long way from correlating with radiosondes.

          https://i0.wp.com/postmyimage.com/img2/995_Tropospheretrends.png

          And to boot, since ~2000 the latest sensor on NOAA 15 has been displaying an even more distinct cooling bias vs other tropospheric measurements AND its predecessor onboard NOAA 14 ….

          https://postmyimage.com/img2/792_UAHRatpacvalidation2.png

          Both UAH and RSS acknowledge this discrepancy. UAH deal with it by saying that the present sensor is the latest and therefore likely correct. RSS take the pragmatic approach, saying they don’t know which is wrong, and split the difference….

          https://tamino.files.wordpress.com/2016/04/diff.jpeg

          • “Not true
            UAH is a clear cold outlier and a long way from correlating with radiosondes.”

            How so? Christy says they correlated. Do you know something he doesn’t?

  14. Regarding the phrase “life-as-we-know-it” that was implicitly tied into the thermometer graphic and specified temperatures in this article, here is the range of temperatures that life on Earth has demonstrated (based on current human knowledge):

    “. . . another group of scientists has found a microbe from deep-sea vents that is able to survive at 122C. And there are hints that even this is not the ultimate limit for life. A new microbe, for now called “Strain 121”, has since been discovered in a thermal vent deep in the Pacific Ocean. The microbe thrives at 121C and there are claims that it can even survive for two hours at 130C. However the finding is still contentious, as the strain has not been made publicly available to study.” — source http://www.bbc.com/earth/story/20160209-this-is-how-to-survive-if-you-spend-your-life-in-boilin-water

    “The study, published in PLoS One, reveals that below -20 °C, single-celled organisms dehydrate, sending them into a vitrified – glass-like – state during which they are unable to complete their life cycle. The researchers propose that, since the organisms cannot reproduce below this temperature, -20 °C is the lowest temperature limit for life on Earth. Scientists placed single-celled organisms in a watery medium, and lowered the temperature. As the temperature fell, the medium started to turn into ice and as the ice crystals grew, the water inside the organisms seeped out to form more ice. This left the cells first dehydrated, and then vitrified. Once a cell has vitrified, scientists no longer consider it living as it cannot reproduce, but cells can be brought back to life when temperatures rise again. This vitrification phase is similar to the state plant seeds enter when they dry out. ‘The interesting thing about vitrification is that in general a cell will survive, where it wouldn’t survive freezing, if you freeze internally you die. But if you can do a controlled vitrification you can survive,’ says Professor Andrew Clarke of NERC’s British Antarctic Survey, lead author of the study. ‘Once a cell is vitrified it can continue to survive right down to incredibly low temperatures. It just can’t do much until it warms up. ‘ ” — source https://phys.org/news/2013-08-lowest-temperature-life.html

    • Gordon ==> Thank you for sharing your link.
      Life is very adaptable, as your input shows. While scientists would be thrilled to find anything alive outside of Earth’s environment, the biggest hope is for other life forms with which we humans can relate and communicate….

      • Kip, one sign of a good article is that it promotes additional thoughts and comments from readers. Your article was one of the best I’ve read in a long time and provided some great points on how many scientists don’t even recognize that the “magic trick” of reducing reported data uncertainties has occurred right under their noses, despite “peer review”.

        The chart you posted at 7:44 am on Sept 4 in response to Javier, showing the tracking anomalies from start-2014 through mid-2018 with the bands of data uncertainty (+/- 0.5 C) overlaid on the yearly anomaly variation, should be all that is needed to quiet anyone asserting that climate temperature “anomalies” are meaningful at 0.1 C precision.

        I wholeheartedly share in the hope to discover advanced intelligent, “conversational”, ET life forms, but will celebrate even if it is discovered in its most primitive form.

        To paraphrase Arthur C. Clarke (my caps), “Two possibilities exist: either we are alone in the Universe or we are not. ONLY THE FIRST IS TERRIFYING.”

        Thank you for your article!

        • Dressler ==> As a teen I read everything Clarke wrote — short stories to novels and non-fiction. A true giant among giants of his day.
          Thanks for reminding me of his comment on ET life.

  15. Worse, we have made only limited near-observations at the edge of our solar system. The signals from outside are limited and the missing information is inferred based on Earth-biased assumptions/assertions.

  16. If the “expanded” error were correct, then the plotted data should fall outside the error bounds about 5% of the time. The fact that this relationship does not hold between the original plot and your “corrected” error range shows that something is wrong with the “corrected” plot (the original does appear to follow the rule).

    • Obviously, as plotted the error will always enclose the plotted line, as we don’t know the “correct” answer. I am referring to the fact that the “wobble” of the plotted line does not reflect the amplitude of the error range.
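
      A hedged sketch with purely synthetic data (not the GISS series) of the coverage rule referred to in the parent comment: if the stated error bars are honest 95% bounds, roughly 5% of points should fall outside them:

      import random

      random.seed(1)
      sigma = 0.1                     # invented one-sigma uncertainty
      n = 10_000                      # number of synthetic points
      obs = [random.gauss(0.0, sigma) for _ in range(n)]   # synthetic "data" around a flat truth

      outside = sum(abs(x) > 1.96 * sigma for x in obs)    # points outside the 95% bounds
      print(f"fraction outside: {outside / n:.3f}")        # expect roughly 0.05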

  17. Kip:

    Would you agree an anomaly is anchored by a base number? If so, how is that defined? I am working on a comment comparing the two things as you did.

    • Raggnar ==> Well, an anomaly is the difference between a metric that is changing and a stable, unchanging reference number.

      For instance, Body Temperature is often stated as a “fever” in degrees. In Fahrenheit, my doctor daddy would say “the baby has a 4 degree fever” when the baby’s temperature was 102.6F, the 4 degrees being the anomaly from “normal” 98.6°F [That means the baby is sick, but not particularly serious for a baby.]

  18. Thanks for sharing, I’ve been making the point about absolute temps for years, but never this well.

    We’re pretty sure it’s gotten warmer, but we really don’t know a lot beyond that, despite all the claims about “warmest years.”

    And plotting model uncertainty against temperature uncertainty, you quickly realize models are not the sharpest policy tool in the shed, and indeed are barely falsifiable in time ranges of interest.

  19. “…transformation of Uncertain Data into Certain Anomalies,  reducing the uncertainty of annual Global Average Surface Temperatures by a whole order of magnitude…”

    I think you have not highlighted the whole problem. The problem is partly nomenclature and partly not following through with the anomaly calculation.

    The uncertainty has two elements: what the actual measurement is and where the centre of the range is. The measurement uncertainty for each individual measurement is fixed by the instrument used. Full stop. That never disappears, because it is inherent in the instrument.

    Taking a number of measurements of the same thing, conceptualized as making more than one measurement of the same phenomenon in the same circumstances, produces a range of answers. The more one takes, the more data there is to work with. Using, say, 30 measurements of the length of a piece of string, one can calculate the standard deviation about the mean (average) of all the lengths. With these additional measurements one can say where the average lies, admitting there is a NEW uncertainty about that calculation, which is rooted in the number of measurements taken and has nothing to do with the uncertainty of each individual measurement.

    The point is that GISS shows the uncertainty about where the average lies, the middle of the range, but has not shown the uncertainty propagated from the original measurements, which remains unchanged.

    Plotting the anomaly and the uncertainty of each centre-of-range value based on the number of readings taken has nothing to do with the uncertainty of the measurement value itself – they just left out that part of the sum. The calculation is incomplete.

    You can complete the chart by adding the uncertainty bars up and down to the upper and lower limits shown in the anomaly chart.

    The error is very typical of work done by learners who take the output of any calculation or estimation as a ‘fact’, and then use it in a new formula. It is like taking model output as data and running it through another model.

    Error propagation is well studied. It is impossible that GISS doesn’t know they are leaving out a step. Uncertainty in the measurement values propagates through to the anomaly chart, plus the uncertainty of where the centre of the readings is minus the baseline.

    As is, the anomaly chart is a wrong answer. An error, because it incorporates a mistake – a failure to propagate the measurement uncertainty through to the calculated result, whatever that result is – it doesn’t have to be an anomaly chart. Uncertainty ALWAYS increases with additional calculations. Every physicist knows this. GISS is pretending they don’t.
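
    A short Python sketch, with invented numbers and following this comment’s framing (including the assumption that the baseline carries the same instrument uncertainty as a single reading): the standard error of the mean shrinks as readings accumulate, while the propagated instrument uncertainty of an anomaly does not:

    import math
    import random

    random.seed(2)
    u_instrument = 0.5                                      # invented per-reading instrument uncertainty
    readings = [random.gauss(15.0, 0.2) for _ in range(30)] # 30 invented readings of "the same thing"

    mean = sum(readings) / len(readings)
    sd = math.sqrt(sum((x - mean) ** 2 for x in readings) / (len(readings) - 1))
    sem = sd / math.sqrt(len(readings))                     # uncertainty of where the centre lies

    # Propagating the per-reading instrument uncertainty through "reading minus baseline"
    # (assuming the baseline carries the same instrument uncertainty) adds in quadrature:
    u_anomaly = math.sqrt(u_instrument ** 2 + u_instrument ** 2)

    print(f"standard error of the mean:      {sem:.3f}")        # shrinks as readings are added
    print(f"propagated anomaly uncertainty:  {u_anomaly:.3f}")  # ~0.71, unchanged by more readings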

    • Crispin ==> Email me, if you want, at my first name at the domain i4 decimal net. I’ll send you a proper calculation.

  20. Kip, Excellent, your article re-enforces beliefs I had for a long time as to why the “warmists” use anomalies instead of absolute global mean temperatures. I carried my beliefs over from when I was looking at monthly Sea Surface Temperatures and SST anomalies in studying large marine pelagic fish. I became concerned in the late 1970s when NOAA began to change how they were determining their anomalies. Never did figure out why they changed their methodologies. The only explanation I ever got had to do with how satellite data was being used; somehow their “new” methodology was more “accurate” and they were “changing computer systems(?).” Shortly thereafter I moved on to other research so never bothered to follow up any further.

  21. Let us also talk about sea level measurements. How can they give us 1/100mm accuracy if systematic uncertainty is ca. 3-4mm? You can’t do better than your systematic uncertainty. Therefore the satellite sea level measurements are complete BS.

    • Van Doren

      I think you should say “1/100 mm precision.” You see, you can have as much precision as you like: correct precision, false precision, any precision. None of that changes the accuracy, do you follow? Precision is the number of digits, basically; the accuracy is an inherent property of the instrument.

      Everyone reading here must become familiar with this topic and be able to identify mistakes in logic or terminology.
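
      A hedged sketch with made-up numbers of the precision-versus-accuracy point: averaging and extra decimal places shrink the random scatter, but a fixed 3 mm systematic offset is untouched:

      import random

      random.seed(3)
      true_level = 100.0     # invented "true" sea level, mm
      systematic = 3.0       # invented constant systematic offset, mm
      noise = 2.0            # invented random per-measurement scatter, mm

      measurements = [true_level + systematic + random.gauss(0, noise) for _ in range(100_000)]
      mean = sum(measurements) / len(measurements)

      print(f"reported to 1/100 mm precision: {mean:.2f}")                 # plenty of digits...
      print(f"actual error versus the truth:  {mean - true_level:.2f} mm") # ...still about 3 mm off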

      • There was a post on WUWT several months ago explaining precision vs accuracy. Maybe someone better at searching can provide a link.

        It’s easy to use the two terms interchangeably but it is not correct to do so outside the local pub, and then maybe not so even there.

  22. Thus we lose the ability to judge which year was the warmest if we only look at the absolute numbers.

    This little quote tells us what the game is. They need to be able to tell us that each year is the warmest ever. OK, once in a while there’s a concession to make it the second-warmest ever.

    The actual argument is totally specious, as Kip points out. It’s not an argument at all; it’s just throwing some words around and hoping that nobody notices the absence of logic.

  23. –Oh, and against all odds, some things are better than we thought. The Earth, if we allow her to warm up just a tiny bit more, will finally be at the expected, ideal temperature for an Earth-like planet. Who could ask for more?–

    Well, 15 C is better than 14 or 13 C, but I think 16 C is better than 15 C.
    In terms of worlds to colonize, I would think a planet with 1/2 of Earth’s gravity is better than a 1 gee world, and a 1/2 atm, 1 gee world is better than a 1 atm world.
    We don’t know the effects a 1/3 gee world like Mars or Mercury would have on human biology, though we don’t know what 1/2 gee does either. It seems that the worlds with the least gravity that don’t have adverse health effects would be best.

    And the average temperature, or distance from the sun/star, doesn’t matter much. The coldest of Mars is not much of a factor, due to its thin atmosphere, and if Mars were at Mercury’s distance, again temperature would not be much of a factor, but at Mercury’s distance one would have more solar energy available, which would be useful for colonists.
    With Mercury there is a large portion of the planet which doesn’t have a high surface temperature and which has sunlight available. The polar regions of Mercury are thought to hold frozen water ice, or rather there are permanently shadowed craters which always have surface temperatures below 100 K. But outside of the permanently shadowed craters, the sunlight is always low on the horizon, and the low angle of sunlight doesn’t warm the sunlit region by much, though anything not level with the surface, say vertical to it, gets the full onslaught of very intense sunlight.
    One could call this polar region a region of shadows: anything sticking up above a level surface casts long shadows. And such a land of shadows is not just limited to the polar regions; it also includes the regions of dawn and sunset. Taking 15 degrees of longitude as an hour of the day, the region of the planet 1 hour before sunset or 1 hour after dawn would temporarily [depending on rotation] be a land of shadows.
    So the polar regions and the regions of sunset and dawn make up a large surface area of the planet, and it would be quite different there than in a region in which the sun was near zenith.
    And if it had, say, 1/2 atm or less of an atmosphere, the air temperature could be fairly cool in these shadow lands.

  24. Epilogue:

    A fine discussion. We see that those who are steeped and mired in statistical theory absolutely insist that real world uncertainty can be reduced simply by shifting definitions to those used in the statistical world.

    The other major point is the refusal to admit that temperature measurements are really ranges and are recorded as discrete numbers for convenience. Failure to subsequently treat them as ranges allows almost all of the later confusion about accuracy and precision. The measurements in the first instance are imprecise, 1 degree F wide ranges.

    Gotta love those statisticians — they are real bulldog adherents to a point of view.

    Thanks to all for commenting, and as always, Thanks for Reading!

    Oh — for those just beside themselves with the need to say something to me or ask a question, you can email me at my first name at the domain i4 decimal net.

    # # # # #

    • Thanks for emphasizing the RANGE concept for temperatures. I always worry about that in comparison of “average surface temperatures” for different planets in the solar system.

      It’s very important to know the range over which this average might exist. A livable planet would have to have a livable RANGE, not just an “average” over an extreme difference of temperatures.

      I wonder, then, how can we possibly ever know whether another planet has a livable RANGE for life, without observing how the range of temperatures varies over the entire planet?

      • Robert ==> The way this is handled in Science Fiction is that colonization often takes place on the one island/continent that has an Earth-like range suitable for humans (with some controls, like weather-controlled domes).
        Just like we can’t live as free-living animals at the North and South Poles, but we can easily in Central America.
