Almost Earth-like, We’re Certain

Guest Essay by Kip Hansen

There has been a lot of news recently about exoplanets. An extrasolar planet, or exoplanet, is a planet outside our solar system.  The Wiki article has a list of them.  I mention exoplanets only because there is a set of criteria specifying what could turn out to be an “Earth-like planet”, of interest to space scientists, I suppose, as such planets might harbor “life-as-we-know-it” and/or be potential colonization targets.

One of those specifications for an Earth-like planet is an appropriate average surface temperature, usually said to be 15 °C.  In fact, our planet, Sol 3 or simply Earth, is very close to qualifying as Earth-like as far as surface temperature goes. Here’s that featured image full-sized:

[Featured image: the Chemical Society chart discussed below]

This chart from the Chemical Society shows that Earth’s average surface temperature should be about 15°C.  Note that our atmosphere contains mostly Nitrogen (78%), Oxygen (21%) and Argon (0.9%), which together make up 99.9% of the total — leaving about one-tenth of one percent for the trace gases, water vapor (H2O) and CO2.

Let’s look at the thermometer:

[Image: planetary thermometer]

We see the temperatures believed to exist on the surfaces of the eight planets and Pluto (poor Pluto…).

The ideal Earth is right there at 15°C or 59°F.

Mercury and Venus are up at the top, one due to proximity to the Sun and the other due to a crushingly dense atmosphere, well out of range for Earth-like planets.

Mars is down below the freezing temperature of water, due to its distance from the Sun and, mostly, its lack of atmosphere.  It comes in at 70°F (20°C) near the equator but at night plummets to about minus 100°F (minus 73°C), with an estimated average of about -28°C.  That average is a little low, but mankind lives in places on Earth with a similar temperature range, at least on an annual basis, so with adequate shelter and clothing (modified for the lack of breathable atmosphere), it might do.

The other four planets and Pluto (poor Pluto) don’t have a chance of being Earth-like.

This next thermometer shows that Earth provides a temperature range suitable for human and Earth-type life, ranging from 56.7°C (134°F) at the high end down to -89.2°C (-128.6°F), with an average of 15°C (59°F).

[Image: Earth’s record high/low thermometer]

Like Mars, Earth has an average that falls easily within what most people would consider a comfortable range, avoiding extremes, if one is properly dressed for the weather.  For me, a southern California surfer boy by birth, 59°F (15°C) is sweater weather – or more properly, Pendleton wool shirt weather.  59°F (15°C) is the average Fall/Winter temperature of the surf at Malibu, and most of us needed wetsuits to keep warm in the water.

[Image: close-up of the middle of the thermometer]

Taking a closer peep at the middle of our little graphic, we see that the IDEAL Earth-like planet would have an average surface temperature of 15°C.  But from the 1880s through 1910 we were running a bit cool — 13.7°C.   Luckily, after the mid-century point of 1950, we started to warm up a little and got all the way up to 14°C, just 1°C short of the ideal.

So, how have we done since then?

There is good news.  Since the middle of the last century, when Earth was running a little cool compared to the ideal Earth-like temperature expected of it, we have made some gains.

[Image: 21st century global average temperatures vs. the 15°C ideal]

By 2014, Earth had warmed up to an almost-there 14.55°C (with an uncertainty of +/- 0.5°C).

With the uncertainty in mind, we can see how close we came to the target of 15°C.  The uncertainty bracket on the left for 2014 almost reaches 15°C.

2016 was a banner year, at 14.85°C, and could have been, uncertainty taken into account, a tiny bit over 15°C!

The numbers used in this image come from Gavin Schmidt, NASA GISS’s Director (and co-founder of the private climate blog that he and his pals work on while being paid by the government with your taxes).  They are from his blog post in August 2017 — and, as always, have already been adjusted to be a bit higher.  The current, adjusted-higher numbers show 2017 as 0.1°C lower than 2016, which is what I have used.

That RealClimate blog post is quite a wonderful thing — it reveals several things, some of which I have written about in the past, and the part I quote below. Dr. Schmidt kindly informs us about one of the miracles of modern climate science.  This miracle involves taking data that has rather wide uncertainty — a full degree Centigrade wide, being plus 0.5°C or minus 0.5°C — and turning it into accurate and precise data with almost no uncertainty at all!

Dr. Schmidt explained to us why GISS uses “anomalies” instead of “absolute global mean temperature” (in degrees) in the blog post (repeating the link):

“But think about what happens when we try and estimate the absolute global mean temperature for, say, 2016. The climatology for 1981-2010 is 287.4±0.5K, and the anomaly for 2016 is (from GISTEMP w.r.t. that baseline) 0.56±0.05ºC. So our estimate for the absolute value is (using the first rule shown above) is 287.96±0.502K, and then using the second, that reduces to 288.0±0.5K. The same approach for 2015 gives 287.8±0.5K, and for 2014 it is 287.7±0.5K. All of which appear to be the same within the uncertainty. Thus we lose the ability to judge which year was the warmest if we only look at the absolute numbers.”
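His arithmetic is easy to check.  Here is a minimal sketch (Python; the numbers restated from the quote) of the “first rule” (independent uncertainties combine in quadrature) and the “second” (round to the precision the uncertainty supports):

```python
import math

# Values from Dr. Schmidt's quote
climatology, u_climatology = 287.4, 0.5  # 1981-2010 baseline, Kelvin
anomaly_2016, u_anomaly = 0.56, 0.05     # GISTEMP 2016 anomaly w.r.t. that baseline

# "First rule": independent uncertainties combine in quadrature
absolute_2016 = climatology + anomaly_2016
u_absolute = math.sqrt(u_climatology**2 + u_anomaly**2)
print(f"{absolute_2016:.2f} +/- {u_absolute:.3f} K")  # 287.96 +/- 0.502 K

# "Second rule": round to the precision the uncertainty supports
print(f"{absolute_2016:.1f} +/- {u_absolute:.1f} K")  # 288.0 +/- 0.5 K
```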

So, by changing the annual temperatures to “anomalies” they get rid of that nasty uncertainty and produce near certainty!

[Image: “The Miracle” — Source: https://data.giss.nasa.gov/gistemp/graphs_v3/ — annotated by Kip Hansen]

Dr. Schmidt and the ClimateTeam have managed to take very uncertain data — so uncertain that the last four years of Global Average Surface Temperature data, when straightforwardly presented as degrees Centigrade with their proper +/- 0.5°C uncertainty, cannot be distinguished from one another — and, through the miracle of “anomalization”, turned it into a new, improved sort of data, an anomaly, which is so precise that they don’t even bother to mention the uncertainty — except to add (at least on the above graph) a single uncertainty bar for the modern data which is 0.1°C wide, or in the language used in science, +/- 0.05°C.   The uncertainty in the Global Average Surface Temperature has magically become a whole order of magnitude less uncertain….  And all that without a single new measurement being made.

The miracle is accomplished by the marvel of subtraction!  That is, one simply takes the current temperature in degrees, which has an uncertainty of +/- 0.5°C, subtracts from it the long-term climatic mean (currently 1981-2010), and “voila” — the anomaly, with a wee tiny uncertainty band only 0.1°C wide (+/- 0.05°C).

Let’s see if that really works:

[Image: grid of the possible anomaly values]

Here’s the grid of all the possibilities, with a range of +/- 0.5°C for the 2015 temperature average in absolute degrees and the same +/- 0.5°C uncertainty range for the 1981-2010 climatic mean, both as given by Dr. Schmidt in his blog post.  One still gets a +/- 0.5°C (1°C wide) uncertainty range.   It did not magically reduce to a range one-tenth of that; it didn’t turn out to be 0.1°C wide as shown in Dr. Schmidt’s graph. I’m pretty sure of my arithmetic, so what happened?
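Textbook error propagation points the same way.  A minimal sketch, assuming (as is standard) that the two errors are independent, shows that the uncertainty of a difference is at least as large as either input, never ten times smaller:

```python
import math

u_temperature = 0.5  # +/- deg C on the annual absolute GAST (per Dr. Schmidt)
u_climatology = 0.5  # +/- deg C on the 1981-2010 climatic mean (per Dr. Schmidt)

# Independent errors combine in quadrature under subtraction, just as under addition
u_anomaly = math.sqrt(u_temperature**2 + u_climatology**2)
print(f"quadrature:  +/- {u_anomaly:.2f} C")                  # +/- 0.71 C

# Worst case (straight interval arithmetic): the error bounds simply add
print(f"worst case:  +/- {u_temperature + u_climatology} C")  # +/- 1.0 C
```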

How does GISS justify the new, improved, wee-tiny uncertainty?  Ah — they use statistics!  They ignore the actual uncertainty in the data itself and shift to using “the 95% uncertainties on the estimate of the mean.”  Truthfully, they fudge on that a little bit as well, which you can see in their original data. [In their monthly figures, the statistical uncertainty (+/- 2 standard deviations) is a bit wider than the illustrated “0.1°C”.]

So rather than use the actual original measurement uncertainty, they use subtraction to find the difference from the climatic mean and then pretend that this allows the uncertainty to be reduced to the statistical construct of the “uncertainties” of the mean — standard deviations.
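The two kinds of “uncertainty” are easy to tell apart with a toy calculation (a sketch with invented numbers, not GISS’s actual procedure): the standard error of the mean shrinks as readings pile up, while the +/- 0.5°C attached to each individual reading does not budge.

```python
import random

random.seed(42)
true_temp = 14.5  # a hypothetical "true" value, deg C
u_reading = 0.5   # +/- uncertainty attached to every individual reading

for n in (10, 100, 10_000):
    readings = [true_temp + random.uniform(-u_reading, u_reading) for _ in range(n)]
    mean = sum(readings) / n
    sd = (sum((x - mean) ** 2 for x in readings) / (n - 1)) ** 0.5
    sem = sd / n ** 0.5  # standard error of the mean: sd / sqrt(n)
    print(f"n={n:6d}  mean={mean:6.3f}  SEM=+/-{sem:.4f}  each reading: +/-{u_reading}")
# The SEM describes how tightly the computed mean of THESE numbers is pinned
# down; it is a different quantity from the stated uncertainty of any single
# measurement.
```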

This is a fine example of what Charles Manski is talking about in his new paper:

“The Lure of Incredible Certitude”, a paper recently highlighted at Judith Curry’s Climate Etc.  While Dr. Curry accepts the compliment Manski pays to climate science, based on his perception that “Published articles on climate science often make considerable effort to quantify uncertainty,” we see here the purposeful obfuscation of the real uncertainty of annual Global Average Surface Temperature data, replacing the admitted wide uncertainty ranges with the narrow “uncertainties on the estimate of the mean”.

Graphically, it looks like this:

[Animation: GISS temperature data as absolute degrees vs. as anomalies]

Although I was a semi-professional magician in my youth, I have nothing that compares to the magic trick shown above — a totally scientifically spurious transformation of Uncertain Data into Certain Anomalies, reducing the uncertainty of annual Global Average Surface Temperatures by a whole order of magnitude, using only subtraction and a statistical definition.  Note that the data and its original uncertainty are not affected by this magical transformation at all — like all stage magic, it’s just a trick.

A trick to fulfill the need of the science field we call Climate Science to hide the uncertainty in global temperatures — an act of “disregard of uncertainty when reporting research findings” that will “harm formation of public policy.” (quotes from Manski).

The true answer to “Why does Climate Science report temperature anomalies instead of just showing us the temperature graphs in absolute degrees?” is exactly as Gavin Schmidt has stated: if Climate Science used absolute numbers, the annual temperatures would “appear to be the same within the uncertainty. Thus we lose the ability to judge which year was the warmest if we only look at the absolute numbers.”

Thus, they use anomalies and pretend that the uncertainty has been reduced.

It is nothing other than a pretense.  It is a trick to cover up known, large uncertainty.

# # # # #

 

While I was preparing this essay, I thought to attempt to illustrate the true magnitude of the recent warming (since 1880) in a way that would satisfy the requirements of a Climatically Important Difference.  Below we see the UAH Lower Troposphere temperatures (even these given as errorless anomalies) graphed at the scale of temperatures allowed in my personal living quarters before our family resorts to either heating or cooling for comfort — about 8°C (15°F, as in 79°F down to 64°F).  Overlaid in light blue is a 2°C range, into which the entire most recent satellite record fits comfortably, and in purple the prescribed 3-to-3.5°C comfort range from the Canadian Centre for Occupational Health & Safety for an office setting.  If the office temperature varies more than this, the HVAC system is meant to correct it by heating or cooling.  As we can see, the Global Average lower troposphere is very well regulated by this standard.

[Image: UAH Lower Troposphere anomalies with the comfort ranges overlaid]

An interesting aside is that the Canadian CCOHS allows an extra 0.5°C in the winter, increasing the comfort range to 3.5°C, accounting for differences in the perception of temperature during the colder months.

# # # # #

Author’s Comment Policy:

Hope you enjoyed this rather light reading.

I am not so convinced by the hopeful thinking of astronomers regarding exoplanets.  I believe they are out there (the planets, not the astronomers), and for the record I believe there are other intelligent beings out there as well; I just have serious doubts that our rather primitive instruments can identify them at such great distances.

For you budding scientists out there, Gavin Schmidt has given you a useful tool for turning your sloppy, uncertain data into highly certain data using nothing more complicated than subtraction and a dictionary.  Good luck to you!

Feel free to leave your interpretations of what the GISS Temp global graph in absolute temperatures (degrees) really tells us based on the two uncertainty ranges shown.

Oh, and against all odds, some things are better than we thought.  The Earth, if we allow her to warm up just a tiny bit more, will finally be at the expected, ideal temperature for an Earth-like planet.  Who could ask for more?

# # # # #

Ron Long
September 4, 2018 2:36 am

Good comments Kip. The people who want to get rid of that nasty uncertainty probably think denial is a river in Egypt. Another characteristic for an earth-like exoplanet would be a magnetosphere, wherein the gases that form an atmosphere are protected from solar winds.

MarkW
Reply to  Kip Hansen
September 4, 2018 7:32 am

The books that I’ve read say that Mars lost its atmosphere when its magnetosphere became weak enough that it could no longer protect the atmosphere from the solar wind.

Ron Long
Reply to  MarkW
September 4, 2018 8:03 am

Exactly. By the way, our magnetic field strength is weakening substantially, but it may be a lead-in to a reversal and not something more alarming.

MarkW
Reply to  Ron Long
September 4, 2018 12:22 pm

The Earth’s magnetic field has been reversing about every 100K to 120K years for at least as long as the Atlantic has been growing.
The current reversal is nothing unusual.

donb
Reply to  Kip Hansen
September 4, 2018 8:51 am

Mars is slowly losing its atmosphere even today, as H (from H2O dissociation), N2, He, Ne, and Ar are not fully bound.
And Venus has lost its H2O through dissociation and H escape.

Eric
Reply to  donb
September 4, 2018 10:11 am

Also, Mars does not have sufficient mass to create enough gravity to hold these gases in its atmosphere, hence most of what is left is CO2 – a ‘heavier’ gas (higher molecular weight). Mars was never going to hold its atmosphere long term because of this.

Tom Abbott
Reply to  Ron Long
September 4, 2018 2:44 pm

Why hasn’t the Sun stripped the atmosphere from Venus?

John Tillman
Reply to  Tom Abbott
September 4, 2018 5:17 pm

Tom,

Good question.

Venus doesn’t presently have an internally-generated magnetosphere, for whatever reasons. It does however sport an external magnetosphere, thanks (ironically?) to the solar wind.

Venus’ small magnetic field is created by interaction of its ionosphere with the solar wind. This weak field differs from the common intrinsic magnetic fields (generated by planetary cores).

It’s possible that Venus has an intrinsic field, but has been in a polarity reversal since its magnetism has been observed. But probably it simply lacks a core-generated magnetosphere.

Whether the small field generated in its ionosphere is sufficient to protect its atmosphere from the solar wind, I don’t know.

Tom Abbott
Reply to  John Tillman
September 5, 2018 6:55 am

Thanks for that, John. It was also my understanding that the Sun’s magnetic field was possibly involved in protecting the atmosphere of Venus. I haven’t heard about any other theories to explain it. That was the reason for my question: to see if there were other theories out there.

Nylo
September 4, 2018 2:38 am

our atmosphere contains mostly Nitrogen (78%), Oxygen (21%) and Argon (0.9%) which makes up 99.9% of the total — leaving about one-tenth of one percent for the trace gases water (H2O and CO2).

I think this is wrong. Water vapour makes up (ON AVERAGE) 1% of our atmosphere (this was the case last time I checked). The other percentages add up to 99.9% because those values are the concentrations IN THE ABSENCE OF water vapour. Those gases are generally very well mixed, which is totally not the case for water vapour; that’s why the water vapour concentration is provided separately.

HenryP
Reply to  Nylo
September 4, 2018 3:58 am

I think the average water vapor is between 0.4 and 0.5%; CO2 is 0.04%.
That is mass %.

Nylo
Reply to  HenryP
September 4, 2018 4:55 am

By mass, yes, but by volume roughly 1% (H2O is quite light compared to O2, N2 and Ar). In the end it is the volume that tells you about the number of molecules, which is also what matters for the GH effect.
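The two sets of figures are closer than they look; they are just different units. A quick sketch (standard molar masses; the conversion arithmetic is mine) turns mass percent into volume, i.e. mole, percent:

```python
M_WATER = 18.02    # g/mol
M_DRY_AIR = 28.97  # g/mol, mean molar mass of dry air

def mass_to_volume_pct(mass_pct: float) -> float:
    """Convert a water-vapour mass fraction to a mole (volume) fraction."""
    w = mass_pct / 100.0
    n_water = w / M_WATER
    n_air = (1.0 - w) / M_DRY_AIR
    return 100.0 * n_water / (n_water + n_air)

for m in (0.4, 0.5, 1.0):
    print(f"{m}% by mass -> {mass_to_volume_pct(m):.2f}% by volume")
# 0.4% -> 0.64%, 0.5% -> 0.80%, 1.0% -> 1.60%
```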

HenryP
Reply to  Kip Hansen
September 4, 2018 8:45 am

Kip
Either way
Mass/mass
or
Vols/vols
your Society is wrong?
Important to note is that mass / mass, H2O is 10 x higher than CO2.
That makes nuclear not more beneficial than e.g. burning gas (since it produces a lot more water vapor) in respect of GH gases

I.e., if you believe there is some man-made warming caused by GH gases.

Reply to  Kip Hansen
September 4, 2018 1:15 pm

yes, Kip,
one of the reasons I dislike nuclear is that the one plant here in the Cape (South Africa) killed all the fish in the surrounding ocean where the warmer water is being dumped. If one plant can do so much damage to the [local] climate, we do not want to build any more plants?

Warmer ocean/river water ultimately leads to more H2O (g) in the atmosphere?
It is simple arithmetic?

Not that I believe there even is a GH effect but if there were, then thinking that nuclear is the solution seems to me like not such a bright idea.

Since the Cape here has such a shortage of water I am certainly interested to hear your plan in changing this warmer water to de-ionized / distilled water. If it were that easy I am sure the powers that be here would be making a plan?

MarkW
Reply to  henryp
September 4, 2018 6:54 pm

“one of the reasons I dislike nuclear is that the one plant here in the Cape (South Africa) killed all the fish in the surrounding ocean where the warmer water is being dumped. ”

I have no doubt that the local enviros blamed the power plant, but I doubt that is what happened.

BTW, pretty much all power plants require cooling, the amount needed for nuclear is the same that’s needed for an equally sized fossil fuel plant.

RACookPE1978
Editor
Reply to  MarkW
September 4, 2018 7:31 pm

MarkW

BTW, pretty much all power plants require cooling, the amount needed for nuclear is the same that’s needed for an equally sized fossil fuel plant.

No, I will disagree with you there. Because of their more conservative departure-from-nucleate-boiling criteria, most nuclear plants do not use superheated steam in their secondary-cycle process through the turbines, so an equally-sized (electric delivery) nuclear plant will discharge slightly hotter water into its cooling tower, cooling ponds, cooling lake, or through-pass river. But BECAUSE the difference in heat energy is easily calculated, is prevented by extra cooling towers or lower heat output in very hot weather – depending on the locally approved mitigation process – and is in any case much smaller than “life threatening”, there is great doubt that the story spread by the enviro claims is correct.

HenryP
Reply to  RACookPE1978
September 5, 2018 1:23 pm

Thx. To stop a nuclear reaction needs a lot of cooling water. Just to switch off the gas needs how much cooling water?

RACookPE1978
Editor
Reply to  HenryP
September 5, 2018 5:18 pm

HenryP

To stop a nuclear reaction needs a lot of cooling water. Just to switch off the gas needs how much cooling water?

Sort of. To “stop” a nuclear reactor only requires that the control rods be inserted. The reaction stops, and the reactor (and steam generators and turbines and pipes and condensers) begins cooling down, because the primary heat source is, indeed, shut down. A reactor does continue to generate decay heat from the core (which begins at around 7% of the previous power level), and that then goes down exponentially with time. This is the decay heat that must continually be removed after shutdown. But 7%-3%-1.5%-0.75% are small amounts of the previous 100% cooling water flow needed at 100% power levels.
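For a rough feel for that declining sequence, here is a sketch using the classic Way-Wigner approximation for decay heat (my choice of formula and of a one-year prior operating time, not RACookPE1978’s figures):

```python
def decay_heat_fraction(t_s: float, t_operating_s: float = 3.15e7) -> float:
    """Way-Wigner approximation: decay power as a fraction of prior full power,
    t_s seconds after shutdown, assuming roughly a year of prior operation."""
    return 0.066 * (t_s ** -0.2 - (t_s + t_operating_s) ** -0.2)

for label, t in (("1 second", 1.0), ("1 minute", 60.0),
                 ("1 hour", 3600.0), ("1 day", 86400.0)):
    print(f"{label:>9}: {100 * decay_heat_fraction(t):.2f}% of full power")
# roughly 6%, 3%, 1%, 0.5% -- the same declining shape described above
```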

Now, in sharp contrast, a gas turbine combined-cycle plant (3 x 250 Megawatt, for example) is very different. The two gas turbines generate 500 MW of power with almost 0.0 cooling water: they want their blades and burners running as hot as possible; only the lube oil and secondary air must be cooled a little bit. So, on shutdown and while running, there is almost no cooling water needed at all. The tertiary steam generator runs on the steam generated from the waste heat of the two GTs, so it must reject all of its waste energy to the cooling water/condenser water just like any other steam plant. But only 250 MW’s worth of cooling water is needed for a 750 MW GT+GT+ST combined-cycle plant. The actual numbers are a bit more complex to calculate, but I hope you get the point.

Reply to  RACookPE1978
September 6, 2018 4:51 am

Thx for the explanation. But I can see the results [of more nuclear]. Some have reported an increase in growth, both in size and numbers, of the fish around the plant when it is using river water as cooling water….

Russ Wood
Reply to  henryp
September 6, 2018 2:55 am

Well, yes, henryp – I believe that there IS a shortage of fish around The Cape. But maybe all of the uncontrolled (mainly Korean) fishing offshore, and the poaching inshore, just MIGHT have some effect on the populations?

Reply to  Russ Wood
September 6, 2018 4:41 am

No. A lot of fish cannot stand the warmer water.

fretslider
September 4, 2018 2:53 am

Exoplanets… fantastic

And on current technology at least 40,000 years away.

Reply to  fretslider
September 4, 2018 3:04 am

We have plenty of time. We still have to finish the search for intelligent life on this planet… /Sarc

Gordon Dressler
Reply to  David Middleton
September 4, 2018 8:38 am

“We still have to finish the search for intelligent life on this planet.”

News flash . . . we found it, but it is dying off rapidly.

richard verney
September 4, 2018 3:01 am

This is a very interesting article and demonstrates the miss use of statistics. But the problems are even deeper rooted, since it is absurd to claim that there is GLOBAL data going back to the 19th century. Even in the mid-1950s, the Southern Hemisphere data, at any rate that south of the tropics, is largely made up, as Phil Jones so candidly noted in the Climategate emails. In truth, we only have worthwhile data covering the Northern Hemisphere.

Then the data has been so heavily massaged, with adjustments exceeding 1 degC, as to render it worthless for scientific scrutiny and study. An example: a couple of months back, Willis reviewed the BEST data set to see whether the 20 largest volcanic eruptions could be found in the data, despite the scientific consensus that volcanoes have a material impact on temperatures. I suggested that the reason one could discern the impact of the largest 20 volcanic eruptions on temperature was due to the adjustments/homogenisation of the data rendering it worthless.

At the moment Tony Heller is running an article, “Close enough for Government work”. It is well worth reading, since it is right on point:
https://realclimatescience.com/2018/09/close-enough-for-government-work/

The National Climate Assessment (https://science2017.globalchange.gov/chapter/6/) has the below graph, which shows how much hotter the US used to be.
[image]

I will set out a couple more of his plots (but read the article for more detail).
[image]

And to show the impact of adjustments:
[image]

The truth of the matter is that we have no real idea as to the temperature of the planet. All we can say is that it has warmed since the depths of the LIA and that there are large amounts of multidecadal variation; we do not know whether the planet today is any warmer than it was in the 1940s, or for that matter the 1980s.

MarkW
Reply to  richard verney
September 4, 2018 6:54 am

Even claiming we have useful information for the Northern Hemisphere is a bit much.

It’s more like we have useful information for the Eastern US and most of Northern Europe.
It gets pretty spotty outside those regions.

David Chappell
Reply to  richard verney
September 4, 2018 7:37 am

“…the miss use of statistics…”
I thought that was more on the lines of 39-24-36.

[The mods point out that example may be a miss using statistics, and mrs’ing her target. .mod] …

Jim Masterson
Reply to  richard verney
September 4, 2018 1:40 pm

>>
I suggested that the reason one could discern the impact . . . .
<<

I think Mr. Verney meant “could not discern.” His statement would then make more sense in context. I agree with the points he’s making.

Jim

Editor
September 4, 2018 3:17 am

The true answer to “Why does Climate Science report temperature anomalies instead of just showing us the temperature graphs in absolute degrees?” is…

Thermometers aren’t scary enough.

[image]

That said, there is a scientific basis for using anomalies rather than absolute temperature values. It’s the best way to evaluate tiny variations in a highly variable time series, with a baseline 15 to 30 times the size of the anomalies. It’s similar to the rationale for logarithmic scales.

However, the Climatariat clearly do it to make a minuscule number look very big. If geology was done by the Climatariat, no photos would ever include a lens cap for scale… 😎

Reply to  Kip Hansen
September 4, 2018 7:59 am

Top Ten Signs You Might Be A Geologist:

10. You have ever had to respond “yes” to the question, “What have you got in here, rocks?”

9. You have ever taken a 15-passenger van over “roads” that were really intended only for cattle

8. You have ever found yourself trying to explain to airport security that a rock hammer isn’t really a weapon

7. Your rock garden is located inside your house

6. You have ever hung a picture using a Brunton as a level, and your rock hammer as your hammer

5. Your collection of beer cans and/or bottles rivals the size of your rock collection

4. You consider a “recent event” to be anything that has happened in the last hundred thousand years

3. Your photos include people only for scale and you have more pictures of your rock hammer and lens cap than of your family

2. You have ever been on a field trip that included scheduled stops at a gravel pit and/or a liquor store

And the #1 sign you might be a geologist:

1. You have ever uttered the phrase “have you tried licking it” with no sexual connotations involved

http://groups.colgate.edu/geologicalsociety/Features/Geology%20Jokes.htm

😉

eyesonu
Reply to  David Middleton
September 4, 2018 10:25 am

I got 5 out of 10. So am I half a geologist?

Ian Magness
Reply to  David Middleton
September 4, 2018 10:54 am

OMG – guilty as charged and I left geology related employment decades ago. Amateur geology highlights include having to hand over a carefully packed box of rock samples to the Irish army at a roadblock near the Northern Ireland border (they thought the heavy box could contain armaments) and, much more recently, being prevented from taking a cracking layered igneous pebble (destined for inclusion in my garden wall) in my hand luggage on a flight from Jersey as the officers believed I could use it as a weapon on board.

Reply to  David Middleton
September 4, 2018 8:41 am

To make it even less scary, convert it to Kelvin and include zero in the scale.

richard verney
September 4, 2018 3:28 am

I had desired to post something much more detailed but if one sets out a number of citations then the comment is chucked into moderation.

It is well worth having a look at the recent National Climate Assessment Report, and in particular Chapter 6.1.2 Temperature Extremes. I will set out some of their plots which demonstrate the misuse of statistics.

See figure 6.4:
[image]

Note the large number of warm days and hot spells in the 1930s, and also the large number of cold days in the 1930s.

Now look at how they combine this data (in figure 6.5) to show the number of extremes to give the impression that extremes were very minimal in the 1930s and the climate is far more extreme today:
[image]

Tony Heller has done a very good video discussing the extremes of 1936.

John Garrett
September 4, 2018 3:41 am

Thanks, Kip.

You made my day.

Bloke down the pub
September 4, 2018 4:08 am

I don’t wish to put words into Gavin’s mouth but I’m pretty certain that he’d say that if the uncertainty was wider, it’d mean that things could be worse than we thought. He does, after all, like to look on the gloomy side.

Scott Bennett
September 4, 2018 4:14 am

Good work Kip, I enjoy your writing particularly.

I’ve long been struck by the fact that the reason given in Meteorology for the use of anomalies is that they provide more useful information about a particular place – against the background of its local climate – than absolute temperature, which does not indicate that information directly.

That this “usefulness” now extends to the globe* has always seemed a rather anomalous usage to me! 😉

*The use of local anomalies from vastly different climates to calculate a single global temperature.

RACookPE1978
Editor
Reply to  Kip Hansen
September 4, 2018 9:00 am

Kip Hansen, replying to Scott Bennett (Adding Joe Bastardi to the conversation search)

Scott ==> Thanks … Anomalies don’t provide more useful information… a properly scaled graph, maybe with a bit of smoothing, shows the data and, if informed with the true uncertainty, lets us see what is going on with that metric. If the data has a 1°C-wide uncertainty, then the public MUST be shown that, and it must be explained so they understand what the uncertainty means in practical terms.

No, I will politely disagree with you there.

Anomalies can be very, very useful. But the entire calculation of the anomaly MUST BE considered: the purpose of using the anomaly (instead of the actual temperature itself), the temperature error analysis and its std deviation, and the anomaly’s error analysis and its std deviation.

This generalization will be true for all anomalies, not just temperature – but the CAGW climate community has seized on the difference of a Single Temperature Anomaly from what they have chosen as the “Global Average Temperature” (for a flat plate earth irradiating a uniform average atmosphere in a constant orbit around a constant average sun), so that classroom environment is their world. So let’s use the Global Average Temperature as they do. More accurately, the Global Average Temperature Anomaly (difference).

If all of the world’s surface (air) temperatures were accurately known for each hour of each day of the year.
If those surface temperatures were accurately recorded for a sufficient number of years so each hour’s “weather” could be averaged for a sufficient number of seasons.
If those seasonal average daily temperatures did accurately reflect the local climate (not local ever-changing urban-suburban-asphalt-concrete-farming-forests-fields-brush and woodlands and deserts and beach conditions and ocean and sea conditions).
Then you could average together sufficient local hourly records to calculate that thermometer’s average daily (seasonal) changing temperature.
Then you could subtract the hourly measured temperature from the long-term seasonal hourly average and claim you have generated one anomaly. For one place, for one location of that specific thermometer – assuming (as above) that the local environment around the thermometer has not affected your recent temperature measurements.
Given enough accurate hourly temperature anomalies from enough locations worldwide, theoretically you now have a global average temperature ANOMALY. (Not global average temperature for that hour, but the theoretical hourly temperature difference from your assumed local standard hourly temperature. )
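(A minimal sketch of that bookkeeping for a single station, with invented numbers, just to make the procedure concrete:)

```python
from statistics import mean

# hour-of-day -> past years' temperatures for this calendar date (invented)
history = {
    12: [14.1, 13.8, 14.5, 14.0, 14.3],
    13: [14.9, 14.6, 15.2, 14.8, 15.0],
}
observed = {12: 14.9, 13: 15.6}  # this year's measurements at the same hours

for hour, past in history.items():
    climatology = mean(past)                # the long-term average for this hour
    anomaly = observed[hour] - climatology  # one local hourly anomaly
    print(f"hour {hour}: climatology {climatology:.2f} C, anomaly {anomaly:+.2f} C")
```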

But that’s the problem – not just all of the assumptions about measuring each hour’s temperature accurately, but the assumptions about what that entire process requires. Including the fundamental assumption that the earth (globally) was in a stable thermal equilibrium at some global average temperature at some time in the past.

The earth has never been at thermal equilibrium – it can’t be because there is no natural “thermostat” set by an Infinite Mother Nature. Rather, the earth continuously cycles from a “too hot” condition (when it loses more heat to space than is gained for that period of time and thus cools), through an unstable transient period somewhere near the average of “too hot” and “too cool”, towards a period when too little heat energy is being lost to space (and thus the received energy is more than is being radiated to space.)

Like a swing whose “average speed” is only momentarily ever measured “at average” – but whose “average speed” can be constantly measured at always changing values; whose height is always changing but which only momentarily is ever at its “average height”; and whose average potential energy is thus also always changing; and whose average kinetic energy is also ever-changing – you can best discuss its state by using the anomaly (position, speed, mass, velocity) of that swing from the “instantaneous expected perfect state.”

Go in the classroom, and you can perfectly calculate every perfect theoretical piece of information you wish. Then determine a difference from that perfect theoretical state, then write a paper about the anomaly, the value of that anomaly, and the trend of that anomaly into the infinite future.

But the actual state of even the edge of that swing in the real world? Can’t predict it even above the molecular (much less atomic level.) Too many real world interferences such as wind, air friction, pivoting friction, motion and rocking of the impulse pushing the swing, movement of the body on the swing, changes in mass of the swing, its chain, the body on the swing, and the friction between every link of every point on the chain changing due to wear and air friction.

The myth of the climatologists is that they CAN predict the far future of the earth’s weather by ignoring all of the small parts of each of these events and concentrating on the global averages of events, yet they pretend they are calculating every effect by focusing on the individual dust particles everywhere, and on the individual CO2 concentrations as they control the global average clouds, global average humidity, and global average pressure.

Reply to  RACookPE1978
September 4, 2018 6:29 pm

“but the CAGW climate community has seized on the difference of a Single Temperature Anomaly from what they have chosen as the “Global Average Temperature” (for a flat plate earth irradiating a uniform average atmosphere in a constant orbit around a constant average sun)”

No, that is exactly what they don’t do, and I keep explaining why. What you describe would be quite wrong. Instead they form anomalies locally, by subtracting local climatology (averaged from that environment) from each region, before any spatial averaging. They avoid computing a global average temperature.

Scott Bennett
Reply to  Kip Hansen
September 5, 2018 4:24 am

Kip ==> Sure, I don’t disagree because I see what you are saying… it is even more complicated than has been explicated in comments here and the errors are large even at the local climate stage of anomaly preparation/homogenisation.

I also have many more concerns about the use of anomalies but one thing that hasn’t been discussed here yet, is their unequal application.

To restate, the problem with the use of anomalies globally, is that their relevance is unequally represented.

The greatest variation – in terms of anomalous temperature – occurs in the temperate zones, which just happen to be where the vast majority of the world’s population resides (especially in the Northern Hemisphere, due to its greater land mass), while the least anomalous zones** – the oceanic (surrounded by sea), the tropical/subtropical, and the cold/polar regions – are often also the most sparsely measured!

To repeat: the most thoroughly and carefully monitored places on Earth also happen to be the most anomalous!

I fear that this weighting is not being accounted for adequately and therefore any result* will carry significant bias.

*Global average
**Particularly in the Southern Hemisphere

September 4, 2018 4:15 am

“reducing the uncertainty of annual Global Average Surface Temperatures by a whole order of magnitude”
Well, we’ve been through all that before. Yes, if you subtract the variable that incorporates most of the uncertainty (location/seasonal mean), you know the remainder much better.

But there is a clear contradiction in the broad fuzz that is supposed to be the alternative fact. Expected variability actually means something. It means that you expect to see randomness of that amplitude. And you simply don’t see it. It isn’t there. There is nothing like variability of the order of ±0.5°C (sd). The graph is self-disproving.
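A toy version of that test (numbers invented): if each annual value really carried an independent ±0.5°C (sd) error, adjacent years would scatter by about 0.7°C (sd), and the published series is far smoother than that.

```python
import random

random.seed(0)
years = 40
smooth = [0.01 * y for y in range(years)]           # a slow trend, deg C
noisy = [t + random.gauss(0, 0.5) for t in smooth]  # add sd = 0.5 C random error

def year_to_year_sd(series):
    d = [b - a for a, b in zip(series, series[1:])]
    m = sum(d) / len(d)
    return (sum((x - m) ** 2 for x in d) / (len(d) - 1)) ** 0.5

print(f"with 0.5 C random error: {year_to_year_sd(noisy):.2f} C")   # ~0.7
print(f"smooth series:           {year_to_year_sd(smooth):.2f} C")  # ~0.0
```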

Bruce Cobb
Reply to  Nick Stokes
September 4, 2018 5:25 am

Maybe you just need to remove your special Warmunist goggles, which filter what you don’t want to see.

Reply to  Nick Stokes
September 4, 2018 5:27 am

The absolute errors in any measurement do (almost always) follow a statistical curve, with the true value being in the most probable region of that curve.

But you CANNOT change the shape of that curve with ANY mathematical operation. If, as Schmidt claims, his anomalies are within the 95% region at 0.1C – this is identical to a claim that his absolute measurements are also within the 95% region at 0.1C.

Does he claim this? No. Which leads to a simple binary conclusion – statistical incompetency, or pseudo-statistical fraud.

Reply to  Kip Hansen
September 4, 2018 1:45 pm

“overlaying a statistical construct on top of a measurement”
You don’t measure a global average. You calculate it. And you calculate the effect of the measurement variability on the calculated average. That is statistics.

Jim Masterson
Reply to  Nick Stokes
September 4, 2018 2:24 pm

>>
That is statistics.
<<

That may be statistics, but there’s no way to calculate a global average using physics. Temperature is an intensive thermodynamic property of a system. It applies to systems that are in equilibrium.

Jim

Reply to  Jim Masterson
September 4, 2018 3:41 pm

Jim,
“That may be statistics, but there’s no way to calculate a global average using physics.”
It’s the only way to calculate an average of anything. Temperature applies to any system with local thermodynamic equilibrium. If you don’t have that, you can’t get a temperature in the first place. If you do (and we do), you can average.

Jim Masterson
Reply to  Nick Stokes
September 4, 2018 4:36 pm

>>
If you don’t have that, you can’t get a temperature in the first place. If you do (and we do), you can average.
<<

Okay. I have two beakers of hot water. One’s at 50 degrees Celsius and the other is at 80 degrees Celsius. The beakers each contain different amounts of water. I pour both beakers into a third beaker that can contain all the water. What is the final temperature of the water after the temperature equalizes (assuming no heat loss or gain)? And just for grins, what’s the temperature of the water mix halfway through the process?

Jim

Reply to  Jim Masterson
September 4, 2018 4:56 pm

Jim
“The beakers each contain different amounts of water.”
That then is the sampling issue. If you want to say that the final temperature is the average of the water before (reasonable), you have to decide how to sample that, so the sample average reflects the final. Each beaker is presumably homogeneous, so you can get those averages easily. But between the beakers the water is not homogeneous, so you have to sample in correct proportions – ie weighted according to mass in each beaker. Then you’ll get the right answer. Standard stats. Or, you could say, averaging by volume integration.

And if you have combined them with limited mixing, again the average is the same, subject to proper sampling (which would be hard, since it is changing). The sampling issue diminishes as you mix, decreasing inhomogeneity.

Jim Masterson
Reply to  Nick Stokes
September 4, 2018 5:50 pm

>>
That then is the sampling issue.
<<

Heh, and where is my answer? You got two temperatures, each with an exact LTE temperature. I even gave you a final LTE condition for the answer. The in-between value resembles the atmosphere. The atmosphere is never in equilibrium.

>>
The sampling issue diminishes as you mix, decreasing inhomogeneity.
<<

So no temperature with hundredths of a degree precision. How about tenths of a degree? Maybe even within a degree? I’m flexible here.

Jim

Reply to  Jim Masterson
September 4, 2018 6:16 pm

Jim
“Heh, and where is my answer?”
As I said, more information needed, namely, the masses of the two beaker contents. If it’s 200 gm at 50C, and 100 gm at 80C, then the answer is (50*200+80*100)/300 = 60C. That is just properly weighted sampling, or, equivalently, volume integration.

The in-between average is also 60C. We know that from conservation. Sampling needs care then, and will introduce some error. But not much, and diminishing as mixing proceeds. A few well-placed thermometers would get it very accurately, as on Earth.
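(That weighting, spelled out as a sketch with the same numbers; equal specific heats and no heat loss assumed, per the conditions stipulated above:)

```python
def mixed_temperature(parcels):
    """Mass-weighted average temperature of water parcels, assuming equal
    specific heats and no heat gain or loss."""
    total_mass = sum(m for m, _ in parcels)
    return sum(m * t for m, t in parcels) / total_mass

# 200 gm at 50 C plus 100 gm at 80 C
print(mixed_temperature([(200, 50.0), (100, 80.0)]))  # 60.0
```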

Jim Masterson
Reply to  Nick Stokes
September 4, 2018 7:46 pm

>>
As I said, more information needed . . . .
<<

Finally we get to the crux of the situation. Yes, you need more information, as the rest of us knew from the beginning. But when doing the Earth’s atmosphere, you’re fine with the information I gave you. There is no way you can “calculate” a global temperature from the few thermometers we have access to. AND it is impossible, to boot.

>>
We know that from conservation.
<<

No we don’t. You don’t know how I mixed the water. I could have poured all the water from one beaker in first and then all the water from the other beaker–or any combination and order from the two beakers. You’re assuming I poured both in simultaneously. The midway case is totally unknown and totally incalculable.

>>
A few well-placed thermometers would get it very accurately, as on Earth.
<<

Phooey!

Jim

Reply to  Jim Masterson
September 4, 2018 8:54 pm

“The midway case is totally unknown and totally incalculable.”
No. There are x Joules in the water before mixing, and density and specific heat are assumed uniform (OK, you can correct for temperature if you want). The average T before, during and after mixing is x/(ρcₚV). Confirming that by measurement is a matter of sampling. Simple before, with correct mass weighting. Simple after, if well mixed. Needs care in between, but can be done with good sampling.

Jim Masterson
Reply to  Nick Stokes
September 4, 2018 11:03 pm

>>
Simple before, with correct mass weighting. Simple after, if well mixed. Needs care in between, but can be done with good sampling.
<<

You don’t know how much water is in either beaker, so you can’t calculate the final temperature. There’s no sampling during or after–we’re supposed to calculate all that. The mixing action is obviously chaotic. There’s not enough sampling in the Universe to capture all of that. What a dream-world you live in.

Jim

RACookPE1978
Editor
Reply to  Jim Masterson
September 4, 2018 7:20 pm

A few missing pieces of information:
What is room temperature, and air velocity in the room?
Is the room substantially larger than the three containers?
What are the material coefficients of the containers, their three masses, and each surface area and wall thickness of the three containers?
Is the wall thickness constant across all surfaces of all three containers?
Are the three containers insulated on the bottom from the table? (If so, I will ignore conduction losses to the unnamed, unspecified table top.)
Are the two beakers poured into the third from a height that generates a spray and droplets, or are they poured in a continuous steady stream?
Is the top of the containers closed, open, or insulated?
What is the time from start to stop of the evolution? (If short, I will ignore evaporation losses from the liquids. If not, what is the relative humidity in the room?)
Are the room walls at “room temperature”? (If so, I will ignore radiation losses.)

Unless otherwise notified, I will assume pure water, if that’s an adequate assumption.

Reply to  RACookPE1978
September 4, 2018 7:43 pm

RACook, you forgot about altitude above MSL, barometric pressure, and the orientation of the room with respect to the magnetic field of the earth. The presence or absence of Leprechaun farts is not important.

RACookPE1978
Editor
Reply to  David Dirkse
September 4, 2018 7:55 pm

Nah. Covered relative humidity, so that will pick up the pressure effect on temperature coefficients of the assumed pure water convection and evaporation, won’t it? If we’ve got relative humidity, would the evaporation effects of water at 80 deg C change with absolute pressure of the atmosphere?

Yes, I will have to assume outside cosmic radiation influenced by the regional magnetic field and latest solar flares is too small to measure. Good point! Thank you -> Prompt, Proper, Polite, Public Peer review is essential.

Leprechaun farts? Got to think about that. Those do flow with the ley lines above the pot of gold, ebbing as the exchange rate varies.

And Stokes was only worried about the mass of the two water volumes and the thermal mass of the thermometers! Guess he didn’t think of remote IR thermometers = no thermal influence on the object measured, if the proper emissivity is chosen for calibration beforehand.

Reply to  RACookPE1978
September 4, 2018 8:48 pm

“And Stokes was only worried about mass of the two water volumes, thermal mass of the thermometers!”
It’s nothing to do with thermal mass of a thermometer. And all your nonsense about humidity etc is irrelevant. Surface temperature measurements on earth have nothing to do with possible variations during a physical mixing experiment. The question only makes sense in this context in terms of assessing the average temperature of a mass of water, as expressed after mixing. And I describe what you have to do to calculate that average temperature.

Jim Masterson
Reply to  RACookPE1978
September 4, 2018 11:06 pm

>>
RACookPE1978

A few missing pieces of information:
<<

Did you really miss the point of my example? It’s obvious Mr. Stokes did (or pretends he did).

Jim

RACookPE1978
Editor
Reply to  Jim Masterson
September 5, 2018 4:01 pm

Yes, I most likely did – Taking it near-seriously at one point. Sorry about that.

Then again, it IS easier to “measure the real world” rather than ASSUME you have accounted for all of the approximations needed to “calculate” the approximations necessary in even beginning the calculations for dynamic heat transfer. For one example, on a separate engineering web site, we are debating the change in heat transfer for a 100 deg C fluid in a vertical tank with round corners or square corners, and that does not even take into account the extra cost of fabricating the rounded corners and insulating them, compared to a simpler/cheaper square corner. (That Original Poster still has not answered what the air flow through the room is – another part of the problem.)

In another example, I DID “measure” the surface temperature at each 50 mm from the end of a horizontal mild steel bar 25 mm x 25 mm suspended in mid-air in the 23 deg C workshop with still air (natural convection only) with one end heated to 1350 deg F by an oxy-acetylene torch. The measurements over 45 minutes by IR thermometer at one minute intervals down the steel bar did correspond closely (+/- 15 deg) with the theoretical dynamic values for the same points over the same period. Calculations “can be” correct, but ONLY if all of the conditions are known correctly, if ALL of the correct equations for the physical coefficients for the physical conditions are correctly approximated, and if the margin of error between the real world and calculated approximations are acceptable.

Your “beakers”, for example, do almost certainly have rounded corners, are most likely made of un-insulated glass or Pyrex, and most likely are not receiving solar IR if they are indoors in a “room temperature” lab. But are the walls cylindrical, or do they have a conical shape? How far up the walls of the three beakers does the initial water go? 8<)

Jim Masterson
Reply to  RACookPE1978
September 5, 2018 4:43 pm

>>
[Y]es, I most likely did – Taking it near-seriously at one point.
. . .
Your “beakers” for example . . . .
<<

I did state that no heat was gained or lost–and if you assume that no matter was gained or lost then we are dealing with an isolated system.

In any case, I apologize for being snippy.

It was obvious to you, to me, and even to Mr. Stokes that more was needed to answer that problem correctly. Yet Mr. Stokes claims that with a few randomly placed thermometers we can calculate the average temperature of the entire surface (atmosphere?) of the Earth. And that we can start with a set of temperatures with one-degree precision and obtain hundredths of a degree accuracy. It does boggle the mind.

Jim

RACookPE1978
Editor
Reply to  Jim Masterson
September 5, 2018 6:12 pm

Jim Masterson

Ah, but dear sir! Mass IS lost (on a real-world basis) if no cover/lid is assumed on the beakers!

And Energy IS LOST from the moment the “experiment” begins, even if the original two beakers are plugged up and have no evaporative/convective losses as the vapors exchange:

Heat energy is lost from the 80 deg C water through the water-film barrier to the first beaker wall (probably Pyrex), to the outer air-film wall, to the (probably natural convection) heat transfer to the (near-infinite) room air and room walls (include radiation losses here to the room walls-floor-ceiling, each with a different view factor and probably a different emissivity as well), plus conduction losses to the table (probably at room temperature at t=0) through the beaker base plate (if not perfectly insulated).

OK, so now you have the dynamic heat exchange for beaker_80 deg C (at t=0) to the room environment.
Repeat for beaker-50 deg C.
Assume the beaker-final started at room temperature on a table at room temperature … And that is just some of the initial conditions!

IF you are going to “calculate the final temperature of …” then you CANNOT remove ANY simplification to the process, UNLESS you also remove your calculation from any relevance to the real world! That is what I was trying to say to the engineer asking about the rounded edges of his “tank holding 100 deg C water”: What difference does it make?
What is important? Energy lost, energy potentially saved?
Total cost?
“Keep the water at 100 deg C regardless of expense and material!!!”
“NEVER let the water heat up to 100 deg C regardless of cost, time, money, material, investment, instruments!!!!”
Or even, what difference does it make if I make the tank out of a round pipe with flat ends? (As long as I keep a vent on the tank so it can never turn into an unlicensed, deadly pressure vessel.)

Reply to  Kip Hansen
September 4, 2018 3:57 pm

“The average of 2, 8, and 10 is 10. There are no probabilities involved.”
Not statistics as I learnt it. An average is a statistic. But the point is that the dependence of the mean on the variation of the data is certainly a matter of statistics. And probability is basic to the notion of error.

Reply to  Kip Hansen
September 4, 2018 4:03 pm

Kip,
“His GAST figure has an uncertainty range, because the measurement natively had that range — it must remain.”
Not true. The uncertainty of GAST comes mainly from the sampling error (choice of locations) amplified by the great inhomogeneity of absolute temperature (altitude etc). It doesn’t come from the measurement error at an individual location. The reason the anomaly average error is so much lower is that the anomalies are much more homogeneous.

RACookPE1978
Editor
Reply to  Kip Hansen
September 4, 2018 4:20 pm

Kip Hansen

Averages, means, etc have calculations from measurements — they are not inferences on the basis of probability.
Mixing the two fields is where scientists get into trouble.
The average of 2, 8, and 10 is 10. There are no probabilities involved. …

Pretending that uncertain measurement data can be magically reduced to precise anomalies is FALLACY – and a misuse of statistics to produce unjustifiable certainty about uncertain data.

Check the arithmetic in your example, please: (2+8+10)/3 = 6.66 Not 10. 8<) Ask the mods to edit, if you wish.

Reply to  Kip Hansen
September 4, 2018 7:56 pm

Kip, you are flat out wrong. The average of all the measurements is a statistical estimator of the GAST, and being an estimator there is a “probability” of it being correct. As you well know, the probability of its “correctness” can be increased with increasing numbers of observations that go into calculating the estimator. I hope everyone is aware that you cannot measure the Earth’s surface temperature; we can only estimate it based on a multitude of distinct individual thermometer readings in space and time. Please tell me, Kip, that I don’t need to go into a discussion with you about the relationship between population means and sample means.

lee
Reply to  Nick Stokes
September 4, 2018 10:52 pm

And IF you are NOAA you can assume to two decimal places. Despite climate dot gov saying this –

“Across inaccessible areas that have few measurements, scientists use surrounding temperatures and other information to estimate the missing values.”

Javier
Reply to  Nick Stokes
September 4, 2018 6:45 am

I agree that what we are measuring, within uncertainty, corresponds to what is happening. Many proxies agree with the periods of warming and cooling over the past 170 years, indicating that they are real.

And all that debate about the right or wrong databases is just silly. They are all measuring essentially the same thing.

[image]

What people have trouble understanding is that a difference of ±0.5°C is very little on a daily or monthly scale, but it is huge on a millennial scale. The 6000-year Neoglacial cooling has taken place at a rate of ~ 0.2°C/millennium.

Javier
Reply to  Kip Hansen
September 4, 2018 8:17 am

I disagree that the uncertainty is as high as you represent it. If that were true, we should see much bigger oscillations and a much bigger spread in the measurements. The uncertainty is probably half that.

MarkW
Reply to  Kip Hansen
September 4, 2018 12:27 pm

The problem with the global uncertainty is that we have several orders of magnitude too few stations to accurately portray the planet’s temperature, even if we had a way to perfectly measure the temperature at the stations we do have.

Reply to  Kip Hansen
September 4, 2018 2:13 pm

“Here we are talking Original Measurement Uncertainty — which Dr. Gavin Schmidt and I agree is properly represented as +/- 0.5°C”
You are misrepresenting Gavin. He does not say (there) that 0.5 is Original Measurement Uncertainty. He says it is the expected error of a global average of climatology (time averaged temperature). It isn’t the error in individual temperature readings. The expected error of a global anomaly average is something quite different again.

Reply to  Kip Hansen
September 4, 2018 5:23 pm

Kip,
“I can’t read his mind”
One can read the link, though. He says:
“However, and this is important, because of the biases and the difficulty in interpolating, the estimates of the global mean absolute temperature are not as accurate as the year to year changes.”
He does not mention thermometer error. Biases relates to things like latitudinal difference, and interpolation to variations on the interpolation scale, such as topography and land/sea boundary.

lee
Reply to  Nick Stokes
September 4, 2018 11:23 pm

You mean like reader error? Like the Australia post employee who was reading temperatures from the wrong end of the slide? Reported in The Australian.

Reply to  Kip Hansen
September 4, 2018 1:55 pm

Kip,
“it has been smoothed out by prior rounding of daily then monthly then annual averages”
Well, yes, that is the point. Averaging, whether over time or space, reduces variability. That is often the reason for seeking an average. It isn’t added smoothing; it is intrinsic.

“You’ll have to take this up with Dr. Schmidt — the +/- 0.5°C is his (and I thoroughly agree).”
I agree too. The 0.5 is the error you would expect in an average of absolute temperature. It comes from the lack of knowledge of the average climatology – the spatial average of the time averages. What GISS, and everyone else with sense, calculate is the average anomaly. That doesn’t have that uncertainty. You don’t calculate the average absolute, then subtract the average climatology – that can’t work. You subtract the local climatology (not subject to the 0.5 uncertainty) from the local temperature (==> anomaly), then average. That is Gavin’s point, long made by GISS and others. One way has big errors, the other doesn’t. They use the right one.
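A toy illustration of why the order of operations matters (stations and numbers invented): when the station mix changes, the average of absolutes jumps for purely compositional reasons, while the average of locally formed anomalies does not.

```python
from statistics import mean

climatology = {"valley": 15.0, "mountain": 2.0}  # long-term station means, deg C
reports = [
    {"valley": 15.5, "mountain": 2.5},  # month 1: both stations, each 0.5 C warm
    {"valley": 15.5},                   # month 2: mountain station missing
]

for month, obs in enumerate(reports, 1):
    absolute_avg = mean(obs.values())
    anomaly_avg = mean(t - climatology[s] for s, t in obs.items())
    print(f"month {month}: absolute {absolute_avg:5.2f} C, anomaly {anomaly_avg:+.2f} C")
# The absolute average jumps from 9.00 to 15.50 just because the station mix
# changed; the anomaly average stays at +0.50 in both months.
```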

Reply to  Kip Hansen
September 4, 2018 5:39 pm

Kip,
“he subtracts the Annual GAST from the Climatic Mean — not some local climatology statistic — to find the Annual Anomaly.”
He may do that for working with reanalysis data. You can do that there because the values are on a regular grid which never changes, so whether you subtract the local or global climatology gives exactly the same result. But with GISS and other GAST products (including TempLS), it isn’t the same. There is always a different mix of stations each month. That means the climatology average (spatial) would change each month. You could calculate that, but simpler and equivalent is to subtract the climatology locally (station-wise) to get a local anomaly before averaging. This is fundamental, and all products do it. Well, almost all – USHCN used to average absolute temperatures, but to make that work they had to interpolate missing values, so that they could fulfil the alternative requirement of having an unchanging set of sample locations. Not optimal.

There is no claim that local climatology is free of uncertainty. What matters is that the spatial anomaly average is much less uncertain than spatial temperature average, as Gavin is explaining.
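A minimal numerical sketch of this station-mix point, in Python, assuming invented station climatologies and a made-up uniform 0.3 °C anomaly (every name and value here is hypothetical): the average of absolute temperatures moves with whichever stations happen to report, while the average of local anomalies does not.

import random

random.seed(0)

# Hypothetical stations and their local climatologies, in deg C
climatology = {"alpine": -2.0, "coastal": 12.0, "tropical": 26.0, "desert": 21.0}
true_anomaly = 0.3  # suppose every station runs 0.3 C above its own climatology

for month in range(3):
    # a different subset of stations reports each month
    reporting = random.sample(sorted(climatology), k=random.choice([2, 3, 4]))
    absolutes = [climatology[s] + true_anomaly for s in reporting]
    anomalies = [t - climatology[s] for s, t in zip(reporting, absolutes)]
    # the absolute average depends on the station mix;
    # the anomaly average recovers 0.3 regardless
    print(reporting,
          round(sum(absolutes) / len(absolutes), 2),
          round(sum(anomalies) / len(anomalies), 2))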

Richard of NZ
September 4, 2018 4:20 am

Why poor Pluto? If he had not kidnapped Proserpine and fed her pomegranates there would be no winter, just balmy comfortable temperatures and continuously growing crops. I feel it is Pluto we have to blame for all of our problems.

/grins

Tom Abbott
Reply to  Kip Hansen
September 4, 2018 3:29 pm

Pluto wasn’t demoted because it was too small, it was demoted because it hasn’t cleared its orbit of other objects.

The Earth hasn’t cleared its orbit of other objects either, yet we call it a planet. Poor Pluto is right! As far as I’m concerned, Pluto is still the Ninth planet. 🙂

commieBob
September 4, 2018 4:30 am

It seems that most scientists think that if you average enough samples, you will reduce uncertainty and get a more accurate result.

It’s common to call our measurements the signal and to call the errors noise. If the errors are truly random then the assumption, that averaging reduces uncertainty, is correct. One of my favorite demonstrations is to extract a small signal from data where the noise is a hundred times as strong as the signal. It’s a powerful technique.
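A minimal sketch of that kind of demonstration, assuming the errors really are independent and zero-mean: a repeating signal one hundred times weaker than the noise emerges once enough sweeps are averaged.

import math
import random

random.seed(1)

period = 50                # samples per repetition of the signal
amplitude = 0.01           # signal is 100x weaker than the unit-variance noise
signal = [amplitude * math.sin(2 * math.pi * k / period) for k in range(period)]

sweeps = 100_000
acc = [0.0] * period
for _ in range(sweeps):
    for k in range(period):
        acc[k] += signal[k] + random.gauss(0.0, 1.0)   # add one noisy sweep

avg = [a / sweeps for a in acc]
# residual noise ~ 1/sqrt(sweeps) ~ 0.003, below the 0.01 signal,
# so the sine shape is now visible in avg
print(round(max(avg), 3), round(min(avg), 3))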

The problem comes when the noise is not truly random. Then we can have red noise. If you play red noise and random noise over a speaker you can hear the difference. The red noise has more low frequencies and fewer high frequencies.

Natural processes tend to produce red noise. Averaging does not reduce uncertainty where there is red noise. link
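A small sketch contrasting the two cases, using an AR(1) process as a stand-in for red noise (the parameters are illustrative only): the spread of the mean of white noise shrinks like 1/√N, while for strongly correlated noise it shrinks far more slowly.

import random
import statistics

random.seed(2)

def spread_of_mean(make_series, n=1000, trials=400):
    """Empirical standard deviation of the mean over many simulated series."""
    return statistics.stdev(sum(make_series(n)) / n for _ in range(trials))

def white(n):
    return [random.gauss(0, 1) for _ in range(n)]

def red(n, phi=0.99):
    # AR(1) "red" noise: each value remembers most of the previous one
    x, out = 0.0, []
    for _ in range(n):
        x = phi * x + random.gauss(0, 1)
        out.append(x)
    return out

print("white:", spread_of_mean(white))  # ~ 1/sqrt(1000) ~ 0.03
print("red:  ", spread_of_mean(red))    # far larger; shrinks only slowly with n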

I’m not a statistician but I have lots of experience processing signals. I strongly suspect that the uncertainty of the global temperature is understated. The idea that any kind of processing can reduce that uncertainty doesn’t pass the smell test.

Just as James Hansen, with no rigorous mathematical justification, threw Bode feedback analysis at climate sensitivity, it seems that climate scientists have done the same with averaging. In both cases the method is unjustified. There should be a rule that, where statistics is invoked, a statistician should be involved.

commieBob
Reply to  Kip Hansen
September 4, 2018 8:47 am

I think the whole signal/noise thing is as misplaced as Bode feedback.

A lot of statistics was developed in the context of electronic communications, starting with the telegraph. That’s why the math texts refer to signal and noise. You can think of signal as error free data and noise as errors.

The main point is that it is very easy, and super tempting to misapply math, be it boxcar averaging or feedback analysis.

Early in my education, a friend’s thesis advisor told me that students tend to apply any old formula in situations where it totally doesn’t apply. By the end of my career, I realized that that unfortunate habit lasts the whole of some scientists’ careers. The good thing is that it gets crushed out of engineers’ souls within about a year of graduation. 🙂

skorrent1
Reply to  commieBob
September 4, 2018 7:03 am

With a background in mechanical, not electrical, engineering, I am willing to state categorically that I cannot measure an object with a ruler denominated in eighths of an inch (0.125in) 1000 times and pronounce its length to an accuracy of 0.001 in. Nor can I measure 1000 similar objects with that ruler and pronounce their difference to an accuracy of 0.001 in. I leave it to the readers to guess whether “averaging” thousands of readings of thousands of thermometers by thousands of different people, recorded to the nearest 0.5 K, can produce a meaningful “anomaly” of 0.01K. The characteristics of “signal” and “noise” be damned.
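Whether averaging beats the ruler's resolution turns on whether the readings vary between the ruler's marks. A minimal sketch of both regimes, with made-up numbers: repeated readings of one rigid object all round to the same mark, so averaging gains nothing; readings jittered by more than an eighth of an inch straddle several marks, and their average can land between them.

import random

random.seed(3)

QUANTUM = 0.125  # ruler marked in eighths of an inch

def read(length):
    """Round a true length to the nearest ruler mark."""
    return round(length / QUANTUM) * QUANTUM

true_length = 3.07

# Case 1: the same rigid object measured 1000 times -- every reading is
# identical, so the average never improves on a single reading.
fixed = [read(true_length) for _ in range(1000)]
print(sum(fixed) / len(fixed))     # 3.125, however many readings are taken

# Case 2: readings jittered by more than one quantum (varying alignment,
# say), so different marks get hit and the average can converge on 3.07.
jittered = [read(true_length + random.gauss(0, 0.2)) for _ in range(1000)]
print(round(sum(jittered) / len(jittered), 3))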

Alan Tomalty
Reply to  Kip Hansen
September 4, 2018 10:36 am

Any attempt to separate the noise from the signal will by necessity create an additional error.

commieBob
Reply to  Alan Tomalty
September 4, 2018 12:23 pm

If you follow that logic, your cell phone can’t possibly work.

As an example, consider the problem of extracting the tiny signal from the Voyager 1 satellite from the background noise. link

The reason folks think they can improve their uncertainty is that such things are possible given the right circumstances. My beef is with the scientists who don’t understand the ‘circumstances’ part.

LdB
Reply to  commieBob
September 4, 2018 9:08 am

The problem most are seeing is the obvious one: statistics are pointless without understanding what is underneath them. The classic is the average number of children per couple, which comes out as some fraction, 1.5 or 2.5 … everyone knows you can’t have a fractional child.

The problem with what they are doing is along the same lines. The system has feedbacks and delays and the force/driver they are measuring is a Quantum process (it is what makes radiative transfer difficult).

They may have convinced themselves that the statistics mean something but the laws of nature have a habit of making statisticians look stupid.

Reply to  commieBob
September 4, 2018 2:03 pm

“Averaging does not reduce uncertainty where there is red noise.”
Just not true, and your link does not say that. There are degrees of redness. If numbers are correlated, the uncertainty of the mean does not reduce by the simple OLS quadrature, but it does reduce. In fact, the covariance matrix simply enters the quadratic sum.
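In symbols (the standard result being invoked here): for stationary data with autocorrelations $\rho_k$, the covariances enter the variance of the mean, which therefore shrinks more slowly than the uncorrelated $\sigma^2/n$ rule, but still shrinks:

$\operatorname{Var}(\bar{x}) = \frac{1}{n^2}\sum_{i=1}^{n}\sum_{j=1}^{n}\operatorname{Cov}(x_i,x_j) = \frac{\sigma^2}{n}\left(1 + \frac{2}{n}\sum_{k=1}^{n-1}(n-k)\,\rho_k\right)$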

commieBob
Reply to  Nick Stokes
September 4, 2018 7:01 pm

Red noise usually implies a slow drift. In that case the uncertainty/error of the individual measurements may well be better (ie. lower) than that of the average of all the samples. Been there, done that, got the t-shirt.

JimG1
September 4, 2018 6:16 am

Kip,

Excellent.

JimG1
September 4, 2018 7:02 am

Just a retired engineer and amateur astronomer, but my confidence in the minuscule star wobbles in defining the very existence of a planet and ‘habitable zones’ definitions is low. Too many other variables affect habitability; stellar eruptions, solar wind, planet rotation, magnetic field, atmosphere and its density and composition and on and on. Most of the “earthlike” planet hype is just that, hype. But the actual search is great for expanding science. Much too much certainty is implied, however, in most of the hype.

Steve Reddish
Reply to  JimG1
September 4, 2018 8:54 am

“In astronomy and astrobiology, the circumstellar habitable zone (CHZ or sometimes “ecosphere”, “liquid-water belt”, “HZ”, “life zone” or “Goldilocks zone”) is the region around a star where a planet with sufficient atmospheric pressure can maintain liquid water on its surface.”

A distant planet is considered a good place to colonize by the above definition, once we ruin Earth by raising the temperature by a few degrees?

Earth would still have water on the surface if temperatures rose 10 degrees.

SR

Alan Tomalty
Reply to  Steve Reddish
September 4, 2018 10:42 am

How would earth get ruined even with a 10 C average increase? Most of the ice sheets would still not melt with that temperature rise. Inland Antarctica is way below -10 C even just 10 metres under the surface, even in the summer time.

Mark Whitney
September 4, 2018 8:24 am

Christy and Spencer report UAH results as anomalies as well. How does that figure into this analysis?

Pat Frank
September 4, 2018 8:31 am

I made a very similar case in my papers examining uncertainty due to systematic measurement error in the global air temperature record, here (900 kb pdf), and here (1 mb pdf).

Everyone in the field, GISS, UKMet/UEA, BEST, and RSS and even UA Huntsville for the satellite temps, assume measurement error is a constant that subtracts away in an anomaly.

That assumption is untested and unverified. As you directly imply, Kip, the entire field rests on false precision.

Alan Tomalty
Reply to  Pat Frank
September 4, 2018 11:02 am

Unfortunately the UAH data is the only temp data that both sides trust. Nick Stokes has said he doesn’t even trust that data set, but he has inadvertently argued countless times that it represents his alarmist viewpoint. That is because some skeptic will point out some UAH decrease and Nick will jump in arguing that the decrease is meaningless when you look at the overall trend of the UAH data. That proves that Nick Stokes is willing to accept the UAH figures. All the other alarmists implicitly accept that data as well, even though they are loath to admit it.

Unfortunately for us skeptics the satellite era of temp measurement began in 1979. That was a low point for actual global temp. We will have to live with that cherry-picked starting point. For now there is a definite short-term upward trend. However if there is no long-term trend then the data will sort itself out eventually.

The alarmists cannot really argue against the UAH dataset, because that would be saying that they think that Christy and Spencer are fudging the figures to make it look as if there will never be any warming. However if there really was CAGW and Christy and Spencer were fudging the figures, then the alarmists would have to argue that those 2 scientists would be putting all humanity at risk by doing that. What possible gain to Christy and Spencer would there be, to put all of humanity at risk? That assertion is ludicrous in the extreme.

The other side of the coin is not true however. When an alarmist argues for CAGW he is not putting all of humanity at risk by him being wrong. He is only condemning humanity to poverty by carbon taxes.

Tom Abbott
Reply to  Alan Tomalty
September 4, 2018 3:39 pm

Well, if Spencer and Christy are fudging the figures, then so are the operators of the weather balloons since both sets of temperature data are in agreement.

http://www.cgd.ucar.edu/cas/catalog/satellite/msu/comments.html

“A recent comparison (1) of temperature readings from two major climate monitoring systems – microwave sounding units on satellites and thermometers suspended below helium balloons – found a “remarkable” level of agreement between the two.

To verify the accuracy of temperature data collected by microwave sounding units, John Christy compared temperature readings recorded by “radiosonde” thermometers to temperatures reported by the satellites as they orbited over the balloon launch sites.

He found a 97 percent correlation over the 16-year period of the study. The overall composite temperature trends at those sites agreed to within 0.03 degrees Celsius (about 0.054° Fahrenheit) per decade. The same results were found when considering only stations in the polar or arctic regions.”

end excerpt

Pat Frank
Reply to  Tom Abbott
September 4, 2018 8:15 pm

I asked Roy Spencer about systematic error in the satellite measurements when I met him at the Heartland Institute. He agreed the systematic measurement error was ±0.3 C, but then said it subtracted away in an anomaly. It’s an untested assumption.

Agreement between measurements doesn’t dismiss uncertainty. Radiosondes have problems of their own.

Reply to  Pat Frank
September 4, 2018 8:18 pm

No, Frank, Spencer is correct: when anomalies are used, the systematic error is eliminated. It’s not an assumption, it can be shown to be true mathematically.

Reply to  David Dirkse
September 4, 2018 8:20 pm

Anomalies show differences from the mean, not the absolute value of the item being measured by the instrument.

Reply to  David Dirkse
September 4, 2018 8:21 pm

If you use a broken ruler to measure the height of your growing child, you can easily see the growth, even if you don’t know how many centimeters tall your child is.

RACookPE1978
Editor
Reply to  David Dirkse
September 4, 2018 8:30 pm

David Dirkse

If you use a broken ruler to measure the height of your growing child, you can easily see the growth, even if you don’t know how many centimeters tall your child is.

No, I cannot. Unless I scratch a line in the wall at uniform periods using the sharpened tip of the broken ruler at exactly the same relative height from the child’s head. This assumes the broken ruler is long enough to reach the wall from the top of the child’s head; otherwise I cannot use the broken ruler at all.

Reply to  RACookPE1978
September 4, 2018 8:36 pm

You missed the point RACook. If the thermometer used to collect temperature data at a location was off by -3 degrees for every reading it took in its 30 years of data collection, the anomalies calculated from it would be correct.

Pat Frank
Reply to  David Dirkse
September 4, 2018 11:36 pm

off by -3 degrees for every reading

And how do you know that’s true, David? It’s never been ascertained to be true for satellite temperatures.

Reply to  Pat Frank
September 5, 2018 10:37 am

Pat Frank needs to learn how to read. My statement was: ” IF the thermometer used to collect temperature data at a location was off by -3 degrees”

Look up how the word “if” is used in logic Mr. Frank. You of all people should understand what “hypotheses” means.

Pat Frank
Reply to  David Dirkse
September 5, 2018 12:47 pm

I pointed out that your assumption is wrong, David. Figure out the adult response.

Reply to  Pat Frank
September 5, 2018 1:52 pm

Dear Mr Frank. I guess I will have to school you in predicate logic. Here is the truth table for implication: [truth table image]

..
You will note that even if the “assumption” (hypothesis) is false, the implication is true irrespective of the truth value of the consequence. So your claim that my assumption is wrong holds no weight.
..
Is that “adult” enough for you? ( I guess you missed the fact that the “if” in my statement was both capitalized and bold faced )

What you have to show to object to my logic is that a true hypothesis leads to a false conclusion.

[???? .mod]

Reply to  David Dirkse
September 5, 2018 4:39 pm

Pretty simple, Mr “mod”. The only time an implication is false is when the hypothesis is true and the consequence is false. See the truth table I posted. Frank makes no mention of the consequence in my argument.

Pat Frank
Reply to  David Dirkse
September 5, 2018 5:35 pm

We’re talking science, not logic David. Your rules of logic are irrelevant.

In science, a false hypothesis predicts incorrect physical states. Such a hypothesis can be logically sound and internally coherent. But it’s false, nevertheless.

On the other hand, and again in science, a hypothesis is physically true when its predictions (your implications) are found physically correct.

Your constant negative-3-C-off thermometer was wrong in concept and, as you later interpreted it, your revised “IF” statement carried your argument into fatuity-land.

Pat Frank
Reply to  David Dirkse
September 5, 2018 5:25 pm

What makes you think science follows your linear logic, David?

Science is deductive, falsifiable, and causal. If your assumption is false, your theory is wrong, and the deduced physical state is incorrect.

It doesn’t matter if a value predicted by your wrong theory matches some observation. It tells you nothing about physical reality.

Next, and specifically with respect to your vaunted “IF“-then: if you propose a non-existent and substantively irrelevant thermometer (one off by a constant -3 C), then your argument is both a non-sequitur and fatuous.

Reply to  Pat Frank
September 5, 2018 5:49 pm

WRONG Frank, any implication deduced from a false hypothesis is TRUE. The truth value of the conclusion/consequence of a false hypothesis in said deduction is indeterminate. You fail Logic 101.
..
In science (which you should be familiar with) you cannot deduce a false conclusion/consequence from a true hypothesis. You should also know that you can deduce ANYTHING from a false hypothesis. Furthermore, all you need to do to prove a hypothesis false is to deduce a contradiction from it, which everyone knows as reductio ad absurdum.

Now Frank, please prove the following statement false: ”If the thermometer used to collect temperature data at a location was off by -3 degrees for every reading it took in its 30 years of data collection, the anomalies calculated from it would be correct.”
..
Thank you in advance.

Reply to  David Dirkse
September 5, 2018 5:53 pm

Now, if you find trouble in this Frank, I suggest you do the following.
.
1) Get a dataset for a temperature measuring site that has data for 30+ years.
2) Calculate the 30-year average, then calculate the anomaly for a given time period.
3) Take the dataset, and subtract 3 degrees from every single reading in it
4) Re-calculate the 30-year average, then recalculate the anomaly for the same time period.
5) You’ll notice that the anomaly in both calculations are identical.
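Those five steps can be run directly. A minimal Python sketch with a made-up series (mirroring the worked example just below), under David's stated assumption that the offset is identical for every reading:

data = [3, 4, 8, 1, 2, 4, 6, 11, 9]      # stand-in for 30 years of readings
shifted = [x - 3 for x in data]          # same thermometer, always off by -3

def anomalies(series):
    base = sum(series) / len(series)     # long-term average ("climatology")
    return [x - base for x in series]

# A constant offset moves the average by exactly as much as each reading,
# so it cancels in the subtraction (compare within float tolerance):
print(all(abs(a - b) < 1e-9
          for a, b in zip(anomalies(data), anomalies(shifted))))   # True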

Reply to  David Dirkse
September 5, 2018 5:54 pm

Good luck.

Reply to  David Dirkse
September 5, 2018 5:55 pm

Spencer was correct, and you were wrong.

Reply to  David Dirkse
September 5, 2018 6:25 pm

Here ya go Franky:
.
Dataset A: { 3, 4, 8, 1, 2, 4, 6, 11, 9}
Dataset B: { 0, 1, 5, -2, -1, 1, 3, 8, 6}
Average A: 48/9 = 5.333
Average B: 21/9 = 2.333

Anomaly A: { -2.333, -1.333, 3.333, -4.333, -3.333, -1.333, 1.333, 6.333, 4.333}
Anomaly B: { -2.333, -1.333, 3.333, -4.333, -3.333, -1.333, 1.333, 6.333, 4.333}

Pat Frank
Reply to  David Dirkse
September 5, 2018 10:16 pm

Irrelevant, David. The point is that systematic measurement error is known to be inconstant for surface temperatures and is not known to be constant for satellite temperatures.

In such cases, anomalies not only retain the measurement uncertainty but increase it by the root-mean-square of the error in the measurements entering the difference.
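As a formula (the standard quadrature rule, assuming the reading error and the baseline error are independent):

$u_{\text{anomaly}} = \sqrt{u_{\text{reading}}^2 + u_{\text{baseline}}^2}$

which is the combination applied in the worked numbers below.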

In your example, your data values neglect the ±uncertainty present in all real measurements. That makes your example a set-piece error.

You have also violated the limit of significant figures. Your values are represented as good to integer accuracy, but you quote the averages and the anomalies to three significant figures past the decimal. Very wrong.

The uncertainty in integer value measurements can be approximated as ±0.40, which in every case rounds to the given integer. The root-mean-square (rms) uncertainty in your averages is then sqrt[(9*(0.4)^2)/8] = ±0.42

When you take the anomaly, the uncertainty is the rms of the uncertainty in the value and in the average, which is then ±0.58.

Your anomalies A and B are then, -2±0.58, -2±0.58; -1±0.58, -1±0.58; … etc. Now how do you know they’re identical?

Your continued focus on false demonstrations indicates you plain don’t know what you’re talking about. Evidenced also by your increasing recourse to derision.

Pat Frank
Reply to  David Dirkse
September 5, 2018 9:55 pm

Your post marks a transition from wrong right into obsessive stupidity, David.

Pat Frank
Reply to  David Dirkse
September 5, 2018 9:53 pm

The definition of “true” in the context of your reply is, “follows from the premise”.

The definition of true in science is correctly predicted using a falsifiable physical theory.

Your entire argument is an exercise in the equivocation fallacy. Therein, different meanings applied to the same word are used to leverage a false argument into the guise of truth.

Your argument fails. Your logic is irrelevant and wrongly applied.

You are wrong to suppose one can deduce anything from a false hypothesis. Were that correct, falsification would be impossible. A hypothesis of valid standing in science, whether true or false, predicts one outcome.

Falsification in science is not deduction of a contradiction. It is an incorrect prediction of an observation or experiment.

Your application of logic is one long malaprop.

Finally, the point at issue is that your claim of constant systematic error is wrong. Your request for proof of a substantive non-sequitur is a specious attempt to evade facing your failure.

Tom Abbott
Reply to  Pat Frank
September 6, 2018 4:10 am

I’m not going to argue with *you*, Frank! 🙂

David probably shouldn’t argue with you, either.

Pat Frank
Reply to  Tom Abbott
September 6, 2018 6:43 pm

I’m not a mean guy, Tom, honest. 🙂

I’m just trying to defend science from those who are trying to destroy it. David has been misled by the real culprits.

Reply to  Pat Frank
September 6, 2018 9:46 am

Frank fails Logic 101: “Falsification in science is not deduction of a contradiction.”

When the observation of an experiment contradicts the prediction derived from the hypothesis, you have a contradiction.
The reasoning goes like this:
A=hypothesis
B=consequence
A—>B is the prediction
A—>not-B is the experimental results.
….
Since B and not-B is a contradiction, the truth table for implication only allows this if A IS FALSE
….
See Frank? That is how science “works.” That is how a hypothesis is falsified.

Pat Frank
Reply to  David Dirkse
September 6, 2018 11:13 am

Here’s what you wrote in your truth-table post, David: “You will note that even if the “assumption” (hypothesis) is false, the implication is true irrespective of the truth value of the consequence. (my bold)”

Now you’re writing, “the truth table for implication only allows [a contradiction] if A IS FALSE.”

Your “implication” is equivalent to a prediction. But now you’ve moved on to consequence, i.e., to an experimental or observational result.

Your truth table doesn’t account for consequence at all. It merely relates the logic of a hypothesis and its prediction.

You’re shifting your ground, as you did in the discussion of systematic error and anomalies.

You’re just leveraging the equivocation fallacy again.

Reply to  Pat Frank
September 6, 2018 11:30 am

I’m not shifting any ground, you just don’t understand simple things. The prediction is the implication that the consequence follows from the hypothesis. The experiment is the implication that the observation follows from the SAME hypothesis. I’m surprised that a professional scientist such as yourself is ignorant of basic logic. You need to understand how THIS SIMPLE EXAMPLE works before you can even think about the logic of statistical hypothesis testing.

Reply to  David Dirkse
September 6, 2018 11:33 am

When the prediction and the observation contradict each other, the only way this is possible is if the hypothesis (singular) that generated both the prediction and the experimental observation is FALSE.

RACookPE1978
Editor
Reply to  David Dirkse
September 6, 2018 11:37 am

An experimental observation cannot be “False” -> Provided no errors were made and the experiment is a valid test, the results are “Fact” – And cannot be labelled “True” or “False” to create a logical argument/paradox/thought experiment.

Reply to  RACookPE1978
September 6, 2018 11:47 am

The truth value of the observation is irrelevant; all that is necessary is for the observation to contradict the prediction.
..
Now, RACook, you have introduced an additional hypothesis, namely IF no errors were made and the experiment is a valid test……

Reply to  David Dirkse
September 6, 2018 11:49 am

Additionally Mr RACook, in the event that the prediction matches the observation you cannot conclude the hypothesis is TRUE. This is an additional result from the logic/truth table, and it shows us why Science cannot PROVE anything true; it can only falsify stuff.

Pat Frank
Reply to  David Dirkse
September 6, 2018 4:01 pm

Physical theories are non-provable because any such proof requires an unbounded infinity of data. The non-provability of a physical theory does not follow from your logic table.

The ‘proof’ distinction between science and logic (and mathematics) arises from the fact that physical theories are not axiomatic.

No deductive logic can prove physical theory, any new observation/experiment can disprove it. Physical science is categorically unlike logic or mathematics.

Reply to  Pat Frank
September 6, 2018 4:18 pm

WRONG Frank, a FALSE hypothesis can imply a TRUE conclusion, and the implication is valid. That is why you can never “prove” a statement TRUE.

Reply to  David Dirkse
September 6, 2018 4:19 pm

Physical science is subservient and beholden to both Mathematics and Logic.

Pat Frank
Reply to  David Dirkse
September 6, 2018 6:46 pm

Science uses both and is independent of each.

Pat Frank
Reply to  David Dirkse
September 6, 2018 6:46 pm

Deductions from physical theory describe the physical state of the system under study.

A false theory will describe an incorrect physical state. The implication consisting of that incorrect state is never, ever true.

Pat Frank
Reply to  David Dirkse
September 6, 2018 3:53 pm

Observations must be independently validated as physically accurate before they have standing to challenge a prediction from a falsifiable physical theory.

RACook is correct. You’re not, David.

You continue to think of ‘logical truth’ as strictly relevant to a scientific context. Big mistake.

Reply to  Pat Frank
September 6, 2018 4:14 pm

Logic provides the mechanism that guarantees a hypothesis is FALSE when observation contradicts prediction. The pairwise implications of observation and prediction can only both be true when the hypothesis is FALSE.

This is basic philosophy of science Frank, you seem not to be aware of how it all works.

Pat Frank
Reply to  David Dirkse
September 6, 2018 6:57 pm

My discussion concerns your original claim, David. Not your slippery and evasive revisions of your false initial proposition.

Observations were no part of your original argument.

The philosophy of science is not science.

Science is theory and result. Logical coherence is a necessary tool.

Reply to  Pat Frank
September 6, 2018 7:05 pm

My original claim stands, as do all subsequent statements I have made. You don’t understand the first thing about logic, and even less about implication.
A sequence of “If A, and if B, and if C and if D, and if E, then we should observe X” is an implication. The observation is then made in an experiment. The prediction and the observation are both consequences of the conjoined hypothesis. You know the rest.

Reply to  David Dirkse
September 6, 2018 7:07 pm

Love watching you spin.

Reply to  David Dirkse
September 6, 2018 7:09 pm

PS my initial proposition was not false and Spencer is right.

Pat Frank
Reply to  David Dirkse
September 7, 2018 9:41 am

Keep it up, David. Insistence absent substance has been your forte throughout the conversation.

Pat Frank
Reply to  David Dirkse
September 6, 2018 3:48 pm

But that wasn’t your original argument, David.

Your original argument was “even if the “assumption” (hypothesis) is false, the implication is true.”

You’re now in the position of contradicting yourself. Oh, what tangled webs … and all that.

Reply to  Pat Frank
September 6, 2018 6:28 pm

Any implication with a false hypothesis is true.

Pat Frank
Reply to  David Dirkse
September 6, 2018 6:49 pm

In your logic. Not in science. Equivocation fallacy again.

Reply to  Pat Frank
September 6, 2018 6:53 pm

Logic is logic. The logic of science is the same as the logic of mathematics, is the same as the logic of philosophy. No equivocation, you are trying to make a distinction where one doesn’t exist. You’ve already displayed your ignorance of basic logic when you crossed out “consequence.” Stick to chemistry, buddy … you’re in over your head otherwise.

Reply to  David Dirkse
September 6, 2018 6:54 pm

Give me an example Frankie boy where an implication with a false hypothesis is FALSE.

Pat Frank
Reply to  David Dirkse
September 7, 2018 9:54 am

Let’s see how long it takes you to realize that your question inheres your original two-value logic, David.

You have once again contradicted your own later attempts to speciously include consequence.

Everyone here who notes your tactic of personal disparagement will understand it as the recourse of someone who has lost the substantive part of the debate.

Reply to  Pat Frank
September 7, 2018 10:07 am

No Frank, your inability to provide a concrete example of an implication that is false and contains a false hypothesis shows EVERYONE here that you don’t know logic. You can’t even explain to us why such an example does not exist.

Pat Frank
Reply to  David Dirkse
September 7, 2018 5:05 pm

Straw man argument, David.

I’m sure you learned about those in your logic classes. Isn’t it nice to apply in practice what you learned in theory.

Your case fails on your argumentative shift of ground to falsely bring observation into your argument where it did not exist a priori. E.g., here: September 6, 2018 11:13 am.

I didn’t dispute your lovely logic table. I disputed the relevance of your logic to science, e.g., here: September 5, 2018 9:53 pm.

You’ve been wrong in every science question you’ve taken on.

Reply to  Pat Frank
September 7, 2018 7:58 pm

Not a “strawman.” You cannot provide an example of an implication that is False that has a false hypothesis. Do you even know what a “strawman” is?

Pat Frank
Reply to  David Dirkse
September 7, 2018 9:48 am

The logic of science is deduction from an analytical surmise about physical reality. Physical result is independent of the surmise and can refute the surmise.

The logic of mathematics and the logic of philosophy is deduction from asserted or assumed axioms. All results are deduced from the axioms, no result is independent of the axioms, and nothing in the system can refute the axioms.

You continue to be wrong, David.

Reply to  Pat Frank
September 7, 2018 9:53 am

I have already shown you that Quantum Mechanics is an axiomatic system……give it up while you are ahead Frank.

Pat Frank
Reply to  David Dirkse
September 7, 2018 4:54 pm

You’re leveraging your standard equivocation fallacy fall-back tactic again, David.

Axioms that can be disproved (QM) are not axioms that are kept forever (philosophy and religion).

Same word, categorically different meaning. Equivocation fallacy, writ obvious.

That difference is fatal to your case. The fact you can’t see that, indicates either a refractory death-grip on a face-saving fatuity or just plain incompetence.

Reply to  Pat Frank
September 7, 2018 8:00 pm

Nope, wrong again Frankie-boi, you are misusing the term “equivocation”

Pat Frank
Reply to  David Dirkse
September 6, 2018 3:45 pm

Your original logic table doesn’t have “consequences,” David. It doesn’t even really have predictions.

I.e., your opening: “Here is the truth table for implication (my bold)”. It has only implications following from hypotheses.

You subsequently grafted on consequences, perhaps when you realized the poverty of your original post.

Let me fix your follow-up for you, to bring it into coherence with your actual position:
A=hypothesis
B=~~consequence~~ implication
A—>B is ~~the prediction~~ logical inference
A—>not-B is ~~the experimental results.~~ logical contradiction.

That was your original argument. Consequence is nowhere to be found.

Your approach to a losing debate is clearly to shift your ground every time your prior argument is defeated.

Reply to  Pat Frank
September 6, 2018 3:53 pm

No Frank, you are way off base and grasping at straws.

In Logic, implication is the relationship between two statements, A, and B. Crossing out “consequence” and substituting “implication” shows you don’t know what you are talking about.

There’s no point to talking to you anymore because you are clueless. You get an “F” in Logic 101.

Pat Frank
Reply to  David Dirkse
September 6, 2018 7:00 pm

Right. So, when you wrote, “Here is the truth table for implication (my bold),” you didn’t mean implication.

You meant implication and subsequent cohering observation. But forgot to include that latter part until your initial argument proved false.

You’re a great piece of work, David.

Reply to  David Dirkse
September 7, 2018 8:02 pm

The symbol “q” in the table is the consequent.

Reply to  Pat Frank
September 6, 2018 3:59 pm

Just to show you how clueless you are, my original table is the definition of implication, and the first statement is considered a “hypothesis” and the second statement is considered the “consequence”. Crossing out the formal definition of the terms and substituting what you think they mean is funny.
….
Now although a lot of people here will object to a link from wikipedia, I’m going to post one here for you because you really really need to take a look at it before making any more grossly inaccurate statements. https://en.wikipedia.org/wiki/Material_conditional

Reply to  David Dirkse
September 6, 2018 4:08 pm

You could also try this Frank, Section B, item 1: https://philosophy.lander.edu/logic/conditional.html

Gonna be hard for you to get around the word “consequent”

Pat Frank
Reply to  David Dirkse
September 6, 2018 7:19 pm

You’re making the same mistake as with Wiki, David. The logic table in philosophy.lander.edu still has only two elements, not three.

Whether you call the second element implication or consequence, the latter of those words does not mean, and never means, observation. It always means prediction.

Observation is not part of your original argument.

Your continual attempts to slide between definitions in an attempt to establish your incorrect claim by false means (the equivocation fallacy) will avail you nothing.

You make the same mistake over and over again.

Were it anyone but you, David, what with your reputation for towering integrity and all, the persistent repetition of an obviously false argument would lead me to wonder about the honesty of my disputant.

Pat Frank
Reply to  David Dirkse
September 6, 2018 7:09 pm

The Wiki article merely expresses a synonym relationship: “The material conditional (also known as material implication, material consequence, or simply implication, implies, or conditional)…”

The Wiki Table includes only two elements, hypothesis and implication, which latter Wiki also expresses as consequence.

That is, for Wiki, consequence = implication.
And for you, consequence = observation.

Your recourse to that confusion of definitions is a very clear example of your continual abuse of the equivocation fallacy.

Your source defines consequence as implication. You then turn around and redefine consequence as observation.

Equivocation fallacy. You redefine the word in mid-stream, and then claim victory.

Nice try, David. Wrong again.

Reply to  Pat Frank
September 6, 2018 7:20 pm

Surprising that someone like you can’t understand a simple Wiki article. There are three distinct items, namely 1) hypothesis, 2) conclusion/consequence and 3) implication
..
An implication is a compound statement of the relationship between a hypothesis and a consequence.

There…..one sentence. Should be simple enough for you to understand.
.
Wiki does not define the consequence as an implication.

Not good enough? How’s this: In an implication the hypothesis implies the consequence. Does that clear up your fog?

Reply to  David Dirkse
September 6, 2018 7:24 pm

Nowhere have I ever “redefined” anything. Coherence is required for us logicians.

Pat Frank
Reply to  David Dirkse
September 7, 2018 10:04 am

Maybe I should have quoted the Wiki article a little further to help you along, David.

Here it is: “The material conditional (also known as material implication, material consequence, or simply implication, implies, or conditional) is a logical connective (or a binary operator)… (my bold)”

Guess how many elements there are in a binary operator, David.
Hint: not three.

I have already pointed out that Wiki defines consequence as a synonym of implication. It has no independent standing.

Your continued and specious attempts to include consequence as a third item, as in hypothesis, implication, consequence (=observation), is truly fatuous.

Reply to  Pat Frank
September 7, 2018 10:15 am

Frank, I stated: “There are three distinct items”

One is the hypothesis, the second the consequent, and the third is the combination of the two with the operator.
..
Reading is fundamental.

Pat Frank
Reply to  David Dirkse
September 7, 2018 5:58 pm

Glad you referenced reading skills, David.

You gave two, not three, items in your original logic table. Read it here: September 5, 2018 1:52 pm

Two items here: September 5, 2018 5:49 pm

I brought up the necessity of empirical observations here: September 5, 2018 9:53 pm

After that, you shifted your ground. Initially, your truth-table “consequence” followed from the logical coherence of the implication, as I already showed here: September 6, 2018 11:13 am and; here: September 6, 2018 3:45 pm.

The proof of your mistake is in your claim that, “You should also know that you can deduce ANYTHING from a false hypothesis.”

I showed that claim to be wrong here: September 5, 2018 9:53 pm and; here: September 6, 2018 6:46 pm.

In science, your claim is categorically wrong. A valid hypothesis in science, right or wrong, is logically self-coherent and deduces a single prediction.

It is tested by independent observation. It is falsified by a failed prediction, not by a deduced logical contradiction.

After I showed your formulation to be irrelevant to science, you tried to save yourself by shifting the meaning of consequence from implication (item 2 in your two-value logic) to observation, which appeared nowhere in your original argument.

This is shown by your own statement right under your truth table, to wit: “You will note that even if the “assumption” (hypothesis) is false, the implication is true irrespective of the truth value of the consequence,” referring to rows three and four in your table.

Your “consequence” there is “p→q,” where hypothesis p implies condition q.

Nothing in your table indicates the method of science, in which p→q, followed by both p and q negated (or verified) by ‘o’, observation, and ‘o’ is independent of p and q.

You subsequently and falsely grafted observation onto your argument, after I raised the necessity.

Squirm and falsely revise as you like, David. Your logic reveals nothing about the method or mechanism of science.

You were wrong from the outset, and insistently wrong ever since.

Reply to  Pat Frank
September 7, 2018 7:36 pm

Frank says: “You gave two, not three, items in your original logic table.”

Frank cannot count. There are THREE columns in the table

Pat Frank
Reply to  David Dirkse
September 8, 2018 10:16 am

Table items: hypothesis (1), implication (2), hypothesis yields implication [(1) & (2)]. Three columns, two items.

Observation is nowhere to be found.

Jim Masterson
Reply to  David Dirkse
September 6, 2018 11:00 pm

>>
Now although a lot of people here will object to a link from wikipedia . . . .
<<

I think you need to reread that link. I will quote:

The material conditional is used to form statements of the form p → q (termed a conditional statement) which is read as “if p then q”. Unlike the English construction “if… then…”, the material conditional statement p → q does not specify a causal relationship between p and q. It is merely to be understood to mean “if p is true, then q is also true” such that the statement p → q is false only when p is true and q is false.  The material conditional only states that q is true when (but not necessarily only when) p is true, and makes no claim that p causes q.

That supports my original statement from yesterday. You are confusing the definition of implication with its logical effect. If p is true (and p → q), then q must be true. If p is false, then q can have any value.

Jim

Reply to  Pat Frank
September 5, 2018 6:08 pm

LOL @ Pat Frank.
..
Pat says: “What makes you think science follows your linear logic”
..
Here is a clue Pat. Mathematics follows linear logic, and science would fall apart without the linear logic of mathematics.

Pat Frank
Reply to  David Dirkse
September 5, 2018 10:18 pm

Mathematics is the language of science, David. Mathematics does not govern the content of science or the logic of science.

“LOL,” by the way, does not improve your arguments.

Reply to  Pat Frank
September 6, 2018 6:29 pm

Frank, there is just “logic” … when you say “logic of science” it is no different. Both are identical.

Pat Frank
Reply to  David Dirkse
September 6, 2018 7:24 pm

The structure of logic depends from its axioms. Logic is axiomatic. Science is not. They are not identical.

Quantum Mechanics upended the logic of classical electromagnetic theory. Such upending would never happen in an axiomatic system.

Reply to  Pat Frank
September 6, 2018 7:29 pm
Pat Frank
Reply to  David Dirkse
September 7, 2018 10:10 am

Quantum Mechanics can be falsified by experiment, David.

That means what MIT calls its axioms are conditional on disproof. Disproof requires their abandonment.

If there was anything that separated science from philosophy, that mortal threat is it.

You’re once again leveraging the equivocation fallacy. “Axiom” as used at MIT is not “axiom” as used in philosophy.

If you understood anything about science, you’d not continue to embarrass yourself like this.

Jim Masterson
Reply to  David Dirkse
September 5, 2018 6:22 pm

>>
Here is the truth table for implication . . . .
<<

I don't know what you are arguing about, but you're misusing the logic of implication. Here is a Venn diagram of implication:
[Venn diagram image]

As you can see, P is contained inside of Q. Implication places a restriction on the values that P and Q can take. When P is true, then Q must be true. It says nothing about the value of Q when P is false. Since P cannot exist outside of Q the following Boolean expression holds:

P and not Q = false

If we apply De Morgan's Law, we get:

not P or Q = true

This leads to your truth table above. The mistake you're making is trying to change the values of independent variables P and Q. When P is true and implication holds, then Q must also be true. When P is false it says nothing about the truth value of Q–which may be either true or false.

Notice that when P implies Q, then not Q implies not P.

Also if P implies Q and Q implies P then P is identical to Q.

Lastly, if A implies B and B implies C then A implies C.

Jim

Reply to  Jim Masterson
September 5, 2018 8:05 pm

Jim, if the premise (hypothesis) of an implication is false, the implication is true irrespective of the truth value of the conclusion. There are four distinct cases for the truth table which your Venn diagram does not display.
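The four cases at issue can be enumerated mechanically; a minimal Python sketch of the material conditional (false only when the hypothesis is true and the consequence is false):

# p -> q is logically equivalent to (not p) or q
implies = lambda p, q: (not p) or q

for p in (True, False):
    for q in (True, False):
        print(f"p={p!s:5}  q={q!s:5}  p->q={implies(p, q)}")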

Jim Masterson
Reply to  David Dirkse
September 5, 2018 10:18 pm

>>
Jim, if the premise (hypothesis) of an implication is false, the implication is true irrespective of the truth value of the conclusion.
<<

Your statement is true, but you’re using it wrong. For example:

$A+\overline{A}=1$,

that is, A or not A is always true. That doesn’t mean that A is always true. Likewise for implication:

$\overline{P}+Q=1$.

In other words, not P or Q is always true. That doesn’t mean that not P is always true or Q is always true.

>>
There are four distinct cases for the truth table which your Venn diagram does not display.
<<

Actually, it displays all that it needs to display. The region inside P represents P and Q or $P\cdot Q$. The region outside of P but inside of Q represents not P and Q or $\overline{P}\cdot Q$. The region outside of Q is also outside of P and represents not P and not Q or $\overline{P}\cdot \overline{Q}$. Taken together they represent the universe, so we can “or” them and set it equal to 1.

$P\cdot Q+\overline{P}\cdot Q+\overline{P}\cdot \overline{Q}=1$

Now we can apply some Boolean identities to simplify the expression. The first is to use the following identity to add another term:

$A=A+A$

$P\cdot Q+\overline{P}\cdot Q+\overline{P}\cdot Q+\overline{P}\cdot \overline{Q}=1$

Factoring we get:

$Q\cdot (P+\overline{P})+\overline{P}\cdot (Q+\overline{Q})=1$

Because we know that $A+\overline{A}=1$ we can reduce to:

$Q\cdot (1)+\overline{P}\cdot (1)=1$

And since $A\cdot 1=A$, we have:

$Q+\overline{P}=1$.

The commutative law holds for the Boolean operator “or” and “and” so we can rearrange:

$\overline{P}+Q=1$

Applying De Morgan’s law we get:

$P\cdot \overline{Q}=0$,

and there’s our missing fourth term. It’s not included in our Venn diagram because it’s not part of the universe for implication.

Jim

Reply to  Jim Masterson
September 5, 2018 8:12 pm

Jim says: “When P is false it says nothing about the truth value of Q–which may be either true or false.”
..
That is true, but when P is false the implication is true. Pat Frank states that P is “false,” so anything implied from his statement is true.

Jim Masterson
Reply to  David Dirkse
September 5, 2018 10:59 pm

>>
. . . but when P is false the implication is true.
<<

ibid.

Jim

Reply to  Jim Masterson
September 5, 2018 8:14 pm

” but you’re misusing the logic of implication:

No I am not.

Jim Masterson
Reply to  David Dirkse
September 7, 2018 10:40 am

>>
No I am not.
<<

I figured out what you are doing wrong. You’re assuming P implies Q is always true. P implies Q is one of the things you need to prove. There are a couple of ways to do that. One is to prove the intersection of P and not Q is always false. Another is to prove that not P or’ed with Q is always true. After that, you must prove that P is true. Those two conditions will make Q true.

So, yes, you’re misusing the logic of implication.

Jim

Reply to  David Dirkse
September 4, 2018 8:31 pm

For example Frank, if you visit the home I grew up in and look at the back of the door to the closet in the living room, you’ll see scratch marks with dates next to them. Never measured the absolute height of the marks, but it’s pretty easy to see how rapidly I grew up as a child, since the marks were a measurement of my height on every birthday I had.

lee
Reply to  David Dirkse
September 4, 2018 11:32 pm

All this without uncertainty? Impressive.

Standing tall, slouching? Marker not held straight?

Pat Frank
Reply to  David Dirkse
September 4, 2018 11:39 pm

±(how many inches), David?

How do you know the error is identical with every scratch?

Your assumption of constant systematic error has never been tested for satellite temperatures and is not known to be true.

It has been tested for surface air temperatures, and is known to be false.

Ktm
Reply to  David Dirkse
September 5, 2018 6:59 am

For a class 1/2 station, the measurement taken MAY be a relatively accurate representation of the true temperature of the area it represents.

For a class 4 station, it’s just not an accurate representation of the true temperature, regardless of how precise the measuring instrument claims to be.

Pat Frank
Reply to  Ktm
September 5, 2018 10:30 am

Calibration experiments done using well-sited and well-maintained platinum resistance surface air temperature sensors show about ±0.35 C systematic measurement error due to the impacts of environmental variables.

This ±0.35 C would be the very minimum accuracy limit of any class 1 or class 2 station.

Pat Frank
Reply to  David Dirkse
September 5, 2018 5:41 pm

… and, of course, every single scratch-mark is infinitely precise, incorrect by the utterly identical offset, and its height from the floor (which hasn’t settled an angstrom) can now be evaluated with perfect accuracy.

That’s your argument, David.

By now, I’d expect even you can see it’s wrong. If not, well, then, there’s no help for you.

Pat Frank
Reply to  David Dirkse
September 4, 2018 11:34 pm

It’s an assumption that the systematic error is a constant, David. That’s not known to be true.

And when systematic error is due to uncontrolled variables, as is almost certainly the case in satellite and radiosonde temperatures, and is indeed certainly the case in surface air temperatures, then it will never be constant.
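The counterpart to the constant-offset case; a minimal Python sketch with an invented linear drift (all numbers hypothetical): when the systematic error changes over time, subtracting the long-term average no longer removes it, and the anomalies are biased warm early and cold late.

import random

random.seed(4)

true_temps = [random.gauss(15.0, 1.0) for _ in range(360)]   # 30 years, monthly

# systematic error that drifts, e.g. an aging sensor: 0 at the start, -1 at the end
drift = [-(k / 359.0) for k in range(360)]
measured = [t + d for t, d in zip(true_temps, drift)]

def anomalies(series):
    base = sum(series) / len(series)
    return [x - base for x in series]

errs = [m - t for m, t in zip(anomalies(measured), anomalies(true_temps))]
print(round(errs[0], 2), round(errs[-1], 2))   # +0.5 and -0.5: the drift survives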

Pat Frank
Reply to  Kip Hansen
September 5, 2018 10:25 am

Hi Kip – you’re apparently talking about instrumental resolution (the limit of instrumental accuracy). You’re right that the consensus people utterly ignore it.

In a direct confrontation over resolution after my Erice talk three years ago, Richard Muller of BEST pretty much said it’s unimportant. He ignored it then, he still ignores it, and he’s wrong.

The other measurement issue is systematic error from uncontrolled environmental variables, primarily wind speed and solar irradiance. They both put time-variable errors into the measurements.

This is the issue with David Dirkse, who clearly and incorrectly thinks that systematic error is constant in time and space.

Reply to  Pat Frank
September 5, 2018 10:42 am

Systematic error from any given singular instrument is eliminated by the use of anomalies. Spencer was correct.

Ktm
Reply to  David Dirkse
September 5, 2018 12:13 pm

“Systematic error from any given singular instrument is eliminated by the use a anomalies. Spencer was correct.”

So when the error goes from 0.5 to 0.05 by changing from absolute temps to anomalies, and you argue this is because systematic measurement errors have been removed, you are saying that the error introduced by systematic instrument measurements is 10x larger than the errors introduced by all other influences combined.

Does that sound reasonable to you?

Pat Frank
Reply to  David Dirkse
September 5, 2018 12:49 pm

No, it is not, David, because the systematic measurement error varies in time and space. Calibration experiments show this.

Reply to  Pat Frank
September 6, 2018 3:39 pm

Yes it is, and the example datasets I provided show how using anomalies removes systematic error. Spencer is right. And Spencer should know, because he has been doing this kind of work and analysis for his entire career, as opposed to a chemist who thinks he knows something about data analysis.

Pat Frank
Reply to  David Dirkse
September 6, 2018 7:40 pm

You didn’t even know about significant figures, David; something taught in a freshman chemistry class. How are you able to decide who is correct?

Your sample data set, apart from being wrong, was also irrelevant because you deliberately departed from the point at issue, namely the inconstancy of systematic measurement error due to the impact of uncontrolled environmental variables.

I’m a physical methods experimental chemist of long-standing, David. My work principally involves X-ray absorption spectroscopy. I sweat measurement error all the time. Here is a recent paper, for your critical examination. It is explicit and complete about experimental error and uncertainty.

Your disparagement of my professional competence ranks right up there with your wrong anomaly set: proof you don’t know what you’re talking about.

Reply to  Pat Frank
September 6, 2018 7:46 pm

“I’m a physical methods experimental chemist of long-standing” who doesn’t even know that Quantum Mechanics is axiomatic.

Reply to  David Dirkse
September 6, 2018 7:48 pm

Appealing to authority is a logical fallacy, and appealing to your own authority is super hilarious.

Pat Frank
Reply to  David Dirkse
September 7, 2018 10:21 am

You wrote, “as opposed to a chemist that thinks he knows something about data analysis.,” David.

You’ll note I included a link to a recent paper that demonstrates my expertise in data analysis, including analysis of physical error.

You ignored the evidence that you’re wrong, just as you have ignored every prior demonstration that your thinking is incorrect.

Demonstration is the opposite of an appeal to authority. You’ve conflated two opposed categories — right up to par with your other examples of logical integrity.

Pat Frank
Reply to  David Dirkse
September 7, 2018 10:12 am

QM is not axiomatic, in the sense you mean axiom, David.

QM can be falsified. The axioms you love so much cannot.

The difference cannot be greater.

Reply to  Pat Frank
September 7, 2018 10:27 am

Frank, I already showed you the axioms QM is built on. You as a chemist should know that QM is an axiomatic system. Here, buy and read this book: https://www.crcpress.com/Quantum-Mechanics-Axiomatic-Theory-with-Modern-Applications/Boliva-Abellan/p/book/9781771886918

Pat Frank
Reply to  David Dirkse
September 7, 2018 6:03 pm

Quantum Mechanics is subject to falsification. So, then, are its axioms. QM is not axiomatic in the sense of philosophy and religion.

Your argument is equivocation all the way down.

Reply to  Pat Frank
September 7, 2018 6:20 pm

Any axiomatic system is subject to falsification.
..
For example, in Euclidean geometry, the parallel postulate (axiom) can be jettisoned, and one can get spherical or hyperbolic geometries. Now tell me Mr. Frank, how does one “falsify” the Euclidean parallel postulate? Are hyperbolic and spherical geometries “false?” or is Euclidean geometry “false?”
.
.
Do you have a clue what you are talking about?

Reply to  David Dirkse
September 7, 2018 6:25 pm

Quantum Mechanics is an axiomatic system. When you post: “QM is not axiomatic”
..
YOU ARE WRONG.
..
Now rein in your oversized ego and admit it.

Reply to  David Dirkse
September 7, 2018 6:56 pm

Tell all of us Frank, if one day, someone discovers concrete proof that in fact Steve Janus was crucified 2000 years ago on the cross instead of Jesus Christ, does that “falsify” Christianity?

Pat Frank
Reply to  David Dirkse
September 8, 2018 10:34 am

Christianity rests on no falsifiable physical theory, David.

Your question is meaningless.

Reply to  Pat Frank
September 8, 2018 10:53 am

Christianity rests on the teachings of a man that walked on the face of the Earth about 2000 years ago. If one discovers the physical remains of said person, it kinda destroys the whole gig Franky boy. Seems to me that not only do you not know that QM is axiomatic, your understanding of Christianity is lacking also.

Reply to  David Dirkse
September 8, 2018 10:54 am

Look up the term “Resurrection” Frankie

Pat Frank
Reply to  David Dirkse
September 9, 2018 9:37 am

Which of the resurrection stories do you like best, David? Is it the one where dead people emerge from their graves and walk around?

That’s a startling “Hi honey! I’m home!”, isn’t it.

Pat Frank
Reply to  David Dirkse
September 9, 2018 9:33 am

Purported to have walked the earth 2000 years ago, David. There is no indisputable historical record supporting your claim.

I’ve pointed out twice now that the so-called axioms of QM are under mortal threat of falsification. That makes them categorically separate from the axioms of logic that are under no mortal threat.

Axioms_QM ≢ Axioms_logic.

Once again you’re leveraging a word using the equivocation fallacy.

Your tactic has become tediously predictable.

Although it seems you are incapable of grasping that fundamental distinction, I expect your apparently refractory ignorance is just a tactic of denial.

If you admitted the obvious, you’d lose the point.

Jim Masterson
Reply to  Pat Frank
September 9, 2018 10:03 am

>>
Your tactic has become tediously predictable.
<<

David’s also stopped replying to my comments, because he knows his interpretation of implication logic was wrong.

Jim

Pat Frank
Reply to  Jim Masterson
September 10, 2018 7:29 pm

Jim, well, at least he’s self-consistent. 🙂

Reply to  Pat Frank
September 10, 2018 7:55 pm

And Mr QM-is-not-axiomatic admits QM is axiomatic. That is what is considered inconsistent.

Reply to  Jim Masterson
September 10, 2018 7:56 pm

Wrong Jimmy-boi, the truth table is correct. If you think it is not correct, please tell us all why.

Pat Frank
Reply to  David Dirkse
September 11, 2018 10:01 am

Among all those debating here, only you, David, evidently think that insulting diminutives improves an argument.

That makes you special.

Jim Masterson
Reply to  David Dirkse
September 11, 2018 2:54 pm

>>
Wrong Jimmy-boi, the truth table is correct. If you think it is not correct, please tell us all why.
<<

Well, I was trying to be kind and educate you on the logic of implication, but you are too naive and ignorant of logic to even know when you’re wrong.

I’ll try again, but I doubt it’ll work.

The truth table is correct for implication. The main point about implication is that when P → Q, then P is a logical subset of Q. If P → Q and P is true, then Q is also true. That’s all.

Notice that the truth table has one false term and three true terms. The false term, $P \cdot \overline{Q}$, can be set to false (or zero). That gives us:

$P \cdot \overline{Q} = 0$

If we “not” both sides and use De Morgan’s law we get:

$\overline{P \cdot \overline{Q}} = \overline{0}$

and

$\overline{P} + Q = 1.$

This last expression is the one that defines implication. We can also get the same result by or’ing all the true terms, setting them equal to true (or one) and simplifying. I’ve already done this to prove that my Venn diagram represents implication correctly.
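If it helps to see the equivalence mechanically, here is a minimal sketch (Python; the implies helper is purely illustrative, not from any library) that brute-forces all four rows of the truth table and confirms that P → Q agrees with the De Morgan form $\overline{P} + Q$:

```python
from itertools import product

def implies(p: bool, q: bool) -> bool:
    """Material implication: false only when P is true and Q is false."""
    return (not p) or q

# Enumerate all four rows of the truth table. The left-hand side negates
# the single false term (P AND NOT Q); the right-hand side is the
# De Morgan form, NOT P OR Q. They must agree on every row.
for p, q in product([False, True], repeat=2):
    lhs = not (p and not q)
    rhs = implies(p, q)
    assert lhs == rhs, "truth tables disagree"
    print(f"P={p!s:<5} Q={q!s:<5}  P->Q = {rhs}")
```

Both rows with P false come out true regardless of Q, which is exactly the “no restriction on Q” case.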

Now, I don’t understand why you keep trying to prove implication true. (It’s probably because you don’t really know what you’re doing.) Most would just say: “P implies Q” or “if P implies Q.” That is sufficient to assume that implication is true. Then you must show that P is true. If P implies Q AND P is true, then Q is also true. If P implies Q AND P is false, then Q can be either true or false; there’s no restriction on Q’s value in that case.

That, David, is the correct way to use implication. And I didn’t violate the truth table for implication.

Jim

Reply to  Pat Frank
September 10, 2018 8:04 pm

Axioms are axioms, logic is an axiomatic system, and QM is an axiomatic system even though you posted:

1) “the fact that physical theories are not axiomatic.”
2) “QM is not axiomatic,”
3) “QM is not axiomatic in the sense of philosophy”

(All of these quotes are visible in this thread.)

Plenty of evidence that Pat Frank doesn’t know much about QM.

Pat Frank
Reply to  David Dirkse
September 11, 2018 9:59 am

Apparently you don’t understand the word “falsifiable,” David. QM is falsifiable; formal logic is not.

As I noted above, Axioms_QM ≢ Axioms_Logic.

I’ve pointed out that inequivalence several different ways, but you’re clearly determined to reject the obvious.

Pat Frank
Reply to  David Dirkse
September 8, 2018 10:32 am

Equivocation fallacy, thy name is David Dirkse. How you love it.

Pat Frank
Reply to  David Dirkse
September 8, 2018 10:30 am

All your example shows, David, is that different axioms produce a different mathematics.

Nothing about Riemann Geometry falsifies Euclidean Geometry because they start from different axiomatic premises.

Physics is non-axiomatic in the sense of mathematics and philosophy. Unlike those two, all of physics is provisional, subject to falsification and discard, right down to bedrock.

You have once again employed the equivocation fallacy in your example from geometry, because you equate observational falsification with alternative formalisms.

You have equated orthogonal categories. Good job.

You clearly love to boast about your expertise in logic, and then go on to repeatedly and insistently make sloppy errors of definition.

Your argument has invariably relied on opportunistic slip-slidery.

Reply to  Pat Frank
September 8, 2018 11:00 am

“Physics is non-axiomatic,” except for Quantum Mechanics, which is axiomatic.

Reply to  David Dirkse
September 8, 2018 11:07 am

Also, Frank, you do realize that parts of QM are not falsifiable, due to issues related to the Church-Turing thesis.

Pat Frank
Reply to  David Dirkse
September 9, 2018 9:51 am

All physical theories are under-determined. Physical proofs from mathematics are axiomatically limited, and cannot anticipate the diagnoses from future independent and presently unpredictable observations.

Meaning, in short, your comment is grounded in ignorance.

Pat Frank
Reply to  David Dirkse
September 9, 2018 9:40 am

Axioms_QM ≢ Axioms_logic.

Equivocation fallacy. As noted three times now.

At least you’re consistent in your mistake, David.

What logical inference can one derive about someone (you) who insists on repeatedly making the same obvious mistake?

Anthony Banton
Reply to  Tom Abbott
September 5, 2018 7:42 am

“Well, if Spencer and Christy are fudging the figures, then so are the operators of the weather balloons since both sets of temperature data are in agreement.”

Not true. UAH is a clear cold outlier and a long way from correlating with radiosondes.


And to boot, since ~2000 the latest sensor on NOAA 15 has been displaying an even more distinct cooling bias vs. other tropospheric measurements AND its predecessor onboard NOAA 14…


Both UAH and RSS acknowledge this discrepancy. UAH says the present sensor is the latest and therefore likely correct; RSS take the pragmatic approach, saying they don’t know which is wrong, and split the difference…


Tom Abbott
Reply to  Anthony Banton
September 6, 2018 4:16 am

“Not true. UAH is a clear cold outlier and a long way from correlating with radiosondes.”

How so? Christy says they correlated. Do you know something he doesn’t?

Gordon Dressler
September 4, 2018 8:34 am

Regarding the phrase “life-as-we-know-it” that was implicitly tied into the thermometer graphic and specified temperatures in this article, here is the range of temperatures that life on Earth has demonstrated (based on current human knowledge):

“. . . another group of scientists has found a microbe from deep-sea vents that is able to survive at 122C. And there are hints that even this is not the ultimate limit for life. A new microbe, for now called “Strain 121”, has since been discovered in a thermal vent deep in the Pacific Ocean. The microbe thrives at 121C and there are claims that it can even survive for two hours at 130C. However the finding is still contentious, as the strain has not been made publicly available to study.” — source http://www.bbc.com/earth/story/20160209-this-is-how-to-survive-if-you-spend-your-life-in-boilin-water

“The study, published in PLoS One, reveals that below -20 °C, single-celled organisms dehydrate, sending them into a vitrified – glass-like – state during which they are unable to complete their life cycle. The researchers propose that, since the organisms cannot reproduce below this temperature, -20 °C is the lowest temperature limit for life on Earth. Scientists placed single-celled organisms in a watery medium, and lowered the temperature. As the temperature fell, the medium started to turn into ice and as the ice crystals grew, the water inside the organisms seeped out to form more ice. This left the cells first dehydrated, and then vitrified. Once a cell has vitrified, scientists no longer consider it living as it cannot reproduce, but cells can be brought back to life when temperatures rise again. This vitrification phase is similar to the state plant seeds enter when they dry out. ‘The interesting thing about vitrification is that in general a cell will survive, where it wouldn’t survive freezing, if you freeze internally you die. But if you can do a controlled vitrification you can survive,’ says Professor Andrew Clarke of NERC’s British Antarctic Survey, lead author of the study. ‘Once a cell is vitrified it can continue to survive right down to incredibly low temperatures. It just can’t do much until it warms up. ‘ ” — source https://phys.org/news/2013-08-lowest-temperature-life.html

Gordon Dressler
Reply to  Kip Hansen
September 4, 2018 3:24 pm

Kip, one sign of a good article is that it promotes additional thoughts and comments from readers. Your article was one of the best I’ve read in a long time and provided some great points on how many scientists don’t even recognize that the “magic trick” of reducing reported data uncertainties has occurred right under their noses, despite “peer review”.

The chart you posted at 7:44 am on Sept 4 in response to Javier, showing the anomalies from the start of 2014 through mid-2018 with the bands of data uncertainty (+/- 0.5 C overlaid on the yearly anomaly variation), should be all that is needed to quiet anyone asserting that climate temperature “anomalies” are meaningful at 0.1 C precision.

I wholeheartedly share in the hope to discover advanced intelligent, “conversational”, ET life forms, but will celebrate even if it is discovered in its most primitive form.

To paraphrase Arthur C. Clarke (my caps), “Two possibilities exist: either we are alone in the Universe or we are not. ONLY THE FIRST IS TERRIFYING.”

Thank you for your article!

n.n
September 4, 2018 8:34 am

Worse, we have made only limited near-observations at the edge of our solar system. The signals from outside are limited and the missing information is inferred based on Earth-biased assumptions/assertions.

Stephen Cheesman
September 4, 2018 8:57 am

If the “expanded” error were correct, then the plotted data should fall outside the error band about 5% of the time. The fact that this relationship does not hold between the original plot and your “corrected” error range shows that something is wrong with the “corrected” plot (the original does appear to follow the rule).

Stephen Cheesman
Reply to  Stephen Cheesman
September 4, 2018 9:02 am

Obviously, as plotted, the error band will always enclose the plotted line, since we don’t know the “correct” answer. I am referring to the fact that the “wobble” of the plotted line does not reflect the amplitude of the error range.
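For anyone who wants to see the 5% rule of thumb in numbers, here is a minimal simulation (Python; the values are illustrative assumptions, not the article’s data) that treats the +/- 0.5 C band as a 95%, roughly two-sigma, interval:

```python
import random

random.seed(42)

TRUE_VALUE = 14.5   # hypothetical true mean temperature (C); illustrative only
BAND = 0.5          # the +/- 0.5 C band, treated here as a ~2-sigma (95%) interval
SIGMA = BAND / 2    # implied one-sigma measurement error under that assumption
N = 100_000

# Count how often a simulated measurement falls outside the +/- BAND
# interval around the true value; for a 2-sigma band, expect ~4.6%.
outside = sum(
    abs(random.gauss(TRUE_VALUE, SIGMA) - TRUE_VALUE) > BAND
    for _ in range(N)
)
print(f"Fraction outside the band: {outside / N:.3f}  (expect ~0.046)")
```

If the plotted line hugs the center of the band far more tightly than that, the band and the scatter are telling inconsistent stories, which is the point about the “wobble” above.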

eyesonu
September 4, 2018 10:15 am

Good post/essay.

Ragnaar