Is NOAA trying to warm the current 8+ year pause?

From JunkScience.com

Steve Milloy

Recall that my 13 million-view tweet sent Big Climate into a paroxysm of desperate but fake ‘fact checks.’ Is NOAA now trying to slowly warm the current pause with an eye toward eliminating it? Or is this just innocent data correction?

So here is the graph that the 13 million-view tweet was based on. The image was taken on January 12, 2023.

Now here is the image taken today.

They are very similar. But note the trend. In January, the 2015-2022 cooling trend was -0.11°C/decade. Today, the trend has been knocked back to -0.07°C/decade.
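
For anyone who wants to reproduce this kind of figure, the trend is just an ordinary least-squares slope over the monthly anomalies, rescaled to degrees per decade. A minimal sketch in Python, using placeholder values rather than the actual NOAA series:

```python
# Minimal sketch: least-squares trend of a monthly anomaly series in C/decade.
# The anomaly values below are placeholders, not the NOAA data behind the graphs.
import numpy as np

anomalies = np.array([0.91, 0.88, 0.93, 0.85, 0.87, 0.82, 0.89, 0.84,
                      0.86, 0.80, 0.83, 0.79])        # monthly anomalies, deg C
months = np.arange(anomalies.size)                    # 0, 1, 2, ... month index

slope_per_month, _ = np.polyfit(months, anomalies, 1)
print(f"trend: {slope_per_month * 120:+.2f} C/decade")  # 120 months per decade
```

Because the window is fixed (January 2015 through December 2022), the fitted slope shifts whenever late-arriving observations are added to the repositories or earlier values are revised, which is one way the same period can show -0.11°C/decade in one snapshot and -0.07°C/decade in another.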

Is this a mere data correction/adjustment by NOAA or the beginning of something more sinister? Stay tuned.

Tom Halla
February 20, 2023 6:06 am

It does not fit the Narrative, so of course it must be “corrected”.

TheFinalNail
Reply to  Tom Halla
February 20, 2023 6:21 am

If they were going to ‘correct’ it why wouldn’t they just make the trend positive?

Matthew Bergin
Reply to  TheFinalNail
February 20, 2023 6:31 am

They will change the trend to positive; it will just take a couple more adjustments. If you do all the changes at once, people will notice. So it is best to fake the readings slowly.

TheFinalNail
Reply to  Matthew Bergin
February 20, 2023 6:38 am

Looks like people have already noticed, so…

Walter
Reply to  TheFinalNail
February 20, 2023 7:12 am

11 people have noticed Nail.

bnice2000
Reply to  TheFinalNail
February 20, 2023 1:34 pm

You wouldn't have noticed if someone hadn't shown you…

Gives the AGW followers hope.

bdgwx
Reply to  TheFinalNail
February 20, 2023 7:32 am

And why did they reduce the warming trend over the period of record?

climategrog
Reply to  bdgwx
February 20, 2023 9:11 am

What is this reduction? What are you actually talking about? If you don’t make a clear statement you won’t get an answer.

Of course you don’t want an answer do you? You think your nonsensical statement is a rhetorical question which is a killer argument. You probably found it on Reddit and don’t even know what it refers to. Try again.

bdgwx
Reply to  climategrog
February 20, 2023 10:36 am

I'm referring to adjustments documented in Menne 2009, Karl et al. 2015, Huang et al. 2014, etc. The net effect of all adjustments reduces the overall global warming trend over the instrumental record period.

Robert B
Reply to  bdgwx
February 20, 2023 1:07 pm

Human emissions could only be responsible for post 1950 warming, but there was similar warming before then to what has happened in the past 40 years. You deliberately refer to the net effect (or just regurgitate a talking point) because the overall effect is to adjust the data to better reflect the assertion that warming since 1950 was not natural, by reducing the earlier warming trend.

That adjustment was still an example of tweaking data to fit the theory.

bdgwx
Reply to  Robert B
February 20, 2023 2:32 pm

Do you think CO2 (and other GHG) molecules created before 1950 behave differently than those created after 1950?

Hivemind
Reply to  bdgwx
February 20, 2023 4:39 pm

Of course they do. CO2 is magic, weren’t you told?

Robert B
Reply to  bdgwx
February 20, 2023 5:50 pm

Everybody who is familiar with “The Science” knows that the argument is about how much was emitted.

It is The Science (what climate scientists insist is settled) that human emissions were not enough to cause any significant heating until after 1950. They couldn't have had the same rate of warming from 1900 to 1940 as in the past 40 years when human emissions of CO2 in 1900 were a tenth of what they were in 1980.

You had -2 likes and now it's -1. Somebody actually liked your silly rebuttal?

bdgwx
Reply to  Robert B
February 20, 2023 7:41 pm

I'm responding to your statement "Human emissions could only be responsible for post 1950 warming." That simply isn't true, regardless of whether you think it is silly or not.

MarkW
Reply to  bdgwx
February 21, 2023 7:38 am

Really is sad when an alarmist starts thrashing.

MarkW
Reply to  bdgwx
February 21, 2023 7:37 am

I can’t determine if bdgwx’s post is an example of ignorance or a desperate attempt to change the subject.
As the historical record shows, man wasn’t emitting enough CO2 prior to 1950, to make a difference.

bdgwx
Reply to  MarkW
February 21, 2023 8:16 am

I’ll ask you the same question. Do you think pre-1950 CO2 molecules behave differently than post-1950 molecules? If yes then what property of the CO2 molecule causes it to behave differently? If no then you agree with me.

And note that the contention is not the amount of CO2 emitted, which is obviously lower in the past. Literally everyone accepts that. The contention is that Robert B said "Human emissions could only be responsible for post 1950 warming", which is patently false. I'm trying to convince Robert B that there is nothing special about pre-1950 molecules that causes them to stop behaving the same way as post-1950 molecules. CO2 causes a positive radiative force in proportion to the amount of increase in the atmosphere regardless of when it got there. Specifically the radiative force pre-1950 is 5.35*ln(310/280) = 0.54 W/m2 and post-1950 is 5.35*ln(420/310) = 1.62 W/m2. 0.54 W/m2 may be smaller than 1.62 W/m2 but it isn't zero. That's the point.

I take it you disagree?
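
For reference, the expression used in the comment above is the commonly cited simplified CO2 forcing approximation, delta-F = 5.35·ln(C/C0) W/m2 (usually attributed to Myhre et al. 1998). A minimal sketch of the arithmetic, using only the concentration values quoted in the comment:

```python
# Minimal sketch of the simplified CO2 forcing expression used in the comment
# above: delta_F = 5.35 * ln(C / C0) in W/m^2.
import math

def co2_forcing(c_ppm, c0_ppm):
    """Approximate CO2 radiative forcing (W/m^2) between two concentrations."""
    return 5.35 * math.log(c_ppm / c0_ppm)

print(f"280 -> 310 ppm: {co2_forcing(310, 280):.2f} W/m^2")   # ~0.54
print(f"310 -> 420 ppm: {co2_forcing(420, 310):.2f} W/m^2")   # ~1.62
```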

MarkW
Reply to  bdgwx
February 21, 2023 11:04 am

Notice how the troll evades the point and re-asks the already answered question. If it’s trying to be cute, it’s failing at that as well.

bdgwx
Reply to  MarkW
February 21, 2023 12:07 pm

You don’t have to answer. But without an answer I’m left with no other choice but to accept that you think 5.35 * ln(310/280) = 0.54 W/m2 is the same thing as 0 W/m2. It is an absurd argument, but I’ll accept that I cannot convince you otherwise. I’ll just put it down in my ever expanding list of absurd arguments and move along.

bnice2000
Reply to  bdgwx
February 20, 2023 1:36 pm

A tiny step back toward reality.

They knew the previous iteration was farcical.

It's now slightly less farcical.

Dave Fair
Reply to  bdgwx
February 20, 2023 2:09 pm

But Karlization jacks up the recent warming trend. Lowering the overall trend was a propaganda move.

They used the old night marine air temperature (NMA) relationship to SST to adjust SST upwards going forward. Sadly (for them), the recent NMA to SST relationship has changed and should lead to a downward revision in the calculated trend. That is if CliSciFi was normal science.

bnice2000
Reply to  TheFinalNail
February 20, 2023 1:33 pm

Step by small step, so that their mathematically illiterate (ie TFN) followers don’t notice.

MarkW
Reply to  TheFinalNail
February 21, 2023 7:35 am

They will, they are just doing it in several steps.

goracle
Reply to  TheFinalNail
February 21, 2023 6:51 pm

Like a frog placed into cool, not boiling, water, you need to turn the heat up slowly or else even some of the idiots will notice and defect.

Bill Toland
Reply to  Tom Halla
February 20, 2023 6:48 am

How many NOAA “corrections” result in cooling the temperature trend? If the corrections are really just corrections, you would expect the net effect to be no change in the temperature trend.

bdgwx
Reply to  Bill Toland
February 20, 2023 6:55 am

NOAA’s corrections result in a lowering of the overall warming trend over the instrument record. See Dr. Hausfather’s summary of the topic here.

[graph image]

Anyway, the change in the trend in this case (2015-2022) is not the result of new corrections. It is the result of the fact that data is still being uploaded into the repositories.

Bill Toland
Reply to  bdgwx
February 20, 2023 7:26 am

1950 is the date when man made global warming is supposed to have started. According to the graph you have supplied, NOAA cooled the temperatures from 1950 to 1980 and warmed the temperatures thereafter. So thank you for confirming that NOAA has indeed warmed the temperature record over the period when man made global warming is supposed to have occurred.

bdgwx
Reply to  Bill Toland
February 20, 2023 7:41 am

BT said: “1950 is the date when man made global warming is supposed to have started.”

Anthropogenic CO2 emissions increased significantly starting around 1850.

BT said: “ So thank you for confirming that NOAA has indeed warmed the temperature record over the period when man made global warming is supposed to have occurred.”

Look again. The warming trend is lower because of the adjustments.

Bill Toland
Reply to  bdgwx
February 20, 2023 7:51 am

I was quoting the IPCC which said that human activity is the dominant cause of warming starting in the 1950s. Look again at the graph you supplied. Again, thank you for confirming that NOAA has indeed warmed the temperature record over the period when man made global warming is stated to have occurred by the IPCC.

https://www.bbc.co.uk/news/science-environment-24292615

bdgwx
Reply to  Bill Toland
February 20, 2023 7:59 am

The beginning of anthropogenic modulation and anthropogenic modulation being dominant are not the same thing.

Look at the graph. What would you say is the difference in the adjusted vs raw warming trend from 1950? What is the difference from 1880?

Bill Toland
Reply to  bdgwx
February 20, 2023 8:06 am

We are talking about temperatures and I am merely quoting the IPCC. The IPCC clearly doesn’t care about temperatures before 1950 as it couldn’t attribute any human influence on the climate before then.

bdgwx
Reply to  Bill Toland
February 20, 2023 8:43 am

The IPCC attributes 0.13 W/m2 of influence to humans by 1850 [IPCC AR6 WG1 Annex III]. And you can see that the very first graph in the SPM shows temperatures back to 1 AD with a zoom in from 1850 to 2020. [IPCC AR6 WG1 SPM]

Richard Greene
Reply to  bdgwx
February 20, 2023 9:15 am

“The IPCC attributes 0.13 W/m2 of influence to humans by 1850”

Total baloney, malarkey, banana oil, BS speculation, meaningless, wild guess. No one on this planet knows what percentage of warming since 1850 was manmade. Those who claim to know are liars.

No one even knows the global average temperature before 1940; it is a wild guess based mostly on the Northern Hemisphere, not fit for science. In fact, before the use of satellite data in 1979, the global average is not fit for science: too much infilling because of insufficient global coverage, and too many adjustments.

KevinM
Reply to  Richard Greene
February 20, 2023 9:58 am

Not enough data.

Tim Gorman
Reply to  Richard Greene
February 20, 2023 10:10 am

It’s not just the infilling. Atmospheric heat must be measured by enthalpy, not temperature. It’s why 100 DegF in Phoenix is not equal to 100 DegF in Miami. Differences in humidity make major differences in the amount of heat involved in each location.

They talk about CO2 trapping “heat” but then start talking about temperature as a way to measure it. Basic scientific idiocy from the word go.
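
For what it's worth, the Phoenix/Miami point can be illustrated with the standard psychrometric approximation for moist-air enthalpy, h ≈ cp·T + w·(L + cpv·T); the mixing ratios below are illustrative assumptions, not observed values:

```python
# Rough sketch: moist-air specific enthalpy, h ~= cp*T + w*(L + cpv*T),
# at the same 100 DegF dry-bulb temperature but different humidity.
# The mixing ratios below are illustrative assumptions, not observations.
def moist_enthalpy(t_c, mixing_ratio):
    """Approximate specific enthalpy of moist air in kJ/kg (per kg dry air)."""
    cp, cpv, latent = 1.006, 1.86, 2501.0   # kJ/(kg K), kJ/(kg K), kJ/kg
    return cp * t_c + mixing_ratio * (latent + cpv * t_c)

t_c = (100 - 32) * 5 / 9                     # 100 DegF in DegC (~37.8)
dry = moist_enthalpy(t_c, 0.004)             # drier air, Phoenix-like
humid = moist_enthalpy(t_c, 0.020)           # more humid air, Miami-like
print(f"dry: {dry:.0f} kJ/kg, humid: {humid:.0f} kJ/kg")
```

Same temperature, very different heat content, which is the point being made here.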

bdgwx
Reply to  Richard Greene
February 20, 2023 10:32 am

RG said: “Total baloney”

It’s right there on pg. 2144 in annex III.

Robert B
Reply to  bdgwx
February 20, 2023 6:59 pm

I read it. They have the uncertainty of those early measurements at 90%, and yet even double that blind guess cannot explain a warming rate the same as in the past 40 years.

As people try to point out, human emissions are NOW an excess of about 4% over natural emissions of CO2 which, it is asserted, cannot be absorbed by natural sinks, and which increased CO2 levels. It's silly to believe that the natural sinks that were once capable of keeping CO2 levels steady at over 2000 ppm could not have had 1% more plankton, for example, sequestering the 0.2% of extra man-made CO2 in 1850, so that CO2 levels must have risen 10 ppm in 10 years.
They are just making stuff up. Still, even accepting it, the pre-1950s warming cannot be that large, so that change is an adjustment to fit the theory, with the dual purpose of throwing everybody off the fact that adjustments always cause more warming after 1950.

Richard Greene
Reply to  bdgwx
February 20, 2023 7:44 pm

It's total baloney even if it was written in 1,000 studies and included on 1,000 different IPCC pages. Multiple mentions of baloney do not create truth. I know baloney when I see it. As in your comments.

Your problem, BedofWax, is you treat the IPCC as a science organization because you are a gullible fool.

No real science organization would have dismissed all natural causes of climate change as “noise” in 1995, for the sole purpose of blaming all global warming on man.

No real science organization would treat CO2 enrichment of the atmosphere as bad news that must be stopped; real science proves the opposite: more CO2 is beneficial for life on our planet.

The IPCC started with a conclusion. It was formed to prove that conclusion was correct and failed miserably to do that.

The goal of the IPCC is leftist climate scaremongering, and you are not bright enough to notice. That’s why no sensible person takes your comments seriously.

Leftism's goals of more power and control depend on climate scaremongering and useful idiot trained parrots like you, BedofWax.

Robert B
Reply to  Richard Greene
February 20, 2023 5:56 pm

There is more than that: there is 90% uncertainty.

Richard Greene
Reply to  Robert B
February 20, 2023 7:47 pm

90% certainty is just a BS number with no scientific meaning. It is based on a popular vote; perhaps 90% of “scientists” felt confident in the conclusion. That is meaningless claptrap.

goracle
Reply to  Richard Greene
February 21, 2023 7:28 pm

You are correct Richard… the real truth is that the whole concept of a global “temperature” is itself malarkey… a totally made up thing that has absolutely ZERO meaning except for the eggheads that grift our tax dollars on climate change lies/studies

goracle
Reply to  bdgwx
February 21, 2023 7:24 pm

I wish I had bdgwx as my college prof… he could've adjusted my raw grades to a “B”… see, I didn't even ask for a huge grade trend… just a “B”

Mr David Guy-Johnson
Reply to  bdgwx
February 20, 2023 7:54 am

“Anthropogenic CO2 emissions increased significantly starting around 1850” So what? Your climate catastrophe friends state they couldn’t have affected temperatures until about 1950. Why is that?

bdgwx
Reply to  Mr David Guy-Johnson
February 20, 2023 8:11 am

Mr David Guy-Johnson said: “So what?”

Anthropogenic modulation (at least in terms of CO2) started around 1850; not 1950 as Bill Toland claims.

Mr David Guy-Johnson said: “Your climate catastrophe friends”

I think you have me confused with someone else. They are not my friends.

Mr David Guy-Johnson said: “state they couldn’t have affected temperatures until about 1950. Why is that?”

I don’t know. You’ll have to ask them whoever they are. I only follow reputable scientific works which state unequivocally that CO2 affects temperatures at all points in time including those billions of years ago. They also say that CO2 is only one among many agents that modulate temperature and that temperature changes are the result of the net effect of all agents acting simultaneously.

Of course none of this has any relevance to the fact that the net effect of NOAA's adjustments actually reduces the warming trend relative to the unadjusted data over the period of record.

Bill Toland
Reply to  bdgwx
February 20, 2023 8:19 am

Bdgwx, again you have misrepresented what I said. I was talking about temperatures, not CO2. You have misrepresented me numerous times in the past. Stop it.

bdgwx
Reply to  Bill Toland
February 20, 2023 8:34 am

Bill Toland, I’m responding to “1950 is the date when man made global warming is supposed to have started.” If you don’t mean that man made global warming was supposed to have started in 1950 then what DO you mean?

Bill Toland
Reply to  bdgwx
February 20, 2023 8:41 am

“Anthropogenic modulation (at least in terms of CO2) started around 1850; not 1950 as Bill Toland claims” is what you stated. That is NOT what I said and you know it. Stop misrepresenting me.

bdgwx
Reply to  Bill Toland
February 20, 2023 8:50 am

Bill Toland said: “That is NOT what I said and you know it. Stop misrepresenting me.”

If I’ve misrepresented you then I apologize. That is not my intention. If you weren’t talking about man made global warming starting in 1950 then 1) why did you say it that way and 2) what did you actually mean?

Bill Toland
Reply to  bdgwx
February 20, 2023 9:03 am

You honestly cannot tell the difference between what I said and what you said? I was talking about temperatures and you are talking about CO2 levels.

bdgwx
Reply to  Bill Toland
February 20, 2023 10:28 am

I’m talking about temperature too. CO2 is an agent that modulates temperature. Humans began emitting in significant quantities around 1850. That means the anthropogenic modulation of temperature began at least as early as 1850; not 1950.

D. J. Hawkins
Reply to  bdgwx
February 20, 2023 10:59 am

Humans began emitting in significant quantities around 1850. That means the anthropogenic modulation of temperature began at least as early as 1850; not 1950.

Significant as compared to what? Keep in mind that the total annual turnover of CO2 in the biosphere is approximately 700 gigatonnes.

bdgwx
Reply to  D. J. Hawkins
February 20, 2023 11:37 am

Significant in that the anthropogenic fingerprint becomes detectable. The 14C fingerprint in trees began a noticeable decline starting at least as early as 1825 [Levin & Hesshaimer 2000] and the 13C/12C ratio first dropped below the preindustrial mean around 1750 [Bohm et al. 2002]. BTW…Friedlingstein et al. 2022 list about 480 GtCO2 of turnover in the biosphere each year.

AGW is Not Science
Reply to  bdgwx
February 20, 2023 1:38 pm

The same “isotopes” also come from natural sources, and none of those are being measured. The fact is, they don’t know the reason for the increase in atmospheric CO2, it’s all estimates and assumptions and circular logic.

bdgwx
Reply to  AGW is Not Science
February 20, 2023 2:30 pm

No they don’t. The fossil reservoir is uniquely 14C and 13C/12C depleted. Not that the isotopes are the only smoking gun for the human fingerprint. Actually, the only smoking gun you need is the law of conservation of mass. The isotope and other lines of evidence just turn the smoking gun into a proverbial smoking nuclear bomb. The anthropogenic modulation didn’t start in 1950. It was at least 1850 and probably further back than that even.

Richard Greene
Reply to  AGW is Not Science
February 20, 2023 8:03 pm

The increase of atmospheric CO2 since 1850 is 100% manmade. Grow up and stop being a science denier.

If you disagree, you must explain where the +200ppm to +300ppm of CO2 emissions from burning fossil fuels went, if not into the atmosphere.

And then you must explain how the atmospheric CO2 increased from 280ppm in 1850 (estimated) to 420ppm in 2023 (measured), up +140ppm.

Where did the +200 to +300ppm of CO2 go?

What else could have caused the +140ppm increase of atmospheric CO2?

Two simple questions that you will never answer because you already proved with your comment that you don’t know the answers.

bnice2000
Reply to  bdgwx
February 20, 2023 1:39 pm

The ONLY fingerprint that is manifest.. is the prior “adjustment” fingerprint..

… that the whole of the AGW meme rests on.

bdgwx
Reply to  bnice2000
February 20, 2023 2:53 pm

I'm not sure what your point is there. Are you trying to say that you believe the anthropogenic modulation started in 1950 as well?

goracle
Reply to  bdgwx
February 21, 2023 7:35 pm

Bdgwx, the IPCCC is a lie of an organization much along the same lines as the covid bioweapo… ahem, i mean vaccine… both are completely political with a nefarious agenda… for both, the truth is right in front of you but u need to open your eyes to see it… ask Jesus and he may unblind you so you can see the truth

MarkW
Reply to  bdgwx
February 21, 2023 7:44 am

Just because something is detectable is not evidence that it is enough to matter.
The low levels of CO2 being emitted in 1850 were not enough to affect temperatures. If they were, then the many times greater levels today should be increasing temperatures by way more than has been recorded.

Tim Gorman
Reply to  MarkW
February 21, 2023 8:07 am

You nailed it! Either it is a control knob or it isn’t.

bdgwx
Reply to  MarkW
February 21, 2023 9:57 am

5.35*ln(310/280) = 0.54 W/m2 doesn’t matter but 5.35*ln(420/310) = 1.62 W/m2 does? What is the point at which the radiative force matters? Be specific.

Tim Gorman
Reply to  bdgwx
February 21, 2023 10:59 am

You are using radiation intensity as if there is no such thing as an inverse square law. Do you know how to do a 3D integral using vector calculus?

What is the actual W/m^2 value at a point 3 km above the earth if the earth is radiating x W/m^2 at every point on the surface?

MarkW
Reply to  bdgwx
February 21, 2023 11:07 am

going from 280ppm to 285 ppm isn’t enough to matter.
If it were, going from 285 ppm to 420 ppm would have ended all life on earth.
A few million years ago, the CO2 levels were several thousand ppm. According to your theory, that should have made the atmosphere hot enough to melt lead.

bdgwx
Reply to  MarkW
February 21, 2023 11:46 am

MarkW said: “going from 280ppm to 285 ppm isn’t enough to matter.”

5.35 * ln(285/280) = 0.1 W/m2. If you don’t think 0.54 W/m2 matters then it is no surprise that you also think 0.1 W/m2 doesn’t matter either. The question is…at what point does it matter? I argue that it matters when it is > 0 W/m2 consistent with the 1st law of thermodynamics.

MarkW said: “A few million years ago, the CO2 levels were several thousand ppm. According to your theory, that should have made the atmosphere hot enough to melt lead.”

Nope. That’s not what my theory says. Do you want to know my theory or do you want to keep creating strawmen?

Clyde Spencer
Reply to  bdgwx
February 20, 2023 11:24 am

CO2 is an agent that modulates temperature.

However, measurements show that Earth can go one or two decades without increasing its temperature, despite CO2 increases every year. It would seem to me that the hiatus is an indication of little power of modulation.

bdgwx
Reply to  Clyde Spencer
February 20, 2023 12:03 pm

Pauses are not an indication that CO2 cannot modulate the temperature. It is an indication that it isn’t the only thing modulating the temperature. When we factor in ENSO, AMO, volcanic activity, and solar output we can see that the increase in CO2 is not inconsistent with the existence of pauses.

[graph image]

And according to CMIP5 we should expect to find ourselves in a pause lasting 101 months about 18% of the time, which is slightly higher than the 15% of the time observed since 1979.

KNMI Climate Explorer

Tim Gorman
Reply to  bdgwx
February 20, 2023 1:21 pm

East Central Kansas has apparently been in a pause since 1953. That’s about 800 months. What does CMIP5 say about that length of pause?

bnice2000
Reply to  bdgwx
February 20, 2023 1:41 pm

The GAT line is URBAN data that will warm, unnaturally.

That is what GISS et al are all based on.

Urban areas make up some 1% of the globe but provide the majority of surface data.

Dave Fair
Reply to  bnice2000
February 20, 2023 2:23 pm

And since the CONUS recording stations have a higher percentage of rural stations than does the remainder of the global stations, the temperature variations have vastly different patterns.

bdgwx
Reply to  bnice2000
February 20, 2023 2:26 pm

No. The GAT line is computed from the area weighted average of the grid mesh. So urban areas are only weighted 1% whereas non-urban are given 99% weight.
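
A minimal sketch of the kind of area-weighted (cosine-of-latitude) grid averaging described here; the gridded anomaly values are random placeholders, not any agency's data:

```python
# Minimal sketch: area-weighted global mean of a lat/lon anomaly grid.
# Each cell is weighted by cos(latitude), so influence scales with cell area,
# not with how many stations happen to sit inside the cell.
import numpy as np

lats = np.arange(-87.5, 90.0, 5.0)                  # 5-degree cell centres
lons = np.arange(2.5, 360.0, 5.0)
rng = np.random.default_rng(0)
grid = rng.normal(0.5, 0.3, size=(lats.size, lons.size))  # placeholder anomalies

weights = np.cos(np.deg2rad(lats))[:, None] * np.ones((lats.size, lons.size))
print(f"area-weighted global mean: {np.average(grid, weights=weights):.2f} C")
```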

Tim Gorman
Reply to  bdgwx
February 20, 2023 3:05 pm

Then why use the urban stations at all?

MarkW
Reply to  Tim Gorman
February 21, 2023 7:47 am

Because they can’t get the results they want without using the urban stations.

Dave Fair
Reply to  bdgwx
February 20, 2023 2:19 pm

The “model” appears to be nothing more than an exercise in curve fitting. We need to be comparing UAH6 to the CliSciFi models.

Richard Greene
Reply to  bdgwx
February 20, 2023 8:10 pm

BedofWax gets credit for a correct comment.

More CO2 always impedes Earth's ability to cool itself, although not much above 400ppm.

Climate change, however, is the net result of all local, regional and global climate change variables, and CO2 is NOT the temperature control knob.

CO2 is just one of many variables likely to influence Earth's climate, including:

1) Earth's orbital and orientation variations (aka planetary geometry)

2) Changes in ocean circulation, including ENSO and others

3) Solar energy and irradiance, including clouds, albedo, volcanic and manmade aerosols, plus possible effects of cosmic rays and extraterrestrial dust

4) Greenhouse gas emissions

5) Land use changes (cities growing, logging, crop irrigation, etc.)

6) Unknown causes of variations of a complex, non-linear system

7) Unpredictable natural and manmade catastrophes

8) Climate measurement errors (unintentional errors or deliberate science fraud)

9) Interactions and feedbacks involving two or more variables.

MarkW
Reply to  bdgwx
February 21, 2023 7:46 am

According to the IPCC, which you love to cite, CO2 has become so dominant that it totally swamps all natural sources of variation.
That’s why they assume that 100% of the warming since 1850 is caused by CO2.

bdgwx
Reply to  MarkW
February 21, 2023 9:55 am

MarkW said: “That’s why they assume that 100% of the warming since 1850 is caused by CO2.”

The IPCC doesn’t say that. They say that 76% of the net radiative force comes from CO2. And only 40% of the total forcing. And in 2000 at the peak of the natural force it was only 32% of the total forcing [IPCC AR6 WG1 Annex III].

Clyde Spencer
Reply to  bdgwx
February 21, 2023 12:14 pm

Pauses are not an indication that CO2 cannot modulate the temperature.

I didn’t say or even suggest that.

It is an indication that it isn’t the only thing modulating the temperature.

That is closer to what I was saying.

bdgwx
Reply to  Clyde Spencer
February 21, 2023 4:32 pm

Then we agree. Maybe you can help me convince the WUWT audience of the fact that CO2 isn’t the only thing modulating temperature.

AGW is Not Science
Reply to  bdgwx
February 20, 2023 1:35 pm

An “agent that modulates temperature?!” LMAO. Evidence is needed, and they (and you) don’t have any.

Nowhere in the climate record is there empirical evidence that “CO2 drives temperature.” Conversely, there is a good deal of empirical evidence that CO2 does NOT drive temperature.

bdgwx
Reply to  AGW is Not Science
February 20, 2023 4:08 pm

The IPCC does a good job of citing the evidence. I recommend starting there. Make sure you actually review the evidence in the first order citations and branch out to the secondary, tertiary, etc. citations as needed.

You can convince people that CO2 has no effect by showing that CO2 behaves differently in regards to its ability to impede the transmission of energy in the atmosphere than in controlled experiments, that the 1st law of thermodynamics is wrong, and/or that reservoirs in the climate system can take in energy without increasing their temperature or causing a phase change.

Tim Gorman
Reply to  bdgwx
February 20, 2023 4:15 pm

Have you figured out how 300 W/m^2 at the surface can be 300 W/m^2 at the top of the atmosphere when there is a little law called the inverse square law when it comes to radiation?

Richard Greene
Reply to  bdgwx
February 20, 2023 7:55 pm

Manmade CO2 emissions were probably too small to measure the effect of more CO2 before 1950.

No global warming that could have been caused by CO2 was measured until 1975.

And no warming that could have been caused by CO2 was measured after 2015.

In 4.5 billion years, there were just 40 years, from 1975 to 2015, when some of the global warming could have been caused by manmade CO2, and probably was. The exact amount of that global warming that was caused by manmade CO2 is unknown. Except by liars.

Those 40 years of beneficial global warming, that harmed no one, are being used to promote leftist totalitarianism, with useful idiots like you helping.

No thanks to you for helping to reduce our personal freedoms, BedofWax

Leftists control CO2 to control people.
And you don’t have the intelligence to realize that fact.

Sweet Old Bob
Reply to  Bill Toland
February 20, 2023 10:54 am

and bdgwx says “I only follow reputable scientific works”

like SKS ?

😉

bdgwx
Reply to  Sweet Old Bob
February 20, 2023 11:40 am

What is SKS?

MarkW
Reply to  Sweet Old Bob
February 21, 2023 7:49 am

According to the alarmists, in order to be reputable, a site has to follow the alarmist narrative.

Richard Greene
Reply to  Mr David Guy-Johnson
February 20, 2023 8:55 am

Manmade CO2 emissions were insignificant before 1940, relatively small until 1950, and accelerated after 1975.

The global warming from 1910 to 1940 was not caused by CO2 emissions because they were much too small.

Strike 1 for CO2 the control knob

The pre-“revision” cooling from 1940 to 1975 was not caused by CO2 emissions.

Strike 2 for the CO2 control knob

And the flat temperature trend from 2015 to 2023 was not caused by CO2 emissions.

Strike 3 for the CO2 control knob

The CO2 control knob of the temperature theory has struck out, BedofWax — you may now grieve over the loss of your favorite boogeyman.

Dave Andrews
Reply to  Richard Greene
February 20, 2023 9:42 am

As you say CO2 emissions did not really begin to rise much until after 1950.

Yet the open season at the coal port in Spitsbergen (Svalbard) went from 3 months of the year before 1920 to over 7 months of the year in the late 1930s. There was obviously considerable warming of the Arctic during that period.

On another thread I asked bdgwx what may have caused it?

He replied “I don’t know”

KevinM
Reply to  Dave Andrews
February 20, 2023 10:01 am

“I don’t know” = correct answer for anyone.

Dave Andrews
Reply to  KevinM
February 21, 2023 7:25 am

You are right of course, but don’t you think it might just make you step back and ask what else is going on that I don’t understand?

Krishna Gans
Reply to  bdgwx
February 20, 2023 8:19 am

You realise that there is a negative sign.
So, tell me: is -0.07 more or less than -0.11?

bdgwx
Reply to  Krishna Gans
February 20, 2023 8:31 am

Yes. -0.07 C is more than -0.11 C in the same way that -0.28 C is more than -0.42 C for adjusted vs raw in 1880 respectively.

Krishna Gans
Reply to  bdgwx
February 20, 2023 1:31 pm

[image]

bdgwx
Reply to  Krishna Gans
February 20, 2023 2:49 pm

I take it you disagree that -0.07 > -0.11?

Tim Gorman
Reply to  bdgwx
February 20, 2023 8:31 am

Fossil-fueled agriculture equipment didn’t become common until the 1930’s. Fossil-fueled automobiles didn’t become common until after 1910 and didn’t reach large numbers until the 20’s.

Exactly what happened in 1850 (before the Civil War) that caused anthropogenic CO2 emissions to increase significantly? Population growth and exhaling? We killed far more CO2-exhaling bison during that period than we added people.

MarkW
Reply to  Tim Gorman
February 21, 2023 7:51 am

The industrial revolution resulted in coal being burnt to create steam power.

Richard Greene
Reply to  bdgwx
February 20, 2023 8:49 am

The warming trend is steeper since 1940 by eliminating the global cooling from 1940 to 1975 and pretending it never happened.

bdgwx
Reply to  Richard Greene
February 20, 2023 8:58 am

Look at the graph. How much steeper is the trend using the raw vs adjusted data from 1940 to 2015?

climategrog
Reply to  bdgwx
February 20, 2023 9:20 am

Look again. The warming trend is lower because of the adjustments.

You are misunderstanding the point of data rigging. What they are doing, if you look at the adjustment difference graph, is reducing the early 20th-century warming period, which was just as rapid as the late 20th-century rise but occurred long before CO2 was deemed to be making a significant change to temperature.

This is a period that NO climate models manage to reproduce and has been a constant problem for their CO2-driven models, which cannot even hindcast climate properly.

The only solution (other than making the models WORK) is to rig the part of the climate record which is inconvenient to them.

That this reduces the 150-year trend is not a problem, because it is only the last 70 years they need to make as steep as possible.

bdgwx
Reply to  climategrog
February 20, 2023 10:30 am

So you don’t think ship intake observations are biased high relative to bucket observations?

Tim Gorman
Reply to  bdgwx
February 20, 2023 10:59 am

How do you know they are biased high? By the time the bucket is raised to the deck and sits for a while until someone finds the thermometer to stick in it, the water will tend to move toward the ambient temperature of the air surrounding the bucket. It will no longer be at the sea temperature.

Depending on the replacement flow rate of the water in the ship intake pipe it may actually be closer to the sea temperature than the water in the bucket.

It’s the main reason why these temperatures have such a wide measurement uncertainty that they are unfit for determining differences even in the units digit let alone in decimal places. And no amount of “adjustments” can be anything other than a subjective guess!

leefor
Reply to  bdgwx
February 20, 2023 6:44 pm

“…the size of this warming is broadly consistent with predictions of climate models, but it is also of the same magnitude as natural climate variability.”

“…the unequivocal detection of the enhanced greenhouse effect from observations is not likely for a decade or more.”

https://www.ipcc.ch/site/assets/uploads/2018/05/ipcc_wg_I_1992_suppl_report_full_report.pdf

So natural variation or CO2? So now attribution studies know these things. ;)

bdgwx
Reply to  leefor
February 20, 2023 7:24 pm

That was written 3 decades ago. The warming is now 1.1 C.

leefor
Reply to  bdgwx
February 20, 2023 7:39 pm

And that has nothing to do with the science. Perhaps you can tell us exactly how much is natural variation? And quote the paper on which you rely.

And of course remember that the Southern Hemisphere was woefully under-sampled. (Phil Jones, CRU; John D McLean, JCU)

bdgwx
Reply to  leefor
February 21, 2023 6:52 am

The natural force was +0.12 W/m2 and the anthropogenic force was 2.72 W/m2. The planetary energy imbalance is about 0.87 W/m2. So given the 2.84 W/m2 of total force minus the energy imbalance, we have 1.97 W/m2 of transient force matched up with 1.1 C of transient warming. So of the 1.1 C, anthropogenic forcing accounted for about 1.03 C and nature accounted for about 0.07 C. [IPCC AR6 WG1 Annex III] [Schuckmann et al. 2020]
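
The quoted split appears to follow from deducting the energy imbalance from the anthropogenic forcing and then apportioning the observed warming by the remaining transient forcings. A sketch of that arithmetic, under that assumption:

```python
# Sketch of the attribution arithmetic above, assuming the energy imbalance is
# deducted from the anthropogenic forcing before apportioning the warming.
natural_forcing = 0.12       # W/m^2
anthro_forcing = 2.72        # W/m^2
energy_imbalance = 0.87      # W/m^2
observed_warming = 1.1       # C

transient_anthro = anthro_forcing - energy_imbalance       # ~1.85 W/m^2
transient_total = transient_anthro + natural_forcing       # ~1.97 W/m^2

anthro_part = observed_warming * transient_anthro / transient_total   # ~1.03 C
natural_part = observed_warming * natural_forcing / transient_total   # ~0.07 C
print(f"anthropogenic: {anthro_part:.2f} C, natural: {natural_part:.2f} C")
```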

leefor
Reply to  bdgwx
February 21, 2023 7:11 pm

So from Schuckmann they know the OHC to 2000m back to 1980? That's funny. “The basic assumption for the error distribution is Gaussian with a mean of zero, which can be approximated by an ensemble of various products. However, it does not account for systematic errors that may result in biases across the ensemble and does not represent the full uncertainty.”
Remember Phil Jones and the SH ocean.

MarkW
Reply to  leefor
February 21, 2023 7:55 am

Broadly consistent.
That’s a weasel word big enough to drive an entire scam through.
Models predict that it will warm. It warms, ergo models have been proven.
It doesn’t matter that the models predict 3 times as much warming as has been seen.
It doesn’t matter that warming began prior to the introduction of meaningful amounts of CO2 into the atmosphere.
All that matters is that the narrative be protected.

MarkW
Reply to  bdgwx
February 21, 2023 7:40 am

Starting from zero, it’s easy to have a statistically significant increase in trend. But the fact remains that total CO2 levels did not start moving up in any significant fashion until after 1950.

bdgwx
Reply to  MarkW
February 21, 2023 9:45 am

CO2 increased 30 ppm from 1750 to 1950. That is 27% of the increase from 1950 to 2022. That is hardly insignificant. And you have to go back thousands of years before you see an equivalent change.

MarkW
Reply to  bdgwx
February 21, 2023 11:09 am

Half of nothing, is still nothing.

bdgwx
Reply to  MarkW
February 21, 2023 11:40 am

MarkW said: “Half of nothing, is still nothing.”

Do you think 420 – 310 = 0 and thus 0.5 * (420 – 310) = 0?

For the record…I get 420 – 310 = 110 and 0.5 * (420 – 310) = 55.

goracle
Reply to  bdgwx
February 21, 2023 7:20 pm

Adjustments… LOL!!!… it’s the neoscience approach where you cherrypick or adjust the data to fit your narrative instead of changing your narrative to fit the data

Frank from NoVA
Reply to  bdgwx
February 20, 2023 7:29 am

C’mon mann, how do you suppose ol’ Zeke could possibly know what the Earth’s surface temperature was way back at the end of the 19th century? (Hint: He couldn’t have known prior to the advent of satellite MSU data in 1979).

As for areas of the planet where there actually was fairly decent thermometer coverage back then, i.e., the good old USA, your friends at NASA-GISS have been tampering with the data, as shown here:

https://realclimatescience.com/nasa-noaa-us-data-tampering/

bdgwx
Reply to  Frank from NoVA
February 20, 2023 7:46 am

Frank from NoVA said: “C’mon mann, how do you suppose ol’ Zeke could possibly know what the Earth’s surface temperature was way back at the end of the 19th century?”

Observations.

Frank from NoVA said: “He couldn’t have known prior to the advent of satellite MSU data in 1979”

Sure, we can discuss the MSU data. Here are the adjustments that UAH makes.

Year / Version / Effect / Description / Citation

Adjustment 1: 1992 : A : unknown effect : simple bias correction : Spencer & Christy 1992

Adjustment 2: 1994 : B : -0.03 C/decade : linear diurnal drift : Christy et al. 1995

Adjustment 3: 1997 : C : +0.03 C/decade : removal of residual annual cycle related to hot target variations : Christy et al. 1998

Adjustment 4: 1998 : D : +0.10 C/decade : orbital decay : Christy et al. 2000

Adjustment 5: 1998 : D : -0.07 C/decade : removal of dependence on time variations of hot target temperature : Christy et al. 2000

Adjustment 6: 2003 : 5.0 : +0.008 C/decade : non-linear diurnal drift : Christy et al. 2003

Adjustment 7: 2004 : 5.1 : -0.004 C/decade : data criteria acceptance : Karl et al. 2006 

Adjustment 8: 2005 : 5.2 : +0.035 C/decade : diurnal drift : Spencer et al. 2006

Adjustment 9: 2017 : 6.0 : -0.03 C/decade : new method : Spencer et al. 2017 [open]

That is 0.307 C/decade worth of adjustments with a net of +0.039 C/decade.
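
A quick check of the gross and net figures above (the 1992 entry with unknown effect is excluded, as in the stated totals):

```python
# Quick check of the UAH adjustment totals listed above
# (the 1992 entry with unknown effect is excluded, matching the stated totals).
adjustments = [-0.03, +0.03, +0.10, -0.07, +0.008, -0.004, +0.035, -0.03]  # C/decade

gross = sum(abs(a) for a in adjustments)    # total magnitude of adjustments
net = sum(adjustments)                      # signed net effect
print(f"gross: {gross:.3f} C/decade, net: {net:+.3f} C/decade")  # 0.307, +0.039
```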

Frank from NoVA said: “As for areas of the planet where there actually was fairly decent thermometer coverage back then, i.e., the good old USA, your friends at NASA-GISS have been tampering with the data, as shown here:”

First, they aren’t my friends. Second, they didn’t tamper with the data. They corrected the biases caused by time-of-observation changes, instrument package changes, etc. [Vose et al. 2003] [Hubbard & Lin 2006] Third, remember that Tony Heller was deemed so untrustworthy that even WUWT banned him from the site.

Matthew Bergin
Reply to  bdgwx
February 20, 2023 8:42 am

Christy is making adjustments by referencing the satellite measurements to actual temperature readings from radiosondes, thereby improving the accuracy of the satellite measuring system. NOAA is not. NOAA actually believes that the output from a computer model is data. Enough said. 😞

bdgwx
Reply to  Matthew Bergin
February 20, 2023 8:46 am

First, no, that is not how Christy is making the adjustments. Second, if it were, then Christy is doing a bad job. And third, if it were, then why use the MSU data at all and not just jump straight to the radiosonde datasets?

[image]

cilo
Reply to  bdgwx
February 20, 2023 8:58 am

Thank you, thank you, thank you!
I’ve been begging someone for proof these satellite measurements are but proxies. You have, with that list of links, proven my suspicions: Satellite measurements are interesting, useful even, but not as precise as I’m asked to believe.
Have I thanked you yet?
As for detector technology, I have many, many questions still…

Matthew Bergin
Reply to  cilo
February 20, 2023 9:12 am

All measurements are proxies

climategrog
Reply to  cilo
February 20, 2023 9:26 am

Why is radiation brightness any more of a “proxy” than the height of an expanding liquid in a capillary or the resistance of a thermally varying resistor?

cilo
Reply to  climategrog
February 20, 2023 9:40 am

Go sit in front of a simple electric bar heater. Measure the glow.
Take the whole thing outside in the wind, measure the glow.
Now tell me which one warmed you up more.
…and thanks for clarifying that aspect of “measurement” for me.

MarkW
Reply to  cilo
February 21, 2023 7:59 am

In your opinion, the fact that wind removes heat from your body proves that the heater is providing less heat?

cilo
Reply to  MarkW
February 21, 2023 9:10 pm

“In your opinion, the fact that wind removes heat from your body proves that the heater is providing less heat?” TO YOU.

Producing ‘heat’ and delivering/absorbing it are not the same thing. I measure the ‘colour’ of a lamp in Kelvin; that does not mean my skin gets 4000 Kelvin warmer on stage.

Tim Gorman
Reply to  climategrog
February 20, 2023 10:32 am

The issue is what the proxy is for. With an expanding liquid or a resistor you can at least make an attempt to judge what is going on at a specific location over time. The way the so-called “climate scientists” go about combining these is not statistically justified, but that's another argument.

The radiation brightness, as measured by satellites, is never measured twice at the same place and especially not at the same time. It is a true proxy trying to define something that doesn’t exist.

The radiation brightness is no more a good proxy for heat than is temperature. Atmospheric heat depends on at least humidity as well as other factors (e.g. pressure). You can have two locations with the same temperature (and therefore the same radiance) with vastly different enthalpy (think Phoenix and Miami).

It’s why when you hear the CAGW crowd talking about “trapped heat” and then talking about temperature you can assume they don’t have a clue as to what they are talking about.

bdgwx
Reply to  cilo
February 20, 2023 10:43 am

Even UAH now says their dataset has a monthly uncertainty of 0.20 C. [Christy et al. 2003]. That is 20x higher than their original claim of 0.01 C. [Spencer & Christy 1990].

Tim Gorman
Reply to  bdgwx
February 20, 2023 11:26 am

I seem to remember trying to convince you of this a year ago and you refused to believe it. With a monthly uncertainty of +/- 0.20 C, how do you tell a decadal difference of 0.14 C per decade like the government gives out?

Jim Gorman
Reply to  cilo
February 21, 2023 5:27 am

The key is not necessarily the overall accuracy.

The real advantage is using the same measuring device for all measurements. If the same adjustments are used on all measurements to achieve better accuracy, then there is no reason to expect that anomaly values will change one way or the other.

Frank from NoVA
Reply to  bdgwx
February 20, 2023 10:32 am

‘Observations.’

You must be joking. Outside of the US, Europe and a few other Western-like outposts, there was insufficient instrument coverage on land, let alone the oceans and the polar regions.

‘…they didn't tamper with the data.'

They don’t tamper with the actual data, which would hopefully be a felony, but they do make ‘adjustments’ to the reported data (see below) and actually fabricate ‘data’, which remarkably plots linearly against CO2 measurements, for non-reporting stations.

‘They corrected the biases caused by time-of-observation changes…’

You obviously don’t read links that don’t conform to your worldview. Heller (and others) have debunked the efficacy of TOBS ‘adjustments’ by separately analyzing stations depending on the time of day their thermometers are actually reset. Hint: they had the same trends.

Tim Gorman
Reply to  Frank from NoVA
February 20, 2023 11:11 am

Hubbard and Lin showed in 2004 (or maybe 2006, I forget) that regional adjustments to land-based temperature measurement devices were impossible due to differences in micro-climate (e.g. humidity, elevation, geography, terrain, etc). You could only apply adjustments on a station-by-station basis. Although they didn't address the issue, those station-by-station adjustments could only be made for a short period in the past unless there was some way to track calibration drift over time.

Bottom line? If you can’t apply adjustments on a regional basis then trying to infill data into unknown areas is impossible as well.

Any temperature tracking must be done only where sufficient measurements are available. That tracking should not be extended into areas that don’t have sufficient measurement data. That means that “global” averages, even today, are highly questionable since we don’t have “global” measurement data for even recent time periods.

bdgwx
Reply to  Frank from NoVA
February 20, 2023 11:19 am

Yes. I’m serious. Observations.

No. It is the exact opposite. The net effect of all adjustments is downward. CO2 concentration is upward.

I did read the links. I read all of the links I post. And no, Tony Heller has not debunked the efficacy of TOB adjustments. What he did was compute the US average temperature using a method so egregiously wrong that Anthony Watts himself banned him from participating on this very blog [1].

Matthew Bergin
Reply to  bdgwx
February 20, 2023 11:44 am

TOB adjustments are BULL. There is no way of knowing the time of day the reading was taken unless you have seen it yourself or have a video of the same. 100 years ago people were not even sure what the time of day was, let alone whether the person read a thermometer graduated in single degrees at exactly 5pm or didn’t and just made up a reading because the weather was too nasty to bother going outside.

bdgwx
Reply to  Matthew Bergin
February 20, 2023 2:16 pm

The TOB is recorded with the observations.

Frank from NoVA
Reply to  bdgwx
February 20, 2023 1:16 pm

Per your 2014 link:

‘As part of its adjustment process, USHCN infills missing stations based on a spatially-weighted average of surrounding station anomalies (plus the long-term climatology of that location) to generate absolute temperatures. This is done as a final step after TOBs adjustments and pairwise homogenization, and results in 1218 records every month.’

That’s a lot of mannufactured ‘data’. To what end?

Tim Gorman
Reply to  Frank from NoVA
February 20, 2023 1:27 pm

It’s exactly what Hubbard and Lin showed you *shouldn’t* do! Anomalies carry the same exact measurement uncertainty (especially systematic uncertainty) as the absolute temperatures do. The anomalies carry the exact same variances that the absolute temperatures do. Using anomalies solves nothing when it comes to measurement uncertainty so using them to infill other locations only spreads the measurement uncertainty around to other stations. And that is *especially* true for UHI effects!

bdgwx
Reply to  Frank from NoVA
February 20, 2023 2:24 pm

Frank from NoVA said: “To what end?”

By not using a local infilling strategy like kriging or Gaussian regression you effectively infill using a global strategy. For example, given the sample {1, 2, ?, 4, 5, 6, 7, 8, 9, 10} the naïve average is (1 + 2 + 4 + 5 + 6 + 7 + 8 + 9 + 10) / 9 = 5.78 and is effectively the same as (1 + 2 + 5.78 + 4 + 5 + 6 + 7 + 8 + 9 + 10) / 10. Notice that the ? is effectively infilled with the average of the filled 9. But if we use a local technique we get (1 + 2 + (2+4)/2 + 4 + 5 + 6 + 7 + 8 + 9 + 10) / 10 = 5.5. Notice that the naïve technique (which is effectively a global infill) has about 0.28 of error relative to that. Infilling like this is done to reduce the global bias.
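
A small sketch of the worked example above, taking the true value of the missing entry to be 3 (the local estimate):

```python
# Sketch of the example above: {1, 2, ?, 4, ..., 10} with the missing value
# assumed to have a true value of 3.
full = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
observed = [1, 2, None, 4, 5, 6, 7, 8, 9, 10]

true_mean = sum(full) / len(full)                               # 5.5

present = [v for v in observed if v is not None]
naive_mean = sum(present) / len(present)                        # ~5.78, gap ignored

local_fill = (observed[1] + observed[3]) / 2                    # neighbours: (2+4)/2 = 3
local_mean = (sum(present) + local_fill) / len(observed)        # 5.5

print(f"naive error:        {abs(naive_mean - true_mean):.2f}")   # ~0.28
print(f"local-infill error: {abs(local_mean - true_mean):.2f}")   # 0.00
```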

Tim Gorman
Reply to  bdgwx
February 20, 2023 2:54 pm

If the 2 or the 4 is contaminated with UHI then you’ve just spread that contamination to another station.

Frank from NoVA
Reply to  bdgwx
February 20, 2023 3:46 pm

You've ignored the question ‘to what end?'. If East Jibip has actual data and you want to know what the average temperature (or trend) is there, you're in luck! If not, too bad, because infilling, kriging or Gaussian regression does not provide you with data. Unless, of course, you're attempting to calculate the Earth's GAST, in which case I'll ask you to tell me what it is. And while we're on the subject, maybe you can kindly tell me how the IPCC ‘knows' that our pre-fall-from-Eden GAST was exactly 288.15K.

bdgwx
Reply to  Frank from NoVA
February 20, 2023 4:03 pm

I literally just answered the question. It is to reduce error.

Tim Gorman
Reply to  bdgwx
February 20, 2023 4:13 pm

How does spreading UHI and/or systematic bias around reduce error?

MarkW
Reply to  Tim Gorman
February 21, 2023 8:03 am

The difference between what the data shows and what the models predict has been reduced. Therefore error has been reduced.

Frank from NoVA
Reply to  bdgwx
February 20, 2023 4:20 pm

What error? The non-reporting station temperature of ‘N/A’ (or however they code it) was accurate. By making up ‘data’, they’ve introduced error. Now that we’ve settled that, can you tell me where the IPCC’s GAST estimate of 288.15K comes from?

bdgwx
Reply to  Frank from NoVA
February 20, 2023 4:52 pm

You're “making up data” either way. When you ignore unfilled cells you are effectively infilling them with the global average. That's worse than infilling them with kriging (or a similar technique).

Frank from NoVA
Reply to  bdgwx
February 20, 2023 7:31 pm

I’m not making up anything. If you have real data at an actual location, it allows you to say things, e.g., the average temperature in July, about that location. Nothing else. But I would like to know more about where the IPCC’s 288.15K comes from. Is it modeled? How?

bdgwx
Reply to  Frank from NoVA
February 21, 2023 6:38 am

How do you compute the average temperature for July (which has 31 days) with only 30 days worth of data?

Tim Gorman
Reply to  bdgwx
February 21, 2023 8:02 am

The uncertainty in the data will swamp any difference in the averages between using 30 days or 31 days. As usual you are trying to completely ignore uncertainty and just assume that everything is 100% accurate and you can discern differences out to as many decimal places as you want!

MarkW
Reply to  Tim Gorman
February 21, 2023 11:11 am

bdgwx still believes that if you have 10 thermometers in 10 swimming pools, then by adding an 11th thermometer in an 11th pool, you can increase the accuracy of the first 10.

bdgwx
Reply to  MarkW
February 21, 2023 11:38 am

MarkW said: “bdgwx still believes that if you have 10 thermometers in 10 swimming pools, that by adding an 11 thermometer in an 11 pool, you can increase the accuracy of the first 10.”

First, I didn’t say that. Second, that has nothing to do with what we are currently discussing.

Tim Gorman
Reply to  bdgwx
February 21, 2023 12:14 pm

It’s got everything to do with it.

Taking 30 different atmospheric temperature measurements on 30 different days is *exactly* like sticking thermometers in 30 different swimming pools. Adding a 31st swimming pool to the set isn't going to increase the accuracy of your average in any meaningful way. The variation between the pools will make the change in the average truly indiscernible. This is especially true in months like September and March where monthly temperatures see a wide variation as the seasons change.

Tim Gorman
Reply to  MarkW
February 21, 2023 12:08 pm

That’s pretty much it in a nutshell.

sherro01
Reply to  bdgwx
February 20, 2023 10:59 pm

bdgwx,
No, there are ways to ignore missing data cells. Ease of computing with elementary programs like Excel is a prime reason to infill, but that is mental laziness. With many programs, say annual T data to compute a degrees-per-century trend, a year tagged as missing data is simply deleted, so the final result is a trend over 99 or 98 or whatever number of years. It is surprising how often this type of error is unrecognised. I fell for it many times in the early days.
Geoff S

bdgwx
Reply to  sherro01
February 21, 2023 5:51 am

Sure, you can completely ignore the missing value, but then you are no longer computing a global average. Remember, the globe has 510e6 km2 of area. Not 400e6 km2 or 450e6 km2 or even 500e6 km2. It's 510e6 km2 and no less. If you don't compute a spatial average accounting for all 510e6 km2 of area then you haven't computed a global average temperature.

Tim Gorman
Reply to  bdgwx
February 21, 2023 6:45 am

“…then you haven't computed”

You haven’t “computed” a global average temperature by using guessed-at infill data. You have *guessed* at a global average temperature.

Jim Gorman
Reply to  bdgwx
February 21, 2023 8:25 am

And therein lies the rationale for “LONG RECORDS”.

Your own calculation shows a small error by leaving out one day in ten! What is one day out of 30 or 31 using your method? What is the error of leaving one day of one grid out of 510e6 km2?

You are worried about the mean of a distribution. What is the result in variance/standard deviation? How does infilling with a mean value change those?

MarkW
Reply to  bdgwx
February 21, 2023 8:04 am

You adjust for nonexistent data by increasing your uncertainty interval. You can’t remove uncertainty by making up data.

Tim Gorman
Reply to  MarkW
February 21, 2023 8:27 am

Bingo!

bdgwx
Reply to  MarkW
February 21, 2023 9:36 am

MarkW said: “You can’t remove uncertainty by making up data.”

That’s patently false.

Do a data denial experiment and see for yourself. Take a set of observations and hide some of the values for the averaging step. Do a naive average ignoring the missing values (which is effectively the same as infilling them with the average of the existing values) and record the error. Now exploit correlation and infill the missing values using locality (either temporal or spatial) and compute the full coverage average and record the error. Do this a thousand times. Which one has the lower root mean squared error?
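
A minimal sketch of such a data-denial experiment on a synthetic, autocorrelated series; the series and its parameters are illustrative assumptions only:

```python
# Minimal sketch of a data-denial experiment on a synthetic, autocorrelated
# "daily temperature" series: hide one day, compute the monthly mean with the
# gap ignored vs. with a neighbour-interpolated infill, repeat many times,
# and compare root-mean-square errors against the true full-coverage mean.
import numpy as np

rng = np.random.default_rng(42)
naive_errs, local_errs = [], []

for _ in range(1000):
    temps = 20 + 0.3 * np.cumsum(rng.normal(0, 1.5, 30))   # correlated series
    true_mean = temps.mean()

    i = int(rng.integers(1, 29))                 # hide one interior day
    kept = np.delete(temps, i)
    naive_errs.append(kept.mean() - true_mean)   # gap ignored

    infill = (temps[i - 1] + temps[i + 1]) / 2   # simple neighbour interpolation
    local_errs.append((kept.sum() + infill) / 30 - true_mean)

print(f"naive RMSE:        {np.sqrt(np.mean(np.square(naive_errs))):.3f}")
print(f"local-infill RMSE: {np.sqrt(np.mean(np.square(local_errs))):.3f}")
```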

Tim Gorman
Reply to  bdgwx
February 21, 2023 10:52 am

I did the experiment, both with my data and with yours, and posted it here in this thread.

The variability of the data results in an uncertainty that is wider than change in the average value. In other words the uncertainty prevents you from knowing what the true value actually is. It keeps you from discerning the difference in the hundredths digit the way you are doing.

You are only kidding yourself that you have somehow decreased the uncertainty.

Throw away *all* of the values and replace them with the average value. What is your uncertainty then?

MarkW
Reply to  bdgwx
February 21, 2023 11:12 am

So making up data improves data quality.
No wonder you believe that GCMs output data.

bdgwx
Reply to  MarkW
February 21, 2023 11:31 am

MarkW said: “So making up data improves data quality.”

If by “improves data quality” you mean reduces the error, then the answer is yes. I demonstrate how it works here. In that example there was one missing day. Using the naive assumption that the missing day behaves like all of the others yields 0.07 F of error. But by exploiting spatial and temporal correlation I reduced it to 0.03 F. And by exploiting the global circulation of the atmosphere I reduced it to 0.00 F. If you disagree then point out the math mistake, I'll correct it, and we'll see how it plays out.

Tim Gorman
Reply to  bdgwx
February 21, 2023 12:33 pm

And yet your uncertainty was something like 3.0. In other words if your average is 34C +/- 3C then how do you discern a difference of 0.07 or 0.03?

Ans: YOU CAN’T!

You are only fooling yourself!

Feynman: “The first principle is that you must not fool yourself and you are the easiest person to fool.”

Tim Gorman
Reply to  MarkW
February 21, 2023 12:30 pm

Infilling with average values only drives things toward the average value. It masks the standard deviation and variance in the data. It just creates a more peaked distribution.

But, of course, that’s the whole idea. Try and make it look like uncertainty goes down.
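A quick way to see the effect being described here, with made-up numbers rather than any real station's data:

```python
import statistics

observed = [43, 52, 45, 37, 33, 27, 31]          # hypothetical daily values actually measured
mean_obs = statistics.mean(observed)

# Infill three "missing" days with the mean of the observed values.
infilled = observed + [mean_obs] * 3

print("observed mean/SD:", round(mean_obs, 2), round(statistics.stdev(observed), 2))
print("infilled mean/SD:", round(statistics.mean(infilled), 2), round(statistics.stdev(infilled), 2))
# The mean is unchanged, but the standard deviation shrinks, i.e. the
# infilled series looks more tightly clustered than the measurements were.
```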

bdgwx
Reply to  MarkW
February 21, 2023 11:34 am

MarkW said: “You adjust for nonexistent data by increasing your uncertainty interval.”

Explain that. Be specific. How are you adjusting the data specifically? I want to replicate your procedure and see how well it does.

Tim Gorman
Reply to  bdgwx
February 21, 2023 12:39 pm

Why does this need explanation?

If you are measuring crankshaft journals and you measure seven out of eight do you just assume the eighth one is equal to the average of the other seven?

If you have seven boards of varying lengths you are using to create a beam to span a foundation do you just measure six and assume the seventh one is equal to the average value of the other six?

In each case you had better increase your uncertainty for the unknown piece or you are going to wind up failing in your task. The engine could fail because the crankshaft bearings seize or are too loose for good oil pressure. The beam could wind up being too short to cover the span.

It’s no different with temperatures when you are trying to figure out what is going on. If you just always assume the unknowns are equal to the average with zero uncertainty then you may as well just bet on the next horse race with no knowledge about the horses that are running!

sherro01
Reply to  bdgwx
February 20, 2023 10:50 pm

bdgwx,

Sorry, but your example is illogical nonsense. The missing value is missing because its real value is unknown. Any imputed value is a guess. People can affect the size of a guess, but not the size of the missing value. People can impute reasonable-looking values, like in your example, but that is illusory because the true value is unknown for comparison.
With conventional temperatures, there are some ways to restrict the likely range of imputed values. For example, a mercury thermometer should show values above the Hg freezing point and below the melting point of glass, but that type of constraint is rarely useful.
Geoff S

bdgwx
Reply to  sherro01
February 21, 2023 5:56 am

Do you want to work through an example together to demonstrate how this works? Pick your favorite station and post the Tmin and Tmax for a single month. We’ll then do a data denial experiment where we intentionally hide one day’s data and we’ll compute the monthly average using different infilling strategies. And remember, the goal is to compute a monthly average temperature for the station, so we HAVE to account for all of the days in the month, otherwise it isn’t a monthly average.

Tim Gorman
Reply to  bdgwx
February 21, 2023 6:20 am

You will find that the variability in the daily temperatures introduces an uncertainty (i.e. a standard deviation) that will swamp any change in the average that you might find through infilling missing data. You will be doing nothing but moving the average around inside the uncertainty interval when you don’t actually know the true value to begin with!

“And remember, the goal is to compute a monthly average temperature for the station, so we HAVE to account for all of the days in the month, otherwise it isn’t a monthly average.”

Same reasoning. If the standard deviation of the monthly temperatures is such that the average remains within the uncertainty interval then what have you gained from the additional data point? You still won’t know a “true value” for the average!

bdgwx
Reply to  bdgwx
February 21, 2023 6:36 am

We’ll do a test run with my home town of St. Louis. In January of 2023 the data looks like this.

Day, Tmin, Tmax
1 43 65
2 52 62
3 45 72
4 37 46
5 33 42
6 27 48
7 31 43
8 26 40
9 23 56
10 38 59
11 42 66
12 31 52
13 22 31
14 19 39
15 32 54
16 46 62
17 39 50
18 38 43
19 37 44
20 29 37
21 22 44
22 32 42
23 34 48
24 30 45
25 33 38
26 25 35
27 25 55
28 32 59
29 23 49
30 17 23
31 11 22
The actual average temperature is 39.44 F.

Now, let’s deny the use of the data from Jan. 12, 2023. The naïve average is 39.37 F. But that isn’t the average for January; only a subset of January. If we want to say it is the average for January we have to account for all 31 days. The no-effort naive approach is to assume the missing day behaves like the filled 30. We are effectively infilling using the average of the filled 30. The end result is 39.37 F. That is an error of 0.07 F.

Again, we will deny the use of the data from Jan. 12, 2023. But this time we use a local infilling strategy that exploits temporal correlation. We will use Jan. 11 and 13 to infill Jan. 12. Using this approach our final monthly average is 39.40 F. This is an error of 0.04 F. We have reduced the error by using a local strategy as opposed to a global strategy.

But we’re not done yet. Not only can we exploit temporal correlation, but we can exploit spatial correlation as well. We will infill Jan. 12 using the Chesterfield station. Using this approach our final monthly average is 39.41 F. This is an error of 0.03 F. We have reduced the error yet again.

And it doesn’t stop there. If we exploit global circulation model predictions from the day prior we can infill using predictions of the temperature. The end result here is 39.44 F. This is an error of 0.00 F. We have reduced the error yet again.

Do you see the power of exploiting locality and spatial and temporal correlations? Do you see the power of exploiting the laws of physics (GCM predictions)? By using a better infilling strategy we reduced the error from 0.07 F to 0.00 F.
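The naive and neighbour-day figures above can be checked directly from the posted table; a short Python sketch follows (the Chesterfield and GCM-based steps need data not shown in the comment, so they are left out):

```python
tmin = [43, 52, 45, 37, 33, 27, 31, 26, 23, 38, 42, 31, 22, 19, 32, 46,
        39, 38, 37, 29, 22, 32, 34, 30, 33, 25, 25, 32, 23, 17, 11]
tmax = [65, 62, 72, 46, 42, 48, 43, 40, 56, 59, 66, 52, 31, 39, 54, 62,
        50, 43, 44, 37, 44, 42, 48, 45, 38, 35, 55, 59, 49, 23, 22]

daily = [(lo + hi) / 2 for lo, hi in zip(tmin, tmax)]
true_avg = sum(daily) / len(daily)                       # ~39.44 F for the full month

missing = 11                                             # index of Jan. 12
kept = [t for i, t in enumerate(daily) if i != missing]

naive = sum(kept) / len(kept)                            # ~39.37 F (about 0.07 F low)
neighbour = (daily[missing - 1] + daily[missing + 1]) / 2
temporal = (sum(kept) + neighbour) / len(daily)          # ~39.40 F (about 0.04 F low)

for name, val in [("full month", true_avg), ("naive", naive), ("temporal infill", temporal)]:
    print(f"{name:16s} {val:6.2f} F  (error {val - true_avg:+.2f} F)")
```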

Last edited 27 days ago by bdgwx
Frank from NoVA
Reply to  bdgwx
February 21, 2023 7:05 am

‘Do you see the power of exploiting locality and spatial and temporal correlations? Do you see the power of exploiting the laws of physics (GCM predictions)?’

I see the power of massive fraud. Your StL example missed a single day. The reality, as you note, is that NASA is using GCMs to infill ‘data’ for stations that haven’t recorded a real measurement in weeks, months, years. How many and what percentage of the total ‘readings’ does this in fact represent? You may not realize this, but you’ve just verified Heller’s contention that NASA’s infills have a very high correlation with CO2.

Sounds to me like you (and Nick) are saying that we need to make up data to create a spherical grid, which we need to create a global average surface temperature. Question: Way before said spherical grid existed, the IPCC estimated GAST at 288.15K. Where and how did they get this number?

bdgwx
Reply to  Frank from NoVA
February 21, 2023 7:27 am

Frank from NoVA said: “Your StL example missed a single day.”

It works for any number of missing values. In fact, the more missing values you have the better locality based infilling strategies become. You can prove this out for yourself by repeating the data denial experiment for your favorite station. I’ll help you if you want. Which station do you want to consider?

Frank from NoVA said: “The reality, as you note, is that NASA is using GCMs to infill ‘data’ for stations that haven’t recorded a real measurement in weeks, months, years.”

I said no such thing. And no, they don’t.

Frank from NoVA said: “How many and what percentage of the total ‘readings’ does this in fact represent?”

0%.

Frank from NoVA said: “You may not realize this, but you’ve just verified Heller’s contention that NASA’s infills have a very high correlation with CO2.”

No it doesn’t. NASA’s infilling strategy documented in [Hansen & Lebedeff 1987] has nothing to do with CO2. What they do is exploit spatial correlation.

Frank from NoVA said: “Question: Way before said spherical grid existed, the IPCC estimated GAST at 288.15K. Where and how did they get this number?”

I don’t know. Can you post a link where you got that?

Tim Gorman
Reply to  bdgwx
February 21, 2023 7:59 am

Infilling is just a fraud. It’s a subjective guess since you don’t know the differences in the micro-climates among locations. As I’ve shown you in two different posts just leaving the data out leaves your average well within the expanded uncertainty interval of the data itself.

The variance in your data is so wide (almost 100) that the average is really meaningless in any case. Your SEM is 10/sqrt(31) = 1.8. In other words the population average could range from 37.6 to 41.2 quite easily.

This is all ignoring the measurement uncertainty in the temperatures and assuming that there is no systematic bias in the measurements.

Frank from NoVA
Reply to  bdgwx
February 21, 2023 8:13 am

‘0%’ – Absolutely not true!

‘NASA’s infilling…has nothing to do with CO2.’

If it’s based on GCM ‘physics’, it utilizes the model’s forcings, i.e., CO2.

‘Can you post a link where you got that?’

The 288K pre-industrial GAST figure pops up everywhere in alarmist circles. Here it is quoted by the lads at PSU:

“However, if we take ε = 0.77 (i.e., the atmosphere absorbs 77% of the IR radiation incident upon it), we get a result, Ts = 288 K = 15°C. Just right!”

Looks like they assumed the result they wanted and just plugged in epsilon to obtain it. How robust! But you can investigate further here:

https://www.e-education.psu.edu/meteo469/node/198
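For reference, the PSU page appears to be the standard one-layer “grey atmosphere” exercise; a sketch of that calculation is below (the solar constant, albedo, and the exact form of the energy balance are my fill-ins, not taken verbatim from the page):

```python
# One-layer grey-atmosphere toy model: sigma * Ts^4 * (1 - eps/2) = S * (1 - albedo) / 4
S = 1367.0        # solar constant, W/m^2 (assumed)
albedo = 0.3      # planetary albedo (assumed)
sigma = 5.67e-8   # Stefan-Boltzmann constant, W/m^2/K^4

def surface_temp(eps):
    absorbed = S * (1 - albedo) / 4
    return (absorbed / (sigma * (1 - eps / 2))) ** 0.25

print(round(surface_temp(0.0), 1))   # ~255 K, no IR-absorbing atmosphere
print(round(surface_temp(0.77), 1))  # ~288 K, the value quoted above
```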

bdgwx
Reply to  Frank from NoVA
February 21, 2023 9:18 am

Frank from NoVA said: “If it’s based on GCM ‘physics’, it utilizes the model’s forcings, i.e., CO2.”

Can you post the page and paragraph number in the HL87 publication I linked to above where you are seeing that? Better yet…show me the line in the source code where this is happening. You think it is based on GCM predictions…prove it!

Frank from NoVA said: “Looks like they assumed the result they wanted and just plugged eta to obtain it. How robust! But you can investigate further here:
https://www.e-education.psu.edu/meteo469/node/198

That has absolutely nothing to do with how GISTEMP measures the global average temperature.

Last edited 27 days ago by bdgwx
Frank from NoVA
Reply to  bdgwx
February 21, 2023 2:00 pm

‘You think it is based on GCM predictions…prove it!’

No problem – you recently said so yourself:

‘Do you see the power of exploiting the laws of physics (GCM predictions)?’

And here’s the link to the comment where you said it:

https://wattsupwiththat.com/2023/02/20/is-noaa-trying-to-warm-the-current-8-year-pause/#comment-3684558

‘That has absolutely nothing to do with how GISTEMP measures the global average temperature.’

But it has EVERYTHING to do with WHY you and Nick are so adamant about infilling temperature data – alarmists need to compute the current GAST so that they can compare it to the ‘pre-industrial’ GAST, i.e., the 288K, in order to scare the masses into doing colossally stupid things, like net zero.

So, again, what is the origin of the 288K?

bdgwx
Reply to  Frank from NoVA
February 21, 2023 4:27 pm

Frank from NoVA said: “No problem – you recently said so yourself:
‘Do you see the power of exploiting the laws of physics (GCM predictions)?’”

Yep. That is exactly what I said. Thank you. Notice what I didn’t say. I didn’t say NASA bases its infilling on GCMs or CO2. I wasn’t even talking about NASA.

I’ll ask again…if YOU think they base their infilling on GCMs or CO2 then provide the link supporting that hypothesis.

Frank from NoVA said: “And here’s the link to the comment where you said it:
https://wattsupwiththat.com/2023/02/20/is-noaa-trying-to-warm-the-current-8-year-pause/#comment-3684558

Yep. And I stand by that comment 100%.

Frank from NoVA said: “But it has EVERYTHING to do with WHY you and Nick are so adamant about infilling temperature data “

No it doesn’t. NASA does NOT infill based on GCMs. Just because I showed it was possible and reduces the error to zero does not mean that NASA or anyone actually does it that way. The point I’m making is that more advanced infilling techniques unequivocally reduce error even to the point where it goes to near 0. If you don’t understand the concept then ask questions. I’m more than willing to clarify points.

Frank from NoVA said: “So, again, what is the origin of the 288K?”

I have no idea. First it was 288.15 K now it is 288 K. I don’t even know where you got that figure. If you can provide the link from where you got it then I’ll read it and try my best to help with its interpretation.

Tim Gorman
Reply to  Frank from NoVA
February 21, 2023 7:46 am

As usual, bdgwx always wants to ignore uncertainty. Using Possolo’s method of analysis in TN1900, the expanded uncertainty of the monthly median temperatures is 0.65, a whole order of magnitude larger than his 0.07 difference in the averages.

When your uncertainty swamps the difference you are trying to perceive, thinking you are actually seeing something is only fooling yourself. What was it that Feynman said about that?

Tim Gorman
Reply to  bdgwx
February 21, 2023 7:43 am

The standard deviation of your data is 9.9 before dropping Jan 12. It is 10 afterwards.

The difference of 0.07 is meaningless. It is almost two orders of magnitude smaller than your standard deviation. Using Possolo’s method of analysis the expanded uncertainty of your average is 0.65, an order of magnitude larger than the difference of 0.07.

Why you always want to ignore the uncertainty in what you do is just beyond me. When the uncertainty swamps the differences you are trying to perceive you are only fooling yourself that the difference actually exists!

MarkW
Reply to  bdgwx
February 21, 2023 8:08 am

Equating GCM’s to the laws of physics. Now that’s delusion on steroids.

bdgwx
Reply to  MarkW
February 21, 2023 11:26 am

MarkW said: “Equating GCM’s to the laws of physics. Now that’s delusion on steroids.”

Then it should be doubly shocking to you that using that infilling technique reduced the error to 0.00 F. Tell me…what was the error using the assumption that the missing day behaves like all of the others, which is what is being advocated for here?

Jim Gorman
Reply to  bdgwx
February 21, 2023 9:00 am

You do realize that continued insertion of an average value will reduce the error to zero, right? Replace every temp with the average and what error do you get? What happens when you infill 7 days with the average of the other 23/24 days? Your imputation of reduced error may be mathematically correct but is it physically correct?

You also fail to see that infilling errors accumulate whether at the same site or multiple sites. What is the accumulated error from 10 sites each with an error?

Your logic is similar to the train company in Ohio. It’s gone 10,000 miles w/o a problem, it will go further.

I see your temps are integer values. Yet you are doing calculations based on hundredths of a degree. Significant digit rules allow for using one extra digit (tenths) to minimize rounding errors. Your final value of each average should be expressed as integers following those rules. Your “errors” then disappear!

These are physical measurements and should follow the rules every university, government, and business laboratory must follow to refrain from adding unwarranted information (precision).

Jim Gorman
Reply to  Jim Gorman
February 21, 2023 5:05 pm

bdgwx,

You have made multiple postings after this one, yet have not responded to this. Don’t you have an answer?

Reply to  bdgwx
February 22, 2023 1:13 am

bdgwx,
You claim the average of your numbers is 39.44 F.
That is immediately wrong.
You cannot state it better than 39 F.
Geoff S

bdgwx
Reply to  Geoff Sherrington
February 22, 2023 9:25 am

GS said: “You claim the average of your numbers is 39.44 F.”

Absolutely. And I stand by that claim. I did it in Excel.

GS said: “That is immediately wrong.”

Say what? You don’t think an average is Σ[x_i, 1, n] / n?

GS said: “You cannot state it better than 39 F.”

This has to be a joke right?

Jim Gorman
Reply to  bdgwx
February 22, 2023 10:45 am

I’ve already posted this to you about Significant Digits. I’ll post it again and maybe it will sink in. If you need university references I’ll be happy to give them to you.

“””””I see your temps are integer values. Yet you are doing calculations based on hundredths of a degree. Significant digit rules allow for using one extra digit (tenths) to minimize rounding errors. Your final value of each average should be expressed as integers following those rules. Your “errors” then disappear!

These are physical measurements and should follow the rules every university, government, and business laboratory must follow to refrain from adding unwarranted information (precision).”””””

Just for one reference from:

http://chemed.chem.purdue.edu/genchem/topicreview/bp/ch1/sigfigs.html

“””””It is important to be honest when reporting a measurement, so that it does not appear to be more accurate than the equipment used to make the measurement allows. We can achieve this by controlling the number of digits, or significant figures, used to report the measurement.”””””
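As a concrete illustration of the rounding rule being invoked (a sketch only; whether significant-figure rules or a formal uncertainty budget is the right tool is exactly what is being argued in this thread):

```python
temps = [43, 52, 45, 37, 33, 27, 31]    # readings recorded to the nearest whole degree
avg = sum(temps) / len(temps)           # carries many decimal places internally

# Carry one guard digit (tenths) during the calculation, then report the
# final result to the precision of the inputs (whole degrees).
print(f"working value : {avg:.1f}")
print(f"reported value: {round(avg):d}")
```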

Last edited 26 days ago by Jim Gorman
Tim Gorman
Reply to  Geoff Sherrington
February 23, 2023 4:17 am

Even then he needs to quote an uncertainty interval that will probably be in the units digit!

Tim Gorman
Reply to  bdgwx
February 21, 2023 7:03 am

Here are 31 median daily temperature values for Jan, 2022 (in Kelvin).

265.18,261.54,270.54,276.96,267.23
261.79,263.96,273.12,269.57,273.65
280.07,276.84,280.59,277.51,267.09
269.90,276.18,281.37,270.15,261.59
263.73,275.43,277.29,276.90,264.93
267.26,275.23,270.43,277.71,276.79
280.84

The average is 271.98 with a standard deviation of 6.25.

Let’s delete the last one, 280.84. The average is now 271.68 with a standard deviation of 6.1.

EXACTLY what do you think has been lost in leaving out one data point? Let’s add the data point back in using the average of the other 30 data points and see what we get.

average = 272.9, standard deviation = 6.4

All of these average values are well within the uncertainty interval as determined by the standard deviation of the values.

272, 272, 273

Pick a value out of the middle and replace it with the average of the two surrounding values. E.g. replace 280.59 with the average of 277.51 and 276.84 = 277.18. The average remains 273 with a standard deviation of 6.4. No change.

What other infilling technique would you use? Remember, these are values measured at the same location using the same device. If you try using temperatures measured at different locations using different devices how do you account for differences in the micro-climates at the various locations?
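A sketch that simply performs the steps described on the numbers posted above, so readers can recompute the means and standard deviations for themselves:

```python
import statistics

temps = [265.18, 261.54, 270.54, 276.96, 267.23, 261.79, 263.96, 273.12,
         269.57, 273.65, 280.07, 276.84, 280.59, 277.51, 267.09, 269.90,
         276.18, 281.37, 270.15, 261.59, 263.73, 275.43, 277.29, 276.90,
         264.93, 267.26, 275.23, 270.43, 277.71, 276.79, 280.84]

def report(label, xs):
    print(f"{label:30s} mean {statistics.mean(xs):7.2f}  SD {statistics.stdev(xs):5.2f}")

report("all 31 values", temps)
report("last value dropped", temps[:-1])

# Infill the dropped value with the mean of the remaining 30.
infilled = temps[:-1] + [statistics.mean(temps[:-1])]
report("last value infilled w/ mean", infilled)

# Replace a middle value (280.59) with the average of its two neighbours.
neighbour_fill = temps.copy()
neighbour_fill[12] = (temps[11] + temps[13]) / 2
report("middle value infilled", neighbour_fill)
```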

Frank from NoVA
Reply to  Tim Gorman
February 21, 2023 8:00 am

Good point – I had forgotten about the old Alarmist ploy of manipulating temperatures on C or F scales rather than K or R scales.

Reply to  Tim Gorman
February 22, 2023 1:26 am

Yes, Tim,
To add to the confusion, when you look over data in the detail that I have done over the years for BOM Australia, you see more and more irregularities. Re missing values from daily temperature files, sometimes Tmax values are shown as missing when Tmin values are continuous, allowing one to infer that the Tmax was deleted because it was anomalously high, since the preceding Tmin night was quite hot. There is no statistical method known to correct for the operator omitting to record values that are high. Infilling with averages or with the mean of values on either side is wrong and misleading.
When the operators allow wrong and misleading methods to happen, as with homogenization, you start to be suspicious of the actual extent of wrong manipulation. You can only guess at it because the numbers are gone forever.
I am preparing another WUWT article that shows some real eye openers from tests that help you infer what is really different from what else. Do stay tuned, because the Gormans and I seem to have similar tight logic in mind. Geoff S

Tim Gorman
Reply to  Geoff Sherrington
February 22, 2023 1:49 pm

Omitting high and low “outliers” only serves to lower variance and standard deviation. Unless there is a physical reason for omitting them they should be considered as part of the record, e.g. if there was a grass fire under the station at 3PM local on day X. If you follow Possolo’s method in TN1900 all that omitting does is raise the coverage factor used for the expanded uncertainty thus making the uncertainty associated with the average larger.

If you are omitting data because it “seems” wrong that is basically just committing fraud, pure and plain.

There is a price to be paid for never considering the uncertainty of your results – no one can trust what you have done. That seems to be endemic in climate science.

The more I look at temperature data the more I don’t trust anything about CAGW. If temperatures are considered to be random variables then to “average” them they must be iid, i.e. have the same distribution with the same standard deviation, skewness, etc.

That simply is not the case with Tmax and Tmin. That makes (Tmax+Tmin)/2 a median of a skewed distribution and not an average. The median of a skewed distribution doesn’t have a standard deviation. Doing this for over a hundred years doesn’t make it right, correct, or physically useful.

It’s why I continue to push for the use of degree-days calculated as the integral of the temperature profile for studying climate. It eliminates the need to assume that all temperatures are random variables with Gaussian distributions. Degree-days give a *much* better picture of the climate at a location. You can add degree-day values and get a total value on a monthly or annual basis. A location with a cold climate is going to have fewer cooling degree-days than one with a warmer climate, even though they could each have the same Tmedian value!
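For readers unfamiliar with the degree-day measure, a minimal sketch of the integral form versus the (Tmax+Tmin)/2 shortcut (the base temperature and the hourly profile are invented for illustration):

```python
BASE = 65.0   # degree-day base temperature in F (a common HVAC convention)

# A hypothetical day's hourly temperatures (24 values).
hourly = [41, 40, 39, 38, 38, 37, 38, 41, 45, 50, 55, 59,
          62, 64, 65, 64, 62, 58, 54, 50, 47, 45, 43, 42]

# Integral form: average the hourly departures from the base over the day.
heating_dd = sum(max(BASE - t, 0) for t in hourly) / 24
cooling_dd = sum(max(t - BASE, 0) for t in hourly) / 24

# Midrange shortcut: the traditional (Tmax + Tmin)/2 value for the day.
tmid = (max(hourly) + min(hourly)) / 2
heating_dd_mid = max(BASE - tmid, 0)

print(f"heating degree-days (integrated): {heating_dd:.1f}")
print(f"heating degree-days (midrange)  : {heating_dd_mid:.1f}")
print(f"cooling degree-days (integrated): {cooling_dd:.1f}")
```

The two heating figures differ because the integral follows the shape of the daily temperature curve rather than just its extremes, which is the distinction being argued for here.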

Reply to  bdgwx
February 22, 2023 1:10 am

bdgwx,
No,
I do not want to do that experiment.
You are using the wrong criterion to show you have improved something.
A better mathematical “result” from a favourite stats formula does not create a better product or future for that product.
Tim Gorman has just illustrated that your product can fail, be it bearings or planks, because you have used the wrong criterion.
You need to calculate actual fit, like the worst probable actual fit, not the best mathematical fit.
Geoff S

bdgwx
Reply to  Geoff Sherrington
February 22, 2023 9:15 am

GS said: “I do not want to do that experiment.”

Here’s what I think. I suspect you don’t want to do the experiment because you know infilling using local strategies is more effective than a naive strategy. You know the experiment is going to confirm this and you don’t want to see it firsthand.

GS said: “You are using the wrong criterion to show you have improved something.”

Really? You don’t think error (the difference between the estimated value and the actual value) is the right tool for the job? What metric should I use to test the skill of infilling strategies?

GS said: “Tim Gorman has just illustrated”

Between him and Jim they constantly make numerous math mistakes, including the belief that Σa^2=(Σa)^2, and some so egregious that they conflate averages with sums and conflate the mathematical operators quotient (/) and sum (+). Here is the thread where they made at least 24 algebra mistakes. Not a single one of these was corrected or even acknowledged. They also challenge the 1LOT, Stefan-Boltzmann Law, etc. They don’t think it is even valid to perform mathematical operations on an intensive property. Is this really the duo you want to act as your authority for truth?

Last edited 26 days ago by bdgwx
Tim Gorman
Reply to  bdgwx
February 23, 2023 4:16 am

“Between he and Jim they constantly make numerous math mistakes”

Malarky!

I took 31 daily median values from my own weather station and showed how the average and standard deviation changes when you leave a day out or infill a day with an average of the surrounding temperatures.

YOU DIDN’T DO ANYTHING TO SHOW HOW THAT WAS MISCALCULATED.

The result of the average remains within the standard deviation meaning you simply don’t know what the *actual* true value is. Yet you somehow think you have the “true value” and that you have decreased the “error”.

You *still*, after at least two years, do not understand uncertainty. You just ignore it and the impact that it has. You are a total failure as a physical scientist.

Jim Gorman
Reply to  bdgwx
February 21, 2023 6:55 am

Your statement of error falls short as usual. The NWS/NOAA specifies ±1° F combined uncertainty for all but CRN stations. Part of that uncertainty at each station is due to systematic error which obviously doesn’t disappear.

U_max would be ±9 if straight addition were used. Using RSS will give “u = √(9*1) = 3”.

You can not reduce this uncertainty by finding an “average uncertainty” which effectively spreads that uncertainty amongst all elements. That would be an uncertainty of “3 / 9 = 0.33”. In other words each measurement in the collection now has an uncertainty of 0.33.

You can not logically justify that since you already know each element has an uncertainty of ±1! It would be in essence saying that the mean is surrounded on each side by elements with ±1 uncertainty, but that the calculated mean has a much smaller uncertainty.

Let’s use the method specified in NIST TN 1900 for temperature.

mean = 5.78
SD of (1, 2, 4, 5, 6, 7, 8, 9,10) = 3.07
3.07 / √9 = 1.02
t Factor for DF=8 and 97.5% => 2.306
1.02 * 2.306 = 2.36

=> 5.78 ± 2.36

(1, 2, 5.78, 4, 5, 6, 7, 8, 9, 10)
mean = 5.78
SD = 2.90
Var = 8.4
t Factor @DF = 9 => 2.262
2.09 / √10 = 0.917
0.917 * 2.262 = 2.07

=> 5.78 ± 2.07

That is far, far from your error of 0.27.
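The same TN 1900-style arithmetic in a few lines of Python, for anyone who wants to reproduce the two results above (the toy data are those given in this comment; SciPy is used only for the Student's t coverage factor):

```python
import statistics
from scipy import stats

def expanded_uncertainty(values, coverage=0.95):
    """Mean and expanded uncertainty: (SD / sqrt(n)) * t-factor, per TN 1900 Example 2 style."""
    n = len(values)
    sem = statistics.stdev(values) / n ** 0.5
    k = stats.t.ppf(1 - (1 - coverage) / 2, df=n - 1)   # two-sided Student's t factor
    return statistics.mean(values), k * sem

for data in ([1, 2, 4, 5, 6, 7, 8, 9, 10],
             [1, 2, 5.78, 4, 5, 6, 7, 8, 9, 10]):
    mean, u = expanded_uncertainty(data)
    print(f"mean {mean:.2f} +/- {u:.2f}")
```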

I have also included an image of a calculator page for reference.

PSX_20230221_083301.jpg
MarkW
Reply to  bdgwx
February 21, 2023 7:56 am

So a few dozen “observations”, almost all of which are in western Europe and the east coast US, are sufficient to tell us what the temperature of the entire planet was, to an accuracy of better than 0.1C?

Tim Gorman
Reply to  MarkW
February 21, 2023 8:25 am

it’s kind of like saying the force of gravity is the same on the surface of the moon as it is on the surface of the earth because we measured it at the surface of the earth!

bdgwx
Reply to  MarkW
February 21, 2023 9:24 am

MarkW said: “So a few dozen “observations” almost all of which are in western Europe and east coast US, are sufficient to tell us what the temperature of the entire planet was, to an accuracy of greater than 0.1C?”

I think you have me confused with someone else. I didn’t say that.

Tim Gorman
Reply to  bdgwx
February 21, 2023 10:55 am

That is *exactly* what you are saying when you state you can infill data for the entire globe from the data obtained in western Europe and the eastern US.

If you *can’t* do that then how do you extend your trend back into the past?

MarkW
Reply to  bdgwx
February 21, 2023 11:16 am

Since you claimed that we know the temperature of the whole earth to within a tenth of a degree 200 years ago, that is indeed what you said.

bdgwx
Reply to  MarkW
February 21, 2023 11:22 am

MarkW said: “Since you claimed that we know the temperature of the whole earth to within a tenth of a degree 200 years ago, that is indeed what you said.”

You definitely have me confused with someone else. I didn’t say that either.

If you want to challenge something I said then fine. Quote it and I’ll either defend it or concede that I was wrong.

What I’m not going to do is defend arguments you and you alone are making.

Last edited 27 days ago by bdgwx
Tim Gorman
Reply to  bdgwx
February 21, 2023 12:44 pm

“This is an error of 0.03 F.”

Then how did you come up with an error in the hundredths digit?

Tim Gorman
Reply to  MarkW
February 21, 2023 12:42 pm

Yep.

All uncertainty is random and cancels. The average is always 100% accurate to any number of decimal points. You can just assume that all temperatures are equal to the average so the standard deviation and variance is always zero.

Richard Greene
Reply to  bdgwx
February 20, 2023 8:47 am

You left out the huge warming adjustment to the 1940 to 1975 period, to eliminate global cooling as CO2 was rising, liar. bedofwax

Honest Climate Science and Energy: 16 average temperature charts: The first five show Inconvenient average temperature data are changed at will by goobermint bureaucrat scientists”

bdgwx
Reply to  Richard Greene
February 20, 2023 8:59 am

The graph posted above includes all adjustments.

Last edited 1 month ago by bdgwx
Richard Greene
Reply to  bdgwx
February 20, 2023 9:08 am

Hausfather is the con man who claimed the climate models were “accurate” by using their TCS prediction, rather than their ECS prediction, and using RCP 4.5 rather than RCP 8.5. Zeke H. is a con man and his followers are trained parrots of climate alarmism: Climate Howler Global Whiners.

Meanwhile the political IPCC continues to climate scaremonger with their ECS wild guess. A TCS with RCP 4.5 with roughly half the ECS / RCP 8.5 rate of warming prediction would not scare anyone, so it gets no publicity.

The IPCC conveniently allows people to think ECS is for the next 50 to 100 years, when it is really a wild guess for the next 200 to 400 years.

Their actual wild guess for the next 70 years, TCS, is not scary enough for their anti-manmade CO2 propaganda, which is the IPCCs main goal.

So TCS gets no publicity from the IPCC

Nor does the least inaccurate climate model — the INM from Russia — get any publicity from the IPCC.

Nor does the most accurate temperature compilation, UAH, get publicity from the IPCC.

Nor do the NOAA tide gauges, which are the ONLY accurate measure of relative sea level, get any attention from the IPCC, because they prefer over-adjusted satellite absolute sea level claptrap.

Do you see a pattern here BedofWax?

Last edited 1 month ago by Richard Greene
Willard
Reply to  Richard Greene
February 20, 2023 9:16 am

The only pattern is you arguing by assertion, greenest Richard.

bdgwx
Reply to  Richard Greene
February 20, 2023 10:55 am

I believe you are referring to Hausfather et al. 2019. The authors used two approaches: 1) change in temperature versus time and 2) change in temperature versus change in radiative forcing (“implied TCR”). The reason ECS is not used is because it requires the equilibrium period to have elapsed, which for the fast feedbacks is on the order of 100 years.

Regarding the Russian INM model…I compared the 42 models from the CMIP5 suite to BEST over the period 1880-2020. The INMCM4 (Russia) had a trend of +0.063 C/decade. The best model is IPSL-CM5B-LR (France) with a trend of +0.088 C/decade. The BEST trend is +0.087 C/decade. The CMIP5 ensemble mean had a trend of +0.079 C/decade. The Russian INM model is clearly inferior among its peers. I encourage you to download the data and verify this yourself. The data can be downloaded at the KNMI Climate Explorer.

bnice2000
Reply to  bdgwx
February 20, 2023 1:48 pm

BEST is built on all the WORST data available.

The most urban affected, unstable, un-scientific data they could find.

Then adjust the wazoo out of it to get the trend they wanted.

bdgwx
Reply to  bnice2000
February 20, 2023 2:13 pm

That is an interesting take considering WUWT gave it their seal of approval back in the day. It was heralded here as the be-all/end-all dataset because it 1) didn’t perform adjustments and 2) was independent.

MarkW
Reply to  bdgwx
February 21, 2023 8:12 am

WUWT gave that seal of approval prior to the work being done, based on the reputation of those involved and conversations between Anthony and those scientists.
The approval was withdrawn after the mess was published.

bdgwx
Reply to  MarkW
February 21, 2023 9:12 am

Exactly. Anthony liked the method. He just didn’t like the result.

MarkW
Reply to  bdgwx
February 21, 2023 11:18 am

When you decide to lie, you really go all out.
He liked the method; the problem is that they didn’t follow the method, as Anthony explained when he withdrew his endorsement.

Like most alarmists, you only see what you want to see, and are simply incapable of ever presenting all of the facts.

BTW, I’m guessing that you aren’t bright enough to realize that even you know you were lying when you claimed that Anthony endorsed the work of the BEST group.

Last edited 27 days ago by MarkW
bdgwx
Reply to  MarkW
February 21, 2023 4:18 pm

MarkW said: “BTW, I’m guessing that you aren’t bright enough to realize that even you know you were lying when you claimed that Anthony endorsed the work of the BEST group.”

Anthony Watts said “I’m prepared to accept whatever result they produce, even if it proves my premise wrong.”

I’ve never seen an endorsement more unequivocal than that.

It doesnot add up
Reply to  bdgwx
February 20, 2023 11:22 am

Is that credible? Here are the changes year by year, taken from the two charts posted. I might be off by a pixel in the readings.

NOAA Anomaly.png
bdgwx
Reply to  It doesnot add up
February 20, 2023 2:47 pm

I think so. They are all within the ±0.075 C uncertainty NOAA publishes for those years.

It doesnot add up
Reply to  bdgwx
February 20, 2023 5:19 pm

No, you haven’t thought. The chart shows that the changes result in increased anomalies except in 2015. It is part of the salami slice process of warming the present. It is not an argument to say that because the adjustment is less than the uncertainty it doesn’t matter. If we add a number of similar adjustments, but ignore them individually on your spurious reasoning, it is not OK.

bdgwx
Reply to  It doesnot add up
February 20, 2023 5:33 pm

Now that we know the reason for the changes was the implementation of NOAAGlobalTemp v5.1 last week, which includes significantly more observations and provides full spatial coverage for the first time, the changes make sense. And it’s a testament to NOAA’s 5.0 analysis that the changes from 5.1 are still within the uncertainty envelope.

I’m not sure what you mean by ignoring adjustments individually. I’m certainly not suggesting doing that. In fact, I’m suggesting the opposite. If you know of a bias you better make an attempt to correct it otherwise I (and others) are not going to be satisfied.

Last edited 1 month ago by bdgwx
Jim Gorman
Reply to  bdgwx
February 21, 2023 10:34 am

Neither you nor NOAA use the recommended procedure for finding the expanded uncertainty of the mean of temperatures.

I have also asked you to define what sample mean you are using.

1 sample with 9500+ entries or,

9500+ samples of what size?

It does matter when finding the SE/SEM.

bnice2000
Reply to  bdgwx
February 20, 2023 1:37 pm

A minor correction of previous massive fakery.

The result is still massive fakery.

Pat from Kerbob
Reply to  bdgwx
February 20, 2023 4:15 pm

Because a hockey stick needs a straighter shaft.
Otherwise it’s not a hockey stick, right?

The biggest of the big lies is that values have been adjusted up AND down, therefore it’s all A-OK.

bdgwx
Reply to  Pat from Kerbob
February 20, 2023 5:15 pm

The “hockey-stick” is in reference to the 1902-1998 instrumental period set against the 1000-1980 proxy record [MBH99]. By adjusting the pre WWII period up more than the post WWII period you are reducing the perceived “hockey-stick” shape. So if the hypothesis is that NOAA is making adjustment to make the hockey-stick look more pronounced then they did a really bad job of it and instead made it look less pronounced.

Last edited 1 month ago by bdgwx
sherro01
Reply to  bdgwx
February 20, 2023 10:38 pm

bdgwx,
A significant part of that Hausfather graph comes from Australia. Australia’s Bureau of Meteorology has produced several successive homogenised versions of land temperatures, recently under the acronym ACORN-SAT.
Colleague Chris Gillham at his web site Waclimate has carefully documented the substantial “cooling the past” done by BOM. The Australian pattern since 1910 is rather different to the Zeke version. This invites discussion of mechanisms that make them different, since logic would infer that well-mixed CO2, if it heats the globe, should not recognise national boundaries.
Please search “Gillham waclimate” then select the chapter on ACORN-SAT version 2.2 cooling the past. You will see actual data, simply organised, starting with raw data.
I am currently helping to prepare a WUWT article that, if accepted, will raise some interesting questions of when raw is not.
Geoff S

leefor
Reply to  sherro01
February 21, 2023 12:54 am

And ACORN-SAT has over 200 changes for “statistical reasons”.

bdgwx
Reply to  sherro01
February 21, 2023 5:46 am

About 1.5% of the global average temperature comes from Australia.

Reply to  bdgwx
February 22, 2023 1:29 am

bdgwx,
It is more than that in 1910 if you only count properly measured data.
Geoff S

bdgwx
Reply to  Geoff Sherrington
February 22, 2023 9:03 am

Geoff, that’s not how global average temperature datasets work. They don’t take the average of the stations. They take an area weighted average of all of the cells in the grid mesh. Australia’s area wasn’t any different in 1910 than it is today.
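For readers unfamiliar with what an area-weighted average over a grid mesh means in practice, a minimal sketch (the 5-degree resolution and the fill values are placeholders, not any agency's actual product):

```python
import math

def global_mean(grid, lat_step=5.0):
    """Area-weighted mean of a latitude-longitude grid of values.

    grid[i][j] holds the value for the cell whose centre latitude is
    -90 + (i + 0.5) * lat_step; each row is weighted by cos(latitude),
    which is proportional to the cell's area on the sphere.
    """
    total, weight = 0.0, 0.0
    for i, row in enumerate(grid):
        lat = -90.0 + (i + 0.5) * lat_step
        w = math.cos(math.radians(lat))
        for value in row:
            total += w * value
            weight += w
    return total / weight

# Toy 36 x 72 grid (5-degree cells) filled with a made-up value.
toy = [[0.5 for _ in range(72)] for _ in range(36)]
print(global_mean(toy))   # 0.5, since every cell holds the same value
```

The weighting is why adding or removing individual stations changes the result only through the cells they feed, not in proportion to the station count.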

Jim Gorman
Reply to  bdgwx
February 22, 2023 6:31 pm

Yep, they bury the actual temperature data deeper and deeper so no one can tell what is actually occurring.

No one can answer whether Tmax or Tmin is growing or falling and where it is happening!

Tim Gorman
Reply to  bdgwx
February 23, 2023 5:23 am

It doesn’t matter how you do the averaging. As Hubbard and Lin pointed out, all you do is spread around any systematic biases stemming from calibration and/or microclimate differences.

Jim Gorman
Reply to  sherro01
February 21, 2023 10:29 am

“””””This invites discussion of mechanisms that make them different, since logic would infer that well-mixed CO2, if it heats the globe, should not recognise national boundaries.”””””

Using a statistic that is not descriptive of the data, i.e., the SE/SEM, is exactly part of what you are talking about. The SE/SEM only describes the sample mean distribution and is dependent on sample size (and NOT the number of samples). It is not a descriptive statistic that describes the variability of the actual data.

This link has pertinent info.

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1255808/

guidvce4
February 20, 2023 6:22 am

Wow! They just keep trying to keep the narrative going, cuz its dying and they know it. The push is on to reinforce whatever the BS of the day is cuz folks are catching on that its all meant to control the populations. That is all.

Peta of Newark
Reply to  guidvce4
February 20, 2023 9:31 am

And the trolls, as we see, know it also.
Else why arrive here so quickly to so strongly protest innocence?

KevinM
Reply to  guidvce4
February 20, 2023 10:15 am

People in general eat and pay rent.

gkarusa
February 20, 2023 6:33 am

NOAA has a history of manipulating and cherry picking data. Their agenda has been clearly stated. CO2 concentration is not far above record lows when you look at long term trends. Temperature is not far above the lowest as we are in an interglacial period. The warming now should not come as a surprise. The only real question is the cause. CO2, though a greenhouse gas, has an R^2 correlation of near zero with actual temperature over eons. The point is, there are many factors to climate change.

bdgwx
February 20, 2023 6:46 am

Steve, the 2015-2022 trend is short and sensitive to the data still rolling into the GHCN and ERSST repositories. You should be seeing changes to the trend on a daily basis, especially (but not exclusively) in the 6 months after the date in question. Some data points, like those from remote and disconnected stations, are delayed by years. And if the records are hand recorded they have to be digitized, which can create a decades-long delay in getting the data uploaded. NOAA is still working on digitization projects for data recorded decades ago, so the trends for decades in the past should be changing as well. Anyway, a change of 0.04 C/decade when the uncertainty is on the order of 1 C/decade is well within expectations. In fact, I’m surprised the change was that small.

Last edited 1 month ago by bdgwx
Tim Gorman
Reply to  bdgwx
February 20, 2023 8:41 am

“Anyway, a change of 0.04 C/decade when the uncertainty is on the order of 1 C/decade is well within expectations. In fact, I’m surprised the change was that small.”

With that kind of uncertainty there isn’t any way to tell if the trend is positive, negative, or sideways.

Bellman
Reply to  Tim Gorman
February 20, 2023 9:00 am

Exactly what I keep saying. 8 years is far too short a period to be claiming any pause.

Tim Gorman
Reply to  Bellman
February 20, 2023 10:03 am

At the Forbes Air Force Base in Topeka, Kansas the pause has lasted since 1953. Is that long enough to be claiming a pause?

Bellman
Reply to  Tim Gorman
February 20, 2023 12:04 pm

Well as you were claiming the annual uncertainty of that station is ±6°C, and as you think that any linear trend that can exist within the uncertainty interval is acceptable, I’d assume you wouldn’t think it was long enough as temperatures could have risen or fallen by 12°C during that period.

But seriously, one location is not the globe.

Out of interest, using the daily GHCN data, I found the trend based on annual averages to be +0.11 ± 0.11°C / decade.

But the daily data is missing about 30 years between 1970 and 2000. So maybe you have a more complete data set.
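For reference, a sketch of the kind of trend-and-uncertainty calculation being quoted: ordinary least squares on annual averages with the standard error of the slope (the data below are synthetic; the real GHCN series for the station is not reproduced here):

```python
import math
import random

random.seed(1)
years = list(range(1953, 2023))
# Synthetic annual averages: a flat series plus noise, standing in for real station data.
temps = [13.0 + random.gauss(0, 0.6) for _ in years]

n = len(years)
xbar, ybar = sum(years) / n, sum(temps) / n
sxx = sum((x - xbar) ** 2 for x in years)
slope = sum((x - xbar) * (y - ybar) for x, y in zip(years, temps)) / sxx
intercept = ybar - slope * xbar

residuals = [y - (intercept + slope * x) for x, y in zip(years, temps)]
se_slope = math.sqrt(sum(r * r for r in residuals) / (n - 2) / sxx)

print(f"trend: {slope * 10:+.3f} +/- {2 * se_slope * 10:.3f} C/decade (approx. 95%)")
```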

Tim Gorman
Reply to  Bellman
February 20, 2023 2:13 pm

Where did I claim the uncertainty of that station to be +/- 6C?

It is and has been an ASOS weather station with a +/- 1.0C uncertainty.

Seriously, if one location on the globe has seen no trend in Tmax or Tmin since 1953 then where *is* the global warming occurring? For every station with *no* global warming there has to be another station with twice the global average in order for the global average to be what it is! If the global average has gone up 0.6C since 1950 then there has to be a station somewhere that has seen a 1.2C increase in order for the average to come out to 0.6C! Where is that station?

My data is from NOAA. A big chunk of the data is missing. But since a linear trend is typically determined mostly by the start data and the end data, the trend is zero. My personal weather station agrees with KFOE since 2012, with an average anomaly of 0.04K, a mode of 0, and a standard deviation of 2.8.

I’m not surprised you are now trying to figure out a way to disregard the linear trending you so vociferously defend otherwise.

The fact is that the temperatures in the 50’s and 60’s are no different than they are today. Where in Pete’s name is the global warming?

As I said elsewhere, if you are afraid of CAGW then move to east central Kansas. We seem to be an island of constant temperature. (apologies to the Soggy Bottom Boys).

Tom Abbott
Reply to  Tim Gorman
February 20, 2023 3:01 pm

“As I said elsewhere, if you are afraid of CAGW then move to east central Kansas. We seem to be an island of constant temperature.”

The whole of the United States and Canada have been in a temperature downtrend since the 1930’s.

In the United States, in 1934, it was 1.2C warmer than it is today.

Hansen said 1934 was 0.5C warmer than 1998, which makes 1934 warmer than 2016, too, since 1998 and 2016 are statistically tied for the warmest temperature in the satellite era (1979 to the present).

Like you said: “Where is the global warming occurring?”

It’s only occurring in the computers of climate change alarmist data manipulators.

Tim Gorman
Reply to  Tom Abbott
February 20, 2023 3:38 pm

It’s been a while but I once took a global sampling of locations on each continent and calculated their heating and cooling degree-day values for the past twenty years. Most came out with cooling degree-day values on a downtrend (i.e. max temps going down), with most having heating degree-days going up (i.e. temps going down). Not all by any means, but the majority.

That’s where it first became obvious to me that if you have some locations with a moderating climate then there must be some that are seeing a *lot* of warming to offset the cooling, plus even more. Problem is that I didn’t find any such locations in my sampling. That’s not to say my sampling was very exhaustive; it was only about 30 locations around the globe. But it was enough for me to start asking questions about the “global average”.

I still don’t know why climate science doesn’t move to doing degree-days. HVAC engineers use them extensively, especially the integral form that is in use today. Agriculture has always used the degree-day measure. Degree-days have always seemed to me to better represent the “climate” than just using temperature measurements with all the statistical inadequacies that go along with that.

Bellman
Reply to  Tim Gorman
February 20, 2023 3:33 pm

Where did I claim the uncertainty of that station to be +/- 6C?

It was Jim who made the claim. We spent some time discussing it at the start of the month. I don’t remember you disagreeing with the claim.

See the thread starting here for instance

https://wattsupwiththat.com/2023/02/01/uah-global-temperature-update-for-january-2023-0-04-deg-c/#comment-3674931

JG starts by saying

Would it surprise you that annual averages have somewhere between ±9 and ±13 °F of measurement uncertainty using NIST calculations for monthly temperatures?

and when I expressed doubts you told me to

Go study Possolo’s methods in TN1900. That’s where those figures come from.

Whilst JG showed how the calculation worked for the Topeka station to give an uncertainty of ±12°F.

Tim Gorman
Reply to  Bellman
February 20, 2023 4:00 pm

“It was Jim who made the claim. We spent some time discussing it at the start of the month. I don’t remember you disagreeing with the claim.”

Your reading comprehension disability is showing again! Jim was the one that posted the excerpt from the ASOS manual showing the +/- 1.8F (+/- 1C) uncertainty for ASOS stations!

Possolo calculates the variability in the stated values of the temperatures while ignoring the measurement uncertainty as being small enough to discount compared to the data variability.

THAT HAS NOTHING TO DO WITH THE MEASUREMENT UNCERTAINTY OF THE MEASURING STATIONS!



Bellman
Reply to  Tim Gorman
February 20, 2023 4:52 pm

THAT HAS NOTHING TO DO WITH THE MEASUREMENT UNCERTAINTY OF THE MEASURING STATIONS!

It was Jim who referred to it a measurement uncertainty. I’ll repeat in case you missed it

Would it surprise you that annual averages have somewhere between ±9 and ±13 °F of measurement uncertainty using NIST calculations for monthly temperatures?

And he specifically used Topeka to illustrate his calculations. He even labels the calculation of 12F as “MEAS. UNCERT.”

Jim Gorman
Reply to  Bellman
February 21, 2023 11:11 am

Yes sir, I did. I also said it was preliminary. I’m pretty sure it was for Tmax only. Besides the point now.

You had just as well forget what I said before, because I have moved to another method of calculation, i.e., TN 1900. You might want to familiarize yourself with it.

And because the average of Tmax and Tmin hides so much of the variability, I am only working with them as separate distributions. They are both generated by totally different waveform functions and therefore Tmax and Tmin are representative of entirely different distributions.

They are also terribly correlated and therefore their average is not Independent which both the LLN and CLT require. They are both from different distributions, which again violates the assumptions necessary for the LLN and CLT.

It would behoove you to begin investigating the assumptions necessary for many of the statistical calculations and tests you make rather than just formula shopping. An example is you never quote a variance/SD for any mean you use! Where do you think that variance/SD goes throughout the calculations? Does it just magically disappear through the magic of statistics?

Bellman
Reply to  Jim Gorman
February 21, 2023 5:18 pm

You had just as well forget what I said before, because I have moved to another method of calculation, i.e., TN 1900. You might want to familiarize yourself with it.

TN1900 was the method you were claiming to use in previous comments.

I’m pretty sure it was for Tmax only. Besides the point now.

Nope. The table you posted was showing average temperatures.

https://wattsupwiththat.com/2023/02/01/uah-global-temperature-update-for-january-2023-0-04-deg-c/#comment-3674980

e.g. June 1953 is TMax: 34.0, TMin: 21.4, TAvg: 27.7.

Your table gives the June figure as 82, which assuming you are using °F, is 27.8°C.

It would behoove you to begin investigating the assumptions necessary for many of the statistical calculations and tests you make rather than just formula shopping.

Says someone who tries to calculate the SEM using the seasonal monthly variance as the standard deviation.

An example is you never quote a variance/SD for any mean you use!

I keep asking what standard deviation you want, and why you think it would be relevant?

Where do you think that variance/SD goes throughout the calculations? Does it just magically disappear through the magic of statistics?

Depends on what you are talking about. Do you know?

Jim Gorman
Reply to  Bellman
February 21, 2023 6:40 pm

“””””Says someone who tries to calculate the SEM using the seasonal monthly variance as the standard deviation.”””””

That is exactly what the NIST TN1900 shows. If you have a beef with them you should take it up with them.

Perhaps you didn’t read my response to bdgwx. I attached a screenshot of a standard old standard deviation calculator. It uses the same method. Funny how that works.

You need to critique the method instead of just making an assertion that it is incorrect.

“””””Depends on what you are talking about. Do you know?”””””

I asked first, you’re it!

Screenshot_20230221-083234.png
Bellman
Reply to  Jim Gorman
February 21, 2023 7:33 pm

It is not “exactly” what they show. They show a SEM calculation on individual daily values from a single month. You took monthly averages from the whole year. The standard deviation you see comes almost entirely from the seasonal variation.

The irony is still that you accuse me of just plugging values into an equation without understanding the assumptions behind it, and then do just that by putting the monthly absolute values into a SEM calculator.

Your values are not random IID variables.

Jim Gorman
Reply to  Bellman
February 22, 2023 4:39 am

“They show a SEM calculation on individual daily values from a single month.”

If you are talking about the image, those are values from bdgwx’s calculations of how adding an average value in changes calculations. They are not daily, monthly, or annual anything.

“You took monthly averages from the whole year.”

I have not done that. I have calculated January monthly values using daily Tmax and Tmin. For the baseline, yes, I used the January average for Tmax and Tmin from each year. I then used the method in TN1900 to calculate the baseline average and its uncertainty. As I tried to show, the calculation procedure is not isolated to the TN1900. It is an accepted algorithm for finding an expanded uncertainty. Your argument is not with me, but with other authorities. You should contact them with your concerns.

Better yet, show us references that differ in the algorithm that should be used. Otherwise your protestations will fall on deaf ears.

“You took monthly averages from the whole year. The standard deviation you see comes almost entirely from the seasonal variation.”

You are barking up the wrong tree. Nothing I have shown yet used an average based upon a 12-month annual average. I will be doing each month separately, which eliminates any seasonal variation with which you are concerned. My point? If no months show any growth in warming, then by inference, there can be no annual growth either. It also shows that CO2 can not be the driver of “heat” for just one or two months.

“The irony is still that you accuse me of just plugging values into an equation without understanding the assumptions behind it, and then do just that by putting the monthly absolute values into a SEM calculator.”

Sorry dude, I have studied the assumptions. From TN1900, (a) measurement and systematic uncertainty are negligible, (b) the distributions can be assumed to follow a Student’s t distribution, and (c) an expanded uncertainty is calculated by multiplying the SEM of the sample by a Student’s t factor based upon the degrees of freedom.

Lastly, if you have examined the common data of Tmax and Tmin, you should have determined that they are extremely correlated. Something like >0.9. That means they are not considered independent! No statistic determined from them can be considered to be independent either. In other words, Tavg is not made up of independent elements. That means the LLN and CLT can not be used to justify any finding since the IID assumption is violated.

Hence the reason for treating Tmax and Tmin as each independent from each other! That is also bolstered by the fact that they are each samples drawn from different distributions, again violating IID when averaged together.

Bellman
Reply to  Jim Gorman
February 22, 2023 7:46 am

If you are talking about the image

Well I was thinking about the image in your original comment

https://wattsupwiththat.com/2023/02/01/uah-global-temperature-update-for-january-2023-0-04-deg-c/#comment-3674980

I’d assumed your other image was the same. But it doesn’t really matter. The point still stands. Calculating a SEM from a standard deviation based on seasonal variation is where your method is wrong.

“I have not done that. I have calculated January monthly values using daily Tmax and Tmin. For the baseline, yes, I used the January average for Tmax and Tmin from each year.”

We may be talking at cross purposes here. I’m referring to the specific chart you showed in the referenced comment. There you seem to be calculating the average monthly value for each month of 1953, then using the standard deviation of these monthly values to determine the SEM and uncertainty of the annual value. This has nothing to do with baselines.

As I suggested, if you want to do this better you should be taking monthly anomaly values rather than absolute values, as you want to remove the seasonality. Maybe you’ve done that elsewhere, but I haven’t seen it. I don’t read every comment made to every article.

It is an accepted algorithm for finding an expanded uncertainty.

And I’ve not disagreed with it. Treating the SEM as the uncertainty (at least the non-biased part) of an average is what I’ve been advocating for years, and you kept telling me I was wrong to do so. I’m glad you are now accepting it. But that doesn’t mean you can just plug any values into a SEM calculator and get a meaningful result. The assumptions are that values in the sample are random, independent and identically distributed. Sometimes these assumptions don’t matter too much, but in your case they definitely do. You are not taking random samples from throughout the year, or, looked at another way, each month is not an identically distributed random variable. Augusts will always be hotter than Januaries, and you will always have exactly one August and one January in each “random” sample.
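The seasonality point is easy to demonstrate with synthetic numbers (everything below is invented: a sinusoidal annual cycle plus small noise standing in for twelve monthly averages):

```python
import math
import statistics

# Twelve synthetic monthly means: a seasonal cycle of +/-10 C around 12 C,
# plus small noise standing in for genuine month-to-month scatter.
noise = [0.2, -0.1, 0.3, -0.3, 0.1, 0.0, -0.2, 0.3, -0.1, 0.2, -0.3, 0.1]
months = [12 + 10 * math.sin(2 * math.pi * (m - 3) / 12) + noise[m] for m in range(12)]

# The same months with the known seasonal cycle (climatology) removed.
climatology = [12 + 10 * math.sin(2 * math.pi * (m - 3) / 12) for m in range(12)]
anomalies = [t - c for t, c in zip(months, climatology)]

def sem(xs):
    return statistics.stdev(xs) / len(xs) ** 0.5

print("SEM from absolute monthly values:", round(sem(months), 2))     # dominated by seasonality
print("SEM from monthly anomalies      :", round(sem(anomalies), 2))  # reflects the actual scatter
```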

Bellman
Reply to  Bellman
February 22, 2023 7:52 am

I will be doing each month separately which eliminates any seasonal variation with which you are concerned.

Then that’s not what I was talking about. I’m talking about what you did do, not what you intend to do in the future.

My point? If no months show any growth in warming, then by inference, there can be no annual growth either.

And again this has nothing to do with my point, which was your claim that the annual uncertainty was around ±6°C.

Jim Gorman
Reply to  Bellman
February 22, 2023 8:39 am

For the last time, that was from preliminary work. Think of it as a prerelease. I don’t even have that spreadsheet anymore and am not going back to redo the formulas.

If that’s all you got to argue about, don’t bother!

Bellman
Reply to  Jim Gorman
February 22, 2023 9:08 am

Fine. Then all you had to say is you no longer agree with your earlier assessment, and say what your new uncertainty assessment is. But it was only a couple of weeks ago when I was being yelled at and called a troll because I pointed out that these same uncertainty figures were obviously nuts. Maybe if you had spent more time pointing out how preliminary the figures were, we could have saved a lot of wasted time.

Jim Gorman
Reply to  Bellman
February 22, 2023 11:00 am

“””””And I’ve not disagreed with it. Treating the SEM as the uncertainty (at least the non-biased part) of an average is what I’ve been advocating for years, and you kept telling me I was wrong to do so. I’m glads you are now accepting it. But that doesn’t mean you can just plug any values into a SEM calculator and get a meaningful result. “””””

Let’s be honest here. The expanded uncertainty being calculated is the statistical uncertainty from the distribution. It IS NOT the measurement uncertainty from random errors and systematic error.

The fact that sampling bias and small sample sizes result in a larger uncertainty than the measurement uncertainty doesn’t remove the fact that measurement uncertainty is important.

“””””You are not taking random samples from throughout the year, or looked at another way, each month is not an identically distributed random variable. Augusts will always be hotter than Januaries, and you will always have exactly one August and one January in each “random” sample.”””””

Don’t try to rationalize seasonality as a reason not to look at annual monthly averages. I spent 30 years of my career dealing with call center hiring and scheduling along with circuit provisioning. Seasonality was just one issue. In general if growth appears only in certain months, it WILL BE APPARENT in graphs of total monthly averages. Same with negative growth in other months. The real problem is using a Tmid that hides what is happening at different times.

Bellman
Reply to  Jim Gorman
February 22, 2023 1:52 pm

It IS NOT the measurement uncertainty from random errors and systematic error.

Measurement uncertainty was the words you used. NIST TN 1900 talks about uncertainty of the measurement result, and distinguishes between measurands derived from measurement equations, and those defined by observation equations, Example 2 being an example of the latter. They are treating the daily average maximum temperature as being made up from observations with random errors about the average. (Whether this makes sense for measuring the actual monthly temperature is a question I ask myself, but I’m happy to accept it for the sake of argument.)

“The fact that sampling bias and small sample sizes result in a larger uncertainty than the measurement uncertainty doesn’t remove the fact that measurement uncertainty is important.”

As I’ve tried to point out before, I think that uncertainty from sampling should usually be a lot greater than the uncertainty from measurements.

Don’t try to rationalize seasonality as a reason not to look at annual monthly averages.

I’m not doing anything of the sort. In fact I advocate using annual averages as a way of removing the seasonality. But this has nothing to do with the uncertainty.

In general if growth appears only in certain months, it WILL BE APPARENT in graphs of total monthly averages.

Your ability to keep missing the point is quite spectacular. This is not about whether some months are warming faster than others. It’s about the fact that some months are naturally warmer than others.

The real problem is using a Tmid that hides what is happening at different times.

It doesn’t matter if we are using TMAX, TMIN or TAVG, it will still be the case that some months are warmer than others, and using the standard deviation of those months to calculate the standard error of the mean will not be meaningful.