UAH Global Temperature Update for March, 2022: +0.15 deg. C

From Dr. Roy Spencer’s Global Warming Blog

April 2nd, 2022, by Roy W. Spencer, Ph.D.

The Version 6.0 global average lower tropospheric temperature (LT) anomaly for March, 2022 was +0.15 deg. C, up from the February, 2022 value of -0.01 deg. C.

The linear warming trend since January, 1979 still stands at +0.13 C/decade (+0.12 C/decade over the global-averaged oceans, and +0.18 C/decade over global-averaged land).

Various regional LT departures from the 30-year (1991-2020) average for the last 15 months are:

YEAR MO GLOBE NHEM. SHEM. TROPIC USA48 ARCTIC AUST 
2021 01 0.12 0.34 -0.09 -0.08 0.36 0.49 -0.52
2021 02 0.20 0.32 0.08 -0.14 -0.66 0.07 -0.27
2021 03 -0.01 0.12 -0.14 -0.29 0.59 -0.78 -0.79
2021 04 -0.05 0.05 -0.15 -0.29 -0.02 0.02 0.29
2021 05 0.08 0.14 0.03 0.06 -0.41 -0.04 0.02
2021 06 -0.01 0.30 -0.32 -0.14 1.44 0.63 -0.76
2021 07 0.20 0.33 0.07 0.13 0.58 0.43 0.80
2021 08 0.17 0.26 0.08 0.07 0.32 0.83 -0.02
2021 09 0.25 0.18 0.33 0.09 0.67 0.02 0.37
2021 10 0.37 0.46 0.27 0.33 0.84 0.63 0.06
2021 11 0.08 0.11 0.06 0.14 0.50 -0.43 -0.29
2021 12 0.21 0.27 0.15 0.03 1.62 0.01 -0.06
2022 01 0.03 0.06 0.00 -0.24 -0.13 0.68 0.09
2022 02 -0.01 0.01 -0.02 -0.24 -0.05 -0.31 -0.50
2022 03 0.15 0.27 0.02 -0.08 0.21 0.74 0.02

The full UAH Global Temperature Report, along with the LT global gridpoint anomaly image for March, 2022, should be available within the next several days here.

The global and regional monthly anomalies for the various atmospheric layers we monitor should be available in the next few days at the following locations:

Lower Troposphere: http://vortex.nsstc.uah.edu/data/msu/v6.0/tlt/uahncdc_lt_6.0.txt
Mid-Troposphere: http://vortex.nsstc.uah.edu/data/msu/v6.0/tmt/uahncdc_mt_6.0.txt
Tropopause: http://vortex.nsstc.uah.edu/data/msu/v6.0/ttp/uahncdc_tp_6.0.txt
Lower Stratosphere: http://vortex.nsstc.uah.edu/data/msu/v6.0/tls/uahncdc_ls_6.0.txt

Mike
April 3, 2022 6:34 pm

I would be surprised if we hit the 2016 heights again from now on.

Pat from kerbob
Reply to  Mike
April 3, 2022 6:52 pm

Sssshhhhh
The Adjustment Bureau relishes a challenge.

Rob_Dawg
Reply to  Pat from kerbob
April 4, 2022 11:00 am

The Adjustocene Bureau will pull it out of their rectify.

TallDave
Reply to  Rob_Dawg
April 5, 2022 8:30 am

the Adjustment Adjusters will then remove all history of the Adjustment, ensuring Consensus

John Tillman
Reply to  Mike
April 3, 2022 7:41 pm

The six-year-long cooling trend could well continue for 12 more years, given the length of time between the super El Niños of 1982, 1998 and 2016. But 2016 might also be the peak of the 40-year warm cycle, 1977-2016, in which case Earth might not enjoy its like again for another 30 or 40 years.

Who can say?

Oldanalyst
Reply to  John Tillman
April 4, 2022 6:49 pm

John, you must be a tool of Satan and an Exxon paid DENIER. 🙂 Climate models CANNOT LIE!

Bill Everett
Reply to  John Tillman
April 5, 2022 8:24 am

It appears from the graph that the thirty years of warming which began in the mid-seventies ended about 2004. If a horizontal line is drawn forward from the 2004 temperature, almost all of the yearly temperature measurements after 2004 fall below the temperature for that year. The obvious El Nino periods should not be considered in the temperature comparison. This indicates that a pause in warming is occurring, and it remains to be seen whether it lasts until 2034-35 and continues the temperature pattern which probably began in the 1850s.

John Tillman
Reply to  John Tillman
April 5, 2022 8:56 pm

Yesterday Arctic sea ice extent was higher than in the seven preceding years on that date.

bdgwx
Reply to  Mike
April 3, 2022 7:45 pm

That’s what we were told after 1998 as well. With a planetary energy imbalance over +0.8 W/m2 [1] and OHC hitting new records [2], it is all but guaranteed that the UAH TLT anomaly will go higher than 2016, likely within 10 years.

Mike
Reply to  bdgwx
April 3, 2022 8:18 pm

Give me a break with your energy imbalance.

From your [2]:
“The increased concentration of greenhouse gases in the atmosphere from human activities traps heat within the climate system and increases ocean heat content (OHC).”
And the hypothesis becomes the fact by declaration.

Richard M
Reply to  bdgwx
April 3, 2022 8:23 pm

Who’s “we”? The AMO had just gone positive in the mid-90s, so we were virtually guaranteed another 30 years of relative warmth. That could be ending soon, and the PDO may also get into a longer negative phase.

It appears these ocean cycles reduce clouds, allowing in more solar energy. There’s your “energy imbalance”. No enhanced greenhouse effect required.

Julian Flood
Reply to  Richard M
April 5, 2022 1:46 am

Ocean surface pollution by oil, surfactant and (nutrient pollution fed) oleaginous phytoplankton will reduce wave breaking and thus salt aerosol CCNs, lower ocean albedo and so increase surface warming. Both reduce cloud cover.

JF

Derg
Reply to  bdgwx
April 4, 2022 12:29 am

For nearly 40 years you guys have been pushing the CO2-as-a-control-knob narrative, and all I have seen is prosperity for humanity along with the same g0d d@amn winters. Your theory is that my winters will slightly warm…I CAN’T TELL. I am getting closer to becoming a climate refugee.

Bindidon
Reply to  Derg
April 4, 2022 5:08 am

“Your theory is that my winters will slightly warm…”

… I’m pleased to confirm. But the price is not zero:

https://wattsupwiththat.com/2022/04/03/uah-global-temperature-update-for-march-2022-0-15-deg-c/#comment-3491126

Btw: no idea where you live, but, for example, no one anywhere says that North America is warming. CONUS winter temperatures at night have been getting colder and colder for years.

MarkW
Reply to  Bindidon
April 4, 2022 6:21 am

You’re hanging your hat on a one month change?
Are you really that ignorant?

Bindidon
Reply to  MarkW
April 4, 2022 11:46 am

What kind of nonsense are you telling here?
Which month are you speaking about?

Where the heck was I speaking about one month?

Are you a blogbot?

bdgwx
Reply to  Derg
April 4, 2022 11:47 am

Derg said: “For nearly 40 years you guys have been pushing the CO2 as a control knob narrative”

It’s actually 185 years. Pouillet is credited with first hypothesizing CO2 as a control knob in 1837.

Derg said: “Your theory is that my winters will slightly warm”

I think you have me confused with someone. I’ve not made any statements about your winters. I don’t even know where you live.

Mark Pawelek
Reply to  bdgwx
April 4, 2022 12:49 pm

Is every biased and doom-laden guess made by armchair pseudoscientists right, or only the guesses you agree with?

Reply to  bdgwx
April 4, 2022 2:33 am

“That’s what we were told after 1998 as well.”

I replied before reading your comment.

Phil Salmon
Reply to  bdgwx
April 4, 2022 5:47 am

1998 was warmer than 2016. 2016 Pacific temperatures were exaggerated by a bogus upward adjustment of Pacific SSTs that also inflated the 2016 El Niño from a pretty average one to one with pseudo-spectacular amplitude. We have, in reality, been cooling since 2005.

bdgwx
Reply to  Phil Salmon
April 4, 2022 6:18 am

PS said: “1998 was warmer than 2016”

No it wasn’t. 1998 was +0.35 C. 2016 was +0.39 C. You can see for yourself here.

PS said: “We have, in reality, been cooling since 2005.”

No we haven’t. The trend since 2005 is +0.21 C/decade. Compare that with the trend over the entire period of only +0.13 C/decade. Not only has it not cooled, but it has warmed at a pace higher than the pace since 1979.

Bill Powers
Reply to  bdgwx
April 4, 2022 10:44 am

How did you capture your temperature data? If you believe those numbers, and if you believe that decadal change since 2005 represents a “trend”, you are a daydream believer and quite probably a homecoming queen.

bdgwx
Reply to  Bill Powers
April 4, 2022 11:25 am

BP said: “How did you capture your temperature data?”

I didn’t capture the data. UAH did. I’m just using what they provide.

BP said: “If you believe those numbers, and if you believe that decadal change since 2005 represents a “trend” you are a daydream believer and quite probably a homecoming queen.”

That +0.21 C/decade figure from 2005 to present is literally the trend. It is the output from Excel’s LINEST function. I certainly didn’t make it up while Monkee’ing around on the last train to Clarksville. It’s what the UAH data says.
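
For anyone who wants to check the arithmetic, here is a minimal sketch of the same ordinary-least-squares trend calculation (Excel’s LINEST does the equivalent). The data loading is left out and the function name is my own placeholder, not bdgwx’s actual workflow:

import numpy as np

def decadal_trend(anoms):
    """OLS slope of a series of monthly anomalies, in deg C per decade."""
    t = np.arange(len(anoms)) / 12.0     # elapsed time in years
    slope = np.polyfit(t, anoms, 1)[0]   # deg C per year
    return slope * 10.0                  # deg C per decade

# Fed the UAH monthly values from 2005 onward, this would return
# roughly +0.21, per the figure quoted above.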

Jtom
Reply to  bdgwx
April 4, 2022 3:56 pm

What is the trend since 1988? That’s a much longer time period. Cherry-picking is fun, but basically a waste of time.

bdgwx
Reply to  Jtom
April 5, 2022 9:04 am

I’m not a fan of cherry-picking start dates either. The trend since 1979 is +0.13 C/decade.

MarkW
Reply to  bdgwx
April 4, 2022 6:20 am

The problem is that your so-called energy imbalance is less than a tenth of the confidence interval on that data.

bdgwx
Reply to  MarkW
April 4, 2022 6:42 am

First…it’s not my energy imbalance. Second…the EEI is clearly stated as +0.87 ± 0.12 W/m2. That means the EEI is about 7x the confidence interval.

MarkW
Reply to  bdgwx
April 4, 2022 9:55 am

Compare that to the confidence interval on the size of the incoming and outgoing radiation.

bdgwx
Reply to  MarkW
April 4, 2022 10:13 am

Why would we do that? Note that Schuckmann et al. 2020 does not assess EEI via the Fin – Fout method.

Captain Climate
Reply to  bdgwx
April 4, 2022 9:55 am

I find it amazing that you think you can take a small energy imbalance figure, without knowing what drives it in detail, and then assume you know how it will translate to average air temperature in the next 10 years. Observed temperature changes don’t need to depend on addition of energy from outside the system. Exchange of heat from oceans to air and back is enormously more powerful.

bdgwx
Reply to  Captain Climate
April 4, 2022 10:32 am

It’s just an application of the laws of thermodynamics. When a system accumulates energy, its temperature increases. That’s the specific heat formula Q = cmΔT. Or, rewritten where F is the EEI flux, t is time, and A is area: ∫∫ F dt dA = cmΔT. Since c and m are essentially constant, that means ΔT > 0 when F > 0. Note that if a phase change is involved then the enthalpy of fusion must be considered as well. Schuckmann et al. 2020 do consider the enthalpy of fusion in their analysis.
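
To put rough magnitudes on that integral, here is a back-of-envelope sketch for one year at the quoted +0.87 W/m2, applied to an assumed 0-2000 m ocean layer. The layer choice and constants are my own illustrative assumptions, not Schuckmann et al.’s method:

SECONDS_PER_YEAR = 3.156e7
A_EARTH = 5.1e14                  # m2, Earth's surface area
F = 0.87                          # W/m2, the quoted EEI

Q_year = F * A_EARTH * SECONDS_PER_YEAR   # ~1.4e22 J accumulated per year

c_sea = 4000.0                    # J/(kg K), rough seawater value
m_ocean = 0.7 * A_EARTH * 2000 * 1025     # kg, assumed 0-2000 m ocean layer
dT = Q_year / (c_sea * m_ocean)           # ~0.005 K per year for that layer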

Captain Climate
Reply to  bdgwx
April 4, 2022 8:06 pm

No, it’s your assumption that something you are observing (a change in a global average temperature that measures no true thing) is being driven by your assumed energy imbalance. You can have global average temperature go up with no energy added to the system; it’s just that you’re too ignorant to realize that what you’re measuring with a global average temperature tells you nothing about the energy in the system.

Tim Gorman
Reply to  Captain Climate
April 5, 2022 7:29 am

“Since c and m are essentially constant”

bdgwx uses a lot of formulas he’s never actually studied.

Q = cmΔT

c is specific heat.

Specific heat for the atmosphere is dependent on both pressure and absolute humidity. c is *not* constant. And mass is also dependent on pressure and absolute humidity. Mass is not constant.

This also means F is not just a time and area integral.

bdgwx
Reply to  Captain Climate
April 5, 2022 8:04 am

It’s not an assumption. It’s an observation. And measuring temperature does tell you about the energy in the system. The laws of thermodynamics say so. Literally Q=cmΔT.

Matt G
Reply to  bdgwx
April 5, 2022 8:55 am

Temperature depends on humidity in the atmosphere, and energy is dependent on both of these.

If humidity is high = relative temperature low
If humidity is low = relative temperature high

A thunderstorm cools the atmosphere significantly during a hot day because the rainfall causes very high humidity.

There is a lot more energy in the atmosphere at 20 °C with 98% humidity than at 29 °C with 35% humidity.

A record high temperature can occur not from more energy in the location, but less humidity. Record high temperatures can also be prevented by having higher humidity.

Temperatures are actually only a crude way of estimating energy levels, and they are only accurate when humidity is equally taken into account.
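
Matt G’s 20 °C/98% vs. 29 °C/35% comparison can be checked with a standard moist-enthalpy estimate, h = cp*T + L*q. This is a minimal sketch using the textbook Magnus formula and an assumed surface pressure of 1013 hPa:

import math

def moist_enthalpy(t_c, rh_pct, p_hpa=1013.0):
    """Approximate moist enthalpy in J/kg (relative scale)."""
    es = 6.112 * math.exp(17.62 * t_c / (243.12 + t_c))  # sat. vapor pressure, hPa
    e = rh_pct / 100.0 * es
    q = 0.622 * e / (p_hpa - 0.378 * e)   # specific humidity, kg/kg
    return 1005.0 * t_c + 2.5e6 * q       # sensible + latent term

# moist_enthalpy(20, 98) > moist_enthalpy(29, 35)  ->  True,
# consistent with the claim above.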

bdgwx
Reply to  Matt G
April 5, 2022 9:43 am

Temperature depends on a lot of things. And there are lot of thermodynamic relationships we could discuss. Scientists are fully aware of this.

Tim Gorman
Reply to  bdgwx
April 5, 2022 10:15 am

“Temperature depends on a lot of things.”

Exactly. So does energy.

bdgwx
Reply to  Tim Gorman
April 5, 2022 10:26 am

TG said: “Exactly. So does energy.”

I know. That’s what I keep trying to tell people on here, but it seems like a daily exercise in trying to convince people that CO2 is not the only thing that modulates energy flows into and out of the atmosphere. Maybe you can help me out and remind people of this. Now that I think of it…haven’t I had to explain this to you before?

Tim Gorman
Reply to  bdgwx
April 5, 2022 11:11 am

If CO2 is not the only modulating factor then why is it the only one receiving any attention? It must obviously be a minor factor or it couldn’t be overridden in such a fashion as to cause a pause in temperature increase. Where are those overriding factors in the climate models? What are they? Where are they acting? If they are cooling factors shouldn’t we be focusing on how to increase them?

There are *lots* of things that affect temperatures on the surface and in the atmosphere. Focusing on CO2 as the only control knob, or even an important one, risks getting things catastrophically wrong. We could be *causing* our own demise instead of protecting against it. We could even be starting WWIII as poorer nations rebel against the “elites” trying to prevent them from increasing their standard of living using their own cheaper, highly effective resources.

I am very much of the “don’t tinker with Mother Nature” philosophy. None of the evidence leads to a conclusion that the Earth is going to turn into a cinder in ten years (always ten years from *now*, why is that? That ninth year never seems to get here, let alone year ten!). All I see is a multitude of natural cycles that introduce significant natural variation over the years, centuries, and millennia. The climate scientists can’t even describe these natural cycles, let alone model them or include them in their climate models.

A lot of *wow* is being made over the average temp going up a couple of degrees. If that causes the max temps in Kansas to go up from 100F to 102F, then color me bored (of course, max temps here seem to be going down – yawn). If min temps go up from 0F to 2F then, again, color me bored. Neither will affect life here in any manner at all. My ancestors here made it through the 1800s without being wiped out by temps. Same for the ’20s and ’30s; they didn’t burn to death from temps higher than they are today. They lived through the floods of the ’50s and ’70s.

Nor am I convinced of the energy flow analyses being done. Most of them assume IR gets sent up from the earth but gets absorbed by CO2 and re-emitted back toward the earth, where it remains forever, thus raising the temp of the earth. Why doesn’t the downward IR get absorbed and re-emitted back toward space? Hot air rises, so why doesn’t heated CO2 rise away from the earth and either emit its energy or lose it as latent heat? You keep using the equation Q = cmΔT without realizing that c and m are functions and not constants. How do the GCMs handle this? Since they don’t do clouds very well, it is doubtful that they handle energy very well, since clouds have a big impact on the energy in the atmosphere, both in location and magnitude.

I have no buy-in into convincing anyone of anything, other than showing that the idea that CO2 is the control knob we need to worry about is just plain not supported by the evidence, or by climate scientists who ignore measurement uncertainty.

mario lento
Reply to  bdgwx
April 7, 2022 11:55 pm

You wrote: “I know. That’s what I keep trying to tell people on here”

No you have not. Your reasoning is constantly self-contradictory. I have laid out your quotes, and so has everyone who’s responded to you.

Matt G
Reply to  bdgwx
April 5, 2022 3:07 pm

Only temperature and water vapor are needed.

http://hyperphysics.phy-astr.gsu.edu/hbase/Kinetic/relhum.html

mario lento
Reply to  bdgwx
April 7, 2022 11:53 pm

“Scientists are fully aware of this.”

But evidently, you are not. I base that on your statements. You say stuff, you contradict stuff, and then you make a general appeal to other authorities (“scientists…”), claiming to speak for them as proof of your meandering, scattered and mindless statements.

There is no use in arguing with you… perhaps you’re over your head and don’t yet realize it.

mario lento
Reply to  Matt G
April 7, 2022 11:49 pm

Right… plus 1~
which is why temperature goes way up when there is drought… the relatively dry air has much less heat capacity than moist air.

So the alarmists have it backwards: when we have a hot spell, they say, “CO2 done it.”

Tim Gorman
Reply to  bdgwx
April 5, 2022 10:14 am

c is a function of pressure and absolute humidity at the point being measured in the atmosphere. Mass is a function of pressure and absolute humidity at the point being measured in the atmosphere.

c is not a constant. c is a function. c(p,h). Same for mass. m is not a constant. It too is a function, m(p,h).

That means that Q is a function as well. Q(p,h).

Thus ΔT = Q/[c(p,h)m(p,h)]

It’s not a simple calculation for a moving target like the atmosphere.
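
For scale, a standard textbook mixing rule shows how far the specific heat of moist air actually moves as humidity changes; the numbers below are illustrative, not a claim about any particular layer:

def cp_moist(q):
    """Specific heat of moist air for specific humidity q (kg/kg)."""
    CP_DRY = 1005.0    # J/(kg K), dry air
    CP_VAP = 1860.0    # J/(kg K), water vapor
    return (1.0 - q) * CP_DRY + q * CP_VAP

# cp_moist(0.0)  -> 1005 J/(kg K)
# cp_moist(0.02) -> ~1022 J/(kg K), i.e. c moves by roughly 1.7%
# across a 0-2% swing in specific humidity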

bdgwx
Reply to  Tim Gorman
April 5, 2022 10:27 am

How much have c and m changed for the UAH TLT layer since 1979?

Tim Gorman
Reply to  bdgwx
April 5, 2022 11:46 am

“How much have c and m changed for the UAH TLT layer since 1979?”

You just don’t get it. c and m change second to second in time, mm by mm in horizontal distance, and mm by mm in vertical distance.

They are *NOT* constant. That is only a simplifying assumption used by the CAGW scientists in their models because they can’t handle this any better than they can clouds. So they just parameterize them (i.e. make them constants) the same way they do clouds in order to help get the answer they are looking for.

Do you even know what a steam table is?

bdgwx
Reply to  Tim Gorman
April 5, 2022 11:52 am

I’ll ask again…how much have c and m changed for the UAH TLT layer since 1979?

Tim Gorman
Reply to  bdgwx
April 5, 2022 12:00 pm

I told you. They are *NOT* constant. They are time and space functions. How much have clouds changed where you live since 1979? They are time and space functions as well. Are *they* constant?

bdgwx
Reply to  Tim Gorman
April 5, 2022 3:02 pm

I didn’t ask if they were constant. I asked you how much they have changed. Do you know the answer or not? Don’t deflect. Don’t divert. If you don’t answer the question, I will. But because I already researched this before you even started challenging it, I know you aren’t going to like the answer. That’s why I chose my words carefully and said “essentially constant” as opposed to just “constant”.

Tim Gorman
Reply to  bdgwx
April 6, 2022 5:59 am

You got your answer. They are CONSTANTLY CHANGING. Pressure changes as you go up or down in elevation. Pressure changes as weather fronts move through. Absolute humidity changes as you change elevation and as weather fronts move through. The question of “how much” they have changed over long periods of time is irrelevant.

You are trying to rationalize your claim that specific heat and mass in the atmosphere are constants. They aren’t constants, not even close. It truly is that simple.

Essentially constant? What in Pete’s name does that mean? I assure you that the specific heat of the atmosphere on top of Pikes Peak is vastly different than that of the atmosphere at Galveston, TX. So is the mass of a parcel of atmosphere. That means there will be a significant difference in the amount of energy contained in a parcel of air at each point – meaning the temperatures at each point simply can’t tell you what Q is at each location.

Q = c(p,h) * m(p,h) * ΔT

If you think people are going to buy into your claim that the air pressure at Pikes Peak and Galveston are *ESSENTIALLY* the same you are going to be sadly disappointed. Every airplane pilot in the world will have a good laugh at your expense.

bdgwx
Reply to  Tim Gorman
April 6, 2022 7:07 am

TG said: “If you think people are going to buy into your claim that the air pressure at Pikes Peak and Galveston are *ESSENTIALLY* the same you are going to be sadly disappointed.”

Strawman. I never said the air pressure at Pikes Peak and Galveston are essentially the same. You and you alone said it. Don’t expect me to defend your statements, especially when they aren’t true. And you know exactly what I mean by essentially constant. You know this because by now you have researched just how much c and m in the UAH TLT layer have changed since 1979, and you know it’s not significant.

Tim Gorman
Reply to  bdgwx
April 6, 2022 2:27 pm

You said: “Since c and m are essentially constant”

Since c and m are both dependent on pressure, how can they be essentially constant if the pressure isn’t constant?

Again, c and m are space and time functions. They are not constants. Asking if they have changed over time has no answer other than they *always* change over time!

You are still trying to rationalize away your claim that they are “essentially” constant. They aren’t. They are functions.

Trust me, the pressure, and therefore the specific heat at the edge of the atmosphere is *not* the same as the pressure at the bottom of the atmosphere. Therefore c and m are not even “essentially” constant!

bdgwx
Reply to  Tim Gorman
April 6, 2022 3:03 pm

TG said: “You are still trying to rationalize away your claim that they are “essentially” constant. They aren’t.”

The UAH TLT pressure hasn’t changed that much. And even big changes in pressure result in only small changes in specific heat capacity. c is essentially constant. It just doesn’t change that much in the UAH TLT layer.

TG said: “Trust me, the pressure, and therefore the specific heat at the edge of the atmosphere is *not* the same as the pressure at the bottom of the atmosphere.”

I never said the specific heat capacity at the edge of the atmosphere was the same as the bottom of the atmosphere. What I said is that c and m are essentially constant in the UAH TLT layer since 1979. They just haven’t changed that much.

Carlo, Monte
Reply to  bdgwx
April 7, 2022 2:24 pm

“The UAH TLT pressure hasn’t changed that much.”

“What I said is that c and m are essentially constant in the UAH TLT layer since 1979.”

Get some basic atmosphere knowledge PDQ.

bdgwx
Reply to  Carlo, Monte
April 7, 2022 4:04 pm

I’m going to ask you the same question I asked Tim Gorman. TG declined to answer. Maybe you know the answer. How much have c and m changed in the UAH TLT layer since 1979?

Carlo, Monte
Reply to  bdgwx
April 7, 2022 4:13 pm

Why did you ignore what he told you?

Tim Gorman
Reply to  bdgwx
April 7, 2022 7:59 pm

This is an argumentative fallacy. It is similar to “when did you stop beating your wife”.

mario lento
Reply to  bdgwx
April 8, 2022 12:04 am

You wrote: “How much have c and m changed in the UAH TLT layer since 1979?”

Your question is wrong…

bdgwx
Reply to  mario lento
April 8, 2022 9:24 am

mario lento: “Your question is wrong…”

First…I’m not sure how it could even be possible for a question to be wrong.

Second…I suspect the resistance to answering it is because the challengers have now researched the topic and concluded that the mass and specific heat capacity of the UAH TLT have not changed much since 1979.

mario lento
Reply to  bdgwx
April 8, 2022 9:30 am

“I suspect the resistance to answering…”

That you can’t understand the answers to your questions says more about your lack of comprehension than any lack of answers.



bdgwx
Reply to  mario lento
April 8, 2022 12:24 pm

mario lento: “That you can’t understand the answers to your questions says more about your lack of comprehension than any lack of answers.”

No one has answered the question of how much the specific heat capacity (c) and mass (m) have changed in the UAH TLT layer since 1979. Remember, c is in units of J·kg⁻¹·K⁻¹ and m is in kg. No figures in those units have been posted as of the time of my post now.

Tim Gorman
Reply to  bdgwx
April 8, 2022 5:30 pm

The question is wrong because of its underlying assumptions. You are assuming c & m are constants and are then asking how the constants have changed. They are *NOT* constants, thus your question is wrong.

What do c(p,h) and m(p,h) mean to you? That c and m are *not* dependent values? That pressure and humidity are constants?

Jim Gorman
Reply to  bdgwx
April 8, 2022 8:09 am

The values you are asking about vary all the time. The mass of a given volume of air varies based on the pressure it is under and the mix of gases it contains, which then determines the specific heat value for the volume.

More importantly, UAH is a metric that is calculated from a different base than temperature.

You need to ask this question of the people who do the calculations if they even apply.

bdgwx
Reply to  Jim Gorman
April 8, 2022 9:21 am

JG said: “The values you are asking about vary all the time.”

How much does the mass of the UAH TLT layer vary?

How much does the specific heat capacity of the UAH TLT layer vary?



Jim Gorman
Reply to  bdgwx
April 8, 2022 11:18 am

Do you not understand that water vapor has a large effect on these items? Water vapor changes moment to moment. What you are asking for is the mathematical function that describes the variations.

Sorry, dude, that’s beyond my ken.

bdgwx
Reply to  Jim Gorman
April 8, 2022 12:19 pm

If it is beyond your ken and you don’t know how much the specific heat capacity and mass of the UAH TLT have changed since 1979, then why are you so insistent that they have changed significantly?

Jim Gorman
Reply to  bdgwx
April 8, 2022 3:26 pm

How many times, and by how many people, do you need to be told that “c” and “m” of a given volume of air are constantly changing through time?

If UAH has used an average, that is up to them. You need to ask them why they have done so.

bdgwx
Reply to  Jim Gorman
April 8, 2022 4:38 pm

JG said: “How many times, and by how many people, do you need to be told that “c” and “m” of a given volume of air are constantly changing through time?”

Why not just tell me how much the specific heat capacity and the mass of the troposphere have changed since 1979? If you want to convince me that c and m are not essentially constant, then the best way to do it is to say c changed by X J·kg⁻¹·K⁻¹ and m changed by Y kg. If you can’t or won’t do that, then I have no choice but to continue to align my position on the matter with what is already available, which says that c and m have been essentially constant since 1979. It’s that simple.

Tim Gorman
Reply to  bdgwx
April 8, 2022 5:04 pm

You keep asking “how much have the constants c & m changed?”

The answer is that they are not constants. For some reason you can’t seem to comprehend that. That is YOUR problem, not ours.

bdgwx
Reply to  Tim Gorman
April 8, 2022 7:39 pm

TG said: “The answer is that they are not constants. For some reason you can’t seem to comprehend that. That is YOUR problem, not ours.”

I’m not asking if they are constant. I’m asking how much they changed. I’m asking because you are fervently defending a claim that they have changed significantly since 1979. At this point I have no choice but to dismiss any suggestion that they’ve changed significantly, due to the lack of any attempt whatsoever to show that they have.

Tim Gorman
Reply to  bdgwx
April 9, 2022 6:41 pm

The tropopause can exist anywhere between 70 hPa (≈18 km) and 400 hPa (≈6 km), and it is therefore not convenient to use a constant pressure level to describe the tropopause.

Tim Gorman
Reply to  bdgwx
April 8, 2022 4:59 pm

It varies, believe it or not. The amount of water vapor varies, which changes the specific heat value.

I asked you once before if you had ever used steam tables. You didn’t answer. It’s obvious from your comments that you haven’t.

bdgwx
Reply to  Tim Gorman
April 8, 2022 7:35 pm

I have used steam tables. I’ve used specific heat capacity tables. I’ve used all kinds of tables. I even did so before I said the specific heat capacity in the UAH TLT layer is essentially constant. That’s how I know that it is essentially constant. I’ll turn the question around on you. Did you ever bother to consult these tables prior to lecturing me about it?

Jim Gorman
Reply to  bdgwx
April 9, 2022 5:03 am

You need to explain your concern as it applies to UAH.

One of the problems with GHG theory is that it includes a water vapor increase as CO2 increases. That hasn’t occurred, for some reason.

Additionally, UAH doesn’t measure these items directly. So the reason for your concern needs to be known if you wish to receive a response addressing it.

bdgwx
Reply to  Jim Gorman
April 9, 2022 1:10 pm

I don’t have any concerns relevant to the topics being discussed in this subthread other than 1) the rejection of the relationship between energy/heat (Q) and temperature (T) via Q=mcΔT, 2) the rejection of the relationship between pressure (P), volume (V) and temperature (T) via PV=nRT, 3) the claim that the specific heat capacity and mass of the troposphere have changed significantly since 1979, and 4) the claim that the existence of isothermal processes (ΔT = 0) means energy and temperature are not related.

Tim Gorman
Reply to  bdgwx
April 9, 2022 6:22 pm

Do all the steam tables depend on the same pressure and humidity across all table entries? If so, then why are there so many tables? One would do.

mario lento
Reply to  bdgwx
April 7, 2022 11:58 pm

You wrote: “I’ll ask again…how much has c and m changed for the UAH TLT layer since 1979?”

c and m never stop changing on every timescale.

mario lento
Reply to  bdgwx
April 7, 2022 11:45 pm

You wrote: “It’s an observation. And measuring temperature does tell you about the energy in the system.”

Again, you do not understand, as Tim Gorman explained. Let me fix this quote for you again.

“And measuring temperature does tell you about the temperature you measured”

Look at the equation and you will see you were confused. Else, you’re simply not telling the truth. So which is it?

bdgwx
Reply to  mario lento
April 8, 2022 12:28 pm

mario lento: “And measuring temperature does tell you about the temperature you measured”

Sure, T = T. But Q = mcΔT as well.

mario lento: “Look at the equation and you will see you were confused. Else, you’re simply not telling the truth. So which is it?”

Maybe you can help me out. How does Q = mcΔT not relate energy (Q) to temperature (T)? And what about PV=nRT: do pressure (P) and volume (V) not relate to temperature (T) either? Is that what these formulas are saying…that energy, pressure, and volume are NOT related to temperature?

Tim Gorman
Reply to  bdgwx
April 8, 2022 5:08 pm

P and V are not constants either, except in a lab environment where they can be artificially held constant.

Your lack of experience in the real world shines through in everything you say.

bdgwx
Reply to  Tim Gorman
April 8, 2022 7:33 pm

TG said: “Your lack of experience in the real world shines through in everything you say.”

Let me get this straight so that I’m not putting words in your mouth. Are you defending the rhetorical proposition that pressure (P) and volume (V) are in no way related to temperature (T)?

Tim Gorman
Reply to  bdgwx
April 9, 2022 6:18 pm

Stop throwing crap against the wall hoping something sticks! Quote the formula I gave you showing c & m as functions. You’ll have your answer.

mario lento
Reply to  bdgwx
April 7, 2022 11:40 pm

You wrote: “When a system accumulates energy its temperature increases.”

Then you sort of proved that your statement is not accurate, with respect to phase changes…

Let me correct your quote:

“When a system accumulates energy its stored energy increases.”

Temperature might go up and it might go down since other things need to happen for temperature to go up.

bdgwx
Reply to  mario lento
April 8, 2022 12:30 pm

mario lento said: “Then you sort of proved that your statement is not accurate”

Which thermodynamic law says that temperature does not increase when a system accumulates energy?

Tim Gorman
Reply to  bdgwx
April 8, 2022 5:42 pm

It’s called an isothermal process. Did you take any science classes in school?

bdgwx
Reply to  Tim Gorman
April 8, 2022 7:29 pm

Yes. I have taken science classes. I have even taken thermodynamics classes specifically. That’s how I know that your defense of the claim that an input of energy into a system cannot increase T is absurd, and your use of the concept of an isothermal process in doing so is doubly absurd. That’s not even remotely close to what an isothermal process means. An isothermal process is one where ΔT = 0. Even your own everyday experiences should have clued you in on how absurd this claim is. When you add energy to your oven, it heats up. When you add energy to your furnace, it heats up.

Jim Gorman
Reply to  bdgwx
April 9, 2022 5:59 am

What is latent heat and how does it affect temperature?

bdgwx
Reply to  Jim Gorman
April 9, 2022 1:29 pm

Latent heat is the energy released or absorbed during a phase change. This energy is related to mass via Q = mL, where m is mass and L is the specific latent heat. Note that just because Q = mL, that in no way invalidates Q = mcΔT. And before you start deflecting and diverting again by lecturing me about ignoring latent heat, understand that I already told Captain Climate it is important to consider things like the latent heat of fusion when analyzing the energy content of the climate system, so let’s save both our time and have you deflect and divert in some other way.
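
A quick worked comparison of the two formulas, using textbook constants for water and ice, shows why the latent term matters:

m = 1.0                    # kg of ice
L_FUSION = 3.34e5          # J/kg, latent heat of fusion
c_water = 4186.0           # J/(kg K), liquid water

Q_melt = m * L_FUSION                # 3.34e5 J absorbed with no temperature change
dT_equiv = Q_melt / (m * c_water)    # the same energy would heat the water ~80 K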

Tim Gorman
Reply to  bdgwx
April 9, 2022 6:06 pm

“Q = mL”

Where is delta-T in this equation? It *is* there, you know, right?

Tim Gorman
Reply to  bdgwx
April 9, 2022 6:02 pm

Correct, an isothermal process where there is no temp increase.

But you then go on to imply that an isothermal process can’t increase energy, i.e. that c and m are constants. But c & m are *not* constants, no more than gravity is the same on all planets.

Either your science teacher was bad or you didn’t pay attention. My money is on the latter.

Mark Pawelek
Reply to  bdgwx
April 4, 2022 12:47 pm

There is no energy imbalance, nor any balance. Have you considered that your ability to look at evidence may be unbalanced?

bdgwx
Reply to  Mark Pawelek
April 4, 2022 4:33 pm

The 1st law of thermodynamics strongly disagrees.

TallDave
Reply to  bdgwx
April 5, 2022 8:31 am

still no mute button?

Carlo, Monte
Reply to  TallDave
April 5, 2022 12:16 pm

Hear, hear.

Reply to  Mike
April 4, 2022 2:31 am

I remember the same being said about 1998.

DMacKenzie
Reply to  Mike
April 4, 2022 8:15 am

It’s a mistake to make the reverse assumption of the warmists. We’ve only gone up a degree since the Little Ice Age. We could easily go up another 0.5 degrees before Planck feedback dominates sea surface albedo.

bluecat57
April 3, 2022 7:30 pm

Was that rectal?

Derg
Reply to  bluecat57
April 4, 2022 2:59 am

Answer: the type of thermometer you do not put in your mouth.

bluecat57
Reply to  Derg
April 4, 2022 6:08 am

Lol. It was a joke.

Kenso Ghost
April 3, 2022 7:30 pm

I am posting just to clarify a point I am not clear on. The red line is the centred 13-month average, so the latest value must be the average from March 2021 to March 2022, centred on September 2021. Thus the value lags by 6 months, as there cannot yet be a value with March 2022 as its centre. Is this correct, or am I missing something?

TheFinalNail
Reply to  Kenso Ghost
April 4, 2022 8:58 am

You’re right. It’s a centred moving average.
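
A minimal sketch of a centred 13-month average confirms the 6-month lag described above; ‘series’ is any list of monthly values:

def centred_13mo(series):
    """Centred 13-month running mean of a monthly series."""
    out = []
    for i in range(6, len(series) - 6):
        out.append(sum(series[i - 6 : i + 7]) / 13.0)
    return out
# out[k] is centred on series[k + 6]; the last computable value
# is centred 6 months before the end of the series.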

Pillage Idiot
April 3, 2022 7:52 pm

The ENSO forecast gives a better than 50% chance of still being in La Niña conditions for JJA. Add in a four-month temperature lag, and 2022 is going to be a cool year.

TheFinalNail
Reply to  Pillage Idiot
April 4, 2022 4:40 am

On the other hand, we’re still deep within the second dip of a double-dip La Nina, yet March 2022 was the joint 8th warmest March globally in the past 44 years, according to UAH. So, relative to past temperatures, it’s not that cool.

MarkW
Reply to  TheFinalNail
April 4, 2022 6:23 am

La Nina yes, but so weak that only the instruments can detect it.
Past 44 years. For some reason, that seems to really impress you.
When it’s the 8th warmest in 200 years, then you can brag.

PS: 44 years ago was one of the coldest time periods of that century.

TheFinalNail
Reply to  MarkW
April 4, 2022 9:12 am

MarkW

La Nina yes, but so weak that only the instruments can detect it.

It’s actually quite a strong La Niña. The threshold is -0.5C over a 3-month running mean. This one has been -1.0C for the last three 3-month averages. For the latest (JFM 2022) it now stands at -0.9C. So not weak at all, really.

PS: 44 years ago was one of the coldest time periods of that century.

Not so much. The 20th century as a whole was -0.03C relative to the GISS base period. 1979 was +0.17C on the same base. The early 20th century was much cooler than the 1970s globally.
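
The threshold rule described above is easy to write out. This sketch follows the comment’s stated criterion (a 3-month running mean at or below -0.5C), not any agency’s full episode definition:

def running_3mo(anoms):
    """3-month running means (the ONI) from monthly Nino 3.4 anomalies."""
    return [sum(anoms[i:i + 3]) / 3.0 for i in range(len(anoms) - 2)]

def la_nina_flags(anoms):
    """True for each 3-month mean at or below the -0.5 C threshold."""
    return [v <= -0.5 for v in running_3mo(anoms)]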

Fraizer
Reply to  MarkW
April 4, 2022 1:52 pm

So La Niña is a heat-recharging time for the oceans, whereas an El Niño is a heat-releasing, ocean-cooling phenomenon.

Maybe Bob T will comment?

TheFinalNail
Reply to  Fraizer
April 5, 2022 4:54 am

Looks like it. Ocean heat content is hitting record high 3-month averages at both 0-700 and 0-2000m depths.

rbabcock
Reply to  Pillage Idiot
April 4, 2022 6:21 am

Not going to warm for a little while yet, at the earliest.

[attachment: BOM Subsurface.png]
Bill Treuren
April 3, 2022 7:56 pm

Depends on this year, really: if the La Niña goes on, the temperature will slide lower; an El Niño would be different.
The solar cycle looks to be middle-of-the-field, whatever the consequences of that are, but the super solar cycles seem behind us for a bit.
If the temperatures continue to decline, we can resume the improvements in the reduction of poverty and the well-being of the people on Earth, rather than pretending to build power contraptions that make everyone except some ruling-class sycophants poor.

Richard M
April 3, 2022 8:29 pm

Look for the 2nd pause to continue its slide toward the first pause baseline as ocean cycles appear to be working against the climate alchemists.

An all summer La Nina would certainly drop the 2nd pause trend line.

https://woodfortrees.org/plot/uah6/from:1997/to/plot/uah6/from:1997/to:2014/trend/plot/uah6/from:2015/to/trend

Scissor
Reply to  Richard M
April 4, 2022 4:56 am

Is your comment a veiled insult of alchemists?

TheFinalNail
Reply to  Richard M
April 5, 2022 2:11 am

Richard M

I added your two periods of cooling from 1997 together and guess what? They contribute to a clear overall warming trend.

https://woodfortrees.org/plot/uah6/from:1997/to/plot/uah6/from:1997/to:2014/trend/plot/uah6/from:2015/to/trend/plot/uah6/from:1997/trend/plot/none
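
The arithmetic behind that point is easy to demonstrate with synthetic data: two flat sub-periods separated by a step still produce a positive overall trend. The segment lengths and the +0.3 C step below are purely illustrative:

import numpy as np

seg1 = np.zeros(17 * 12)          # a flat 1997-2013-style segment at 0.0 C
seg2 = np.full(8 * 12, 0.3)       # a flat 2015-2022-style segment at +0.3 C
series = np.concatenate([seg1, seg2])
t = np.arange(series.size) / 12.0
overall = np.polyfit(t, series, 1)[0] * 10   # positive, in C/decade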

Richard M
Reply to  TheFinalNail
April 5, 2022 6:48 am

Yes, the overall period is one of warming. However, we know the cause of that warming in 2014: a reduction in albedo right at the PDO flip. It’s not AGW. Given that the trend over 24 of the 25 years shows no warming, the possibility this was anything but natural is minimal.

Ireneusz Palmowski
April 3, 2022 10:54 pm

A polar vortex pattern in the lower stratosphere will provide winter weather in Europe through mid-April.

Hari Seldon
Reply to  Ireneusz Palmowski
April 4, 2022 6:51 am

In Germany the winter weather has been back since 1 April (this is not a joke!!!!). 30 cm of snow at an elevation of 242 m, down to -7 °C during the night, etc. All fruit trees in our garden were damaged, so no fruit can be expected this year. According to the weather app of Win10 (I think CompuWeather), it has never been so cold at the beginning of April during the last 30 years in our village.

Robert Wager
Reply to  Hari Seldon
April 4, 2022 9:21 am

Yes but that is just weather. To be climate change there has to be unusual heat for at least 12 hours…

Hari Seldon
Reply to  Robert Wager
April 4, 2022 9:32 am

Dear Mr. Wager, currently we are at about 48 hours…

OweninGA
Reply to  Hari Seldon
April 4, 2022 11:11 am

But not to worry, any unusual cold you experience today will be lovingly removed from the historic record to show your fruit trees were damaged by the heat of the adjustocene.

Hari Seldon
Reply to  OweninGA
April 4, 2022 11:01 pm

Last year the situation was very similar, and even in France an “agricultural emergency” had to be declared. And now the IPCC writes that “mankind is in the last minute to save the climate”. Are these people mentally OK?

Ireneusz Palmowski
April 3, 2022 11:06 pm

The global average temperature in March is raised only by the anomaly in the Arctic: +0.74 C. In fact, satellites measured the anomaly in the stratosphere (SSW).
2022 03 0.15 0.27 0.02 -0.08 0.21 0.74 0.02

Ireneusz Palmowski
April 3, 2022 11:13 pm

There can be no global warming during La Niña periods because the average temperature in the equatorial oceans is below the 1981-2000 average.

Matthew Sykes
April 3, 2022 11:44 pm

Terrifying, isn’t it. Snow in the UK in early April, coldest March since whenever, the usual wet and windy winter. Utterly terrifying, this climate change, this warming….

The question is, though, what change? What has actually changed since the 1970s I remember as a kid? Nothing, so far as I can tell. It is the same weather we had then: snow at Easter, wet and windy winters, the odd hot summer.

Bindidon
Reply to  Matthew Sykes
April 4, 2022 4:13 am

” It is the same weather we had then… ”

This I can’t confirm for Northeast Germany, where I have lived for half a century.

Our last cold and snowy winter deserving the name was 2010.

Since then, we have had unusually warm winters (in the sense of ‘not cold enough’, with a bit of snow) and mostly weaker summers than before.

Since 2020, we have experienced springs cooler than usual. Ditto for France.

Unusual also is the increasing number and strength of westerly winds.

The origin of all that has, for me, nothing to do with any global cooling.

It is manifestly due to a yearly increase of low pressure areas downwelling over the ocean from the Northwest Atlantic to Western Europe: it’s as if Berlin had been moved to some North Sea coast.

More: whenever one of these damned, CCW-turning LPAs reaches us while at the same time a CW-turning HPA sits over Scandinavia, the two act together as a giant Arctic air aspirator.

Eben
Reply to  Bindidon
April 4, 2022 1:10 pm

“where I live since half a century”
Are you still on the list and waiting for your Trabant ???

Bindidon
Reply to  Eben
April 4, 2022 4:35 pm

Eben

You behave here exactly like at Roy Spencer’s blog: as an ignorant, polemic dumbass who knows nothing, and therefore has only one thing in mind: to put other people down in a lousy way.

Hari Seldon
Reply to  Eben
April 4, 2022 11:07 pm

Dear Eben, he/she clearly does not live in the former East Germany, because the people in the former East Germany have not lost common sense. My feeling is that the guy is living in one of the “green (woke) hot spots” of the former West Germany. Note also that in the former East Bloc the math/physics/chemistry education was rather strong…

Bindidon
Reply to  Hari Seldon
April 5, 2022 2:29 am

Hari Seldon

“Dear Eben, he/she clearly does not live in the former East Germany, because the people in the former East Germany have not lost common sense.”

I see you not only have a lot of ‘common sense’, but also are terribly knowledgeable about

– the former East Germany
– Northeast Germany

https://www.google.com/maps/place/Diedersdorf,+15831+Grossbeeren/@52.3462819,12.2373545,8z/data=!4m5!3m4!1s0x47a84317462318ff:0xa212048d6822fe0!8m2!3d52.347495!4d13.3599758?hl=en

Perfect, Mr Seldon!

TheFinalNail
Reply to  Matthew Sykes
April 4, 2022 4:48 am

Matthew Sykes

Snow in the UK in early April, coldest March since whenever…

Since 2019, if you’re talking about the UK. Average UK temperature this March was 6.7C, which is 1.0C warmer than the 1991-2020 average for March.

Matthew Sykes
Reply to  TheFinalNail
April 6, 2022 8:02 am

It was one of the coldest Marches we have had.

Bellman
Reply to  Matthew Sykes
April 4, 2022 7:48 am

Snow in the UK in early April, coldest March since whenever, the usual wet and windy winter.

March in the UK was well above average. 11th warmest in the Met Office summaries.

You seem to have a very selective memory about recent weather. Much of March was sunny and warm. It only turned cold in the last week, and it’s hardly been wall-to-wall snow. In the south I saw the occasional flurry of drifting snowflakes, nothing that stayed on the ground for more than a minute. This, by the way, is the first snow I’ve seen in over a year.

Bellman
Reply to  Bellman
April 4, 2022 8:59 am

That should be: March was the equal 11th warmest for mean temperatures.

It was a very sunny month, 2nd sunniest on record for the UK. There was quite a range, therefore, between max and min temperatures. It was the equal 6th warmest for max, but only the 34th warmest for min.

In figures

TMean: 6.7 °C
TMax: 11.1 °C
TMin: 2.3 °C

Compared to the 1991-2020 average the anomalies were

TMean: +1.0 °C
TMax: +1.9 °C
TMin: +0.1 °C

Ireneusz Palmowski
April 4, 2022 12:16 am

It is not true that this solar cycle is stronger than the previous one, as can be clearly seen from the level of galactic radiation compared to previous cycles. The level of galactic radiation that reaches Earth is controlled by the strength of the solar magnetic field, so its systematic increase indicates a weaker solar magnetic field.
http://wso.stanford.edu/gifs/Dipall.gif

Bindidon
Reply to  Ireneusz Palmowski
April 4, 2022 4:40 pm

Exactly like at Roy Spencer’s blog, I reply to your same comment:

Thus, according to your claim, neither the Sun Spot Number


nor the solar flux


can be considered a valid measure of solar activity.

Interesting point, no doubt!

Ireneusz Palmowski
April 4, 2022 1:00 am

La Niña in action – a tropical cyclone is approaching the east coast of Australia.
http://tropic.ssec.wisc.edu/real-time/mtpw2/product.php?color_type=tpw_nrl_colors&prod=ausf&timespan=24hrs&anim=html5

griff
April 4, 2022 2:04 am

This is a proxy measurement of part of the atmosphere subject to multiple adjustments and out of step with similar measurements like RSS – how does it have any value at all?

b.nice
Reply to  griff
April 4, 2022 2:19 am

RSS “adjusts” using climate models and ship intakes… all part of the AGW scam.

It wants to be like GISS, Had, etc. It wants to “belong”, no matter what they have to do!

Such a pity the RSS guys couldn’t remain honest!

But you are right… RSS has very little value any more.

bdgwx
Reply to  b.nice
April 4, 2022 6:57 am

b.nice: “RSS “adjusts” using climate models and ship intakes”

No they don’t. See Mears & Wentz 2017 for details.

Jeroen
Reply to  griff
April 4, 2022 2:21 am

Ask your friend Michael M.

Ben Vorlich
Reply to  griff
April 4, 2022 5:49 am

Have you incontrovertible proof it’s wrong then?

I hope you’re making the most of today’s incredible performance of wind turbines, a magnificant 36.37% of demand, now that you’re living your life in balance with the unpredictables’ output.

John Hultquist
Reply to  Ben Vorlich
April 4, 2022 8:35 am

“magnificant”

I have no idea what this is about!
It does remind me of Johnny Carson’s Carnac the Magnificent,
but I’m sure you were thinking of something or someplace else.

Ireneusz Palmowski
April 4, 2022 3:50 am

In two days, very cold air from the north will begin to enter the central US. Under the clouds of the low, temperatures will remain very cold. Frontal thunderstorms will occur on the eastern side of the low.

Scissor
Reply to  Ireneusz Palmowski
April 4, 2022 5:05 am

If I focus on the center of the map and sort of squint and cross my eyes, Hunter Biden appears.

Tim Gorman
Reply to  Ireneusz Palmowski
April 4, 2022 5:49 am

Weather Underground 10-day low temp forecast (usually pretty accurate) for Kansas, starting today:

41
41
40
34
28
44
61
53
46

These are typical April low temps. Only one night of possible freeze. It won’t be low enough to affect either my peas or lettuce.

An inch of rain predicted for Wed next week. That’s about it. Not much for thunderstorms over the next 10 days. Will have to water the garden in the meantime.

Ireneusz Palmowski
Reply to  Tim Gorman
April 4, 2022 10:56 am

Well, I am predicting low temperatures, including freezing precipitation.

Tim Gorman
Reply to  Ireneusz Palmowski
April 5, 2022 7:31 am

You can predict whatever you want. I’ll stick with Kansas State University and Weather Underground.

goldminor
Reply to  Tim Gorman
April 4, 2022 5:24 pm

The ten-day forecast for Northern California shows a return to nighttime freezing starting at the end of the week and continuing for another 5 days. This has been an interesting year, as spring has started and failed several times now, with temps dropping into the mid-20s F at night. There is a warm trend coming in now, which is forecast to bring the day’s high up to 83F and night temps almost up to 50F. Then the night temps will drop 20 degrees lower on Saturday. Now that is some real climate change in action.

Ireneusz Palmowski
Reply to  goldminor
April 5, 2022 12:59 am

Cold air from the west will fall into the central US behind a cold front.
It is likely that a loop of cold air will be cut off in the upper troposphere over the central US.

Steve Z.
April 4, 2022 5:53 am

Very surprised to see the Northern Hemisphere at +0.27 and the Arctic at +0.74.

The USA Pacific coast was below average for most of March. The Midwest and New York City had several below average periods, and England and northern Europe had several below average periods.

Bindidon
Reply to  Steve Z.
April 4, 2022 12:17 pm

In the UK? Are you serious?

In March, the UK was 1 °C above the 1991-2020 average.
Tomorrow I’ll download the March data for Germoney, and I’ll check for your ‘below average’ period.

Anyway, you seem to be very influenced by CONUS weather and climate matters, which differ heavily from Europe’s.

Bindidon
Reply to  Steve Z.
April 5, 2022 12:07 pm

Here is the data for March 2022 in Germoney, downloaded from

ftp://opendata.dwd.de/climate_environment/CDC/observations_germany/climate/hourly/air_temperature/

(from 41 stations in 1941 up to 510 in 2011), and processed to a monthly time series of absolute temperatures.

Absolute values

Coldest March between 1941 and 2022: 1987, -0.9 °C
Warmest: 2017, +6.7 °C
2022: +5.0 °C

Average for 1991-2020: 4.2 °C

Anomalies wrt the average

Coldest March between 1941 and 2022: 1987, -5.1 °C
Warmest: 2017, +2.5 °C
2022: +0.8 °C

The sorted list since 2010:

2013 3 -3.83
2018 3 -2.13
2010 3 -0.66
2016 3 -0.29
2021 3 0.10
2011 3 0.42
2020 3 0.63
2015 3 0.77
2022 3 0.80
2019 3 1.82
2012 3 2.17
2014 3 2.44
2017 3 2.52
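
For anyone checking the arithmetic, the anomalies above are simply the absolute March means minus the 1991-2020 March baseline:

BASELINE_MARCH = 4.2    # deg C, the 1991-2020 average quoted above

def march_anomaly(abs_temp_c):
    return abs_temp_c - BASELINE_MARCH

# march_anomaly(5.0)  -> +0.8  (March 2022)
# march_anomaly(-0.9) -> -5.1  (March 1987)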

Steve Z.
Reply to  Bindidon
April 6, 2022 8:11 am

Thanks for the data. I get my European weather reports from comments on this website plus Euro-tabloid street photos of pedestrians bundled up in winter clothes in March. As far as the USA, I pay very close attention to the weather data each day. Here in Seattle, we had 10 straight days of colder than average temps, and later in the month, 7 straight days of below average temps. The day I wrote the comment you responded to, our daily high was 47 F, which was 10 F below our 57 F average!

Bindidon
Reply to  Steve Z.
April 6, 2022 2:04 pm

Thanks in turn for the convenient reply.

Yes, in Germany’s South it can be pretty cold even in March.

But if, like in 1986, there is still ice and snow on shady sidewalk corners here in north-eastern Germany at the end of April, then you know that it was really cold.

I had the intention to show most recent data for Scandinavia and Europe, using Meteostat hourly data, but the global download failed.

Yes, CONUS has been getting colder and colder for a while.

Some years ago, I collected all GHCN daily data for CONUS and the entire European continent (incl. Ukraine, Belarus and Western Russia):

[image: CONUS vs. Europe GHCN comparison chart]

If I had selected Tmin instead of (Tmin+Tmax) / 2 as I chose here, the difference would have been even greater.

It’s not so long ago that I saw on the monitor:
Cotton, MN, -49 °C at the end of January.

David Elstrom
April 4, 2022 6:36 am

Who cares? No human being has ever lived at the global average temperature in the history of mankind, and local temperatures vary seasonally by far more than the Warm-mongers deem catastrophic.

Kip Hansen (Editor)
April 4, 2022 7:37 am

Dr. Spencer ==> What is the uncertainty on your global average? +/- how much?

Readers — anyone who knows can answer this, please.

bdgwx
Reply to  Kip Hansen
April 4, 2022 7:49 am

Christy et al. 2003: ±0.2 C (95%) for monthly anomalies and ±0.05 C/decade (95%) for the trend.

It might also interest the WUWT audience to point out that per Christy & Spencer 1994, Christy & Spencer 1997, and Christy & Spencer 2000 there are at least 0.23 C/decade worth of adjustments to the data. This does not include the adjustments added in version 5 and 6.

Kip Hansen (Editor)
Reply to  bdgwx
April 4, 2022 10:13 am

bdgwx ==> Thanks for helping out.

I’ll check their earlier papers as well.

Carlo, Monte
Reply to  Kip Hansen
April 4, 2022 12:13 pm

0.2°C uncertainty for a temperature measurement outside of a calibration lab is nonsense.

Bellman
Reply to  Carlo, Monte
April 4, 2022 1:50 pm

Remind me what your analysis suggests for the uncertainty.

Kip Hansen (Editor)
Reply to  Bellman
April 4, 2022 3:48 pm

Bellman ==> There is no analysis that properly gives the true uncertainty, given the diversity, variability, and the unknown quantity and quality of the original measurement errors in the thermometer record. The satellite record has some chance of coming up with error bars on its individual grid point/altitude measurements, but averaging “global temperature” is hopeless for more reasons than will fit in a comment.

Bellman
Reply to  Kip Hansen
April 5, 2022 5:18 am

So why ask the question?

If all global temperature data is hopeless, why keep posting UAH data, and why give any credence to a “pause”? We just don’t know if global temperatures are going up at all, or if they are going up five times as fast as the data suggests.

bdgwx
Reply to  Kip Hansen
April 5, 2022 10:34 am

KH said: “There is no analysis that properly gives the true uncertainty given the diversity, varability and the unknown quantity and quality of the original measurement errors for the thermometer record.”

Do you think the global average could be anywhere between 0 K and 1.4e32 K?

That’s a serious question because I want to get you thinking about what constraints can be placed on the uncertainty.

Tim Gorman
Reply to  bdgwx
April 5, 2022 11:50 am

When the uncertainty exceeds the values of the data points, then you know you have an invalid data set. You can keep on accumulating uncertainty by adding more uncertainty, but once it masks what you are trying to identify you are just indulging in mental masturbation. Once you can’t tell what is happening, you need to stop and redesign your experiment.

Kip Hansen (Editor)
Reply to  bdgwx
April 5, 2022 1:28 pm

bdgwx ==> There is no need “to get me thinking”. I speak of the true uncertainty of the thermometer record, not some philosophical, non-physical, non-scientific “then it could be just anything….”

If you want to know my thinking, read my essays here and at Judith Curry’s.

My essays here are easiest to find using this link:

https://wattsupwiththat.com/?s=%22by%20Kip%20Hansen%22

once you have scrolled to the bottom, the page will lengthen yet again, many times. Last count was over 200….

bdgwx
Reply to  Kip Hansen
April 5, 2022 2:43 pm

Is there a specific essay where you discuss the UAH TLT uncertainty that I can read?

Editor
Reply to  bdgwx
April 5, 2022 5:42 pm

bdgwx ==> No, although many deal with uncertainty in the thermometer record. If you want to discuss details about “the UAH TLT uncertainty”, write to Dr. Roy Spencer.

bdgwx
Reply to  Kip Hansen
April 6, 2022 6:48 am

No need for me to write Dr. Spencer. He is an author on that Christy et al. 2003 publication. I’ve read through it and have no questions at the moment. Their type B evaluation is consistent with the Mears et al. 2009 Monte Carlo approach and my own type A evaluations. It is also consistent with the timing and magnitude of ENSO and volcanic events. That is, if the uncertainty were much higher, we would not be able to discern those events in the UAH TLT time series.

Tim Gorman
Reply to  bdgwx
April 6, 2022 2:16 pm

It’s been pointed out to you before that Monte Carlo techniques only work with *random* variables. Anything that has a systematic error is *NOT* a totally random variable, and it is not correct to use a Monte Carlo simulation to represent the data.
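
A minimal Python sketch makes the distinction concrete (the ±0.5 C noise and +0.2 C bias figures here are invented for illustration, not taken from any of the cited papers):

import numpy as np

rng = np.random.default_rng(42)
n_sensors = 100     # hypothetical measurements of different things
n_trials = 10_000   # Monte Carlo repetitions

# Purely random errors (sd = 0.5 C): the error of the average shrinks
random_err = rng.normal(0.0, 0.5, size=(n_trials, n_sensors)).mean(axis=1)

# Same random errors plus a shared systematic offset of +0.2 C
biased_err = random_err + 0.2

print(f"random only: mean {random_err.mean():+.3f}, spread {random_err.std():.3f}")
print(f"with bias:   mean {biased_err.mean():+.3f}, spread {biased_err.std():.3f}")
# The spread collapses toward 0.5/sqrt(100) = 0.05 C, but the +0.2 C
# systematic component passes through the averaging untouched.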

Carlo, Monte
Reply to  Tim Gorman
April 7, 2022 2:27 pm

It also requires a mathematical formulation that describes the ENTIRE derivation procedure.

Carlo, Monte
Reply to  Bellman
April 4, 2022 3:53 pm

Stop whining.

Bellman
Reply to  Carlo, Monte
April 5, 2022 5:14 am

So you don’t know either.

Carlo, Monte
Reply to  Bellman
April 5, 2022 7:04 am

Here’s a quarter, kid, go buy yourself a slide rule.

Bellman
Reply to  Carlo, Monte
April 5, 2022 9:21 am

A sincere question, do you think the uncertainty of UAH is at least ±7 °C?

Tim Gorman
Reply to  Bellman
April 5, 2022 11:28 am

It wouldn’t surprise me if it had a +/- 7C uncertainty. The measurement taken at any one point on the globe at any one time can certainly have a +/- 0.5C uncertainty due to the presence of clouds, the humidity at that point in time and location, and the pressure at that point in time and location (e.g. a low-pressure front vs. a high-pressure system), plus probably numerous other factors.

That uncertainty accumulates as you combine separate individual measurements of different things around the globe. If all the measurements were of the same thing, then you could possibly see smaller uncertainties in the average.

Natural variation might be pseudo-random but that doesn’t mean it all cancels out and doesn’t affect any average calculated. It most definitely does. The variation of temps is higher in winter than in summer, leading to multi-modal distributions when combining NH and SH satellite measurements. Those don’t cancel out!

Remember, an uncertainty higher than 1C would mask any annual temperature change, let alone the differences in the hundredths digit the CAGW advocates are alleging.
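
A toy sketch of the multi-modal point (the two “hemisphere” modes and their spreads are invented numbers):

import numpy as np

rng = np.random.default_rng(0)
nh_winter = rng.normal(-5.0, 8.0, 50_000)  # colder, more variable mode
sh_summer = rng.normal(20.0, 4.0, 50_000)  # warmer, tighter mode
combined = np.concatenate([nh_winter, sh_summer])

print(f"combined mean {combined.mean():.1f} C, sd {combined.std():.1f} C")
# The mean (~7.5 C) falls in the gap between the two modes and describes
# neither population; the sd mixes the two spreads with the mode separation.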

Carlo, Monte
Reply to  Bellman
April 5, 2022 12:17 pm

Not going to play your silly clown games.

Bellman
Reply to  Carlo, Monte
April 5, 2022 1:25 pm

That’s a pity, as you could have corrected my mistake. You actually said the uncertainty was at least ±3.4 °C.

https://wattsupwiththat.com/2021/12/02/uah-global-temperature-update-for-november-2021-0-08-deg-c/#comment-3401727

I’m sorry for exaggerating your figure. But it still seems implausible and I’d really like to know if you have reconsidered since then.

Carlo, Monte
Reply to  Bellman
April 5, 2022 2:17 pm

Asked and answered, counselor, move on… /judge

Editor
Reply to  Bellman
April 5, 2022 7:06 am

Bellman ==> There is value in asking good questions. There is also value in asking the right people. Asking your next-door neighbor’s wife what principles of physics she uses to get to work in her car might help her to seek to learn more about the physical world — and thus be valuable. But to assume that because she has no real answer she is thus unqualified to drive her car is a step too far.

If you were asking a rhetorical question as a teaching tool, a question to which you (or anyone in this instance) could provide a scientifically precise answer, then we would excuse you your rudeness here.

The original question I asked was “What is the uncertainty on your global average?”. That is an important question so others can judge the data and its variability.

Your question to Carlo (“what your analysis suggests for the uncertainty”) and your question to me (“So why ask the question?”) appear to me not to have been asked in sincerity. Nevertheless, I tried to answer the question for you.

The fact that seeking a “global surface temperature average” is scientifically hopeless, coupled with the fact that there are many groups around the world spending endless time and untold funds trying to calculate that number, is one of the mysteries of our time.

If you wish to truly understand why I say that, you must read at least my series on Averages and my series on Chaos.

If you are just trolling and kicking the hornets nest, that’s just pathetic.

Bellman
Reply to  Kip Hansen
April 5, 2022 9:19 am

Sorry for asking the questions in an insincere way. The problem is I’ve been arguing with Carlo, Monte for ages, and much of the time he just responds with personal insults, such as “stop whining”. I asked the question because I know that in the past he’s insisted the actual uncertainty in UAH is at least ±7 °C, or some such. I didn’t want to just say that, as I thought it fairer to make sure he still held that opinion.

I asked you why you’d asked the question because I had assumed you genuinely wanted to know what the estimated uncertainty was, and it seemed odd to turn round and say there was no acceptable uncertainty.

I’ve been arguing about uncertainty for some time, and I do find it strange that some here think all data sets are so uncertain it’s impossible to tell anything about what global temperatures have been doing over the last few decades, yet happily accept analysis such as Monckton’s pause if it seems to show warming has stopped.

Editor
Reply to  Bellman
April 5, 2022 12:40 pm

Bellman ==> The problem with uncertainty is the uncertainty of it all. I didn’t intentionally get in the middle of your long-term argument with Carlo, Monte. I had no idea of his proffered uncertainty for GMST. I don’t recall him publishing anything here to that effect (comments do not count, ever).

I asked about Spencer’s uncertainty because it is not included on his graph. That simple.

Of course, it will be Spencer’s stated uncertainty. One would have to do a lot of work to determine if that stated uncertainty in any way represented a real-physical-world estimate of just how wide the uncertainty bands would be. Uncertainty in the numbers does not necessarily carry over to the trends of the data set… that is another topic altogether.

If you are truly serious about uncertainty, then you have to really dig into the data and its origins. You have to ask the question “What Are They Really Counting?” When things get scientific, then you have to dig in further and ask “Are they counting the thing they say they are counting?” and then ask “How are they counting it?” (measuring it). And then, “How are they analyzing the data they’ve collected?”

It is in those areas that the true uncertainty surfaces. For instance, in the world of Global Mean Surface Temperature one runs into the fact that the idea itself is somewhat unscientific. Physically, one cannot average records of sensible heat (temperature); it is not a physical quantity that can be averaged. On the other hand, there are things about incoming solar energy and its effects that could be averaged.

To really get a grasp of this one must first quit thinking thoughts like “some people here.” Not that that isn’t true, but it just isn’t useful. I write a lot of things here that “some people” don’t like and a lot that “almost no one here likes”. That doesn’t change the truth value of what I write.

Take for instance the GMST for the 1890s…. there is no reliable data set that tells us anything useful about GMST to a degree or three. There just isn’t such a thing that would pass any sort of scientific muster. It would be great if there were, because warming since then seems to be of great interest to a lot of people and is messing with our national energy policies.

You may argue with Monckton about his “pause” but not with him about the numbers which aren’t his. He just looks at the official numbers and uses them to poke doomsayers with that sharp stick (valid or not — not my call).

If you have something substantive to say on uncertainty, write it up in a thousand words or so and I’ll help you get it published here.

Bellman
Reply to  Kip Hansen
April 5, 2022 1:38 pm

One would have to do a lot of work to determine if that stated uncertainty in any way represented a real-physical-world estimate of just how wide the uncertainty bands would be.

I reserve judgement as to how uncertain monthly UAH figures are. I agree that there are many possible sources of uncertainty and any quoted figure is at best a ball-park estimate. I also wish UAH would publish their own estimates.

However I do feel there can’t be that much uncertainty, or else it would be difficult to detect aspects such as ENSO and volcanic activity. Moreover it seems to me the correlation between different temperature records, such as UAH and GISS, puts a limit on any uncertainty.

Uncertainty in the numbers do not necessarily carry over to the trends of the data set…that is another topic altogether.

That’s something I keep saying. Uncertainty in trends is mostly caused by the variation in the actual temperatures rather than from the measurements. The only uncertainty that will seriously affect the trends is systematic bias changing over time.

It is in those areas that the true uncertainty surfaces. For instance, in the world of Global Mean Surface Temperature one runs into the fact that the idea itself is somewhat unscientific.

I agree to an extent, though not for the reasons you say. That’s why it’s better to talk about the change in temperature and anomalies rather than a specific global temperature.

Editor
Reply to  Bellman
April 5, 2022 5:30 pm

Bellman ==> At “best a ball-park estimate” — yes, absolutely.

“detect aspects such as ENSO” – these are detected in the satellite sea surface temperature record. I’m not sure that volcanic eruptions or their effects are actually detected as such, only slight changes in some data sets – maybe.

Dig in — that is my advice. Your general comments indicate a lack of deeper understanding of the issues involved. I don’t mean this as an insult — it is just so. You still talk as if you are sure that global temperature is a real physical thing and that wildly uncertain data sets can produce detailed, accurate, precise results despite comparatively huge uncertainties in the collected and recorded data.

I assure you that it is just not the case — but I cannot give an in depth class here in comments. About 100 of my essays here touch on one or more aspects of these questions. The search tool on this site is quite good. Use it to find stuff of mine about GMST.

all my best,

Kip

Carlo, Monte
Reply to  Kip Hansen
April 5, 2022 2:32 pm

Kip—Apparently this lot have some kind of odd or vested interest in making sure the uncertainties of The Trends are as low as possible, but they have no real-world understanding of what is involved in performing uncertainty analysis nor any real metrology experience.

Trying to educate them has proven to be a waste of time; having been provided with page after page of pointers, clues, tutorials, etc. by experienced professionals, they have chosen to ignore it all because it doesn’t line up with their gut instincts.

They really don’t care what real temperature measurement uncertainties are; all they do is bite the ankles of anyone daring to cast doubt on the official Trends.

Bellman
Reply to  Carlo, Monte
April 5, 2022 3:13 pm

Apparently this lot have some kind of odd or vested interest in making sure the uncertainties of The Trends are as low as possible…

I do not. I’ve suggested before that you could probably increase any stated uncertainty. There are always unknown unknowns. I just find it implausible that monthly values could have that much uncertainty yet show such consistency across so many different methods.
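
That consistency argument can be put in numbers. A rough sketch with assumed error levels: if two fully independent records each carried a random monthly error of standard deviation sigma, their month-to-month differences would scatter with roughly sigma times sqrt(2):

import numpy as np

rng = np.random.default_rng(1)
signal = rng.normal(0.0, 0.15, 500)   # shared underlying anomaly, invented scale

for sigma in (0.05, 0.5, 3.4):        # candidate independent error levels
    rec_a = signal + rng.normal(0.0, sigma, 500)
    rec_b = signal + rng.normal(0.0, sigma, 500)
    print(f"sigma {sigma:4.2f} C -> sd of (A - B) = {np.std(rec_a - rec_b):.2f} C")
# Multi-degree independent errors would leave the two series unable to track
# each other month to month; the observed agreement caps the error size.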

What I have said is that I don’t think the measurement uncertainty is the main issue with any monthly global value, and I don’t think the uncertainty of monthly values is very important in establishing the uncertainty in the trend.

But what I’ve primarily been interested in is trying to fathom why a small group of people here insist that uncertainty of a mean increases with sample size. And I’m interested in that from the statistical side. If you can’t get that right I have little faith in any other expertise you choose to present.

Trying to educate them has proven to be a waste of time; having been provided with page-after-page of pointers, clues, tutorials, etc. by experienced professionals, they have chosen to ignore it all because it doesn’t line up with their gut instincts.

You know of course, we think the same about you. The problem is that looking at all these pages and pages of expert material all lead to the same conclusion – sample size does not increase uncertainty.

“…all they do is bite the ankles of anyone daring to cast doubt on the official Trends.”

I keep casting doubts on the trends. Especially those that are based on carefully choosing start points and ignoring the context. I keep saying you need to look at the uncertainty of the trend, ask whether a linear fit is best, and compare multiple possibilities.

When I do mention trends it’s usually to explain that some claim is not supported by the statistics, not to claim that the trend is the only correct possibility, or that it will continue into the distant future.

Carlo, Monte
Reply to  Bellman
April 5, 2022 4:46 pm

Nice screed, unread.

Tim Gorman
Reply to  Bellman
April 5, 2022 4:52 pm

“I just find it implausible that monthly values could have that much uncertainty yet show such consistency across so many different methods.”

Almost all analyses use the same data and follow the same process and methods. None of the data sets I have ever seen do a good job of estimating uncertainty in individual measurements. Berkeley Earth admits it uses precision for its uncertainty estimates in the raw data – so that follows through in any analysis someone does using their raw data. Precision is *NOT* accuracy, i.e. uncertainty.

“What I have said is that I don’t think the measurement uncertainty is the main issue with any monthly global value, and I don’t think the uncertainty of monthly values is very important in establishing the uncertainty in the trend.”

That’s because you ignore the uncertainty of the data points and just assume that all data points are 100% accurate. Thus the uncertainty of the trend is really just the residuals between the trend line and the stated values. Nothing about uncertainties is included in the analysis of the trend line. The same sort of thing applies to monthly averages. You just assume all errors in the data are random and no systematic error exists in any measurement. That allows you to assume that all errors cancel out and the averages calculated from the stated values are 100% accurate. It’s the same thing all the so-called climate scientists advocating for CAGW do. And it is *wrong*.

“But what I’ve primarily been interested in is trying to fathom why a small group of people here insist that uncertainty of a mean increases with sample size.”

Again, that is because you refuse to believe what the experts like Taylor and Bevington tell us in their tomes. You’ve never in your life had to take personal liability for a product affecting a client or the public. You think it is just peachy to assume all error cancels out (i.e. it is all random) and everything will come out 100% accurate in the end. You’ve never, ever had to build a beam spanning a foundation – you just assume that when you put all those pieces of lumber together they will be exactly the right length, with no addition of uncertainty from the individual elements making up the beam. You’ve never had to build stud walls in a house. You just assume that if you nail all the studs together on a frame, the drywall you attach will not have any waves in it because they will all be the average value! You’ve never had to build a multi-story building – you would just assume that if you use the average length for all of the framing studs, each end will wind up at the same height at the top of the third floor and the roof therefore won’t tilt. All those “random” errors in the framing studs will just cancel out!

It’s not a matter of you “can’t” fathom why uncertainties add, it’s a matter of you *won’t* fathom why they add.

“sample size does not increase uncertainty”

That is *ONLY* true when you have nothing but random errors. And for some reason you refuse to accept that both Taylor and Bevington say that isn’t true in almost all cases. You will *always* have some systematic error; the best you can do is minimize it. But there isn’t any way to minimize it in 1000 measurements of different things using different measuring devices. You simply cannot assume that all systematic errors will cancel. If they don’t cancel then they add. It truly is just that simple.

“supported by statistics”

That’s just great coming from someone who assumes that all measurement distributions are Gaussian and that the average is the “true value”. You can’t even admit that combining measurements from the southern hemisphere with measurements from the northern hemisphere gives you a multi-modal distribution in which the average value is useless. It describes none of the modes and it doesn’t tell you the standard deviation of the combined modes.

And you speak of using statistics. Uh……

Bellman
Reply to  Tim Gorman
April 5, 2022 6:07 pm

Almost all analyses use the same data and follow the same process and methods.

I’m talking here mainly about the difference between satellite and surface data. Two independent ways of measuring temperature.

That’s because you ignore the uncertainty of the data points and just assume that all data points are 100% accurate.

I don’t know how many times you’ve repeated that untruth. We’ve spent so many pointless hours discussing how to propagate measurement uncertainty.

Thus the uncertainty of the trend is really just the residuals of the trend line and the stated values.

That’s not the uncertainty of the trend line being discussed here. I think you are talking about the prediction interval, not the confidence interval.

Nothing about uncertainties is included in the analysis of the trend line.

As Bigoilbob said some time ago, it is possible to incorporate measurement uncertainties into the trend line uncertainty, but it makes little difference. But I also think it’s a pointless task, as any uncertainty is already present in the variation of the values.

You just assume all errors in the data are random and no systematic error exists in any measurement. That allows you to assume that all errors cancel out and the averages calculated from the stated values are 100% accurate.

Nobody claims any stated values are 100% accurate. And all data sets assume there is systematic error, that’s why the data is adjusted.

Again, that is because you refuse to believe what the experts like Taylor and Bevington tell us in their tomes.

Stop with these untruths. I don’t have to “believe” that Taylor and Bevington are correct. I know they are because it’s just standard statistics. The problem is you refuse to accept you are misinterpreting what they say.

You think it is just peachy to assume all error cancels out

I literally do not do that. The whole point of the statistics, whether talking about random sampling and the standard error of the mean or propagating measurement errors, is that they do not all cancel out. At least not unless you could take an infinite number of samples and all errors were random. What’s the point of calculating the standard error of the mean if you think all errors will cancel out?

That is *ONLY* true when you have nothing but random errors.

Well that’s progress. Originally you were insisting that uncertainty of the mean increased with sample size for random errors. It’s still nonsense to say that it increases with systematic errors. If you take the mean of an infinite number of samples, you will be left with the systematic error; it won’t be bigger than the systematic error of the individual values.

That’s just great coming from someone that assumes that all measurement distributions are Gaussian and that the average is the “true value”.

Untruth upon untruth. Again, how many times have I talked with you about different distributions? What I have said is that if you are taking the average of the sample, it’s in order to estimate the true value of the average. For some reason you seem to think an average is not a true value if it isn’t the size of a specific thing.

You can’t even admit that combining measurements from the southern hemisphere with measurements from the northern hemisphere give you a multi-modal distribution in which the average value is useless.

You just keep asserting that, but have never demonstrated what the global temperature distribution is. And remember, we are really talking about anomalies here. However, you are right about one thing – I do disagree that an average of a multi-modal distribution is useless.

Again, and this is where we will have to agree to disagree – a mean does not have to tell you what a particular mode is to be useful. If the question is are two populations the same, then knowing the means are significantly different is sufficient to tell you they are not the same.

Tim Gorman
Reply to  Bellman
April 6, 2022 8:02 am

“I’m talking here mainly about the difference between satellite and surface data. Two independent ways of measuring temperature.”

This shows a total lack of understanding about the differences in the two data sets. Satellite data is a METRIC. It doesn’t even *try* to identify minimum and maximum temperatures at any specific location. The satellite just floats around gathering snapshots all over the globe which are then repeated the next day and the next day and the next day. All those daily snapshots are then averaged, and then all of those averages are averaged again to form a baseline from which anomalies are calculated and averaged again! Then those anomalies are trended. Whether all those averages of averages truly represent a measure of global climate is questionable, and any trend line deduced from them is even more questionable. And none of them are truly *temperature* measurements.

Surface data? It’s just as bad. No attempt is made to propagate uncertainty properly, it is just ignored or assumed to be the precision of the sensor. It is a conglomeration of multiple measurements of different things using different measurement devices whose uncertainties vary all over the place. Even worse is using measurements from other stations to estimate measurements at another location when the correlation of the components is not known but just assumed to be 100%.

Bellman
Reply to  Tim Gorman
April 6, 2022 12:17 pm

The satellite global temperatures are a metric; so too are surface temperature records. I made no mention of max and min, I’m just comparing the stated monthly mean values.

Your argument seems to be that you consider both satellite and surface data to be unreliable. But that’s missing my point, which is that two data sets, both using completely different measurements and techniques, still show much more agreement month to month than would seem possible if the uncertainty was really in the multi-degree range.

Tim Gorman
Reply to  Bellman
April 6, 2022 12:26 pm

“I made no mention of max and min”

And yet that is what the data sets are made up of. You don’t even seem to know that!

“stated monthly mean values”

What do you think goes into determining those “stated monthly mean values”? And just what is the uncertainty of those “stated monthly mean values”?

“Your argument seems to be that you consider both satellite and surface data to be unreliable.”

Surface data are *only* reliable when used within their uncertainty limits. Anything outside of that *is* unreliable.

“But that’s missing my point, which is that two data sets, both using completely different measurements and techniques, still show much more agreement month to month than would seem possible if the uncertainty was really in the multi-degree range.”

If the surface data is unreliable outside its uncertainty range then using it as a standard to compare to the satellite data is useless. And the satellite data does *NOT* show the same magnitude of temperature rise as the surface data shows.

Besides, month-to-month change is driven far more by the tilt of the earth than it is by anything else. If the temperature records show a decline from November to December, so what? That doesn’t mean they are reliable enough to determine “global average temperature changes” in the hundredths of a degree!

Bellman
Reply to  Tim Gorman
April 7, 2022 6:47 am

If the surface data is unreliable outside its uncertainty range then using it as a standard to compare to the satellite data is useless.

Yet somehow you aren’t worried that the two sets agree. How do two independent unreliable data sets both agree so much?

And the satellite data does *NOT* show the same magnitude of temperature rise as the surface data shows.

I’m not talking about the trend, but the uncertainty in monthly values. If there’s a bias in the trend that’s a different question, but it can’t be caused by random or systematic errors in monthly measurements.

Besides month-to-month change is driven far more by the tilt of the earth then it is by anything else. If the temperature records show a decline from November to December so what? That doesn’t mean they are reliable to determine “global average temperature changes” in the hundredths of a degree!

Sorry, are you saying it was the tilt of the earth that caused the 2016 El Niño, or the current La Niña?

I’m not saying any data set is accurate to a hundredth of a degree. I’m saying the comparison of satellite data to surface data suggests they are not uncertain to multiple degrees.

Carlo, Monte
Reply to  Tim Gorman
April 7, 2022 2:30 pm

Another point that is glossed over/hidden with the satellite measurements is how many times in a month a given grid location is sampled.

Tim Gorman
Reply to  Bellman
April 6, 2022 8:19 am

“I don’t know how many times you’ve repeated that untruth. We’ve spent so many pointless hours discussing how to propagate measurement uncertainty.”

You can deny it all you want but it *is* the truth. It even carries over to you claiming that the fit of the trend line to the stated values is the uncertainty of the trend line. It isn’t. You’ve been provided all kinds of documentation showing that it isn’t. It’s merely the fit of the trend line to the stated values while absolutely ignoring the uncertainties of those stated values.

You even refuse to admit that the uncertainty of a sum, propagated from the individual elements in the sum, reflects on the uncertainty of the average calculated from that sum. You try to claim that the average uncertainty is the uncertainty of the average. It isn’t! Division by a constant does *NOT* change the uncertainty of the numerator. δN (the uncertainty of a constant) is ZERO. One more time: if you stack three 2″x4″x8′ boards together to create a beam, the uncertainty of the total length is *NOT* the average uncertainty; it is the total uncertainty of the three boards added together. It is the uncertainty of the total, the SUM, that is important, not the average of the uncertainties.

Bellman
Reply to  Tim Gorman
April 6, 2022 12:42 pm

You can deny it all you want but it *is* the truth.

I will keep denying it. But I’m more interested in what you think “It even carries over to you claiming that the fit of the trend line to the stated values is the uncertainty of the trend line.” means. I don’t think the fit of the trend line is the uncertainty of the trend line. It’s the confidence interval of the trend line that is its uncertainty.

You’ve been provided all kinds of documentation showing that it isn’t.

You keep imagining you’ve sent me all this documentation that somehow proves your point, but you don’t say what it is.

You even refuse to admit that the uncertainty of a sum, propagated from the individual elements in the sum, reflects on the uncertainty of the average calculated from that sum.

Again with the making up stuff you think I’ve said. What I’ve argued is that the uncertainty of the sum does reflect on the uncertainty of the average – you divide the uncertainty of the sum by the sample size to get the uncertainty of the average, hence it is reflected. I have really no idea why you still think this is a controversial idea. It’s one of the main pillars of statistics, it’s how the standard error of the mean is determined, and it’s explained or implied in every text book you ask me to look at.

You try to claim that the average uncertainty is the uncertainty of the average.

I’m not sure I’ve ever said that. It doesn’t make any sense. To go back to your first example, 100 thermometers each with a random uncertainty of ±0.5°C. The average uncertainty would be ±0.5°C, but the measurement uncertainty of the average would be ±0.05°C.

Division by a constant does *NOT* change the uncertainty of the numerator. ẟN (the uncertainty of a constant) is ZERO.

I really don’t want to waste any more time trying to explain what you are missing here. If you don’t accept the explanations from your preferred school books, you are not going to accept them from me.

But for anyone else mad enough to still be reading this the mistake Tim makes is to ignore the fact that when you multiply or divide values you add the relative uncertainties, not the absolutes as he does here. Hence adding the zero uncertainty of N to the uncertainty of the sum is a red herring – what matters is that the relative uncertainty of the mean is the same as the relative uncertainty of the sum, and as the mean is smaller than the sum it follows that the absolute uncertainty of the mean must be smaller than the absolute uncertainty of the sum.
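
Putting numbers on that, a minimal Python sketch reusing the ±0.5 and N = 100 from the thermometer example above:

u_each, n = 0.5, 100                 # per-reading uncertainty and count, assumed
u_sum_random = n ** 0.5 * u_each     # independent errors add in quadrature: ±5.0
u_sum_worst = n * u_each             # perfectly correlated worst case: ±50.0

# N is exact, so the mean's relative uncertainty equals the sum's,
# i.e. u(mean) = u(sum) / N in either case:
print(u_sum_random / n)  # 0.05 -> the ±0.05 C quoted above
print(u_sum_worst / n)   # 0.50 -> even worst case, the mean is no worse than one reading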

if you stack three 2″x4″x8′ boards together to create a beam the uncertainty of the total length is *NOT* the average uncertainty, it is the total uncertainty of the three boards added together. It is the uncertainty of the total, the SUM, that is important, not the average of the uncertainties.

And finally we come to the familiar misdirection – he’s now talking about the uncertainty of the sum of three boards as if it were the mean.

But it’s particularly daft here as we are not talking about stacking boards, but averaging temperatures. The sum of three wooden boards may be a more useful value than the average of their lengths. But that makes no sense when talking about temperatures. The sum of 100 thermometer readings is a meaningless value, but the average isn’t. If you add another three boards you might expect the length to double along with the uncertainty. But if you add another 100 thermometer readings to the first 100, the sum will double but that doesn’t mean the temperature has increased.

Tim Gorman
Reply to  Bellman
April 6, 2022 3:29 pm

“I don’t think the fit of the trend line is the uncertainty of the trend line. It’s the confidence interval of the trend line that is it’s uncertainty.”

When you calculate the average of the residuals you are measuring the fit of the trend line to the data. That *is* what you have been doing.

The uncertainty of the trend line is related to the uncertainty of the data points, not just the residuals of the stated values vs the trend line. Where in anything you’ve ever done have you translated the uncertainty of the stated values into anything associated with a trend line? If you had done so you would understand that as long as the trend line is inside the uncertainty intervals of the data elements you can’t determine the validity of the trend line. It could be up, down, or horizontal.

Bellman
Reply to  Tim Gorman
April 6, 2022 5:15 pm

When you calculate the average of the residuals you are measuring the fit of the trend line to the data. That *is* what you have been doing.

You do not do that to determine the fit. The average of the residuals is likely to be close to zero. You average the absolute values, or in this case the squares of the residuals. But this is not how you calculate the confidence interval.

Where in anything you’ve ever done have you translated the uncertainty of the stated values into the anything associated with a trend line?

See my comments elsewhere, where I looked at what the trend line is using Carlo, Monte’s ridiculous uncertainty estimates. It increases the uncertainty of the trend, but not by that much. For realistic uncertainty estimates it will probably have a minimal effect.

If you had done so you would understand that as long as the trend line is inside the uncertainty intervals of the data elements you can’t determine the validity of the trend line.

And I’ve explained to you why that is not the case.

Carlo, Monte
Reply to  Bellman
April 7, 2022 2:35 pm

Carlo, Monte’s ridiculous uncertainty estimates

HAHAHAHAHAHAHAHAH

YES! I’m stuck in the mind of bellcurveman forever now!

So sad for you, you still have ZERO CLUES about real uncertainty values.

Bellman
Reply to  Carlo, Monte
April 7, 2022 2:56 pm

I gave you the opportunity to correct yourself. You went into a sulk.

Carlo, Monte
Reply to  Bellman
April 7, 2022 4:14 pm

I gave you the opportunity to correct

How magnanimous of you, Your Highness!

Tim Gorman
Reply to  Bellman
April 6, 2022 3:31 pm

“You keep imagining you’ve sent me all this documentation that somehow proves your point, but you don’t say what it is.”

How many times do I have to post this before you actually look at it and understand it?

[attached image: residual_uncertainty.jpg]
Bellman
Reply to  Tim Gorman
April 6, 2022 3:47 pm

Some doodle from you does not equate to documentation.

I’ve hazarded a guess as to what you think you are doing with that picture and explained why it isn’t what is meant by an OLS trend. If you want to draw Monckton’s pause like that and use it to determine the uncertainty please feel free to send it to me.

Here’s the Skeptical Science uncertainty range for comparison.

[attached image: canvas.png]
Jim Gorman
Reply to  Bellman
April 7, 2022 9:35 am

You are hilarious. What do you think the “doodle” TG showed you actually does?

Look at the uncertainty shown by your graph. The trend line could vary from “-0.009 + 0.610 = +0.601”, an increase, to -0.619, a decrease. Those are the crossing lines shown in TG’s “doodle”.

The upshot is that you have no idea where the trend line actually should be within the uncertainty interval. That’s what we’ve been trying to tell you for ages. You finally stepped into it and don’t even recognize what you’ve done.

Bellman
Reply to  Jim Gorman
April 7, 2022 10:43 am

What do you think the “doodle” TG showed you actually does?

Not what the one I posted does.

The confidence interval is ±0.610°C / decade. It has nothing to do with the measurement uncertainty of the monthly averages. We don’t know what the measurement uncertainties are in the UAH data; it’s just the usual calculation for the confidence interval taking into account autocorrelation.

The upshot is that you have no idea where the trend line actually should be within the uncertainty interval. That’s what we’ve been trying to tell you for ages.

Funny, it’s what I’ve been trying to tell Monckton for ages. The pause is meaningless. Temperatures could be increasing at 4 times the previous rate or declining at a similar rate. You simply cannot tell what is happening over such a short interval.

Now look at the uncertainty over a longer period.

Trend: 0.134 ±0.049 °C/decade (2σ)

Could be as much as 0.183 or as little as 0.085°C / decade, but it’s very unlikely that there has been no warming.

[attached image: canvas.png]
Carlo, Monte
Reply to  Bellman
April 7, 2022 2:39 pm

You’re nothing but a disingenuous disinformation agent.

Tim Gorman
Reply to  Bellman
April 7, 2022 7:10 pm

“The pause is meaningless. Temperatures could be increasing at 4 times the previous rate or declining at a similar rate. You simply cannot tell what is happening over such a short interval”

Then the rise is also meaningless.

So which is it? 1. The climate scientists don’t actually know what is happening, or 2. The climate scientists do know.

You can’t have it both ways. Pick one and stick to it.

Bellman
Reply to  Tim Gorman
April 8, 2022 8:26 am

No, the rise since 1979 is statistically significant. It may be lower or higher than the best estimate, but it’s vanishingly improbable that you would see that much warming just by chance.

Carlo, Monte
Reply to  Bellman
April 7, 2022 2:37 pm

SkepticalScience? Great source, nitwit. Exposes what your real agenda is.

Bellman
Reply to  Carlo, Monte
April 7, 2022 2:54 pm

Yes, my agenda is to use whatever tools are most convenient. I could have drawn the graph myself, as I usually do, but as this was about confidence intervals I preferred this graph as it gives a large correction for autocorrelation. Given the whole claim is that the uncertainties should be bigger, I’m not sure why you don’t want me to use this.

Do you have any specific objection to the presentation of the graph, or are you on another attention-seeking stalking exercise?

Tim Gorman
Reply to  Bellman
April 6, 2022 3:42 pm

“Again with the making up stuff you think I’ve said. What I’ve argued is that the uncertainty of the sum does reflect on the uncertainty of the average – you divide the uncertainty of the sum by the sample size to get the uncertainty of the average, hence it is reflected.”

NO, you *don’t* do it this way! Again, how many times does this have to be explained before you get it?

If I have a group of 2″x4″x8′ long boards, each with a different uncertainty interval, and I calculate the sum of their uncertainty values and divide it by the number of boards, I get an AVERAGE UNCERTAINTY. That average uncertainty may or may not have *any* relationship to the actual uncertainty of any specific board, just as the average length may or may not have any relationship to any of the boards.

If I use the average length of the boards and the average uncertainty to fasten three of them together using fish plates what is my certainty that they will successfully span the distance I need? Is it 3 x AvgUncertainty? Or is it the sum of the uncertainties of the three boards?

I assure you that if you pick 3 x AvgUncertainty then I want you nowhere near my construction site! You would be a disaster, wasting my time and money and probably a client.

Bellman
Reply to  Tim Gorman
April 6, 2022 4:29 pm

Again, how many times does this have to be explained before you get it?

Until you give an explanation that makes sense.

If I have a group of 2″x4″x8′ long boards, each with a different uncertainty interval and I calculate the sum of their uncertainty values and divide it by the number of boards I get an AVERAGE UNCERTAINTY.

Yes. Not sure why you’d want to know the average uncertainty, but carry on.

If I use the average length of the boards and the average uncertainty to fasten three of them together using fish plates what is my certainty that they will successfully span the distance I need?

You haven’t specified what type of uncertainty.

Is it 3 x AvgUncertainty? Or is it the sum of the uncertainties of the three boards?

Trick question, they’re the same.

I assure you that if you pick 3 x AvgUncertainty then I want you no where near my construction site!

That’s a relief.

But now let’s look at an example that is relevant to what you claim. You have 3 pieces of wood; you measure the length of each and add them together. The sum comes to 24′ and you estimate the uncertainty of each measurement to be ±0.1′. You calculate the uncertainty of the sum therefore to be ±0.3′, as you don’t want to assume all the uncertainty is random.

Now you also want to know the average board length, so you divide by 3 to get an average length of 8′ ± 0.3′. (Because you don’t believe in dividing the uncertainty by 3.)

So now what is the uncertainty of the 3 boards nailed together? Again assuming these uncertainties may not be random, I assume you would have to say it’s 24′ ± 0.9′.

Maybe that’s not too much of a problem. But now say you repeat that with 100 boards. Sum is 800′, uncertainty of the sum is ±10′, and the average is 8′ ± 10′.

So what is the uncertainty of all the boards stuck together? Is it 800′ ± 1000′?
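
The arithmetic, spelled out (a sketch in Python using the same assumed numbers):

u_each, n = 0.1, 100        # ±0.1 ft per board, 100 boards (worst case, errors all one way)
u_sum = n * u_each          # ±10 ft on the 800 ft total
print(u_sum / n)            # 0.1 -> divide-by-N: the mean is 8 ft ± 0.1 ft
print(u_sum)                # 10  -> keeping the sum's uncertainty would make the
                            #        mean 8 ft ± 10 ft, allowing negative board lengths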

Tim Gorman
Reply to  Bellman
April 6, 2022 3:55 pm

“I have really no idea why you still think this is a controversial idea. It’s one of the main pillars of statistics, it’s how the standard error of the mean is determined, and it’s explained or implied in every text book you ask me to look at.”

You just can’t seem to get it into your head that the standard deviation of the sample means is a measure of precision not a level of uncertainty!

A small standard deviation of the sample means only indicates that the stated values of the component elements are close together, so that the various sample means are all about the same. THAT DOESN’T MEAN THAT THOSE STATED VALUES GIVE YOU AN ACCURATE MEAN FOR THE SAMPLES!

One more time — If the uncertainty intervals for those stated values contain *any*, and I mean ANY, systematic error then the means you calculate simply won’t be accurate. They may all be close together in value but they may be completely off target! How far off target they are depends on how large the systematic error is compared to the random instrumental error. They may give you a small standard deviation of the sample means but the average you calculate from those sample means will be as inaccurate as all git-out!

Why is this so hard to understand? Every text book I have asked you to look at assumes the stated values are 100% accurate and the only thing causing a standard deviation of the sample means that isn’t zero is the variation in the sampling. None of those textbooks consider that the stated values have an uncertainty interval.

The only references I have given you that do address this are Taylor and Bevington, and you simply refuse to do anything other than try to apply their rules to *everything* as if everything is made up of totally random error that totally cancels out! That can only happen with Gaussian distributions of random error.

Bellman
Reply to  Tim Gorman
April 6, 2022 5:23 pm

If the uncertainty intervals for those stated values contain *any*, and I mean ANY, systematic error then the means you calculate simply won’t be accurate.

I’ll just deal with this one point.

You keep insisting that the idea of using the standard error of the mean is valid as long as you are measuring the same thing with the same instrument under the same conditions over and over again. All the metrology books say that is OK. Am I right? But multiple measurements only improve precision as you say, and you also insist that all measurements have some systematic error.

So would you say that any measurement made with any instrument is never accurate, no matter how many times you measure it? And if so, would you say that that is a valid reason for not trying to improve the precision?

Carlo, Monte
Reply to  Bellman
April 7, 2022 2:44 pm

I’ll just deal with this one point.

Should Tim feel honored that you deign to do so?

Oh, and you know absolutely zip about real-world metrology, despite all the reams of blather you generate while trying to paper over your abject ignorance. This idiotic question exposes your bare hindside.

Bellman
Reply to  Carlo, Monte
April 7, 2022 2:49 pm

Should Tim feel honored that you deign to do so?

It was early morning and I wanted to go to bed, so I decided not to go through every point. Is that OK with you?

Tim Gorman
Reply to  Bellman
April 7, 2022 8:15 pm

1. Go back and reread Taylor and Bevington. I’ve given you their exact quotes on the subject.
2. No measurement is ever perfectly accurate. No measuring device is ever perfect. That is the whole point of the uncertainty interval – to describe how imperfect the measurement is.

Increasing precision doesn’t mean you have increased accuracy. Once again you show your ignorance of metrology. Increased precision MAY help, but that isn’t a guarantee. An inaccurate seven-digit voltmeter may have higher precision than a four-digit one but it may be more inaccurate as well.

Bellman
Reply to  Tim Gorman
April 7, 2022 7:10 am

You just can’t seem to get it into your head that the standard deviation of the sample means is a measure of precision not a level of uncertainty!

This deflection with strawmen is quite tedious.

You say that in calculating the uncertainty of a mean you take the uncertainty of the sum and use that as the uncertainty of the mean. I say that’s wrong; you have to divide that uncertainty by the sample size. I point out that this is very basic statistics, true whether talking about measurement uncertainty or sampling uncertainty, and is the basis of calculating the SEM.

Your response is to go on about how the SEM may not be a perfect estimate of the uncertainty, none of which I disagree with. But it’s irrelevant to the point. The SEM is derived in the way I’ve said; it gets smaller as sample size increases, just as every text book says, including the ones by metrologists.

There are lots of ways in which a mean may be wrong, including sampling bias, and to a far lesser extent systematic errors in measurements. There are lots of factors to be considered in devising and interpreting a statistical experiment. But none of this removes the fact that in general a larger sample will reduce the uncertainty of your result. And absolutely none of this implies that uncertainty increases with sample size.
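
Anyone who wants to check that claim can simulate it; a quick Python sketch with a deliberately non-Gaussian, invented population:

import numpy as np

rng = np.random.default_rng(7)
population = rng.gamma(2.0, 5.0, 1_000_000)   # skewed, non-Gaussian population

for n in (10, 100, 1000):
    means = np.array([rng.choice(population, n).mean() for _ in range(2000)])
    theory = population.std() / np.sqrt(n)
    print(f"n={n:4d}: spread of sample means {means.std():.3f} (s/sqrt(n) = {theory:.3f})")
# The spread of the sample means falls as 1/sqrt(n); it does not grow with n.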

Carlo, Monte
Reply to  Bellman
April 7, 2022 8:04 am

But none of this removes the fact that in general a larger sample will reduce the uncertainty of your result.

Just another load of your usual bullshite.

Jim Gorman
Reply to  Bellman
April 8, 2022 7:35 am

Bellman –> “But none of this removes the fact that in general a larger sample will reduce the uncertainty of your result.”

You are correct that a larger sample size will reduce the uncertainty of the estimated population mean.

But it does nothing to resolve the precision of the mean value. Too many people misinterpret the SEM to say that you can add decimal digits to the value of the mean. IOW, if the SEM is 0.005 I can carry out the calculation of the mean to 3 decimal digits and then show the uncertainty as 4 decimal digits. Wrong, wrong, wrong.

Bellman
Reply to  Jim Gorman
April 8, 2022 8:12 am

This is getting confusing. Tim says the SEM tells you the precision of the mean but not the uncertainty; you say the opposite.

Still, you are now saying that increasing sample size will reduce the uncertainty of the estimate, which is what I’ve been arguing for the last year while you and Tim have been insisting the uncertainty increases. So to be clear, do you now accept that the claim that uncertainty increases with increasing sample size is incorrect?

Tim Gorman
Reply to  Bellman
April 8, 2022 4:50 pm

You simply can’t read, or you can’t comprehend simple statistics. The standard deviation of the sample means is a measure of how precise the calculated mean is. If it is 60 with a standard deviation of 0.001, that does not mean the precision of the measurements contributing to the mean has increased to the thousandths digit. You can’t make a micrometer out of a ruler! It only means the spread of the contributing means is small. That doesn’t mean you can increase the number of significant digits in the mean.

Bellman
Reply to  Tim Gorman
April 8, 2022 7:10 pm

Yes, it’s a measure of how precise the sample mean is. Of course it doesn’t make the measurements more precise. It’s the mean that is more precise. If it’s 60 with a standard error of 0.001 then any other sample mean of the same size will likely be within ±0.002 of 60. Why do you think the precision of the measurements is relevant to this? And why do you think you cannot report this to 3 or 4 decimal places?

Tim Gorman
Reply to  Bellman
April 9, 2022 6:31 am

Because of the uncertainty of the mean! One more time: if all the measurements are off by an inch then their mean will be also! It doesn’t matter how precisely you calculate the mean; its accuracy can be no better than that of the measurements creating it. And part of that accuracy includes the significant digits in the measurements.

Bellman
Reply to  Tim Gorman
April 9, 2022 4:29 pm

What has being off by an inch got to do with the number of digits? You know the precision of the mean; just guessing there may be some arbitrary systematic error doesn’t affect that precision.

You could apply your logic to anything. Measure one of your precious wooden boards with a highly precise instrument to 0.001mm, but hey, how do you know it doesn’t have a systematic error of 5m? Best only give your measurement to the nearest 10m just to be sure you’re not committing fraud.

Tim Gorman
Reply to  Bellman
April 9, 2022 7:07 pm

A *precisely* calculated mean is not the same as the precision of the mean. You can’t increase precision by calculation. That is unethical and a fraud. No physical scientist or engineer would do such a thing. Sooner or later the fraud would be caught out.

Jim Gorman
Reply to  Bellman
April 9, 2022 7:10 am

Why must you be taught basic theory? The SEM is not how precise the sample mean is. It is a measure of the interval within which the population mean may lie.

Why an interval? First, let’s assume zero uncertainty in any of the data points. Second, assume each and every sample has a large enough size (“N”) to perfectly match the population distribution. Third, the population is a perfect Gaussian distribution.

The Central Limit Theorem predicts that the mean of the sample means will accurately predict the population mean. Why?

If the population distribution is Gaussian, then there is a plus point for every minus point from the mean. If the samples duplicate this, all the sample means will be the same value. What is the SEM in this case? It is zero. Guess what? You have just calculated the population mean exactly, with no error.

Now none of this ever happens. Populations are never absolutely Gaussian. Samples never accurately represent the whole population. Sample means always have a non-Gaussian distribution. That means there will exist a standard deviation of the sample means, and you must assume that the mean of the sample means can lie within a 1σ, 2σ, or whatever interval you choose. Most choose 1σ.

Now let’s analyze. If your sample size “N” is large enough and random enough, you can minimize sampling error because the sample will more accurately depict the population distribution. If you take enough samples you minimize sampling error for the same reason. But you can never reach perfection. You can never know just how accurate the mean of the sample means is. It could be anywhere within the ±σ interval.

If you have data that is all integer, then the sample data should be all integer and the deviations should be integer. In other words, the measurement resolution defines how many significant digits the actual mean value should have. Can you have a mean of sample means of 12 ± 0.001? How about 12 ± 0.000001? Sure. But you still can’t say the mean is 12.001 or 12.000001.

Bellman
Reply to  Jim Gorman
April 9, 2022 4:59 pm

Why must you be taught basic theory? The SEM is not how precise the sample mean is. It is a measure of the interval within which the population mean may lie.

Why do you always have to be so patronizing every time you write a long screed demonstrating how much you don’t understand?

Have a word with Tim. He says that SEM is a measure of precision of the mean, not uncertainty. He’s right about that. You for some reason are claiming that it isn’t indicating precision but is measuring uncertainty.

The SEM tells you the precision of the mean, but not the trueness, i.e. it shows the effects of random error but not systematic error. This can come, as Tim says, from having a systematic error in measurements, but more usually and significantly comes from biases in the sampling.

If you have data that is all integer, then the sample data should be all integer and the deviations should be integer.

Rubbish. I don’t know if any of your “rules” say anything of the sort. Consider a simple six-sided die. All integer values. Standard deviation 1.708. What possible benefit would there be in pretending that was 2?

In other words, the measurement resolution defines how many significant digits the actual mean value should have.

I keep telling you, that’s not what any of the metrology guides say has to happen. You determine the uncertainty and report to the same number of decimal places. What you keep describing are the simple rules taught to those who don’t get into the better treatment of uncertainty. They work fine for simple situations but are demonstrably wrong if you use them to argue that means taken from many observations cannot be more precise.

Can you have a mean of sample means of 12 ± 0.001? How about 12 ± 0.000001?

Theoretically, but I doubt any real-world one would have that small a confidence interval. What is your sample size? What’s the sample standard deviation?

To have a SEM of 0.001, assuming the standard deviation was 1 and your uncertainty range is 2σ, would require a sample size of 4,000,000. To get 0.000001 would require 4,000,000,000,000.
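
For anyone checking the arithmetic, those sample sizes follow from SEM = s/sqrt(n) (a sketch, assuming s = 1 and a 2σ bound as stated):

sd = 1.0
for half_width in (0.001, 0.000001):   # quoted 2-sigma half-widths
    sem = half_width / 2               # convert the 2-sigma bound to a SEM
    n = (sd / sem) ** 2                # from SEM = sd / sqrt(n)
    print(f"2-sigma half-width {half_width}: n = {n:,.0f}")
# prints n = 4,000,000 and n = 4,000,000,000,000, matching the figures above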

One reason to determine the SEM is to decide what is a reasonable sample size. There’s unlikely to be any benefit in taking millions of samples just to get an extra decimal point of precision, especially as the smaller this figure becomes, the greater the influence of any systematic bias will be.

Sure. But you still can’t say the mean is 12.001 or 12.000001.

I’m not sure you understand what those ± symbols represent. This isn’t an equation where ± means both add and subtract that value. It’s simply a shorthand way of showing that this is an interval.

Bellman
Reply to  Tim Gorman
April 7, 2022 7:18 am

One more time — If the uncertainty intervals for those stated values contain *any*, and I mean ANY, systematic error then the means you calculate simply won’t be accurate.

Define accurate. Nothing is ever 100% accurate; what you are trying to do is improve accuracy. If I’m taking a sample and measuring them with a systematic error, that error will also be present in the mean. This means that larger samples will not improve the trueness, but will improve precision and hence accuracy.

In general this is unlikely to be a problem as any systematic measuring error should be small compared with the sampling uncertainty. And if you are using an instrument that has such a large systematic error that it affects the result of your experiment, that’s more of an issue with your procedures than it is with the statistics.

Carlo, Monte
Reply to  Bellman
April 7, 2022 8:05 am

Get some clues PDQ, please!

Tim Gorman
Reply to  Bellman
April 6, 2022 8:25 am

“That’s not the uncertainty of the trend line being discussed here. I think you are talking about the prediction interval, not the confidence interval.”

Nope! If the stated values have an uncertainty interval then the residuals will also have an uncertainty. And that uncertainty of the residuals carries over to the uncertainty of the trend line. *YOU*, on the other hand, continually want to ignore that uncertainty in the residuals which results from the uncertainty of the stated values.

You were given the attached graph to show this. You were provided several other graphs showing the same thing in more detail. And you just blew them off because they were outside your understanding.

Try one more time to understand.

[attached image: residual_uncertainty.jpg]
Bellman
Reply to  Tim Gorman
April 6, 2022 12:47 pm

I have no idea what you think that doodle has to do with anything. It might make some sense if you are trying to get a rough idea of a trend in a lab, when you only have 3 measurements, but it is not how you determine a trend statistically, with hundreds of data points and variances that have little to do with measurement errors.

Tim Gorman
Reply to  Bellman
April 6, 2022 4:09 pm

I didn’t figure you would be able to understand it. It’s too far outside your limits of knowledge.

In essence it is three data points and their uncertainty intervals. I drew three different trend lines that all stay within those uncertainty intervals. One line has a positive slope, one has a zero slope, and one has a negative slope.

It doesn’t matter how many points you have, as long as you can draw a straight line connecting the points within their individual uncertainty intervals. It’s only when the stated values are far enough apart that you can no longer draw a straight line through their uncertainty intervals that you can give any credence to the trend line. Temperatures and temperature anomalies simply aren’t spread that far apart from data value to data value.

The *variances* associated with the data values *ARE* the uncertainty intervals of the measurements! You can’t even get that right!
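
For what it’s worth, the “can one straight line pass through every uncertainty bar” test can be posed as a small linear feasibility problem rather than drawn by eye. A sketch, with hypothetical data and scipy assumed available:

# Is there a line y = m*x + b that stays inside every uncertainty bar,
# i.e. |m*x_i + b - y_i| <= u_i for all i? Solve as a feasibility LP.
import numpy as np
from scipy.optimize import linprog

def line_fits_within_bars(x, y, u):
    x, y, u = map(np.asarray, (x, y, u))
    A = np.column_stack([x, np.ones_like(x)])   # rows give m*x_i + b
    A_ub = np.vstack([A, -A])                   # upper and lower constraints
    b_ub = np.concatenate([y + u, -(y - u)])
    res = linprog(c=[0, 0], A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None), (None, None)])
    return res.success

# Three points with wide bars: lines of positive, zero and negative
# slope all fit, so the trend's sign is not pinned down.
print(line_fits_within_bars([0, 1, 2], [1.0, 1.1, 0.9], [0.5, 0.5, 0.5]))  # True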

Bellman
Reply to  Tim Gorman
April 7, 2022 7:29 am

I know what you are trying to say, I just don’t think your doodle makes it very clear.

You are using a simplified type of line fitting, appropriate to some simple situations. In particular the assumption is that an exact linear relationship exists, and any deviation of a point from the line is the result of measurement error. Hence it is assumed you should be able to draw a straight line which passes through all uncertainty bars.

This is not usually the case where linear regression is being used. There may be an exact linear relationship between the independent and dependent variables, but there is no assumption that all points will fall exactly on that line, however exact the measurements are. That’s because there are other independent factors affecting the dependent variable.

You don’t just guess what the best fit is, you calculate it, and you can also calculate the confidence intervals of this trend. As always this is based on assumptions, and there are many factors, such as autocorrelation, that can affect the confidence.
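
The “calculate it” step is routine; a minimal sketch with made-up numbers, using ordinary least squares and a t-based interval on the slope (this ignores autocorrelation, which would widen the interval):

# OLS trend with a 95% confidence interval on the slope.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
x = np.arange(120)                           # e.g. 120 months
y = 0.013 * x + rng.normal(0, 0.15, x.size)  # trend plus natural variation

fit = stats.linregress(x, y)
t_crit = stats.t.ppf(0.975, df=x.size - 2)
print(f"slope = {fit.slope:.4f} ± {t_crit * fit.stderr:.4f} per month")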

Carlo, Monte
Reply to  Bellman
April 7, 2022 2:46 pm

More blather.

Bellman
Reply to  Bellman
April 7, 2022 4:31 pm

For example, here’s the pause period in UAH with the claimed ±0.2°C monthly uncertainty. I’m not sure how you could determine the trend line from that, let alone the confidence interval, just by drawing a straight line through all the uncertainty bars. Any trend will have to miss the uncertainty bounds by some margin, but that is not because the measurements are wrong; it’s because of the natural variance in the actual temperatures.

20220407wuwt4.png
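
One way to quantify how much that ±0.2°C monthly uncertainty actually moves the trend is a quick Monte Carlo, sketched below with synthetic data standing in for the UAH series (treating ±0.2°C as a 2σ interval, so σ = 0.1°C):

# Monte Carlo: perturb each month by N(0, 0.1), refit the trend,
# and see how much the slope spreads from measurement noise alone.
import numpy as np

rng = np.random.default_rng(0)
months = np.arange(90)                                   # ~7.5 years
base = rng.normal(0, 0.15, months.size)                  # flat "pause" series

slopes = []
for _ in range(10_000):
    perturbed = base + rng.normal(0, 0.1, months.size)   # measurement noise
    slopes.append(np.polyfit(months, perturbed, 1)[0] * 120)  # per decade
print(f"slope spread from measurement noise (2 sigma): ±{2 * np.std(slopes):.3f} C/decade")

The spread comes out around ±0.1°C/decade, far smaller than the trend uncertainty produced by the natural variance itself.
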
Bellman
Reply to  Bellman
April 7, 2022 4:36 pm

And here’s the same using Carlo, Monte’s uncertainty ranges. Now you could easily fit a lot of different trends. But if you argue that the uncertainty of the trend is any line you could fit in that area, you would be saying the uncertainty was around ±7°C over the seven and a half years, meaning the true warming rate could be anything within ±9°C / decade.

20220407wuwt4.png
Jim Gorman
Reply to  Bellman
April 8, 2022 8:33 am

You just made the best argument anyone could for how little value linear regressions have when dealing with time-varying phenomena.

You would like a straight line so it can be shown to correlate with the CO2 increase. But the issue is that climate has a multitude of cycles. Why would anyone try to use linear regression to describe a temperature that is the result of numerous cycles? There is only one reason: to present a simplified picture of growth regardless of its accuracy. As time goes on, linear regression is going to mean less and less as the cycles become more and more apparent. Live with it. Learn time series analysis techniques.
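
For what it’s worth, a linear trend plus a sinusoidal cycle can be fitted by ordinary least squares just as easily as a straight line; whether the fit means anything out of sample is the real question. A sketch, in which the 60-year period and all the numbers are assumptions for illustration only:

# Least-squares fit of a linear trend plus one sinusoidal cycle.
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(0, 140, 1/12)                       # years, monthly steps
y = 0.005 * t + 0.2 * np.sin(2 * np.pi * t / 60) + rng.normal(0, 0.1, t.size)

period = 60.0                                     # assumed cycle length, years
X = np.column_stack([np.ones_like(t), t,
                     np.sin(2 * np.pi * t / period),
                     np.cos(2 * np.pi * t / period)])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"trend = {coef[1]:.4f} C/yr, cycle amplitude = {np.hypot(coef[2], coef[3]):.3f} C")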

Bellman
Reply to  Jim Gorman
April 8, 2022 11:34 am

You attacked me for making predictions based on linear regression, when I’d done no such thing. Now you want to predict the future based on cycle fitting.

Jim Gorman
Reply to  Bellman
April 8, 2022 2:37 pm

You keep referring to “degrees/decade”. That is fine for the data you have. But you’d better be sure that linear regression truly describes the data before you start traipsing into the future. Very few, if any, can see into the future accurately. Those who say we could see 5 degrees or more of warming should have to bet their farms on it, and forfeit the farms when it doesn’t come true.

Bellman
Reply to  Jim Gorman
April 8, 2022 4:30 pm

Once again, I am not making any prediction about the future from any linear trend. The trend simply shows what has happened. I use degrees / decade because it’s a unit for the rate of temperature change. Monckton uses degrees / century. It does not mean you expect the trend to continue for that length of time, any more than giving a speed in kilometers / hour means you expect that speed to be maintained for an hour.

Bellman
Reply to  Jim Gorman
April 8, 2022 4:36 pm

You really need to take this up with Monckton. He’s the one who only ever uses linear trends regardless of sense, either showing a trend over a very short time span and calling it a pause, or using a linear trend over the past 170 years when the data is clearly not linear.

Any straight-line trend, or any other linear regression, is only an estimate of what is happening. So too is any attempt to fit cycles. Either could be wrong. If you want to test the fit you need to do more rigorous testing, e.g. separating out training and testing data.

But trying to predict the future is better done using an understanding of the climate, not extrapolating from the past.
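
The “training and testing data” point, sketched with synthetic data (neither model is endorsed here): fit both models on the early part of a series, then score each on the held-out remainder.

# Out-of-sample comparison: fit linear vs. trend-plus-cycle models on
# the first 70% of a synthetic series, compare MSE on the last 30%.
import numpy as np

rng = np.random.default_rng(7)
t = np.arange(0, 100, 1/12)
y = 0.01 * t + rng.normal(0, 0.15, t.size)        # synthetic, truly linear

cut = int(0.7 * t.size)
linear = np.column_stack([np.ones_like(t), t])
cyclic = np.column_stack([np.ones_like(t), t,     # assumed 60-year cycle
                          np.sin(2*np.pi*t/60), np.cos(2*np.pi*t/60)])
for name, X in [("linear", linear), ("trend+cycle", cyclic)]:
    coef, *_ = np.linalg.lstsq(X[:cut], y[:cut], rcond=None)
    mse = np.mean((y[cut:] - X[cut:] @ coef) ** 2)
    print(f"{name}: test MSE = {mse:.4f}")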

Jim Gorman
Reply to  Bellman
April 7, 2022 10:08 am

What do you think those curved blue bands on the graph you posted are for? Draw a line connecting the top left to the bottom right, then another from the top right to the bottom left. That is the uncertainty interval, and the trend can lie anywhere within it.

trend with error bars (1).jpg
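
Those curved bands are the pointwise confidence band of the fitted line, ŷ ± t·s·√(1/n + (x − x̄)²/Sxx); they flare out away from the centre of the data, which is why the extreme corner-to-corner lines pivot around the middle. A sketch with made-up numbers:

# Pointwise 95% confidence band for an OLS regression line.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
x = np.arange(60.0)
y = 0.02 * x + rng.normal(0, 0.2, x.size)

n = x.size
fit = stats.linregress(x, y)
resid = y - (fit.intercept + fit.slope * x)
s = np.sqrt(np.sum(resid**2) / (n - 2))           # residual standard error
Sxx = np.sum((x - x.mean())**2)
t_crit = stats.t.ppf(0.975, df=n - 2)
half = t_crit * s * np.sqrt(1/n + (x - x.mean())**2 / Sxx)
print(f"band half-width: {half.min():.3f} at the centre, {half.max():.3f} at the ends")
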
Bellman
Reply to  Jim Gorman
April 7, 2022 10:33 am

Exactly. And it has nothing to do with measurement uncertainty.

Carlo, Monte
Reply to  Bellman
April 7, 2022 2:46 pm

Idiot!

Tim Gorman
Reply to  Bellman
April 6, 2022 8:30 am

“As Bigoilbob said some time ago, it is possible to incorporate measurement uncertainties into the trend line uncertainty, but it makes little difference. But I also think it’s a pointless task, as any uncertainty is already present in the variation of the values.”

It makes a BIG difference, as the graph I attached in the other message shows. You are *still* looking for a rationale for your assumption that the stated values are 100% accurate!

“Nobody claims any stated values are 100% accurate. And all data sets assume there is systematic error, that’s why the data is adjusted.”

*YOU DO!* Every time you claim the uncertainty of a trend line is based on the residuals calculated solely from the stated values with no consideration of the uncertainty of the stated values.

And NO ONE can adjust temperature measurements other than on a station-by-station basis. Hubbard and Liu showed that in their mid-2000s papers on the subject! You *especially* can’t adjust past measurements, because you don’t know anything about the past.

Bellman
Reply to  Tim Gorman
April 6, 2022 12:50 pm

“You are *still* looking for a rationale for your assumption that the stated values are 100% accurate!”

You really don’t help your case by continuously repeating these false claims.

“*YOU DO!* Every time you claim the uncertainty of a trend line is based on the residuals calculated solely from the stated values, with no consideration of the uncertainty of the stated values.”

No I don’t. I assume that measurement uncertainties are unlikely to make much of a difference; that doesn’t mean I assume they don’t exist.

Tim Gorman
Reply to  Bellman
April 6, 2022 4:11 pm

“No I don’t. I assume that measurement uncertainties are unlikely to make much of a difference; that doesn’t mean I assume they don’t exist.”

I just posted a graph to you showing that the measurement uncertainty DOES MAKE A DIFFERENCE!

I’m sure you will deny ever getting the graph because it just puts the lie to your claim here!

When you ASSume they don’t make a difference, you *are* assuming the stated values are 100% accurate!

Carlo, Monte
Reply to  Bellman
April 7, 2022 2:47 pm

“I assume that measurement uncertainties are unlikely to make much of a difference”

Which is just another sign of your idiocy.

Tim Gorman
Reply to  Bellman
April 6, 2022 8:33 am

“Stop with these untruths. I don’t have to “believe” that Taylor and Bevington are correct. I know they are because it’s just standard statistics. The problem is you refuse to accept you are misinterpreting what they say.”

It is *NOT* standard statistics. It is metrology statistics. I gave you numerous examples from four different statistics textbooks showing that they *ALL* ignore the uncertainty of the data values. Not a single one covered how to propagate uncertainties in the real world.

You are living in a textbook world where there is *NO* uncertainty in the data values and, in most cases, a Gaussian distribution is assumed.

Nor am I misinterpreting what they say. You’ve been shown that over and over and over again.

Bellman
Reply to  Tim Gorman
April 6, 2022 12:53 pm

“It is *NOT* standard statistics. It is metrology statistics.”

I hadn’t realized metrologists had invented their own version of statistics. Nothing in any of your textbooks suggests there’s any difference between the two.

Tim Gorman
Reply to  Bellman
April 6, 2022 4:15 pm

Standard statistics texts do *NOT* teach about uncertainty: how to calculate it, how to propagate it, and when different approaches apply. I posted excerpts from FOUR different statistics textbooks as examples.

The proof is your posting that assumes all error is totally random and all data/error distributions are Gaussian. You don’t have the slightest idea how to handle a skewed data set with an increasing systematic error, e.g. measuring the diameter of a wire being drawn through a die that wears as it is used.
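
That die-wear example is easy to simulate: a systematic error that drifts upward shifts the mean, and averaging more readings does not remove it. A sketch, all numbers made up:

# A systematic error that grows over time (progressive die wear)
# biases the mean; more readings do not average it away.
import numpy as np

rng = np.random.default_rng(5)
true_diameter = 2.000                        # mm
n = 10_000
drift = np.linspace(0, 0.05, n)              # bias grows to +0.05 mm
readings = true_diameter + drift + rng.normal(0, 0.01, n)

print(f"mean of readings : {readings.mean():.4f} mm")   # ~2.0250
print(f"bias left in mean: {readings.mean() - true_diameter:+.4f} mm")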

Tim Gorman
Reply to  Bellman
April 6, 2022 8:35 am

“I literally do not do that. The whole point of the statistics, whether talking about random sampling and the standard error of the mean or propagating measurement errors, is that they do not all cancel out.”

Then why do you assume that they *do* all cancel out? Why do you assume that the uncertainty of multiple measurements of different things, made with different devices, doesn’t grow as you add elements to the data set? Why do you assume all the uncertainties cancel out?

Bellman
Reply to  Tim Gorman
April 6, 2022 12:55 pm

“Then why do you assume that they *do* all cancel out?”

Perhaps if you just stopped making stuff up, you would understand better.

“Why do you assume that the uncertainty of multiple measurements of different things, made with different devices, doesn’t grow as you add elements to the data set?”

Do you understand that there is a big set of options between “everything cancels out” and “everything keeps growing”?
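
To put numbers on that middle ground: for a mean of n readings, each with random standard uncertainty u_r plus a shared systematic offset u_s, the combined uncertainty is √(u_r²/n + u_s²). The random part shrinks with n; the systematic part does not. A sketch:

# Uncertainty of a mean: the random component shrinks as 1/sqrt(n),
# a shared systematic component does not shrink at all.
import numpy as np

def mean_uncertainty(u_random, u_systematic, n):
    return np.sqrt(u_random**2 / n + u_systematic**2)

for n in (1, 100, 10_000):
    print(n, round(mean_uncertainty(0.5, 0.2, n), 4))
# 1 -> 0.5385, 100 -> 0.2062, 10000 -> 0.2001: neither "all cancels
# out" nor "keeps growing"; it floors at the systematic term.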

“Why do you assume all the uncertainties cancel out?”

Why do you assume they faked the moon landings?