UAH v6.1 Global Temperature Update for October, 2024: +0.73 deg. C After Truncation of the NOAA-19 Satellite Record

From Dr Spencer’s Global Warming Blog

by Roy W. Spencer, Ph. D.

The Version 6.1 global average lower tropospheric temperature (LT) anomaly for October, 2024 was +0.73 deg. C departure from the 1991-2020 mean, down from the September, 2024 anomaly of +0.80 deg. C.

The new (Version 6.1) global area-averaged temperature trend (January 1979 through October 2024) is now +0.15 deg. C/decade (+0.21 C/decade over land, +0.13 C/decade over oceans).

The previous (version 6.0) trends through September 2024 were +0.16 C/decade (global), +0.21 C/decade (land) and +0.14 C/decade (ocean).

The following provides background for the change leading to the new version (v6.1) of the UAH dataset.

Key Points

  • The older NOAA-19 satellite has now drifted too far through the diurnal cycle for our drift correction methodology to provide useful adjustments. Therefore, we have decided to truncate the NOAA-19 data processing starting in 2021. This leaves Metop-B as the only satellite in the UAH dataset since that date. This truncation is consistent with those made to previous satellites after orbital drift began to impact temperature measurements.
  • This change reduces recent record global warmth only a little, bringing our calculated global temperatures more in line with the RSS and NOAA satellite datasets over the last 2-3 years.
  • Despite the reduction in recent temperatures, the 1979-2024 trend is reduced by only 0.01 deg. C/decade, from +0.16 C/decade to +0.15 C/decade. Recent warmth during 2023-2024 remains record-setting for the satellite era, with each month since October 2023 setting a record for that calendar month.

Background

Monitoring of global atmospheric deep-layer temperatures with satellite microwave radiometers (systems originally designed for daily global weather monitoring) has always required corrections and adjustments to the calibrated data to enable long-term trend detection. The most important of these corrections/adjustments are:

  1. Satellite calibration biases, requiring intercalibration between successively launched satellites during overlaps in operational coverage. These adjustments are typically tenths of a degree C.
  2. Drift of the orbits from their nominal sun-synchronous observation times, requiring empirical corrections from comparison of a drifting satellite to a non-drifting satellite (the UAH method), or from climate models (the Remote Sensing Systems [RSS] method, which I believe the NOAA dataset also uses). These corrections can reach 1 deg. C or more for the lower tropospheric (LT) temperature product, especially over land and during the summer.
  3. Correction for instrument body temperature effects on the calibrated temperature (an issue with only the older MSU instruments, which produced spurious warming).
  4. Orbital altitude decay adjustment for the multi-view angle version of the lower tropospheric (LT) product (no longer needed for the UAH dataset as of V6.0, which uses multiple channels instead of multiple angles from a single channel.)

The second of these adjustments (diurnal drift) is the subject of the change made going from UAH v6.0 to v6.1. The following chart shows the equator crossing times (local solar time) for the various satellites making up the satellite temperature record. The drift of the satellites (except the non-drifting Aqua and MetOp satellites, which have fuel onboard to allow orbit maintenance) produces cooling for the afternoon satellites’ LT measurements as the afternoon observation transitions from early afternoon to evening. Drift of the morning satellites makes their LT temperatures warm as their evening observations transition to the late afternoon.

The red vertical lines indicate the dates after which a satellite’s data are no longer included in the v6.0 (UAH) processing, with the NOAA-19 truncation added for v6.1. Note that the NOAA-19 satellite has drifted further in local observation time than any of the previous afternoon satellites. The NOAA-19 local observation times have been running outside the range of our training dataset, which assumes a linear diurnal temperature drift with time. So we have decided it is now necessary to truncate the data from NOAA-19 starting in 2021, which we are now doing as of the October, 2024 update.

Thus begins Version 6.1 of our dataset, a name change meant to reduce confusion and indicate a significant change in our processing. As seen in the above figure, 2020 as the last year of NOAA-19 data inclusion is roughly consistent with the v6.0 cutoff times from the NOAA-18 and NOAA-14 (afternoon) satellites.

This type of change in our processing is analogous to changes we have made in previous years, after a few years of data being collected to firmly establish a problem exists. The time lag is necessary because we have previously found that two operating satellites in different orbits can diverge in their processed temperatures, only to converge again later. As will be shown below, we now have sufficient reason to truncate the NOAA-19 data record starting in 2021.

Why Do We Even Include a Satellite if it is Drifting in Local Observation Time?

The reasons why a diurnally drifting satellite is included in processing (with imperfect adjustments) are three-fold: (1) most satellites in the 1979-2024 period of record drifted, and so their inclusion was necessary to make a complete, intercalibrated satellite record of temperatures; (2) two operational satellites (usually one drifting much more than the other) provide more complete sampling during the month for our gridded dataset, which has 2.5 deg. lat/lon resolution; (3) having two (or sometimes 3) satellites allows monitoring of potential drifts, i.e., the time series of the difference between 2 satellite measurements should remain relatively stable over time.
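
A minimal sketch of the monitoring idea in point (3), assuming two co-registered monthly global LT anomaly series are already in hand; the arrays below are synthetic placeholders, not real satellite data, and this is not the UAH correction method itself, only the stability check described above.

```python
import numpy as np

def intersatellite_drift(anoms_a, anoms_b):
    """Difference two co-registered monthly anomaly series and fit a linear
    trend to the difference; a stable pair should give a trend near zero,
    while diurnal drift shows up as a persistent slope."""
    diff = np.asarray(anoms_a) - np.asarray(anoms_b)
    months = np.arange(diff.size)
    slope, intercept = np.polyfit(months, diff, 1)
    return diff, slope * 120.0  # per-month slope converted to deg C per decade

# Synthetic example: satellite B slowly cools relative to satellite A.
rng = np.random.default_rng(0)
sat_a = rng.normal(0.0, 0.1, 48)           # 4 years of monthly anomalies
sat_b = sat_a - 0.002 * np.arange(48)      # plus a spurious drift
diff, drift_per_decade = intersatellite_drift(sat_a, sat_b)
print(f"apparent relative drift: {drift_per_decade:+.2f} C/decade")
```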

Version 6.1 Brings the UAH Data closer to RSS and NOAA in the Last Few Years

Several people have noted that our temperature anomalies have been running warmer than those from the RSS or NOAA satellite products. It now appears this was due to the orbital drift of NOAA-19 beyond the useful range of our drift correction. The following plot (preliminary, provided to me by John Christy) shows that truncation of the NOAA-19 record now brings the UAH anomalies more in line with the RSS and NOAA products.

As can be seen, this change has lowered recent global-average temperatures considerably. For example, without truncation of NOAA-19, the October anomaly would have been +0.94 deg. C, but with only MetOp-B after 2020 it is now +0.73 deg. C.

The following table lists various regional Version 6.1 LT departures from the 30-year (1991-2020) average for the last 22 months (record highs are in red):

YEAR  MO    GLOBE  NHEM.  SHEM.  TROPIC  USA48  ARCTIC   AUST
2023  Jan   -0.07  +0.06  -0.21   -0.42  +0.14   -0.11  -0.45
2023  Feb   +0.06  +0.12  +0.01   -0.15  +0.64   -0.28  +0.11
2023  Mar   +0.17  +0.21  +0.14   -0.18  -1.35   +0.15  +0.57
2023  Apr   +0.12  +0.04  +0.20   -0.10  -0.43   +0.46  +0.38
2023  May   +0.29  +0.16  +0.42   +0.33  +0.38   +0.54  +0.13
2023  June  +0.31  +0.34  +0.28   +0.51  -0.54   +0.32  +0.24
2023  July  +0.57  +0.60  +0.55   +0.83  +0.28   +0.81  +1.49
2023  Aug   +0.61  +0.77  +0.44   +0.77  +0.69   +1.49  +1.29
2023  Sep   +0.80  +0.83  +0.77   +0.82  +0.28   +1.12  +1.15
2023  Oct   +0.78  +0.84  +0.72   +0.84  +0.81   +0.81  +0.56
2023  Nov   +0.77  +0.87  +0.67   +0.87  +0.52   +1.07  +0.28
2023  Dec   +0.74  +0.91  +0.57   +1.00  +1.23   +0.31  +0.64
2024  Jan   +0.79  +1.01  +0.57   +1.18  -0.19   +0.39  +1.10
2024  Feb   +0.86  +0.93  +0.79   +1.14  +1.30   +0.84  +1.14
2024  Mar   +0.87  +0.95  +0.80   +1.24  +0.23   +1.05  +1.27
2024  Apr   +0.94  +1.12  +0.76   +1.14  +0.87   +0.89  +0.51
2024  May   +0.78  +0.78  +0.79   +1.20  +0.06   +0.23  +0.53
2024  June  +0.70  +0.78  +0.61   +0.85  +1.38   +0.65  +0.92
2024  July  +0.74  +0.86  +0.62   +0.97  +0.42   +0.58  -0.13
2024  Aug   +0.75  +0.81  +0.69   +0.73  +0.38   +0.90  +1.73
2024  Sep   +0.80  +1.03  +0.56   +0.80  +1.28   +1.49  +0.96
2024  Oct   +0.73  +0.87  +0.59   +0.61  +1.84   +0.81  +1.07
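
A minimal sketch of the "record for that calendar month" check mentioned in the key points, assuming the full monthly series has been loaded as (year, month, anomaly) tuples; the short list below is only a stand-in for loading the published file.

```python
def is_monthly_record(series, year, month):
    """True if the anomaly for (year, month) exceeds every earlier anomaly
    recorded for the same calendar month in the series."""
    target = dict(((y, m), a) for y, m, a in series)[(year, month)]
    earlier = [a for y, m, a in series if m == month and y < year]
    return bool(earlier) and target > max(earlier)

# Hypothetical excerpt; the real check needs the full 1979-2024 series.
sample = [(2022, 10, 0.23), (2023, 10, 0.78), (2024, 10, 0.73)]
print(is_monthly_record(sample, 2023, 10))  # True: warmest October so far
print(is_monthly_record(sample, 2024, 10))  # False: Oct 2023 was warmer
```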

The full UAH Global Temperature Report, along with the LT global gridpoint anomaly image for October, 2024, and a more detailed analysis by John Christy, should be available within the next several days here. This could take a little longer this time due to the changes resulting from going from v6.0 to v6.1 of the dataset.

The monthly anomalies for various regions for the four deep layers we monitor from satellites will be available in the next several days (also possibly delayed):

Lower Troposphere:

http://vortex.nsstc.uah.edu/data/msu/v6.1/tlt/uahncdc_lt_6.1.txt

Mid-Troposphere:

http://vortex.nsstc.uah.edu/data/msu/v6.1/tmt/uahncdc_mt_6.1.txt

Tropopause:

http://vortex.nsstc.uah.edu/data/msu/v6.1/ttp/uahncdc_tp_6.1.txt

Lower Stratosphere:

http://vortex.nsstc.uah.edu/data/msu/v6.1/tls/uahncdc_ls_6.1.txt
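
A minimal sketch (not the official processing) for reproducing the headline global trend from the LT file linked above, assuming the file keeps its usual whitespace-separated layout with Year, Mo, and the global anomaly as the first three columns of each data row and non-numeric trailer lines at the bottom; check the file header before trusting the column positions.

```python
import urllib.request
import numpy as np

URL = "http://vortex.nsstc.uah.edu/data/msu/v6.1/tlt/uahncdc_lt_6.1.txt"

def global_trend_per_decade(url=URL):
    """Fetch the LT text file and fit a least-squares trend to the global
    column, returned in deg C per decade. Assumes columns: Year Mo Globe ..."""
    text = urllib.request.urlopen(url, timeout=30).read().decode("ascii", "replace")
    years, anoms = [], []
    for line in text.splitlines():
        parts = line.split()
        # keep only rows that start with a plausible year and month
        if (len(parts) >= 3 and parts[0].isdigit() and parts[1].isdigit()
                and 1978 < int(parts[0]) < 2100):
            year, month = int(parts[0]), int(parts[1])
            years.append(year + (month - 0.5) / 12.0)
            anoms.append(float(parts[2]))
    slope, _ = np.polyfit(np.array(years), np.array(anoms), 1)
    return 10.0 * slope

if __name__ == "__main__":
    print(f"global LT trend: {global_trend_per_decade():+.3f} C/decade")
```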


374 Comments
November 4, 2024 6:13 am

It’s going to take some time to shed the Tonga/El Niño warmth in the Northern Hemisphere. Meanwhile, despite a La Nina, the Western US ski season is off to a great start…

Reply to  johnesm
November 4, 2024 6:44 am

Meanwhile, in the Southern Hemisphere . . .

leefor
Reply to  ToldYouSo
November 5, 2024 4:06 am

What about the SH? September 2023 was warmer than October 2024. October 2023 was cooler. Or were you comparing something else? Or not comparing at all?

Reply to  leefor
November 5, 2024 5:59 am

I see that you don’t know.

See the attached excerpt copied directly from the UAH Global Temperature Report for August 2024, dated 4 September 2024 (free download available at https://www.nsstc.uah.edu/climate/2024/August/GTR_202408AUG_v1.pdf ).

Bottom line: on average the SH is running significantly cooler (anomaly of +0.81 C, or +1.46°F, above seasonal average) than is the NH (anomaly of +0.96 C, or +1.73°F, above seasonal average).

I find that quite strange given that the Hunga Tonga volcano is located at 20.5° South latitude and that its injection of water vertically into the stratosphere in January 2022 is asserted to be causing warming of the lower atmosphere GLOBALLY.

Reply to  ToldYouSo
November 5, 2024 6:00 am

Ooops . . . here’s the copied excerpt:

[attached image]
Jeff Alberts
Reply to  johnesm
November 4, 2024 8:06 am

Where is all this heat? Hasn’t been here in the PNW.

Reply to  johnesm
November 4, 2024 10:54 am

As I mentioned a few weeks back, I suspect we will see a somewhat slower atmospheric cooling from the El Nino peak than usual, due to the extra HT WV in the stratosphere above the higher latitudes.

But I can’t see how anybody in their right mind could blame the 2023 El Nino event and HT event on any human causation.

Bob B.
Reply to  bnice2000
November 5, 2024 4:41 am

Oh but they do

November 4, 2024 6:17 am

Still no variances, standard deviations, or degrees of freedom reported for any of these myriad averages.

Reply to  karlomonte
November 4, 2024 7:53 am

If the error bars were on the UAH, NOAA or RSS graphs, they would be about 0.5 C wide… which would detract from the basic 0.15 C per decade global warming trend that is the basis of ALL government programs.

Reply to  DMacKenzie
November 4, 2024 11:21 am

Exactly, would make those regression fit lines look rather silly.

Reply to  DMacKenzie
November 6, 2024 1:04 am

The basis “of all those government programs” and the general climate panic is that the gentle 0.15°C per decade, 1.5°C per century, will morph into 3-5°C per century because of mysterious forces known only by computer models written by egotistical socialist misanthropes, who delight in grooming and terrifying children, and those with the intellect of a child.

Reply to  karlomonte
November 4, 2024 8:36 am

Beat me to it. Measurements should always be quoted with a stated value and an uncertainty.

I would like to see an uncertainty budget and the propagation through the calculations.

Reply to  Jim Gorman
November 6, 2024 1:07 am

Science 101.

The fact that climate scientists seem unacquainted with the basics of measurement should negate any of their findings.

November 4, 2024 6:19 am

After this update, this is the second warmest October on record.

Year Anomaly
 2024 0.73
 2023 0.78
 2017 0.48
 2020 0.39
 2021 0.38
 2022 0.33
 2019 0.31
 2015 0.28
 2016 0.28
 1998 0.24

Figures prior to 2023 are using the old version.

It’s still looking nearly certain that 2024 will be a record year by some margin. My projections are that 2024 will be 0.75 +/- 0.06°C. 2023 was 0.43°C.

I make the old trend 0.159C / decade, and the new 0.154C / decade. The new trend is only including changes made since 2023.

Reply to  Bellman
November 4, 2024 6:25 am

I have asked Spencer how this change will affect the uncertainty. His response is that it should increase the accuracy, but there will be more noise in the grid points.

strativarius
Reply to  Bellman
November 4, 2024 6:30 am

“0.159C / decade, and the new 0.154C”

And you really felt that change, right?

Reply to  strativarius
November 4, 2024 7:05 am

It’s a change in the measurements, not the actual temperature.

strativarius
Reply to  Bellman
November 4, 2024 7:41 am

It’s navel gazing at best.

Reply to  Bellman
November 4, 2024 7:45 am

I’m looking for the comprehensive temperature records, at approximately 1 m above sea level (Stevenson screen height) across the world’s oceans since, say, 1850.

Any idea where I can find it?

Jeff Alberts
Reply to  Bellman
November 4, 2024 8:08 am

Umm, wut?

Reply to  Bellman
November 4, 2024 11:00 am

But there is no evidence of any human causation, is there.

The 2023, 24 peak is NOT AGW, it is NGW

Jeff Alberts
Reply to  Bellman
November 4, 2024 8:07 am

“After this update, this is the second warmest October on record.”

Where exactly?

Reply to  Jeff Alberts
November 4, 2024 8:27 am

The global average. You know, the thing this article is describing in the headline.

strativarius
Reply to  Bellman
November 4, 2024 8:35 am

The metric that has no basis in reality.

Reply to  strativarius
November 4, 2024 9:13 am

I think the average person, living in the average global temperature, would find it too cold.

strativarius
Reply to  David Pentland
November 4, 2024 9:16 am

I think you’re right…

Nick Stokes
Reply to  David Pentland
November 4, 2024 11:32 am

They certainly would up there where UAH measures it.

Reply to  Nick Stokes
November 4, 2024 1:51 pm

The average surface temperature on Earth is approximately 15 degrees Celsius, according to NASA (Sep 20, 2023).

The World Health Organisation says that the ideal ambient temperature for humans is at least 18°C (64.4°F).

https://apps.who.int/iris/rest/bitstreams/1161792/retrieve#page=54

bdgwx
Reply to  David Pentland
November 4, 2024 7:41 pm

UAH does not measure the surface temperature. The layer of the atmosphere they measure is about -9 C.

Reply to  bdgwx
November 4, 2024 8:17 pm

The layer of the atmosphere they measure is about -9 C.

Don’t you mean something like 9.123°C? Otherwise, where do the significant digits for an anomaly in the millikelvins come from?

Reply to  Nick Stokes
November 4, 2024 2:46 pm

Do you wear a warm jumper in winter, Nick?

Probably for around half the year ??

Reply to  Bellman
November 4, 2024 9:25 am

I make the old trend 0.159C / decade, and the new 0.154C / decade. The new trend is only including changes made since 2023.

I now have all the data, and the trend for v6.1 is 0.151 ± 0.025°C / decade.

Top 10 for October is now

   Year Anomaly
1  2023    0.78
2  2024    0.73
3  2017    0.47
4  2020    0.38
5  2021    0.34
6  2019    0.29
7  2015    0.28
8  2016    0.28
9  1998    0.24
10 2022    0.23
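
A minimal sketch of the kind of trend-plus-uncertainty figure quoted above, assuming `anoms` holds the monthly global anomalies in time order; the standard error here is the plain OLS one and ignores autocorrelation, so it is not necessarily how the quoted ±0.025 was computed and the realistic interval would be wider.

```python
import numpy as np

def trend_with_se(anoms):
    """OLS trend of a monthly anomaly series, returned as
    (trend, standard_error) in deg C per decade."""
    y = np.asarray(anoms, dtype=float)
    t = np.arange(y.size) / 120.0            # time in decades
    X = np.column_stack([np.ones_like(t), t])
    coef, residuals, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    sigma2 = resid @ resid / (y.size - 2)
    cov = sigma2 * np.linalg.inv(X.T @ X)
    return coef[1], np.sqrt(cov[1, 1])

# Hypothetical usage once the v6.1 monthly series is loaded into `anoms`:
# trend, se = trend_with_se(anoms)
# print(f"{trend:+.3f} ± {2*se:.3f} C/decade (2-sigma, no autocorrelation correction)")
```
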
Reply to  Bellman
November 4, 2024 9:55 am

And they’ve released the gridded data much quicker than usual, so here’s my interpretation of the global anomalies for October 2024, version 6.1.

[attached map]
Reply to  Bellman
November 4, 2024 9:56 am

And here’s last month’s using the new version.

[attached map]
Reply to  Bellman
November 4, 2024 10:11 am

Here’s the warming trend over the globe.

[attached map]
Reply to  Bellman
November 4, 2024 11:05 am

Let’s see how that changes as the El Nino effect continues to subside.

sherro01
Reply to  Bellman
November 4, 2024 2:02 pm

Bellman,
Thankyou for contributing the maps.
Visually, broadly, the hotter large areas are over mountain areas, Antarctica, Himalayas, Rockies, Andes, Alps. Any chance you could correlate UAH grid T with grid altitude?
Ta. Geoff S

Reply to  Bellman
November 4, 2024 11:03 am

Shows very clearly that the effect of the 2023 El Nino atmospheric warming event is gradually disappearing.

Notice how the tropics has much less pale yellow than the chart below.

Reply to  bnice2000
November 4, 2024 2:52 pm

Shows very clearly that the effect of the 2023 El Nino atmospheric warming event is gradually disappearing.

It ended in May, so….?

Reply to  TheFinalNail
November 4, 2024 3:54 pm

OMG, the EFFECT of the El Nino, you mindless cretin !

Even you can’t be stupid enough to put any human causation on the extended 2023 El Nino effect.

Reply to  Bellman
November 4, 2024 11:39 am

The warm anomaly in the Western US can be explained by a stagnant high-pressure system that centered over the region during the first half of the month. So, sinking air combined with strong solar heating – unrelated to greenhouse gases.

If the signature of greenhouse gases can’t be detected in today’s weather, it doesn’t make sense to claim they explain long-term warming.

…..unless you’re a ruler monkey trendologist.

Reply to  walter.h893
November 4, 2024 11:45 am

The bellboy has never been able to show any human causation of the El Nino events he uses to show warming trends..

Reply to  bnice2000
November 4, 2024 11:55 am

He views the ENSO solely as statistical thresholds that the oceans merely cycle through every few years. No consideration is given to the underlying dynamics.

Scissor
Reply to  walter.h893
November 4, 2024 1:42 pm

If I could order it up for next year, I would.

Nick Stokes
Reply to  Bellman
November 4, 2024 1:25 pm

Roy is a bit late this month, and the TempLS surface measure is a bit early, so we can compare them. Here is TempLS, on the same 1991-2020 anomaly base

[attached image]

Reply to  Nick Stokes
November 4, 2024 2:25 pm

He said the graph would be late due to the change in version, but the gridded data came out much earlier than usual.

A little difficult to do a direct comparison as you are using a sensible colour scheme, whereas I’m using the UAH (-9, +9) scale.

Here’s the UAH data, but with the scale going from (-5, +5); still not the same, as I’m using a linear scale.

There do seem to be pretty big differences in the oceans, with UAH showing above average to the West of South America, but below average to the East of the USA.

[attached map]
Reply to  Nick Stokes
November 4, 2024 2:47 pm

GHCN… ROFLMAO !!!!

KevinM
Reply to  Bellman
November 4, 2024 9:56 am

Is there any way to increase the rate? I’m cold.

Simon
Reply to  KevinM
November 4, 2024 10:31 am

Yes … burn more fossil fuels.

Reply to  Simon
November 4, 2024 10:50 am

China and India are working on it.

EU, UK, Canada etc seem to like freezing in winter. !

Derg
Reply to  Simon
November 4, 2024 11:33 am

Have you found the pee pee tape

Reply to  Derg
November 4, 2024 11:45 am

He’s probably saving up to make a new one of his own. !

Derg
Reply to  bnice2000
November 4, 2024 1:09 pm

I just want new readers to know that he is a known liar.

Reply to  Derg
November 4, 2024 2:48 pm

Not just a liar.. a simpleton to boot !!

Mr.
Reply to  Bellman
November 4, 2024 10:31 am

Thanks for all this.
I’ve been having sleepless nights for the past month worrying about the next 0.0045 C increase in night-time temperatures in some places in this huge planet.

Stay safe out there everybody!

Reply to  Bellman
November 4, 2024 10:59 am

Do you have any evidence of any human causation?

Or are you prepared to admit that 2023, 24 are totally natural combination of the 2023 EL Nino event and HT WV in the upper atmosphere slowing the cooling rate.

strativarius
November 4, 2024 6:21 am

Even with the best equipment [and will] available we cannot yet escape the similarity between “current climate science” and the parable of the blind men and the elephant.

A global temperature means very little if you happen to be in colder than normal temperatures, while others may be hotter than usual.. 

In the Arctic socialism does work, global warming means the sharp winter chill is spread far and wide reaching many more Europeans. Social justice achieved.

Jeff Alberts
Reply to  strativarius
November 4, 2024 8:09 am

“A global temperature means very little”

It’s actually utterly meaningless.

Reply to  Jeff Alberts
November 4, 2024 9:04 am

Beat me to it. There seems to be a lot of confusion (intentional or not) between a model calculation or estimate and a real, physical measurement, as admirably demonstrated by Bellman’s two recent comments. Unfortunately, too many people believe that the model creates the reality rather than models are only (or should be only) tools to help understand reality. If the models don’t reflect reality, the models are wrong.

strativarius
Reply to  Phil R
November 4, 2024 9:17 am

And then there is blind faith in the models

Reply to  Phil R
November 4, 2024 9:41 am

demonstrated by Bellman’s two recent comments.

What am I being accused of now?

In no way does my reporting of UAH data constitute an endorsement of that data set. But it is the only data set that is reported on here, and until recently the only data set people here trust.

If the models don’t reflect reality, the models are wrong.

Obviously. And as the saying goes, all models are wrong.

bdgwx
Reply to  Phil R
November 4, 2024 12:23 pm

There seems to be a lot of confusion (intentional or not) between a model calculation or estimate and a real, physical measurement

Yes. Many here think there is a clear and separate distinction between measurements and models. However, the GUM in no uncertain terms squashes that notion. Many measurements are themselves outputs of complex multi-state modeling [JCGM 6:2020]. Even something as simple as a spot temperature measurement is the culmination of extensive material, thermodynamic, electrical, etc. modeling. So your insinuation otherwise here is erroneous.

If the models don’t reflect reality, the models are wrong.

But that doesn’t mean they aren’t useful. For example, we know that F=ma is dead wrong. Yet, despite this it is so useful that it is taught in every high school science class and used by engineers extensively.

Reply to  bdgwx
November 4, 2024 3:54 pm

“Even something as simple as a spot temperature measurement is the culmination of extensive material, thermodynamic, electrical, etc. modeling.”

In other words you still have no basic understanding of metrology. Calibration of a measuring device doesn’t require “models” of any kind. Estimating measurement UNCERTAINTY does, especially after the instrument leaves the calibration lab.

Reply to  Tim Gorman
November 4, 2024 5:00 pm

But he bloviates as if he is the world’s expert.

John Power
Reply to  bdgwx
November 4, 2024 5:44 pm

“For example, we know that F=ma is dead wrong.”
 
I don’t know that, bdgwx. How is it ‘dead wrong’?

bdgwx
Reply to  John Power
November 4, 2024 7:23 pm

Because it doesn’t correctly model reality. It’s close in most everyday scenarios so it is meaningful and useful, but wrong nonetheless. A better model (one that is less wrong, but possibly still wrong) is F=d(1/√(1-v^2/c^2)*m*v)/dt. The point is that all models are wrong, but many of them are meaningful and useful nonetheless. Being wrong does not mean useless or meaningless.
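
A minimal numerical sketch of the comparison being argued here, assuming force parallel to velocity so that F = dp/dt with relativistic momentum reduces to F = γ³ma (a standard textbook result, not something asserted by either commenter); it simply prints how far the classical acceleration F/m drifts from the relativistic one as v grows.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def classical_accel(force, mass):
    return force / mass                      # F = m a

def relativistic_accel(force, mass, v):
    gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
    return force / (gamma ** 3 * mass)       # F = d(γmv)/dt with F parallel to v

if __name__ == "__main__":
    F, m = 1.0, 1.0                          # 1 N acting on 1 kg
    for v in (0.0, 1e3, 1e6, 1e8, 0.9 * C):
        a_c = classical_accel(F, m)
        a_r = relativistic_accel(F, m, v)
        print(f"v = {v:11.3e} m/s  classical {a_c:.6f}  relativistic {a_r:.6f} m/s^2")
```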

Reply to  bdgwx
November 5, 2024 5:36 am

What you don’t seem to understand is that F=ma *is* correct. All your equation does is specify what the acceleration is in terms of relativity, i.e. the speed of light. There is nothing in the formula F = ma that requires a to be given in any specific form. Functional relationships can certainly be made up of independent variables that are themselves functions. a = f_1(v,c) is certainly correct. So is m = f_2(v,c). So writing F = ma is correct as well as F = f_1(v,c) * f_2(v,c).

Reply to  bdgwx
November 4, 2024 8:41 pm

But that doesn’t mean they aren’t useful.

If they are wrong as you say, then to be useful, one must be able to quantify how wrong they are and what the acceptable performance profile is. So far GCM’s do not have that quantification.

For example, we know that F=ma is dead wrong.

It is not dead wrong. The profile where it provides acceptable answers is well known. To call it otherwise is simply denying the Industrial Revolution, in which its use was vital.

It is why, after all you have been told here about measurements, you still have no appreciation of physical science and the measurements used. If you think f=ma is not still used in engineering design, you are sadly mistaken. Go learn how vehicle crash tests are evaluated.

Reply to  bdgwx
November 5, 2024 12:09 pm

For example, we know that F=ma is dead wrong.

This is a good example of the difference between a GCM using a fitted, probabilistic “rule” to generate cloud data vs using a physics based equation such as F=ma

F=ma isn’t dead wrong, it’s physically correct and can be used to predict one quantity given the other two repeatedly and accurately within ranges that we usually encounter. Furthermore we recognise its limitations.

On the other hand generating cloud data in a GCM is incorrect in a general sense and when its parameters vary outside of those experienced (ie clouds in a warmer world), it’s wrong all of the time. It’s not physics based.

Then, when you add the non-physical quantity of clouds into the rest of the calculation then it doesn’t matter how physics based you thought it was, the result is non physics based and is no longer a physics based projection.

bdgwx
Reply to  TimTheToolMan
November 5, 2024 3:01 pm

F=ma isn’t dead wrong, it’s physically correct

You think it is physically correct to say that reality does not care about a body’s velocity in regard to a body’s ability to move nor that there is a universal speed limit on the movement of bodies? Basically what I’m asking is do you really think Newton is right and Einstein wrong in regard to their descriptions of the nature of reality?

Reply to  bdgwx
November 5, 2024 3:13 pm

So you would apply relativistic corrections to Statics and Dynamics calculations?

Reply to  bdgwx
November 5, 2024 6:27 pm

You think it is physically correct to say that reality does not care about a body’s velocity in regard to a body’s ability to move nor that there is a universal speed limit on the movement of bodies? 

I said

F=ma isn’t dead wrong, it’s physically correct and can be used to predict one quantity given the other two repeatedly and accurately within ranges that we usually encounter. Furthermore we recognise its limitations.

Reading and understanding FTW.

Yes, if you’re planning on accelerating a rocket that has a sizable velocity compared to the speed of light then you need the relativistic version and that’s why I mentioned recognising its limitations.

But you’re missing, or more probably avoiding the point re: F=ma vs the “cloud rule” in GCMs.

bdgwx
Reply to  TimTheToolMan
November 6, 2024 6:44 am

F=ma isn’t dead wrong, it’s physically correct

I’ll ask again…do you think we live in a universe where velocity does not impact a body’s ability to accelerate?

If you deflect and divert again then I don’t really have a choice but to accept that you feel classical mechanics is the right depiction of the universe and that relativity is the wrong depiction.

And for the record…I don’t necessarily think relativity is the right depiction either.

can be used to predict one quantity given the other two repeatedly and accurately within ranges that we usually encounter. Furthermore we recognise it’s limitations.

Which makes it meaningful and useful despite it being the wrong depiction of the universe.

But you’re missing, or more probably avoiding the point re: F=ma vs the “cloud rule” in GCMs.

We know for a fact that F=ma is wrong. But it still makes meaningful and useful predictions.

We know for a fact that the standard model is wrong. But it still makes meaningful and useful predictions.

We know for a fact that GCMs are wrong. But they still make meaningful and useful predictions.

This is the point.

Reply to  bdgwx
November 6, 2024 7:28 am

We know for a fact that F=ma is wrong. 

Who are “we”?

You and the Queen?

But they still make meaningful and useful predictions.

Bullshit. Name just one.

And for the record…I don’t necessarily think relativity is the right depiction either.

Fortunately what you think is irrelevant.

Reply to  bdgwx
November 6, 2024 7:31 am

“I’ll ask again…do you think we live in a universe where velocity does not impact a body’s ability to accelerate?”

You are assuming a definition for “m” in F = ma that is not inherent in the formula itself.

Reply to  Tim Gorman
November 6, 2024 7:54 am

And revealing his total lack of any understanding of basic mechanics.

Reply to  bdgwx
November 6, 2024 11:26 am

I’ll ask again…do you think we live in a universe where velocity does not impact a body’s ability to accelerate?

Fairly obviously from my answer it does.

We know for a fact that F=ma is wrong. But it still makes meaningful and useful predictions.

Do you understand the difference between a result from a physical formula like F=ma and a statistical result such as the calculation for clouds in a GCM?

F=ma isn’t wrong in this context. It has limitations but will always, predictably and accurately give an answer. Cloud calculations in a GCM just don’t.

Reply to  TimTheToolMan
November 6, 2024 11:40 am

Do you understand the difference between a result from a physical formula like F=ma and a statistical result such as the calculation for clouds in a GCM?

This would be a “no”.

Cloud calculations in a GCM just don’t.

And never will.

bdgwx
Reply to  TimTheToolMan
November 6, 2024 6:04 pm

Fairly obviously from my answer it does.

F=ma says it doesn’t. Therefore that model of reality is wrong.

Do you understand the difference between a result from a physical formula like F=ma and a statistical result such as the calculation for clouds in a GCM?

Sure. But it doesn’t really matter. Remember, the standard model is a festering zoo of statistical results too so I could have used that as my example instead.

F=ma isn’t wrong in this context

I think we agree on the concept. We’re just disagreeing on the semantics. I’m coming from the prevailing semantics here in which some models are said to be wrong if they do not include every facet of the physical processes involved, their predictions are not perfect, or otherwise are not indisputably and unequivocally 100% correct in every scenario.

Reply to  bdgwx
November 6, 2024 6:49 pm

F=ma says it doesn’t. Therefore that model of reality is wrong.

More nutter stuff.

Reply to  bdgwx
November 6, 2024 6:50 pm

F=ma says it doesn’t. Therefore that model of reality is wrong.

You may want to consider Tim Gorman’s explanation of why it does.

Reply to  TimTheToolMan
November 7, 2024 4:30 am

He won’t listen. He has his religious dogma and a set of blinders. He doesn’t understand that F = ma does *not* define how “m” and “a” are to be specified because he doesn’t want to understand.

bdgwx
Reply to  TimTheToolMan
November 7, 2024 6:48 am

You may want to consider Tim Gorman’s explanation

Tim Gorman believes Σ[x]/n = Σ[x], Σ[a^2] = Σ[a]^2, PEMDAS rules are optional, sqrt[xy^2] = xy, d(x/n)/dx = 1, division (/) and addition (+) are equivalent, that radiant exitance W.m-2 is extensive, that density is volume/mass, shutting the door on your kitchen oven will not cause the inside to get warmer, the Stefan-Boltzmann law only works when a body is in equilibrium with its surroundings, and a bunch of other absurd notions. So no I’m not going to just blindly consider TG’s explanation of anything given his grossly erroneous position on other topics.

If you want to defend the position that F=ma and its implication that velocity does not impact a body’s ability to accelerate then go ahead. But 100+ years of observations is beyond convincing that this notion is dead wrong.

Reply to  bdgwx
November 7, 2024 6:57 am

Are you going to spam your “algebra errors” list again?

If you want to defend the position that F=ma and its implication that velocity does not impact a body’s ability to accelerate then go ahead.

HAHAHAHAHAHAHAHAHAHAHA

This is as goofy as your oven door nonsense.

Reply to  karlomonte
November 8, 2024 4:59 am

There aren’t any algebra errors. bdgwx *still* hasn’t figured out that the partial derivative is the partial derivative of the factor and not the functional relationship.

That’s how Possolo got the uncertainty of a barrel volume being related to 2 (from R^2) and 1 (from H) and *NOT* (2R) *(πH) or (1) * (πR^2).

The GUM says the partial derivatives are *sensitivity* coefficients and can be determined experimentally: “The combined variance u_c^2(y) can therefore be viewed as a sum of terms, each of which represents the estimated variance associated with the output estimate y generated by the estimated variance associated with each input estimate xi.”

In the average you have two input estimates, Σx and “n”. Since “n” can produce no change as it is a constant, it has no estimated variance. This leaves the variance of Σx as the only factor that can change the output estimate “avg”.

bdgwx keeps wanting to use the partial derivative of the entire function as the sensitivity coefficient. It isn’t. Σx and “n” are separate terms. You calculate separate sensitivity coefficients for each term. Taylor treats them as separate terms, Bevington treats them as separate terms, and Possolo treats them as separate terms.

But that doesn’t fit bdgwx’s religious dogma that you can reduce measurement uncertainty of uncorrelated measurements by averaging. There’s no algebra problems on my end.

Reply to  Tim Gorman
November 8, 2024 7:14 am

But that doesn’t fit bdgwx’s religious dogma that you can reduce measurement uncertainty of uncorrelated measurements by averaging. There’s no algebra problems on my end.

Exactly, he needs tiny numbers.

Reply to  bdgwx
November 7, 2024 10:23 am

If you want to defend the position that F=ma and its implication that velocity does not impact a body’s ability to accelerate then go ahead. But 100+ years of observations is beyond convincing that this notion is dead wrong.

You are lost in space and don’t even know it. The velocity that a body currently has DOES NOT affect its ability to accelerate. As relativistic speeds are attained, mass will increase but it was proven long ago that a cannonball and feathers of the same mass accelerate the same.

The real point is that all physical measurement systems have performance profiles that define limits of use. A scale having a platform that weighs 100 g can’t measure micrograms. It just can’t in today’s technology.

Reply to  bdgwx
November 7, 2024 2:32 pm

If you want to defend the position that F=ma and its implication that velocity does not impact a body’s ability to accelerate then go ahead.

I want to defend the position that mass is dependent on velocity and that F=ma holds relativistically when mass is allowed to vary.

bdgwx
Reply to  TimTheToolMan
November 7, 2024 5:04 pm

I want to defend the position that mass is dependent on velocity and that F=ma holds relativistically when mass is allowed to vary.

Then use a model where mass is dependent on velocity and allowed to vary. Classical mechanics and F=ma is not that model. I suggest upgrading to relativistic mechanics and F=d(1/√(1-v^2/c^2)*m*v)/dt which adds the Lorentz factor to mass.

My point stands…models like F=ma can be meaningful and useful even though they are wrong.

Reply to  bdgwx
November 7, 2024 9:19 pm

My point stands…models like F=ma can be meaningful and useful even though they are wrong.

But its not wrong. I’m inclined to think Tim was right and its how you consider mass to be constant in the equation that is wrong.

Whether Tim has been right or wrong about other things seems irrelevant on this point where he is right.

bdgwx
Reply to  TimTheToolMan
November 8, 2024 7:12 am

But its not wrong. 

It is wrong. If you have a 1 kg mass and apply 1 N force it isn’t getting to 3e8 m/s even if that force is applied continuously for 9.5 years like what that model says. That is indisputable and unequivocal.

I’m inclined to think Tim was right and its how you consider mass to be constant in the equation that is wrong.

That’s because it is constant in THAT equation.

Let me explain it with math. If Tim is right then both F=ma and F=d(1/√(1-v^2/c^2)*mv)/dt yield the same result. Thus it is necessarily the case that F = ma = d(1/√(1-v^2/c^2)*mv)/dt. Watch what happens though.

(1) ma=d(1/√(1-v^2/c^2)*mv)/dt

(2) dp/dt = d(1/√(1-v^2/c^2)*mv)/dt

(3) d(mv)/dt = d(1/√(1-v^2/c^2)*mv)/dt

(4) d(mv)/dt = (1/√(1-v^2/c^2))d(mv)/dt

(5) 1 = (1/√(1-v^2/c^2))

(6) √(1-v^2/c^2) = 1

(7) 1-v^2/c^2 = 1

(8) v^2/c^2 = 0

(9) v^2 = 0

(10) v = 0

See the problem? Whether he realizes it or not (and he probably doesn’t because he doesn’t understand derivatives) the math forces v = 0. In other words for his proposition to be true then v always has to be 0. Do we live in a universe where v is always zero?

Again…just because F=ma produces the wrong results doesn’t mean it isn’t meaningful or useful. The reason is because the amount by which it is wrong in most everyday cases is negligible.

Reply to  bdgwx
November 8, 2024 8:03 am

“That’s because it is constant in THAT equation.”

That is *YOUR* bias showing, it is *NOT* in the definition of F = ma.

It’s just like the equation distance = time x velocity, d = vt. There is nothing in that formula that requires v to be a constant. But using *your* logic “v” has to be a constant. In reality v = sin(t) is a perfectly legitimate description of v in d = vt.

Reply to  bdgwx
November 8, 2024 10:55 am

See the problem? 

Look in the mirror.

Reply to  bdgwx
November 8, 2024 1:20 pm

That’s because it is constant in THAT equation.

Yes but it’s also correct in that equation. F=ma is an instantaneous result.

bdgwx
Reply to  TimTheToolMan
November 8, 2024 4:50 pm

Yes but it’s also correct in that equation. F=ma is an instantaneous result.

You think it is correct that a 1 kg mass exposed to a 1 N force will accelerate to 315e6 m/s in 10 years?

Earlier you seemed to indicate that you agreed that a mass’s ability to accelerate is sensitive to its velocity. Now you seem to question that. Why the sudden change in position?

What about the zero velocity result you get when you set the classical model equal to the relativistic model? Do you think the universe only allows zero velocities?

Reply to  bdgwx
November 8, 2024 6:22 pm

You think it is correct that a 1 kg mass exposed to a 1 N force will accelerate to 315e6 m/s in 10 years?

You think the work required can be generated?

Reply to  bdgwx
November 8, 2024 9:52 pm

You think it is correct that a 1 kg mass exposed to a 1 N force will accelerate to 315e6 m/s in 10 years?

No, because of the well understood limitation of the equation and the velocity that will result from such a long time for acceleration.

Fundamentally you’re assuming mass doesn’t change over that time but that’s incorrect.

Do you agree that a 1kg mass exposed to a 1 N force will accelerate at 1 m/s^2 ?

bdgwx
Reply to  TimTheToolMan
November 9, 2024 8:08 am

No, because of the well understood limitation of the equation and the velocity that will result from such a long time for acceleration.

So the model doesn’t match observations. Remember, this subthread is addressing Phil R’s statement….“If the models don’t reflect reality, the models are wrong.” F=ma does not reflect the reality that a body’s ability to accelerate is sensitive to its velocity. The F=ma assumption that velocity does not matter is indisputably wrong.

Fundamentally you’re assuming mass doesn’t change over that time but that’s incorrect.

I’m just plugging numbers into the model no different than anyone else who uses it. Like I said above it is the model making that assumption. That’s why it is wrong.

Do you agree that a 1kg mass exposed to a 1 N force will accelerate at 1 m/s^2 ?

No I don’t. And that’s the root of the issue behind my point.

BTW…even when v = 1000 m/s the error in the acceleration as computed from the classical model is only 0.0003%. This is why the classical model is still useful in most scenarios despite it being wrong.

Reply to  bdgwx
November 9, 2024 8:56 am

I’m just plugging numbers into the model

Blindly, with little or no real understand of physics.

Numbers is numbers.

Reply to  bdgwx
November 9, 2024 12:07 pm

I’m just plugging numbers into the model no different than anyone else who uses it. Like I said above it is the model making that assumption. That’s why it is wrong.

You continually reveal your lack of training in the physical sciences. In an introduction to circuit analysis you begin dealing with the equation V=IR. It is a linear equation just like F=ma. Is it correct? Of course it is, when dealing with an ideal environment. It is called Ohm’s Law. Think about the Ideal Gas Law. Is it correct? Of course it is, it has been accepted as a Law. These have all been validated with factual experimental results.

Part of advanced training in physics, chemistry, all engineering, and yes, even meteorology is learning about the performance envelopes for these TO WORK PROPERLY. You learn how and when to recognize when corrections must be applied to the laws. You also learn the ethical requirements for gathering and reporting the data along with the adjustments made.

You have no idea of which you speak.

Reply to  bdgwx
November 11, 2024 11:38 am

When v=0, it’s completely correct. When v=1000 then m is 0.0003% larger than the version you’re proposing using so the “wrong” thing is m, not m due to v.

This is the well understood limitation of F=ma

Reply to  bdgwx
November 8, 2024 5:32 am

Abbreviation is part and parcel of physical science functional relationships.

You are just dancing around trying to avoid admitting that it is your bias that F = ma means “m” and “a” can’t be functional relationships of their own.

Take a look at the common use of term “I” in electricity. It is a unit called “ampere’. But in actuality it is a functional relationship of a number of factors. Those factors are abbreviated down to the term “I”. It’s no different for the terms “m” and “a”.

Reply to  bdgwx
November 8, 2024 4:45 am

You *still* haven’t figured out that the formula for an average is *NOT* a functional relationship and, therefore, is not subject to partial derivatives.

You given the quotes from the GUM as to how the partial derivatives are to be taken. As usual, you didn’t bother to learn from the quotes and want to keep going back to believing that the SEM is the measurement uncertainty of a group of measurements.

Go look at the notes associated with Eq 11a and 11b in the GUM. You’ll find the following:

P = f(V,R0,a,t) = V^2 / { R0[ 1+a(t-t0)] }

∂P/∂V = (2V) / { R0[1+a(t-t0) ] } = 2(P/V)

∂P/∂R0 = -V^2 / { R0^2[1+a(t-t0)] } = – P/R0

Factor out the common P, move it to the left side of the equation, and you get

u_c^2(y)/ P^2 = (2/V)^2 u(V)^2 + (1/R0)^2 u(R0)^2

These are RELATIVE UNCERTAINTIES.

And the uncertainty factors wind up being just the derivative of of the factor, not the derivative of the entire functional relationship

Applying this to your avg and you get

avg = (Σx) / n

[u(avg)/avg]^2 = [u(Σx)/Σx]^2 + [u(n)/n]^2

==> [u(avg)/avg]^2 = [u(Σx)]^2 / (Σx)^2

==> u(avg)/avg = u(Σx)/Σx

THE “n” DISAPPEARS! You do *NOT* divide the propagated measurement uncertainty of the data by “n” or “sqrt(n)” to find the measurement uncertainty of the average.

This has been presented to you multiple times and you still refuse to learn.

If you can’t accept what the GUM shows then just go ahead and admit that you think you know better how to calculate measurement uncertainty of a functional relationship – even in the face of the average being a statistical descriptor and not a functional relationship!
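
For the GUM power example quoted above, a minimal sketch of first-order uncertainty propagation with numerically evaluated sensitivity coefficients; only V and R0 are treated as uncertain here, the numbers are made up for illustration, and it takes no position on the averaging dispute in this thread.

```python
import math

def power(V, R0, a, t, t0=20.0):
    """P = V^2 / (R0 * (1 + a*(t - t0))), the GUM power example."""
    return V ** 2 / (R0 * (1.0 + a * (t - t0)))

def combined_uncertainty(V, uV, R0, uR0, a, t):
    """First-order (GUM Eq. 10 style) combined standard uncertainty of P,
    with sensitivity coefficients evaluated by central finite differences.
    a and t are treated as exactly known in this simplified sketch."""
    h = 1e-6
    dP_dV = (power(V + h, R0, a, t) - power(V - h, R0, a, t)) / (2 * h)
    dP_dR0 = (power(V, R0 + h, a, t) - power(V, R0 - h, a, t)) / (2 * h)
    return math.sqrt((dP_dV * uV) ** 2 + (dP_dR0 * uR0) ** 2)

# Made-up illustrative numbers, not from the article or the GUM.
V, uV = 5.0, 0.02        # volts
R0, uR0 = 10.0, 0.05     # ohms
a, t = 0.004, 25.0       # 1/degC, degC
P = power(V, R0, a, t)
uP = combined_uncertainty(V, uV, R0, uR0, a, t)
print(f"P = {P:.4f} W, u(P) = {uP:.4f} W, relative {uP/P:.4%}")
```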

bdgwx
Reply to  TimTheToolMan
November 7, 2024 9:02 am

You may want to consider Tim Gorman’s explanation

In addition to all of the previous absurd viewpoints of the Gormans we’ve got JG down below in a conversation with Bellman in this very blog post questioning the meaningfulness of the heat transfer equation because it involves the subtraction of two temperatures.

So do you really want to consider either of the Gormans’ viewpoints on the subject of how meaningful models are, or any subject related to physics for that matter?

And you can see karlomonte still denying that the simple action of closing the door on a turned on kitchen oven will make the inside warmer than it would be otherwise. He’s so triggered by this mind numbingly obvious and trivial concept that he calls it “nonsense”. So is this really the side you want to throw your lot into?

Reply to  bdgwx
November 7, 2024 10:05 am

we’ve got JG down below in a conversation with Bellman in this very blog post question the meaningfulness of the heat transfer equation

ROTFLMAO!

You are cherry picking a formula you know nothing about.
You posted this formula.

Q = [K ∙ A ∙ (Thot – Tcold)] / d

You don’t even understand that this is the gradient equation for conduction in a solid body! It has nothing to do with two entirely different bodies that are not in contact which is the point that was being made. You can’t take two separate bodies and AVERAGE their temperature to obtain a meaningful temperature.

You might also note that (Thot – Tcold) is not an average but a difference!
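
A minimal sketch of evaluating that conduction formula for a single slab, with made-up numbers purely to show the units; it illustrates the equation as written above and says nothing about the averaging dispute itself.

```python
def conductive_heat_flow(k, area, t_hot, t_cold, thickness):
    """Fourier conduction through a slab: Q = k * A * (Thot - Tcold) / d,
    returned in watts when k is W/(m*K), area in m^2, temperatures in K or C
    (only the difference matters), and thickness in m."""
    return k * area * (t_hot - t_cold) / thickness

# Made-up example: 1 m^2 of glass (k ~ 0.8 W/m.K), 10 K across 4 mm.
print(conductive_heat_flow(0.8, 1.0, 293.0, 283.0, 0.004), "W")
```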

Reply to  Jim Gorman
November 7, 2024 10:59 am

ROTFLMAO!

You are cherry picking a formula you know nothing about.

As always.

Reply to  bdgwx
November 7, 2024 10:58 am

And you can see karlomonte still denying that the simple action of closing the door on a turned on kitchen oven will make the inside warmer than it would be otherwise. He’s so triggered by this mind numbingly obvious and trivial concept that he calls it “nonsense”. So is this really the side you want to throw your lot into?

The nonsense is your goofy idea that the door is a heat source, or are you now backtracking away?

Your knowledge of heat transfer is abysmal.

Reply to  bdgwx
November 8, 2024 5:03 am

“JG down below in a conversation with Bellman in this very blog post question the meaningfulness of the heat transfer equation because it involves the subtraction of two temperatures.”

The heat equation bellman used was for CONDUCTION. How much conduction of heat is there between a parcel of air in Topeka, KS and Nome, AK?

The conduction heat equation ONLY applies where there is conduction! It’s not a question of whether the conductive heat transfer equation is correct, it’s a question of when it applies!

Reply to  bdgwx
November 6, 2024 8:50 pm

I’m coming from the prevailing semantics here in which a some models are said to be wrong if they do not include every facet of the physical processes involved, their predictions are not perfect, or otherwise are not indisputably and unequivocally 100% correct in every scenario.

On this point, do you understand why the projection from a few million non-physical results (ie non-physical timesteps at 20min for 100 years or so), each being the initial state for the next timestep’s calculation …has accumulated uncertainty that invalidates the result as a useful projection?

If you disagree with this, what is your underlying assumption on the accumulated uncertainty? Do you assume uncertainty cancels out rather than accumulates at each timestep?

Note. Uncertainty, not error.

bdgwx
Reply to  TimTheToolMan
November 7, 2024 6:58 am

On this point, do you understand why the projection from a few million non-physical results (ie non-physical timesteps at 20min for 100 years or so), each being the initial state for the next timestep’s calculation …has accumulated uncertainty that invalidates the result as a useful projection?

No.

First…just saying something is unphysical doesn’t make it so.

Second…F=ma can be rewritten as F=dp/dt and handled in time-step form too like is the case in the modeling of movements of astronomical bodies.

Third…uncertainty does not invalidate projections. It just means there is a dispersion of possibilities around the projections for which reality could take.

Note. Uncertainty, not error.

I know.

BTW…and notice how this conversation is diverting further away from my post above about how models can still be meaningful and useful despite being wrong.
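
A minimal sketch of the "time-step form" mentioned above, using semi-implicit (symplectic) Euler for a body orbiting a central mass under Newtonian gravity; the constants are the usual Sun-Earth values and the step size is arbitrary, so this is only an illustration of time-stepping, not of any GCM.

```python
import math

G = 6.674e-11          # m^3 kg^-1 s^-2
M_SUN = 1.989e30       # kg

def step(pos, vel, dt):
    """One semi-implicit Euler step: a = F/m from gravity, then
    v += a*dt, x += v*dt."""
    x, y = pos
    r = math.hypot(x, y)
    ax, ay = -G * M_SUN * x / r**3, -G * M_SUN * y / r**3
    vx, vy = vel[0] + ax * dt, vel[1] + ay * dt
    return (x + vx * dt, y + vy * dt), (vx, vy)

# Start at 1 AU with roughly circular orbital speed, step one day at a time.
pos, vel = (1.496e11, 0.0), (0.0, 29_780.0)
dt = 86_400.0
for day in range(365):
    pos, vel = step(pos, vel, dt)
print(f"after ~1 year: r = {math.hypot(*pos)/1.496e11:.3f} AU")
```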

Reply to  bdgwx
November 7, 2024 8:33 am

Third…uncertainty does not invalidate projections. It just means there is a dispersion of possibilities around the projections for which reality could take.

You didn’t answer the question that matters. What is the interval quantity that describes the dispersion of possibilities?

Funny how you use uncertainty and dispersion of possibilities together. That sounds a lot like a standard deviation versus how accurate a mean is using an SDOM.

Reply to  bdgwx
November 7, 2024 11:00 am

I know.

No, you don’t. You confuse the two again and again.

Reply to  bdgwx
November 7, 2024 2:48 pm

First…just saying something is unphysical doesn’t make it so.

But you’ve agreed that F=ma is a physical relationship (albeit with some discussions around relativistic considerations) and that clouds are statistical.

Actually I’m being generous when I say “statistical”. They’re actually a fit derived from rules of thumb “calculations” that can be described as… when the parameters for cloud production (eg humidity, temperature etc) align, then produce a cloud. But tune that production rate to be realistic according to what we’ve seen.

That’s non physical. Its not like F=ma

How do you think that looks in a warmer world? How well do you think that fits works out? How can it be tested?

Third…uncertainty does not invalidate projections. It just means there is a dispersion of possibilities around the projections for which reality could take.

But that dispersion encompasses the entire range of possibilities for climate from the model. Claiming the calculation gives the most probable climate response is disingenuous at best from a model that isn’t even calculating it.

BTW…and notice how this conversation is diverting further away from my post above about how models can still be meaningful and useful despite being wrong.

And there are models that aren’t meaningful or useful. A model claiming to predict lotto numbers would be a good example despite being 100% correctly in the ballpark of possible numbers with every run.

Reply to  bdgwx
November 6, 2024 7:30 am

Where in F = ma is it specified that either m or a is not velocity dependent?

Is m = 1 kg or is m = (1 kg)(1/sqrt(1-v^2/c^2)) ?

You are assuming that in F = ma “m” is always 1 kg and is *NOT* (1 kg)(1/sqrt(1-v^2/c^2))

That’s a result of *YOUR* biases, which have nothing to do with the formula F = ma

Reply to  Jeff Alberts
November 4, 2024 9:37 am

So “utterly meaningless” that this website wastes time reporting it each month, along with monthly updates on the “pause” using this utterly meaningless data.

Mr.
Reply to  Bellman
November 4, 2024 10:33 am

Well, we all look forward to your visits here.
Gotta get a laff wherever we can these days 🙂

Richard M
November 4, 2024 6:36 am

We are now arguably into La Nina conditions. We were also there in January 2023 with a negative anomaly. Hence, most of the current anomaly difference cannot be due to ENSO. Looks like the Hunga Tonga eruption is still having a major effect.

The big question then is, how long will this continue? The following graphic shows the water vapor has not started to dissipate yet. It’s moved down in altitude a little though.

[attached image]

Reply to  Richard M
November 4, 2024 7:11 am

Looks like the Hunga Tonga eruption is still having a major effect.

It’s a very strange effect, then. The main HT eruption occurred in January 2022. There was a slight uptick in lower troposphere temperatures at the time, but this quickly died down again and by January 2023, one year after the HTE, the UAH anomaly was in negative territory again (see Dr Spencer’s table above).

The monthly series of UAH monthly temperature records didn’t start until July 2023 some 18-months after the HTE.

Where was all the heat hiding in the intervening period?

Reply to  TheFinalNail
November 4, 2024 7:48 am

In CO2. 🤣

Richard M
Reply to  TheFinalNail
November 4, 2024 7:51 am

It’s like a jigsaw puzzle. You have to consider multiple effects, both warming and cooling, all with different timing. Once you put it together it actually makes sense.

There were cloud effects, SO2 effect, water vapor effect, chlorine/ozone effect and changing ENSO effects.

The initial year (2022) after HTe was in a La Nina. This was also the period with the strongest SO2 cooling effect. Together they canceled out the initial warming effects. The SO2 cooling should be almost gone now as is the 2023/24 El Nino warming.

The warming effect (mostly via cloud reduction) over the past year has warmed all the oceans, which I believe is one of the big reasons the warm temperatures are now persisting. They will take a little longer to cool.

Reply to  Richard M
November 4, 2024 9:44 am

Once you put it together it actually makes sense.

So who has “put it all together” with respect to where all the HTE heat was hiding for 18-months?

Is there any scientific literature to explain this, or is it just opinions on blog comment pages?

Reply to  TheFinalNail
November 4, 2024 11:11 am

Fungal is saying WV doesn’t block the escape of atmospheric energy.

Has just destroyed the whole GHE mantra. !

Waiting for evidence of any human causation… which we know will never come.

Reply to  TheFinalNail
November 4, 2024 11:51 am

The HT ocean warming is very obvious in the Antarctic sea ice response.

The HT WV plume has spread out over time to cover most of the higher latitudes.

I know you are incredibly dumb, but even you should realise that the planet is quite large, and it takes a while for natural things to travel from place to place!

Richard M
Reply to  TheFinalNail
November 4, 2024 12:52 pm

What part of the SO2 cooling effect did you fail to understand? No heat was hiding. It was reflected back to space.

Reply to  Richard M
November 4, 2024 2:59 pm

What part of the SO2 cooling effect did you fail to understand? 

The part where it allowed the HTE heat to magically disappear for 18 months.

Reply to  TheFinalNail
November 4, 2024 3:56 pm

WOW.. the stupidity and ignorance. !!

Still not able to figure out what all that stratospheric WV is doing either.!

Reply to  TheFinalNail
November 4, 2024 4:07 pm

Oh and still waiting for some evidence of human causation for the 3 major El Nino events that make up the only warming in the UAH data.

Richard M
Reply to  TheFinalNail
November 4, 2024 7:08 pm

SO2 doesn’t “follow heat”, it reflects sunlight. I’m amazed you don’t understand volcanic cooling effects when you claim to know so much.

Jeff Alberts
Reply to  TheFinalNail
November 4, 2024 8:26 am

“Where was all the heat hiding in the intervening period?”

Nothing is hiding. Air masses move around, just as they always have. These averages tell us absolutely nothing useful.

Reply to  Jeff Alberts
November 4, 2024 9:44 am

Nothing is hiding. 

So it wasn’t there?

bdgwx
Reply to  Jeff Alberts
November 4, 2024 9:52 am

These averages tell us absolutely nothing useful.

According to the mean value theorem for integrals as applied to the heat capacity equation Q=mcΔT and the 1LOT equation ΔU = Q – W, it is necessarily the case that if the HT eruption caused ΔU > 0 then the global average temperature would increase such that ΔT > 0 as well. Average ΔT could thus be a falsification test for hypotheses (like the HT eruption) involving an expected change in U. Being able to test hypotheses is what I and most others consider “useful”.
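
A back-of-envelope sketch of the Q = mcΔT bookkeeping invoked above, using round textbook values for the atmosphere's mass (about 5.1e18 kg) and the specific heat of air; the energy change plugged in is an arbitrary assumed number, purely to show the conversion.

```python
ATMOS_MASS = 5.1e18      # kg, approximate mass of the whole atmosphere
CP_AIR = 1005.0          # J/(kg K), specific heat of air at constant pressure

def mean_delta_t(delta_u_joules):
    """Mean temperature change implied by Q = m*c*dT for a given energy
    change of the whole atmosphere (crude, well-mixed assumption)."""
    return delta_u_joules / (ATMOS_MASS * CP_AIR)

# Example: an assumed 5e20 J added to the atmosphere.
print(f"{mean_delta_t(5e20):.3f} K")   # roughly +0.1 K
```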

Reply to  bdgwx
November 4, 2024 11:24 am

Is this how your oven door heats the inside of the oven?

Reply to  karlomonte
November 4, 2024 11:53 am

LOL.. That was a classic piece of beeswax ignorance.

Showed just how low on the “understanding” scale ‘it’ really is.

Close to a “never go full retard” moment .

Reply to  bnice2000
November 4, 2024 5:02 pm

It is so bizarre and irrational there is no way I can quote it verbatim from memory.

Reply to  bdgwx
November 4, 2024 11:55 am

Great use of your Kamala education..

Word salad mixed with ignorance => complete gibberish.

Quite hilarious!

Reply to  bdgwx
November 4, 2024 5:07 pm

You do realize that H2O has a quality called “latent heat”, right? How does that affect ΔT?

Reply to  Jim Gorman
November 4, 2024 6:23 pm

I don’t think beeswax “realizes” much at all..

… it is all just fantasy and pretty lights to he/sh/it.

Reply to  TheFinalNail
November 4, 2024 11:08 am

Fungal is now in DENIAL of the greenhouse effect.

He thinks the upper atmosphere WV from HT isn’t blocking the atmospheric cooling from the El Nino.

So DUMB.. so funny !

Reply to  TheFinalNail
November 4, 2024 11:16 am

OMG, fungal can’t understand the chart Richard M posted.

Can’t see that the initial HT WV took several months to spread out to the higher latitudes of the Northern and Southern Hemispheres.

This one is a bit higher up and makes it more obvious.

[Attached chart: Aura MLS H2O anomaly, 75°S–75°N, 10 hPa]
Reply to  Richard M
November 4, 2024 7:14 am

Richard M,

Something is seriously wrong with the Aura MLS contour chart of water content at the 147 hPa level that you present in the context of HT-injected water appearing at the lower altitudes of the stratosphere:

1) It doesn’t show any significant variation in water content south of the equator from the time of the HT volcano eruption in January 2022 until about April 2024 . . . an interval of about 27 months! This despite the HT volcano being located at about 20.5° S latitude.

2) It doesn’t show any significant variation in water content for latitudes of 0° to about 7° north of the equator for any time after January 2022. This despite the appearance of significant water content anomalies seen as far as 75° N starting in June 2024.

Looks like the Hunga Tonga eruption is still having a major magical effect.

Richard M
Reply to  ToldYouSo
November 4, 2024 7:56 am

The water vapor injection went straight into the stratosphere (above the 147 hPa altitude). If you look at 75S you can see how it has changed. It took a whole year for the WV to get that far south and it is already dissipating at higher altitudes.

[Attached chart: Aura MLS H2O anomaly at 75°S]

You can see a similar effect at 75N.

[Attached chart: Aura MLS H2O anomaly at 75°N]

Reply to  Richard M
November 4, 2024 9:10 am

Just for clarification (for me, at least), do you mean dissipating at higher altitudes or latitudes (75° N & S are pretty high latitudes), or both?

Reply to  Phil R
November 4, 2024 11:24 am

It appears the WV spread out to the higher latitudes over both the NH and SH, which took several months.

It looks like it is gradually clearing over the tropics, and drifting to higher altitudes over the higher latitudes.

Richard M
Reply to  Phil R
November 4, 2024 1:06 pm

Initially it took a while for the water vapor to reach the highest latitudes, but it finally had them covered in 2024.

Now, it looks to me like much of the water vapor has moved downward from its initial injection height leaving an area of lower concentration in the mid stratosphere. The upper stratosphere doesn’t appear to have changed much.

Keep in mind this chart is for anomalies, and as you go up in height the overall air density continues to decrease. An anomaly of 1 ppm at 3 hPa represents less water vapor than a 0.2 ppm anomaly at 147 hPa.
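
A small sketch of that point (my own, not from the comment): the absolute amount of water vapor implied by a mixing-ratio anomaly scales with the ambient pressure, so the same ppm figure means far less water at 3 hPa than at 147 hPa. The function and values below are purely illustrative.

```python
# Sketch: H2O partial-pressure anomaly implied by a ppmv anomaly at two pressure levels.
def h2o_partial_pressure_pa(ppmv_anomaly: float, level_hpa: float) -> float:
    """Partial-pressure anomaly in Pa for a given ppmv anomaly at a pressure level."""
    return ppmv_anomaly * 1e-6 * level_hpa * 100.0  # hPa -> Pa

high = h2o_partial_pressure_pa(1.0, 3.0)     # 1 ppm anomaly at 3 hPa
low = h2o_partial_pressure_pa(0.2, 147.0)    # 0.2 ppm anomaly at 147 hPa
print(high, low)   # ~3.0e-4 Pa vs ~2.9e-3 Pa: the lower-level anomaly is ~10x more water
```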

Reply to  Richard M
November 4, 2024 4:48 pm

“Keep in mind this chart is for anomalies, and as you go up in height the overall air density continues to decrease. An anomaly of 1 ppm at 3 hPa represents less water vapor than a 0.2 ppm anomaly at 147 hPa.”

Is the color scale for “H2O (ppm)” given in the contour plots you previously posted for 75 S and 75 N—in both cases with the y-axis covering an altitude range of 13 to 40 km (an equivalent stratospheric pressure range of about 150 to 3 hPa)—wrong then?

Or has the scale already accounted for density variation with altitude in computing those indicated ppm levels? If not, then the plots are garbage.

Richard M
Reply to  ToldYouSo
November 4, 2024 7:12 pm

My assumption is the density changes are correctly covered by the color scale. Keep in mind, non-condensing gases tend to be well mixed, which would keep the ppm values relatively consistent. At these altitudes even water vapor is non-condensing due to the absence of CCNs.

Reply to  Richard M
November 5, 2024 4:35 am

What does “well mixed” actually mean? Horizontally or vertically? Gravity alone should concentrate non-condensing gases vertically in some form or another.

Reply to  Tim Gorman
November 7, 2024 7:30 pm

Only above about 100 km.

Reply to  Richard M
November 5, 2024 6:47 am

“At these altitudes even water vapor is non-condensing due to the absence of CCNs.”

At the altitude range of the stratosphere over the tropic and temperate latitudes (-60 C just above the tropopause, ~15 km, to -15 C at the top of the stratosphere, ~50 km), most volcano-injected salt water will have been flash frozen to micro ice crystals with a corresponding saturation-limit water vapor pressure in the range of about 4 Pa to 56 Pa, respectively*. This can be compared to the typical range of ambient stratospheric pressures associated with those altitudes (~10,000 Pa to 200 Pa). Obviously, water vapor can never come close to saturating the stratosphere.

The injection of seawater (salt water)—not to mention entrained sea-floor sediments—into the stratosphere provides all the cloud/ice condensation needed for the flash freezing of water.

*see “Vapor Saturation Pressure Over Ice Formulas and Calculator”, https://www.engineersedge.com/calculators/vapor_saturation_pressure_15731.htm
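
For readers who want to check saturation figures like these themselves, here is a small sketch using a Magnus-type fit for saturation vapor pressure over ice. The coefficients are commonly quoted approximation constants (an assumption on my part), and the results may not match the calculator linked above exactly:

```python
import math

def e_sat_ice_pa(t_celsius: float) -> float:
    """Approximate saturation vapor pressure over ice (Pa), Magnus-type fit.
    Coefficients are a commonly used approximation, roughly valid from -80 to 0 C."""
    return 611.2 * math.exp(22.46 * t_celsius / (272.62 + t_celsius))

for t in (-60.0, -40.0, -15.0):
    print(f"{t:6.1f} C  ->  {e_sat_ice_pa(t):8.2f} Pa")
```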

Reply to  Richard M
November 4, 2024 4:32 pm

“It took a whole year for the WV to get that far south and it is already dissipating at higher altitudes.”

Actually, your post of the Aura MLS contour plot for 75° S shows:

1) A sudden decrease of about 1.25–1.5 ppm (dark green to medium brown color coding) in the interval from about June to September 2023 over the stratospheric altitude range of 13–30 km. Then, very strangely, a jump back to 0.75–1.00 ppm H2O after September 2023 for most of that same altitude range (20–30 km). What the heck?

2) From about September 2023 through April 2024 the dissipation of stratospheric H2O happened earlier at lower stratospheric altitudes (below 25 km) than at higher altitudes (above 30 km).

Moreover, your post of the Aura MLS contour plot for 75° N shows the H2O dissipating much faster, starting in January 2023, at lower altitudes (below 23 km) compared to higher altitudes (above 25 km).

Reply to  ToldYouSo
November 4, 2024 11:21 am

Funny watching anti HT effect stooges now denying that the GHE from WV even exists.

Denying that extra WV in the atmosphere slows energy loss.

Still plenty of WV to slow the loss of energy from the 2023 El Nino event.

Rather than the normal fast drop-off like in 1998 and 2016, this will take a bit longer.

h2o_MLS_vLAT_tap_75S-75N_10hPa
Reply to  bnice2000
November 5, 2024 7:17 am

“Funny watching anti HT effect stooges now denying that the GHE from WV even exists.”

Really?

The typical range of water vapor in the stratosphere is between 3 and 7 parts per million by volume (ppmv). Note that the contour chart that you posted for H2O in the stratosphere at the 10 hPa pressure level—as well as the numerous similar charts posted by Richard M and you—has a color-coding anomaly scale ranging from -1 to +1 ppm H2O concentration. So, at most the HT eruption caused a temporary increase of about 1.5 ppm (or 30% of an average of 5 ppm) in stratospheric water content, assuming the Aura MLS data is accurate.

Meanwhile, the typical range of water vapor in the troposphere is around 3000 to 4000 ppmv, with the highest concentrations found near the equator and decreasing towards the poles.

Most science-oriented persons can understand that a concentration difference on the order of 1000:1 can mean that water vapor will have a much greater “greenhouse effect” on Earth’s radiation balance when existing in the troposphere than it will have when existing in the stratosphere.

Now, you were saying something about “stooges” . . .

November 4, 2024 6:41 am

Wow, Dr. Spencer, thank you so much for your rather detailed summary of the “why and when” of your data collection and data adjustment methods, as well as the reasoning for now dropping the NOAA-19 satellite data set from your averaging to obtain the LT trends that UAH reports. Making this readily available for public consideration, without apology, is quite rare today!

This is just one reason that I trust UAH GLAT temperature reporting more than any other source (e.g., RSS and NOAA).

However, I wish that UAH would cease reporting monthly average temperature anomalies and decadal trends to 0.01 C resolution, because I don’t think that is justifiable considering the long trail of processing, and the associated uncertainties, required to convert raw microwave sounding unit radiometric data from orbiting satellites to an effective global temperature. I realize that you are probably doing this to make the trends more apparent, but it conveys a sense of data accuracy that really isn’t factual, IMHO.

bdgwx
Reply to  ToldYouSo
November 4, 2024 7:19 am

They actually report to 0.001 C resolution. See here.

Reply to  bdgwx
November 4, 2024 9:43 am

Here is what bothers me.

I = 5.67×10⁻⁸ · 274⁴ = 319.5842075
I = 5.67×10⁻⁸ · 274.001⁴ = 319.5888729

That means resolving temperature to 0.001 (±0.0005) K requires resolving the flux to about 319.5889 − 319.5842 ≈ 0.005 W/m².

That’s a very demanding resolution. I found one paper that had a ±5 W/m² uncertainty. That would not allow the resolution necessary to determine temperatures to thousandths of a degree.
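
A short sketch of that arithmetic (my own, using the Stefan–Boltzmann law I = σT⁴ with the values quoted above): it shows the flux change corresponding to 0.001 K at 274 K and, conversely, roughly what a ±5 W/m² flux uncertainty would imply for temperature under a local linearization.

```python
SIGMA = 5.67e-8  # W m^-2 K^-4, Stefan-Boltzmann constant

def flux(t_kelvin: float) -> float:
    """Blackbody radiative flux I = sigma * T^4."""
    return SIGMA * t_kelvin ** 4

dI = flux(274.001) - flux(274.0)
print(f"Flux change for 0.001 K at 274 K: {dI:.4f} W/m^2")        # ~0.0047 W/m^2

# Conversely, a +/-5 W/m^2 flux uncertainty maps to roughly this temperature uncertainty:
dT_per_flux = 0.001 / dI                                          # K per (W/m^2), local slope
print(f"Implied temperature uncertainty for +/-5 W/m^2: ~{5 * dT_per_flux:.2f} K")
```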

Jeff Alberts
Reply to  ToldYouSo
November 4, 2024 8:11 am

“effective global temperature”

No such thing.

Reply to  Jeff Alberts
November 4, 2024 3:31 pm

Well, it’s certainly better than an ineffective local temperature 😜

Art Slartibartfast
November 4, 2024 6:50 am

I don’t get it. Why is the average global temperature tracked and reported? It has zero physical meaning and nothing can be inferred from this number changing. I would definitely like to know if I am wrong on this and, if so, why.

bdgwx
Reply to  Art Slartibartfast
November 4, 2024 7:17 am

Why is the average global temperature tracked and reported?

So that we can see if it is going up or down.

It has zero physical meaning and nothing can be inferred from this number changing.

It has no less meaning than any other global average that we track. In this particular case we can infer a change in internal energy via the heat capacity equation Q = mcΔT.

I would definitely like to know if I am wrong on this and, if so, why.

Perhaps an example of other global averages might help. NASA tracks a lot of them. See here for only a small subset.

Jeff Alberts
Reply to  bdgwx
November 4, 2024 8:13 am

“It has no less meaning than any other global average that we track.”

So as meaningless as “global sea level” and the like? Sheesh.

bdgwx
Reply to  Jeff Alberts
November 4, 2024 9:19 am

So as meaningless as “global sea level” and the like?

I think it depends on your definition of “meaningless”. If your definition is so loose that equations like Q = mcΔT or ΔU = Q – W or mathematical theorems like the mean value theorem for integrals would qualify as “meaningless” then yeah I can see why you would describe the averages on NASA’s fact sheet for Earth or any average of intensive properties for that matter as “meaningless”.

Reply to  bdgwx
November 4, 2024 11:26 am

The attempt to look like you know something.

…. is FAILING completely !!

Reply to  bnice2000
November 4, 2024 3:04 pm

The attempt to look like you know something.

…. is FAILING completely !!

Like you would know, lol!

Reply to  TheFinalNail
November 4, 2024 3:58 pm

A whole lot more than you would.

You are one of the dumbest trolls on the site.

Basically just a NIL-educated idiot !

Reply to  TheFinalNail
November 4, 2024 5:18 pm

Come on dolt. Look up the definition of “mean value theorem for integrals”..

… and explain how it is remotely relevant to what is being discussed.

It is just copy/paste Kamala-talk from beeswax.

Reply to  bdgwx
November 4, 2024 1:41 pm

A single temperature number for the entire planet is insanity; heck, you can see that in the regional temperature changes in the chart, where the changes in one region don’t mesh with other regions at all.

In my region it has warmed slightly over 45 years mostly at night, but the climate is still the same in all that time.

bdgwx
Reply to  Sunsettommy
November 4, 2024 4:29 pm

A single temperature number for the entire planet is insanity,

It’s no less insane than any of the other global averages that scientists have published. See here for other examples.

heck you can see that in the regional temperature changes in the chart

Regional temperatures are still spatial averages. The only difference between those and a global average is the size of the domain.

Reply to  bdgwx
November 4, 2024 5:05 pm

Indirect Appeal to Authority — FAIL

Reply to  bdgwx
November 5, 2024 4:30 am

Which values in your link are intensive values? Volume, length, mass (including gravity, which increases when you add mass), etc. are extensive.

Regional temperatures do *NOT* have spatial averages. The temperature gradient between points may have an average value which is the slope of the temperature curve but not the temperature itself. Even local temperatures don’t have spatial averages. What is the average temperature between Pikes Peak and Colorado Springs at any point in time?

Reply to  bdgwx
November 4, 2024 4:08 pm

 mathematical theorems like the mean value theorem for integrals would qualify as “meaningless” “

Stop blowing smoke. You are assuming that the measurements forming the temperature curve have an accuracy you simply cannot support. Therefore the “mean value theorem” is nothing more than an excuse for a word salad. The issue is that you can’t calculate the value of the integral to any more decimal places than the measurements support – and on a global basis that’s somewhere in the units digit, not the hundredths or thousandths digit. In other words, we don’t know the actual global average temp any better than roughly 15 C with a measurement uncertainty of +/- 5 C.

In fact, the UAH is *not* even a global “temperature”. It is a metric that can, at best, be said to be related to the global temperature. Temperature is an intensive property and simply can’t be averaged. Someday you *really* should use Google and look up what an intensive property is.

Reply to  Tim Gorman
November 4, 2024 5:06 pm

+1000

Reply to  Tim Gorman
November 4, 2024 5:18 pm

It has read a book.. can copy-paste…

… but has basically ZERO COMPREHENSION. !

Reply to  bdgwx
November 5, 2024 4:55 am

The mean value theorem requires a CONTINUOUS function. Temperature is not a continuous function, at least not one you can write a functional relationship for using the samples in any temperature dataset.

Reply to  Tim Gorman
November 5, 2024 5:40 am

The mean value theorem requires a CONTINUOUS function

One could do a numerical integration on a daily temperature curve if the granularity was sufficient (5 minute?). It would give a total degree-day metric. Finding the average of that distribution would still be difficult. Let alone why one would even want to find an average value from a degree-day value.

Climate science is absolutely against moving on from a traditional computation of “average” daily temperatures. Imagine engineers, physicists and chemists not embracing the latest and greatest measurement devices because it would upset comparisons with older measurements.
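
A minimal sketch (my construction, with made-up 5-minute readings) of the kind of numerical integration described above: integrating a sampled daily temperature curve with the trapezoidal rule to get a degree-day style total.

```python
import math

# Hypothetical 5-minute temperature samples over one day (degrees C)
minutes = range(0, 24 * 60, 5)
temps = [10.0 + 8.0 * math.sin(2 * math.pi * (m - 9 * 60) / (24 * 60)) for m in minutes]

# Trapezoidal integration of the temperature curve -> degree-days above 0 C
step_days = 5.0 / (24.0 * 60.0)
degree_days = sum((a + b) / 2.0 * step_days for a, b in zip(temps, temps[1:]))
print(f"Degree-days for this synthetic day: {degree_days:.2f}")
```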

Reply to  bdgwx
November 4, 2024 8:28 am

Averages are useful if you don’t abuse them.

For example, knowing your average driving speed might be useful to get a good idea of how long it will take you to get across town. But using that average in rush hour would show you that you need a different “rush hour average speed”.

If you keep stats you might find that your “rush hour average speed” has decreased with city population increase…. so there is no end of revisions that can be made to make “averages” more like the “historic real results”….And more sophisticated expertise is required to “improve the model”.

At some point, the random changes of reality exceed the predictive value of the model.

bdgwx
Reply to  DMacKenzie
November 4, 2024 9:36 am

Averages are useful if you don’t abuse them.

You can replace “averages” with pretty much anything in this statement and it would still be true.

Reply to  bdgwx
November 4, 2024 10:07 am

kinda like: (name favorite or most recent crisis, disaster or apocalyptic prediction here) is caused by “climate change.”

Mr.
Reply to  DMacKenzie
November 4, 2024 10:45 am

I equate averaging temperature readings with averaging your vehicle’s tire pressures –
the manual specifies 40 psi tire pressure.

You put your gauge on your tires and they read 15, 35, 45, and 65 psi.

So average psi for your 4 tires = 40 psi.

All good to set off on that cross-continental trip, hey?

bdgwx
Reply to  Mr.
November 4, 2024 12:06 pm

That would be like averaging the pressure of Mercury, Venus, Earth, and Mars and getting (0.005 picobar + 92 bars + 1014 mb + 6 mb) / 4 = 23 bars. Using this result to draw a conclusion about Earth and only Earth is an example of an abuse of averaging.

A better analogy would be to pick one and only one of the planets or tires and measure the pressure at different locations within that body and only that body. For your first tire you might get readings of 14.8, 15.0, 15.2, 15.2, 15.0 and 14.8 psi at 0, 60, 120, 180, 240, and 300 degrees respectively. The average is 15 psi which I think everyone including you would agree is physically meaningful.

The point is that just because you can craft an absurd example of an abuse of averaging doesn’t necessarily mean that all cases of averaging are abusive. And in general terms the fallacy you committed here is common enough that it has a name; fallacy of composition.
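
A tiny sketch (mine, using the numbers from the comment) of the two averages being contrasted: lumping unlike bodies together versus sampling repeatedly within one body.

```python
# Cross-planet average (the abuse of averaging described above): unlike things lumped together.
pressures_bar = [0.005e-12, 92.0, 1.014, 0.006]   # Mercury, Venus, Earth, Mars surface pressures, in bar
print(f"Cross-planet mean: {sum(pressures_bar) / len(pressures_bar):.1f} bar")   # ~23 bar, tells us little

# Within-one-tire average: repeated readings of the same body at different positions.
tire_psi = [14.8, 15.0, 15.2, 15.2, 15.0, 14.8]
print(f"Single-tire mean: {sum(tire_psi) / len(tire_psi):.1f} psi")              # 15.0 psi
```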

Mr.
Reply to  bdgwx
November 4, 2024 1:21 pm

measure the pressure at different locations within that body and only that body

And how t.f. would an ordinary vehicle driver measure the psi in different parts of each tire?

Punch a probe through the sidewall a dozen times or so?

And speaking of planets –
which planet are you living on?

Reply to  Mr.
November 4, 2024 4:13 pm

He’s living in “statistical world”, where he can average anything he wants to average and believe it has some kind of relationship to reality. He can also then call whatever he wants an “abuse of averaging” without ever really thinking about the physical similarities between the systems.

I.e. you can average temperature (an intensive property) but you can’t average a car’s tire pressures (an intensive property).

Reply to  bdgwx
November 4, 2024 4:11 pm

Averaging intensive properties *IS* an abuse of averaging. Since temperature is an intensive property, it can’t be averaged. Trying to do so is an abuse of averaging – just like trying to average your car’s tire pressures. Both are intensive properties.

Reply to  bdgwx
November 4, 2024 4:41 pm

The point is that just because you can craft an absurd example of an abuse of averaging doesn’t necessarily mean that all cases of averaging are abusive.

Your example is indicative of what is wrong with the GAT. What is the average temperature variance in the Arctic in a winter month versus the average temperature variance in Buenos Aires in the summer? Are these really comparable values that should be averaged? That leads into the question of why there are so many global sites with little to no warming.

I have never seen a cogent treatment of this from you, just silence. If there is one reason why CAGW continues to fall in importance to people, that is it. Too many locations are not experiencing CAGW.

Reply to  bdgwx
November 4, 2024 5:06 pm

in general terms the fallacy you committed here

Irony Alert.

Reply to  Mr.
November 4, 2024 4:10 pm

This is an example of averaging an intensive property, something bdgwx doesn’t understand and will never understand.

Art Slartibartfast
Reply to  bdgwx
November 4, 2024 9:29 am

The trouble is that m and c are not constants across the globe.

If I have two equal volumes of dry air at standard pressure, one volume at 0 °C and the other at 20°C, bring them together and let them mix without any external influences, guess what the resulting temperature will be?

That’s right, about 9.65 °C, among other reasons because air at 0 °C has a 7.3% higher density than at 20 °C. And then I have left humidity and air pressure out of the equation. To average Q across the globe, you would need a full record of pressure and humidity. In and of itself, temperature does not say anything.
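
A short sketch (my own) of that mixing calculation, assuming ideal-gas behavior and constant specific heat: equal volumes at the same pressure carry different masses because density scales as 1/T, so the mass-weighted mixing temperature comes out below 10 °C.

```python
# Mix two equal volumes of dry air at constant pressure: masses differ because density ~ 1/T.
T1 = 273.15   # K (0 C)
T2 = 293.15   # K (20 C)

m1, m2 = 1.0 / T1, 1.0 / T2                # relative masses of equal volumes (ideal gas, same p)
T_mix = (m1 * T1 + m2 * T2) / (m1 + m2)    # mass-weighted temperature, constant c_p assumed
print(f"Mixing temperature: {T_mix - 273.15:.2f} C")   # ~9.65 C, not 10.00 C
```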

bdgwx
Reply to  Art Slartibartfast
November 4, 2024 10:46 am

The trouble is that m and c are not constants across the globe.

It doesn’t matter. The mean value theorem for integrals says that the average of a function divided by the domain is exactly equal to the full integration of the function over the domain. IOW…the average of m, c, and ΔT is sufficient to compute Q regardless of whether m, c, and ΔT are constant or not.

Reply to  bdgwx
November 4, 2024 11:58 am

Let me guess.. You were a “rote” learner…

… with zero comprehension what you were talking about.

Reply to  bnice2000
November 4, 2024 4:18 pm

T doesn’t have a “function”. How do you integrate it? Where does the mean value theorem come into play?

His base assumption is wrong – he believes you can average an intensive property. There is no “global” T to average, there is no “global T” function to integrate!

Nick Stokes
Reply to  bdgwx
November 4, 2024 3:46 pm

“The mean value theorem for integrals says that the average of a function divided by the domain is exactly equal to the full integration of the function over the domain.”

Not really. That is just the definition of the average. The mean value theorem says that for a continuous function, there is at least one point in the domain where the value equals the average.

bdgwx
Reply to  Nick Stokes
November 4, 2024 7:14 pm

Right, but f(c) is the average. And f(c)(b-a) = integral[f(x).dx,a,b]. What the MVTI also says is that the average of the domain multiplied (corrected typo above) by the domain is equal to the integral of the domain. We don’t even need to know how f(x) is defined. We just need to know f(c), a, and b. The only stipulation is that f(x) is continuous. So we don’t need to know the specific m, c, and ΔT at every dx to compute the total energy Q via integration. An equally valid option is to use the average m, c, and ΔT and multiply by the domain.

For example, we don’t need to know the density kg.m-3 of each specific dx of Earth to calculate its mass. I mean it can be done that way, but the MVTI says it is equally valid to use the average density f(c) = 5513 kg.m-3 and the domain (b-a) = 1.08e21 m3 to get 5.97e24 kg with a simple multiplication. We don’t even need to know how (and it is surely very complex) the function f(x) determines the density at location x. We know f(c)(b-a) will equal integral[f(x).dx,a,b] regardless.
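
A small numerical sketch (mine, with a made-up linear density profile rather than the real Earth) of the two statements in play: the integrated mass equals the volume-average density times the volume, and, per the mean value theorem as Nick Stokes states it, there is some radius where the density actually equals that average.

```python
import math

R = 6.371e6                                   # m, sphere radius (illustrative, Earth-sized)
rho = lambda r: 13000.0 - 10000.0 * (r / R)   # kg/m^3, hypothetical linear density profile

# Numerically integrate rho over the spherical volume
N = 200_000
dr = R / N
mass = sum(rho((i + 0.5) * dr) * 4 * math.pi * ((i + 0.5) * dr) ** 2 * dr for i in range(N))

volume = 4 / 3 * math.pi * R ** 3
rho_avg = mass / volume                       # volume-average density

# For this linear profile, solve rho(r*) = rho_avg: the MVT point inside [0, R]
r_star = (13000.0 - rho_avg) / 10000.0 * R
print(f"mass = {mass:.3e} kg, volume-average density = {rho_avg:.0f} kg/m^3")
print(f"rho equals its average at r* = {r_star / 1000:.0f} km")
```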

Reply to  bdgwx
November 4, 2024 9:00 pm

For example, we don’t need to know the density kg.m-3 of each specific dx of Earth to calculate its mass.

More bullshit from a mathematician.

we don’t need to know the density kg.m-3 of each specific dx of Earth to calculate its mass. MVTI says it is equally valid to use the average density

Exactly how do you get an average density if you don’t measure each “dx” in the volume before you start your method? Do you think the mass of the Earth is homogeneous? If so, explain why satellite orbits vary due to gravitational anomalies.

You continually display your lack of knowledge about accountability for what you assert. People who have experience in the real world with measurements and calculations understand the vagaries involved and know that uncertainty exists.

Tell us where you included any uncertainty in the calculation of 5.97e24 kg! Didn’t enter your mind at all did it?

Reply to  bdgwx
November 5, 2024 12:44 pm

If I have a lead ball with a mass of 1kg and a second lead ball with a mass of 3kg I can “add” them by putting them both on a scale. The result is a mass of 4kg. The masses add physically.

If I have a temp of 10C and a temp of 20C what does their sum represent physically? Can I set them both on a thermometer and measure a temp of 30C?

If you can’t add the quantities then you can’t calculate an average value of the quantities.

Reply to  bdgwx
November 4, 2024 4:16 pm

You continue to leave out the fact that the function has measurement uncertainty – which carries over to the mean value theorem.

You don’t even understand the physical reality of what you are talking about. Q is an extensive property – enthalpy. Temperature is an intensive property. You can’t *get* a function for Q using only T. So what do you think you are integrating?

Reply to  Art Slartibartfast
November 4, 2024 7:25 am

The Web can be your friend in answering such questions . . . attached is a screen grab of a response from Google’s AI (IOW, the GLAT as a numerical—albeit average—number does have physical significance):

[Attached screenshot: Google AI response]
Reply to  ToldYouSo
November 4, 2024 7:56 am

Even Gemini tells users to double check the information it provides.

You didn’t read that bit, did you?

Reply to  HotScot
November 4, 2024 3:35 pm

I didn’t need to because it makes perfect sense with current scientific understanding (as the response properly notes) . . . but I might change my opinion if you can provide a good rebuttal to the stated information—you know, since you favor double checking on all things.

How about it?

Reply to  ToldYouSo
November 4, 2024 4:22 pm

Rebuttal? How do you average an intensive property? And temperature *is* an intensive property.

Did “current science” find the Earth’s tongue somewhere so it could put a thermometer under it?

Reply to  Tim Gorman
November 5, 2024 7:27 am

Back at you: does a thermometer under a human tongue represent a meaningful average temperature for the whole human body having said tongue, or is it instead just a meaningless number (other than measuring just the underside-of-tongue temperature) that physicians record just for fun?

Reply to  ToldYouSo
November 5, 2024 7:56 am

does a thermometer under a human tongue represent a meaningful average temperature

For an individual it does. It is measuring the same thing with a resolution uncertainty of ±0.05 °F. My average temperature is 97.7 °F. A temperature of 98.5 to 98.7 °F means I have a slight fever.

The issue here is that the doctor doesn’t attempt to derive a temperature to two or even three decimal places to evaluate my condition.

old cocky
Reply to  Jim Gorman
November 5, 2024 1:06 pm

The reference oral temperature reading of “about 36 to 37 degrees C” (98.6 degrees F is spurious precision) was empirically derived. It’s a reasonable approximation to the actual core temperature, and seems to be more acceptable to most people than the more representative rectal temperature.
It’s not intended to be an average body temperature. There is known to be an offset to the core temperature, but it’s a quick and easy way to gauge whether there is some anomaly. Remote IR thermometers measuring forehead temperature seem to be the preferred first pass now.

The issue here is that the doctor doesn’t attempt to derive a temperature to two or even three decimal places to evaluate my condition.

Even 1 decimal place is pushing it, especially using Fahrenheit. Half a degree C either side of the normal range warrants further investigation.

Reply to  ToldYouSo
November 5, 2024 1:46 pm

I don’t think you understand what a metric is in terms of physical reality. It is a quantity that can be used to assess or compare something. The temperature under your tongue is a “metric” used to assess the physical reality of your body. Same for a forehead temp or an under-the-arm temp.

The temp under your tongue or under your arm is *NOT* an average temp of your body. It is a metric used by the physician to assess your physical condition.



Jeff Alberts
Reply to  ToldYouSo
November 4, 2024 8:14 am

It doesn’t actually say that it has physical significance. Does the “AI” know about averaging intensive properties?

KevinM
Reply to  Jeff Alberts
November 4, 2024 10:07 am

Googled FYI to remind other curious readers:

“Intensive and extensive properties are characteristics used to describe the physical properties of substances in the fields of physics and chemistry. The terms intensive and extensive were first described by physicist Richard C. Tolman in 1917.

Intensive properties are those that do not depend on the amount of substance present. Examples include temperature, density, and color. These characteristics remain constant regardless of the quantity of the substance.”

Reply to  KevinM
November 4, 2024 3:57 pm

Since age is an intensive property of any given human, it now becomes apparent how nonsensical it is to talk about the average age of any select group of humans.

Got it!

Reply to  ToldYouSo
November 4, 2024 10:19 pm
  1. Intensive vs extensive properties are associated with describing matter.
  2. Where did you find “age” as a property of matter?
  3. Why do you think age is an intensive property?

Extensive properties of a system depend on the amount of matter involved, e.g. mass and volume. Add two masses together and you get more mass.

The age of a system depends on the amount of time it has existed, very similar to the amount of mass and volume. Add more time to the existence of the system and its age increases by the same amount. Thus age should be classed as an extensive property. Adding time increases age, just like adding mass increases mass. You *can* average extensive properties.

Temperature is intensive because if you add two masses of the same temperature together you get the same temperature. Age is extensive because if you add time of existence to a piece of matter you don’t get the same age.

Reply to  Tim Gorman
November 5, 2024 7:39 am

“2. where did you find “age” as a property of matter?”

Ever hear of carbon-14 dating? Radioactive elements (thus matter) have characteristic nuclear decay rates that reflect their age since they were formed.

As for the age of such matter being an intrinsic property, it meets that definition by being both independent of the specific sample size being considered and independent of being variable as the result of external factors.

According to Google AI:
“While radioactive dating is not directly used to determine the age of the universe, it plays a crucial role in estimating the age by analyzing the composition of very old stars, particularly by measuring the abundance of elements like uranium and thorium, which allows scientists to infer the age of the stellar populations and thus provide constraints on the age of the universe itself.”

Reply to  ToldYouSo
November 5, 2024 1:59 pm

If age were an intensive property then how would carbon dating work since the *amount* depends on the age?

it meets that definition by being both independent of the specific sample size being considered and independent of being variable as the result of external factors.”

You are trying to conflate mass with age. Age has nothing to do with mass. Age has to do with time. If an object has existed for one second and then exists for another second then it has existed for a sum of one second plus one second equaling two seconds. The property of “age” can be directly summed.

If you have two objects, say a 1kg lead ball and a 2kg lead ball, each which have existed for ten years and then continue to exist for another ten years then the age of each will be twenty years. Time adds. And the age of each ball has nothing to do with the mass of the ball.

If one of the balls has existed for 10 years and the other for 20 years then you *can* sum their ages and get a total age of 30 years, meaning the average age of the two balls is 15 years. Again, time adds. It is an extensive property.

Reply to  Jeff Alberts
November 4, 2024 4:51 pm

Sure, sure . . . if you want to go down the path of asserting that average temperature has no “physical significance”, go right ahead.

Try this, when the weatherman says that the temperature at LAX will peak today at 76 °F under sunny skies, but only reached a high of 58 °F yesterday due to overcast conditions, do you think that is nonsense because either (a) LAX represents a distributed area and thus cannot have a meaningful average temperature, or (b) temperature is an intensive property and therefore cannot be meaningfully averaged?

Reply to  ToldYouSo
November 4, 2024 10:26 pm

If you add 76F and 58F together what do you get? 76F? 58F? 134F?

If you add 2kg and 3kg together what do you get? 2kg? 3kg? 5kg?

You have to add quantities before you can determine an average. What does adding temperatures mean physically? What does adding masses mean physically?

Just because you can add two numbers together doesn’t mean the sum makes physical sense. If it did make physical sense then the distinction between intensive and extensive properties of matter wouldn’t exist.

Reply to  Tim Gorman
November 5, 2024 6:16 am

What does adding temperatures mean physically?

Are you still trying to pull this nonsense? The parts of an equation do not all have to have a physical meaning. You keep going on about the variance of temperature. How do you find that if every part needs to have a physical meaning?

To get the variance you first have to find the mean of the temperature, so by your logic that’s already physically meaningless. But then you have to square all the differences. What physical meaning can you attach to a square of a temperature difference? And what meaning can you attach to the sum of the squares of the differences?

And as has been pointed out many times – if you don’t like the sum of temperatures, you can always convert it to a problem of summing temperature times an extensive property. By definition the product of an intensive and an extensive property is extensive. You can multiply temperature by area, volume or time, and sum the extensive property. You should understand this because it’s what you do when you add degree-days.

Reply to  Bellman
November 5, 2024 9:10 am

The parts of an equation do not all have to have a physical meaning.

An equation describing a physical property must have parts with a physical meaning. Does the term dimensional analysis mean anything to you?

Only a mathematician would think numbers are just numbers!

To get the variance you first have to find the mean of the temperature, so by your logic that’s already physically meaningless.

You almost have the right idea, but then you fall off the wagon!

If I have a block of Pb that is 12 grams and cut it into 4 equal pieces, what is the average mass? 12/4=3 grams. Now the temperature of the block is 80°F. Does the temperature of each smaller block equal 80/4=20°F?

That is the difference between extensive and intensive. You can’t just dismiss the difference without a physical explanation of why you can ignore the intensive property.

Essentially, averaging temperature values makes the assumption that all the conditions that determine temperature are exactly alike at all locations. Since we know that is not true, averaging introduces significant uncertainty in the result and that is on top of any measurement uncertainty.

Climate science just ignores all this and says, hell, these are just numbers, let’s just average them willy-nilly and see what we get!
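
A trivial sketch (mine, using the numbers from the Pb example above) of the distinction being drawn: an extensive property divides with the sample, an intensive one does not.

```python
# Cut a 12 g lead block at 80 F into 4 equal pieces
mass_g, temp_f, pieces = 12.0, 80.0, 4

piece_mass = mass_g / pieces   # extensive: mass divides with the sample -> 3 g each
piece_temp = temp_f            # intensive: each piece is still at 80 F, not 80/4 = 20 F
print(piece_mass, piece_temp)
```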

Reply to  Jim Gorman
November 5, 2024 9:25 am

Number is numbers!

Reply to  Jim Gorman
November 5, 2024 10:03 am

An equation describing a physical property must have parts with a physical meaning.

You’re just reasserting what I deny. We could be here all day doing that.

Does the term dimensional analysis mean anything to you?

Could you explain why adding temperatures fails dimensional analysis? The dimension of a temperature is Θ. The sum of any number of temperatures is still Θ. Dividing by a number still leaves you with Θ.

Only a mathematician would think numbers are just numbers!

I think you underestimate the intelligence of the average non-mathematician. I’m sure most people understand the concept of 2 + 2 = 4.

Numbers can be “just” numbers, and they can also represent real things. The beauty of maths is it’s possible to switch between the two.

You almost have the right idea, but then you fall off the wagon!

You completely ignored the points. If you think averages of temperatures cannot have any physical meaning then how can you talk about the variance of temperatures? If every part of an equation has to make physical sense, then what physical meaning do you think the square of a temperature has?

Do you think variance has a physical meaning for intensive properties, and if not do you still accept it has a meaning?

Essentially, averaging temperature values makes the assumption that all the conditions that determine temperature are exactly alike at all locations.

No it does not. As always you seem to be blinded by your desire for everything to have a single physical meaning. You never accept that something can be useful even if it does not represent an exact physical property.

Do you think the average May temperature calculated in TN1900 had a physical meaning? If so what? Do you think every day had exactly the same conditions?

Reply to  Bellman
November 5, 2024 11:33 am

Could you explain why adding temperatures fails dimensional analysis?

Now you are being a troll.

Bellman

The parts of an equation do not all have to have a physical meaning.

JG (my response)

An equation describing a physical property must have parts with a physical meaning. Does the term dimensional analysis mean anything to you?

We are discussing measurements. Measurements are done in SI units. If there are non-dimensional constants (π) they are always applied to a dimensioned measurement.

JG

Essentially, averaging temperature values makes the assumption that all the conditions that determine temperature are exactly alike at all locations.”

Bellman

No it does not. As always you seem to be blinded by your desire for everything to have a single physical meaning. You never accept that something can be useful even if it does not represent an exact physical property.

This makes little sense and is no more than a word salad with no meaning.

If the factors that determine a given temperature are not similar, then averaging them into a single number makes no sense at all. Why do you think the GUM, Dr. Taylor, NIST, etc., emphasize measuring the SAME THING? You are so far afield you don’t even know what kind of ball park you are in.

Reply to  Jim Gorman
November 5, 2024 2:32 pm

Now you are being a troll.

No. Genuine question. I may not understand dimensional analysis as well as you.

But nothing you write explains why you think dimensional analysis means you cannot add temperatures.

If the factors that determine a given temperature are not similar, then averaging them into a single number makes no sense at all.

You’re just asserting this rather than explaining why you think it makes no sense. Though you have changed from insisting all the factors have to be “exactly alike” to saying they have to be similar. How similar are the factors in that single weather station in TN 1900? Why would you get such a range of temperatures if all the factors were similar?

But you are also digressing from the point, which was the claim that intensive properties cannot be averaged. Nothing to do with similar conditions.

And you’ve ignored all my questions about variance. Do you think it’s possible to have a variance of temperatures?

Reply to  Bellman
November 6, 2024 6:47 am

“No. Genuine question. I may not understand dimensional analysis as well as you.
But nothing you write explains why you think dimensional analysis means you cannot add temperatures.”

Your reading comprehension skills are showing again. Dimensional analysis is a clue that the variables in a functional physical relationship have to have a physical meaning.

The problem with adding temperatures is that temperature is an intensive property – meaning that creating a pseudo-functional relationship (i.e. an average temperature) that requires adding temperatures doesn’t work – the addition of temperatures has no physical meaning so the pseudo-functional relationship has no physical meaning either.

“You’re just asserting this rather than explaining why you think it makes no sense.”

It’s the old adage of apples and oranges. What does the average diameter calculated from a mixed data set of apple and orange diameters tell you about either apples or oranges? The average is just a statistical descriptor of the data set you get from jamming the unlike things together – and has no physical meaning at all. Calculating the average is nothing more than mathematical masturbation.

Temperatures derived from different conditions are exactly like apples and oranges. The average has no real physical meaning. It’s based on an unphysical assumption that the global temperatures are all part of a common gradient field and that if the temperature here is X then the temperature at Y must be somehow related to X based on a linear gradient. And “infilling” and “homogenization” don’t even purport to follow the gradient assumption; those practices just assume that if the temp here is A then the temp there is A as well!

The old IPCC stricture of “The climate system is a coupled non-linear chaotic system, and therefore the long-term prediction of future climate states is not possible” is abandoned when trying to define global averages.

“Why would you get such a range of temperatures if all the factors were similar?”

Every single time you mention TN1900 you *need* to list out all the assumptions made in that document so you can try to reach an understanding of what it is doing and teaching. It’s obvious that the only understanding you get from the document is “you can average temperatures”.

But you are also digressing from the point, which was the claim that intensive properties cannot be averaged. Nothing to do with similar conditions.”

You just further demonstrated your lack of understanding of intensive vs extensive properties.

“And you’ve ignored all my questions about variance. Do you think it’s possible to have a variance of temperatures?”

Local temperatures can vary. Global temperature DATA has variance. Local temperatures vary based on a functional relationship. Global temperature data has variance based on the shape of the data distribution. The two are *not* the same thing. One is a physical realization and the other is a statistical descriptor of data derived from different things.

Reply to  Bellman
November 5, 2024 1:30 pm

Are you still trying to pull this nonsense? “

It’s not nonsense to anyone trained in physical science. It’s quite obvious that you didn’t answer my question because you *can’t*.

“The parts of an equation do not all have to have a physical meaning.”

Every part of a function involving reality has to have a physical meaning. You didn’t offer up one single example to refute that.

“To get the variance you first have to find the mean of the temperature, so by your logic that’s already physically meaningless.”

YEP!

“What physical meaning can you attach to a square of a temperature difference?”

The word “difference” indicates a sum is being done. You can’t sum intensive properties.

“And as has been pointed out many times – if you don;t like the sum of temperatures, you can always convert it to a problem of summing temperature times an extensive property”

Yep, and those multipliers are things like humidity and pressure – and result in a calculation of enthalpy. You didn’t even know about enthalpy until it was explained to you here on WUWT! Enthalpy *is* an extensive property.

It’s why climate science should be using ENTHALPY instead of temperature to assess the state of the Earth’s biosphere. You *can* average enthalpy.

“Numbers is numbers” is a meme for statisticians and mathematicians who have not a care for the real world. That seems to include you.

Reply to  Tim Gorman
November 5, 2024 2:52 pm

It’s quite obvious that you didn’t answer my question because you *can’t*.

You didn’t ask me any questions. If you mean the ones you asked ToldYouSo – I was answering them by pointing out that just because part of the equation has no physical meaning does not mean the equation cannot be done, or the result is meaningless.

If you add 76F and 58F together what do you get? 76F? 58F? 134F?

Sorry, but F is not an SI unit, so as Jim said these are not measurements. But regardless, if you add 76F and 58F you get 134F, which is meaningless. If you divide by 2 you get 67F, which is the average of your two measurements and has meaning. If you don’t think it has meaning you will have to explain why the GUM and TN1900 use such averages. Or explain how example 8.5 from Taylor works. It involves the sum of 5 different temperatures. Taylor spells this out as ΣT = 260; that 260 is a meaningless value, yet the regression derived from it has meaning.

Reply to  Bellman
November 6, 2024 7:22 am

 I was answering them by pointing out that just because part of the equation has no physical meaning does not mean the equation cannot be done, or the result is meaningless.”

An equation that has no physical meaning is what? Mathematical masturbation?

If part of the so-called “functional” relationship has no physical meaning, then the functional relationship has no meaning either.

“67F, which is the average of your two measurements and has meaning.”

And exactly what “meaning” does it have? It is a statistical descriptor. What is it describing? If 134F is meaningless then 134F/2 is meaningless as well. You can’t add meaning by multiplying by a constant. Meaningless x 2 = meaningless.

“If you don’t think it has meaning you will have to explain why the GUM and TN1900 use such averages.”

You STILL have no understanding of TN1900 at all. You are as dense as an anvil. The assumptions in TN1900 are meant to make the temperature data into measurements of the SAME thing taken at the same time, i.e. under repeatable conditions. You are *not* averaging intrinsic values of different things to come up with an average in TN1900. You are trying to come up with an accurate-as-possible value for the intrinsic property of a single physical object, i.e. Tmax. You *can* measure intrinsic properties. You can’t average intrinsic properties of different objects!

Reply to  Tim Gorman
November 6, 2024 3:26 pm

Still avoiding the question. Do you consider the variance of temperatures to be meaningful?

The assumptions in TN1900 are meant to make the temperature data into measurements of the SAME thing taken at the same time

You *can* measure intrinsic properties. You can’t average intrinsic properties of different objects!

And now you are just contradicting yourself. First you insist that adding temperatures is physically meaningless, and therefore any average of temperatures is meaningless. But then argue that it is meaningful if the values are of the “same thing”. How? What physical meaning do you attach to adding multiple readings of the same thing? If you measure the temperature of something as 20°C, then measure it again as 21°C, what physical meaning do you attach to the sum of 41°C? How is that any more meaningful than adding the temperature of two different things?

As to TN1900, if you want to claim that the thing you are measuring across 31 days is “the same thing”, you have to understand that that thing is not a temperature, it’s the mean of a probability distribution of temperatures. And if you had the ability, you might also realize that this is no different to measuring the mean of the probability distribution of global temperatures.

Reply to  Bellman
November 6, 2024 4:31 pm

“Do you consider the variance of temperatures to be meaningful?”

Temperature doesn’t have variance. Variance is a statistical descriptor of a data set. Temperature is a functional relationship, not a data set. Your question is ill-posed – as usual.

And now you are just contradicting yourself. First you insist that adding temperatures is physically meaningless, and therefore any average of temperatures is meaningless. But then argue that it is meaningful if the values are of the “same thing””

In the first case you are averaging the intensive properties of different things. In the second case you are averaging measurements of an intensive property. They are *NOT* the same thing at all!

An object has an intensive property known as temperature. In order to accurately quantify the value of that intensive property you make multiple measurements of the object. Averaging the measurements is *NOT* averaging the intrinsic properties of multiple things.

Stop trolling. A six year old would understand the difference here.

” What physical meaning do you attach to adding multiple readings of the same thing?”

That measurements have a physical measurement uncertainty that has nothing to do with the actual intrinsic property of the object being measured!

“If you measure the temperature of something as 20°C, then measure it again as 21°C, what physical meaning do you attach to the sum of 41°C? How is that any more meaningful than adding the temperature of two different things?” (bolding mine, tpg)

The operative word here is “measure” although you can’t seem to recognize that. Averaging measurements is not the same thing as averaging intrinsic properties.

“You have to understand that that thing is not a temperature, it’s the mean of a probability distribution of temperatures.”

The assumptions Possolo makes turn the MEASUREMENTS into a probability distribution of MEASUREMENTS, not into a probability distribution of intrinsic properties.

Again, a six year old could understand this, why can’t you?

Reply to  Bellman
November 6, 2024 4:52 pm

you have to understand that that thing is not a temperature, it’s the mean of a probability distribution of temperatures.

From the GUM:

4.2.1 In most cases, the best available estimate of the expectation or expected value μq of a quantity q that varies randomly [a random variable (C.2.2)], and for which n independent observations qk have been obtained under the same conditions of measurement (see B.2.15), is the arithmetic mean or average q̅ (C.2.19) of the n observations:

In TN 1900, “q” is a random variable holding values of “monthly_average_of_Tmax”. It has 22 measured quantities contained in it. The readings have been obtained under the same conditions of measurement, in this case, reproducibility conditions. If you check NIST’s Engineers Statistical Handbook, you will see that measurements on successive days satisfy the uncertainty of reproducibility conditions.

Continuing from the GUM Section 4.2.1

Thus, for an input quantity Xi estimated from n independent repeated observations Xi,k, the arithmetic mean X̄i obtained from Equation (3) is used as the input estimate xi in Equation (2) to determine the measurement result y; that is, xi = X̄i. Those input estimates not evaluated from repeated observations must be obtained by other methods, such as those indicated in the second category of 4.1.3.

There is only one input quantity in this random variable, Tmax, so i=1 and k=22.

From the GUM:

4.2.2 The individual observations qk differ in value because of random variations in the influence quantities, or random effects (see 3.2.2). The experimental variance of the observations, which estimates the variance σ² of the probability distribution of q, is given by

s²(qₖ) = [1/(n−1)] Σⱼ (qⱼ − q̅)²

Random variations in the influence quantities is what measurement uncertainty is all about.

Section 4.2.2 goes on.

This estimate of variance and its positive square root s(qₖ), termed the experimental standard deviation (B.2.17), characterize the variability of the observed values qₖ, or more specifically, their dispersion about their mean q̅.

From the GUM

B.2.18

uncertainty (of measurement)

parameter, associated with the result of a measurement, that characterizes the dispersion of the values that could reasonably be attributed to the measurand

If the stated value of the random variable is the mean, i.e., q̅, then s²(qₖ) is the dispersion of measurements about the mean q̅.

If you have a different section of the GUM you would like to use as a reference, feel free to post it along with your interpretation.
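
For readers following along in the GUM, here is a small sketch (my own, with made-up daily Tmax readings standing in for the 22 observations discussed) of the two quantities from 4.2.1 and 4.2.2: the arithmetic mean q̅ and the experimental standard deviation s(qₖ).

```python
import statistics

# Hypothetical daily Tmax readings (deg C), standing in for the 22 observations discussed
q = [24.1, 25.3, 23.8, 26.0, 25.5, 24.9, 23.2, 27.1, 26.4, 25.0, 24.4,
     25.8, 26.9, 24.7, 23.9, 25.1, 26.2, 24.0, 25.6, 26.7, 23.5, 25.4]

q_bar = statistics.mean(q)    # GUM 4.2.1: arithmetic mean, best estimate of the expectation of q
s_qk = statistics.stdev(q)    # GUM 4.2.2: experimental standard deviation s(qk),
                              # the dispersion of the observations about q_bar
print(f"n = {len(q)}, mean = {q_bar:.2f} C, s(qk) = {s_qk:.2f} C")
```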

Reply to  Jim Gorman
November 6, 2024 6:55 pm

Yet more endless quoting from the GUM. And all such a distraction from the point. If you claim that averaging temperatures is impossible because the sum of temperatures is physically meaningless, how can you then turn round and say it’s fine to do it if the measurements are of the same thing? Everything here is a desperate attempt to distract from that.

In TN 1900, “q” is a random variable holding values of “monthly_average_of_Tmax”.

Meaningless nonsense. q is not a random variable of monthly_average_of_Tmax. There is only one monthly average of TMax, and that is not a random variable. q, in Ex2 is the random variable from which daily maximum values are taken. The qk are the 22 daily values, individual measurements taken from the assumed normal probability distribution with mean τ.

The readings have been obtained under the same conditions of measurement, in this case, reproducibility conditions.

Ex2 never claims these are “reproducibility conditions”. If I’ve missed something please quote where it does make such a claim.

Random variations in the influence quantities is what measurement uncertainty is all about.

Usually you’re the ones going on about all the other types of measurement uncertainty, such as uncertainty in the definition, uncertainty in adjustments for systematic errors, etc.

“If the stated value of the random variable is the mean, i.e., q̅, then s²(qₖ) is the dispersion of measurements about the mean q̅.”

So this is where we end up? Your continued inability to distinguish between the uncertainty of an individual measurement and the uncertainty of the mean – which is really odd, given that TN1900 Ex 2 is telling you exactly how they calculate the uncertainty of the mean.

If you have a different section of the GUM you would like to use as a reference, feel free to post it along with your interpretation.”

Section 4.2.3, the one immediately after 4.2.2. The section specifically mentioned by Ex2 (along with 4.4.3 & G.3.2). I’m not going to post the whole section and then edit all the symbols, when you obviously have the document to hand. But it’s the section that starts by saying the “best estimate of … the uncertainty of the mean”, and gives the equation as dividing the variance of qk by N. And of course this is the same as dividing the standard deviation of qk by √N to get the standard error of the mean, or as they call it the experimental standard deviation of the mean.

4.4.3 is just the example of this involving getting the mean of 20 observations of temperature and dividing their standard deviation by √20 to get the standard uncertainty of the mean.

G.3.2. is the part where they tell you to use a student-t distribution, rather than a normal one, to get the expanded uncertainty.
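
And a companion sketch (mine, with the same kind of made-up readings) of the steps described above from 4.2.3, 4.4.3, and G.3.2: divide s(qₖ) by √n to get the experimental standard deviation of the mean, then apply a Student-t coverage factor for the expanded uncertainty.

```python
import math
import statistics

# Hypothetical daily Tmax readings (deg C), standing in for the 22 observations discussed
q = [24.1, 25.3, 23.8, 26.0, 25.5, 24.9, 23.2, 27.1, 26.4, 25.0, 24.4,
     25.8, 26.9, 24.7, 23.9, 25.1, 26.2, 24.0, 25.6, 26.7, 23.5, 25.4]

n = len(q)
s_qk = statistics.stdev(q)        # experimental standard deviation of the observations
u_mean = s_qk / math.sqrt(n)      # experimental standard deviation of the mean (GUM 4.2.3)
k = 2.080                         # Student-t factor, 95 % coverage, n-1 = 21 degrees of freedom (G.3.2)
print(f"u(mean) = {u_mean:.3f} C, expanded U = {k * u_mean:.3f} C (95 %)")
```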

Reply to  Bellman
November 7, 2024 3:43 am

Yet more endless quoting from the GUM.”

The GUM is a recognized source. YOU are not. You can’t even tell the difference between averaging multiple measurements of the same thing and averaging multiple measurements taken from *different* things.

how can you then turn round and say it’s fine to do it if the measurements are of the same thing?”

Because one is multiple measurements of a single instance of an intensive property while the other is averaging different single instances of an intensive property.

Averaging the different measurements is *NOT* averaging different instances of an intensive property.

Why is this so hard for you to figure out?

Meaningless nonsense.”

Only to *YOU*. Primarily because you refuse every single time to list out the assumptions made in TN1900 and the implications of those assumptions!

“Ex2 never claims these are “reproducibility conditions”. If I’ve missed something please quote where it does make such a claim.”

Of course it does! Again, you always refuse to list out the assumptions in TN1900 and to understand their implications. For example: “The daily maximum temperature τ in the month of May, 2012, in this Stevenson shelter, may be defined as the mean of the thirty-one true daily maxima of that month in that shelter.” This certainly implies “reproducibility” in measurements of “Tmax”.

So this is where we end up? Your continued inability to distinguish between the uncertainty of an individual measurement and the uncertainty of the mean”

The typical use of the term “uncertainty of the mean” is as a measure of sampling error. This has *nothing* to do with uncertainty of measurement. It only has to do with how precisely the average of the data can be determined, which is totally separate from how accurate the average of the measurement data is.

Once again, you need to start using the term “standard deviation of the sample means” instead of “uncertainty of the mean”. Your use of the term “uncertainty of the mean” is nothing more than a lever to use the argumentative fallacy of Equivocation so you can use whatever definition you need for “uncertainty of the mean”. At the very least use the terms “sampling error of the population mean” and “measurement uncertainty of the mean”. They are *NOT* the same thing.

Reply to  Tim Gorman
November 7, 2024 6:59 am

The GUM is a recognized source. YOU are not. You can't even tell the difference between averaging multiple measurements of the same thing and averaging multiple measurements taken from *different* things.

This is so true, despite these jokers having been told the truth many, many times.

Reply to  Bellman
November 7, 2024 8:09 am

Meaningless nonsense. q is not a random variable of monthly_average_of_Tmax. There is only one monthly average of TMax, and that is not a random variable. q, in Ex2 is the random variable from which daily maximum values are taken.

From the GUM

In most cases, the best available estimate of the expectation or expected value μq of a quantity q that varies randomly [a random variable (C.2.2)]

“q” is a random variable containing daily measurements of Tmax, i.e., qₖ. The mean μ(q) is the monthly average stated value. You apparently don’t want to take the time to understand anything.

But it’s the section that starts by saying the “best estimate of … the uncertainty of the mean”, and gives the equation as dividing the variance of qₖ, by N

You don’t even understand the difference between the experimental standard deviation of the mean and standard deviation.

From the GUM

This estimate of variance and its positive square root s(qk), termed the experimental standard deviation (B.2.17), characterize the variability of the observed values qk, or more specifically, their dispersion about their mean q̄.

the experimental standard deviation of the mean s(q̄) … quantify how well q̄ estimates the expectation µq of q, and … may be used as a measure of the uncertainty of q̄.

Read these closely. The experimental SD is the dispersion of measurement results about the mean q̅. The experimental SD of the mean is the uncertainty of the mean.

From the GUM
B.2.18 uncertainty (of measurement) parameter, associated with the result of a measurement, that characterizes the dispersion of the values that could reasonably be attributed to the measurand

Does the word dispersion appear in both types of SD? I wonder why not?

Yet more endless quoting from the GUM

If you don’t like the GUM, then find your own metrology resource to quote from. There are a number of them on line.

Reply to  Jim Gorman
November 7, 2024 6:05 pm

I’m not sure you know what you are arguing with now. You say

The experimental SD of the mean is the uncertainty of the mean.

That’s what I’ve been telling you, so why do you keep arguing that the standard deviation is the uncertainty of the mean?

If you don’t like the GUM

I like the GUM. The GUM isn't the problem. It's the fact that you just needlessly cut and paste from it at length, whilst never seeming to understand what it says. Mindless copying is not as useful as saying what it says in your own words.

Does the word dispersion appear in both types of SD? I wonder why not?

A standard deviation is by definition a dispersion.

Common examples of measures of statistical dispersion are the variance, standard deviation, and interquartile range. For instance, when the variance of data in a set is large, the data is widely scattered. On the other hand, when the variance is small, the data in the set is clustered.

https://en.wikipedia.org/wiki/Statistical_dispersion

Reply to  Tim Gorman
November 6, 2024 5:56 pm

YEP!

So when people complain that UAH doesn’t publish variance or standard deviations, we can just say that would be physically meaningless.

The word “difference” indicates a sum is being done. You can’t sum intensive properties.

So now you are saying the difference between two temperatures is physically meaningless? Presumably that makes the Celsius and Fahrenheit scales meaningless, seeing as they are based on the difference of two temperatures.

Reply to  Bellman
November 7, 2024 4:23 am

So when people complain that UAH doesn’t publish variance or standard deviations, we can just say that would be physically meaningless.”

If you can't calculate one statistical descriptor then what makes you think you can calculate others? If the mean is meaningless then how can the standard deviation be meaningful? The standard deviation surrounds the mean. If you don't have a mean then how do you locate the standard deviation interval?

“So now you are saying the difference between two temperatures is physically meaningless?”

Two different temps of the SAME THING? Yes, that is physically meaningful. It’s how you calculate heat loss/gain. The difference in temp of two DIFFERENT THINGS? How would that let you calculate heat loss/gain for anything?

If you have a 1/4″ aluminum rod at X degF and a 1/4″ copper rod at Y degF what does the difference in temp actually tell you that is physically meaningful? It might tell you that one will oxidize your skin and the other will freeze your skin but how does the difference in temp let you calculate anything associated with heat?

You refuse to leave your statistical world and learn about the real world. Location A can be at X degF and Location B can be at Y degF yet both can have the same enthalpy. When the CAGW alarmists say “the earth is warming” that implies a gain in enthalpy, i.e. a gain in heat content. Climate science assumes higher temps mean higher enthalpy but you can’t tell that from temperature alone!

Reply to  Tim Gorman
November 7, 2024 5:55 am

If you can’t calculate one statistical descriptor then what makes you think you can calculate others?

Well done. That's the point I'm making. I just wanted to be clear you accept the logic of “a meaningful equation cannot have physically meaningless components”.

So you do claim that the standard deviation of temperatures is meaningless. And hopefully you will remember this the next time someone insists that all temperature records need to report variance.

But there are still many issues with your ban on “physically meaningless” components. Do you allow the standard deviation of a set of time records? If so, what physical meaning do you attach to squared time? The same with just about any type of measurement. You can't calculate the standard deviation without squaring the values, and those squares are unlikely to have any physical meaning.

Two different temps of the SAME THING? Yes, that is physically meaningful.

I was asking about your statement that “The word “difference” indicates a sum is being done. You can’t sum intensive properties.“. You keep falling back on this special pleading that temperatures are not intensive if they are the same thing. You never justify this claim.

The logic of what you are saying is that you can say that a bar that changes from 20°C to 30°C has warmed by 10°C. But you cannot say that a bar with a temperature of 30°C is 10°C warmer than one with a temperature of 20°C. You really need to explain why this makes sense to you, rather than just asserting it makes sense.

The difference in temp of two DIFFERENT THINGS? How would that let you calculate heat loss/gain for anything?

You keep doing this. Thinking up one reason why you might want to know something, and assuming that must therefore be the only reason.

I don't have your expertise on thermodynamics, but surely knowing the difference in temperatures between different things is part of understanding heat transfer between them. Heat flows from a hotter body to a cooler one. Here's the first equation I found online for the rate of heat transfer

Q = [K ∙ A ∙ (Thot – Tcold)] / d

Is that equation meaningless because it involves Thot – Tcold?
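
To make the point concrete, a small worked example of that conduction equation. The conductivity, geometry and temperatures are assumptions chosen only to show that it is the temperature difference that enters the calculation:

```python
# Illustrative use of the conduction equation Q = k*A*(T_hot - T_cold)/d.
# All values are assumptions chosen for the example, not measured data.
k = 385.0                    # W/(m.K), thermal conductivity (roughly copper)
A = 1.0e-4                   # m^2, cross-sectional area
d = 0.10                     # m, length of the conduction path
T_hot, T_cold = 30.0, 20.0   # deg C; only the DIFFERENCE enters the equation

Q = k * A * (T_hot - T_cold) / d   # W, steady-state heat flow rate
print(f"Heat flow: {Q:.2f} W for a {T_hot - T_cold:.1f} K temperature difference")
```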

Reply to  Bellman
November 7, 2024 8:21 am

Is that equation meaningless because it involves Thot – Tcold?

You are joking right? Hold one rod in your left hand using an insulated glove and the other rod in your right hand using an insulated glove. Is the heat transfer equation useful? Is the average of the two temperatures useful or even a real thing?

Reply to  Jim Gorman
November 7, 2024 5:22 pm

You are joking right?

You could try answering the question.

Hold one rod in your left hand using an insulated glove and the other rod in your right hand using an insulated glove.

What has that got to do with conduction between the rods?

Is the heat transfer equation useful?

I would assume it is – as I say, I don’t have your “expertise” in thermodynamics.

Reply to  Bellman
November 8, 2024 5:39 am

Is the average of the two temperatures useful or even a real thing?”

Why didn’t you answer this question?

“I would assume it is”

Which is apparently why you can justify using the temperature of a parcel of air in Topeka, KS and a separate one in Nome, AK to get an “average” temperature for the globe.

Reply to  Tim Gorman
November 8, 2024 5:31 pm

km whine: you are in no position to demand I answer your questions.

The average of two temperatures can be useful – it depends on what you want to use it for. An average of lots of things is more likely to be useful. Whether it’s a real thing depends on your philosophical outlook.

“Which is apparently why you can justify using the temperature of a parcel of air in Topeka, KS and a separate one in Nome, AK to get an “average” temperature for the globe.

Why would you want to use just two measurements, from the same small part of the world, to estimate a global average?

Reply to  Bellman
November 8, 2024 6:23 pm

Fail, your bait is stale.

/plonk/

Reply to  karlomonte
November 8, 2024 6:41 pm

I’m beginning to suspect km doesn’t like me.

Reply to  Bellman
November 9, 2024 6:35 am

The average of two temperatures can be useful – it depends on what you want to use it for. An average of lots of things is more likely to be useful. Whether it’s a real thing depends on your philosophical outlook.”

Of what use is the average of the intensive property of two different objects?

Why would you want to use just two measurements, from the same small part of the world, to estimate a global average?”

If averaging two independent intensive properties doesn’t work then averaging 10,000 won’t either.

Reply to  Tim Gorman
November 9, 2024 8:57 am

He won’t answer.

Reply to  Bellman
November 7, 2024 3:10 pm

So you do claim that the standard deviation of temperatures is meaningless.”

Read what you just said! It’s what we’ve been trying to tell you for over two years. You can’t average intensive properties. That is *NOT* the same thing as averaging measurements of a single object to determine the best estimate of a specific intensive property of that single object. E.g. take ten measurements of the temperature of a steel rod and then average the measurements to get the best estimate of the temperature of the rod.

“Do you allow the standard deviation of a set of time records?”

Time is not an intensive value. You *can* add time. You can add the time you take to travel from Point-A to Point-B to the amount of time it takes for you to then travel from Point-B to Point-C in order to find the total time taken to go from Point-A to Point-C.

You *still* don’t understand what standard deviation is. It’s not just a calculation you make of a data set. Take the temperatures recorded by a temp measurement station over the period of a day. You digitize the temperatures and put them into a data set. Exactly what do you think the standard deviation of that data set tells you?

The logic of what you are saying is that you can say that a bar that changes from 20°C to 30°C has warmed by 10°C. But you cannot say that a bar with a temperature of 30°C is 10°C warmer than one with a temperature of 20°C.”

Your lack of reading skills is showing again. That is *NOT* what I said at all!

I spoke of a 1/4″ aluminum rod and a 1/4″ copper rod, not a “bar” (i.e. singular)!

Neither can you equate the amount of heat it takes to move an aluminum rod 10 degF with the amount of heat it takes to move a copper rod 10 degF. The same thing applies to surface and atmospheric temps. The amount of heat it takes to move a cubic foot of air up 10 degF in Topeka, KS is *NOT* the same as the amount of heat needed to move a cubic foot of air up 10 degF in Mexico City. But climate science assumes that it does! And so do you!

but surely knowing the difference in temperatures between different things is part of understanding heat transfer between them.”

Who said there was any heat transfer *between* them?

“Is that equation meaningless because it involves Thot – Tcold”

Just stop. You are making a fool of yourself. Your equation is for conduction of heat. You apparently don’t even realize what “K” is. Conduction occurs through a medium, such as from the end of a rod being heated to the far end, or from a hot rod clamped to a cold rod (which also then requires finding the area of contact). And on and on.

That has *nothing* to do with temperature being an intensive property! How much *conduction* do you suppose there is between a cubic foot of atmosphere in Topeka and a cubic foot of atmosphere in Boston?

Reply to  Tim Gorman
November 7, 2024 6:23 pm

Time is not an intensive value.”

Still avoiding the point. You claim that every part of an equation has to have a physical meaning or else the equation is meaningless. I’m asking if you think time squared is physically meaningful.

You *still* don’t understand what standard deviation is.

It’s so obvious you know you’ve lost the argument when you have to resort to these sniveling insults.

Your lack of reading skills is showing again. That is *NOT* what I said at all!

It’s the implication of what you are saying. Nothing you say contradicts it.

I spoke of a 1/4″ aluminum rod and a 1/4″ copper rod, not a “bar” (i.e. singular)!”

Yes, I’m sure the difference between bars and rods makes all the difference. And I spoke of bars plural – the difference between one bar and another.

Neither can you equate the amount of heat it takes to move an aluminum rod 10 degF with the amount of heat it takes to move a copper rod 10 degF.

Classic Gorman evasion. The question was about the difference in temperatures, not how much heat it took to get to that temperature.

Who said there was any heat transfer *between* them?

I did. You said the difference in temperatures was meaningless – I gave an example where it would have meaning.

You apparently don’t even realize what “K” is.

K is the coefficient of thermal conductivity of the substance

But nice evasion.

Conduction occurs through a medium such as from the end of a rod being heated to the far end, or from a hot rod to cold rod clamped to a cold rod (which also then requires finding the area of contact)

Yes, there are a number of variables. None of which have anything to do with the fact that you need (Thot – Tcold) in the equation. And by your logic the equation is meaningless if Thot – Tcold has no physical meaning.

That has *nothing* to do with temperature being an intensive property!

And more distractions. The question is whether the difference in temperatures has a physical meaning.

How much *conduction* do you suppose there is between a cubic foot of atmosphere in Topeka and a cubic foot of atmosphere in Boston?

And to end we have a red-herring so large it can be seen from space.

Reply to  Bellman
November 8, 2024 5:22 am

The minute you put two bars in contact then they become “an object”, i.e. singular. When the bar is in equilibrium it has ONE intensive property of temperature, not multiple ones. If the bar is *NOT* in equilibrium then each point you measure becomes a separate object. You can’t find an *average* temperature of the two bars in contact by averaging the temperature of the multiple points. That average is not physical, you can’t use it to calculate anything. The amount of heat conducted is based on the temperature difference between adjacent minute portions of the bar, not on the average temperature of the bar as a whole.

You just keep on showing how little you understand physical science.

The only red herring here is *YOU* trying to use heat conduction in an object to prove that you can average the intensive property of temperature to create an average temperature for a parcel of air in Topeka, KS and a different parcel of air in Nome, AK.

Reply to  Tim Gorman
November 8, 2024 5:26 pm

The minute you put two bars in contact then they become “an object”, i.e. singular.

I worry about your spine, given the contortions you put yourself through.

So do you think the world is “an object”, and can it have an average temperature?

The only red herring here is *YOU* trying to use heat conduction in an object to prove that you can average the intensive property of temperature…

More distractions. This wasn’t about averaging, but subtracting.

Reply to  Bellman
November 9, 2024 6:32 am

I worry about your spine, given the contortions you put yourself through.”

The problem is that you simply have no understanding of physical science at all, NONE.

Conduction requires direct heat transfer through a slice of a medium. It is calculated normal to that slice, i.e. dA. If there is no interface between two objects then heat transfer is via convection and/or radiation. The coefficient for convection is different than that for conduction.

“More distractions. This wasn’t about averaging, but subtracting”

No, it’s about whether you can average intensive properties. It was *YOU* that are trying to distract by talking about conduction which has nothing to do with averaging intensive properties.

Reply to  Tim Gorman
November 9, 2024 9:07 am

The problem is that you simply have no understanding of physical science at all, NONE.

Ditto bwx and his force-acceleration “model”. He also thinks curve fitting gives you a model.

Reply to  Tim Gorman
November 7, 2024 7:01 am

Just because it is possible to stuff any set of numbers into the mean formula does not make it a meaningful exercise.

Numbers is numbers!

Reply to  Tim Gorman
November 5, 2024 7:51 am

“If you add 76F and 58F together what do you get? 76F? 58F? 130F?

If you add 2kg and 3kg together what do you get? 2kg? 3kg? 5kg?

You have to add quantities before you can determine an average.”

Who knew?

“Just because you can add two numbers together doesn’t mean the sum makes physical sense.”

If I have two marbles and Johnny gives me three marbles, I can reasonably and practically conclude that I end up with 2+3 = 5 marbles. That makes physical sense to me . . . much more so than discussing the average number of marbles resulting from Johnny’s gift.

Reply to  ToldYouSo
November 5, 2024 9:17 am

If I have two marbles and Johnny gives me three marbles, I can reasonably and practically conclude that I end up with 2+3 = 5 marbles. That makes physical sense to me . . . much more so than discussing the average number of marbles resulting from Johnny’s gift.

Not a good analogy. Marbles are counting numbers with no uncertainty.

A better analogy would be to smash 10 marbles into thousands of pieces and distribute equal weights to 5 different people. What is the uncertainty of weights that each person receives?

Reply to  Jim Gorman
November 5, 2024 5:09 pm

“Marbles are counting numbers with no uncertainty.”

Huh??? . . . and all along I really and truly thought that marbles were physical objects, totally unlike numbers.

BTW, I think it can be said that the irrational number “pi” will always have some uncertainty no matter how many numerals are used to quantify it . . . IOW, there are numbers in use today that DO have uncertainty.

And to further confound you, I’ll just point out that the number called “pi” has physical meaning since it represents the ratio of a circle’s circumference to its diameter, a geometric property with real-world applications.

I literally lost all my marbles a long time ago, so I have none left to smash and thus remain uncertain about THAT analogy. 🙂

Reply to  ToldYouSo
November 7, 2024 6:05 am

And to further confound you, I’ll just point out that the number called “pi” has physical meaning since it represents the ratio of a circle’s circumference to its diameter, a geometric property with real-world applications.

Doesn’t confound me at all. Here is an Instagram link showing this.

https://www.instagram.com/reel/Ckfk2IIPL3e/?igsh=MWo1bmlvc2l4YmcwbA==

Reply to  ToldYouSo
November 7, 2024 6:22 am

I’ll just point out that the number called “pi” has physical meaning since it represents the ratio of a circle’s circumference to its diameter, a geometric property with real-world applications.

The problem is that pi, as the irrational mathematical abstraction, isn't what has real-world applications. For the real world you only need a rough approximation of pi.

A mere 30 or so decimal places of pi is enough to calculate the circumference of a circle the size of the galaxy to the nearest nanometer.
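
A back-of-the-envelope check of that claim (the galactic diameter is an assumed order of magnitude):

```python
# Rough check: with ~30 decimal places of pi, the circumference of a galaxy-sized
# circle is pinned down to about a nanometre. The diameter is an assumption.
d_galaxy = 1.0e21     # m, order of magnitude of a galactic diameter (~100,000 light-years)
err_pi = 1.0e-30      # truncating pi after ~30 decimal places leaves an error of this order

err_circumference = d_galaxy * err_pi   # error in C = pi*d scales linearly with the error in pi
print(f"Circumference error ~ {err_circumference:.0e} m (about a nanometre)")
```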

Reply to  ToldYouSo
November 7, 2024 7:04 am

“[counting] Marbles are counting numbers with no uncertainty.”

Huh??? . . . and all along I really and truly thought that marbles were physical objects, totally unlike numbers.

Either your reading comprehension is lacking or you are playing semantic games.

Reply to  ToldYouSo
November 4, 2024 10:10 am

An average has NO physical significance. Period.

Reply to  Phil R
November 4, 2024 10:22 am

A mean (average) only has meaning if the variance it is calculated from is also quoted. A measurement is only valid if the model, derivation, degrees of freedom, etc. are also stated.

Reply to  Jim Gorman
November 4, 2024 10:42 am

You really ought to take this up with Spencer, although his blog is having problems at the moment.

If you want the weighted variance for anomalies based on the grid data, I make it 0.69 K² for October.

For temperature it’s 116.7 K².
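
For anyone wanting to reproduce a figure of this kind, here is a rough sketch of one way an area-weighted variance can be computed from a gridded anomaly field, using cos-latitude weights. The 2.5° grid and the anomaly values are assumptions for illustration only; this is not the actual UAH grid file or the calculation quoted above:

```python
# One way an area-weighted variance of gridded anomalies can be computed
# (cos-latitude weights). The anomaly field below is synthetic, purely for illustration.
import numpy as np

lats = np.arange(-87.5, 90, 2.5)     # grid-cell centre latitudes (assumed 2.5 deg grid)
lons = np.arange(0, 360, 2.5)
rng = np.random.default_rng(0)
anom = rng.normal(0.7, 0.8, size=(lats.size, lons.size))     # fake anomaly grid, K

w = np.cos(np.deg2rad(lats))[:, None] * np.ones(lons.size)   # area weights per cell
w = w / w.sum()

mean_anom = np.sum(w * anom)                     # weighted global mean
var_anom = np.sum(w * (anom - mean_anom) ** 2)   # weighted variance, K^2
print(f"weighted mean = {mean_anom:.2f} K, weighted variance = {var_anom:.2f} K^2")
```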

bdgwx
Reply to  Bellman
November 4, 2024 1:04 pm

although his blog is having problems at the moment.

I was getting a 403 error trying to post. I came in from a different IP address and I was able to get a post in. I switched back to the original IP address and I’m getting a 403 error again. It is acting like I’ve been very recently IP banned which is obviously weird since I’m not actually banned.

Reply to  bdgwx
November 4, 2024 1:56 pm

Yes, it seems to come and go. I’ll post a comment with no problem, then the next one keeps throwing up 403 errors. But a few minutes later I can post. Either it’s a problem with his hosting, or it’s deliberately limiting the rate of comments.

bdgwx
Reply to  Bellman
November 5, 2024 7:10 am

I’m still getting a 403 error. Maybe he really did ban me yesterday.

Reply to  bdgwx
November 6, 2024 12:52 pm

I’m getting it again today. I doubt that it’s a ban though.

Reply to  Bellman
November 4, 2024 4:05 pm

If you want the weighted variance for anomalies based on the grid data, I make it 0.69 K² for October.

√0.69 = 0.83. That's a pretty large uncertainty, especially when trying to pass off knowing temperatures to a millikelvin value. Do you understand what uncertainty in measurement really means?

Reply to  Jim Gorman
November 4, 2024 4:56 pm

It's not an uncertainty. It's just a measure of how much the anomaly varies across the globe. If you just based a statement on one random part of the globe, it would be the uncertainty as to how much that one value represented the global average.

But UAH is not based on just one random part of the world.

Reply to  Bellman
November 4, 2024 10:30 pm

It’s just a measure of how much the anomaly varies across the globe.”

Variance (i.e. varies across the globe) is a direct metric for uncertainty. The larger the variance the less certain the “average” is. The less certain a value is the higher its uncertainty is.

it would be the uncertainty as to how much that that one value represented the global average.”

That is the direct definition of “measurement uncertainty”!

Reply to  Tim Gorman
November 5, 2024 5:44 am

The larger the variance the less certain the “average” is.

Yes, in general. If this was a random sample of anomalies across the globe (it isn’t) then the uncertainty of the mean would be standard deviation divided by root N. The larger the variance in the data the less certainty in the mean – but that does not mean the variance in the data is the uncertainty of the mean.

That is the direct definition of “measurement uncertainty”!

Read what I said. If you based the global value on just one observation taken at a random place on the globe – then the global variance would be the uncertainty of the global value.

Reply to  Bellman
November 5, 2024 1:06 pm

Yes, in general.”

As far as my library is concerned the average of a distribution with a higher variance is *always* less certain than one with a lower variance.

 random sample of anomalies”

” the uncertainty of the mean would be standard deviation divided by root N.”

The uncertainty of the mean, i.e. the standard deviation of the sample means, is *NOT* the uncertainty of the value of the average. It is a metric for sampling error and *NOT* for accuracy. The sampling error represented by the standard deviation of the sample means is an ADDITIONAL factor which increases the uncertainty of the value of average.

You remain stuck in that idiotic meme that all measurement uncertainty is random, Gaussian, and cancels. It is only then that the standard deviation of the sample means represents the uncertainty of the value of the mean.

BTW, “A” (as in singular) sample of temperatures does not have a standard deviation of the sample meanS. Calculating a standard error, i.e. SD/sqrt(n), from a single sample *requires* one to assume that the SD of the single sample is equal to the SD of the population. That is an assumption that must be justified on a case-by-case basis. I've never seen that assumption justified for global temperatures or anomalies.

You just keep beating the same old dead horse – all measurement uncertainty is random, Gaussian, and cancels. You don’t even recognize any more when you are using the meme.

Reply to  Tim Gorman
November 5, 2024 3:21 pm

As far as my library is concerned the average of a distribution with a higher variance is *always* less certain than one with a lower variance.

Then you need a better library.

The uncertainty of the mean, i.e. the standard deviation of the sample means, is *NOT* the uncertainty of the value of the average.

The uncertainty of the mean is not the uncertainty of the value of the average? Is that what your library says?

It is a metric for sampling error and *NOT* for accuracy

I was responding to your claim that:

Variance (i.e. varies across the globe) is a direct metric for uncertainty.

Variance will not tell you anything about the accuracy of your measurements, anymore than the standard error of the mean. If there’s a systematic error in all your measurements it will be invisible to the variance and the SEM.

BTW, “A” (as in singular) sample of temperatures does not have a standard deviation of the sample meanS

You are the only people who think there is an S at the end of mean. The standard error of the mean, the experimental standard deviation of the mean, or SDOM, or whatever you want to call it – is an expression describing the probability distribution from which your single mean has been drawn. You usually estimate this from the distribution of the individual observations in the single sample. This has been explained to you countless times, by me, and I expect every book in your library.

…from a single sample *requires* one to assume that the SD of the single sample is equal to the SD of the population.

You do not assume that. You assume it’s an imperfect estimate of the population SD, which becomes better as sample size increases.

all measurement uncertainty is random, Gaussian, and cancels

Stop lying.

Reply to  Bellman
November 5, 2024 4:25 pm

The uncertainty of the mean is not the uncertainty of the value of the average? Is that what your library says?

It is what it says. From:

Standard Error of the Mean (SEM) – Statistics By Jim

The standard error of the mean is the variability of sample means in a sampling distribution of means.

Inferential statistics uses samples to estimate the properties of entire populations. The standard error of the mean involves fundamental concepts in inferential statistics—namely repeated sampling and sampling distributions. SEMs are a crucial component of that process. (bold by me)

Read these carefully. A sampling distribution of means is developed from many samples and calculating the mean value of each sample.

When discussing a monthly average at a station, you have two choices:

  • 30 samples of size 1
  • 1 sample of size 30

30 samples allows for determining the standard deviation with SEM = σ/1. In other words, the standard deviation of the samples is the measurement uncertainty.

A single sample of size 30, does not allow the development of a sample means distribution. Therefore, there is no SEM. You can not infer anything about any population property.

Here is another reference from:

Sampling Distribution: Definition, Formula & Examples – Statistics By Jim

Bottom line – when you calculate the SD (σ) and divide that by the √n, all you are doing is ESTIMATING the SEM you might get with correct sampling using multiple samples. It is not a real SEM because you have not done the sampling.

In actuality, you can not have multiple samples of size “n” when measuring temperatures. You have one reading, which is a sample size of 1.

Reply to  Jim Gorman
November 5, 2024 5:12 pm

It is what it says. From:

And as usual not a single thing you quote disagrees with what I’m saying. Nor does it explain what you mean by “the uncertainty of the mean” is not the “the uncertainty of the value of the average”. It might help if you defined what you mean by the “value of the average”. I’m assuming you mean the average value you obtained from the sample.

A sampling distribution of means is developed from many samples and calculating the mean value of each sample

And again, that’s how you can think of a sampling distribution. It does not mean you actually have to take an infinite number of samples. As the article says

Fortunately, you don’t need to repeat your study an insane number of times to obtain the standard error of the mean. Statisticians know how to estimate the properties of sampling distributions mathematically, as you’ll see later in this post. Consequently, you can assess the precision of your sample estimates without performing the repeated sampling.

30 samples allows for determining the standard deviation with SEM = σ/1.

A sample of size 1 is not much of a sample. And you cannot obtain σ/1 as you do not know the population distribution, and there is no sample standard deviation when the sample has only one observation.

In other words, the standard deviation of the samples is the measurement uncertainty.

Assuming you know the population distribution, it’s the measurement uncertainty of the mean of a single value – which is the same as the uncertainty of that one value.

A single sample of size 30, does not allow the development of a sample means distribution.

Then you will have to explain why your beloved TN1900 is wrong.

You can not infer anything about any population property.

Then how did you know what the uncertainty of your individual value was? If you can accept that the distribution of the 30 individual values tells you something about the population, why do you think the 30 individual values making up the size 30 sample does not tell you anything about the population?

when you calculate the SD (σ) and divide that by the √n, all you are doing is ESTIMATING the SEM you might get with correct sampling using multiple samples.

Yes – that’s the point.

It is not a real SEM because you have not done the sampling.

A sample of samples is not going to give you the real SEM unless you have infinitely many – but the point you never grasp is that if you have multiple samples, you can just combine them into a single sample with a smaller SEM. There's no point in taking 100 different samples of size 30, just to estimate how much uncertainty there is in a single sample of size 30, when you could just take a sample of 3000. The object of sampling isn't to find the best estimate of the SEM, it's to get the best estimate of the mean.
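
A quick simulation illustrates the point being argued here: the s/√n estimate from a single sample is an estimate of the spread you would see if the sampling really were repeated many times. The population, sample size and seed are arbitrary assumptions:

```python
# Compare the SEM estimated from ONE sample (s/sqrt(n)) with the standard deviation
# of many sample means. Population parameters and sizes are arbitrary assumptions.
import numpy as np

rng = np.random.default_rng(42)
n = 30

# One sample of size n from a hypothetical population (mean 15, SD 5):
one_sample = rng.normal(15.0, 5.0, size=n)
sem_estimate = one_sample.std(ddof=1) / np.sqrt(n)     # s / sqrt(n) from the single sample

# "Repeated sampling": 5000 samples of size n, then the spread of their means
many_means = rng.normal(15.0, 5.0, size=(5000, n)).mean(axis=1)
sem_empirical = many_means.std(ddof=1)                 # SD of the sampling distribution

print(f"single-sample estimate:    {sem_estimate:.3f}")
print(f"SD of 5000 sample means:   {sem_empirical:.3f}")
print(f"theoretical sigma/sqrt(n): {5.0 / np.sqrt(n):.3f}")
```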

Reply to  Bellman
November 6, 2024 8:07 am

Then you need a better library.”

You need a better understanding of the theory of statistics. The wider the variance the wider the tails of the distribution, even a Gaussian distribution. The wider the tails then the more possible values exist in the standard deviation interval. As the peak of that standard deviation interval gets smaller compared to the surrounding values (i.e. the larger the variance) the less certain it becomes that the average *is* the actual average and it becomes more and more likely that a value close to the “average” is the true average.

If that is too confusing for you then think of it in terms of the standard error, SE = SD/√n. “n” is a constant for a given distribution. As SD gets larger, i.e. the variance goes up, then the SE goes up as well. The SE going up means that the sampling error associated with the “average” goes up and the value of the “average” becomes less certain.

“The uncertainty of the mean is not the uncertainty of the value of the average? Is that what your library says?”

It’s what *all* metrology texts say. The “uncertainty of the mean” is always defined as the standard error of the sample means. That is *NOT* the measurement uncertainty of the average. The measurement uncertainty of the mean is that value which is propagated from the individual measurement uncertainties.

The two are *NOT* the same. It’s why I keep telling you that you need to abandon the term “uncertainty of the mean” and actually describe what you are talking about. Either use the term “standard deviation of the sample means” or the term “measurement uncertainty”. You continue to use “uncertainty of the mean” because it is ambiguous and you can use the argumentative fallacy of Equivocation to change the definition of what you are talking about as needed in any specific context.

“Variance will not tell you anything about the accuracy of your measurements,”

I didn’t say it would. Like most CAGW advocates on here you have no basic understanding of real world terms. The term “metric” is *not* the same as the term “accuracy”. A “metric” is a value that is used for comparison or assessment of a product, measurement, process, etc. It is *not* necessarily a direct measurement of an objects properties. E.g. a weight of product collected in a sieve can be used as a metric for how much of a product exceeds a certain length. It won’t tell you anything about the actual product pieces collected in the sieve but is a METRIC for the process. The more weight collected the more product that does not meet requirements.

UAH can be considered a metric for something about the Earth’s biosphere but it can’t tell you anything specific about that biosphere. Thinking that UAH can tell you anything about how much a collection of intensive property measurements of different things is changing is only fooling yourself, especially when you aren’t even directly measuring that intensive property itself!

” is an expression explaining the probability distribution from which your single mean has drawn.”

Bullshit!

The theoretical definition of the SEM is σ/sqrt(n) where σ is the POPULATION STANDARD DEVIATION.

You keep wanting to substitute the approximation of s/sqrt(n) where s is the sample standard deviation. This requires assuming that the standard deviation of the sample is the same as the standard deviation of the sample. This requires JUSTIFICATION for assuming that σ = s. A justification which you NEVER, ever provide.

You do not assume that. You assume it’s an imperfect estimate of the population SD, which becomes better as sample size increases.”

More crap being thrown against the wall. How do you judge the “imperfection” level from a single sample? And, as usual, all increasing the sample size does is decrease the interval in which the average value lies. It tells you nothing, absolutely nothing about whether that interval is accurate or not.

Reply to  Tim Gorman
November 6, 2024 8:25 am

This requires assuming that the standard deviation of the sample is the same as the standard deviation of the sample.

This requires assuming that the standard deviation of the sample is the same as the standard deviation of the sample population.

Slight mistake.

Reply to  Phil R
November 4, 2024 3:59 pm

Is your IQ above or below average? Question mark.

Reply to  Phil R
November 4, 2024 4:32 pm

An average is a STATISTICAL DESCRIPTOR. It is generally understood to describe the value that is the most common one – i.e. it is an EXPECTATION of the value to be found most often in a distribution of values.

An average is *NOT* a measurement. A group of measurements of different things using different devices is *not* a measurement of an “average”. The average is just the value you would (hopefully) find the most often.

In essence, the “global average temperature” is actually a statistical descriptor, it is the value you would expect to find most often if you travelled around the globe taking a bunch of measurements. But what does that mean? You simply don’t know if you don’t have the other required statistical descriptors typically used with a distribution of values – e.g. the variance, the skewness, and the kurtosis.

What should *really* be provided is what is known as the 5-number statistical description, the min, max, median, first quartile, third quartile. What you would *actually* find is that the temperature distribution around the earth is a bi-modal (or more probably a multi-modal) distribution – warm temps in one hemisphere juxtaposed with cold temps in the other. And the “average” value of a bi-modal distribution is almost useless for describing physical reality.
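
As a concrete illustration of why the five-number summary is more informative for a bimodal distribution, here is a small sketch with an invented two-mode temperature set (the numbers are not real data):

```python
# Five-number summary for a deliberately bimodal set of temperatures
# (two hemispheres in opposite seasons). Values are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
warm_hemisphere = rng.normal(25.0, 5.0, size=500)    # deg C
cold_hemisphere = rng.normal(-5.0, 8.0, size=500)    # deg C
temps = np.concatenate([warm_hemisphere, cold_hemisphere])

q0, q1, q2, q3, q4 = np.percentile(temps, [0, 25, 50, 75, 100])
print(f"min={q0:.1f}, Q1={q1:.1f}, median={q2:.1f}, Q3={q3:.1f}, max={q4:.1f}")
print(f"mean={temps.mean():.1f}  <- falls between the two modes, where few values actually occur")
```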

Reply to  Tim Gorman
November 4, 2024 5:00 pm

It is why unseen spurious trends are likely. The GAT is a time series made up of monthly segments being averaged multiple times. No one here has even addressed whether the time series being used is stationary and whether seasonal (think bimodality) combinations have been addressed.

It is basically 5th grade averaging counting numbers.

Reply to  Tim Gorman
November 11, 2024 9:43 am

An average is a STATISTICAL DESCRIPTOR. It is generally understood to describe the value that is the most common one”

No that’s the ‘mode’.
https://www.ncl.ac.uk/webtemplate/ask-assets/external/maths-resources/statistics/descriptive-statistics/mean-median-and-mode.html#:~:text=The%20mode%20is%20the%20most,most%20times%20is%20the%20mode.

Reply to  Phil.
November 11, 2024 2:31 pm

climate science always assumes *everything* is random and Gaussian. In such a “statistical world” the mode and the mean are equal.

If you want to say climate science should stop assuming everything is random and Gaussian, including measurement uncertainty, then I would agree with you and the mode would not necessarily be equal to the average.

But then climate science would also have to show those statistical descriptors that shows how the data is skewed or multi-modal, e.g. the 5-number description, instead of just the mean because then the mode would not equal the mean.

I’m not going to hold my breath, just like I’m not going to hold my breath waiting for climate science to start weighting the data to account for different variances in warm weather vs cold weather.

Reply to  ToldYouSo
November 4, 2024 11:29 am

We are apparently still pretty much in “ColdHouse” conditions.

Reply to  bnice2000
November 4, 2024 2:28 pm

We are, but your chart doesn’t show that. It just shows the correlation between CO2 and global temperature.

Reply to  Bellman
November 4, 2024 2:53 pm

Except the chart does CLEARLY show we are in a ColdHouse period.

Can’t see the words down the bottom ??? Or just very DUMB.

Not only a “Coldhouse” period, but a low CO2 period.

Reply to  bnice2000
November 4, 2024 3:55 pm

Where in the “words down the bottom” does it mention time? The graph is breaking up the past into 5 non-consecutive temperature ranges, and showing the range of CO2 during those periods. No mention of which state we are currently in.

If you want to see that we are in a “cold house” period you need to look at the graph above, which for some reason you cropped out.

comment image

Reply to  Bellman
November 4, 2024 4:18 pm

Where in the “words down the bottom” does it mention time?

Dude, the graph is not based on time. The relationship is a quality of temperature versus CO2 concentration. Time has nothing to do with it.

Reply to  Jim Gorman
November 4, 2024 4:27 pm

Yes, that's what I said. It does not tell you that we are currently in a cold house state, just that you get the lowest CO2 levels when the world is at its coldest.

Reply to  Bellman
November 4, 2024 5:24 pm

OMG you are so thick..

Yes, we have very low CO2 and are in a “coldhouse” period.

Why is it so difficult for you to comprehend.

Do you have brain damage or something ??

Reply to  bnice2000
November 4, 2024 5:37 pm

Why is it so difficult for you to read what I said. Literally the first two words were “We are”. I keep telling you we are in a coldhouse, I’m trying to explain to you that the chart you are using is not the thing that demonstrates that.

Reply to  Bellman
November 4, 2024 6:18 pm

Then stop your moronic caterwauling about warming.

It is obviously NECESSARY. !

As is a greater level of atmospheric CO2

Reply to  Bellman
November 4, 2024 4:38 pm

I should say that the graph is from “A 485-million-year history of Earth’s surface temperature”, by Emily J Judd et al.
https://www.science.org/doi/10.1126/science.adk3705

Unfortunately the full paper, and that particular chart, now seems to be behind a paywall.

Reply to  Bellman
November 4, 2024 5:22 pm

Thanks for showing EXACTLY what I said, dolt. !!

We are in a “coldhouse” period with low CO2.

The red line is current CO2.. time is irrelevant

Reply to  bnice2000
November 4, 2024 5:44 pm

We are in a “coldhouse” period with low CO2.

Yes we are. As I kept pointing out to you. The red line is not from the paper, it's something you've added, and it's only suggesting we are in a coolhouse because of the correlation between CO2 and temperature.

Does this mean you accept that if CO2 were to rise above 650ppm we might be in a hothouse state?

Reply to  Bellman
November 4, 2024 6:15 pm

So are you saying the red line is wrong?

Or just making mindless spurious anti-logic yabberings.

At least you have now admitted that the globe is currently in a “coldhouse” condition.

This is a fact that is well established.

Only a degree or so above the coldest period in 10,000 years, and still with very low CO2 levels

So stop your moronic caterwauling about warming !!

Reply to  bnice2000
November 4, 2024 6:46 pm

This is obviously too complicated for you to understand. I have never said anything other than we are in a coldhouse condition. The world has been cooling over the last 50 million or so years. This is well known, and models in the paper you are quoting agree with that. I have no reason to doubt the paper, and it was me who showed you the chart you are using.

The very simple point, however, is that the part of the chart you clipped out is not the bit that tells you we are currently in a coldhouse condition. What it shows is the range of CO2 levels present in any given temperature range. You might infer from it, that because CO2 levels were low, that this means we are currently in a coldhouse condition, and you would be correct. But at the current levels we could also be in a coolhouse or possibly even a warmhouse.

CO2 is not the only determinant of temperature, especially not on the scale of hundreds of millions of years. You cannot just assume that increasing CO2 to levels to 500ppm will result in a global average temperature of 20°C, even though that’s what the chart would suggest.

Please reply with as much infantile screaming as you want – as far as I'm concerned you will be yelling into the void.

Reply to  Bellman
November 5, 2024 2:12 am

ROFLMAO

Another rambling diatribe of irrational nonsense.

Yes we are currently in a “coldhouse” state, only a degree above the coldest period in 10,000 years..

And yes CO2 IS at a very low level.

The graph is just another confirmation of the known current state.

There is no evidence CO2 is a determinant of temperature at all.

You cannot assume that raising CO2 levels to 500ppm would have any effect whatsoever, except enhanced plant growth..

If I yell in your ear.. then I would be yelling into a void. !

Reply to  bnice2000
November 5, 2024 6:21 am

ROFLMAO

Another rambling diatribe of irrational nonsense.

He has a special talent here.

Reply to  bnice2000
November 5, 2024 7:41 am

You have to admire the determination not to see the obvious.

bnice accepts the world is at its coldest. Accepts CO2 is at its lowest points. And then asserts there is no evidence that CO2 affects temperature.

Reply to  ToldYouSo
November 4, 2024 4:20 pm

Please note carefully that “roughly” is a limiting factor in how many decimal places you can use. The quote only specifies temperatures in the units digit, not in the hundredths or thousandths digit.

And exactly *what* is “current scientific understanding” when it comes to averaging an intensive property?

Reply to  ToldYouSo
November 4, 2024 6:15 pm

During many of Earth’s past ‘hothouse’ periods, the positions of the continents were different due to plate tectonics.

Comparing the ‘global average temperature’ of those times to today’s is like comparing apples to oranges.

Mr.
Reply to  Art Slartibartfast
November 4, 2024 10:36 am

You’re asking Climate Cranks why you’re wrong?

LT3
Reply to  Art Slartibartfast
November 5, 2024 12:02 pm

What you see in the graphs of the UAH data is not the average temperature, it is the averaged anomaly. These types of datasets are used throughout the climate community and are very useful for ENSO forecasts, etc.

Reply to  LT3
November 5, 2024 3:20 pm

And these numbers are NEVER reported with a realistic uncertainty attached. And climate science ignores significant digit rules.

bdgwx
November 4, 2024 7:03 am

Here is the updated adjustment table for UAH and how each adjustment affected the overall trend.

Year / Version / Effect / Description / Citation

Adjustment 1: 1992 : A : unknown effect : simple bias correction : Spencer & Christy 1992

Adjustment 2: 1994 : B : -0.03 C/decade : linear diurnal drift : Christy et al. 1995

Adjustment 3: 1997 : C : +0.03 C/decade : removal of residual annual cycle related to hot target variations : Christy et al. 1998

Adjustment 4: 1998 : D : +0.10 C/decade : orbital decay : Christy et al. 2000

Adjustment 5: 1998 : D : -0.07 C/decade : removal of dependence on time variations of hot target temperature : Christy et al. 2000

Adjustment 6: 2003 : 5.0 : +0.008 C/decade : non-linear diurnal drift : Christy et al. 2003

Adjustment 7: 2004 : 5.1 : -0.004 C/decade : data criteria acceptance : Karl et al. 2006 

Adjustment 8: 2005 : 5.2 : +0.035 C/decade : diurnal drift : Spencer et al. 2006

Adjustment 9: 2017 : 6.0 : -0.03 C/decade : new method : Spencer et al. 2017 [open]

Adjustment 10: 2024 : 6.1 : -0.01 C/decade : NOAA19 drift : [Spencer 2024]

bdgwx
Reply to  bdgwx
November 4, 2024 7:30 am

Given the values as reported the trend from 1979/01 to 2024/09 is 0.1509 C.decade-1 for v6.1 and 0.1579 C.decade-1 for v6.0. That is a difference of -0.007 C.decade-1.
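
For readers wanting to reproduce this kind of figure, a minimal sketch of how a C/decade trend is typically obtained from a monthly anomaly series (ordinary least squares against time in years, scaled by 10). The series below is synthetic; it is not the UAH file and not necessarily the exact procedure used above:

```python
# OLS trend of a monthly anomaly series, expressed in C per decade.
# The anomaly series here is synthetic, purely for illustration.
import numpy as np

rng = np.random.default_rng(0)
months = np.arange(550)                    # roughly 1979-2024, one value per month
t_years = months / 12.0
anom = 0.015 * t_years + rng.normal(0, 0.2, months.size)   # fake anomalies, K

slope_per_year = np.polyfit(t_years, anom, 1)[0]
print(f"trend = {slope_per_year * 10:+.4f} C/decade")
```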

Reply to  bdgwx
November 4, 2024 7:40 am

Why did you stop at using just four decimal places for the rates?

bdgwx
Reply to  ToldYouSo
November 4, 2024 7:51 am

4 digits was enough to report the difference to 1 significant figure.

Anyway, here is the full IEEE 754 breakdown. I’m not sure how much value is added here, but since you asked I’ll present it anyway.

v6.1: 0.149843723824339
v6.0: 0.157923047635624

That is a difference of exactly 0.00807932381128526 using IEEE 754 and UAH’s 3 digit file.

Reply to  bdgwx
November 4, 2024 6:10 pm

Mathematical idiot !!

bdgwx
Reply to  bdgwx
November 4, 2024 7:53 am

Oops…I accidentally did through 2024/10 for v6.1. Through 2024/09 it is 0.1498 C.decade-1. That makes the difference -0.008 C.decade-1. Sorry about that.

Mr.
Reply to  bdgwx
November 4, 2024 1:42 pm

Well there we are.

Now I can stop wondering why nobody on the whole planet can sensibly detect a scintilla (sometimes called a ‘poofteenth’) of climatic change in their localities.

(unless of course you’re a Canadian who flees from their native climate conditions each winter to experience the different, and much comfortably warmer conditions of Florida)

bdgwx
Reply to  Mr.
November 4, 2024 7:27 pm

Now I can stop wondering why nobody on the whole planet can sensibly detect a scintilla (sometimes called a ‘poofteenth’) of climatic change in their localities.

Serious question…why did you ever consider that it would be possible for a person to “sensibly detect” the global average temperature in the first place?

Mr.
Reply to  bdgwx
November 4, 2024 8:04 pm

because when we're rummaging around aimlessly in the realms of absurdities (eg GAT), any associated absurdities are all grist for the mill.

bdgwx
Reply to  Mr.
November 5, 2024 6:41 am

Ok, that definitely helps explain the difference between you and I because my tendency for initial belief is inversely proportional to the absurdity of the claim.

Reply to  bdgwx
November 5, 2024 7:16 am

Which you are woefully unprepared to evaluate.

Reply to  bdgwx
November 4, 2024 8:08 pm

Serious question…why did you ever consider that it would be possible for a person to “sensibly detect” the global average temperature in the first place?

Because you and every CAGW advocate complain about how the HEAT, in the form of TEMPERATURE, is going to destroy the earth's ability to support life.

Don't try to minimize CAGW. It is why multi-trillions that we don't have are being spent.

The “Who me?” response just isn't going to impress anyone! If you truly think the increase in global temperature is not sensible, then global warming is a no-show.

Reply to  bdgwx
November 4, 2024 11:31 am

Every adjustment for sound scientific measurement reasons

… unlike the agenda-based anti-science adjustments of the surface temperature fakeries.

Reply to  bdgwx
November 4, 2024 3:09 pm

So many adjustments in WUWT’s “gold standard” global temperature database.

Can you even imagine if GISS or NOAA made an adjustment of this magnitude in their monthly update?

Anthony’s head would explode!

Reply to  TheFinalNail
November 4, 2024 4:03 pm

ROFLMAO.. GISS does make RANDOM non-science adjustments all the time.

They happen every time they run their fake homogenisation routines.

Individual site past data changes all the time.

Sorry you are too dumb to realise the difference between scientifically valid adjustments…

… and agenda-based mal-adjustments.

November 4, 2024 7:41 am

“Satellite calibration biases….typically tenths of a degree.”

Hmmm….that’s not good in the range of accuracy we are expecting…if the calibration bias is determined by a person or committee working with statistically derived correction factors and affected by possible cognitive bias…which could be as simple as trying to show that your work is worthy of further funding.

bdgwx
November 4, 2024 7:44 am

v6.1 changes things up in regards to trends. Below are select trends presented for both v6.0 and v6.1.

At its peak the Monckton Pause lasted 107 months starting in 2014/06. From 2014/06 to 2024/09 the trend is +0.42 C.decade-1 (v6.0) and +0.33 C.decade-1 (v6.1)

Here are some more trends with v6.0 listed first and v6.1 listed second. These are only through 2024/09 so that a like-to-like comparison can be made.

1st half: +0.14 C.decade-1, +0.14 C.decade-1
2nd half: +0.23 C.decade-1, +0.21 C.decade-1

Last 10 years: +0.41 C.decade-1, +0.32 C.decade-1
Last 15 years: +0.39 C.decade-1, +0.34 C.decade-1
Last 20 years: +0.30 C.decade-1, +0.27 C.decade-1
Last 25 years: +0.23 C.decade-1, +0.20 C.decade-1
Last 30 years: +0.17 C.decade-1, +0.16 C.decade-1

The acceleration is +0.03 C.decade-2, +0.02 C.decade-2.
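
For context, a figure like “+0.02 C.decade-2” can be read off as the x² coefficient of a second-degree polynomial fit with time expressed in decades. A minimal sketch with a synthetic series (not the UAH data, and not necessarily the exact procedure used above):

```python
# Quadratic coefficient ("acceleration") from a 2nd-degree polynomial fit,
# with time in decades. The anomaly series is synthetic, purely for illustration.
import numpy as np

rng = np.random.default_rng(3)
t_dec = np.arange(550) / 120.0                          # time in decades since the series start
anom = 0.12 * t_dec + 0.01 * t_dec**2 + rng.normal(0, 0.2, t_dec.size)

quad_coeff = np.polyfit(t_dec, anom, 2)[0]              # coefficient of the x^2 term
print(f"x^2 coefficient: {quad_coeff:+.3f} C/decade^2")
```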

Reply to  bdgwx
November 4, 2024 8:08 am

Lmao… you mentioned at Spencer’s that your Type A evaluation arrived at an uncertainty estimate of 0.15C.

Yet now you’re claiming an acceleration of 0.03C.

Care to explain that? Seems like you’re just trying to stir the pot, Bdgwx.

https://www.drroyspencer.com/2024/11/uah-global-temperature-update-for-october-2024-truncation-of-the-noaa-19-satellite-record/#comment-1694118

bdgwx
Reply to  walter.h893
November 4, 2024 8:18 am

Yet now you’re claiming an acceleration of 0.03C.

No, I didn't. I said it was 0.03 C.decade-2 for v6.0. Notice the units. They are important.

Care to explain that?

There’s not much to explain. It is what it is. BTW…for v6.1 the acceleration is +0.02 C.decade-2.

Reply to  bdgwx
November 4, 2024 4:58 pm

Both are smaller than your figure of 0.15C. Care to explain?

bdgwx
Reply to  walter.h893
November 4, 2024 6:45 pm

Both are smaller than your figure of 0.15C. Care to explain?

As I’ve already explained…

±0.15 C is the type A evaluation of monthly uncertainties from UAH as compared to RSS and STAR.

0.03 C.decade-2 is the coefficient of the x^2 term on a polynomial regression using v6.0.

0.02 C.decade-2 is the coefficient of the x^2 term on a polynomial regression using v6.1

Metrics with units of C are different than metrics with units of C.decade-2. They cannot be compared because they are different things.

Saying 0.03 C.decade-2 is smaller than 0.15 C doesn’t even make any sense. It’s neither smaller nor larger. It would be like saying 10 m is smaller than 100 kg. Get it?

Reply to  bdgwx
November 5, 2024 2:04 am

Monumental stupidity and waste of time fitting a 2nd degree polynomial to data which is obviously driven by step events.

Absolutely MEANINGLESS.

All it will do is increase your ignorance… if that is even possible.

Reply to  bdgwx
November 5, 2024 4:29 am

As I’ve already explained…

You have really explained nothing except how to curve fit a line to a time series.

Reply to  bdgwx
November 5, 2024 5:49 am

Best fit metrics are not MEASUREMENTS and, therefore, are not measurement uncertainties.

Saying 0.03 C.decade-2 is smaller than 0.15 C doesn’t even make any sense.”

Yet you and climate science keep using 0.03 C.decade-2 as some kind of uncertainty in the temperature measurements. It isn’t. It is just a metric for how well you have fit your regression line to the data, it tells you nothing about how accurate the measurement data is.

The true fact is that if the measurement uncertainty is greater than your best fit metric then you simply don’t know if the best fit metric is correct or not. The measurement uncertainty subsumes any attempt to fit a regression line to the data.

Reply to  Tim Gorman
November 5, 2024 6:41 am

Dr. Taylor in chapter 8 covers linear regression. A linear equation is y = mx + b. “x” has no uncertainty in a time series because it is essentially a counting number. The “b” value however, has uncertainty. That means the regression line has a number of y-intercept values. The interval of the y-intercept values due to different uncertainty combinations gives quite a large total uncertainty in where the regression line should be.

Reply to  Jim Gorman
November 5, 2024 7:21 am

Good to see you are actually reading up on what a simple linear regression is.

The “b” value however, has uncertainty. That means the regression line has a number of y-intercept values.

Not really. What it means is there a range of possible values for the intercept that would have a reasonable chance of producing the same data.

The same goes for the slope, “m”.

The interval of the y-intercept values due to different uncertainty combinations gives quite a large total uncertainty in where the regression line should be.

The uncertainties of the slope and intercept are both given by the equations we discussed last time. And they show that the more observations you have, the smaller the uncertainty in both. In addition, the uncertainty depends on the spread of the x values: the higher the standard deviation of x, the smaller the uncertainty.

You cannot just say “there will be a large total uncertainty”. The size will depend on those factors.

Reply to  Bellman
November 5, 2024 11:07 am

Not really. What it means is there a range of possible values for the intercept that would have a reasonable chance of producing the same data.

Bull pucky. The x-axis values have no uncertainty; they are constant intervals in a time series where time is not an independent predictor of the dependent variable.

There is uncertainty in the y-axis stated values that affect both the y-intercept and the slope.

Look at Dr. Taylor’s equations 8.12, 8.16, and 8.17.

Δ = NΣx² – (Σx)² (8.12)
σA = σy√(Σx²/Δ) (8.16)
σB = σy√(N/Δ) (8.17)

For σy Dr. Taylor says:

If we already have an independent estimate of our uncertainty in y₁, … , yn, we would expect this estimate to compare with σy computed from (8.15).

What do you think σy might be from NIST TN 1900?
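As a concrete illustration of those formulas, here is a short sketch that computes Δ, σy, σA and σB exactly as written above, together with the usual least-squares estimates of the intercept and slope. The data are synthetic, just to make it runnable; nothing here is specific to UAH.

```python
# Sketch of the Taylor Ch. 8 quantities quoted above for a fit y = A + B*x:
# Delta (8.12), the residual scatter sigma_y (8.15, referenced above),
# and the uncertainties sigma_A (intercept, 8.16) and sigma_B (slope, 8.17).
# Synthetic data, purely so the snippet runs standalone.
import numpy as np

rng = np.random.default_rng(1)
x = np.arange(120, dtype=float)                   # e.g. 120 months of a time series
y = 0.2 + 0.002 * x + rng.normal(0.0, 0.1, x.size)

N = x.size
Delta = N * np.sum(x**2) - np.sum(x)**2                                  # eq. 8.12
A = (np.sum(x**2) * np.sum(y) - np.sum(x) * np.sum(x * y)) / Delta       # intercept (least squares)
B = (N * np.sum(x * y) - np.sum(x) * np.sum(y)) / Delta                  # slope (least squares)

sigma_y = np.sqrt(np.sum((y - A - B * x)**2) / (N - 2))                  # residual scatter (8.15)
sigma_A = sigma_y * np.sqrt(np.sum(x**2) / Delta)                        # eq. 8.16
sigma_B = sigma_y * np.sqrt(N / Delta)                                   # eq. 8.17

print(f"A = {A:.4f} ± {sigma_A:.4f},  B = {B:.5f} ± {sigma_B:.5f}")
```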

Reply to  Jim Gorman
November 6, 2024 10:52 am

Bull pucky.

Yet you say nothing that disagrees with what I said.

The x-axis values have no uncertainty

This is one of the main assumptions of all simple regression.

There is uncertainty in the y-axis stated values that affect both the y-intercept and the slope.

Yes, and again that uncertainty can come from measurement error, or variation in the data. The assumption is that it is independent, identically distributed, etc.

Look at Dr. Taylor’s equations 8.12, 8.16, and 8.17.

You mean the ones I was telling you to look at a couple of days ago? It’s good that you’ve looked at them. It would be even better if you demonstrated you understood them.

For σy Dr. Taylor says

σy is the standard deviation of the residuals of y. It’s correct to say that this should be the same as the estimated measurement uncertainty, assuming that measurement uncertainty is the only source of variation. (And this is treating uncertainty as error. It says nothing about other types of uncertainty).

But you are ignoring my point – the uncertainties of the slope and intercept do not depend only on σy. They also depend on the spread of the x values and on the number of observations, as detailed in the equations for σA and σB you quoted.

What do you think σy might be from NIST TN 1900?

Any particular example from TN 1900?

Reply to  Jim Gorman
November 5, 2024 1:36 pm

It’s true for any polynomial, not just a linear one.

bdgwx
Reply to  bdgwx
November 4, 2024 8:24 am

Using only v6.1 and going through 2024/10, here are the trends of interest.

1st half: +0.14 C.decade-1
2nd half: +0.21 C.decade-1

Last 10 years: +0.33 C.decade-1
Last 15 years: +0.34 C.decade-1
Last 20 years: +0.28 C.decade-1
Last 25 years: +0.21 C.decade-1
Last 30 years: +0.16 C.decade-1

The acceleration is +0.02 C.decade-2.
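For anyone wanting to reproduce those "last N years" figures, here is a rough sketch: an ordinary least-squares slope over the trailing N×12 months, reported in C.decade-1. The series below is synthetic so the snippet stands alone; substitute the actual v6.1 monthly anomalies to check the numbers above.

```python
# Sketch of trailing-window trends: OLS slope (C per decade) over the last N*12 months.
# Synthetic anomaly series, only so the example runs on its own.
import numpy as np

rng = np.random.default_rng(2)
months = 550
anoms = -0.3 + 0.0015 * np.arange(months) + rng.normal(0.0, 0.15, months)

def trailing_trend(series, years):
    """OLS slope over the last `years`*12 points, returned in C per decade."""
    window = series[-years * 12:]
    t_dec = np.arange(window.size) / 120.0        # time axis in decades
    slope, _ = np.polyfit(t_dec, window, deg=1)
    return slope

for yrs in (10, 15, 20, 25, 30):
    print(f"last {yrs:2d} years: {trailing_trend(anoms, yrs):+.2f} C/decade")
```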

Reply to  bdgwx
November 4, 2024 9:54 am

Using only v6.1 and going through 2024/10, here are the trends of interest.

Seeing that all your numbers derive from the radiance measurements made by the various satellites, it would be useful to let everyone know what the resolution uncertainty is in W/m² for each of the measuring units. That ultimately determines the resolution of the temperature calculations.

Richard M
Reply to  bdgwx
November 4, 2024 8:26 am

The change in trend over time is likely due to the large 2023/24 anomalies driven by the HTe.

bdgwx
Reply to  Richard M
November 4, 2024 12:28 pm

I agree to the extent that it is likely that HTe is a contributing factor. Whether it is the dominating factor is a matter for the consilience of evidence to adjudicate. So far the consilience says otherwise, but I don’t mind sticking it out for a bit longer to see if there is a shift with new evidence.

Richard M
Reply to  bdgwx
November 4, 2024 12:41 pm

Yes, like I said above, there’s a lot of pieces to this puzzle.

However, just subtract out 0.6 C from the 2023/24 anomalies and calculate the trends. Should be pretty close to what would have occurred w/o the HTe.

Reply to  Richard M
November 4, 2024 5:00 pm

This is what he views as the consilience of evidence, or at least what contributes to it:

W4s2UGF
bdgwx
Reply to  walter.h893
November 4, 2024 6:41 pm

This is what he views as the consilience of evidence.

No, it isn’t. As I’ve said multiple times, this is what I view as a falsification of the hypothesis that there is no correlation between CO2 and temperature.

As I’ve also said before, in the context of the HT eruption specifically, the following are examples of what I view as contributing to the consilience of evidence. If you have other peer-reviewed studies to add to this list please let me know.

DOI: 10.22541/essoar.169111653.36341315/v2
DOI: 10.1038/s41558-022-01568-2
DOI: 10.1007/s13351-022-2013-6
DOI: 10.1038/s43247-022-00580-w
DOI: 10.1038/s43247-022-00618-z
DOI: 10.1029/2024JD041296 
DOI: 10.1175/JCLI-D-23-0437.1

And I’ll tell you what I tell everyone else. If you don’t know my views then ask. Don’t just make stuff up.

Reply to  bdgwx
November 5, 2024 2:01 am

Nobody cares about the views of a low-end scientifically ignorant twit!

You are the one just making stuff up.

There is no empirical scientific evidence that atmospheric CO2 causes warming.

Reply to  bnice2000
November 11, 2024 9:56 am

There is no empirical scientific evidence that atmospheric CO2 causes warming.

Of course there is!
[image attached]

Reply to  bdgwx
November 4, 2024 2:55 pm

So far all the evidence shows absolutely ZERO human causation.

Reply to  bdgwx
November 4, 2024 11:36 am

ROFLMAO.. using a large El Nino event to try to show acceleration.

Total anti-science gibberish.

And of course beeswax will be totally unable to show any human causation

… because even someone as dumb as he is must know that El Nino events are totally natural.

Little twit still mixing up natural transient events with AGW. Just DUMB.

Reply to  bdgwx
November 4, 2024 5:10 pm

The acceleration is

HAHAHAAHAHAHAAHAHAHAHAHHA

Reply to  bdgwx
November 5, 2024 5:10 am

From one of your earlier comments :

They actually report to 0.001 C resolution. See here.

I also prefer the “tXXglhmam_6.N.txt” text files, where XX is the atmospheric layer (“ls”, “lt”, “mt” or “tp”) and N is the version number (previously “0” only, now with “1” options).

They provide the “Monthly means” figures to three decimal places, which is slightly less inaccurate than the “uahncdc_XX_6.N.txt” alternatives (2dp, but with more “zones / latitude bands” along with areas like “USA48” and “AUST”).

Starting URL : https://www.nsstc.uah.edu/data/msu/

For the V6.0 lower-troposphere (TLT) data navigate to the “../v6.0/tlt/tltglhmam_6.0.txt” file.

For V6.1 navigate to “../v6.1/tlt/tltglhmam_6.1.txt” instead.
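For anyone who wants to pull those files programmatically, here is a hedged sketch. The exact header and trailer layout of the tltglhmam files is an assumption on my part, so the parser simply keeps lines whose first two fields look like a year and a month and reads the third field as the global monthly mean.

```python
# Hedged sketch: fetch the 3-decimal "Monthly means" series from the tltglhmam file.
# The file layout (headers, trailer rows) is assumed, not confirmed, so non-data lines
# are skipped by simple sanity checks rather than by counting rows.
import urllib.request

URL = "https://www.nsstc.uah.edu/data/msu/v6.1/tlt/tltglhmam_6.1.txt"

def read_global_monthly(url):
    with urllib.request.urlopen(url) as resp:
        text = resp.read().decode("ascii", errors="replace")
    rows = []
    for line in text.splitlines():
        parts = line.split()
        if len(parts) < 3 or not (parts[0].isdigit() and parts[1].isdigit()):
            continue
        year, month = int(parts[0]), int(parts[1])
        if not (1978 <= year <= 2100 and 1 <= month <= 12):
            continue
        try:
            rows.append((year, month, float(parts[2])))   # third field = Global mean
        except ValueError:
            continue                                      # skip any stray non-data line
    return rows

data = read_global_monthly(URL)
print(f"{len(data)} monthly values; last: {data[-1]}")
```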

v6.1 changes things up in regards to trends

Attached is my initial “quick and dirty” idiot-check … i.e. a verification performed by an idiot, which would be “me” … of the differences between versions 6.0 and 6.1, along with the changes they make to the “start to X” trends.

Notes

– “Minor” changes were made to values in the 1991-2020 “Reference Period” that UAH uses. This shifted the pre-2013 values by a fixed +0.003°C (+/- 0.001°C, probably due to rounding errors).

– I cannot see anything in Dr. Spencer’s explanation above about where the 2013-2017 (-0.006/7°C) and 2017-2020 (-0.011/12) “steps” came from. Did I miss it / them ???

– The changes in the trends are relatively large … maybe due to “endpoints are extreme values” effects for the points at the end of the graph ? …

UAH-TLT_V6-to-V6.1_Deltas-and-trends
Reply to  Mark BLR
November 5, 2024 5:27 am

They provide the “Monthly means” figures to three decimal places

I would like to see a paper that explains the uncertainty of the satellite measurement of irradiance in order to achieve a 12 month average to 3 decimal digits.

My research has only found an uncertainty of ~±5 W/m². That is not enough to resolve temperatures to 3 decimal digits.

Reply to  Jim Gorman
November 5, 2024 7:09 am

So would I — an important side effect of a formal analysis is that it will often highlight problems in the measurement process that otherwise might go unnoticed.

As far as I know the only analysis that has been done is a comparison of temperature time regressions against radiosonde data. But this is in no way a real uncertainty analysis because neither dataset is a true value against which a comparison can be made.

Reply to  Jim Gorman
November 5, 2024 8:00 am

We asked about an uncertainty analysis yesterday. Spencer said they couldn’t afford it.

Reply to  Bellman
November 5, 2024 9:39 am

So no one knows for sure. Let’s just assume numbers is numbers and do high school averages while recommending that trillions of dollars be spent.

Reply to  Jim Gorman
November 6, 2024 4:00 am

I would like to see a paper that explains the uncertainty

Yes, the lack of either an “error range” or an “uncertainty interval” is unhelpful.

That is not enough to resolve temperatures to 3 decimal digits.

I am only “an interested amateur”, but doesn’t this come under the old debate about the difference between “accuracy” and “precision” ?

As I “understand” the process, which is probably completely wrong …

The satellite MSU measurements, of the microwave frequency of O2 molecules, are “translated” into average temperatures of the “cone” of the Earth’s atmosphere that was sampled at time T.

These data are then processed to give the temperatures (+/- delta ?) of various volumes of the atmosphere, with the goal of isolating fixed vertical “layers” within each “cone”.

Averaging hundreds (/ thousands ?) of these “volumes” into a single global average per layer should allow you to add two (or three ?) decimal places of precision, even though the accuracy (+/- 0.01 or 0.15 degrees Celsius ???) will be unchanged.

.

In any case, for the global averages I prefer using the 3dp UAH datasets (the first “GLOBAL” column [ 3, after “Year” and “Mo” ] in each “tXXglhmam_6.N.txt” file) to the 2dp ones (the “Globe” column [ also 3, after “YEAR” and “MON” ] in each “uahncdc_XX_6.N.txt” file).

Attached is a comparison of the “V6.1 minus V6.0” differences using both options.

NB : “Bellman” has produced a continuous graph of just the 2dp deltas, with a single Y-axis, below (the fourth-from-last top-level post as I type this). In some ways it is “clearer” than mine.

UAH-TLT_V6-to-V6.1_Deltas_V2
Reply to  Mark BLR
November 6, 2024 6:26 am

The satellite MSU measurements, of the microwave frequency of O2 molecules, are “translated” into average temperatures of the “cone” of the Earth’s atmosphere that was sampled at time T.

There is no single temperature of the lower troposphere (0-10 km altitude): the microwave radiance measured by the sounding units is a convolution of the Gaussian frequency response function and the temperature lapse rate of the LT.

Over high elevations the convolution is different and the numbers are different.
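To make that point concrete, here is an idealized toy calculation: the channel value behaves like a weighted vertical average of the profile T(z), so there is no single "layer temperature" being read off directly. The Gaussian-shaped weighting profile below is made up for illustration; it is not the actual MSU/AMSU weighting function.

```python
# Toy illustration: a channel "temperature" as a normalized weighted vertical average
# of a temperature profile T(z).  The weighting profile here is invented (a Gaussian
# peaking near 4 km), not the real instrument weighting function.
import numpy as np

z = np.linspace(0.0, 15.0, 301)            # altitude in km
T = 288.0 - 6.5 * z                        # toy profile: constant 6.5 K/km lapse rate
w = np.exp(-0.5 * ((z - 4.0) / 2.5) ** 2)  # made-up weighting profile
w /= w.sum()                               # normalize the discrete weights

T_channel = float(np.sum(w * T))
print(f"weighted 'channel' temperature: {T_channel:.1f} K (surface value 288.0 K)")
```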

These data are then processed to give the temperatures (+/- delta ?) of various volumes of the atmosphere, with the goal of isolating fixed vertical “layers” within each “cone”.

Layers are differentiated using different microwave frequency response functions; the cutoffs are not sharp and there is overlap.

Averaging hundreds (/ thousands ?) of these “volumes” into a single global average per layer should allow you to add two (or three ?) decimal places of precision, even though the accuracy (+/- 0.01 or 0.15 degrees Celsius ???) will be unchanged.

Radiometric instruments commonly have relative uncertainties on the order of several percent, and it is really hard to get smaller numbers. At 250K, 3% uncertainty is ±7.5K. NOAA and UAH have never demonstrated how they get ±0.15K.

Repeated multiple measurements of the same quantity are not made; in fact, the number of repetitions is always exactly one in any time-series measurement. Claiming tiny uncertainties via averaging is invalid, yet this is Standard Procedure for climate science.

Reply to  Mark BLR
November 6, 2024 8:29 am

precision – putting multiple arrows into the same hole on the target
accuracy – how far from the bullseye the arrow hits
resolution – how many “circles” you have on the target. e.g. separated by 1/4″ or 1″ circles.

You can have high precision with low accuracy and low resolution.

You cannot, however, increase resolution by averaging; that would require assuming knowledge you simply don’t have. Averaging doesn’t help precision either; precision is just how many times you get the same reading from the same object being measured. High precision won’t help you if the accuracy or resolution is low; you still won’t be able to add decimal places.

And high resolution won’t help if you are spraying all over the target, i.e. low accuracy. Averaging won’t increase the accuracy at all.

Measuring microwave irradiance is *very* dependent on the absorbing media between the source and the receiver. For microwaves one major absorbing medium is water vapor. The MSUs in the satellites have no way to measure the water vapor in the atmosphere, so the accuracy of the readings has significant uncertainty from that alone. You can’t average that uncertainty away. When combining different readings from different sample sites, that measurement uncertainty adds; it doesn’t go down. I’m pretty sure UAH does the same thing the climate models do and just assumes a common “parameter” for water vapor.

This makes UAH into a “metric” for temperature and not a temperature measurement. Whether that metric is really useful for a global average “something” is questionable at best.

Reply to  Tim Gorman
November 8, 2024 7:43 am

The reason MSU Channel 1 isn’t used is its sensitivity to water vapour/liquid water.

bdgwx
Reply to  Mark BLR
November 5, 2024 6:39 am

That is a really interesting find. I’d guess that the jumps you found provide clues as to how Spencer, Christy, and Braswell handle the satellite drift. NOAA19 began operation around 2010. METOP-B began operation around 2013 and NOAA18 got cut off around 2017. I wonder if that helps explain the jumps.

Reply to  Mark BLR
November 5, 2024 7:03 am

The UAH calculations are apparently done as a single big FORTRAN batch job run at the end of every month, and that run regenerates the 30-year baseline subtraction arrays even though the new month of data should not change them.

The baseline subtraction arrays are provided on the UAH FTP data site; they are organized as 12 subarrays of ~10,000 points (the UAH grid locations), stored as 5-digit integers of temperatures in Kelvin, multiplied by 100. So the resolution is 0.01K.

The corresponding monthly temperature averages are not provided, but instead only the anomaly values (one ~10,000 point array for each month).

For a time I was undoing the anomalies to see what the actual recorded temperatures were by adding back the baseline arrays. It was at this point I noticed the baseline arrays are not constant but instead can change by 0.01K month-to-month. I attributed this to rounding in FORTRAN DATA statements. It has the same effect as the pre-2021 portion of your top graph.
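A hedged sketch of that "undoing the anomalies" step, using only what is described above (baseline grids stored as integers of kelvin × 100, one sub-array per calendar month). The grid size and the arrays themselves are placeholders, not the real UAH files.

```python
# Sketch of recovering absolute grid temperatures from anomaly + baseline, assuming the
# baseline grids are integers of Kelvin x 100 as described above.  Shapes and values are
# stand-ins; only the x100 encoding and the 12-month baseline structure come from the comment.
import numpy as np

n_grid = 9_504                                                        # placeholder grid size (~10,000 points)
baseline_int = np.random.default_rng(3).integers(25000, 30500, size=(12, n_grid))  # stand-in K*100 integers
baseline_K = baseline_int / 100.0                                     # back to Kelvin, 0.01 K resolution

anomaly_K = np.random.default_rng(4).normal(0.0, 1.0, size=n_grid)    # stand-in monthly anomaly grid
month_index = 9                                                       # e.g. October -> baseline sub-array 9 (0-based)

absolute_K = anomaly_K + baseline_K[month_index]
print(absolute_K[:5])
```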

Sparta Nova 4
November 4, 2024 9:34 am

So, the satellite record is not exactly perfect.

Reply to  Sparta Nova 4
November 4, 2024 11:37 am

One heck of a lot more reliable than the surface fabrication though.

Reply to  bnice2000
November 4, 2024 5:03 pm

You can say that again.

Henry Pool
November 4, 2024 12:23 pm

I seriously doubt the info we are given.

Henry Pool
November 4, 2024 12:32 pm

On average
Temperature is going down
[image attached]

Reply to  Henry Pool
November 4, 2024 4:12 pm

Do you ever wonder why there are so many stations like this scattered over the GLOBE, yet the global anomaly shows a large increase? It would seem there are spurious trends being generated via the anomaly determination procedure.

Anthony Banton
Reply to  Henry Pool
November 5, 2024 12:06 am

Can you tell me the source of that graph please?

It is completely at odds with the data from this source ….

https://climateknowledgeportal.worldbank.org/country/curacao/trends-variability-historical

Which shows that Curacao has experienced a 0.09 C/dec warming from 1950 to 2020.

Reply to  Anthony Banton
November 5, 2024 1:58 am

That would be after AGW agenda adjustment, dopey. !!

Anthony Banton
Reply to  bnice2000
November 5, 2024 4:36 am

Oh, yes the same “agenda” that Spencer and Christy maintain with UAH V6/6.1.
And all meteorological agencies across the world.

That the resident paranoic ranter denies.
Bless.
Now Oxy, did you ever see “Carry on Cleo” ?

https://clip.cafe/carry-on-cleo-1964/theyve-all-got-in-me/

Reply to  Anthony Banton
November 5, 2024 4:36 am

Perhaps you can show us your math that derives a measurement that has a resolution at least an order of magnitude smaller than the measurements used to calculate it.

Where do you find a university lab course in a physical science that allows one to increase the measurement resolution of what was actually measured?

Ireneusz
November 4, 2024 4:00 pm

By the 14th of November, the center of the polar vortex will have moved completely over Russia.
[image attached]

waclimate
November 4, 2024 7:52 pm

This readjustment has increased Australian warming by about 30% since 2021.

Ireneusz
November 5, 2024 12:41 am

You can see the weakening of the solar wind speed since October 14. Such solar wind spikes will cause serious anomalies in the distribution of ozone in high latitudes and will shift the center of the polar vortex over Siberia, where there is a strong center of the geomagnetic field.
[image attached]

November 5, 2024 6:35 am

Here’s a comparison of the difference between version 6.1 and 6.0.

There are a lot of tiny 0.01°C differences throughout the bulk of the time series. Up to 2012 these are all positive, i.e. 6.1 has warmed the past slightly. I suspect this rise in the anomalies is due to the changes during the base period, rather than an actual increase in temperature.

After 2012 we see the sign flip but the difference is still only 0.01 or 0.02°C.

Then after 2021 we see the big changes, with the first few months actually getting warmer, before a large downward trend. The final month on that graph is September with a difference of 0.16°C. October’s difference would be 0.21°C.
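For anyone wanting to reproduce this comparison, here is a sketch that aligns the v6.0 and v6.1 global monthly series on (year, month) and differences them, using the same assumed file layout as the parsing sketch further up the thread.

```python
# Sketch of the version-to-version comparison: difference the v6.1 and v6.0 global
# monthly means matched on (year, month).  The file layout is assumed, so non-data
# lines are skipped with simple sanity checks.
import urllib.request

def monthly_series(url):
    with urllib.request.urlopen(url) as resp:
        text = resp.read().decode("ascii", errors="replace")
    out = {}
    for line in text.splitlines():
        p = line.split()
        if len(p) >= 3 and p[0].isdigit() and p[1].isdigit():
            year, month = int(p[0]), int(p[1])
            if 1978 <= year <= 2100 and 1 <= month <= 12:
                try:
                    out[(year, month)] = float(p[2])
                except ValueError:
                    continue
    return out

v60 = monthly_series("https://www.nsstc.uah.edu/data/msu/v6.0/tlt/tltglhmam_6.0.txt")
v61 = monthly_series("https://www.nsstc.uah.edu/data/msu/v6.1/tlt/tltglhmam_6.1.txt")

diffs = {ym: v61[ym] - v60[ym] for ym in sorted(set(v60) & set(v61))}
for ym in list(diffs)[-6:]:
    print(ym, f"{diffs[ym]:+.3f} C")
```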

20241105wuwt1
Reply to  Bellman
November 5, 2024 6:37 am

Here’s the same but showing annual averages. The final point is the average up to September 2024.

20241105wuwt2
Reply to  Bellman
November 5, 2024 6:48 am

Finally, here’s the side by side comparison showing annual temperatures. (This time I remembered to include October 2024.) Almost no practical difference until 2021.

20241105wuwt4
bdgwx
November 5, 2024 6:58 am

Geoff Sherrington,

You showed the pause for Australia starting in 2016/02. With the latest update the trend from 2016/02 to 2024/10 is now +0.16 C.decade-1. The pause did end last month, but this should be an even more convincing update.

It may interest you that the warming rate since 2010/07 is +0.6 C.decade-1.

Ireneusz
November 5, 2024 7:01 am

Meanwhile, above the 60th parallel.
[image attached]

November 5, 2024 8:51 am

More fun with the gridded data.

Here’s a chart showing the difference between the two versions for September. Some quite big differences in places, but it mostly averages out.

I assume the checkerboard effect seen at the higher latitudes, especially in the south, is the result of the errors caused by drifting.

20241105wuwt4
Reply to  Bellman
November 5, 2024 8:53 am

Here’s the 2024 average, up to September.

20241105wuwt8
Reply to  Bellman
November 5, 2024 8:55 am

And for all of 2023.

20241105wuwt9
Reply to  Bellman
November 5, 2024 8:56 am

2022.

Interesting that this has warmed quite a few land areas.

20241105wuwt10
Reply to  Bellman
November 5, 2024 8:59 am

And finally 2021.

Again, a lot of the mid-latitude land masses are now warmer than in the previous version. I wonder what effect this has on the Australian Pause.

20241105wuwt11
Steve Z
November 7, 2024 6:05 am

Is there a monthly report on how much sea-floor geothermal energy is released into the Earth’s oceans?

Since there are literally thousands of uncharted sea-floor thermal vents and magma chambers, it seems like that is a gaping hole in our temperature data.