From Dr Spencer’s Global Warming Blog
by Roy W. Spencer, Ph. D.
The Version 6.1 global average lower tropospheric temperature (LT) anomaly for October, 2024 was +0.73 deg. C departure from the 1991-2020 mean, down from the September, 2024 anomaly of +0.80 deg. C.
The new (Version 6.1) global area-averaged temperature trend (January 1979 through October 2024) is now +0.15 deg. C/decade (+0.21 C/decade over land, +0.13 C/decade over oceans).
The previous (version 6.0) trends through September 2024 were +0.16 C/decade (global), +0.21 C/decade (land) and +0.14 C/decade (ocean).
The following provides background for the change leading to the new version (v6.1) of the UAH dataset.
Key Points
- The older NOAA-19 satellite has now drifted too far through the diurnal cycle for our drift correction methodology to provide useful adjustments. Therefore, we have decided to truncate the NOAA-19 data processing starting in 2021. This leaves Metop-B as the only satellite in the UAH dataset since that date. This truncation is consistent with those made to previous satellites after orbital drift began to impact temperature measurements.
- This change reduces recent record global warmth only a little, bringing our calculated global temperatures more in line with the RSS and NOAA satellite datasets over the last 2-3 years.
- Despite the reduction in recent temperatures, the 1979-2024 trend is reduced by only 0.01 deg. C/decade, from +0.16 C/decade to +0.15 C/decade. Recent warmth during 2023-2024 remains record-setting for the satellite era, with each month since October 2023 setting a record for that calendar month.
Background
Monitoring of global atmospheric deep-layer temperatures with satellite microwave radiometers (systems originally designed for daily global weather monitoring) has always required corrections and adjustments to the calibrated data to enable long-term trend detection. The most important of these corrections/adjustments are:
- Satellite calibration biases, requiring intercalibration between successively launched satellites during overlaps in operational coverage. These adjustments are typically tenths of a degree C.
- Drift of the orbits from their nominal sun-synchronous observation times, requiring empirical corrections from comparison of a drifting satellite to a non-drifting satellite (the UAH method), or from climate models (the Remote Sensing Systems [RSS] method, which I believe the NOAA dataset also uses). These corrections can reach 1 deg. C or more for the lower tropospheric (LT) temperature product, especially over land and during the summer.
- Correction for instrument body temperature effects on the calibrated temperature (an issue with only the older MSU instruments, which produced spurious warming).
- Orbital altitude decay adjustment for the multi-view angle version of the lower tropospheric (LT) product (no longer needed for the UAH dataset as of V6.0, which uses multiple channels instead of multiple angles from a single channel.)
The second of these adjustments (diurnal drift) is the subject of the change made going from UAH v6.0 to v6.1. The following chart shows the equator crossing times (local solar time) for the various satellites making up the satellite temperature record. The drift of the satellites (except the non-drifting Aqua and MetOp satellites, which have fuel onboard to allow orbit maintenance) produces cooling for the afternoon satellites’ LT measurements as the afternoon observation transitions from early afternoon to evening. Drift of the morning satellites makes their LT temperatures warm as their evening observations transition to the late afternoon.
The red vertical lines indicate the dates after which a satellite’s data are no longer included in the v6.0 (UAH) processing, with the NOAA-19 truncation added for v6.1. Note that the NOAA-19 satellite has drifted further in local observation time than any of the previous afternoon satellites. The NOAA-19 local observation times have been running outside the range of our training dataset, which assumes a linear diurnal temperature drift with time. So we have decided it is now necessary to truncate the data from NOAA-19 starting in 2021, which we are now doing as of the October, 2024 update.
Thus begins Version 6.1 of our dataset, a name change meant to reduce confusion and indicate a significant change in our processing. As seen in the above figure, 2020 as the last year of NOAA-19 data inclusion is roughly consistent with the v6.0 cutoff times from the NOAA-18 and NOAA-14 (afternoon) satellites.
This type of change in our processing is analogous to changes we have made in previous years, after a few years of data being collected to firmly establish a problem exists. The time lag is necessary because we have previously found that two operating satellites in different orbits can diverge in their processed temperatures, only to converge again later. As will be shown below, we now have sufficient reason to truncate the NOAA-19 data record starting in 2021.
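For readers curious what an empirical diurnal-drift adjustment of the general kind described above might look like, here is a minimal sketch. It is not the UAH code: the linear fit against crossing time, the variable names, and the synthetic numbers are all assumptions for illustration only.

```python
# Illustrative sketch only -- not the UAH algorithm. It shows the general idea of an
# empirical diurnal-drift correction: regress the drifting-minus-stable satellite
# difference against the drifting satellite's local crossing time, then remove the
# fitted drift component from the drifting series.
import numpy as np

def empirical_drift_correction(t_drift, t_stable, crossing_time_hr):
    """t_drift, t_stable: monthly anomalies (deg C) from the drifting and
    non-drifting satellites; crossing_time_hr: the drifting satellite's local
    equator-crossing time (hours) for each month."""
    diff = t_drift - t_stable                       # inter-satellite difference
    # Linear fit of the difference vs. crossing time (the linearity assumption
    # is what breaks down once a satellite drifts too far).
    slope, intercept = np.polyfit(crossing_time_hr, diff, 1)
    drift_component = slope * crossing_time_hr + intercept
    return t_drift - drift_component                # drift-adjusted series

# Synthetic example: a spurious 0.1 C/hour cooling as an afternoon orbit drifts
# from 13:30 toward evening over one decade (120 months).
rng = np.random.default_rng(0)
months = 120
crossing = np.linspace(13.5, 19.0, months)
truth = 0.015 * np.arange(months) / 12              # assumed real warming (0.15 C/decade)
stable = truth + rng.normal(0, 0.05, months)
drifting = truth - 0.10 * (crossing - 13.5) + rng.normal(0, 0.05, months)
corrected = empirical_drift_correction(drifting, stable, crossing)
decades = np.arange(months) / 120.0
print(np.polyfit(decades, drifting, 1)[0],   # badly biased trend (~ -0.4 C/decade)
      np.polyfit(decades, corrected, 1)[0])  # close to the assumed 0.15 C/decade
```

The point of the sketch is only that the correction depends entirely on how well the drifting-minus-stable difference is modeled, which is why data are truncated once a satellite drifts outside the range the correction was built for.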
Why Do We Even Include a Satellite if it is Drifting in Local Observation Time?
The reasons why a diurnally drifting satellite is included in processing (with imperfect adjustments) are three-fold: (1) most satellites in the 1979-2024 period of record drifted, and so their inclusion was necessary to make a complete, intercalibrated satellite record of temperatures; (2) two operational satellites (usually one drifting much more than the other) provide more complete sampling during the month for our gridded dataset, which has 2.5 deg. lat/lon resolution; (3) having two (or sometimes 3) satellites allows monitoring of potential drifts, i.e., the time series of the difference between 2 satellite measurements should remain relatively stable over time.
Version 6.1 Brings the UAH Data closer to RSS and NOAA in the Last Few Years
Several people have noted that our temperature anomalies have been running warmer than those from the RSS or NOAA satellite products. It now appears this was due to the orbital drift of NOAA-19 beyond the useful range of our drift correction. The following plot (preliminary, provided to me by John Christy) shows that truncation of the NOAA-19 record now brings the UAH anomalies more in line with the RSS and NOAA products.
As can be seen, this change has lowered recent global-average temperatures considerably. For example, without truncation of NOAA-19, the October anomaly would have been +0.94 deg. C, but with only MetOp-B after 2020 it is now +0.73 deg. C.
The following table lists various regional Version 6.1 LT departures from the 30-year (1991-2020) average for the last 22 months (record highs are in red):
| YEAR | MO | GLOBE | NHEM. | SHEM. | TROPIC | USA48 | ARCTIC | AUST |
| ---- | -- | ----- | ----- | ----- | ------ | ----- | ------ | ---- |
| 2023 | Jan | -0.07 | +0.06 | -0.21 | -0.42 | +0.14 | -0.11 | -0.45 |
| 2023 | Feb | +0.06 | +0.12 | +0.01 | -0.15 | +0.64 | -0.28 | +0.11 |
| 2023 | Mar | +0.17 | +0.21 | +0.14 | -0.18 | -1.35 | +0.15 | +0.57 |
| 2023 | Apr | +0.12 | +0.04 | +0.20 | -0.10 | -0.43 | +0.46 | +0.38 |
| 2023 | May | +0.29 | +0.16 | +0.42 | +0.33 | +0.38 | +0.54 | +0.13 |
| 2023 | June | +0.31 | +0.34 | +0.28 | +0.51 | -0.54 | +0.32 | +0.24 |
| 2023 | July | +0.57 | +0.60 | +0.55 | +0.83 | +0.28 | +0.81 | +1.49 |
| 2023 | Aug | +0.61 | +0.77 | +0.44 | +0.77 | +0.69 | +1.49 | +1.29 |
| 2023 | Sep | +0.80 | +0.83 | +0.77 | +0.82 | +0.28 | +1.12 | +1.15 |
| 2023 | Oct | +0.78 | +0.84 | +0.72 | +0.84 | +0.81 | +0.81 | +0.56 |
| 2023 | Nov | +0.77 | +0.87 | +0.67 | +0.87 | +0.52 | +1.07 | +0.28 |
| 2023 | Dec | +0.74 | +0.91 | +0.57 | +1.00 | +1.23 | +0.31 | +0.64 |
| 2024 | Jan | +0.79 | +1.01 | +0.57 | +1.18 | -0.19 | +0.39 | +1.10 |
| 2024 | Feb | +0.86 | +0.93 | +0.79 | +1.14 | +1.30 | +0.84 | +1.14 |
| 2024 | Mar | +0.87 | +0.95 | +0.80 | +1.24 | +0.23 | +1.05 | +1.27 |
| 2024 | Apr | +0.94 | +1.12 | +0.76 | +1.14 | +0.87 | +0.89 | +0.51 |
| 2024 | May | +0.78 | +0.78 | +0.79 | +1.20 | +0.06 | +0.23 | +0.53 |
| 2024 | June | +0.70 | +0.78 | +0.61 | +0.85 | +1.38 | +0.65 | +0.92 |
| 2024 | July | +0.74 | +0.86 | +0.62 | +0.97 | +0.42 | +0.58 | -0.13 |
| 2024 | Aug | +0.75 | +0.81 | +0.69 | +0.73 | +0.38 | +0.90 | +1.73 |
| 2024 | Sep | +0.80 | +1.03 | +0.56 | +0.80 | +1.28 | +1.49 | +0.96 |
| 2024 | Oct | +0.73 | +0.87 | +0.59 | +0.61 | +1.84 | +0.81 | +1.07 |
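As a quick arithmetic check on the table, the 2024 year-to-date (Jan-Oct) mean of the GLOBE column works out to roughly +0.8 deg. C:

```python
# Year-to-date (Jan-Oct 2024) mean of the GLOBE column from the table above.
globe_2024 = [0.79, 0.86, 0.87, 0.94, 0.78, 0.70, 0.74, 0.75, 0.80, 0.73]
print(round(sum(globe_2024) / len(globe_2024), 2))   # -> 0.8 (deg C vs 1991-2020)
```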
The full UAH Global Temperature Report, along with the LT global gridpoint anomaly image for October, 2024, and a more detailed analysis by John Christy, should be available within the next several days here. This could take a little longer this time due to the changes resulting from going from v6.0 to v6.1 of the dataset.
The monthly anomalies for various regions for the four deep layers we monitor from satellites will be available in the next several days (also possibly delayed):
Lower Troposphere:
http://vortex.nsstc.uah.edu/data/msu/v6.1/tlt/uahncdc_lt_6.1.txt
Mid-Troposphere:
http://vortex.nsstc.uah.edu/data/msu/v6.1/tmt/uahncdc_mt_6.1.txt
Tropopause:
http://vortex.nsstc.uah.edu/data/msu/v6.1/ttp/uahncdc_tp_6.1.txt
Lower Stratosphere:
http://vortex.nsstc.uah.edu/data/msu/v6.1/tls/uahncdc_ls_6.1.txt
It’s going to take some time to shed the Tonga/El Niño warmth in the Northern Hemisphere. Meanwhile, despite a La Nina, the Western US ski season is off to a great start…
Meanwhile, in the Southern Hemisphere . . .
What about the SH? September 2023 was warmer than October 2024. October 2023 was cooler. Or were you comparing something else? Or not comparing at all?
I see that you don’t know.
See the attached excerpt copied directly from the UAH Global Temperature Report for August 2024, dated 4 September 2024 (free download available at https://www.nsstc.uah.edu/climate/2024/August/GTR_202408AUG_v1.pdf ).
Bottom line: on average the SH is running significantly cooler (anomaly of +0.81 C, or +1.46°F, above seasonal average) than is the NH (anomaly of +0.96 C, or +1.73°F, above seasonal average).
I find that quite strange given that the Hunga Tonga volcano is located at 20.5° South latitude and that its injection of water vertically into the stratosphere in January 2022 is asserted to be causing warming of the lower atmosphere GLOBALLY.
Ooops . . . here’s the copied excerpt:
Where is all this heat? Hasn’t been here in the PNW.
As I mentioned a few weeks back, I suspect we will see a somewhat slower atmospheric cooling from the El Nino peak than usual, due to the extra HT WV in the stratosphere above the higher latitudes.
But I can’t see how anybody in their right mind could blame the 2023 El Nino event and HT event on any human causation.
Oh but they do
Still no variances, standard deviations, or degrees of freedom reported for any of these myriad averages.
If the error bars were on the UAH, NOAA or RSS graphs, they would be about 0.5 C wide….which would detract from the basic 0.15 C per decade global warming trend that is the basis of ALL government programs.
Exactly, would make those regression fit lines look rather silly.
The basis “of all those government programs” and the general climate panic is that the gentle 0.15°C per decade, 1.5°C per century, will morph into 3-5°C per century because of mysterious forces known only by computer models written by egotistical socialist misanthropes, who delight in grooming and terrifying children, and those with the intellect of a child.
Beat me to it. Measurements should always be quoted with a stated value and an uncertainty.
I would like to see an uncertainty budget and the propagation through the calculations.
Science 101.
The fact that climate scientists seem unacquainted with the basics of measurement should negate any of their findings.
After this update, this is the second warmest October on record.
Figures prior to 2023 are using the old version.
It’s still looking nearly certain that 2024 will be a record year by some margin. My projections are that 2024 will be 0.75 +/- 0.06°C. 2023 was 0.43°C.
I make the old trend 0.159C / decade, and the new 0.154C / decade. The new trend is only including changes made since 2023.
I have asked Spencer how this change will affect the uncertainty. His response is that it should increase the accuracy, but there will be more noise in the grid points.
“0.159C / decade, and the new 0.154C “
And you really felt that change, right?
It’s a change in the measurements, not the actual temperature.
It’s navel gazing at best.
I’m looking for the comprehensive temperature records, at approximately 1 m above sea level (Stevenson screen height) across the world’s oceans since, say, 1850.
Any idea where I can find it?
Umm, wut?
But there is no evidence of any human causation, is there.
The 2023, 24 peak is NOT AGW, it is NGW
“After this update, this is the second warmest October on record.”
Where exactly?
The global average. You know, the thing this article is describing in the headline.
The metric that has no basis in reality.
I think the average person, living in the average global temperature, would find it too cold.
I think you’re right…
They certainly would up there where UAH measures it.
The average surface temperature on Earth is approximately 15 degrees Celsius, according to NASA (Sep 20, 2023).
The World Health Organisation says that the ideal ambient temperature for humans is at least 18°C (64.4°F).
https://apps.who.int/iris/rest/bitstreams/1161792/retrieve#page=54
UAH does not measure the surface temperature. The layer of the atmosphere they measure is about -9 C.
Don’t you mean something like -9.123°C? Otherwise, where do the significant digits for an anomaly in the millikelvins come from?
Do you wear a warm jumper in winter, Nick?
Probably for around half the year ??
“I make the old trend 0.159C / decade, and the new 0.154C / decade. The new trend is only including changes made since 2023.”
I now have all the data, and the trend for v6.1 is 0.151 ± 0.025°C / decade.
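For anyone who wants to reproduce a figure like this from the published file linked above, here is a minimal sketch. The column layout assumed below (year, month, global anomaly in the first three fields) and the naive OLS uncertainty (no autocorrelation adjustment, so it will not exactly match a quoted ±0.025) are my assumptions:

```python
# Minimal sketch: compute a decadal LT trend from the UAH v6.1 text file linked
# in the post. The assumed column layout and the simple OLS error are illustrative.
import urllib.request
import numpy as np

URL = "http://vortex.nsstc.uah.edu/data/msu/v6.1/tlt/uahncdc_lt_6.1.txt"
times, anoms = [], []
with urllib.request.urlopen(URL) as f:
    for line in f.read().decode().splitlines()[1:]:     # skip the header line
        parts = line.split()
        if len(parts) < 3 or not parts[0].isdigit():    # stop at any trailer rows
            break
        year, month, globe = int(parts[0]), int(parts[1]), float(parts[2])
        times.append(year + (month - 0.5) / 12.0)
        anoms.append(globe)

t, y = np.array(times), np.array(anoms)
slope, intercept = np.polyfit(t, y, 1)
resid = y - (slope * t + intercept)
se = np.sqrt(np.sum(resid**2) / (len(t) - 2) / np.sum((t - t.mean())**2))
print(f"trend = {slope*10:.3f} +/- {2*se*10:.3f} C/decade (2-sigma, naive OLS)")
```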
Top 10 for October is now
And they’ve released the gridded data much quicker than usual, so here’s my interpretation of the global anomalies for October 2024, version 6.1.
And here’s last month’s using the new version.
Here’s the warming trend over the globe.
Let’s see how that changes as the El Nino effect continues to subside.
Bellman,
Thankyou for contributing the maps.
Visually, broadly, the hotter large areas are over mountain areas, Antarctica, Himalayas, Rockies, Andes, Alps. Any chance you could correlate UAH grid T with grid altitude?
Ta. Geoff S
Shows very clearly that the effect of the 2023 El Nino atmospheric warming event is gradually disappearing.
Notice how the tropics has much less pale yellow than the chart below.
It ended in May, so….?
OMG, the EFFECT of the El Nino, you mindless cretin !
Even you can’t be stupid enough to put any human causation on the extended 2023 El Nino effect.
The warm anomaly in the Western US can be explained by a stagnant high-pressure system that centered over the region during the first half of the month. So, sinking air combined with strong solar heating – unrelated to greenhouse gases.
If the signature of greenhouse gases can’t be detected in today’s weather, it doesn’t make sense to claim they explain long-term warming.
…..unless you’re a ruler monkey trendologist.
The bellboy has never been able to show any human causation of the El Nino events he uses to show warming trends..
He views the ENSO solely as statistical thresholds that the oceans merely cycle through every few years. No consideration is given to the underlying dynamics.
If I could order it up for next year, I would.
Roy is a bit late this month, and the TempLS surface measure is a bit early, so we can compare them. Here is TempLS, on the same 1991-2020 anomaly base
He said the graph would be late due to the change in version, but the gridded data came out much earlier than usual.
A little difficult to do a direct comparison as you are using a sensible colour scheme, whereas I’m using the UAH (-9, +9) scale.
Here’s the UAH data, but with the scale going from (-5, +5), still not the same as I’m using a linear scale.
There do seem to be pretty big differences in the oceans, with UAH showing above average to the West of South America, but below average to the East of the USA.
GHNC… ROFLMAO !!!!
Is there any way to increase the rate? I’m cold.
Yes … burn more fossil fuels.
China and India are working on it.
EU, UK, Canada etc seem to like freezing in winter. !
Have you found the pee pee tape
He’s probably saving up to make a new one of his own. !
I just want new readers to know that he is a known liar.
Not just a liar.. a simpleton to boot !!
Thanks for all this.
I’ve been having sleepless nights for the past month worrying about the next 0.0045 C increase in night-time temperatures in some places in this huge planet.
Stay safe out there everybody!
Do you have any evidence of any human causation?
Or are you prepared to admit that 2023, 24 are totally natural combination of the 2023 EL Nino event and HT WV in the upper atmosphere slowing the cooling rate.
Even with the best equipment [and will] available we cannot yet escape the similarity between “current climate science” and the parable of the blind men and the elephant.
A global temperature means very little if you happen to be in colder than normal temperatures, while others may be hotter than usual..
In the Arctic socialism does work, global warming means the sharp winter chill is spread far and wide reaching many more Europeans. Social justice achieved.
“A global temperature means very little”
It’s actually utterly meaningless.
Beat me to it. There seems to be a lot of confusion (intentional or not) between a model calculation or estimate and a real, physical measurement, as admirably demonstrated by Bellman’s two recent comments. Unfortunately, too many people believe that the model creates the reality rather than models are only (or should be only) tools to help understand reality. If the models don’t reflect reality, the models are wrong.
And then there is blind faith in the models
“demonstrated by Bellman’s two recent comments.”
What am I being accused of now?
In no way does my reporting of UAH data constitute an endorsement of that data set. But it is the only data set that is reported on here, and until recently the only data set people here trusted.
“If the models don’t reflect reality, the models are wrong.”
Obviously. And as the saying goes, all models are wrong.
Yes. Many here think there is a clear and separate distinction between measurements and models. However, the GUM in no uncertain terms squashes that notion. Many measurements are themselves outputs of complex multi-state modeling [JCGM 6:2020]. Even something as simple as a spot temperature measurement is the culmination of extensive material, thermodynamic, electrical, etc. modeling. So your insinuation otherwise here is erroneous.
But that doesn’t mean they aren’t useful. For example, we know that F=ma is dead wrong. Yet, despite this it is so useful that it is taught in every high school science class and used by engineers extensively.
“Even something as simple as a spot temperature measurement is the culmination of extensive material, thermodynamic, electrical, etc. modeling.”
In other words you still have no basic understanding of metrology. Calibration of a measuring device doesn’t require “models” of any kind. Estimating measurement UNCERTAINTY does, especially after the instrument leaves the calibration lab.
But he bloviates as if he is the world’s expert.
“For example, we know that F=ma is dead wrong.”
I don’t know that, bdgwx. How is it ‘dead wrong’?
Because it doesn’t correctly model reality. It’s close in most everyday scenarios so it is meaningful and useful, but wrong nonetheless. A better model (one that is less wrong, but possibly still wrong) is F=d(1/√(1-v^2/c^2)*m*v)/dt. The point is that all models are wrong, but many of them are meaningful and useful nonetheless. Being wrong does not mean useless or meaningless.
What you don’t seem to understand is that F=ma *is* correct. All your equation does is specify what the acceleration is in terms of relativity, i.e. the speed of light. There is nothing in the formula F = ma that requires a to be given in any specific form. Functional relationships can certainly be made up of independent variables that are themselves functions. a = f_1(v,c) is certainly correct. So is m = f_2(v,c). So writing F = ma is correct as well as F = f_1(v,c) * f_2(v,c).
If they are wrong as you say, then to be useful, one must be able to quantify how wrong they are and what the acceptable performance profile is. So far GCMs do not have that quantification.
It is not dead wrong. The profile where it provides acceptable answers is well known. To call it otherwise is simply denying the Industrial Revolution, in which its use was vital.
It is why, after all you have been told here about measurements, you still have no appreciation of physical science and the measurements used. If you think f=ma is not still used in engineering design, you are sadly mistaken. Go learn how vehicle crash tests are evaluated.
This is a good example of the difference between a GCM using a fitted, probabilistic “rule” to generate cloud data vs using a physics based equation such as F=ma
F=ma isn’t dead wrong, it’s physically correct and can be used to predict one quantity given the other two repeatedly and accurately within ranges that we usually encounter. Furthermore we recognise its limitations.
On the other hand generating cloud data in a GCM is incorrect in a general sense and when it’s parameters vary outside of those experienced (ie clouds in a warmer world), it’s wrong all of the time. It’s not physics based.
Then, when you add the non-physical quantity of clouds into the rest of the calculation then it doesn’t matter how physics based you thought it was, the result is non physics based and is no longer a physics based projection.
You think it is physically correct to say that reality does not care about a body’s velocity in regard to a body’s ability to move nor that there is a universal speed limit on the movement of bodies? Basically what I’m asking is do you really think Newton is right and Einstein wrong in regard to their descriptions of the nature of reality?
So you would apply relativistic corrections to Statics and Dynamics calculations?
I said
F=ma isn’t dead wrong, it’s physically correct and can be used to predict one quantity given the other two repeatedly and accurately within ranges that we usually encounter. Furthermore we recognise its limitations.
Reading and understanding FTW.
Yes, if you’re planning on accelerating a rocket that has a sizable velocity compared to the speed of light then you need the relativistic version and that’s why I mentioned recognising its limitations.
But you’re missing, or more probably avoiding the point re: F=ma vs the “cloud rule” in GCMs.
I’ll ask again…do you think we live in a universe where velocity does not impact a body’s ability to accelerate?
If you deflect and divert again then I don’t really have a choice but to accept that you feel classical mechanics is the right depiction of the universe and that relativity is the wrong depiction.
And for the record…I don’t necessarily think relativity is the right depiction either.
Which makes it meaningful and useful despite it being the wrong depiction of the universe.
We know for a fact that F=ma is wrong. But it still makes meaningful and useful predictions.
We know for a fact that the standard model is wrong. But it still makes meaningful and useful predictions.
We know for a fact that GCMs are wrong. But they still make meaningful and useful predictions.
This is the point.
Who are “we”?
You and the Queen?
Bullshit. Name just one.
Fortunately what you think is irrelevant.
“I’ll ask again…do you think we live in a universe where velocity does not impact a body’s ability to accelerate?”
You are assuming a definition for “m” in F = ma that is not inherent in the formula itself.
And revealing his total lack of any understanding of basic mechanics.
Fairly obviously from my answer it does.
Do you understand the difference between a result from a physical formula like F=ma and a statistical result such as the calculation for clouds in a GCM?
F=ma isn’t wrong in this context. It has limitations but will always, predictably and accurately give an answer. Cloud calculations in a GCM just don’t.
This would be a “no”.
And never will.
F=ma says it doesn’t. Therefore that model of reality is wrong.
Sure. But it doesn’t really matter. Remember, the standard model is a festering zoo of statistical results too so I could have used that as my example instead.
I think we agree on the concept. We’re just disagreeing on the semantics. I’m coming from the prevailing semantics here in which some models are said to be wrong if they do not include every facet of the physical processes involved, their predictions are not perfect, or otherwise are not indisputably and unequivocally 100% correct in every scenario.
More nutter stuff.
You may want to consider Tim Gorman’s explanation of why it does.
He won’t listen. He has his religious dogma and a set of blinders. He doesn’t understand that F = ma does *not* define how “m” and “a” are to be specified because he doesn’t want to understand.
Tim Gorman believes Σ[x]/n = Σ[x], Σ[a^2] = Σ[a]^2, PEMDAS rules are optional, sqrt[xy^2] = xy, d(x/n)dx = 1, division (/) and addition (+) are equivalent, that radiant exitance W.m-2 is extensive, that density is volume/mass, shutting the door on your kitchen oven will not cause the inside to get warmer, the Stefan-Boltzmann law only works when a body is in equilibrium with its surroundings, and a bunch of other absurd notions. So no I’m not going to just blindly consider TG’s explanation of anything given his grossly erroneous position on other topics.
If you want to defend the position that F=ma and its implication that velocity does not impact a body’s ability to accelerate then go ahead. But 100+ years of observations is beyond convincing that this notion is dead wrong.
Are you going to spam your “algebra errors” list again?
HAHAHAHAHAHAHAHAHAHAHA
This is as goofy as your oven door nonsense.
There aren’t any algebra errors. bdgwx *still* hasn’t figured out that the partial derivative is the partial derivative of the factor and not the functional relationship.
That’s how Possolo got the uncertainty of a barrel volume being related to 2 (from R^2) and 1 (from H) and *NOT* (2R) *(πH) or (1) * (πR^2).
The GUM says the partial derivatives are *sensitivity* coefficients and can be calculated experimentally: “The combined variance u_c^2(y) can therefore be viewed as a sum of terms, each of which represents the estimated variance associated with the output estimate y generated by the estimated variance associated with each input estimate xi.”
In the average you have two input estimates, Σx and “n”. Since “n” can produce no change as it is a constant, it has no estimated variance. This leaves the variance of Σx as the only factor that can change the output estimate “avg”.
bdgwx keeps wanting to use the partial derivative of the entire function as the sensitivity coefficient. It isn’t. Σx and “n” are separate terms. You calculate separate sensitivity coefficients for each term. Taylor treats them as separate terms, Bevington treats them as separate terms, and Possolo treats them as separate terms.
But that doesn’t fit bdgwx’s religious dogma that you can reduce measurement uncertainty of uncorrelated measurements by averaging. There’s no algebra problems on my end.
Exactly, he needs tiny numbers.
You are lost in space and don’t even know it. The velocity that a body currently has DOES NOT affect its ability to accelerate. As relativistic speeds are attained, mass will increase but it was proven long ago that a cannonball and feathers of the same mass accelerate the same.
The real point is that all physical measurement systems have performance profiles that define limits of use. A scale having a platform that weighs 100 g can’t measure micrograms. It just can’t in today’s technology.
I want to defend the position that mass is dependent on velocity and that F=ma holds relativistically when mass is allowed to vary.
Then use a model where mass is dependent on velocity and allowed to vary. Classical mechanics and F=ma is not that model. I suggest upgrading to relativistic mechanics and F=d(1/√(1-v^2/c^2)*m*v)/dt which adds the Lorentz factor to mass.
My point stands…models like F=ma can be meaningful and useful even though they are wrong.
But its not wrong. I’m inclined to think Tim was right and its how you consider mass to be constant in the equation that is wrong.
Whether Tim has been right or wrong about other things seems irrelevant on this point where he is right.
It is wrong. If you have a 1 kg mass and apply 1 N force it isn’t getting to 3e8 m/s even if that force is applied continuously for 9.5 years like that model says. That is indisputable and unequivocal.
That’s because it is constant in THAT equation.
Let me explain it with math. If Tim is right then both F=ma and F=d(1/√(1-v^2/c^2)*mv)/dt yield the same result. Thus it is necessarily the case that F = ma = d(1/√(1-v^2/c^2)*mv)/dt. Watch what happens though.
(1) ma=d(1/√(1-v^2/c^2)*mv)/dt
(2) dp/dt = d(1/√(1-v^2/c^2)*mv)/dt
(3) d(mv)/dt = d(1/√(1-v^2/c^2)*mv)/dt
(4) d(mv)/dt = (1/√(1-v^2/c^2))d(mv)/dt
(5) 1 = (1/√(1-v^2/c^2))
(6) √(1-v^2/c^2) = 1
(7) 1-v^2/c^2 = 1
(8) v^2/c^2 = 0
(9) v^2 = 0
(10) v = 0
See the problem? Whether he realizes it or not (and he probably doesn’t because he doesn’t understand derivatives) the math forces v = 0. In other words for his proposition to be true then v always has to be 0. Do we live in a universe where v is always zero?
Again…just because F=ma produces the wrong results doesn’t mean it isn’t meaningful or useful. The reason is because the amount by which it is wrong in most everyday cases is negligible.
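For concreteness, here is a small numerical sketch of the case being argued over: a 1 kg mass under a constant 1 N force, comparing the classical v = Ft/m with the constant-force result from the relativistic momentum form quoted earlier in the thread. This is my own illustrative calculation, not anyone's in the exchange.

```python
# Classical vs. relativistic velocity of a 1 kg mass under a constant 1 N force.
# With constant force, p = F*t; relativistically v = (F*t/m) / sqrt(1 + (F*t/(m*c))**2).
F, m, c = 1.0, 1.0, 2.998e8            # N, kg, m/s
year = 3.156e7                          # seconds per year
for years in (1, 10, 100):
    t = years * year
    v_classical = F * t / m             # grows without bound
    v_relativistic = (F * t / m) / (1 + (F * t / (m * c)) ** 2) ** 0.5
    print(years, f"{v_classical:.3e}", f"{v_relativistic:.3e}")
# By ~10 years the classical answer (~3.2e8 m/s) already exceeds c, while the
# relativistic answer stays below c; at everyday speeds the two are nearly identical.
```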
“That’s because it is constant in THAT equation.”
That is *YOUR* bias showing, it is *NOT* in the definition of F = ma.
It’s just like the equation distance = time x velocity, d = vt. There is nothing in that formula that requires v to be a constant. But using *your* logic “v” has to be a constant. In reality v = sin(t) is a perfectly legitimate description of v in d = vt.
Look in the mirror.
Yes but it’s also correct in that equation. F=ma is an instantaneous result.
You think it is correct that a 1 kg mass exposed to a 1 N force will accelerate to 315e6 m/s in 10 years?
Earlier you seemed to indicate that you agreed that a mass’s ability to accelerate is sensitive to its velocity. Now you seem to question that. Why the sudden change in position?
What about the zero velocity result you get when you set the classical model equal to the relativistic model? Do you think the universe only allows zero velocities?
You think the work required can be generated?
No, because of the well understood limitation of the equation and the velocity that will result from such a long time for acceleration.
Fundamentally you’re assuming mass doesn’t change over that time but that’s incorrect.
Do you agree that a 1kg mass exposed to a 1 N force will accelerate at 1 m/s^2 ?
So the model doesn’t match observations. Remember, this subthread is addressing Phil R’s statement….“If the models don’t reflect reality, the models are wrong.” F=ma does not reflect the reality that a body’s ability to accelerate is sensitive to its velocity. The F=ma assumption that velocity does not matter is indisputably wrong.
I’m just plugging numbers into the model no different than anyone else who uses it. Like I said above it is the model making that assumption. That’s why it is wrong.
No I don’t. And that’s the root of the issue behind my point.
BTW…even when v = 1000 m/s the error in the acceleration as computed from the classical model is only 0.0003%. This is why the classical model is still useful in most scenarios despite it being wrong.
Blindly, with little or no real understand of physics.
Numbers is numbers.
You continually reveal your lack of training in the physical sciences. In an introduction to circuit analysis you begin dealing with the equation V=IR. It is a linear equation just like F=ma. Is it correct? Of course it is, when dealing with an ideal environment. It is called Ohm’s Law. Think about the Ideal Gas Law. Is it correct? Of course it is, it has been accepted as a Law. These have all been validated with factual experimental results.
Part of advanced training in physics, chemistry, all engineering, and yes, even meteorology is learning about the performance envelopes for these TO WORK PROPERLY. You learn how to recognize when corrections must be applied to the laws. You also learn the ethical requirements for gathering and reporting the data along with the adjustments made.
You have no idea of which you speak.
When v=0, it’s completely correct. When v=1000 then m is 0.0003% larger than the version you’re proposing using so the “wrong” thing is m, not m due to v.
This is the well understood limitation of F=ma
Abbreviation is part and parcel of physical science functional relationships.
You are just dancing around trying to avoid admitting that it is your bias that F = ma means “m” and “a” can’t be functional relationships of their own.
Take a look at the common use of term “I” in electricity. It is a unit called “ampere’. But in actuality it is a functional relationship of a number of factors. Those factors are abbreviated down to the term “I”. It’s no different for the terms “m” and “a”.
You *still* haven’t figured out that the formula for an average is *NOT* a functional relationship and, therefore, is not subject to partial derivatives.
You given the quotes from the GUM as to how the partial derivatives are to be taken. As usual, you didn’t bother to learn from the quotes and want to keep going back to believing that the SEM is the measurement uncertainty of a group of measurements.
Go look at the notes associated with Eq 11a and 11b in the GUM. You’ll find the following:
P = f(V,R0,a,t) = V^2 / { R0[ 1+a(t-t0)] }
∂P/∂V = (2V) / { R0[1+a(t-t0) ] } = 2(P/V)
∂P/∂R0 = –V^2 / {R0^2[1+a(t-t0)] } = –P/R0
Factor out the common P, move it to the left side of the equation, and you get
u_c^2(P)/P^2 = (2/V)^2 u(V)^2 + (1/R0)^2 u(R0)^2
These are RELATIVE UNCERTAINTIES.
And the uncertainty factors wind up being just the derivative of the factor, not the derivative of the entire functional relationship.
Applying this to your avg and you get
avg = (Σx) / n
[u(avg)/avg]^2 = [u(Σx)/Σx]^2 + [u(n)/n]^2
==> [u(avg)/avg]^2 = [u(Σx)/Σx]^2
==> u(avg)/avg = u(Σx)/Σx
THE “n” DISAPPEARS! You do *NOT* divide the propagated measurement uncertainty of the data by “n” or “sqrt(n)” to find the measurement uncertainty of the average.
This has been presented to you multiple times and you still refuse to learn.
If you can’t accept what the GUM shows then just go ahead and admit that you think you know better how to calculate measurement uncertainty of a functional relationship – even in the face of the average being a statistical descriptor and not a functional relationship!
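As a neutral numerical aside, here is a small sketch of the GUM power example quoted above, P = V^2 / {R0[1+a(t-t0)]}, using assumed illustrative values. It only checks the relative-uncertainty form for that function; it does not settle the averaging dispute.

```python
# Numeric check of the GUM power example P = V^2 / (R0*(1 + a*(t - t0))) using
# illustrative (assumed) values, with only V and R0 treated as uncertain.
from math import sqrt

V, uV = 5.0, 0.01              # volts (illustrative)
R0, uR0 = 10.0, 0.02           # ohms (illustrative)
a, t, t0 = 0.004, 25.0, 20.0   # temperature-coefficient terms, treated as exact here

P = V**2 / (R0 * (1 + a * (t - t0)))
rel = sqrt((2 * uV / V)**2 + (uR0 / R0)**2)   # relative combined uncertainty
print(P, rel * P)                              # power and its combined standard uncertainty
```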
In addition to all of the previous absurd viewpoints of the Gormans we’ve got JG down below in a conversation with Bellman in this very blog post question the meaningfulness of the heat transfer equation because it involves the subtraction of two temperatures.
So do you really want to consider either of the Gorman’s viewpoints on the subject of how meaningful models are or any subject related to physics for that matter?
And you can see karlomonte still denying that the simple action of closing the door on a turned on kitchen oven will make the inside warmer than it would be otherwise. He’s so triggered by this mind numbingly obvious and trivial concept that he calls it “nonsense”. So is this really the side you want to throw your lot into?
ROTFLMAO!
You are cherry picking a formula you know nothing about.
You posted this formula.
Q = [K ∙ A ∙ (Thot – Tcold)] / d
You don’t even understand that this is the gradient equation for conduction in a solid body! It has nothing to do with two entirely different bodies that are not in contact which is the point that was being made. You can’t take two separate bodies and AVERAGE their temperature to obtain a meaningful temperature.
You might also note that (Thot – Tcold) is not an average but a difference!
As always.
The nonsense is your goofy idea that the door is a heat source, or are you now backtracking away?
Your knowledge of heat transfer is abysmal.
“G down below in a conversation with Bellman in this very blog post question the meaningfulness of the heat transfer equation because it involves the subtraction of two temperatures.”
The heat equation bellman used was for CONDUCTION. How much conduction of heat is there between a parcel of air in Topeka, KS and Nome, AK?
The conduction heat equation ONLY applies where there is conduction! It’s not a question of whether the conductive heat transfer equation is correct, it’s a question of when it applies!
On this point, do you understand why the projection from a few million non-physical results (ie non-physical timesteps at 20min for 100 years or so), each being the initial state for the next timestep’s calculation …has accumulated uncertainty that invalidates the result as a useful projection?
If you disagree with this, what is your underlying assumption on the accumulated uncertainty? Do you assume uncertainty cancels out rather than accumulates at each timestep?
Note. Uncertainty, not error.
On this point, do you understand why the projection from a few million non-physical results (ie non-physical timesteps at 20min for 100 years or so), each being the initial state for the next timestep’s calculation …has accumulated uncertainty that invalidates the result as a useful projection?
No.
First…just saying something is unphysical doesn’t make it so.
Second…F=ma can be rewritten as F=dp/dt and handled in time-step form too like is the case in the modeling of movements of astronomical bodies.
Third…uncertainty does not invalidate projections. It just means there is a dispersion of possibilities around the projections for which reality could take.
I know.
BTW…and notice how this conversation is diverting further away from my post above about how models can still be meaningful and useful despite being wrong.
You didn’t answer the question that matters. What is the interval quantity that describes the dispersion of possibilities?
Funny how you use uncertainty and dispersion of possibilities together. That sounds a lot like a standard deviation versus how accurate a mean is using an SDOM.
No, you don’t. You confuse the two again and again.
But you’ve agreed that F=ma is a physical relationship (albeit with some discussions around relativistic considerations) and that clouds are statistical.
Actually I’m being generous when I say “statistical”. They’re actually a fit derived from rules of thumb “calculations” that can be described as… when the parameters for cloud production (eg humidity, temperature etc) align, then produce a cloud. But tune that production rate to be realistic according to what we’ve seen.
That’s non physical. Its not like F=ma
How do you think that looks in a warmer world? How well do you think that fits works out? How can it be tested?
But that dispersion encompasses the entire range of possibilities for climate from the model. Claiming the calculation gives the most probable climate response is disingenuous at best from a model that isn’t even calculating it.
And there are models that aren’t meaningful or useful. A model claiming to predict lotto numbers would be a good example despite being 100% correctly in the ballpark of possible numbers with every run.
Where in F = ma is it specified that either m or a is not velocity dependent?
Is m = 1kg or is m = (1kg) (1/sqrt(1-v^2/c^2) ?
You are assuming that in F = ma “m” is always 1kg and is *NOT* (1kg) (1/sqrt(1-v^2/c^2)
That’s a result of *YOUR* biases, which have nothing to do with the formula F = ma
So “utterly meaningless” that this website wastes time reporting it each month, along with monthly updates on the “pause” using this utterly meaningless data.
Well, we all look forward to your visits here.
Gotta get a laff wherever we can these days 🙂
We are now arguably into La Nina conditions. We were also there in January 2023 with a negative anomaly. Hence, most of the current anomaly difference cannot be due to ENSO. Looks like the Hunga Tonga eruption is still having a major effect.
The big question then is, how long will this continue? The following graphic shows the water vapor has not started to dissipate yet. It’s moved down in altitude a little though.
It’s a very strange effect, then. The main HT eruption occurred in January 2022. There was a slight uptick in lower troposphere temperatures at the time, but this quickly died down again and by January 2023, one year after the HTE, the UAH anomaly was in negative territory again (see Dr Spencer’s table above).
The run of record-setting UAH monthly temperatures didn’t start until July 2023, some 18 months after the HTE.
Where was all the heat hiding in the intervening period?
In CO2. 🤣
It’s like a jigsaw puzzle. You have to consider multiple effects, both warming and cooling, all with different timing. Once you put it together it actually makes sense.
There were cloud effects, SO2 effect, water vapor effect, chlorine/ozone effect and changing ENSO effects.
The initial year (2022) after HTe was in a La Nina. This was also the period with the strongest SO2 cooling effect. Together they canceled out the initial warming effects. The SO2 cooling should be almost gone now as is the 2023/24 El Nino warming.
The warming effect (mostly via cloud reduction) over the past year has warmed all the oceans which I believe is one of the big reasons the warm temperatures are now persisting. They will take a little longer to cool
So who has “put it all together” with respect to where all the HTE heat was hiding for 18-months?
Is there any scientific literature to explain this, or is it just opinions on blog comment pages?
Fungal is saying WV doesn’t block the escape of atmospheric energy.
Has just destroyed the whole GHE mantra. !
Waiting for evidence of any human causation… which we know will never come.
The HT ocean warming is very obvious in the Antarctic sea ice response.
The HT WV plume has spread out over time to cover most of the higher latitudes.
I know you are incredibly dumb, but even you should realise that the planet is quite large, and it takes a while for natural things to travel from place to place!
What part of the SO2 cooling effect did you fail to understand? No heat was hiding. It was reflected back to space.
The part where it allowed the HTE heat to magically disappear for 18 months.
WOW.. the stupidity and ignorance. !!
Still not able to figure out what all that stratospheric WV is doing either.!
Oh and still waiting for some evidence of human causation for the 3 major El Nino events that make up the only warming in the UAH data.
SO2 doesn’t “follow heat”, it reflects sunlight. I’m amazed you don’t understand volcanic cooling effects when you claim to know so much.
“Where was all the heat hiding in the intervening period?”
Nothing is hiding. air masses move around, just as they always have. These averages tell us absolutely nothing useful.
So it wasn’t there?
According to the mean value theorem for integrals as applied to the heat capacity equation Q=mcΔT and the 1LOT equation ΔU = Q – W, it is necessarily the case that if the HT eruption caused ΔU > 0 then the global average temperature would increase such that ΔT > 0 as well. Average ΔT could thus be a falsification test for hypotheses (like the HT eruption) involving an expected change in U. Being able to test hypotheses is what I and most others consider “useful”.
Is this how your oven door heats the inside of the oven?
LOL.. That was a classic piece of beeswax ignorance.
Showed just how low on the “understanding” scale ‘it’ really is.
Close to a “never go full retard” moment .
It is so bizarre and irrational there is no way I can quote it verbatim from memory.
Great use of your Kamala education..
Word-salad mixed with ignorance => complete gibberish.
Quite hilarious!
You do realize that H2O has a quality called “latent heat”, right? How does that affect ΔT?
I don’t think beeswax “realizes” much at all..
… it is all just fantasy and pretty lights to he/sh/it.
Fungal is now in DENIAL of the greenhouse effect.
He thinks the upper atmosphere WV from HT isn’t blocking the atmospheric cooling from the El Nino.
So DUMB.. so funny !
OMG , fungal can’t understand the chart Richard M posted.
Can’t see that the initial HT WV took several months to spread out to the higher latitudes of the Northern and Southern Hemispheres.
This one is a bit higher up and makes it more obvious
Richard M,
Something is seriously wrong with the Aura MLS contour chart of water content at the 147 hPa level that you present in the context of HT-injected water appearing at the lower altitudes of the stratosphere:
1) It doesn’t show any significant variation in water content south of the equator from the time of the HT volcano eruption in January 2022 until about April 2024 . . . an interval of about 27 months! This despite the HT volcano being located at about 20.5° S latitude.
2) It doesn’t show any significant variation in water content for latitudes of 0° to about 7° north of the equator for any time after January 2022. This despite the appearance of significant water content anomalies seen as far as 75° N starting in June 2024.
“Looks like the Hunga Tonga eruption is still having a ~~major~~ magical effect.”
The water vapor injection went straight into the stratosphere (above the 147 hPa altitude). If you look at 75S you can see how it has changed. It took a whole year for the WV to get that far south and it is already dissipating at higher altitudes.
You can see a similar effect at 75N.
Just for clarification (for me, at least) do you mean dissipating at higher altitudes or latitudes (75° N& S are pretty high latitudes), or both?
It appear the WV spread out to the higher latitudes over both NH and SH which took several months.
It looks like it is gradually clearing over the tropics, and drifting to higher altitudes over the higher latitudes.
Initially it took awhile for the water vapor to reach the highest latitudes but finally had it covered in 2024.
Now, it looks to me like much of the water vapor has moved downward from its initial injection height leaving an area of lower concentration in the mid stratosphere. The upper stratosphere doesn’t appear to have changed much.
Keep in mind this chart is for anomalies and as you go up in height the overall air density continues to decrease. An anomaly of 1 ppm at 3 hPa represents less water vapor than a 0.2 ppm anomaly at 147 hPa.
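A back-of-envelope way to see the size of that difference (treating ppm times pressure as proportional to the amount of water in a layer, which is a simplification on my part):

```python
# Same mixing-ratio anomaly at a lower pressure means fewer molecules:
# compare 1 ppm near 3 hPa with 0.2 ppm near 147 hPa (arbitrary ppm*hPa units).
print(1.0 * 3.0, 0.2 * 147.0)   # -> 3.0 vs 29.4, roughly a factor of ten
```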
Is the color scale for “H2O (ppm)” given in the contour plots you previously posted for 75 S and 75 N—in both cases with the y-axis covering an altitude range of 13 to 40 km (an equivalent stratospheric pressure range of about 150 to 3 hPa)—wrong then?
Or has the scale already accounted for density variation with altitude in computing those indicated ppm levels? If not, then the plots are garbage.
My assumption is the density changes are correctly covered by the color scale. Keep in mind, non-condensing gases tend to be well mixed which would keep the ppm values relatively consistent. At these altitudes even water vapor is non-condensing due to the absence of CCNs.
What does “well mixed” actually mean? Horizontally or vertically. Gravity alone should concentrate non-condensing gases vertically in some form or another.
Only above about 100km.
At the altitude range of the stratosphere over the tropic and temperate latitudes (-60 C just above the tropopause, ~15 km, to -15 C at the top of the stratosphere, ~50 km), most volcano-injected salt water will have been flash frozen to micro ice crystals with a corresponding saturation-limit water vapor pressure in the range of about 4 Pa to 56 Pa, respectively*. This can be compared to the typical range of ambient stratospheric pressures associated with those altitudes (~10,000 Pa to 200 Pa). Obviously, water vapor can never come close to saturating the stratosphere.
The injection of seawater (salt water)—not to mention entrained sea-floor sediments—into the stratosphere provides all the cloud/ice condensation needed for the flash freezing of water.
*see “Vapor Saturation Pressure Over Ice Formulas and Calculator”, https://www.engineersedge.com/calculators/vapor_saturation_pressure_15731.htm
Actually, your post of the Aura MLS contour plot for 75° S shows:
1) A sudden decrease of about 1.25–1.5 ppm (dark green to medium brown color coding) in the interval from about June to September 2023 over the stratospheric altitude range of 13–30 km. Then, very strangely, a jump back to 0.75–1.00 ppm H2O after September 2023 for most of that same altitude range (20–30 km). What the heck?
2) From about September 2023 through April 2024 the dissipation of stratospheric H2O happened earlier at lower stratospheric altitudes (below 25 km) than at higher altitudes (above 30 km).
Moreover, your post of the Aura MLS contour plot for 75° N shows the H2O dissipating much faster, starting in January 2023, at lower altitudes (below 23 km) compared to higher altitudes (above 25 km).
Funny watching anti HT effect stooges now denying that the GHE from WV even exists.
Denying that extra WV in the atmosphere slows energy loss.
Still plenty of WV to slow the loss of energy from the 2023 El Nino event.
Rather than the normal fast drop-off like in 1998 and 2016, this will take a bit longer.
Really?
The typical range of water vapor in the stratosphere is between 3 and 7 parts per million by volume (ppmv). Note that the contour chart that you posted for H2O in the stratosphere at the 10 hPa pressure level—as well as the numerous similar charts posted by Richard M and you—have a color coding anomaly scale ranging from -1 to +1 ppm H20 concentration. So, at most the HT eruption caused a temporary increase of about 1.5 ppm (or 30% of an average of 5 ppm) in stratospheric water content assuming the Aura MLS data is accurate.
Meanwhile, the typical range of water vapor in the troposphere is around 3000 to 4000 ppmv, with the highest concentrations found near the equator and decreasing towards the poles.
Most science-oriented persons can understand that a concentration difference on the order of 1000:1 can mean that water vapor will have much greater “greenhouse effect” on Earth’s radiation balance when existing in the troposphere than it will have when existing in the stratosphere.
Now, you were saying something about “stooges” . . .
Wow, Dr. Spencer, thank you so much for your rather detailed summary of the “why and when” of your data collection and data adjustment methods, as well as reasoning for now dropping the NOAA-19 satellite data set from your averaging to obtain the LT trends that UAH reports. This made readily available for public consideration without apology . . . quite rare today!
This is just one reason that I trust UAH GLAT temperature reporting more than any other source (e.g., RSS and NOAA).
However, I wish that UAH would cease reporting monthly average temperature anomalies and decadal trending to 0.01 C resolution, because I don’t think that is justifiable considering the chain of processing, and the associated uncertainties, involved in converting raw microwave sounding unit radiometric data from orbiting satellites into an effective global temperature. I realize that you probably are doing such to make the trending more apparent, but it conveys a sense of data accuracy that really isn’t factual IMHO.
They actually report to 0.001 C resolution. See here.
Here is what bothers me.
I = 5.67×10⁻⁸ • 274⁴ = 319.5842075
I = 5.67×10⁻⁸ • 274.001⁴ = 319.5888729
That means resolving temperature to 0.001 deg (±0.0005) requires resolving the flux to about 319.588 – 319.584 = 0.004 W/m².
That’s pretty good resolution. I found one paper that had a ±5 W/m² uncertainty. That would not allow the necessary resolution to determine temperatures to thousandths of a degree.
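For anyone who wants to reproduce the arithmetic, a short sketch (unit emissivity assumed):

```python
# Stefan-Boltzmann sensitivity near 274 K: flux change for a 0.001 K temperature change.
sigma = 5.67e-8                            # W m-2 K-4
for T in (274.0, 274.001):
    print(T, sigma * T**4)                 # ~319.584 and ~319.589 W/m^2
print(sigma * (274.001**4 - 274.0**4))     # ~0.0047 W/m^2 per 0.001 K
```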
“effective global temperature”
No such thing.
Well, its certainly better that an ineffective local temperature 😜
I don’t get it. Why is the average global temperature tracked and reported? It has zero physical meaning and nothing can be inferred from this number changing. I would definitely like to know if I am wrong on this and if so, why.
So that we can see if it is going up or down.
It has no less meaning than any other global average that we track. In this particular case we can infer a change in internal energy via the heat capacity equation Q = mcΔT.
Perhaps an example of other global averages might help. NASA tracks a lot of them. See here for only a small subset.
“It has no less meaning than any other global average that we track.”
So as meaningless as “global sea level” and the like? Sheesh.
I think it depends on your definition of “meaningless”. If your definition is so loose that equations like Q = mcΔT or ΔU = Q – W or mathematical theorems like the mean value theorem for integrals would qualify as “meaningless” then yeah I can see why you would describe the averages on NASA’s fact sheet for Earth or any average of intensive properties for that matter as “meaningless”.
The attempt to look like you know something.
…. is FAILING completely !!
Like you would know, lol!
A whole lot more than you would.
You are one of the dumbest trolls on the site.
Basically just a NIL-educated idiot !
Come on dolt. Look up the definition of “mean value theorem for integrals”..
… and explain how it is remotely relevant to what is being discussed.
It is just copy/paste Kamal-talk from beeswax.
A single temperature number for the entire planet is insanity, heck you can see that in the regional temperature changes in the chart where the changes are regional that doesn’t mesh with other regions at all.
In my region it has warmed slightly over 45 years mostly at night, but the climate is still the same in all that time.
It’s no less insane than any of the other global averages that scientists have published. See here for other examples.
Regional temperatures are still spatial averages. The only difference between those and a global average is the size of the domain.
Indirect Appeal to Authority — FAIL
Which values in your link are intensive values? Volume, length, mass (including gravity which increases when you add mass), etc are extensive.
Regional temperatures do *NOT* have spatial averages. The temperature gradient between points may have an average value which is the slope of the temperature curve but not the temperature itself. Even local temperatures don’t have spatial averages. What is the average temperature between Pikes Peak and Colorado Springs at any point in time?
“ mathematical theorems like the mean value theorem for integrals would qualify as “meaningless” “
Stop blowing smoke. You are assuming that the measurements forming the temperature curve have an accuracy you simply cannot support. Therefore the “mean value theorem” is nothing more than an excuse for a word salad. The issue is that you can’t calculate the value of the integral to any more decimal points than the measurements support – and on a global basis that’s somewhere in the units digit, not the hundredths or thousandths digit. In other words, we don’t know the actual global average temp to better than something like 15C with a measurement uncertainty of +/- 5C.
In fact, the UAH is *not* even a global “temperature”. It is a metric that can, at best, be said to be related to the global temperature. Temperature is an intensive property and simply can’t be averaged. Someday you *really* should use Google and look up what an intensive property is.
+1000
It has read a book.. can copy-paste…
… but has basically ZERO COMPREHENSION. !
The mean value theorem requires a CONTINUOUS function. Temperature is not a continuous function, at least not one you can write a functional relationship for using the samples in any temperature dataset.
One could do a numerical integration on a daily temperature curve if the granularity was sufficient (5 minute?). It would give a total degree-day metric. Finding the average of that distribution would still be difficult. Let alone why one would even want to find an average value from a degree-day value.
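A minimal sketch of what such a numerical integration could look like, using an assumed sinusoidal diurnal curve purely for illustration:

```python
# Integrate a synthetic 5-minute daily temperature curve into a degree-hour total,
# then a time-mean, and compare with the traditional (Tmax + Tmin)/2 "average".
import numpy as np

minutes = np.arange(0, 24 * 60, 5)                              # 5-minute sampling
hours = minutes / 60.0
temps = 15.0 + 8.0 * np.sin(2 * np.pi * (hours - 9.0) / 24.0)   # assumed diurnal curve, deg C
degree_hours = np.trapz(temps, hours)                           # trapezoidal integration
print(degree_hours, degree_hours / hours[-1])                   # total and time-mean
print((temps.max() + temps.min()) / 2.0)                        # traditional daily "average"
```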
Climate science is absolutely against moving on from a traditional computation of “average” daily temperatures. Imagine engineers, physicists and chemists not embracing the latest and greatest measurement devices because it would upset comparisons with older measurements.
Averages are useful if you don’t abuse them.
For example, knowing your average driving speed might be useful to get a good idea of how long it will take you to get across town. But using that average in rush hour would show you that you need a different “rush hour average speed”.
If you keep stats you might find that your “rush hour average speed” has decreased with city population increase…. so there is no end of revisions that can be made to make “averages” more like the “historic real results”….And more sophisticated expertise is required to “improve the model”.
At some point, the random changes of reality exceed the predictive value of the model.
You can replace “averages” with pretty much anything in this statement and it would still be true.
kinda like: (name favorite or most recent crisis, disaster or apocalyptic prediction here) is caused by “climate change.”
I equate averaging temperature readings with averaging your vehicle’s tire pressures –
the manual specifies 40 psi tire pressure.
You put your gauge on your tires and they read 15, 35, 45, and 65 psi.
So average psi for your 4 tires = 40 psi.
All good to set off on that cross-continental trip, hey?
That would be like averaging the pressure of Mercury, Venus, Earth, and Mars and getting (0.005 picobar + 92 bars + 1014 mb + 6 mb) / 4 = 23 bars. Using this result to draw a conclusion about Earth and only Earth is an example of an abuse of averaging.
A better analogy would be to pick one and only one of the planets or tires and measure the pressure at different locations within that body and only that body. For your first tire you might get readings of 14.8, 15.0, 15.2, 15.2, 15.0 and 14.8 psi at 0, 60, 120, 180, 240, and 300 degrees respectively. The average is 15 psi which I think everyone including you would agree is physically meaningful.
The point is that just because you can craft an absurd example of an abuse of averaging doesn’t necessarily mean that all cases of averaging are abusive. And in general terms the fallacy you committed here is common enough that it has a name: the fallacy of composition.
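For what it’s worth, a one-line check of the illustrative tire readings quoted above (Python; the readings are the ones given in the comment, not measurements of any real tire):

readings_psi = [14.8, 15.0, 15.2, 15.2, 15.0, 14.8]  # illustrative gauge readings at 60-degree intervals
print(sum(readings_psi) / len(readings_psi))  # 15.0 psi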
And how t.f. would an ordinary vehicle driver measure the psi in different parts of each tire?
Punch a probe through the sidewall a dozen times or so?
And speaking of planets –
which planet are you living on?
He’s living in “statistical world”, where he can average anything he wants to average and believe it has some kind of relationship to reality. He can also then call whatever he wants an “abuse of averaging” without ever really thinking about the physical similarities between the systems.
I.e. you can average temperature (an intensive property) but you can’t average a car’s tire pressures (an intensive property).
Averaging intensive properties *IS* an abuse of averaging. Since temperature is an intensive property, it can’t be averaged. Trying to do so is an abuse of averaging – just like trying to average your car’s tire pressures. Both are intensive properties.
Your example is indicative of what is wrong with the GAT. What is the average temperature variance in the Arctic in a winter month versus the average temperature variance in Buenos Aires in the summer? Are these really comparable values that should be averaged? That leads into the question of why there are so many global sites with little to no warming.
I have never seen a cogent treatment of this from you, just silence. If there is one reason as to why CAGW continues to fall in importance to people that is the reason. Too many locations are not experiencing CAGW.
Irony Alert.
This is an example of averaging an intensive property, something bdgwx doesn’t understand and will never understand.
The trouble is that m and c are not constants across the globe.
If I have two equal volumes of dry air at standard pressure, one volume at 0 °C and the other at 20°C, bring them together and let them mix without any external influences, guess what the resulting temperature will be?
That’s right, about 9.65 °C, among other reasons because air at 0 °C has a 7.3% higher specific mass than at 20 °C. And then I have left humidity and air pressure out of the equation. To average Q across the globe, you would need a full record of pressure and humidity. In and of itself, temperature does not say anything.
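A minimal numeric sketch of that mixing calculation (Python), assuming ideal-gas behaviour at fixed pressure so that density scales as 1/T(absolute) and assuming equal specific heats; it reproduces the roughly 9.65 °C figure quoted above:

# Mix equal volumes of dry air at 0 C and 20 C; mass-weight the temperatures.
T1_c, T2_c = 0.0, 20.0
m1 = 1.0 / (T1_c + 273.15)  # relative mass of the colder (denser) volume
m2 = 1.0 / (T2_c + 273.15)  # relative mass of the warmer volume
T_mix_c = (m1 * T1_c + m2 * T2_c) / (m1 + m2)
print(round(T_mix_c, 2))  # about 9.65, not the naive 10.0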
It doesn’t matter. The mean value theorem for integrals says that the average of a function divided by the domain is exactly equal to the full integration of the function over the domain. IOW…the average of m, c, and ΔT is sufficient to compute Q regardless of whether m, c, and ΔT are constant or not.
Let me guess.. You were a “rote” learner…
… with zero comprehension what you were talking about.
T doesn’t have a “function”. How do you integrate it? Where does the mean value theorem come into play?
His base assumption is wrong – he believes you can average an intensive property. There is no “global” T to average, there is no “global T” function to integrate!
“The mean value theorem for integrals says that the average of a function divided by the domain is exactly equal to the full integration of the function over the domain.”
Not really. That is just the definition of the average. The mean value theorem says that for a continuous function, there is at least one point in the domain where the value equals the average.
Right, but f(c) is the average. And f(c)(b-a) = integral[f(x).dx,a,b]. What the MVTI also says is that the average of the function over the domain multiplied (corrected typo above) by the domain is equal to the integral over the domain. We don’t even need to know how f(x) is defined. We just need to know f(c), a, and b. The only stipulation is that f(x) is continuous. So we don’t need to know the specific m, c, and ΔT at every dx to compute the total energy Q via integration. An equally valid option is to use the average m, c, and ΔT and multiply by the domain.
For example, we don’t need to know the density kg.m-3 of each specific dx of Earth to calculate its mass. I mean it can be done that way, but the MVTI says it is equally valid to use the average density f(c) = 5513 kg.m-3 and the domain (b-a) = 1.08e21 m3 to get 5.97e24 kg with a simple multiplication. We don’t even need to know how (and it is surely very complex) the function f(x) determines the density at location x. We know f(c)(b-a) will equal integral[f(x).dx,a,b] regardless.
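A quick numeric check of that density example (Python), using only the figures quoted in the comment above:

rho_avg = 5513.0   # quoted mean density of Earth, kg/m^3
volume = 1.08e21   # quoted volume of Earth, m^3
print(rho_avg * volume)  # about 5.95e24 kg, matching the quoted 5.97e24 kg to rounding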
More bullshit from a mathematician.
Exactly how do you get an average density if you don’t measure each “dx” in the volume before you start your method? Do you think the mass of the earth is homogeneous? If so, explain why satellite orbits vary due to gravitational anomalies.
You continually display your lack of knowledge about accountability for what you assert. People who have experience in the real world with measurements and calculations understand the vagaries involved and know that uncertainty exists.
Tell us where you included any uncertainty in the calculation of 5.97e24 kg! Didn’t enter your mind at all did it?
If I have a lead ball with a mass of 1kg and a second lead ball with a mass of 3kg I can “add” them by putting them both on a scale. The result is a mass of 4kg. The masses add physically.
If I have a temp of 10C and a temp of 20C what does their sum represent physically? Can I set them both on a thermometer and measure a temp of 30C?
If you can’t add the quantities then you can’t calculate an average value of the quantities.
You continue to leave out the fact that the function has measurement uncertainty – which carries over to the mean value theorem.
You don’t even understand the physical reality of what you are talking about. Q is an extensive property – enthalpy. Temperature is an intensive property. You can’t *get* a function for Q using only T. So what do you think you are integrating?
The Web can be your friend in answering such questions . . . attached is a screen grab of a response from Google’s AI (IOW, the GLAT as a numerical—albeit average—number does have physical significance):
Even Gemini tells users to double check the information it provides.
You didn’t read that bit, did you?
I didn’t need to because it makes perfect sense with current scientific understanding (as the response properly notes) . . . but I might change my opinion if you can provide a good rebuttal to the stated information—you know, since you favor double checking on all things.
How about it?
Rebuttal? How do you average an intensive property? And temperature *is* an intensive property.
Did “current science” find the Earth’s tongue somewhere so it could put a thermometer under it?
Back at you: does a thermometer under a human tongue represent a meaningful average temperature for the whole human body having said tongue, or is it instead just a meaningless number (other than measuring just the underside-of-tongue temperature) that physicians record just for fun?
For an individual it does. It is measuring the same thing with a resolution uncertainty of ±0.05. My average temperature is 97.7. A temperature of 98.5 to 98.7 means I have a slight fever.
The issue here is that the doctor doesn’t attempt to derive a temperature to two or even three decimal places to evaluate my condition.
The reference oral temperature reading of “about 36 to 37 degrees C” (98.6 degrees F is spurious precision) was empirically derived. It’s a reasonable approximation to the actual core temperature, and seems to be more acceptable to most people than the more representative rectal temperature.
It’s not intended to be an average body temperature. There is known to be an offset to the core temperature, but it’s a quick and easy way to gauge whether there is some anomaly. Remote IR thermometers measuring forehead temperature seem to be the preferred first pass now.
Even 1 decimal place is pushing it, especially using Fahrenheit. Half a degree C either side of the normal range warrants further investigation.
I don’t think you understand what a metric is in terms of physical reality. It is a quantity that can be used to assess or compare something. The temperature under your tongue is a “metric” used to assess the physical reality of your body. Same for a forehead temp or an under-the-arm temp.
The temp under your tongue or under your arm is *NOT* an average temp of your body. It is a metric used by the physician to assess your physical condition.
It doesn’t actually say that it has physical significance. Does the “AI” know about averaging intensive properties?
Googled FYI to remind other curious readers:
“Intensive and extensive properties are characteristics used to describe the physical properties of substances in the fields of physics and chemistry. The terms intensive and extensive were first described by physicist Richard C. Tolman in 1917.
Intensive properties are those that do not depend on the amount of substance present. Examples include temperature, density, and color. These characteristics remain constant regardless of the quantity of the substance.”
Since age is an intensive property of any given human, it now becomes apparent how nonsensical it is to talk about the average age of any select group of humans.
Got it!
Extensive properties of a system depend on the amount of matter involved, e.g. mass and volume. Add two masses together and you get more mass.
The age of a system depends on the amount of time it has existed, very similar to the amount of mass and volume. Add more time to the existence of the system and its age increases by the same amount. Thus age should be classed as an extensive property. Adding time increases age, just like adding mass increases mass. You *can* average extensive properties.
Temperature is intensive because if you add two masses of the same temperature together you get the same temperature. Age is extensive because if you add time of existence to a piece of matter you don’t get the same age.
Ever hear of carbon-14 dating? Radioactive elements (thus matter) have characteristic nuclear decay rates that reflect their age since they were formed.
As for the age of such matter being an intrinsic property, it meets that definition by being both independent of the specific sample size being considered and independent of being variable as the result of external factors.
According to Google AI:
“While radioactive dating is not directly used to determine the age of the universe, it plays a crucial role in estimating the age by analyzing the composition of very old stars, particularly by measuring the abundance of elements like uranium and thorium, which allows scientists to infer the age of the stellar populations and thus provide constraints on the age of the universe itself.”
If age were an intensive property then how would carbon dating work since the *amount* depends on the age?
“it meets that definition by being both independent of the specific sample size being considered and independent of being variable as the result of external factors.”
You are trying to conflate mass with age. Age has nothing to do with mass. Age has to do with time. If an object has existed for one second and then exists for another second then it has existed for a sum of one second plus one second equaling two seconds. The property of “age” can be directly summed.
If you have two objects, say a 1kg lead ball and a 2kg lead ball, each which have existed for ten years and then continue to exist for another ten years then the age of each will be twenty years. Time adds. And the age of each ball has nothing to do with the mass of the ball.
If one of the balls has existed for 10 years and the other for 20 years then you *can* sum their ages and get a total age of 30 years, meaning the average age of the two balls is 15 years. Again, time adds. It is an extensive property.
Sure, sure . . . if you want to go down the path of asserting that average temperature has no “physical significance”, go right ahead.
Try this, when the weatherman says that the temperature at LAX will peak today at 76 °F under sunny skies, but only reached a high of 58 °F yesterday due to overcast conditions, do you think that is nonsense because either (a) LAX represents a distributed area and thus cannot have a meaningful average temperature, or (b) temperature is an intensive property and therefore cannot be meaningfully averaged?
If you add 76F and 58F together what do you get? 76F? 58F? 130F?
If you add 2kg and 3kg together what do you get? 2kg? 3kg? 5kg?
You have to add quantities before you can determine an average. What does adding temperatures mean physically? What does adding masses mean physically?
Just because you can add two numbers together doesn’t mean the sum makes physical sense. If it did make physical sense then the distinction between intensive and extensive properties of matter wouldn’t exist.
“What does adding temperatures mean physically?”
Are you still trying to pull this nonsense? The parts of an equation do not all have to have a physical meaning. You keep going on about the variance of temperature. How do you find that if every part needs to have a physical meaning?
To get the variance you first have to find the mean of the temperature, so by your logic that’s already physically meaningless. But then you have to square all the differences. What physical meaning can you attach to a square of a temperature difference? And what meaning can you attach to the sum of the squares of the differences?
And as has been pointed out many times – if you don’t like the sum of temperatures, you can always convert it to a problem of summing temperature times an extensive property. By definition the product of an intensive and an extensive property is extensive. You can multiply temperature by area, volume or time, and sum the extensive property. You should understand this because it’s what you do when you add degree-days.
An equation describing a physical property must have parts with a physical meaning. Does the term dimensional analysis mean anything to you?
Only a mathematician would think numbers are just numbers!
You almost have the right idea, but then you fall off the wagon!
If I have a block of Pb that is 12 grams and cut it into 4 equal pieces, what is the average mass? 12/4=3 grams. Now the temperature of the block is 80°F. Does the temperature of each smaller block equal 80/4=20°F?
That is the difference between extensive and intensive. You can’t just dismiss the difference without a physical explanation of why you can ignore the intensive property.
Essentially, averaging temperature values makes the assumption that all the conditions that determine temperature are exactly alike at all locations. Since we know that is not true, averaging introduces significant uncertainty in the result and that is on top of any measurement uncertainty.
Climate science just ignores all this and just says, hell, these are just numbers, let’s just average them willy-nilly and see what we get!
Number is numbers!
“An equation describing a physical property must have parts with a physical meaning.”
You’re just reasserting what I deny. We could be here all day doing that.
“Does the term dimensional analysis mean anything to you?”
Could you explain why adding temperatures fails dimensional analysis. The dimension of a temperature is Θ. The sum of any number of temperatures is still Θ. Dividing by a number still leaves you with Θ.
“Only a mathematician would think numbers are just numbers!”
I think you underestimate the intelligence of the average non-mathematician. I’m sure most people understand the concept of 2 + 2 = 4.
Numbers can be “just” numbers, and they can also represent real things. The beauty of maths is it’s possible to switch between the two.
“You almost have the right idea, but then you fall off the wagon!”
You completely ignored the points. If you think averages of temperatures cannot have any physical meaning then how can you talk about the variance of temperatures? If every part of an equation has to make physical sense, then what physical meaning do you think the square of a temperature has?
Do you think variance has a physical meaning for intensive properties, and if not do you still accept it has a meaning?
“Essentially, averaging temperature values makes the assumption that all the conditions that determine temperature are exactly alike at all locations.”
No it does not. As always you seem to be blinded by your desire for everything to have a single physical meaning. You never accept that something can be useful even if it does not represent an exact physical property.
Do you think the average May temperature calculated in TN1900 had a physical meaning? If so what? Do you think every day had exactly the same conditions?
“Could you explain why adding temperatures fails dimensional analysis.”
Now you are being a troll.
Bellman, we are discussing measurements. Measurements are done in SI units. If there are non-dimensional constants (π), they are always applied to a dimensioned measurement.
Your argument makes little sense and is no more than a word salad with no meaning.
If the factors that determine a given temperature are not similar, then averaging them into a single number makes no sense at all. Why do you think the GUM, Dr. Taylor, NIST, etc., emphasize measuring the SAME THING? You are so far afield you don’t even know what kind of ball park you are in.
“Now you are being a troll.”
No. Genuine question. I may not understand dimensional analysis as well as you.
But nothing you write explains why you think dimensional analysis means you cannot add temperatures.
“If the factors that determine a given temperature are not similar, then averaging them into a single number makes no sense at all.”
You’re just asserting this rather than explaining why you think it makes no sense. Though you have changed from insisting all the factors have to be “exactly alike” to they have to be similar. How similar are the factors in that single weather station in TN 1900? Why would you get such a range of temperatures if all the factors were similar?
But you are also digressing from the point, which was the claim that intensive properties cannot be averaged. Nothing to do with similar conditions.
And you’ve ignored all my questions about variance. Do you think it’s possible to have a variance of temperatures?
“No. Genuine question. I may not understand dimensional analysis as well as you.
But nothing you write explains why you think dimensional analysis means you cannot add temperatures.”
Your reading comprehension skills are showing again. Dimensional analysis is a clue that the variables in a functional physical relationship have to have a physical meaning.
The problem with adding temperatures is that temperature is an intensive property – meaning that creating a pseudo-functional relationship (i.e. an average temperature) that requires adding temperatures doesn’t work – the addition of temperatures has no physical meaning so the pseudo-functional relationship has no physical meaning either.
“You’re just asserting this rather than explaining why you think it makes no sense.”
It’s the old adage of apples and oranges. What does the average diameter calculated from a mixed data set of apple and orange diameters tell you about either apples or oranges? The average is just a statistical descriptor of the data set you get from jamming the unlike things together – and has no physical meaning at all. Calculating the average is nothing more than mathematical masturbation.
Temperatures derived from different conditions are exactly like apples and oranges. The average has no real physical meaning. It’s based on an unphysical assumption that the global temperatures are all part of a common gradient field and that if the temperature here is X then the temperature at Y must be somehow related to X based on a linear gradient. And “infilling” and “homogenization” don’t even purport to follow the gradient assumption; those practices just assume that if the temp here is A then the temp there is A as well!
The old IPCC stricture of “The climate system is a coupled non-linear chaotic system, and therefore the long-term prediction of future climate states is not possible” is abandoned when trying to define global averages.
“Why would you get such a range of temperatures if all the factors were similar?”
Every single time you mention TN1900 you *need* to list out all the assumptions made in that document so you can try to reach an understanding of what it is doing and teaching. It’s obvious that the only understanding you get from the document is “you can average temperatures”.
“But you are also digressing from the point, which was the claim that intensive properties cannot be averaged. Nothing to do with similar conditions.”
You just further demonstrated your lack of understanding of intensive vs extensive properties.
“And you’ve ignored all my questions about variance. Do you think it’s possible to have a variance of temperatures?”
Local temperatures can vary. Global temperature DATA has variance. Local temperatures vary based on a functional relationship. Global temperature data has variance based on the shape of the data distribution. The two are *not* the same thing. One is a physical realization and the other is a statistical descriptor of data derived from different things.
“Are you still trying to pull this nonsense? “
It’s not nonsense to anyone trained in physical science. It’s quite obvious that you didn’t answer my question because you *can’t*.
“The parts of an equation do not all have to have a physical meaning.”
Every part of a function involving reality has to have a physical meaning. You didn’t offer up one single example to refute that.
“To get the variance you first have to find the mean of the temperature, so by your logic that’s already physically meaningless.”
YEP!
“What physical meaning can you attach to a square of a temperature difference?”
The word “difference” indicates a sum is being done. You can’t sum intensive properties.
“And as has been pointed out many times – if you don’t like the sum of temperatures, you can always convert it to a problem of summing temperature times an extensive property”
Yep, and those multipliers are things like humidity and pressure – and result in a calculation of enthalpy. You didn’t even know about enthalpy until it was explained to you here on WUWT! Enthalpy *is* an extensive property.
It’s why climate science should be using ENTHALPY instead of temperature to assess the state of the Earth’s biosphere. You *can* average enthalpy.
Numbers is numbers is a meme for statisticians and mathematicians who have not a care for the real world. That seems to include you.
“It’s quite obvious that you didn’t answer my question because you *can’t*.”
You didn’t ask me any questions. If you mean the ones you asked ToldYouSo – I was answering them by pointing out that just because part of the equation has no physical meaning does not mean the equation cannot be done, or the result is meaningless.
“If you add 76F and 58F together what do you get? 76F? 58F? 130F?”
Sorry, but F is not an SI unit, so as Jim said these are not measurements. But regardless, if you add 76F and 58F you get 130F, which is meaningless. If you divide by 2 you get 65F, which is the average of your two measurements and has meaning. If you don’t think it has meaning you will have to explain why the GUM and TN1900 do it. Or explain how example 8.5 from Taylor works. It involves the sum of 5 different temperatures. Taylor spells this out as ΣT = 260; that 260 is a meaningless value, yet the regression derived from it has meaning.
“ I was answering them by pointing out that just because part of the equation has no physical meaning does not mean the equation cannot be done, or the result is meaningless.”
An equation that has no physical meaning is what? Mathematical masturbation?
If part of the so-called “functional” relationship has no physical meaning then the functional relationship has no meaning either.
“ 65F, which is the average of your two measurements and has meaning.”
And exactly what “meaning” does it have? It is a statistical descriptor. What is it describing? If 130F is meaningless then 130F/2 is meaningless as well. You can’t add meaning by multiplying by a constant. Meaningless x 2 = meaningless.
“If you don’t think it has meaning you will have to explain why the GUM and TN1900 do it.”
You STILL have no understanding of TN1900 at all. You are as dense as an anvil. The assumptions in TN1900 are meant to make the temperature data into measurements of the SAME thing taken at the same time, i.e. under repeatable conditions. You are *not* averaging intrinsic values of different things to come up with an average in TN1900. You are trying to come up with an accurate-as-possible value for the intrinsic property of a single physical object, i.e. Tmax. You *can* measure intrinsic properties. You can’t average intrinsic properties of different objects!
Still avoiding the question. Do you consider the variance of temperatures to be meaningful?
“The assumptions in TN1900 are meant to make the temperature data into measurements of the SAME thing taken at the same time”
“You *can* measure intrinsic properties. You can’t average intrinsic properties of different objects!”
And now you are just contradicting yourself. First you insist that adding temperatures is physically meaningless, and therefore any average of temperatures is meaningless. But then argue that it is meaningful if the values are of the “same thing”. How? What physical meaning do you attach to adding multiple readings of the same thing? If you measure the temperature of something as 20°C, then measure it again as 21°C, what physical meaning do you attach to the sum of 41°C? How is that any more meaningful than adding the temperature of two different things?
As to TN1900, if you want to claim that the thing you are measuring across 31 days is “the same thing”, you have to understand that that thing is not a temperature, it’s the mean of a probability distribution of temperatures. And if you had the ability, you might also realize that this is no different to measuring the mean of the probability distribution of global temperatures.
“Do you consider the variance of temperatures to be meaningful?”
Temperature doesn’t have variance. Variance is a statistical descriptor of a data set. Temperature is a functional relationship, not a data set. Your question is ill-posed – as usual.
“And now you are just contradicting yourself. First you insist that adding temperatures is physically meaningless, and therefore any average of temperatures is meaningless. But then argue that it is meaningful if the values are of the “same thing””
In the first case you are averaging the intensive properties of different things. In the second case you are averaging measurements of an intensive property. They are *NOT* the same thing at all!
An object has an intensive property known as temperature. In order to accurately quantify the value of that intensive property you make multiple measurements of the object. Averaging the measurements is *NOT* averaging the intrinsic properties of multiple things.
Stop trolling. A six year old would understand the difference here.
” What physical meaning do you attach to adding multiple readings of the same thing?”
That measurements have a physical measurement uncertainty that has nothing to do with the actual intrinsic property of the object being measured!
“If you measure the temperature of something as 20°C, then measure it again as 21°C, what physical meaning do you attach to the sum of 41°C? How is that any more meaningful than adding the temperature of two different things?” (bolding mine, tpg)
The operative word here is “measure” although you can’t seem to recognize that. Averaging measurements is not the same thing as averaging intrinsic properties.
“You have to understand that that thing is not a temperature, it’s the mean of a probability distribution of temperatures”
The assumptions Possolo makes turn the MEASUREMENTS into a probability distribution of MEASUREMENTS, not into a probability distribution of intrinsic properties.
Again, a six year old could understand this, why can’t you?
From the GUM:
In TN 1900, “q” is a random variable holding values of “monthly_average_of_Tmax”. It has 22 measured quantities contained in it. The readings have been obtained under the same conditions of measurement, in this case, reproducibility conditions. If you check NIST’s Engineering Statistics Handbook, you will see that measurements on successive days satisfy the uncertainty of reproducibility conditions.
Continuing from the GUM Section 4.2.1
There is only one input quantity in this random variable, Tmax, so i=1 and k=22.
From the GUM:
4.2.2 The individual observations qk differ in value because of random variations in the influence quantities, or random effects (see 3.2.2). The experimental variance of the observations, which estimates the variance σ² of the probability distribution of q, is given by s²(qk) = [1/(n−1)] Σ (qj − q̄)², with the sum running over j = 1 to n.
Random variations in the influence quantities is what measurement uncertainty is all about.
Section 4.2.2 goes on.
From the GUM
If the stated value of the random variable is the mean, i.e., q̅, then s²(qk) is the dispersion of measurements about the mean q̅.
If you have a different section of the GUM you would like to use as a reference, feel free to post it along with your interpretation.
Yet more endless quoting from the GUM. And all such a distraction from the point. If you claim that averaging temperatures is impossible because the sum of temperatures is physically meaningless, how can you then turn round and say it’s fine to do it if the measurements are of the same thing? Everything here is a desperate attempt to distract from that.
“In TN 1900, “q” is a random variable holding values of “monthly_average_of_Tmax”.”
Meaningless nonsense. q is not a random variable of monthly_average_of_Tmax. There is only one monthly average of TMax, and that is not a random variable. q, in Ex2 is the random variable from which daily maximum values are taken. The qk are the 22 daily values, individual measurements taken from the assumed normal probability distribution with mean t.
“The readings have been obtained under the same conditions of measurement, in this case, reproducibility conditions.”
Ex2 never claims these are “reproducibility conditions”. If I’ve missed something please quote where it does make such a claim.
“Random variations in the influence quantities is what measurement uncertainty is all about.”
Usually you’re the ones going on about all the other types of measurement uncertainty, such as uncertainty in the definition, uncertainty in adjustments for systematic errors, etc.
“If the stated value of the random variable is the mean, i.e., q̅, then s²(k) is the dispersion of measurements about the mean q̅.”
So this is where we end up? Your continued inability to distinguish between the uncertainty of an individual measurement and the uncertainty of the mean – which is really odd, given that TN1900 Ex 2 is telling you exactly how they calculate the uncertainty of the mean.
“If you have a different section of the GUM you would like to use as a reference, feel free to post it along with your interpretation.”
Section 4.2.3, the one immediately after 4.2.2. The section specifically mentioned by Ex2. (along with 4.4.3 & G.3.2). I’m not going to post the whole section and then edit all the symbols, when you obviously have the document to hand. But it’s the section that starts by saying the “best estimate of … the uncertainty of the mean”, and gives the equation as dividing the variance of qk, by N. And of course this is the same as dividing the standard deviation of qk by √N to get the standard error of the mean, or as they call it the experimental standard deviation of the mean.
4.4.3 is just the example of this involving getting the mean of 20 observations of temperature and dividing their standard deviation by √20 to get the standard uncertainty of the mean.
G.3.2. is the part where they tell you to use a student-t distribution, rather than a normal one, to get the expanded uncertainty.
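For readers trying to follow the GUM bookkeeping being argued about here, a minimal sketch of that chain of calculations (Python), using made-up daily maxima rather than the actual TN1900 values: the mean, the experimental standard deviation of the mean, and a Student-t expanded uncertainty. The numbers and the 95 % coverage factor are illustrative assumptions, not a reproduction of Example 2:

import math
import statistics

tmax_c = [22.1, 24.3, 21.8, 25.0, 23.6, 22.9, 24.1, 23.3]  # hypothetical daily maxima, deg C
n = len(tmax_c)

mean = statistics.mean(tmax_c)
s = statistics.stdev(tmax_c)       # experimental standard deviation of the observations
s_mean = s / math.sqrt(n)          # experimental standard deviation of the mean

k = 2.36                           # Student-t factor for n-1 = 7 degrees of freedom, ~95 % coverage
print(mean, s_mean, k * s_mean)    # mean, standard uncertainty of the mean, expanded uncertainty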
“Yet more endless quoting from the GUM.”
The GUM is a recognized source. YOU are not. You can’t even tell the difference between averaging multiple measurements of the same thing from averaging multiple measurements taken from *different* things.
“how can you then turn round and say it’s fine to do it if the measurements are of the same thing?”
Because one is multiple measurements of a single instance of an intensive property while the other is averaging different single instances of an intensive property.
Averaging the different measurements is *NOT* averaging different instances of an intensive property.
Why is this so hard for you to figure out?
“Meaningless nonsense.”
Only to *YOU*. Primarily because you refuse every single time to list out the assumptions made in TN1900 and the implications of those assumptions!
“Ex2 never claims these are “reproducibility conditions”. If I’ve missed something please quote where it does make such a claim.”
Of course it does! Again, you always refuse to list out the assumptions in TN1900 and to understand their implications. For example: “The daily maximum temperature τ in the month of May, 2012, in this Stevenson shelter, may be defined as the mean of the thirty-one true daily maxima of that month in that shelter.” This certainly implies “reproducibility” in measurements of “Tmax”.
“So this is where we end up? Your continued inability to distinguish between the uncertainty of an individual measurement and the uncertainty of the mean”
The typical use of the term “uncertainty of the mean” is as a measure of sampling error. This has *nothing* to do with uncertainty of measurement. It only has to do with how precisely the average of the data can be determined, which is totally separate from how accurate the average of the measurement data is.
Once again, you need to start using the term “standard deviation of the sample means” instead of “uncertainty of the mean”. Your use of the term “uncertainty of the mean” is nothing more than a lever to use the argumentative fallacy of Equivocation so you can use whatever definition you need for “uncertainty of the mean”. At the very least use the terms “sampling error of the population mean” and “measurement uncertainty of the mean”. They are *NOT* the same thing.
This so true, despite these jokers having been told the truth many, many times.
From the GUM
“q” is a random variable containing daily measurements of Tmax, i.e., qₖ. The mean μ(q) is the monthly average stated value. You apparently don’t want to take the time to understand anything.
You don’t even understand the difference between the experimental standard deviation of the mean and standard deviation.
From the GUM
Read these closely. The experimental SD is the dispersion of measurement results about the mean q̅. The experimental SD of the mean is the uncertainty of the mean.
From the GUM
B.2.18 uncertainty (of measurement) parameter, associated with the result of a measurement, that characterizes the dispersion of the values that could reasonably be attributed to the measurand
Does the word dispersion appear in both types of SD? I wonder why not?
If you don’t like the GUM, then find your own metrology resource to quote from. There are a number of them on line.
I’m not sure you know what you are arguing with now. You say
That’s what I’ve been telling you, so why do you keep arguing that the standard deviation is the uncertainty of the mean?
“If you don’t like the GUM”
I like the GUM. The GUM isn’t the problem. It’s the fact that you just needlessly cut and paste from it at length, whilst never seeming to understand what it says. Mindless copying is not as useful as saying what it says in your own words.
“Does the word dispersion appear in both types of SD? I wonder why not?”
A standard deviation is by definition a dispersion.
https://en.wikipedia.org/wiki/Statistical_dispersion
“YEP! ”
So when people complain that UAH doesn’t publish variance or standard deviations, we can just say that would be physically meaningless.
“The word “difference” indicates a sum is being done. You can’t sum intensive properties.”
So now you are saying the difference between two temperatures is physically meaningless? Presumably that makes the Celsius and Fahrenheit scales meaningless, seeing as they are based on the difference of two temperatures.
“So when people complain that UAH doesn’t publish variance or standard deviations, we can just say that would be physically meaningless.”
If you can’t calculate one statistical descriptor then what makes you think you can calculate others? If the mean is meaningless then how can the standard deviation be meaningful? The standard deviation surrounds the mean. If you don’t have a mean then how do you locate the standard deviation interval?
“So now you are saying the difference between two temperatures is physically meaningless?”
Two different temps of the SAME THING? Yes, that is physically meaningful. It’s how you calculate heat loss/gain. The difference in temp of two DIFFERENT THINGS? How would that let you calculate heat loss/gain for anything?
If you have a 1/4″ aluminum rod at X degF and a 1/4″ copper rod at Y degF what does the difference in temp actually tell you that is physically meaningful? It might tell you that one will oxidize your skin and the other will freeze your skin but how does the difference in temp let you calculate anything associated with heat?
You refuse to leave your statistical world and learn about the real world. Location A can be at X degF and Location B can be at Y degF yet both can have the same enthalpy. When the CAGW alarmists say “the earth is warming” that implies a gain in enthalpy, i.e. a gain in heat content. Climate science assumes higher temps mean higher enthalpy but you can’t tell that from temperature alone!
“If you can’t calculate one statistical descriptor then what makes you think you can calculate others?”
Well done. That’s the point I’m making. I just wanted to be clear you accept the logic of “a meaningful equation cannot have physically meaningless components”.
So you do claim that the standard deviation of temperatures is meaningless. And hopefully you will remember this the next time someone insists that all temperature records need to report variance.
But there are still many issues with your ban on “physically meaningless” components. Do you allow the standard deviation of a set of time records? If so what physical meaning do you attach to squared time? The same with just about any type of measurement. You can’t calculate the standard deviation without squaring the values, and those squares are unlikely to have any physical meaning.
“Two different temps of the SAME THING? Yes, that is physically meaningful. ”
I was asking about your statement that “The word “difference” indicates a sum is being done. You can’t sum intensive properties.“. You keep falling back on this special pleading that temperatures are not intensive if they are the same thing. You never justify this claim.
The logic of what you are saying is that you can say that a bar that changes from 20°C to 30°C has warmed by 10°C. But you cannot say that a bar with a temperature of 30°C is 10°C warmer than one with a temperature of 20°C. You really need to explain why this makes sense to you, rather than just asserting it makes sense.
“The difference in temp of two DIFFERENT THINGS? How would that let you calculate heat loss/gain for anything?”
You keep doing this. Thinking up one reason why you might want to know something, and assuming that must therefore be the only reason.
I don’t have your expertise on thermodynamics, but surely knowing the difference in temperatures between different things is part of understanding heat transfer between them. Heat flows from a hotter body to a cooler one. Here’s the first equation I found online for the rate of heat transfer
Q = [K ∙ A ∙ (Thot – Tcold)] / d
Is that equation meaningless because it involves Thot – Tcold?
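Purely as an illustration of that formula, here is a minimal sketch with assumed round numbers (Python); the conductivity, rod dimensions, and temperatures are made up for the example, not claims about any particular rod:

# Q = K * A * (T_hot - T_cold) / d : steady-state conduction along a rod
K = 200.0      # assumed thermal conductivity, W/(m*K), roughly aluminum
A = 3.2e-5     # cross-section of a 1/4-inch rod, m^2 (~ pi * (0.003175 m)^2)
d = 0.5        # assumed length over which the temperature difference acts, m
T_hot, T_cold = 80.0, 20.0
Q = K * A * (T_hot - T_cold) / d
print(Q)       # heat flow in watts (about 0.77 W with these numbers)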
You are joking right? Hold one rod in your left hand using an insulated glove and the other rod in your right hand using an insulated glove. Is the heat transfer equation useful? Is the average of the two temperatures useful or even a real thing?
“You are joking right?”
You could try answering the question.
“Hold one rod in your left hand using an insulated glove and the other rod in your right hand using an insulated glove.”
What has that got to do with conduction between the rods?
“Is the heat transfer equation useful?”
I would assume it is – as I say, I don’t have your “expertise” in thermodynamics.
“Is the average of the two temperatures useful or even a real thing?”
Why didn’t you answer this question?
“I would assume it is”
Which is apparently why you can justify using the temperature of a parcel of air in Topeka, KS and a separate one in Nome, AK to get an “average” temperature for the globe.
km whine: you are in no position to demand I answer your questions.
The average of two temperatures can be useful – it depends on what you want to use it for. An average of lots of things is more likely to be useful. Whether it’s a real thing depends on your philosophical outlook.
“Which is apparently why you can justify using the temperature of a parcel of air in Topeka, KS and a separate one in Nome, AK to get an “average” temperature for the globe.”
Why would you want to use just two measurements, from the same small part of the world, to estimate a global average?
Fail, your bait is stale.
/plonk/
I’m beginning to suspect km doesn’t like me.
“The average of two temperatures can be useful – it depends on what you want to use it for. An average of lots of things is more likely to be useful. Whether it’s a real thing depends on your philosophical outlook.”
Of what use is the average of the intensive property of two different objects?
“Why would you want to use just two measurements, from the same small part of the world, to estimate a global average?”
If averaging two independent intensive properties doesn’t work then averaging 10,000 won’t either.
He won’t answer.
“So you do claim that the standard deviation of temperatures is meaningless.”
Read what you just said! It’s what we’ve been trying to tell you for over two years. You can’t average intensive properties. That is *NOT* the same thing as averaging measurements of a single object to determine the best estimate of a specific intensive property of that single object. E.g. take ten measurements of the temperature of a steel rod and then average the measurements to get the best estimate of the temperature of the rod.
“Do you allow the standard deviation of a set of time records?”
Time is not an intensive value. You *can* add time. You can add the time you take to travel from Point-A to Point-B to the amount of time it takes for you to then travel from Point-B to Point-C in order to find the total time taken to go from Point-A to Point-C.
You *still* don’t understand what standard deviation is. It’s not just a calculation you make of a data set. Take the temperatures recorded by a temp measurement station over the period of a day. You digitize the temperatures and put them into a data set. Exactly what do you think the standard deviation of that data set tells you?
“The logic of what you are saying is that you can say that a bar that changes from 20°C to 30°C has warmed by 10°C. But you cannot say that a bar with a temperature of 30°C is 10°C warmer than one with a temperature of 20°C.”
Your lack of reading skills is showing again. That is *NOT* what I said at all!
I spoke of a 1/4″ aluminum rod and a 1/4″ copper rod, not a “bar” (i.e. singular)!
Neither can you equate the amount of heat it takes to move an aluminum rod 10 degF with the amount of heat it takes to move a copper rod 10 degF. The same thing applies to surface and atmospheric temps. The amount of heat it takes to move a cubic foot of air up 10 degF in Topeka, KS is *NOT* the same as the amount of heat needed to move a cubic foot of air up 10 degF in Mexico City. But climate science assumes that it is! And so do you!
“but surely knowing the difference in temperatures between different things is part of understanding heat transfer between them.”
Who said there was any heat transfer *between* them?
“Is that equation meaningless because it involves Thot – Tcold”
Just stop. You are making a fool of yourself. Your equation is for conduction of heat. You apparently don’t even realize what “K” is. Conduction occurs through a medium, such as from the end of a rod being heated to the far end, or from a hot rod clamped to a cold rod (which also then requires finding the area of contact). And on and on.
That has *nothing* to do with temperature being an intensive property! How much *conduction* do you suppose there is between a cubic foot of atmosphere in Topeka and a cubic foot of atmosphere in Boston?
“Time is not an intensive value.”
Still avoiding the point. You claim that every part of an equation has to have a physical meaning or else the equation is meaningless. I’m asking if you think time squared is physically meaningful.
“You *still* don’t understand what standard deviation is.”
It’s so obvious you know you’ve lost the argument when you have to resort to these sniveling insults.
“Your lack of reading skills is showing again. That is *NOT* what I said at all!”
It’s the implication of what you are saying. Nothing you say contradicts it.
“I spoke of a 1/4″ aluminum rod and a 1/4″ copper rod, not a “bar” (i.e. singular)!”
Yes, I’m sure the difference between bars and rods makes all the difference. And I spoke of bars plural – the difference between one bar and another.
“Neither can you equate the amount of heat it takes to move an aluminum rod 10 degF with the amount of heat it takes to move a copper rod 10 degF.”
Classic Gorman evasion. The question was about the difference in temperatures, not how much heat it took to get to that temperature.
“Who said there was any heat transfer *between* them?”
I did. You said the difference in temperatures was meaningless – I gave an example where it would have meaning.
“You apparently don’t even realize what “K” is.”
But nice evasion.
“Conduction occurs through a medium such as from the end of a rod being heated to the far end, or from a hot rod to cold rod clamped to a cold rod (which also then requires finding the area of contact)”
Yes, there are a number of variables. None of which have anything to do with the fact that you need (Thot – Tcold) in the equation. And by your logic the equation is meaningless if Thot – Tcold has no physical meaning.
“That has *nothing* to do with temperature being an intensive property!”
And more distractions. The question is whether the difference in temperatures has a physical meaning.
“How much *conduction* do you suppose there is between a cubic foot of atmosphere in Topeka and a cubic foot of atmosphere in Boston?”
And to end we have a red-herring so large it can be seen from space.
The minute you put two bars in contact then they become “an object”, i.e. singular. When the bar is in equilibrium it has ONE intensive property of temperature, not multiple ones. If the bar is *NOT* in equilibrium then each point you measure becomes a separate object. You can’t find an *average* temperature of the two bars in contact by averaging the temperature of the multiple points. That average is not physical, you can’t use it to calculate anything. The amount of heat conducted is based on the temperature difference between adjacent minute portions of the bar, not on the average temperature of the bar as a whole.
You just keep on showing how little you understand physical science.
The only red herring here is *YOU* trying to use heat conduction in an object to prove that you can average the intensive property of temperature to create an average temperature for a parcel of air in Topeka, KS and a different parcel of air in Nome, AK.
“The minute you put two bars in contact then they become “an object”, i.e. singular.”
I worry about your spine, given the contortions you put yourself through.
So do you think the world is “an object”, and can it have an average temperature?
“The only red herring here is *YOU* trying to use heat conduction in an object to prove that you can average the intensive property of temperature…”
More distractions. This wasn’t about averaging, but subtracting.
“I worry about your spine, given the contortions you put yourself through.”
The problem is that you simply have no understanding of physical science at all, NONE.
Conduction requires direct heat transfer through a slice of a medium. It is calculated normal to that slice, i.e. dA. If there is no interface between two objects then heat transfer is via convection and/or radiation. The coefficient for convection is different than that for conduction.
“More distractions. This wasn’t about averaging, but subtracting”
No, it’s about whether you can average intensive properties. It was *YOU* that are trying to distract by talking about conduction which has nothing to do with averaging intensive properties.
Ditto bwx and his force-acceleration “model”. He also thinks curve fitting gives you a model.
Just because it is possible to stuff any set of numbers into the mean formula does not make it a meaningful exercise.
Numbers is numbers!
Who knew?
If I have two marbles and Johnny gives me three marbles, I can reasonably and practically conclude that I end up with 2+3 = 5 marbles. That makes physical sense to me . . . much more so than discussing the average number of marbles resulting from Johnny’s gift.
Not a good analogy. Marbles are counting numbers with no uncertainty.
A better analogy would be to smash 10 marbles into thousands of pieces and distribute equal weights to 5 different people. What is the uncertainty of weights that each person receives?
Huh??? . . . and all along I really and truly thought that marbles were physical objects, totally unlike numbers.
BTW, I think it can be said that the irrational number “pi” will always have some uncertainty no matter how many numerals are used to quantify it . . . IOW, there are numbers in use today that DO have uncertainty.
And to further confound you, I’ll just point out that the number called “pi” has physical meaning since it represents the ratio of a circle’s circumference to its diameter, a geometric property with real-world applications.
I literally lost all my marbles a long time ago, so I have none left to smash and thus remain uncertain about THAT analogy. 🙂
Doesn’t confound me at all. Here is an Instagram link showing this.
https://www.instagram.com/reel/Ckfk2IIPL3e/?igsh=MWo1bmlvc2l4YmcwbA==
“I’ll just point out that the number called “pi” has physical meaning since it represents the ratio of a circle’s circumference to its diameter, a geometric property with real-world applications.”
The problem is that pi, as the irrational mathematical abstraction, isn’t what has real-world applications. For the real world you only need a rough approximation of pi.
A mere 30 or so decimal places of pi is enough to calculate the circumference of a circle the size of the galaxy to the nearest nanometer.
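A rough back-of-the-envelope check of that claim (Python), taking the galaxy’s diameter as roughly 1e21 m, which is an assumption for illustration:

diameter_m = 1e21   # order of magnitude of the Milky Way's diameter (~100,000 light years)
pi_error = 1e-30    # truncating pi after 30 decimal places leaves an error smaller than this
print(diameter_m * pi_error)  # circumference error ~1e-9 m, i.e. about a nanometer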
“[counting] Marbles are counting numbers with no uncertainty.”
Either your reading comprehension is lacking or you are playing semantic games.
An average has NO physical significance. Period.
A mean (average) only has meaning if the variance it is calculated from is also quoted. A measurement is only valid if the model, derivation, degrees of freedom, etc. are also stated.
You really ought to take this up with Spencer, although his blog is having problems at the moment.
If you want the weighted variance for anomalies based on the grid data, I make it 0.69 K² for October.
For temperature it’s 116.7 K².
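For anyone curious how a figure like that might be produced, here is a minimal sketch of an area-weighted mean and variance over a latitude-longitude grid (Python), using cos(latitude) weights and made-up anomaly values; the grid, the numbers, and the weighting scheme are illustrative assumptions, not the actual UAH gridding or processing:

import math

# Illustrative anomalies (deg C) on a tiny 3-latitude by 4-longitude grid.
grid = {-45.0: [0.3, 0.5, 0.4, 0.6],
          0.0: [0.8, 0.9, 0.7, 1.0],
         45.0: [0.6, 0.2, 0.5, 0.3]}

weighted_sum = 0.0
weight_total = 0.0
for lat, row in grid.items():
    w = math.cos(math.radians(lat))  # area weight for this latitude band
    for v in row:
        weighted_sum += w * v
        weight_total += w
mean = weighted_sum / weight_total

var = sum(math.cos(math.radians(lat)) * (v - mean) ** 2
          for lat, row in grid.items() for v in row) / weight_total
print(mean, var)  # weighted mean (deg C) and weighted variance (K^2)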
I was getting a 403 error trying to post. I came in from a different IP address and I was able to get a post in. I switched back to the original IP address and I’m getting a 403 error again. It is acting like I’ve been very recently IP banned which is obviously weird since I’m not actually banned.
Yes, it seems to come and go. I’ll post a comment with no problem, then the next one keeps throwing up 403 errors. But a few minutes later I can post. Either it’s a problem with his hosting, or it’s deliberately limiting the rate of comments.
I’m still getting a 403 error. Maybe he really did ban me yesterday.
I’m getting it again today. I doubt that it’s a ban though.
√0.69 = 0.83. That’s a pretty large uncertainty, especially when trying to pass off knowing temperatures to a millikelvin value. Do you understand what uncertainty in measurement really means?
It’s not an uncertainty. It’s just a measure of how much the anomaly varies across the globe. If you just based a statement on one random part of the globe, it would be the uncertainty as to how much that one value represented the global average.
But UAH is not based on just one random part of the world.
“It’s just a measure of how much the anomaly varies across the globe.”
Variance (i.e. varies across the globe) is a direct metric for uncertainty. The larger the variance the less certain the “average” is. The less certain a value is the higher its uncertainty is.
“it would be the uncertainty as to how much that that one value represented the global average.”
That is the direct definition of “measurement uncertainty”!
“The larger the variance the less certain the “average” is.”
Yes, in general. If this was a random sample of anomalies across the globe (it isn’t) then the uncertainty of the mean would be standard deviation divided by root N. The larger the variance in the data the less certainty in the mean – but that does not mean the variance in the data is the uncertainty of the mean.
“That is the direct definition of “measurement uncertainty”!”
Read what I said. If you based the global value on just one observation taken at a random place on the globe – then the global variance would be the uncertainty of the global value.
“Yes, in general.”
As far as my library is concerned the average of a distribution with a higher variance is *always* less certain than one with a lower variance.
“ random sample of anomalies”
” the uncertainty of the mean would be standard deviation divided by root N.”
The uncertainty of the mean, i.e. the standard deviation of the sample means, is *NOT* the uncertainty of the value of the average. It is a metric for sampling error and *NOT* for accuracy. The sampling error represented by the standard deviation of the sample means is an ADDITIONAL factor which increases the uncertainty of the value of average.
You remain stuck in that idiotic meme that all measurement uncertainty is random, Gaussian, and cancels. It is only then that the standard deviation of the sample means represents the uncertainty of the value of the mean.
BTW, “A” (as in singular) sample of temperatures does not have a standard deviation of the sample meanS. Calculating a standard error, i.e. SD/sqrt(n), from a single sample *requires* one to assume that the SD of the single sample is equal to the SD of the population. That is an assumption that must be justified on a case-by-case basis. I’ve never seen that assumption justified for global temperatures or anomalies.
You just keep beating the same old dead horse – all measurement uncertainty is random, Gaussian, and cancels. You don’t even recognize any more when you are using the meme.
“As far as my library is concerned the average of a distribution with a higher variance is *always* less certain than one with a lower variance.”
Then you need a better library.
“The uncertainty of the mean, i.e. the standard deviation of the sample means, is *NOT* the uncertainty of the value of the average.”
The uncertainty of the mean is not the uncertainty of the value of the average? Is that what your library says?
“It is a metric for sampling error and *NOT* for accuracy”
I was responding to your claim that:
“Variance (i.e. varies across the globe) is a direct metric for uncertainty.”
Variance will not tell you anything about the accuracy of your measurements, any more than the standard error of the mean will. If there’s a systematic error in all your measurements it will be invisible to the variance and the SEM.
“BTW, “A” (as in singular) sample of temperatures does not have a standard deviation of the sample meanS”
You are the only people who think there is an S at the end of mean. The standard error of the mean, the experimental standard deviation of the mean, or SDOM, or whatever you want to call it, is an expression describing the probability distribution from which your single mean has been drawn. You usually estimate this from the distribution of the individual observations in the single sample. This has been explained to you countless times, by me, and I expect by every book in your library.
“…from a single sample *requires* one to assume that the SD of the single sample is equal to the SD of the population.”
You do not assume that. You assume it’s an imperfect estimate of the population SD, which becomes better as sample size increases.
“all measurement uncertainty is random, Gaussian, and cancels”
Stop lying.
It is what it says. From:
Standard Error of the Mean (SEM) – Statistics By Jim
Read these carefully. A sampling distribution of means is developed by taking many samples and calculating the mean value of each sample.
When discussing a monthly average at a station, you have two choices:
30 samples (each of size 1) allows for determining the standard deviation with SEM = σ/1. In other words, the standard deviation of the samples is the measurement uncertainty.
A single sample of size 30, does not allow the development of a sample means distribution. Therefore, there is no SEM. You can not infer anything about any population property.
Here is another reference from:
Sampling Distribution: Definition, Formula & Examples – Statistics By Jim
Bottom line – when you calculate the SD (σ) and divide that by the √n, all you are doing is ESTIMATING the SEM you might get with correct sampling using multiple samples. It is not a real SEM because you have not done the sampling.
In actuality, you can not have multiple samples of size “n” when measuring temperatures. You have one reading, which is a sample size of 1.
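For concreteness, the quantities in dispute (the sample standard deviation and the s/√n estimate computed from a single sample of 30 values) amount to the following minimal sketch (Python). The daily readings are synthetic placeholders, not real station data.

import numpy as np

# Thirty synthetic daily readings for one month (deg C) -- placeholder values,
# purely to illustrate the arithmetic being argued about.
rng = np.random.default_rng(1)
daily = rng.normal(22.0, 4.0, size=30)

n = daily.size
mean = daily.mean()
s = daily.std(ddof=1)          # sample standard deviation
sem = s / np.sqrt(n)           # the s/sqrt(n) estimate under dispute
print(f"mean = {mean:.2f} C, s = {s:.2f} C, s/sqrt(n) = {sem:.2f} C")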
“It is what it says. From:”
And as usual not a single thing you quote disagrees with what I’m saying. Nor does it explain what you mean by saying “the uncertainty of the mean” is not “the uncertainty of the value of the average”. It might help if you defined what you mean by the “value of the average”. I’m assuming you mean the average value you obtained from the sample.
“A sampling distribution of means is developed from many samples and calculating the mean value of each sample”
And again, that’s how you can think of a sampling distribution. It does not mean you actually have to take an infinite number of samples. As the article says
“30 samples allows for determining the standard deviation with SEM = σ/1.”
A sample of size 1 is not much of a sample. And you cannot obtain σ/1 as you do not know the population distribution, and there is no sample standard deviation when the sample has only one observation.
“In other words, the standard deviation of the samples is the measurement uncertainty.”
Assuming you know the population distribution, it’s the measurement uncertainty of the mean of a single value – which is the same as the uncertainty of that one value.
“A single sample of size 30, does not allow the development of a sample means distribution.”
Then you will have to explain why your beloved TN1900 is wrong.
“You can not infer anything about any population property.”
Then how did you know what the uncertainty of your individual value was? If you can accept that the distribution of the 30 individual values tells you something about the population, why do you think the 30 individual values making up the size-30 sample do not tell you anything about the population?
“when you calculate the SD (σ) and divide that by the √n, all you are doing is ESTIMATING the SEM you might get with correct sampling using multiple samples.”
Yes – that’s the point.
“It is not a real SEM because you have not done the sampling.”
A sample of samples is not going to give you the real SEM unless you have infinitely many – but the point you never grasp is that if you have multiple samples, you can just combine them into a single sample with a smaller SEM. There’s no point in taking 100 different samples of size 30, just to estimate how much uncertainty there is in a single sample of size 30, when you could just take a sample of 3000. The object of sampling isn’t to find the best estimate of the SEM, it’s to get the best estimate of the mean.
“Then you need a better library.”
You need a better understanding of the theory of statistics. The wider the variance the wider the tails of the distribution, even a Gaussian distribution. The wider the tails then the more possible values exist in the standard deviation interval. As the peak of that standard deviation interval gets smaller compared to the surrounding values (i.e. the larger the variance) the less certain it becomes that the average *is* the actual average and it becomes more and more likely that a value close to the “average” is the true average.
If that is too confusing for you then think of it in terms of the standard error, SE = SD/√n. “n” is a constant for a given distribution. As SD gets larger, i.e. the variance goes up, then the SE goes up as well. The SE going up means that the sampling error associated with the “average” goes up and the value of the “average” becomes less certain.
“The uncertainty of the mean is not the uncertainty of the value of the average? Is that what your library says?”
It’s what *all* metrology texts say. The “uncertainty of the mean” is always defined as the standard error of the sample means. That is *NOT* the measurement uncertainty of the average. The measurement uncertainty of the mean is that value which is propagated from the individual measurement uncertainties.
The two are *NOT* the same. It’s why I keep telling you that you need to abandon the term “uncertainty of the mean” and actually describe what you are talking about. Either use the term “standard deviation of the sample means” or the term “measurement uncertainty”. You continue to use “uncertainty of the mean” because it is ambiguous and you can use the argumentative fallacy of Equivocation to change the definition of what you are talking about as needed in any specific context.
“Variance will not tell you anything about the accuracy of your measurements,”
I didn’t say it would. Like most CAGW advocates on here you have no basic understanding of real world terms. The term “metric” is *not* the same as the term “accuracy”. A “metric” is a value that is used for comparison or assessment of a product, measurement, process, etc. It is *not* necessarily a direct measurement of an object’s properties. E.g. the weight of product collected in a sieve can be used as a metric for how much of a product exceeds a certain length. It won’t tell you anything about the actual product pieces collected in the sieve but is a METRIC for the process. The more weight collected, the more product that does not meet requirements.
UAH can be considered a metric for something about the Earth’s biosphere but it can’t tell you anything specific about that biosphere. Thinking that UAH can tell you anything about how much a collection of intensive property measurements of different things is changing is only fooling yourself, especially when you aren’t even directly measuring that intensive property itself!
” is an expression explaining the probability distribution from which your single mean has drawn.”
Bullshit!
The theoretical definition of the SEM is σ/sqrt(n) where σ is the POPULATION STANDARD DEVIATION.
You keep wanting to substitute the approximation of s/sqrt(n) where s is the sample standard deviation. This requires assuming that the standard deviation of the sample is the same as the standard deviation of the sample. This requires JUSTIFICATION for assuming that σ = s. A justification which you NEVER, ever provide.
“You do not assume that. You assume it’s an imperfect estimate of the population SD, which becomes better as sample size increases.”
More crap being thrown against the wall. How do you judge the “imperfection” level from a single sample? And, as usual, all increasing the sample size does is decrease the interval in which the average value lies. It tells you nothing, absolutely nothing about whether that interval is accurate or not.
Slight mistake above: that should read “…the same as the standard deviation of the population”, not “…of the sample”.
Is your IQ above or below average?
An average is a STATISTICAL DESCRIPTOR. It is generally understood to describe the value that is the most common one – i.e. it is an EXPECTATION of the value to be found most often in a distribution of values.
An average is *NOT* a measurement. A group of measurements of different things using different devices is *not* a measurement of an “average”. The average is just the value you would (hopefully) find the most often.
In essence, the “global average temperature” is actually a statistical descriptor, it is the value you would expect to find most often if you travelled around the globe taking a bunch of measurements. But what does that mean? You simply don’t know if you don’t have the other required statistical descriptors typically used with a distribution of values – e.g. the variance, the skewness, and the kurtosis.
What should *really* be provided is what is known as the 5-number statistical description, the min, max, median, first quartile, third quartile. What you would *actually* find is that the temperature distribution around the earth is a bi-modal (or more probably a multi-modal) distribution – warm temps in one hemisphere juxtaposed with cold temps in the other. And the “average” value of a bi-modal distribution is almost useless for describing physical reality.
It is why unseen spurious trends are likely. The GAT is a time series made up of monthly segments being averaged multiple times. No one here has even addressed whether the time series being used is stationary or whether seasonal (think bimodality) effects have been accounted for.
It is basically 5th-grade averaging of counting numbers.
“An average is a STATISTICAL DESCRIPTOR. It is generally understood to describe the value that is the most common one”
No that’s the ‘mode’.
https://www.ncl.ac.uk/webtemplate/ask-assets/external/maths-resources/statistics/descriptive-statistics/mean-median-and-mode.html#:~:text=The%20mode%20is%20the%20most,most%20times%20is%20the%20mode.
climate science always assumes *everything* is random and Gaussian. In such a “statistical world” the mode and the mean are equal.
If you want to say climate science should stop assuming everything is random and Gaussian, including measurement uncertainty, then I would agree with you and the mode would not necessarily be equal to the average.
But then climate science would also have to show those statistical descriptors that show how the data is skewed or multi-modal, e.g. the 5-number description, instead of just the mean, because then the mode would not equal the mean.
I’m not going to hold my breath, just like I’m not going to hold my breath waiting for climate science to start weighting the data to account for different variances in warm weather vs cold weather.
We are apparently still pretty much in “ColdHouse” conditions.
We are, but your chart doesn’t show that. It just shows the correlation between CO2 and global temperature.
Except the chart does CLEARLY show we are in a ColdHouse period.
Can’t see the words down the bottom ??? Or just very DUMB.
Not only a “Coldhouse” period, but a low CO2 period.
Where in the “words down the bottom” does it mention time? The graph is breaking up the past into 5 non-consecutive temperature ranges, and showing the range of CO2 during those periods. No mention of which state we are currently in.
If you want to see that we are in a “cold house” period you need to look at the graph above, which for some reason you cropped out.
Dude, the graph is not based on time. The relationship shown is temperature versus CO2 concentration. Time has nothing to do with it.
Yes, that’s what I said. It does not tell you that we are currently in a cold house state, just that you get the lowest CO2 levels when the world is at its coldest.
OMG you are so thick..
Yes, we have very low CO2 and are in a “coldhouse” period.
Why is it so difficult for you to comprehend.
Do you have brain damage or something ??
Why is it so difficult for you to read what I said. Literally the first two words were “We are”. I keep telling you we are in a coldhouse, I’m trying to explain to you that the chart you are using is not the thing that demonstrates that.
Then stop your moronic caterwauling about warming.
It is obviously NECESSARY. !
As is a greater level of atmospheric CO2
I should say that the graph is from “A 485-million-year history of Earth’s surface temperature”, by Emily J Judd et al.
https://www.science.org/doi/10.1126/science.adk3705
Unfortunately the full paper, and that particular chart, now seems to be behind a paywall.
Thanks for showing EXACTLY what I said, dolt. !!
We are in a “coldhouse” period with low CO2.
The red line is current CO2.. time is irrelevant
“We are in a “coldhouse” period with low CO2.”
Yes we are. As I kept pointing out to you. The red line is not from the paper, it’s something you’ve added, and it’s only suggesting we are in a coolhouse because of the correlation between CO2 and temperature.
Does this mean you accept that if CO2 were to rise above 650ppm we might be in a hothouse state?
So are you saying the red line is wrong?
Or just making mindless spurious anti-logic yabberings.
At least you have now admitted that the globe is currently in a “coldhouse” condition.
This is a fact that is well established.
Only a degree or so above the coldest period in 10,000 years, and still with very low CO2 levels
So stop your moronic caterwauling about warming !!
This is obviously too complicated for you to understand. I have never said anything other than that we are in a coldhouse condition. The world has been cooling over the last 50 million or so years. This is well known, and the models in the paper you are quoting agree with that. I have no reason to doubt the paper, and it was me who showed you the chart you are using.
The very simple point, however, is that the part of the chart you clipped out is not the bit that tells you we are currently in a coldhouse condition. What it shows is the range of CO2 levels present in any given temperature range. You might infer from it that, because CO2 levels are low, we are currently in a coldhouse condition, and you would be correct. But at the current levels we could also be in a coolhouse or possibly even a warmhouse.
CO2 is not the only determinant of temperature, especially not on the scale of hundreds of millions of years. You cannot just assume that increasing CO2 levels to 500ppm will result in a global average temperature of 20°C, even though that’s what the chart would suggest.
Please reply with as much infantile screaming as you want – as far as I’m concerned you will be yelling into the void.
ROFLMAO
Another rambling diatribe of irrational nonsense.
Yes we are currently in a “coldhouse” state, only a degree above the coldest period in 10,000 years..
And yes CO2 IS at a very low level.
The graph is just another confirmation of the known current state.
There is no evidence CO2 is a determinant of temperature at all.
You cannot assume that raising CO2 levels to 500ppm would have any effect whatsoever, except enhanced plant growth..
If I yell in your ear.. then I would be yelling into a void. !
He has a special talent here.
You have to admire the determination not to see the obvious.
bnice accepts the world is at its coldest. Accepts CO2 is at its lowest point. And then asserts there is no evidence that CO2 affects temperature.
Please note carefully that “roughly” is a limiting factor in how many decimal places you can use. The quote only specifies temperatures in the units digit, not in the hundredths or thousandths digit.
And exactly *what* is “current scientific understanding” when it comes to averaging an intensive property?
During many of Earth’s past ‘hothouse’ periods, the positions of the continents were different due to plate tectonics.
Comparing the ‘global average temperature’ of those times to today’s is like comparing apples to oranges.
You’re asking Climate Cranks why you’re wrong?
What you see in the graphs of the UAH data is not the average temperature, it is the averaged anomaly. These types of datasets are used throughout the climate community and are very useful for ENSO forecasts, etc.
And these numbers are NEVER reported with a realistic uncertainty attached. And climate science ignores significant digit rules.
Here is the updated adjustment table for UAH and how each adjustment affected the overall trend.
Adjustment / Year / Version / Effect / Description / Citation
1 / 1992 / A / unknown effect / simple bias correction / Spencer & Christy 1992
2 / 1994 / B / -0.03 C/decade / linear diurnal drift / Christy et al. 1995
3 / 1997 / C / +0.03 C/decade / removal of residual annual cycle related to hot target variations / Christy et al. 1998
4 / 1998 / D / +0.10 C/decade / orbital decay / Christy et al. 2000
5 / 1998 / D / -0.07 C/decade / removal of dependence on time variations of hot target temperature / Christy et al. 2000
6 / 2003 / 5.0 / +0.008 C/decade / non-linear diurnal drift / Christy et al. 2003
7 / 2004 / 5.1 / -0.004 C/decade / data criteria acceptance / Karl et al. 2006
8 / 2005 / 5.2 / +0.035 C/decade / diurnal drift / Spencer et al. 2006
9 / 2017 / 6.0 / -0.03 C/decade / new method / Spencer et al. 2017 [open]
10 / 2024 / 6.1 / -0.01 C/decade / NOAA-19 drift / [Spencer 2024]
Given the values as reported the trend from 1979/01 to 2024/09 is 0.1509 C.decade-1 for v6.1 and 0.1579 C.decade-1 for v6.0. That is a difference of -0.007 C.decade-1.
Why did you stop at using just four decimal places for the rates?
4 digits was enough to report the difference to 1 significant figure.
Anyway, here is the full IEEE 754 breakdown. I’m not sure how much value is added here, but since you asked I’ll present it anyway.
v6.1: 0.149843723824339
v6.0: 0.157923047635624
That is a difference of exactly 0.00807932381128526 using IEEE 754 and UAH’s 3 digit file.
Mathematical idiot !!
Oops… I accidentally ran it through 2024/10 for v6.1. Through 2024/09 it is 0.1498 C.decade-1. That makes the difference -0.008 C.decade-1. Sorry about that.
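For anyone wanting to reproduce trend figures like these, here is a minimal sketch (Python). It assumes the tltglhmam file layout described elsewhere in this thread (Year, Mo, Global as the first three whitespace-separated columns), skips any non-data lines, and needs network access (or point it at a downloaded copy); treat it as illustrative rather than definitive.

import numpy as np
import urllib.request

URL = "https://www.nsstc.uah.edu/data/msu/v6.1/tlt/tltglhmam_6.1.txt"

rows = []
with urllib.request.urlopen(URL) as f:
    for line in f.read().decode().splitlines():
        parts = line.split()
        try:
            year, month, anom = int(parts[0]), int(parts[1]), float(parts[2])
        except (ValueError, IndexError):
            continue                     # skip header/trailer lines
        if 1 <= month <= 12:
            rows.append((year + (month - 0.5) / 12.0, anom))

t = np.array([r[0] for r in rows])       # decimal years
y = np.array([r[1] for r in rows])       # global anomaly (deg C)

slope_per_year = np.polyfit(t, y, 1)[0]  # OLS slope
print(f"OLS trend: {slope_per_year * 10:+.4f} C/decade")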
Well there we are.
Now I can stop wondering why nobody on the whole planet can sensibly detect a scintilla (sometimes called a ‘poofteenth’) of climatic change in their localities.
(unless of course you’re a Canadian who flees from their native climate conditions each winter to experience the different, and much comfortably warmer conditions of Florida)
Serious question…why did you ever consider that it would be possible for a person to “sensibly detect” the global average temperature in the first place?
because when we’re rummaging around aimlessly in the realms of absurdities (e.g. GAT), any associated absurdities are all grist for the mill.
Ok, that definitely helps explain the difference between you and me, because my tendency for initial belief is inversely proportional to the absurdity of the claim.
Which you are woefully unprepared to evaluate.
Because you and every CAGW advocate complain about how the HEAT, in the form of TEMPERATURE, is going to destroy the earth’s ability to support life.
Don’t try to minimize CAGW. It is why multi-trillions that we don’t have are being spent.
The “Who me?” response just isn’t going to impress anyone! If you truly think the increase in global temperature is not sensible, then global warming is a no-show.
Every adjustment for sound scientific measurement reasons…
… unlike the agenda-based anti-science adjustments of the surface temperature fakeries.
So many adjustments in WUWT’s “gold standard” global temperature database.
Can you even imagine if GISS or NOAA made an adjustment of this magnitude in their monthly update?
Anthony’s head would explode!
ROFLMAO.. GISS does make RANDOM non-science adjustments all the time.
They happen every time they run their fake homogenisation routines.
Individual site past data changes all the time.
Sorry you are too dumb to realise the difference between scientifically valid adjustments…
… and agenda-based mal-adjustments.
“Satellite calibration biases….typically tenths of a degree.”
Hmmm….that’s not good in the range of accuracy we are expecting…if the calibration bias is determined by a person or committee working with statistically derived correction factors and affected by possible cognitive bias…which could be as simple as trying to show that your work is worthy of further funding.
v6.1 changes things up in regards to trends. Below are select trends presented for both v6.0 and v6.1.
At its peak the Monckton Pause lasted 107 months starting in 2014/06. From 2014/06 to 2024/09 the trend is +0.42 C.decade-1 (v6.0) and +0.33 C.decade-1 (v6.1)
Here are some more trends with v6.0 listed first and v6.1 listed second. These are only through 2024/09 so that a like-to-like comparison can be made.
1st half: +0.14 C.decade-1, +0.14 C.decade-1
2nd half: +0.23 C.decade-1, +0.21 C.decade-1
Last 10 years: +0.41 C.decade-1, +0.32 C.decade-1
Last 15 years: +0.39 C.decade-1, +0.34 C.decade-1
Last 20 years: +0.30 C.decade-1, +0.27 C.decade-1
Last 25 years: +0.23 C.decade-1, +0.20 C.decade-1
Last 30 years: +0.17 C.decade-1, +0.16 C.decade-1
The acceleration is +0.03 C.decade-2, +0.02 C.decade-2.
Lmao… you mentioned at Spencer’s that your Type A evaluation arrived at an uncertainty estimate of 0.15C.
Yet now you’re claiming an acceleration of 0.03C.
Care to explain that? Seems like you’re just trying to stir the pot, Bdgwx.
https://www.drroyspencer.com/2024/11/uah-global-temperature-update-for-october-2024-truncation-of-the-noaa-19-satellite-record/#comment-1694118
No, I didn’t. I said it was 0.03 C.decade-2 for v6.0. Notice the units. They are important.
There’s not much to explain. It is what it is. BTW…for v6.1 the acceleration is +0.02 C.decade-2.
Both are smaller than your figure of 0.15C. Care to explain?
As I’ve already explained…
±0.15 C is the type A evaluation of monthly uncertainties from UAH as compared to RSS and STAR.
0.03 C.decade-2 is the coefficient of the x^2 term on a polynomial regression using v6.0.
0.02 C.decade-2 is the coefficient of the x^2 term on a polynomial regression using v6.1.
Metrics with units of C are different than metrics with units of C.decade-2. They cannot be compared because they are different things.
Saying 0.03 C.decade-2 is smaller than 0.15 C doesn’t even make any sense. It’s neither smaller nor larger. It would be like saying 10 m is smaller than 100 kg. Get it?
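To spell out what that x^2 coefficient is, here is a minimal sketch (Python) of a second-order fit with time expressed in decades, so the quadratic coefficient comes out in C.decade-2. The placeholder series is purely illustrative and stands in for the t (decimal years) and y (anomaly) arrays read from the UAH file.

import numpy as np

def acceleration_c_per_decade2(t_years, anomalies):
    # Quadratic coefficient of a 2nd-order polynomial fit, with time in
    # decades so the leading coefficient has units of C/decade^2.
    t_dec = (np.asarray(t_years, float) - t_years[0]) / 10.0
    c2, c1, c0 = np.polyfit(t_dec, np.asarray(anomalies, float), 2)
    return c2

# Purely illustrative placeholder series -- substitute the real UAH data.
t = 1979 + np.arange(550) / 12.0
y = (0.01 * ((t - 1979) / 10.0) ** 2 + 0.1 * (t - 1979) / 10.0
     + np.random.default_rng(2).normal(0.0, 0.2, t.size))

print(f"acceleration = {acceleration_c_per_decade2(t, y):+.3f} C/decade^2")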
Monumental stupidity and waste of time fitting a 2nd degree polynomial to data which is obviously driven by step events.
Absolutely MEANINGLESS.
All it will do is increase your ignorance… if that is even possible.
You have really explained nothing except how to curve fit a line to a time series.
Best fit metrics are not MEASUREMENTS and, therefore, are not measurement uncertainties.
“Saying 0.03 C.decade-2 is smaller than 0.15 C doesn’t even make any sense.”
Yet you and climate science keep using 0.03 C.decade-2 as some kind of uncertainty in the temperature measurements. It isn’t. It is just a metric for how well you have fit your regression line to the data, it tells you nothing about how accurate the measurement data is.
The true fact is that if the measurement uncertainty is greater than your best fit metric then you simply don’t know if the best fit metric is correct or not. The measurement uncertainty subsumes any attempt to fit a regression line to the data.
Dr. Taylor in chapter 8 covers linear regression. A linear equation is y = mx + b. “x” has no uncertainty in a time series because it is essentially a counting number. The “b” value however, has uncertainty. That means the regression line has a number of y-intercept values. The interval of the y-intercept values due to different uncertainty combinations gives quite a large total uncertainty in where the regression line should be.
Good to see you are actually reading up on what a simple linear regression is.
“The “b” value however, has uncertainty. That means the regression line has a number of y-intercept values.”
Not really. What it means is that there is a range of possible values for the intercept that would have a reasonable chance of producing the same data.
The same goes for the slope, “m”.
“The interval of the y-intercept values due to different uncertainty combinations gives quite a large total uncertainty in where the regression line should be.”
The uncertainty of slope and intercept are both given by the equations we discussed last time. And they show that the more observations you have the smaller the uncertainty in both. In addition, the uncertainty depends on the deviation of the x values. The higher the standard deviation the less uncertainty.
You cannot just say “there will be a large total uncertainty”. The size will depend on those factors.
Bull pucky. The x-axis values have no uncertainty, they are constant intervals in a time series where time is not an independent predictor of the dependent variable.
There is uncertainty in the y-axis stated values that affects both the y-intercept and the slope.
Look at Dr. Taylor’s equations 8.12, 8.16, and 8.17.
Δ = NΣx² – (Σx)² (8.12)
σA = σy√(Σx²/Δ) (8.16)
σB = σy√(N/Δ). (8.17)
For σy Dr. Taylor says:
What do you think σy might be from NIST TN 1900?
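For reference, those equations, together with a residual-based estimate of σy (Taylor’s eq. 8.15, if I recall the numbering correctly), amount to the following minimal sketch (Python). The x and y arrays are placeholders, not real data.

import numpy as np

def taylor_fit(x, y):
    # Least-squares fit y = A + B*x with the uncertainties of A and B
    # computed from Taylor's equations 8.12, 8.16 and 8.17 as quoted above.
    x, y = np.asarray(x, float), np.asarray(y, float)
    N = x.size
    delta = N * np.sum(x**2) - np.sum(x)**2                    # eq. 8.12
    A = (np.sum(x**2) * np.sum(y) - np.sum(x) * np.sum(x * y)) / delta
    B = (N * np.sum(x * y) - np.sum(x) * np.sum(y)) / delta
    sigma_y = np.sqrt(np.sum((y - A - B * x) ** 2) / (N - 2))  # residual SD
    sigma_A = sigma_y * np.sqrt(np.sum(x**2) / delta)          # eq. 8.16
    sigma_B = sigma_y * np.sqrt(N / delta)                     # eq. 8.17
    return A, B, sigma_A, sigma_B

# Placeholder data: monthly index vs. anomaly, purely illustrative.
x = np.arange(120)
y = 0.002 * x + np.random.default_rng(3).normal(0.0, 0.2, x.size)
A, B, sA, sB = taylor_fit(x, y)
print(f"A = {A:+.3f} +/- {sA:.3f}, B = {B:+.5f} +/- {sB:.5f}")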
“Bull pucky.”
Yet you say nothing that disagrees with what I said.
“The x-axis values have no uncertainty”
This is one of the main assumptions of all simple regression.
“There is uncertainty in the y-axis stated values that affect both the y-intercept and the slope.”
Yes, and again that uncertainty can come from measurement error, or variation in the data. The assumption is that it is independent, identically distributed, etc.
“Look at Dr. Taylor’s equations 8.12, 8.16, and 8.17.”
You mean the ones I was telling you to look at a couple of days ago? It’s good that you’ve looked at them. It would be even better if you demonstrated you understood them.
“For σy Dr. Taylor says”
σy is the standard deviation of the residuals of y. It’s correct to say that this should be the same as the estimated measurement uncertainty, assuming that measurement uncertainty is the only source of variation. (And this is treating uncertainty as error. It says nothing about other types of uncertainty).
But you are ignoring my point – the uncertainty of the slope and intercept do not just depend on σy. They also depend on the deviation in x and on the number of observations, as detailed in the equations for σA and σB you quoted.
“What do you think σy might be from NIST TN 1900?”
Any particular example from TN 1900?
It’s true for any polynomial, not just a linear one.
Using only v6.1 and going through 2024/10, here are the trends of interest.
1st half: +0.14 C.decade-1
2nd half: +0.21 C.decade-1
Last 10 years: +0.33 C.decade-1
Last 15 years: +0.34 C.decade-1
Last 20 years: +0.28 C.decade-1
Last 25 years: +0.21 C.decade-1
Last 30 years: +0.16 C.decade-1
The acceleration is +0.02 C.decade-2.
Seeing that all your numbers derive from the radiance measurements of the various satellites, it would be useful to let everyone know what the resolution uncertainty is in W/m² for each of the measuring units. That ultimately determines the resolution of the temperature calculations.
The change in trend over time is likely due to the large 2023/24 anomalies driven by the HTe.
I agree to the extent that it is likely that HTe is a contributing factor. Whether it is the dominating factor is a matter for the consilience of evidence to adjudicate. So far the consilience says otherwise, but I don’t mind sticking it out for a bit longer to see if there is a shift with new evidence.
Yes, like I said above, there are a lot of pieces to this puzzle.
However, just subtract out 0.6 C from the 2023/24 anomalies and calculate the trends. Should be pretty close to what would have occurred w/o the HTe.
This is what he views as the consilience of evidence, or at least what contributes to it:
No it isn’t. As I’ve said multiple times this is what I view as a falsification of the hypothesis that there is no correlation between CO2 and temperature.
As I’ve also said before, in the context of the HT eruption specifically, the following are examples of what I view as contributing to the consilience of evidence. If you have other peer-reviewed studies to add to this list please let me know.
DOI: 10.22541/essoar.169111653.36341315/v2
DOI: 10.1038/s41558-022-01568-2
DOI: 10.1007/s13351-022-2013-6
DOI: 10.1038/s43247-022-00580-w
DOI: 10.1038/s43247-022-00618-z
DOI: 10.1029/2024JD041296
DOI: 10.1175/JCLI-D-23-0437.1
And I’ll tell you what I tell everyone else. If you don’t know my views then ask. Don’t just make stuff up.
Nobody cares about the views of a low-end scientifically ignorant twit. !
You are one just making stuff up.
There is no empirical scientific evidence that atmospheric CO2 causes warming.
“There is no empirical scientific evidence that atmospheric CO2 causes warming.”
Of course there is!

So far all the evidence shows absolutely ZERO human causation.
ROFLMAO.. using a large El Nino event to try to show acceleration.
Total anti-science gibberish.
And of course beeswax will be totally unable to show any human causation…
… because even someone as dumb as he is must know that El Nino events are totally natural.
Little twit still mixing up natural transient events with AGW. Just DUMB.
HAHAHAAHAHAHAAHAHAHAHAHHA
From one of your earlier comments :
I also prefer the “tXXglhmam_6.N.txt” text files, where XX is the atmospheric layer (“ls”, “lt”, “mt” or “tp”) and N is the version number (previously “0” only, now with “1” options).
They provide the “Monthly means” figures to three decimal places, which is slightly less inaccurate than the “uahncdc_XX_6.N.txt” alternatives (2dp, but with more “zones / latitude bands” along with areas like “USA48” and “AUST”).
Starting URL : https://www.nsstc.uah.edu/data/msu/
For the V6.0 lower-troposphere (TLT) data navigate to the “../v6.0/tlt/tltglhmam_6.0.txt” file.
For V6.1 navigate to “../v6.1/tlt/tltglhmam_6.1.txt” instead.
Attached is my initial “quick and dirty” idiot-check … i.e. a verification performed by an idiot, which would be “me” … of the differences between versions 6.0 and 6.1, along with the changes they make to the “start to X” trends.
Notes
– “Minor” changes were made to values in the “Reference Period” UAH uses of 1991-2020. This shifted the pre-2013 values by a fixed +0.003°C (+/- 0.001°C, probably due to rounding errors).
– I cannot see anything in Dr. Spencer’s explanation above about where the 2013-2017 (-0.006/7°C) and 2017-2020 (-0.011/12) “steps” came from. Did I miss it / them ???
– The changes in the trends are relatively large … maybe due to “endpoints are extreme values” effects for the points at the end of the graph ? …
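For anyone who wants to repeat that check, here is a minimal sketch (Python) of the version-to-version differencing. It assumes the same Year / Mo / Global column layout in both tltglhmam files, skips non-data lines, and needs network access; it is an idiot-check of the idiot-check, not a definitive reconciliation.

import urllib.request

BASE = "https://www.nsstc.uah.edu/data/msu/"
FILES = {"6.0": "v6.0/tlt/tltglhmam_6.0.txt",
         "6.1": "v6.1/tlt/tltglhmam_6.1.txt"}

def read_global(path):
    # Return {(year, month): global anomaly}, assuming Year, Mo, Global
    # are the first three whitespace-separated columns of the file.
    out = {}
    with urllib.request.urlopen(BASE + path) as f:
        for line in f.read().decode().splitlines():
            parts = line.split()
            try:
                yr, mo, anom = int(parts[0]), int(parts[1]), float(parts[2])
            except (ValueError, IndexError):
                continue
            if 1 <= mo <= 12:
                out[(yr, mo)] = anom
    return out

v60, v61 = read_global(FILES["6.0"]), read_global(FILES["6.1"])
for key in sorted(set(v60) & set(v61)):
    diff = v61[key] - v60[key]
    if abs(diff) >= 0.05:                # print only the larger steps
        print(f"{key[0]}-{key[1]:02d}: v6.1 - v6.0 = {diff:+.3f} C")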
I would like to see a paper that explains the uncertainty of the satellite measurement of irradiance in order to achieve a 12 month average to 3 decimal digits.
My research has only found an uncertainty of ~±5 W/m². That is not enough to resolve temperatures to 3 decimal digits.
So would I — an important side effect of a formal analysis is it will often highlight problems in the measurement process that otherwise might go unnoticed.
As far as I know the only analysis that has been done is a comparison of temperature time regressions against radiosonde data. But this is in no way a real uncertainty analysis because neither dataset is a true value against which a comparison can be made.
We asked about an uncertainty analysis yesterday. Spencer said they couldn’t afford it.
So no one knows for sure. Let’s just assume numbers is numbers and do high school averages while recommending that trillions of dollars be spent.
Yes, the lack of either an “error range” or an “uncertainty interval” is unhelpful.
I am only “an interested amateur”, but doesn’t this come under the old debate about the difference between “accuracy” and “precision” ?
As I “understand” the process, which is probably completely wrong …
The satellite MSU measurements, at the microwave frequencies of O2 molecules, are “translated” into average temperatures of the “cone” of the Earth’s atmosphere that was sampled at time T.
These data are then processed to give the temperatures (+/- delta ?) of various volumes of the atmosphere, with the goal of isolating fixed vertical “layers” within each “cone”.
Averaging hundreds (/ thousands ?) of these “volumes” into a single global average per layer should allow you to add two (or three ?) decimal places of precision, even though the accuracy (+/- 0.01 or 0.15 degrees Celsius ???) will be unchanged.
.
In any case, for the global averages I prefer using the 3dp UAH datasets (the first “GLOBAL” column [ 3, after “Year” and “Mo” ] in each “tXXglhmam_6.N.txt” file) to the 2dp ones (the “Globe” column [ also 3, after “YEAR” and “MON” ] in each “uahncdc_XX_6.N.txt” file).
Attached is a comparison of the “V6.1 minus V6.0” differences using both options.
NB : “Bellman” has produced a continuous graph of just the 2dp deltas, with a single Y-axis, below (the fourth-from-last top-level post as I type this). In some ways it is “clearer” than mine.
There is no single temperature of the lower troposphere (0-10 km altitude): the microwave radiance measured by the sounding units is a convolution of the Gaussian frequency response function and the temperature lapse rate of the LT.
Over high elevations the convolution is different and the numbers are different.
Layers are differentiated using different microwave frequency response functions, the cutoffs are not sharp and there is overlap.
Radiometric instruments commonly have relative uncertainties on the order of several percent, and it is really hard to get smaller numbers. At 250K, 3% uncertainty is ±7.5K. NOAA and UAH have never demonstrated how they get ±0.15K.
Repeated multiple measurements of the same quantity are not made, in fact the number of repetitions is always exactly one in any time series measurement. Claiming tiny uncertainties via averaging is invalid, yet this is Standard Procedure for climate science.
precision – putting multiple arrows into the same hole on the target
accuracy – how far from the bullseye the arrow hits
resolution – how many “circles” you have on the target. e.g. separated by 1/4″ or 1″ circles.
You can have high precision with low accuracy and low resolution.
You cannot, however, increase resolution by averaging, that would require assuming knowledge you simply don’t have. Averaging doesn’t help precision either, precision is just how many times you get the same reading from the same object being measured. High precision won’t help you if the accuracy or resolution is low, you still won’t be able to add decimal places.
And high resolution won’t help if you are spraying all over the target, i.e. low accuracy. Your accuracy will be low and averaging won’t increase the accuracy at all.
Measuring microwave irradiance is *very* dependent on the absorbing media between the source and the receiver. For microwaves one major absorbing medium is water vapor. The MSUs in the satellites have no way to measure the water vapor in the atmosphere, so the readings have significant uncertainty from that alone. You can’t average that uncertainty away. When combining different readings from different sample sites that measurement uncertainty adds, it doesn’t go down. I’m pretty sure UAH does the same thing the climate models do and just assumes a common “parameter” for water vapor.
This makes UAH into a “metric” for temperature and not a temperature measurement. Whether that metric is really useful for a global average “something” is questionable at best.
The reason MSU Channel 1 isn’t used is its sensitivity to water vapour/liquid water.
That is a really interesting find. I’d guess that the jumps you found provide clues as to how Spencer, Christy, and Braswell handle the satellite drift. NOAA19 began operation around 2010. METOP-B began operation around 2013 and NOAA18 got cut off around 2017. I wonder if that helps explain the jumps.
The UAH calculations are apparently done as a single big FORTRAN batch job that is run at the end of every month, including the 30-year baseline subtraction arrays, even though the new data for the month should not change them.
The baseline subtraction arrays are provided on the UAH FTP data site; they are organized as 12 subarrays of ~10,000 points (the UAH grid locations), stored as 5-digit integers of temperatures in Kelvin, multiplied by 100. So the resolution is 0.01K.
The corresponding monthly temperature averages are not provided, but instead only the anomaly values (one ~10,000 point array for each month).
For a time I was undoing the anomalies to see what the actual recorded temperatures were by adding back the baseline arrays. It was at this point I noticed the baseline arrays are not constant but instead can change by 0.01K month-to-month. I attributed this to rounding in FORTRAN DATA statements. It has the same effect as the pre-2021 portion of your top graph.
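A minimal sketch (Python) of that reconstruction, using placeholder arrays with the integer-times-100 Kelvin scaling described above; the grid size and the values themselves are made up for illustration.

import numpy as np

# Placeholder stand-ins for one month of gridded data: the baseline array is
# stored as integers of Kelvin x 100 (0.01 K resolution), the anomaly array
# holds the departures for the same ~10,000 grid points.
n_points = 10368                         # e.g. a 144 x 72 grid
rng = np.random.default_rng(4)
baseline_int = rng.integers(23000, 30500, size=n_points)   # 230.00-305.00 K
anomaly = rng.normal(0.0, 1.0, size=n_points)

baseline_K = baseline_int / 100.0        # undo the x100 integer scaling
absolute_K = baseline_K + anomaly        # recovered "recorded" temperatures
print(f"grid point 0: baseline {baseline_K[0]:.2f} K, "
      f"anomaly {anomaly[0]:+.2f} K, absolute {absolute_K[0]:.2f} K")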
So, the satellite record is not exactly perfect.
One heck of a lot more reliable than the surface fabrication though.
You can say that again.
I seriously doubt the info we are given.
On average

Temperature is going down
Do you ever wonder why there are so many stations like this scattered over the GLOBE, yet the global anomaly shows a large increase? It would seem there are spurious trends being generated via the anomaly determination procedure.
Can you tell me the source of that graph please?
It is completely at odds with the data from this source ….
https://climateknowledgeportal.worldbank.org/country/curacao/trends-variability-historical
Which shows that Curacao has experienced a 0.09 C/dec warming from 1950 to 2020.
That would be after AGW agenda adjustment, dopey. !!
Oh, yes the same “agenda” that Spencer and Christy maintain with UAH V6/6.1.
And all meteorological agencies across the world.
That the resident paranoic ranter denies.
Bless.
Now Oxy, did you ever see “Carry on Cleo” ?
https://clip.cafe/carry-on-cleo-1964/theyve-all-got-in-me/
https://www.youtube.com/shorts/wUhN2X_pxEk
Perhaps you can show us your math that derives a measurement that has a resolution at least an order of magnitude smaller than the measurements used to calculate it.
Where do you find a university lab course in a physical science that allows one to increase the measurement resolution of what was actually measured?
By the 14th of November, the center of the polar vortex will have moved completely over Russia.

This readjustment has increased Australian warming by about 30% since 2021.
You can see the weakening of the solar wind speed since October 14. Such solar wind spikes will cause serious anomalies in the distribution of ozone in high latitudes and will shift the center of the polar vortex over Siberia, where there is a strong center of the geomagnetic field.

Here’s a comparison of the difference between version 6.1 and 6.0.
There are a lot of tiny 0.01°C differences throughout the bulk of the time series. Up to 2012 these are all positive, i.e. 6.1 has warmed the past slightly. I suspect this rise in the anomalies is due to the changes during the base period, rather than an actual increase in temperature.
After 2012 we see the sign flip but the difference is still only 0.01 or 0.02°C.
Then after 2021 we see the big changes, with the first few months actually getting warmer, before a large downward trend. The final month on that graph is September with a difference of 0.16°C. October’s difference would be 0.21°C.
Here’s the same but showing annual averages. The final point is the average up to September 2024.
Finally, here’s the side by side comparison showing annual temperatures. (This time I remembered to include October 2024.) Almost no practical difference until 2021.
Geoff Sherrington,
You showed the pause for Australia starting in 2016/02. With the latest update the trend from 2016/02 to 2024/10 is now +0.16 C.decade-1. The pause did end last month, but this should be an even more convincing update.
It may interest you that the warming rate since 2010/07 is +0.6 C.decade-1.
Meanwhile, above the 60th parallel.

More fun with the gridded data.
Here’s a chart showing the difference between the two versions for September. Some quite big differences in places, but it mostly averages out.
I assume the checkerboard effect seen at the higher latitudes, especially in the south, is the result of the errors caused by drifting.
Here’s the average up to September for 2024.
And for all of 2023.
2022.
Interesting that this has added quite a few land areas.
And finally 2021.
Again, a lot of the mid latitude land masses are now warmer than in the previous version. I wonder what effect this has on the Australian Pause.
Is there a monthly report on how much sea floor geothermal energy is released into the Earth’s oceans?
Since there are literally thousands of uncharted sea floor thermal vents and magma chambers, it seems like that is a gaping hole in our temperature data.