Global temperature anomalies on both land and sea are dropping like a stone. Net Zero-obsessed mainstream media, science and politics do not do cooling. Confirmation bias that holds humans responsible for hockey-stick style global warming with all its risible ‘settled’ notions has gravely damaged genuine climate science. But the world is cooling rapidly and the silence from the mainstream is both laughable and disgraceful.
Exhibit 1: The accurate UAH satellite record shows the plunge clearly with the difference or anomaly from the 1991-2020 average falling during 2025 to end the year at just 0.3°C.

Needless to say, mainstream media ignore satellite temperature data. In January 2022 at the height of the Greta climate hysteria, Google AdSense banned a page promoting the monthly update on the grounds of publishing “unreliable and harmful claims”. In the UK, the stone-dropping global inconvenience was passed over recently in favour of highlighting the latest hooey from the Met Office claiming another local ‘hottest year evah’ based on its junk, unnaturally heat-ravaged weather stations. Rather than advance a balanced global view (or even mention it), the Met Office activists proclaimed that its six hundredth of a degree centigrade ‘record’ was made 260 times more likely due to humans fiddling with the weather. Such imaginative precision from such junk data is a wonder to behold. Science, it is not.
The UAH scientists, Dr Roy Spencer and Professor John Christy, also produced results showing how the monthly temperature anomalies have fallen over the last two years. The table below shows both a global figure and measurements broken down into a number of regions.

To the left, the red global anomaly in April 2024 was a two-year high, as was the figure next to it for the northern hemisphere. The remaining columns, reading from left to right, show the southern hemisphere, tropics, mainland US, the Arctic and Australia. A downward trend is clearly visible across all regions.
Exhibit 2: Along the equatorial Pacific Ocean, sea surface temperatures (SSTs) have been falling for months. In its recent report on the formation of El Niño (warming) and La Niña (cooling) oscillations, the US weather service NOAA provides the latest three-month running anomalies. Since last September, NOAA notes, “below average SSTs persist across most of the equatorial Pacific”.

Note the warming oceans around 2015-16 caused by a particularly strong El Niño. The recent El Niño also caused warmer oceans, or ‘boiling’ to accurately report the sentiments loudly bloviated by the Guterres/Gore/Kerry gang.
This is the latest graph showing SSTs from 60°S to 60°N.

Again, it seems temperatures are coming off the boil, with 2026 starting cooler than 2025, which was cooler than 2024.
The last few years have witnessed extraordinary climatic events combined with an astonishing lack of scientific curiosity about their causes. The ‘agreed’ answer of course was always to hand – it was humans wot did it, we have the computer models to prove it. And if you don’t agree with us, then don’t slam the door behind you. Writing recently on Judith Curry’s blog, Javier Vinos argued that what he termed the 2023 event revealed the “greatest failure of climate science”. Vinos is a leading proponent of the suggestion that the massive underwater Hunga Tonga volcano eruption in 2022, which increased water vapour in the upper atmosphere by up to 13%, was the prime cause of all the weather anomalies. Water vapour is a powerful warming gas of relatively short duration.
The scale of the massive increase in stratospheric water vapour can be seen in the latest measuring chart from NASA shown below. There is still a lot of extra water compared to the years before 2022, but it is gradually decreasing.

Activists jumped on all the unusual weather events to promote a politically acceptable, pre-defined narrative. But the large blips since 2023 cannot be explained by anthropogenic causes since such changes if they occur are small, regular and only noticeable over a long period of time.
The reason climate science in general has failed to rise to the discovery challenge over the last few years, observes Vinos, is due to strong confirmation bias. “The first step to learning from the 2023 event is accepting its exceptional nature, which many fail to do,” he argues. Rather than trying to determine the causes of the event, scientists have attempted to fit it into the dominant narrative using models, he charged. Vinos’s contribution makes interesting reading and offers a convincing argument to lay much of the blame for the recent dramatic but temporary climate changes on an event unique in the observational record. Unlike Hunga Tonga, most onshore volcanic eruptions emit large quantities of particles into the atmosphere which can lead to temporary but noticeable global cooling. Meanwhile, Vinos states that “climate science has failed the test of an externally forced natural climate event”.
The great tragedy of the settled climate science era, now facing increased scrutiny, is the draining of public confidence in once revered scientific institutions. Covid was hardly a high point in medical science, while climate fear mongering is in danger of becoming a social joke. ‘Boiling’ oceans and constant risible records are mixed with obvious pseudoscience such as human ‘attribution’ claims. The blast from Hunga Tonga may well help in blowing away much of this fake news for good.
Chris Morrison is the Daily Sceptic’s Environment Editor. Follow him on X.
2025 was the year of the alarming marine heatwave and the hottest, sunniest evah etc etc etc. They doubled down on it all.
Waters surrounding UK experiencing significant marine heatwave – Met Office
2025 is double-record breaker: UK’s warmest and sunniest year on record – Met Office
Ok, you can stop laughing now, it is the Met Office after all. But that is the narrative in the UK – officially. And there is a queue with mad schemes to suit:
The architect of the London Eye wants to build a vast tidal power station in a 14-mile arc off the coast of Somerset that could help Britain meet surging electricity demand to power artificial intelligence – and create a new race track to let cyclists skim over the Bristol Channel. – Grauniad
The London Eye is nothing more than a glorified Ferris wheel…
Fortunately, architects have to get their ‘designs’ certified by professional engineers skilled in both calculations and construction methods….before financial types will give them money to build their dream…although politicians sometimes bypass this norm by promising tax dollars from their devoted and deluded voter base.
Yep . . . 2025, one of the hottest (blah) (blah) (blah) . . . overlooking the observational data that between October and December 2025 the UAH global-average lower-troposphere temperature anomaly decreased from +0.53 to +0.30 °C, a rate of -13.8 °C/decade (see https://wattsupwiththat.com/2026/01/05/uah-v6-1-global-temperature-update-for-december-2025-0-30-deg-c/ ).
That can be compared to the UAH long-term trend, which shows an average warming rate of +0.16 °C/decade.
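For what it’s worth, that quoted short-term rate follows directly from the two anomaly values themselves. A minimal sketch (treating October→December as two monthly steps is an assumption about how the rate was computed):

```python
# Short-term rate implied by the two UAH anomaly values quoted above.
oct_anom = 0.53   # °C, October 2025 lower-troposphere anomaly
dec_anom = 0.30   # °C, December 2025 lower-troposphere anomaly
months = 2        # October -> December

# Change per year, scaled to a per-decade figure.
rate_per_decade = (dec_anom - oct_anom) / (months / 12) * 10
print(round(rate_per_decade, 1))  # -13.8
```

Of course, as noted below, extrapolating a two-month change to a per-decade rate says nothing about whether the transient persists.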
Of course, there is the issue of month-to-month measurement uncertainty in the reported UAH global anomaly values that could have a huge effect on the magnitude of such a short term transient, but nonetheless that is what the data indicates: a significant cooling in the final months of 2025.
It will be fascinating to see if this drastic cool down continues through January 2026, or if it bottoms out for some inexplicable reason.
We confuse weather and climate, everyone knows that! At least, that’s what the alarmists reply, their voices tinged with disdainful annoyance.
By the way, does anyone here know a good way to counter this argument? I can quite effectively refute a climate alarmist about models, their inaccuracies, the urban heat island effect, the history of weather stations, and the unrealistic nature of extreme scenarios (provided my interlocutor is willing to listen, which isn’t a given—in which case I don’t bother and move on to something else), but when an unusual cold spell is mentioned, like the one we’re currently experiencing in Europe, we’re constantly reminded of this famous “confusion between climate and weather.”
It depends on the tolerance of the person you’re talking to, but generally speaking, it’s rather disqualifying to bring up episodes of severe cold during a discussion with an alarmist. Except, perhaps, in one very specific way: there is undeniable warming, natural variability remains a determining factor, and the fact that we lose feeling in our ears and our lips start to turn blue proves that heat is still less harmful to the body than cold.
EDIT : I read a bit too quickly before commenting: so there is a cooling trend observed? We’ll see if it continues. (I hope the temperatures don’t drop too much.) If the trend is confirmed, however, it will take quite a few colder years before the alarmists accept that continuous and catastrophic warming is a thing of the past.
In my experience, talking to a climate alarmist is like trying to explain quantum mechanics to a five year old. Most climate alarmists are useful idiots who simply don’t understand science or the scientific method.
I’m thinking more about people I know well, people I know are capable of changing their minds. After all, yes, it’s probably very energy-intensive for very little gain.
I understand the meaning of the image, of course, but it’s interesting! Quantum physics is much less politicized, and immensely more difficult to popularize, even in its simplest aspects, than climate-realist arguments… Quantum physics is absolutely counterintuitive, unlike the statistics demonstrating that alarmist narratives are unfounded. On the one hand, there’s a very natural difficulty in understanding it; on the other, a refusal to broaden one’s perspective. This second scenario is far worse.
Coming from a family of teachers, I remain convinced that one can explain concepts of quantum physics to very young children, or at least try to do so, which, in my opinion, represents a good pedagogical exercise. Didn’t Feynman say that if you can’t explain something simply, it’s because you haven’t understood it?
Children become what you put in their heads. Fill their brains with apocalyptic nonsense, and once they’re adults, they’ll never let go. Similarly, completely reform the national education system to create generations of staunch anti-nuclear activists, and you’ll see the havoc that will have been wrought in two decades.
Charles,
children understand through experience the properties of mass (gravity and inertia) without comprehending relativity. Similarly the properties of light (reflection, transmission, absorption) are experienced without comprehending quantum mechanics.
The public has effectively been brainwashed to believe CO2 has magical properties, fundamentally different from the effects of clouds in the sky.
We haven’t really evolved since the witch trials or the dark ages.
“if you believe in things that you don’t understand, then you suffer” Stevie Wonder
Science!™ is the new priesthood.
Charles, I’ve been having great success with simple arguments. For my technical friends I dive into the supporting data. I don’t start by attacking CO2, but I can argue that point as well — if required. And while I don’t view temperature data as unbiased, I don’t attack it. I prefer to offer a better explanation of warming trends.
Most people who lack a technical background, but who trust mine, are satisfied with this “It’s the Sun” animation. The prediction is simply filtered sunspot data, though the 99-year moving average filter is much more complicated than it appears. Due to delays in Earth’s response, this simple model can predict 13 years into the future.
We entered a slight cooling period in 2016 that will last at least 20 years.
When the 2023 temperature spike began, it wasn’t predicted by my model, so I was reasonably confident it was a transient effect that wouldn’t significantly affect ocean heat content and that we’d eventually return to the original climate prediction. I can’t change the prediction to match new temperature data as the model is a filter of sunspot data, so new sunspot data simply extends the prediction in time. If temperature takes another step, I’m wrong. If temperatures are flat or declining until 2035, or longer, I’m not wrong, which is not quite the same as being right.
Many people treat climate as the integral of weather. This is wrong. Weather is the noise on climate driven by the faster atmosphere and sea-surface responses and climate is dominated by the slower responses due to ocean heat content and slower heat-transport mechanisms. My own personal opinion is that anything lasting shorter than 10-11 years is dominated by weather. Ten years is justified based on the clear break point in the computed frequency response between temperature and sunspots.
Everything to the right of center has periods shorter than 10 years. The blue-dashed line is the prediction from moving-average model (above). Other points:
1) The periods in the left, center panel all relate to the Jovian planets.
2) The 20dB/decade line represents the amplitude response of an ideal integrator (think oceans). My model doesn’t do weather, it follows this response.
3) Using coherent averaging I can attenuate random noise to observe the periods of solar-related weather (lower, right panel).
4) The 11-year sunspot cycle doesn’t affect climate (notch in the middle). This is nature’s head fake. The solar activity affecting climate is encoded in the sunspot signal. The sunspot signal is not solar activity.
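The “notch” in point 4 is, at least in part, a generic property of moving-average filters: an N-point average has a zero at period N. A pure-Python illustration (the 11-sample window and synthetic data are assumptions for illustration only; the actual 99-year filter described above is said to be much more complicated):

```python
import math

def centered_moving_average(series, window):
    """Centered moving average; the half-window at each edge is dropped."""
    if window % 2 == 0:
        raise ValueError("window must be odd for a centered average")
    half = window // 2
    return [
        sum(series[i - half : i + half + 1]) / window
        for i in range(half, len(series) - half)
    ]

# An 11-point average annihilates an 11-sample cycle: every window spans
# exactly one full period, so each average is essentially zero. That is
# the sense in which such a filter "notches out" the 11-year cycle.
cycle = [math.sin(2 * math.pi * i / 11) for i in range(110)]
smoothed = centered_moving_average(cycle, 11)
print(max(abs(x) for x in smoothed) < 1e-9)  # True
```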
I’ve also observed that Jupiter-Saturn conjunctions correlate with climate cycles and major ENSO events. Many people find this convincing even though I admit I don’t know how the conjunctions modulate solar activity, or how variations in solar activity affect climate.
Now for the closing argument which completely avoids issues with: models, overfitting, correlation vs. causation, and even whether or not global temperature is a valid metric. Climate largely repeats after 3560 years.
The motions of the Sun and Jovian planets result in many periodicities — too many — but a few are special. One in particular is 3560 years. Pick a point on Earth and climate is likely to repeat in 3560 years, or 7120 years. The Bray cycle is ~2400 years, so it’s inverted after a 3560-year shift, but is back in phase after 7120 years.
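The phase claims about the Bray cycle can be checked with simple modular arithmetic (treating the ~2400-year period as exact, which it is not):

```python
BRAY = 2400.0  # years, approximate Bray/Hallstatt period

for shift in (3560.0, 7120.0):
    phase = (shift % BRAY) / BRAY  # fraction of a cycle after the shift
    print(shift, round(phase, 2))
# 3560.0 -> 0.48  (close to half a cycle, i.e. roughly inverted)
# 7120.0 -> 0.97  (close to a whole cycle, i.e. roughly back in phase)
```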
This reduces that “It’s the Sun” argument to three questions. 1) Does the climate at a location largely repeat after 3560 years? 2) Can this period be explained without the synchronization of the Jovian planets? 3) Is the Sun involved?
All climate is local — even in Greenland.
Max Planck: “A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.”
Well, in rebuttal, how about the following simple and factual arguments:
1) “In fact, conjunctions between Jupiter and Saturn happen every 19.6 years. Because of the angle at which Jupiter (1.3º) and Saturn (2.5º) both orbit the sun, when they meet up every 19.6 years they will be at varying distances from each other, anywhere from 4º or less.”
— https://www.komu.com/weather/the-great-conjunction-planets-visibly-double-up-for-first-time-in-nearly-800-years/article_e1e83f90-39c1-11eb-97da-97372de76aa1.html
This is based on the understanding that the term ‘conjunction’ as used in astronomy is when two or more celestial objects appear close together in the sky from Earth’s perspective, not necessarily close in absolute distance of separation (particularly noting the huge distance differences between an inferior conjunction and a superior conjunction!)
2) So, please cite the “climate cycles” and “major ENSO events” that have a periodicity of 19.6 years or a harmonic of such. In doing so you will need to consider these facts:
— (a) “It is generally believed that the low-frequency variability of climatic parameters seems to be connected to solar cycles. The principal periodicities are: 11-year (Schwabe), 22-year (Hale), 33-year (Bruckner) and 80–100-year (Gleissberg) cycles.”
— abstract of https://adgeo.copernicus.org/articles/13/25/2007/ (my bold emphasis added)
For the life of me, I don’t see any correlation of any of these cycles with a 19.6-year periodicity or its first, second or third harmonic.
— (b) “While their frequency can be quite irregular, El Niño and La Niña events occur on average every 2—7 years. Typically, El Niño occurs more frequently than La Niña.” (source: https://psl.noaa.gov/enso/enso_101.html ). For the life of me, I don’t see any correlation there with a 19.6 year periodicity or harmonic of such.
And just one more fact (before I run out of comment space):
This is definitely falsified by UAH satellite-based measurements of Earth’s global lower troposphere temperature trending of about +0.16 °C/decade warming over the period of 1979 to end-2025 (see https://wattsupwiththat.com/2026/01/05/uah-v6-1-global-temperature-update-for-december-2025-0-30-deg-c/ )
ToldYouSo, I’m not sure where you obtained your facts. Conjunctions occur, on average, every 19.86 years, not 19.6. Over the last three conjunctions the interval has varied from ~19.5 to ~20.2 years.
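For reference, the ~19.86-year average is just the Jupiter-Saturn synodic period, which follows from the two planets’ sidereal orbital periods (a quick sketch using standard approximate values):

```python
# Jupiter-Saturn synodic (conjunction-to-conjunction) period:
# 1 / (1/P_inner - 1/P_outer), with sidereal periods in years.
P_JUPITER = 11.862  # years (approx.)
P_SATURN = 29.457   # years (approx.)

synodic = 1 / (1 / P_JUPITER - 1 / P_SATURN)
print(round(synodic, 2))  # 19.86
```

The ~19.5 to ~20.2-year spread between individual conjunctions comes from the eccentricity of the two orbits, which the constant-period formula above ignores.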
I’ve used an average delay of 15 years between conjunctions and major climate inflections. I believe ENSO timing varies a bit more based on the internal state of the oceans. The 15-year delay works for 1877, 1916, 1997, and 2016. For 1976, I suggest you look at the ONI index. It took a major jump and warming started well before the 1982 El Niño event. This is also true with the 1937 conjunction. Warming started well before the associated 1941 El Niño event.
I never said every ENSO event was timed to conjunctions. Most ENSO events are just weather. Here’s an expanded view showing temperature inflections and their relationship to conjunctions.
I also never denied the warming prior to 2016, or the cooling period prior to 1978, so I don’t understand your point. I’m sure you’ll still be capable of drawing a trend line with a positive slope between 1979 and 2035 — my predicted end for the cooling period. The overall warming trend will likely end a little before 2200.
Ummmm . . . try re-reading my post above whereupon you might just notice that I specifically cited this URL as the source of the 19.6 year periodicity in Jupiter-Saturn conjunctions:
— https://www.komu.com/weather/the-great-conjunction-planets-visibly-double-up-for-first-time-in-nearly-800-years/article_e1e83f90-39c1-11eb-97da-97372de76aa1.html
I likewise provided URL links to other sources of cited numerical values where I thought they were needed.
BTW, citing an average value to a precision of 0.01 years (as in “19.86 years”) is equivalent to a precision of ± 3.7 days . . . I guess that’s important for some reason?
Finally,
Well, in the third paragraph of your prior comment you stated (in bold no less):
“We entered a slight cooling period in 2016 that will last at least 20 years.”
So my point is that the UAH data and trending shows that since 2016 there has not been ANY sustained cooling in global lower atmospheric average temperature.
Conjunctions occur, on average every 19.86 years, not 19.6.
BTW, citing an average value to a precision of 0.01 years (as in “19.86 years”) is equivalent to a precision of ± 3.7 days . . . I guess that’s important for some reason?
The point was 19.86 versus 19.6, not the precision of 0.01.
“Most climate ~~alarmists~~ scientists are ~~useful~~ useless idiots….” Fixed it! 🙂
weather and climate
In the early days of the climate scare the ‘high priests/climate scientists were at great pains to point out that weather is not climate; one event countering their narrative was mere noise. They also promoted the idea of one paper syndrome, seizing on a single result and running with it.
Now it appears to have reversed completely. After over 30 years and a 100% failure record and no real catastrophe in sight it’s getting really desperate. A paper that is unreviewed that has the right message is propelled into major headlines in the scariest manner possible across the media. And, if the paper is rubbish, or even gets retracted, no correction is ever printed or broadcast.
One Hurricane is enough, weather now… is climate:
The atmospheric and ocean conditions that led to the rapid intensification of the hurricane were made six times more likely by climate change, a World Weather Attribution study has found. – BBC
World Weather Attribution is such a fraud! It’s not science, it’s guessing and it is guessing combined with a bias, which is even worse than mere guessing. This is not science.
This bastardization of science is what motivates politicians in the EU, UK, Australia, and New Zealand to bankrupt their nations in a vain effort to control the temperature of the Earth’s atmosphere.
This bastardization of science is a very costly fraud.
WWA is digital tea leaf reading. And only the high priests can do that with their hallowed computers.
That is part of the religious training. The high priests cannot be questioned.
“Weaponized weather analysis” or “willfully wrong assertions”
That’s true. Actually, my question was more about the intrinsic difference between a weather forecast and a climate projection, and how to convincingly explain the difference between the two (purely on a personal note, I don’t like things to escape me, linguistically speaking. I like to be able to clearly formulate what I know or feel).
Perhaps it’s a non-issue after all. Until recently, we weren’t concerned with what the weather would be like across the globe in X years, but rather with the weather of the next day or the following days. Creating this ambiguity is a tour de force on the part of the IPCC and its cronies.
I was sadly amused to read that AR6 had seen the emergence of a “pattern effect,” intended to explain that the “real” effects of warming would kick in later, once the feedback loops had truly “started.” Until now, natural variability didn’t count for much, obviously…
The BBC? Soon to be bankrupt after Trump gets done suing them.
If only he had got his act together first time around. The world has changed a lot since 2016.
“If only he had got his act together first time around. ” The first time around I do not think that President Trump realized that the Republicans were his enemy also. Many of the people that he chose to help were subversive RINOs that were out to destroy him. This time around he knows better.
If only – hindsight is a wonderful thing.
I suspect he really needed the four years off to identify good people. He went into his first term trusting established politicians. Big mistake. As far as I can see, only Pompeo was not a RINO. He also identified major principles to guide his new team. Much different kickoff this round.
Indeed, it has. The fanatics have done so much damage and have gone so far off the deep end that they can no longer maintain the various illusions they have hidden behind. The Biden Administration was such an abject failure that it is perhaps a good thing that Trump’s second episode was delayed. In addition, the stranglehold on messaging the Left has enjoyed is slipping, in part thanks to Elon Musk. (And our own intrepid Team here.)
Now, if The Donald can just avoid being his own wrecking ball, it is possible that much of the world is ready to step away from the brink.
I have no illusions that Mr. Trump is some kind of savior, but he may be a “monkey in the wrench” for the designs of the destroyers. (h/t Bruce “John McClane” Willis.)
“Now, if The Donald can just avoid being his own wrecking ball,”
There is always that danger. 🙂
Many times Trump says controversial things to some purpose. And what he says is mostly controversial to his enemies, foreign and domestic. Anything conservative or common sense is controversial to the Radical Left.
Trump likes making the Radical Left angry. Like changing the name of the John F. Kennedy Center for the Performing Arts to the Donald J. Trump and John F. Kennedy Center for the Performing Arts. The Radical Left exploded! Trump loved it.
The BBC is going to need good luck when Nigel Farage becomes PM. Maybe they will be refashioned into some sort of subscription service.
Which will probably not be survivable – should be interesting to see.
“If the trend is confirmed, however, it will take quite a few colder years before the alarmists accept that continuous and catastrophic warming is a thing of the past.”
Yes, I think the temperatures will have to get down to the same level as the 1970s before Climate Alarmists acknowledge that cooling has taken place. So it will be a while, although Climate Alarmists think it won’t ever get cool again “because CO2”. We shall see. 🙂
Perhaps if the temperatures do return to 1970s level they will bastardise the CO2 figures to ‘prove’ that CO2 is the control knob.
I’m surprised they are not bastardizing the temperatures now.
They bastardized them after 1998, turning a cooling trend into a warming trend.
But they are not fiddling with the temperatures now. Perhaps the magnitude of the cooling is interfering with their tricks, and would be obvious to outside observers.
Climate Alarmist won’t acknowledge crap.
1) They don’t tolerate such heresy.
2) Never underestimate the power of adjusted temperatures.
“Never underestimate the power of adjusted temperatures.”
No, I don’t underestimate the power of adjusted temperatures. I even see some skeptics buying into the fraudulent “hotter and hotter and hotter” trend line of the Hockey Stick chart.
The “adjusted temperatures” are what got us all in this mess. Without the adjusted temperatures, the Climate Alarmists would have absolutely nothing to point to as a connection between CO2 and the Earth’s temperatures. It’s all they have, and it is fraudulent as hell.
Climate Alarmists have nothing to back up any of their claims about any connection to CO2 and the Earth’s climate or weather.
CO2-based Climate Alarmisim is the biggest Mass Delusion in human history.
Charles, it’s my belief that trying to discuss Climate and educate people about the underlying science is less of a technical challenge but rather, it is a debate about religion. The great majority of people are not trained in science and cannot perform the simplest of Algebra steps. They don’t possess the skill sets to have a discussion about climate from a science perspective. What they have is a religious belief and a faith in what authority figures have told them. They are guilty, they have sinned and must pay the penance (carbon taxes) to absolve them and their families of a great harm inflicted upon the earth. To me, the guilt inflicted upon people over “climate” is perfectly consistent with religious practices throughout history. One must “believe” and have “faith” in the climate “experts” to be a member in good standing in the congregation.
Indeed. Instead of debating, I usually send links to very based and well-presented websites and/or videos that outline the inherent issues that the climate system poses as a whole, as a counterpoint to ‘settled science’.
Seeing the system as it is, and atmospheric science as a whole, you quickly learn about the uncertainties in the underlying assumptions and the discussions among scientists who are trying to make sense of it.
The big takeaway is that the easy-to-understand AGW story you are told is nothing but a forced idea.
But for that you need someone with a somewhat open mind…
Beat me to it Doug.
Yes the only way to position oneself for a discussion with a climate alarmist is to recognize that you’re dealing with a religious acolyte, a zealot for a belief that they don’t HAVE to comprehend or rationalize because it is , well – their BELIEF.
Try getting say an ardent Muslim to accept that Mohammad was just a bloke who was an early version of televangelists, or say convince a Christian that Jesus was not a messiah, he was just a naughty boy.
As I began researching alternative viewpoints to the mainstream media narrative, I felt a strange kind of unease, or even shame. A genuine sense of transgression, the kind a child in the 1950s, steeped in superstition from a young age, might have felt upon opening one of those forbidden comic books for the first time—those “works of the devil” against which his elderly aunt warned him, crossing herself three or four times. The confessional is just around the corner! Forgive me, Father Al Gore, for I have sinned…
I’m very glad I persisted in my research. Considering this initial period of personal re-education, I perhaps see it as proof of the cult-like nature of the dominant climate discourse. The fear of venturing too far into a kind of “forbidden zone,” especially when it concerns, in principle, purely scientific matters, is a sign that there are interesting things to discover. I like the image of the anti-aircraft fire we take when approaching the target.
“What they have is a religious belief and a faith in what authority figures have told them.”
Unfortunately it is not just the authorities. It is also the corrupt “science” that was bought and paid for by governments controlled by the ultra wealthy elite.
The West is suffering an enormous ethics and morals crisis!
Thanks for the post, Doug. But you don’t have to be a farmer to know when bullshit is being spread around!
December here in Wokeachusetts was much colder than in the past few decades. I don’t have research to back it up – but I’ve been an outdoorsman all my life – and that’s my opinion.
Your feelings matter. Whatever the “experts” may say, personal feelings are very important—especially when it comes to the weather, and not extremely specific questions, like, I don’t know, the creation of a dam or the construction of a rocket.
Eastern France experienced its first snowfall in some time. It also snowed in Paris, a rare occurrence worth noting. Temperatures reached -22°C in Mouthe, in the Doubs region. While this is fairly common in North America, it’s unusual in our latitudes. (The record low temperature in France: in Mouthe, in January 1968, it reached -36.7°C. The mercury froze in the thermometers, and some claim to have seen a woolly mammoth in the town center.)
The northeast United States has been getting cold air from up north pumped into them all winter long. It is still doing it today.
Meanwhile, in the center of the nation, the jet stream configuration which keeps the cold air to our north gives us mild weather this winter.
The middle of January is usually the coldest part of our year, but it’s not very cold this year. And we are happy about that.
But, it is still early. We could get a cold shot of air before the winter is over.
https://earth.nullschool.net/#2026/01/10/2100Z/wind/isobaric/500hPa/orthographic=-108.93,22.61,301
“We confuse weather and climate”
Climate science confuses temperature with climate. If temperature was climate then Las Vegas and Miami would have similar climates. Temperature is not a proper metric for either climate or heat. Yet “climate” science refuses to move to actually using the proper metrics.
Right. The “global mean temperature” is a surprising invention. We hear people making a pointless comparison between this GMT and the temperature of a human body.
“If your temperature rises by 1°C for two days, it’s not a big deal, but in the long run, it ends up damaging your body. The planet is the same,” they say. This fallacy is often repeated without the slightest shame. Anthropomorphism is one of the poisons of science: deceptively obvious, dangerously easy.
Start by insisting that they define those terms, and then insist they define what an ideal climate is, when it has existed, and for how long. The point is that when they use that response, it is generally a conditioned reflex used for evasion. They very likely cannot defend the tactic with any argument of substance. They are the ones making the grand claims, after all. Keep the burden of proof on them. If they are capable of rationality, this will give them something to at least think about.
Most likely, they will assert some platitude or other, include an ad hominem or two, and revert to a deer in the headlights countenance.
My general observation is that no matter what the climate does or is likely to do, whether it gets warmer, gets colder, or stays the same, it will be good for some, bad for others, and nothing much to notice for most.
Simpler is to ask if 1850 is the optimum or ideal climate.
That works, though it probably was for somebody, somewhere.
I would suggest that any discussion of ‘climate’ that uses a timeline of less than 10,000 years, at a minimum, is only talking about weather and not climate change.
The earth’s been around for 4.5 billion years, is currently in an ice age (and has been in one for the past 2 million years), and currently has the rarity of ice on both poles (which occurs about 20% of the time historically). And yet there is a large percentage of the population that’s been brainwashed into believing the current conditions are ‘too warm’.
This theocratic climate regime with daily enforcement of descriptive dress code during self imposed societal dark ages will fall when the money runs out or depreciates to worthless status.
Why is it that I always get an ad for Blue pills when moving back to the home page? Has my wife complained? Or one of my three mistresses?
AI has your number! 🙂
Apparently they all have! 🙂
Perhaps it is due to the web browser that you are using to view WUWT. I have used the Vivaldi web browser for years and never see ANY of those ads. Vivaldi is a much more privacy-focused browser that doesn’t allow trackers, ads, etc. Their motto: privacy is the default, and everything is customizable.
Try reading WUWT on an iPhone with Safari as your browser….pretty much unreadable due to pop-ups….try Brave
If the ads are annoying to you, you should get an ad blocker. Try searching your browser extension store for: uBlock Origin. If no blocker available there, do this search: Where can I obtain a free ad blocker? There are free ad blockers available. Be careful. Several commenters here have mentioned that some ad blockers install spyware.
Awhile ago, I went to a website and displayed in the first screen in big block letters was: TURN OFF AD BLOCKER. No thank you! I came right back to WUWT.
You refused to get red pilled,
now the Matrix wants you to get blue ones.
If you are using the WUWT app, do not ever use the back button. When finished reading an article/comments, hit the Home button. If you use the back button, you will get ads even if you are a paid subscriber.
Only three? You slacker! ;-))
Story Tip:
Another Grim Fairy Tale
How clean energy could save us trillions
A return to the mediaeval peasant farm with added wind turbines. As for saving money, that is out of the question with the British government. They seem to believe that it grows on trees.
Money can grow on trees, in a sense, if you manage forests correctly. That’s my opinion as a retired forester with 50 years’ experience. Unfortunately, too many forests are abused or locked up to sequester carbon to “save the planet”.
Save the undergrowth….so that it can be turned to CO2 all at once.
Factoid: In BC 10 billion trees have been planted since 1930. Last year 240 million trees were planted.
How about supplying a pertinent excerpt. I don’t feel like giving MSN any hits on their webpage.
And while I’m on the subject, how about everyone who promotes a link, also supply a pertinent excerpt to go along with it.
If it’s just a link, I’m prone to skip it, especially if it is a link to a leftwing organization.
I agree.
I always try to do this. Who wants to read a multipage document that has only a sentence or two pertinent to the subject? Much easier to search for a portion that is given in the post. It also requires the poster to explain what the actual meaning is.
Good points!
Trillions of Iranian Rials.
What would that be in Uzbekistani tiyin?
We no longer mint pennies.
I skimmed the article, which reports generating electricity with wind turbines and solar panels. The lady author is unaware that 80% of general energy use is thermal energy, such as is used by the heavy industries and heavy transportation systems. Large amounts of fossil fuels will always be used in regions with long, cold and snowy winters like Canada, where I live.
https://www.epw.senate.gov/public/index.cfm/press-releases-all?ID=469DD8F9-802A-23AD-4459-CC5C23C24651
Things are hotting-up, on Pluto!
So . . . .
I did click on this link and learned that in 2007 the U. S. Senate’s Environment and Public Works committee entertained an article from the National Post by Lorne Gunter with information supplied by Dr. Imke de Pater of Berkeley University (sic). Actually a Dutch astronomer working at the University of California in Berkeley.
The bottom line is: The Sun is important to the temperature on the planets and its current active phase (note the 2007 date) is expected to wane in 20 to 40 years, at which time the planet (Earth) will begin cooling.
Story tip:
Marc Morano was on Fox News Channel this morning explaining how Trump pulling out of the UNFCCC organization was very significant, as any subsequent president who wanted to rejoin this organization would have to submit it to the U.S. Senate for approval.
He also explained that this covered treaties that went back to 1992. The foundation of the Alarmist Climate Change impetus.
I did a search and found the UNFCCC has about 1,200 workers in various locations. The IPCC has about 400 workers in Bern. For these workers the global warming gravy train is coming to an end.
Story Tip – H20 Symposium review
What’s that?
Special thanks to our co-organisers Dr Niamh Malone and Alison Jones for helping us navigate these meaningful waters, and to our commissioned artists (Ruby Westgate, Aimee Clarke, and Alison Reid) for adding their fresh streams of creativity to our flow. The day was a perfect blend of art-science-activism, proving that when it comes to water justice, we’re all in the same boat!
Happy New Queer! Have You Got Your State-Funded Gay Microbe Calendar Yet? – Daily Sceptic
Uncanny Strat, that’s exactly how I imagined you’d look.
I can believe that. Warped minds are so predictable. Why is that?
Sorry, no, happy to crush your dream…
Keep posting garbage Strat, you’ve obviously got nothing else to do.
But maybe you could take a day off a week and give us all a rest.
My, what a funny weasel you are.
Tell us, what do you do for an encore?
Says the guy who posts more often than him.
You should take a day or a year or a webpage off and go to websites with some sciency trannies,
there you can suck all the sausages you can see above for free.
They’ve resurrected Dame Edna?
Edna – nee Barry Humphries – was a razor sharp wit.
Yes, and of course he got ‘canceled’ in the end.
(Which, according to Ricky Gervais, is an accomplishment every entertainer should strive for and covet)
Yes, surely this time global warming is over for good. The staircase of denial is getting a new step:
So, what is it exactly that we are denying? Because it isn’t reality.
Exactly what does that graph inform us as to what is causing change?
The x-axis is time and the y-axis is some index of warming. Is time the independent value that CAUSES the rising trend? If not then you are not providing any scientific evidence of what is causing the changes.
Here are some questions.
Tell us how you can discern any of this from your graph. Do you consider any of the answers scientific?
Agricultural science already knows the answer. It’s climate science that doesn’t. Climate science has an impetus to remain willfully ignorant, in that “climate catastrophe” fuels their funding. If the narrative changed to “longer growing seasons are GOOD for humanity” their funding would disappear in 24 hours.
The graph tells you nothing whatsoever about what is causing the change. It tells you what change is happening over time. And that change is a long term warming trend over which is superimposed random and quasi-cyclic variability.
So you admit that you have no scientific evidence from this graph that warming is caused by anthropogenic activity! Why am I not surprised.
You also have no evidence that the warming is not a consequence of natural warming from the Little Ice Age.
You have no evidence that we have exceeded the optimum temperature of the earth.
You have no evidence of what the optimum temperature should be for the earth.
You do realize that the logical conclusion to your paranoid fear of warming is that the population of humans on the earth needs to be reduced to the barest minimum that will ensure propagation of the species, right?
We should cease using fossil fuels and plastic, minimize agricultural land use, have only small cities to minimize UHI, and use no artificial fertilizers. All that because you don’t have the ability to actually determine what anthropogenic warming is actually caused by. So, by the Precautionary Principle we should minimize anything that can cause anthropogenic warming.
>So you admit that you have no scientific evidence from this graph that warming is caused by anthropogenic activity!
Yes. You are conflating what a temperature time series shows with how causation is established in climate science. This error is yours.
We know the causation of most of the surface warming.
A mix of El Nino events, bad sites, urban warming and data adjustment.
“Yes. You are conflating what a temperature time series shows with how causation is established in climate science. This error is yours.”
Sorry, bud! *YOU* are the one that posted the graph as if it meant something. No one else. *YOU*.
Since you can’t explain what generates the data the error is YOURS, no one else’s.
“Sorry, bud! *YOU* are the one that posted the graph as if it meant something. No one else. *YOU*.”
It does mean something because it refutes Chris Morrison’s central claim that current cooling is exceptional.
edit: It does mean something because it refutes Chris Morrison’s central claim that current cooling is exceptional.
Morrison didn’t say the current cooling is EXCEPTIONAL. This is a strawman you’ve made up to argue with. Provide an applicable quote if you have one from Morrison.
“Global temperature anomalies on both land and sea are dropping like a stone. Net Zero-obsessed mainstream media, science and politics do not do cooling. Confirmation bias that holds humans responsible for hockey-stick style global warming with all its risible ‘settled’ notions has gravely damaged genuine climate science. But the world is cooling rapidly and the silence from the mainstream is both laughable and disgraceful.”
I don’t see the word “exceptional” in the quote anywhere. Do you?
“dropping like a stone” doesn’t equate to “exceptional”.
You are *still* criticizing *YOUR* own, made-up strawman, not what Morrison or anyone else said.
Typical for a climate science supporter apparently.
He may not explicitly use the word “exceptional,” but he clearly frames the recent cooling as out of the ordinary.
“He may not explicitly use the word “exceptional,” but he clearly frames the recent cooling as out of the ordinary.”
The two are *NOT* equivalent. Having a deer run out of the woods in front of your moving vehicle is “out of the ordinary” but it is *NOT* exceptional.
Why do so many people defending climate science not actually understand probability and statistics? Sometimes the long shot wins. That doesn’t mean it is an “exceptional” happening, it’s just out of the ordinary. “Exceptional” implies a difference in kind. “Out of the ordinary” implies not the average.
We just had a temperature drop of 30F last night. A significant drop in temperature. Out of the ordinary but *not* exceptional. Typical diurnal range in January is about 20F.
Tim, you dodge the point.
Nope. The fact that you can’t tell the difference is telling as to your goal.
Posting a temperature graph does not imply that the graph alone establishes causation. It shows what happened, not why it happened. I explicitly stated that from the outset. Interpreting the mere presentation of observational data as a claim about cause is a misreading, not an inference I made.
In science, it’s entirely normal to present descriptive evidence first and then discuss causation using additional lines of evidence. The mistake here is assuming that showing a trend is equivalent to asserting its cause, which is something I have not done.
Great answer, NOT! I’m not conflating a temperature time series with anything! That is the whole point! The only conflating going on is climate science saying, “Look, temperature is rising and so is CO2 concentration; CO2 must be causing it since they kinda match”. If there were evidence of a physical connection that verifies a functional relationship, you could show it to everyone. Currently, time series are the only game in town, which proves nothing.
Climate science does not say this, that is a misunderstanding on your part. The physical connection between CO2 and warming was established independently of modern climate trends through laboratory spectroscopy: CO2 absorbs and emits infrared radiation at specific wavelengths fixed by quantum mechanics. That mechanism is measured, not inferred from correlation.
We then observe that mechanism operating in the real atmosphere. Satellites detect reduced outgoing longwave radiation to space specifically at CO2 absorption bands, while surface instruments measure increasing downward longwave radiation at those same wavelengths. At the same time, the troposphere warms while the stratosphere cools, which is a vertical temperature structure predicted by greenhouse forcing and inconsistent with solar or internal variability explanations. The climate system also shows a persistent positive energy imbalance, with most excess heat accumulating in the oceans, which natural variability cannot sustain over decades.
Time series describe the pattern of change; they are not the basis for attribution. Attribution comes from physics, radiative transfer, vertical “fingerprints,” and energy accounting, all of which independently point to anthropogenic greenhouse gases as the dominant driver of recent warming. Calling this “just correlation” misrepresents both the evidence and how causation is established in physical science.
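The band-emission point above can be illustrated with Planck’s law directly: emission to space from a cold level in a CO2 absorption band carries less radiance than emission from the warm surface at the same wavelength. A minimal sketch; the 288 K and 220 K temperatures are illustrative assumptions, not measured values:

```python
import math

def planck_radiance(wavelength_m, temp_k):
    """Spectral radiance B(lambda, T) from Planck's law, in W / (sr * m^3)."""
    h = 6.62607015e-34   # Planck constant, J*s
    c = 2.99792458e8     # speed of light, m/s
    k = 1.380649e-23     # Boltzmann constant, J/K
    return (2 * h * c**2 / wavelength_m**5) / math.expm1(h * c / (wavelength_m * k * temp_k))

# 15 micron region (the CO2 bending-mode band). Representative temperatures
# are assumptions: ~288 K surface vs ~220 K upper troposphere.
b_surface = planck_radiance(15e-6, 288.0)
b_cold = planck_radiance(15e-6, 220.0)
print(b_surface > b_cold)  # colder emission level -> less radiance in the band
```

This shows only the mechanism’s sign, not its magnitude in a real atmosphere, which requires full radiative transfer.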
So what? Ice melts when put in a bowl on the kitchen table. That tells you absolutely nothing about what happens to water as it moves through the atmosphere.
Again, so what? There are lots of things, like water vapor, that can cause decreased longwave radiation to space!
Again, so what? Climate science says that longwave radiation causes “trapped heat” while thermodynamics says that is impossible! As T goes up linearly, radiative heat loss goes up as T^4. In physical science that’s called a negative feedback, not a positive feedback.
Again, so what? If the concentration of CO2 going up causes the cooling of the stratosphere it also raises the number of molecules radiating.
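The T^4 radiative-damping point can be checked in a few lines of Stefan-Boltzmann arithmetic. A minimal sketch, assuming an ideal blackbody and a representative 288 K surface, purely for illustration:

```python
# Radiative damping sketch: emitted flux scales as T^4 (Stefan-Boltzmann),
# so a small warming produces a disproportionately larger rise in heat loss.
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W / (m^2 * K^4)

def emitted_flux(temp_k):
    """Blackbody flux in W/m^2 (emissivity of 1 assumed for illustration)."""
    return SIGMA * temp_k**4

p0 = emitted_flux(288.0)           # ~390 W/m^2 at a typical surface temperature
p1 = emitted_flux(289.0)           # one kelvin warmer
extra_fraction = (p1 - p0) / p0    # dP/P ~ 4*dT/T ~ 4/288, about 1.4%
print(round(p1 - p0, 1), "W/m^2 extra per kelvin")  # about 5.4
```

A 0.35% rise in temperature thus buys roughly a 1.4% rise in emission, which is the negative-feedback behavior referred to above.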
I asked you before how the amount of joules per diurnal cycle changes due to the cooling. You still haven’t answered. So I’m asking again – how does the amount of joules (i.e. heat) being expelled to space change? Average flux intensity won’t tell you.
Do you have a clue?
No, attribution comes from correlation, not physics.
I have yet to see an actual accounting of the joules emitted to space per diurnal cycle. Without that radiative transfer is meaningless.
What energy accounting? Energy is accounted for using JOULES. You have offered *NO* accounting in terms of JOULES. Neither does climate science.
This entire reply is addressing a position I have never taken. I have repeatedly said that a temperature-vs-time graph does not establish causation. I posted the graph to show the observed pattern of change, not to claim it proves the cause.
You keep arguing as if I said “the graph proves CO2 caused warming.” I did not. That is a false characterization of my argument. Causation is a separate question that requires additional evidence beyond a time series. Until that distinction is acknowledged, you’re debating a claim I never made.
Then why do you keep posting graphs showing temperature vs time and CO2 vs time?
If there is no causation then any correlation is called “spurious”. From dictionary.com: “spurious – not genuine, authentic, or true; not from the claimed, pretended, or proper source; counterfeit.”
Then stop posting graphs of rising temperatures and CO2 concentration as if they are related.
Stop saying things like: “The physical connection between CO2 and warming was established independently of modern climate trends through laboratory spectroscopy”
A “physical connection” implies causation. And your claim is that this causal relationship has been proved.
Now you are trying to say that it hasn’t been proved.
Which is it?
I have not posted a CO2 vs time graph, nor a CO2-temperature comparison, in this thread. I posted only a temperature-over-time graph.
The reason was narrow and specific: the post implies that a low annual anomaly in 2025 means global warming is no longer progressing as anticipated. The graph shows that short-term dips and apparent plateaus occur frequently due to natural variability and have never implied that the long-term warming trend had stopped or slowed.
I have not argued about the cause of the warming trend in this thread at all. Attributing causation to that graph is something others are projecting onto it, not something I claimed.
Keep bashing those strawmen.
Causation hasn’t been established in climate science.
The warmists haven’t had the physical science education to understand what they are claiming. If you graph two time series and say, “Look, they are correlated, so one causes the other!”, you have just claimed causality. Causality means a functional relationship exists between the two and will allow calculating one from the other. Somehow that second step never appears, not even after 50 years of claiming it.
Every prediction of what temperature will do should be prefaced with phrases like:
“If you graph two time series and say, “Look, they are correlated, so one causes the other!”, you have just claimed causality.”
Nobody does this. If you think they do, provide an actual scientific paper making such a claim.
“Causality means a functional relationship exists between the two…”
No it does not.
Why is the world spending trillions of dollars on removing CO2 if CO2 isn’t considered the cause of temperature rise? Somebody thinks CO2 is the cause. What do you think?
Because it is considered the main cause of rising temperatures.
You just told me this, and then you turn around and say this.
Yet there is no physical evidence relating CO2 concentration to temperature. As a result you and many others “consider” correlation to be proof of causality. Pseudoscience at its finest. You’d better hope engineering doesn’t fall into this acceptance or you’ll be flying in questionable aircraft.
Read what you said and then what I said. They are not the same thing.
Your claim was:
“As a result you and many others “consider” correlation to be proof of causality.”
Just stop lying.
“Nobody does this. If you think they do, provide an actual scientific paper making such a claim.”
“Because it is considered the main cause of rising temperatures.”
Oops.
““Causality means a functional relationship exists between the two…”
you: “No it does not.”
The typical excuse given to support your assertion is that probabilistic causality does not fit the definition of a function.
This is the same issue Planck faced. The answer is that the macro aggregate of the quantum probabilistic causality *is* a functional relationship. Otherwise he could never have described the radiation from a black body. The very same thing applies to electrons tunneling through an energy barrier in a semiconductor. While each individual case is a probabilistic one, in aggregate the current produced *is* a functional relationship.
Climate science is exactly the same. While the absorption and emittance of a photon of energy at the individual molecule level may be probabilistic, in aggregate those probabilities produce a functional relationship. Just as does convection and conduction of heat from the surface to the heat sink we call “space”.
When what you are graphing is an AGGREGATE, then a functional relationship is required for AGGREGATE causality.
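The aggregate point can be demonstrated with a toy simulation: each individual “emission” below is random and unpredictable, yet the macroscopic mean settles onto a fixed, repeatable value. The exponential energy distribution and its mean of 1.0 are illustrative assumptions, not physics:

```python
import random

random.seed(42)

def single_event():
    """One probabilistic 'emission': exponentially distributed energy
    (arbitrary units). The individual draw is unpredictable; only its
    distribution is fixed."""
    return random.expovariate(1.0)  # illustrative mean energy of 1.0

def aggregate_flux(n_events):
    """The macroscopic aggregate: mean energy over many events."""
    return sum(single_event() for _ in range(n_events)) / n_events

# A handful of events scatters widely; a macroscopic number converges
# on the distribution mean -- a deterministic (functional) value.
small = aggregate_flux(10)
large = aggregate_flux(1_000_000)
print(round(large, 2))  # close to 1.0
```

This is just the law of large numbers, which is the sense in which a probabilistic micro-process can yield a functional macro-relationship.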
“The typical excuse given to support your assertion is that probabilistic causality does not fit the definition of a function.”
It doesn’t fit the definition of a functional relationship.
“This is the same issue Planck faced.”
I’m not sure it is. I’m not talking about quantum mechanics, or random effects as such – just the impossibility of having a functional relationship when there are any number of factors that will determine the output. You can model that as a functional relationship plus a “random” error – but then you don’t have a functional relationship.
“While the absorption and emittance of a photon of energy at the individual molecule level may be probabilistic, in aggregate those probabilities produce a functional relationship.”
Almost as if you accept that the average of a large number of random things can tend to a true value. But how do you use this in the correlation between global temperature and CO2? You can’t run the world multiple times to get close to a deterministic average.
You are back to using the argumentative fallacy of Equivocation.
If this were true in the aggregate then the current through a semiconductor junction could not be defined by a functional relationship.
How many factors must there be? Ohm’s Law of V = IR has multiple factors such as the resistivity of the medium carrying the current, such as the probabilistic function describing the tunneling of electrons through an energy barrier, such as the heating impact on the material involved.
The number of factors is *NOT* a determining factor of whether a functional relationship exists or not. The number of factors may influence the measurement uncertainty of the result but that does *NOT* mean the functional relationship doesn’t exist.
You are not sure because you’ve never actually read Planck.
Why do you make statements like this when you *know* you have no idea what you are talking about? A functional relationship only requires that there be one output for one set of inputs. The very simple functional relationship of distance = velocity x time has multiple factors. Yet you only get one value for the answer, distance, for one set of input factors, velocity and time. The fact that there can be multiple values for each factor doesn’t mean the equation doesn’t describe a functional relationship.
You STILL have never actually studied the GUM for meaning and context. The inability to accurately measure a true value doesn’t mean that a true value doesn’t exist! The functional relationship doesn’t require that we be able to actually measure the true value in order for the functional relationship to exist.
GUM: 4.1.1 In most cases, a measurand Y is not measured directly, but is determined from N other quantities X1, X2, …, XN through a functional relationship
Y = f (X1, X2, …, XN )
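That GUM relationship is exactly what drives the combined standard uncertainty u_c² = Σ cᵢ²uᵢ² (uncorrelated inputs). A minimal sketch, using numerical partial derivatives and a hypothetical measurand P = V²/R; this is an illustration of the idea, not a full GUM implementation:

```python
import math

def combined_uncertainty(f, x, u, eps=1e-6):
    """GUM-style combined standard uncertainty for Y = f(X1, ..., XN):
    u_c^2 = sum_i (df/dx_i)^2 * u_i^2, with sensitivity coefficients
    taken by central difference. Uncorrelated inputs assumed."""
    total = 0.0
    for i in range(len(x)):
        xp = list(x); xp[i] += eps
        xm = list(x); xm[i] -= eps
        ci = (f(xp) - f(xm)) / (2 * eps)   # sensitivity coefficient c_i
        total += (ci * u[i]) ** 2
    return math.sqrt(total)

# Hypothetical measurand: power P = V^2 / R from measured voltage and resistance.
power = lambda q: q[0] ** 2 / q[1]
u_c = combined_uncertainty(power, x=[10.0, 5.0], u=[0.1, 0.05])
print(round(u_c, 3))  # sqrt((4*0.1)^2 + (4*0.05)^2) ~ 0.447
```

The point is simply that the functional relationship f is what makes the uncertainty of Y computable from the uncertainties of the inputs.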
One more proof that you’ve never actually read Planck at all.
Planck: “It might be added that a very similar and equally essential restriction is made in the kinetic theory of gases by dividing the motions of a chemically simple gas into two classes: visible, coarse, or molar, and invisible, fine, or molecular. For, since the velocity of a single molecule is a perfectly unambiguous quantity, this distinction cannot be drawn unless the assumption be made that the velocity-components of the molecules contained in sufficiently small volumes have certain mean values, independent of the size of the volumes. This in general need not by any means be the case. If such a mean value, including the value zero, does not exist, the distinction between motion of the gas as a whole and random undirected heat motion cannot be made.”
Your inability to understand simple physical science concepts knows no bounds, does it? An AGGREGATE value is not the same thing as a “true value”.
Temperature itself is an AGGREGATE value. It is based on the aggregate total “pressure” exerted by the random motion of the atoms making up the measurand. If you can’t use aggregate values in a functional relationship, then how is temperature able to be averaged? How can it be used in *anything* associated with thermodynamics?
“ But how do you use this in the correlation between global temperature and CO2? You can’t run the world multiple times to get close to a deterministic average.”
If global temperature (something which doesn’t actually exist) and CO2 have no functional relationship then any correlation between them is spurious. Spurious meaning: “not genuine, authentic, or true; not from the claimed, pretended, or proper source; counterfeit.” It doesn’t matter how many “multiple times” you run the world; if there is no functional relationship then there will be no deterministic average.
“You are back to using the argumentative fallacy of Equivocation.”
Do you actually know what the word means?
“The number of factors is *NOT* a determining factor of whether a functional relationship exists of not.”
The claim I was disputing was not whether there is a hypothetical functional relationship between all possible influence factors and the output.
“The fact that there can be multiple values for each factor doesn’t mean the equation doesn’t describe a functional relationship.”
We are not talking a simple relationship involving two factors. We are talking about the relationship between CO2 and temperature. Where annual temperature depends on countless factors, many unknowable.
“You STILL have never actually studied the GUM for meaning and context. ”
Irrelevant. I wasn’t talking about measurement uncertainty.
“If global temperature (something which doesn’t actually exist) and CO2 have no functional relationship then any correlation between them is spurious.”
Do you think there is a correlation between the length of a day and temperature? Can you describe a functional relationship between the two? If not, do you think that makes any correlation spurious?
We’ve been thru this before. A functional relationship allows one to predict a unique value from a set of inputs. This requires making a graph with an independent variable(s) and a unique dependent variable. Simply, the ability to replicate the information in later experiments.
Has it never occurred to you that of the trillions spent on salaries, super computers, wind and solar, etc., no one has said “let’s build a 600 foot tall tower with a diameter of 30 feet in order to test the radiative qualities of CO2”? Basically a windmill tower with environmental controls. That isn’t nearly as complicated as a miles-long super collider buried underground.
Nobody wants to risk losing a cash cow so let’s just wait till the climate proves us wrong!
“This requires making a graph with an independent variable(s) and a unique dependent variable.”
Your claim is that this is required if there is causality. I’m saying it is not.
“Simply, the ability to replicate the information in later experiments.”
And how do you do this when the experiment is the globe?
Time is not generally a “causative” factor in most functional relationships. Rates are not typically causative, they are a tracking variable. E.g. the “rate” at which a hot-air balloon rises doesn’t *cause* the balloon to rise, it merely tracks the results of the causative factors.
The functional relationship is: F_buoyancy = f(p,V)
The functional relationship is not F_buoyancy = f(p,V,t)
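Putting illustrative numbers on that relationship: the net lift on a hot-air balloon follows from density and volume alone, with time nowhere in the function. All figures below (pressure, temperatures, envelope volume) are assumptions for the sketch, not data:

```python
# Sketch of F_buoyancy = f(rho, V) via Archimedes: net lift comes from the
# density difference between the ambient air and the heated envelope air.
G = 9.81        # gravitational acceleration, m/s^2
R_AIR = 287.0   # specific gas constant for dry air, J/(kg*K)
P = 101325.0    # ambient pressure, Pa (assumed)

def air_density(temp_k):
    """Ideal-gas density: rho = p / (R * T)."""
    return P / (R_AIR * temp_k)

volume = 2800.0                    # envelope volume, m^3 (typical sport balloon, assumed)
rho_outside = air_density(288.0)   # ~15 C ambient air
rho_inside = air_density(373.0)    # ~100 C envelope air
net_lift = (rho_outside - rho_inside) * G * volume   # newtons
print(round(net_lift), "N of net lift")
```

Note that holding the densities and volume fixed while letting time run changes nothing, which is the “time is a tracking variable, not a cause” point.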
The experiment is right there in front of you every second of every day. You merely have to learn from it instead of “guessing” at what it could be.
“Time is not generally a “causative” factor in most functional relationships. ”
More red herrings. We are not talking about time.
If you can replace the time value with a simple interval value, and the dependent variable values stay the same, then time is not causative. What you are dealing with is a time series.
It means the dependent variable does not have a functional value of temperature/time.
me: ““Time is not generally a “causative” factor in most functional relationships. ”
you: “More red herrings. We are not talking about time.”
Is the temperature data from UAH that you graph not plotted against time?
If the UAH data is plotted against time then CO2 must also be plotted against time in order to show correlation!
You just keep getting further and further out into left field with every pitch. You already can’t see home plate. How much further are you going to go?
“Is the temperature data from UAH that you graph not plotted against time?”
Which graph? I’ve plotted graphs showing UAH and other data sources against time – but also showing the correlation with CO2 and other factors.
But the correlation, indicated by the red line is not against time, it’s against CO2, ENSO etc.
“But the correlation, indicated by the red line is not against time, it’s against CO2, ENSO etc.”
You would get the same kind of correlation graphing butter prices against gun prices. So what?
This only makes sense if you believe correlation is causation.
Again, have you heard of “confounding variables”? Things that go up over time are CORRELATED. EVERY THING THAT GOES UP OVER TIME is correlated with EVERYTHING else that goes up over time. The confounding variable is TIME.
They are all correlated with TIME, not with each other.
Climate science will never determine causation with their predilection for time series. They will require more and more parameters in their models to make the models and observations agree.
Parameters are not functional relationships, they are guesses. Their use is to curve match. With natural variation, the modelers will never catch up.
“you would get the same kind of correlation graphing butter prices against gun prices. So what?”
You keep asking that, and never listen to the answer. The so what, is that it demonstrates there is a correlation. This was produced in response to frequent claims that there was no correlation.
Correlation does not imply causation, but it is a necessary, though not sufficient, condition.
If temperatures are going up linearly over time then any thing going up more or less linearly over time will show a correlation. But there are two main points there.
Firstly, most things that increase over time do not have a reasonable way of causing a rise in temperature. CO2 on the other hand has been predicted to cause warming long before any correlation could be observed. Unless there is a reasonable explanation for how postage or butter prices can affect global temperatures then it’s easy to dismiss any correlation as spurious. Not so easy to dismiss the CO2 correlation.
Secondly, whilst it’s true that any warming trend will correlate with any other increasing trend – there is no reason to suppose that if CO2 is not a cause, that there would be any warming. If the trend was flat since the 1970s, or if the globe had started cooling, as there were good reasons to expect it might, then that would falsify the CO2 hypothesis. And this is the way science is meant to work, according to the Popper paradigm, you never prove a hypothesis, just fail to falsify it.
I know EXACTLY what it means. It means using the same word to describe different things and never specifying which definition is being used. That way you can always say “I wasn’t talking about that, I was talking about this”.
You do it ALL the time with the word “uncertainty”. Even after multiple requests to start specifying what uncertainty you are speaking of at the time you refuse to do so. That way you can say “I wasn’t talking about measurement uncertainty, I was talking about sampling uncertainty”. Or “I wasn’t talking about sampling uncertainty, I was talking about measurement uncertainty”.
Give me a break! If there is a functional relationship then it simply doesn’t matter if there is only one influence factor or an infinite number of influence factors. The only difference between all of them is the ability to accurately specify the influence factors involved.
And now you are back to assuming that there *IS* a functional relationship in which CO2 is one of the factors. If you can’t define the functional relationship THEN YOU DON’T KNOW IF CO2 IS AN INFLUENCE FACTOR OR NOT.
Exactly! Equivocation.
No, there isn’t a correlation. Long days can be cold or they can be hot. There *is* a correlation between daytime temperature and the sun’s insolation. While it is far more complicated than what even the climate models encompass, a functional relationship *can* be defined. But it is *NOT* a direct functional relationship, because temperature is an intensive value. The functional relationship is actually with enthalpy, i.e. HEAT, not temperature. If all the factors associated with the enthalpy value are known, then the temperature can be calculated. The issue is how you determine the internal energy part of enthalpy – it is not a measurable measurand.
You don’t know enough physical science to even understand what you don’t know. Yet you come on here lecturing about how thermodynamics and atmospheric physics all work. Don’t you ever get tired of having your nose rubbed in the mess you make on the carpet?
“I know EXACTLY what it means.”
And yet you say I’m using it with regard to a functional relationship.
“You do it ALL the time with the word “uncertainty”.”
Red herring – we are not talking about uncertainty here.
“If there is a functional relationship then it simply doesn’t matter if there is only one influence factor or an infinite number of influence factors.”
It matters if you insist that a functional relationship has to be shown in order for causality to exist.
“And now you are back to assuming that there *IS* a functional relationship in which CO2 is one of the factors.”
Maybe there is, maybe there isn’t. The point is, you do not have to demonstrate a functional relationship involving an infinite number of inputs in order to deduce that there is causality.
“If you can’t define the functional relationship THEN YOU DON’T KNOW IF CO2 IS AN INFLUENCE FACTOR OR NOT.”
I believe that the sun is an influence factor in global temperature. Yet there’s no way anyone could define a functional relationship, so by your logic it’s impossible to know if the sun is an influence or not.
“Exactly! Equivocation.”
You are claiming it’s equivocation to not talk about measurement uncertainty? Are you absolutely sure you know exactly what the word means?
“No, there isn’t a correlation.”
You are saying there is no correlation between day length and temperature? Do you understand what correlation means?
No, I am saying you are not differentiating between individual properties and aggregate properties. The vector for an individual atom/molecule is *NOT* a quantum mechanics function but is still impossible to determine. Yet the aggregate vector *can* be determined – otherwise it would be impossible to measure temperature. That means a functional relationship can be defined for the aggregate vector.
You are using multiple definitions for “functional relationship factors” just like you use multiple definitions for the “uncertainty”. And you bounce back and forth between the definitions based on the moment by saying “I was talking about the other definition”. ARGUMENT BY EQUIVOCATION!
So what? That doesn’t prevent its use as an example of how you use Argument by Equivocation. It is the Equivocation that is the issue. You do the same with “functional relationship”.
If you can’t define a functional relationship then you SIMPLY DO NOT KNOW if causality exists or not. Being able to define the functional relationship does *NOT* determine if the functional relationship exists. Nor does the number of influence factors involved determine if a functional relationship exists or not.
Your lack of knowledge concerning the form of the functional relationship is a PERSONAL issue, it is not an issue with the physical world. In 0 BC no one knew Ohm’s Law existed as a functional relationship with multiple influence factors. Yet that did *NOT* mean that the functional relationship with multiple influence factors didn’t exist in the physical world.
Unfreakingbelievable! It’s like you’ve never heard the term “confounding variable”. Your logic here is how it was deduced that the causality of daylight comes from Apollo riding his chariot across the sky! Just because the ancient Greeks didn’t know all the factors involved in the orbital mechanisms of the solar system didn’t mean that the functional relationship didn’t exist.
Are you kidding? The sun’s insolation can be measured. That insolation is the transport of heat energy. That means that a functional relationship between the sun and the amount of heat injected into the Earth’s biosphere can be determined based on physical world laws of thermodynamics. The fact that you might not know *all* the factors associated with the heat engine known as Earth doesn’t mean that it doesn’t exist.
Now you are just dissembling in order to avoid having to admit that you use the word “uncertainty” all the time without actually defining which uncertainty you are talking about — allowing you to say “I was talking about a different uncertainty”.
Ignoring all the usual attempts at distraction, this is the key point
“The fact that you might not know *all* the factors associated with the heat engine known as Earth doesn’t mean that it doesn’t exist. ”
Of course a functional relationship exists, but you can only know it if you are omniscient. So usually, in the real world, you have a non-deterministic, non-functional relationship. But the Gormans also insist that
“If you can’t define a functional relationship then you SIMPLY DO NOT KNOW if causality exists or not.”
And following that logic you don’t know if the sun is a cause or not.
“Of course a functional relationship exists, but you can only know it if you are omnisient.”
Talk about a distraction!!!!
You don’t have to be omniscient to know that a functional relationship exists. You don’t even understand that you don’t have to know the exact equation in order to define that a functional relationship exists.
The fact that you can write F_buoyancy = f(p,V) shows a functional relationship exists. It allows hypotheses to be formulated on the relationship between the factors.
The problem is that you cannot write T = f(a,b,c,CO2,…) because you simply don’t know if CO2 is a factor in the functional relationship!
You may as well write T = f(population_of_extra-solar_aliens_on_Mars, …).
“And following that logic you don’t know if the sun is a cause or not.”
Of course I know if the sun is a factor. It is the only source of heat into the biosphere of Earth. CO2 is *NOT* a source of heat. CO2 does not “trap” heat. The loss of heat from the Earth to space is a time function and must be integrated over time. The loss of heat from the Earth to space is *NOT* an instantaneous balance of radiative flux at the top of the atmosphere. The loss of heat from the Earth to space is *NOT* an arithmetic average of the maximum flux intensity and minimum flux intensity because the flux is *NOT* linear, it is exponential.
I’ve asked you this before and you just ignore it every time it is asked.
I place a rock at 70F in your hand and then place a second rock at 80F in the same hand. Do you have a total of 150F in your hand?
“You don’t have to be omniscient to know that a functional relationship exists.”
You misconstrued my meaning. A functional relationship exists, but you don’t know what it is unless you are omniscient.
Your math is atrocious. As I pointed out you do *NOT* need to quantify all coefficients and powers of every factor to identify a functional relationship. y = f(a, b, c, …) is sufficient. But you must *KNOW* that each component is truly an influence factor in the function in order to include it in the list. No one knows if CO2 *is* an influence factor in the heat balance of the earth or merely a transport mechanism internal to the thermodynamic system of the earth.
“As I pointed out you do *NOT* need to quantify all coefficients and powers of every factor to identify a functional relationship. y = f(a, b, c, …) is sufficient.”
That is not defining a function. You are simply claiming a functional relationship exists – not saying what it is.
And that, my friend, is the issue. Defining the function by “saying what it is” is the purpose of funding climate science research. Has 50 years of research and probably trillions of dollars gotten even one variable defined?
WRONG.. The only warming in the UAH data comes at El Nino events.
Between those El Nino spike+step events….. there is no warming.
There is no “long term warming trend”..
In fact the trend over 3000 years is very much a COOLING trend.
This is true!
However, as I’ve pointed out before, the satellite period is very short. 45 or 46 years is the blink of an eye.
50 years is not long term.
Where are your uncertainty limits?
“Where are your uncertainty limits?”
ROFL! Do you really think he has a clue on this? He still believes they all cancel out!
Of course he believes this, and he was too timid to admit it! Couldn’t even generate a multi-paragraph word salad in support.
Uncertainty estimates are published in the primary literature alongside the dataset. See, e.g., Rohde et al. 2013 and Rohde and Hausfather 2020 for descriptions of the methodology. Here is the Berkeley Earth temperature series with 95% uncertainty plotted:
Ha, ha! LOL.
Uncertainty estimates? Show us the uncertainty budgets and their propagations.
Be a professional if you are going to indicate that you know what you are discussing.
I provided citations to the primary literature. Let me know if you have questions related to the methodology after reading the papers.
“Uncertainty estimates are published in the primary literature alongside the dataset.”
You mean like in the Berkeley Earth dataset that shows some measurements from the 19th century with measurement uncertainties in the hundredths digit?
Yeah, those are *so* believable!
“Yeah, those are *so* believable!”
Yes, the law of large numbers is believable.
The law of large numbers has no bearing on the measurement uncertainty of an individual measurement. Nor does the law of large numbers have anything to do with the accuracy of a group of measurements. It only has to do with how precisely you can locate the average value of the given measurements, it tells you NOTHING about the accuracy of the average you have so precisely located. A precisely located average of wildly inaccurate measurements will also be wildly inaccurate.
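The precision-versus-accuracy distinction being drawn here can be sketched with simulated readings carrying an assumed systematic offset of +0.5 (all numbers illustrative): averaging tightly locates the mean, but the shared bias survives untouched.

```python
import numpy as np

rng = np.random.default_rng(1)
true_value = 20.0
bias = 0.5          # assumed systematic offset shared by every reading
n = 10_000

readings = true_value + bias + rng.normal(0, 1.0, n)

mean = readings.mean()
sem = readings.std(ddof=1) / np.sqrt(n)   # how precisely the mean is located

print(f"mean = {mean:.3f} +/- {sem:.3f}")                # tightly located...
print(f"error vs true value = {mean - true_value:.3f}")  # ...but still off by ~bias
```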
Stop using terms you obviously do not understand.
I said uncertainty limits, not “estimates”.
How does Berserkly Earth get “estimates” of milli-Kelvins from data that has at best ±1°F instrumental uncertainty?
Answer—they don’t understand that uncertainty always accumulates any better than you (don’t).
This is a category error. Berkeley Earth is not claiming milli-Kelvin accuracy for individual thermometer readings. The milli-Kelvin figures refer to the estimated mean temperature anomaly of a very large dataset, where random measurement noise averages down.
More importantly, their uncertainty is not computed by simply propagating thermometer precision. Instrument noise is a minor term. The published uncertainty explicitly includes station breakpoints and homogenization uncertainty, spatial sampling and coverage uncertainty, and methodological sensitivity in the statistical reconstruction. Those effects dominate the error bars, which are orders of magnitude larger than milli-Kelvins.
Uncertainty here does not “accumulate”; it is quantified and propagated through a full statistical model. Saying they “don’t understand uncertainty” misrepresents both the methodology and the reported results.
Ah yes, the standard climatology pseudoscience claim that all error is random, Gaussian, and cancels.
Translation—“if I ignore it, it will just go away.”
Yet somehow their graph shows tiny “error bars” in the milli-Kelvin range. You aren’t very good at this.
And guess what, uncertainty is still not error, something both you and Berserkly Earth don’t understand.
This misrepresents both what Berkeley Earth claims and basic error analysis.
No one says “all error is random” or that uncertainty is the same as error. The point is that independent, unbiased errors average down, while systematic errors are explicitly identified, corrected, or bounded. Berkeley Earth treats these separately and documents both.
Instrument noise is not the dominant term in global mean temperature uncertainty. Spatial sampling, station moves, time-of-observation changes, and coverage gaps are. Those uncertainties shrink dramatically when averaging thousands of stations over large areas and long periods, which is why the global mean uncertainty can be much smaller than individual measurement errors. That is standard statistics, not pseudoscience.
Large local uncertainties do not imply large uncertainty in a global average. Confusing those two is the error here.
Your whole screed relies on the law of large numbers (LLN), more specifically, the strong LLN. There are very specific assumptions that must be met to use this, and the related 1/√n factor. Too many statisticians are never exposed to this. The textbooks deal with constant populations and data with no uncertainty.
There is a reason the GUM specifies the use of repeatable conditions. One of those is measuring the same thing. That is only one condition necessary for the LLN to be invoked and to use the experimental standard deviation of the mean.
The strong LLN requires IID in the samples of a population. That will be difficult to overlook when using various averages of different populations.
You will end up making a choice: multiple samples of size one, or one sample of a large size. One sample cannot create a sample means distribution; only multiple samples can do that.
Berkeley Earth is not invoking the strong LLN on IID samples of a single population. Global mean temperature is an estimated parameter derived from a spatiotemporal field, and its uncertainty is computed via error propagation and resampling, not by assuming IID thermometer readings.
The √n reduction applies to independent error components, not to populations being “identical.” Independence is addressed explicitly by modeling spatial correlation lengths and temporal autocorrelation. Correlated errors are not assumed to cancel; they are accounted for and bounded. This is standard geostatistics, not textbook LLN hand-waving.
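How much averaging reduces the uncertainty of a mean depends on the correlation between the error components. For n equicorrelated components with pairwise correlation rho, the variance of the mean is sigma^2/n * (1 + (n-1)*rho), which approaches sigma^2*rho rather than zero as n grows. A sketch (illustrative sigma and rho values):

```python
def var_of_mean(sigma, n, rho):
    """Variance of the mean of n equicorrelated error components,
    each with standard deviation sigma and pairwise correlation rho."""
    return (sigma**2 / n) * (1 + (n - 1) * rho)

sigma = 0.5   # illustrative per-component uncertainty
for rho in (0.0, 0.1, 0.5):
    sd_mean = var_of_mean(sigma, 1000, rho) ** 0.5
    print(f"rho={rho}: sd of the mean = {sd_mean:.4f}")
```

With rho = 0 the familiar sigma/√n falls out; with any nonzero rho the reduction plateaus, which is why both sides agree correlated components do not simply cancel.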
The GUM does not prohibit averaging heterogeneous measurements. It requires that uncertainty sources be identified and propagated correctly, which is exactly what Berkeley Earth does by separating measurement error, bias corrections, spatial sampling uncertainty, and coverage uncertainty.
Finally, no one is claiming a “sample means distribution” from a single realization. The uncertainty bars represent confidence intervals on the estimated global mean, derived from ensemble reconstruction and Monte Carlo methods. That is estimation theory, not misuse of LLN.
Invoking IID requirements here is applying the wrong theorem to the wrong problem.
“its uncertainty is computed via error propagation and resampling, not by assuming IID thermometer readings.”
What error propagation? One more time, the minimum measurement uncertainty of most temperature measuring stations is +/- 0.3C. Where do you see that propagated anywhere in climate science?
It’s not even propagated into climate science’s daily mid-range temperature! Error propagation would give a measurement uncertainty of the mid-range temperature as being +/- 0.4C! The average of two station’s mid-range temperatures would give a measurement uncertainty of +/- 0.6C!
Does the equation
u_c^2(y) = Σ u^2(x_i)
mean nothing to you at all?
“The √n reduction applies to independent error components, not to populations being “identical.” Independence is addressed explicitly by modeling spatial correlation lengths and temporal autocorrelation. “
Meaningless word salad!
Spatial correlation in climate science is garbage. It assumes a flat earth with a homogeneous surface and atmosphere – which is *NOT* reality. Pikes Peak and Colorado Springs are quite close based on longitude and latitude but the correlation between their temperatures is essentially ZERO. San Diego and Ramona, CA are about 30 miles apart but have VASTLY different temperatures. Climate science doesn’t even recognize that the temporal autocorrelation between the northern hemisphere and southern hemisphere can’t be addressed solely by differencing. Warm temperatures and cold temperatures have different variances. Yet climate science just jams temperature measurements from both hemispheres together with no weighting at all for the effect of differing variances!
All you are doing is parroting the usual climate science excuses that might fool the uninitiated. Those excuses simply don’t stand up to scrutiny based on physical reality.
The ±0.3 °C figure applies to an individual measurement. Climate datasets do not report the uncertainty of individual station readings or daily mid-range values. They report the uncertainty of an estimated large-scale mean anomaly. Those are different measurands with different uncertainty propagation.
The equation you quote applies when combining independent uncertainty components for a single derived quantity. It does not say that averaging many observations cannot reduce uncertainty in an estimate. Error propagation for an estimator depends on the estimator and the covariance structure of the inputs. That is why spatial and temporal correlation are explicitly modeled. Correlated components do not cancel. Independent components do.
Spatial correlation is not assumed flat or homogeneous. It is empirically estimated from data and varies with distance, geography, and scale. Pointing out that two nearby locations can have different local climates does not invalidate spatial correlation of anomalies over larger scales. Local variance does not negate large-scale coherence.
Northern and southern hemisphere data are not “jammed together.” They are area-weighted, baseline-adjusted, and combined only after accounting for differing coverage and variance. That is documented.
Nothing here requires assuming IID thermometer readings, Gaussian cancellation of all uncertainty, or millikelvin instrument accuracy. Those are strawmen. What is being estimated is a large-scale statistical quantity, and its uncertainty reflects how well that quantity is constrained, not the precision of any one sensor.
You are applying single-measurement metrology rules to a field-scale estimation problem. That category error is doing all the work in your argument.
Of course they don’t. They assume everything is Gaussian, random and cancels. Every measurement becomes 100% accurate.
That allows the assumption that resolution can be anything you want it to be. Voila, you can have measurements measured to the units digit magically become accurate to the 1/1000ths decimal.
Typical of a statistician that never studied metrology and its purpose. Read the GUM, Sections 7 and 8.
“The ±0.3 °C figure applies to an individual measurement.”
You propagate the individual measurement uncertainties! What do you think Eq 10 in the GUM implies?
u_c^2(y) = Σu^2(x_i)
Bullshite. Complete and utter bullshite!
They don’t report the uncertainty of individual station readings or daily mid-range values because it would subsume the anomaly differences! It would turn their whole averaging hierarchy into garbage!
That “large-scale mean anomaly” DOES HAVE MEASUREMENT UNCERTAINTY. Ignoring it means the “large-scale mean anomaly” is simply garbage.
As I’ve pointed out to you multiple times, an anomaly is nothing more than a linear transformation of a distribution using a constant. It does *NOT* change the standard deviation of the distribution one iota! And as the GUM lays out, the standard deviation *IS* the measurement uncertainty.
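That subtracting a fixed constant leaves the standard deviation unchanged is a one-line check (illustrative numbers). When the baseline is itself an estimated average rather than a fixed constant, its own uncertainty is a separate question, which is part of what is disputed in this thread.

```python
import numpy as np

temps = np.array([14.2, 15.1, 13.8, 16.0, 14.9])   # illustrative readings
baseline = 14.5                                     # fixed reference constant

anomalies = temps - baseline

sd_raw = np.std(temps, ddof=1)
sd_anom = np.std(anomalies, ddof=1)
print(sd_raw, sd_anom)   # identical: a constant shift changes no dispersion
```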
Climate science just totally ignores all physical science concepts as well as all basic statistical concepts in order to make their case. There is not another discipline that I know of that always assumes that all measurement uncertainty is random, Gaussian, and cancels, i.e. that the standard deviation of a data set is always equal to zero.
You are back to trying to use the SEM as the measurement uncertainty. When you use the phrase “uncertainty in an estimate” you are talking of the SEM – AND NOT THE ACCURACY OF THE MEAN!!!
An inaccurate set of data simply can’t produce an accurate mean, no matter how precisely you calculate that mean it will always be as inaccurate as the data.
No, they don’t. Even an apprentice carpenter can tell you that on their second day of work! The uncertainties in the measurement of two boards, i.e. two different components, used to span a basement DO NOT CANCEL. They add!
As I keep saying, I NEVER want to drive on a bridge designed by a climate scientist!
“Northern and southern hemisphere data are not “jammed together.” They are area-weighted, baseline-adjusted, and combined only after accounting for differing coverage and variance. That is documented.”
I have yet to read a climate science article where the variances of the temperature in the hemispheres was used as a weighting factor in determining the global average temperature. This is *YOUR* assertion. *YOU* provide a reference showing how this is done or I will have to assume that you are just making this up!
And again, area-weighting is garbage unless *all* components are included, including things such as topography, evapotranspiration, pressure fronts, etc. I have yet to see where that is done in any climate science paper on the global average temperature. PROVIDE A REFERENCE.
And, again, baseline adjustments do *NOT* cancel out measurement uncertainty. That is a linear transformation with a constant which does *NOT* change the standard deviation of the distribution at all. Why you keep ignoring this can only be assigned to willful ignorance. The worst kind of ignorance.
Of course it does! Because it increases the variance of the data! Exactly how much training do you have in statistics anyway?
You wouldn’t last a week in a job where actual understanding of measurement uncertainty is critical. If you were a professional engineer signing off on a project which could actually result in harm to humans you would be in jail by now. Balcony collapses, bridge failures, space shuttle explosions, etc come to mind immediately!
Of course they aren’t! However that means they can’t divide by the √n because they do not meet the requirements for doing so.
There is no such thing as independent error components that are divided by the √n. Read the GUM dude. If you have independent errors (and errors are not used anymore) then you also have different populations. Those error components add together and are not divided by √n.
What you are describing is f(X1, X2, …, Xn). Those are independent evaluations of different things that make up a measurand. Their uncertainty (error) adds as shown in every metrology book I physically own or use online.
If you are using the √n, then your sample and sample means distribution must come from the same population! Show us a research paper or textbook that shows something different.
If you add or subtract two (or more) populations, you add their means AND add their individual variances. That is why uncertainty always adds.
It is telling that you can never show a resource for your assertions. Until you can they are not meaningful.
“If you add or subtract two (or more) populations, you add their means AND add their individual variances. That is why uncertainty always adds.”
It’s futile pointing this out to someone incapable of learning, but once again, adding is not taking an average. Adding the measurement uncertainties will give you the measurement uncertainty of a sum, but that uncertainty has to be divided by n when you divide the result by n.
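The two arithmetics being argued over can be put side by side for the mean of n inputs. Applying GUM Eq 10 to y = (x_1 + … + x_n)/n, every partial derivative is 1/n, so for independent inputs u(y) = √(Σu_i²)/n; summing the uncertainties and then dividing by n gives the average uncertainty instead. A sketch with ten assumed ±0.3 inputs:

```python
import math

u = [0.3] * 10   # ten inputs, each with an assumed standard uncertainty of 0.3
n = len(u)

# GUM Eq 10 applied to y = (x_1 + ... + x_n)/n: every partial derivative is 1/n,
# so for independent inputs u(y) = sqrt(sum(u_i^2)) / n
u_mean_gum = math.sqrt(sum((ui / n) ** 2 for ui in u))

# Straight addition of the uncertainties, then division by n: the "average uncertainty"
u_avg = sum(u) / n

print(f"Eq 10 uncertainty of the mean: {u_mean_gum:.4f}")   # 0.3/sqrt(10)
print(f"average of the uncertainties:  {u_avg:.4f}")        # 0.3
```

The numbers differ by a factor of √n; which computation is the right one for a given measurand is precisely the point in dispute here.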
For reference look at the general equation in the GUM. Many have gone through this in detail with the Gormans, but they just have too many cognitive issues to accept it. They will just ignore the partial derivatives, or switch to using relative uncertainties. Then the next time we are just back to square one – “Uncertainties just add. Nobody has provided a reference for why you divide by n.”
That is because the subtraction of two random variables is what is done to calculate an anomaly! Anomalies ARE NOT averages.
As I said.
I probably should have said.
If you add or subtract two (or more) populations, you add or subtract their means …
Climate science and you throw this variance value away and calculate the variance of the value of the difference which reduces the absolute value by a factor of 10 to 100.
Climate science doesn’t want to see anomaly values of 0.025 ±1.0
“If you can’t define a functional relationship then you SIMPLY DO NOT KNOW if causality exists or not.”
Deflection. You were not talking about anomalies, but averages.
Just a typical example of your proclivity to assume facts not in evidence.
On the other hand, if you add means of different input quantities to obtain a measurand, the uncertainties add. Read the GUM.
“Just a typical example of your proclivity to assume facts not in evidence.”
The facts being that your entire comment was about averaging – why did you keep asking about the √n rule if you were talking about anomalies?
“if you add means of different input quanties to obtain a measurand, the uncertainties add. Read the GUM.”
No one is disputing that – the question is about the uncertainty of the means.
Over and over again. Dividing the sum of individual measurement uncertainties by the number of individual measurements GIVES YOU THE AVERAGE UNCERTAINTY.
THE AVERAGE UNCERTAINTY IS NOT THE UNCERTAINTY OF THE AVERAGE!
All you do in finding the average measurement uncertainty is make it simpler to find the total measurement uncertainty. (u_avg × n) is simpler to calculate than u1 + u2 + … + un.
The uncertainty of the average (being the best estimate of the property being measured) remains the standard deviation of the population measurement values surrounding the mean. You can’t reduce the standard deviation of the population measurement values by just dividing it by n.
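How the two statistics in question behave with sample size is easy to see in simulation (an assumed population sd of 2.0): the sample standard deviation stabilises near the population value while s/√n keeps shrinking. Which of the two is the right measure of “uncertainty of the average” is exactly what is contested in this exchange.

```python
import numpy as np

rng = np.random.default_rng(2)

for n in (100, 10_000):
    x = rng.normal(50.0, 2.0, n)   # simulated readings, assumed population sd 2.0
    sd = x.std(ddof=1)             # dispersion of the individual values
    sem = sd / np.sqrt(n)          # precision with which the mean is located
    print(f"n={n}: sd={sd:.2f}, sem={sem:.4f}")
```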
You simply can’t read.
GUM: B.2.18
uncertainty (of measurement)
parameter, associated with the result of a measurement, that characterizes the dispersion of the values that could reasonably be attributed to the measurand
You can’t reduce the dispersion of the values that could reasonably be attributed to the measurand.
Now, use your Argument by Equivocation fallacy and tell us you weren’t talking about measurement uncertainty but the standard deviation of the sample means.
ROFL!!! You’ve been given Eq 13 from the GUM and you *still* say that it doesn’t properly handle the partial derivatives in a functional relationship involving multiplication or division. You’ve been given Possolo’s example of finding the uncertainty of the volume of a barrel and you *still* say he didn’t do the partial derivatives correctly.
When you try to tell the experts they are wrong perhaps you should reconsider!
You *still* can’t do basic algebra. You claim that R^2 is not R * R (i.e. a multiplication) and therefore you don’t need to use relative uncertainties. You can’t even understand that H in cm and R^2 in cm^2 are different dimensions and therefore the relative uncertainties *must* be used.
I am VERY sure that you haven’t figured out Eq 13 in the GUM yet. You never will.
Uncertainties *do* add. See how Possolo did the measurement uncertainty of a barrel. He didn’t divide the total measurement uncertainty by 2 because there were two influence factors involved — i.e. “n = 2”. The measurement uncertainty of the volume is the ADDITION of the measurement uncertainties of the influence factors.
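For a cylinder V = πR²H, GUM Eq 10 (partial derivatives with absolute uncertainties) and the Eq 13 shortcut (relative uncertainties for products and powers) give identical results, which can be checked directly. The dimensions and uncertainties below are illustrative, not Possolo’s actual numbers:

```python
import math

# Illustrative dimensions and assumed standard uncertainties (not Possolo's numbers)
R, H = 30.0, 100.0        # radius and height, cm
u_R, u_H = 0.2, 0.5       # assumed standard uncertainties, cm

V = math.pi * R**2 * H

# GUM Eq 10: propagate absolute uncertainties through the partial derivatives
dV_dR = 2 * math.pi * R * H
dV_dH = math.pi * R**2
u_V_eq10 = math.sqrt((dV_dR * u_R) ** 2 + (dV_dH * u_H) ** 2)

# GUM Eq 13 shortcut for products/powers: combine relative uncertainties
u_V_eq13 = V * math.sqrt((2 * u_R / R) ** 2 + (u_H / H) ** 2)

print(f"Eq 10: {u_V_eq10:.1f} cm^3")
print(f"Eq 13: {u_V_eq13:.1f} cm^3")   # algebraically identical to Eq 10
```

Note the R² term contributes with a factor of 2 on its relative uncertainty, exactly as the exponent rule in Eq 13 requires, and neither route involves dividing by the number of influence factors.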
You are a true masochist. You just keep coming on here and making the same idiotic assertions even after them being shown to be wrong multiple times, MULTIPLE TIMES. I can only assume you *like* being shown to be wrong because it causes you the pain you need so badly.
“Dividing the sum of individual measurement uncertainties by the number of individual measurements GIVES YOU THE AVERAGE UNCERTAINTY. ”
Your usual ranting. As has been explained to you, many many times, you are not dividing individual measurement uncertainties by n. The uncertainty of the average is not the average uncertainty.
“You simply can’t read.”
I can read and understand, you can read, but fail to understand.
“You can’t reduce the dispersion of the values that could reasonably be attributed to the measurand.”
You still haven’t tried to understand what “attributed to the measurand” means. You still think it means all the values that went into calculating the measurement. You still don’t understand that it means the values that the measurand can reasonably be.
Maybe the new definition, given in TN1900, will make it clearer (though I suspect you will still find reasons to misunderstand it)
“Now, use your Argument by Equivocation fallacy and tell us you weren’t talking about measurement uncertainty but the standard deviation of the sample means.”
No – I was talking about measurement uncertainty – i.e. assuming you want an exact average and only interested in the uncertainty coming from the individual measurements. It’s your brother who used the term “uncertainty” without qualifying it in the way you insist.
“You’ve been given Eq 13 from the GUM …”
This is just getting pathetic. I keep having to explain to you that equation 13 is not applicable to an average function. It only works when all the operations are multiplying, dividing and raising to a power. Your inability to remember this is exactly what I meant when I talked about your inability (or is it unwillingness) to learn.
It’s hardly worth reading the rest of your comment – it will just be you again insisting you can use the uncertainty of the volume of a cylinder to guess how partial derivatives work – and then lying about me.
“You’ve been given Possolo’s example of finding the uncertainty of the volume of a barrel and you *still* say he didn’t do the partial derivatives correctly. ”
And there we go. Stop this pathetic strawman argument. I am absolutely not saying the partial derivatives are done wrong in that example. I’ve explained to you far too many times how you get the result of that example when you do the partial derivatives correctly. You just fail to understand that you cannot use that simplification when your function is a linear equation.
“You claim that R^2 is not R * R”
Lie.
“therefore you don’t need to use relative uncertainties.”
You do not use relative uncertainties – full stop. When using equation 10 you use absolute uncertainties – always. In the case where your function is of the specific form, you can simplify equation 10 to equation 13, which uses relative uncertainties and not the partial derivatives. Your problem is you just try to mix these two concepts up, because however good you think your algebra is, it’s just wrong.
“You can’t even understand that H in cm and R^2 in cm^2 are different dimensions”
Lie. And still irrelevant to the question of an average. All the measurement in an average have the same dimension, and the average has the same dimension.
“I am VERY sure that you haven’t figured out Eq 13 in the GUM yet.”
And you’d be wrong. Do you want me to find the comment section when I explained it to you a few years back?
“See how Possolo did the measurement uncertainty of a barrel. He didn’t divide the total measaurement uncertainty by 2 because there were two influence factors involved”
Do you really not see how insane this obsession with this water tank example is. You do not divide anything by 2 because the volume is not the average of the height and radius.
“You are a true masochist.”
I must be, given I still think I can explain basic calculus to someone of your intellect.
You simply can’t do even basic algebra. Σx_i/n ==> x_1/n + x_2/n + … + x_n/n
This is the AVERAGE UNCERTAINTY.
Σx_i / sqrt(n) ==> x_1/sqrt(n) + x_2/sqrt(n) + … + x_n/sqrt(n)
This is the STANDARD DEVIATION OF THE SAMPLE MEANS.
NEITHER IS THE MEASUREMENT UNCERTAINTY OF THE AVERAGE.
The measurement uncertainty is exactly what Eq 10 in the GUM states:
u_c^2(y) = Σu^2(x_i)
The sum of the individual uncertainties. That result is given as a standard deviation. If the mean of the values in the data set is used as a best estimate for the value being measured then the measurement uncertainty of that best estimate is the sum of the uncertainties associated with the elements in the data set!
Give me a break! I gave you the definition of what the measurement uncertainty is according to the GUM. All you are doing here is showing your total inability to read and understand basic English.
The entire data set is *NOT* measurement uncertainty!
From the GUM, B.2.18:
NOTE 1 The parameter may be, for example, a standard deviation (or a given multiple of it), or the half-width of an interval having a stated level of confidence.
The standard deviation is calculated using ALL of the elements in the data set. But the standard deviation does *NOT* include all the elements in the data set.
YOU CAN’T EVEN GET THIS ONE CORRECT!
What in Pete’s name do you think this is saying? The typical quantitative indication of the dispersion (or scatter) of such distribution IS THE STATISTICAL DESCRIPTOR KNOWN AS THE STANDARD DEVIATION!
“No – I was talking about measurement uncertainty – i.e. assuming you want an exact average”
You can’t even get this one correct. You don’t even know when you are using Equivocation. It’s a by-product of your continual failure to learn the basics.
The measure for how “exact” your calculation of the population mean is from sampling is the statistical descriptor known as the STANDARD DEVIATION OF THE SAMPLE MEANS. It is *NOT* the standard deviation!
“You simply can’t do even basic algebra.”
Just stop these childish insults if you want to be taken seriously. Then stop shouting. It really doesn’t help your position.
“Σx_i/n ==> x_1/n + x_2/n + … + x_n/n
This is the AVERAGE UNCERTAINTY.”
Only if the x_i’s are taken to be uncertainties.
“Σx_i / sqrt(n) ==> x_1/sqrt(n) + x_2/sqrt(n) + … + x_n/sqrt(n)
This is the STANDARD DEVIATION OF THE SAMPLE MEANS.”
Huh? You really need to define your terms, and say what the x_i’s represent.
“NEITHER IS THE MEASUREMENT UNCERTAINTY OF THE AVERAGE.”
Indeed they are not. The measurement uncertainty of an average is
u_c(avg)^2 = Σu(x_i)^2 / n
“The measurement uncertainty is exactly what Eq 10 in the GUM states:
u_c^2(y) = Σu^2(x_i)”
As you must surely understand by now, that is not what Eq 10 states. As always you keep ignoring the partial derivatives.
u_c^2(y) = Σ[∂y/∂x_i u(x_i)]^2
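To make that concrete, here is a quick Python sketch (the four uncertainties are made-up values, nothing taken from this thread) of applying Eq 10 to an average, keeping the partial derivatives in:

```python
import math

# Hypothetical standard uncertainties for n = 4 measurements (made-up values)
u = [0.5, 0.5, 0.5, 0.5]
n = len(u)

# GUM Eq 10: u_c^2(y) = sum over i of [dy/dx_i * u(x_i)]^2
# For y = (x_1 + ... + x_n)/n, every partial derivative dy/dx_i is 1/n
u_avg = math.sqrt(sum(((1.0 / n) * ui) ** 2 for ui in u))

print(u_avg)  # 0.25, i.e. sqrt(sum u_i^2) / n = 0.5 / sqrt(4)
```

With equal uncertainties this collapses to u/√n, which is exactly why the 1/n partial derivatives cannot be dropped.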
“I gave you the definition of what the measurement uncertainty is according to the GUM.”
And I suggested you keep misunderstanding it.
“The entire data set is *NOT* measurement uncertainty!”
But you seem to think that the standard deviation of the entire data set is the measurement uncertainty of the mean of that data set. At least that’s what you sometimes think. Other times you are saying it’s the sum of the measurement uncertainties.
“The typical quantitative indication of the dispersion (or scatter) of such distribution IS THE STATISTICAL DESCRIPTOR KNOWN AS THE STANDARD DEVIATION!”
Calm down. You need to think about what dispersion you are interested in, which depends on what measurand you are talking about. If your measurand is the mean of a set of values, then the measurement uncertainty is given by the SD of the dispersion of values of that mean, not of all the values that made up that mean. That is, it’s the standard deviation of the mean.
“Only if the x_i’s are taken to be uncertainties.”
It simply doesn’t matter what the x_i quantities are! Σx_i/n IS THE AVERAGE of x_1, x_2, …, x_n
“Huh? You really need to define your terms, and say what the x_i’s represent.”
I DID define my terms. Σx_i is the sum of x_1, x_2, …, x_n. It simply doesn’t matter what x_i represents. x_i could be board lengths in cm. x_i could be barrel volumes in ft^3. x_i could be energy in joules. x_i could be uncertainty with a dimension of acres!
Do you think you are making some intelligent point here? All you are doing is showing how dense you are between the ears!
“Indeed they are not. The measurement uncertainty of an average is
u_c(avg)^2 = Σu(x_i)^2 / n”
You can’t even do enough simple algebra to get this one correct.
That is *NOT* the measurement uncertainty of an average. An average is Σx_i/n. (Instead of writing Σx_i/n I’m just going to use Σx/n)
Avg = Σx/n
Since this is a division you must do relative uncertainty.
u(avg)^2/avg = [ u(Σx) / Σx/n ]^2 (1/n)^2 where (1/n)^2 is the partial derivative
This gives [u(Σx) / Σx ]^2 (n^2) (1/n^2) = u^2(avg)/avg ==>
u^2(avg)/avg = [u(Σx) / Σx ]^2
It’s all simple algebra.
You keep wanting to define the standard deviation of the sample means as the measurement uncertainty. It isn’t. Until you learn that simple fact you are going to remain lost in the forest for the trees.
“I DID define my terms. Σx_i is the sum of x_1, x_2, …, x_n. It simply doesn’t matter what x_i represents. x_i could be board lengths in cm. x_i could be barrel volumes”
OK, so you are saying they are values of measurements. How does that follow from your claim that their average was the average uncertainty? And then, how does their sum divided by √n give you the standard deviation of the mean?
As I said at the start, it’s futile trying to explain this to you. You just keep going round in circles. You think that just repeating your own mistakes, somehow is an argument.
“Avg = Σx/n
Since this is a division you must do relative uncertainty.”
If by “do” you mean using the specific rules, as laid down by Taylor -yes. If you mean using equation 10, then no.
“u(avg)^2/avg = [ u(Σx) / Σx/n ]^2 (1/n)^2 where (1/n)^2 is the partial derivative”
Why are you using partial derivatives if you are not using equation 10? And if you are using equation 10, why are you using relative uncertainties. You are desperately trying to shove a square peg into a round hole, in order to get the result you want. And even doing this your algebra is hopelessly wrong. u(Σx) / Σx/n is not a relative uncertainty. It’s the uncertainty of the sum divided by the average. You want the uncertainty of the sum divided by the sum. Also u(avg)^2/avg, should be [u(avg)/avg]^2.
“u^2(avg)/avg = [u(Σx) / Σx ]^2”
Assuming you mean
[u(avg)/avg]^2 = [u(Σx) / Σx ]^2
you’ve managed to get the correct result. You still don’t seem to understand what it is saying. If the uncertainty of the average divided by the average is equal to the uncertainty of the sum divided by the sum, and the sum is n times bigger than the average, what does that tell you about the uncertainty of the average compared to the uncertainty of the sum?
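Numerically (with made-up values and uncertainties) the answer falls straight out: the uncertainty of the average is the uncertainty of the sum divided by n.

```python
import math

x = [10.0, 11.0, 12.0]  # hypothetical measured values
u = [0.2, 0.3, 0.4]     # hypothetical standard uncertainties
n = len(x)

total = sum(x)
avg = total / n
u_sum = math.sqrt(sum(ui ** 2 for ui in u))  # Eq 10 on a sum: all partials are 1

# u(avg)/avg = u(sum)/sum, and sum = n * avg, so:
u_avg = (u_sum / total) * avg

print(u_avg, u_sum / n)  # the two are identical
```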
“You keep wanting to define the standard deviation of the sample means as the measurement uncertainty.”
No I am not. By measurement uncertainty I mean propagating the measurement uncertainties of the individual measurements into the combined uncertainty for the average of all those measurements. This is not the sampling uncertainty. The sampling uncertainty is the SEM; it treats each value as a random draw from the population, and it’s that randomness that determines the sampling uncertainty. This is usually much bigger than any uncertainty caused by the measurement.
The measurement uncertainty is only really relevant if you want to know the uncertainty of the exact average of all your values. Or if you want to see how much of an impact the measurement uncertainty has on the sample uncertainty.
“OK, so you are saying they are values of measurements.”
Once again you show your lack of reading comprehension skills.
Is “x_i could be uncertainty with a dimension of acres!” describing the value of a measurement? IT IS DESCRIBING THE UNCERTAINTY OF A MEASUREMENT!
“How does that follow from you claim that their average was the average uncertainty?”
Σx_i / n IS AN AVERAGE. Why is that so hard for you to understand?
It does *NOT* give you the standard deviation of the mean! It gives you the standard deviation of the SAMPLE MEANS.
SEM= SD/√n
SEM is the standard deviation of the sample means
SD is the standard deviation associated with the population mean
Stop being so willfully ignorant. It is the worst kind of ignorance there is.
The standard deviation of the sample means is a measure of sampling uncertainty, not measurement uncertainty.
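A short simulation (made-up normal population, nothing from this thread) shows what the standard deviation of the sample means actually measures:

```python
import random
import statistics

random.seed(42)
n = 25          # sample size
trials = 10000  # number of repeated samples

# Made-up population: normal with mean 10, standard deviation 2
means = []
for _ in range(trials):
    sample = [random.gauss(10.0, 2.0) for _ in range(n)]
    means.append(sum(sample) / n)

sem_empirical = statistics.stdev(means)  # spread of the sample means
sem_formula = 2.0 / n ** 0.5             # SD / sqrt(n) = 0.4
print(sem_empirical, sem_formula)
```

The empirical spread of repeated sample means lands right on SD/√n, i.e. it quantifies the sampling, not the measuring.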
OMG. You *still* haven’t bothered to read the GUM, especially the part surrounding Eq 12 (I’m sorry I’ve been calling it Eq 13).
—————————————————
Eq 12:
If Y is of the form cX_1^P1X_2^P2…X_n^Pn
—————————————————-
This is *exactly* the same as Taylor. You don’t seem to understand that division is nothing more than multiplication by a fraction. It is *still* a multiplication!
Compare this with the definition of Y for Eq 10;
Y = f(X1, X2, …, Xn)
Do you see the difference? One has commas between the input quantities. The other does not.
A similar way to state Eq 12 would be Y = f(cX_1X_2…X_n)
You use relative uncertainty with multiplication of factors. You don’t need to use relative uncertainty with additive factors. There is no contradiction between Taylor and the GUM
Request after request, ad infinitum, and you still refuse to actually study Taylor and work out all the examples and get the same answers as in the back of the book. It’s proof you wish to remain willfully ignorant, the worst kind of ignorance. There is no space between Taylor and the GUM – the supposed gap is an artifact of your willful ignorance!
YOU STILL DON’T UNDERSTAND WHAT POSSOLO DID! You can claim all you want that you do but this kind of statement stands as mute proof that you don’t!
Where in Pete’s name do you think the sensitivity factors, p_i, come from in Eq 12 if it isn’t from the partial derivatives?
I’ve shown you the derivation of Eq 12 at least a dozen times. And you *still* adamantly claim that I don’t know how to do partial derivatives, which, in turn, means you don’t think Possolo knows how to do partial derivatives. The use of partial derivatives is part and parcel of Eq 12. And the use of relative uncertainty is also part and parcel of Eq 12.
“Once again you show your lack of reading comprehension skills.”
Nothing you say is worth reading. I ask you to say what you are averaging and you say they could be values of things or they could be uncertainties of measurements. Then you say that the average of the uncertainties means nothing, and then that dividing the sum by √N gives you the standard deviation of the sample mean. None of this makes any sense.
“It does *NOT* give you the standard deviation of the mean! It gives you the standard deviation of the SAMPLE MEANS”
That’s the same thing. You just keep changing the words and pluralising the word mean for no reason. And regardless, the point’s the same. Dividing the sum by √N is not giving you anything.
“OMG. You *still* haven’t bothered to read the GUM, especially the part surrounding Eq 12”
You mean the part I keep having to point out to you every time you try to use it with an average?
“If Y is of the form cX_1^P1X_2^P2…X_n^Pn”
Yes that’s the one.
“Do you see the difference?”
Yes. Equation 10 works for any function, Equation 12 only for one specific form.
“Once has comma’s between the input quantities. The other does not.”
Rofl.
“There is no contradiction between Taylor and the GUM”
There isn’t. The contradiction is with everything you say.
“Where in Pete’s name do you think the sensitivity factors, p_i, come from in Eq 12 if it isn’t from the partial derivatives?”
How many more times do I have to explain this to you? They come from dividing the partial derivatives in equation 10 by the square of the result, and then cancelling.
“I’ve shown you the derivation of Eq 12 at least a dozen times.”
Lots of times, after I explained it to you, but you still don’t seem to understand that it only works when the function is of the particular form. You’ve admitted it in your comment, but you still don’t seem to get that an average is not of that form.
“means you don’t think Possolo knows how to do partial derivatives”
And there’s that lie again. I’m sure he does, but you don’t. You are just incapable of starting with equation 10 and applying it to an average. You keep avoiding the obvious points that equation 10 is not using relative uncertainties and that the partial derivative of x/n is 1/n. But instead you try to use equation 12 despite it being inappropriate for this function. And then whenever I point out your mistake, you just accuse me of disagreeing with Possolo. Yet you keep ignoring all the examples from the same book which are not using equation 12. E.g. the gold coins.
“Nothing you say is worth reading.”
Apparently NOTHING is worth reading for you. Not Taylor. Not Bevington. Not Possolo. Not the GUM. None of the ISO documentation. Absolutely nothing!
“I ask you to say what you are averaging snd yiu say tgey could be values of things or they could be uncertainties if measurements.”
*I* am not averaging ANYTHING. You asked me what x_i can be and I told you. They can be anything. That isn’t the same as saying I average them. I sometimes average them and sometimes I don’t.
You *still* don’t understand what the term “best estimate” means in the real world of measurements. It stems from your statistical view that the mean is the true value and is the only value that needs to be considered. In your statistical view there simply isn’t such a thing as an asymmetric distribution where the mean is *NOT* the “best estimate” of the value of a property being measured.
I *might* use a mean as the best estimate and I might use the mode and I might use something else. It’s why I’ve always advocated for the use of the 5-number statistical description rather than the mean/standard deviation.
The average uncertainty does *NOT* give you anything useful. The examples given you for this are legion and you’ve ignored each and every one of them.
If I am signing you up for a multi-year sales contract for bolts for a long-term critical project and I tell you they are 3″ +/- average-measurement-uncertainty would you buy them? How about if I tell you they are 3″ +/- SEM?
I’ve lived in the real world for 75 years. I’ve learned the hard way along the journey exactly how measurement uncertainty works. The *average* measurement uncertainty is of absolutely no use for me in the real world.
Exactly what use is the *average* measurement uncertainty of a set of measurements to you there in statistical world?
“Apparently NOTHING is worth reading for you.”
No. Just what you write. I know your ego is so large you believe you actually wrote all those things you list, but that’s your problem.
“it stems from your statistical view that the mean is the true value and is the only value that needs to be considered.”
Lie.
“In your statistical view there simply isn’t such a thing as an asymmetric distribution”
Lie.
“I’ve lived in the real world for 75 years.”
Yet you argue as if you were 12.
“The *average* measurement uncertainty is of absolutely no use for me in the real world.”
Then why keep bringing it up? I keep telling you it means nothing.
You still haven’t refuted any of my assertions showing how *your* assertions are wrong headed as can be.
Do you enjoy self-flagellation?
It works when the function is of the form
y = f(cX_1X_2…X_n)
i.e. A MULTIPLICATION!
Exactly what both Taylor and Bevington lay out in their tomes.
This is the exact form of V = πHR^2
Which is why Possolo used the form of u(V)/V – relative uncertainty!
And you do *NOT* ignore the partial derivatives when the function is of this form. You use RELATIVE UNCERTAINTIES.
So now you are at least admitting that the partial derivatives are not just being ignored! Which is opposite of your statement: “Why are you using partial derivatives if you are not using equation 10? “
Dividing by the square of the result and cancelling IS USING RELATIVE UNCERTAINTY!
Have you progressed enough to admit that R^2 = R * R? Or that H = V/πR^2 is MULTIPLYING BY (1/πR^2)?
So that the form is H = f((1/π)(1/R^2)V)?
“Yes. Equation 10 works for any function, Equation 12 only for one specific form.”
OMG! Equation 10 works ONLY for additive functions, not “any” function. Equation 10 does *NOT* work when multiplication is involved! And Eq 10 doesn’t work for *all* additive functions. If you had bothered to study Taylor Chapter 7 you would know that!
Can you not read at all?
An average is a multiplication form! And it’s not just that. The influence factors have different dimensions! You *must* use relative uncertainty when different elements are involved. In the volume equation one factor is in cm and the other in cm^2. How do you add absolute measurement uncertainties when the dimensions are different? In an average of the measurement uncertainty of a set of 2″x4″ boards you have inches for the lengths and “units” for n. How do you add the measurement uncertainties when one has the dimension of “inches” and the other has the dimension of “units”? Measurement uncertainties inherit the dimensions of the stated values – if you had bothered to actually study Taylor or Bevington you would know that!
Who taught you how to do dimensional analysis?
Equation 10 DOES NOT WORK FOR MULTIPLICATION FUNCTION FORMS. Eq 10 DOES NOT WORK WHEN FACTORS HAVE DIFFERENT DIMENSIONS.
Both of these legislate that the uncertainty of the average has to be done with Eq 12!
Are you *totally* incapable of reading?
YOU CAN’T USE EQ 10 WITH THE AVERAGE!
The average is a multiplicative form. (1/n) * Σx
You must use Eq 12. Can you not read the GUM at all? I’ve given you the forms. You even said you understand them. Yet you can’t seem to understand that (1/n) * Σx is multiplication!
And the partial derivative of x/n in Eq 12 *IS* also 1/n!!!!!!!
The difference is that when you use relative uncertainties the 1/n factor gets cancelled! n^2 / n^2 = 1!!!!
Why do you think Possolo used Eq 12 instead of Eq 10?
One more time: (1/n) * Σx is multiplication. It is of the EXACT FORM the GUM specifies for using Eq 12.
How is that inappropriate?
Multiplication involves scaling the result of the component contributions. Only relative uncertainties properly handle the scaling. If you had studied Taylor, Bevington, Possolo, or any other experts on metrology you would understand this.
Can you read at all?
Are you speaking of Taylor’s book? The only place he speaks of gold crowns is comparing the measurement uncertainty of two different experiments. One has such a large uncertainty that it is of no use in deciding if the coin is gold or not. How in Pete’s name do you think that has anything to do with Eq 10 or Eq 12?
“It works when the function is of the form
y = f(cX_1X_2…X_n)”
That’s not how you write a function.
“This is the exact form of V = πHR^2”
But not the form of y = (x1 + x2 + … + xn) / n.
“And you do *NOT* ignore the partial derivatives when the function is of this form.”
You don’t need to know them because you can use equation 12. This is derived from 10, but means you can skip straight to the part where most of the partial derivatives have been cancelled. That’s why it’s useful. You just need to look at the powers of each term, rather than write out the full partial derivative for each.
“So now you are at least admitting that the partial derivatives are not just being ignored! Which is opposite of your statement: “Why are you using partial derivatives if you are not using equation 10? ““
I was pointing out that you were writing out the partial derivatives whilst also using relative uncertainties – i.e. mixing up 10 and 12. This was in response to you writing
“Have you progressed enough to admit that R^2 = R * R?”
Pathetic. You are still arguing like a 12 year old. This is just your version of the “have you stopped beating your wife” question. I have never said that R^2 is not R*R. It’s the definition of R^2. I’ve no idea what false memory you are now dredging up.
“So that the form is H = f((1/π)(1/R^2)V)?”
That is not how you write a function.
“OMG! Equation 10 works ONLY for additive functions…”
I don’t think you mean additive function, but regardless you are wrong. If you disagree, provide an actual reference that says that.
You really should know this, given that you keep pointing out how equation 12 can be derived from equation 10. That would just not be possible if equation 10 did not work for the volume of a cylinder.
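That derivation is easy to check numerically for the cylinder (made-up dimensions and uncertainties): equation 10 with the actual partial derivatives gives exactly the same number as the relative-uncertainty form of equation 12.

```python
import math

# Hypothetical cylinder measurements
H, uH = 100.0, 0.5  # height in cm, with its standard uncertainty
R, uR = 20.0, 0.2   # radius in cm, with its standard uncertainty
V = math.pi * H * R ** 2

# Eq 10: u_c^2(V) = (dV/dH * uH)^2 + (dV/dR * uR)^2
dV_dH = math.pi * R ** 2   # units cm^2
dV_dR = 2 * math.pi * H * R  # units cm^2
u_eq10 = math.sqrt((dV_dH * uH) ** 2 + (dV_dR * uR) ** 2)

# Eq 12 (product of powers, exponents 1 and 2):
# [u(V)/V]^2 = [uH/H]^2 + [2*uR/R]^2
u_eq12 = V * math.sqrt((uH / H) ** 2 + (2 * uR / R) ** 2)

print(u_eq10, u_eq12)  # identical to floating-point precision
```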
“Both of these legislate that the uncertainty of the average has to be done with Eq 12!”
Rather than just endlessly repeating the same falsehood, could you actually explain how you think that (x1 + x2 + … + xn) / n is of the form cX_1X_2…X_n.
Then could you look at how the Possolo example of the gold coins is derived, if you think it is using equation 12.
“The influence factors have different dimensions!”
How on earth do you average things with different dimensions?
“In an average of the measurement uncertainty of a set of 2″x4″ boards you have inches for the lengths and “units” for n.”
A unit is not a dimension.
And even if n had a dimension you are still wrong. Equation 10 can work with inputs of different dimensions. This is all taken care of by partial derivatives. Consider your cylinder example. V = πHR^2.
V has units cm^3, H and R have units cm. The same for their uncertainties. But ∂V/∂H = πR^2 has units cm^2, ∂V/∂R = 2πHR has units cm^2. Multiplying the partial derivative of each by the relevant uncertainty has units cm^3. No problem with dimensions.
“In the volume equation one factor is in cm and the other in cm^2.”
Wrong again. Both are in cm. You do realize that R^2 is not an input quantity; only R is.
“Both of these legislate that the uncertainty of the average has to be done with Eq 12!”
And how do you get around the law that says equation 12 doesn’t work for functions with addition or subtraction?
Ignoring the rest of this increasingly hysterical rant. You are just repeating yourself without ever trying to understand any of the points I’ve made.
But to answer one question –
“Are you speaking of Taylor’s book?”
No. I’ve already given you the example, it’s from Possolo’s book, the one with the storage tank example. Page 26.
Three coins are weighed in pairs, and the results are used to estimate the weight of each. So for example
m_1 = 1/2[m_(1+3) + m_(1+2) – m_(2+3)]
the uncertainty of m_1 is given using equation 10, despite the fact that it involves a division by 2.
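As a sketch (with a made-up pair-weighing uncertainty), equation 10 applied to that function, where every partial derivative is +1/2 or -1/2:

```python
import math

u_pair = 0.01  # hypothetical standard uncertainty of each pair weighing (grams)

# m_1 = (1/2) * [m_(1+3) + m_(1+2) - m_(2+3)]
# Eq 10: each partial derivative is +1/2 or -1/2, and it gets squared,
# so u^2(m_1) = (1/4) * (u_pair^2 + u_pair^2 + u_pair^2)
u_m1 = math.sqrt(3 * (0.5 * u_pair) ** 2)

print(u_m1)  # u_pair * sqrt(3)/2, about 0.00866
```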
Let me make this simple for you. I KNOW that with your lack of algebraic skills you’ll never figure out what Possolo did.
Let’s start with the weight of coin1, i.e. m1.
He wrote: m1 = (1/2) (m_1+2 + m_1+3 – m_2+3)
We know that
m_1+2 = m1 + m2
m_1+3 = m1 + m3
m_2+3 = m2 + m3
So we write his function statement and get
m1 = (1/2)(m1 + m2 + m1 + m3 - (m2 + m3)) ==>
(1/2)(2m1 + m2 - m2 + m3 - m3) ==> (1/2)(2m1)
We now simplify: m1 = (1/2)(2m1) ==> 2m1 = 2m1 (multiply both sides by 2)
2m1 = 2m1 can be rewritten as m1 + m1 = m1 + m1.
We now subtract m1 from both sides and get
m1 + m1 – m1 = m1 + m1 – m1 ==> m1 = m1
He did the same exact thing for m2 and m3.
NO MULTIPLICATION OF INFLUENCE FACTORS INVOLVED. NO SCALING INVOLVED.
Thus Eq 10 is applicable.
The simple rule, as laid out by *ALL* metrology experts, including the GUM, is:
FOR ADDITION AND SUBTRACTION ADD THE MEASUREMENT UNCERTAINTIES. FOR MULTIPLICATION ADD THE RELATIVE MEASUREMENT UNCERTAINTIES.
Eq 10 is for adding measurement uncertainties. Equation 12 is for adding relative measurement uncertainties. Which one gets used depends entirely on which one is applicable.
Your lack of reading skills and algebra skills does *NOT* invalidate these rules. It does not make which one that gets used into an arbitrary choice.
And it has NOTHING to do with partial derivatives being ignored, misused, invalidated, or any other misguided crap you can come up with!
“Let me make this simple for you. I KNOW that with your lack of algebraic skills you’ll never figure out what Possolo did.”
Save your bandwidth. I know what the example does. You’ve posted 12 comments today alone, many of tedious length. Why should I respond when you frame each comment with such patronising insults? If you think there is something about the coin example or any other example which demonstrates why your claims do not hold, then just state it.
You keep pouring out these insults, yet as far as I can see, you still haven’t addressed the main point. If you accept that equation 12 is derived from equation 10, then you have to accept that equation 10 works for all cases where equation 12 holds.
And if you think equation 10 doesn’t apply to an average, you need to explain why it gives the same result as the MC method.
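That check takes a few lines (made-up values and uncertainties): propagate through equation 10 and compare with a Monte Carlo run.

```python
import math
import random

random.seed(0)

x = [10.0, 10.2, 9.9, 10.1]   # hypothetical measured values
u = [0.05, 0.05, 0.05, 0.05]  # hypothetical standard uncertainties
n = len(x)

# Eq 10 on the average: u_c(avg) = sqrt(sum u_i^2) / n
u_eq10 = math.sqrt(sum(ui ** 2 for ui in u)) / n

# Monte Carlo: perturb each input by its uncertainty, recompute the average
trials = 50000
avgs = []
for _ in range(trials):
    draw = [xi + random.gauss(0.0, ui) for xi, ui in zip(x, u)]
    avgs.append(sum(draw) / n)
mean_avg = sum(avgs) / trials
u_mc = math.sqrt(sum((a - mean_avg) ** 2 for a in avgs) / (trials - 1))

print(u_eq10, u_mc)  # both close to 0.025
```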
*YOU* ASKED ABOUT WHY POSSOLO USED EQ 10 WITH THE COIN EXAMPLE! I explained why.
It’s not *my* problem you didn’t like the answer.
Bullshite! Equation 12 is merely Eq 10 for relative uncertainty! The GUM itself says so!
GUM:
“5.1.6 If Y is of the form Y = cX_1^P1X_2^P2…X_n^Pn and the exponents p_i are known positive or negative numbers having negligible uncertainties, the combined variance, Equation (10), can be expressed as”
“This is of the same form as Equation (11a) but with the combined variance u_c(y) expressed as a relative combined variance [u_c(y)/y]^2 and the estimated variance u^2(x_i) associated with each input estimate expressed as an estimated relative variance [u(x_i)/x_i]^2.”
“Why should I respond when you frame each comment with such patronising insults.”
When you keep repeating the same old, idiotic assertions over and over and over ad infinitum and keep getting them knocked down time after time after time ad infinitum it’s insulting to *everyone else’s* intelligence having to read your crap ad infinitum.
You are like a puppy that won’t learn to stop crapping on the living room rug after having been corrected multiple times. If correcting you won’t do it then maybe rubbing your nose in it time after time will.
If you don’t like the coin example, let’s look at the Wheatstone Bridge example.
The function is (For simplicity I’ll ignore the R label, and just use the subscripts)
U = GF(E^-1 + H^-1)
According to you, as this involves multiplication, you have to use equation 12 and relative uncertainties. According to Possolo
The approximation of equation 12, “cannot be used here” because it’s not a “simple product of powers”. As I say equation 12 can only be used when the function is of the appropriate form.
He continues:
That general form is, of course, equation 10. No relative uncertainties and the actual partial derivatives.
So are you going to say Possolo is wrong, or are you going to make another hand-waving excuse to explain why the multiplications in this example aren’t really multiplications. For bonus marks claim again that I don;t understand algebra and that I’m cherry picking this example.
“According to you, as this involves multiplication, you have to use equation 12 and relative uncertainties. According to Possolo”
The formula does *NOT* use multiplication! YOU JUST KEEP DEMONSTRATING THAT YOU CAN’T DO SIMPLE ALGEBRA!
The formula simplifies to a/R_e + a/R_h
THAT IS A SIMPLE SUM! Do you see the plus sign? It is *NOT* an asterisk!
Where do you see any scaling here?
“That general form is, of course, equation 10. No relative uncertainties and the actual partial derivatives.”
Equation 10 can be used because there is no scaling being done! And Eq 12 ALSO USES ACTUAL PARTIAL DERIVATIVES. You simply can’t do enough simple algebra to understand that powers used as weighting factors in Eq 12 are derived from the ACTUAL PARTIAL DERIVATIVES. If the actual partial derivatives were *NOT* used in Eq 12 then you would not get the weighting factors.
“The formula does *NOT* use multiplication! YOU JUST KEEP DEMONSTRATING THAT YOU CAN’T DO SIMPLE ALGEBRA!”
Once again illustrating that writing in capitals and throwing tired insults, is a sure sign that Tim is wrong.
There are two multiplications and two divisions in the equation.
R_G * R_F * (1/R_E + 1/R_H)
“The formula simplifies to a/R_e + a/R_h”
And you still have two divisions. but of course a is still the product of two measurements.
“THAT IS A SIMPLE SUM! Do you see the plus sign? It is *NOT* an asterisk!”
Ironic that Tim doesn’t understand what “simple” means. Of course there’s a plus sign.
That’s why you can’t use equation 12. Just as there are plus signs in an average.
“Equation 10 can be used because there is no scaling being done!”
Remarkable. So now we have moved from “you can’t use equation 10 if there is any multiplication” to “you can’t use equation 10 if there is multiplication by a constant”.
This is such nonsense, and makes it clear Tim doesn’t understand why equation 10 works. The whole point is to approximate the uncertainty by treating each input as having a linear change, that is, a scaling. When you only have scaling, i.e. a linear equation, the approximation becomes exact.
“And Eq 12 ALSO USES ACTUAL PARTIAL DERIVATIVES.”
Stop equivocating. In what way does it use partial derivatives? It’s derived from equation 10, which uses partial derivatives, but the simplification means that you do not need to actually work out any derivative in order to use the equation.
“And you still have two divisions.”
You are just blind. That’s all I can assume.
[(R_g^2)(R_f^2)/R_e^2] u^2(R_e)/R_e^2
is what?
You don’t see that a/R_e is a division?
*YOU* don’t see that u^2(R_e) / R_e^2 IS A RELATIVE UNCERTAINTY?
And that the formula is *adding* relative uncertainties modified by a sensitivity component?
IT’S EQ 12!!!!!
Didn’t you say you had a degree in math? How in Pete’s name did you do that when you can’t do high school freshman algebra?
“*YOU* don’t see that u^2(R_e) / R_e^2 IS A RELATIVE UNCERTAINTY?”
That’s a relative uncertainty, or at least the square of one. It has sfa to do with what you were saying. Remember,
“The formula simplifies to a/R_e + a/R_h”
a in that simplification is R_G * R_F, not u(R_e).
“And that the formula is *adding* relative uncertainties modified by a sensitivity component?”
You are, as so often, getting completely confused. You are assuming that just because equation 10 leaves you with multiplications of relative uncertainties, that means you are using equation 12. Just not so. As should be obvious from the fact that the result is not a relative uncertainty. Possolo states this explicitly. I’ve already given you the quote. R_U is not a simple product of powers, so you cannot use the approximation used for the water tank example (equation 12); you have to use the general form of the equation (i.e. equation 10).
Try it for yourself. Use equation 12 and see if you get the same result.
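For what it’s worth, here is that check sketched out (made-up resistances and uncertainties; labels as in the simplified form above): the general form, equation 10 with the actual partial derivatives, agrees with a Monte Carlo run.

```python
import math
import random

random.seed(1)

# Hypothetical resistances (ohms) with standard uncertainties
G, uG = 1000.0, 1.0
F, uF = 500.0, 0.5
E, uE = 100.0, 0.2
H, uH = 200.0, 0.2

def R_U(g, f, e, h):
    return g * f * (1.0 / e + 1.0 / h)

# Eq 10 with the actual partial derivatives of R_U
dG = F * (1 / E + 1 / H)
dF = G * (1 / E + 1 / H)
dE = -G * F / E ** 2
dH = -G * F / H ** 2
u_eq10 = math.sqrt((dG * uG) ** 2 + (dF * uF) ** 2 +
                   (dE * uE) ** 2 + (dH * uH) ** 2)

# Monte Carlo check: perturb each resistance, recompute R_U
trials = 50000
vals = [R_U(random.gauss(G, uG), random.gauss(F, uF),
            random.gauss(E, uE), random.gauss(H, uH)) for _ in range(trials)]
m = sum(vals) / trials
u_mc = math.sqrt(sum((v - m) ** 2 for v in vals) / (trials - 1))

print(u_eq10, u_mc)  # both close together
```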
You insult my maths ability, but then demonstrate you are incapable of just using equations as written. Throughout our arguments, you always try to ignore any actual algebra, and just try to figure things out by examples with no understanding of how they actually work.
It’s so obvious that your constant need to accuse me of not understanding how algebra works, is your projection.
“could you actually explain how you think that (x1 + x2 + … + xn) / n is of the form cX_1X_2…X_n.”
f(x1 + x2 + … + xn) ADDS THE INFLUENCE FACTORS.
f(cX_1X_2….X_n) can be written as
f(c * X_1 * X_2 * …. * X_n).
One has plus signs between the factors. The other has the multiplication sign between the influence factors.
Once again, it is simple algebra.
For short hand in writing equations it is typical to write X_1 * X_2 as (X_1X_2) leaving out the asterisk between the elements. Do you *really* not understand this?
Just how much actual algebra instruction have you received in your life? You can’t seem to be able to make out the simplest algebraic manipulations let alone how to write algebraic forms.
“One has plus signs between the factors. The other has the multiplication sign between the influence factors.
Once again, it is simple algebra.”
You win. I can’t compete with this level of stupidity.
“For short hand in writing equations it is typical to write X_1 * X_2 as (X_1X_2) leaving out the asterisk between the elements. Do you *really* not understand this?”
Projection at its finest.
“Just how much actual algebra instruction have you received in your life?”
Apart from my BSc and MSc in mathematics, not much.
“Apart from my BSc and MSc in mathematics, not much.”
Apparently you cherry-picked your way through those just like you do here – nothing ever really sank in.
If you can’t understand that x * y is many times written just xy then NOTHING apparently sank in.
And that’s the only reason someone can’t understand why
f(x + y + z + …)
is not the same as
f(xyz….)
“If you can’t understand that x * y is many times written just xy then NOTHING apparently sank in.”
Stop making stuff up. When have I ever said anything else? It’s why arguing with you is so pointless – you never listen, you just hear what you want to hear.
“f(x + y + z + …)
is not the same as
f(xyz….)”
Correct. The problem is you said they were the same. But if you now understand they are different, you must surely understand why an average, involving addition, cannot use equation 12.
Remember this started when I asked
and your response was
I assumed you were defending your claim that an average was of the correct form. Now it seems you may be agreeing it is not of the same form – but then that means you agree you cannot use equation 12 with an average.
It would be helpful if you just answer the question, do you think that (x1 + x2 + … + xn) / n is of the form cX_1X_2…X_n or not?
Please try to answer the question unambiguously, without a rant.
me: ““It works when the function is of the form
y = f(cX_1X_2…X_n)””
you: “That’s not how you write a function.”
me: “This is the exact form of V = πHR^2”
you: “But not the form of y = (x1 + x2 + … + xn) / n.”
This is truly getting tedious. You have made so many idiotic assertions that you can’t even remember what you have asserted.
No, see above. I was pointing out to you that the function definitions for Eq 10 and for Eq 12 were different.
f(x + y + z +…) is *NOT* the same as f(xyz…)
And you come back saying the second one isn’t even how you write a function!
Like I said, this is getting absolutely tedious. You can’t even keep track of what your own positions are! First the average measurement uncertainty isn’t useful and then it is because it is the measurement uncertainty of the average. And then the function definition for Eq 10 and Eq 12 are different and then they are the same and then they are different. The SEM is the measurement uncertainty of the average and then it isn’t and then it is. Variances add and then they don’t add and then they do. All measurement uncertainty is random, Gaussian and cancels and then it isn’t and then it is. And on and on and on ….. ad infinitum.
Like I keep saying, either you can’t read or you refuse to. I laid out what I was saying EXPLICITLY. And you couldn’t figure it out and went off making up your own strawman for what I EXPLICITLY stated.
Like I keep saying, either you can’t read or you refuse to.
I have EXPLICITLY said that an average is a scaling function. It uses multiplication. Therefore Eq 12 is the appropriate form. Each and every time I have laid out how to calculate the uncertainty of the best estimate for a value I have used Eq 12 – and you can’t even seem to understand enough simple algebra to tell which formula I am using. Just like you keep saying that Eq 12 doesn’t use partial derivatives when it is obvious that it does!
I’ll give you the same answer as always:
————————————-
f(x1 + x2 + … + xn) ADDS THE INFLUENCE FACTORS.
f(cX_1X_2….X_n) can be written as
f(c * X_1 * X_2 * …. * X_n).
———————————-
If you can’t see the words “ADDS THE INFLUENCE FACTORS” and that the use of asterisks is *NOT* adding then you are truly lost.
Do you maybe need to use a larger font on your monitor so you can differentiate between + and * ?
You’re right, this is getting tedious. Especially when you keep misunderstanding what I’m saying in order to score cheap points.
You claimed I didn’t understand that xy was another way of writing x × y. You base this on me saying that y = f(cX_1X_2…X_n) is not how you write a function, and on me pointing out that (x1 + x2 + … + xn) / n is not of the form cX_1X_2…X_n.
I’m not sure I even want to understand what twisted thought process leads you to make such a conclusion.
“f(x + y + z +…) is *NOT* the same as f(xyz…)”
In which case why are you claiming you can use equation 12 for an average? And please learn how to write the definition of a function.
“I have EXPLICITLY said that an average is a scaling function. It uses multiplication. Therefore Eq 12 is the appropriate form.”
Then you are the one who can’t read. I don’t know how much simpler it can be. Equation 12 can only be used when the function is of the form cX_1X_2…X_n, along with powers. You admit an average is not of that form. But then you insist you have to use equation 12.
“I’ll give you the same answer as always:”
Which is? It’s a simple yes/no question. It doesn’t require lines of cryptic nonsense.
“f(x1 + x2 + … + xn) ADDS THE INFLUENCE FACTORS.
f(cX_1X_2….X_n) can be written as
f(c * X_1 * X_2 * …. * X_n).”
Is that a yes or a no?
“If you can’t see the words “ADDS THE INFLUENCE FACTORS.”and how that the use of asterisks is *NOT* adding then you are truly lost.”
Is that a yes or a no?
Obviously, the only logical interpretation is “no”. But logic isn’t your strong point, and it’s telling you won’t simply use the word “no”.
I gave you YOUR own quotes. If you can’t even figure out what *you* are saying how do you expect anyone else to?
Funny that you don’t have a quote of me saying that! IT’S BECAUSE I DIDN’T SAY THAT! Equation 12 is for propagating measurement uncertainty. YOU are the only one that thinks the measurement uncertainty is an average.
I did write that function. THE GUM DID!
5.1.6 If Y is of the form Y = X_1^P1X_2^P2…X_n^Pn
Even though people have been PLEADING with you for over two years to actually sit down and read the GUM for meaning and context you have adamantly refused. This is the result – you claiming that those writing the GUM don’t know how to write a function.
Would it suit you better if y = cX_1X_2 were written
as y = f(X_1, X_2, c) = cX_1X_2?
My guess is that it will still be incomprehensible to you for distinguishing when you use Eq 10 and when you use Eq 12.
“I gave you YOUR own quotes.”
And then made up an insane conclusion. How you think anything I said was saying that xy was not the same as x*y, is something only you and your strawmen know.
“Funny that you don’t have a quote of me saying that! IT’S BECAUSE I DIDN’T SAY THAT!”
https://wattsupwiththat.com/2026/01/10/dramatic-fall-in-global-temperatures-ignored-by-narrative-captured-mainstream-media/#comment-4154764
“I did write that function. THE GUM DID!
5.1.6 If Y is of the form Y = X_1^P1X_2^P2…X_n^Pn”
But that’s not what you keep saying. You keep saying f(X_1^P1X_2^P2…X_n^Pn). That’s meaningless.
“you claiiming that those writing the GUM don’t know how to write a function.”
Your projection is off the charts. Every time I correct one of your mistakes you insist I’m actually correcting some authority, when I’m actually pointing out that you are misunderstanding them.
“Would it suit you better if y = cX_1X_2 were written
as y = f(X_1,X2,c) = cX_1_X2?”
That would be better, yes. But there is no need to include c in the function. It’s a constant.
“And if you are using equation 10, why are you using relative uncertainties. “
Who says I am using Eq 10? I am using Eq 12. Eq 12 is derived from Eq 10 when the function involves multiplication.
y = x1 + x2 + … + xn
IS NOT THE SAME AS
y = x_1 * x_2 * … * x_n
As I keep saying, you don’t even understand simple algebra yet you continue to try and lecture everyone on how they are doing the math wrong!
your lack of algebra skills is showing again. Go look at what the base function is.
You STILL haven’t figured out relative uncertainty. I keep referring you to Taylor and his y = Bx example in Chapter 3 and you simply refuse to study it for meaning and context.
u(y)/y = u(x)/x
Relative uncertainties have no dimension. They are percentages. The percentage uncertainty in the independent variable is the same as the percentage uncertainty in the dependent variable. It simply doesn’t matter if y is much greater than x or if x is much larger than y!
It’s why you and those defending climate science and the use of anomalies can’t believe that the standard deviation of the anomaly is the same as the standard deviation of the parent distributions.
“No I am not. “
yes, you are. The truly sad fact is that you don’t even realize you do it!
“I am propagating the measurement uncertainties of the individual measurements into the combined uncertainty for the average of all those measurements.”
you do *NOT* do that by dividing the propagated sum of the measurement uncertainties of the individual measurements by the number of measurements. That merely gives you the average measurement uncertainty. The average uncertainty is *NOT*, let me repeat *NOT*, the dispersion of the values that are reasonable to assign to the property being measured.
The dispersion of the values that are reasonable to assign to the property being measured is the standard deviation of the population.
Eq 4 of the GUM defines this:
s^2(q_k) = [ 1/(n-1) ] Σ (q_j – q_bar)^2
This formula IS THE VARIANCE (STANDARD DEVIATION squared) OF THE POPULATION. You do *NOT* divide this by “n” again to get the measurement uncertainty.
√[ s^2(q_k)] is the standard deviation derived from the variance of the population.
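For concreteness, the two quantities being argued over can be computed side by side. A minimal sketch with made-up readings: Eq 4’s experimental variance of the observations, and the equation that follows it in the GUM, which divides by n to give the experimental variance of the mean.

```python
import math

# Made-up repeated readings of the same quantity.
q = [10.1, 9.9, 10.2, 9.8, 10.0]
n = len(q)
q_bar = sum(q) / n

# GUM Eq 4: experimental variance of the observations.
s2_qk = sum((qj - q_bar) ** 2 for qj in q) / (n - 1)
s_qk = math.sqrt(s2_qk)       # experimental standard deviation, ~0.158 here

# The next equation in the GUM divides by n: variance of the mean.
s2_qbar = s2_qk / n
s_qbar = math.sqrt(s2_qbar)   # = s_qk / sqrt(n), ~0.071 here
print(s_qk, s_qbar)
```

Which of the two is “the” measurement uncertainty of the mean is exactly the point in dispute here; the arithmetic itself is not.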
You better hope your sampling uncertainty is *NOT* greater than the measurement uncertainty! If it is then your sampling protocol is pure and utter garbage!
For one sample, which is what one temperature data set is, the SEM itself has an uncertainty of about 35%. That may very well be larger than the combined measurement uncertainty – but it also means your sampling is garbage!
If you consider the temperature data set to be made up of multiple samples then the sample size is 1. That means the SEM is equal to the SD of the population.
You are caught either way.
“You STILL haven’t figured out relative uncertainty. I keep referring you to Taylor and his y = Bx example in Chapter 3 and you simply refuse to study it for meaning and context. ”
That’s it. The death of irony. It’s something I’ve been pointing out to Tim since the beginning. He still doesn’t get why it’s the whole point. The uncertainty of the average is the uncertainty of the sum divided by N.
“It simply doesn’t matter if y is much greater than x or or if x is much larger than y!”
It matters because it means that if y is much greater than x, then the uncertainty of y has to be much greater than the uncertainty of x. That’s what y = Bx means; it follows from u(y)/y = u(x)/x. And it means that if the average is 100th the size of the sum, then the uncertainty of the average must be 100th the size of the uncertainty of the sum.
“It’s why you and those defending climate science and the use of anomalies can’t believe that the standard deviation of the anomaly is the same as the standard deviation of the parent distributions.”
And just as you think Tim might be in sight of understanding his error, he deflect into a completely unrelated, but equally wrong claim.
“you do *NOT* do that by by dividing the propagated sum of the measurement uncertainties of the individual measurements by the number of measurements. ”
Just repeating this does not make it any more true. Try to address all the reasons I’ve given you for why that’s exactly how you propagate the measurement uncertainties in an average, without resorting to yet more insults. You might actually learn something.
“The dispersion of the values that are reasonable to assign to the property being measured is the standard deviation of the population.”
Again, you are not assigning values to the measurand. The GUM definition is the values that could reasonably be attributed to the measurand. The standard deviation of the population does not give you a range of values that could reasonably be attributed to the mean.
“Eq 4 of the GUM defines this:
s^2(q_k) = [ 1/(n-1) ] Σ (q_j – q_bar)^2
This formula IS THE VARIANCE (STANDARD DEVIATION squared) OF THE POPULATION.”
And as always you ignore the next equation, which tells you to divide that by n to get the variance of the mean, and that this or its square root is the uncertainty of the mean.
“You better hope your sampling uncertainty is *NOT* greater than the measurement uncertainty!”
Why? Usually if I were conducting a survey I would want the uncertainty of my measurements to be less than the variation in the values I was measuring. It’s only in your strange world that measurement uncertainty has nothing to do with measuring things that this would be a problem.
“The uncertainty of the average is the uncertainty of the sum divided by N.”
For the umpteenth time:
Conclusions:
The average measurement uncertainty is *NOT* the standard deviation of the parent distribution.
The average measurement uncertainty does *NOT* describe the interval of values that can be reasonably assigned to the property being measured.
You can continue to argue that the average measurement uncertainty is the measurement uncertainty of the average but you only continue to make a fool of yourself when you do so.
Here is what Copilot has to say on the subject:
—————————————
The GUM’s position:
The measurement uncertainty of the mean is not the SEM.
The SEM,
SEM = s/√n
is a Type A statistical estimator describing how precisely you have estimated the sample mean from repeated observations. It is a property of the sampling process, not the measurand.
But the GUM is not concerned with sampling precision alone. It is concerned with the range of values of the measurand that are consistent with all available information.
That is a fundamentally different object.
——————————————
Your “average measurement uncertainty” is ALSO NOT CONSISTENT WITH ALL AVAILABLE INFORMATION available in the parent distribution.
The “average measurement uncertainty” is truly useless in the real world which is what metrology is all about.
“For the umpteenth time”
Followed by a load of points not addressing my statement. And then followed by his usual lie that I’m talking about the average of measurement uncertainties. Concluding with the appeal to the authority of artificial “intelligence”.
All deflecting from the point, which is: if you accept u(avg)/avg = u(sum)/sum, then you have to also accept that u(avg) = u(sum)/N.
“All deflecting ftom the point. which is if you accept u(avg)/avg = u(sum)/sum, then you have to also accept that u(avg) = u(sum)/N.”
NO! This is just wrong! How many times do you have to be shown this?
u(sum)/N is a division. It is a SCALING operation. YOU MUST USE RELATIVE UNCERTAINTY!
avg = (x_sum)/n
[u(avg)/(x_sum/n)]^2 = (1/n)^2 (n/x_sum)^2 u(sum)^2 –>
[u(avg)/avg ]^2 = [ u(sum)/x_sum]^2 –>
u(avg)/avg = u(sum)/x_sum
THERE IS NO “N” TERM IN THE MEASUREMENT UNCERTAINTY OF THE AVERAGE!
This is EXACTLY how Taylor gets to the measurement uncertainty of (y = Bx) as
u(y)/y = u(x)/x
Until you abandon your willful ignorance and actually work through the examples in Taylor and figure out why he does what he does you are *NEVER* going to get this right.
Is it that your lack of algebra skills makes you unable to actually work the problems out?
You’re tying yourself in knots.
“This is EXACTLY how Taylor gets to the measurement uncertainty of (y = Bx) as
u(y)/y = u(x)/x”
You ignore the part where you multiply through by y to get
u(y) = Bu(x).
But keep ranting about my lack of algebra skills, and maybe you’ll be able to keep avoiding the obvious.
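The multiply-through step can be sanity-checked with illustrative numbers (the values and n below are assumptions, not data from the thread):

```python
import math

# Illustrative setup: n independent measurements, each with assumed
# standard uncertainty u_x, summed and then averaged (avg = sum / n).
u_x, n = 0.5, 100
u_sum = math.sqrt(n) * u_x    # quadrature: u(sum) = sqrt(n) * u(x)
u_avg = u_sum / n             # the average is the sum scaled by the exact 1/n

x_sum = 2000.0                # made-up total of the n readings
avg = x_sum / n

# The relative uncertainties match: u(avg)/avg = u(sum)/sum ...
print(u_avg / avg, u_sum / x_sum)
# ... and multiplying through by avg = sum/n gives u(avg) = u(sum)/n = u(x)/sqrt(n).
print(u_avg, u_sum / n, u_x / math.sqrt(n))
```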
YOU HAVE TO GET THE RELATIVE UNCERTAINTY FIRST!
You *still* haven’t figured that out yet. You simply don’t live in the real world!
“YOU HAVE TO GET THE RELATIVE UNCERTAINTY FIRST!”
Talk about not understanding algebra. No you do not. You just need to know what the end result is.
“All deflecting ftom the point. which is if you accept u(avg)/avg = u(sum)/sum, then you have to also accept that u(avg) = u(sum)/N.”
You said in another message:
“I do not say, and have never said, that averaging measurement uncertainty tells you anything.”
u(sum)/N *IS* the average measurement uncertainty.
Which is it? Does the average measurement uncertainty tell you something or does it tell you nothing?
“u(sum)/N *IS* the average measurement uncertainty. ”
Not worth continuing at this point. Tim is either too senile or too much of a troll to remember all the times I’ve explained to him that u(sum)/n is not the average measurement uncertainty. If his algebra skills were as good as he thinks, he should be able to work it out for himself.
from Copilot:
———————————-
Steps to Calculate an Average:
1. Sum the numbers.
2. Count the numbers.
3. Divide the sum by the count.
For example, if the numbers are 10, 20, and 30: (10 + 20 + 30) / 3 = 20.
————————————-
u(sum) fits the Sum step
“n” fits the Count step
u(sum)/n sure does fit the “average” step
And you think u(sum)/n is *not* the average measurement uncertainty? I suspect you are, once again, using the argumentative fallacy of Equivocation and defining u(sum) as something other than the sum of the uncertainties.
So what definition *are* you using for u(sum)?
“from Copilot”
It’s worrying that the young have to ask a random AI just to figure out how to calculate an average.
“u(sum) fits the Sum step”
No it does not. The uncertainty of a sum, assuming independent measurements, is √[u(x1)^2 + … + u(xn)^2]. This is not the same as just adding the uncertainties.
“And you think u(sum)/n is *not* the average measurement uncertainty?”
I know it’s not the average measurement uncertainty. I’ve explained why it’s not the average measurement uncertainty. Tim will never get it, because he’s incapable of admitting he’s wrong about anything.
If you have n values each with uncertainty u(x), then the average uncertainty will be u(x). The actual uncertainty of the average, assuming independence etc, will be u(x)/√n. In case this need to be spelt out again
u(x) ≠ u(x) / √n
unless n = 1.
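The inequality can also be seen empirically. A Monte Carlo sketch (assumed toy setup: independent Gaussian errors), comparing the spread of the average over many trials with u(x) and with u(x)/√n:

```python
import random
import statistics

random.seed(0)
u_x, n, trials = 1.0, 25, 20000
true_value = 10.0

# In each trial, average n measurements that each carry an independent
# Gaussian error of standard deviation u_x.
averages = []
for _ in range(trials):
    readings = [true_value + random.gauss(0.0, u_x) for _ in range(n)]
    averages.append(sum(readings) / n)

spread = statistics.stdev(averages)
print(spread)   # close to u_x / sqrt(n) = 0.2, not to u_x = 1.0
```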
“I suspect you are, once again, using the argumentative fallacy of Equivoction and defining u(sum) as something other than the sum of the uncertainties.”
Only in Gormanland is stating the correct uncertainty for a sum, equivocation.
“And it means that if the average is 100th the size of the sum, than the uncertainty of the average must be 100th the size of the uncertainty of the sum.”
Thank you Captain Obvious. So what?
Once again, you totally ignore the assumptions made in the text of the book. This only works if EVERY ELEMENT IS THE SAME! Thus the average is a complete statistical descriptor of every element involved.
Taylor: “we might measure the thickness T of 200 identical sheets of paper” (bolding mine, tpg)
In other words, you are CHERRY PICKING again! Someday you *REALLY* should learn to read and understand the context and meaning of the text surrounding the pieces you cherry pick.
“Thank you Captain Obvious. So what?”
It was obvious to everyone but Tim. The “so what” is that it’s the obvious truth he’s been denying for the past 5 years.
“Once again, you totally ignore the assumptions made in the text of the book.”
The assumption is that you are scaling a value by an exact number with no uncertainty. That’s it. The only assumption. Here are the relevant quotes.
Talking about cherry picking quotes. Tim goes on to quote an assumption in a single example, and implies that holds for all uses of the rule.
“The assumption is yhat you are scaling a value by an exact number with no uncertainty.”
IT DOESN’T HAVE TO BE AN EXACT NUMBER! YOU CAN’T EVEN GET THIS ONE RIGHT. That is *NOT* the point of the assumption!
The fact that B is a constant just makes the uncertainty component for it equal to 0.
If B was *not* a constant it just adds a second term to the RELATIVE UNCERTAINTY! There would be a δB/B term on the right side in addition to the δx/x.
δy/y = δx/x + δB/B
The salient point in the assumption is that IT IS A SCALING OPERATION. It doesn’t matter if the scaling is done by a constant or a variable!
“Talking about cherry picking quotes. Tim goes on to quote an assumption in a single example, and implies that holds for all uses of the rule.”
The operative words in the quote you give are “the fractional uncertainty in q = Bx is the sum of the fractional uncertainties in B and x. Because δB = 0,”
“fractional uncertainty” is just another way of saying relative uncertainty.
Fractional uncertainty is used because it is a SCALING operation.
THE RULE IS THAT RELATIVE UNCERTAINTY IS USED FOR SCALING OPERATIONS!
And YES, that rule holds for all scaling operations. Scaling operations are typically identified by the use of multiplication in the functional relationship!
“THE RULE IS THAT RELATIVE UNCERTAINTY IS USED FOR SCALING OPERATIONS!”
Stop shouting. The rule is that when you scale, the absolute uncertainty scales. That’s a consequence of the fact that multiplication adds relative uncertainties.
“That’s a consequence of multiplying adds relative uncertainties.”
So *now* you are agreeing that you use Eq 12 when multiplication is involved.
Why was it so necessary to rub your nose in your crappy assertions otherwise before you finally came to this conclusion?
“So *now* you are agreeing that you use Eq 12 when multiplication is involved.”
You’re an idiot. Not worth explaining this again.
“And just as you think Tim might be in sight of understanding his error, he deflect into a completely unrelated, but equally wrong claim.”
It’s EXACTLY the same claim for both!
You have asserted:
Both are based on believing that a linear transformation reduces the standard deviation of the parent distribution. It doesn’t. And it is the standard deviation of the parent distribution that determines the measurement uncertainty!
Now, come back and whine about how you are just misunderstood!
“It’s EXACTLY the same claim for both! ”
You have an odd definition of “exactly”. Maybe you think writing in all caps removes any meaning the word has.
“Averaging reduces measurement uncertainty”
What I’ve said is that the measurement uncertainty of an average can be less than the uncertainties of the individual measurements. You can apply that to the idea that measuring the same thing multiple times reduces the measurement uncertainty, but it’s not my claim.
“Anomalies reduce measurement uncertainty”
Only when the uncertainties have a positive correlation, e.g. if there is a systematic error. For random uncertainties an anomaly will increase the measurement uncertainty slightly.
“Both are based on believing that a linear transformation reduces the standard deviation of the parent distribution”
That has zero to do with anything. It just demonstrates that Tim has no clue about what’s being said.
“And it is the standard deviation of the parent distribution that determines the measurement uncertainty!”
You can keep asserting that as often as you wish. It’s still clearly nonsense. And if you think that’s how to determine the measurement uncertainty of a mean, it should be obvious even to you that the SD of anomalies will be smaller than the SD of temperatures, so by your own logic there is less measurement uncertainty when using anomalies.
“Now, come back and whine about how you are just misunderstood!”
Only in Gormanland is correcting lies equivalent to whining.
It cannot be less. u(avg)/avg = u(sum)/sum
You’ve been shown the illogic of your assertion multiple times.
The standard deviation of the sample means can be less than the measurement uncertainties of the average. But the measurement uncertainty of an average cannot be less than the sum of the component measurement uncertainties.
Measurement uncertainty ADDS!
As I keep trying to tell you:
Conclusion:
The average is a best estimate of a value. The measurement uncertainty of the average is the interval of values that can be reasonably assigned to the best estimate. That interval of values is the sum of the variances associated with the component elements. The average variance of the component elements does *NOT* describe the totality of the values that can be attributed to the best estimate.
You are STILL stuck with the meme of “all measurement uncertainty is random, Gaussian, and cancels”. That meme only works for a few measurement data sets in the real world, primarily when multiple measurements are made of the SAME thing using the same instrument under the same conditions. That just doesn’t apply very often.
Even when you are measuring the same thing multiple times using the same instrument under the same conditions, and the assumption that all measurement uncertainty is random, Gaussian, and cancels holds, THAT ONLY HELPS IN DETERMINING A BEST ESTIMATE OF THE VALUE OF THE PROPERTY BEING MEASURED. The measurement uncertainty of that best estimate, i.e. the average/mean, is STILL the standard deviation of the values in the distribution. Go read the GUM again about Type A measurement uncertainty. In this case there is *NO* “average” measurement uncertainty. There is only *the* standard deviation.
If you have a situation with different things whose measurements have differing measurement uncertainties the “average” measurement uncertainty of those component elements does *NOT* describe the totality of the values that can reasonably be attributed to the best estimate for the property being measured. The totality of the values that can be attributed to the best estimate of the property being measured is the sum of the variances of the component elements, V_total = ΣV_elements. The totality of the values that can be attributed to the best estimate of the property being measured is *NOT* ΣV_elements/nbr_of_elements.
“It cannot be less. u(avg)/avg = u(sum)/sum”
All of Tim’s algebraic skills on display there. He still doesn’t understand proportions.
“Only when the uncertainties have a positive correlation, e.g. if there is a systematic error. For random uncertainties an anomaly will increase the measurement uncertainty slightly.”
What is the correlation of the systematic uncertainties between the measuring station at the top of Pikes Peak and the main station in Colorado Springs?
You continue to be stuck in statistical world looking at your blackboard where you can only see the situation where you have a Type A measurement uncertainty situation.
Do you not realize that in the real world THAT IS A SITUATION THAT RARELY HAPPENS? It rarely even happens in a licensed calibration lab – their calibration reports contain an entry for the uncertainty of the calibration!
Nor is anyone claiming that an anomaly will *increase* the measurement uncertainty. The fact is that the anomaly will inherit the measurement uncertainty of the parent distribution. That’s because the anomaly is nothing more than a linear transformation by a constant which does not change the standard deviation of the parent distribution.
Why those defending climate science can’t accept this is beyond me. It’s all based on the assumption that all measurement uncertainty is random, Gaussian, and cancels – i.e. the standard deviation of the measurements = zero! Leaving only the sampling uncertainty which can be minimized by adding more measurements assumed to be 100% accurate! These so-called statisticians can’t even understand that if the SD = 0 then there will be ZERO sampling uncertainty derived from the measurements.
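The shift-invariance being invoked is easy to verify in isolation: subtracting a constant baseline from every value moves the mean but leaves the standard deviation untouched (made-up numbers below, not real station data). Whether that standard deviation is the right quantity to call the measurement uncertainty is the point in dispute; the arithmetic itself is not.

```python
import statistics

# Illustrative values only, not real station data.
temps = [14.2, 15.1, 13.8, 16.0, 14.9]
baseline = 14.5
anomalies = [t - baseline for t in temps]

# The constant shift changes the mean but not the dispersion.
print(statistics.stdev(temps), statistics.stdev(anomalies))
```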
“What is the correlation of the systematic uncertainties between the measuring station at the top of Pikes Peak and the main station in Colorado Springs?”
Irrelevant as regard an anomaly.
“…you can only see the situation where you have a Type A measurement uncertainty situation.”
How do you get a Type A uncertainty from a weather station?
“Nor is anyone claiming that an anomaly will *increase* the measurement uncertainty.”
Jim was. I am. If all uncertainties are independent,
u(anomaly) = √(u(temp)² + u(base)²)
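As a numeric illustration of that quadrature formula (the uncertainty values are assumptions, not actual station figures):

```python
import math

# Assumed standalone uncertainties for a temperature and its baseline.
u_temp, u_base = 0.5, 0.2
u_anomaly = math.sqrt(u_temp ** 2 + u_base ** 2)
print(u_anomaly)   # ~0.539, slightly larger than u_temp alone
```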
“That’s because the anomaly is nothing more than a linear transformation by a constant which does not change the standard deviation of the parent distribution.”
And this is your problem, talking about the sampling distribution rather than the measurement uncertainty.
“It’s all based on the assumption that all measurement uncertainty is random, Gaussian, and cancels – i.e. the standard deviation of the measurements = zero!”
Complete nonsense. Why on earth would a random Gaussian distribution have a standard deviation of zero?
Malarkey! If the temperature at Pikes Peak is used to create an anomaly for Pikes Peak and the temperature at Colorado Springs is used to calculate an anomaly for Colorado Springs AND THEN THE ANOMALIES ARE AVERAGED LIKE IN THE GLOBAL AVERAGE TEMPERATURE, then what is the correlation between the two anomalies?
I know you won’t answer but I just have to ask!
Talk about irrelevant! You *don’t* get a Type A from a weather station. That doesn’t mean *YOU* do not view the single measurement from the weather station as a Type A measurement uncertainty.
That is *NOT* what Jim claimed. He claimed that the anomaly will inherit the measurement uncertainty of the parent distributions, just like I am. “Equal” is not “increasing”.
u(anomaly) = √(u(temp)² + u(base)²)
That’s not “increasing” anything. That is calculating the anomaly uncertainty. It’s done by propagating the uncertainties of the parent distributions. The anomaly doesn’t start out with an inherent measurement uncertainty which Jim is increasing, it just inherits the uncertainties of the parent.
You *really” can’t read at all, can you?
““That’s because the anomaly is nothing more than a linear transformation by a constant which does not change the standard deviation of the parent distribution.””
What does a linear transformation of a parent distribution by a constant have to do with a sampling distribution? Are you drunk?
The standard deviation has to do with the parent distribution, not with a sampling distribution (i.e. from which the SEM is calculated) taken from the parent distribution!
“Complete nonsense. Why on earth would a random Gaussian dustribution have a standard deviation of zero?”
It wouldn’t. That doesn’t keep you and climate science from assuming all of the measurement uncertainty is random, Gaussian, and cancels so you can say the SEM is the measurement uncertainty of the mean.
“If the temperature at Pikes Peak is used to create an anomaly for Pikes Peak and the temperature at Colorado Springs is used to calculate an anomaly for Colorado Springs AND THEN THE ANOMALIES ARE AVERAGED LIKE IN THE GLBOAL AVERAGE TEMPERATURE, then what is the correlation between the two anomalies?”
What relevance does that have to the uncertainty of an anomaly?
“You *don’t* get a Type A from a weather station. That doesn’t mean *YOU* do not view the single measurement from the weather station as a Type A measurement uncertainty.”
Why would you do that? If it’s not a Type A uncertainty, it’s a type B, and you treat it as a Type B.
“u(anomaly) = √(u(temp)² + u(base)²)
That’s not “increasing” anything.”
It’s increasing the uncertainty of the temperature – as in, the uncertainty of the anomaly is slightly larger than the uncertainty of the temperature.
“What does a linear transformation of a parent distribution by a constant have to do with a sampling distribution? Are you drunk?”
Then what distribution are you talking about? And what linear transformation are you talking about?
“The standard deviation has to do with the parent distribution, not with a sampling distribution”
How do you know what the parent distribution is? You are the one claiming that measurement uncertainty of an average is the standard deviation of all measured values. That is a sample.
“It wouldn’t.”
Then why claim that was an assumption?
“That doesn’t keep you and climate science from assuming all of the measurement uncertainty is random, Gaussian, and cancels so you can say the SEM is the measurement uncertainty of the mean.”
Stop distracting. Your claim was that people were assuming the standard deviation of all measurements was zero. You keep making these absurd claims, then when called out you just jump to a different claim.
The issue is when SD and SEM become meaningful. That judgment comes from resolution. Once you have a value below what you actually measured you have entered the dimension of guessing.
If I am a teacher and I ALWAYS grade in full points, is an average of 75.75 meaningful? No one will ever obtain that grade.
Statisticians love this. They can crow how accurately they are able to calculate the mean. With enough tests they can calculate a mean to 75.7548.
The problem is that the mean to that resolution is worthless and meaningless. No difference to temps being measured to units digit. Mathturbation to 1/1000ths is not scientific. It is more akin to astrology.
“Once you have a value below what you actually measured you have entered the dimension of guessing.”
Is a sum divided by a constant more of a guess than the sum?
“If I am a teacher and I ALWAYS grade in full points is an average of 75.75 meaningful?”
Average of what? Let’s assume it’s the average grade of all your pupils. I would say it’s meaningful as a statement of your average success.
“No one will ever obtain that grade.”
Which is the problem we keep running into. You want the average to represent 1 specific result, whereas the average actually represents the average of all results. It’s the difference between the average result, and the result of an average pupil. It’s the same reason why 3.5 is the expected roll of a fair die, even though you can never get 3.5 in a single roll.
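The die point is easy to check with a quick simulation, for anyone who doubts it. A sketch (the seed and sample size are arbitrary):

```python
import random

random.seed(0)
rolls = [random.randint(1, 6) for _ in range(100_000)]
mean_roll = sum(rolls) / len(rolls)
# mean_roll converges on 3.5, even though every individual
# roll is a whole number from 1 to 6 and can never be 3.5
```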
“Try to address all the reasons I’ve given you for why that’s exactly how you propagate the measurement uncertainties in an average, without resorting to yet more insults.”
In Eq 12 the 1/n factor cancels and you just add the measurement uncertainties of the measurements.
You have tied yourself into knots trying to justify that relative uncertainties don’t have to be used for the average. Asserting all kinds of garbage like those using Eq 12 don’t understand partial derivatives, or Σx/n is *NOT* scaling or division, or that you don’t have to use Eq 12 in the case of scaling or multiplication.
Averaging measurement uncertainty does not produce a physically meaningful uncertainty for each measurement.
Copilot:
—————————————
does averaging measurement uncertainties produce a physically meaningful uncertainty for each measurement
No — averaging measurement uncertainties does not produce a physically meaningful uncertainty for each measurement. It only produces a descriptive statistic, not a physically interpretable one.
This is one of those places where arithmetic and metrology part ways.
————————————–
Chatgpt:
————————————–
does averaging measurement uncertainties produce a physically meaningful uncertainty for each measurement
Averaging uncertainties does not generally produce a physically meaningful uncertainty for each measurement—it’s usually a loss of information, not a refinement.
—————————————–
If averaging measurement uncertainty cannot produce a physically meaningful uncertainty for each measurement then it can’t produce a physically meaningful uncertainty for the mean of the stated values either.
Now you are just being crazy!
GUM: 3.3.5
” The estimated variance u^2 characterizing an uncertainty component obtained from a Type A evaluation is calculated from series of repeated observations and is the familiar statistically estimated variance s^2 (see 4.2). The estimated standard deviation (C.2.12, C.2.21, C.3.3) u, the positive square root of u^2, is thus u = s and for convenience is sometimes called a Type A standard uncertainty.” (bolding mine, tpg)
GUM: 2.3.1
standard uncertainty
uncertainty of the result of a measurement expressed as a standard deviation (bolding mine, tpg)
GUM: 6.2.1
“The result of a measurement is then conveniently expressed as Y = y ± U, which is interpreted to mean that the best estimate of the value attributable to the measurand Y is y, and that y − U to y + U is an interval that may be expected to encompass a large fraction of the distribution of values that could reasonably be attributed to Y. Such an interval is also expressed as y − U ≤ Y ≤ y + U.”
“And as always you ignore the next equation, which tells you to divide that by n to get the variance of the mean, and that this or its square root is the uncertainty of the mean.”
The variance of the mean IS HOW PRECISELY YOU HAVE LOCATED THE POPULATION MEAN.
IT TELLS YOU NOTHING ABOUT MEASUREMENT UNCERTAINTY ASSOCIATED WITH THE MEAN!
Using your logic a set of measurements making up a parent population can be hugely inaccurate while also resulting in a highly accurate mean value!
You have apparently fallen back into being unable to differentiate between “precision” and “accuracy”. If you are shooting at a target and all shots go into the same hole but that hole is a foot away from the bullseye then your shot placement is highly precise BUT TOTALLY INACCURATE.
s^2(q_bar) is how precisely you have located the mean of the population. But that mean inherits the accuracy of the measurements. The accuracy of the mean is not how precisely you have located the mean.
“uncertainty of the mean”
Now you are back to the argumentative fallacy of Equivocation again. In statistical world the “uncertainty of the mean” is sampling uncertainty. In the real world “uncertainty of the mean” is how accurate the mean is. But *YOU* never define which one you are speaking of so you can always use the excuse “but I was speaking of the other definition”. Pathetic.
If I am buying bolts from you I simply don’t care how precisely you have located the mean of their lengths. What I care about is the range of lengths I can expect to encounter – i.e. the standard deviation of the lengths of the bolts making up the population!
“It’s only in your strange world that measurement uncertainty has nothing to do with measuring things that this would be a problem.”
ROFL!!!
This from the guy that thinks the average measurement uncertainty is somehow physically meaningful in the real world?
“This from the guy that thinks the average measurement uncertainty is somehow physically meaningful in the real world?”
How often are you going to repeat this lie? I worry you don’t even understand it’s a lie – maybe you’ve repeated it so often you firmly believe it. Or maybe you are suffering from dementia. But regardless, I just have to keep correcting you. I do not say, and have never said, that averaging measurement uncertainty tells you anything. What I’m interested in is the measurement uncertainty of the average. Not the average measurement uncertainty.
Now are you ever going to explain how the standard deviation of a set of different values, tells you anything about the measurement uncertainty of their mean?
Then how did you come up with this statement:
bellman:
“The measurement uncertainty of an average is
u_c(avg)^2 = Σu(x_i)^2 / n”
Σu(x_i)^2 / n IS THE AVERAGE MEASUREMENT UNCERTAINTY.
And you are stating that is the measurement uncertainty of the average.
Are you now abandoning the assertion you have made over and over that the average measurement uncertainty is the measurement uncertainty of the average? If so, how long are you going to remember that you have refuted your own assertion of such an idiocy?
Simple. Variance is the metric for uncertainty of a distribution. If that distribution is made up of measurements then it is a “measurement uncertainty”.
Then the rule is Variance_total = Var_1 + Var_2 + … +Var_n
The GUM, the governing document for metrology, states that the measurement uncertainty is given as a standard deviation, the square root of the variance of the distribution from which the mean is derived.
The variance of the distribution is the sum of the variances of the component elements.
THIS IS BASIC. Anyone that has even just scanned the GUM or Taylor/Bevingon/Possolo books would understand this.
But “scanning” is *NOT* the same thing as cherry picking. You just keep looking for pieces and parts that you think confirm your misconceptions – cherry picking. You never even bother to actually scan anything for meaning and context.
It’s like the quote from Taylor about the assumption made for y = Bx. You cherry picked out the fact that B was a constant when that wasn’t the point of assumption at all! The point was that fractional uncertainty needed to be used because it is a scaling operation! Thus the measurement uncertainty for y would be u(y)/y = u(x)/x + u(B)/B and since u(B) = 0 the measurement uncertainty component for B becomes zero. That’s a *result* of the assumption and not an assumption on its own!
When the partial derivative is 1, of course I ignore it. You don’t know enough algebra and calculus to even realize it!
For x1 + x2 + x3 + …. + xn
∂(x1 + x2 + … + xn)/∂x1 = 1, and so on all the way through!
Why do you insist on making a fool of yourself?
I think that because that is EXACTLY what it is!
The mean is an estimate for the value of the property of the measurand that is being measured. It is *NOT* a “true value”. If it was the “true value” then the SEM should be zero!
The SEM only tells you how close you are to the population mean. It does *NOT* tell you anything about how accurate that population mean is.
What tells you about the accuracy of the mean is the standard deviation of the population, not the standard deviation of the sample means (misnamed the SEM).
You are back to applying the meme you say that you don’t use: all measurement uncertainty is random, Gaussian, and cancels. That meme is ONLY applicable to explaining why the mean can be used as the “best estimate” for the value of the property being measured. If the measurement uncertainty distribution is not Gaussian then you can’t justify using the mean as the “best estimate” for the value of the property being measured. But since you ALWAYS assume the distribution is Gaussian, you ALWAYS assume the mean is the best estimate – and typically is the TRUE VALUE because it is 100% accurate!
If the measurement uncertainties are standard deviations, be they type A or Type B, then what gets added is their variances.
The root-sum-square process is nothing more than the statistical truism of
Variance_total = Variance_1 + Variance_2 + … + Variance_n
That’s why the GUM defines measurement uncertainty as using squares.
u_c^2(y) = u(x1)^2 + u(x2)^2 + … = Σu^2(x_i)
You are adding variances – which when you take the square root gives you the standard deviation!
u_c^2(y) IS THE SQUARE OF A STANDARD DEVIATION! It is the measurement uncertainty of the measurement data set!
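The variance-addition identity itself is uncontroversial for independent error sources, and easy to demonstrate. A sketch (the 0.3 and 0.4 sigmas are made up):

```python
import math
import random

random.seed(1)
N = 200_000
# Two independent error sources, sd 0.3 and 0.4 (made-up values)
e1 = [random.gauss(0, 0.3) for _ in range(N)]
e2 = [random.gauss(0, 0.4) for _ in range(N)]
total = [a + b for a, b in zip(e1, e2)]

def sample_sd(xs):
    m = sum(xs) / len(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / (len(xs) - 1))

rss = math.sqrt(0.3 ** 2 + 0.4 ** 2)  # root-sum-square = 0.5
# sample_sd(total) comes out close to rss
```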
It’s why the GUM has two definitions:
s^2(q_k) vs s^2(q_bar)
THEY ARE NOT THE SAME!!!
s^2(q_k) is the sum of the element measurement uncertainties.
s^2(q_bar) is the standard deviation of the sample means.
Go look the difference up in GUM 4.2.2 and 4.2.3.
I know EXACTLY what dispersion I am looking for. It is the dispersion of the values which can be reasonably assigned to the value of the property being measured of a measurand.
It is *NOT* the dispersion of the sample means drawn from a parent population.
s^2(q_k) gives you the interval in which you can expect a subsequent measurement to fall.
s^2(q_bar) only tells you how close you are to the population mean. It does *not* tell you an interval in which you can expect a subsequent measurement to fall.
What in Pete’s name do you think all of us have been trying to tell you?
It is *NOT* the dispersion of values of that mean – that is the SEM, the dispersion of the sample means.
That is *NOT* the measurement uncertainty associated with the mean if that mean is used as the best estimate of the value of the measured property.
The dispersion of values of the mean WILL NOT TELL YOU WHAT IS CONSIDERED REASONABLE FOR A SUBSEQUENT MEASUREMENT.
The dispersion of the values described by the parent distribution, its STANDARD DEVIATION, *WILL* tell you what is considered reasonable for a subsequent measurement.
GET THIS INTO YOUR HEAD! THE SEM IS *NOT* MEASUREMENT UNCERTAINTY, IT IS SAMPLING ERROR!
“When the partial derivative is 1, of course I ignore it.”
And you are back to claiming the partial derivative of x/n is 1.
“For x1 + x2 + x3 + …. + xn
∂(x1 + x2 + … + xn)/∂x1 = 1, and so on all the way through!”
That’s a sum, not an average.
“Why do you insist on making a fool of yourself?”
I’m not the one doing that.
You can’t even admit that Σx_i/n is a DIVISION!
And dividing the sum by “n” is not a dividing operation, eh?
Unfreakingbelievable!
No guessing. Just an understanding of the basic concepts of metrology, including when relative uncertainty needs to be used. Something you simply refuse to learn.
It has nothing to do with “linear”. I can’t even tell if you know what a linear equation *is*. The volume of a specific cylinder *is* a linear equation. Possolo’s tank has a constant radius, therefore πr^2 is a constant leaving the volume a direct, linear product of a constant and the height of the cylinder. If you graph the equation πr^2 becomes the slope of the plot.
Therefore Possolo used Eq 13 from the GUM, just as I did. Which you *still* can’t accept as proper.
“You do not use relative uncertainties – full stop.”
ROFL!!! You say you understand what Possolo did and then turn around and say he didn’t use relative uncertainty!
Unfreakingbelievable!
“When using equation 10 you use absolute uncertainties – always. In the case where your function is of the specific form, you can simplify equation 10 to equation 13, which uses relative uncertainties and not the partial derivatives.”
YOU STILL CAN’T DO SIMPLE ALGEBRA!
Equation 10 is used when you do *NOT* have multiplication or division. It is a simple sum of the measurement uncertainties. Equation 13 is used when you have multiplication or division. It isn’t a matter of using relative uncertainty instead of the partial derivatives!
YOU STILL USE THE PARTIAL DERIVATIVES WHEN USING EQ 13!
It’s how you get the sensitivity value based on the power of the element of interest.
I’ve explained the simple algebra to you multiple times and you either refuse to learn or you are incapable of learning how it works.
v = πhr^2
partial derivative with respect to r is 2πhr
The relative uncertainty becomes [2πhr/πhr^2] u(r) ==> (2/r) u(r)
(Simple algebra. The πhr piece is divided by πhr^2. Do the cancellation.)
This is rearranged to 2[ u(r)/r ]
u(r)/r is the relative uncertainty of r and the sensitivity factor becomes 2.
EXACTLY WHAT EQ 13 SHOWS DOING.
There is no magic. It’s just basic algebra!
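The cancellation claimed above can be checked numerically. A sketch (the tank dimensions and u(r) are illustrative):

```python
import math

h, r, u_r = 32.5, 8.4, 0.1    # illustrative dimensions and u(r)
V = math.pi * h * r ** 2
dV_dr = 2 * math.pi * h * r   # partial derivative of V with respect to r

# Two forms of the relative-uncertainty contribution of r:
form1 = (dV_dr / V) * u_r     # [2*pi*h*r / (pi*h*r^2)] * u(r)
form2 = 2 * (u_r / r)         # sensitivity factor 2 times u(r)/r
# the two agree: the pi*h*r factors cancel, leaving 2/r
```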
If the measurements have the dimension of “cm” and the number of elements has the dimension of “units” YOU think the two different elements have the same dimension?
The average has a dimension of cm/unit.
Where did you learn how to do dimensional analysis?
All you’ve done is spread the total measurement uncertainty equally across all the elements of the data set. You have *NOT* found the measurement uncertainty of the average. The measurement uncertainty of the average is the standard deviation of data set, it is *NOT* the average measurement uncertainty.
This is why you can *NOT* reduce measurement uncertainty by averaging. Thinking you can reduce measurement uncertainty by averaging is based on the garbage meme that the average measurement uncertainty is the measurement uncertainty of the average.
You didn’t explain it at any point in time. YOU STILL CAN’T EXPLAIN IT TODAY!
you just stated in *this* message: “equation 13, which uses relative uncertainties and not the partial derivatives.”
You *still* don’t understand the basic metrology concepts or how simple algebra works. You can’t do simple dimensional analysis. And yet you come on here and think you can lecture everyone how they are wrong in how they do measurement uncertainty?
“You can’t even admit that Σx_i/n is a DIVISION!”
It is not only a division. What do you think “Σ” means?
” I can’t even tell if you know what a linear equation *is*. The volume of a specific cylinder *is* a linear equation.”
Well, I can tell from that, that you don’t know what a linear equation is.
” Possolo’s tank has a constant radius”
If there is only one tank it has a constant radius and a constant height, and a constant volume. That has nothing to do with applying the function V = πHR^2. The function is not treating R and H as constants. They are input quantities which could be any value. Only π is a constant.
“Therefore Possolo used Eq 13 from the GUM, just as I did. Which you *still* can’t accept as proper.”
Stop lying.
Also, it’s equation 12, not 13.
“YOU STILL CAN’T DO SIMPLE ALGEBRA!”
You still can’t figure out how to use the shift key.
“Equation 10 is used when you do *NOT* have multiplication or division. It is a simple sum of the measurement uncertainties. Equation 13 is used when you have multiplication or division.”
And there in a nutshell is the problem.
You say Equation 10 can only be used if there is no multiplication or division, and that equation 12 must be used if there is any multiplication or division anywhere in the function. I say that equation 10 can be used for any function (as long as it’s analytical), and that equation 12 is a special case of equation 10 that can only be used when the function involves nothing but multiplication, division, or raising to a power.
Which of us is correct? Obviously I’m convinced I’m right, but so are you – and just shouting at each other will not resolve the issue. So let me try to state the justification for my interpretation.
1) There is nothing in the GUM’s description of equation 10 that says you can not have multiplication or division in the function.
2) The GUM starts its description of equation 12 with the condition that Y be of a particular multiplicative form, presented as derived from equation 10.
That tells me two things. One, the function for Y must be of that specific form, and two, equation 12 is derived from equation 10.
3) Look at the example of weighing coins in the Possolo book you keep quoting. Page 26: three gold coins are weighed in pairs in order to estimate the weight of each coin. Even though each equation involves dividing by 2, the equation used is clearly (10). No relative uncertainties; the partial derivative for each of the coins is 1/2.
4) Equation 12 is derived from equation 10 simply by dividing through by the square of the result. This means that when the function is of the correct form, most of the individual terms cancel out, leaving you with an equation involving relative uncertainties. This simply won’t work if the function involves adding or subtracting.
To see this in the simplest form, consider the function A = WH, where A is area, W is width and H is height of a rectangle. As ∂A/∂W = H, and ∂A/∂H = W, we get by Equation 10
u_c(A)^2 = H^2u(W)^2 + W^2u(H)^2
This is a perfectly correct estimate of the uncertainty – and is telling you something about the relative importance of the different uncertainties.
We can then simplify this by dividing through by A^2.
[u_c(A)/A]^2 = [H^2/A^2]u(W)^2 + [W^2/A^2]u(H)^2
and because A = WH
[u_c(A)/A]^2 = [H^2/(W^2H^2)]u(W)^2 + [W^2/(W^2H^2)]u(H)^2
And then just cancel terms to get
[u_c(A)/A]^2 = [u(W)/W]^2 + [u(H)/H]^2
This same method works for any equation involving just multiplication, division and raising to powers, but not addition or subtraction. The derived equation will be correct, but not a simplification.
5) If you want to test this, you can do what Possolo suggests and use Monte Carlo methods.
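Short of a full Monte Carlo run, the A = WH algebra above can be checked directly in a few lines (the rectangle and uncertainties are made up):

```python
import math

W, H = 5.0, 3.0        # made-up rectangle
uW, uH = 0.02, 0.05    # made-up standard uncertainties
A = W * H

# Equation 10 form: absolute uncertainties times partial derivatives
u_abs = math.sqrt((H * uW) ** 2 + (W * uH) ** 2)

# Equation 12 form: relative uncertainties, multiplied back by A
u_rel = A * math.sqrt((uW / W) ** 2 + (uH / H) ** 2)
# the two agree, as the cancellation in point 4 shows
```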
Dude. The Σxᵢ/n breaks down into (x1/n + x2/n + …). Lots of division.
The only other option is to use
Y = Σxᵢ and f(Y, n) = Y/n. Still a division and the factor of “n” disappears as it should.
The GUM Section 4.2 defines an input quantity as a random variable for use in finding a mean and a variance.
Try as you might, you’ll never get a measurement function defined as an average of an average. That means each input quantity is a separate measurement divided by a constant that disappears.
“Dude. The Σxᵢ/n breaks down into (x1/n + x2/n + …). Lots of division.”
And lots of adding. Not of the form required for equation 12.
“Y = Σxᵢ and f(Y, n) = Y/n.”
And now you have an equation of the form to use with equation 12. You just need to know the uncertainty of Y, that is the uncertainty of the sum of all your values. Then equation 12 gives you
[u(Y/n)/(Y/n)]^2 = [u(Y)/Y]^2
and as there is just one term this is equivalent to
u(Y/n)/(Y/n) = u(Y)/Y
and this rearranges to
u(Y/n) = u(Y)/n.
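That u(Y/n) = u(Y)/n result can also be checked by simulation. A sketch (the Y0 and uY values are arbitrary):

```python
import math
import random

random.seed(2)
n, N = 10, 200_000
uY, Y0 = 1.58, 158.0   # arbitrary sum and its uncertainty

# Simulate draws of Y with uncertainty uY, then divide each by n
scaled = [(Y0 + random.gauss(0, uY)) / n for _ in range(N)]

def sample_sd(xs):
    m = sum(xs) / len(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / (len(xs) - 1))

# sample_sd(scaled) comes out near uY / n = 0.158
```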
“The GUM Section 4.2 defines an input quantity as a random variable for use in finding a mean and a variance.”
This is your continuing effort to muddy the waters. You keep latching on to any mention of a mean, regardless of whether it’s relevant or not. 4.2 is describing a Type A uncertainty. You have one input quantity, you make multiple measurements, take the average, and the uncertainty is the sd / √n. That can be used as the uncertainty for that input in equation 10, or 12. Or you can use 4.3, a Type B uncertainty. You have one measurement and an estimated uncertainty derived from any other method, e.g. looking at the manufacturer’s specifications for that instrument. You can use that Type B uncertainty in equation 10 or 12.
All of this is irrelevant to the question of how you calculate the combined uncertainty, when the function is that of an average of different things, using equation 10.
“Try as you might, you’ll never get a measurement function defined as an average of an average.”
Why on earth not? I have two rods. I want to know the average of the two rods. I measure each one once, add the lengths and divide by 2. I use a Type B uncertainty and use equation 10 to determine the combined uncertainty.
Or I measure each rod 100 times and take its average. I use the experimental standard deviation of the mean as the uncertainty of each mean. Then I use equation 10 to calculate the combined standard uncertainty. In that case I am taking an average of averages. Why do you think this is impossible?
“That means each input quantity is a separate measurement divided by a constant that disappears.”
Except it doesn’t disappear. If the function is x/n, then the derivative with respect to x is 1/n.
“If you want to test this, you can do what Possolo suggests and use Monte Carlo methods.”
As I doubt anyone will do this – let me. I’ll use the recommended NIST Uncertainty Machine for this.
First, the water tank, using Possolo’s example.
===== RESULTS ==============================
Monte Carlo Method
Summary statistics for sample of size 1000000
ave = 7204.4
sd = 53.8
median = 7204.2
mad = 54
Coverage intervals
99% ( 7067, 7344) k = 2.6
95% ( 7099, 7310) k = 2
90% ( 7116, 7293) k = 1.6
68% ( 7151, 7258) k = 0.99
ANOVA (% Contributions)
w/out Residual w/ Residual
R 91.69 91.69
H 8.31 8.31
Residual NA 0.00
——————————————–
Gauss’s Formula (GUM’s Linear Approximation)
y = 7204.3
u(y) = 53.7
SensitivityCoeffs Percent.u2
R 1700 92.0
H 220 8.3
Correlations NA 0.0
============================================
This shows you get pretty much the same result using the MC method as for the equation 10 / 12 estimates. One thing to note is the bottom part, where it tells you the Sensitivity Coefficients, i.e. the values obtained from the partial derivatives. For R this is 2πRH = 2π × 8.4 × 32.5 ≃ 1700. And for H it’s πR^2 ≃ 220.
Now for an average of 10 things.
(x0+x1+x2+x3+x4+x5+x6+x7+x8+x9)/10
I set each of the 10 values to an arbitrary different integer, and made the standard uncertainty for each 0.5. If I’m correct the combined uncertainty should be 0.5 / √10 ≃ 0.158.
If Tim is correct the combined uncertainty should be the sum of all the uncertainties in quadrature, i.e. 0.5 ✕ √10 ≃ 1.58.
Let’s see
===== RESULTS ==============================
Monte Carlo Method
Summary statistics for sample of size 1000000
ave = 15.8
sd = 0.158
median = 15.8
mad = 0.16
Coverage intervals
99% ( 15.39, 16.21) k = 2.6
95% ( 15.49, 16.11) k = 2
90% ( 15.54, 16.06) k = 1.6
68% ( 15.64, 15.96) k = 1
ANOVA (% Contributions)
w/out Residual w/ Residual
x0 10.07 10.07
x1 10.01 10.01
x2 10.00 10.00
x3 10.08 10.08
x4 9.94 9.94
x5 9.96 9.96
x6 9.98 9.98
x7 9.99 9.99
x8 9.97 9.97
x9 10.00 10.00
Residual NA 0.00
——————————————–
Gauss’s Formula (GUM’s Linear Approximation)
y = 15.8
u(y) = 0.158
SensitivityCoeffs Percent.u2
x0 0.1 10
x1 0.1 10
x2 0.1 10
x3 0.1 10
x4 0.1 10
x5 0.1 10
x6 0.1 10
x7 0.1 10
x8 0.1 10
x9 0.1 10
Correlations NA 0
============================================
Well, lookie at that. sd = 0.158.
Note that this is also the expected result from Gauss’s Formula (i.e. equation 10) and the Sensitivity Coefficients are 0.1 for each input – that is 1/10.
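The NIST run can be reproduced with a plain Monte Carlo sketch, assuming my own arbitrary ten integers and the same u = 0.5 for each:

```python
import math
import random

random.seed(3)
values = list(range(11, 21))   # ten arbitrary integers (mine, not NIST's)
u = 0.5                        # standard uncertainty of each value
N = 100_000

means = []
for _ in range(N):
    draws = [v + random.gauss(0, u) for v in values]
    means.append(sum(draws) / len(draws))

def sample_sd(xs):
    m = sum(xs) / len(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / (len(xs) - 1))

# sample_sd(means) lands near 0.5 / sqrt(10) ≈ 0.158,
# not near 0.5 * sqrt(10) ≈ 1.58
```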
So what? The FUNCTION is one of division, or multiplication by a fraction.
But you can’t quite figure out how it is not a linear equation, right?
ROFL!!!
“If there is only one tank it has a constant radius and a constant height, and a constant volume.”
He is measuring ONE TANK! So what? That does not mean that the formula for volume is *not* a linear equation!
So what? It is still of the multiplicative form: y = f(πHR^2).
You haven’t actually studied ANY of the references you’ve been pointed at.
Multiplication involves scaling of the result. Only relative uncertainties properly handle this scaling. Jeesh, go ask Copilot or some other AI why the multiplicative form requires the use of relative uncertainty!
Go read Note 1 in Section 5.1.6. See if you can understand it!
You simply can’t read. I suspect you are using the example of the three gold coins. There is *NO* scaling being done here. Just a different method for adding the measurements together. As usual, you have been caught cherry-picking without actually reading the context surrounding the cherry-picked piece for meaning and understanding.
Key entries in the example:
“The uncertainty associated with each weighing in this balance is constant, and does not depend on the mass being weighed, u(m) = u.”
“Since the expressions above are linear combinations of the weighings, Gauss’s formula is exact in this case.”
The measurement model used is like
m1 = (1/2) [ m_1+3 + m_1+2 – m_2+3 ]
And then you add the contributions using weighting.
Where do you see a division by n?
(m_1+3 + m_1+2 – m_2+3) ≠ (m_1+3 * m_1+2 * m_2+3)
One is additive and one is multiplicative. One is scaled and the other is not.
You are still showing you don’t understand basic algebra.
What happens when width is in miles and h is in feet? What happens when the width is 1000ft and the height is 300ft? Which one contributes the most to the uncertainty of the area? What if one is measured with a laser and the other with a metal tape?
Equation 12 will work in all of these because it handles the scaling of the units and/or uncertainties. Equation 10 is the one that will only work in one specific instance – similar units and similar variances (i.e. measurement uncertainties).
“He is measuring ONE TANK! So what? That does not mean that the formula for volume is *not* a linear equation!”
It’s not a question of how many tanks are measured. It’s about what the equation is. The formula is V = πHR^2. H and R are inputs. The volume is different for different values. That equation is not linear.
“So what? It is still of the multiplicative form: y = f(πHR^2).”
Are you doing this just to annoy me now? The function is y = f(H, R), it is defined as y = πHR^2. And it is not a linear equation. I’m not even sure why you are arguing it is, as by your logic linear equations are the ones that use equation 10, not 12.
“You simply can’t read. I suspect you are using the example of the three gold coins.”
Stop kicking irony, it’s already dead. I specifically told you what I was doing, and you even quoted it. It had nothing to do with three coins. It was the average of 10 values.
“There is *NO* scaling being done here. Just a different method for adding the measurements together.”
If you are now talking about the 3 coins example, what do you think the multiplication by 1/2 is, if not a scaling?
“You are still showing you don’t understand basic algebra.”
Every time you say that, it’s obvious you have lost the argument.
“What happens when width is in miles and h is in feet?”
Why would you use such antique units? Regardless, it’s a simple matter to convert one to the other.
“What happens when the width is 1000ft and the height is 300ft?”
You are the one who claims to understand algebra, right? The area would be 300,000 square feet, whatever that is in real units.
“Which one contributes the most to the uncertainty of the area?”
That depends on the uncertainty. The uncertainty of the area is
u(A) = √[300^2 * u(W)^2 + 1000^2 * u(H)^2]
the uncertainty of the height will be by far the biggest factor, assuming both uncertainties are similar in size.
“What if one is measured with a laser and the other with a metal tape?”
You could work all this out for yourself. Say u(W) = 0.01′ and u(H) = 1′.
u(A) = √[300^2 * 0.01^2 + 1000^2 * 1^2]
≃ 1000 square feet.
Now try it using equation 12.
u(A)/A = √[u(W)^2/W^2 + u(H)^2/H^2]
= √[(0.01/1000)^2 + (1/300)^2]
≃ 0.0033
Multiply that by A you get
0.0033 * 300000 = 1000 square feet.
I’m really not sure why anyone who understands how the algebra worked would expect something different.
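The arithmetic above checks out in code too, using the same numbers (W = 1000 ft, H = 300 ft, u(W) = 0.01 ft, u(H) = 1 ft):

```python
import math

W, H = 1000.0, 300.0     # feet, from the example above
uW, uH = 0.01, 1.0       # laser vs tape uncertainties, in feet
A = W * H                # 300,000 square feet

# Equation 10 form: partial derivatives times absolute uncertainties
u_eq10 = math.sqrt((H * uW) ** 2 + (W * uH) ** 2)

# Equation 12 form: relative uncertainties, multiplied back by A
u_eq12 = A * math.sqrt((uW / W) ** 2 + (uH / H) ** 2)
# both come out at about 1000 square feet
```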
“It’s not a question of how many tanks are measured.’
It certainly *is*. You simply don’t understand 2-d calculus let alone 3-d calculus.
Calculating the volume of a tank is actually an integral of the factors. The bottom of the barrel has an area of πR^2. It is a CONSTANT. You then do
πR^2 ∫ dh from h = 0 to h = H (the height of the barrel)
This is as linear as it gets. A linear equation has a constant slope. The slope of the equation is πR^2 – a constant!
I tire of trying to teach you basic math, algebra, and calculus.
Please, PLEASE go back to high school and go through these classes again!
“It certainly *is*. You simply don’t understand 2-d calculus let alone 3-d calculus.”
If you think height and radius are constants, what good is calculus? You just have a single point. That is not analytical.
“πR^2 ∫ dh from h = 0 to h = H”
Or as most people would say: multiply the area by the height. The problem is still that R is not a constant in the function, and πHR² is not a linear function.
“I tire of trying to teach you basic math, algebra, and calculus. ”
Maybe you should stick to something you are good at.
JUDAS H PRIEST! You simply cannot read.
EXACTLY what do the words “The bottom of the barrel has an area of πR^2. It is a CONSTANT.” mean to you?
Do you see an H in there somewhere? Are you dyslexic? Blind? Or just willfully ignorant?
Unfreakingbelievable.
Unfreakingbelievable! Somehow Possolo went from measuring a single tank to measuring ALL tanks in the universe using magic!
Like I keep saying, you have no points of congruence with the real world. You don’t find the volume of ALL possible tanks all at the same time. The volume is associated with ONE TANK at a time! For that one tank πR^2 is a constant!
If you have a second tank then the volume of tank1 adds to the volume of tank2. And for the second tank πR^2 is a constant as well!
Are you a “bubble” boy? Have you *ever* done anything as simple as building a box from 2″x4″ boards?
“Somehow Possolo went from measuring a single tank to measuring ALL tanks in the universe using magic!”
Stop squirming. You attack me for my poor algebraic skills, then you claim that V = πHR² is a linear equation. The only way to make that work is to claim that R is a constant, so that it becomes a trivial linear equation in H. But that is not describing the equation, where R is an input, not a constant.
Somehow you’ve convinced yourself that Possolo is only interested in the volume of tanks with a fixed radius but different heights.
“The volume is associated with ONE TANK at a time! For that one tank πR^2 is a constant!”
And yet the height isn’t.
The whole point of having a function, such as the one for volume, is that it works with any input in its domain. The function is declared as f(R, H). The definition is V = πHR². That is a non-linear equation to anyone who actually understands algebra. If you want to define a different function for a fixed radius then you can. But why would you? In that case you just have uncertainty in a single variable and it’s just a scaling operation.
“Stop squirming. You attack me for my poor algebraic skills, then you claim that V = πHR² is a linear equation.”
We are discussing MEASUREMENT UNCERTAINTY – using Possolo’s example FOR ONE TANK. For one tank, the equation for volume *IS* a linear equation because πR^2 IS A CONSTANT!
As usual, you want to push the discussion off into irrelevancies because you know you are looking stupid for not understanding how and when you use relative uncertainties and for not understanding that Eq 12 *does* use partial derivatives.
You are *NEVER* going to figure this out because you can’t do simple algebra or calculus and you are just adamant about remaining willfully ignorant on the subject. If you had bothered to study *ANY* of the reference books you’ve been referred to, you wouldn’t be here claiming that the SEM is the measurement uncertainty of the average, that the average measurement uncertainty is the measurement uncertainty of the average, that Eq 12 doesn’t do partial derivatives correctly, that averaging reduces measurement uncertainty, that averaging increases resolution, that an anomaly doesn’t inherit the measurement uncertainties of the parent distribution, and that a linear transformation of a distribution by a constant doesn’t change the standard deviation of the distribution – on and on and on … for the last two weeks.
I have commissions for customers I *have* to finish. I simply don’t have any more time to keep on correcting your idiotic claims about how metrology works.
ByeBye for now. I’ll not respond any further.
“I have commissions for customers I *have* to finish. I simply don’t have any more time to keep on correcting your idiotic claims about how metrology works.”
Stop whining. Nobody is forcing you to write so many comments a day, just to display your ignorance. If you want to save time, try not to waste so much of it on personal attacks.
The simple fact here is that V = πHR² is not a linear function. Possolo states this explicitly.
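For illustration, here is a minimal Python sketch of Eq (10)-style propagation through V = πHR² with *both* R and H treated as uncertain inputs. The numeric values are invented for illustration and are not taken from Possolo's example:

```python
import math

# Minimal sketch of GUM Eq (10)-style propagation through V = pi*H*R^2,
# with BOTH R and H treated as uncertain inputs. Values are invented
# for illustration, not taken from Possolo's example.
R, u_R = 1.0, 0.01   # radius and its standard uncertainty
H, u_H = 2.0, 0.02   # height and its standard uncertainty

V = math.pi * H * R**2

# Sensitivity coefficients (partial derivatives of V):
dV_dR = 2 * math.pi * H * R   # R enters squared, so it contributes "twice"
dV_dH = math.pi * R**2

# Combined standard uncertainty for uncorrelated inputs:
u_V = math.sqrt((dV_dR * u_R)**2 + (dV_dH * u_H)**2)

# The equivalent relative form: (u_V/V)^2 = (2*u_R/R)^2 + (u_H/H)^2
u_V_rel = math.sqrt((2 * u_R / R)**2 + (u_H / H)**2)
```

The factor of 2 on the radius term is exactly the nonlinearity at issue: hold R fixed and only the H term remains, which is the linear special case.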
“You are the one who claims to understand algebra, right? The area would be 300,000 square feet, what ever that is in real units.”
You totally missed the point! If the measurement uncertainties have different units, represent different things, and are measured using different devices then how do you add them?
You can’t just simply use a conversion factor from miles to feet since the measurement uncertainties are *different*. What is the measurement uncertainty of a metal tape when measuring miles vs feet? Is it different?
Or do you just use relative uncertainties?
“You totally missed the point!”
You were making a point?
“If the measurement uncertainties have different units, represent different things, and are measured using different devices then how do you add them?”
You convert to a common set of units. This is one of the advantages of using SI units.
“You can’t just simply use a conversion factor from miles to feet since the measurement uncertainties are *different*.”
How is it that the self-proclaimed expert in measurement uncertainty can’t even figure out that when you convert the units you also convert the uncertainty?
“You convert to a common set of units. This is one of the advantages of using SI units.”
In other words SCALE using a SCALING function. Guess what you must do when you use a scaling function.
“How is it that the self-proclaimed expert in measurement uncertainty can’t even figure out that when you convert the units you also convert the uncertainty?”
Like I said, you have no point of congruence with the real world. How do you convert the measurement uncertainty?
You have different measurement devices. You have different measurands that are significantly different. The measurement uncertainty of measuring 1000 yards can’t be converted to feet simply by multiplying by 3. That “3” is a scaling function. Meaning what you *should* be doing is using relative uncertainty.
Say one measurement is 1000 yards and another is 1 foot. Do you *really* think the measurement uncertainty of the 1000 yard measurement can be converted by just multiplying by 3 so you can just add it to the measurement uncertainty of the 1 foot length?
Say the measurements are 1000 +/- 1 yard ==> 3000 +/- 3 feet ==> 36,000 +/- 36 inch and the other is 12 +/- .5 inch.
Do you just add the measurement uncertainties and come up with +/- 36.5″?
If you are calculating area how do you convert the +/- 36.5″ to square inches?
Or do you add the relative uncertainties? +/- 0.1% for the long measurement and 4% for the short one to get a relative uncertainty of 4%?
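For what it's worth, the arithmetic in that example can be sketched as follows, assuming independent errors combined by RSS (root-sum-square); which rule applies depends on whether the lengths are being added or multiplied:

```python
import math

# Sketch of the yard/inch example above. Converting units scales the
# value and its uncertainty by the same factor, leaving the relative
# uncertainty unchanged. RSS combination assumes independent errors.
long_u_in = 1.0 * 36            # 1000 +/- 1 yd -> 36000 +/- 36 in
short_u_in = 0.5                # 12 +/- 0.5 in

rel_long = long_u_in / 36000.0  # 0.1%
rel_short = short_u_in / 12.0   # ~4.2%

# Adding the two lengths: absolute uncertainties in common units, RSS:
u_sum = math.sqrt(long_u_in**2 + short_u_in**2)   # ~36.0 in, not 36.5

# Multiplying them (an area): relative uncertainties, RSS:
rel_area = math.sqrt(rel_long**2 + rel_short**2)  # ~4.2%, short piece dominates
```

Either way the short, relatively sloppy measurement dominates the combined relative uncertainty, and the long measurement dominates the combined absolute uncertainty.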
“In other words SCALE using a SCALING function. Guess what you must do when you use a scaling function.”
I’ve already told you. You have to scale the uncertainties.
“How do you convert the measurement uncertainty?”
I’ve already told you.
“The measurement uncertainty of measuring 1000 yards can’t be converted to feet simply by multiplying by 3. ”
Why not? Let me guess: it’s because you don’t understand anything you’ve read and are incapable of admitting you are wrong.
“Meaning what you *should* be doing is using relative uncertainty. ”
You can use either. They are both the same. Say I measure something as 200cm with an uncertainty of 1cm. That’s the same as 2m with an uncertainty of 0.01m, or 2000mm with an uncertainty of 10mm. And all are the same as a relative uncertainty of 0.5%.
“Do you just add the measurement uncertainties and come up with +/- 36.5″?”
No. You have to use the same units.
“If you are calculating area how do you convert the +/- 36.5″ to square inches?”
Why are you multiplying 3 things to get an area?
Seriously though: either use the easy way with relative uncertainties, using either the specific rule for multiplication or equation 12, then multiply the combined relative uncertainty by the resulting area to get an uncertainty in square units. Or go the long way and use equation 10. Each term is an uncertainty in length, multiplied by a sensitivity coefficient in length, to give you length squared.
The result will be the same.
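A quick sketch showing the two routes agree, with made-up lengths:

```python
import math

# Sketch: uncertainty of an area A = L*W computed two ways -- the "easy"
# relative-uncertainty route and the "long" Eq (10) route. Made-up numbers.
L, u_L = 3000.0, 3.0   # feet
W, u_W = 100.0, 1.0    # feet
A = L * W

# Easy way: combine relative uncertainties, then scale by the area.
u_A_easy = math.sqrt((u_L / L)**2 + (u_W / W)**2) * A

# Long way: Eq (10). Sensitivity of A to L is W; sensitivity to W is L.
u_A_long = math.sqrt((W * u_L)**2 + (L * u_W)**2)
```

Both give the same uncertainty in square feet, because the relative form is just Eq (10) divided through by A.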
“If the measurements have the dimension of “cm” and the number of elements has the dimension of “units” YOU think the two different elements have the same dimension?”
The number of elements is dimensionless.
“You didn’t explain it at any point in time.”
https://wattsupwiththat.com/2022/11/03/the-new-pause-lengthens-to-8-years-1-month/#comment-3636787
“The number of elements is dimensionless.”
Bullshite! The only thing that is dimensionless is a percentage, i.e. a relative uncertainty.
Then what do you think its dimension is? And what do you think the dimension is of an average. Say 5 weights measured in kg. What is the dimension of their average kg / (?)?
I told you what the dimension is. It is “units”.
Measurement uncertainties are *NOT* values that can be averaged. They are statements about spread, they are a statistical descriptor. They are *not* a measurand with a value.
Measurement uncertainties of different sizes reflect different quality of the information. When you create an average of different measurement uncertainties you are losing valuable information about which measurements are the most reliable.
Many statistical descriptors use individual measurement uncertainties. Protocols such as weighted means, least squares, and the chi-squared test rely on individual uncertainties. Using an average measurement uncertainty thus produces incorrect weights and misleading statistical descriptors.
The average measurement uncertainty simply doesn’t reflect the spread of the measurements in the parent distribution. Therefore it can’t accurately convey to subsequent measurements what the reasonable values to expect are.
“I must be, given I still think I can explain basic calculus to someone of your intellect.”
You can’t even do simple algebra. And you think you can lecture on basic calculus?
You: “equation 13, which uses relative uncertainties and not the partial derivatives.”
Yeah, you can lecture on basic calculus!!!! ROFL!!
You are still applying the wrong framework to the wrong problem.
The √n reduction is not being applied to “different populations being added together.” It applies to the uncertainty of an estimator when that estimator is constructed from multiple observations with partially independent error components. That is estimation theory, not the strong LLN and not single-measurand metrology.
You are correct that when combining independent uncertainty components of a single measurement via f(X1,X2,…,Xn), variances add. That is not what is happening here. Global mean temperature anomaly is not a single measurement composed of components. It is a spatially averaged field estimate.
In that context, the variance of the estimator depends on the covariance matrix of the observations. Independent components reduce estimator uncertainty; correlated components do not. This is why spatial correlation lengths are estimated and why uncertainty does not scale as √n blindly. No IID assumption is required.
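A toy numerical illustration of that covariance point, using an equicorrelated model invented purely for illustration (not any actual temperature dataset):

```python
# Toy model: variance of the mean of n observations with common standard
# deviation sigma and common pairwise correlation rho. Numbers invented
# for illustration; this is not any actual temperature dataset.
def var_of_mean(n, sigma, rho):
    # Var(mean) = (1/n^2) * sum of all covariance-matrix entries.
    total = 0.0
    for i in range(n):
        for j in range(n):
            total += sigma**2 if i == j else rho * sigma**2
    return total / n**2

v_indep = var_of_mean(100, 0.5, 0.0)  # independent: sigma^2/n = 0.0025
v_corr = var_of_mean(100, 0.5, 0.3)   # correlated: floors near rho*sigma^2
```

With rho = 0 the variance of the mean falls like 1/n; with rho = 0.3 it floors near rho·sigma² no matter how large n gets, which is exactly why uncertainty does not scale as √n blindly.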
Nothing in the GUM forbids this. The GUM explicitly distinguishes between uncertainty of an individual measurement and uncertainty of an estimated quantity derived from many observations. You are treating those as the same thing. They are not.
So no, Berkeley Earth is not violating metrology rules, and no, √n is not being applied to “different populations.” The claim that uncertainty must always add regardless of estimator structure is simply false in statistics, geophysics, and metrology alike.
“estimator structure” — you just made this up.
In essence, what you trendology people are claiming is that the accuracy and resolution of any temperature measurement instrument you happen to employ in your holy global averages is completely and totally irrelevant, and that you will always be able to discern these tiny milli-Kelvin changes. It could be ±10°C and 2°C, and you’d still be able to see 10 mK.
I call BS (again).
AlanJ and his compatriots don’t even recognize that they are caught in a catch-22.
If the temperature data set is a single sample then the SEM itself has an uncertainty of about 35% – not fit for purpose.
If the temperature data set is made up of multiple samples then the sample size of each element is 1. For a sample size of 1 the SD of the population is the measurement uncertainty – i.e. the standard deviation of the temperature data itself. But of course climate science never worries about the shape of the data distribution or what the SD might be. It’s just always assumed to be zero.
They wind up with uncertainty at a level that finding a milli-Kelvin difference is impossible. Even if you have enough digits to get a result in the hundredths or thousandths digit it is simply not fit for purpose.
And how many times has this been explained to bellman & co.?
The estimator is the mean. How precisely you locate the mean of a set of data TELLS YOU NOTHING ABOUT THE ACCURACY OF THE MEAN.
How many times do you need this repeated before it sinks in?
What in Pete’s name do you think you are averaging, i.e. field estimates?
How is spatial averaging different than just plain averaging? Is your spatial weighting based on *all* of the elements that affect temperature such as topography, elevation, pressure fronts, cloudiness, evapotranspiration, and microclimate differences?
If not, then it’s a garbage average.
You are *still* applying the garbage meme that the standard deviation of the sample means, i.e. the SEM, is the measurement uncertainty of the mean. The “variance of the estimator” is how precisely you have located the mean of the population. It is *not* the accuracy of the mean. Inaccurate data means an inaccurate mean and it doesn’t matter how precisely you locate the mean of the inaccurate data. Wrong data can’t give accurate results. Period. End of Story.
You are *still* applying the garbage meme of “all measurement uncertainty is random, Gaussian, and cancels” so you can use the SEM as the measurement uncertainty.
Distance is *NOT* sufficient to weight data elements. I live in the middle of soybean and corn fields. The evapotranspiration from those create a totally different microclimate than the major airport a mile from me. The temperatures measured at that airport are ALWAYS different than what is measured here, usually by as much as 0.5C. That’s enough measurement uncertainty to subsume anomaly differences in the hundredths digit!
Again, the SEM of the mean, which is the estimator uncertainty you are speaking of, IS NOT THE MEASUREMENT UNCERTAINTY OF THE MEAN. You are still using the garbage meme that “all measurement uncertainty is random, Gaussian, and cancels”.
Of course it does. But first you have to *read* the GUM.
See Equation 3 in GUM:
Average: q_bar = (1/n)Σq_k
See Equation 4 in the GUM:
Variance: s^2(q_k) = (1/(n-1)) Σ(q_j – q_bar)^2
See Equation 5 in the GUM:
Variance of the mean: s^2(q_bar) = s^2(q_k)/n
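For reference, the three equations just quoted, in code with made-up repeated readings:

```python
import statistics

# The three GUM formulas quoted above, applied to made-up repeated
# readings q_k of a single quantity:
q = [10.1, 9.9, 10.3, 10.0, 9.7, 10.2]
n = len(q)

q_bar = sum(q) / n                                  # Eq (3): the mean
s2_qk = sum((x - q_bar)**2 for x in q) / (n - 1)    # Eq (4): variance of the readings
s2_qbar = s2_qk / n                                 # Eq (5): variance of the mean
```

The two variances are distinct quantities, and the dispute in this thread is over which of their square roots gets reported as the uncertainty.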
The square root of the variance is the MEASUREMENT UNCERTAINTY associated with the population, including the mean.
The standard deviation divided by √n is sampling error; it is how precisely you have located the mean of the population.
You keep wanting to substitute s^2(q_bar) for s^2(q_k)
The GUM simply doesn’t provide for this substitution. You only get to it using the garbage meme that s^2(q_k) = 0! Thus leaving s^2(q_bar) as the only uncertainty involved.
The assumption that all measurement uncertainty is random, Gaussian, and cancels is *ONLY* useful in assigning the mean as the best estimator of the property being measured. That’s it. That’s all it provides for. But it does *NOT* mean that s^2(q_k) = 0. If the measurement uncertainty is *NOT* Gaussian, then the mode of the distribution will probably be the best estimator of the average. And the 3rd quartile value minus the 1st quartile value will be the best estimator of the measurement uncertainty of the distribution.
But it does *NOT* state that s^2(q_k) is the same as s^2(q_bar). They are two different things. But *you* always assume that s^2(q_k) = 0 leaving only s^2(q_bar) as a measurement uncertainty.
You can’t just assume the standard deviation of a population somehow disappears in order to make the measurement uncertainty appear smaller. That standard deviation remains. And it *is* the true measurement uncertainty.
Again, BE assigned this value TO A SINGLE MEASUREMENT. One piece of data all by itself. An indication that their data is simply not fit for purpose.
You are *still* caught in the same catch-22. Either the temperature data set is a single sample – when at least 30 samples are typically used as the minimum number providing useful results – or the sample size is 1 as each individual data element becomes an individual sample.
For a single sample the SEM itself has a standard deviation of about 35%, not very useful. For a sample size of 1 the SD of the population becomes the SEM.
In either case your results are not fit for purpose.
The estimator you are describing is a random variable made up of measurements from different stations. That random variable can have a Type A evaluation. See GUM Section 4.2.
More importantly this section defines the standard uncertainty.
The assumption for using the experimental standard deviation of the mean in Section 4.2 is that the same conditions of measurement are used. This requires repeated measurements OF THE SAME THING. You are not measuring the same thing.
By measuring the same thing multiple times, one meets (somewhat) the requirements for deriving the √n value. Those requirements are:
- multiple samples of the same thing,
- the same mean for each sample, and
- the same variance for each sample.

Following this through, you have two choices to make. One, multiple samples of size 1. Two, one sample of whatever size.
One sample does not allow one to create a sample means distribution, and thus dividing by √n is not available. A sample size of 1 just means the standard deviation is all you have.
The CLT derivation for the sample means distribution requires measurements be drawn from the same underlying distribution (population). The sample means distribution requires multiple samples and the mean of each sample is plotted into the sample means distribution. The standard deviation of the sample means distribution quantifies the random spread of values in the sample means distribution and is called the standard error of the mean.
The use of √n to calculate a standard error of the mean requires multiple samples all with the same mean and variance. Otherwise, one must find the standard deviation of the sample means to find the standard error of the mean.
The derivations of these are in most advanced statistics texts.
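The √n result can be checked with a toy simulation in which those conditions hold by construction (many samples, all IID draws from one distribution; everything here is invented for illustration):

```python
import random
import statistics

# Toy simulation where the textbook conditions hold by construction:
# many samples of size n, all drawn IID from ONE distribution. The
# standard deviation of the sample means then comes out near sigma/sqrt(n).
random.seed(42)
sigma, n, n_samples = 2.0, 25, 5000

means = [statistics.mean(random.gauss(0.0, sigma) for _ in range(n))
         for _ in range(n_samples)]

sd_of_means = statistics.stdev(means)  # empirical SD of the sample means
predicted = sigma / n**0.5             # sigma / sqrt(n) = 0.4
```

When the draws are not from one distribution, the empirical route (the standard deviation of the sample means themselves) is the only one left.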
You are not dealing with geostatistics here. Geostatistics is not used to obtain global distributions. Nor is it used to obtain an average of different regions.
Everything I have read about geostatistics starts off with the assumption that individual components have no uncertainty. Typical of the entire statistics field.
Exactly. In your mind the ensembles have no uncertainty themselves. The only “errors” that exist are between the ensembles themselves.
If you are using √n, you are using the LLN.
Invoking IID requirements here is applying the wrong theorem to the wrong problem.
From:
https://link.springer.com/content/pdf/10.1007/978-3-031-12409-9_3.pdf
Funny how this introduction discusses using probability distributions and the need for IID in observations.
Perhaps you can discuss why IID is not necessary when doing your method.
BS.
You don’t understand uncertainty at all.
You need to show a resource where you found this.
The error paradigm did not ever use an “average down” technique. In fact a proper treatment results in estimates of error that can be surprisingly close to uncertainty.
This certainly is not supported by the GUM Type A evaluation where each input quantity’s uncertainty is ADDED using RSS.
Malarkey! This was for ONE measurement in the data set. The only basic error analysis would be the accuracy of the instrument making the measurement. They didn’t have thermometers back then capable of accuracy in the hundredths digit.
You need to learn basic metrology concepts before trying to talk about “basic error analysis”.
Every time climate science uses the SEM as the measurement uncertainty or uses the “best fit” metric for a linear regression as the measurement uncertainty THEY ARE USING THE MEME THAT ALL MEASUREMENT UNCERTAINTY IS RANDOM, GAUSSIAN, AND CANCELS. There is simply no other method to get to these other two values as the measurement uncertainty.
Just because they don’t list out their assumptions doesn’t mean the assumptions aren’t inherent in what they do. bellman, for instance, uses this meme *ALL* the time without realizing he does so.
Independent, unbiased errors do *NOT* average down. You have *NOT* studied the GUM any more than bellman has.
GUM, Section E.5.3:
“Second, because ε_i = w_i − μ_i, and because the μ_i represent unique, fixed values and hence have no uncertainty, the variances and standard deviations of the ε_i and w_i are identical.”
Do you have even the smallest clue as to what this is saying? ε_i is the error from the true value, w_i is the best estimate of the input quantity, and μ_i is the “true value” of the property of the measurand that is being measured.
If μ_i is a fixed, constant amount then subtracting it is nothing more than a linear transformation of the distribution associated with w_i using a constant. A linear transformation of a distribution using a constant value DOES NOT CHANGE THE STANDARD DEVIATION OF THE DISTRIBUTION. Thus the standard deviation of the error distribution is the same as the standard deviation of the distribution made up of the estimated measurement quantities. It simply doesn’t matter if you use the error concept or the uncertainty concept – you wind up with the same standard deviation for both.
Averaging does *NOT* decrease anything to do with measurement uncertainty. Neither do anomalies since an anomaly is nothing more than the linear transformation of a distribution using a constant. The standard deviation of the anomalies will be exactly the same as the standard deviation of the parent distribution.
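The linear-transformation point is easy to check numerically with toy data:

```python
import statistics

# Sketch: forming anomalies by subtracting a constant baseline shifts a
# distribution but leaves its standard deviation unchanged. Toy data.
temps = [14.2, 15.1, 13.8, 14.9, 15.4, 14.0]
baseline = 14.5
anomalies = [t - baseline for t in temps]

sd_temps = statistics.stdev(temps)
sd_anoms = statistics.stdev(anomalies)
```

The shift moves every value by the same constant, so every deviation from the mean, and hence the standard deviation, is untouched.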
Measurement uncertainty is *NOT* noise. This is a result of you applying the meme of “all measurement uncertainty is random, Gaussian, and cancels”. You don’t even realize when you use the meme, just like bellman doesn’t.
Measurement uncertainty is the “interval containing those values that are reasonable to assign to the measurand”. Those values are *NOT* noise.
Like most of climate science you have never once, NOT ONCE, studied the ISO documents on creating an uncertainty budget. Each of the things you list *are* uncertainty components. But they do *NOT* dwarf or replace actual physical measurement uncertainty associated with using different measuring devices to measure different things under different conditions. The uncertainty of each ADDS to the total.
A typical instrument measurement uncertainty today is at least +/- 0.3C. If the other components you list are greater than this then exactly how does climate science come up with a measurement uncertainty in the milli-Kelvin without assuming that all instrument measurement uncertainty is random, Gaussian, and cancels?
“Those uncertainties shrink dramatically when averaging thousands of stations over large areas and long periods, which is why the global mean uncertainty can be much smaller than individual measurement errors. “
Averaging does *NOT* shrink uncertainties. Averaging large samples only allows more precisely locating the mean of the population. IT DOES NOT IMPROVE THE ACCURACY OF THE MEAN THAT IS CALCULATED. Bad data cannot give correct statistical descriptor values.
If your data is inaccurate then the mean of that inaccurate data is also inaccurate.
Until you can get that simple fact into your head you have no hope of understanding metrology concepts.
“That is standard statistics, not pseudoscience.”
It may be standard statistics where data elements are *NEVER* given as “estimated value +/- measurement uncertainty” but, instead, only as 100% accurate true values.
The real world simply doesn’t work like statistical world.
No one is claiming early thermometers had millikelvin accuracy. The point is that global mean temperature is not a single measurement. It is an estimated parameter derived from many measurements with known uncertainties, biases, and correlations. The uncertainty reported is the uncertainty of the estimate, not the accuracy of any one thermometer.
The GUM does not say averaging cannot reduce uncertainty in an estimate. It says a linear transformation does not change the standard deviation of a single distribution. That is true and irrelevant here. When estimating a mean from multiple observations, uncertainty depends on how independent uncertainty components propagate through the estimator. Independent components reduce uncertainty in the estimate; correlated components do not, and are treated separately. This is exactly what climate datasets do.
No one assumes “all uncertainty is random, Gaussian, and cancels.” Systematic uncertainties are explicitly modeled, corrected where possible, and bounded where not. That is why spatial sampling, station changes, and coverage dominate the uncertainty budget, not instrument precision alone.
Saying “bad data can’t give a good mean” is a slogan, not a theorem. In estimation theory, many noisy, biased measurements can yield a well-constrained estimate if the errors are characterized. That is how astronomy, geodesy, and metrology itself work.
You are applying single-measurement metrology rules to a field-scale statistical estimation problem. That category error is why you think millikelvin uncertainty is impossible. It isn’t.
Millikelvin resolution has nothing to do with random noise being averaged down, which is bogus too.
Resolution determines the information available in a measurement. Values smaller than the resolution are not known. Mathematics simply cannot “recover” information that is not known.
NIST has this.
https://www.itl.nist.gov/div898/handbook/mpc/section4/mpc451.htm
If you need milli kelvin temperatures to determine data differences, then you need measurement devices that can supply that resolution. Trying to tell folks your math techniques can read tea leaves and create information that was never measured is a joke.
Read that NIST statement closely. What do you think resolution relative to measurement NEEDS really means!
Resolution limits what a single reading can display. It does not set a hard lower bound on how precisely a mean of many readings can be estimated. If it did, sub-resolution averaging would be impossible in metrology, astronomy, geodesy, or signal processing. Yet it is routine.
No one is “recovering information that was never measured.” The information comes from many independent measurements of the same quantity, each with known uncertainty. Averaging does not invent detail in any one reading; it constrains the estimate of the underlying signal. That is exactly why NIST distinguishes between resolution, accuracy, and uncertainty.
The NIST quote you cite is about using a single instrument with inadequate resolution for a single measurement need. It does not say that ensembles of measurements cannot yield a more precise estimate of a mean or anomaly.
So millikelvin uncertainty in a global mean does not imply millikelvin thermometer resolution. It reflects uncertainty in the estimate, not resolution of individual instruments.
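A toy simulation of that estimate-vs-resolution distinction. The dither assumption here (noise larger than the quantization step) is stated in the comments and is essential; without it, the objection about resolution stands:

```python
import random
import statistics

# Toy simulation: a "thermometer" quantized to 0.5-unit steps, but with
# noise LARGER than the step acting as natural dither. The mean of many
# readings then lands far inside the resolution of any single reading.
# If the noise were much smaller than the step, this would NOT work.
random.seed(1)
true_value = 20.13
step = 0.5

def reading():
    noisy = random.gauss(true_value, 0.4)  # noise spans several steps
    return round(noisy / step) * step      # quantize to the 0.5 grid

mean_of_readings = statistics.mean(reading() for _ in range(100_000))
```

No single reading can show anything but multiples of 0.5, yet the mean of many readings lands well inside one step of 20.13.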
Chuckle. Gotta love deceptively chosen comparisons relative to the coolest base periods. That is what we expect from “Skeptical Science”, which is itself a name chosen to deceive, since there is no skepticism or science involved.
But hey, drawing lines through meaningless plots is always fun!
What’s fun is the fact that you are focusing on the base periods and not the key takeaway. Despite all these so-called unexplained coolings or pauses (which are really just examples of short term variability), the long term warming trend has continued, and continues to this day.
Long-term? It is actually a rather short period in terms of climate. Comparing that snapshot to anything is rather meaningless, and setting the comparison points at the coolest parts of the record is pure drama in an attempt to hide inherent variability. Not to mention that the record itself is not remotely suited for such comparisons to begin with. It probably has warmed since the end of the Little Ice Age, and as far as we can tell it has warmed since the cool period of the 1970s when claims were that the Ice Age was looming again.
The only takeaway is a pathetic attempt to evade the fact that the climate does not conform to the narrative that CO2 emissions drive the system, that our understanding of that system is not sufficient to model it and make predictions, and that no climate crisis exists.
Mark,
“Long-term? It is actually a rather short period in terms of climate. Comparing that snapshot to anything is rather meaningless”
If you weren’t so selective in applying your logic, you’d see that if the 45-year warming trend since 1979 is supposedly too short to be meaningful, then a cooling that allegedly started in 2024 is even more meaningless.
“…and setting the comparison points at the coolest parts of the record is pure drama in an attempt to hide inherent variability.”
And setting the comparison points at the highest parts of the record? How is that any less dramatic?
You are missing the point. The cooling events are contrary to model projections that CO2 is a primary driver of climate, since they do not exist in those projections. They raise doubt as to the actual role of increased CO2 in the system as a whole.
The main point, however, is to illustrate that the messaging itself is inconsistent, since every isolated warm event is portrayed as direct evidence in support of the narrative, but equally dramatic cooling events are downplayed or ignored. That is dishonest.
Indeed, using any peak or valley, or period originating in one, as a baseline introduces a bias and supports a skewed interpretation. Only by presenting the entire record, including the proxy record for the entire Holocene, and the uncertainties in part due to instrument sparsity in most of the surface temperature record, can even a vague picture emerge.
Skeptical Science uses a tiny, provocatively scaled, carefully selected bit of data presented for its utility as propaganda, not for any purpose to simply inform.
Mark,
“You are missing the point. The cooling events are contrary to model projections that CO2 is a primary driver of climate, since they do not exist in those projections. They raise doubt as to the actual role of increased CO2 in the system as a whole.”
This is not a cooling event. This is short term variability modulated by ENSO.
“Only by presenting the entire record, including the proxy record for the entire Holocene, and the uncertainties in part due to instrument sparsity in most of the surface temperature record, can even a vague picture emerge.
Skeptical Science uses a tiny, provocatively scaled, carefully selected bit of data presented for its utility as propaganda, not for any purpose to simply inform.”
Again, you are selectively applying criticism. Chris Morrison is restricting his comparison to a few years, not the entire Holocene.
No, I am clarifying the point Morrison is trying to make, that you are apparently intentionally being obtuse regarding. Perhaps you should read his article. Again, it is the lack of consistent messaging and the promotion of an ideology by omission that he is criticising.
One can take exception to Chris’s attribution regarding the Hunga Tonga event, but he is otherwise spot on.
Yes, the current cooling, like the warming that immediately preceded it, is short-term variability in part due to ENSO events. However, those events are portrayed in the press, or not portrayed when inconvenient, with undue diligence to the Net Zero agenda.
I am further criticising any attempts to portray these variations, these snippets of record, as determinative indications of any longer-term processes in the system, to wit, as resulting from anthropogenic emissions of CO₂.
The current warm period is not in any way remarkable in the longer history of climate, and any attempt to attribute current conditions to human emissions of CO₂ is unsupported by evidence. However, it is exactly that endeavor that drives the disparity in presentation as witnessed in materials from Skeptical Science and elsewhere in the alarmist genre.
“One can take exception to Chris’s attribution regarding the Hunga Tonga event, but he is otherwise spot on.
Yes, the current cooling, like the warming that immediately preceded it, is short-term variability in part due to ENSO events.”
(bold mine)
These two positions cannot both be true. Chris is explicitly alleging that (1) the present cooling is primarily caused by dissipation of stratospheric water vapor from Hunga Tonga, and (2) it is being ignored because it contradicts an anthropogenic narrative.
But you are simultaneously agreeing that the recent cooling is simply short term variability modulated by ENSO.
If the latter is true (and I agree that it is) then the alleged cooling is not exceptional, and therefore it is not something that is being ignored to protect a narrative. It is behaving exactly as expected in a post El Nino transition period.
For a mainstream scientific perspective on global temperature trends and variability, this article may be helpful:
https://www.carbonbrief.org/factcheck-no-global-warming-has-not-paused-over-the-past-eight-years/
>The cooling events are contrary to model projections that CO2 is a primary driver of climate, since they do not exist in those projections. They raise doubt as to the actual role of increased CO2 in the system as a whole.
They are at most remotely inconsistent with projections showing CO2-driven warming. These projections include patterns of natural variability as well. Our ability to model this natural variability is one reason we are confident that the observed trend is forced.
Except you do NOT have the ability to model natural variability.
Models have zero clue when the next major El Nino event will occur.
Models do not understand changes in absorbed solar radiation.
Models are also based on an atmosphere that doesn’t exist on this planet.
There is NO DOUBT that if there is any effect from enhanced atmospheric CO2, it is way too small to measure and is totally insignificant…
… a flea bite on an elephant’s posterior
“… a flea bite on an elephant’s posterior”
So, you?
Where in the entire CMIP ensemble is this variability precisely projected? Almost the entire ensemble overstates the expected warming relative to the observed inconstant trend, and nowhere in that observed trend is anything exceptional.
If the system could be accurately modeled, there would not be 60+ model outputs. There would only need to be one, and there would not be such a wide and increasing estimate range for ECS.
No one can explain why the warm periods of the past 8000 years periodically gave way to cooler periods, or why the coolest period of the last 10,000 years, the Little Ice Age, began, or why it again began to warm two centuries or so ago, long before CO₂ concentration began to increase.
Unless and until these changes can be explained, until the processes of clouds and ocean dynamics can be confidently included, all of the fancy computer games currently being concocted are pure speculation, and any vague resemblance to observations one or two of them may possess is nothing more than coincidence.
CMIP models are not designed to reproduce the exact observed year-to-year temperature path. Internal variability is chaotic; the test is whether observations fall within the ensemble spread under the same forcings, which they do. Expecting precise trajectory matching misunderstands what climate models are for.
Multiple models exist because unresolved processes (clouds, turbulence, ocean mixing) introduce structural uncertainty. Ensembles quantify that uncertainty; they are not evidence of ignorance. Every complex physical system is modeled this way.
Holocene climate variability is not mysterious. Orbital forcing, volcanism, solar variability, ice-sheet changes, and ocean dynamics explain past warm and cool periods. The Little Ice Age is well linked to volcanic clustering amplified by feedbacks. Those drivers are weak today, while greenhouse forcing is large, measured, and increasing.
Early post-LIA warming does not explain the late-20th-century acceleration. That acceleration cannot be reproduced without anthropogenic forcing and is accompanied by greenhouse fingerprints (ocean heat uptake, stratospheric cooling, spectral IR changes).
The issue is not “when” the models predict a significant pause or even cooling, but whether the models ever show them occurring at all at the frequency past measurements indicate.
Yes. Models do produce pauses and periods of reduced warming at frequencies comparable to observations. Individual CMIP realizations show decades of slowed warming or temporary cooling due to internal variability, even under long-term positive forcing. What models do not show is sustained multi-decadal warming cessation without an offsetting forcing, which matches observations.
The apparent discrepancy usually comes from comparing one observed realization to the ensemble mean. The mean smooths variability by design, while individual runs retain it. When observations are compared to individual model realizations rather than the mean, the frequency and duration of pauses fall well within the modeled range.
Short-term variability does not falsify the forced trend. It is explicitly part of the system.
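The ensemble-mean point above can be sketched with purely synthetic numbers (the 0.018 °C/yr trend and the random-walk noise below are illustrative assumptions, not real CMIP output): individual realizations of the same forced trend contain decades with no net warming, while the ensemble mean, which averages the variability away, contains almost none.

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1980, 2026)
trend = 0.018 * (years - years[0])  # assumed forced trend in deg C (illustrative)

# 40 synthetic "model runs": identical forced trend plus random-walk
# internal variability (a crude stand-in for ENSO-like noise).
noise = np.cumsum(rng.normal(0.0, 0.05, (40, years.size)), axis=1)
runs = trend + noise
ens_mean = runs.mean(axis=0)

def flat_decades(series):
    """Count 10-year spans with zero or negative net temperature change."""
    return int(np.sum(series[9:] <= series[:-9]))

# Individual runs contain "pause" decades; the ensemble mean barely does.
print(max(flat_decades(r) for r in runs), flat_decades(ens_mean))
```

Comparing a single observed record to the smooth ensemble mean is therefore comparing unlike things; the fairer test is against the spread of individual runs.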
“Our ability to model this natural variability is one reason we are confident that the observed trend is forced”.
This statement is ridiculous. It is currently impossible to quantify how much of the warming of the last century is due to natural causes or due to human activity. Anybody claiming otherwise is either lying or delusional. The fact that you believe this shows your complete lack of knowledge of climate science.
What do you have to say about the 40 year cooling period between 1940 and 1980?
It’s well known and accepted in climate science (but is a fav topic for sceptics who have a one-trick-pony false idea that GW is caused by CO2 alone and not mediated by other things).
The answer lies in this graph ….
NB: look at the rate of change of anthro forcing (black dots bottom right), that sums the factors above
That considers the +ve anthro forcing (GHGs) vs aerosols etc.
Until the ’70s the -ve forcing (primarily the aerosols from the ramp-up of industry after WW2) outweighed the +ve forcing of burnt fossil fuel.
It is only after that period that the CO2/CH4/N2O forcing increased significantly.
This graph doesn’t show what you think it does. Unless CO2 creates its own energy, its radiative intensity derives from emissions that would have bypassed CO2. Exactly what difference does it make whether radiation comes from CO2 or directly from the surface? CO2 must cool to radiate, so the end result is the same.
I see no one has answered you. Is that because there is no answer? Or is it just an inconvenient observation?
How amazingly convenient!
Graemethecat,
How is it amazingly convenient?
That the period of increased aerosol should coincide so perfectly with the cooling episode, thus providing a convenient alibi for the failure of the hypothesis.
Here is the REAL point. What exactly is the trend you are pointing out telling you?
Can you determine from this trend what the cause is?
You know you can’t. All you can say is the Precautionary Principle must be used to protect us. At least Chicken Little had physical experimental proof of something hitting him on the head. You don’t even have any physical experimental evidence of what the cause is.
All you are doing is running around yelling, “We are going to burn the earth up and I don’t know what is doing it”. Not exactly a good look. Not exactly scientific.
LOL, then you must accept that Earth has been cooling for the last 3,350 years…….. as it began the long cool down that happens in every interglacial period.
And how relevant is a window like 2023–2025 in a multi millennial context?
It is 3,350 years long versus your infinitesimal 3 years, which shows you clearly don’t understand that it was the turning point of the interglacial period, where cooling is now the trend that runs into the next glaciation phase.
It isn’t my infinitesimal three years. It’s Chris Morrison’s.
LOL, you brought it up, fella, while you ignore the OBVIOUS point of the 3,350-year-long overall cooling trend as your feeble attempt to deflect is exposed.
Shame on you!
Sunsettommy, bringing in multi millennial cooling trends is a change of subject.
The claim under discussion is Chris Morrison’s assertion that the recent cooling from April 2024 through 2025 is exceptional and undermines mainstream climate attribution.
I keep saying this – but it’s falling on deaf ears. Since the graph starts in 1979, this is just too short a period in which to draw definitive conclusions. Yes, agreed, within that period, we see a warming trend of 0.3C in 46 years…..big wow! 😮 Isn’t this well within the range of natural variability? The climate (or climates) is/are variable by definition.
Of course, some sort of change in greenhouse gases might be at least partly responsible but I really don’t buy it, that any firm conclusion can be reached….yet! Equally, the culprit could be: land use changes; reduction of SO2 aerosols; cloud cover changes; seafloor volcanic activity; etc….or any combination of these.
Also, what about the prospect that warming actually precedes rising CO2? So many variables to consider. I personally don’t believe there’s any computer model or algorithm that can realistically manage so many variables, many of which are quite possibly still unknown.
If buckwheat grows amongst good wheat, it’s very difficult to tell the difference between the good wheat and the weeds. Similarly, the observations on climate thus far tell us very little about what is true and what is false – what is wheat, and what is weed, what is true science and what is agenda driven junk science!
I agree that the Earth’s climate is multifaceted and that many variables contribute to its behavior.
That said, there are several points that are difficult to dismiss.
1) It is not controversial that greenhouse gases absorb and re-emit infrared radiation in all directions, including back toward the surface, thereby reducing the rate at which the planet loses heat. Increasing atmospheric GHG concentrations alters the energy balance and introduces feedbacks. This is basic radiative physics as Lindzen and Spencer can attest to.
2) Some observed patterns are hard to explain w/out a GHG contribution. The long term cooling of the mid and upper stratosphere is not readily explained by natural variability, and is a signature expected from the enhanced greenhouse effect. This doesn’t prove carbon dioxide is the sole driver, but it is a constraint that any alternative explanation must satisfy.
3) Figures such as Chris Morrison and Paul Homewood do not present objective analyses. Instead, they selectively use data to fit a narrative, which does nothing to advance scientific understanding.
So yes, uncertainty remains, and healthy skepticism like yours is very important but uncertainty does not imply that we know nothing.
The observed patterns are completely explained without needing the fantasy of warming by CO2.
CO2 may absorb, but never gets a chance to re-emit in the lower atmosphere. See Tom Shula and Markus Ott: The “Missing Link” in the Greenhouse Effect | Tom Nelson Pod #232.
There is no evidence that CO2 has any measurable effect on energy balance in the atmospheric energy transfer.
Energy transfer by bulk air movement is bigger by 2 or 3 magnitudes. Any CO2 warming is totally trivial, if it exists at all.
Has never been observed or measured anywhere on the planet.
Stratospheric cooling at a set height is totally explained by natural warming of the troposphere.
You have never done an objective analysis, so you wouldn’t recognise it. You work from propaganda only. From your comments, we can be pretty sure you do know basically nothing.
“There is no evidence that CO2 has any measurable effect on energy balance in the atmospheric energy transfer.”
The operative word here is “energy”. For some reason climate science never wants to actually talk about HEAT, which is measured in Joules. They always want to talk about temperature as if it is heat. Temperature is *NOT* heat. Joules is heat.
When was the last time you saw a climate science paper talk about HEAT balance at the TOA over time using joules instead of an average value for radiative flux which is joules/sec-m^2?
This is entirely correct. Climate “Scientists” are unable to distinguish temperature (an intensive variable measured in degrees C or K) from heat (an extensive variable measured in Joules).
Many thanks for this considered response. I certainly wouldn’t be as presumptuous to suggest that the endeavours of the scientific community on climate change, renders it clueless…or that it knows nothing! My point is that I don’t believe science is yet at a point where definitive conclusions can be drawn. The cooling of the stratosphere is certainly interesting but I do wonder if this is something that has potentially happened in the past when it wasn’t possible to measure stratospheric temperature and before the industrial era. Again, I point to the possibility that the rise in CO2, alongside stratospheric cooling, could be natural phenomena.
Here’s a question: if the stratosphere starts to warm (outside of occasional SSW events), will the mainstream climate science community start to revise its thinking?
You’re right that we can’t directly measure stratospheric temperature prior to the satellite and radiosonde era, so we cannot rule out the possibility that similar cooling episodes occurred in the past.
But that this cooling coincides with a documented increase in GHG concentrations, whose radiative properties are well established, makes a purely coincidental explanation unlikely.
“Here’s a question: if the stratosphere starts to warm (outside of occasional SSW events), will the mainstream climate science community start to revise its thinking?”
Yes.
“But that this cooling coincides with a documented increase in GHG concentrations, whose radiative properties are well established, makes a purely coincidental explanation unlikely.”
Correlation is not causation. And the radiative properties of the biosphere should be given in JOULES, not in average joules/sec-m^2.
Average flux is irrelevant. Total joules emitted is the relevant quantity.
How many joules does each square meter on the earth lose during the total diurnal period compared to the limited amount of joules provided during daylight by the sun?
Do *YOU* know? Do you have a clue as to how to calculate it?
” It is not controversial that greenhouse gases absorb and re-emit infrared radiation in all directions, including back toward the surface, thereby reducing the rate at which the planet loses heat.”
You obviously have no understanding of the relationship between temperature and heat.
Radiation out (i.e. heat loss) is proportional to T^4.
As temperature goes up linearly, heat loss goes up as the fourth power. Heat loss INCREASES as temperature goes up!
Why does climate science adamantly and stubbornly remain willfully ignorant on this?
If you integrate T^4 to get total heat loss from T0 to T1, you wind up evaluating T^5/5, a fifth-order function compared to the linear increase in temperature!
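For what it is worth, the T^4 relation invoked here is the Stefan-Boltzmann law, which is a power law rather than an exponential. A minimal sketch of the magnitudes, assuming an idealized blackbody at typical surface temperatures (a deliberate simplification, since the real surface is not a perfect blackbody):

```python
# Stefan-Boltzmann: emitted flux grows as the FOURTH POWER of absolute
# temperature (a power law, not an exponential).
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def flux(temp_k):
    """Blackbody emitted flux in W/m^2 at absolute temperature temp_k."""
    return SIGMA * temp_k ** 4

# At a typical surface temperature of 288 K, flux is about 390 W/m^2,
# and a 1 K rise adds roughly 4*SIGMA*T^3, i.e. about 5.4 W/m^2.
print(flux(288.0), flux(289.0) - flux(288.0))
```

At 288 K the sensitivity is about 4σT³ per kelvin, which is the sense in which emitted heat rises faster than temperature.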
” Some observed patterns are hard to explain w/out a GHG contribution. The long term cooling of the mid and upper stratosphere is not readily explained by natural variability, and is a signature expected from the enhanced greenhouse effect. ”
As CO2 concentration goes up, mid- and upper-stratosphere temperatures do go down. What climate science REFUSES to admit is that a larger, cooler concentration of CO2 can emit just as much heat to space as a smaller, warmer concentration of CO2. Climate science likes to make an unstated assumption that the CO2 concentration remains the same while, at the same time, it gets cooler.
“Instead, they selectively use data to fit a narrative, which does nothing to advance scientific understanding.”
As you have aptly shown here, climate science also makes all kinds of unstated assumptions, such as (1) CO2 traps heat and (2) CO2 concentration at altitude doesn’t go up as CO2 concentration in the lower atmosphere goes up.
And we are to believe that climate science is advancing scientific understanding?
Heat is measured in JOULES. Why does climate science NEVER address heat loss in dimensions of JOULES, i.e. total heat loss over time, and refuses to move off the assumption that temperature is heat?
Neutral,
“Yes, agreed, within that period, we see a warming trend of 0.3C in 46 years…..big wow! Isn’t this well within the range of natural variability? The climate (or climates) is/are variable by definition.”
Using the UAH satellite record:
1981-1990 (first full decade) average: −0.27 °C
2021-2030 (average so far): +0.39 °C
That’s a change of roughly +0.7°C in under 50 years.
And ALL of it occurring during totally natural El Niño events.
UAH reports a global average anomaly of −0.25 °C for August 1982–July 1983 (a strong El Nino period) and +0.78 °C for August 2023–July 2024 (the most recent strong El Nino), representing an increase of over 1 °C in less than 50 years.
Yep, my mistake 0.7C it is, although to be fair, I meant to write 0.3C above average, as the 1980s were cooler than the average. There’s undoubtedly an upward trend, but I will still take a lot of convincing that it’s outside of relative normality.
You didn’t answer my question about how you believe the mainstream climate science community would likely respond to a gradual warming of the stratosphere?
“You didn’t answer my question about how you believe the mainstream climate science community would likely respond to a gradual warming of the stratosphere?”
Yes, I believe they would revise their thinking because scientific progress depends on updating and refining models in light of new observations.
You seem to be speaking of something other than “climate science”…
“climate science” is stuck with so many proven fallacies that it can only be called a cult-like religion, not a branch of any rational science.
“not a branch of any rational science.”
Bnice believes there was an El Niño in 2025, (which is not supported by the 2nd chart provided by Morrison), yet still presents himself as the arbiter of rational science.
OMG… El drone seems to think that the El Niño effect disappeared in 2024, and didn’t continue right through 2025.
Doesn’t know the difference between the ENSO indicator and an El Niño event.
So funny !!
Once equatorial Pacific SSTs are cooler than average, the ocean is no longer exporting anomalous heat to the atmosphere.
And the atmosphere does lag the ocean, but this lag is on the order of months, not years. The global avg. temperature peak occurs 3-6 months after the Nino3.4 maximum. If the last month meeting the El Nino threshold was May 2024, then even allowing for generous atmospheric lag, the system is no longer under El Nino influence.
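The few-month lag described above can be illustrated with synthetic series (the 4-month built-in lag, the amplitudes, and the 9-month smoothing below are all invented for illustration, not derived from real Nino3.4 data): a simple lagged correlation recovers the delay.

```python
import numpy as np

rng = np.random.default_rng(2)
months = 240
LAG = 4  # assumed atmospheric lag in months (invented for this sketch)

# Synthetic, smoothed "Nino3.4"-style index and a global temperature series
# that follows it with a built-in LAG plus measurement noise.
raw = rng.normal(0.0, 1.0, months + 24)
nino34 = np.convolve(raw, np.ones(9) / 9, mode="valid")[:months]
global_t = 0.1 * np.roll(nino34, LAG) + rng.normal(0.0, 0.02, months)

# Lagged correlation: shift temperature back by each candidate lag and
# see which alignment correlates best with the index.
corrs = [np.corrcoef(nino34[: -lag or None], global_t[lag:])[0, 1]
         for lag in range(13)]
best = int(np.argmax(corrs))
print(best)
```

The same procedure applied to real ONI and global anomaly data is what gives the often-quoted 3-6 month lag.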
Your mechanism doesn’t make any sense.
Exactly what is “anomalous” heat? That makes no sense. The warm water came from somewhere. Was it not “exporting” (whatever that means) heat to the atmosphere where it originated?
The warm water originates from the ocean. During El Nino, the equatorial Pacific is warmer than average, and that positive anomaly enhances evaporation and deep convection. This results in more latent heat flux into the atmosphere.
The ocean already has the warm water? So how does its movement to a different location change either the ocean’s overall temperature or the atmosphere’s overall temperature?
Hmmm? Latent heat changes the atmosphere’s temperature to a measurable value? Want to explain that?
Under normal conditions, trade winds keep warm water pooled in the western Pacific, while upwelling in the Nino3.4 region brings cooler water to the surface.
During El Niño, the weakening trade winds allow warm water from the Indo-Pacific to shift eastward.
As a result, the thermocline deepens and normal upwelling is suppressed.
“Hmmm? Latent heat changes the atmosphere’s temperature to a measurable value? Want to explain that?”
Yes, in meteorology, latent heat is key in explaining how major thunderstorms develop.
When water vapor condenses, latent heat is released, warming the air parcel and increasing its buoyancy relative to the surrounding atmosphere. In turn, this fuels deeper convection.
Way to deflect. You answered with googled facts didn’t you?
Warm water is pooled and then moves. How does that warm the ocean? Describe in detail how an ocean heats by moving water around.
More deflection.
Is latent heat measurable?
Why does the latent heat release warm the air parcel? Does CO2 absorb it?
Does that condensing and subsequent radiation take place at the 2m level where air temperature is measured?
How does that condensing at altitude affect the SSTs?
No deflections on my behalf.
“Describe in detail how an ocean heats by moving water around.”
It’s not creating new energy. The warmer water is simply redistributed eastward and vertically displaces the colder water in the eastern Pacific. This leaves sea surface temperatures anomalously warm.
“Is latent heat measurable?”
Yes, look at a Stuve diagram during a period of instability and analyze the vertical temperature profile.
When the rising air parcel is dry, it rises and cools at a faster rate until it reaches its dew point and condenses. After that point, it still cools but at a slower rate (the effect of latent heat).
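That change of cooling rate can be sketched numerically. This toy parcel model assumes a constant moist rate of 6 K/km and a lifting condensation level of 1.5 km, both representative placeholder values (the real moist adiabatic rate varies with temperature and pressure):

```python
DRY_LAPSE = 9.8    # dry adiabatic cooling, K per km
MOIST_LAPSE = 6.0  # moist adiabatic cooling, K per km (placeholder value)

def parcel_temp(z_km, t0=30.0, lcl_km=1.5):
    """Temperature (deg C) of a parcel lifted from the surface to z_km."""
    if z_km <= lcl_km:
        # Below the lifting condensation level: dry adiabatic cooling.
        return t0 - DRY_LAPSE * z_km
    # Above the LCL: condensation releases latent heat, so the parcel
    # still cools with height, but more slowly.
    return t0 - DRY_LAPSE * lcl_km - MOIST_LAPSE * (z_km - lcl_km)

# 9.8 K lost per km below the LCL, but only 6 K per km after saturation.
print(parcel_temp(1.0), parcel_temp(3.0))
```

The kink in the cooling rate at the LCL is exactly the latent-heat signature visible on a Stuve or skew-T diagram.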
“Why does the latent heat release warm the air parcel? Does CO2 absorb it?”
Because it is the energy ocean water molecules absorb to break intermolecular bonds during evaporation, and that same energy is released back into the atmosphere when the vapor condenses.
Why do you ask about CO2?
“Does that condensing and subsequent radiation take place at the 2m level where air temperature is measured?”
No. Higher up.
“How does that condensing at altitude affect the SSTs?”
The causality runs in the opposite direction:
Warmer SSTs --> enhanced evaporation --> deep convection --> condensation --> latent heat release aloft --> further convection.
Except what you claim to be the long term, isn’t.
Go a bit further back in time, and the planet has been cooling for the last 3,000 to 5,000 years, with warm periods occurring about every 1,000 years, with the current warming just being the latest.
What will happen to the large anomalies when the new baseline becomes 1995 – 2025?
That’s one of the problems with using anomalies as a warming index. It isn’t a constant measurement.
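For what it is worth, rebaselining shifts every anomaly by one constant and leaves the trend itself untouched. A sketch with invented numbers (the 14 °C base, 0.015 °C/yr trend, and noise level are all assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1979, 2026)
# Invented absolute temperatures: 14 deg C base, 0.015 deg C/yr trend, noise.
temps = 14.0 + 0.015 * (years - 1979) + rng.normal(0.0, 0.1, years.size)

def anomalies(t, yrs, start, end):
    """Anomalies of t relative to the mean over the baseline [start, end]."""
    return t - t[(yrs >= start) & (yrs <= end)].mean()

a_old = anomalies(temps, years, 1991, 2020)  # current UAH-style baseline
a_new = anomalies(temps, years, 1995, 2025)  # hypothetical newer baseline

# Every anomaly shifts by the same constant, so the fitted trend is identical.
shift = a_old - a_new
slope_old = np.polyfit(years, a_old, 1)[0]
slope_new = np.polyfit(years, a_new, 1)[0]
print(shift[0], slope_old, slope_new)
```

So a 1995-2025 baseline would make recent anomalies look smaller on the page without changing the underlying warming rate at all.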
AJ is showing a classic example of the urban warming effect combined with junk data from all over the place and an agenda to show a continued warming trend via manic “adjustment”
So much junk data they could fabricate basically anything to suit their agenda.
But it is NOT REAL !!!
Exactly.
This isn’t the first time contrarians confidently claimed “unexplained cooling” based on short term variability:
https://wattsupwiththat.com/2018/08/14/the-planet-is-experiencing-an-unexplained-major-cooling-and-scientists-are-ignoring-it/
NASA CERES mission data clearly show the warming in your graph was 100% natural. Are you ready to follow the next set of stairs downward?
The alarmists now have to wait for the next major El Nino event.
Until then, we will get ‘not much happening’ or a cooling trend.
Where is the FAKE data from ??
Ahhh . . . see the small print at the bottom right of the graph . . . it says it’s from Berkeley Earth, not the most credible source of global temperature data, eh? Of course no mention that the plotted data has not been corrected for Urban Heat Island (UHI) contamination of temperature monitoring stations that has occurred from 1970 to 2023 or so.
And, of course, the cherry picking of ranges of data to show intermittent horizontal trends (“zero warming”) is completely arbitrary . . . one could just as easily have selected intermittent intervals of high warming rates followed by rapid declines to begin another “cycle”.
“You can fool some of the people all of the time, and all of the people some of the time, but you can not fool all of the people all of the time.”— arguably attributed to Abraham Lincoln
I gave up on BE when I went into their actual data and found them showing some temperature data from the late 1700’s with uncertainty values in the hundredths digit. Garbage from top to bottom as far as I’m concerned.
I remember asking one of them to show where the data came from for a certain region..
They didn’t have a clue !! Very funny !! 🙂
The planet has warmed up since the coldest period of the last 100 years.
Proof positive that CO2 is going to kill us all.
Try going back 15000 years. Better than 90% of that period was warmer than it has been the last few years.
Ummmm . . . I think that is going back too far.
The Holocene Climate Optimum is a term used to denote a warm period roughly over the interval of about 8,000 to 5,000 years ago, when global lower atmospheric temperatures (based on paleoclimatology proxies) were at or slightly higher than those of the last 100 years or so.
The Earth exited its last glacial period about 11,500 years ago (ref: https://www.ncei.noaa.gov/sites/default/files/2021-11/1%20Glacial-Interglacial%20Cycles-Final-OCT%202021.pdf ). If one goes back 15,000 years, Earth was still deep in a glacial period with much lower temperatures . . . see attached graph.
The fundamental flaw in Climate “Science” is that previous climate excursions have occurred without change in CO2, yet it attributes modern warming to the gas.
I wonder if the uptick of 2023 was a delayed consequence of the 2022 Hunga Tonga eruption and its impact dissipating since 2024, though correlation is not causation.
See this article for more information.
https://wattsupwiththat.com/2025/12/30/the-2023-climate-event-revealed-the-greatest-failure-of-climate-science/
Several folks have been citing Hunga-Tonga for years. The current cooling matches our view perfectly.
“The current cooling matches our view perfectly.”
From February 2016 to September 2018, the UAH global anomaly cooled by 0.73°C, and from April 2024 to December 2025, it has dropped 0.64°C.
In both cases, a large El Niño was followed by a weak to moderate La Niña, or by conditions on the La Niña side of ENSO neutral.
WOW.. you have finally realised that the warming is all to do with non-human-caused El Nino events.
Well done !!!
Did I say that, bnice2000?
You showed the cooling event after the 2016 El Nino..
and the cooling part of the EL Nino event in 2023/24/25.
Seems you have actually come to the realisation that it is all about El Niño events, and not CO2.
The cooling since mid 2024 is the natural cooling from a persistent El Niño event.
Just as the warming from mid 2023 was caused by that El Nino event.
How much effect the HT eruption had is unsure, but I suspect it is the reason for the much broader El Nino event than the 1998 and 2016 ones.
“The great tragedy of the settled climate science era, now facing increased scrutiny, is the draining of public confidence in once revered scientific institutions.”
This is a feature, not a bug, of the great climate scare. It removes credibility from people who really know what they are talking about, and hands it to the people who can tell the scariest narratives. Thus leading to the triumph of the Precautionary Principle, which I call “Rule by People Who Tell the Scariest Stories”. And the scary stories cannot be contested, either: challenging them is like Article 58 of the former Soviet Penal Code.
From the article: “Exhibit 1: The accurate UAH satellite record”.
We need an overlay of the UAH chart and the NOAA and NASA charts for the same period of time.
The NOAA and NASA charts don’t show any cooling after 1998. Their charts diverge from the UAH chart.
From 1999 to 2015, NOAA and NASA were claiming that 10 years during that period were hotter than 1998, the so-called “hottest year evah!” trick.
The UAH chart does not show any year after 1998 and before 2016 that could be claimed to be hotter than 1998. The year 2016 was one-tenth of a degree warmer than 1998, and four-tenths of a degree colder than 1934. 🙂
Chris Morrison, this article is mostly propaganda, not science.
Where did Javier Vinos make the calculations to compare the volume of HT-ejected water vapor to the amount of normally evaporated water vapor off the ocean since HT erupted? >Nowhere!<
Vinos has failed to explain how the ocean was warmed by a stratospheric water vapor increase AND not from absorbed solar radiation, nor any albedo change or contribution from the solar cycle.
Vinos has failed to appreciate that the ocean controls the lower troposphere, and that this ongoing relationship did not break down or get overwhelmed by the HT eruption and its water vapor.
Until that is done there is no factual basis to support his and now your hand-waving claims.
There is no reason to think the record measured water vapor increase in 2024 was not primarily related to the El Niño related ocean temperature increase.
The ocean was warmed by a combination of lower albedo from the triple-dip La Niña and strong solar irradiance, and is cooling due to more clouds now and from declining solar irradiance.
The pot calling the kettle black.
Thank you for your inattention to this matter.
Did you have something to say? Is this the level of mindlessness that has to exist to believe in such tripe that HT-driven stratospheric water vapor can warm the ocean?
Your argument is sound. While the HT event certainly had to have some effect, it seems very unlikely to have driven the anomaly in the atmosphere entirely or in the SST at all.
OK, let me put on my tin foil hat and express a more sinister view of this fraud. It was not created just to enrich the few; it was created to destroy Western governments and economies. It was created to usher in the New World Order controlled by the .01%. The fact that open Western borders, a huge increase in drug use and the AGW scare all happened at the same time is not coincidental.
It is also worth noting that the so-called UN solution to the warming did not apply to China or India.
We often forget that Al Gore’s early education was as a divinity student.
So Al had a better appreciation of the reach of religion than most ordinary folk.
And while he was still doing politics, he was putting together the basis for his new religion –
manmade CO2 as the “devil”, and “scientists” as the archangels.
He wasn’t too concerned about losing his presidential bid either.
Didn’t even contest the controversial votes that gave W. the gig.
Al had a better calling to pur$ue – climate televangelism.
The rest, as they say, is history.
‘We often forget that Al Gore’s early education was as a divinity student.’
We also often forget that Al Gore got a D in what was basically a science for dummies course from a school that rarely gives out Ds.
“Didn’t even contest the controversial votes that gave W. the gig.”
Did not contest eh! The election wound up being decided by the US Supreme Court.
I think the term is “globalists”. They wanted control over the planet. They have used the climate scare and Marxism as tools to gain control. It’s been a total failure, but lots of problems still exist from their actions.
You omitted The Population Bomb.
Computer Wizards are streaming out of the Cathedral of Climate Change in record numbers to damn net zero dogma and passionately embrace AI. Shaking the dust of green energy from their sandals, they are making tracks to resurrect fossil fuels! I witnessed a number of wizard cloaks delivered to the Salvation Army for recycling……..
Climate Reanalyzer used to publish the 2M daily average temperature anomalies for the planet as a whole plus the NH and SH. I notice that they quit doing that, and I wonder why.
Forgive me if this has already been mentioned:
Exactly three years ago, the whole HT eruption/El Niño freakout was still in the future, on no one’s mind. Just 36 months ago. Even the Ukraine War is older than that.
https://wattsupwiththat.com/2023/02/01/uah-global-temperature-update-for-january-2023-0-04-deg-c/
Give the mainstream media a bit more time and after a few more cold snaps, winter storms and expanding ice expanses, they’ll be telling us it’s a certainty that a new Ice Age is almost upon us. So we’d better cut our fossil fuel usage because the increased emissions are preventing enough sunlight from reaching us and bringing on nothing less than global cooling.
To my mind, it is too early to say “the world is cooling rapidly and the silence from the mainstream is both laughable and disgraceful.”. I agree with the thrust of the criticisms of the mainstream, but simple inspection of the temperature chart in the article shows temperatures returning from a large upward spike down to alignment with the previous overall warming trend (for the date range of the chart). Gavin Schmidt admitted that he had no explanation for the large spike around 2023. We know that the pattern of “greenhouse gas” warming should be gradual not sudden, so what we are probably looking at is a whole heap of short term natural variability – not represented in climate models – on a rising trend. Until the rising trend ends, we don’t have good evidence against the models’ “greenhouse gas” warming. We may have to have a lot of patience, because an analysis I heard some time ago from a leading solar physicist was that the overall warming trend of the last approx three centuries is solar-driven and is expected to continue for about another century.
Conclusion? Temperature simply can’t tell you what is happening to the biosphere known as Earth. It’s garbage from the very start. (Tmax + Tmin)/2 is *NOT* an average – it is merely the mid-range of the two daily extremes, not a true time-weighted mean of the day’s temperatures. The diurnal temperature curve is *NOT* Gaussian. It is not even linear; it is a combination of a sinusoidal rise and an exponential decay.

Nor does that mid-range temperature tell you *ANYTHING* about local climate. Nome, AK and OKC, OK can have the same mid-range daily temperature on any given day, and neither one will tell you anything about their climates. And it just gets worse from there. Anomalies don’t help. In fact, they obscure any possible derivation of actual climate, because a cold climate can have the same anomaly as a warm climate – so which is more important to the local climate?
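The commenter’s point about (Tmax + Tmin)/2 versus a true daily mean can be illustrated with a toy diurnal curve. This is a minimal sketch only: the sinusoid-plus-exponential-decay shape, the sunrise/peak times, the 10–25°C extremes and the 6-hour cooling constant are all illustrative assumptions, not data from any real station.

```python
import math

def diurnal_temp(hour):
    """Toy diurnal temperature curve (deg C): sinusoidal rise from
    sunrise to an afternoon peak, then exponential cooling overnight.
    All parameters are assumed for illustration, not real data."""
    t_min, t_max = 10.0, 25.0        # assumed daily extremes
    sunrise, peak, tau = 6.0, 15.0, 6.0
    h = hour % 24.0
    if sunrise <= h <= peak:
        # quarter-sine ramp from t_min at sunrise to t_max at the peak
        frac = (h - sunrise) / (peak - sunrise)
        return t_min + (t_max - t_min) * math.sin(math.pi / 2.0 * frac)
    # exponential decay toward t_min after the peak (the small jump at
    # sunrise is a toy-model artifact and does not affect the point)
    dt = (h - peak) % 24.0           # hours since the afternoon peak
    return t_min + (t_max - t_min) * math.exp(-dt / tau)

# Sample the curve finely over one day
hours = [i * 0.01 for i in range(2400)]
temps = [diurnal_temp(h) for h in hours]

true_mean = sum(temps) / len(temps)          # time-weighted daily mean
mid_range = (min(temps) + max(temps)) / 2.0  # the (Tmax + Tmin)/2 statistic

print(f"mid-range: {mid_range:.2f} C, true mean: {true_mean:.2f} C, "
      f"difference: {mid_range - true_mean:.2f} C")
```

With these assumed parameters the mid-range comes out at 17.5°C while the time-weighted mean is roughly half a degree lower, because the long overnight tail sits near the minimum. A discrepancy of that size in a single station-day dwarfs the hundredths of a degree quoted in headline anomaly “records”.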
If folks would just pay attention and zoom way out in their perspective of global weather, they would see that water in all its various forms and quantities, together with atmospheric pressure, are the main drivers regulating temperature on Earth at any place at any given moment.
Water vapor blows other greenhouse gases out of the ballpark. The oceans hold over 99% of the Earth’s latent heat. Clouds can warm or cool what’s beneath them.
Are these things properly represented in the models? It seems not.
Back to the original article: I think the drop in temperature is impressive – the second-biggest net drop within the space of two years in the satellite record.
However, the drop in temperature is dwarfed by the net rise that preceded it! And that’s really the story of the “2 steps up, one step down”, all the way through the satellite record, at least in the last 25 years. Anyone who can’t see that has his/her head in the sand!
I don’t propose to challenge or support any particular theory on this – I simply don’t know enough. But 2 things occur to me.
1. The trend is definitely worth observing, with a view to establishing what (if anything) unnatural can be attributed to it; and
2. Don’t be too quick to conclude anything much from brief rises and falls of temperature.
In the meantime, expect the media to continue hyping up the rises and ignoring the falls in temperature, until there are clear signs of a long-term (at least 5 years) reversal of the current trend.
Meh. The temperature as of today is only around 0.2–0.3°C higher than it was in 1958.
Reasonable take.
There was a spike in 1998 as well; such things happen.
“The great tragedy of the settled climate science era, now facing increased scrutiny, is the draining of public confidence in once revered scientific institutions.” Wholeheartedly disagree. The skepticism to which you refer is MUCH better than the blind reverence accorded the prophecies of doom based on MODELS. If it had developed a wider base sooner, perhaps the world could have avoided the hysteria of COVID19.

No pronouncement by ‘experts’ should be accepted until they match the prediction to the data and defend it against critics’ hole-poking. I’m in my 8th decade, and have matriculated at almost a dozen institutions of ‘higher’ learning (generally, ‘higher’ refers to costs, not necessarily learning), and about the only thing agreed to by all is that you should never accept results (calculations, models, polls, etc.) until you have done your own assessment – hopefully using data, not emotions! Of course, I have no degrees or certificates in gender studies, microaggression analysis, LGBQTUSRVX+++ psychology, basket weaving, reparations or defense of the environment, so maybe I’m not the one to address the issue…